BlindElephant Web Application Fingerprinter


The BlindElephant Web Application Fingerprinter attempts to discover the version of a (known) web application by comparing static files at known locations against precomputed hashes for versions of those files in all available releases. The technique is fast, low-bandwidth, non-invasive, generic, and highly automatable.
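
The core idea is simple enough to sketch in a few lines. Below is a minimal Python 3 illustration of the approach, not BlindElephant's actual code; the static path and the hash table entries are invented for the example:

import hashlib
import urllib.request

# Hypothetical precomputed table: MD5 of a static file -> versions that ship it
KNOWN_HASHES = {
    "d41d8cd98f00b204e9800998ecf8427e": ["4.22-en", "4.22-en-COM"],
    "5d41402abc4b2a76b9719d911017c592": ["4.23-en", "4.23-en-COM"],
}

def candidates(base_url, static_path):
    """Fetch one static file and look its hash up in the version table."""
    data = urllib.request.urlopen(base_url + static_path).read()
    return set(KNOWN_HASHES.get(hashlib.md5(data).hexdigest(), []))

# Intersecting the candidate sets from many probe files narrows the version.
print(candidates("http://laws.qualys.com", "/mt-static/js/tc.js"))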

Sourceforge Project Page: https://sourceforge.net/projects/blindelephant/
Discussion and Forums: http://www.qualys.com/blindelephant
License: LGPL

Getting Started

BlindElephant can be used directly as a tool on the command line, or as a library to provide fingerprinting functionality to another program.

Pre-requisites:

  • Python 2.6.x (2.6.5 preferred); users of earlier versions may have difficulty installing or running BlindElephant.

Get the code from the SourceForge project page linked above.

Installation:

Installation is only required if you plan to use BlindElephant as a library. Make sure that your Python installation has distutils, and then do:

cd blindelephant/src
sudo python setup.py install

(Windows users, omit sudo.)

Example Usage (Command Line):

setup.py will have placed BlindElephant.py in your /usr/local/bin dir.

$ BlindElephant.py 
Usage: BlindElephant.py [options] url appName

Options:
  -h, --help            show this help message and exit
  -p PLUGINNAME, --pluginName=PLUGINNAME
                        Fingerprint version of plugin (should apply to web app
                        given in appname)
  -s, --skip            Skip fingerprinting webapp, just fingerprint plugin
  -n NUMPROBES, --numProbes=NUMPROBES
                        Number of files to fetch (more may increase accuracy).
                        Default: 15
  -w, --winnow          If more than one version is returned, use winnowing
                        to attempt to narrow it down (up to numProbes
                        additional requests).
  -l, --list            List supported webapps and plugins

Use "guess" as app or plugin name to attempt to attempt to
discover which supported apps/plugins are installed.

$ python BlindElephant.py http://laws.qualys.com movabletype
Loaded /usr/local/lib/python2.6/dist-packages/blindelephant/dbs/movabletype.pkl with 96 versions, 2229 differentiating paths, and 209 version groups.
Starting BlindElephant fingerprint for version of movabletype at http://laws.qualys.com 

Fingerprinting resulted in:
4.22-en
4.22-en-COM
4.23-en
4.23-en-COM

Best Guess: 4.23-en-COM

Example Usage (Library):

$ python
>>> from blindelephant.Fingerprinters import WebAppFingerprinter
>>> 
>>> #Construct the fingerprinter
>>> #use default logger pointing to console; can pass "logger" arg to change output
>>> fp = WebAppFingerprinter("http://laws.qualys.com", "movabletype")
>>> #do the fingerprint; data becomes available as instance vars
>>> fp.fingerprint()
(same as above)
>>> print "Possible versions:", fp.ver_list
Possible versions: [LooseVersion ('4.22-en'), LooseVersion ('4.22-en-COM'), LooseVersion ('4.23-en'), LooseVersion ('4.23-en-COM')]
>>> print "Max possible version: ", fp.best_guess
Max possible version:  4.23-en-COM



More information about BlindElephant can be found on: http://blindelephant.sourceforge.net




How To Bypass SMS Verification Of Any Website/Service


If you don’t want to give your phone number to a website while creating an account, DON’T GIVE IT TO THEM, because today I’m going to show you a trick that you can use to bypass SMS verification of any website/service.

Bypassing SMS Verification:

  • Using Recieve-SMS-Online.info
Receive SMS Online is a free service that allows anyone to receive SMS messages online. It has a fine list of disposable numbers from India, Romania, Germany, USA, United Kingdom, Netherlands, Italy, Spain and France.
Here is how to use Receive SMS Online to bypass SMS verification:
1. Go to the website and pick any disposable phone number.
2. Enter that number as your mobile number in the “Phone number” box of the service you are signing up for.
3. Send the verification code. (If the number is not working, skip to the next one.)
4. Click on the selected number on the website. You will be directed to its inbox.
5. Find the verification code in the disposable inbox, enter it in the verification code field, then click “verify code”.
6. The account should now be verified.
There are many other free SMS receive services available online.

WebPwn3r Web Applications Security Scanner For Security Researchers


WebPwn3r – Web Applications Security Scanner. WebPwn3r is a web application security scanner coded in Python to help security researchers scan multiple links at the same time for remote code/command execution and XSS vulnerabilities.

This tool is very helpful to bug bounty hunters: they can find vulnerabilities on target websites, submit them to the companies, and enjoy the bounty if the bug is accepted.

How to use?

1- python scan.py

2- The tool will ask whether you want to scan a single URL or a list of URLs: enter 1 to scan one URL, or 2 to scan a list of URLs.

3- The URL should be a full link with parameters,

e.g. http://localhost/rand/news.php?com=val&id=11&page=24&text=zigoo

The same applies to each link in a list.

In it’s Current Public [Demo] version, WebPwn3r got below Features:

1- Scans a URL or a list of URLs.

2- Detects and exploits remote code injection vulnerabilities.

3- Detects and exploits remote command execution vulnerabilities.

4- Detects typical XSS vulnerabilities.

5- Detects the WebKnight WAF.

6- Improved payloads to bypass security filters/WAFs.

7- Fingerprints the backend technologies.
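
To make the scanning idea concrete, here is a minimal Python 3 sketch of the general technique (not WebPwn3r's actual code): inject a marker payload into each query parameter and decide from the response whether it was reflected (XSS) or executed (command injection). The payloads and detection logic are greatly simplified for illustration:

import urllib.parse
import urllib.request

# (payload, marker-that-proves-it): the arithmetic result only appears if the
# payload was actually executed by a shell, not merely reflected back.
PAYLOADS = {
    "xss": ("<scan_xss_marker>", "<scan_xss_marker>"),
    "command injection": (";expr 1337 + 1;", "1338"),
}

def scan(url):
    parts = urllib.parse.urlsplit(url)
    params = urllib.parse.parse_qs(parts.query)
    for name in params:
        for kind, (payload, marker) in PAYLOADS.items():
            tampered = dict(params, **{name: [payload]})
            query = urllib.parse.urlencode(tampered, doseq=True)
            test_url = urllib.parse.urlunsplit(parts._replace(query=query))
            body = urllib.request.urlopen(test_url).read().decode("utf-8", "replace")
            if marker in body:
                print("[!] possible %s via parameter %r" % (kind, name))

scan("http://localhost/rand/news.php?com=val&id=11&page=24&text=zigoo")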


More information can be found at: https://github.com/zigoo0/webpwn3r

DAws – Advanced Web Shell For Windows And Linux


About

There’s multiple things that makes DAws better than every Web Shell out there:

  1. Supports CGI by dropping Bash Shells (for Linux) and Batch Shells (for Windows).
  2. Bypasses WAFs, Disablers and Protection Systems; DAws isn’t just about using a particular function to get the job done, it uses up to 6 functions if needed, for example, if shell_exec was disabled it would automatically use exec or passthru or system or popen or proc_open instead, same for Downloading a File from a Link, if Curl was disabled then file_get_content is used instead and this Feature is widely used in every section and fucntion of the shell. (Yes, it bypasses Suhosin too)
  3. Automatic Encoding; DAws randomly and automatically encodes most of your GET and POST data using XOR(Randomized key for every session) + Base64(We created our own Base64 encoding functions instead of using the PHP ones to bypass Disablers) which will allow your shell to Bypass pretty much every WAF out there.
  4. Advanced File Manager; DAws’s File Manager contains everything a File Manager needs and even more but the main Feature is that everything is dynamically printed; the permissions of every File and Folder are checked, now, the functions that can be used will be available based on these permissions, this will save time and make life much easier.
  5. Tools: DAws holds bunch of useful tools such as “bpscan” which can identify useable and unblocked ports on the server within few minutes which can later on allow you to go for a bind shell for example.
  6. Everything that can’t be used at all will be simply removed so Users do not have to waste their time. We’re for example mentioning the execution of c++ scripts when there’s no c++ compilers on the server(DAws would have checked for multiple compilers in the first place) in this case, the function would be automatically removed and the User would know.
  7. Supports Windows and Linux.
  8. Openned Source.
Extra Info
  • Directory Roaming:
    • DAws checks, within the `web` directory, for a writable and readable directory, which is then used to drop and execute any needed scripts, guaranteeing their success.
  • Eval Form:
    • `include`, `include_once`, `require` or `require_once` are used instead of PHP `eval` to bypass protection systems.
  • Download from Link – Methods:
    • PHP Curl
    • file_put_contents
  • Zip – Methods:
    • Linux:
      • Zip
    • Windows:
      • Vbs Script
  • Shells and Tools:
    • Extra:
      • `nohup`, if installed, is automatically used for background processing.
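
As a rough illustration of the "Automatic Encoding" point above, here is a minimal Python 3 sketch of XOR-then-Base64 encoding with a randomized per-session key. DAws implements this in PHP with its own Base64 routines; this sketch only shows the concept:

import base64
import os

def xor(data, key):
    # XOR each byte with the repeating session key
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encode(plaintext, key):
    return base64.b64encode(xor(plaintext.encode(), key)).decode()

def decode(token, key):
    return xor(base64.b64decode(token), key).decode()

key = os.urandom(8)                      # randomized key for every session
token = encode("cmd=ls -la", key)
print(token, "->", decode(token, key))   # the wire only carries opaque Base64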

More information can be found at: https://github.com/dotcppfile/DAws

BTS PenTesting Lab – Open Source vulnerable Web Application Platform



Are you a Penetration Tester, an Information Security Specialist and/or simply a Learner in Cyber Security?

This might be the right pentesting platform to perform your penetration tests and upgrade your skills! BTS PenTesting Lab is an open source vulnerable web application platform developed by the Cyber Security & Privacy Foundation (www.cysecurity.org). It can be used to learn about and practice many different types of web application vulnerabilities.

Currently, the app contains the following types of vulnerabilities:

*SQL Injection

*XSS (including Flash-based XSS)

*CSRF

*Clickjacking

*SSRF

*File Inclusion

*Code Execution

*Insecure Direct Object Reference

*Unrestricted File Upload vulnerability

*Open URL Redirection

*Server Side Includes (SSI) Injection

and more…


More information can be found at: http://sourceforge.net/projects/btslab

WIG: WebApp Information Gathering Tool To Fingerprint CMSs


wig – WebApp Information Gatherer

wig is a web application information gathering tool, which can identify numerous Content Management Systems and other administrative applications.

The application fingerprinting is based on checksums and string matching of known files for different versions of CMSes. This results in a score being calculated for each detected CMS and its versions. Each detected CMS is displayed along with the most probable version(s) of it. The score calculation is based on weights and the number of “hits” for a given checksum.
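
As a rough sketch of that scoring idea (not wig's actual implementation; the database entries and weights below are invented), matching checksums of fetched files against a fingerprint database could look like this in Python 3:

import hashlib
from collections import defaultdict

# Invented fingerprint database: URL path -> MD5 -> [(cms, version, weight)]
FINGERPRINTS = {
    "/misc/drupal.js": {
        "d41d8cd98f00b204e9800998ecf8427e": [("Drupal", "7.31", 1.0),
                                             ("Drupal", "7.32", 1.0)],
    },
}

def score(fetched):
    """fetched: dict mapping URL path -> response body (bytes)."""
    totals = defaultdict(float)
    for path, body in fetched.items():
        digest = hashlib.md5(body).hexdigest()
        for cms, version, weight in FINGERPRINTS.get(path, {}).get(digest, []):
            totals[(cms, version)] += weight   # one weighted "hit"
    # The highest-scoring (cms, version) pairs are the most probable
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)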

wig also tries to guess the operating system on the server based on the ‘server’ and ‘x-powered-by’ headers. A database containing known header values for different operating systems is included in wig, which allows wig to guess Microsoft Windows versions and Linux distribution and version.

wig features:
  • CMS version detection by checksums, string matching and extraction
  • Lists detected package and platform versions, such as ASP.NET, PHP, OpenSSL and Apache
  • Detects JavaScript libraries
  • Operating system fingerprinting by matching PHP, Apache and other packages against values in wig's database
  • Checks for files of interest, such as administrative login pages, readmes, etc.
  • wig's database currently includes about 28,000 fingerprints
  • Reuses information from previous runs (saves a cache)
  • Verbose output option
  • No dependency on 'requests'
  • Proxy support
  • Proper threading support
  • Includes checks for known vulnerabilities

Requirements

wig is built with Python 3, and is therefore not compatible with Python 2.

How it works

The default behavior of wig is to identify a CMS and exit after version detection of the CMS. This is done to limit the amount of traffic sent to the target server. This behavior can be overridden by setting the ‘-a’ flag, in which case wig will test all the known fingerprints. As some configurations of applications do not use the default location for files and resources, it is possible to have wig fetch all the static resources it encounters during its scan. This is done with the ‘-c’ option. The ‘-m’ option tests all fingerprints against all fetched URLs, which is helpful if the default location has been changed.
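
For example, an exhaustive scan combining the flags described above (test all fingerprints, fetch encountered static resources, and match fingerprints against all fetched URLs) might look like this:

$ ./wig.py -a -c -m http://example.com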

Help Screen

usage: wig.py [-h] [-l INPUT_FILE] [-n STOP_AFTER] [-a] [-m] [-u]
              [--no_cache_load] [--no_cache_save] [-N] [--verbosity]
              [--proxy PROXY] [-w OUTPUT_FILE]
              [url]

WebApp Information Gatherer

positional arguments:
  url              The url to scan e.g. http://example.com

optional arguments:
  -h, --help       show this help message and exit
  -l INPUT_FILE    File with urls, one per line.
  -n STOP_AFTER    Stop after this amount of CMSs have been detected. Default:
                   1
  -a               Do not stop after the first CMS is detected
  -m               Try harder to find a match without making more requests
  -u               User-agent to use in the requests
  --no_cache_load  Do not load cached responses
  --no_cache_save  Do not save the cache for later use
  -N               Shortcut for --no_cache_load and --no_cache_save
  --verbosity, -v  Increase verbosity. Use multiple times for more info
  --proxy PROXY    Tunnel through a proxy (format: localhost:8080)
  -w OUTPUT_FILE   File to dump results into (JSON)

Example of run:

$ ./wig.py example.com

dP   dP   dP    dP     .88888.
88   88   88    88    d8'   `88
88  .8P  .8P    88    88
88  d8'  d8'    88    88   YP88
88.d8P8.d8P     88    Y8.   .88
8888' Y88'      dP     `88888'

  WebApp Information Gatherer

Redirected to http://www.example.com. Continue? [Y|n]:

TITLE
--- HTML TITLE ---

IP
255.255.255.256



SOFTWARE                  VERSION                           CATEGORY
Drupal                    7.28 | 7.29 | 7.30 | 7.31 | 7.32  CMS
ASP.NET                   4.0.30319.18067                   Platform
Microsoft-HTTPAPI         2.0                               Platform
Microsoft-IIS             6.0 | 7.0 | 7.5 | 8.0             Platform
Microsoft Windows Server  2003 SP2 | 2008 | 2008 R2 | 2012  Operating System

SOFTWARE                  VULNERABILITIES                   LINK
Drupal 7.28               7                                 http://cvedetails.com/version/169265
Drupal 7.29               3                                 http://cvedetails.com/version/169917
Drupal 7.30               3                                 http://cvedetails.com/version/169916

URL                       NOTE                              CATEGORY
/login/                   Test directory                    Interesting URL
/login/index_form.html    ASP.NET detailed error            Interesting URL
/robots.txt               robots.txt index                  Interesting URL
/test/                    Test directory                    Interesting URL
_______________________________________________________________________________
Time: 15.7 sec            Urls: 351                         Fingerprints: 28989


More information can be found at: https://github.com/jekyc/wig

Commix – A Command Injection Exploiter To Test And Find Web Application Bugs


   ___    ___     ___ ___     ___ ___ /\_\   __  _ 
  /'___\ / __`\ /' __` __`\ /' __` __`\/\ \ /\ \/'\
 /\ \__//\ \L\ \/\ \/\ \/\ \/\ \/\ \/\ \ \ \\/>  </
 \ \____\ \____/\ \_\ \_\ \_\ \_\ \_\ \_\ \_\/\_/\_\
  \/____/\/___/  \/_/\/_/\/_/\/_/\/_/\/_/\/_/\//\/_/ { v0.1b }

+--
Automated All-in-One OS Command Injection and Exploitation Tool
Copyright (c) 2015 Anastasios Stasinopoulos (@ancst)
+--

General Information

Commix (short for [comm]and [i]njection e[x]ploiter) has a simple environment and can be used by web developers, penetration testers and security researchers to test web applications with a view to finding bugs, errors or vulnerabilities related to command injection attacks. Using this tool, it is very easy to find and exploit a command injection vulnerability in a certain vulnerable parameter or string. Commix is written in the Python programming language.

Disclaimer

The tool is intended only for testing and academic purposes and may only be used where strict consent has been given. Do not use it for illegal purposes!

Requirements

Python version 2.6.x or 2.7.x is required for running this program.

Installation

Commix comes pre-installed on a number of penetration-testing-oriented Linux distributions.

Download commix by cloning the Git repository:

git clone https://github.com/stasinopoulos/commix.git commix

Usage

Usage: python commix.py [options]

Options

-h, --help            Show help and exit.
--verbose             Enable the verbose mode.
--install             Install 'commix' to your system.
--version             Show version number and exit.
--update              Check for updates (apply if any) and exit.

Target

This option has to be provided to define the target URL.

--url=URL           Target URL
--url-reload        Reload target URL after command execution.

Request

These options can be used to specify how to connect to the target URL.

--host=HOST         HTTP Host header.
--referer=REFERER   HTTP Referer header.
--user-agent=AGENT  HTTP User-Agent header.
--cookie=COOKIE     HTTP Cookie header.
--random-agent      Use a randomly selected HTTP User-Agent header.
--headers=HEADERS   Extra headers (e.g. 'Header1:Value1\nHeader2:Value2').
--proxy=PROXY       Use a HTTP proxy (e.g. '127.0.0.1:8080').
--auth-url=AUTH_..  Login panel URL.
--auth-data=AUTH..  Login parameters and data.
--auth-cred=AUTH..  HTTP Basic Authentication credentials (e.g.
                    'admin:admin').

Enumeration

These options can be used to enumerate the target host.

--current-user  Retrieve current user.
--hostname      Retrieve server hostname.
--is-root       Check if the current user has root privileges.
--sys-info      Retrieve system information.

Injection

These options can be used to specify which parameters to inject and to provide custom injection payloads.

--data=DATA         POST data to inject (use 'INJECT_HERE' tag to specify
                    the testable parameter).
--suffix=SUFFIX     Injection payload suffix string.
--prefix=PREFIX     Injection payload prefix string.
--technique=TECH    Specify a certain injection technique : 'classic',
                    'eval-based', 'time-based' or 'file-based'.
--maxlen=MAXLEN     The length of the output on time-based technique
                    (Default: 10000 chars).
--delay=DELAY       Set Time-delay for time-based and file-based
                    techniques (Default: 1 sec).
--base64            Use Base64 (enc)/(de)code trick to prevent false-
                    positive results.
--tmp-path=TMP_P..  Set remote absolute path of temporary files directory.
--root-dir=SRV_R..  Set remote absolute path of web server's root
                    directory (Default: /var/www/).
--icmp-exfil=IP_..  Use the ICMP exfiltration technique (e.g.
                    'ip_src=192.168.178.1,ip_dst=192.168.178.3').
--alter-shell       Use an alternative os-shell (Python). Available only
                    for 'tempfile-based' injections.
--os-shell=OS_SH..  Execute a single operating system command.
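
The 'time-based' technique above is easy to illustrate. The following minimal Python 3 sketch (not Commix's code; the target URL and parameter are hypothetical) injects a sleep into a parameter and infers command execution from the response delay:

import time
import urllib.parse
import urllib.request

def timed_get(url):
    start = time.time()
    urllib.request.urlopen(url).read()
    return time.time() - start

def looks_injectable(url_template, value, delay=5):
    """If a ';sleep N' payload slows the response by ~N seconds, the
    parameter is probably being passed to a shell."""
    baseline = timed_get(url_template % urllib.parse.quote(value))
    payload = urllib.parse.quote("%s;sleep %d" % (value, delay))
    return timed_get(url_template % payload) - baseline >= delay

# Hypothetical target with a possibly injectable 'addr' parameter
print(looks_injectable("http://192.168.178.8/debug.php?addr=%s", "127.0.0.1"))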

Usage Examples

Exploiting Damn Vulnerable Web App:

python commix.py --url="http://192.168.178.58/DVWA-1.0.8/vulnerabilities/exec/#" --data="ip=INJECT_HERE&submit=submit" --cookie="security=medium; PHPSESSID=nq30op434117mo7o2oe5bl7is4"

Exploiting php-Charts 1.0 using injection payload suffix & prefix string:

python commix.py --url="http://192.168.178.55/php-charts_v1.0/wizard/index.php?type=INJECT_HERE" --prefix="'" --suffix="//"

Exploiting OWASP Mutillidae using extra headers and HTTP proxy:

python commix.py --url="http://192.168.178.46/mutillidae/index.php?popUpNotificationCode=SL5&page=dns-lookup.php" --data="target_host=INJECT_HERE" --headers="Accept-Language:fr\nETag:123\n" --proxy="127.0.0.1:8081"

Exploiting Persistence using the ICMP exfiltration technique:

su -c "python commix.py --url="http://192.168.178.8/debug.php" --data="addr=127.0.0.1" --icmp-exfil="ip_src=192.168.178.5,ip_dst=192.168.178.8""

Exploiting Kioptrix: 2014 (#5) using custom user-agent and specified injection technique:

python commix.py --url="http://192.168.178.6:8080/phptax/drawimage.php?pfilez=INJECT_HERE&pdf=make" --user-agent="Mozilla/4.0 Mozilla4_browser" --technique="file-based" --root-dir="/"

Command injection testbeds

A collection of pwnable VMs that include web apps vulnerable to command injection.


More information can be found at: https://github.com/stasinopoulos/commix

Kunai – A Tool For Pwning And Info Gathering via User Browser


Kunai 0.2

Sometimes there is a need to obtain the IP address of a specific person, or to perform client-side attacks via the user's browser. This is what you need in such situations.

Kunai is a simple script which collects a lot of information about a visitor and saves the output to a file; furthermore, you may try to perform attacks on the user's browser using BeEF or Metasploit.

In order to grab as much information as possible, the script detects whether JavaScript is enabled so it can obtain more details about a visitor. For example, you can include this script in an iframe, or perform redirects, to avoid detection of suspicious activity. The script can notify you via email about users that visit it; whenever someone visits your hook (kunai), the output file is updated.

Functions

  • Stores information about users in elegant output
  • Website spoofing
  • Redirects
  • BeEF & Metasploit compatibility
  • Email notification
  • Different behavior for browsers with JavaScript disabled
  • Single-file composition

Example configs

  • Website spoofing (more stable & better for autopwn & beef):
  • Redirect (better for quick ip catching):
goo.gl/urlink -> evilhost/x.php -> site.com/kitty.png
  • Cross Site Scripting (inclusion)
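
Kunai itself is a PHP script, but the redirect trick above is easy to illustrate in any language. Here is a minimal Python 3 sketch of a hook that logs each visitor's IP and User-Agent and immediately bounces them to an innocuous image (the hostnames are placeholders from the example chain above):

from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "http://site.com/kitty.png"   # where the visitor finally lands

class Hook(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log IP and User-Agent, then redirect the visitor onward
        with open("visitors.log", "a") as log:
            log.write("%s %s\n" % (self.client_address[0],
                                   self.headers.get("User-Agent", "-")))
        self.send_response(302)
        self.send_header("Location", TARGET)
        self.end_headers()

HTTPServer(("", 8080), Hook).serve_forever()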

Screens


More information can be found on: https://github.com/Smaash/kunai

Meteor – a complete open source platform for building web and mobile apps in pure JavaScript


Meteor is a complete open source platform for building web and mobile apps in pure JavaScript.

Installing Meteor

Meteor supports OS X, Windows, and Linux.

On Windows? Download the official Meteor installer here.

On OS X or Linux? Install the latest official Meteor release from your terminal:

curl https://install.meteor.com/ | sh

The Windows installer supports Windows 7, Windows 8.1, Windows Server 2008, and Windows Server 2012. The command line installer supports Mac OS X 10.7 (Lion) and above, and Linux on x86 and x86_64 architectures.

Now that you’ve installed Meteor, check out the tutorial that teaches you how to build a collaborative todo list app while showing you Meteor’s most exciting and useful features. You can also read about the design of the Meteor platform or check out the complete documentation.

Creating your first app

To create a Meteor app, open your terminal and type:

meteor create simple-todos

This will create a new folder called simple-todos with all of the files that a Meteor app needs:

simple-todos.js       # a JavaScript file loaded on both client and server
simple-todos.html     # an HTML file that defines view templates
simple-todos.css      # a CSS file to define your app's styles
.meteor               # internal Meteor files

To run the newly created app:

cd simple-todos
meteor

Open your web browser and go to http://localhost:3000 to see the app running.

You can play around with this default app for a bit before we continue. For example, try editing the text in <h1> inside simple-todos.html using your favorite text editor. When you save the file, the page in your browser will automatically update with the new content. We call this “hot code push”.

Now that you have some experience editing the files in your Meteor app, let’s start working on a simple todo list application.

See the code for step 1 on GitHub!

Defining views with templates

To start working on our todo list app, let’s replace the code of the default starter app with the code below. Then we’ll talk about what it does.

<!-- simple-todos.html -->
<head>
  <title>Todo List</title>
</head>

<body>
  <div class="container">
    <header>
      <h1>Todo List</h1>
    </header>

    <ul>
      {{#each tasks}}
        {{> task}}
      {{/each}}
    </ul>
  </div>
</body>

<template name="task">
  <li>{{text}}</li>
</template>
// simple-todos.js
if (Meteor.isClient) {
  // This code only runs on the client
  Template.body.helpers({
    tasks: [
      { text: "This is task 1" },
      { text: "This is task 2" },
      { text: "This is task 3" }
    ]
  });
}

In our browser, the app will now look much like this:

Todo List

  • This is task 1
  • This is task 2
  • This is task 3

Now let’s find out what all these bits of code are doing!

HTML files in Meteor define templates

Meteor parses all of the HTML files in your app folder and identifies three top-level tags: <head>, <body>, and <template>.

Everything inside any <head> tags is added to the head section of the HTML sent to the client, and everything inside <body> tags is added to the body section, just like in a regular HTML file.

Everything inside <template> tags is compiled into Meteor templates, which can be included inside HTML with {{> templateName}} or referenced in your JavaScript with Template.templateName.

Adding logic and data to templates

All of the code in your HTML files is compiled with Meteor’s Spacebars compiler. Spacebars uses statements surrounded by double curly braces such as {{#each}} and {{#if}} to let you add logic and data to your views.

You can pass data into templates from your JavaScript code by defining helpers. In the code above, we defined a helper called tasks on Template.body that returns an array. Inside the body tag of the HTML, we can use {{#each tasks}} to iterate over the array and insert a task template for each value. Inside the #each block, we can display the text property of each array item using {{text}}.

In the next step, we will see how we can use helpers to make our templates display dynamic data from a database collection.

Adding CSS

Before we go any further, let’s make our app look nice by adding some CSS.

Since this tutorial is focused on working with HTML and JavaScript, just copy all the CSS code below into simple-todos.css. This is all the CSS code you will need until the end of the tutorial. The app will still work without the CSS, but it will look much nicer if you add it.

Replace simple-todos.css with this code:
/* CSS declarations go here */
body {
  font-family: sans-serif;
  background-color: #315481;
  background-image: linear-gradient(to bottom, #315481, #918e82 100%);
  background-attachment: fixed;

  position: absolute;
  top: 0;
  bottom: 0;
  left: 0;
  right: 0;

  padding: 0;
  margin: 0;

  font-size: 14px;
}

.container {
  max-width: 600px;
  margin: 0 auto;
  min-height: 100%;
  background: white;
}

header {
  background: #d2edf4;
  background-image: linear-gradient(to bottom, #d0edf5, #e1e5f0 100%);
  padding: 20px 15px 15px 15px;
  position: relative;
}

#login-buttons {
  display: block;
}

h1 {
  font-size: 1.5em;
  margin: 0;
  margin-bottom: 10px;
  display: inline-block;
  margin-right: 1em;
}

form {
  margin-top: 10px;
  margin-bottom: -10px;
  position: relative;
}

.new-task input {
  box-sizing: border-box;
  padding: 10px 0;
  background: transparent;
  border: none;
  width: 100%;
  padding-right: 80px;
  font-size: 1em;
}

.new-task input:focus{
  outline: 0;
}

ul {
  margin: 0;
  padding: 0;
  background: white;
}

.delete {
  float: right;
  font-weight: bold;
  background: none;
  font-size: 1em;
  border: none;
  position: relative;
}

li {
  position: relative;
  list-style: none;
  padding: 15px;
  border-bottom: #eee solid 1px;
}

li .text {
  margin-left: 10px;
}

li.checked {
  color: #888;
}

li.checked .text {
  text-decoration: line-through;
}

li.private {
  background: #eee;
  border-color: #ddd;
}

header .hide-completed {
  float: right;
}

.toggle-private {
  margin-left: 5px;
}

@media (max-width: 600px) {
  li {
    padding: 12px 15px;
  }

  .search {
    width: 150px;
    clear: both;
  }

  .new-task input {
    padding-bottom: 5px;
  }
}
See the code for step 2 on GitHub!

Storing tasks in a collection

Collections are Meteor’s way of storing persistent data. The special thing about collections in Meteor is that they can be accessed from both the server and the client, making it easy to write view logic without having to write a lot of server code. They also update themselves automatically, so a template backed by a collection will automatically display the most up-to-date data.

Creating a new collection is as easy as calling MyCollection = new Mongo.Collection("my-collection"); in your JavaScript. On the server, this sets up a MongoDB collection called my-collection; on the client, this creates a cache connected to the server collection. We’ll learn more about the client/server divide in step 12, but for now we can write our code with the assumption that the entire database is present on the client.

Let’s update our JavaScript code to get our tasks from a collection instead of a static array:

// simple-todos.js
Tasks = new Mongo.Collection("tasks");

if (Meteor.isClient) {
  // This code only runs on the client
  Template.body.helpers({
    tasks: function () {
      return Tasks.find({});
    }
  });
}

When you make these changes to the code, you’ll notice that the tasks that used to be in the todo list have disappeared. That’s because our database is currently empty — we need to insert some tasks!

Inserting tasks from the console

Items inside collections are called documents. Let’s use the server database console to insert some documents into our collection. In a new terminal tab, go to your app directory and type:

meteor mongo

This opens a console into your app’s local development database. Into the prompt, type:

db.tasks.insert({ text: "Hello world!", createdAt: new Date() });

In your web browser, you will see the UI of your app immediately update to show the new task. You can see that we didn’t have to write any code to connect the server-side database to our front-end code — it just happened automatically.

Insert a few more tasks from the database console with different text. In the next step, we’ll see how to add functionality to our app’s UI so that we can add tasks without using the database console.

See the code for step 3 on GitHub!

Adding tasks with a form

In this step, we’ll add an input field for users to add tasks to the list.

First, let’s add a form to our HTML:

<header>
  <h1>Todo List</h1>

  <!-- add a form below the h1 -->
  <form class="new-task">
    <input type="text" name="text" placeholder="Type to add new tasks" />
  </form>
</header>

Here’s the JavaScript code we need to add to listen to the submit event on the form:

// Inside the if (Meteor.isClient) block, right after Template.body.helpers:
Template.body.events({
  "submit .new-task": function (event) {
    // This function is called when the new task form is submitted

    var text = event.target.text.value;

    Tasks.insert({
      text: text,
      createdAt: new Date() // current time
    });

    // Clear form
    event.target.text.value = "";

    // Prevent default form submit
    return false;
  }
});

Now your app has a new input field. To add a task, just type into the input field and hit enter. If you open a new browser window and open the app again, you’ll see that the list is automatically synchronized between all clients.

Attaching events to templates

Event listeners are added to templates in much the same way as helpers are: by calling Template.templateName.events(...) with a dictionary. The keys describe the event to listen for, and the values are event handlers that are called when the event happens.

In our case above, we are listening to the submit event on any element that matches the CSS selector .new-task. When this event is triggered by the user pressing enter inside the input field, our event handler function is called.

The event handler gets an argument called event that has some information about the event that was triggered. In this case event.target is our form element, and we can get the value of our input with event.target.text.value. You can see all of the other properties of the event object by adding a console.log(event) and inspecting the object in your browser console.

The last two lines of our event handler perform some cleanup — first we make sure to make the input blank, and then we return false to tell the web browser to not do the default form submit action since we have already handled it.

Inserting into a collection

Inside the event handler, we are adding a task to the tasks collection by calling Tasks.insert(). We can assign any properties to the task object, such as the time created, since we don’t ever have to define a schema for the collection.

Being able to insert anything into the database from the client isn’t very secure, but it’s okay for now. In step 10 we’ll learn how we can make our app secure and restrict how data is inserted into the database.

Sorting our tasks

Currently, our code displays all new tasks at the bottom of the list. That’s not very good for a task list, because we want to see the newest tasks first.

We can solve this by sorting the results using the createdAt field that is automatically added by our new code. Just add a sort option to the find call inside the tasks helper:

Template.body.helpers({
  tasks: function () {
    // Show newest tasks first
    return Tasks.find({}, {sort: {createdAt: -1}});
  }
});

In the next step, we’ll add some very important todo list functions: checking off and deleting tasks.

See the code for step 4 on GitHub!

Checking off and deleting tasks

Until now, we have only interacted with a collection by inserting documents. Now, we will learn how to update and remove them.

Let’s add two elements to our task template, a checkbox and a delete button:

<!-- replace the existing task template with this code -->
<template name="task">
  <li class="{{#if checked}}checked{{/if}}">
    <button class="delete">&times;</button>

    <input type="checkbox" checked="{{checked}}" class="toggle-checked" />

    <span class="text">{{text}}</span>
  </li>
</template>

We have added UI elements, but they don’t do anything yet. We should add some event handlers:

// In the client code, below everything else
Template.task.events({
  "click .toggle-checked": function () {
    // Set the checked property to the opposite of its current value
    Tasks.update(this._id, {$set: {checked: ! this.checked}});
  },
  "click .delete": function () {
    Tasks.remove(this._id);
  }
});

Getting data in event handlers

Inside the event handlers, this refers to an individual task object. In a collection, every inserted document has a unique _id field that can be used to refer to that specific document. We can get the _id of the current task with this._id. Once we have the _id, we can use update and remove to modify the relevant task.

Update

The update function on a collection takes two arguments. The first is a selector that identifies a subset of the collection, and the second is an update parameter that specifies what should be done to the matched objects.

In this case, the selector is just the _id of the relevant task. The update parameter uses $set to toggle the checked field, which will represent whether the task has been completed.

Remove

The remove function takes one argument, a selector that determines which item to remove from the collection.

Using object properties or helpers to add/remove classes

If you try checking off some tasks after adding all of the above code, you will see that checked off tasks have a line through them. This is enabled by the following snippet:

<li class="{{#if checked}}checked{{/if}}">

With this code, if the checked property of a task is true, the checked class is added to our list item. Using this class, we can make checked-off tasks look different in our CSS.

See the code for step 5 on GitHub!

Deploying your app

Now that we have a working todo list app, we can share it with our friends! Meteor makes it really easy to put an app up on the internet where other people can use it.

Simply go to your app directory, and type:

meteor deploy my_app_name.meteor.com

Once you answer all of the prompts and the upload completes, you can go to http://my_app_name.meteor.com and use your app from anywhere.

Try opening the app on multiple devices such as your phone and your friend’s computer. Add, remove, and check off some tasks and you will see that the UI of your app is really fast. That’s because Meteor doesn’t wait for the server to respond before updating the interface – we’ll talk about this more in step 11.

Congratulations, you’ve made a working app that you can now use with your friends! In later steps we will add more functionality involving multiple users, private tasks, and search. First, we’ll take a detour to see that while we were building a web app, we also created a pretty nice mobile app along the way.

Running your app on Android or iOS

So far, we’ve been building our app and testing only in a web browser, but Meteor has been designed to work across different platforms – your simple todo list website can become an iOS or Android app in just a few commands.

Meteor makes it easy to set up all of the tools required to build mobile apps, but downloading all of the programs can take a while – for Android the download is about 300MB and for iOS you need to install Xcode which is about 2GB. If you don’t want to wait to download these tools, feel free to skip to the next step.

Running on an Android emulator

In the terminal, go to your app folder and type:

meteor install-sdk android

This will help you install all of the necessary tools to build an Android app from your project. When you are done installing everything, type:

meteor add-platform android

After you agree to the license terms, type:

meteor run android

After some initialization, you will see an Android emulator pop up, running your app inside a native Android wrapper. The emulator can be somewhat slow, so if you want to see what it’s really like using your app, you should run it on an actual device.

Running on an Android device

First, complete all of the steps above to set up the Android tools on your system. Then, make sure you have USB Debugging enabled on your phone and the phone is plugged into your computer with a USB cable. Also, you must quit the Android emulator before running on a device.

Then, run the following command:

meteor run android-device

The app will be built and installed on your device. If you want to point your app to the server you deployed in the previous step, run:

meteor run android-device --mobile-server my_app_name.meteor.com

Running on an iOS simulator (Mac Only)

If you have a Mac, you can run your app inside the iOS simulator.

Go to your app folder and type:

meteor install-sdk ios

This will run you through the setup necessary to build an iOS app from your project. When you’re done, type:

meteor add-platform ios
meteor run ios

You will see the iOS simulator pop up with your app running inside.

Running on an iPhone or iPad (Mac Only; requires Apple developer account)

If you have an Apple developer account, you can also run your app on an iOS device. Run the following command:

meteor run ios-device

This will open Xcode with a project for your iOS app. You can use Xcode to then launch the app on any device or simulator that Xcode supports.

If you want to point your app at the previously deployed server, run:

meteor run ios-device --mobile-server my_app_name.meteor.com

Now that we have seen how easy it is to deploy our app and run it on mobile, let’s get to adding some more features.

Storing temporary UI state in Session

In this step, we’ll add a client-side data filtering feature to our app, so that users can check a box to only see incomplete tasks. We’re going to learn how to use Session to store temporary reactive state on the client.

First, we need to add a checkbox to our HTML:

<!-- add the checkbox to <body> right below the h1 -->
<label class="hide-completed">
  <input type="checkbox" checked="{{hideCompleted}}" />
  Hide Completed Tasks
</label>

Then, we need an event handler to update a Session variable when the checkbox is checked or unchecked. Session is a convenient place to store temporary UI state, and can be used in helpers just like a collection.

// Add to Template.body.events
"change .hide-completed input": function (event) {
  Session.set("hideCompleted", event.target.checked);
}

Now, we need to update Template.body.helpers. The code below has a new if block to filter the tasks if the checkbox is checked, and a helper to make sure the checkbox represents the state of our Session variable.

// Replace the existing Template.body.helpers
Template.body.helpers({
  tasks: function () {
    if (Session.get("hideCompleted")) {
      // If hide completed is checked, filter tasks
      return Tasks.find({checked: {$ne: true}}, {sort: {createdAt: -1}});
    } else {
      // Otherwise, return all of the tasks
      return Tasks.find({}, {sort: {createdAt: -1}});
    }
  },
  hideCompleted: function () {
    return Session.get("hideCompleted");
  }
});

Now if you check the box, the task list will only show tasks that haven’t been completed.

Session is a reactive data store for the client

Until now, we have stored all of our state in collections, and the view updated automatically when we modified the data inside these collections. This is because Meteor.Collection is recognized by Meteor as a reactive data source, meaning Meteor knows when the data inside has changed. Session is the same way, but is not synced with the server like collections are. This makes Session a convenient place to store temporary UI state like the checkbox above. Just like with collections, we don't have to write any extra code for the template to update when the Session variable changes; just calling Session.get(...) inside the helper is enough.

One more feature: Showing a count of incomplete tasks

Now that we have written a query that filters out completed tasks, we can use the same query to display a count of the tasks that haven’t been checked off. To do this we need to add a helper and change one line of the HTML.

// Add to Template.body.helpers
incompleteCount: function () {
  return Tasks.find({checked: {$ne: true}}).count();
}
<!-- display the count at the end of the <h1> tag -->
<h1>Todo List ({{incompleteCount}})</h1>
See the code for step 8 on GitHub!

Adding user accounts

Meteor comes with an accounts system and a drop-in login user interface that lets you add multi-user functionality to your app in minutes.

To enable the accounts system and UI, we need to add the relevant packages. In your app directory, run the following command:

meteor add accounts-ui accounts-password

In the HTML, right under the checkbox, include the following code to add a login dropdown:

{{> loginButtons}}

Then, in your JavaScript, add the following code to configure the accounts UI to use usernames instead of email addresses:

// At the bottom of the client code
Accounts.ui.config({
  passwordSignupFields: "USERNAME_ONLY"
});

Now users can create accounts and log into your app! This is very nice, but logging in and out isn’t very useful yet. Let’s add two functions:

  1. Only display the new task input field to logged in users
  2. Show which user created each task

To do this, we will add two new fields to the tasks collection:

  1. owner – the _id of the user that created the task.
  2. username – the username of the user that created the task. We will save the username directly in the task object so that we don’t have to look up the user every time we display the task.

First, let’s add some code to save these fields into the submit .new-task event handler:

Tasks.insert({
  text: text,
  createdAt: new Date(),            // current time
  owner: Meteor.userId(),           // _id of logged in user
  username: Meteor.user().username  // username of logged in user
});

Then, in our HTML, add an #if block helper to only show the form when there is a logged in user:

{{#if currentUser}}
  <form class="new-task">
    <input type="text" name="text" placeholder="Type to add new tasks" />
  </form>
{{/if}}

Finally, add a Spacebars statement to display the username field on each task right before the text:

<span class="text"><strong>{{username}}</strong> - {{text}}</span>

Now, users can log in and we can track which user each task belongs to. Let’s look at some of the concepts we just discovered in more detail.

Automatic accounts UI

If our app has the accounts-ui package, all we have to do to add a login dropdown is include the loginButtons template with {{> loginButtons}}. This dropdown detects which login methods have been added to the app and displays the appropriate controls. In our case, the only enabled login method is accounts-password, so the dropdown displays a password field. If you are adventurous, you can add the accounts-facebook package to enable Facebook login in your app – the Facebook button will automatically appear in the dropdown.

Getting information about the logged-in user

In your HTML, you can use the built-in {{currentUser}} helper to check if a user is logged in and get information about them. For example, {{currentUser.username}} will display the logged in user’s username.

In your JavaScript code, you can use Meteor.userId() to get the current user’s _id, or Meteor.user() to get the whole user document.

In the next step, we will learn how to make our app more secure by doing all of our data validation on the server instead of the client.

See the code for step 9 on GitHub!

Security with methods

Before this step, any user of the app could edit any part of the database. This might be okay for very small internal apps or demos, but any real application needs to control permissions for its data. In Meteor, the best way to do this is by declaring methods. Instead of the client code directly calling insert, update, and remove, it will instead call methods that will check if the user is authorized to complete the action and then make any changes to the database on the client's behalf.

Removing insecure

Every newly created Meteor project has the insecure package added by default. This is the package that allows us to edit the database from the client. It’s useful when prototyping, but now we are taking off the training wheels. To remove this package, go to your app directory and run:

meteor remove insecure

If you try to use the app after removing this package, you will notice that none of the inputs or buttons work anymore. This is because all client-side database permissions have been revoked. Now we need to rewrite some parts of our app to use methods.

Defining methods

First, we need to define some methods. We need one method for each database operation we want to perform on the client. Methods should be defined in code that is executed on the client and the server – we will discuss this a bit later in the section titled Latency compensation.

// At the bottom of simple-todos.js, outside of the client-only block
Meteor.methods({
  addTask: function (text) {
    // Make sure the user is logged in before inserting a task
    if (! Meteor.userId()) {
      throw new Meteor.Error("not-authorized");
    }

    Tasks.insert({
      text: text,
      createdAt: new Date(),
      owner: Meteor.userId(),
      username: Meteor.user().username
    });
  },
  deleteTask: function (taskId) {
    Tasks.remove(taskId);
  },
  setChecked: function (taskId, setChecked) {
    Tasks.update(taskId, { $set: { checked: setChecked} });
  }
});

Now that we have defined our methods, we need to update the places we were operating on the collection to use the methods instead:

// replace Tasks.insert( ... ) with:
Meteor.call("addTask", text);

// replace Tasks.update( ... ) with:
Meteor.call("setChecked", this._id, ! this.checked);

// replace Tasks.remove( ... ) with:
Meteor.call("deleteTask", this._id);

Now all of our inputs and buttons will start working again. What did we gain from all of this work?

  1. When we insert tasks into the database, we can now securely verify that the user is logged in, that the createdAt field is correct, and that the owner and username fields are correct and the user isn't impersonating anyone.
  2. We can add extra validation logic to setChecked and deleteTask in later steps when users can make tasks private.
  3. Our client code is now more separated from our database logic. Instead of a lot of stuff happening inside our event handlers, we now have methods that can be called from anywhere.

Latency compensation

So why do we want to define our methods on the client and on the server? We do this to enable a feature called latency compensation.

When you call a method on the client using Meteor.call, two things happen in parallel:

  1. The client sends a request to the server to run the method in a secure environment, just like an AJAX request would work
  2. A simulation of the method runs directly on the client to attempt to predict the outcome of the server call using the available information

What this means is that a newly created task actually appears on the screen before the result comes back from the server.

If the result from the server comes back and is consistent with the simulation on the client, everything remains as is. If the result on the server is different from the result of the simulation on the client, the UI is patched to reflect the actual state of the server.

With Meteor methods and latency compensation, you get the best of both worlds — the security of server code and no round-trip delay.

See the code for step 10 on GitHub!

Filtering data with publish and subscribe

Now that we have moved all of our app’s sensitive code into methods, we need to learn about the other half of Meteor’s security story. Until now, we have worked assuming the entire database is present on the client, meaning if we call Tasks.find() we will get every task in the collection. That’s not good if users of our application want to store privacy-sensitive data. We need a way of controlling which data Meteor sends to the client-side database.

Just like with insecure in the last step, all new Meteor apps start with the autopublish package. Let’s remove it and see what happens:

meteor remove autopublish

When the app refreshes, the task list will be empty. Without the autopublish package, we will have to specify explicitly what the server sends to the client. The functions in Meteor that do this are Meteor.publish and Meteor.subscribe.

Let’s add them now.

// At the bottom of simple-todos.js
if (Meteor.isServer) {
  Meteor.publish("tasks", function () {
    return Tasks.find();
  });
}
// At the top of our client code
Meteor.subscribe("tasks");

Once you have added this code, all of the tasks will reappear.

Calling Meteor.publish on the server registers a publication named "tasks". When Meteor.subscribe is called on the client with the publication name, the client subscribes to all the data from that publication, which in this case is all of the tasks in the database. To truly see the power of the publish/subscribe model, let’s implement a feature that allows users to mark tasks as “private” so that no other users can see them.

Implementing private tasks

First, let’s add another property to tasks called “private” and a button for users to mark a task as private. This button should only show up for the owner of a task. It will display the current state of the item.

<!-- add right below the code for the checkbox in the task template -->
{{#if isOwner}}
  <button class="toggle-private">
    {{#if private}}
      Private
    {{else}}
      Public
    {{/if}}
  </button>
{{/if}}

<!-- modify the li tag to have the private class if the item is private -->
<li class="{{#if checked}}checked{{/if}} {{#if private}}private{{/if}}">

We need to modify our JavaScript code in three places:

// Define a helper to check if the current user is the task owner
Template.task.helpers({
  isOwner: function () {
    return this.owner === Meteor.userId();
  }
});

// Add an event for the new button to Template.task.events
"click .toggle-private": function () {
  Meteor.call("setPrivate", this._id, ! this.private);
}

// Add a method to Meteor.methods called setPrivate
setPrivate: function (taskId, setToPrivate) {
  var task = Tasks.findOne(taskId);

  // Make sure only the task owner can make a task private
  if (task.owner !== Meteor.userId()) {
    throw new Meteor.Error("not-authorized");
  }

  Tasks.update(taskId, { $set: { private: setToPrivate } });
}

Now that we have a way of setting which tasks are private, we should modify our publication function to only send the tasks that a user is authorized to see:

// Modify the publish statement
// Only publish tasks that are public or belong to the current user
Meteor.publish("tasks", function () {
  return Tasks.find({
    $or: [
      { private: {$ne: true} },
      { owner: this.userId }
    ]
  });
});

To test that this functionality works, you can use your browser’s private browsing mode to log in as a different user. Put the two windows side by side and mark a task private to confirm that the other user can’t see it. Now make it public again and it will reappear!

In order to finish up our private task feature, we need to add checks to our deleteTask and setChecked methods to make sure only the task owner can delete or check off a private task:

// Inside the deleteTask method
var task = Tasks.findOne(taskId);
if (task.private && task.owner !== Meteor.userId()) {
  // If the task is private, make sure only the owner can delete it
  throw new Meteor.Error("not-authorized");
}

// Inside the setChecked method
var task = Tasks.findOne(taskId);
if (task.private && task.owner !== Meteor.userId()) {
  // If the task is private, make sure only the owner can check it off
  throw new Meteor.Error("not-authorized");
}

We’re done with our private task feature! Now our app is secure from attackers trying to view or modify someone’s private tasks.

See the code for step 11 on GitHub!

What’s next?

Congratulations on your newly built Meteor app! Don’t forget to deploy it again so your friends can use the new features.

Your app currently supports collaborating on a single todo list. To see how you could add more functionality, check out the Todos example — a more complete app that can handle sharing multiple lists. Also, try Local Market, a cross-platform customer engagement app that shows off native hardware functionality and social features.

meteor create --example todos
meteor create --example localmarket

Here are some options for where you can go next:

  1. Grab a copy of Discover Meteor, the best Meteor book out there
  2. Read about the design of the Meteor platform to see how all of the parts fit together
  3. Check out the complete documentation

More information can be found on: https://www.meteor.com

Raptor – A Web-based Source Code Vulnerability Scanner


Raptor

Raptor is a web-based (web-service + UI) GitHub-centric source-vulnerability scanner, i.e. it scans a repository with just the GitHub repo URL. You can set up webhooks to ensure automated scans every time you commit or merge a pull request. The scan is done asynchronously and the results are available only to the user who initiated the scan.
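
Raptor's web-service API is not documented in this post, so the following is only a generic Python 3 sketch of the webhook idea: a tiny endpoint that receives GitHub push events and kicks off a scan of the pushed repository (scan_repo is a placeholder, not a Raptor API):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def scan_repo(clone_url):
    print("would trigger a scan of", clone_url)   # placeholder hook

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        # GitHub push payloads include the repository's clone URL
        scan_repo(event["repository"]["clone_url"])
        self.send_response(204)
        self.end_headers()

HTTPServer(("", 9000), WebhookHandler).serve_forever()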

Some of the features of Raptor:

  • Plug-in architecture (plug and play external tools and generate unified reports)
  • Web-service can be leveraged for custom automation (without the need of the UI)
  • Easy to create/edit/delete signatures for new vulnerabilities and/or programming languages.

This tool is an attempt to help the community and start-up companies emphasize secure coding. This tool may or may not match the features/quality of commercial alternatives; nothing is guaranteed and you have been warned. This tool is targeted at security code reviewers and/or developers with secure-coding experience, to find vulnerability entry points during code audits or peer reviews. Please DO NOT trust the tool's output blindly. It is best used if you plug Raptor into your CI/CD pipeline.

Version

0.1 beta

Tech

Integrated Plugins (currently):

  • Mozilla ScanJS – for client-side JavaScript, Node.js and Firefox OS support
  • Brakeman – for Ruby On Rails support
  • RIPS – for PHP support
  • [Android] – for insecure permissions

Available Rulepacks (currently):

  • ActionScript – supports ActionScript 2.0 & 3.0 source/sinks
  • Java – partial support for Android. J2EE and JSP support yet to be added.

Installation (Tested on a Ubuntu 14.04 x64 LAMP instance)

Installation Video: YouTube Install

$ wget https://github.com/dpnishant/raptor/archive/master.zip -O raptor.zip
$ unzip raptor.zip
$ cd raptor-master
$ sudo sh install.sh

Usage

Scanner

Usage Video: YouTube Usage

cd raptor-master
sudo sh start.sh #starts the backend web-service

Now point your browser to Raptor Home

Log in with any username and any password (but remember the username to view your scan history)

Rules Editor

You can use the bundled lightweight, GUI client-side rules editor for adding any new/custom rules for your specific requirements, or any other plain-text editor, as the rulepack files are just simple JSON structures. Use your browser to open the rules located in ‘backend/rules’. When you are done, save your new/modified rules file in the same directory, i.e. ‘backend/rules’. All you need to do now is a minor edit, here: Init Script. Append your new rulepack filename to this array without the ‘.rulepack’ extension and restart the backend server. You are all set!

You can access it here: Rules Editor

Development

Want to contribute? Great! Get in touch with me if you have an idea or else feel free to fork and improve. :)

Contributors

License

GNU GPL v2.0

Free Software, Hell Yeah!


More information can be found on: https://github.com/dpnishant/raptor and on http://dpnishant.github.io/raptor

jsprime – a javascript static security analysis tool


Today, more and more developers are switching to JavaScript as their first choice of language. The reason is simple: JavaScript is now accepted as a mainstream programming language for applications, be it on the web or on mobile, on the client side or on the server side. JavaScript’s flexibility and loose typing are friendly to developers who want to create rich applications at an unbelievable speed. Major advancements in the performance of JavaScript interpreters have, in recent years, almost eliminated the question of scalability and throughput for many organizations. So the point is that JavaScript is now a really important and powerful language, and its usage is growing every day. From client-side code in web applications it grew to the server side through Node.js, and it is now supported as a proper language for writing applications on major mobile operating system platforms, such as Windows 8 apps and the upcoming Firefox OS apps.

But the problem is that many developers practice insecure coding, which leads to many client-side attacks, of which DOM XSS is the most infamous. We tried to understand the root cause of this problem and figured out that there are not enough practically usable tools that can solve real-world problems. Hence, as our first attempt towards solving this problem, we want to talk about JSPrime: a JavaScript static analysis tool for the rest of us. It’s a very lightweight and very easy to use point-and-click tool! The static analysis tool is based on the very popular Esprima ECMAScript parser by Ariya Hidayat.

I would like to highlight some of the interesting features of the tool below:

JS Library Aware Source & Sinks

Most dynamic or static analyzers are developed to support native/pure JavaScript, which is a problem for most developers since the introduction and wide adoption of JavaScript frameworks/libraries like jQuery, YUI etc. Since these scanners are designed to support pure JavaScript, they fail at understanding the context of the development due to the usage of libraries, and produce many false positives and false negatives. To solve this we have identified the dangerous user input sources and code execution sink functions for jQuery and YUI for the initial release, and we shall talk about how users can easily extend it to other frameworks.

  • Variable & function tracing (part of our code flow analysis algorithm)
  • Variable & function scope-aware analysis (part of our code flow analysis algorithm)
  • Known filter function aware
  • OOP & prototype compliant
  • Minimal false-positive alerts
  • Supports minified JavaScript
  • Blazing fast performance
  • Point and click :-) (my personal favorite)

Upcoming features:

  • Automatic code de-obfuscation & decompression through hybrid analysis (Ra.2 improvisation; http://code.google.com/p/ra2-dom-xss-scanner)
  • ECMAScript family support (ActionScript 3, Node.js, WinJS)

Links

Test Cases Document URL: http://goo.gl/vf61Km

Sources & Sinks Document URL: http://goo.gl/olzYM4

BlackHat Slide: http://www.slideshare.net/nishantdp/jsprime-bhusa13new

Usage

Web Client

Open “index.html” in your browser.

Server-Side (Node.JS)

  1. In the terminal type “node server.js”
  2. Go to 127.0.0.1:8888 in your browser.

More information can be found on: https://github.com/dpnishant/jsprime and on http://www.jsprime.org

Ajenti – the web admin panel everyone wants


Ajenti

http://ajenti.org/

Ajenti is a Linux & BSD web admin panel.

Feature highlights

Easy installation

Ajenti is installed through your system’s package manager. Installation only takes a minute.
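On a Debian-based system, for example, this typically boils down to the following (a sketch that assumes the Ajenti repository has already been added as described on ajenti.org, and that the package is named ajenti):

sudo apt-get update
sudo apt-get install ajenti
sudo service ajenti restart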

Existing configuration

Picks up your current configuration and works on your existing system as-is, without any preparation.

Caring

Does not overwrite your config files, options and comments. All changes are non-destructive.

Batteries included

Includes lots of plugins for system and software configuration, monitoring and management.

Extensible

Ajenti is easily extensible using Python. Plugin development is quick and pleasant with the Ajenti APIs.

Modern

Pleasant to look at, satisfying to click, and accessible anywhere from tablets and mobile devices.

Lightweight

Small memory footprint and CPU usage. Runs on low-end machines, wall plugs, routers and so on.

Listening

We listen to your feedback and add features in a fast-paced weekly release cycle.


More information can be found on: https://github.com/Eugeny/ajenti and on http://ajenti.org/

TokID / IdFix – a PGP-based web authentication token


IdFix: a PGP web authentication token

Author: Julien Vehent <jvehent@mozilla.com>
Date: 2015-02-23
Version: 1

IdFix proposes a method for generating and verifying authentication tokens using PGP. Its primary application is authenticating users on web APIs by sending an authentication token in the X-IDFIX HTTP header.

1   Motivation

IdFix was primarily designed for Mozilla InvestiGator as a way to authenticate users of the API without requiring them to enter credentials. IdFix has since been generalized to be used in any application that can emit and verify OpenPGP signatures.

A signed IdFix token transports the identity of the owner of an OpenPGP private key, which can be a human or a machine. A signed token is generated programmatically using the private keystore or GPG agent of a client. IdFix alleviates the need for a client to enter credentials during authentication, modulo any passphrase required to access the user’s private key (users should use a GPG agent to avoid this).

When used on a secure channel, IdFix guarantees that the owner of a private key emitted a given request. It is up to the application to associate the identity carried by IdFix with the proper accesses. We recommend doing this by comparing the key fingerprint that emitted a given signature against the fingerprints of users stored in a database.

2   Security considerations

2.1   Replay attacks

In the IdFix protocol, protection against replay attacks is optional. A verifier may decide to permit token replay to allow signers to reuse tokens multiple times, and thus reduce the cost of generating tokens with every request.

In this mode, IdFix is vulnerable to replay attacks. It is therefore critical that a signed IdFix token is transmitted over a secure channel, such as HTTPS. IdFix assumes that a token is only readable by its sender and receiver, and does not provide protection against token theft. We consider that HTTPS is sufficiently widespread that IdFix doesn’t need to provide protection against token theft.

A verifier may decide to implement replay protection by discarding token nonces that have already been consumed once. This mode assumes that the verifier keeps track of received IdFix tokens for a given period of time.

2.2   PGP Fingerprint comparison

It must be noted that, when comparing PGP fingerprints, the full 40 hexadecimal characters of a given fingerprint must be used. Short IDs, commonly used to reference PGP keys, must be avoided as they are highly prone to collisions.

2.3   Time synchronization

IdFix requires the use of a timestamp in RFC3339 format, and allows verifiers to discard tokens with timestamps outside of an acceptable time window. It is expected that clients (signers) and servers (verifiers) will be synchronized with a reliable time source, for example using NTP. Failure to synchronize with a reliable time source may lead to authentication failures.

3   Terminology

  • signer: a client that generates and signs an IdFix token
  • verifier: a server that receives and verifies an IdFix token
  • origin string: the cleartext string used by the signer to create an IdFix token
  • signature: an OpenPGP armored signature of the origin string, computed using the signer’s private key

4   Format

A signed IdFix token is a string that contains the following information:

  • version number
  • UTC timestamp in RFC3339 format
  • random nonce of at least 64 bits (preferred 128 bits)
  • armored signature of the three fields above, signed by a private key

An example token looks like the following:

`1;2006-01-02T15:04:05Z;1424703763646749449812569234;owEBYQGe/pANAwAIAaPWUhc7dj6...<truncated>`

Over HTTP, the IdFix protocol recommends sending the signed token in an HTTP header named X-IDFIX, for interoperability.

5   Construction

An IdFix signed token is made of two parts:

  • an origin string composed of a version number, a timestamp and a nonce
  • an armored signature of the cleartext string

5.1   Origin string

Construction of a token starts by building the origin string using the following requirements:

  • the current version of IdFix is 1
  • the nonce value must be a random positive integer
  • the timestamp must be in the UTC timezone and follow the format defined in RFC3339
  • each component must be followed by a semicolon ; (ascii code 0x3B)
  • the origin string must be terminated by a newline character \n (ascii code 0x0A)

A random nonce can be generated in bash with the command below ($RANDOM returns a 15-bit integer, so we invoke it 8 times to obtain roughly 120 bits of randomness).

echo $RANDOM$RANDOM$RANDOM$RANDOM$RANDOM$RANDOM$RANDOM$RANDOM

A correct timestamp can be generated with the following bash command:

$ date -u +%Y-%m-%dT%H:%M:%SZ

An example of origin string is:

1;2006-01-02T15:04:05Z;182592280749063001756043640123749365059;

The hexadecimal version of which is represented below:

$ hexdump -C <<< '1;2006-01-02T15:04:05Z;182592280749063001756043640123749365059;'
00000000  31 3b 32 30 30 36 2d 30  31 2d 30 32 54 31 35 3a  |1;2006-01-02T15:|
00000010  30 34 3a 30 35 5a 3b 31  38 32 35 39 32 32 38 30  |04:05Z;182592280|
00000020  37 34 39 30 36 33 30 30  31 37 35 36 30 34 33 36  |7490630017560436|
00000030  34 30 31 32 33 37 34 39  33 36 35 30 35 39 3b 0a  |40123749365059;.|
0000003f
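Putting the pieces together, a complete origin string can be built in one go. A minimal bash sketch following the requirements above (the eight $RANDOM invocations mirror the nonce command shown earlier):

# Build a version-1 origin string: version;timestamp;nonce;
nonce="$RANDOM$RANDOM$RANDOM$RANDOM$RANDOM$RANDOM$RANDOM$RANDOM"
ts="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
origin="1;$ts;$nonce;"
printf '%s\n' "$origin"    # the trailing newline terminates the origin string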

5.2   Signature of the origin string

The origin string constructed above must be signed using the digital signature method described in RFC 4880: OpenPGP Message Format. All acceptable OpenPGP signatures are authorized.

The signature must be detached and unwrapped into a single string, omitting the PGP SIGNATURE header and footer, Version line and blank lines, as follows:

  1. Create a detached signature of the origin string
$ gpg -a --detach-sig <<< '1;2006-01-02T15:04:05Z;182592280749063001756043640123749365059;'

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAABCAAGBQJU6+ZCAAoJEKPWUhc7dj6PsooH/3VLFc2gOL0ysHeLNZ8/UyWQ
7ZPt7guubKj3BXEb0C55yTM1ZV+ki9fjbf9BSfPHJLk+9PtmUEgLUkVZupJNXmRS
vKc0nQRFGiEB5rliN/9sF4vDMyVvFQ20SVSc36TCVcgi/LpicfT6Wonq/XB/JtDd
KD2SIheoOW0LAauEeRQGdmm42ByTC5zvL3Y3a/oKP359FEIgZKGXvk0WpBFsX5VM
9w4L6+PsvMIhTx1lOOVIZaCClgLjsofmPfaaPAYLbHf81GGQ/9cT4SkGSyiXbSFA
gWTPMEkZ8KUW4hTONDxDEoi7lFs2nudqb6fK21QjN55Yly4goTLT/FlrCJCQN6k=
=pStP
-----END PGP SIGNATURE-----
  2. Unwrap the signature data into a single string, without the lines -----BEGIN PGP SIGNATURE-----, Version: GnuPG v1, the blank line, and -----END PGP SIGNATURE-----.
iQEcBAABCAAGBQJU6+ZCAAoJEKPWUhc7dj6PsooH/3VLFc2gOL0ysHeLNZ8/UyWQ7ZPt7guubKj3BXEb0C55yTM1ZV+ki9fjbf9BSfPHJLk+9PtmUEgLUkVZupJNXmRSvKc0nQRFGiEB5rliN/9sF4vDMyVvFQ20SVSc36TCVcgi/LpicfT6Wonq/XB/JtDdKD2SIheoOW0LAauEeRQGdmm42ByTC5zvL3Y3a/oKP359FEIgZKGXvk0WpBFsX5VM9w4L6+PsvMIhTx1lOOVIZaCClgLjsofmPfaaPAYLbHf81GGQ/9cT4SkGSyiXbSFAgWTPMEkZ8KUW4hTONDxDEoi7lFs2nudqb6fK21QjN55Yly4goTLT/FlrCJCQN6k==pStP

5.3   Token assembly

An IdFix token is finally built by concatenating the origin string and the signature. The trailing newline \n of the origin string is removed before concatenating the signature, as follows:

1;2006-01-02T15:04:05Z;182592280749063001756043640123749365059;iQEcBAABCAAGBQJU6+ZCAAoJEKPWUhc7dj6PsooH/3VLFc2gOL0ysHeLNZ8/UyWQ7ZPt7guubKj3BXEb0C55yTM1ZV+ki9fjbf9BSfPHJLk+9PtmUEgLUkVZupJNXmRSvKc0nQRFGiEB5rliN/9sF4vDMyVvFQ20SVSc36TCVcgi/LpicfT6Wonq/XB/JtDdKD2SIheoOW0LAauEeRQGdmm42ByTC5zvL3Y3a/oKP359FEIgZKGXvk0WpBFsX5VM9w4L6+PsvMIhTx1lOOVIZaCClgLjsofmPfaaPAYLbHf81GGQ/9cT4SkGSyiXbSFAgWTPMEkZ8KUW4hTONDxDEoi7lFs2nudqb6fK21QjN55Yly4goTLT/FlrCJCQN6k==pStP

The hexadecimal version of which is represented below:

$ hexdump -C <<< '1;2006-01-02T15:04:05Z;182592280749063001756043640123749365059;iQEcBAABCAAGBQJU6+ZCAAoJEKPWUhc7dj6PsooH/3VLFc2gOL0ysHeLNZ8/UyWQ7ZPt7guubKj3BXEb0C55yTM1ZV+ki9fjbf9BSfPHJLk+9PtmUEgLUkVZupJNXmRSvKc0nQRFGiEB5rliN/9sF4vDMyVvFQ20SVSc36TCVcgi/LpicfT6Wonq/XB/JtDdKD2SIheoOW0LAauEeRQGdmm42ByTC5zvL3Y3a/oKP359FEIgZKGXvk0WpBFsX5VM9w4L6+PsvMIhTx1lOOVIZaCClgLjsofmPfaaPAYLbHf81GGQ/9cT4SkGSyiXbSFAgWTPMEkZ8KUW4hTONDxDEoi7lFs2nudqb6fK21QjN55Yly4goTLT/FlrCJCQN6k==pStP'
00000000  31 3b 32 30 30 36 2d 30  31 2d 30 32 54 31 35 3a  |1;2006-01-02T15:|
00000010  30 34 3a 30 35 5a 3b 31  38 32 35 39 32 32 38 30  |04:05Z;182592280|
00000020  37 34 39 30 36 33 30 30  31 37 35 36 30 34 33 36  |7490630017560436|
00000030  34 30 31 32 33 37 34 39  33 36 35 30 35 39 3b 69  |40123749365059;i|
00000040  51 45 63 42 41 41 42 43  41 41 47 42 51 4a 55 36  |QEcBAABCAAGBQJU6|
00000050  2b 5a 43 41 41 6f 4a 45  4b 50 57 55 68 63 37 64  |+ZCAAoJEKPWUhc7d|
00000060  6a 36 50 73 6f 6f 48 2f  33 56 4c 46 63 32 67 4f  |j6PsooH/3VLFc2gO|
00000070  4c 30 79 73 48 65 4c 4e  5a 38 2f 55 79 57 51 37  |L0ysHeLNZ8/UyWQ7|
00000080  5a 50 74 37 67 75 75 62  4b 6a 33 42 58 45 62 30  |ZPt7guubKj3BXEb0|
00000090  43 35 35 79 54 4d 31 5a  56 2b 6b 69 39 66 6a 62  |C55yTM1ZV+ki9fjb|
000000a0  66 39 42 53 66 50 48 4a  4c 6b 2b 39 50 74 6d 55  |f9BSfPHJLk+9PtmU|
000000b0  45 67 4c 55 6b 56 5a 75  70 4a 4e 58 6d 52 53 76  |EgLUkVZupJNXmRSv|
000000c0  4b 63 30 6e 51 52 46 47  69 45 42 35 72 6c 69 4e  |Kc0nQRFGiEB5rliN|
000000d0  2f 39 73 46 34 76 44 4d  79 56 76 46 51 32 30 53  |/9sF4vDMyVvFQ20S|
000000e0  56 53 63 33 36 54 43 56  63 67 69 2f 4c 70 69 63  |VSc36TCVcgi/Lpic|
000000f0  66 54 36 57 6f 6e 71 2f  58 42 2f 4a 74 44 64 4b  |fT6Wonq/XB/JtDdK|
00000100  44 32 53 49 68 65 6f 4f  57 30 4c 41 61 75 45 65  |D2SIheoOW0LAauEe|
00000110  52 51 47 64 6d 6d 34 32  42 79 54 43 35 7a 76 4c  |RQGdmm42ByTC5zvL|
00000120  33 59 33 61 2f 6f 4b 50  33 35 39 46 45 49 67 5a  |3Y3a/oKP359FEIgZ|
00000130  4b 47 58 76 6b 30 57 70  42 46 73 58 35 56 4d 39  |KGXvk0WpBFsX5VM9|
00000140  77 34 4c 36 2b 50 73 76  4d 49 68 54 78 31 6c 4f  |w4L6+PsvMIhTx1lO|
00000150  4f 56 49 5a 61 43 43 6c  67 4c 6a 73 6f 66 6d 50  |OVIZaCClgLjsofmP|
00000160  66 61 61 50 41 59 4c 62  48 66 38 31 47 47 51 2f  |faaPAYLbHf81GGQ/|
00000170  39 63 54 34 53 6b 47 53  79 69 58 62 53 46 41 67  |9cT4SkGSyiXbSFAg|
00000180  57 54 50 4d 45 6b 5a 38  4b 55 57 34 68 54 4f 4e  |WTPMEkZ8KUW4hTON|
00000190  44 78 44 45 6f 69 37 6c  46 73 32 6e 75 64 71 62  |DxDEoi7lFs2nudqb|
000001a0  36 66 4b 32 31 51 6a 4e  35 35 59 6c 79 34 67 6f  |6fK21QjN55Yly4go|
000001b0  54 4c 54 2f 46 6c 72 43  4a 43 51 4e 36 6b 3d 3d  |TLT/FlrCJCQN6k==|
000001c0  70 53 74 50 0a                                    |pStP.|
000001c5
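The signing, unwrapping and assembly steps can be scripted as well. A minimal sketch, reusing the $origin variable from the sketch in section 5.1 (the sed expression strips the armor header, footer, Version line and blank lines; the exact armor output may vary with your GnuPG version):

# Sign the newline-terminated origin string, unwrap the armor,
# and concatenate origin string and signature into the final token.
sig="$(printf '%s\n' "$origin" \
  | gpg -a --detach-sig \
  | sed '/^-----/d; /^Version/d; /^$/d' \
  | tr -d '\n')"
token="$origin$sig"    # $origin carries no trailing newline here
echo "$token"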

6   Verification

6.1   Signature verification

Upon reception of an IdFix token, a verifier must separate the origin string from the signature at the third semicolon.

A trailing newline must be re-added to the origin string.

Depending on the OpenPGP verification library used, the signature may need to be rewrapped before being passed to the verifier.
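As a small illustration of the splitting step (a sketch; the base64 signature data cannot contain a semicolon, so cutting on ';' is safe):

# Split a received token into origin string and signature
token='1;2006-01-02T15:04:05Z;182592280749063001756043640123749365059;iQEcBAAB...'
origin="$(printf '%s' "$token" | cut -d';' -f1-3);"   # first three fields, ';' re-added
sig="$(printf '%s' "$token" | cut -d';' -f4-)"        # everything after the third ';'
printf '%s\n' "$origin"    # re-add the trailing newline before verification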

The signature of the origin string must then be verified. An example of verification on the command line is shown below:

$ echo '1;2006-01-02T15:04:05Z;182592280749063001756043640123749365059;' | gpg --verify /tmp/tokid.signature -

gpg: Signature made Mon 23 Feb 2015 09:47:30 PM EST using RSA key ID 3B763E8F
gpg: Good signature from "Julien Vehent (personal) <julien@linuxwall.info>"
gpg:                 aka "Julien Vehent (ulfr) <jvehent@mozilla.com>"

If the signature is valid, the verifier should make sure that the user is authorized to communicate with the endpoint by comparing the user’s key fingerprint with an authorization database. IdFix does not specify how user authorizations should be performed.

6.2   Token expiration

If the signature is valid and the user is authorized, the verifier must check the timestamp of the origin string and discard timestamps that are too far in the past or the future. IdFix recommends an acceptance window of 20 minutes: 10 minutes before the verifier’s time, and 10 minutes after it.

IdFix assumes that all participants, signers and verifiers, are somewhat synchronized with the same time source. If a participant’s clock drifts too far out of the acceptance window, it will fail to authenticate or to verify authentication.
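A minimal sketch of such a window check (not part of the IdFix specification itself; assumes GNU date for the -d flag):

# Reject tokens whose timestamp is more than 10 minutes away from local time
ts='2006-01-02T15:04:05Z'        # timestamp parsed out of the token
tok=$(date -u -d "$ts" +%s)      # token time, epoch seconds
now=$(date -u +%s)               # verifier time, epoch seconds
diff=$(( now - tok ))
if [ "${diff#-}" -le 600 ]; then echo "within window"; else echo "rejected"; fi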

6.3   Token replay

A verifier may require that a token only be used once. It may do so by keeping track of nonce values for the duration of the timestamp validity. The verifier may issue a 403 Forbidden error code to the signer when duplicate nonces are used. With nonces of at least 64 bits, the chance of a nonce collision (1 / 2^64) is considered negligible for a validity window of 20 minutes.

Upon reception of a 403 Forbidden code, a signer must generate a new token and retry at least once. The retry on 403 allows verifiers to request new authentication tokens under conditions that the verifier controls.

7   Standard maintenance

This document is maintained and updated by the author. Changes must be submitted to the author for discussion, acceptance, and release in a new version of the IdFix standard.

8   License

This document and associated source code are subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, you can obtain one at http://mozilla.org/MPL/2.0/.

More information can be found on: https://github.com/jvehent/idfix

Ray-Mon – PHP and Bash server status monitoring


Ray-Mon is a Linux server monitoring script written in PHP and Bash, using JSON as data storage. It requires only bash and a webserver on the client side, and only PHP on the server side. The client currently supports monitoring processes, uptime, updates, the number of users logged in, disk usage, RAM usage and network traffic.

Features

  • Ping monitor
  • History per host
  • Threshold per monitored item.
  • Monitors:
    • Processes (lighttpd, apache, nginx, sshd, munin etc.)
    • RAM
    • Disk
    • Uptime
    • Users logged on
    • Updates
    • Network (RX/TX)

Download

Either git clone the github repo:

git clone git://github.com/RaymiiOrg/raymon.git

Or download the zipfile from github:

https://github.com/RaymiiOrg/raymon/zipball/master

Or download the zipfile from Raymii.org

https://raymii.org/s/inc/software/raymon-0.0.2.zip

This is the github page: https://github.com/RaymiiOrg/raymon/

Changelog

v0.0.2
  • Server side now only requires 1 script instead of 2.
  • Client script creates the JSON more robustly; if a value is missing, the JSON file doesn’t break.
  • Changed the visual style to a better layout.
  • Thresholds implemented and configurable.
  • History per host now implemented.
v0.0.1
  • Initial release

Install

Client

The client.sh script is a bash script which outputs JSON. It requires root access and should be run as root. It also requires a webserver, so that the server can fetch the JSON file.

Software needed for the script:

  • bash
  • awk
  • grep
  • ifconfig
  • package managers supported: apt-get, yum and pacman (debian/ubuntu, centos/RHEL/SL, Arch)

Set up a webserver (lighttpd, apache, boa, thttpd, nginx) for the script output. If there is already a webserver running on the server, you don’t need to install another one.

Edit the script:

Network interfaces: the first one is used for the IP address, the second one for bandwidth calculations. This is done because OpenVZ has the “venet0” interface for the bandwidth, and the venet0:0 interface with an IP. If you run bare-metal, KVM, vserver etc. you can set these two to the same value (eth0, eth1 etc.).

# Network interface for the IP address
iface="venet0:0"
# network interface for traffic monitoring (RX/TX bytes)
iface2="venet0"

The IP address of the server; I use this when deploying the script via Chef or Ansible. You can set it, but it is not required.

Services are checked by doing a ps to see if the process is running. The last service must be defined without a trailing comma, for valid JSON. The code below monitors “sshd”, “lighttpd”, “munin-node” and “syslog”.

SERVICE=lighttpd
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=sshd
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=syslog
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
#LAST SERVICE HAS TO BE WITHOUT A TRAILING COMMA, FOR VALID JSON!
SERVICE=munin-node
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\""; else echo -n "\"$SERVICE\" : \"not running\""; fi

To add a service, copy the 2 lines and replace the SERVICE=processname with the actual process name:

SERVICE=processname
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi

And make sure the last monitored service does not echo a comma at the end, otherwise the JSON is not valid and the PHP script fails.
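As an alternative to copying these two lines per service, the checks can also be generated from a list. A hedged sketch (not part of the original client.sh) that emits the same JSON fragments and handles the trailing comma automatically:

check_services() {
  local services=(lighttpd sshd syslog munin-node)
  local i s state
  for i in "${!services[@]}"; do
    s="${services[$i]}"; state="not running"
    ps ax | grep -v grep | grep -q "$s" && state="running"
    printf '"%s" : "%s"' "$s" "$state"
    # comma after every entry except the last, to keep the JSON valid
    [ "$i" -lt $(( ${#services[@]} - 1 )) ] && printf ','
  done
}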

Now set up a cronjob to execute the script at a set interval and save the JSON to the webserver directory.

As root, create the file /etc/cron.d/raymon-client with the following contents:

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * root /root/scripts/client.sh | sed ':a;N;$!ba;s/\n//g' > /var/www/stat.json

In my case, the client script is in /root/scripts, and my webserver directory is /var/www. Change this to match your own setup. Also, you might want to change the time interval; */5 executes every 5 minutes. The sed command is there to remove the newlines, which creates a shorter JSON file and saves some KBs. The “root” after the cron time is special to files in /etc/cron.d/: it tells cron which user should execute the crontab entry.

When this is set up you should get a stat.json file in the /var/www/ folder containing the status JSON. If so, the client is set up correctly.
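Since a single stray comma already breaks the PHP consumer, it can help to validate the generated file; any JSON validator works, for example Python’s json.tool:

python -m json.tool /var/www/stat.json && echo "valid JSON"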

Server

The status server is a PHP script which fetches the JSON files from the clients every 5 minutes, saves them and displays them. It also saves the history; that is described below.

Requirements:

  • Webserver with PHP (min. 5.2) and write access to the folder the script is located in.

Steps:

Create a new folder on the webserver and make sure the webserver user (www-data) can write to it.

Place the php file “stat.php” in that folder.

Edit the host list in the php file to include your clients:

The first parameter is the filename the json file is saved to, and the second is the URL where the json file is located.

$hostlist=array(
                'example1.org.json' => 'http://example1.org/stat.json',
                'example2.nl.json' => 'http://example2.nl/stat.json',
                'special1.network.json' => 'http://special1.network.eu:8080/stat.json',
                'special2.network.json' => 'https://special2.network.eu/stat.json'
                );

Edit the values for the ping monitor:

$pinglist = array(
                  'github.com',
                  'google.nl',
                  'tweakers.net',
                  'jupiterbroadcasting.com',
                  'lowendtalk.com',
                  'lowendbox.com' 
                  );

Edit the threshold values:

## Set this to "secure" the history saving. This key has to be given as a parameter to save the history.
$historykey = "8A29691737D";
#the below values set the threshold before a value gets shown in bold on the page.
# Max updates available
$maxupdates = "10";
# Max users concurrently logged in
$maxusers = "3";
# Max load.
$maxload = "2";
# Max disk usage (in percent)
$maxdisk = "75";
# Max RAM usage (in percent)
$maxram = "75";

History

To save the history you have to set up a cronjob to fetch the status page with a special “history key”. You define this key in the stat.php file:

## Set this to "secure" the history saving. This key has to be given as a parameter to save the history.
$historykey = "8A29691737D";    

And then the cronjob to get it:

## This saves the history every 8 hours.
30 */8 * * * wget -qO /dev/null "http://url-to-status.site/status/stat.php?action=save&key=8A29691737D"

The cronjob can be on any server which can access the status page, but preferably on the host where the status page is located.

Strong SSL Security on Nginx


This tutorial shows you how to set up strong SSL security on the nginx webserver. We do this by disabling SSL compression to mitigate the CRIME attack, disabling SSLv3 and below because of vulnerabilities in the protocol, and setting up a strong ciphersuite that enables Forward Secrecy when possible. We also enable HSTS and HPKP. This way we have a strong and future-proof SSL configuration and we get an A on the Qualys SSL Labs test.

TL;DR: Copy-pastable strong ciphersuites for NGINX, Apache and Lighttpd: https://cipherli.st

This tutorial is tested on a Digital Ocean VPS. If you like this tutorial and want to support my website, use this link to order a Digital Ocean VPS: https://www.digitalocean.com

This tutorial works with the stricter requirements of the SSL Labs test announced on the 21st of January 2014 (it already did before that; if you follow(ed) it you get an A+).

This tutorial is also available for Apache
This tutorial is also available for Lighttpd
This tutorial is also available for FreeBSD, NetBSD and OpenBSD over at the BSD Now podcast: http://www.bsdnow.tv/tutorials/nginx

You can find more info on the topics by following the links below:

We are going to edit the nginx settings in the file /etc/nginx/sites-enabled/yoursite.com (on Ubuntu/Debian) or in /etc/nginx/conf.d/nginx.conf (on RHEL/CentOS).

For the entire tutorial, you need to edit the parts inside the server block of the server config for port 443 (the SSL config). At the end of the tutorial you can find a complete config example.

Make sure you back up the files before editing them!

The BEAST attack and RC4

In short, by tampering with an encryption algorithm’s CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted. More info on the above link.

Recent browser versions have enabled client-side mitigation for the BEAST attack. The recommendation used to be to disable all TLS 1.0 ciphers and only offer RC4. However, RC4 has a growing list of attacks against it (http://www.isg.rhul.ac.uk/tls/), many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”

Disabling RC4 has several ramifications. One, users with shitty browsers such as Internet Explorer on Windows XP will use 3DES instead. Triple-DES is more secure than RC4, but it is significantly more expensive. Your server will pay the cost for these users. Two, RC4 mitigates BEAST. Thus, disabling RC4 makes TLS 1.0 users susceptible to that attack, by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a nonissue. But the risk from RC4 only grows: more cryptanalysis will surface over time.

Factoring RSA-EXPORT Keys (FREAK)

FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”

The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas unless it used export cipher suites which involved encryption keys no longer than 512 bits.

It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.

There are two parts to the attack, as the server must also accept “export grade RSA.”

The MITM attack works as follows:

  • In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
  • The MITM attacker changes this message to ask for ‘export RSA’.
  • The server responds with a 512-bit export RSA key, signed with its long-term key.
  • The client accepts this weak key due to the OpenSSL/SecureTransport bug.
  • The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
  • When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
  • From here on out, the attacker sees plaintext and can inject anything it wants.

The ciphersuite offered here on this page does not enable EXPORT grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to also use upgraded software.
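You can check whether a server still accepts export-grade ciphers with openssl s_client (a quick test, assuming your local OpenSSL build still knows the EXPORT cipher alias; a handshake failure here is what you want to see; example.org is a placeholder):

openssl s_client -connect example.org:443 -cipher EXPORT < /dev/null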

Heartbleed

Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (due to a missing bounds check) in the implementation of the TLS/DTLS heartbeat extension (RFC 6520); the bug’s name derives from “heartbeat”. The vulnerability is classified as a buffer over-read, a situation where more data can be read than should be allowed.

What versions of OpenSSL are affected by Heartbleed?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and had been out in the wild since the OpenSSL 1.0.1 release on the 14th of March 2012. OpenSSL 1.0.1g, released on the 7th of April 2014, fixes the bug.

By updating OpenSSL you are not vulnerable to this bug.

SSL Compression (CRIME attack)

The CRIME attack uses SSL compression to do its magic. SSL compression is turned off by default in nginx 1.1.6+/1.0.9+ (if OpenSSL 1.0.0+ is used) and nginx 1.3.2+/1.2.2+ (if older versions of OpenSSL are used).

If you are using an earlier version of nginx or OpenSSL and your distro has not backported this option, then you need to recompile OpenSSL without ZLIB support. This will disable the DEFLATE compression method in OpenSSL. If you do this, you can still use regular HTML DEFLATE compression.

SSLv2 and SSLv3

SSLv2 is insecure, so we need to disable it. We also disable SSLv3, as TLS 1.0 suffers a downgrade attack, allowing an attacker to force a connection to use SSLv3 and thereby disable forward secrecy.

Again edit the config file:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Poodle and TLS-FALLBACK-SCSV

SSLv3 allows exploiting of the POODLE bug. This is one more major reason to disable it.

Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to the following versions:

  • OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
  • OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
  • OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.

More info on the NGINX documentation

The Cipher Suite

Forward Secrecy ensures the integrity of a session key in the event that a long-term key is compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.

This means that when the private key gets compromised it cannot be used to decrypt recorded SSL traffic.

The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.

Of the following two ciphersuites, the former is recommended by me, and the latter by the Mozilla Foundation.

The recommended cipher suite:

ssl_ciphers 'AES128+EECDH:AES128+EDH';

The recommended cipher suite for backwards compatibility (IE6/WinXP):

ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.
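You can see exactly which ciphers your local OpenSSL expands the suite string to; the output differs per OpenSSL version:

openssl ciphers -v 'AES128+EECDH:AES128+EDH'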

Prioritization logic

  • ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers and not widely supported at the moment. No known attack currently targets these ciphers.
  • PFS ciphersuites are preferred, with ECDHE first, then DHE.
  • AES 128 is preferred to AES 256. There have been discussions on whether the extra security of AES256 is worth the cost, and the result is far from obvious. At the moment, AES128 is preferred, because it provides good security, is really fast, and seems to be more resistant to timing attacks.
  • In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
  • RC4 is removed entirely. 3DES is used for backward compatibility. See the discussion in #RC4_weaknesses

Mandatory discards

  • aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
  • eNULL contains null-encryption ciphers (cleartext)
  • EXPORT are legacy weak ciphers that were marked as exportable by US law
  • RC4 contains ciphers that use the deprecated ARCFOUR algorithm
  • DES contains ciphers that use the deprecated Data Encryption Standard
  • SSLv2 contains all ciphers that were defined in the old version of the SSL standard, now deprecated
  • MD5 contains all the ciphers that use the deprecated message digest 5 as the hashing algorithm

Extra settings

Make sure you also add these lines:

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;

When choosing a cipher during an SSLv3 or TLSv1 handshake, normally the client’s preference is used. If this directive is enabled, the server’s preference will be used instead.

More info on ssl_prefer_server_ciphers
More info on ssl_ciphers

Forward Secrecy & Diffie Hellman Ephemeral Parameters

The concept of forward secrecy is simple: client and server negotiate a key that never hits the wire, and is destroyed at the end of the session. The RSA private key from the server is used to sign a Diffie-Hellman key exchange between the client and the server. The pre-master key obtained from the Diffie-Hellman handshake is then used for encryption. Since the pre-master key is specific to a connection between a client and a server, and used only for a limited amount of time, it is called Ephemeral.

With Forward Secrecy, if an attacker gets a hold of the server’s private key, it will not be able to decrypt past communications. The private key is only used to sign the DH handshake, which does not reveal the pre-master key. Diffie-Hellman ensures that the pre-master keys never leave the client and the server, and cannot be intercepted by a MITM.

All versions of nginx as of 1.4.4 rely on OpenSSL for input parameters to Diffie-Hellman (DH). Unfortunately, this means that Ephemeral Diffie-Hellman (DHE) will use OpenSSL’s defaults, which include a 1024-bit key for the key-exchange. Since we’re using a 2048-bit certificate, DHE clients will use a weaker key-exchange than non-ephemeral DH clients.

We need to generate stronger DHE parameters:

cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096

And then tell nginx to use it for DHE key-exchange:

ssl_dhparam /etc/ssl/certs/dhparam.pem;

OCSP Stapling

When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL), or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take forever to download.

OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a 3rd party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers will fail silently if no response is received in a timely manner. This reduces security, by allowing an attacker to DoS an OCSP responder to disable the validation.

The solution is to allow the server to send its cached OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.

The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its CLIENT HELLO.

Most servers will cache OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.
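Once stapling is enabled, it can be checked from the command line; the -status flag makes s_client request the stapled OCSP response (look for the OCSP response block in the output; example.org is a placeholder):

openssl s_client -connect example.org:443 -status < /dev/null 2> /dev/null | grep -A 3 'OCSP'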

View my tutorial on enabling OCSP stapling on NGINX

HTTP Strict Transport Security

When possible, you should enable HTTP Strict Transport Security (HSTS), which instructs browsers to communicate with your site only over HTTPS.

View my article on HSTS to see how to configure it.

HTTP Public Key Pinning Extension

You should also enable the HTTP Public Key Pinning Extension.

Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.

I’ve written an article about it that has background theory and configuration examples for Apache, Lighttpd and NGINX
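The pin value used by such a header is the base64-encoded SHA256 hash of the certificate's public key (SPKI). One way to compute it from an existing certificate (a sketch; /path/to/cert.pem is a placeholder):

openssl x509 -in /path/to/cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64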

Config Example

server {

  listen [::]:443 default_server;

  ssl on;
  ssl_certificate_key /etc/ssl/cert/raymii_org.pem;
  ssl_certificate /etc/ssl/cert/ca-bundle.pem;

  ssl_ciphers 'AES128+EECDH:AES128+EDH:!aNULL';

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_session_cache shared:SSL:10m;

  ssl_stapling on;
  ssl_stapling_verify on;
  resolver 8.8.4.4 8.8.8.8 valid=300s;
  resolver_timeout 10s;

  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  add_header Strict-Transport-Security max-age=63072000;
  add_header X-Frame-Options DENY;
  add_header X-Content-Type-Options nosniff;

  root /var/www/;
  index index.html index.htm;
  server_name raymii.org;

}

Conclusion

If you have applied the above config lines you need to restart nginx:

# Check the config first:
/etc/init.d/nginx configtest
# Then restart:
/etc/init.d/nginx restart

Now use the SSL Labs test to see if you get a nice A. And, of course, have a safe, strong and future proof SSL configuration!

Strong SSL Security on Apache2


This tutorial shows you how to set up strong SSL security on the Apache2 webserver. We do this by disabling SSL compression to mitigate the CRIME attack, disabling SSLv3 and below because of vulnerabilities in the protocol, and setting up a ciphersuite that enables Forward Secrecy when possible. We also set up HSTS and HPKP. This way we have a strong and future-proof SSL configuration and we get an A on the Qualys SSL Labs test.

TL;DR: Copy-pastable strong ciphersuites for NGINX, Apache and Lighttpd: https://cipherli.st

This tutorial is tested on a Digital Ocean VPS. If you like this tutorial and want to support my website, use this link to order a Digital Ocean VPS: https://www.digitalocean.com/

This tutorial works with the strict requirements of the SSL Labs test

This tutorial is also available for NGINX
This tutorial is also available for Lighttpd
This tutorial is also available for FreeBSD, NetBSD and OpenBSD over at the BSD Now podcast: http://www.bsdnow.tv/tutorials/nginx

You can find more info on the topics by following the links below:

Make sure you back up the files before editing them!

The BEAST attack and RC4

In short, by tampering with an encryption algorithm’s CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted. More info on the above link.

Recent browser versions have enabled client-side mitigation for the BEAST attack. The recommendation used to be to disable all TLS 1.0 ciphers and only offer RC4. However, RC4 has a growing list of attacks against it (http://www.isg.rhul.ac.uk/tls/), many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”

Disabling RC4 has several ramifications. One, users with shitty browsers such as Internet Explorer on Windows XP will use 3DES instead. Triple-DES is more secure than RC4, but it is significantly more expensive. Your server will pay the cost for these users. Two, RC4 mitigates BEAST. Thus, disabling RC4 makes TLS 1.0 users susceptible to that attack, by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a nonissue. But the risk from RC4 only grows: more cryptanalysis will surface over time.

Factoring RSA-EXPORT Keys (FREAK)

FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”

The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas unless it used export cipher suites which involved encryption keys no longer than 512 bits.

It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.

There are two parts to the attack, as the server must also accept “export grade RSA.”

The MITM attack works as follows:

  • In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
  • The MITM attacker changes this message to ask for ‘export RSA’.
  • The server responds with a 512-bit export RSA key, signed with its long-term key.
  • The client accepts this weak key due to the OpenSSL/SecureTransport bug.
  • The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
  • When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
  • From here on out, the attacker sees plaintext and can inject anything it wants.

The ciphersuite offered here on this page does not enable EXPORT grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to also use upgraded software.

Heartbleed

Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (due to a missing bounds check) in the implementation of the TLS/DTLS heartbeat extension (RFC 6520); the bug’s name derives from “heartbeat”. The vulnerability is classified as a buffer over-read, a situation where more data can be read than should be allowed.

What versions of OpenSSL are affected by Heartbleed?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and had been out in the wild since the OpenSSL 1.0.1 release on the 14th of March 2012. OpenSSL 1.0.1g, released on the 7th of April 2014, fixes the bug.

By updating OpenSSL you are not vulnerable to this bug.

SSL Compression (CRIME attack)

The CRIME attack uses SSL Compression to do its magic, so we need to disable that. On Apache 2.2.24+ we can add the following line to the SSL config file we also edited above:

SSLCompression off

If you are using an earlier version of Apache and your distro has not backported this option, then you need to recompile OpenSSL without ZLIB support. This will disable the DEFLATE compression method in OpenSSL. If you do this, you can still use regular HTML DEFLATE compression.

SSLv2 and SSLv3

SSLv2 is insecure, so we need to disable it. We also disable SSLv3, as TLS 1.0 suffers a downgrade attack, allowing an attacker to force a connection to use SSLv3 and thereby disable forward secrecy.

SSLv3 allows exploiting of the POODLE bug. This is one more major reason to disable it!

Again edit the config file:

SSLProtocol All -SSLv2 -SSLv3

All is a shortcut for +SSLv2 +SSLv3 +TLSv1, or, when using OpenSSL 1.0.1 and later, +SSLv2 +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2. The above line enables everything except SSLv2 and SSLv3. More info on the Apache website.

Poodle and TLS-FALLBACK-SCSV

SSLv3 allows exploiting of the POODLE bug. This is one more major reason to disable it.

Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to the following versions:

  • OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
  • OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
  • OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.

The Cipher Suite

(Perfect) Forward Secrecy ensures the integrity of a session key in the event that a long-term key is compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.

This means that when the private key gets compromised it cannot be used to decrypt recorded SSL traffic.

The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.

Of the following two ciphersuites, the former is recommended by me, and the latter by the Mozilla Foundation.

The recommended cipher suite:

SSLCipherSuite AES128+EECDH:AES128+EDH

The recommended cipher suite for backwards compatibility (IE6/WinXP):

SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.

Prioritization logic

  • ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers and not widely supported at the moment. No known attack currently targets these ciphers.
  • PFS ciphersuites are preferred, with ECDHE first, then DHE.
  • AES 128 is preferred to AES 256. There have been discussions on whether the extra security of AES256 is worth the cost, and the result is far from obvious. At the moment, AES128 is preferred, because it provides good security, is really fast, and seems to be more resistant to timing attacks.
  • In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
  • RC4 is removed entirely. 3DES is used for backward compatibility. See the discussion in #RC4_weaknesses

Mandatory discards

  • aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
  • eNULL contains null-encryption ciphers (cleartext)
  • EXPORT are legacy weak ciphers that were marked as exportable by US law
  • RC4 contains ciphers that use the deprecated ARCFOUR algorithm
  • DES contains ciphers that use the deprecated Data Encryption Standard
  • SSLv2 contains all ciphers that were defined in the old version of the SSL standard, now deprecated
  • MD5 contains all the ciphers that use the deprecated message digest 5 as the hashing algorithm

With Apache 2.2.x you have only DHE suites to work with, but they are not enough. Internet Explorer (in all versions) does not support the required DHE suites to achieve Forward Secrecy. (Unless you’re using DSA keys, but no one does; that’s a long story.) Apache does not support configurable DH parameters in any version, but there are patches you could use if you can install from source.

Even if OpenSSL can provide ECDHE, Apache 2.2 in Debian stable does not support this mechanism. You need Apache 2.4 to fully support forward secrecy.

A workaround could be the usage of nginx as a reverse proxy because it fully supports ECDHE.

Make sure you also add this line:

SSLHonorCipherOrder on

When choosing a cipher during an SSLv3 or TLSv1 handshake, normally the client’s preference is used. If this directive is enabled, the server’s preference will be used instead.

Forward Secrecy & Diffie Hellman Ephemeral Parameters

The concept of forward secrecy is simple: client and server negotiate a key that never hits the wire, and is destroyed at the end of the session. The RSA private key from the server is used to sign a Diffie-Hellman key exchange between the client and the server. The pre-master key obtained from the Diffie-Hellman handshake is then used for encryption. Since the pre-master key is specific to a connection between a client and a server, and used only for a limited amount of time, it is called Ephemeral.

With Forward Secrecy, if an attacker gets a hold of the server’s private key, it will not be able to decrypt past communications. The private key is only used to sign the DH handshake, which does not reveal the pre-master key. Diffie-Hellman ensures that the pre-master keys never leave the client and the server, and cannot be intercepted by a MITM.

Apache prior to version 2.4.7 and all versions of Nginx as of 1.4.4 rely on OpenSSL for input parameters to Diffie-Hellman (DH). Unfortunately, this means that Ephemeral Diffie-Hellman (DHE) will use OpenSSL’s defaults, which include a 1024-bit key for the key-exchange. Since we’re using a 2048-bit certificate, DHE clients will use a weaker key-exchange than non-ephemeral DH clients.

For Apache, there is no fix except to upgrade to 2.4.7 or later. With that version, Apache automatically selects a stronger key.
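If you do run Apache 2.4.8 or newer built against OpenSSL 1.0.2 or later, custom DH parameters can also be supplied explicitly; a sketch, reusing the dhparam.pem generated in the nginx article (verify the directive is available for your build):

SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"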

Around May, Debian backported ECDH ciphers to work with Apache 2.2, and it’s possible to get PFS: http://metadata.ftp-master.debian.org/changelogs//main/a/apache2/apache2_2.2.22-13+deb7u3_changelog

> apache2 (2.2.22-13+deb7u2) wheezy; urgency=medium

  * Backport support for SSL ECC keys and ECDH ciphers.

HTTP Strict Transport Security

When possible, you should enable HTTP Strict Transport Security (HSTS), which instructs browsers to communicate with your site only over HTTPS.

View my article on HSTS to see how to configure it.

HTTP Public Key Pinning Extension

You should also enable the HTTP Public Key Pinning Extension.

Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.

I’ve written an article about it that has background theory and configuration examples for Apache, Lighttpd and NGINX

OCSP Stapling

When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL), or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take forever to download.

OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a 3rd party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers will fail silently if no response is received in a timely manner. This reduces security, by allowing an attacker to DoS an OCSP responder to disable the validation.

The solution is to allow the server to send its cached OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.

The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its CLIENT HELLO.

Most servers will cache OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.

View my tutorial on enabling OCSP stapling on Apache

Conclusion

If you have applied the above config lines you need to restart apache:

# Check the config first:
apache2ctl -t
# Then restart:
/etc/init.d/apache2 restart

# If you are on RHEL/CentOS:
apachectl -t
/etc/init.d/httpd restart

Now use the SSL Labs test to see if you get a nice A. And, of course, have a safe, strong and future proof SSL configuration!

Strong SSL Security on lighttpd


This tutorial shows you how to set up strong SSL security on the Lighttpd webserver. We do this by disabling SSL compression to mitigate the CRIME attack, disabling SSLv3 and below because of vulnerabilities in the protocol, and setting up a strong ciphersuite that enables Forward Secrecy when possible. We also set up HSTS and HPKP. This way we have a strong and future proof SSL configuration and we get an A on the Qualys SSL Labs test.

TL;DR: Copy-pastable strong ciphersuites for NGINX, Apache and Lighttpd: https://cipherli.st

This tutorial is tested on a Digital Ocean VPS. If you like this tutorial and want to support my website, use this link to order a Digital Ocean VPS: https://www.digitalocean.com

This tutorial works with the stricter requirements of the SSL Labs test announced on the 21st of January 2014 (it already did before that; if you follow(ed) it you get an A+).

This tutorial is also available for Apache2. This tutorial is also available for NGINX.
This tutorial is also available for FreeBSD, NetBSD and OpenBSD over at the BSD Now podcast: http://www.bsdnow.tv/tutorials/nginx


Make sure you backup the files before editing them!

I’m using lighttpd 1.4.31 from the Debian Wheezy repositories on this website. The CentOS 5/6 EPEL versions didn’t work for me because either lighttpd or OpenSSL was too old. Debian Squeeze also failed.

Mitigate the BEAST attack

In short, by tampering with an encryption algorithm’s CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted.

Recent browser versions have enabled client-side mitigation for the BEAST attack. The recommendation used to be to disable all TLS 1.0 ciphers and only offer RC4. However, RC4 has a growing list of attacks against it (http://www.isg.rhul.ac.uk/tls/), many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”

Disabling RC4 has several ramifications. One, users with shitty browsers such as Internet Explorer on Windows XP will use 3DES instead. Triple-DES is more secure than RC4, but it is significantly more expensive. Your server will pay the cost for these users. Two, RC4 mitigates BEAST. Thus, disabling RC4 makes TLS 1.0 users susceptible to that attack, by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a nonissue. But the risk from RC4 only grows: more cryptanalysis will surface over time.

Factoring RSA-EXPORT Keys (FREAK)

FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”

The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas unless it used export cipher suites which involved encryption keys no longer than 512 bits.

It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.

There are two parts of the attack as the server must also accept “export grade RSA.”

The MITM attack works as follows:

  • In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
  • The MITM attacker changes this message to ask for ‘export RSA’.
  • The server responds with a 512-bit export RSA key, signed with its long-term key.
  • The client accepts this weak key due to the OpenSSL/SecureTransport bug.
  • The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
  • When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
  • From here on out, the attacker sees plaintext and can inject anything it wants.

The ciphersuite offered here on this page does not enable EXPORT grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to also use upgraded software.
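You can probe a server for this with the OpenSSL command-line client (the hostname is a placeholder, and this assumes your local OpenSSL build still includes the EXPORT ciphers). If the handshake succeeds, the server accepts export-grade RSA and is vulnerable; a handshake failure is what you want to see:

openssl s_client -connect example.org:443 -cipher EXPORT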

Heartbleed

Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (a missing bounds check) in the implementation of the TLS/DTLS heartbeat extension (RFC 6520); hence the bug’s name derives from “heartbeat”. The vulnerability is classified as a buffer over-read, a situation where more data can be read than should be allowed.

What versions of the OpenSSL are affected by Heartbleed?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. OpenSSL 1.0.1g released on 7th of April 2014 fixes the bug.

By updating OpenSSL to 1.0.1g or later you are no longer vulnerable to this bug.
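A quick way to check which version your system is running (output varies per distribution):

openssl version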

SSL Compression

The CRIME attack uses SSL Compression to do its magic, so we need to disable that. The following option disables SSL compression:

ssl.use-compression = "disable"

By default lighttpd disables SSL compression at compile time. If you find it to be enabled, either use the above option, or recompile OpenSSL without ZLIB support. This prevents OpenSSL from using the DEFLATE compression method at the TLS layer; regular HTTP DEFLATE compression of responses remains available.
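To verify that compression is really off, you can inspect the handshake with the OpenSSL client (the hostname is a placeholder); the session output should report “Compression: NONE”:

echo | openssl s_client -connect example.org:443 2>/dev/null | grep -i compression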

SSLv2 and SSLv3

SSL v2 is insecure, so we need to disable it. We also disable SSLv3, as TLS 1.0 suffers a downgrade attack, allowing an attacker to force a connection to use SSLv3 and therefore disable forward secrecy.

Again edit the config file:

ssl.use-sslv2 = "disable"
ssl.use-sslv3 = "disable"

Poodle and TLS-FALLBACK-SCSV

SSLv3 is vulnerable to the POODLE attack. This is one more major reason to disable it.

Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to the following versions:

  • OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
  • OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
  • OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.

The Cipher Suite

Forward Secrecy ensures that a session key remains secure even if the long-term key is later compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.

This means that when the private key gets compromised it cannot be used to decrypt recorded SSL traffic.

The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.

Of the following two ciphersuites, the first is my recommendation; the second, backwards-compatible one is recommended by the Mozilla Foundation.

The recommended cipher suite:

ssl.cipher-list = "AES128+EECDH:AES128+EDH"

The recommended cipher suite for backwards compatibility (IE6/WinXP):

ssl.cipher-list = "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.
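You can see what your local OpenSSL expands such a string to with the ciphers command; for example, for the stricter list above:

openssl ciphers -v 'AES128+EECDH:AES128+EDH'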

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.

Prioritization logic

  • ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers and not widely supported at the moment. No known attack currently targets these ciphers.
  • PFS ciphersuites are preferred, with ECDHE first, then DHE.
  • AES 128 is preferred to AES 256. There has been discussion about whether the extra security of AES 256 is worth the cost, and the result is far from obvious. At the moment, AES 128 is preferred, because it provides good security, is really fast, and seems to be more resistant to timing attacks.
  • In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
  • RC4 is removed entirely. 3DES is used for backward compatibility. See discussion in #RC4_weaknesses

Mandatory discards

  • aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
  • eNULL contains null-encryption ciphers (cleartext)
  • EXPORT are legacy weak ciphers that were marked as exportable by US law
  • RC4 contains ciphers that use the deprecated ARCFOUR algorithm
  • DES contains ciphers that use the deprecated Data Encryption Standard
  • SSLv2 contains all ciphers that were defined in the old version of the SSL standard, now deprecated
  • MD5 contains all the ciphers that use the deprecated message digest 5 as the hashing algorithm

HTTP Strict Transport Security

When possible, you should enable HTTP Strict Transport Security (HSTS), which instructs browsers to communicate with your site only over HTTPS.

View my article on HSTS to see how to configure it.
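As a minimal sketch for lighttpd (using mod_setenv, just like the HPKP example further down; the one-year max-age is only a common choice):

server.modules += ( "mod_setenv" )
$HTTP["scheme"] == "https" {
    setenv.add-response-header = ( "Strict-Transport-Security" => "max-age=31536000; includeSubDomains" )
}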

HTTP Public Key Pinning Extension

You should also enable the HTTP Public Key Pinning Extension.

Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.

I’ve written an article about it that has background theory and configuration examples for Apache, Lighttpd and NGINX: https://raymii.org/s/articles/HTTP_Public_Key_Pinning_Extension_HPKP.html

Config Example

var.confdir = "/etc/ssl/certs"
$SERVER["socket"] == ":443" {
  ssl.engine = "enable"
  ssl.pemfile = var.confdir + "/example.org.pem"
  ssl.ca-file = var.confdir + "/example.org.bundle.crt"
  server.name = "example.org"
  server.document-root = "/srv/html"
  ssl.use-sslv2 = "disable"
  ssl.use-sslv3 = "disable"
  ssl.use-compression = "disable"
  ssl.honor-cipher-order = "enable"
  ssl.cipher-list = "AES128+EECDH:AES128+EDH:!aNULL:!eNULL"
}

Conclusion

If you have applied the above config lines you need to restart lighttpd:

/etc/init.d/lighttpd restart

Now use the SSL Labs test to see if you get a nice A. And of course have a safe and future proof SSL configuration!

HTTP Public Key Pinning Extension HPKP for Apache, NGINX and Lighttpd


Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store. This article has background theory and configuration examples for Apache, Lighttpd and NGINX.

HTTP Public Key Pinning Extension

An example might be your bank, which always has its certificate issued by CA Company A. With the current certificate system, CA Company B, CA Company C and the NSA CA can all create a certificate for your bank, which your browser will happily accept because those companies are also trusted root CAs.

If the bank implements HPKP and pins its first intermediate certificate (from CA Company A), browsers will not accept certificates from CA Company B and CA Company C, even if they have a valid trust path. HPKP also allows your browser to report the failure back to the bank, so that the bank knows it is under attack.

Public Key Pinning Extension for HTTP (HPKP) is a standard for public key pinning for HTTP user agents that’s been in development since 2011. It was started by Google, which, even though it had implemented pinning in Chrome, understood that manually maintaining a list of pinned sites can’t scale.

Here is a quick feature overview of HPKP:

  • HPKP is set at the HTTP level, using the Public-Key-Pins response header.
  • The policy retention period is set with the max-age parameter, it specifies duration in seconds.
  • The PKP header can only be used over an error-free secure connection.
  • If multiple headers are seen, only the first one is processed.
  • Pinning can be extended to subdomains with the includeSubDomains parameter.
  • When a new PKP header is received, it overwrites previously stored pins and metadata.
  • A pin consists of the hashing algorithm and a “Subject Public Key Info” fingerprint.

This article first covers some theory about the workings of HPKP; further down you’ll find the part that shows you how to get the required fingerprints, together with the web server configuration.

SPKI Fingerprint – Theory

As explained by Adam Langley in his post, we hash a public key, not a certificate:

In general, hashing certificates is the obvious solution, but the wrong one. The problem is that CA certificates are often reissued: there are multiple certificates with the same public key, subject name etc but different extensions or expiry dates. Browsers build certificates chains from a pool of certificates, bottom up, and an alternative version of a certificate might be substituted for the one that you expect.

For example, StartSSL has two root certificates: one signed with SHA1 and the other with SHA256. If you wished to pin to StartSSL as your CA, which certificate hash would you use? You would have to use both, but how would you know about the other root if I hadn’t just told you?

Conversely, public key hashes must be correct:

Browsers assume that the leaf certificate is fixed: it’s always the starting point of the chain. The leaf certificate contains a signature which must be a valid signature, from its parent, for that certificate. That implies that the public key of the parent is fixed by the leaf certificate. So, inductively, the chain of public keys is fixed, modulo truncation.

The only sharp edge is that you mustn’t pin to a cross-certifying root. For example, GoDaddy’s root is signed by Valicert so that older clients, which don’t recognise GoDaddy as a root, still trust those certificates. However, you wouldn’t want to pin to Valicert because newer clients will stop their chain at GoDaddy.

Also, we’re hashing the SubjectPublicKeyInfo not the public key bit string. The SPKI includes the type of the public key and some parameters along with the public key itself. This is important because just hashing the public key leaves one open to misinterpretation attacks. Consider a Diffie-Hellman public key: if one only hashes the public key, not the full SPKI, then an attacker can use the same public key but make the client interpret it in a different group. Likewise one could force an RSA key to be interpreted as a DSA key etc.

Where to Pin

Where should you pin? Pinning your own public key is not the best idea. The key might change or get compromised. You might have multiple certificates in use. The key might change because you rotate your certificates every so often. It might get compromised because the web server was hacked.

The easiest, but not the most secure, place to pin is the first intermediate CA certificate. The signature of that certificate is on your website’s certificate, so the issuing CA’s public key must always be in the chain.

This way you can renew your end certificate from the same CA and have no pinning issues. If the CA starts issuing from a different root, then you have a problem; there is no clear solution for this yet. There is one thing you can do to mitigate it:

  • Always have a backup pin and a spare certificate from a different CA.

The RFC states that you need to provide at least two pins. One of the pins must be present in the chain used in the connection over which the pins were received, the other pin must not be present.

This other pin is your backup public key. It can also be the SPKI fingerprint of a different CA where you have a certificate issued.

An alternative and more secure take on this issue is to create at least three separate public keys beforehand (using OpenSSL; see this page for a JavaScript OpenSSL command generator) and to keep two of those keys as a backup in a safe place, offline and off-site.

You create the SPKI hashes for the three keys and pin those. You only use the first key for the active certificate. When needed, you can then switch to one of the alternative keys. You do, however, need to have that key signed by a CA to create a certificate pair, and that process can take a few days depending on the CA.

This is not a problem for HPKP, because we take the SPKI hash of the public key, not of the certificate. Expiration or a different chain of CA signers does not matter in this case.

If you have the means and procedures to create and securely store at least three separate keys as described above and pin those, it also protects you against your CA provider getting compromised and giving out a fake certificate for your specific website.

SPKI Fingerprint

To get the SPKI fingerprint from a certificate we can use the following OpenSSL command, as shown in the RFC draft:

openssl x509 -noout -in certificate.pem -pubkey | \
openssl asn1parse -noout -inform pem -out public.key;
openssl dgst -sha256 -binary public.key | openssl enc -base64

Result:

klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=

The input certificate.pem file is the first certificate in the chain for this website. (At the time of writing, COMODO RSA Domain Validation Secure Server CA, Serial 2B:2E:6E:EA:D9:75:36:6C:14:8A:6E:DB:A3:7C:8C:07.)

You also need to do this with your backup public key, ending up with two fingerprints.
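If a backup key only exists as a private key or a CSR rather than a certificate, you can derive the same SPKI fingerprint directly; the filenames are placeholders and the first command assumes an RSA key:

# From a private key:
openssl rsa -in backup.key -pubout -outform der | openssl dgst -sha256 -binary | openssl enc -base64

# From a CSR:
openssl req -in backup.csr -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64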

Bugs

At the time of writing this article (January 2015), the only browser supporting HPKP (Chrome) has a serious issue: Chrome doesn’t treat the max-age and includeSubdomains directives from HSTS and HPKP headers as mutually exclusive. This means that if you have HSTS and HPKP with different policies for max-age or includeSubdomains, they will be interchanged. See this bug for more info: https://code.google.com/p/chromium/issues/detail?id=444511. Thanks to Scott Helme from https://scotthelme.co.uk for finding it and notifying me and the Chromium project.

Webserver configuration

Below you’ll find configuration instructions for the three most popular web servers. Since this is just an HTTP header, almost all web servers will allow you to set it. It needs to be set for the HTTPS website.

The examples below pin the COMODO RSA Domain Validation Secure Server CA, with the Comodo PositiveSSL CA 2 as a backup, with a 30-day expiry time, including all subdomains.

Apache

Edit your apache configuration file (/etc/apache2/sites-enabled/website.conf or /etc/apache2/httpd.conf for example) and add the following to your VirtualHost:

# Load the headers module if it is not already loaded:
LoadModule headers_module modules/mod_headers.so

Header set Public-Key-Pins "pin-sha256=\"klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=\"; pin-sha256=\"633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q=\"; max-age=2592000; includeSubDomains"

Lighttpd

The lighttpd variant is just as simple. Add it to your Lighttpd configuration file (/etc/lighttpd/lighttpd.conf for example):

server.modules += ( "mod_setenv" )
$HTTP["scheme"] == "https" {
    setenv.add-response-header  = ( "Public-Key-Pins" => "pin-sha256=\"klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=\"; pin-sha256=\"633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q=\"; max-age=2592000; includeSubDomains")
}

NGINX

NGINX is even shorter with its config. Add this in the server block for your HTTPS configuration:

add_header Public-Key-Pins 'pin-sha256="klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY="; pin-sha256="633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q="; max-age=2592000; includeSubDomains';

Reporting

HPKP reporting allows the user-agent to report any failures back to you.

If you add an additional report-uri="http://example.org/hpkp-report" parameter to the header and set up a listener there, clients will send reports when they encounter a failure. A report is sent as a POST request to the report-uri with a JSON body like this:

{
    "date-time": "2014-12-26T11:52:10Z",
    "hostname": "www.example.org",
    "port": 443,
    "effective-expiration-date": "2014-12-31T12:59:59",
    "include-subdomains": true,
    "served-certificate-chain": [
        "-----BEGINCERTIFICATE-----\nMIIAuyg[...]tqU0CkVDNx\n-----ENDCERTIFICATE-----"
    ],
    "validated-certificate-chain": [
        "-----BEGINCERTIFICATE-----\nEBDCCygAwIBA[...]PX4WecNx\n-----ENDCERTIFICATE-----"
    ],
    "known-pins": [
        "pin-sha256=\"dUezRu9zOECb901Md727xWltNsj0e6qzGk\"",
        "pin-sha256=\"E9CqVKB9+xZ9INDbd+2eRQozqbQ2yXLYc\""
    ]
}

No Enforcement, Report Only

HPKP can be set up without enforcement, in reporting mode by using the Public-Key-Pins-Report-Only response header.

This approach allows you to set up pinning without the risk of a misconfigured HPKP policy making your site unreachable. You can later move to enforcement by changing the header back to Public-Key-Pins.
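On Apache, for example, the report-only variant would look like this (reusing the pins from above; the report endpoint is hypothetical):

Header set Public-Key-Pins-Report-Only "pin-sha256=\"klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=\"; pin-sha256=\"633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q=\"; max-age=2592000; includeSubDomains; report-uri=\"https://example.org/hpkp-report\""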

Set up a federated XMPP Chat Network with ejabberd, and how to Configure and Setup SSL Certificate for Ejabberd


This tutorial shows you how to set up your own federated chat network using ejabberd. It covers a basic single node ejabberd server and also the setup of an ejabberd cluster, including errors and DNS SRV record examples. Last but not least federation is also covered. You can use (almost) any VPS.

Why set up your own XMPP server

There are a few reasons to set up your own XMPP server.

You might use Google Talk, or Hangouts as it is now named. Google’s service recently changed, and it is going to drop XMPP compatibility. With your own server you can keep chatting with your non-Gmail contacts, and still use an open protocol which is widely supported, without being locked in to Google-specific software and hardware.

Or you might want to have more control over the logging of your data. Turn off ejabberd logging and use Off The Record messaging, which gives you full privacy (and perfect forward secrecy).

You might want to use awesome multi-account chat applications like Pidgin, Psi+, Empathy, Adium, iChat/Messages or Miranda IM. On Android you can use Xabber, Beem or OneTeam. Did you know that big players like Facebook, WhatsApp and Google use or used XMPP as their primary chat protocol?

Or you might be a sysadmin in need of an internal chat solution. I’ve got an ejabberd cluster running for a client, consisting of 4 Debian 7 VMs (2GB RAM each) spread over 3 sites and 1 datacenter, serving 12000 total users, with most of the time 6000 online concurrently.

XMPP is an awesome and extensible protocol; you can find more about it here: https://en.wikipedia.org/wiki/XMPP

Information

This setup is tested on Debian 7, Ubuntu 12.04 and 10.04 and OS X 10.8 Server, all running ejabberd installed via the package manager, either apt or ports. It also works on Windows Server 2012 with the ejabberd compiled from the erlang source but that is not covered in this tutorial.

This tutorial uses the example.org domain as the chat domain, and the server chat.example.org as the XMPP server domain. For the clustering part the servers srv1.example.org and srv2.example.org are used. Replace these values to match your setup.

Single node / master node ejabberd installation

If you want to set up a single node installation of ejabberd, i.e. no clustering, then follow only this part and the DNS part of the tutorial. If you want to set up a cluster, also follow this part and then continue with the next part.

Installing Ejabberd

This is simple, use your package manager to install ejabberd:

apt-get install ejabberd

This also installs a few dependencies for the Erlang runtime.

Configuring ejabberd

We are going to configure the ejabberd service. First stop it:

/etc/init.d/ejabberd stop

Now use your favorite text editor to edit the config files. The ejabberd config is Erlang config, so comments start with %% instead of #. Also, every config option ends with a dot (.).

vim /etc/ejabberd/ejabberd.cfg

First we are going to add our chat domain name:

{hosts, ["example.org"]}.

If you want more domains then you add them as shown below:

{hosts, ["sparklingclouds.nl", "raymii.org", "sparklingnetwork.nl"]}.

This domain name is not the name of the servers you are adding.

Next we define an admin user:

{acl, admin, {user, "remy", "example.org"}}.

remy corresponds with the part before the @ in the XMPP ID, and example.org with the part after. If you need more admin users, add another ACL line.

Now if you want people to be able to register via their XMPP client enable in band registration:

{access, register, [{allow, all}]}.

If you are using MySQL or LDAP authentication then you wouldn’t enable this.

I like to have a shared roster with roster groups, and some clients of mine use a shared roster with everybody in it so that nobody has to add contacts but everyone sees all online users. Enable mod_shared_roster:

%% Do this in the modules block
  {mod_shared_roster,[]},

If you are pleased with the config file, save it and restart ejabberd:

/etc/init.d/ejabberd restart

We now need to register a user to test our setup. If you’ve enabled in-band registration you can use your XMPP client, and if you did not enable in-band registration you can use the ejabberdctl command:

ejabberdctl register remy example.org 'passw0rd'

Now test it using an XMPP client like Pidgin, Psi+ or Empathy. If you can connect, then you can continue with the tutorial. If you cannot connect, check your ejabberd logs, firewall setting and such to troubleshoot it.

Clustering ejabberd

Note that you have to have a correctly working master node to continue with the ejabberd clustering. If your master node is not working then fix that first.

Important: the modules you use should be the same on every cluster node. Whether you use LDAP/MySQL authentication, a shared roster, special MUC settings or offline messaging does not matter for the clustering, as long as the configuration is the same on all nodes.

So let’s get started. We are first going to configure the master node, and then the slave nodes.

Prepare the master node

Stop the ejabberd server on the master and edit the /etc/default/ejabberd file:

vim /etc/default/ejabberd

Uncomment the hostname option and change it to an FQDN hostname:

ERLANG_NODE=ejabberd@srv1.example.org

And add the external (public) IP address as a tuple (commas instead of dots):

INET_DIST_INTERFACE={20,30,10,5}

If you use ejabberd internally then use the primary NIC address.

We are going to remove all the mnesia tables; they will be rebuilt when ejabberd restarts. This is way easier than changing the mnesia data itself. Don’t do this on an already configured node without backing up the erlang cookie.

First backup the erlang cookie:

cp /var/lib/ejabberd/.erlang.cookie ~/

Then remove the mnesia database:

rm /var/lib/ejabberd/*

And restore the erlang cookie:

cp ~/.erlang.cookie /var/lib/ejabberd/.erlang.cookie

To make sure all erlang processes are stopped, kill all processes owned by the ejabberd user. This is usually not needed, but the epmd supervisor process might still be running:

killall -u ejabberd

And start ejabberd again:

/etc/init.d/ejabberd start 

If you can still connect and chat, then continue with the next part, configuring the slave nodes.

Prepare the slave nodes

A slave node should first be configured and working as described in the first part of this tutorial. You can copy the config files from the master node.

Stop the ejabberd server:

/etc/init.d/ejabberd stop

Edit the /etc/default/ejabberd file:

vim /etc/default/ejabberd

Uncomment the hostname option and change it to an FQDN hostname:

ERLANG_NODE=ejabberd@srv2.example.org

And add the external (public) IP address as a tuple (commas instead of dots):

INET_DIST_INTERFACE={30,40,20,6}

If you use ejabberd internally then use the primary NIC address.

Now remove all the mnesia tables:

rm /var/lib/ejabberd/*

Copy the cookie from the ejabberd master node, either by cat and vim or via scp:

# On the master node
cat /var/lib/ejabberd/.erlang.cookie
HFHHGYYEHF362GG1GF

# On the slave node
echo "HFHHGYYEHF362GG1GF" > /var/lib/ejabberd/.erlang.cookie
chown ejabberd:ejabberd /var/lib/ejabberd/.erlang.cookie

We are now going to add and compile an erlang module, the easy_cluster module. This is a very small module which adds erlang shell commands to make joining the cluster easier. You could also execute the statements from these functions yourself in an erlang debug shell, but I find this easier and less error-prone:

vim /usr/lib/ejabberd/ebin/easy_cluster.erl

Add the following contents:

%% easy_cluster: small helper module for joining an ejabberd/mnesia cluster.
-module(easy_cluster).

-export([test_node/1,join/1]).

%% Check that the master node is reachable via erlang distribution.
test_node(MasterNode) ->
    case net_adm:ping(MasterNode) of 'pong' ->
        io:format("server is reachable.~n");
    _ ->
        io:format("server could NOT be reached.~n")
    end.

%% Join this node to the cluster: stop ejabberd and mnesia, drop the local
%% schema, copy the schema from the master node, then start ejabberd again.
join(MasterNode) ->
    application:stop(ejabberd),
    mnesia:stop(),
    mnesia:delete_schema([node()]),
    mnesia:start(),
    mnesia:change_config(extra_db_nodes, [MasterNode]),
    mnesia:change_table_copy_type(schema, node(), disc_copies),
    application:start(ejabberd).

Save it and compile it into a working erlang module:

cd /usr/lib/ejabberd/ebin/
erlc easy_cluster.erl

Now check if it succeeded:

ls | grep easy_cluster.beam

If you see the file it worked. You can find more info on the module here: https://github.com/chadillac/ejabberd-easy_cluster/

We are now going to join the cluster node to the master node. Make sure the master is working and running. Also make sure the erlang cookies are synchronized.

On the slave node, start an ejabberd live shell:

/etc/init.d/ejabberd live

This will start an erlang shell and print some output. If the output pauses, press ENTER to get a prompt. Enter the following command to test if the master node can be reached:

easy_cluster:test_node('ejabberd@srv1.example.org').

You should get the following response: server is reachable. If so, continue.

Enter the following command to actually join the node:

easy_cluster:join('ejabberd@srv1.example.org').

Here’s example output from a successful test and join:

/etc/init.d/ejabberd live
*******************************************************
* To quit, press Ctrl-g then enter q and press Return *
*******************************************************

Erlang R15B01 (erts-5.9.1)  [async-threads:0] [kernel-poll:false]

Eshell V5.9.1  (abort with ^G)

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.39.0>:cyrsasl_digest:44) : FQDN used to check DIGEST-MD5 SASL authentication: "srv2.example.org"

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.576.0>:ejabberd_listener:166) : Reusing listening port for 5222

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.577.0>:ejabberd_listener:166) : Reusing listening port for 5269

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.578.0>:ejabberd_listener:166) : Reusing listening port for 5280

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.39.0>:ejabberd_app:72) : ejabberd 2.1.10 is started in the node 'ejabberd@srv2.example.org'
easy_cluster:test_node('ejabberd@srv1.example.org').
server is reachable.
ok
(ejabberd@srv2.example.org)2> easy_cluster:join('ejabberd@srv1.example.org').

=INFO REPORT==== 10-Jun-2013::20:38:51 ===
I(<0.39.0>:ejabberd_app:89) : ejabberd 2.1.10 is stopped in the node 'ejabberd@srv2.example.org'

=INFO REPORT==== 10-Jun-2013::20:38:51 ===
    application: ejabberd
    exited: stopped
    type: temporary

=INFO REPORT==== 10-Jun-2013::20:38:51 ===
    application: mnesia
    exited: stopped
    type: permanent

=INFO REPORT==== 10-Jun-2013::20:38:52 ===
I(<0.628.0>:cyrsasl_digest:44) : FQDN used to check DIGEST-MD5 SASL authentication: "srv2.example.org"

=INFO REPORT==== 10-Jun-2013::20:38:53 ===
I(<0.1026.0>:ejabberd_listener:166) : Reusing listening port for 5222

=INFO REPORT==== 10-Jun-2013::20:38:53 ===
I(<0.1027.0>:ejabberd_listener:166) : Reusing listening port for 5269

=INFO REPORT==== 10-Jun-2013::20:38:53 ===
I(<0.1028.0>:ejabberd_listener:166) : Reusing listening port for 5280
ok
(ejabberd@srv2.example.org)3>
=INFO REPORT==== 10-Jun-2013::20:38:53 ===
I(<0.628.0>:ejabberd_app:72) : ejabberd 2.1.10 is started in the node 'ejabberd@srv2.example.org'

Exit your erlang shell by pressing CTRL+C twice. Now stop ejabberd and start it again:

/etc/init.d/ejabberd restart

You can now check in the admin web interface whether the cluster join succeeded:

http://srv1.example.org:5280/admin/nodes/

Ejabberd nodes

If it shows the other node, you are finished. If not, verify the steps and check the section below on troubleshooting.

Repeat the above steps for every node you want to add. You can add as many nodes as you want.

Errors when clustering

When setting up your cluster you might run into errors. Below are my notes for the errors I found.

  • ejabberd restart does not restart epmd (erlang daemon)
    • overkill solution: killall -u ejabberd
  • ejabberd gives hostname errors
    • make sure the hostname is set correctly (hostname srv1.example.com)
  • ejabberd gives inconsistent database errors
    • backup the erlang cookie (/var/lib/ejabberd/.erlang.cookie) and then remove the contents of the /var/lib/ejabberd folder so that mnesia rebuilds its tables.
  • ejabberd reports “Connection attempt from disallowed node”
    • make sure the erlang cookie is correct (/var/lib/ejabberd/.erlang.cookie). Set vim in insert mode before pasting…

DNS SRV Records and Federation

The DNS SRV record is used both by chat clients to find the right server address and by other XMPP servers for federation. Example: Alice configures her XMPP client with the address alice@example.org. Her chat client looks up the SRV record and knows the chat server to connect to is chat.example.org. Bob sets up his client with the address bob@bobsbussiness.com and adds Alice as a contact. The XMPP server at bobsbussiness.com looks up the SRV record and knows that it should initiate a server2server connection to chat.example.org to federate and let Bob connect with Alice.

The BIND 9 config looks like this:

; XMPP
_xmpp-client._tcp                       IN SRV 5 0 5222 chat.example.org.
_xmpp-server._tcp                       IN SRV 5 0 5269 chat.example.org.
_jabber._tcp                            IN SRV 5 0 5269 chat.example.org.

These are your basic SRV records: the client port, the server2server port, and legacy Jabber. If you have hosted DNS, either enter them in your control panel or consult your service provider.

You can use the following dig query to verify your SRV records:

dig _xmpp-client._tcp.example.org SRV
dig _xmpp-server._tcp.example.org SRV

Or if you are on Windows and have to use nslookup:

nslookup -querytype=SRV _xmpp-client._tcp.example.org
nslookup -querytype=SRV _xmpp-server._tcp.example.org

If you get a result like this then you are set up correctly:

;; QUESTION SECTION:
;_xmpp-client._tcp.raymii.org.  IN      SRV

;; ANSWER SECTION:
_xmpp-client._tcp.raymii.org. 3600 IN   SRV     5 0 5222 chat.raymii.org.

The actual record for chat.raymii.org in my case consists of multiple A records:

;; ADDITIONAL SECTION:
chat.raymii.org.        3600    IN      A       84.200.77.167
chat.raymii.org.        3600    IN      A       205.185.117.74
chat.raymii.org.        3600    IN      A       205.185.124.11

But if you run a single node this can also be a CNAME or just one A/AAAA record.

Final testing

To test if it all worked you can add the Duck Duck Go XMPP bot. If this works flawlessly and you can add it and chat to it, then you have done everything correctly. The email address to add is im@ddg.gg.

Ejabberd SSL Certificate

This tutorial shows you how to set up an SSL certificate for use with Ejabberd. It covers the creation of the Certificate Signing Request, the preparation of the certificate for use with Ejabberd, and the installation of the certificate.

This tutorial assumes a working ejabberd installation. It is tested on Debian and Ubuntu, but should work on any ejabberd installation.

Steps and Explanation

To get an SSL certificate working on ejabberd we need to do a few things:

  • Create a Certificate Signing Request (CSR) and a Private Key
  • Submit the CSR to a Certificate Authority, let them sign it and give you a Certificate
  • Combine the certificate, private key (and chain) into an ejabberd-compatible PEM file
  • Install the certificate in ejabberd

With a certificate we can secure our XMPP connections and conversations, making it much harder for others to spy on you. Combined with OTR this enables a super secure channel for conversation.

Creating the Certificate Signing Request

Create a folder to store all the files and cd to that:

mkdir -p ~/Certificates/xmpp
cd ~/Certificates/xmpp

Now use OpenSSL to create both a Private Key and a CSR. The first command will do it interactively; the second will do it non-interactively. Make sure to set the correct values; your Common Name (CN) should be your XMPP server hostname:

Interactive:

openssl req -nodes -newkey rsa:2048 -keyout private.key -out CSR.csr

Non-interactive:

openssl req -nodes -newkey rsa:2048 -keyout private.key -out CSR.csr -subj "/C=NL/ST=State/L=City/O=Company Name/OU=Department/CN=chat.example.org"

This will result in two files, CSR.csr and private.key. You now have to submit the CSR to a Certificate Authority. This can be any CA; I myself have good experiences with Xolphin, but there are others like Digicert and Verisign.
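Before submitting, you can double-check the contents of the CSR with standard OpenSSL (only the filename is an assumption):

openssl req -in CSR.csr -noout -text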

Once you have submitted your CSR and have gotten a Certificate you can continue.

Creating the ejabberd certificate

Once you have all the files (private key, certificate and certificate chain), put them all in a folder and continue. We are going to cat all the required files into an ejabberd.pem file.

This needs to happen in a specific order:

  • private key
  • certificate
  • chains

So adapt the following commands to your filenames and create the pem file:

cat private.key >> ejabberd.pem
cat certificate.pem >> ejabberd.pem
cat chain-1.pem >> ejabberd.pem
cat chain-2.pem >> ejabberd.pem

If that all works out continue.

Installing the certificate in ejabberd

Copy the certificate to all your ejabberd servers:

scp ejabberd.pem user@srv1.example.org:

Then place the certificate in the /etc/ejabberd folder:

cp ejabberd.pem /etc/ejabberd/ejabberd.pem

Now change the ejabberd config to point to the new certificate:

vim /etc/ejabberd/ejabberd.cfg

Check/change the following to point to the new certificate:

[…]
{listen, [
  {5222, ejabberd_c2s, [
    {access, c2s},
    {shaper, c2s_shaper},
    {max_stanza_size, 65536},
    starttls, {certfile, "/etc/ejabberd/ejabberd.pem"}
  ]},
[…]

{s2s_use_starttls, true}.
{s2s_certfile, "/etc/ejabberd/ejabberd.pem"}.
[…]

Afterwards restart ejabberd:

/etc/init.d/ejabberd restart

You can now use any XMPP client to connect with SSL/TLS to see if it works.

Self Hosted CryptoCat – Secure self hosted multiuser webchat and SSL Certificate Setup


This is a guide on setting up a self hosted secure multiuser webchat service with CryptoCat. It covers the set up of ejabberd, nginx and the web interface for CryptoCat. It supports secure encrypted group chat, secure encrypted private chat and file and photo sharing.

There were/are some issues with the encryption provided by CryptoCat. These seem to be fixed now, but still, beware.

This tutorial is tested on Ubuntu 12.04.

Set up a DNS record

Make sure you set up two DNS A records for your chat server: one for the chat domain itself, for example chat.sparklingclouds.nl, and one for conferencing: conference.chat.sparklingclouds.nl. Contact your provider if you need help with this.

In the configuration files, you should replace chat.sparklingclouds.nl with your own domain name.

Install required packages

First we install the required packages:

apt-get install ejabberd nginx vim git

ejabberd configuration

Edit the ejabberd configuration file located at:

/etc/ejabberd/ejabberd.cfg

And place the following contents in it, replacing chat.sparklingclouds.nl with your own domain:

%% Hostname
{hosts, ["chat.sparklingclouds.nl"]}.

%% Logging
{loglevel, 0}.

{listen,
 [
  {5222, ejabberd_c2s, [
            {access, c2s},
            {shaper, c2s_shaper},
            {max_stanza_size, infinite},
                        %%zlib,
            starttls, {certfile, "/etc/ejabberd/ejabberd.pem"}
               ]},

  {5280, ejabberd_http, [
             http_bind,
             http_poll
            ]}
 ]}.

{s2s_use_starttls, true}.

{s2s_certfile, "/etc/ejabberd/ejabberd.pem"}.

{auth_method, internal}.
{auth_password_format, scram}.

{shaper, normal, {maxrate, 500000000}}.

{shaper, fast, {maxrate, 500000000}}.

{acl, local, {user_regexp, ""}}.

{access, max_user_sessions, [{10, all}]}.

{access, max_user_offline_messages, [{5000, admin}, {100, all}]}. 

{access, c2s, [{deny, blocked},
           {allow, all}]}.

{access, c2s_shaper, [{none, admin},
              {normal, all}]}.

{access, s2s_shaper, [{fast, all}]}.

{access, announce, [{allow, admin}]}.

{access, configure, [{allow, admin}]}.

{access, muc_admin, [{allow, admin}]}.

{access, muc, [{allow, all}]}.

{access, register, [{allow, all}]}.

{registration_timeout, infinity}.

{language, "en"}.

{modules,
 [
  {mod_privacy,  []},
  {mod_ping, []},
  {mod_private,  []},
  {mod_http_bind, []},
  {mod_admin_extra, []},
  {mod_muc,      [
          {host, "conference.@HOST@"},
          {access, muc},
          {access_create, muc},
          {access_persistent, muc},
          {access_admin, muc_admin},
          {max_users, 500},
          {default_room_options, [
            {allow_change_subj, false},
            {allow_private_messages, true},
            {allow_query_users, true},
            {allow_user_invites, false},
            {anonymous, true},
            {logging, false},
            {members_by_default, false},
            {members_only, false},
            {moderated, false},
            {password_protected, false},
            {persistent, false},
            {public, false},
            {public_list, true}
              ]}
                 ]},
  {mod_register, [
          {welcome_message, {"Welcome!"}},
          {access, register}
         ]}
 ]}.

NGINX Configuration

We need an SSL certificate for the web server. You can generate one yourself using the following command:

cd /etc/ssl/certs
openssl req -nodes -x509 -newkey rsa:4096 -keyout key.pem -out cert.crt -days 356

Or generate a CSR and have it signed by an “official” CA like Verisign or Digicert:

cd /etc/ssl/certs
openssl req -nodes -newkey rsa:4096 -keyout private.key -out CSR.csr 

When the certificate is in place you can continue to configure NGINX.

Edit the file or create a new virtual host.

vim /etc/nginx/sites-enabled/default

And place the following contents in it, replacing chat.sparklingclouds.nl with your own domain:

server {
    listen 80;
    listen [::]:80 default ipv6only=on;

    server_name chat.sparklingclouds.nl;
    rewrite     ^   https://$server_name$request_uri? permanent;

    add_header Strict-Transport-Security max-age=31536000;

    location / {
            root /var/www;
            index index.html index.htm;
    }
}

# HTTPS server
server {
    listen 443;
    server_name chat.sparklingclouds.nl;

    add_header Strict-Transport-Security max-age=31536000;

    ssl  on;
    ssl_certificate  /etc/ssl/certs/cert.crt;
    ssl_certificate_key  /etc/ssl/certs/key.pem;

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;

    ssl_protocols TLSv1.1 TLSv1.2;
            ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:RC4:HIGH:!MD5:!aNULL:!EDH;
            ssl_prefer_server_ciphers on;

    location / {
        root /var/www;
        index index.html index.htm;
    }

    location /http-bind {
        proxy_buffering off;
        tcp_nodelay on;
        keepalive_timeout 55;
        proxy_pass http://127.0.0.1:5280/http-bind;
    }
}

Save it and restart NGINX:

/etc/init.d/nginx restart

Cronjob for ejabberd

This is important: it cleans up unused ejabberd accounts. Create a new crontab like so:

crontab -e

And place the following in it:

1 1 * * * ejabberdctl delete-old-users 1

That way the ejabberd server gets cleaned up once every 24 hours.

Web Frontend

Note that you can already use your own server with the CryptoCat frontend via https://crypto.cat. We are going to set up our own frontend on our web server so we don’t need Crypto.Cat.

Setting up a web frontend is not recommended by the CryptoCat developers. See the comment below, and read the full thread in this Reddit post:

When you host Cryptocat as a website, this means that every time someone wants to use it, they technically will need to re-download the entire code by visiting the website. This means that every use needs a full re-download of the Cryptocat code. By centralizing the code redistribution in a "web front-end" and making it necessary for everyone to redownload the code every time, you create an opportunity for malicious code poisoning by the host, or code injection by a third party. This is why the only recommended Cryptocat download is the browser extension from the official website, which downloads only once as opposed to every time (just like a regular desktop application), and is authenticated by Cryptocat's development team as genuine.  
Kaepora - 12-11-2013 on Reddit

Take that into consideration when setting up the frontend. A use case could be an internal cryptocat chat service where people don’t need to change the default server address and such.

First get the source code:

cd /tmp
git clone https://github.com/cryptocat/cryptocat.git

Then place it in the right folder:

cp -r cryptocat/src/core /var/www/

Edit the config file to use your own server:

cd /var/www
vim js/cryptocat.js

And place the following contents in it, replacing chat.sparklingclouds.nl with your own domain:

/* Configuration */
// Domain name to connect to for XMPP.
var defaultDomain = 'chat.sparklingclouds.nl'
// Address of the XMPP MUC server.
var defaultConferenceServer = 'conference.chat.sparklingclouds.nl'
// BOSH is served over an HTTPS proxy for better security and availability.
var defaultBOSH = 'https://chat.sparklingclouds.nl/http-bind/'

Now save the file.

You are finished now. Go to your website and test the chat out.
