
Common Types of Network Attacks



Without security measures and controls in place, your data might be subjected to an attack. Some attacks are passive, meaning information is monitored; others are active, meaning the information is altered with intent to corrupt or destroy the data or the network itself.

Your networks and data are vulnerable to any of the following types of attacks if you do not have a security plan in place.


Eavesdropping

In general, the majority of network communications occur in an unsecured or “cleartext” format, which allows an attacker who has gained access to data paths in your network to “listen in” or interpret (read) the traffic. When an attacker is eavesdropping on your communications, it is referred to as sniffing or snooping. The ability of an eavesdropper to monitor the network is generally the biggest security problem that administrators face in an enterprise. Without strong encryption services that are based on cryptography, your data can be read by others as it traverses the network.


Data Modification

After an attacker has read your data, the next logical step is to alter it. An attacker can modify the data in the packet without the knowledge of the sender or receiver. Even if you do not require confidentiality for all communications, you do not want any of your messages to be modified in transit. For example, if you are exchanging purchase requisitions, you do not want the items, amounts, or billing information to be modified.


Identity Spoofing (IP Address Spoofing)

Most networks and operating systems use the IP address of a computer to identify a valid entity. In certain cases, it is possible for an IP address to be falsely assumed; this is known as identity spoofing. An attacker might also use special programs to construct IP packets that appear to originate from valid addresses inside the corporate intranet.

After gaining access to the network with a valid IP address, the attacker can modify, reroute, or delete your data. The attacker can also conduct other types of attacks, as described in the following sections.


Password-Based Attacks

A common denominator of most operating system and network security plans is password-based access control. This means your access rights to a computer and network resources are determined by who you are, that is, your user name and your password.

Older applications do not always protect identity information as it is passed through the network for validation. This might allow an eavesdropper to gain access to the network by posing as a valid user.

When an attacker finds a valid user account, the attacker has the same rights as the real user. Therefore, if the user has administrator-level rights, the attacker can also create accounts for subsequent access.

After gaining access to your network with a valid account, an attacker can do any of the following:

  • Obtain lists of valid user and computer names and network information.
  • Modify server and network configurations, including access controls and routing tables.
  • Modify, reroute, or delete your data.

Denial-of-Service Attack

Unlike a password-based attack, the denial-of-service attack prevents normal use of your computer or network by valid users.

After gaining access to your network, the attacker can do any of the following:

  • Distract your internal Information Systems staff so that they do not see the intrusion immediately, which allows the attacker to make more attacks during the diversion.
  • Send invalid data to applications or network services, which causes abnormal termination or behavior of the applications or services.
  • Flood a computer or the entire network with traffic until a shutdown occurs because of the overload.
  • Block traffic, which results in a loss of access to network resources by authorized users.

Man-in-the-Middle Attack

As the name indicates, a man-in-the-middle attack occurs when someone between you and the person with whom you are communicating is actively monitoring, capturing, and controlling your communication transparently. For example, the attacker can re-route a data exchange. When computers are communicating at low levels of the network layer, the computers might not be able to determine with whom they are exchanging data.

Man-in-the-middle attacks are like someone assuming your identity in order to read your message. The person on the other end might believe it is you because the attacker might be actively replying as you to keep the exchange going and gain more information. This attack is capable of the same damage as an application-layer attack, described later in this section.


Compromised-Key Attack

A key is a secret code or number necessary to interpret secured information. Although obtaining a key is a difficult and resource-intensive process for an attacker, it is possible. After an attacker obtains a key, that key is referred to as a compromised key.

An attacker uses the compromised key to gain access to a secured communication without the sender or receiver being aware of the attack. With the compromised key, the attacker can decrypt or modify data, and try to use the compromised key to compute additional keys, which might allow the attacker access to other secured communications.


Sniffer Attack

A sniffer is an application or device that can monitor, capture, and read network data exchanges and network packets. If the packets are not encrypted, a sniffer provides a full view of the data inside the packet. Even encapsulated (tunneled) packets can be broken open and read unless they are encrypted with a key the attacker does not have.

Using a sniffer, an attacker can do any of the following:

  • Analyze your network and gain information to eventually cause your network to crash or to become corrupted.
  • Read your communications.

Application-Layer Attack

An application-layer attack targets application servers by deliberately causing a fault in a server’s operating system or applications. This results in the attacker gaining the ability to bypass normal access controls. The attacker takes advantage of this situation, gaining control of your application, system, or network, and can do any of the following:

  • Read, add, delete, or modify your data or operating system.
  • Introduce a virus program that uses your computers and software applications to copy viruses throughout your network.
  • Introduce a sniffer program to analyze your network and gain information that can eventually be used to crash or to corrupt your systems and network.
  • Abnormally terminate your data applications or operating systems.
  • Disable other security controls to enable future attacks.

CWE/SANS TOP 25 Most Dangerous Software Errors


The Top 25 Software Errors are listed below in three categories: Insecure Interaction Between Components, Risky Resource Management, and Porous Defenses.

Click on the CWE ID in any of the listings and you will be directed to the relevant spot in the MITRE CWE site where you will find the following:

  • Ranking of each Top 25 entry,
  • Links to the full CWE entry data,
  • Data fields for weakness prevalence and consequences,
  • Remediation cost,
  • Ease of detection,
  • Code examples,
  • Detection Methods,
  • Attack frequency and attacker awareness
  • Related CWE entries, and
  • Related patterns of attack for this weakness.

Each entry at the Top 25 Software Errors site also includes fairly extensive prevention and remediation steps that developers can take to mitigate or eliminate the weakness.


Insecure Interaction Between Components

These weaknesses are related to insecure ways in which data is sent and received between separate components, modules, programs, processes, threads, or systems.

CWE ID Name
CWE-89 Improper Neutralization of Special Elements used in an SQL Command (‘SQL Injection’)
CWE-78 Improper Neutralization of Special Elements used in an OS Command (‘OS Command Injection’)
CWE-79 Improper Neutralization of Input During Web Page Generation (‘Cross-site Scripting’)
CWE-434 Unrestricted Upload of File with Dangerous Type
CWE-352 Cross-Site Request Forgery (CSRF)
CWE-601 URL Redirection to Untrusted Site (‘Open Redirect’)

Risky Resource Management

The weaknesses in this category are related to ways in which software does not properly manage the creation, usage, transfer, or destruction of important system resources.

CWE ID Name
CWE-120 Buffer Copy without Checking Size of Input (‘Classic Buffer Overflow’)
CWE-22 Improper Limitation of a Pathname to a Restricted Directory (‘Path Traversal’)
CWE-494 Download of Code Without Integrity Check
CWE-829 Inclusion of Functionality from Untrusted Control Sphere
CWE-676 Use of Potentially Dangerous Function
CWE-131 Incorrect Calculation of Buffer Size
CWE-134 Uncontrolled Format String
CWE-190 Integer Overflow or Wraparound

Porous Defenses

The weaknesses in this category are related to defensive techniques that are often misused, abused, or just plain ignored.

CWE ID Name
CWE-306 Missing Authentication for Critical Function
CWE-862 Missing Authorization
CWE-798 Use of Hard-coded Credentials
CWE-311 Missing Encryption of Sensitive Data
CWE-807 Reliance on Untrusted Inputs in a Security Decision
CWE-250 Execution with Unnecessary Privileges
CWE-863 Incorrect Authorization
CWE-732 Incorrect Permission Assignment for Critical Resource
CWE-327 Use of a Broken or Risky Cryptographic Algorithm
CWE-307 Improper Restriction of Excessive Authentication Attempts
CWE-759 Use of a One-Way Hash without a Salt

Resources to Help Eliminate The Top 25 Software Errors

  1. The TOP 25 Errors List will be updated regularly and will be posted at both the SANS and MITRE sites
    SANS Top 25 Software Errors Site
    CWE Top 25 Software Errors Site

    MITRE maintains the CWE (Common Weakness Enumeration) web site, with the support of the US Department of Homeland Security’s National Cyber Security Division, presenting detailed descriptions of the top 25 software errors along with authoritative guidance for mitigating and avoiding them. That site also contains data on more than 700 additional software errors, design errors and architecture errors that can lead to exploitable vulnerabilities. CWE Web Site

    SANS maintains a series of assessments of secure coding skills in three languages along with certification exams that allow programmers to determine gaps in their knowledge of secure coding and allows buyers to ensure outsourced programmers have sufficient programming skills. Organizations with more than 500 programmers can assess the secure coding skills of up to 100 programmers at no cost.

    Email spa@sans.org for details.

  2. SAFECode – The Software Assurance Forum for Excellence in Code (members include EMC, Juniper, Microsoft, Nokia, SAP and Symantec) has produced two excellent publications outlining industry best practices for software assurance and providing practical advice for implementing proven methods for secure software development.

    Fundamental Practices for Secure Software Development 2nd Edition
    http://www.safecode.org/publications/SAFECode_Dev_Practices0211.pdf

    Overview of Software Integrity Controls
    http://www.safecode.org/publications/SAFECode_Software_Integrity_Controls0610.pdf

    Framework for Software Supply Chain Integrity
    http://www.safecode.org/publications/SAFECode_Supply_Chain0709.pdf

    Fundamental Practices for Secure Software Development
    http://www.safecode.org/publications/SAFECode_Dev_Practices1108.pdf

    Software Assurance: An Overview of Current Industry Best Practices
    http://www.safecode.org/publications/SAFECode_BestPractices0208.pdf

  3. Software Assurance Community Resources Site and DHS web sites: As part of DHS risk mitigation efforts to enable greater resilience of cyber assets, the Software Assurance Program seeks to reduce software vulnerabilities, minimize exploitation, and address ways to routinely acquire, develop and deploy reliable and trustworthy software products with predictable execution, and to improve diagnostic capabilities to analyze systems for exploitable weaknesses.
  4. Nearly a dozen software companies offer automated tools that test programs for these errors.
  5. New York State has produced draft procurement standards to allow companies to buy software with security baked in.

    If you wish to join the working group to help improve the procurement guidelines you can go to the New York State Cyber Security and Critical Infrastructure Coordination web site.

    Draft New York State procurement language will be posted at SANS Application Security Contract.

For additional information on any of these:
SANS: Mason Brown, mbrown@sans.org
MITRE: Bob Martin, ramartin@mitre.org
MITRE: Steve Christey, coley@mitre.org

Securing Web Application Technologies – [SWAT] Checklist


Securing Web Application Technologies
[SWAT] Checklist

The SWAT Checklist provides an easy to reference set of best practices that raise awareness and help development teams create more secure applications. It’s a first step toward building a base of security knowledge around web application security. Use this checklist to identify the minimum standard that is required to neutralize vulnerabilities in your critical applications.

ERROR HANDLING AND LOGGING

  • Display Generic Error Messages: Error messages should not reveal details about the internal state of the application. For example, file system paths and stack information should not be exposed to the user through error messages. (CWE-209)
  • No Unhandled Exceptions: Given the languages and frameworks in use for web application development, never allow an unhandled exception to occur. Error handlers should be configured to handle unexpected errors and gracefully return controlled output to the user. (CWE-391)
  • Suppress Framework-Generated Errors: Your development framework or platform may generate default error messages. These should be suppressed or replaced with customized error messages, as framework-generated messages may reveal sensitive information to the user. (CWE-209)
  • Log All Authentication Activities: Any authentication activities, whether successful or not, should be logged. (CWE-778)
  • Log All Privilege Changes: Any activities or occasions where the user’s privilege level changes should be logged. (CWE-778)
  • Log Administrative Activities: Any administrative activities on the application or any of its components should be logged. (CWE-778)
  • Log Access To Sensitive Data: Any access to sensitive data should be logged. This is particularly important for corporations that have to meet regulatory requirements like HIPAA, PCI, or SOX. (CWE-778)
  • Do Not Log Inappropriate Data: While logging errors and auditing access is important, sensitive data should never be logged in an unencrypted form. For example, under HIPAA and PCI, it would be a violation to log sensitive data into the log itself unless the log is encrypted on disk. Additionally, it can create a serious exposure point should the web application itself become compromised. (CWE-532)
  • Store Logs Securely: Logs should be stored and maintained appropriately to avoid information loss or tampering by an intruder. Log retention should also follow the retention policy set forth by the organization to meet regulatory requirements and provide enough information for forensic and incident response activities. (CWE-533)
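The authentication-logging items above can be sketched in a few lines. This is a minimal illustration using Python's standard `logging` module; the logger name, format, and event fields are assumptions, not part of the checklist itself.

```python
import logging

# Minimal sketch: a dedicated audit logger for authentication events
# ("Log All Authentication Activities", CWE-778). Handler and format
# choices here are illustrative assumptions.
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.StreamHandler())

def format_login_event(username, success, source_ip):
    # Record who, the outcome, and the source -- never the password
    # itself, per "Do Not Log Inappropriate Data" (CWE-532).
    outcome = "SUCCESS" if success else "FAILURE"
    return "login %s user=%s ip=%s" % (outcome, username, source_ip)

audit.info(format_login_event("alice", False, "203.0.113.7"))
```

Keeping the event formatting in one function makes it easy to audit exactly which fields ever reach the log.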

DATA PROTECTION

  • Use SSL Everywhere: Ideally, SSL should be used for your entire application. If you have to limit where it’s used, then SSL must be applied to any authentication pages as well as all pages after the user is authenticated. If sensitive information (e.g. personal information) can be submitted before authentication, those features must also be sent over SSL. EXAMPLE: Firesheep. (CWE-311, CWE-319, CWE-523)
  • Disable HTTP Access For All SSL-Enabled Resources: For all pages requiring protection by SSL, the same URL should not be accessible via the non-SSL channel. (CWE-319)
  • Use The Strict-Transport-Security Header: The Strict-Transport-Security header ensures that the browser does not talk to the server over non-SSL. This helps reduce the risk of SSL stripping attacks as implemented by the sslsniff tool.
  • Store User Passwords Using A Strong, Iterative, Salted Hash: User passwords must be stored using secure hashing techniques with a strong algorithm like SHA-256. Simply hashing the password a single time does not sufficiently protect the password. Use iterative hashing with a random salt to make the hash strong. EXAMPLE: LinkedIn password leak. (CWE-257)
  • Securely Exchange Encryption Keys: If encryption keys are exchanged or pre-set in your application, then any key establishment or exchange must be performed over a secure channel.
  • Set Up Secure Key Management Processes: When keys are stored in your system, they must be properly secured and only accessible to the appropriate staff on a need-to-know basis. (CWE-320)
  • Disable Weak SSL Ciphers On Servers: Weak SSL ciphers must be disabled on all servers. For example, SSL v2 has known weaknesses and is not considered to be secure. Additionally, some ciphers are cryptographically weak and should be disabled.
  • Use Valid SSL Certificates From A Reputable CA: SSL certificates should be signed by a reputable certificate authority. The name on the certificate should match the FQDN of the website. The certificate itself should be valid and not expired. EXAMPLE: CA compromise (http://en.wikipedia.org/wiki/DigiNotar)
  • Disable Data Caching Using Cache Control Headers And Autocomplete: Browser data caching should be disabled using the cache control HTTP headers or meta tags within the HTML page. Additionally, sensitive input fields, such as the login form, should have the autocomplete=off setting in the HTML form to instruct the browser not to cache the credentials. (CWE-524)
  • Limit The Use And Storage Of Sensitive Data: Conduct an evaluation to ensure that sensitive data is not being unnecessarily transported or stored. Where possible, use tokenization to reduce data exposure risks.
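The iterative, salted hashing item above can be sketched with the standard library's PBKDF2 implementation. The iteration count below is an illustrative assumption; tune it for your hardware.

```python
import hashlib
import hmac
import os

# Sketch of an iterative, salted password hash (CWE-257) using
# PBKDF2-HMAC-SHA256 from the standard library.
ITERATIONS = 200_000  # illustrative value; raise as hardware allows

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison
```

Store the salt alongside the digest; the salt is not secret, it only forces attackers to crack each password individually.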

CONFIGURATION AND OPERATIONS

  • Establish A Rigorous Change Management Process: A rigorous change management process must be maintained during change management operations; for example, new releases should only be deployed after the defined process has been followed. EXAMPLE: RBS production outage (http://www.computing.co.uk/ctg/analysis/2186972/rbs-wrong-rbs-manager) (CWE-439)
  • Define Security Requirements: Engage the business owner to define security requirements for the application. This includes items that range from whitelist validation rules all the way to nonfunctional requirements like the performance of the login function. Defining these requirements up front ensures that security is baked into the system.
  • Conduct A Design Review: Integrating security into the design phase saves money and time. Conduct a risk review with security professionals and threat model the application to identify key risks. This helps you integrate appropriate countermeasures into the design and architecture of the application. (CWE-701, CWE-656)
  • Perform Code Reviews: Security-focused code reviews can be one of the most effective ways to find security bugs. Regularly review your code looking for common issues like SQL Injection and Cross-Site Scripting. (CWE-702)
  • Perform Security Testing: Conduct security testing both during and after development to ensure the application meets security standards. Testing should also be conducted after major releases to ensure vulnerabilities did not get introduced during the update process.
  • Harden The Infrastructure: All components of infrastructure that support the application should be configured according to security best practices and hardening guidelines. In a typical web application this can include routers, firewalls, network switches, operating systems, web servers, application servers, databases, and application frameworks. (CWE-15, CWE-656)
  • Define An Incident Handling Plan: An incident handling plan should be drafted and tested on a regular basis. The contact list of people to involve in a security incident related to the application should be well defined and kept up to date.
  • Educate The Team On Security: Training helps define a common language that the team can use to improve the security of the application. Education should not be confined solely to software developers, testers, and architects. Anyone associated with the development process, such as business analysts and project managers, should also have periodic software security awareness training.

AUTHENTICATION

  • Don’t Hardcode Credentials: Never allow credentials to be stored directly within the application code. While it can be convenient to test application code with hardcoded credentials during development, this significantly increases risk and should be avoided. EXAMPLE: Hard-coded passwords in networking devices (https://www.us-cert.gov/control_systems/pdf/ICSA-12-243-01.pdf) (CWE-798)
  • Develop A Strong Password Reset System: Password reset systems are often the weakest link in an application. These systems are often based on the user answering personal questions to establish their identity and in turn reset the password. The system needs to be based on questions that are both hard to guess and hard to brute force. Additionally, any password reset option must not reveal whether or not an account is valid, preventing username harvesting. EXAMPLE: Sarah Palin password hack (http://en.wikipedia.org/wiki/Sarah_Palin_email_hack) (CWE-640)
  • Implement A Strong Password Policy: A password policy should be created and implemented so that passwords meet specific strength criteria. EXAMPLE: http://www.pcworld.com/article/128823/study_weak_passwords_really_do_help_hackers.html (CWE-521)
  • Implement Account Lockout Against Brute Force Attacks: Account lockout needs to be implemented to guard against brute-forcing attacks against both the authentication and password reset functionality. After several tries on a specific user account, the account should be locked for a period of time or until manually unlocked. Additionally, it is best to return the same failure message whether the credentials are incorrect or the account is locked, to prevent an attacker from harvesting usernames. (CWE-307)
  • Don’t Disclose Too Much Information In Error Messages: Messages for authentication errors must be clear and, at the same time, be written so that sensitive information about the system is not disclosed. For example, error messages which reveal that the userid is valid but that the corresponding password is incorrect confirm to an attacker that the account does exist on the system.
  • Store Database Credentials Securely: Modern web applications usually consist of multiple layers. The business logic tier (processing of information) often connects to the data tier (database). Connecting to the database, of course, requires authentication. The authentication credentials in the business logic tier must be stored in a centralized location that is locked down. Scattering credentials throughout the source code is not acceptable. Some development frameworks provide a centralized secure location for storing credentials to the backend database. These encrypted stores should be leveraged when possible. (CWE-257)
  • Applications And Middleware Should Run With Minimal Privileges: If an application becomes compromised, it is important that the application itself and any middleware services be configured to run with minimal privileges. For instance, while the application layer or business layer needs the ability to read and write data to the underlying database, administrative credentials that grant access to other databases or tables should not be provided. (CWE-250)
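The account-lockout item above can be sketched as a small in-memory tracker. This is purely illustrative: the thresholds, names, and in-memory dictionary are assumptions, and a real deployment would persist this state server-side (e.g. in a database).

```python
import time

# Illustrative lockout tracker ("Implement Account Lockout", CWE-307).
MAX_ATTEMPTS = 5            # assumed threshold
LOCKOUT_SECONDS = 15 * 60   # assumed lockout window

_failures = {}  # username -> (failed_count, first_failure_timestamp)

def record_failure(username, now=None):
    now = now if now is not None else time.time()
    count, start = _failures.get(username, (0, now))
    _failures[username] = (count + 1, start)

def is_locked(username, now=None):
    now = now if now is not None else time.time()
    count, start = _failures.get(username, (0, now))
    if now - start > LOCKOUT_SECONDS:   # window expired; reset the counter
        _failures.pop(username, None)
        return False
    return count >= MAX_ATTEMPTS
```

Per the checklist, the login handler should return the same generic failure message whether `is_locked` is true or the password is simply wrong.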

SESSION MANAGEMENT

  • Ensure That Session Identifiers Are Sufficiently Random: Session tokens must be generated by secure random functions and must be of a sufficient length so as to withstand analysis and prediction. (CWE-6)
  • Regenerate Session Tokens: Session tokens should be regenerated when the user authenticates to the application and when the user privilege level changes. Additionally, should the encryption status change, the session token should always be regenerated. (CWE-384)
  • Implement An Idle Session Timeout: When a user is not active, the application should automatically log the user out. Be aware that Ajax applications may make recurring calls to the application, effectively resetting the timeout counter automatically. (CWE-613)
  • Implement An Absolute Session Timeout: Users should be logged out after an extensive amount of time (e.g. 4–8 hours) has passed since they logged in. This helps mitigate the risk of an attacker using a hijacked session. (CWE-613)
  • Destroy Sessions At Any Sign Of Tampering: Unless the application requires multiple simultaneous sessions for a single user, implement features to detect session cloning attempts. Should any sign of session cloning be detected, the session should be destroyed, forcing the real user to re-authenticate.
  • Invalidate The Session After Logout: When the user logs out of the application, the session and corresponding data on the server must be destroyed. This ensures that the session cannot be accidentally revived. (CWE-613)
  • Place A Logout Button On Every Page: The logout button or logout link should be easily accessible to the user on every page after they have authenticated.
  • Use Secure Cookie Attributes (i.e., HttpOnly And Secure Flags): The session cookie should be set with both the HttpOnly and the Secure flags. This ensures that the session id will not be accessible to client-side scripts and that it will only be transmitted over SSL, respectively. (CWE-79, CWE-614)
  • Set The Cookie Domain And Path Correctly: The cookie domain and path scope should be set to the most restrictive settings for your application. Any wildcard domain scoped cookie must have a good justification for its existence.
  • Set The Cookie Expiration Time: The session cookie should have a reasonable expiration time. Non-expiring session cookies should be avoided.
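The "sufficiently random" session identifier item above can be sketched with the standard library's CSPRNG. The 32-byte length is an assumed, commonly used choice, not a checklist requirement.

```python
import secrets

# Sketch: unpredictable session identifiers (CWE-6) from the
# cryptographically secure `secrets` module. 32 bytes = 256 bits
# of entropy, an assumed but typical length.
def new_session_token():
    return secrets.token_urlsafe(32)
```

The framework would then place this value in a cookie marked HttpOnly and Secure, as the cookie items above describe, and regenerate it on login and privilege change.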

INPUT AND OUTPUT HANDLING

  • Conduct Contextual Output Encoding: All output functions must contextually encode data before sending it to the user. Depending on where the output will end up in the HTML page, the output must be encoded differently. For example, data placed in the URL context must be encoded differently than data placed in JavaScript context within the HTML page. Resource: https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet (CWE-79)
  • Prefer Whitelists Over Blacklists: For each user input field, there should be validation on the input content. Whitelisting input is the preferred approach: only accept data that meets certain criteria. For input that needs more flexibility, blacklisting can also be applied, where known bad input patterns or characters are blocked. (CWE-159, CWE-144)
  • Use Parameterized SQL Queries: SQL queries should be crafted with user content passed into a bind variable. Queries written this way are safe against SQL injection attacks. SQL queries should not be created dynamically using string concatenation. Similarly, the SQL query string used in a bound or parameterized query should never be dynamically built from user input. EXAMPLE: Sony SQL injection hack (http://www.infosecurity-magazine.com/view/27930/lulzsec-sony-pictures-hackers-were-school-chums) (CWE-89, CWE-564)
  • Use Tokens To Prevent Forged Requests: In order to prevent Cross-Site Request Forgery attacks, you must embed a random value that is not known to third parties into the HTML form. This CSRF protection token must be unique to each request. This prevents a forged CSRF request from being submitted because the attacker does not know the value of the token. (CWE-352)
  • Set The Encoding For Your Application: For every page in your application, set the encoding using HTTP headers or meta tags within HTML. This ensures that the encoding of the page is always defined and that the browser will not have to determine the encoding on its own. Setting a consistent encoding, like UTF-8, for your application reduces the overall risk of issues like Cross-Site Scripting. (CWE-172)
  • Validate Uploaded Files: When accepting file uploads from the user, make sure to validate the size of the file, the file type, and the file contents, as well as ensuring that it is not possible to override the destination path for the file. (CWE-434, CWE-616, CWE-22)
  • Use The Nosniff Header For Uploaded Content: When hosting user-uploaded content which can be viewed by other users, use the X-Content-Type-Options: nosniff header so that browsers do not try to guess the data type. Sometimes the browser can be tricked into displaying the data type incorrectly (e.g. showing a GIF file as HTML). Always let the server or application determine the data type. (CWE-430)
  • Validate The Source Of Input: The source of the input must be validated. For example, if input is expected from a POST request, do not accept the input variable from a GET request. (CWE-20, CWE-346)
  • Use The X-Frame-Options Header: Use the X-Frame-Options header to prevent content from being loaded by a foreign site in a frame. This mitigates clickjacking attacks. For older browsers that do not support this header, add frame-busting JavaScript code to mitigate clickjacking (although this method is not foolproof and can be circumvented). EXAMPLE: Flash camera and mic hack (http://jeremiahgrossman.blogspot.com/2008/10/clickjacking-web-pages-can-see-and-hear.html) (CAPEC-103, CWE-693)
  • Use Content Security Policy (CSP) Or X-XSS-Protection Headers: Content Security Policy (CSP) and X-XSS-Protection headers help defend against many common reflected Cross-Site Scripting (XSS) attacks. (CWE-79, CWE-692)
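The parameterized-query item above can be sketched with the standard library's sqlite3 module; the table and data are made up for illustration, but the bind-variable pattern is the same in any database API.

```python
import sqlite3

# Sketch of a parameterized query (CWE-89): user content is passed as a
# bind variable, never concatenated into the SQL string. Table and rows
# here are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    # The "?" placeholder keeps hostile input (e.g. "x' OR '1'='1") inert:
    # it is matched as a literal string, not parsed as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchone()
```

A classic injection payload simply fails to match any row, because the driver treats the entire input as data.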

ACCESS CONTROL

  • Apply Access Control Checks Consistently: Always apply the principle of complete mediation, forcing all requests through a common security “gate keeper.” This ensures that access control checks are triggered whether or not the user is authenticated. (CWE-284)
  • Apply The Principle Of Least Privilege: Make use of a Mandatory Access Control system. All access decisions should be based on the principle of least privilege: if not explicitly allowed, access should be denied. Additionally, after an account is created, rights must be specifically added to that account to grant access to resources. (CWE-272, CWE-250)
  • Don’t Use Direct Object References For Access Control Checks: Do not allow direct references to files or parameters that can be manipulated to grant excessive access. Access control decisions must be based on the authenticated user identity and trusted server-side information. (CWE-284)
  • Don’t Use Unvalidated Forwards Or Redirects: An unvalidated forward can allow an attacker to access private content without authentication. Unvalidated redirects allow an attacker to lure victims into visiting malicious sites. Prevent these from occurring by conducting the appropriate access control checks before sending the user to the given location. (CWE-601)
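The "gate keeper" idea of complete mediation can be sketched as a decorator that every handler must pass through. The permission map, names, and deny-by-default behavior below are illustrative assumptions, not a prescribed API.

```python
from functools import wraps

# Illustrative "gate keeper" enforcing complete mediation (CWE-284):
# every handler goes through one centralized access-control check,
# denying by default (least privilege, CWE-272/CWE-250).
PERMISSIONS = {"alice": {"view_report"}, "bob": set()}  # assumed store

def require_permission(permission):
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, *args, **kwargs):
            if permission not in PERMISSIONS.get(user, set()):
                raise PermissionError("access denied")  # deny by default
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("view_report")
def view_report(user):
    return "report for %s" % user
```

Because the check lives in one place, it cannot be forgotten on an individual handler, and the decision is based on server-side state rather than a client-supplied object reference.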

Bsdnow.tv Tutorials List


Tutorials:

from: http://www.bsdnow.tv/tutorials/

Ray-Mon – PHP and Bash server status monitoring


Ray-Mon is a Linux server monitoring script written in PHP and Bash, using JSON for data storage. It requires only bash and a webserver on the client side, and only PHP on the server side. The client currently supports monitoring processes, uptime, updates, the number of users logged in, disk usage, RAM usage and network traffic.

Features

  • Ping monitor
  • History per host
  • Threshold per monitored item.
  • Monitors:
    • Processes (lighttpd, apache, nginx, sshd, munin etc.)
    • RAM
    • Disk
    • Uptime
    • Users logged on
    • Updates
    • Network (RX/TX)

Download

Either git clone the github repo:

git clone git://github.com/RaymiiOrg/raymon.git

Or download the zipfile from github:

https://github.com/RaymiiOrg/raymon/zipball/master

Or download the zipfile from Raymii.org

https://raymii.org/s/inc/software/raymon-0.0.2.zip

This is the github page: https://github.com/RaymiiOrg/raymon/

Changelog

v0.0.2
  • Server side now only requires 1 script instead of 2.
  • Client script generates the JSON more robustly; a missing value no longer breaks the JSON file.
  • Changed the visual style to a better layout.
  • Thresholds implemented and configurable.
  • History per host now implemented.
v0.0.1
  • Initial release

Install

Client

The client.sh script is a bash script that outputs JSON. It must be run as root. It also requires a webserver, so that the server can fetch the JSON file.

Software needed for the script:

  • bash
  • awk
  • grep
  • ifconfig
  • package managers supported: apt-get, yum and pacman (debian/ubuntu, centos/RHEL/SL, Arch)

Set up a webserver (lighttpd, apache, boa, thttpd, nginx) to serve the script output. If there is already a webserver running on the client, you don't need to install another one.

Edit the script:

Network interfaces: the first is used for the IP address, the second for bandwidth calculations. This distinction exists because OpenVZ has the "venet0" interface for bandwidth and the "venet0:0" interface with the IP. If you run bare metal, KVM, vserver etc. you can set both to the same value (eth0, eth1, etc.).

# Network interface for the IP address
iface="venet0:0"
# network interface for traffic monitoring (RX/TX bytes)
iface2="venet0"

The IP address of the server: I use this when deploying the script via Chef or Ansible. You can set it, but it is not required.

Services are checked with ps to see if the process is running. The last service must be defined without a trailing comma to keep the JSON valid. The code below monitors "sshd", "lighttpd", "munin-node" and "syslog".

SERVICE=lighttpd
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=sshd
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
SERVICE=syslog
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi
#LAST SERVICE HAS TO BE WITHOUT , FOR VALID JSON!!!
SERVICE=munin-node
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\""; else echo -n "\"$SERVICE\" : \"not running\""; fi

To add a service, copy the two lines and replace SERVICE=processname with the actual process name:

SERVICE=processname
if ps ax | grep -v grep | grep $SERVICE > /dev/null; then echo -n "\"$SERVICE\" : \"running\","; else echo -n "\"$SERVICE\" : \"not running\","; fi

And make sure the last service monitored does not echo a comma at the end; otherwise the JSON is not valid and the php script fails.
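The trailing-comma pitfall can also be avoided entirely by collecting the entries in an array and joining them afterwards. This is a sketch, not part of the original script; the service names are just examples:

```shell
#!/bin/bash
# Collect one "name" : "status" pair per service, then join them with commas,
# so no entry needs special-casing as the last one.
services=(sshd lighttpd munin-node syslog)
entries=()
for svc in "${services[@]}"; do
  if ps ax | grep -v grep | grep -q "$svc"; then
    entries+=("\"$svc\" : \"running\"")
  else
    entries+=("\"$svc\" : \"not running\"")
  fi
done
# ${entries[*]} joins the array with the first character of IFS.
joined=$(IFS=,; echo "${entries[*]}")
echo "$joined"
```

With this structure, adding a service is a one-word change to the `services` array.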

Now set up a cronjob to execute the script at a set interval and save the JSON to the webserver directory.

As root, create the file /etc/cron.d/raymon-client with the following contents:

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * root /root/scripts/client.sh | sed ':a;N;$!ba;s/\n//g' > /var/www/stat.json

In my case, the client script is in /root/scripts and my webserver directory is /var/www; change these to match your setup. You might also want to change the interval: */5 executes every 5 minutes. The sed expression removes the newlines, producing a shorter JSON file that saves a few KB. The "root" field after the cron schedule is specific to files in /etc/cron.d/; it tells cron which user should execute the command.
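The sed expression exists only to flatten the output onto one line; `tr -d '\n'` is a simpler equivalent, demonstrated here on inline sample input:

```shell
# Flatten multi-line JSON-ish output onto one line, as the cron job does.
printf '{ "a": 1,\n"b": 2 }\n' | tr -d '\n'
# → { "a": 1,"b": 2 }
```

Either command works in the crontab line; sed was simply what the original script shipped with.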

When this is set up, you should get a stat.json file in the /var/www/ folder containing the status JSON. If so, the client is configured correctly.

Server

The status server is a PHP script that fetches the JSON files from the clients every 5 minutes, saves them and displays them. It can also save history; that setup is described below.

Requirements:

  • Webserver with PHP (min. 5.2) and write access to the folder where the script is located.

Steps:

Create a new folder on the webserver and make sure the webserver user (www-data) can write to it.

Place the php file “stat.php” in that folder.

Edit the host list in the php file to include your clients:

The first parameter is the filename the json file is saved to, and the second is the URL where the json file is located.

$hostlist=array(
                'example1.org.json' => 'http://example1.org/stat.json',
                'example2.nl.json' => 'http://example2.nl/stat.json',
                'special1.network.json' => 'http://special1.network.eu:8080/stat.json',
                'special2.network.json' => 'https://special2.network.eu/stat.json'
                );

Edit the values for the ping monitor:

$pinglist = array(
                  'github.com',
                  'google.nl',
                  'tweakers.net',
                  'jupiterbroadcasting.com',
                  'lowendtalk.com',
                  'lowendbox.com' 
                  );

Edit the threshold values:

## Set this to "secure" the history saving. This key has to be given as a parameter to save the history.
$historykey = "8A29691737D";
#the below values set the threshold before a value gets shown in bold on the page.
# Max updates available
$maxupdates = "10";
# Max users concurrently logged in
$maxusers = "3";
# Max load.
$maxload = "2";
# Max disk usage (in percent)
$maxdisk = "75";
# Max RAM usage (in percent)
$maxram = "75";

History

To save the history, set up a cronjob that requests the status page with a special "history key". You define this key in the stat.php file:

## Set this to "secure" the history saving. This key has to be given as a parameter to save the history.
$historykey = "8A29691737D";    

And then the cronjob to get it:

## This saves the history every 8 hours. 
30 */8 * * * wget -qO /dev/null "http://url-to-status.site/status/stat.php?action=save&key=8A29691737D"

The cronjob can be on any server which can access the status page, but preferably on the host where the status page is located.

OpenSSL: Manually verify a certificate against a CRL


This article shows you how to manually verify a certificate against a CRL. CRL stands for Certificate Revocation List and is one way to validate a certificate's status. It is an alternative to OCSP, the Online Certificate Status Protocol.

You can read more about CRLs on Wikipedia.

We will be using OpenSSL in this article. I’m using the following version:

$ openssl version
OpenSSL 1.0.2 22 Jan 2015

Get a certificate with a CRL

First we need a certificate from a website; I'll use Wikipedia as an example here. We can retrieve it with the following openssl command:

openssl s_client -connect wikipedia.org:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p'

Save this output to a file, for example, wikipedia.pem:

openssl s_client -connect wikipedia.org:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p' > wikipedia.pem

Now, check whether this certificate has a CRL URI:

openssl x509 -noout -text -in wikipedia.pem | grep -A 4 'X509v3 CRL Distribution Points'
X509v3 CRL Distribution Points: 
    Full Name:
      URI:http://crl.globalsign.com/gs/gsorganizationvalsha2g2.crl

If this gives no output, the certificate has no CRL URI and you cannot validate it against a CRL.
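If you want to experiment without relying on a live site, you can generate a throwaway self-signed certificate and run the same check against it. A bare self-signed certificate like this carries no CRL distribution points (filenames here are placeholders, not from the article):

```shell
# Create a throwaway key + self-signed certificate, non-interactively.
openssl req -x509 -newkey rsa:2048 -nodes -keyout test.key -out test.pem \
  -days 1 -subj "/CN=example.test" 2>/dev/null

# The same grep as above finds nothing, so the fallback message is printed:
openssl x509 -noout -text -in test.pem \
  | grep -A 4 'X509v3 CRL Distribution Points' || echo "no CRL URI"
# → no CRL URI
```

This makes it easy to see what the "no output" case looks like before testing real certificates.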

Download the CRL:

wget -O crl.der http://crl.globalsign.com/gs/gsorganizationvalsha2g2.crl

The CRL will be in DER (binary) format. The OpenSSL command needs it in PEM (base64 encoded DER) format, so convert it:

openssl crl -inform DER -in crl.der -outform PEM -out crl.pem

Getting the certificate chain

It is required to have the certificate chain together with the certificate you want to validate. So, we need to get the certificate chain for our domain, wikipedia.org. Using the -showcerts option with openssl s_client, we can see all the certificates, including the chain:

openssl s_client -connect wikipedia.org:443 -showcerts 2>&1 < /dev/null

Results in a lot of output, but what we are interested in is the following:

 1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
-----BEGIN CERTIFICATE-----
MIIGWDCCBUCgAwIBAgIQCl8RTQNbF5EX0u/UA4w/OzANBgkqhkiG9w0BAQUFADBs
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j
ZSBFViBSb290IENBMB4XDTA4MDQwMjEyMDAwMFoXDTIyMDQwMzAwMDAwMFowZjEL
MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3
LmRpZ2ljZXJ0LmNvbTElMCMGA1UEAxMcRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug
Q0EtMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9hCikQH17+NDdR
CPge+yLtYb4LDXBMUGMmdRW5QYiXtvCgFbsIYOBC6AUpEIc2iihlqO8xB3RtNpcv
KEZmBMcqeSZ6mdWOw21PoF6tvD2Rwll7XjZswFPPAAgyPhBkWBATaccM7pxCUQD5
BUTuJM56H+2MEb0SqPMV9Bx6MWkBG6fmXcCabH4JnudSREoQOiPkm7YDr6ictFuf
1EutkozOtREqqjcYjbTCuNhcBoz4/yO9NV7UfD5+gw6RlgWYw7If48hl66l7XaAs
zPw82W3tzPpLQ4zJ1LilYRyyQLYoEt+5+F/+07LJ7z20Hkt8HEyZNp496+ynaF4d
32duXvsCAwEAAaOCAvowggL2MA4GA1UdDwEB/wQEAwIBhjCCAcYGA1UdIASCAb0w
ggG5MIIBtQYLYIZIAYb9bAEDAAIwggGkMDoGCCsGAQUFBwIBFi5odHRwOi8vd3d3
LmRpZ2ljZXJ0LmNvbS9zc2wtY3BzLXJlcG9zaXRvcnkuaHRtMIIBZAYIKwYBBQUH
AgIwggFWHoIBUgBBAG4AeQAgAHUAcwBlACAAbwBmACAAdABoAGkAcwAgAEMAZQBy
AHQAaQBmAGkAYwBhAHQAZQAgAGMAbwBuAHMAdABpAHQAdQB0AGUAcwAgAGEAYwBj
AGUAcAB0AGEAbgBjAGUAIABvAGYAIAB0AGgAZQAgAEQAaQBnAGkAQwBlAHIAdAAg
AEMAUAAvAEMAUABTACAAYQBuAGQAIAB0AGgAZQAgAFIAZQBsAHkAaQBuAGcAIABQ
AGEAcgB0AHkAIABBAGcAcgBlAGUAbQBlAG4AdAAgAHcAaABpAGMAaAAgAGwAaQBt
AGkAdAAgAGwAaQBhAGIAaQBsAGkAdAB5ACAAYQBuAGQAIABhAHIAZQAgAGkAbgBj
AG8AcgBwAG8AcgBhAHQAZQBkACAAaABlAHIAZQBpAG4AIABiAHkAIAByAGUAZgBl
AHIAZQBuAGMAZQAuMBIGA1UdEwEB/wQIMAYBAf8CAQAwNAYIKwYBBQUHAQEEKDAm
MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wgY8GA1UdHwSB
hzCBhDBAoD6gPIY6aHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0SGln
aEFzc3VyYW5jZUVWUm9vdENBLmNybDBAoD6gPIY6aHR0cDovL2NybDQuZGlnaWNl
cnQuY29tL0RpZ2lDZXJ0SGlnaEFzc3VyYW5jZUVWUm9vdENBLmNybDAfBgNVHSME
GDAWgBSxPsNpA/i/RwHUmCYaCALvY2QrwzAdBgNVHQ4EFgQUUOpzidsp+xCPnuUB
INTeeZlIg/cwDQYJKoZIhvcNAQEFBQADggEBAB7ipUiebNtTOA/vphoqrOIDQ+2a
vD6OdRvw/S4iWawTwGHi5/rpmc2HCXVUKL9GYNy+USyS8xuRfDEIcOI3ucFbqL2j
CwD7GhX9A61YasXHJJlIR0YxHpLvtF9ONMeQvzHB+LGEhtCcAarfilYGzjrpDq6X
dF3XcZpCdF/ejUN83ulV7WkAywXgemFhM9EZTfkI7qA5xSU1tyvED7Ld8aW3DiTE
JiiNeXf1L/BXunwH1OH8zVowV36GEEfdMR/X/KLCvzB8XSSq6PmuX2p0ws5rs0bY
Ib4p1I5eFdZCSucyb6Sxa1GDWL4/bcf72gMhy2oWGU4K8K2Eyl2Us1p292E=
-----END CERTIFICATE-----

As you can see, this is number 1. Number 0 is the certificate for Wikipedia, which we already have. If your site has more certificates in its chain, you will see more here. Save them all, in the order OpenSSL sends them (first the one that directly issued your server certificate, then the one that issued that certificate, and so on, with the root at the end of the file), to a file named chain.pem.

You can use the following command to save all the certificates the OpenSSL command returns to a file named chain.pem. See [this article for more information](https://raymii.org/s/articles/OpenSSLGetallcertificatesfromawebsiteinplaintext.html).

OLDIFS=$IFS; IFS=':' certificates=$(openssl s_client -connect wikipedia.org:443 -showcerts -tlsextdebug -tls1 2>&1 </dev/null | sed -n '/-----BEGIN/,/-----END/ {/-----BEGIN/ s/^/:/; p}'); for certificate in ${certificates#:}; do echo $certificate | tee -a chain.pem ; done; IFS=$OLDIFS 
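The same extraction can be sketched more readably with awk's range pattern, shown here on inline sample data (the fake BEGIN/END blocks stand in for real certificates):

```shell
# awk prints every line between a BEGIN and END marker (inclusive),
# dropping the surrounding s_client chatter.
printf 'chatter\n-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\nmore chatter\n-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' \
  | awk '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/'
```

Against a live host the same filter would be fed from `openssl s_client -connect wikipedia.org:443 -showcerts 2>/dev/null </dev/null`; note that, like the one-liner above, this keeps certificate 0 (the leaf) as well as the chain.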

Combining the CRL and the Chain

The openssl command needs both the certificate chain and the CRL, concatenated together in PEM format, for the validation to work. You can omit the CRL, but then the CRL check will not happen; the certificate will only be validated against the chain.

cat chain.pem crl.pem > crl_chain.pem

OpenSSL Verify

We now have all the data we need to validate the certificate.

$ openssl verify -crl_check -CAfile crl_chain.pem wikipedia.pem 
wikipedia.pem: OK

The above shows a good certificate status.

Revoked certificate

If you have a revoked certificate, you can also test it the same way as stated above. The response looks like this:

$ openssl verify -crl_check -CAfile crl_chain.pem revoked-test.pem 
revoked-test.pem: OU = Domain Control Validated, OU = PositiveSSL, CN = xs4all.nl
error 23 at 0 depth lookup:certificate revoked

You can test this using the certificate and chain on the Verisign revoked certificate test page: https://test-sspev.verisign.com:2443/test-SSPEV-revoked-verisign.html.

OpenSSL: Manually verify a certificate against an OCSP


This article shows you how to manually verify a certificate against an OCSP server. OCSP stands for Online Certificate Status Protocol and is one way to validate a certificate's status. It is an alternative to the CRL, the Certificate Revocation List.

Compared to CRLs:

  • Since an OCSP response contains less information than a typical CRL (certificate revocation list), OCSP can use networks and client resources more efficiently.
  • Using OCSP, clients do not need to parse CRLs themselves, saving client-side complexity. However, this is balanced by the practical need to maintain a cache. In practice, such considerations are of little consequence, since most applications rely on third-party libraries for all X.509 functions.
  • OCSP discloses to the responder that a particular network host used a particular certificate at a particular time. OCSP does not mandate encryption, so other parties may intercept this information.

You can read more about OCSP on Wikipedia.

We will be using OpenSSL in this article. I’m using the following version:

$ openssl version
OpenSSL 1.0.1g 7 Apr 2014

Get a certificate with an OCSP

First we need a certificate from a website; I'll use Wikipedia as an example here. We can retrieve it with the following openssl command:

openssl s_client -connect wikipedia.org:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p'

Save this output to a file, for example, wikipedia.pem:

openssl s_client -connect wikipedia.org:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p' > wikipedia.pem

Now, check if this certificate has an OCSP URI:

openssl x509 -noout -ocsp_uri -in wikipedia.pem
http://ocsp.digicert.com

If this gives no output, the certificate has no OCSP URI and you cannot validate it against an OCSP responder.

Getting the certificate chain

It is required to send the certificate chain along with the certificate you want to validate. So, we need to get the certificate chain for our domain, wikipedia.org. Using the -showcerts option with openssl s_client, we can see all the certificates, including the chain:

openssl s_client -connect wikipedia.org:443 -showcerts 2>&1 < /dev/null

Results in a boatload of output, but what we are interested in is the following:

 1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
-----BEGIN CERTIFICATE-----
MIIGWDCCBUCgAwIBAgIQCl8RTQNbF5EX0u/UA4w/OzANBgkqhkiG9w0BAQUFADBs
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j
ZSBFViBSb290IENBMB4XDTA4MDQwMjEyMDAwMFoXDTIyMDQwMzAwMDAwMFowZjEL
MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3
LmRpZ2ljZXJ0LmNvbTElMCMGA1UEAxMcRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug
Q0EtMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9hCikQH17+NDdR
CPge+yLtYb4LDXBMUGMmdRW5QYiXtvCgFbsIYOBC6AUpEIc2iihlqO8xB3RtNpcv
KEZmBMcqeSZ6mdWOw21PoF6tvD2Rwll7XjZswFPPAAgyPhBkWBATaccM7pxCUQD5
BUTuJM56H+2MEb0SqPMV9Bx6MWkBG6fmXcCabH4JnudSREoQOiPkm7YDr6ictFuf
1EutkozOtREqqjcYjbTCuNhcBoz4/yO9NV7UfD5+gw6RlgWYw7If48hl66l7XaAs
zPw82W3tzPpLQ4zJ1LilYRyyQLYoEt+5+F/+07LJ7z20Hkt8HEyZNp496+ynaF4d
32duXvsCAwEAAaOCAvowggL2MA4GA1UdDwEB/wQEAwIBhjCCAcYGA1UdIASCAb0w
ggG5MIIBtQYLYIZIAYb9bAEDAAIwggGkMDoGCCsGAQUFBwIBFi5odHRwOi8vd3d3
LmRpZ2ljZXJ0LmNvbS9zc2wtY3BzLXJlcG9zaXRvcnkuaHRtMIIBZAYIKwYBBQUH
AgIwggFWHoIBUgBBAG4AeQAgAHUAcwBlACAAbwBmACAAdABoAGkAcwAgAEMAZQBy
AHQAaQBmAGkAYwBhAHQAZQAgAGMAbwBuAHMAdABpAHQAdQB0AGUAcwAgAGEAYwBj
AGUAcAB0AGEAbgBjAGUAIABvAGYAIAB0AGgAZQAgAEQAaQBnAGkAQwBlAHIAdAAg
AEMAUAAvAEMAUABTACAAYQBuAGQAIAB0AGgAZQAgAFIAZQBsAHkAaQBuAGcAIABQ
AGEAcgB0AHkAIABBAGcAcgBlAGUAbQBlAG4AdAAgAHcAaABpAGMAaAAgAGwAaQBt
AGkAdAAgAGwAaQBhAGIAaQBsAGkAdAB5ACAAYQBuAGQAIABhAHIAZQAgAGkAbgBj
AG8AcgBwAG8AcgBhAHQAZQBkACAAaABlAHIAZQBpAG4AIABiAHkAIAByAGUAZgBl
AHIAZQBuAGMAZQAuMBIGA1UdEwEB/wQIMAYBAf8CAQAwNAYIKwYBBQUHAQEEKDAm
MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wgY8GA1UdHwSB
hzCBhDBAoD6gPIY6aHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0SGln
aEFzc3VyYW5jZUVWUm9vdENBLmNybDBAoD6gPIY6aHR0cDovL2NybDQuZGlnaWNl
cnQuY29tL0RpZ2lDZXJ0SGlnaEFzc3VyYW5jZUVWUm9vdENBLmNybDAfBgNVHSME
GDAWgBSxPsNpA/i/RwHUmCYaCALvY2QrwzAdBgNVHQ4EFgQUUOpzidsp+xCPnuUB
INTeeZlIg/cwDQYJKoZIhvcNAQEFBQADggEBAB7ipUiebNtTOA/vphoqrOIDQ+2a
vD6OdRvw/S4iWawTwGHi5/rpmc2HCXVUKL9GYNy+USyS8xuRfDEIcOI3ucFbqL2j
CwD7GhX9A61YasXHJJlIR0YxHpLvtF9ONMeQvzHB+LGEhtCcAarfilYGzjrpDq6X
dF3XcZpCdF/ejUN83ulV7WkAywXgemFhM9EZTfkI7qA5xSU1tyvED7Ld8aW3DiTE
JiiNeXf1L/BXunwH1OH8zVowV36GEEfdMR/X/KLCvzB8XSSq6PmuX2p0ws5rs0bY
Ib4p1I5eFdZCSucyb6Sxa1GDWL4/bcf72gMhy2oWGU4K8K2Eyl2Us1p292E=
-----END CERTIFICATE-----

As you can see, this is number 1. Number 0 is the certificate for Wikipedia, which we already have. If your site has more certificates in its chain, you will see more here. Save them all, in the order OpenSSL sends them (first the one that directly issued your server certificate, then the one that issued that certificate, and so on, with the root at the end of the file), to a file named chain.pem.

Sending the OCSP request

We now have all the data we need to do an OCSP request. Using the following Openssl command we can send an OCSP request and only get the text output:

openssl ocsp -issuer chain.pem -cert wikipedia.pem -text -url http://ocsp.digicert.com

Results in:

OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: ED48ADDDCB7B00E20E842AA9B409F1AC3034CF96
          Issuer Key Hash: 50EA7389DB29FB108F9EE50120D4DE79994883F7
          Serial Number: 0114195F66FAFF8FD66E12496E516F4F
    Request Extensions:
        OCSP Nonce:
            0410DA634F2ADC31DC48AE89BE64E8252D12
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: 50EA7389DB29FB108F9EE50120D4DE79994883F7
    Produced At: Apr  9 08:45:00 2014 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: ED48ADDDCB7B00E20E842AA9B409F1AC3034CF96
      Issuer Key Hash: 50EA7389DB29FB108F9EE50120D4DE79994883F7
      Serial Number: 0114195F66FAFF8FD66E12496E516F4F
    Cert Status: good
    This Update: Apr  9 08:45:00 2014 GMT
    Next Update: Apr 16 09:00:00 2014 GMT

    Signature Algorithm: sha1WithRSAEncryption
         56:21:4c:dc:84:21:f7:a8:ac:a7:b9:bc:10:19:f8:19:f1:34:
         c1:63:ca:14:7f:8f:5a:85:2a:cc:02:b0:f8:b5:05:4a:0f:28:
         50:2a:4a:4d:04:01:b5:05:ef:a5:88:41:d8:9d:38:00:7d:76:
         1a:aa:ff:21:50:68:90:d2:0c:93:85:49:e7:8e:f1:58:08:77:
         a0:4e:e2:22:98:01:b7:e3:27:75:11:f5:b7:8f:e0:75:7d:19:
         9b:74:cf:05:dc:ae:1c:36:09:95:b6:08:bc:e7:3f:ea:a2:e3:
         ae:d7:8f:c0:9d:8e:c2:37:67:c7:5b:d8:b0:67:23:f1:51:53:
         26:c2:96:b0:1a:df:4e:fb:4e:e3:da:a3:98:26:59:a8:d7:17:
         69:87:a3:68:47:08:92:d0:37:04:6b:49:9a:96:9d:9c:b1:e8:
         cb:dc:68:7b:4a:4d:cb:08:f7:92:67:41:99:b6:54:56:80:0c:
         18:a7:24:53:ac:c6:da:1f:4d:f4:3c:7d:68:44:1d:a4:df:1d:
         48:07:85:52:86:59:46:d1:35:45:1a:c7:6b:6b:92:de:24:ae:
         c0:97:66:54:29:7a:c6:86:a6:da:9f:06:24:dc:ac:80:66:95:
         e0:eb:49:fd:fb:d4:81:6a:2b:81:41:57:24:78:3b:e0:66:70:
         d4:2e:52:92
wikipedia.pem: good
    This Update: Apr  9 08:45:00 2014 GMT
    Next Update: Apr 16 09:00:00 2014 GMT

If you want more summarized output, leave out the -text option. I usually include it to diagnose problems with an OCSP responder.

This is how a good certificate status looks:

openssl ocsp -issuer chain.pem -cert wikipedia.pem -url http://ocsp.digicert.com
wikipedia.pem: good
    This Update: Apr  9 08:45:00 2014 GMT
    Next Update: Apr 16 09:00:00 2014 GMT

Revoked certificate

If you have a revoked certificate, you can also test it the same way as stated above. The response looks like this:

Response verify OK
test-revoked.pem: revoked
    This Update: Apr  9 03:02:45 2014 GMT
    Next Update: Apr 10 03:02:45 2014 GMT
    Revocation Time: Mar 25 15:45:55 2014 GMT

You can test this using the certificate and chain on the Verisign revoked certificate test page: https://test-sspev.verisign.com:2443/test-SSPEV-revoked-verisign.html

Other errors

If we send this request to another OCSP responder, one that did not issue this certificate, we should receive an unauthorized error:

openssl ocsp -issuer chain.pem -cert wikipedia.pem -url http://rapidssl-ocsp.geotrust.com
Responder Error: unauthorized (6)

The -text option here shows more information:

OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: ED48ADDDCB7B00E20E842AA9B409F1AC3034CF96
          Issuer Key Hash: 50EA7389DB29FB108F9EE50120D4DE79994883F7
          Serial Number: 0114195F66FAFF8FD66E12496E516F4F
    Request Extensions:
        OCSP Nonce:
            041015BB718C43C46C41122E841DB2282ECE
Responder Error: unauthorized (6)

Some OCSP responders are configured differently and give this error:

openssl ocsp -issuer chain.pem -cert wikipedia.pem -url http://ocsp.digidentity.eu/L4/services/ocsp
Response Verify Failure
140735308649312:error:2706B06F:OCSP routines:OCSP_CHECK_IDS:response contains no revocation data:ocsp_vfy.c:269:
140735308649312:error:2706B06F:OCSP routines:OCSP_CHECK_IDS:response contains no revocation data:ocsp_vfy.c:269:
wikipedia.pem: ERROR: No Status found.

If we include the -text option here, we can see that a response is sent, but that it contains no data:

OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = NL, O = Digidentity B.V., CN = Digidentity OCSP
    Produced At: Apr  9 12:02:00 2014 GMT
    Responses:
    Response Extensions:
OCSP Nonce:
    0410EB540472EA2D8246E88F3317B014BEEF
Signature Algorithm: sha256WithRSAEncryption

Other OCSP responders return the "unknown" status:

openssl ocsp -issuer chain.pem -cert wikipedia.pem  -url http://ocsp.quovadisglobal.com/
Response Verify Failure
140735308649312:error:27069070:OCSP routines:OCSP_basic_verify:root ca not trusted:ocsp_vfy.c:152:
wikipedia.pem: unknown
    This Update: Apr  9 12:09:18 2014 GMT

The -text option shows us more:

OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = CH, O = QuoVadis Limited, OU = OCSP Responder, CN = QuoVadis OCSP Authority Signature
    Produced At: Apr  9 12:09:10 2014 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: ED48ADDDCB7B00E20E842AA9B409F1AC3034CF96
      Issuer Key Hash: 50EA7389DB29FB108F9EE50120D4DE79994883F7
      Serial Number: 0114195F66FAFF8FD66E12496E516F4F
    Cert Status: unknown
    This Update: Apr  9 12:09:10 2014 GMT

    Response Extensions:

OpenSSL command line Root and Intermediate CA including OCSP, CRL and revocation


These are quick and dirty notes on generating a certificate authority (CA), intermediate certificate authorities and end certificates using OpenSSL. It includes OCSP, CRL and CA Issuer information and specific issue and expiry dates.

We’ll set up our own root CA. We’ll use the root CA to generate an example intermediate CA. We’ll use the intermediate CA to sign end user certificates.

Root CA

Create and move in to a folder for the root ca:

mkdir ~/SSLCA/root/
cd ~/SSLCA/root/

Generate an 8192-bit RSA key for our root CA:

openssl genrsa -aes256 -out rootca.key 8192

Example output:

Generating RSA private key, 8192 bit long modulus
.........++
....................................................................................................................++
e is 65537 (0x10001)

The -aes256 option password-protects the key; leave it out if you don't want a passphrase on the key.

Create the self-signed root CA certificate rootca.crt; you'll need to provide an identity for your root CA:

openssl req -sha256 -new -x509 -days 1826 -key rootca.key -out rootca.crt

Example output:

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Zuid Holland
Locality Name (eg, city) []:Rotterdam
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Sparkling Network
Organizational Unit Name (eg, section) []:Sparkling CA
Common Name (e.g. server FQDN or YOUR name) []:Sparkling Root CA
Email Address []:
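A quick sanity check on the new root: for a self-signed certificate, subject and issuer are identical. This is a sketch using a throwaway certificate generated non-interactively (substitute rootca.crt to check the real one; the demo filenames are placeholders):

```shell
# Generate a demo self-signed cert without prompts, then confirm
# subject == issuer, the defining property of a self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-root.key -out demo-root.crt \
  -days 1 -subj "/CN=Demo Root CA" 2>/dev/null
subject=$(openssl x509 -noout -subject -in demo-root.crt)
issuer=$(openssl x509 -noout -issuer -in demo-root.crt)
[ "${subject#subject=}" = "${issuer#issuer=}" ] && echo "self-signed: OK"
# → self-signed: OK
```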

Create a few files where the CA will store its serials:

touch certindex
echo 1000 > certserial
echo 1000 > crlnumber

Create the CA config file. This file has stubs for the CRL and OCSP endpoints.

# vim ca.conf
[ ca ]
default_ca = myca

[ crl_ext ]
issuerAltName=issuer:copy 
authorityKeyIdentifier=keyid:always

 [ myca ]
 dir = ./
 new_certs_dir = $dir
 unique_subject = no
 certificate = $dir/rootca.crt
 database = $dir/certindex
 private_key = $dir/rootca.key
 serial = $dir/certserial
 default_days = 730
 default_md = sha1
 policy = myca_policy
 x509_extensions = myca_extensions
 crlnumber = $dir/crlnumber
 default_crl_days = 730

 [ myca_policy ]
 commonName = supplied
 stateOrProvinceName = supplied
 countryName = optional
 emailAddress = optional
 organizationName = supplied
 organizationalUnitName = optional

 [ myca_extensions ]
 basicConstraints = critical,CA:TRUE
 keyUsage = critical,any
 subjectKeyIdentifier = hash
 authorityKeyIdentifier = keyid:always,issuer
 keyUsage = digitalSignature,keyEncipherment,cRLSign,keyCertSign
 extendedKeyUsage = serverAuth
 crlDistributionPoints = @crl_section
 subjectAltName  = @alt_names
 authorityInfoAccess = @ocsp_section

 [ v3_ca ]
 basicConstraints = critical,CA:TRUE,pathlen:0
 keyUsage = critical,any
 subjectKeyIdentifier = hash
 authorityKeyIdentifier = keyid:always,issuer
 keyUsage = digitalSignature,keyEncipherment,cRLSign,keyCertSign
 extendedKeyUsage = serverAuth
 crlDistributionPoints = @crl_section
 subjectAltName  = @alt_names
 authorityInfoAccess = @ocsp_section

 [alt_names]
 DNS.0 = Sparkling Intermidiate CA 1
 DNS.1 = Sparkling CA Intermidiate 1

 [crl_section]
 URI.0 = http://pki.sparklingca.com/SparklingRoot.crl
 URI.1 = http://pki.backup.com/SparklingRoot.crl

 [ocsp_section]
 caIssuers;URI.0 = http://pki.sparklingca.com/SparklingRoot.crt
 caIssuers;URI.1 = http://pki.backup.com/SparklingRoot.crt
 OCSP;URI.0 = http://pki.sparklingca.com/ocsp/
 OCSP;URI.1 = http://pki.backup.com/ocsp/

If you need to set a specific certificate start / expiry date, add the following to [myca]

# format: YYYYMMDDHHMMSS
default_enddate = 20191222035911
default_startdate = 20181222035911
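The timestamps are plain YYYYMMDDHHMMSS strings; with GNU date (an assumption — BSD date uses different flags) you can generate them instead of typing them by hand:

```shell
# Print "now" and "now + 1 year" in the format used by
# default_startdate / default_enddate above.
start=$(date +%Y%m%d%H%M%S)
end=$(date -d '+1 year' +%Y%m%d%H%M%S)
echo "default_startdate = $start"
echo "default_enddate = $end"
```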

Creating Intermediate 1 CA

Generate the intermediate CA’s private key:

openssl genrsa -out intermediate1.key 4096

Generate the intermediate1 CA’s CSR:

openssl req -new -sha256 -key intermediate1.key -out intermediate1.csr

Example output:

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Zuid Holland
Locality Name (eg, city) []:Rotterdam
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Sparkling Network
Organizational Unit Name (eg, section) []:Sparkling CA
Common Name (e.g. server FQDN or YOUR name) []:Sparkling Intermediate CA
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Make sure the subject (CN) of the intermediate is different from the root.

Sign the intermediate1 CSR with the Root CA:

openssl ca -batch -config ca.conf -notext -in intermediate1.csr -out intermediate1.crt

Example Output:

Using configuration from ca.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName           :PRINTABLE:'NL'
stateOrProvinceName   :ASN.1 12:'Zuid Holland'
localityName          :ASN.1 12:'Rotterdam'
organizationName      :ASN.1 12:'Sparkling Network'
organizationalUnitName:ASN.1 12:'Sparkling CA'
commonName            :ASN.1 12:'Sparkling Intermediate CA'
Certificate is to be certified until Mar 30 15:07:43 2017 GMT (730 days)

Write out database with 1 new entries
Data Base Updated

Generate the CRL (both in PEM and DER):

openssl ca -config ca.conf -gencrl -keyfile rootca.key -cert rootca.crt -out rootca.crl.pem

openssl crl -inform PEM -in rootca.crl.pem -outform DER -out rootca.crl

Generate the CRL after every certificate you sign with the CA.

If you ever need to revoke this intermediate certificate:

openssl ca -config ca.conf -revoke intermediate1.crt -keyfile rootca.key -cert rootca.crt

Configuring the Intermediate CA 1

Create a new folder for this intermediate and move in to it:

mkdir ~/SSLCA/intermediate1/
cd ~/SSLCA/intermediate1/

Copy the Intermediate cert and key from the Root CA:

cp ~/SSLCA/root/intermediate1.key ./
cp ~/SSLCA/root/intermediate1.crt ./

Create the index files:

touch certindex
echo 1000 > certserial
echo 1000 > crlnumber

Create a new ca.conf file:

# vim ca.conf
[ ca ]
default_ca = myca

[ crl_ext ]
issuerAltName=issuer:copy 
authorityKeyIdentifier=keyid:always

 [ myca ]
 dir = ./
 new_certs_dir = $dir
 unique_subject = no
 certificate = $dir/intermediate1.crt
 database = $dir/certindex
 private_key = $dir/intermediate1.key
 serial = $dir/certserial
 default_days = 365
 default_md = sha1
 policy = myca_policy
 x509_extensions = myca_extensions
 crlnumber = $dir/crlnumber
 default_crl_days = 365

 [ myca_policy ]
 commonName = supplied
 stateOrProvinceName = supplied
 countryName = optional
 emailAddress = optional
 organizationName = supplied
 organizationalUnitName = optional

 [ myca_extensions ]
 basicConstraints = critical,CA:FALSE
 keyUsage = critical,any
 subjectKeyIdentifier = hash
 authorityKeyIdentifier = keyid:always,issuer
 keyUsage = digitalSignature,keyEncipherment
 extendedKeyUsage = serverAuth
 crlDistributionPoints = @crl_section
 subjectAltName  = @alt_names
 authorityInfoAccess = @ocsp_section

 [alt_names]
 DNS.0 = example.com
 DNS.1 = example.org

 [crl_section]
 URI.0 = http://pki.sparklingca.com/SparklingIntermidiate1.crl
 URI.1 = http://pki.backup.com/SparklingIntermidiate1.crl

 [ocsp_section]
 caIssuers;URI.0 = http://pki.sparklingca.com/SparklingIntermediate1.crt
 caIssuers;URI.1 = http://pki.backup.com/SparklingIntermediate1.crt
 OCSP;URI.0 = http://pki.sparklingca.com/ocsp/
 OCSP;URI.1 = http://pki.backup.com/ocsp/

Change the [alt_names] section to whatever you need as Subject Alternative Names. Remove it, including the subjectAltName = @alt_names line, if you don't want a Subject Alternative Name.

If you need to set a specific certificate start / expiry date, add the following to [myca]

# format: YYYYMMDDHHMMSS
default_enddate = 20191222035911
default_startdate = 20181222035911

Generate an empty CRL (both in PEM and DER):

openssl ca -config ca.conf -gencrl -keyfile intermediate1.key -cert intermediate1.crt -out intermediate1.crl.pem

openssl crl -inform PEM -in intermediate1.crl.pem -outform DER -out intermediate1.crl

Creating end user certificates

We use this new intermediate CA to generate an end user certificate. Repeat these steps for every end user certificate you want to sign with this CA.

mkdir enduser-certs

Generate the end user’s private key:

openssl genrsa -out enduser-certs/enduser-example.com.key 4096

Generate the end user’s CSR:

openssl req -new -sha256 -key enduser-certs/enduser-example.com.key -out enduser-certs/enduser-example.com.csr
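If you script certificate issuance, the interactive DN prompts get in the way. A minimal non-interactive sketch using -subj (the paths and DN values below are illustrative placeholders, not files from the tutorial):

```shell
# Generate a key and CSR without prompts by passing the DN via -subj.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/example.key" 2048 2>/dev/null
openssl req -new -sha256 -key "$tmp/example.key" \
    -subj "/C=NL/ST=Noord Holland/L=Amsterdam/O=Example Inc/OU=IT Dept/CN=example.com" \
    -out "$tmp/example.csr"
# Inspect the resulting subject to confirm the DN came through:
openssl req -in "$tmp/example.csr" -noout -subject
```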

Example output:

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Noord Holland
Locality Name (eg, city) []:Amsterdam
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Example Inc
Organizational Unit Name (eg, section) []:IT Dept
Common Name (e.g. server FQDN or YOUR name) []:example.com
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Sign the end user’s CSR with the Intermediate 1 CA:

openssl ca -batch -config ca.conf -notext -in enduser-certs/enduser-example.com.csr -out enduser-certs/enduser-example.com.crt

Example output:

Using configuration from ca.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName           :PRINTABLE:'NL'
stateOrProvinceName   :ASN.1 12:'Noord Holland'
localityName          :ASN.1 12:'Amsterdam'
organizationName      :ASN.1 12:'Example Inc'
organizationalUnitName:ASN.1 12:'IT Dept'
commonName            :ASN.1 12:'example.com'
Certificate is to be certified until Mar 30 15:18:26 2016 GMT (365 days)

Write out database with 1 new entries
Data Base Updated

Generate the CRL (both in PEM and DER):

openssl ca -config ca.conf -gencrl -keyfile intermediate1.key -cert intermediate1.crt -out intermediate1.crl.pem

openssl crl -inform PEM -in intermediate1.crl.pem -outform DER -out intermediate1.crl

Generate the CRL after every certificate you sign or revoke with the CA.

If you ever need to revoke this end user’s certificate:

openssl ca -config ca.conf -revoke enduser-certs/enduser-example.com.crt -keyfile intermediate1.key -cert intermediate1.crt

Example output:

Using configuration from ca.conf
Revoking Certificate 1000.
Data Base Updated

Create the certificate chain file by concatenating the Root and intermediate 1 certificates together.

cat ../root/rootca.crt intermediate1.crt > enduser-certs/enduser-example.com.chain

Send the following files to the end user:

enduser-example.com.crt
enduser-example.com.key
enduser-example.com.chain

You can also let the end user supply their own CSR and only send them the .crt file. Do not delete the certificate from the server; without it you cannot revoke it later.

Validating the certificate

You can validate the end user certificate against the chain using the following command:

openssl verify -CAfile enduser-certs/enduser-example.com.chain enduser-certs/enduser-example.com.crt 
enduser-certs/enduser-example.com.crt: OK
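As a self-contained illustration of what openssl verify actually checks, the sketch below creates a throwaway root and a leaf signed directly by it (all names are placeholders; in the tutorial you verify against the real rootca.crt/intermediate chain instead):

```shell
d=$(mktemp -d)
# Throwaway self-signed root (placeholder name, 1-day validity)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$d/root.key" \
    -out "$d/root.crt" -days 1 -subj "/CN=Demo Root" 2>/dev/null
# Leaf key + CSR, signed directly by the throwaway root
openssl req -newkey rsa:2048 -nodes -keyout "$d/leaf.key" \
    -out "$d/leaf.csr" -subj "/CN=leaf.example.com" 2>/dev/null
openssl x509 -req -in "$d/leaf.csr" -CA "$d/root.crt" -CAkey "$d/root.key" \
    -CAcreateserial -days 1 -out "$d/leaf.crt" 2>/dev/null
# Prints "<path>: OK" only if the signature chain checks out
openssl verify -CAfile "$d/root.crt" "$d/leaf.crt"
```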

You can also validate it against the CRL. Concatenate the PEM CRL and the chain together first:

cat ../root/rootca.crt intermediate1.crt intermediate1.crl.pem > enduser-certs/enduser-example.com.crl.chain

Verify the certificate:

openssl verify -crl_check -CAfile enduser-certs/enduser-example.com.crl.chain enduser-certs/enduser-example.com.crt

Output when not revoked:

enduser-certs/enduser-example.com.crt: OK

Output when revoked:

enduser-certs/enduser-example.com.crt: CN = example.com, ST = Noord Holland, C = NL, O = Example Inc, OU = IT Dept
error 23 at 0 depth lookup:certificate revoked

Strong SSL Security on Nginx


This tutorial shows you how to set up strong SSL security on the nginx web server. We do this by disabling SSL compression to mitigate the CRIME attack, disabling SSLv3 and below because of vulnerabilities in the protocol, and setting up a strong cipher suite that enables Forward Secrecy when possible. We also enable HSTS and HPKP. This gives us a strong and future-proof SSL configuration and an A on the Qualys SSL Labs test.

TL;DR: Copy-pastable strong cipher suites for NGINX, Apache and Lighttpd: https://cipherli.st

This tutorial is tested on a Digital Ocean VPS. If you like this tutorial and want to support my website, use this link to order a Digital Ocean VPS: https://www.digitalocean.com

This tutorial works with the stricter requirements of the SSL Labs test announced on the 21st of January 2014 (it already did before that; if you follow(ed) it, you get an A+).

This tutorial is also available for Apache
This tutorial is also available for Lighttpd
This tutorial is also available for FreeBSD, NetBSD and OpenBSD over at the BSD Now podcast: http://www.bsdnow.tv/tutorials/nginx

You can find more info on the topics by following the links below:

We are going to edit the nginx settings in the file /etc/nginx/sites-enabled/yoursite.com (on Ubuntu/Debian) or in /etc/nginx/conf.d/nginx.conf (on RHEL/CentOS).

For the entire tutorial, you need to edit the parts inside the server block for the port 443 (SSL) configuration. At the end of the tutorial you can find the complete config example.

Make sure you back up the files before editing them!

The BEAST attack and RC4

In short, by tampering with an encryption algorithm’s CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted. More info on the above link.

Recent browser versions have enabled client-side mitigation for the BEAST attack. The recommendation used to be to disable all TLS 1.0 ciphers and only offer RC4. However, RC4 has a growing list of attacks against it (http://www.isg.rhul.ac.uk/tls/), many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”

Disabling RC4 has several ramifications. One, users with older browsers such as Internet Explorer on Windows XP will fall back to 3DES instead. Triple-DES is more secure than RC4, but it is significantly more expensive. Your server will pay the cost for these users. Two, RC4 mitigates BEAST. Thus, disabling RC4 makes TLS 1.0 users susceptible to that attack, by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a nonissue. But the risk from RC4 only grows: more cryptanalysis will surface over time.

Factoring RSA-EXPORT Keys (FREAK)

FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”

The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas, unless it used export cipher suites which involved encryption keys no longer than 512-bits.

It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.

There are two parts of the attack as the server must also accept “export grade RSA.”

The MITM attack works as follows:

  • In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
  • The MITM attacker changes this message to ask for ‘export RSA’.
  • The server responds with a 512-bit export RSA key, signed with its long-term key.
  • The client accepts this weak key due to the OpenSSL/SecureTransport bug.
  • The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
  • When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
  • From here on out, the attacker sees plaintext and can inject anything it wants.

The ciphersuite offered here on this page does not enable EXPORT grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to also use upgraded software.
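You can also check whether your local OpenSSL build still ships any EXPORT-grade ciphers. On a current build the list should be empty (the command typically fails with a "no cipher match" style error), which is exactly what you want:

```shell
# List EXPORT-grade ciphers known to the local OpenSSL build,
# one per line. On patched/modern builds this prints nothing.
openssl ciphers 'EXPORT' 2>/dev/null | tr ':' '\n' || true
```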

Heartbleed

Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (due to a missing bounds check) in the implementation of the TLS/DTLS heartbeat extension (RFC 6520); hence the bug’s name derives from “heartbeat”. The vulnerability is classified as a buffer over-read, a situation where more data can be read than should be allowed.

What versions of the OpenSSL are affected by Heartbleed?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. OpenSSL 1.0.1g released on 7th of April 2014 fixes the bug.

By updating OpenSSL to 1.0.1g or later you are no longer vulnerable to this bug.
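A quick way to check which build you are running against the list above (note that distros often backport the fix while keeping the old version string, so also check your package changelog):

```shell
# Print the local OpenSSL version string
openssl version
```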

SSL Compression (CRIME attack)

The CRIME attack uses SSL Compression to do its magic. SSL compression is turned off by default in nginx 1.1.6+/1.0.9+ (if OpenSSL 1.0.0+ used) and nginx 1.3.2+/1.2.2+ (if older versions of OpenSSL are used).

If you are using an earlier version of nginx or OpenSSL and your distro has not backported this option, then you need to recompile OpenSSL without ZLIB support. This disables OpenSSL’s use of the DEFLATE compression method. If you do this, you can still use regular HTTP-level DEFLATE/gzip compression.

SSLv2 and SSLv3

SSLv2 is insecure, so we need to disable it. We also disable SSLv3, as an attacker can force a downgrade of the protocol negotiation from TLS 1.0 to SSLv3, thereby disabling forward secrecy.

Again edit the config file:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Poodle and TLS-FALLBACK-SCSV

SSLv3 allows exploiting of the POODLE bug, which is one more major reason to disable it.

Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to the following versions:

  • OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
  • OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
  • OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.

More info on the NGINX documentation

The Cipher Suite

Forward Secrecy ensures the integrity of a session key in the event that a long-term key is compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.

This means that when the private key gets compromised it cannot be used to decrypt recorded SSL traffic.

The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.

Of the following two cipher suites, I recommend the first; the second, backwards-compatible one is recommended by the Mozilla Foundation.

The recommended cipher suite:

ssl_ciphers 'AES128+EECDH:AES128+EDH';

The recommended cipher suite for backwards compatibility (IE6/WinXP):

ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.
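To see which concrete ciphers your local OpenSSL expands a suite string into (the output varies per OpenSSL build, which is exactly the point of letting it pick):

```shell
# Expand the short suite string into the concrete ciphers this
# OpenSSL build supports, one per line with protocol, key exchange
# and encryption details
openssl ciphers -v 'AES128+EECDH:AES128+EDH'
```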

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.

Prioritization logic

  • ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers and not widely supported at the moment. No known attacks currently target these ciphers.
  • PFS ciphersuites are preferred, with ECDHE first, then DHE.
  • AES 128 is preferred to AES 256. There have been discussions on whether the extra security of AES256 is worth the cost, and the result is far from obvious. At the moment, AES128 is preferred, because it provides good security, is really fast, and seems to be more resistant to timing attacks.
  • In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
  • RC4 is removed entirely. 3DES is used for backward compatibility. See discussion in #RC4_weaknesses

Mandatory discards

  • aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
  • eNULL contains null-encryption ciphers (cleartext)
  • EXPORT are legacy weak ciphers that were marked as exportable by US law
  • RC4 contains ciphers that use the deprecated ARCFOUR algorithm
  • DES contains ciphers that use the deprecated Data Encryption Standard
  • SSLv2 contains all ciphers that were defined in the old version of the SSL standard, now deprecated
  • MD5 contains all the ciphers that use the deprecated message digest 5 as the hashing algorithm

Extra settings

Make sure you also add these lines:

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;

When choosing a cipher during an SSLv3 or TLSv1 handshake, normally the client’s preference is used. If this directive is enabled, the server’s preference will be used instead.

More info on ssl_prefer_server_ciphers
More info on ssl_ciphers

Forward Secrecy & Diffie Hellman Ephemeral Parameters

The concept of forward secrecy is simple: client and server negotiate a key that never hits the wire and is destroyed at the end of the session. The server’s RSA private key is used to sign a Diffie-Hellman key exchange between the client and the server. The pre-master key obtained from the Diffie-Hellman handshake is then used for encryption. Since the pre-master key is specific to a connection between a client and a server, and used only for a limited amount of time, it is called ephemeral.

With Forward Secrecy, if an attacker gets a hold of the server’s private key, it will not be able to decrypt past communications. The private key is only used to sign the DH handshake, which does not reveal the pre-master key. Diffie-Hellman ensures that the pre-master keys never leave the client and the server, and cannot be intercepted by a MITM.

All versions of nginx as of 1.4.4 rely on OpenSSL for input parameters to Diffie-Hellman (DH). Unfortunately, this means that Ephemeral Diffie-Hellman (DHE) will use OpenSSL’s defaults, which include a 1024-bit key for the key-exchange. Since we’re using a 2048-bit certificate, DHE clients will use a weaker key-exchange than non-ephemeral DH clients.

We need to generate a stronger DHE parameter:

cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096

And then tell nginx to use it for DHE key-exchange:

ssl_dhparam /etc/ssl/certs/dhparam.pem;
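Generating 4096-bit parameters can take many minutes. As a quick sanity check of the file format, the sketch below uses 512 bits purely so the demo runs fast; never deploy a group that small, use 4096 for the real file as above:

```shell
d=$(mktemp -d)
# 512-bit ONLY to keep this demo fast -- never deploy such a small group
openssl dhparam -out "$d/dhdemo.pem" 512 2>/dev/null
# Confirm the parameter size that nginx would load
openssl dhparam -in "$d/dhdemo.pem" -noout -text | head -n 1
```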

OCSP Stapling

When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL) or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take forever to download.

OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a 3rd party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers will fail silently if no response is received in a timely manner. This reduces security, by allowing an attacker to DoS an OCSP responder to disable the validation.

The solution is to allow the server to send its cached OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.

The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its CLIENT HELLO.

Most servers will cache OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.

View my tutorial on enabling OCSP stapling on NGINX

HTTP Strict Transport Security

When possible, you should enable HTTP Strict Transport Security (HSTS), which instructs browsers to communicate with your site only over HTTPS.

View my article on HSTS to see how to configure it.

HTTP Public Key Pinning Extension

You should also enable the HTTP Public Key Pinning Extension.

Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.

I’ve written an article about it that has background theory and configuration examples for Apache, Lighttpd and NGINX
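For reference, the pin value in the Public-Key-Pins header is the base64-encoded SHA-256 hash of the certificate’s DER-encoded public key (SPKI). A sketch using a throwaway self-signed certificate (all paths, names and the max-age value are placeholders):

```shell
d=$(mktemp -d)
# Throwaway self-signed cert just to have a public key to pin
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$d/k.pem" \
    -out "$d/c.pem" -days 1 -subj "/CN=pin-demo" 2>/dev/null
# pin-sha256 = base64( sha256( DER(SubjectPublicKeyInfo) ) )
pin=$(openssl x509 -in "$d/c.pem" -pubkey -noout \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -binary | base64)
echo "Public-Key-Pins: pin-sha256=\"$pin\"; max-age=5184000"
```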

Config Example

server {

  listen [::]:443 default_server;

  ssl on;
  ssl_certificate_key /etc/ssl/cert/raymii_org.pem;
  ssl_certificate /etc/ssl/cert/ca-bundle.pem;

  ssl_ciphers 'AES128+EECDH:AES128+EDH:!aNULL';

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_session_cache shared:SSL:10m;

  ssl_stapling on;
  ssl_stapling_verify on;
  resolver 8.8.4.4 8.8.8.8 valid=300s;
  resolver_timeout 10s;

  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  add_header Strict-Transport-Security max-age=63072000;
  add_header X-Frame-Options DENY;
  add_header X-Content-Type-Options nosniff;

  root /var/www/;
  index index.html index.htm;
  server_name raymii.org;

}

Conclusion

If you have applied the above config lines you need to restart nginx:

# Check the config first:
/etc/init.d/nginx configtest
# Then restart:
/etc/init.d/nginx restart

Now use the SSL Labs test to see if you get a nice A. And, of course, have a safe, strong and future-proof SSL configuration!

Strong SSL Security on Apache2


This tutorial shows you how to set up strong SSL security on the Apache2 web server. We do this by disabling SSL compression to mitigate the CRIME attack, disabling SSLv3 and below because of vulnerabilities in the protocol, and setting up a cipher suite that enables Forward Secrecy when possible. We also set up HSTS and HPKP. This gives us a strong and future-proof SSL configuration and an A on the Qualys SSL Labs test.

TL;DR: Copy-pastable strong cipher suites for NGINX, Apache and Lighttpd: https://cipherli.st

This tutorial is tested on a Digital Ocean VPS. If you like this tutorial and want to support my website, use this link to order a Digital Ocean VPS: https://www.digitalocean.com/

This tutorial works with the strict requirements of the SSL Labs test

This tutorial is also available for NGINX
This tutorial is also available for Lighttpd
This tutorial is also available for FreeBSD, NetBSD and OpenBSD over at the BSD Now podcast: http://www.bsdnow.tv/tutorials/nginx

You can find more info on the topics by following the links below:

Make sure you back up the files before editing them!

The BEAST attack and RC4

In short, by tampering with an encryption algorithm’s CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted. More info on the above link.

Recent browser versions have enabled client-side mitigation for the BEAST attack. The recommendation used to be to disable all TLS 1.0 ciphers and only offer RC4. However, RC4 has a growing list of attacks against it (http://www.isg.rhul.ac.uk/tls/), many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”

Disabling RC4 has several ramifications. One, users with older browsers such as Internet Explorer on Windows XP will fall back to 3DES instead. Triple-DES is more secure than RC4, but it is significantly more expensive. Your server will pay the cost for these users. Two, RC4 mitigates BEAST. Thus, disabling RC4 makes TLS 1.0 users susceptible to that attack, by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a nonissue. But the risk from RC4 only grows: more cryptanalysis will surface over time.

Factoring RSA-EXPORT Keys (FREAK)

FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”

The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas, unless it used export cipher suites which involved encryption keys no longer than 512-bits.

It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.

There are two parts of the attack as the server must also accept “export grade RSA.”

The MITM attack works as follows:

  • In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
  • The MITM attacker changes this message to ask for ‘export RSA’.
  • The server responds with a 512-bit export RSA key, signed with its long-term key.
  • The client accepts this weak key due to the OpenSSL/SecureTransport bug.
  • The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
  • When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
  • From here on out, the attacker sees plaintext and can inject anything it wants.

The ciphersuite offered here on this page does not enable EXPORT grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to also use upgraded software.

Heartbleed

Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (due to a missing bounds check) in the implementation of the TLS/DTLS heartbeat extension (RFC 6520); hence the bug’s name derives from “heartbeat”. The vulnerability is classified as a buffer over-read, a situation where more data can be read than should be allowed.

What versions of the OpenSSL are affected by Heartbleed?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. OpenSSL 1.0.1g released on 7th of April 2014 fixes the bug.

By updating OpenSSL to 1.0.1g or later you are no longer vulnerable to this bug.

SSL Compression (CRIME attack)

The CRIME attack uses SSL Compression to do its magic, so we need to disable that. On Apache 2.2.24+ we can add the following line to the SSL config file we also edited above:

SSLCompression off

If you are using an earlier version of Apache and your distro has not backported this option, then you need to recompile OpenSSL without ZLIB support. This disables OpenSSL’s use of the DEFLATE compression method. If you do this, you can still use regular HTTP-level DEFLATE/gzip compression.

SSLv2 and SSLv3

SSLv2 is insecure, so we need to disable it. We also disable SSLv3, as an attacker can force a downgrade of the protocol negotiation from TLS 1.0 to SSLv3, thereby disabling forward secrecy.

SSLv3 allows exploiting of the POODLE bug. This is one more major reason to disable it!

Again edit the config file:

SSLProtocol All -SSLv2 -SSLv3

All is a shortcut for +SSLv2 +SSLv3 +TLSv1, or, when using OpenSSL 1.0.1 and later, +SSLv2 +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2. The above line enables everything except SSLv2 and SSLv3. More info on the Apache website

Poodle and TLS-FALLBACK-SCSV

SSLv3 allows exploiting of the POODLE bug, which is one more major reason to disable it.

Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to the following versions:

  • OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
  • OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
  • OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.

The Cipher Suite

(Perfect) Forward Secrecy ensures the integrity of a session key in the event that a long-term key is compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.

This means that when the private key gets compromised it cannot be used to decrypt recorded SSL traffic.

The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.

Of the following two cipher suites, I recommend the first; the second, backwards-compatible one is recommended by the Mozilla Foundation.

The recommended cipher suite:

SSLCipherSuite AES128+EECDH:AES128+EDH

The recommended cipher suite for backwards compatibility (IE6/WinXP):

SSLCipherSuite ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.

Prioritization logic

  • ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers and not widely supported at the moment. No known attacks currently target these ciphers.
  • PFS ciphersuites are preferred, with ECDHE first, then DHE.
  • AES 128 is preferred to AES 256. There have been discussions on whether the extra security of AES256 is worth the cost, and the result is far from obvious. At the moment, AES128 is preferred, because it provides good security, is really fast, and seems to be more resistant to timing attacks.
  • In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
  • RC4 is removed entirely. 3DES is used for backward compatibility. See discussion in #RC4_weaknesses

Mandatory discards

  • aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
  • eNULL contains null-encryption ciphers (cleartext)
  • EXPORT are legacy weak ciphers that were marked as exportable by US law
  • RC4 contains ciphers that use the deprecated ARCFOUR algorithm
  • DES contains ciphers that use the deprecated Data Encryption Standard
  • SSLv2 contains all ciphers that were defined in the old version of the SSL standard, now deprecated
  • MD5 contains all the ciphers that use the deprecated message digest 5 as the hashing algorithm

With Apache 2.2.x you have only DHE suites to work with, but they are not enough. Internet Explorer (in all versions) does not support the required DHE suites to achieve Forward Secrecy. (Unless you’re using DSA keys, but almost no one does; that’s a long story.) Apache does not support configurable DH parameters in any version, but there are patches you could use if you can install from source.

Even if OpenSSL can provide ECDHE, Apache 2.2 in Debian stable does not support this mechanism. You need Apache 2.4 to fully support Forward Secrecy.

A workaround could be to use nginx as a reverse proxy, because it fully supports ECDHE.

Make sure you also add this line:

SSLHonorCipherOrder on

When choosing a cipher during an SSLv3 or TLSv1 handshake, normally the client’s preference is used. If this directive is enabled, the server’s preference will be used instead.

Forward Secrecy & Diffie Hellman Ephemeral Parameters

The concept of forward secrecy is simple: client and server negotiate a key that never hits the wire and is destroyed at the end of the session. The server’s RSA private key is used to sign a Diffie-Hellman key exchange between the client and the server. The pre-master key obtained from the Diffie-Hellman handshake is then used for encryption. Since the pre-master key is specific to a connection between a client and a server, and used only for a limited amount of time, it is called ephemeral.

With Forward Secrecy, if an attacker gets a hold of the server’s private key, it will not be able to decrypt past communications. The private key is only used to sign the DH handshake, which does not reveal the pre-master key. Diffie-Hellman ensures that the pre-master keys never leave the client and the server, and cannot be intercepted by a MITM.

Apache prior to version 2.4.7 and all versions of Nginx as of 1.4.4 rely on OpenSSL for input parameters to Diffie-Hellman (DH). Unfortunately, this means that Ephemeral Diffie-Hellman (DHE) will use OpenSSL’s defaults, which include a 1024-bit key for the key-exchange. Since we’re using a 2048-bit certificate, DHE clients will use a weaker key-exchange than non-ephemeral DH clients.

For Apache, there is no fix except to upgrade to 2.4.7 or later. With that version, Apache automatically selects a stronger key.

Around May, Debian backported ECDH ciphers to work with Apache 2.2, and it's possible to get PFS: http://metadata.ftp-master.debian.org/changelogs//main/a/apache2/apache2_2.2.22-13+deb7u3_changelog

> apache2 (2.2.22-13+deb7u2) wheezy; urgency=medium

  * Backport support for SSL ECC keys and ECDH ciphers.

HTTP Strict Transport Security

When possible, you should enable HTTP Strict Transport Security (HSTS), which instructs browsers to communicate with your site only over HTTPS.

View my article on HSTS to see how to configure it.

HTTP Public Key Pinning Extension

You should also enable the HTTP Public Key Pinning Extension.

Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.

I’ve written an article about it that has background theory and configuration examples for Apache, Lighttpd and NGINX

OCSP Stapling

When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL) or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take forever to download.

OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a 3rd party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers will fail silently if no response is received in a timely manner. This reduces security by allowing an attacker to DoS an OCSP responder to disable the validation.

The solution is to allow the server to send its cached OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.

The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its ClientHello.

Most servers cache the OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.

View my tutorial on enabling OCSP stapling on Apache
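
For reference, the core of such a configuration is small. A minimal sketch of the relevant mod_ssl directives (Apache 2.3.3 or later; the cache path, size and timeout values are assumptions for this example):

```apache
# Outside the VirtualHost: a small shared-memory cache for stapled responses
SSLStaplingCache "shmcb:/var/run/ocsp(128000)"

<VirtualHost *:443>
    # ... SSLEngine, certificate and cipher directives ...
    SSLUseStapling on
    SSLStaplingResponderTimeout 5
    SSLStaplingReturnResponderErrors off
</VirtualHost>
```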

Conclusion

If you have applied the above config lines you need to restart apache:

# Check the config first:
apache2ctl -t
# Then restart:
/etc/init.d/apache2 restart

# If you are on RHEL/CentOS:
apachectl -t
/etc/init.d/httpd restart

Now use the SSL Labs test to see if you get a nice A. And, of course, have a safe, strong and future-proof SSL configuration!

Strong SSL Security on lighttpd


This tutorial shows you how to set up strong SSL security on the Lighttpd webserver. We do this by disabling SSL compression to mitigate the CRIME attack, disabling SSLv3 and below because of vulnerabilities in the protocol, and setting up a strong ciphersuite that enables Forward Secrecy when possible. We also set up HSTS and HPKP. This way we have a strong and future-proof SSL configuration and we get an A on the Qualys SSL Labs test.

TL;DR: Copy-pastable strong ciphersuites for NGINX, Apache and Lighttpd: https://cipherli.st

This tutorial is tested on a Digital Ocean VPS. If you like this tutorial and want to support my website, use this link to order a Digital Ocean VPS: https://www.digitalocean.com

This tutorial works with the stricter requirements of the SSL Labs test announced on the 21st of January 2014. (It already did before that; if you follow(ed) it you get an A+.)

This tutorial is also available for Apache2 and for NGINX.
This tutorial is also available for FreeBSD, NetBSD and OpenBSD over at the BSD Now podcast: http://www.bsdnow.tv/tutorials/nginx

You can find more info on the topics by following the links below:

Make sure you back up the files before editing them!

I’m using lighttpd 1.4.31 from the Debian Wheezy repositories on this website. The CentOS 5/6 EPEL versions wouldn’t work for me because either lighttpd or OpenSSL was too old. Debian Squeeze also failed.

Mitigate the BEAST attack

In short, by tampering with an encryption algorithm's CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted. More info via the link above.

Recent browser versions have enabled client-side mitigation for the BEAST attack. The recommendation used to be to disable all TLS 1.0 ciphers and only offer RC4. However, RC4 has a growing list of attacks against it (http://www.isg.rhul.ac.uk/tls/), many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”

Disabling RC4 has several ramifications. One, users with outdated browsers such as Internet Explorer on Windows XP will use 3DES instead. Triple-DES is more secure than RC4, but it is significantly more expensive; your server will pay the cost for these users. Two, RC4 mitigates BEAST, so disabling RC4 makes TLS 1.0 users susceptible to that attack by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a nonissue. But the risk from RC4 only grows: more cryptanalysis will surface over time.

Factoring RSA-EXPORT Keys (FREAK)

FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”

The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas unless it used export cipher suites with encryption keys no longer than 512 bits.

It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.

There are two parts to the attack: the server must also accept “export grade RSA.”

The MITM attack works as follows:

  • In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
  • The MITM attacker changes this message to ask for ‘export RSA’.
  • The server responds with a 512-bit export RSA key, signed with its long-term key.
  • The client accepts this weak key due to the OpenSSL/SecureTransport bug.
  • The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
  • When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
  • From here on out, the attacker sees plaintext and can inject anything it wants.

The ciphersuite offered on this page does not enable EXPORT-grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to also use upgraded software.
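
As a quick local check, you can ask OpenSSL whether it still knows any EXPORT-grade ciphers at all (this assumes the openssl command-line tool is installed; modern builds removed these ciphers entirely, so the selector simply fails):

```shell
# Does the local OpenSSL still know any EXPORT-grade ciphers?
# On modern builds the 'EXPORT' selector matches nothing and exits non-zero.
if openssl ciphers 'EXPORT' >/dev/null 2>&1; then
    echo "EXPORT ciphers still available - upgrade OpenSSL"
else
    echo "no EXPORT ciphers compiled in"
fi
```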

Heartbleed

Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (due to a missing bounds check) in the implementation of the TLS/DTLS heartbeat extension (RFC 6520); thus the bug’s name derives from “heartbeat”. The vulnerability is classified as a buffer over-read, a situation where more data can be read than should be allowed.

What versions of the OpenSSL are affected by Heartbleed?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and had been out in the wild since the release of OpenSSL 1.0.1 on the 14th of March 2012. OpenSSL 1.0.1g, released on the 7th of April 2014, fixes the bug.

By updating OpenSSL to 1.0.1g or newer you are no longer vulnerable to this bug.
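
A quick sanity check of a version string against the vulnerable range can be done in plain shell (the sample string here is an assumption for illustration; substitute the output of `openssl version`):

```shell
# Check an OpenSSL version string against the vulnerable range (1.0.1 - 1.0.1f)
version="OpenSSL 1.0.1e 11 Feb 2013"   # replace with: $(openssl version)
case "$version" in
    "OpenSSL 1.0.1 "*|"OpenSSL 1.0.1"[a-f]" "*)
        echo "vulnerable to Heartbleed, upgrade to 1.0.1g or later" ;;
    *)
        echo "not in the vulnerable 1.0.1-1.0.1f range" ;;
esac
```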

SSL Compression

The CRIME attack uses SSL Compression to do its magic, so we need to disable that. The following option disables SSL compression:

ssl.use-compression = "disable"

By default lighttpd disables SSL compression at compile time. If you find it to be enabled, either use the above option, or recompile OpenSSL without ZLIB support; that prevents OpenSSL from using the DEFLATE compression method. If you do this, you can still use regular HTTP DEFLATE compression.

SSLv2 and SSLv3

SSLv2 is insecure, so we need to disable it. We also disable SSLv3, as TLS 1.0 suffers from a downgrade attack that allows an attacker to force a connection to use SSLv3, thereby disabling forward secrecy.

Again edit the config file:

ssl.use-sslv2 = "disable"
ssl.use-sslv3 = "disable"

Poodle and TLS-FALLBACK-SCSV

SSLv3 allows exploitation of the POODLE bug, which is one more major reason to disable it.

Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to one of the following versions:

  • OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
  • OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
  • OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.

The Cipher Suite

Forward Secrecy ensures the integrity of a session key in the event that a long-term key is compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.

This means that when the private key gets compromised it cannot be used to decrypt recorded SSL traffic.

The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.

Of the following two ciphersuites, the first is recommended by me, and the second by the Mozilla Foundation.

The recommended cipher suite:

ssl.cipher-list = "AES128+EECDH:AES128+EDH"

The recommended cipher suite for backwards compatibility (IE6/WinXP):

ssl.cipher-list = "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.

Prioritization logic

  • ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers and not yet widely supported. No known attack currently targets these ciphers.
  • PFS ciphersuites are preferred, with ECDHE first, then DHE.
  • AES128 is preferred to AES256. There have been discussions on whether the extra security of AES256 is worth the cost, and the result is far from obvious. At the moment, AES128 is preferred, because it provides good security, is really fast, and seems to be more resistant to timing attacks.
  • In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
  • RC4 is removed entirely. 3DES is used for backward compatibility. See discussion in #RC4_weaknesses
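
To see exactly which ciphers the recommended suite string expands to, and in which order they will be offered, you can ask your local OpenSSL (assuming the openssl command-line tool is installed; the exact list depends on your OpenSSL build):

```shell
# Expand the recommended suite string into the ordered cipher list
openssl ciphers 'AES128+EECDH:AES128+EDH' | tr ':' '\n'
```

With ssl.honor-cipher-order enabled, the server picks the first entry in this list that the client also supports.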

Mandatory discards

  • aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
  • eNULL contains null-encryption ciphers (cleartext)
  • EXPORT are legacy weak ciphers that were marked as exportable by US law
  • RC4 contains ciphers that use the deprecated ARCFOUR algorithm
  • DES contains ciphers that use the deprecated Data Encryption Standard
  • SSLv2 contains all ciphers that were defined in the old version of the SSL standard, now deprecated
  • MD5 contains all the ciphers that use the deprecated message digest 5 as the hashing algorithm

HTTP Strict Transport Security

When possible, you should enable HTTP Strict Transport Security (HSTS), which instructs browsers to communicate with your site only over HTTPS.

View my article on HSTS to see how to configure it.

HTTP Public Key Pinning Extension

You should also enable the HTTP Public Key Pinning Extension.

Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.

I’ve written an article about it that has background theory and configuration examples for Apache, Lighttpd and NGINX: https://raymii.org/s/articles/HTTP_Public_Key_Pinning_Extension_HPKP.html

Config Example

var.confdir = "/etc/ssl/certs"
$SERVER["socket"] == ":443" {
  ssl.engine = "enable"
  ssl.pemfile = var.confdir + "/example.org.pem"
  ssl.ca-file = var.confdir + "/example.org.bundle.crt"
  server.name = "example.org"
  server.document-root = "/srv/html"
  ssl.use-sslv2 = "disable"
  ssl.use-sslv3 = "disable"
  ssl.use-compression = "disable"
  ssl.honor-cipher-order = "enable"
  ssl.cipher-list = "AES128+EECDH:AES128+EDH:!aNULL:!eNULL"
}

Conclusion

If you have applied the above config lines you need to restart lighttpd:

/etc/init.d/lighttpd restart

Now use the SSL Labs test to see if you get a nice A. And of course, have a safe and future-proof SSL configuration!

HTTP Strict Transport Security for Apache, NGINX and Lighttpd


HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature that lets a web site tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. This tutorial will show you how to set up HSTS in Apache2, NGINX and Lighttpd. It has been tested with all mentioned webservers: NGINX 1.1.19, Lighttpd 1.4.28 and Apache 2.2.22 on Ubuntu 12.04, Debian 6 & 7 and CentOS 6. It should work on other distros as well; these are just reference values.

What is HTTP Strict Transport Security?

Quoting the Mozilla Developer Network:

If a web site accepts a connection through HTTP and redirects to HTTPS, the user in this case may initially talk to the non-encrypted version of the site before being redirected, if, for example, the user types http://www.foo.com/ or even just foo.com.

This opens up the potential for a man-in-the-middle attack, where the redirect could be exploited to direct a user to a malicious site instead of the secure version of the original page.

The HTTP Strict Transport Security feature lets a web site inform the browser that it should never load the site using HTTP, and should automatically convert all attempts to access the site using HTTP to HTTPS requests instead.

An example scenario:

You log into a free WiFi access point at an airport and start surfing the web, visiting your online banking service to check your balance and pay a couple of bills. Unfortunately, the access point you're using is actually a hacker's laptop, and they're intercepting your original HTTP request and redirecting you to a clone of your bank's site instead of the real thing. Now your private data is exposed to the hacker.

Strict Transport Security resolves this problem; as long as you've accessed your bank's web site once using HTTPS, and the bank's web site uses Strict Transport Security, your browser will know to automatically use only HTTPS, which prevents hackers from performing this sort of man-in-the-middle attack.

Do note that HSTS does not work if you have never visited the website before; a website needs to tell you it is HTTPS-only.

Set up HSTS in Apache2

Edit your apache configuration file (/etc/apache2/sites-enabled/website.conf and /etc/apache2/httpd.conf for example) and add the following to your VirtualHost:

# Optionally load the headers module:
LoadModule headers_module modules/mod_headers.so

<VirtualHost 67.89.123.45:443>
    Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
</VirtualHost>

Now your website will set the header every time someone visits, with an expiration time of two years (in seconds). The header is set on every visit, so tomorrow it will say two years again.
You do have to set it on the HTTPS vhost only; it cannot be in the HTTP vhost.
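
A quick check of the arithmetic behind max-age=63072000:

```shell
# max-age is given in seconds; 63072000 s is exactly two years
echo $((63072000 / 86400))        # days: 730
echo $((63072000 / 86400 / 365))  # years: 2
```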

To redirect your visitors to the HTTPS version of your website, use the following configuration:

<VirtualHost *:80>
  [...]
  ServerName example.com
  Redirect permanent / https://example.com/
</VirtualHost>

If you only redirect, you don’t even need a document root.

You can also use mod_rewrite, although the above method is simpler and safer. Note that the mod_rewrite config below redirects the user to the HTTPS version of the exact page they were visiting, whereas the config above just redirects to /:

<VirtualHost *:80>
  [...]
  <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
  </IfModule>
</VirtualHost>

And don’t forget to restart Apache.

Lighttpd

The lighttpd variant is just as simple. Add it to your Lighttpd configuration file (/etc/lighttpd/lighttpd.conf for example):

server.modules += ( "mod_setenv" )
$HTTP["scheme"] == "https" {
    setenv.add-response-header  = ( "Strict-Transport-Security" => "max-age=63072000; includeSubdomains; preload")
}

And restart Lighttpd. Here the time is also two years.

NGINX

NGINX is even shorter with its config. Add this in the server block for your HTTPS configuration:

add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

Don’t forget to restart NGINX.

X-Frame-Options header

The last tip I’ll give you is the X-Frame-Options header, which you can add to your HTTPS website to make sure it is not embedded in a frame or iframe. This avoids clickjacking, and might be helpful for HTTPS websites. Quoting the Mozilla Developer Network again:

The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a `<frame>` or `<iframe>`. Sites can use this to avoid clickjacking attacks, by ensuring that their content is not embedded into other sites.

You can change DENY to SAMEORIGIN or ALLOW-FROM uri, see the Mozilla link above for more information on that. (Or the RFC.)

X-Frame-Options for Apache2

As above, add this to the apache config file:

Header always set X-Frame-Options DENY

Lighttpd

This goes in the lighttpd config. Make sure you don’t duplicate the setenv config set above; if you already have it, just add this rule to it.

server.modules += ( "mod_setenv" )
$HTTP["scheme"] == "https" {
    setenv.add-response-header  = ( "X-Frame-Options" => "DENY")
}

NGINX

Yet again, in a server block:

add_header X-Frame-Options "DENY";

HTTP Public Key Pinning Extension HPKP for Apache, NGINX and Lighttpd


Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store. This article has background theory and configuration examples for Apache, Lighttpd and NGINX.

HTTP Public Key Pinning Extension

An example might be your bank, which always has its certificates issued by CA Company A. With the current certificate system, CA Company B, CA Company C and the NSA’s CA can all create a certificate for your bank, which your browser will happily accept because those companies are also trusted root CAs.

If the bank implements HPKP and pins its first intermediate certificate (from CA Company A), browsers will not accept certificates from CA Company B and CA Company C, even if they have a valid trust path. HPKP also allows your browser to report the failure back to the bank, so the bank knows it is under attack.

Public Key Pinning Extension for HTTP (HPKP) is a standard for public key pinning for HTTP user agents that’s been in development since 2011. It was started by Google, which, even though it had implemented pinning in Chrome, understood that manually maintaining a list of pinned sites can’t scale.

Here is a quick feature overview of HPKP:

  • HPKP is set at the HTTP level, using the Public-Key-Pins response header.
  • The policy retention period is set with the max-age parameter, which specifies a duration in seconds.
  • The PKP header can only be used over an error-free secure connection.
  • If multiple headers are seen, only the first one is processed.
  • Pinning can be extended to subdomains with the includeSubDomains parameter.
  • When a new PKP header is received, it overwrites previously stored pins and metadata.
  • A pin consists of a hashing algorithm and a “Subject Public Key Info” fingerprint.

This article first presents some theory about the workings of HPKP; further down you’ll find the part that shows you how to get the required fingerprints, along with web server configuration.

SPKI Fingerprint – Theory

As explained by Adam Langley in his post, we hash a public key, not a certificate:

In general, hashing certificates is the obvious solution, but the wrong one. The problem is that CA certificates are often reissued: there are multiple certificates with the same public key, subject name etc but different extensions or expiry dates. Browsers build certificates chains from a pool of certificates, bottom up, and an alternative version of a certificate might be substituted for the one that you expect.

For example, StartSSL has two root certificates: one signed with SHA1 and the other with SHA256. If you wished to pin to StartSSL as your CA, which certificate hash would you use? You would have to use both, but how would you know about the other root if I hadn’t just told you?

Conversely, public key hashes must be correct:

Browsers assume that the leaf certificate is fixed: it’s always the starting point of the chain. The leaf certificate contains a signature which must be a valid signature, from its parent, for that certificate. That implies that the public key of the parent is fixed by the leaf certificate. So, inductively, the chain of public keys is fixed, modulo truncation.

The only sharp edge is that you mustn’t pin to a cross-certifying root. For example, GoDaddy’s root is signed by Valicert so that older clients, which don’t recognise GoDaddy as a root, still trust those certificates. However, you wouldn’t want to pin to Valicert because newer clients will stop their chain at GoDaddy.

Also, we’re hashing the SubjectPublicKeyInfo not the public key bit string. The SPKI includes the type of the public key and some parameters along with the public key itself. This is important because just hashing the public key leaves one open to misinterpretation attacks. Consider a Diffie-Hellman public key: if one only hashes the public key, not the full SPKI, then an attacker can use the same public key but make the client interpret it in a different group. Likewise one could force an RSA key to be interpreted as a DSA key etc.

Where to Pin

Where should you pin? Pinning your own public key is not the best idea: the key might change or get compromised. You might have multiple certificates in use. The key might change because you rotate your certificates every so often, or it might get compromised because the web server was hacked.

The easiest, but not the most secure, place to pin is the first intermediate CA certificate. The signature of that certificate is on your website’s certificate, so the issuing CA’s public key must always be in the chain.

This way you can renew your end certificate from the same CA and have no pinning issues. If the CA starts issuing from a different root, you have a problem; there is no clear solution for this yet. There is one thing you can do to mitigate it:

  • Always have a backup pin and a spare certificate from a different CA.

The RFC states that you need to provide at least two pins. One of the pins must be present in the chain used in the connection over which the pins were received, the other pin must not be present.

This other pin is your backup public key. It can also be the SPKI fingerprint of a different CA where you have a certificate issued.

An alternative and more secure take on this issue is to create at least three separate public keys beforehand (using OpenSSL; see this page for a JavaScript OpenSSL command generator) and to keep two of those keys as a backup in a safe place, offline and off-site.

You create the SPKI hashes for the three keys and pin all of them, but only use the first key for the active certificate. When needed, you can then switch to one of the alternative keys. You do, however, need to have that key signed by a CA to create a certificate pair, and that process can take a few days depending on the CA.

This is not a problem for HPKP, because we take the SPKI hash of the public key, not of the certificate. Expiration or a different chain of CA signers does not matter in this case.

If you have the means and procedures to create and securely store at least three separate keys as described above, and pin those, it will also protect you if your CA provider gets compromised and hands out a fake certificate for your specific website.
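
Sketched below is one way to generate such a set of keys and their SPKI pins with OpenSSL; the file names and the 2048-bit key size are assumptions for the example:

```shell
# Generate three RSA keys; the first becomes the live certificate's key,
# the other two stay offline as pinned backups.
for i in 1 2 3; do
    openssl genrsa -out "pin-key-$i.pem" 2048 2>/dev/null
    # SPKI fingerprint of each key: the value that goes into pin-sha256="..."
    openssl rsa -in "pin-key-$i.pem" -pubout -outform der 2>/dev/null |
        openssl dgst -sha256 -binary | openssl enc -base64
done
```

Store pin-key-2.pem and pin-key-3.pem offline and off-site; if key 1 is ever compromised, you can switch to a backup key without violating your published pins.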

SPKI Fingerprint

To get the SPKI fingerprint from a certificate we can use the following OpenSSL command, as shown in the RFC draft:

openssl x509 -noout -in certificate.pem -pubkey | \
openssl asn1parse -noout -inform pem -out public.key;
openssl dgst -sha256 -binary public.key | openssl enc -base64

Result:

klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=

The input certificate.pem file is the first certificate in the chain for this website. (At the time of writing, COMODO RSA Domain Validation Secure Server CA, Serial 2B:2E:6E:EA:D9:75:36:6C:14:8A:6E:DB:A3:7C:8C:07.)

You need to also do this with your backup public key, ending up with two fingerprints.

Bugs

At the time of writing this article (January 2015), the only browser supporting HPKP (Chrome) has a serious issue: Chrome doesn’t treat the max-age and includeSubdomains directives from the HSTS and HPKP headers as mutually exclusive. This means that if you have HSTS and HPKP with different policies for max-age or includeSubdomains, they will be interchanged. See this bug for more info: https://code.google.com/p/chromium/issues/detail?id=444511. Thanks to Scott Helme from https://scotthelme.co.uk for finding it and notifying me and the Chromium project.

Webserver configuration

Below you’ll find configuration instructions for the three most popular web servers. Since this is just an HTTP header, almost all web servers will allow you to set it. It needs to be set on the HTTPS website.

The examples below pin the COMODO RSA Domain Validation Secure Server CA and the COMODO PositiveSSL CA 2 as a backup, with a 30-day expiry time, covering all subdomains.

Apache

Edit your apache configuration file (/etc/apache2/sites-enabled/website.conf or /etc/apache2/httpd.conf for example) and add the following to your VirtualHost:

# Optionally load the headers module:
LoadModule headers_module modules/mod_headers.so

Header set Public-Key-Pins "pin-sha256=\"klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=\"; pin-sha256=\"633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q=\"; max-age=2592000; includeSubDomains"

Lighttpd

The lighttpd variant is just as simple. Add it to your Lighttpd configuration file (/etc/lighttpd/lighttpd.conf for example):

server.modules += ( "mod_setenv" )
$HTTP["scheme"] == "https" {
    setenv.add-response-header  = ( "Public-Key-Pins" => "pin-sha256=\"klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY=\"; pin-sha256=\"633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q=\"; max-age=2592000; includeSubDomains")
}

NGINX

NGINX is even shorter with its config. Add this in the server block for your HTTPS configuration:

add_header Public-Key-Pins 'pin-sha256="klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY="; pin-sha256="633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q="; max-age=2592000; includeSubDomains';

Reporting

HPKP reporting allows the user-agent to report any failures back to you.

If you add an additional report-uri="http://example.org/hpkp-report" parameter to the header and set up a listener there, clients will send a report whenever they encounter a failure. A report is sent as a POST request to the report-uri with a JSON body like this:

{
    "date-time": "2014-12-26T11:52:10Z",
    "hostname": "www.example.org",
    "port": 443,
    "effective-expiration-date": "2014-12-31T12:59:59",
    "include-subdomains": true,
    "served-certificate-chain": [
        "-----BEGINCERTIFICATE-----\nMIIAuyg[...]tqU0CkVDNx\n-----ENDCERTIFICATE-----"
    ],
    "validated-certificate-chain": [
        "-----BEGINCERTIFICATE-----\nEBDCCygAwIBA[...]PX4WecNx\n-----ENDCERTIFICATE-----"
    ],
    "known-pins": [
        "pin-sha256=\"dUezRu9zOECb901Md727xWltNsj0e6qzGk\"",
        "pin-sha256=\"E9CqVKB9+xZ9INDbd+2eRQozqbQ2yXLYc\""
    ]
}

No Enforcement, report only

HPKP can be set up without enforcement, in reporting mode by using the Public-Key-Pins-Report-Only response header.

This approach allows you to set up pinning without your site being unreachable or HPKP being configured incorrectly. You can later move to enforcement by changing the header back to Public-Key-Pins.

OCSP Stapling on Nginx


When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL) or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take forever to download.

OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a 3rd party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers will fail silently if no response is received in a timely manner. This reduces security by allowing an attacker to DoS an OCSP responder to disable the validation.

The solution is to allow the server to send its cached OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.

The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its ClientHello.

Most servers will cache the OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.

What is OCSP Stapling

OCSP stapling is defined in IETF RFC 6066. The term “stapling” describes how the OCSP response is obtained by the web server. The web server caches the response from the CA that issued the certificate. When an SSL/TLS handshake is initiated, the web server returns the cached OCSP response to the client by attaching it to the CertificateStatus message. To make use of OCSP stapling, a client must include the “status_request” extension with its SSL/TLS ClientHello message.

OCSP stapling presents several advantages including the following:

  • The relying party receives the status of the web server’s certificate when it is needed (during the SSL/TLS handshake).
  • No additional HTTP connection needs to be set up with the issuing CA.
  • OCSP stapling provides added security by reducing the number of attack vectors.

Read one of the following links for more information on OCSP and OCSP stapling.

Requirements

You need at least nginx 1.3.7 for this to work. That version is not available in the current Ubuntu LTS release (12.04), which ships 1.1.19; on CentOS you need EPEL or the official nginx repositories. However, it is easy to install a recent version of nginx.

You also need to create a firewall exception to allow your server to make outbound connections to the upstream OCSP responders. You can view all OCSP URIs used by a website with this one-liner:

OLDIFS=$IFS; IFS=':' certificates=$(openssl s_client -connect google.com:443 -showcerts -tlsextdebug -tls1 2>&1 </dev/null | sed -n '/-----BEGIN/,/-----END/ {/-----BEGIN/ s/^/:/; p}'); for certificate in ${certificates#:}; do echo $certificate | openssl x509 -noout -ocsp_uri; done; IFS=$OLDIFS

It results for google.com in:

http://clients1.google.com/ocsp
http://gtglobal-ocsp.geotrust.com

nginx Configuration

Add the below configuration to your https (443) server block:

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

For the OCSP stapling to work, the certificate of the server certificate issuer should be known. If the ssl_certificate file does not contain intermediate certificates, the certificate of the server certificate issuer should be present in the ssl_trusted_certificate file.

My certificate for raymii.org is issued by Positive CA 2. That certificate in turn is issued by AddTrust External CA Root. In my nginx ssl_certificate file all these certificates are present. If that is not the case for you, create a file with the certificate chain and use it like so:

  ssl_trusted_certificate /etc/ssl/certs/domain.chain.stapling.pem;
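A minimal sketch of creating such a chain file, using hypothetical file names for the intermediate and root certificates:

```shell
# Concatenate the issuer chain (intermediate first, root last) into one file.
cat positive-ca-2.pem addtrust-external-ca-root.pem > /etc/ssl/certs/domain.chain.stapling.pem
```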

Before version 1.1.7, only a single name server could be configured. Specifying name servers using IPv6 addresses is supported starting from versions 1.3.1 and 1.2.2. By default, nginx will look up both IPv4 and IPv6 addresses while resolving. If looking up of IPv6 addresses is not desired, the ipv6=off parameter can be specified. Resolving of names into IPv6 addresses is supported starting from version 1.5.8.

By default, nginx caches answers using the TTL value of a response. The optional valid parameter overrides that; here it is set to 5 minutes (300 seconds). Before version 1.1.9, tuning of the caching time was not possible, and nginx always cached answers for 5 minutes.

Restart your nginx to load the new configuration:

service nginx restart

And it should work. Let’s test it.

Testing it

Fire up a terminal and use the following OpenSSL command to connect to your website:

openssl s_client -connect example.org:443 -tls1 -tlsextdebug -status

In the response, look for the following:

OCSP response:
======================================
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: 99E4405F6B145E3E05D9DDD36354FC62B8F700AC
    Produced At: Feb  3 04:25:39 2014 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: 0226EE2F5FA2810834DACC3380E680ACE827F604
      Issuer Key Hash: 99E4405F6B145E3E05D9DDD36354FC62B8F700AC
      Serial Number: C1A3D8D00D72FCE483CD84759E9EC0BC
    Cert Status: good
    This Update: Feb  3 04:25:39 2014 GMT
    Next Update: Feb  7 04:25:39 2014 GMT

That means it is working. If you get a response like below, it is not working:

OCSP response: no response sent

You can also use the SSL Labs test to see if OCSP stapling works.

Sources

OCSP Stapling on Apache


When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL) or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take a long time to download.

OCSP is much more lightweight, as only one record is retrieved at a time. The side effect is that OCSP requests must be made to a third-party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers fail silently if no response is received in a timely manner. This reduces security by allowing an attacker to DoS an OCSP responder in order to disable validation.

The solution is to allow the server to send its cached OCSP record during the TLS handshake, thereby bypassing the OCSP responder. This mechanism saves a round trip between the client and the OCSP responder, and is called OCSP stapling.

The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its ClientHello.

Most servers will cache the OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.

What is OCSP Stapling

OCSP stapling is defined in IETF RFC 6066. The term “stapling” describes how the OCSP response is obtained by the web server. The web server caches the response from the CA that issued the certificate. When an SSL/TLS handshake is initiated, the web server returns the cached OCSP response to the client by attaching it to the CertificateStatus message. To make use of OCSP stapling, a client must include the “status_request” extension with its SSL/TLS ClientHello message.

OCSP stapling presents several advantages including the following:

  • The relying party receives the status of the web server’s certificate when it is needed (during the SSL/TLS handshake).
  • No additional HTTP connection needs to be set up with the issuing CA.
  • OCSP stapling provides added security by reducing the number of attack vectors.

Read one of the following links for more information on OCSP and OCSP stapling.

Requirements

You need at least Apache 2.3.3 and OpenSSL 0.9.8h or later for this to work. These are not available in the current Ubuntu LTS release (12.04), which ships Apache 2.2.22; CentOS 6 has 2.2.15. Either search for PPAs/unofficial repositories or compile them yourself.

You also need to create a firewall exception to allow your server to make outbound connections to the upstream OCSP responders. You can view all OCSP URIs used by a website with this one-liner:

OLDIFS=$IFS; IFS=':' certificates=$(openssl s_client -connect google.com:443 -showcerts -tlsextdebug -tls1 2>&1 </dev/null | sed -n '/-----BEGIN/,/-----END/ {/-----BEGIN/ s/^/:/; p}'); for certificate in ${certificates#:}; do echo $certificate | openssl x509 -noout -ocsp_uri; done; IFS=$OLDIFS

It results for google.com in:

http://clients1.google.com/ocsp
http://gtglobal-ocsp.geotrust.com

Replace google.com with your domain. Also note that you need the GNU versions of sed and bash; this does not work on OS X or BSD.

Apache Configuration

Add the configuration below. SSLUseStapling can go inside your virtualhost, but SSLStaplingCache must be set in the global server configuration:

SSLUseStapling on
SSLStaplingCache "shmcb:logs/stapling-cache(150000)"

Here’s the explanation for the two lines:

SSLUseStapling

OCSP stapling relieves the client of querying the OCSP responder on its own, but it should be noted that with the RFC 6066 specification, the server's CertificateStatus reply may only include an OCSP response for a single cert. For server certificates with intermediate CA certificates in their chain (the typical case nowadays), stapling in its current implementation therefore only partially achieves the stated goal of "saving roundtrips and resources" - see also RFC 6961 (TLS Multiple Certificate Status Extension). 

SSLStaplingCache

Configures the cache used to store OCSP responses which get included in the TLS handshake if SSLUseStapling is enabled. Configuration of a cache is mandatory for OCSP stapling. With the exception of none and nonenotnull, the same storage types are supported as with SSLSessionCache

The shmcb part:

This makes use of a high-performance cyclic buffer (approx. size bytes in size) inside a shared memory segment in RAM (established via /path/to/datafile) to synchronize the local OpenSSL memory caches of the server processes. This is the recommended session cache. To use this, ensure that mod_socache_shmcb is loaded.
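Put together, a minimal sketch of the relevant Apache configuration might look like this (paths follow a stock Apache layout and may differ per distribution; on Debian/Ubuntu, a2enmod socache_shmcb achieves the LoadModule part):

```apache
# Load the shared-memory cache provider used by SSLStaplingCache
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so

# The stapling cache must be defined in the global server configuration
SSLStaplingCache "shmcb:logs/stapling-cache(150000)"

<VirtualHost *:443>
    SSLEngine on
    SSLUseStapling on
</VirtualHost>
```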

You can also give a few more options. For example, a freshness timeout, how old the OCSP response can be:

SSLStaplingResponseMaxAge 900

This lets the response be at most 15 minutes (900 seconds) old.

If your Apache server is behind an HTTP proxy and you need to do your OCSP queries through that proxy, you can use SSLStaplingForceURL. This overrides the URL provided by the certificate:

SSLStaplingForceURL http://internal-proxy.example.org

Restart your apache to load the new configuration:

service apache2 restart

And it should work. Let’s test it.

Testing it

Fire up a terminal and use the following OpenSSL command to connect to your website:

openssl s_client -connect example.org:443 -tls1 -tlsextdebug -status

In the response, look for the following:

OCSP response:
======================================
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: 99E4405F6B145E3E05D9DDD36354FC62B8F700AC
    Produced At: Feb  3 04:25:39 2014 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: 0226EE2F5FA2810834DACC3380E680ACE827F604
      Issuer Key Hash: 99E4405F6B145E3E05D9DDD36354FC62B8F700AC
      Serial Number: C1A3D8D00D72FCE483CD84759E9EC0BC
    Cert Status: good
    This Update: Feb  3 04:25:39 2014 GMT
    Next Update: Feb  7 04:25:39 2014 GMT

That means it is working. If you get a response like below, it is not working:

OCSP response: no response sent

You can also use the SSL Labs test to see if OCSP stapling works.

Sources

Set up a federated XMPP Chat Network with ejabberd, and how to Configure and Setup SSL Certificate for Ejabberd


This tutorial shows you how to set up your own federated chat network using ejabberd. It covers a basic single node ejabberd server and also the setup of an ejabberd cluster, including errors and DNS SRV record examples. Last but not least federation is also covered. You can use (almost) any VPS.

Why set up your own XMPP server

There are a few reasons to set up your own XMPP server.

You might use Google Talk, or Hangouts as it is now named. Google’s service recently changed and it is going to drop XMPP compatibility. With your own server you can keep chatting with your non-Gmail contacts, and still use an open protocol which is widely supported, instead of being locked in to Google-specific software and hardware.

Or you might want more control over the logging of your data. Turn off ejabberd logging and use Off-the-Record (OTR) messaging, which gives you full privacy (and perfect forward secrecy).

You might want to use multi-account chat applications like Pidgin, Psi+, Empathy, Adium, iChat/Messages or Miranda IM. On Android you can use Xabber, Beem or OneTeam. Did you know that big players like Facebook, WhatsApp and Google use (or used to use) XMPP as their primary chat protocol?

Or you might be a sysadmin in need of an internal chat solution. I’ve got an ejabberd cluster running for a client, consisting of 4 Debian 7 VMs (2 GB RAM each) spread over 3 sites and 1 datacenter, serving 12000 users in total and most of the time 6000 concurrently.

XMPP is an awesome and extendible protocol, on which you can find more here: https://en.wikipedia.org/wiki/XMPP

Information

This setup is tested on Debian 7, Ubuntu 12.04 and 10.04 and OS X 10.8 Server, all running ejabberd installed via the package manager, either apt or ports. It also works on Windows Server 2012 with the ejabberd compiled from the erlang source but that is not covered in this tutorial.

This tutorial uses the example.org domain as the chat domain, and chat.example.org as the XMPP server domain. For the clustering part the servers srv1.example.org and srv2.example.org are used. Replace these values to match your setup.

Single node / master node ejabberd installation

If you want to set up a single node installation of ejabberd, e.g. no clustering, then follow only this part and the DNS part of the tutorial. If you want to set up a cluster, then also follow this part and continue with the next part.

Installing Ejabberd

This is simple, use your package manager to install ejabberd:

apt-get install ejabberd

You will also install a few dependencies for the erlang runtime.

Configuring ejabberd

We are going to configure the ejabberd service. First stop it:

/etc/init.d/ejabberd stop

Now use your favorite text editor to edit the config files. The ejabberd config is an Erlang term file, so comments start with %% instead of #, and every config option ends with a dot (.).

vim /etc/ejabberd/ejabberd.cfg

First we are going to add our chat domain name:

{hosts, ["example.org"]}.

If you want more domains then you add them as shown below:

{hosts, ["sparklingclouds.nl", "raymii.org", "sparklingnetwork.nl"]}.

This domain name is not the name of the servers you are adding.

Next we define an admin user:

{acl, admin, {user, "remy", "example.org"}}.

remy corresponds to the part before the @ in the XMPP ID, and example.org to the part after. If you need more admin users, add another ACL line.
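For example, to give a second (hypothetical) user admin rights as well:

```erlang
{acl, admin, {user, "remy", "example.org"}}.
{acl, admin, {user, "alice", "example.org"}}.
```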

Now, if you want people to be able to register via their XMPP client, enable in-band registration:

{access, register, [{allow, all}]}.

If you are using MySQL or LDAP authentication, you should not enable this.

I like to have a shared roster with roster groups, and some clients of mine use a shared roster containing everybody, so that nobody has to add contacts but everyone sees all online users. Enable mod_shared_roster:

%% Do this in the modules block
  {mod_shared_roster,[]},

If you are pleased with the config file, save it and restart ejabberd:

/etc/init.d/ejabberd restart

We now need to register a user to test our setup. If you’ve enabled in-band registration you can use your XMPP client; if not, use the ejabberdctl command:

ejabberdctl register remy example.org 'passw0rd'

Now test it using an XMPP client like Pidgin, Psi+ or Empathy. If you can connect, continue with the tutorial. If you cannot connect, check your ejabberd logs, firewall settings and so on to troubleshoot.

Clustering ejabberd

Note that you need a correctly working master node before you continue with the ejabberd clustering. If your master node is not working, fix that first.

Important: the modules you use should be the same on every cluster node. Whether you use LDAP/MySQL authentication, a shared roster, special MUC settings or offline messaging does not matter for the clustering, as long as the configuration is the same on all nodes.

So let’s get started. We first configure the master node, then the slave nodes.

Prepare the master node

Stop the ejabberd server on the master and edit the /etc/default/ejabberd file:

vim /etc/default/ejabberd

Uncomment the hostname option and change it to a FQDN hostname:

ERLANG_NODE=ejabberd@srv1.example.org

And add the external (public) IP address as a tuple (with commas instead of dots):

INET_DIST_INTERFACE={20,30,10,5}

If you use ejabberd internally then use the primary NIC address.

We are going to remove all the mnesia tables; they will be rebuilt when ejabberd restarts. This is much easier than changing the mnesia data itself. Don’t do this on an already configured node without backing up the Erlang cookie.

First backup the erlang cookie:

cp /var/lib/ejabberd/.erlang.cookie ~/

Then remove the mnesia database:

rm /var/lib/ejabberd/*

And restore the erlang cookie:

cp ~/.erlang.cookie /var/lib/ejabberd/.erlang.cookie

To make sure all Erlang processes are stopped, kill all processes owned by the ejabberd user. This is not always needed, but the epmd supervisor process might still be running:

killall -u ejabberd

And start ejabberd again:

/etc/init.d/ejabberd start 

If you can still connect and chat, then continue with the next part, configuring the slave nodes.

Prepare the slave nodes

A slave node should first be configured and working as described in the first part of this tutorial. You can copy the config files from the master node.

Stop the ejabberd server:

/etc/init.d/ejabberd stop

Now edit the /etc/default/ejabberd file:

vim /etc/default/ejabberd

Uncomment the hostname option and change it to a FQDN hostname:

ERLANG_NODE=ejabberd@srv2.example.org

And add the external (public) IP address as a tuple (with commas instead of dots):

INET_DIST_INTERFACE={30,40,20,6}

If you use ejabberd internally then use the primary NIC address.

Now remove all the mnesia tables:

rm /var/lib/ejabberd/*

Copy the cookie from the ejabberd master node, either by cat and vim or via scp:

# On the master node
cat /var/lib/ejabberd/.erlang.cookie
HFHHGYYEHF362GG1GF

# On the slave node
echo "HFHHGYYEHF362GG1GF" > /var/lib/ejabberd/.erlang.cookie
chown ejabberd:ejabberd /var/lib/ejabberd/.erlang.cookie

We are now going to add and compile an Erlang module, the easy_cluster module. This is a very small module which adds an Erlang shell command to make joining the cluster easier. You could also execute the commands from its functions directly in an Erlang debug shell, but I find this easier and it gives fewer errors:

vim /usr/lib/ejabberd/ebin/easy_cluster.erl

Add the following contents:

-module(easy_cluster).

-export([test_node/1,join/1]).

test_node(MasterNode) ->
    case net_adm:ping(MasterNode) of 'pong' ->
        io:format("server is reachable.~n");
    _ ->
        io:format("server could NOT be reached.~n")
    end.

join(MasterNode) ->
    application:stop(ejabberd),
    mnesia:stop(),
    mnesia:delete_schema([node()]),
    mnesia:start(),
    mnesia:change_config(extra_db_nodes, [MasterNode]),
    mnesia:change_table_copy_type(schema, node(), disc_copies),
    application:start(ejabberd).

Save it and compile it into a working erlang module:

cd /usr/lib/ejabberd/ebin/
erlc easy_cluster.erl

Now check if it succeeded:

ls | grep easy_cluster.beam

If you see the file it worked. You can find more info on the module here: https://github.com/chadillac/ejabberd-easy_cluster/

We are now going to join the cluster node to the master node. Make sure the master is working and running. Also make sure the erlang cookies are synchronized.

On the slave node, start an ejabberd live shell:

/etc/init.d/ejabberd live

This will start an Erlang shell and give some output. If the output stops, you can press ENTER to get a prompt. Enter the following command to test whether the master node can be reached:

easy_cluster:test_node('ejabberd@srv1.example.org').

You should get the following response: server is reachable. If so, continue.

Enter the following command to actually join the node:

easy_cluster:join('ejabberd@srv1.example.org').

Here’s example output from a successful test and join:

/etc/init.d/ejabberd live
*******************************************************
* To quit, press Ctrl-g then enter q and press Return *
*******************************************************

Erlang R15B01 (erts-5.9.1)  [async-threads:0] [kernel-poll:false]

Eshell V5.9.1  (abort with ^G)

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.39.0>:cyrsasl_digest:44) : FQDN used to check DIGEST-MD5 SASL authentication: "srv2.example.org"

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.576.0>:ejabberd_listener:166) : Reusing listening port for 5222

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.577.0>:ejabberd_listener:166) : Reusing listening port for 5269

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.578.0>:ejabberd_listener:166) : Reusing listening port for 5280

=INFO REPORT==== 10-Jun-2013::20:38:15 ===
I(<0.39.0>:ejabberd_app:72) : ejabberd 2.1.10 is started in the node 'ejabberd@srv2.example.org'
easy_cluster:test_node('ejabberd@srv1.example.org').
server is reachable.
ok
(ejabberd@srv2.example.org)2> easy_cluster:join('ejabberd@srv1.example.org').

=INFO REPORT==== 10-Jun-2013::20:38:51 ===
I(<0.39.0>:ejabberd_app:89) : ejabberd 2.1.10 is stopped in the node 'ejabberd@srv2.example.org'

=INFO REPORT==== 10-Jun-2013::20:38:51 ===
    application: ejabberd
    exited: stopped
    type: temporary

=INFO REPORT==== 10-Jun-2013::20:38:51 ===
    application: mnesia
    exited: stopped
    type: permanent

=INFO REPORT==== 10-Jun-2013::20:38:52 ===
I(<0.628.0>:cyrsasl_digest:44) : FQDN used to check DIGEST-MD5 SASL authentication: "srv2.example.org"

=INFO REPORT==== 10-Jun-2013::20:38:53 ===
I(<0.1026.0>:ejabberd_listener:166) : Reusing listening port for 5222

=INFO REPORT==== 10-Jun-2013::20:38:53 ===
I(<0.1027.0>:ejabberd_listener:166) : Reusing listening port for 5269

=INFO REPORT==== 10-Jun-2013::20:38:53 ===
I(<0.1028.0>:ejabberd_listener:166) : Reusing listening port for 5280
ok
(ejabberd@srv2.example.org)3>
=INFO REPORT==== 10-Jun-2013::20:38:53 ===
I(<0.628.0>:ejabberd_app:72) : ejabberd 2.1.10 is started in the node 'ejabberd@srv2.example.org'

Exit your erlang shell by pressing CTRL+C twice. Now stop ejabberd and start it again:

/etc/init.d/ejabberd restart

You can now check in the admin web interface whether the cluster join succeeded:

http://srv1.example.org:5280/admin/nodes/

Ejabberd nodes

If it shows the other node, you are finished. If not, check whether all the steps worked and see the section below on troubleshooting.

Repeat the above steps for every node you want to add. You can add as many nodes as you want.

Errors when clustering

When setting up your cluster you might run into errors. Below are my notes for the errors I found.

  • ejabberd restart does not restart epmd (erlang daemon)
    • overkill solution: killall -u ejabberd
  • ejabberd gives hostname errors
    • make sure the hostname is set correctly (hostname srv1.example.org)
  • ejabberd gives inconsistent database errors
    • backup the erlang cookie (/var/lib/ejabberd/.erlang.cookie) and then remove the contents of the /var/lib/ejabberd folder so that mnesia rebuilds its tables.
  • ejabberd reports “Connection attempt from disallowed node”
    • make sure the erlang cookie is correct (/var/lib/ejabberd/.erlang.cookie). Set vim in insert mode before pasting…

DNS SRV Records and Federation

The DNS SRV record is used both by chat clients to find the right server address and by other XMPP servers for federation. Example: Alice configures her XMPP client with the address alice@example.org. Her chat client looks up the SRV record and knows the chat server to connect to is chat.example.org. Bob sets up his client with the address bob@bobsbussiness.com and adds Alice as a contact. The XMPP server at bobsbussiness.com looks up the SRV record and knows that it should initiate a server-to-server connection to chat.example.org to federate and let Bob chat with Alice.

The BIND 9 config looks like this:

; XMPP
_xmpp-client._tcp                       IN SRV 5 0 5222 chat.example.org.
_xmpp-server._tcp                       IN SRV 5 0 5269 chat.example.org.
_jabber._tcp                            IN SRV 5 0 5269 chat.example.org.

These are your basic SRV records: the client port, the server-to-server port, and legacy Jabber. If you have hosted DNS, either enter them in your panel or consult your service provider.

You can use the following dig query to verify your SRV records:

dig _xmpp-client._tcp.example.org SRV
dig _xmpp-server._tcp.example.org SRV

Or if you are on Windows and have to use nslookup:

nslookup -querytype=SRV _xmpp-client._tcp.example.org
nslookup -querytype=SRV _xmpp-server._tcp.example.org

If you get a result like this then you are set up correctly:

;; QUESTION SECTION:
;_xmpp-client._tcp.raymii.org.  IN      SRV

;; ANSWER SECTION:
_xmpp-client._tcp.raymii.org. 3600 IN   SRV     5 0 5222 chat.raymii.org.

The actual record for chat.raymii.org in my case consists of multiple A records:

;; ADDITIONAL SECTION:
chat.raymii.org.        3600    IN      A       84.200.77.167
chat.raymii.org.        3600    IN      A       205.185.117.74
chat.raymii.org.        3600    IN      A       205.185.124.11

But if you run a single node this can also be a CNAME or just one A/AAAA record.

Final testing

To test if it all worked you can add the Duck Duck Go XMPP bot. If you can add it and chat with it flawlessly, then you have done everything correctly. The address to add is im@ddg.gg.

Ejabberd SSL Certificate

This tutorial shows you how to set up an SSL certificate for use with ejabberd. It covers the creation of the Certificate Signing Request, the preparation of the certificate for use with ejabberd, and the installation of the certificate.

This tutorial assumes a working ejabberd installation. It is tested on Debian and Ubuntu, but should work on any ejabberd installation.

Steps and Explanation

To get an SSL certificate working on ejabberd we need to do a few things:

  • Create a Certificate Signing Request (CSR) and a private key
  • Submit the CSR to a Certificate Authority, let them sign it and give you a certificate
  • Combine the certificate, private key (and chain) into an ejabberd-compatible PEM file
  • Install the certificate in ejabberd

With a certificate we can secure our XMPP connections and conversations, making it much harder for others to spy on them. Combined with OTR this enables a very secure channel for conversation.

Creating the Certificate Signing Request

Create a folder to store all the files and cd to that:

mkdir -p ~/Certificates/xmpp
cd ~/Certificates/xmpp

Now use OpenSSL to create both a private key and a CSR. The first command does this interactively, the second non-interactively. Make sure to set the correct values; your Common Name (CN) should be your XMPP server hostname:

Interactive:

openssl req -nodes -newkey rsa:2048 -keyout private.key -out CSR.csr

Non-interactive:

openssl req -nodes -newkey rsa:2048 -keyout private.key -out CSR.csr -subj "/C=NL/ST=State/L=City/O=Company Name/OU=Department/CN=chat.example.org"

This will result in two files, CSR.csr and private.key. You now have to submit the CSR to a Certificate Authority. This can be any CA; I myself have good experiences with Xolphin, but there are others like Digicert and Verisign.

Once you have submitted your CSR and have gotten a Certificate you can continue.

Creating the ejabberd certificate

Once you have all the files (private key, certificate and certificate chain), put them in one folder and continue. We are going to cat all the required files into an ejabberd.pem file.

This needs to happen in a specific order:

  • private key
  • certificate
  • chains

So adapt the following commands to your filenames and create the pem file:

cat private.key >> ejabberd.pem
cat certificate.pem >> ejabberd.pem
cat chain-1.pem >> ejabberd.pem
cat chain-2.pem >> ejabberd.pem

If that all works out continue.

Installing the certificate in ejabberd

Copy the certificate to all your ejabberd servers:

scp ejabberd.pem user@srv1.example.org:

Then place the certificate in the /etc/ejabberd folder:

cp ejabberd.pem /etc/ejabberd/ejabberd.pem

Now change the ejabberd config to point to the new certificate:

vim /etc/ejabberd/ejabberd.cfg

Check/change the following to point to the new certificate:

[...]
{listen, [
  {5222, ejabberd_c2s, [
    {access, c2s},
    {shaper, c2s_shaper},
    {max_stanza_size, 65536},
    starttls,
    {certfile, "/etc/ejabberd/ejabberd.pem"}
  ]},
[...]
{s2s_use_starttls, true}.
{s2s_certfile, "/etc/ejabberd/ejabberd.pem"}.
[...]

Afterwards restart ejabberd:

/etc/init.d/ejabberd restart

You can now use any XMPP client to connect with SSL/TLS to see if it works.

Persistent reverse (NAT bypassing) SSH tunnel access with autossh


Situation: you are in a restricted network (company, hotel, hospital) where you have a “server” which you want to access from outside that network. You cannot forward ports to that machine, but you can ssh out (to your own server). This tutorial solves that problem.

You need another server to which you set up a persistent SSH connection with a reverse tunnel. When you need to access the restricted machine, you ssh into the other server, and from there you ssh through the tunnel to the restricted machine.

Make sure you have permission to do this from the administrators. They generally don’t like holes in the firewall/security. They don’t block it for no reason.

Naming convention:

restricted machine: the machine inside the restricted network
middleman: the machine to which the restricted machine sets up the tunnel, and from which you access the restricted machine

Install the tools

We are going to use autossh, which is in the Debian/Ubuntu repositories. Make sure you also install the OpenSSH server.

Execute on: restricted machine.

sudo apt-get install autossh ssh

Create your ssh-key.

Execute on: restricted machine.

ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): */root/.ssh/nopwd*
Enter passphrase (empty for no passphrase): *leave empty*
Enter same passphrase again: *leave empty*

Copy your key to the middleman machine

Execute on: restricted machine.

ssh-copy-id -i .ssh/nopwd.pub "-p 2222 remy@middleman"

(replace remy@middleman with your username and middleman ssh server. Also note how you can give a custom port in the ssh-copy-id.)

Test the connection with autossh

Execute on: restricted machine

autossh -M 10984 -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i /root/.ssh/nopwd -R 6666:localhost:22 remy@middleman -p 2222

Explanation of the options:

  • -M 10984: autossh monitoring port.
  • -o “PubkeyAuthentication=yes”: authenticate with ssh-keys instead of password.
  • -o “PasswordAuthentication=no”: explicitly disable password authentication.
  • -i /root/.ssh/nopwd: the location of the ssh key to use.
  • -R 6666:localhost:22: reverse tunnel. Forwards all traffic to port 6666 on the middleman to port 22 on the restricted machine.
  • remy@middleman -p 2222: ssh user remy, ssh host middleman, ssh port 2222

If this all goes well you should be logged in to the middleman host without being asked for a password. You might be asked whether you want to add the ssh host key; say yes to this.

If it does not go well, check the permissions on the ssh key (should be 600), and make sure you have the correct values in the autossh command.

SSH back in the restricted host

From another machine (outside the restricted network preferably) ssh into the middleman host.

Execute on: other machine

ssh -p 2222 remy@middleman

From the middleman, ssh into the restricted host via the reverse tunnel we created:

Execute on: middleman

ssh -p 6666 remy@127.0.0.1

If all goes well, you should see a login prompt for the restricted machine. Enter your password and you are in. If this does not work, check the values in the command and the ssh configs, and make sure you executed the steps above correctly.

Enable the tunnel on boot

We are going to edit the /etc/rc.local file. This script normally does nothing, but gets executed at boot. If you make any errors in this script, your machine might not boot so make sure to do this correctly.

Execute on: restricted machine

sudo nano /etc/rc.local

Add the following line (adjusted to your setup) before the final exit 0:

autossh -M 10984 -N -f -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i /root/.ssh/nopwd -R 6666:localhost:22 remy@middleman -p 2222 &

We have three new things in this command:

  • -N: Do not execute a command on the middleman machine
  • -f: fork into the background
  • &: Execute this command but do not wait for output or an exit code. If this is not added, your machine might hang at boot.
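On more recent Ubuntu/Debian releases that use systemd, a service unit is a more robust alternative to rc.local, since systemd restarts the tunnel if it dies. A sketch, assuming the same key path, ports and hosts as above (the unit name and description are my own):

```ini
# /etc/systemd/system/autossh-tunnel.service (hypothetical unit name)
[Unit]
Description=Persistent reverse SSH tunnel to middleman
After=network-online.target

[Service]
# -M 0 disables autossh's monitor port; systemd handles restarts instead.
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" \
    -i /root/.ssh/nopwd -R 6666:localhost:22 remy@middleman -p 2222
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now autossh-tunnel. With -M 0 you may also want ServerAliveInterval/ServerAliveCountMax options so dead connections are detected.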

Save the file, and make it executable:

Execute on: restricted machine

sudo chmod +x /etc/rc.local

And test it:

Execute on: restricted machine

sudo /etc/rc.local

If you get your regular prompt back without any output, you’ve done it correctly.

Forward a website, not ssh

You might want to forward a website on the restricted host. Follow the above tutorial, but change the autossh command:

autossh -M 10984 -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i /root/.ssh/nopwd -R 8888:localhost:80 remy@middleman -p 2222

  • -R 8888:localhost:80: this forwards all traffic on port 8888 on host middleman to port 80 on host restrictedhost (port 80 = the website).

Other host inside restricted network

You can also forward ports from other restricted hosts in the network:

autossh -M 10984 -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i /root/.ssh/nopwd -R 7777:host2.restrictednetwork:22 remy@middleman -p 2222

This will forward all traffic to port 7777 on host middleman, via host restrictedhost, to host host2.restrictednetwork port 22.
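To avoid typing the two-step hop each time, the outside machine can encode it in ~/.ssh/config. A sketch, assuming OpenSSH 7.3 or newer for ProxyJump; the host aliases are my own, the hostnames and user are the tutorial's placeholders:

```
# ~/.ssh/config on the outside machine
Host middleman
    HostName middleman
    Port 2222
    User remy

Host restricted
    # The reverse tunnel endpoint lives on the middleman's loopback interface.
    HostName 127.0.0.1
    Port 6666
    User remy
    ProxyJump middleman
```

After this, a plain `ssh restricted` goes straight through the tunnel.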

Self Hosted CryptoCat – Secure self hosted multiuser webchat and SSL Certificate Setup


This is a guide on setting up a self hosted secure multiuser webchat service with CryptoCat. It covers the set up of ejabberd, nginx and the web interface for CryptoCat. It supports secure encrypted group chat, secure encrypted private chat and file and photo sharing.

There were/are some issues with the encryption provided by CryptoCat. These seem to be fixed now, but still, beware.

This tutorial is tested on Ubuntu 12.04.

Set up a DNS record

Make sure you set up two DNS A records to your chat server. One should be for example chat.sparklingclouds.nl and the other is for the conferencing: conference.chat.sparklingclouds.nl. You should contact your provider if you need help with this.

In the configuration files, you should replace chat.sparklingclouds.nl with your own domain name.

Install required packages

First we install the required packages:

apt-get install ejabberd nginx vim git

ejabberd configuration

Edit the ejabberd configuration file located at:

/etc/ejabberd/ejabberd.cfg

And place the following contents in it, replacing chat.sparklingclouds.nl with your own domain:

%% Hostname
{hosts, ["chat.sparklingclouds.nl"]}.

%% Logging
{loglevel, 0}.

{listen,
 [
  {5222, ejabberd_c2s, [
            {access, c2s},
            {shaper, c2s_shaper},
            {max_stanza_size, infinite},
                        %%zlib,
            starttls, {certfile, "/etc/ejabberd/ejabberd.pem"}
               ]},

  {5280, ejabberd_http, [
             http_bind,
             http_poll
            ]}
 ]}.

{s2s_use_starttls, true}.

{s2s_certfile, "/etc/ejabberd/ejabberd.pem"}.

{auth_method, internal}.
{auth_password_format, scram}.

{shaper, normal, {maxrate, 500000000}}.

{shaper, fast, {maxrate, 500000000}}.

{acl, local, {user_regexp, ""}}.

{access, max_user_sessions, [{10, all}]}.

{access, max_user_offline_messages, [{5000, admin}, {100, all}]}. 

{access, c2s, [{deny, blocked},
           {allow, all}]}.

{access, c2s_shaper, [{none, admin},
              {normal, all}]}.

{access, s2s_shaper, [{fast, all}]}.

{access, announce, [{allow, admin}]}.

{access, configure, [{allow, admin}]}.

{access, muc_admin, [{allow, admin}]}.

{access, muc, [{allow, all}]}.

{access, register, [{allow, all}]}.

{registration_timeout, infinity}.

{language, "en"}.

{modules,
 [
  {mod_privacy,  []},
  {mod_ping, []},
  {mod_private,  []},
  {mod_http_bind, []},
  {mod_admin_extra, []},
  {mod_muc,      [
          {host, "conference.@HOST@"},
          {access, muc},
          {access_create, muc},
          {access_persistent, muc},
          {access_admin, muc_admin},
          {max_users, 500},
          {default_room_options, [
            {allow_change_subj, false},
            {allow_private_messages, true},
            {allow_query_users, true},
            {allow_user_invites, false},
            {anonymous, true},
            {logging, false},
            {members_by_default, false},
            {members_only, false},
            {moderated, false},
            {password_protected, false},
            {persistent, false},
            {public, false},
            {public_list, true}
              ]}
                 ]},
  {mod_register, [
          {welcome_message, {"Welcome!"}},
          {access, register}
         ]}
 ]}.

NGINX Configuration

We need an SSL certificate for the web server. You can generate one yourself using the following command:

cd /etc/ssl/certs
openssl req -nodes -x509 -newkey rsa:4096 -keyout key.pem -out cert.crt -days 365

Or generate a CSR and have it signed by an “official” CA like Verisign or DigiCert:

cd /etc/ssl/certs
openssl req -nodes -newkey rsa:4096 -keyout private.key -out CSR.csr 
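Either way, you can sanity-check the resulting certificate with openssl before wiring it into NGINX. A sketch using a throwaway self-signed certificate in a scratch directory (filenames mirror the tutorial):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Throwaway self-signed cert, non-interactive (-subj skips the prompts).
openssl req -nodes -x509 -newkey rsa:2048 -keyout key.pem -out cert.crt \
    -days 365 -subj "/CN=chat.sparklingclouds.nl" 2>/dev/null

# Show who the certificate is for and when it expires.
openssl x509 -in cert.crt -noout -subject -dates
```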

When the certificate is in place you can continue to configure NGINX.

Edit the file or create a new virtual host.

vim /etc/nginx/sites-enabled/default

And place the following contents in it, replacing chat.sparklingclouds.nl with your own domain:

server {
    listen 80;
    listen [::]:80 default ipv6only=on;

    server_name chat.sparklingclouds.nl;
    rewrite     ^   https://$server_name$request_uri? permanent;

    add_header Strict-Transport-Security max-age=31536000;

    location / {
            root /var/www;
            index index.html index.htm;
    }
}

# HTTPS server
server {
    listen 443;
    server_name chat.sparklingclouds.nl;

    add_header Strict-Transport-Security max-age=31536000;

    ssl  on;
    ssl_certificate  /etc/ssl/certs/cert.crt;
    ssl_certificate_key  /etc/ssl/certs/key.pem;

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;

    ssl_protocols TLSv1.1 TLSv1.2;
            ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:RC4:HIGH:!MD5:!aNULL:!EDH;
            ssl_prefer_server_ciphers on;

    location / {
        root /var/www;
        index index.html index.htm;
    }

    location /http-bind {
        proxy_buffering off;
        tcp_nodelay on;
        keepalive_timeout 55;
        proxy_pass http://127.0.0.1:5280/http-bind;
    }
}

Save it and restart NGINX:

/etc/init.d/nginx restart

Cronjob for ejabberd

This is important: it cleans up unused ejabberd accounts. Create a new crontab like so:

crontab -e

And place the following in it:

1 1 * * * ejabberdctl delete-old-users 1

That way the ejabberd server gets cleaned up once every 24 hours (delete-old-users 1 removes accounts that have not been used for one day).

Web Frontend

Note that you can already use your own server with the CryptoCat frontend via https://crypto.cat. We are going to set up our own frontend on our webserver so we don’t need crypto.cat.

Setting up a web frontend is not recommended by the cryptocat developers. See the comment below, and read the full thread on this Reddit post

When you host Cryptocat as a website, this means that every time someone wants to use it, they technically will need to re-download the entire code by visiting the website. This means that every use needs a full re-download of the Cryptocat code. By centralizing the code redistribution in a "web front-end" and making it necessary for everyone to redownload the code every time, you create an opportunity for malicious code poisoning by the host, or code injection by a third party. This is why the only recommended Cryptocat download is the browser extension from the official website, which downloads only once as opposed to every time (just like a regular desktop application), and is authenticated by Cryptocat's development team as genuine.  
Kaepora - 12-11-2013 on Reddit

Take that into consideration when setting up the frontend. A use case could be an internal cryptocat chat service where people don’t need to change the default server address and such.

First get the source code:

cd /tmp
git clone https://github.com/cryptocat/cryptocat.git

Then place it in the right folder:

cp -r cryptocat/src/core /var/www/

Edit the config file to use your own server:

cd /var/www
vim js/cryptocat.js

And place the following contents in it, replacing chat.sparklingclouds.nl with your own domain:

/* Configuration */
// Domain name to connect to for XMPP.
var defaultDomain = 'chat.sparklingclouds.nl'
// Address of the XMPP MUC server.
var defaultConferenceServer = 'conference.chat.sparklingclouds.nl'
// BOSH is served over an HTTPS proxy for better security and availability.
var defaultBOSH = 'https://chat.sparklingclouds.nl/http-bind/'

Now save the file.

You are finished now. Go to your website and test the chat out.

Ejabberd SSL Certificate

This tutorial shows you how to set up an SSL Certificate for use with Ejabberd. It covers both the creation of the Certificate Signing Request, the preparing of the certificate for use with Ejabberd and the installation of the certificate.

This tutorial assumes a working ejabberd installation. It is tested on Debian and Ubuntu, but should work on any ejabberd installation.

Steps and Explanation

To get an SSL certificate working on ejabberd we need to do a few things:

  • Create a Certificate Signing Request (CSR) and a private key
  • Submit the CSR to a Certificate Authority, have them sign it and give you a certificate
  • Combine the certificate, private key (and chain) into an ejabberd-compatible PEM file
  • Install the certificate in ejabberd

With a certificate we can secure our XMPP connection and conversations. This way it is much harder for others to spy on your conversations. Combined with OTR this enables a super secure channel for conversation.

Creating the Certificate Signing Request

Create a folder to store all the files and cd to that:

mkdir -p ~/Certificates/xmpp
cd ~/Certificates/xmpp

Now use OpenSSL to create both a Private Key and a CSR. The first command will do it interactively, the second command will do it non-interactive. Make sure to set the correct values, your Common Name (CN) should be your XMPP server URL:

Interactive:

openssl req -nodes -newkey rsa:2048 -keyout private.key -out CSR.csr

Non-interactive:

openssl req -nodes -newkey rsa:2048 -keyout private.key -out CSR.csr -subj "/C=NL/ST=State/L=City/O=Company Name/OU=Department/CN=chat.example.org"

This will result in two files, CSR.csr and private.key. You now have to submit the CSR to a Certificate Authority. This can be any CA, I myself have good experiences with Xolphin, but there are others like Digicert and Verisign.
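Before submitting, it is worth confirming the CSR carries the Common Name you intend. A quick sketch, re-creating a CSR in a scratch directory:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Non-interactive CSR, as above.
openssl req -nodes -newkey rsa:2048 -keyout private.key -out CSR.csr \
    -subj "/C=NL/ST=State/L=City/O=Company Name/OU=Department/CN=chat.example.org" 2>/dev/null

# Print the subject exactly as the CA will see it.
openssl req -in CSR.csr -noout -subject
```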

Once you have submitted your CSR and have gotten a Certificate you can continue.

Creating the ejabberd certificate

Once you have all the files (private key, certificate and certificate chain), put them all in a folder and continue. We are going to cat all the required files into an ejabberd.pem file.

This needs to happen in a specific order:

  • private key
  • certificate
  • chains

So adapt the following commands to your filenames and create the pem file:

cat private.key >> ejabberd.pem
cat certificate.pem >> ejabberd.pem
cat chain-1.pem >> ejabberd.pem
cat chain-2.pem >> ejabberd.pem
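A quick way to verify the assembly came out in the right order, sketched with a throwaway key and self-signed certificate standing in for the CA-issued files:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Stand-ins for the CA-issued files.
openssl req -nodes -x509 -newkey rsa:2048 -keyout private.key \
    -out certificate.pem -days 365 -subj "/CN=chat.example.org" 2>/dev/null

# Assemble in the order ejabberd expects: key first, then certificate(s).
cat private.key certificate.pem > ejabberd.pem

# The combined file must start with the private key block.
head -1 ejabberd.pem
```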

If that all works out continue.

Installing the certificate in ejabberd

Copy the certificate to all your ejabberd servers:

scp ejabberd.pem user@srv1.example.org:

Then place the certificate in the /etc/ejabberd folder:

cp ejabberd.pem /etc/ejabberd/ejabberd.pem

Now change the ejabberd config to point to the new certificate:

vim /etc/ejabberd/ejabberd.cfg

Check/change the following to point to the new certificate:

[…]
{listen,
 [
  {5222, ejabberd_c2s, [
            {access, c2s},
            {shaper, c2s_shaper},
            {max_stanza_size, 65536},
            starttls, {certfile, "/etc/ejabberd/ejabberd.pem"}
               ]},
[…]
{s2s_use_starttls, true}.
{s2s_certfile, "/etc/ejabberd/ejabberd.pem"}.
[…]

Afterwards restart ejabberd:

/etc/init.d/ejabberd restart

You can now use any XMPP client to connect with SSL/TLS to see if it works.

How to install Virtualmin GPL on Ubuntu Server


My installation was done using Ubuntu Server 14.04.1 LTS.

Connect to your fresh install Ubuntu Server using SSH.

ssh -l lab1 123.123.123.123

I prefer to run the installation process as root, so I will have to create a password for the root user on the Ubuntu server. The Virtualmin install script also needs to be run as root.

Create root password.

sudo passwd root

Enter the password of lab1 (replace lab1 with your account).

Then enter new UNIX password for root.

Then change to root user.

su root

Enter the password of your user account when prompted.

Then move to the source directory and download the install script.

cd /usr/local/src
wget http://software.virtualmin.com/gpl/scripts/install.sh

Make the script executable.

chmod +x install.sh
 ./install.sh

If your OS is among the supported OSes listed, you’re good to proceed.

os supported

Enter y.

When asked for a fully qualified hostname enter your hostname.

lab1.joealdeguer.com

Depending on how fast your server is, this could take a few minutes or longer.

When you get to this point and see the message Succeeded, Virtualmin was installed successfully.

install success

Run the following to see if there are any upgrades for the server.

apt-get update && apt-get upgrade
Enter y.
Reboot

Check for more upgrades.

apt-get update && apt-get dist-upgrade
Enter y.
Reboot

Install the following packages after installing Virtualmin.

apt-get install fetchmail unzip zip pbzip2 lvm2 ntpdate curl rsync vim iftop smartmontools libjpeg-turbo-progs pvrg-jpeg atop bsd-mailx mailutils optipng pngnq

And more.

apt-get install php5-curl php5-dev php5-gd php5-idn php5-imagick php5-imap php5-mcrypt php5-memcache php5-mhash php5-ming php5-mysql php5-ps php5-pspell php5-recode php5-snmp php5-sqlite php5-tidy php5-xmlrpc php5-xsl php5-odbc php5-dev apache2-prefork-dev aspell-en imagemagick make memcached g++ graphicsmagick whowatch zabbix-agent

If you use WordPress like I do, there are some plugins which require software that may not be available through the normal Ubuntu or Debian repositories. Like for instance pngout, which can be downloaded from here and then uploaded to /usr/bin.

Do it one more time.

apt-get update && apt-get upgrade

Then do this to remove no longer needed packages.

apt-get autoremove
Reboot

When there are no more upgrades proceed below.

To complete installation go to URL of the newly installed Virtualmin GPL.

https://123.123.123.123:10000

On Firefox click to add exception to accept self-signed certificate.  Then confirm.

self sign cert

Login using root.

Click Next.
Pre-load Virtualmin Libraries: Y
Run email domain lookup server: Y
Run ClamAV server scanner: Y
Run SpamAssassin server filter: Y
Run MySQL database server: Y
Enter MySQL password. Click next

You can create the MySQL root password using this site.

pass generator

MySQL configuration size: Leave default settings.

Click next.

Skip check for resolvability (If DNS is not yet setup for server’s fully qualified domain name)

Password storage mode: Only store hashed passwords.

Click next, next.

Install updates if Virtualmin shows any.

virtualmin updates

Click System Information menu.

Click re-check and refresh configuration. I got this error after running check.

virtualmin configuration error

Easy enough to fix. I just have to add 127.0.0.1 to the list of DNS servers, as the error suggests. If you missed that part, go to Webmin > Networking > Network Configuration > Host and DNS client. Click save, then apply the configuration.

DNS settings

Another error.

Another error

I don’t need to use mailman, so I disabled it by going to Virtualmin > System Settings > Features and Plugins and unchecking mailman. Click save.

disable mailman

When you see the Virtualmin status page you’re ready to go.

virtualmin dashboard

At this point Virtualmin should be set up and ready for use, as indicated by two separate top menus on the left: Virtualmin and Webmin. Virtualmin is primarily used for managing virtual websites. Webmin is used to manage the Linux server using point and click instead of only command line tools.

virtualmin and webmin menu

Before I start creating any virtual website accounts I like to tighten up security by doing the following.

Changing the administrator account and deleting the root account created by the install script: go to Webmin > Webmin Users > click on root > clone. Fill out the account details. I prefer a hard-to-guess username for the Webmin account on top of a hard-to-guess password, and I also limit which IP addresses can connect to the admin port.

Note: to find your IP address click here.

Click create.

cloning root

Your user account appears as one of the Webmin users. Click on it, then switch to user, then refresh the page. I am now logged in as user admin P3p0t. The next step is to delete the root account: go to Webmin > Webmin Users > delete root.

Next is to stop brute force attempts directed at Webmin. Under Webmin Configuration > Authentication, limit failed login attempts to 3 and block hosts and users for 3 hours. Click save.

block failed login attempts

Webmin’s default port is 10000. I prefer to change this to something else as well; you can pick your own unused port under Webmin Configuration > Ports and Addresses.

change webmin port

I will also change the default SSH port to something other than 22, by going to Servers > SSH Server > Networking. Save and apply changes.

ssh port change

Check my other article to tighten your security here.

Delete Virtualmin GPL install script when everything has been setup.

rm /usr/local/src/install.sh

Enjoy the best free Open Source cpanel!

How to install CSF firewall on your VPS


How to install CSF firewall on your VPS

CSF is by far the easiest firewall script I have worked with to date, and it even comes with a Webmin module, a web interface front-end to manage the firewall configuration. If you’re serious about your server security, add another layer of defense by installing CSF, the user-friendly server firewall. CSF has been tested to work with different Virtual Private Servers, but I would suggest using a Xen or KVM VPS instead of an OpenVZ one, so you have all the iptables modules needed for CSF to work correctly. On a personal note, after using CSF on my servers I have noticed a significant reduction in brute force attempts directed against FTP or SASL.

My CSF installation was done on an Ubuntu 12.04 LTS and Debian 7 Wheezy.

Install CSF Firewall

cd /usr/local/src

wget http://download.configserver.com/csf.tgz

tar -xzf csf.tgz

cd csf

sh install.sh

Iptables Module Test

Do a test to make sure you have the needed iptables modules installed for CSF to work. This is one of the many reasons to use a Xen or KVM VPS: you don’t run into missing iptables modules, as you might with an OpenVZ-type VPS.

perl /usr/local/csf/bin/csftest.pl

You should get something like this.

iptables module test

Remove Advanced Policy Firewall & Brute Force Detection

Run this script if you already have the APF & BFD firewalls installed, like I do.

sh /usr/local/csf/bin/remove_apf_bfd.sh

Installing CSF Webmin Module

Install the Webmin module to manage the firewall through a web interface. Since I already have Webmin installed on my server, all I had to do was go to Webmin > Webmin Configuration > Webmin Modules > Install from local file > browse to /usr/local/csf/csfwebmin.tgz.

Click install.

install csf webmin module

CSF Firewall Webmin Menu

After it has been installed there will now be a menu called ConfigServer Security & Firewall under the System menu.

csf webmin module

CSF Basic Security Test

CSF can perform a basic security check on your server with suggestions on how to fix any issues found.

Click  Check Server Security.

csf server security check

These were the results I got.  So I have some work to do.

csf security check results

Green indicator Firewall is running

It is very important to keep running the test and fixing any issues found by the check until you get the OK.

One of the suggested fixes is to enable SSL for the CSF upgrade connection.

You can install the LWP perl module using Webmin’s Perl Modules page.

perl module

Then edit the csf.conf file.

vi /etc/csf/csf.conf

ssl upgrade

green indicator

Firewall Configuration

Clicking on Firewall Configuration to make your edits.

firewall configuration

To quickly jump to sections of the firewall settings you can choose from the drop down menu.

firewall configuration menu

Before I start changing settings in the CSF firewall Webmin module, I add my current IP address so I don’t lock myself out, by clicking Quick Allow.

allow ip through

Or you could set the CSF firewall to test mode by setting the values like below.

csf testmode

Clicking on Firewall Configuration next to start managing the firewall configuration script.

firewall configuration

Using the recommended setting for RESTRICT_SYSLOG.

restrict_syslog recommended setting

Create a group for Syslog.

syslog group

Restricted UI set to the recommended setting.

restrict ui setting

Set the auto update to on so the cron script can check daily for newer versions of CSF.

auto updates on

If an update becomes available, it will appear as below. You can view details of the upgrade by clicking View ChangeLog.

Clicking Upgrade csf will perform the upgrade.

csf update available

Allow the ports that may receive and send connections; otherwise those services will not be able to communicate.

allowed ports

I ran into an issue where my outbound SSH connections were being blocked by the firewall: I had forgotten to add the new port number to the outbound TCP ports, as I am using a non-default SSH port.

outbound ssh port

Enable or disable ping replies.

allow ping replies

How many IPs to keep in the deny lists.  Change this setting depending on your server resources.

deny ip limit

The following settings are enabled so LFD can check for login failures to ban. The setting also checks that CSF has not been stopped, so it can be restarted.

lfd set to check for login failures

Set the default drop target for connections matching a rule to DROP. This will cause anyone trying to port scan your server to hang.

drop target

I like to enable logging of dropped connections, in case I need to see which IPs got blocked.

drop logging

How to block countries from accessing your server

CSF makes this very easy to do compared to other scripts I have used in the past. You just need to add the country codes separated by commas.

block countries

Blocking a specific IP address or a network

I have used this feature a lot. Whenever I get phishing emails or lots of spam coming from an IP address, or from IP addresses in the same network block, I add the IP address or network address here with a comment. Any IP address added here will be permanently blocked. I have used this online whois to determine who owns the IP address and which ISP provides hosting.

deny ip

Login failure blocking, when enabled, will trigger LFD (the Login Failure Daemon) to block any host that reaches the set number of failed login attempts.

lf trigger

When you have LFD enabled you will sometimes need to add IP addresses you own here, so you don’t get locked out if you mistype a password. Click edit, then add your IP address or network, then restart LFD.

lfd ignore list

Block lists

Let us enable these block lists from Spamhaus, Dshield, Honeypot, Tor nodes, etc.

Click lfd Blocklist and uncomment the blocklists you want to use. Using this has reduced intrusion attempts against my server from compromised hosts. What a great option to have on a firewall, and CSF makes it incredibly easy to enable. Before you enable these blocklists or country blocking, consider whether your server has enough resources to handle the load. My VPSes typically have more than 3 GB of RAM and at least 4 CPUs, so I am able to use all the blocklist rules with no noticeable performance hit.

Don’t forget to click change to apply the new settings.

vi /etc/csf/csf.blocklist

blocklist

If you’re curious to see what rules your CSF firewall has loaded, click on view iptables rules. Depending on what you have enabled, be prepared to scroll for a long time. This is just a sample of mine, which shows connections from China being blocked. I had to snip it because the output was very long.

china blocked

If you want to see connections being dropped in real time, you can do so by clicking watch system logs and then choosing kern.log from the drop down.

watch system logs

dropped connections

If you want to permanently block an IP or IP range, click Firewall Deny IPs. Enter the IPs or CIDR addresses, one per line.

Click change to apply configuration changes.

block ip permanently

block ip list

Login Failure Daemon (LFD)

The LFD daemon is a process which continuously scans the logs for failed login attempts; it immediately blocks the offending host when a set number of failed attempts per IP is reached. It can also detect distributed attacks. Compared to Fail2ban, which I used before, the resource consumption of LFD is much lower.

Very important! If you don’t want your home IP address blocked by LFD due to failed login attempts (you making SSH, IMAP, etc. connections while typing the wrong password), you will have to add it to csf.ignore. Add the IPs you don’t want blocked, one per line. I learned this the hard way!

From the web interface choose from the drop down which LFD file to edit to add IP addresses you never want locked out.

lfd ignore web interface

vi /etc/csf/csf.ignore
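The file takes one IP or CIDR block per line, with optional comments after a #. A sketch with documentation-range addresses standing in for your own:

```
# /etc/csf/csf.ignore — IPs that lfd must never block (example addresses)
203.0.113.10        # home IP address
198.51.100.0/24     # office network
```

Restart lfd afterwards so the change takes effect.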

If you end up blocking yourself, you will have to log in at the console and stop LFD through init:
/etc/init.d/lfd stop

Check if Syslog is running

syslog is running check

ConfigServer Security & Firewall & LFD Brute Force Detection Blocking: Specific Settings for Ubuntu & Debian

For LFD to block failed attempts against ProFTPD and SASL on Ubuntu & Debian, the following log paths in csf.conf have to be changed.

vi /etc/csf/csf.conf
HTACCESS_LOG = "/var/log/apache2/error.log"

MODSEC_LOG = "/var/log/apache2/error.log"

SSHD_LOG = "/var/log/auth.log"

SU_LOG = "/var/log/messages"

FTPD_LOG = "/var/log/proftpd/proftpd.log"

SMTPAUTH_LOG = "/var/log/mail.log"

POP3D_LOG = "/var/log/mail.log"

IMAPD_LOG = "/var/log/mail.log"

IPTABLES_LOG = "/var/log/syslog"

SUHOSIN_LOG = "/var/log/syslog"

BIND_LOG = "/var/log/syslog"

SYSLOG_LOG = "/var/log/syslog"

WEBMIN_LOG = "/var/log/auth.log"

Then on the CUSTOM LOG.

CUSTOM1_LOG = "/var/log/mail.log"
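After pointing csf at these files, it can be worth checking that each configured log actually exists on your system, since lfd cannot watch a missing file. A small sketch, using the paths configured above:

```shell
# Report any configured log file that does not exist on this system.
for f in /var/log/auth.log /var/log/mail.log /var/log/syslog \
         /var/log/messages /var/log/apache2/error.log \
         /var/log/proftpd/proftpd.log; do
    [ -f "$f" ] || echo "missing: $f"
done
```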

Then you will need to add the regex to catch the failed attempts against SASL.

vi /usr/local/csf/bin/regex.custom.pm

Add the following code between the “Do not edit before this point” and “Do not edit beyond this point” markers. The values after the “mysaslmatch” identifier work as follows: “1” is the number of failed attempts that triggers a blocking iptables rule; the next value (“25”) is the port the offender gets blocked on, and you can give multiple ports separated by commas; “6000” is the time in seconds the host will be kept in the deny list.

if (($lgfile eq $config{CUSTOM1_LOG}) and ($line =~ /^\S+\s+\d+\s+\S+ \S+ postfix\/smtpd\[\d+\]: warning:.*\[(\d+\.\d+\.\d+\.\d+)\]: SASL [A-Z]*? authentication failed/)) {

return ("Failed SASL login from",$1,"mysaslmatch","1","25","6000");

}

Restart the CSF firewall to apply settings.

csf -r

As soon as I had the SASL custom regex applied, an offending host was caught abusing SASL; the log was emailed to me. It has been so effective at blocking brute force attempts targeted against my FTP and SASL services that I decided to do away with Fail2ban.

sasl blocked host

Checking the Temporary IP Entries came up with the following results.

temporary block ips

From this window you can easily unblock or permanently ban an IP by clicking the icons.  Any hosts added to this list will be banned accessing any ports until the set banned time limit is reached.

blocked ip gui

If you want to allow only specific IPs from connecting to your SSH port you could do so by removing SSH port 22 in the IPv4 port settings.

Allow specific ips from connecting

Then adding the IP addresses you want to be able to connect to your SSH port in.

vi /etc/csf/csf.allow

###############################################################################
# Copyright 2006-2014, Way to the Web Limited
# URL: http://www.configserver.com
# Email: sales@waytotheweb.com
###############################################################################
# The following IP addresses will be allowed through iptables.
# One IP address per line.
# CIDR addressing allowed with a quaded IP (e.g. 192.168.254.0/24).
# Only list IP addresses, not domain names (they will be ignored)
#
# Advanced port+ip filtering allowed with the following format
# tcp/udp|in/out|s/d=port|s/d=ip
# See readme.txt for more information
#
# Note: IP addressess listed in this file will NOT be ignored by lfd, so they
# can still be blocked. If you do not want lfd to block an IP address you must
# add it to csf.ignore
123.123.123.124 # csf SSH installation/upgrade IP address - Wed Feb 26 13:16:28 2014
123.123.123.125 # Home IP address

DDoS Protection

From Firewall Configuration click on drop down.

connection tracking

For some level of DDoS protection, I have enabled connection tracking. By doing so I am able to limit the number of connections to the network services I want to protect. The values below are what work for my setup; you will have to play around to find what works best for you.

CT_LIMIT = 100

CT_BLOCK_TIME = 1800 (30 mins blocked time)

CT_PORTS = 80,993

Leaving the rest of the settings to use the default values.

ct_limit

The rest of the settings I leave up to you to change; the CSF firewall settings are very well documented. When you’re done making your edits, apply the new settings by clicking change.

apply setting changes

Command line CSF

Enable CSF

csf -e

Disable CSF

csf -x

Re-enable CSF and LFD

csf -e

Restart CSF

csf -r

Happy Fire-walling using CSF The User-friendly host-based firewall.

References:

http://forum.configserver.com/viewtopic.php?f=6&t=6968

https://www.virtualmin.com/node/13841