Never Ending Security


OSSEC with Kibana 3 and the latest Elasticsearch and Logstash.

There are some good howtos on the internet, but some lack the correct config or are simply outdated.
I decided to rewrite the howto with a more concise config that works with Kibana 3 and the latest Elasticsearch and Logstash.

1) Install openjdk-7-jre-headless (for Debian/Ubuntu systems)
2) Install Elasticsearch and Logstash using their apt repo:
deb stable main
deb stable main
3) Grab the Kibana 3 package (I have not tested or used the 4.x branch yet) from the Kibana website and untar/unzip it somewhere in /usr/share
4) Edit /etc/elasticsearch/elasticsearch.yml and set the cluster name (cluster.name: mycluster)
5) Add the Logstash config below to /etc/logstash/conf.d/logstash.conf
6) Add the logstash user to the ossec group in /etc/group (warning: this still might not work for you; perhaps you have to run Logstash with more rights to get it working with the OSSEC logfile)
7) Restart Elasticsearch and Logstash so they connect to each other and read /var/ossec/logs/alerts/alerts.log
8) Create an Nginx vhost for Kibana (I chose Nginx since I like it better than Apache; Apache would do fine too)
9) Restart Nginx
10) Change the Kibana config.js in /usr/share/kibana3 => elasticsearch: "http://your_FQDN:80",
11) Ensure you created the htpasswd users for the basic auth (apache2-utils package, htpasswd -c, etc.)
12) Test your Kibana 3 instance at http://kibana3nginxfqdn/
13) Create some nice dashboards using the bettermap panel (use the geoip fields, e.g. geoip.country_code2, to plot the results on a world map)
14) Create some nice bars to show the users, countries and alert levels with a terms panel
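The vhost config from step 8 did not survive publication, so here is a minimal sketch of what it could look like. The paths, the server_name and the htpasswd file location are assumptions; it assumes Kibana 3 is unpacked in /usr/share/kibana3 (step 3) and that Elasticsearch listens locally on its default port 9200, with Nginx proxying the search endpoints so the browser can reach them on port 80 (as step 10 expects):

```nginx
# Hypothetical /etc/nginx/sites-available/kibana3 - adjust names and paths
server {
    listen 80;
    server_name kibana3nginxfqdn;

    # Kibana 3 is purely static files
    root  /usr/share/kibana3;
    index index.html;

    # Basic auth with the users created via htpasswd (step 11)
    auth_basic           "Kibana";
    auth_basic_user_file /etc/nginx/kibana.htpasswd;

    # Proxy the Elasticsearch endpoints Kibana 3 talks to
    location ~ ^/_aliases$ {
        proxy_pass http://127.0.0.1:9200;
    }
    location ~ ^/.*/_search$ {
        proxy_pass http://127.0.0.1:9200;
    }
    location ~ ^/kibana-int/ {
        proxy_pass http://127.0.0.1:9200;
    }
}
```

Enable the vhost and reload Nginx afterwards; without the proxy locations the browser would have to talk to port 9200 directly, which defeats the basic auth.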

Some links that might help you if this information is insufficient:

Kibana 3 bars and maps:
Or use a Bettermap to show geo-ip attacks! (span 12 + height 300px works best for me)

The logstash.conf:

input {
  file {
    type => "ossec"
    path => "/var/ossec/logs/alerts/alerts.log"
    sincedb_path => "/opt/logstash/"
    codec => multiline {
      pattern => "^\*\*"
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "ossec" {
    # Parse the header of the alert
    grok {
      # Matches e.g. 2014 Mar 08 00:57:49 (hostname) 1.2.3.4->ossec
      # (?m) fixes issues with multi-line messages
      match => ["message", "(?m)\*\* Alert %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} \(%{DATA:reporting_host}\) %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]

      # Matches 2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
      match => ["message", "(?m)\*\* Alert %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
    }

    # Attempt to parse additional data from the alert
    grok {
      match => ["remaining_message", "(?m)(Src IP: %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
    }

    geoip {
      source => "src_ip"
    }

    mutate {
      convert => [ "severity", "integer"]
      replace => [ "@message", "%{real_message}" ]
      replace => [ "@fields.hostname", "%{reporting_host}"]
      add_field => [ "@fields.product", "ossec"]
      add_field => [ "raw_message", "%{message}"]
      add_field => [ "ossec_server", "%{host}"]
      remove_field => [ "type", "syslog_program", "syslog_timestamp", "reporting_host", "message", "timestamp_seconds", "real_message", "remaining_message", "path", "host", "tags"]
    }
  }
}

output {
  elasticsearch {
    host => ""
    cluster => "mycluster"
  }
}

PS. It seems the logstash user cannot access the /var/ossec/logs/alerts/alerts.log file, which is normal as the directory and files are restricted to the ossec user and group.
I've added the logstash user to the ossec group in /etc/group, but even after a restart of the Logstash service nothing is indexed.
When running Logstash as root it all works, but that is not secure and not the correct way of solving this.
I still need to fix that and will add the fix to this blog post.
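To see the restriction outside of Logstash, you can recreate it on a scratch file; the mode here is an assumption (OSSEC typically keeps alerts.log at 640, owned by ossec:ossec), and the file content is made up:

```shell
# OSSEC restricts alerts.log to the ossec user and group, typically mode 640,
# so any account outside that group gets "Permission denied" on read.
# Recreate the same mode on a scratch file to see what logstash is up against:
f=$(mktemp)
printf '** Alert 1394236669.1: - syslog,\n' > "$f"
chmod 640 "$f"

# stat -c '%a' prints the octal permission bits: owner rw, group r, other none
stat -c '%a' "$f"    # -> 640
rm -f "$f"
```

This is also why adding logstash to the ossec group in /etc/group looks like the right fix: group read is allowed, only "other" is shut out.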
PPS. I found the reason why it does not work: running '/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log' by hand works.
It is related to the init script, which chroots the user and its process; the secondary groups are dropped, so the user is never in the ossec group and thus has no permission to read the file.
See also:

The fix is changing the /etc/init.d/logstash init script as follows:

The chroot line in the init script needs to be preceded by something like this (the SUPP_GROUP_STR assignment was missing from the original snippet; its name and the --groups format are what the chroot line below expects):

SUPP_GROUPS=$(groups $LS_USER | cut -d " " -f 4- | tr " " ",")
if [ ! -z ${SUPP_GROUPS} ] ; then
  SUPP_GROUP_STR="--groups ${SUPP_GROUPS}"
fi

and then modify the beginning of the chroot line:

nice -n ${LS_NICE} chroot ${SUPP_GROUP_STR} --userspec=$LS_USER:$LS_GROUP / sh -c "
... etc ...
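The SUPP_GROUPS pipeline can be checked against a canned groups(1) line; the user and group names below are simulated, standing in for a logstash user that was added to the ossec group:

```shell
# groups(1) prints "<user> : <primary> <supp1> <supp2> ...", so fields 4
# onward (split on single spaces) are the supplementary groups.
groups_output='logstash : logstash adm ossec'

# Join the supplementary groups with commas, the format chroot --groups wants:
SUPP_GROUPS=$(echo "$groups_output" | cut -d ' ' -f 4- | tr ' ' ',')
echo "$SUPP_GROUPS"    # -> adm,ossec
```

With that list passed via --groups, the chrooted process keeps its supplementary groups, including ossec, and can read the alerts file again.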
