Installing Hogzilla + Snort Module

Note 1: The Snort module is still under development.

Note 2: You should install it in a segregated network. By default, the services are available without authentication.

Note 3: This guide is for Debian Linux.

Summary

  1. Preparing installation
  2. Installing Snort
  3. Installing Barnyard2-hz
  4. Installing Snorby
  5. Installing Java
  6. Installing Hadoop/HBase
  7. Installing Apache Spark
  8. Installing Hogzilla
  9. Installing Pigtail
  10. Running and Testing
  11. Configure rc.local
  12. It doesn’t work. How can I get help?

1. Preparing installation

The number of servers and the hardware configuration depend on your environment. If you have relatively large traffic to analyse (we do not yet have benchmarks to quantify this), you should consider:

  • n Snort sensors with Barnyard2-hz (collection and sending)
  • m Servers for Hadoop/HBase/Apache Spark (processing and NoSQL DB)
  • 1 Server for MySQL DB (used for Snorby)
  • 1 Server for Snorby (monitoring user interface)

Define your sensors and make sure they are connected to SPAN (mirror) ports. You can have more than one, but we recommend testing with just one first.

In this guide we are assuming:

  • 1 server for Snort/Snorby/MySQL/Barnyard2 (referred to as Server A in this guide)
  • 1 server for Hadoop/HBase/Apache Spark (referred to as Server B in this guide)

Install your favorite Linux distribution (I personally suggest Debian), or maybe some *BSD. Yes! You can use virtual machines for all servers, even for Snort, but remember that you may lose some packet-capture performance, which can be relevant in some cases.

Install some important dependencies

apt-get install vim vim-scripts

Add Hogzilla user and create the data directory

On Server B only

adduser hogzilla
mkdir /data
chown hogzilla /data

Generate RSA key for SSH authentication

On Server B only

su - hogzilla
ssh-keygen -t rsa
<press enter 3 times>
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
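
You can verify the key works before proceeding. The Hadoop and HBase start scripts used later rely on passwordless SSH to localhost, so the quick check below (assuming sshd is installed and running on Server B) should print the message without asking for a password:

ssh localhost 'echo SSH OK'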

Server A

Install some important dependencies

apt-get install vim vim-scripts

2. Installing Snort

Install dependencies for Snort compilation

apt-get install flex bison libpcap-dev libdnet-dev libdumbnet-dev rsync

Download DAQ and Snort

wget https://www.snort.org/downloads/snort/daq-2.0.6.tar.gz
wget https://www.snort.org/downloads/snort/snort-2.9.8.0.tar.gz

Compile and install DAQ

tar xvzf daq-2.0.6.tar.gz
cd daq-2.0.6
./configure --prefix=/usr/local/snort
make
mkdir /usr/local/snort
make install
ln -s /usr/local/snort/bin/daq-modules-config /usr/bin/

Compile and install Snort

tar xvzf snort-2.9.8.0.tar.gz
cd snort-2.9.8.0
./configure --prefix=/usr/local/snort --with-daq-includes=/usr/local/snort/include --with-daq-libraries=/usr/local/snort/lib --enable-sourcefire
make
make install

Copy configuration templates

mkdir /usr/local/snort/etc
rsync -avzP etc/*.conf* /usr/local/snort/etc/.
rsync -avzP etc/*.map /usr/local/snort/etc/.

Edit snort.conf

vim /usr/local/snort/etc/snort.conf

The main changes are to add or modify the following lines

include $RULE_PATH/snort.rules
var WHITE_LIST_PATH /usr/local/snort/etc/rules
var BLACK_LIST_PATH /usr/local/snort/etc/rules

and comment all lines like

# include $RULE_PATH/<SOMETHING>

You may need further adjustments to fit your environment; check the Snort documentation at https://www.snort.org/documents.
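
If you prefer to script these edits, a rough sketch follows. It assumes the stock snort.conf layout shipped with Snort 2.9.8.0; review the result before relying on it.

# Run only once: comment out the bundled rule includes...
sed -i 's|^include \$RULE_PATH/|# include $RULE_PATH/|' /usr/local/snort/etc/snort.conf
# ...then append the PulledPork-generated rules file
echo 'include $RULE_PATH/snort.rules' >> /usr/local/snort/etc/snort.conf
# RULE_PATH, WHITE_LIST_PATH and BLACK_LIST_PATH must still point at
# /usr/local/snort/etc/rules; verify with: grep _PATH /usr/local/snort/etc/snort.conf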

Create snort user

groupadd snort
useradd -g snort snort

Adjust paths

mkdir /usr/local/snort/etc/rules
mkdir /usr/local/snort/lib/snort_dynamicrules
mkdir /usr/local/snort/etc/rules/iplists
mkdir -p /usr/local/snort/var/log
touch /usr/local/snort/etc/rules/local.rules
touch /usr/local/snort/etc/rules/white_list.rules
touch /usr/local/snort/etc/rules/black_list.rules

Install PulledPork

PulledPork is a script to download and update Snort’s rules. We recommend creating an account at http://snort.org and generating your Oinkcode.

You can also subscribe to the paid rules.

Install dependencies

apt-get install libcrypt-ssleay-perl liblwp-protocol-https-perl subversion

Download source files

svn checkout http://pulledpork.googlecode.com/svn/trunk/ pulledpork-read-only

Create directories and copy files

mkdir /usr/local/pp
mkdir /usr/local/pp/etc
mkdir /usr/local/pp/bin
rsync -avzP pulledpork-read-only/etc/.  /usr/local/pp/etc/.
rsync -avzP pulledpork-read-only/pulledpork.pl /usr/local/pp/bin/.
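
Before running PulledPork, point it at your Oinkcode and rule paths. A minimal sketch, assuming the default pulledpork.conf layout (the file uses an <oinkcode> placeholder in its rule_url lines); replace YOUR_OINKCODE with the code generated at snort.org:

sed -i 's|<oinkcode>|YOUR_OINKCODE|g' /usr/local/pp/etc/pulledpork.conf
vim /usr/local/pp/etc/pulledpork.conf   # review rule_path and related paths, e.g. /usr/local/snort/etc/rules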

Run PulledPork

/usr/local/pp/bin/pulledpork.pl -c /usr/local/pp/etc/pulledpork.conf -l

You may need to specify the rules version.

/usr/local/pp/bin/pulledpork.pl -S 2.9.7.6 -c /usr/local/pp/etc/pulledpork.conf -l

Configure RSyslog to log Snort

vim /etc/rsyslog.d/snort.conf

Add lines

if $programname == 'snort' then /var/log/snort.log
& ~

Restart RSyslog

service rsyslog restart

Run Snort

/usr/local/snort/bin/snort -c /usr/local/snort/etc/snort.conf -T

Comments

If you prefer another Linux distribution, I recommend the official Setup Guides at https://www.snort.org/documents

3. Installing Barnyard2-hz

Install dependencies

apt-get install libboost-dev libboost-test-dev libboost-program-options-dev libboost-system-dev libboost-filesystem-dev libevent-dev automake libtool libtool-bin flex bison pkg-config g++ libssl-dev ant php5-dev php5-cli phpunit libglib2.0-dev libpcap-dev libmysqld-dev git

Install lib nDPI

git clone https://github.com/ntop/nDPI.git
cd nDPI
./autogen.sh
./configure
make
make install

Install thrift

wget http://www.us.apache.org/dist/thrift/0.9.3/thrift-0.9.3.tar.gz
tar xzvf thrift-0.9.3.tar.gz
cd thrift-0.9.3
./configure
make
make install

Install Barnyard2-hz

git clone https://github.com/pauloangelo/barnyard2.git
cd barnyard2
./autogen.sh
./configure --with-mysql --prefix=/usr/local/by --with-mysql-libraries=/usr/lib/x86_64-linux-gnu
make
make install

Configure rsyslogd

Add the content below into /etc/rsyslog.d/by.conf

if $programname == 'barnyard2' then /var/log/barnyard.log
& ~

Restart RSyslog

service rsyslog restart
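
Barnyard2-hz also needs a configuration file under the install prefix. make install may not create one, so a possible sketch (run from the barnyard2 source directory) is shown below; the database output line is an example that assumes the Snorby database and credentials created in the next section.

mkdir -p /usr/local/by/etc
cp etc/barnyard2.conf /usr/local/by/etc/
vim /usr/local/by/etc/barnyard2.conf
# e.g. enable a database output line such as:
# output database: log, mysql, user=snorby password=snorby123 dbname=snorby host=localhost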

Start Barnyard2-hz

/usr/local/by/bin/barnyard2 -c /usr/local/by/etc/barnyard2.conf -a /usr/local/snort/var/log/archive -f merged.log -d /usr/local/snort/var/log &

4. Installing Snorby

Snorby requires Ruby 1.9.x! It really does not run if you don’t use it! (Believe me, I tried.)

Install dependencies

apt-get install libyaml-dev git-core default-jre imagemagick libmagickwand-dev wkhtmltopdf build-essential libssl-dev zlib1g-dev linux-headers-amd64 libsqlite3-dev libxslt1-dev libxml2-dev libmysqlclient-dev libmysql++-dev apache2-prefork-dev libcurl4-openssl-dev libreadline6-dev

Remove ruby

apt-get purge ruby ruby-dev ruby-build

Install RVM

gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
curl -sSL https://get.rvm.io | bash -s stable --quiet-curl --ruby=1.9.3
source /usr/local/rvm/scripts/rvm
rvm requirements

Download Snorby

cd /var/www
git clone http://github.com/Snorby/snorby.git

Configure Snorby

cd /var/www/snorby/config
cp database.yml.example database.yml
cp snorby_config.yml.example snorby_config.yml
sed -i s/"\/usr\/local\/bin\/wkhtmltopdf"/"\/usr\/bin\/wkhtmltopdf"/g snorby_config.yml

Patch Snorby

BUG described at: https://github.com/Snorby/snorby/issues/387

In the file “Gemfile”, change

gem 'devise_cas_authenticatable',  :git => 'https://github.com/Snorby/snorby_cas_authenticatable.git'

to

gem 'devise_cas_authenticatable', '~> 1.5'

Install more dependencies

gem install net-ssh -v '2.9.2'
gem install rake --version=0.9.2

Create database

Access mysql CLI.

mysql -uroot -p

Create database and grant access.

create database snorby;
grant all on snorby.* to snorby@localhost identified by 'snorby123'; -- Choose your password here
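
A quick way to confirm the grant works (using the example password above):

mysql -u snorby -psnorby123 -e 'SHOW DATABASES;'   # should list the snorby database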

Configure Snorby

Change the database, mailing and other settings in the files below.

vim /var/www/snorby/config/database.yml
vim /var/www/snorby/config/snorby_config.yml
vim /var/www/snorby/config/initializers/mail_config.rb
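
For illustration, the snorby connection entry in database.yml ends up looking roughly like this, using the example credentials created above (adjust to your own user and password):

snorby: &snorby
  adapter: mysql
  username: snorby
  password: "snorby123"   # the password chosen in the GRANT statement above
  host: localhost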

Setup Snorby and run it

cd /var/www/snorby
bundle install
bundle exec rake snorby:setup
bundle exec rails server -e production -b 0.0.0.0

Access Snorby interface

http://SERVERIP:3000

Default user and password are

USER: snorby@example.com
PASS: snorby

References

[1] https://github.com/Snorby/snorby/issues/369
[2] https://github.com/Snorby/snorby/issues/387

Server B

5. Installing Java

Download JRE

Access

 http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

accept the license terms and download jdk-7u79-linux-x64.tar.gz.

Uncompress it in /usr/java/

tar xzvf jdk-7u79-linux-x64.tar.gz
mkdir /usr/java
mv jdk1.7.0_79/ /usr/java

Update the paths and binary links. On Debian you can use update-alternatives:

update-alternatives --install /usr/bin/java java    /usr/java/jdk1.7.0_79/bin/java  2
update-alternatives --install /usr/bin/javac javac  /usr/java/jdk1.7.0_79/bin/javac 2
update-alternatives --install /usr/bin/jar jar      /usr/java/jdk1.7.0_79/bin/jar   2
update-alternatives --set java  /usr/java/jdk1.7.0_79/bin/java
update-alternatives --set javac /usr/java/jdk1.7.0_79/bin/javac
update-alternatives --set jar   /usr/java/jdk1.7.0_79/bin/jar

Add variables to profile

echo 'export JAVA_HOME="/usr/java/jdk1.7.0_79"' >> /etc/profile
echo 'export PATH="$PATH:$JAVA_HOME/bin"' >> /etc/profile
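
A quick check that the new profile takes effect (assuming the paths above):

source /etc/profile
java -version    # should report version 1.7.0_79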

6. Installing Hadoop/HBase

Throughout this step you should use the hogzilla user.

Download Hadoop and HBase

mkdir /home/hogzilla/app
cd /home/hogzilla/app
wget 'http://www.us.apache.org/dist/hadoop/common/stable/hadoop-2.7.1.tar.gz'
tar xzvf hadoop-2.7.1.tar.gz
mv hadoop-2.7.1 /home/hogzilla/hadoop

wget -c 'http://www.us.apache.org/dist/hbase/stable/hbase-1.1.2-bin.tar.gz'
tar xzvf hbase-1.1.2-bin.tar.gz
mv hbase-1.1.2 /home/hogzilla/hbase

Add some variables in ~/.bashrc

echo 'export HADOOP_HOME=/home/hogzilla/hadoop' >> ~/.bashrc
echo 'export HADOOP_MAPRED_HOME=$HADOOP_HOME' >> ~/.bashrc
echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> ~/.bashrc
echo 'export HADOOP_HDFS_HOME=$HADOOP_HOME' >> ~/.bashrc
echo 'export YARN_HOME=$HADOOP_HOME' >> ~/.bashrc
echo 'export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native' >> ~/.bashrc
echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin' >> ~/.bashrc
echo 'export HADOOP_INSTALL=$HADOOP_HOME' >> ~/.bashrc
echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"' >> ~/.bashrc
echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop' >> ~/.bashrc
source ~/.bashrc
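
You can confirm the environment is in place (assuming JAVA_HOME from /etc/profile is already exported in your session):

echo $HADOOP_HOME
hadoop version    # should print Hadoop 2.7.1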

Configure Hadoop

cd $HADOOP_HOME/etc/hadoop
echo 'export JAVA_HOME=/usr/java/jdk1.7.0_79/' >> hadoop-env.sh

cp -i core-site.xml core-site.xml-original
cp -i hdfs-site.xml hdfs-site.xml-original
cp -i yarn-site.xml yarn-site.xml-original
cp -i mapred-site.xml.template mapred-site.xml

Put the lines below inside “configuration” tags in core-site.xml

   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
   </property>

Put the lines below inside “configuration” tags in hdfs-site.xml

   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>
   <property>
      <name>dfs.name.dir</name>
      <value>file:///data/hdfs/namenode</value>
   </property>
   <property>
      <name>dfs.data.dir</name>
      <value>file:///data/hdfs/datanode</value>
   </property>

Put the lines below inside “configuration” tags in yarn-site.xml

   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>

Put the lines below inside “configuration” tags in mapred-site.xml

   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>

Initialize HDFS and start Hadoop

hdfs namenode -format
start-dfs.sh
start-yarn.sh
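
A quick sanity check: jps (shipped with the JDK installed earlier) should list the NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager processes, and a simple HDFS listing should return without errors.

jps
hadoop fs -ls /    # the listing may be empty, but the command should not fail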

Configure HBase

cd /home/hogzilla/hbase/conf
cp -i hbase-env.sh hbase-env.sh-original
cp -i hbase-site.xml hbase-site.xml-original
echo 'export JAVA_HOME=/usr/java/jdk1.7.0_79/' >> hbase-env.sh

Put the lines below inside “configuration” tags in hbase-site.xml

<property>
    <name>zookeeper.znode.rootserver</name>
    <value>localhost</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
    <!-- <name>hbase.regionserver.lease.period</name> -->
    <name>hbase.client.scanner.timeout.period</name>
    <value>900000</value> <!-- 900 000, 15 minutes -->
</property>
<property>
    <name>hbase.rpc.timeout</name>
    <value>900000</value> <!-- 15 minutes -->
</property>
<property>
    <name>hbase.thrift.connection.max-idletime</name>
    <value>1800000</value>
</property>

Start HBase

cd /home/hogzilla/hbase
./bin/start-hbase.sh
./bin/hbase-daemon.sh start thrift
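
To confirm HBase and the Thrift server came up, jps should now also list HMaster, HRegionServer and (for the bundled ZooKeeper) HQuorumPeer, plus ThriftServer. The Thrift service listens on port 9090 by default, which Pigtail will use later (netstat requires the net-tools package):

jps
netstat -tlnp | grep 9090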

Create Hogzilla tables in HBase

./bin/hbase shell

Inside HBase Shell

create 'hogzilla_flows','flow','event'
create 'hogzilla_events','event'
create 'hogzilla_sensor','sensor'
create 'hogzilla_signatures','signature'
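
Still inside the HBase shell, you can confirm the four tables were created and then leave the shell:

list
exit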

More variables in ~/.bashrc

echo 'export CLASSPATH=$CLASSPATH:/home/hogzilla/hbase/lib/*' >> ~/.bashrc
source ~/.bashrc

7. Installing Apache Spark

Throughout this step you should use the hogzilla user.

Download Apache Spark

Take a look at http://spark.apache.org/downloads.html to see if you are downloading the latest version.

cd /home/hogzilla/app
wget http://mirror.nbtelecom.com.br/apache/spark/spark-1.6.0/spark-1.6.0-bin-hadoop2.6.tgz
tar xzvf spark-1.6.0-bin-hadoop2.6.tgz
mv spark-1.6.0-bin-hadoop2.6 /home/hogzilla/spark

Configure Apache Spark

cd /home/hogzilla/spark/conf
cp spark-env.sh.template spark-env.sh
echo 'SPARK_DRIVER_MEMORY=1G' >> spark-env.sh

Start Apache Spark

cd /home/hogzilla
./spark/sbin/start-master.sh
./spark/sbin/start-slaves.sh
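
To confirm the standalone cluster is up, jps should list Master and Worker processes; the master also publishes a status page, by default on port 8080:

jps
# http://serverIP:8080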

References

[1] http://spark.apache.org/docs/latest/spark-standalone.html

8. Installing Hogzilla

Throughout this step you should use the hogzilla user.

Download Hogzilla

cd /home/hogzilla
wget http://ids-hogzilla.org/downloads/Hogzilla-v0.5.1-alpha.jar
mv Hogzilla-v0.5.1-alpha.jar Hogzilla.jar

Create and run a script

Put the content below in /home/hogzilla/hogzilla.sh

#!/bin/bash

HBASE_PATH=/home/hogzilla/hbase
HBASE_VERSION="1.1.2"

# You can change the values of --num-executors, --driver-memory,
# --executor-memory and --executor-cores according to your resources.
while : ; do
    /home/hogzilla/spark/bin/spark-submit \
        --master yarn-cluster \
        --num-executors 2 \
        --driver-memory 712m \
        --executor-memory 712m \
        --executor-cores 2 \
        --jars $HBASE_PATH/lib/hbase-annotations-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-annotations-$HBASE_VERSION-tests.jar,$HBASE_PATH/lib/hbase-client-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-common-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-common-$HBASE_VERSION-tests.jar,$HBASE_PATH/lib/hbase-examples-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-hadoop2-compat-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-hadoop-compat-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-it-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-it-$HBASE_VERSION-tests.jar,$HBASE_PATH/lib/hbase-prefix-tree-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-procedure-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-protocol-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-rest-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-server-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-server-$HBASE_VERSION-tests.jar,$HBASE_PATH/lib/hbase-shell-$HBASE_VERSION.jar,$HBASE_PATH/lib/hbase-thrift-$HBASE_VERSION.jar,$HBASE_PATH/lib/htrace-core-3.1.0-incubating.jar,$HBASE_PATH/lib/guava-12.0.1.jar \
        --driver-class-path $HBASE_PATH/conf/ \
        --class Hogzilla \
        /home/hogzilla/Hogzilla.jar &> /tmp/hogzilla.log

    sleep 600

    rm -rf /tmp/hadoop-hogzilla*
done

Run it

chmod 755 hogzilla.sh
./hogzilla.sh &
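
To verify that the job reached the cluster, the YARN application list should show a RUNNING Spark application, and the spark-submit output from the loop is written to /tmp/hogzilla.log:

yarn application -list
tail -f /tmp/hogzilla.log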

9. Installing Pigtail

As user root.

Make sure Thrift is installed (see above).

Download Pigtail

mkdir /root/app
cd /root/app
apt-get install git
git clone https://github.com/pauloangelo/pigtail.git
mv pigtail/pigtail.php /root
mkdir /usr/lib/php/Thrift/Packages/
mv pigtail/gen-php/Hbase/  /usr/lib/php/Thrift/Packages/

apt-get install php5-mysql
cd /root

Change configuration

Change MySQL db-name, host, user and password.

vim pigtail.php

Verify MySQL access

In MySQL, you need to execute the following command as user root. (Change SERVER and PASSWORD.)

grant all privileges on snorby.* to 'snorby'@'SERVER' identified by 'PASSWORD';

In /etc/mysql/my.cnf, MySQL must listen on an interface reachable from Server B, for example

bind-address            = 0.0.0.0

You will need to restart MySQL if you change my.cnf

service mysql restart
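
From Server B (where Pigtail runs), a quick connectivity test, assuming the mysql client is installed there; replace SERVER and PASSWORD as above:

mysql -h SERVER -u snorby -pPASSWORD -e 'SELECT 1' snorby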

Run Pigtail

php ./pigtail.php >&/dev/null&

10. Running and Testing

[to be completed]

  • Snort

-> Run

/usr/local/snort/bin/snort -U -D -i eth1 -u snort -g snort -c /usr/local/snort/etc/snort.conf -l /usr/local/snort/var/log/  &

-> Wait a few minutes

-> Check logs

less /var/log/snort.log

-> Check the Unified2 file

ls -la /usr/local/snort/var/log/merged*

  • Barnyard2-hz

-> Run

/usr/local/by/bin/barnyard2 -c /usr/local/by/etc/barnyard2.conf -a /usr/local/snort/var/log/archive -f merged.log -d /usr/local/snort/var/log &

-> Check /var/log/barnyard.log

less /var/log/barnyard.log

  • Java

-> Run

java -version

  • Hadoop

-> Run

start-dfs.sh
start-yarn.sh

-> List directory

hadoop fs -ls /hbase

-> Check GUI

HDFS NameNode web UI at http://serverIP:50070
YARN Cluster Applications at http://serverIP:8088

  • HBase

-> Check GUI

Check URL http://serverIP:16010

  • Apache Spark -> Try the standalone mode

    su - hogzilla
    ./spark/bin/spark-shell
    
  • Hogzilla

-> Check process execution and logs in the Hadoop interface (above)

-> Check generated events in HBase

su - hogzilla
./hbase/bin/hbase shell
count 'hogzilla_events'

  • Pigtail -> Activate the DEBUG variable inside the PHP script and run it if needed.

11. Configure rc.local

To run after reboot, put the following lines in your /etc/rc.local:

In Server A

/usr/local/snort/bin/snort -U -D -i eth1 -u snort -g snort -c /usr/local/snort/etc/snort.conf -l /usr/local/snort/var/log/  &
/usr/local/by/bin/barnyard2 -c /usr/local/by/etc/barnyard2.conf -a /usr/local/snort/var/log/archive -f merged.log -d /usr/local/snort/var/log &
( cd /var/www/snorby/ ; bundle exec rails server -e production -b 0.0.0.0 >&/dev/null ) &

In Server B

# Start hadoop
su hogzilla -c "/home/hogzilla/hadoop/sbin/start-dfs.sh"
su hogzilla -c "/home/hogzilla/hadoop/sbin/start-yarn.sh"
# Start HBase
su hogzilla -c "/home/hogzilla/hbase/bin/start-hbase.sh"
su hogzilla -c "/home/hogzilla/hbase/bin/hbase-daemon.sh start thrift"
# Start Apache Spark
su hogzilla -c "/home/hogzilla/spark/sbin/start-master.sh"
su hogzilla -c "/home/hogzilla/spark/sbin/start-slaves.sh"
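
Debian’s stock /etc/rc.local ends with an exit 0 line; keep the lines above it and make sure the script is executable:

chmod +x /etc/rc.local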

12. It doesn’t work. How can I get help?

Please help us improve this guide. Report your problems on our mailing list.