Graylog is a log aggregation tool. Installing and setting up Graylog is pretty straightforward. Here is a simple guide to installing Graylog on an Ubuntu server.
This setup is recommended for non-production or low-traffic environments where you don’t need any redundancy. For a production-level setup, it is recommended to have at least two load-balanced Graylog servers and multiple Elasticsearch nodes with shards and replication. That’s not in the scope of this post; I might cover it later.
This assumes that you have a server with two hard disks attached (one for the boot volume and another to store the logs). We will partition the second disk as an LVM volume.
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install -y apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
sudo apt-get update
sudo apt-get install -y mongodb-org
sudo systemctl daemon-reload
sudo systemctl enable mongod.service
sudo systemctl restart mongod.service
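If you want to confirm MongoDB came up cleanly before moving on, check the service status:
sudo systemctl status mongod.service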
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update && sudo apt-get install elasticsearch
echo "cluster.name: graylog" >> /etc/elasticsearch/elasticsearch.yml
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl restart elasticsearch.service
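Optionally, verify that Elasticsearch is running and picked up the cluster name we set above (install curl first if it’s not already present). The response should include "cluster_name" : "graylog":
curl -XGET 'http://localhost:9200/?pretty'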
wget https://packages.graylog2.org/repo/packages/graylog-2.4-repository_latest.deb
sudo dpkg -i graylog-2.4-repository_latest.deb
sudo apt-get update && sudo apt-get install graylog-server
By default, Graylog is configured with a 1 GB Java heap, which is low. You can increase it. I usually give Java half of the available RAM, i.e. if the machine has 16 GB of RAM I set a maximum of 8 GB.
/etc/default/graylog-server
Set the heap size to 4 GB by adjusting the value “-Xms1g -Xmx1g”, i.e. “-Xms4g -Xmx4g”.
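For reference, in the stock 2.4 package that file sets the heap through the GRAYLOG_SERVER_JAVA_OPTS variable; change only the -Xms/-Xmx values and keep whatever other JVM flags are already on the line:
GRAYLOG_SERVER_JAVA_OPTS="-Xms4g -Xmx4g"   # keep the existing GC flags from your file on this line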
Now update a few Graylog parameters in
 /etc/graylog/server/server.conf
password_secret = SEE BELOW
rest_listen_uri = http://0.0.0.0:9000/api/
web_listen_uri = http://0.0.0.0:9000/
root_password_sha2 = SEE BELOW
elasticsearch_shards = 1
password_secret is created using
pwgen -N 1 -s 96
root_password_sha2 is created using
echo -n MyAdminPassword | sha256sum
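If you’d rather set those two values from the shell instead of editing the file by hand, a minimal sketch (assuming the default config path and that both keys already exist in the file) would be:
# replace MyAdminPassword with your own admin password
SECRET=$(pwgen -N 1 -s 96)
HASH=$(echo -n MyAdminPassword | sha256sum | awk '{print $1}')
sudo sed -i "s|^password_secret =.*|password_secret = $SECRET|" /etc/graylog/server/server.conf
sudo sed -i "s|^root_password_sha2 =.*|root_password_sha2 = $HASH|" /etc/graylog/server/server.conf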
sudo apt install -y nginx
sudo rm -f /etc/nginx/sites-enabled/default
Create a new file with the following content at
/etc/nginx/sites-available/graylog
server {
    listen 80;
    return 301 https://$host:443$request_uri;
    error_page 502 /502.html;
    location /502.html {
        internal;
    }
}
server {
    listen 443;
    server_name graylog.mydomain.com;
    ssl on;
    ssl_certificate /etc/nginx/ca/mydomain.crt;
    ssl_certificate_key /etc/nginx/ca/mydomain.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://localhost:9000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Graylog-Server-URL https://$server_name/api;
        proxy_pass_request_headers on;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffers 4 32k;
        client_max_body_size 8m;
        client_body_buffer_size 128k;
    }
}
sudo mkdir /etc/nginx/ca
Then upload your SSL certificate and key to that location.
sudo ln -s /etc/nginx/sites-available/graylog /etc/nginx/sites-enabled/graylog
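Before restarting nginx, it’s worth validating the configuration:
sudo nginx -t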
sudo systemctl enable nginx
sudo systemctl restart nginx
sudo systemctl daemon-reload
sudo systemctl enable graylog-server.service
sudo systemctl restart graylog-server
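Graylog can take a minute or two to start; you can follow its startup in the server log (the default location for the deb package):
sudo tail -f /var/log/graylog-server/server.log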
Mounting Elasticsearch data
Now, if you do not have a secondary disk attached for Elasticsearch, you can skip this step. If you do have a secondary disk attached for storing the logs (which I prefer), we can format and mount it as an LVM volume.
This assumes that the disk is attached as /dev/xvdb. Confirm this with “lsblk”.
sudo systemctl stop graylog-server
sudo systemctl stop elasticsearch.service
sudo mkdir /root/testElastic
sudo mv /var/lib/elasticsearch/* /root/testElastic/
sudo pvcreate /dev/xvdb
sudo vgcreate volumes /dev/xvdb
sudo lvcreate --name graylog -l 100%FREE volumes
sudo mkfs.ext4 /dev/volumes/graylog
echo "/dev/volumes/graylog /var/lib/elasticsearch/ ext4 defaults 0 0" | sudo tee -a /etc/fstab
sudo mount -a
sudo mv /root/testElastic/* /var/lib/elasticsearch/
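The freshly formatted volume is mounted owned by root, so make sure the data directory ends up owned by the elasticsearch user again before starting the services:
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch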
sudo systemctl start elasticsearch.service
sudo systemctl start graylog-server
sudo rm -rf /root/testElastic/
That should do it. You now have a standalone Graylog server built from scratch. Now go ahead, log in with the admin password you just created, and start configuring Inputs. The login username is ‘admin’.