
Install ELK on Ubuntu 18.04

#ubuntu #linux #ELK

Modified from: Bash Script to Install Elastic Search, Logstash and Kibana · GitHub

The Elastic Stack is an open-source system that combines Elasticsearch, Logstash, and Kibana.

Prerequisites

Hostname, FQDN and DNS

During the installation steps we will generate an SSL certificate and key to secure the log data transfer from the Filebeat clients to the Logstash server, so make sure your hosts file has your hostname and FQDN bound to localhost:

# Configure hostname:
cat /etc/hostname

elk

# Configure hosts file:

cat /etc/hosts

127.0.0.1      elk.t3st.org elk

# Test DNS resolution

dig elk.t3st.org +short
127.0.0.1

Install Java, Logstash, Elasticsearch and Kibana

# Install Java
sudo apt install openjdk-8-jdk -y
echo 'JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"' | sudo tee -a /etc/environment
source /etc/environment
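# Quick sanity check that the JDK is installed and on the PATH
java -version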

# Download the Logstash Debian package
sudo wget --directory-prefix=/opt/ https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0-rc2.deb
# Install the Logstash package
sudo dpkg -i /opt/logstash-6.0.0-rc2.deb

# Download the Elasticsearch Debian package
sudo wget --directory-prefix=/opt/ https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0-rc2.deb
# Install the Elasticsearch package
sudo dpkg -i /opt/elasticsearch-6.0.0-rc2.deb

# Install Kibana
sudo apt-get install apt-transport-https
sudo wget --directory-prefix=/opt/ https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-rc2-amd64.deb
sudo dpkg -i /opt/kibana-6.0.0-rc2-amd64.deb

# Start and enable the services
sudo systemctl restart logstash
sudo systemctl enable logstash
sudo systemctl restart elasticsearch
sudo systemctl enable elasticsearch
sudo systemctl restart kibana
sudo systemctl enable kibana
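
# Optionally confirm all three services are active
systemctl status logstash elasticsearch kibana --no-pager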

Elasticsearch

Now uncomment the following lines in /etc/elasticsearch/elasticsearch.yml

http.port: 9200
network.host: localhost

Start the Elasticsearch process:

sudo service elasticsearch start

and verify that it is running by making a cURL request:

curl -v http://localhost:9200

{
  "name" : "lEOavW3",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "8iVBXLx-RC2uNiGXJeWd5w",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Kibana

Then add the following lines to /etc/kibana/kibana.yml

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Now you can start the Kibana process:

sudo service kibana start

Just like with Elasticsearch, you can verify that it is running with a cURL request:

curl -v http://localhost:5601

Logstash

Backup default config:

sudo mkdir -p /opt/backups/logstash
sudo mv /etc/logstash/logstash.yml /opt/backups/logstash/logstash.yml.BAK

Get the Latest GeoIP Databases:

cd /etc/logstash/
sudo wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
sudo gunzip GeoLite2-City.mmdb.gz
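
Note: the Logstash geoip filter ships with its own bundled database, so this download is only used if you point the filter at it explicitly. A minimal sketch using the filter's standard database option; add the same line to the geoip block in the pipeline below if you want it to use this copy:

geoip {
  source => "clientip"
  target => "geoip"
  database => "/etc/logstash/GeoLite2-City.mmdb"
}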

Setup Logstash Main Config:

cat << 'EOF' | sudo tee /etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
path.logs: /var/log/logstash
EOF

Configure Logstash Application Config:

cat << 'EOF' | sudo tee /etc/logstash/conf.d/logstash-nginx-es.conf
input {
    beats {
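        # bind address of this Logstash server; adjust to your server's IP (or 0.0.0.0)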
        host => "192.168.20.15"
        port => 5400
    }
}

filter {
 grok {
   match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
   overwrite => [ "message" ]
 }
 mutate {
   convert => ["response", "integer"]
   convert => ["bytes", "integer"]
   convert => ["responsetime", "float"]
 }
 geoip {
   source => "clientip"
   target => "geoip"
   add_tag => [ "nginx-geoip" ]
 }
 date {
   match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
   remove_field => [ "timestamp" ]
 }
 useragent {
   source => "agent"
 }
}

output {
 elasticsearch {
   hosts => ["localhost:9200"]
   index => "weblogs-%{+YYYY.MM.dd}"
   document_type => "nginx_logs"
 }
 stdout { codec => rubydebug }
}
EOF
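
Before starting Logstash, you can validate the pipeline syntax; --config.test_and_exit is a standard Logstash flag:

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit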

Enable Logstash on Boot and Start Logstash:

sudo systemctl enable logstash
sudo systemctl restart logstash

Filebeat

Install Filebeat:

# Note: filebeat is not in the stock Ubuntu repositories; this assumes the
# Elastic APT repository has been configured. Alternatively, download the
# matching .deb from artifacts.elastic.co as with the other packages.
sudo apt install filebeat -y

Backup Filebeat configuration:

sudo mkdir -p /opt/backups/filebeat
sudo mv /etc/filebeat/filebeat.yml /opt/backups/filebeat/filebeat.yml.BAK

Create the Filebeat configuration, and specify the Logstash outputs:

cat << 'EOF' | sudo tee /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/*.log
  exclude_files: ['\.gz$']

output.logstash:
  hosts: ["elk.t3st.org:5400"]
EOF
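
Filebeat can check both its configuration and its connectivity to the Logstash endpoint with its built-in test subcommands:

sudo filebeat test config
sudo filebeat test output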

Note: If NGINX has been running for a while, you probably have a bunch of gzipped logs in /var/log/nginx/. To send them to Kibana as well, unzip them with gunzip and rename the results to match the *.log wildcard expression.
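
A minimal sketch, assuming standard logrotate names such as access.log.2.gz:

cd /var/log/nginx
for f in *.gz; do
  sudo gunzip "$f"                    # access.log.2.gz -> access.log.2
  sudo mv "${f%.gz}" "${f%.gz}.log"   # rename to match the *.log wildcard
done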

Enable Filebeat on Boot and Start Filebeat:

sudo systemctl enable filebeat
sudo systemctl restart filebeat

Kibana dashboard

If everything went fine, open the Kibana dashboard and create an index pattern called weblogs-*; you can do this under the Management tab. Then go to Discover to see your raw log data.
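
Before building the index pattern, you can confirm from the shell that Logstash is actually creating the daily indices:

curl 'http://localhost:9200/_cat/indices/weblogs-*?v'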

Troubleshooting:

As you can see, getting the ELK stack running means making several components play together. Here is a list of commands that can help you debug when things go wrong:

Filebeat logs:

tail -f /var/log/filebeat/filebeat

Logstash logs:

tail -f /var/log/logstash/logstash-plain.log

NGINX reverse proxy for Kibana

To expose Kibana securely, put NGINX in front of it as a TLS-terminating reverse proxy with basic authentication:

server {
    listen 443 ssl default_server;
    server_name elk.t3st.org;
    # status_zone is an NGINX Plus directive; remove it on open-source NGINX
    status_zone elk.t3st.org;

    # Basic Auth
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
    
    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /etc/ssl/t3st.org/fullchain.pem;
    ssl_certificate_key /etc/ssl/t3st.org/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;


    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;
    
    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;
    
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    
    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /etc/ssl/t3st.org/fullchain.pem;
    
    resolver 1.1.1.1;
    
    location / {
        proxy_pass http://localhost:5601; #kibana
    
        ## Add headers ##
        #  * proxy_set_header Host $host; # e.g. Host: www.example.com
        #  * X-Real-IP $remote_addr: original client IP address
        #  * proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; de-facto
        #    standard header for identifying the originating IP address of a client
        #    connecting to a web server through an HTTP proxy or a load balancer.
        #  * proxy_set_header X-Forwarded-Proto $scheme; de-facto standard header
        #    for identifying the protocol (HTTP or HTTPS) that a client used to
        #    connect to your proxy or load balancer.

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Websocket proxying - http://nginx.org/en/docs/http/websocket.html
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;

        #
        # Wide-open CORS config for nginx
        #
    
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            #
            # Custom headers and headers various browsers *should* be OK with but aren't
            #
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            #
            # Tell client that this pre-flight info is valid for 20 days
            #
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        if ($request_method = 'POST') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        }
        if ($request_method = 'GET') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        }
    
    }

}

We also need to create the basic authentication credentials NGINX uses to protect the Kibana dashboard. Create them with the htpasswd command as below.

# Install apache2-utils if you do not have so already
sudo apt-get update; sudo apt-get -y install apache2-utils 


# Replace 'armand' with the username you will log in as:
sudo htpasswd -c /etc/nginx/.kibana-user armand
# Type the desired password when prompted

# Check NGINX config and reload:
sudo nginx -t && sudo nginx -s reload
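
A quick end-to-end check of TLS, the proxy, and basic auth (using the example username from above; curl prompts for the password):

curl -I -u armand https://elk.t3st.org/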

Kibana troubleshooting

“Kibana server is not ready yet”

This is often caused by stale .kibana indices; deleting them lets Kibana recreate them (note: this removes saved Kibana objects such as visualizations and dashboards):

curl -XDELETE http://localhost:9200/.kibana    
curl -XDELETE http://localhost:9200/.kibana\*
curl -XDELETE http://localhost:9200/.kibana_2
curl -XDELETE http://localhost:9200/.kibana_1

or simply restart the Kibana service:

sudo systemctl restart kibana

Keep getting “Unable to fetch mapping. Do you have indices matching the pattern?”

This usually means no indices match the pattern yet, so check that Filebeat and Logstash are shipping data; you can also run Kibana in verbose mode for more detail:

/usr/share/kibana/bin/kibana --verbose

Other notes:

Syslog collector

As an alternative to the NGINX collector above, you can collect syslog data over the Beats protocol. Go to the Logstash configuration directory and create a new configuration file, syslog-input.conf, in the conf.d directory with the following contents:

# sudo vim /etc/logstash/conf.d/syslog-input.conf

input {
  beats {
    port => 5443
    type => syslog
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
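
The beats input above references an SSL certificate and key under /etc/logstash/ssl/ that this guide never creates. A minimal self-signed sketch, assuming the elk.t3st.org FQDN from the prerequisites:

sudo mkdir -p /etc/logstash/ssl
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -subj '/CN=elk.t3st.org' \
  -keyout /etc/logstash/ssl/logstash-forwarder.key \
  -out /etc/logstash/ssl/logstash-forwarder.crt

Each Filebeat client then needs a copy of logstash-forwarder.crt, referenced via ssl.certificate_authorities in its output.logstash section.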