Centralized Logging with Elastic Search, Logstash & Kibana — ELK Stack

#ELKStack #CentralizedLogging #ElasticSearch #MonitoringTools

Technology, Published On: 21 January 2025

The ELK Stack, consisting of Elasticsearch, Logstash, and Kibana, is a powerful suite of tools for managing, analyzing, and visualizing log data. It has become a go-to solution for businesses of all sizes to monitor system performance, troubleshoot issues, and gain actionable insights from their data. This blog will guide you through the basics of setting up the ELK stack in your local environment, its key features, and how it can be leveraged to enhance your logging and monitoring workflows.

Prerequisites:

Before proceeding with this tutorial, you should have a basic understanding of the following:

  • Docker
  • Docker-compose

At the end of the article, we will write a simple NodeJS application to generate the logs. You won’t need any prior experience in NodeJS to follow the code.

What is Elasticsearch?

Elasticsearch is an open-source, distributed, RESTful search and analytics engine. It can be used to search, index, store, and analyze data of all shapes and sizes in near real-time. At its core, Elasticsearch is built to store and search large volumes of data, which makes it suitable for most situations where large volumes of data must be stored and processed. Apart from logging, which we will cover in this article, here are some common use cases for Elasticsearch:

  • Full-Text Search
  • Real-Time Analytics
  • Application Monitoring and Performance Metrics
  • E-Commerce and Retail
  • Recommendation Engines
  • Machine Learning
  • Data Science

The following are the different components of Elasticsearch:

  1. Node: A node is a server that stores our data in a way that it can be indexed and searched with ease.
  2. Cluster: A cluster is a group of nodes. Every node in a cluster has the same cluster name.
  3. Index: The index is the fundamental unit of storage in Elasticsearch, a logical namespace for storing data that share similar characteristics. Each index in Elasticsearch is divided into one or more shards, each of which may be replicated across multiple nodes to protect against hardware failures.
  4. Documents: Elasticsearch serializes and stores data in the form of JSON documents. A document is a set of fields, which are key-value pairs that contain your data.
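Once the cluster from the steps below is up, you can see these components for yourself through Elasticsearch's _cat APIs. The following is a minimal sketch; it assumes the ca.crt certificate and the elastic password that we obtain later in this article:

# list the nodes in the cluster
curl --cacert ca.crt -u elastic:<ELASTIC_PASSWORD> "https://localhost:9200/_cat/nodes?v"

# list the indices along with their shard and replica counts
curl --cacert ca.crt -u elastic:<ELASTIC_PASSWORD> "https://localhost:9200/_cat/indices?v"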

What is Logstash?

Logstash is an open-source data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends the data to Elasticsearch. Data can be sent to Logstash in multiple formats. You can check the full list of available input plugins here.
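If you want to get a feel for how a pipeline behaves before wiring everything together, you can run a throwaway Logstash container with an inline pipeline that reads from stdin and writes to stdout. This is only a quick sketch, assuming the same Logstash image version that we use later in this article:

docker run --rm -it docker.elastic.co/logstash/logstash:8.17.0 \
  -e 'input { stdin { } } output { stdout { } }'

Anything you type into the terminal is echoed back as a structured event, which is essentially what Logstash will later do with our application logs before forwarding them to Elasticsearch.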

What is Kibana?

Kibana is a data visualization tool that works seamlessly with Elasticsearch, enabling users to create dashboards, analyze data, and generate insights visually. A typical Kibana dashboard looks like the following:

Kibana

Setting up ELK stack:

Before we start setting up the ELK stack, I will briefly explain how we will go about it. We will start by running Elasticsearch, Logstash, and Kibana independently through docker. Then we will set up some configuration files and move a few files around. This is not the ideal way to set up the ELK stack in production or in your local environment; there are easier ways. I chose this method because I want to explain all the small steps involved in integrating these three services, which will better equip you to diagnose problems later. In short, the following steps can be automated using a single docker-compose file; I am just elaborating on them.

Step 1: Setting up Elasticsearch

Before starting Elasticsearch you might need to run the following command in the terminal:
sudo sysctl -w vm.max_map_count=262144
This command increases the maximum number of memory map areas a process can have. Processes use memory maps to map files or devices into memory, and Elasticsearch needs a large number of them to manage its indices.
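If you want to check the current value first, or make the change survive a reboot, the following sketch works on a typical Linux host (on macOS or Windows the setting has to be applied inside the Docker VM instead):

# check the current value
sysctl vm.max_map_count

# persist the setting across reboots (optional)
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf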
Next, create a docker network called elastic:
docker network create elastic

Start the Elasticsearch container using the following command:

docker run --name es01 \
  --net elastic \
  -p 9200:9200 \
  -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.17.0

Here we are passing the following parameters to docker:

  • We are passing es01 as the container name
  • We are adding the container to the elastic network
  • We are exposing port number 9200
  • We are assigning a memory limit of 1 GB

After Elasticsearch is ready, run the following command to reset the password for the elastic user:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

The above command should print some output like the following:

WARNING: Owner of file [/usr/share/elasticsearch/config/users] used to be [root], but now is [elasticsearch]
WARNING: Owner of file [/usr/share/elasticsearch/config/users_roles] used to be [root], but now is [elasticsearch]
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: <new password>

The new password is printed at the end of the output. Keep it handy; we will use it to configure Logstash.

Next, we need to get the enrollment token for Kibana. Run the following command:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana

Keep the enrollment token handy; we will use it in the next step. Similar to the above two commands, there are other scripts for interacting with Elasticsearch.
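For example, you can list the bundled tools with the command below; scripts such as elasticsearch-users and elasticsearch-certutil live in the same directory:

docker exec -it es01 ls /usr/share/elasticsearch/bin/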

Step 2: Setting up Kibana

Run the following command in a new terminal to start Kibana.

docker run --name kib01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.17.0

Here we are passing the following parameters to docker:

  • We are passing kib01 as the name of the container
  • We are adding the container to the elastic network
  • We are exposing port 5601

The above command will print the URL to access Kibana. It will look something like the following:

http://0.0.0.0:5601/?code=234301

The following screen should appear in the browser:

[Screenshot: Kibana setup screen asking for the enrollment token]

Here you can paste the enrollment token that you received in the previous step. In the next screen, Kibana will establish a connection to Elasticsearch. Then you should be able to see the following login screen:

[Screenshot: Elastic login screen]

You can use elastic as the username and the password that we reset in the previous step. You should be able to log in to Kibana now. There won’t be any data in Elasticsearch yet, so let’s add some to see how it will look in Kibana. We can add data to Elasticsearch by calling the APIs that it exposes. Elasticsearch runs over HTTPS by default, so we will need its SSL certificate to hit the API. Run the following command to copy the SSL certificate from Elasticsearch:

docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt ./ca.crt

The certificate will be copied to the directory where you run the command. Make sure curl is installed on your machine. Let’s start by creating an index called books. Run the following command in the same directory where you copied the SSL certificate, replacing <ELASTIC_PASSWORD> with the password you received in the previous step:

curl -X PUT --cacert ca.crt -u elastic:<ELASTIC_PASSWORD> https://localhost:9200/books?pretty

You should get the following output:

{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "books"
}

Next, let’s add a document to this index:

curl -X POST --cacert ca.crt -u elastic:<ELASTIC_PASSWORD> https://localhost:9200/books/_doc?pretty -H 'Content-Type: application/json' -d'
{
  "name": "Snow Crash",
  "author": "Neal Stephenson",
  "release_date": "1992-06-01",
  "page_count": 470
}'

You should get output similar to the following:

{
  "_index" : "books",
  "_id" : "UfeNepQBoM_ufgeo0J_f",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}

If you go to the index management section in Kibana, you should see the index and the document we just created.

[Screenshot: the books index and its document visible in Kibana]
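You can also verify the document from the command line by searching the index. The query below is a minimal sketch that reuses the certificate and password from earlier; the response should list the Snow Crash document under hits:

curl -X GET --cacert ca.crt -u elastic:<ELASTIC_PASSWORD> "https://localhost:9200/books/_search?q=snow&pretty"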

Step 3: Setting up Logstash

To set up Logstash, we need to create the settings and configuration files. Create a folder called logstash with the following folder structure inside it:

logstash
├── credentials
│   └── ca
├── pipeline
└── settings

Under the pipeline folder, you can create a file called logstash.config and paste the following code into it:

input {
  gelf {
    id => "my_plugin_id"
    use_udp => true
    host => "0.0.0.0"
    port_udp => 5044
    port => 5044
  }
}

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => ["https://111.11.1.1:9200"]
    user => "elastic"
    password => "<ELASTIC_PASSWORD>"
    ssl_enabled => true
    cacert => "/usr/share/logstash/certs/ca/ca.crt"
  }
}

Logstash handles data through pipelines. In the above configuration, we are telling Logstash to listen for input in the gelf format, over the UDP protocol, on port 5044. We are using the elasticsearch output plugin since we are sending the data to Elasticsearch. Let’s go over the output section:

index => "logstash-%{+YYYY.MM.dd}"

Here we are telling Logstash which index to send the logs to. The above index pattern tells Logstash to save the index by the date at which the log was generated. For example, if a log was sent on the 10th of January 2025, the log will be stored under the index logstash-2025.01.10. So we will create a new index every day (assuming we will create new logs every day).
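Once logs start flowing at the end of this article, you can confirm this pattern by listing the daily indices. A quick check, reusing the certificate and password from earlier:

curl --cacert ca.crt -u elastic:<ELASTIC_PASSWORD> "https://localhost:9200/_cat/indices/logstash-*?v"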

hosts => ["https://111.11.1.1:9200"]

Here we are passing the hosts Logstash needs to send the data to. In this tutorial, we are running a single container for Elasticsearch, so we have a single item in the array. You can use the following command to get the IP address of the Elasticsearch container:

docker inspect es01 | grep "IPAddress"

Replace 111.11.1.1 with the IP address you receive from the above command.
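If you prefer to get just the IP address without grepping through the JSON, docker inspect also accepts a format template, for example:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' es01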

user => "elastic"

We are passing the username that Logstash needs to use to access Elasticsearch.

password => "<ELASTIC_PASSWORD>"

Replace <ELASTIC_PASSWORD> with the password we received from the previous steps.

ssl_enabled => true

We are enabling SSL when connecting to Elasticsearch, since Elasticsearch runs with SSL by default.

cacert => "/usr/share/logstash/certs/ca/ca.crt"

Here, we are passing the path to the SSL certificate. This path refers to the location inside the container. You can either copy the SSL certificate that we copied in the previous step into the ca folder, or run the following command to make another copy from the Elasticsearch container and place it in the ca folder:

docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt ./logstash/credentials/ca/ca.crt

Next, create a file called logstash.yml under the logstash/settings directory. This is where we can configure the settings for Logstash. Paste the following into that file:

http.host: 0.0.0.0
node.name: logstash
xpack.monitoring.elasticsearch.hosts:
  - https://<IP ADDRESS OF ELASTICSEARCH>:9200
xpack.monitoring.enabled: false
api.auth.basic.username: elastic
api.auth.basic.password: <ELASTIC_PASSWORD>

Replace <IP ADDRESS OF ELASTICSEARCH> with the IP Address of the Elasticsearch container, and <ELASTIC_PASSWORD> with the password. From the same directory where you created the folder structure for Logstash run the following command to start Logstash:

docker run --rm -it --net elastic \
  -v ./logstash/pipeline/:/usr/share/logstash/pipeline/ \
  -v ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml \
  -v ./logstash/credentials/:/usr/share/logstash/certs \
  -p 5044:5044/udp \
  docker.elastic.co/logstash/logstash:8.17.0

Here we are passing the following:

  • We are mapping the pipeline folder we created to the pipeline folder inside the container
  • We are mapping the Logstash settings file
  • We are mapping the credentials folder so the container can read the CA certificate
  • We are publishing UDP port 5044, which the gelf input listens on

If everything ran correctly, you should see some logs like the following:

[INFO ][logstash.javapipeline][main] Pipeline started {"pipeline.id"=>"main"}
[INFO ][logstash.inputs.gelf ][main][my_plugin_id] Starting gelf listener (udp) ... {:address=>"0.0.0.0:5044"}
[INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
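Before building the NodeJS application, you can optionally smoke-test the whole Docker-to-Logstash path by starting a throwaway container with the gelf logging driver. This is just a sketch; any small image such as alpine will do:

docker run --rm --log-driver gelf --log-opt gelf-address=udp://localhost:5044 alpine echo "gelf smoke test"

If everything is wired correctly, the message should show up in the logstash index for today's date.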

Step 4: Running NodeJS using Docker

Logstash supports multiple input methods. For this tutorial, we will ship the logs emitted by a docker container running NodeJS. We will be using the following NodeJS packages: express, winston, and @elastic/ecs-winston-format.

Create a file called index.js and paste the following code into it:

var express = require("express");
var logger = require("./logger");

var app = express();

app.get("/", function (req, res) {
  logger.info('Inside hello world');
  res.send("Hello world!");
});

app.listen(3000);

var express = require("express");
var logger = require("./logger");

We are importing the ExpressJS package, along with a file called logger.js from the same directory. The logger.js file will contain utility methods for printing the logs; we will create it next.

var app = express();

We are creating a web server by initializing Express.

app.get("/", function (req, res) {
  logger.info('Inside hello world');
  res.send("Hello world!");
});

We are creating a simple GET API that returns "Hello world!". Before sending the response, we print a log saying "Inside hello world". If we see this log in Kibana, it means that the API got called.

app.listen(3000);

We are telling Express to listen on port 3000.

Next, create a file called logger.js in the same directory and paste the following into the file:

const winston = require('winston');
const { ecsFormat } = require('@elastic/ecs-winston-format');

const winstonLogger = winston.createLogger({
  format: ecsFormat(),
  transports: [
    new winston.transports.Console()
  ]
});

const logger = {
  info: (msg) => {
    winstonLogger.info(msg)
  },
  error: (msg) => {
    winstonLogger.error(msg)
  },
}

module.exports = logger;

Don’t worry if you don’t understand the complete code. This file exports functions that print the logs in the ECS (Elastic Common Schema) format so that Elasticsearch can easily read the data.

Next, create a file called package.json and paste the following code into it:

{
  "name": "example",
  "version": "1.0.0",
  "description": "NodeJS Application",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node index.js"
  },
  "license": "ISC",
  "dependencies": {
    "@elastic/ecs-winston-format": "^1.5.3",
    "express": "^4.18.2",
    "winston": "^3.17.0"
  }
}

Next, create a file called Dockerfile and paste the following content into it:

FROM node:22

WORKDIR /app

COPY package.json .

RUN npm install

COPY . ./

EXPOSE 3000

There is nothing complicated in this file. We are simply copying the above 3 files (index.js, logger.js, and package.json) into the container and installing the required packages.
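If you want to sanity-check the image on its own before adding the logging configuration, you can build and run it directly. This is a sketch that assumes you tag the image nodeapp (a name chosen only for this example):

docker build -t nodeapp .
docker run --rm -p 3000:3000 nodeapp npm run start

# in another terminal
curl http://localhost:3000/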

Finally, create a file called docker-compose.yml and paste the following code into it:

version: "3.9"

services:
  nodeapp:
    build:
      context: .
      args:
        NODE_ENV: production
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:5044"
    network_mode: host
    ports:
      - 3000:3000
    volumes:
      - ./:/app
      - /app/node_modules
    command: ["npm", "run", "start"]

If you are familiar with docker, the above will be self-explanatory. What is important here is the logging section:

logging:
  driver: gelf
  options:
    gelf-address: "udp://localhost:5044"

Docker can forward the logs that containers print to an external endpoint. In this case, we are instructing docker to send the logs in gelf format, over the UDP protocol, to localhost:5044, where Logstash is listening.
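For reference, the logging section above corresponds to the following docker run flags, assuming the nodeapp image tag from the earlier sketch:

docker run --rm --network host \
  --log-driver gelf \
  --log-opt gelf-address=udp://localhost:5044 \
  nodeapp npm run start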

You can run the following command to start the NodeJS container:

docker compose up

Once the container has started, go to http://localhost:3000 in your browser. This will trigger the following chain of events:

  1. An API request will go to the NodeJS container
  2. NodeJS will print the log “Inside hello world”
  3. Docker will receive the log and will trigger an event to Logstash
  4. Logstash will send the data to Elasticsearch

Finally, through Kibana, we can see the logs that we generated.

[Screenshot: the generated logs visible in Kibana]
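If you prefer the command line, you can also confirm that the logs reached Elasticsearch by querying the logstash indices directly. A minimal check, reusing the certificate and password from earlier; the hits should include the "Inside hello world" message emitted by the NodeJS application:

curl --cacert ca.crt -u elastic:<ELASTIC_PASSWORD> "https://localhost:9200/logstash-*/_search?q=hello&pretty"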

With this setup, any server that has access to Logstash can send its logs, and the logs will be stored centrally. You can further modify the Logstash pipeline to collect logs in Elasticsearch as per your requirements.

You can check this GitHub repo to get the code snippets that are used in this article.

Conclusion:

Centralized application logging with the ELK Stack is a game-changer for modern software systems. It streamlines the process of collecting, processing, and visualizing logs from distributed applications, providing a unified view of system health and performance. By enabling real-time monitoring, quick issue resolution, and in-depth analysis, the ELK Stack helps organizations maintain system reliability and improve user experience.


Dinesh Murali

Lead technology

Software engineer by job and adventure seeker by nature. I thrive on developing awesome applications. When not working, I love being in nature and exploring the great outdoors.
