Logging using ELK for MuleSoft

  • April 13, 2020

What is ELK?

ELK is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.

Elasticsearch is an open source, full-text search and analytics engine based on the Apache Lucene search engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets users visualize the data stored in Elasticsearch with charts and graphs.

Why do we need a system like ELK?

A system like ELK addresses two common needs:

  1. Log aggregation and efficient searching
  2. Generic search

And there are three main reasons to choose ELK in particular:

  1. It’s interoperable
  2. It’s open source
  3. It’s managed

How to download ELK?

The ELK components and Filebeat can be downloaded from the locations below:

Elasticsearch: https://www.elastic.co/downloads/elasticsearch

Kibana: https://www.elastic.co/downloads/kibana

Logstash: https://www.elastic.co/downloads/logstash

Filebeat: https://www.elastic.co/downloads/beats/filebeat

ELK general architecture

In general, data flows through the ELK stack as follows:

  • Filebeat polls the log file and sends the data to Logstash
  • Logstash filters and processes the data, then sends it to Elasticsearch
  • Elasticsearch stores the data in a persistent store, with indexing
  • Kibana pulls data on demand to create graphs, charts, and reports

In an enterprise with more than one server, running Logstash on every machine is impractical, as Logstash is heavy on resources. Instead, a typical ELK stack runs the lightweight Filebeat agent on each server, and each agent pushes its log data to a central Logstash.

How to integrate ELK with MuleSoft?

Let’s integrate ELK with MuleSoft:

  • Install ELK and Filebeat on your local system
  • Start Elasticsearch
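Assuming a Windows installation (the commands later in this post use .bat and .exe files), Elasticsearch is started from its installation directory:

    bin\elasticsearch.bat
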
  • Go to a browser and open http://localhost:9200; if Elasticsearch is running fine, you will see output like the sample below
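The exact values depend on your machine and version, but the response looks roughly like this (the name and cluster_uuid below are illustrative):

    {
      "name" : "MY-MACHINE",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "aBcDeFgHiJkLmNoPqRsTuV",
      "version" : {
        "number" : "7.6.1",
        ...
      },
      "tagline" : "You Know, for Search"
    }
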
  • Start Kibana
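Kibana is started the same way, from its own installation directory:

    bin\kibana.bat
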
  • Open Kibana in a browser (http://localhost:5601)
  • Create a Logstash configuration file (logstash-beat.conf) as shown below
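The original post shows this file as an image; a minimal sketch with the same behavior looks like this (port 5044 is the Beats default, and the index setting reproduces Filebeat's usual beat-version-date naming):

    input {
      beats {
        # Port on which Logstash listens for Filebeat connections
        port => 5044
      }
    }

    output {
      elasticsearch {
        # Elasticsearch server to which Logstash forwards the data
        hosts => ["http://localhost:9200"]
        # Produces index names such as filebeat-7.6.1-2020.03.30
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    }
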
  • The port setting in the input block specifies the port Logstash will listen on
  • The hosts setting in the output block specifies the Elasticsearch server to which Logstash forwards the data
  • Run Logstash with the configuration created earlier
    • logstash.bat -f logstash-beat.conf
  • Create a Filebeat configuration file (filebeat.yml) as shown below
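Again, the original shows this file as an image; a minimal sketch looks like this (the log path is illustrative and should point at your Mule application's log file):

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        # Log file(s) to poll; add more entries to ship additional logs
        - C:\MuleProjects\my-mule-app\logs\my-mule-app.log
      # Each Mule log entry starts with its level (INFO, WARN, ERROR, ...)
      multiline.pattern: '^(INFO|WARN|ERROR|DEBUG|TRACE)'
      # Lines NOT matching the pattern are appended to the previous entry
      multiline.negate: true
      multiline.match: after

    output.logstash:
      # Must match the port of the beats input in logstash-beat.conf
      hosts: ["localhost:5044"]
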
  • The paths setting specifies the log file(s) to poll
  • You can add more entries under paths to poll additional log files with the same Filebeat
  • The multiline.pattern setting identifies the start of each log entry
  • The multiline.negate and multiline.match settings are required so that a log entry spanning more than one line is shipped as a single event
  • Run Filebeat with the configuration created earlier
    • filebeat.exe -c filebeat.yml
  • Now go to Kibana (http://localhost:5601) -> Management -> Index pattern
  • Click on Create Index Pattern
  • You can see that a new index, filebeat-7.6.1-2020.03.30, has been created. This index name comes from the index setting in the output block of the Logstash configuration file. Select it and click on Next Step
  • Click on the dropdown, select @timestamp, and click on Create Index Pattern
  • Start the Mule application whose log file you configured in the Filebeat paths setting; its logging configuration is sketched below
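For reference, here is a minimal sketch of a Mule 4 log4j2.xml that writes such a log file (all names and paths are illustrative, not from the original post; the fileName must match the Filebeat paths entry):

    <?xml version="1.0" encoding="utf-8"?>
    <Configuration>
      <Appenders>
        <!-- Illustrative path: must match the file Filebeat polls -->
        <RollingFile name="file"
                     fileName="C:\MuleProjects\my-mule-app\logs\my-mule-app.log"
                     filePattern="C:\MuleProjects\my-mule-app\logs\my-mule-app-%i.log">
          <!-- Entries start with the level, matching the Filebeat multiline.pattern -->
          <PatternLayout pattern="%-5p %d [%t] %c: %m%n" />
          <SizeBasedTriggeringPolicy size="10 MB" />
        </RollingFile>
      </Appenders>
      <Loggers>
        <AsyncRoot level="INFO">
          <AppenderRef ref="file" />
        </AsyncRoot>
      </Loggers>
    </Configuration>
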
  • Run a few test cases so the Mule application generates log entries
  • Go to Kibana (http://localhost:5601) -> Discover
  • Select the index pattern created in the previous step
  • In the search bar you can write any suitable query expression to find specific text in the log files; an example follows below
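
For example, using Kibana's default KQL syntax (the application name is illustrative; Filebeat ships the raw log line in the message field):

    message : "ERROR" and message : "my-mule-app"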

— By Mohammad Mazhar Ansari