
ELK Installation | Configuration


How to remove the ELK stack from the server?

This section of the guide describes the procedure for removing the various components of the ELK stack.

  1. Elasticsearch

The following steps remove Elasticsearch completely from the server.
  • Check the Elasticsearch status
The command below checks the status of the Elasticsearch service.
service elasticsearch status
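The status check above is only the first step. A minimal removal sketch, assuming Elasticsearch was installed from the official .deb package as in the script below (the data and configuration paths are the package defaults and may differ on your system):

# Stop the service before removing the package
sudo service elasticsearch stop
# Purge the package along with its configuration files
sudo apt-get -y purge elasticsearch
# Optionally remove leftover data and configuration directories
sudo rm -rf /var/lib/elasticsearch /etc/elasticsearch

The same pattern (stop, purge, remove leftovers) applies to the logstash and kibana packages.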

How to install ELK on the server?

Script for installing the ELK stack

The ELK components used in this training are installed with the help of the following bash script.


#!/bin/bash
# Name  : Bakul Gupta
# Email : bakulgupta11@gmail.com (github.com/bullhacks3)
red=`tput setaf 1`
green=`tput setaf 2`
yellow=`tput setaf 3`
blue=`tput setaf 6`
grey=`tput setaf 8`
reset=`tput op`
function help() {
echo -e "\n ${yellow} This script is used to install the ELK components on the${red} debian${yellow} instance/machine provided ${reset}"
echo "${grey} #######################Arguments in the script########################################### ${reset}"
echo "${red} -h|--help ${blue}| ${green} to see various arguments available in the script ${reset}"
echo "${red} -ev|--e-version ${blue}| ${green} Elastic-Search version to be installed in the system (Example:- 6.6.0) ${reset}"
echo "${red} -lv|--l-version ${blue}| ${green} Logstash version to be installed in the system (Example:- 6.6.0) ${reset}"
}
function exiting() {
help
echo "${red} Invalid arguments provided, exiting the script ${reset}"
exit
}
function elastic_search_version {
echo "${red} Elastic Search version not provided in the script, exiting the script ${reset}"
help
exit
}
function logstash_version {
echo "${red} Logstash version not provided as an argument in the script, exiting the script ${reset}"
help
exit
}
### Script to take positional arguments
while [[ $# -gt 0 ]]
do
par=$1
case ${par} in
-h|--help)
help
exit
;;
-ev|--e-version)
shift
if [[ $1 ]]
then
e_version=$1
#echo "Elastic Version To Be Installed :-${green} ${e_version} ${reset}"
shift
else
elastic_search_version
fi
;;
-lv|--l-version)
shift
if [[ $1 ]]
then
l_version=$1
#echo "Logstash version To Be Installed :- ${green} ${l_version} ${reset}"
shift
else
logstash_version
fi
;;
*)
exiting
break
;;
esac
done
function check() {
if [[ -z ${e_version} ]]
then
read -p "Enter the elastic search version to be installed :- " e_version
fi
if [[ -z ${l_version} ]]
then
read -p "Enter the logstash version to be installed :- " l_version
fi
}
check
function install() {
echo "${blue}The elastic search version to be installed is :- ${green}${e_version}${reset}"
echo "${blue}The logstash version to be installed is :- ${green}${l_version}${reset}"
#read -p "Do u want to install,with the version provided (Y/N)" answer
#while ${answer} != 'Yes'
}
install
function java_installation() {
check_java=$(type -P java >/dev/null 2>&1)
if [[ $? -eq 0 ]];
then
java_version=$(java -version 2>&1 | grep -i version | cut -d '"' -f2)
echo "${red} Java version ${green}${java_version}${red} already present in the system ${reset}"
else
echo "${green} ####### Installing Java ######## ${reset}"
### Java repository (legacy Oracle Java 8 PPA, kept for reference) ###
# sudo add-apt-repository -y ppa:webupd8team/java
### Updating the package lists of the instance ###
sudo apt-get -y update
### Installing the default JDK ###
sudo apt-get install -y default-jdk
# sudo apt-get -y install oracle-java8-installer
fi
}
function elasticsearch() {
check_elasticsearch=$(dpkg -s elasticsearch >/dev/null 2>&1)
if [[ $? -eq 0 ]];
then
eversion=$(dpkg -s elasticsearch | grep -i version | awk '{print $2}')
echo "${red} Elastic Search Version ${green}${eversion}${red} Already Present ${reset}"
java_installation
else
echo "${green} Elastic Search Installation As a service ${reset}"
echo "${red} Java is installing ${reset}"
java_installation
### Installing the requested elasticsearch version
elasticsearchVersion=${e_version}
### Download the debian package for elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${elasticsearchVersion}.deb
### Install the debian package
sudo dpkg -i elasticsearch-${elasticsearchVersion}.deb
### Delete the downloaded debian file
rm -rf elasticsearch-${elasticsearchVersion}.deb
fi
}
elasticsearch
function logstash() {
check_logstash=$(dpkg -s logstash >/dev/null 2>&1)
if [[ $? -eq 0 ]];
then
lversion=$(dpkg -s logstash | grep -i version | awk '{print $2}')
echo "${red} Logstash Version ${green}${lversion}${red} Already Present ${reset}"
else
echo "${green} Logstash Installation As a service ${reset}"
### Installing the logstash version
logstashVersion=${l_version}
### Download the logstash package
wget https://artifacts.elastic.co/downloads/logstash/logstash-${logstashVersion}.deb
### Install the logstash debian package
sudo dpkg -i logstash-${logstashVersion}.deb
### Delete the downloaded logstash package
rm -rf logstash-${logstashVersion}.deb
fi
}
logstash
function kibana() {
echo "${green} Downloading kibana on the system ${reset}"
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.8.0-amd64.deb
echo "${green} Installing kibana on the machine ${reset}"
sudo dpkg -i kibana-6.8.0-amd64.deb
# Delete the kibana package
sudo rm -rf kibana-6.8.0-amd64.deb
}
#kibana


Elasticsearch Installation Using Script



  • Reference to the script help
bash elk.sh --help or bash elk.sh -h

  • Script parameters to download the Elasticsearch of a particular version
bash elk.sh -ev 6.8.0 or bash elk.sh --e-version 6.8.0

Logstash Installation Using Script

  • Download and install the logstash   
bash elk.sh -lv 6.8.0 or bash elk.sh --l-version 6.8.0

Kibana Installation Using Script

Follow these steps:
## Uncomment the last line of the script
vim elk.sh
# Before editing
#kibana
# After uncommenting
kibana

# Run the bash script
bash elk.sh

Elasticsearch

Configuration

  1. elasticsearch.yml

  • Used to configure the Elasticsearch node
  • Default location: /etc/elasticsearch/elasticsearch.yml
Attribute Name | Description
cluster.name | Name of the Elasticsearch cluster; all nodes in the cluster must have the same cluster name
node.name | Name of the individual node, useful for identifying the node's function
path.data | Location where the Elasticsearch data is stored
path.logs | Location of the Elasticsearch log files, used for debugging
network.host | IP of the host machine on which Elasticsearch is installed
http.port | Port used for Elasticsearch API communication (default: 9200)
node.master | If true, this node is eligible to be elected as the master node
node.data | If true, this node can hold data (shards)
node.ingest | If true, this node can run ingest pipelines
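A minimal elasticsearch.yml sketch pulling these attributes together (the values shown are illustrative, not defaults):

cluster.name: training-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200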

Elasticsearch API

The Elasticsearch API provides a medium to interact directly with Elasticsearch and gives the flexibility to perform creation, deletion, and configuration of indexes.
API can be used to gather information about:
  • Nodes in the cluster
  • Role of the nodes
  • State of the cluster
  • Number of indexes in the cluster
  • Utilization of the heap in the cluster etc.
  1. curl localhost:9200

This will provide information about the Elasticsearch instance, such as:
  • ElasticSearch Version Running
  • ElasticSearch Node_Name
  • ElasticSearch Cluster_Name
  2. curl localhost:9200/_cat/health

Checks the health of the cluster.
The state of the cluster can be one of:
Health | Primary Shards | Replicas
GREEN | assigned | assigned
YELLOW | assigned | not assigned
RED | not assigned | not assigned

  3. curl localhost:9200/_cat/nodes

Basically, a node's role can be of three types:
  1. Master Node
  2. Data Node
  3. Ingest Node
The settings for the node roles are defined in elasticsearch.yml (/etc/elasticsearch/elasticsearch.yml).
### Default Values for a node in a cluster
node.master: true
node.data: true
node.ingest: true     
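Changing these flags yields dedicated node types. For example, a data-only node (an illustrative combination, set in the same elasticsearch.yml):

### Example: dedicated data-only node
node.master: false
node.data: true
node.ingest: false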

Kibana

  • Visualization tool

Configurations

kibana.yml

Default Location of the configuration file: /etc/kibana/kibana.yml

Attribute | Description
server.port | Port for the Kibana UI (default: 5601)
server.host | IP of the host machine on which Kibana is installed
elasticsearch.hosts | URL(s) of the Elasticsearch instance(s) that Kibana connects to
logging.dest | Destination where the Kibana logs are stored, for debugging purposes

Steps required to configure kibana logs
# Create a new directory
mkdir /var/log/kibana
# Change the ownership of the directory to kibana user
chown -R kibana:kibana /var/log/kibana
# Start the kibana service
service kibana start
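For the logs to land in the new directory, logging.dest must also point there before Kibana is (re)started. A minimal kibana.yml sketch (the server.host value is illustrative):

# /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
logging.dest: /var/log/kibana/kibana.log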

Interface

The Kibana interface is divided into four main sections:
  • Discover
  • Visualize
  • Dashboard
  • Management
Discover

This option of Kibana allows viewing the available index patterns and the logs in each index pattern.

Visualize

It helps to visualize the data from the logs in the form of graphs, charts, etc.
The visualization types available in Kibana include Area, Line, Pie Chart, Timelion, Data Table, and Gauge; each visualization suits a particular use case.

Dashboards

Kibana dashboards can be used to view more than one visualization at a time.

Management

This tab is used for Elasticsearch index management, as well as for the creation of the index patterns used in Kibana (i.e. the Discover tab).

Visualizations

For making any visualization, two concepts are always taken into consideration:
  1. Metric Aggregations
  2. Bucket Aggregations
Aggregation simply means the grouping of documents.
  1. Bucket aggregations group the documents on the basis of some property or field (e.g. IP).
  2. Metric aggregations compute a metric (e.g. count) over each bucket.

Timelion

  • Used for visualizing time-based series data
  • Uses its own expression syntax to select fields from the logs and plot them over time

TagCloud

It is used to display the values of a field in the form of tags; larger text implies a higher count.
Metric: Count
Bucket Aggregation: Terms on field destination.ip.keyword

DataTable


It is used to depict information in the form of a table.
Here the metric is Count and the bucket aggregation is of Terms type on the field attack.type.

Gauge

This type of visualization is used to keep track of the counts of various events.

Configurations:
Metric: Count
Bucket: Terms on field Type.keyword

Logstash

Configurations

File | Description
conf.d | Directory containing the configurations for the various pipelines
logstash.yml | File used for Logstash configuration
pipelines.yml | Used for pipeline configuration
jvm.options | Specifies the options for the Java virtual machine
  1. logstash.yml

   Default value:-
config.reload.automatic: false
After modification:
config.reload.automatic: true
2. pipelines.yml
 This file is used for defining the pipeline configurations for Logstash.
 Note:- The value of pipeline.id must be unique; otherwise Logstash will throw an error.
3. jvm.options
 This file holds Java-related settings and is mostly intended for expert use. In most cases the only initial change needed is the heap size, which in the worst case should be half of the machine's RAM.
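For example, to set a 4 GB heap on a machine with 8 GB of RAM, the two heap flags in jvm.options would be changed as follows (the 4g value is illustrative; keep -Xms and -Xmx equal):

-Xms4g
-Xmx4g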

Writing the First Pipeline

Procedure

STEP 1: Make a new directory in /etc/logstash/conf.d
mkdir /etc/logstash/conf.d/Test
STEP 2: Make three files inside the directory: 1-input.conf, 2-filter.conf, 3-output.conf
touch 1-input.conf 2-filter.conf 3-output.conf
STEP 3: Write the following configurations for the three files
  1. 1-input.conf
input {
tcp {
port => 3333
}
}

The purpose of this file is to specify the source of input, i.e. where the raw logs are ingested into the server. Here the pipeline is listening for any logs on port 3333.
  2. 2-filter.conf
This file is used for the modification of the logs, i.e. addition, removal, normalization, and parsing of the fields from the logs.
filter {
}

  3. 3-output.conf
Used to specify where to send the data, i.e. Elasticsearch.
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "anything"
}
}
Make a new pipeline entry for this configuration in the pipelines.yml file:
vim /etc/logstash/pipelines.yml
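A minimal entry for this pipeline (the id test and the glob path are illustrative; as noted earlier, pipeline.id must be unique):

- pipeline.id: test
  path.config: "/etc/logstash/conf.d/Test/*.conf"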
Now send logs manually to port 3333
nc localhost 3333 [ for a TCP port ]
nc -u localhost 3333 [ for a UDP port ]
nc localhost 3333
hello how r u ?
Hi, I am fine
What is ur fvorite color ?
Yellow
hello
heelo tub
hello note
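To verify that the messages reached Elasticsearch, list the indices; the index named anything comes from the 3-output.conf above:

curl localhost:9200/_cat/indices?v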

Parsing Logs



CEF Format



The CEF codec is used to parse logs in CEF (Common Event Format).


CEF LOG :
<135>2019-08-06T11:07:11.524+07:00 10.101.101.2 CEF:0|McAfee|ESM|10.3.1|326-5000111|Web Filtering Request|1|start=1563852901000 end=1563852982000 rt=1563853322000 cnt=6 eventId=65279363769 nitroUniqueId=65279363769 deviceExternalId=Web Gateway deviceTranslatedAddress=10.101.101.4 externalId=44208297106 cat=Misc Application Event nitroNormID=1343225856 act=alert deviceDirection=0 dst=49.231.63.145 dpt=443 src=10.104.102.236 nitroTrust=2 nitroAppID=Facebook shost=scontent.fbkk8-3.fna.fbcdn.net nitroMethod=CONNECT nitroURL=https://scontent.fbkk8-3.fna.fbcdn.net nitroResponse_Code=0 nitroReputation_Name=Minimal Risk nitroFile_Type=application/x-empty nitroCategory=Content Server, Social Networking
  1. 1-input.conf
input {
tcp {
port => 3001
codec => cef {}
}
}

2-filter.conf
filter {
}
3-output.conf
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "anything"
}
}

 Send the log manually using the nc command
 See the output of the parsed log
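A sketch of sending the sample CEF log to this pipeline, assuming the log line has been saved to a file (the name cef_sample.log is illustrative) and the pipeline is listening on port 3001 as configured above:

nc localhost 3001 < cef_sample.log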

KV FILTER

This filter automatically splits a log into fields on the basis of keys and values.
Two important parameters :
  1. field_split
The default value of field_split is a space; one can change the value on the basis of requirements.
  2. value_split
 The default value of value_split is "=".
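A minimal kv filter sketch with custom separators, e.g. for query-string style logs such as fname=bakul&lname=gupta (the separator values shown are illustrative):

filter {
kv {
field_split => "&"
value_split => "="
}
}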

  1. 1-input.conf
input {
tcp {
port => 3333
}
}

2-filter.conf
filter {
kv { }
}
3-output.conf
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "anything"
}
}

If we send the following log to this configuration:
fname=bakul lname=gupta city=Bareilly
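With the default separators, the kv filter would produce fields along these lines (a sketch of the resulting event fields, not verbatim Logstash output):

fname => "bakul"
lname => "gupta"
city => "Bareilly"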

CSV FILTER

This filter is used to separate values on the basis of commas by default.
The following options can be used with the csv filter:
  1. columns : Specifies the names of the columns in the raw log; if the raw log contains more columns than named, they are named automatically. One can control this behavior using the autogenerate_column_names option.
filter {
csv {
columns => ["fname","lname","city"]
}
}
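If we send a comma-separated line such as bakul,gupta,Bareilly to this configuration, each value maps to the corresponding column name (a sketch of the resulting fields, not verbatim output):

fname => "bakul"
lname => "gupta"
city => "Bareilly"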
