The ELK Stack (Elasticsearch, Logstash, Kibana) is a powerful log management and analytics platform. This guide covers configuration for all three components plus Beats for data collection.
| Component | File/Directory | Path | Purpose |
|---|---|---|---|
| Elasticsearch | Main config | /etc/elasticsearch/elasticsearch.yml | Core Elasticsearch settings |
| Elasticsearch | JVM options | /etc/elasticsearch/jvm.options | JVM memory settings |
| Elasticsearch | Data directory | /var/lib/elasticsearch/ | Index and shard storage |
| Elasticsearch | Logs | /var/log/elasticsearch/ | Elasticsearch logs |
| Logstash | Main config | /etc/logstash/logstash.yml | Logstash settings |
| Logstash | Pipelines | /etc/logstash/pipelines.yml | Pipeline definitions |
| Logstash | Pipeline configs | /etc/logstash/conf.d/ | Pipeline configuration files |
| Logstash | Templates | /etc/logstash/templates/ | Event templates |
| Logstash | Patterns | /etc/logstash/patterns/ | Grok patterns |
| Kibana | Main config | /etc/kibana/kibana.yml | Kibana settings |
| Kibana | Dashboards | /usr/share/kibana/data/ | Saved objects |
| Beats | Filebeat | /etc/filebeat/filebeat.yml | Filebeat configuration |
| Beats | Metricbeat | /etc/metricbeat/metricbeat.yml | Metricbeat configuration |
| Beats | Heartbeat | /etc/heartbeat/heartbeat.yml | Heartbeat configuration |
| Security | Certificates | /etc/elasticsearch/certs/ | TLS certificates |
| Security | Users | /etc/elasticsearch/elasticsearch-users | Native user database |
# /etc/elasticsearch/elasticsearch.yml
# ======================== Cluster Settings ========================
# Cluster name (all nodes in a cluster must have the same name)
cluster.name: production-elk-cluster
# Node name
node.name: es-node-01
# Node roles (the legacy node.master/node.data/node.ingest flags were removed in 8.x;
# node.roles replaces them -- omitting ml from the list disables it)
node.roles: [ master, data, ingest, remote_cluster_client ]
# Path settings
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
path.repo: ["/mnt/elasticsearch/backup"]
# Network settings
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
# Discovery settings (for cluster formation)
discovery.seed_hosts:
- es-node-01.example.com
- es-node-02.example.com
- es-node-03.example.com
cluster.initial_master_nodes:
- es-node-01
- es-node-02
- es-node-03
# Gateway settings (recover_after_nodes/expected_nodes were replaced by the
# *_data_nodes variants in 8.x)
gateway.recover_after_data_nodes: 2
gateway.expected_data_nodes: 3
gateway.recover_after_time: 5m
# ======================== Memory Settings ========================
# Lock memory on startup
bootstrap.memory_lock: true
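`bootstrap.memory_lock: true` only takes effect if the service is allowed to lock memory; otherwise Elasticsearch refuses to start with a "memory locking requested ... but was not locked" error. On systemd installs this is granted with a drop-in override (the drop-in file name is a convention, any `.conf` name works):

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

Run `sudo systemctl daemon-reload` before restarting the service so the override is picked up.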
# ======================== Performance Tuning ========================
# Thread pool settings
thread_pool.write.queue_size: 1000
thread_pool.search.queue_size: 1000
# Indexing settings
indices.memory.index_buffer_size: 20%
indices.fielddata.cache.size: 20%
indices.queries.cache.size: 10%
# Search settings
indices.query.bool.max_clause_count: 4096
# ======================== Security Settings ========================
# Enable security features
xpack.security.enabled: true
# TLS settings
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
# Audit logging
xpack.security.audit.enabled: true
# ======================== Monitoring ========================
# Self-monitoring collection (the xpack.monitoring.enabled toggle was removed in 8.x)
xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.interval: 10s
# ======================== Cross-Cluster Settings ========================
# Remote cluster configuration (there is no cluster.remote.clusters key;
# remote clusters are defined as cluster.remote.<alias>.seeds)
cluster.remote.backup-cluster.seeds:
- backup-es-01.example.com:9300
- backup-es-02.example.com:9300
# ======================== Index Lifecycle ========================
# ILM is always enabled in 8.x; the xpack.ilm.enabled setting was removed.
# ======================== Snapshot and Restore ========================
# path.repo is declared once in the path settings above and accepts local
# filesystem paths only; S3 repositories are registered via the snapshot API,
# not via path.repo. S3 credentials belong in the Elasticsearch keystore,
# never in elasticsearch.yml:
#   bin/elasticsearch-keystore add s3.client.default.access_key
#   bin/elasticsearch-keystore add s3.client.default.secret_key
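With the repository-s3 plugin installed and the credentials in the keystore, the S3 repository itself is registered through the snapshot API. A minimal sketch, reusing the `backup-bucket` name from above (the repository name `s3_backup` is illustrative):

```json
// PUT _snapshot/s3_backup
{
  "type": "s3",
  "settings": {
    "bucket": "backup-bucket",
    "client": "default"
  }
}
```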
# /etc/elasticsearch/jvm.options
# Heap size: set both to the same value, at most 50% of RAM and below ~32g
# (the compressed-oops threshold)
-Xms4g
-Xmx4g
# Use G1 garbage collector
-XX:+UseG1GC
# G1 GC settings
-XX:G1ReservePercent=25
-XX:InitiatingHeapOccupancyPercent=30
# Heap dump on OOM
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/lib/elasticsearch
# GC logging
-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# Temp directory
-Djava.io.tmpdir=${ES_TMPDIR}
# Error handling
-XX:+ExitOnOutOfMemoryError
# Locale
-Dfile.encoding=UTF-8
# Disable explicit GC calls
-XX:+DisableExplicitGC
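On package installs, heap overrides are better kept in a drop-in file under `jvm.options.d/` than edited into `jvm.options` itself, so package upgrades do not clobber them (the file name `heap.options` is a convention, any `.options` name works):

```ini
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms4g
-Xmx4g
```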
# /etc/logstash/logstash.yml
# Node identity
node.name: logstash-node-01
# Path settings
path.data: /var/lib/logstash
path.logs: /var/log/logstash
path.config: /etc/logstash/conf.d
# Pipeline settings
pipeline.workers: 4
pipeline.batch.size: 125
pipeline.batch.delay: 50
pipeline.ecs_compatibility: v8
# Memory settings
# Set via LS_JAVA_OPTS environment variable
# HTTP API settings (http.host/http.port were renamed to api.http.* in Logstash 8)
api.http.host: "127.0.0.1"
api.http.port: 9600
# Logging
log.level: info
log.format: json
# Dead letter queue
dead_letter_queue.enable: true
dead_letter_queue.max_bytes: 1024mb
# Queue settings (for persistence)
queue.type: persisted
queue.max_bytes: 1gb
queue.checkpoint.writes: 1024
# API authentication -- keep the password out of plain text by storing it in
# the Logstash keystore (bin/logstash-keystore add API_PASSWORD)
api.auth.type: basic
api.auth.basic.username: logstash_admin
api.auth.basic.password: "${API_PASSWORD}"
# X-Pack Monitoring
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["https://es-node-01.example.com:9200"]
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: "${LOGSTASH_SYSTEM_PASSWORD}"
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/elasticsearch/certs/ca.crt
# /etc/logstash/pipelines.yml
# Main pipeline
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
  pipeline.workers: 4
  pipeline.batch.size: 125
# Application logs pipeline
- pipeline.id: app-logs
  path.config: "/etc/logstash/conf.d/app-logs/*.conf"
  pipeline.workers: 2
  pipeline.batch.size: 100
# System logs pipeline
- pipeline.id: system-logs
  path.config: "/etc/logstash/conf.d/system-logs/*.conf"
  pipeline.workers: 2
  pipeline.batch.size: 100
# Security logs pipeline
- pipeline.id: security-logs
  path.config: "/etc/logstash/conf.d/security-logs/*.conf"
  pipeline.workers: 2
  pipeline.batch.size: 100
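Pipelines can also hand events to one another with the `pipeline` input/output plugins, which is useful for a distributor pattern where one pipeline ingests everything and routes to the others. A sketch (the `audit` address name is hypothetical):

```conf
# in the sending pipeline's output section
output {
  pipeline { send_to => ["audit"] }
}
# in the receiving pipeline's input section
input {
  pipeline { address => "audit" }
}
```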
# /etc/logstash/conf.d/01-inputs.conf
# Filebeat input
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.key"
    ssl_verify_mode => "force_peer"
    # close idle connections after 60 seconds (the value is in seconds);
    # there is no congestion_detection_threshold option in the beats input
    client_inactivity_timeout => 60
  }
}

# Syslog input
input {
  syslog {
    port => 5514
    type => "syslog"
    host => "0.0.0.0"
  }
}

# TCP input for application logs
input {
  tcp {
    port => 5000
    type => "application"
    codec => json_lines
  }
}

# JDBC input for database logs
input {
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql-connector-java.jar"
    # Connector/J 8.x class name; the legacy com.mysql.jdbc.Driver is deprecated
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/application"
    jdbc_user => "logstash"
    jdbc_password => "${JDBC_PASSWORD}"
    schedule => "* * * * *"
    statement => "SELECT * FROM audit_log WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
  }
}
# /etc/logstash/conf.d/02-filters.conf
# Grok patterns for syslog
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

# Parse JSON logs
filter {
  if [type] == "application" {
    json {
      source => "message"
      target => "json_data"
    }
    mutate {
      rename => {
        "[json_data][level]" => "log_level"
        "[json_data][message]" => "log_message"
        "[json_data][timestamp]" => "@timestamp"
      }
    }
  }
}

# Add geoip information
filter {
  if [source_ip] {
    geoip {
      source => "source_ip"
      target => "geoip"
      database => "/etc/logstash/geoip/GeoLite2-City.mmdb"
      fields => ["city_name", "country_name", "region_name", "location"]
    }
  }
}

# Add environment tags
filter {
  mutate {
    add_field => {
      "environment" => "production"
      "datacenter" => "dc1"
      "cluster" => "elk-prod"
    }
  }
}

# Remove unnecessary fields (keep [host]: the alert output references [host][name])
filter {
  mutate {
    remove_field => ["beat", "agent", "ecs", "input"]
  }
}
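As a quick sanity check of what the syslog grok pattern above captures, here is a plain-shell equivalent that pulls the program name (`%{DATA:syslog_program}`) out of a sample line. The sample line and the sed expression are illustrative only; real pattern validation is better done in Kibana's Grok Debugger.

```shell
# sample syslog line (made up for illustration)
line='Jan 12 06:25:41 host1 sshd[1234]: Failed password for root'
# timestamp, hostname, then the program name up to "[" or ":"
prog=$(printf '%s\n' "$line" | sed -E 's/^[A-Za-z]{3} +[0-9]+ [0-9:]+ [^ ]+ ([^ []+).*/\1/')
printf '%s\n' "$prog"
```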
# /etc/logstash/conf.d/03-outputs.conf
# Elasticsearch output
output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => ["https://es-node-01.example.com:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
      user => "logstash_writer"
      password => "${ES_WRITER_PASSWORD}"
      cacert => "/etc/elasticsearch/certs/ca.crt"
      ssl => true
      manage_template => true
      template => "/etc/logstash/templates/syslog-template.json"
      template_name => "syslog"
      template_overwrite => true
    }
  }
  if [type] == "application" {
    elasticsearch {
      hosts => ["https://es-node-01.example.com:9200"]
      index => "application-logs-%{+YYYY.MM.dd}"
      user => "logstash_writer"
      password => "${ES_WRITER_PASSWORD}"
      cacert => "/etc/elasticsearch/certs/ca.crt"
      ssl => true
    }
  }
}

# Route grok parse failures to a separate index (this is distinct from the
# dead letter queue, which holds events Elasticsearch rejected)
output {
  if "_grokparsefailure" in [tags] {
    elasticsearch {
      hosts => ["https://es-node-01.example.com:9200"]
      index => "failed-logs-%{+YYYY.MM.dd}"
      user => "logstash_writer"
      password => "${ES_WRITER_PASSWORD}"
      cacert => "/etc/elasticsearch/certs/ca.crt"
      ssl => true
    }
  }
}

# Slack alert for critical errors
output {
  if [log_level] == "ERROR" or [log_level] == "FATAL" {
    http {
      url => "https://hooks.slack.com/services/XXX/YYY/ZZZ"
      http_method => "post"
      format => "json"
      mapping => {
        "text" => "🚨 Critical Log Alert\nHost: %{[host][name]}\nLevel: %{log_level}\nMessage: %{log_message}"
      }
    }
  }
}
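With `dead_letter_queue.enable: true` set in logstash.yml, events that Elasticsearch rejects (mapping conflicts, 4xx responses) can later be replayed with the `dead_letter_queue` input plugin. A sketch, assuming the `main` pipeline and the `path.data` value configured above:

```conf
input {
  dead_letter_queue {
    # DLQ lives under path.data by default
    path => "/var/lib/logstash/dead_letter_queue"
    pipeline_id => "main"
    # remember how far we have read so events are not replayed twice
    commit_offsets => true
  }
}
```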
# /etc/kibana/kibana.yml
# Server settings
server.port: 5601
server.host: "0.0.0.0"
server.name: "kibana-prod"
server.basePath: ""
server.rewriteBasePath: false
server.maxPayloadBytes: 1048576
# Elasticsearch connection
elasticsearch.hosts: ["https://es-node-01.example.com:9200"]
elasticsearch.username: "kibana_system"
# Store the password in the Kibana keystore rather than in plain text:
#   bin/kibana-keystore add elasticsearch.password
elasticsearch.ssl.certificateAuthorities: [ "/etc/elasticsearch/certs/ca.crt" ]
elasticsearch.ssl.verificationMode: "certificate"
elasticsearch.requestTimeout: 30000
elasticsearch.pingTimeout: 1500
# Security settings (security is always enabled in Kibana 8.x;
# xpack.security.enabled is no longer a valid setting)
xpack.security.encryptionKey: "something_at_least_32_characters_long"
xpack.encryptedSavedObjects.encryptionKey: "another_32_character_encryption_key"
xpack.reporting.encryptionKey: "yet_another_32_character_reporting_key"
# Session settings
xpack.security.session.idleTimeout: "8h"
xpack.security.session.lifespan: "30d"
# Telemetry
telemetry.enabled: false
# Logging (Kibana 8.x appender syntax; the legacy logging.dest/logging.rotate
# settings were removed)
logging.appenders.file.type: file
logging.appenders.file.fileName: /var/log/kibana/kibana.log
logging.appenders.file.layout.type: json
logging.root.appenders: [default, file]
logging.root.level: info
# Monitoring
monitoring.ui.enabled: true
monitoring.ui.elasticsearch.hosts: ["https://es-node-01.example.com:9200"]
# monitoring.ui.elasticsearch.* falls back to the elasticsearch.* connection
# settings above, so the credentials do not need to be repeated here
# Default route (kibana.index, kibana.defaultAppId, and xpack.ilm.enabled
# were all removed in 8.x)
server.defaultRoute: /app/home
# CSP settings
csp.strict: true
csp.warnLegacyBrowsers: true
# /etc/filebeat/filebeat.yml
filebeat.inputs:
  # System logs
  - type: log
    enabled: true
    paths:
      - /var/log/syslog
      - /var/log/messages
    fields:
      log_type: syslog
    fields_under_root: true
    multiline.pattern: '^\s'
    multiline.match: after
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
  # Application logs
  - type: log
    enabled: true
    paths:
      - /var/log/application/*.log
    fields:
      log_type: application
    fields_under_root: true
    json.keys_under_root: true
    json.add_error_key: true
    processors:
      - add_host_metadata: ~
  # Nginx logs
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/access.log
    fields:
      log_type: nginx_access
    fields_under_root: true
    processors:
      - add_host_metadata: ~
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/error.log
    fields:
      log_type: nginx_error
    fields_under_root: true
    processors:
      - add_host_metadata: ~
  # Audit logs (Filebeat has no "auditd" input type; tail the file as a log
  # input, or use the auditd module / Auditbeat instead)
  - type: log
    enabled: true
    paths:
      - /var/log/audit/audit.log
    fields:
      log_type: auditd
    fields_under_root: true

# Output to Logstash
output.logstash:
  hosts: ["logstash-node-01.example.com:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-ca.crt"]
  ssl.certificate: "/etc/pki/tls/certs/filebeat.crt"
  ssl.key: "/etc/pki/tls/private/filebeat.key"
  loadbalance: true
  timeout: 30

# Alternative: direct output to Elasticsearch (only one output may be enabled at a time)
#output.elasticsearch:
#  hosts: ["https://es-node-01.example.com:9200"]
#  username: "filebeat_writer"
#  password: "${ES_FILEBEAT_PASSWORD}"
#  ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca.crt"]
#  ssl.certificate: "/etc/filebeat/certs/filebeat.crt"
#  ssl.key: "/etc/filebeat/certs/filebeat.key"

# Processors
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# Logging
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 7
  permissions: 0644

# Monitoring
monitoring.enabled: true
monitoring.elasticsearch:
  hosts: ["https://es-node-01.example.com:9200"]
  username: "filebeat_monitor"
  password: "${ES_FILEBEAT_MONITOR_PASSWORD}"
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca.crt"]
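Note that the `log` input type is deprecated in Filebeat 8.x in favor of `filestream`. The system-logs input above could be rewritten roughly as follows (the `id` value is illustrative; filestream inputs each require a unique id):

```yaml
  - type: filestream
    id: system-logs
    paths:
      - /var/log/syslog
      - /var/log/messages
    fields:
      log_type: syslog
    fields_under_root: true
    parsers:
      - multiline:
          type: pattern
          pattern: '^\s'
          match: after
```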
// PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "warm": {
        "min_age": "2d",
        "actions": {
          "set_priority": {
            "priority": 50
          },
          "shrink": {
            "number_of_shards": 1
          },
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "cold": {
        "min_age": "7d",
        "actions": {
          "set_priority": {
            "priority": 0
          },
          "freeze": {}
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
// PUT _index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1,
      "index.lifecycle.name": "logs-policy",
      "index.lifecycle.rollover_alias": "logs"
    },
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "message": { "type": "text" },
        "host": {
          "properties": {
            "name": { "type": "keyword" },
            "ip": { "type": "ip" }
          }
        },
        "log_level": { "type": "keyword" },
        "source": { "type": "keyword" }
      }
    }
  }
}
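For `index.lifecycle.rollover_alias` to work, the first backing index must be created by hand with the alias marked as the write index; ILM then manages rollover from there:

```json
// PUT logs-000001
{
  "aliases": {
    "logs": {
      "is_write_index": true
    }
  }
}
```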
// PUT _watcher/watch/high-error-rate
{
  "trigger": {
    "schedule": {
      "interval": "5m"
    }
  },
  "input": {
    "search": {
      "request": {
        "indices": ["application-logs-*"],
        "body": {
          "size": 0,
          "query": {
            "bool": {
              "filter": [
                { "range": { "@timestamp": { "gte": "now-5m" } } },
                { "term": { "log_level": "ERROR" } }
              ]
            }
          },
          "aggs": {
            "error_count": { "value_count": { "field": "log_level" } }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.aggregations.error_count.value": {
        "gt": 100
      }
    }
  },
  "actions": {
    "email_admin": {
      "email": {
        "to": "admin@example.com",
        "subject": "High Error Rate Detected",
        "body": "Error count: {{ctx.payload.aggregations.error_count.value}}"
      }
    },
    "slack_alert": {
      "webhook": {
        "scheme": "https",
        "host": "hooks.slack.com",
        "port": 443,
        "method": "post",
        "path": "/services/XXX/YYY/ZZZ",
        "body": "{\"text\": \"🚨 High Error Rate: {{ctx.payload.aggregations.error_count.value}} errors in last 5 minutes\"}"
      }
    }
  }
}
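A watch can be dry-run without waiting for its trigger via the execute API; ignoring the condition and simulating the actions lets you inspect the full execution context without actually sending email or Slack messages:

```json
// POST _watcher/watch/high-error-rate/_execute
{
  "ignore_condition": true,
  "action_modes": {
    "_all": "simulate"
  }
}
```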
# Elasticsearch has no standalone --validate-config flag; the configuration is
# validated at startup, so check the service logs after (re)starting
sudo journalctl -u elasticsearch --since "5 minutes ago"
# Check cluster health
curl -k -u elastic:password https://localhost:9200/_cluster/health?pretty
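In scripts, the health colour can be extracted from the response with jq, or with plain sed when jq is not installed. A sketch against a canned response (the JSON below is an abridged sample, not live output):

```shell
# sample /_cluster/health response (abridged)
health='{"cluster_name":"production-elk-cluster","status":"green","number_of_nodes":3}'
# extract the "status" field without jq
status=$(printf '%s' "$health" | sed -E 's/.*"status":"([a-z]+)".*/\1/')
printf '%s\n' "$status"
```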
# Validate Logstash configuration
sudo logstash --config.test_and_exit --path.config /etc/logstash/conf.d/
# Validate Filebeat configuration
sudo filebeat test config -c /etc/filebeat/filebeat.yml
# Test Filebeat output
sudo filebeat test output -c /etc/filebeat/filebeat.yml
# Check Kibana status
curl -k -u kibana_system:password https://localhost:5601/api/status
# Restart Elasticsearch
sudo systemctl restart elasticsearch
# Restart Logstash
sudo systemctl restart logstash
# Restart Kibana
sudo systemctl restart kibana
# Restart Filebeat
sudo systemctl restart filebeat
# Check service status
sudo systemctl status elasticsearch
sudo systemctl status logstash
sudo systemctl status kibana
sudo systemctl status filebeat
# View logs
sudo journalctl -u elasticsearch -f
sudo journalctl -u logstash -f
sudo journalctl -u kibana -f
sudo journalctl -u filebeat -f
# Load ILM policy
curl -k -u elastic:password -X PUT "https://localhost:9200/_ilm/policy/logs-policy" -H 'Content-Type: application/json' -d @/etc/elasticsearch/ilm-policy.json
# Load index template
curl -k -u elastic:password -X PUT "https://localhost:9200/_index_template/logs-template" -H 'Content-Type: application/json' -d @/etc/elasticsearch/index-template.json
# Verify ILM policy
curl -k -u elastic:password "https://localhost:9200/_ilm/policy/logs-policy?pretty"
# Verify index template
curl -k -u elastic:password "https://localhost:9200/_index_template/logs-template?pretty"
# Check cluster health
curl -k -u elastic:password https://localhost:9200/_cluster/health?pretty
# Check node info
curl -k -u elastic:password https://localhost:9200/_nodes?pretty
# Check indices
curl -k -u elastic:password https://localhost:9200/_cat/indices?v
# Check cluster stats
curl -k -u elastic:password https://localhost:9200/_cluster/stats?pretty
# Check ILM status
curl -k -u elastic:password https://localhost:9200/_ilm/status?pretty
# Check Logstash API
curl -u logstash_admin:password http://localhost:9600/_node/stats?pretty
# Check pipeline status
curl -u logstash_admin:password http://localhost:9600/_node/stats/pipelines?pretty
# Check plugins
sudo logstash-plugin list
# Test pipeline
echo '{"message": "test"}' | sudo logstash -f /etc/logstash/conf.d/test.conf
# Check Kibana status
curl -k -u kibana_system:password https://localhost:5601/api/status
# Check saved objects
curl -k -u kibana_system:password https://localhost:5601/api/saved_objects/_find
# Verify index patterns
curl -k -u kibana_system:password https://localhost:5601/api/saved_objects/_find?type=index-pattern
# Check Filebeat service status (filebeat itself has no "status" subcommand)
sudo systemctl status filebeat
# Check Filebeat modules
sudo filebeat modules list
# Test Filebeat output
sudo filebeat test output
# Check Filebeat logs
sudo tail -f /var/log/filebeat/filebeat.log
Running the ELK Stack in regulated environments? For help securing your deployment, contact office@linux-server-admin.com.