When it comes to centralized logging, the ELK stack (Elasticsearch, Logstash and Kibana) often pops up. However, if you have limited computational resources and few servers, it's probably overkill. Logstash alone will hog a lot of resources! For simple use cases, you'll manage perfectly well without Logstash, as long as you have Filebeat. I'll leave Metricbeat out of this, because metrics and monitoring are covered by Prometheus and Grafana (for now).
In my setup, I'm using Filebeat to ship logs directly to Elasticsearch, and I'm happy with that. I can really easily check things like failed SSH login attempts, nginx and Apache access logs, and the host's syslog.
Sure, professional users will benefit from Logstash, but it comes at a price I'm not (yet) willing to pay. At some point, when I'd like to dive into Logstash and explore its potential, I might install it. But that would probably be to learn, not because I need it.
So - how did I set it up? What caveats have I encountered?
The setup is easy. Here's an extract from my docker-compose.yml:
kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:7.0.0
  networks:
    - frontend
    - backend
  environment:
    - TZ=${DOCKER_TZ}
    - SERVER_NAME=${KIBANA_DOMAIN}
    - SERVER_PORT=5601
    - SERVER_HOST="0"
    - ELASTICSEARCH_HOST=elasticsearch:9200
    - VIRTUAL_HOST=${KIBANA_DOMAIN}
    - VIRTUAL_PORT=5601
    - VIRTUAL_PROTO=http
    - LETSENCRYPT_HOST=${KIBANA_DOMAIN}
    - LETSENCRYPT_EMAIL=${NOTIFICATION_EMAIL}
  depends_on:
    - elasticsearch
  labels:
    # Hints for Filebeat's autodiscover; label values must be strings
    co.elastic.logs/disable: "false"
    co.elastic.logs/module: kibana

elasticsearch:
  container_name: elasticsearch
  image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0
  networks:
    - backend
  environment:
    - TZ=${DOCKER_TZ}
    - cluster.name=docker-cluster
    - bootstrap.memory_lock=true
    - discovery.type=single-node
    # Cap the JVM heap - plenty for a small single-node setup
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    # Let Elasticsearch lock its memory (goes with bootstrap.memory_lock)
    memlock:
      soft: -1
      hard: -1
  volumes:
    - elastic_data:/usr/share/elasticsearch/data
  labels:
    co.elastic.logs/disable: "false"
    co.elastic.logs/module: elasticsearch

filebeat:
  container_name: filebeat
  image: docker.elastic.co/beats/filebeat:7.0.0
  # Root is needed to read the Docker socket and the host's log files
  user: root
  networks:
    - backend
  volumes:
    # Customized module ingest pipelines and manifests
    - ${MY_DOCKER_DATA_DIR}/filebeat/module/nginx/access/ingest/default.json:/usr/share/filebeat/module/nginx/access/ingest/default.json
    - ${MY_DOCKER_DATA_DIR}/filebeat/module/apache/access/ingest/default.json:/usr/share/filebeat/module/apache/access/ingest/default.json
    - ${MY_DOCKER_DATA_DIR}/filebeat/module/system/syslog/manifest.yml:/usr/share/filebeat/module/system/syslog/manifest.yml
    - ${MY_DOCKER_DATA_DIR}/filebeat/module/system/auth/manifest.yml:/usr/share/filebeat/module/system/auth/manifest.yml
    - ${MY_DOCKER_DATA_DIR}/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - filebeat_data:/usr/share/filebeat/data
    # Access to the Docker daemon and the other containers' logs
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/lib/docker/containers/:/var/lib/docker/containers/:ro
    # Host logs for the system module
    - /var/log/:/var/log/:ro
    - nextcloud-db-logs:/mnt/nextcloud-db-log:ro
  environment:
    - TZ=${DOCKER_TZ}
    - ELASTICSEARCH_HOST=elasticsearch:9200
    - KIBANA_HOST=kibana:5601
  # The bind-mounted filebeat.yml is owned by a host user, so relax
  # Filebeat's ownership/permission check
  command: ["--strict.perms=false"]
  labels:
    co.elastic.logs/disable: "false"
There are quite a few takeaways here compared to the standard examples you'll find in most tutorials.
The really interesting part here is Filebeat. Elasticsearch and Kibana are both "fire and forget" in this docker-compose context. But Filebeat needs quite some attention in order to ship logs that Elasticsearch can ingest in a structured manner.
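My filebeat.yml isn't shown above, but the heart of it is hints-based autodiscover plus the system module for the host logs. Here's a minimal sketch, assuming the environment variables from the compose extract; the exact hints settings and module list are my guesses based on the mounts and labels above, not a copy of my actual file:

filebeat.autodiscover:
  providers:
    # Watch the Docker socket and build inputs from the
    # co.elastic.logs/* container labels seen in the compose file
    - type: docker
      hints.enabled: true

filebeat.modules:
  # Host logs, read via the /var/log bind mount
  - module: system
    syslog:
      enabled: true
    auth:
      enabled: true

# Both variables are set on the filebeat service in the compose file
output.elasticsearch:
  hosts: ["${ELASTICSEARCH_HOST}"]

setup.kibana:
  host: "${KIBANA_HOST}"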
What is really important the first time you start your stack is to run the Filebeat setup, then restart Kibana once. After that, your logs should start ticking in.
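In practice, the first run looks something like this (a sketch using the service names from the compose extract; the extra flag mirrors the command: line of the filebeat service):

docker-compose up -d elasticsearch kibana

# Load the index template, ingest pipelines and bundled dashboards
docker-compose run --rm filebeat setup --strict.perms=false
docker-compose up -d filebeat

# One-time restart so Kibana picks everything up
docker-compose restart kibana

Then go to Kibana and enjoy.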
I had to make minor adjustments to the queries behind some of the default dashboards to get any results. For example, on the SSH dashboard, I had to change the filters to event.outcome:success and event.outcome:failure to visualize successful and failed authentication attempts. The default filters Filebeat had put there gave me zero hits because the original fields didn't exist in my Elasticsearch index.
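The same ECS fields are handy for ad-hoc queries too. In Kibana's Discover, a KQL query along these lines should list failed SSH logins (field names as in the dashboard fix above):

event.module : "system" and event.dataset : "system.auth" and event.outcome : "failure"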
For the complete setup, as always, TheAwesomeGarage has you covered on GitHub.