When I set out to revamp the monitoring of my infrastructure and switch to Grafana Alloy for both the host systems and the Docker containers running on them, I found myself frustrated by the lack of good tutorials for my use case. Most guides I found focused on either host monitoring or Docker monitoring, but not on a unified, easy-to-deploy setup. After quite some research into Grafana Alloy's examples and capabilities, I've created a solution that simplifies monitoring for home labs and small company setups.
My previous monitoring setups required multiple components: Node Exporter for host metrics, cAdvisor for container metrics, Promtail for shipping logs, and scrape configuration on the Prometheus side to tie it all together.
This creates a lot of moving parts, configuration complexity, and maintenance overhead. When you want to add a new server to your monitoring, you need to repeat that setup for every component and register the new scrape targets on the monitoring server.
And there's another problem: Promtail is being deprecated, yet it seems every example out there still uses Promtail.
Grafana Alloy changes the game. It's a single, unified agent that can collect host metrics, Docker container metrics, host logs (journald and log files), and Docker container logs, and push all of it to your monitoring stack.
All from a single Docker container. No multiple agents, no complex configuration on the monitoring server, and no scrapers - because everything is push-based.
This setup is perfect for home labs, small companies, and anyone who wants comprehensive monitoring of hosts and containers without the usual complexity.
To me, the real beauty is the deployment speed: adding a new server takes minutes. Just copy a docker-compose file and a config, spin it up, and you're done.
Let's walk through setting up a monitored server - so just the client side, the Alloy agent. This assumes you already have a monitoring server that will receive the logs and metrics. The full setup, including all steps and example configs for a complete monitoring server, is available in the ArktIQ IT GitHub repository with an extensive README file. Disclaimer: ArktIQ IT is my own company.
You don't have to do it exactly like this, of course. Do it your way and include it in your existing docker-compose if you wish.
Create a directory for your Alloy compose-file and your config file.
mkdir -p ~/alloy-monitoring/alloy-config
cd ~/alloy-monitoring
Create docker-compose.yml:
services:
alloy:
image: grafana/alloy:v1.12.2
container_name: alloy
restart: unless-stopped
# Optional
ports:
- "12346:12345"
- "4317:4317"
- "4318:4318"
environment:
ALLOY_DEPLOY_MODE: docker
INSTANCE: ${HOSTNAME}
command: >
run
--server.http.listen-addr=0.0.0.0:12345
--storage.path=/var/lib/alloy/data
/etc/alloy/config.alloy
# Without this, you'll miss out on Docker data
privileged: true
volumes:
- ./alloy-config/config.alloy:/etc/alloy/config.alloy:ro
# Host metrics
- /proc:/rootproc:ro
- /:/rootfs:ro
- /sys:/sys:ro
- /run/udev:/run/udev:ro
# Docker access
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker:/var/lib/docker:ro
- /dev/disk/:/dev/disk:ro
# Journald + logs
- /run/log/journal:/run/log/journal:ro
- /var/log/journal:/var/log/journal:ro
- /var/log:/var/log:ro
- /etc/machine-id:/etc/machine-id:ro
# Alloy data
- alloy-data:/var/lib/alloy/data
extra_hosts:
- "host.docker.internal:host-gateway"
devices:
- /dev/kmsg
volumes:
alloy-data:
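One thing to watch: Docker Compose only substitutes ${HOSTNAME} if the variable is actually visible to it (bash sets HOSTNAME but does not export it by default), so the INSTANCE value can come out empty. A simple workaround, as a sketch, is to drop the hostname into a .env file next to the compose file:
# Make the hostname available to Compose for ${HOSTNAME} interpolation
echo "HOSTNAME=$(hostname)" > .env
# Quick sanity check that the value resolved in the rendered config
docker compose config | grep INSTANCE
Anything you put in .env is picked up automatically by Docker Compose during variable interpolation.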
Create alloy-config/config.alloy. This is where the magic happens: one config file sets up push-based logs and metrics for both the host and Docker:
// Prometheus remote write endpoint
prometheus.remote_write "monitoring" {
endpoint {
url = "http://your-monitoring-server:9090/api/v1/write"
}
}
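// Optional (not part of the minimal setup): if your Prometheus write endpoint
// sits behind authentication, the endpoint block also accepts basic_auth.
// Placeholder credentials shown - uncomment and adjust only if you need it:
//
//   endpoint {
//     url = "http://your-monitoring-server:9090/api/v1/write"
//     basic_auth {
//       username = "alloy"
//       password = "changeme"
//     }
//   }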
// Host metrics using Unix exporter
prometheus.exporter.unix "host" {
procfs_path = "/rootproc"
rootfs_path = "/rootfs"
disable_collectors = ["ipvs", "btrfs", "infiniband", "xfs", "zfs"]
enable_collectors = [
"meminfo",
"netstat",
"sockstat",
"conntrack",
]
filesystem {
fs_types_exclude = "^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|tmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$"
mount_points_exclude = "^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/)"
mount_timeout = "5s"
}
netclass {
ignored_devices = "^(veth.*|cali.*|[a-f0-9]{15})$"
}
netdev {
device_exclude = "^(veth.*|cali.*|[a-f0-9]{15})$"
}
}
discovery.relabel "host_metrics" {
targets = prometheus.exporter.unix.host.targets
rule {
target_label = "job"
replacement = "monitoring/host"
}
rule {
target_label = "instance"
replacement = sys.env("INSTANCE")
}
}
prometheus.scrape "host_scrape" {
scrape_interval = "15s"
targets = discovery.relabel.host_metrics.output
forward_to = [prometheus.remote_write.monitoring.receiver]
}
// Docker metrics using cAdvisor exporter
prometheus.exporter.cadvisor "docker" {
docker_only = true
store_container_labels = true
}
discovery.relabel "docker_metrics" {
targets = prometheus.exporter.cadvisor.docker.targets
rule {
target_label = "job"
replacement = "monitoring/docker"
}
rule {
target_label = "instance"
replacement = sys.env("INSTANCE")
}
}
prometheus.scrape "docker_scrape" {
scrape_interval = "10s"
targets = discovery.relabel.docker_metrics.output
forward_to = [prometheus.remote_write.monitoring.receiver]
}
// Loki write endpoint
loki.write "monitoring" {
endpoint {
url = "http://your-monitoring-server:3100/loki/api/v1/push"
}
}
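// Optional: loki.write can also attach static labels to every log line it
// ships via external_labels, for example to tag an environment. Example
// values only - add inside the loki.write block above if you want it:
//
//   external_labels = {
//     env = "prod",
//   }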
// Host logs from journald
discovery.relabel "host_journal" {
targets = []
rule {
source_labels = ["__journal__systemd_unit"]
target_label = "unit"
}
rule {
source_labels = ["__journal__boot_id"]
target_label = "boot_id"
}
rule {
source_labels = ["__journal__transport"]
target_label = "transport"
}
rule {
source_labels = ["__journal_priority_keyword"]
target_label = "level"
}
rule {
target_label = "job"
replacement = "monitoring/host"
}
rule {
target_label = "instance"
replacement = sys.env("INSTANCE")
}
}
loki.source.journal "host_journal" {
max_age = "24h"
relabel_rules = discovery.relabel.host_journal.rules
forward_to = [loki.write.monitoring.receiver]
}
// Host file logs
local.file_match "host_files" {
path_targets = [{
__path__ = "/var/log/{syslog,messages,*.log}",
job = "monitoring/host",
instance = sys.env("INSTANCE"),
}]
}
loki.source.file "host_files" {
targets = local.file_match.host_files.targets
forward_to = [loki.write.monitoring.receiver]
}
// Docker container logs
discovery.docker "docker" {
host = "unix:///var/run/docker.sock"
}
discovery.relabel "docker_logs" {
targets = []
rule {
source_labels = ["__meta_docker_container_label_com_docker_compose_service"]
target_label = "compose_service"
}
rule {
source_labels = ["__meta_docker_container_name"]
regex = "/(.*)"
target_label = "container"
}
rule {
target_label = "job"
replacement = "monitoring/docker"
}
rule {
target_label = "instance"
replacement = sys.env("INSTANCE")
}
rule {
target_label = "log_type"
replacement = "docker"
}
}
loki.source.docker "docker_logs" {
host = "unix:///var/run/docker.sock"
targets = discovery.docker.docker.targets
relabel_rules = discovery.relabel.docker_logs.rules
forward_to = [loki.write.monitoring.receiver]
}
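If some containers are chatty, you can optionally put a loki.process stage between the Docker log source and the Loki writer to drop noise before it leaves the host. A minimal sketch, with a made-up component name and an example pattern (drop lines containing level=debug):
// Hypothetical filter stage: drops debug-level container log lines before shipping
loki.process "docker_filter" {
  forward_to = [loki.write.monitoring.receiver]

  stage.drop {
    expression = "level=debug"
  }
}
To use it, change forward_to in loki.source.docker "docker_logs" to [loki.process.docker_filter.receiver] so the logs pass through the filter first.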
Important: update the URLs in the config to point to your monitoring server's Prometheus and Loki endpoints. If you point the agent at a WireGuard tunnel (check out the GitHub repository for details), everything flows through a secure tunnel. Then start the agent:
docker compose up -d
That's it! Your server is now sending metrics and logs to your monitoring stack.
Check the Alloy UI (if you exposed the port - note that I mapped a non-standard host port, 12346, because I was experimenting with a bare-metal Alloy on the same machine at the time):
curl http://localhost:12346/-/healthy
Or check your Prometheus and Loki to see if data is flowing in.
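A few concrete checks, assuming the default ports from the config above and the uname collector left enabled (it is by default, so node_uname_info should exist):
# Tail the agent's own logs for errors
docker compose logs --tail=50 alloy
# Ask Prometheus whether host metrics have arrived
curl -sG 'http://your-monitoring-server:9090/api/v1/query' \
  --data-urlencode 'query=node_uname_info{job="monitoring/host"}'
# Ask Loki which label names it has seen so far
curl -s 'http://your-monitoring-server:3100/loki/api/v1/labels'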
If you want to add custom labels to your Docker container logs for easier filtering in Loki, you can add labels to your containers:
services:
api:
image: myapp:latest
labels:
no.arktiq.service: api
no.arktiq.env: prod
Then update the Alloy config to extract these labels:
discovery.relabel "docker_logs" {
targets = []
rule {
source_labels = ["__meta_docker_container_label_no_arktiq_service"]
target_label = "service"
}
rule {
source_labels = ["__meta_docker_container_label_no_arktiq_env"]
target_label = "env"
}
// ... rest of the rules
}
This makes it easy to filter logs in Loki by service and environment.
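For example, with the hypothetical api container from the snippet above running in prod, a query against Loki (or the same selector in Grafana Explore) could look like this:
# Fetch recent logs for the "api" service in "prod" (Loki defaults the time range)
curl -sG 'http://your-monitoring-server:3100/loki/api/v1/query_range' \
  --data-urlencode 'query={service="api", env="prod"}'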
Restart Alloy so it picks up the updated config:
docker compose up -d
This particular tutorial focused on the client-side setup. The full monitoring stack (including Prometheus, Loki, Grafana, Alertmanager, and more) is available in my ArktIQ IT GitHub repository, together with example configs and an extensive README.
After struggling to find a good unified solution for monitoring both hosts and Docker containers, I'm excited to share this setup. It's simple, time-efficient, and perfect for home labs and small teams who want comprehensive monitoring without the complexity.
The key insight is that Grafana Alloy can do it all: host metrics, Docker metrics, host logs, and Docker logs - from a single container. No need for cAdvisor, Node Exporter, Promtail, or complex Prometheus scraping configurations. Just Alloy.
If you're setting up monitoring for your infrastructure, give this unified approach a try. You might find, like I did, that simpler is better.
Resources: the ArktIQ IT GitHub repository with the full monitoring-server setup and README, and Grafana Alloy's own documentation and examples.