When I set out to revamp the monitoring of my infrastructure and switch to Grafana Alloy for both the host systems and the Docker containers running on them, I was frustrated by the lack of good tutorials for my use case. Most guides I found focused on either host monitoring or Docker monitoring, but not on a unified, easy-to-deploy setup. After quite some research into Grafana Alloy's examples and capabilities, I've created a solution that simplifies monitoring for home labs and small company setups.

The problem with traditional approaches

My previous monitoring setups required multiple components:

  • cAdvisor for Docker container metrics
  • Node Exporter for host metrics
  • Promtail or similar for log collection
  • Prometheus scrape configs that needed updating for every new server I wanted to monitor

This creates a lot of moving parts, configuration complexity, and maintenance overhead. When you want to add a new server to your monitoring, you need to:

  1. Install multiple agents
  2. Update Prometheus configuration
  3. Restart services
  4. Hope everything connects properly

Another problem is that Promtail is being deprecated, yet seemingly every example out there still uses it.

The Alloy solution: One container, everything monitored

Grafana Alloy changes the game. It's a single, unified agent that can:

  • Collect host metrics (like CPU, memory, disk, network)
  • Collect Docker container metrics
  • Collect host system logs (journald, log files)
  • Collect Docker container logs
  • Push everything to your monitoring stack (Prometheus + Loki)

All from a single Docker container. No multiple agents, no complex configuration on the monitoring server, and no scrapers, because everything is push-based.

This setup is perfect for:

  • Home labs: Simple deployment, easy to maintain
  • Small companies: Quick to set up new servers, no complex infrastructure required
  • Docker environments: Works seamlessly with existing Docker setups

To me, the real beauty is in the deployment speed: adding a new server takes minutes. Just copy a docker-compose file and a config file, spin it up, and you're done.
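As a rough sketch of what that looks like in practice, once you have the two files from the tutorial below (the user and hostname here are hypothetical):

# Copy the compose file and Alloy config to the new server
scp -r ~/alloy-monitoring user@new-server:~/

# Start the agent on the new server
ssh user@new-server "cd ~/alloy-monitoring && docker compose up -d"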

Quick tutorial: Setting up unified monitoring

Let's walk through setting up a monitored server, that is, just the client side: the Alloy agent. This assumes you already have a monitoring server that will receive the logs and metrics. The full setup, including all steps and example configs for a complete monitoring server, is available in the ArktIQ IT GitHub repository, which has an extensive README. Disclaimer: ArktIQ IT is my own company.

Step 1: Prepare the directory structure for the client/agent

You don't have to do it exactly like this, of course. Do it your way and fold it into your existing docker-compose setup if you wish.

Create a directory for your Alloy compose file and your config file.

mkdir -p ~/alloy-monitoring/alloy-config
cd ~/alloy-monitoring

Step 2: Create the Docker compose file

Create docker-compose.yml:

services:
  alloy:
    image: grafana/alloy:v1.12.2
    container_name: alloy
    restart: unless-stopped

    # Optional: the Alloy UI port plus the standard OTLP ports (4317 gRPC, 4318 HTTP)
    ports:
      - "12346:12345"
      - "4317:4317"
      - "4318:4318"

    environment:
      ALLOY_DEPLOY_MODE: docker
      INSTANCE: ${HOSTNAME}

    command: >
      run
      --server.http.listen-addr=0.0.0.0:12345
      --storage.path=/var/lib/alloy/data
      /etc/alloy/config.alloy

    # Without this, you'll miss out on Docker data
    privileged: true

    volumes:
      - ./alloy-config/config.alloy:/etc/alloy/config.alloy:ro

      # Host metrics
      - /proc:/rootproc:ro
      - /:/rootfs:ro
      - /sys:/sys:ro
      - /run/udev:/run/udev:ro

      # Docker access
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro

      # Journald + logs
      - /run/log/journal:/run/log/journal:ro
      - /var/log/journal:/var/log/journal:ro
      - /var/log:/var/log:ro
      - /etc/machine-id:/etc/machine-id:ro

      # Alloy data
      - alloy-data:/var/lib/alloy/data

    extra_hosts:
      - "host.docker.internal:host-gateway"

    devices:
      - /dev/kmsg

volumes:
  alloy-data:
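One note on the INSTANCE variable: docker compose substitutes ${HOSTNAME} from its own environment, and depending on your shell that variable may not be exported. A simple workaround, sketched here, is to write it into a .env file next to the compose file:

# Make the host's name available to docker compose for ${HOSTNAME} substitution
echo "HOSTNAME=$(hostname)" > .env

# Quick check that the value was picked up
docker compose config | grep INSTANCE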

Step 3: Create the Alloy configuration

Create alloy-config/config.alloy. This is where the magic is defined: one config file sets up push-based logs and metrics for both the host and Docker:

// Prometheus remote write endpoint
prometheus.remote_write "monitoring" {
  endpoint {
    url = "http://your-monitoring-server:9090/api/v1/write"
  }
}

// Host metrics using Unix exporter
prometheus.exporter.unix "host" {
  procfs_path = "/rootproc"
  rootfs_path = "/rootfs"

  disable_collectors = ["ipvs", "btrfs", "infiniband", "xfs", "zfs"]
  enable_collectors = [
    "meminfo",
    "netstat",
    "sockstat",
    "conntrack",
  ]

  filesystem {
    fs_types_exclude     = "^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|tmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$"
    mount_points_exclude = "^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/)"
    mount_timeout        = "5s"
  }

  netclass {
    ignored_devices = "^(veth.*|cali.*|[a-f0-9]{15})$"
  }

  netdev {
    device_exclude = "^(veth.*|cali.*|[a-f0-9]{15})$"
  }
}

discovery.relabel "host_metrics" {
  targets = prometheus.exporter.unix.host.targets

  rule {
    target_label = "job"
    replacement  = "monitoring/host"
  }

  rule {
    target_label = "instance"
    replacement  = sys.env("INSTANCE")
  }
}

prometheus.scrape "host_scrape" {
  scrape_interval = "15s"
  targets         = discovery.relabel.host_metrics.output
  forward_to      = [prometheus.remote_write.monitoring.receiver]
}

// Docker metrics using cAdvisor exporter
prometheus.exporter.cadvisor "docker" {
  docker_only = true
  store_container_labels = true
}

discovery.relabel "docker_metrics" {
  targets = prometheus.exporter.cadvisor.docker.targets

  rule {
    target_label = "job"
    replacement  = "monitoring/docker"
  }

  rule {
    target_label = "instance"
    replacement  = sys.env("INSTANCE")
  }
}

prometheus.scrape "docker_scrape" {
  scrape_interval = "10s"
  targets         = discovery.relabel.docker_metrics.output
  forward_to      = [prometheus.remote_write.monitoring.receiver]
}

// Loki write endpoint
loki.write "monitoring" {
  endpoint {
    url = "http://your-monitoring-server:3100/loki/api/v1/push"
  }
}

// Host logs from journald
discovery.relabel "host_journal" {
  targets = []

  rule {
    source_labels = ["__journal__systemd_unit"]
    target_label  = "unit"
  }

  rule {
    source_labels = ["__journal__boot_id"]
    target_label  = "boot_id"
  }

  rule {
    source_labels = ["__journal__transport"]
    target_label  = "transport"
  }

  rule {
    source_labels = ["__journal_priority_keyword"]
    target_label  = "level"
  }

  rule {
    target_label = "job"
    replacement  = "monitoring/host"
  }

  rule {
    target_label = "instance"
    replacement  = sys.env("INSTANCE")
  }
}

loki.source.journal "host_journal" {
  max_age       = "24h"
  relabel_rules = discovery.relabel.host_journal.rules
  forward_to    = [loki.write.monitoring.receiver]
}

// Host file logs
local.file_match "host_files" {
  path_targets = [{
    __path__  = "/var/log/{syslog,messages,*.log}",
    job       = "monitoring/host",
    instance  = sys.env("INSTANCE"),
  }]
}

loki.source.file "host_files" {
  targets    = local.file_match.host_files.targets
  forward_to = [loki.write.monitoring.receiver]
}

// Docker container logs
discovery.docker "docker" {
  host = "unix:///var/run/docker.sock"
}

discovery.relabel "docker_logs" {
  targets = []

  rule {
    source_labels = ["__meta_docker_container_label_com_docker_compose_service"]
    target_label  = "compose_service"
  }

  rule {
    source_labels = ["__meta_docker_container_name"]
    regex         = "/(.*)"
    target_label  = "container"
  }

  rule {
    target_label = "job"
    replacement  = "monitoring/docker"
  }

  rule {
    target_label = "instance"
    replacement  = sys.env("INSTANCE")
  }

  rule {
    target_label = "log_type"
    replacement  = "docker"
  }
}

loki.source.docker "docker_logs" {
  host          = "unix:///var/run/docker.sock"
  targets       = discovery.docker.docker.targets
  relabel_rules = discovery.relabel.docker_logs.rules
  forward_to    = [loki.write.monitoring.receiver]
}

Important: Update the URLs in the config to point to your monitoring server's Prometheus and Loki endpoints. If you point the agent at a WireGuard address instead (check out the GitHub repository for details), everything flows through a secure tunnel.
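If you want to reuse the exact same config file on every server, one option is to read the endpoints from environment variables instead of hard-coding them. A minimal sketch, assuming you add PROM_REMOTE_WRITE_URL and LOKI_PUSH_URL to the environment block of the compose file (those variable names are my own choice, not something Alloy expects):

// Prometheus remote write endpoint, read from the environment
prometheus.remote_write "monitoring" {
  endpoint {
    url = sys.env("PROM_REMOTE_WRITE_URL")
  }
}

// Loki write endpoint, read from the environment
loki.write "monitoring" {
  endpoint {
    url = sys.env("LOKI_PUSH_URL")
  }
}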

Step 4: Start the container

docker compose up -d

That's it! Your server is now sending metrics and logs to your monitoring stack.
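If you want to watch the agent come up before verifying, tailing its own logs is usually enough:

# Follow Alloy's logs to confirm it started and is shipping data
docker compose logs -f alloy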

Step 5: Verify it's working

Check Alloy's built-in HTTP endpoint (if you exposed the port; note that I mapped it to the non-standard host port 12346 because I was experimenting with a bare-metal Alloy install at the same time):

curl http://localhost:12346/-/healthy

Or check your Prometheus and Loki to see if data is flowing in.
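For example, you can query both APIs directly. The following sketch assumes the hypothetical your-monitoring-server hostname from the config and the job labels defined above:

# Ask Prometheus whether the host scrape is reporting in
curl -s -G 'http://your-monitoring-server:9090/api/v1/query' \
  --data-urlencode 'query=up{job="monitoring/host"}'

# Ask Loki for recent Docker container log lines
curl -s -G 'http://your-monitoring-server:3100/loki/api/v1/query_range' \
  --data-urlencode 'query={job="monitoring/docker"}' \
  --data-urlencode 'limit=10'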

Bonus: Enhanced Docker log labeling

If you want to add custom labels to your Docker container logs for easier filtering in Loki, you can add labels to your containers:

services:
  api:
    image: myapp:latest
    labels:
      no.arktiq.service: api
      no.arktiq.env: prod

Then update the Alloy config to extract these labels:

discovery.relabel "docker_logs" {
  targets = []

  rule {
    source_labels = ["__meta_docker_container_label_no_arktiq_service"]
    target_label  = "service"
  }

  rule {
    source_labels = ["__meta_docker_container_label_no_arktiq_env"]
    target_label  = "env"
  }

  // ... rest of the rules
}

This makes it easy to filter logs in Loki by service and environment.
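For instance, with the labels above in place, a LogQL query like this (using the hypothetical values from the compose snippet) pulls the error lines of one service in one environment:

{service="api", env="prod"} |= "error"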

What makes this setup "special"

  1. Single container: One Docker container handles everything - no need for multiple agents
  2. No monitoring server changes: When you add a new server, you don't need to touch Prometheus or Loki configuration. Everything is push-based.
  3. Fast deployment: Adding a new server is just copying files and running docker compose up -d
  4. Unified configuration: One config file manages all metric and log collection. You can even use the same file on all your monitored servers!
  5. Fewer moving parts: Skip Node Exporter, Promtail, cAdvisor, and separate scrape configs. Just use Alloy!
  6. Production-ready: Works great for small to medium setups

The complete stack

This particular tutorial focused on the client-side setup. The full monitoring stack (including Prometheus, Loki, Grafana, Alertmanager, and more) is available in my ArktIQ IT GitHub repository. The repository includes:

  • Complete monitoring server setup with Docker Compose
  • Pre-configured Grafana dashboards
  • Alert rules for common issues
  • SMTP relay for email notifications
  • WireGuard setup for secure remote monitoring

Final words

After struggling to find a good unified solution for monitoring both hosts and Docker containers, I'm excited to share this setup. It's simple, time-efficient, and perfect for home labs and small teams who want comprehensive monitoring without the complexity.

The key insight is that Grafana Alloy can do it all: host metrics, Docker metrics, host logs, and Docker logs - from a single container. No need for cAdvisor, Node Exporter, Promtail, or complex Prometheus scraping configurations. Just Alloy.

If you're setting up monitoring for your infrastructure, give this unified approach a try. You might find, like I did, that simpler is better.

