The other day I got Ouroboros talking to my private Docker registry. The thing is, my Docker registry was only available locally. If I wanted to build my stack on a different computer, I'd be stuck! So I wanted to expose my private Docker registry on the big Internet, while at the same time restricting access to it.

This is how I did it...

First, I told my Nginx-gen and Let's Encrypt companion containers to prepare a virtual host file and SSL certificates for my registry server. I shut down the default port 5000 on the host computer by using expose instead of ports, and I instructed the registry to use the SSL certificates for communication plus simple authentication for access control. All easily done straight in the docker-compose.yaml file, like this:

```yaml
myregistry:
  container_name: myregistry
  image: registry:2
  networks:
    - frontend
  expose:
    - "5000"
  restart: always
  environment:
    - REGISTRY_AUTH=htpasswd
    - REGISTRY_AUTH_HTPASSWD_REALM=RemiM Private Docker Registry
    - REGISTRY_AUTH_HTPASSWD_PATH=/auth/.htpasswd
  volumes:
    - ${MY_DOCKER_DATA_DIR}/registry/data:/var/lib/registry
    - nginx-certs:/certs
    - ${MY_DOCKER_DATA_DIR}/registry/auth:/auth
```

Notice how I re-use the nginx-certs volume here. That's a key point, because the registry container must have real certificates in order to work properly. Certificate generation is handled automatically by Let's Encrypt if you use the rest of my example Docker setup. It's beautiful how these things just play together.
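For reference, the "telling" happens through environment variables that nginx-gen and the Let's Encrypt companion watch for. A minimal sketch (registry.example.com and the email address are placeholders, not values from my actual setup) adds something like this to the same service:

```yaml
  environment:
    # Picked up by nginx-gen to generate the virtual host entry
    - VIRTUAL_HOST=registry.example.com
    - VIRTUAL_PORT=5000
    # Picked up by the Let's Encrypt companion to request certificates
    - LETSENCRYPT_HOST=registry.example.com
    - LETSENCRYPT_EMAIL=you@example.com
```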

Also, notice the /auth/.htpasswd reference. In order for this to work, you need to mount a registry auth file directory under volumes, but you also need to generate the actual auth file like this:

docker run --entrypoint htpasswd  registry:2 -Bbn <username> <password> > ~/my-docker-data/registry/auth/.htpasswd
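The -B flag matters here: registry:2 only accepts bcrypt-hashed entries, whose hashes start with $2y$. Each line of the generated file has the form username:hash; a quick sanity check (using a made-up, illustrative hash) looks like this:

```shell
# Each htpasswd line is "<user>:<hash>"; registry:2 requires bcrypt,
# whose hashes start with $2y$ (hence the -B flag above).
line='myuser:$2y$05$abcdefghijklmnopqrstuv'   # illustrative, not a real hash
case "$line" in
  *':$2y$'*) echo "bcrypt entry: ok" ;;
  *)         echo "not bcrypt - regenerate with -B" ;;
esac
```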

Finally, you need to update your container definitions in the docker-compose.yaml file to actually point to your new private repo for this to be of any use, e.g.:
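As a sketch (registry.example.com stands in for whatever hostname your registry ends up on, and myapp is just a made-up service), a service that pulls from the private registry simply prefixes the image name with the registry host:

```yaml
myapp:
  container_name: myapp
  # Pulled from the private registry instead of Docker Hub;
  # "registry.example.com" is a placeholder for your registry's hostname.
  image: registry.example.com/myapp:latest
  restart: always
```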


Of course, Ouroboros will now check for image updates not from localhost, but from your new Docker host. And this host is now secured with a username and password, so you need to tell Ouroboros how to provide credentials. This is how you do that: just mount a very specific file (on Linux) into your Ouroboros container:

      - $HOME/.docker/config.json:/root/.docker/config.json

In order for the config.json file to be populated with credentials, you need to log in to your Docker registry once as the user that owns the referenced $HOME directory:

docker login -u <username> -p <password> <your-registry-host>
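Under the hood, docker login just stores a base64-encoded username:password pair in config.json, which is what Ouroboros reads when pulling. A minimal sketch of the resulting structure (hostname and credentials are made up for illustration):

```python
import base64
import json

# Roughly what `docker login` writes into ~/.docker/config.json:
# the "auth" value is simply base64("<username>:<password>").
auth = base64.b64encode(b"myuser:mypassword").decode()
config = {"auths": {"registry.example.com": {"auth": auth}}}

print(json.dumps(config, indent=2))
# Decoding recovers the original credentials:
print(base64.b64decode(auth).decode())  # myuser:mypassword
```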

Now you're set! You can provision containers from your own private Docker repository, you can access this repository from anywhere if you have the right credentials, and Ouroboros knows how to pull new updates using those same credentials.
