Working with Molecule
Recently I discovered Molecule, which greatly improved my development and testing workflow for Ansible roles. Unfortunately, soon enough I ran into a blocking issue: working with containers.
The problem
Molecule works great for most things until you need to test Docker functionality (e.g. with the docker_container module).
Remember, by design your tests are executed inside a Docker container, so if your role wants to spin up yet another container, things get tricky, because we are talking about Docker-inside-Docker functionality (and if you are already running Molecule from a Docker container, then it's Docker inside Docker inside Docker).
In the beginning I thought it would be easy: just share /var/run/docker.sock with the container and voilà, right?
Not so fast. It turns out that Molecule mounts the run folder with tmpfs, which removes the volume binding (see this 2-year-old bug).
Solutions
I've found several solutions that can work around this issue:
Using SSH
You can pass the DOCKER_HOST environment variable when launching Molecule, which instructs it not to use the predefined socket but the connection specified in the variable. So it's possible to set it to ssh://remote.machine.com, and Molecule will connect through SSH to the remote machine and use that machine's Docker instance.
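As a quick illustration (the hostname and user are placeholders, and this assumes your local Docker CLI is recent enough to support ssh:// hosts and that you have key-based SSH access to the remote machine):

DOCKER_HOST=ssh://user@remote.machine.com molecule test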
This solution mostly works (although not for dind), but I need to be able to work in offline mode (commuting on the train), so I kept digging.
Mounting /var/run/docker.sock
Since we are unable to mount docker.sock inside /var/run, let's mount it elsewhere.
platforms:
  - name: instance
    image: "alekcander/docker-centos7-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - /var/run/docker.sock:/tmp/docker.sock
    privileged: true
    pre_build_image: true
As you can see, in molecule.yaml we are sharing our socket inside the /tmp folder, but now we need to tell the Ansible executor inside the container to use the new path.
There are several options:

- Before running our role, we can create a link to the new path at /var/run/docker.sock.
- We can try to pass a DOCKER_HOST environment variable to the container. But remember that if you set the DOCKER_HOST env var to /tmp/docker.sock on your dev machine, you will not be able to spin up the Molecule instance, since it won't find that path on the local machine. It should be possible to create a link from /var/run/docker.sock to /tmp/docker.sock, and that would solve the issue.
- Use the docker_host property in your role. My Docker roles are always flexible (i.e. docker_host: "{{ jackett_docker_host | default(omit) }}"), so this is my favourite approach; see the sketch after this list.
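As a rough sketch of that last option (the container name and image here are purely illustrative; only the docker_host line is the point), a task inside the role could look like this:

- name: Start the jackett container
  docker_container:
    name: jackett
    image: linuxserver/jackett
    docker_host: "{{ jackett_docker_host | default(omit) }}"

When the role runs under Molecule you set jackett_docker_host to unix:///tmp/docker.sock (for instance in the converge playbook's vars), and when it runs normally the variable stays undefined, so the module falls back to the default socket.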
This solution works, but I am still not too enthusiastic about it: when we create a container through the shared socket, it's created on your local machine, and by default a molecule destroy will leave it hanging around. There is also a possibility of port conflicts.
That's why I began to search for another solution: real DIND, which would give me the isolation I want.
“Real” DIND
If we are talking about DIND, there is already an official image on Docker Hub. So all we need to do, in theory, is run the DIND container on the same network as Molecule's instance so they can reach each other, and then we should have the connectivity we want.
Let's create our network, called dind-network (the name can be changed if you want):
docker network create --attachable dind-network
And spin up the DIND container:
docker run --rm --privileged --name dind -d \
  --network dind-network \
  -e DOCKER_TLS_CERTDIR="" \
  docker:19.03.0-dind
A couple of explanations about the flags:

- --rm: remove the container after it stops.
- --privileged: the container will run in privileged mode. Warning: this gives it pretty much root access to your local machine, so be aware of it.
- --network dind-network: the name of the network we want to attach our container to. It has to be the same as the name used in the previous step.
- -e DOCKER_TLS_CERTDIR="": this setting disables TLS for connections to the DIND container. Recently (as of 19.03) the default behaviour has changed (see this page on GitLab for details about the DIND TLS issue).
- docker:19.03.0-dind: the name of the image and its pinned tag.
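Before pointing Molecule at it, an optional sanity check is to run a throwaway Docker CLI container on the same network and query the daemon; this just confirms that the dind name resolves and the API answers on port 2375:

docker run --rm --network dind-network docker:19.03.0 docker -H tcp://dind:2375 version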
Once the preparations are in place, we can alter our Molecule configuration to the following:
platforms:
  - name: instance
    image: "alekcander/docker-centos7-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    networks:
      - name: dind-network
    pre_build_image: true
    env:
      DOCKER_HOST: "tcp://dind:2375"
I am setting DOCKER_HOST as an env: element because at the moment I am running Molecule installed directly on my laptop, and it wouldn't work as a global environment variable on my machine (Molecule would try to spin up the Docker container using that parameter and fail, because dind is not a resolvable address if you are not inside a container).
This solution has one weakness though: in theory it won't work very well in CI/CD because of that hard-coded DOCKER_HOST setting. However, it can be fixed by using an environment variable instead of tcp://dind:2375 and setting it to match the service name, as sketched below.
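A minimal sketch of that idea, reusing the same ${VAR:-default} interpolation Molecule already applies to MOLECULE_DOCKER_COMMAND above (DIND_DOCKER_HOST is just a made-up variable name, not something Molecule defines):

    env:
      DOCKER_HOST: ${DIND_DOCKER_HOST:-tcp://dind:2375}

Locally nothing changes, since the variable is unset and the fallback still points at the dind container; in CI you would export DIND_DOCKER_HOST to match the pipeline's Docker service name (for example tcp://docker:2375 on GitLab CI).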