Why use Docker-in-Docker?

As part of a recent code-with project, where the Commercial Software Engineering (CSE) team I’m on at Microsoft worked with a large customer, I was tasked with creating a Docker-in-Docker image with the Azure CLI pre-installed.

I initially thought, “Why would anyone need this?” And, “How far does the inception go? Docker within Docker… within Docker?”

There is a great blog post from Miiro Juuso over at GetIntoDevOps.com with more details on how he created a Docker-in-Docker image as part of a continuous integration pipeline running Jenkins, where the Jenkins master runs inside a Docker container. This was exactly what our customer needed, but with the Azure CLI already installed.

I started by creating a Dockerfile, which I forked from jpetazzo/dind and hosted on GitHub, so that I’d only have to make changes in one place. From there, I could create an automated build at Docker Hub that monitors my GitHub repo for changes. When a change is detected, Docker Hub kicks off a build and publishes the latest image.
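The Dockerfile itself is short. A minimal sketch of the approach (the exact contents of my fork may differ; this assumes the jpetazzo/dind base image is Debian/Ubuntu-based and uses Microsoft’s documented Azure CLI install script for those distributions):

```dockerfile
# Sketch: extend the Docker-in-Docker base image with the Azure CLI.
FROM jpetazzo/dind

# Install the Azure CLI using Microsoft's install script for
# Debian/Ubuntu (assumes curl is available or installable via apt).
RUN apt-get update && \
    apt-get install -y curl && \
    curl -sL https://aka.ms/InstallAzureCLIDeb | bash && \
    rm -rf /var/lib/apt/lists/*
```

Everything Docker-in-Docker needs (the inner Docker daemon and its wrapper script) comes from the base image; the only addition is the CLI install layer.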

I also did this because I wanted users to be aware of what was inside the Dockerfile. If you don’t link to an external repository such as GitHub, the end user has no idea what’s in the Dockerfile unless you explicitly say so in the description.


There are some things to keep in mind when running this image, and I’ve outlined them in the README as well. Most importantly, you MUST run the container in privileged mode. JPetazzo explains why in this blog post.
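In practice, that means passing the `--privileged` flag to `docker run`. A sketch (the image name here is illustrative, not the actual published tag):

```shell
# --privileged is required so the inner Docker daemon can manage
# cgroups and mount filesystems inside the container. Without it,
# the nested daemon will fail to start.
docker run --privileged -d --name dind-azure-cli myrepo/dind-azure-cli
```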


