Multi-Platform Docker Builds

This is a guest post from Docker Captain Adrian Mouat who is Chief Scientist at Container Solutions, a cloud-native consultancy and Kubernetes Certified Service Provider. Adrian is the author of “Using Docker,” published by O’Reilly Media. He is currently developing Trow, a container image registry designed to securely manage the flow of images in a Kubernetes cluster. Adrian is a regular conference speaker and trainer and he has spoken at several events including KubeCon EU, DockerCon, CraftConf, TuringFest and GOTO Amsterdam.

Docker images have become a standard tool for testing and deploying new and third-party software. I’m the main developer of the open source Trow registry and Docker images are the primary way people install the tool. If I didn’t provide images, others would end up rolling their own which would duplicate work and create maintenance issues.

By default, the Docker images we create run on the linux/amd64 platform. This works for the majority of development machines and cloud providers but leaves users of other platforms out in the cold. This is a substantial audience – think of home-labs built from Raspberry Pis, companies producing IoT devices, organisations running on IBM mainframes and clouds utilising low-power arm64 chips. Users of these platforms are typically building their own images or finding another solution.

So how can you build images for these other platforms? The most obvious way is simply to build the image on the target platform itself. This can work in a lot of cases, but if you’re targeting s390x, I hope you have access to an IBM mainframe (try Phil Estes, as I’ve heard he has several in his garage). More common platforms like Raspberry Pis and IoT devices are typically limited in power and are either slow to build images or incapable of building them at all.

So what can we do instead? There are two more options: 1) emulate the target platform or 2) cross-compile. Interestingly, I’ve found that a blend of the two can work best.

Emulation

Let’s start by looking at the first option, emulation. There’s a fantastic project called QEMU that can emulate a whole bunch of platforms. With the recent buildx work, it’s easier than ever to use QEMU with Docker.

The QEMU integration relies on a Linux kernel feature with the slightly cryptic name of the binfmt_misc handler. When Linux encounters an executable file format it doesn’t recognise (i.e. one for a different architecture), it checks with the handler whether there are any “user space applications” configured to deal with the format (i.e. an emulator or VM). If there are, it passes the executable to that application.

For this to work, we need to register the platforms we’re interested in with the kernel. If you’re using Docker Desktop, this will already have been done for you for the most common platforms. If you’re using Linux, you can register handlers in the same way as Docker Desktop by running the latest docker/binfmt image, e.g.:

docker run --privileged --rm docker/binfmt:a7996909642ee92942dcd6cff44b9b95f08dad64

You may need to restart Docker after doing this. If you’d like a little more control over which platforms you want to register or want to use a more esoteric platform (e.g. PowerPC) take a look at the qus project.
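
If you’re curious about what’s been registered, the kernel exposes the handlers under /proc/sys/fs/binfmt_misc. On a Linux host you should see something like the following (the exact entries depend on which platforms you’ve registered), and cat-ing an entry shows the magic bytes it matches and the emulator the kernel will hand matching executables to:

$ ls /proc/sys/fs/binfmt_misc/
qemu-aarch64  qemu-arm  qemu-ppc64le  qemu-s390x  register  status
$ cat /proc/sys/fs/binfmt_misc/qemu-arm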

There are a couple of different ways to use buildx, but the easiest is probably to enable experimental features on the Docker CLI if you haven’t already – just edit ~/.docker/config.json to include the following:

{
    ...
     "experimental": “enabled”
}
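
If you’d rather not edit the config file, the same thing can be done per shell with the environment variable the Docker CLI checks for experimental features:

$ export DOCKER_CLI_EXPERIMENTAL=enabled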

You should now be able to run docker buildx ls and get output similar to the following:

$ docker buildx ls
NAME/NODE     DRIVER/ENDPOINT             STATUS   PLATFORMS
default       docker                               
  default     default                     running  linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
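
Depending on your setup, you may also want to create a dedicated builder instance rather than using the default one – in particular, a builder using the docker-container driver is needed for some features, such as building for several platforms in a single invocation. A minimal sketch (the builder name here is arbitrary):

$ docker buildx create --name mybuilder --use
$ docker buildx inspect --bootstrap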

Let’s try building an image for another platform. Start with this Dockerfile:

FROM debian:buster

CMD uname -m

If we build it normally and run it:

$ docker buildx build -t local-build .
…
$ docker run --rm local-build
x86_64

But if we explicitly name a platform to build for:

$ docker buildx build --platform linux/arm/v7 -t arm-build .
…
$ docker run --rm arm-build
armv7l

Success! We’ve managed to build and run an armv7 image on an x86_64 laptop with little work. This technique is effective, but for more complex builds you may find it runs too slowly or you hit bugs in QEMU. In those cases, it’s worth looking into whether or not you can cross-compile your image.

Cross-Compilation

Several compilers are capable of emitting binaries for foreign platforms, most notably Go and Rust. With the Trow registry project, we found cross-compilation to be the quickest and most reliable method to create images for other platforms. For example, here is the Dockerfile for the Trow armv7 image. The most relevant line is:

RUN cargo build --target armv7-unknown-linux-gnueabihf -Z unstable-options --out-dir ./out 

This explicitly tells Rust which platform we want our binary to run on. We can then use a multi-stage build to copy this binary into a base image for the target architecture (we could also use scratch if we compiled statically) and we’re done – there’s a sketch of what this can look like at the end of this section. However, in the case of the Trow registry, there are a few more things I want to set in the final image, so the final stage actually begins with:

FROM --platform=linux/arm/v7 debian:stable-slim 

Because of this, I’m actually using a blend of both emulation and cross-compilation – cross-compilation to create the binary and emulation to run and configure our final image.
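
To make the overall shape concrete, here’s a minimal sketch of what such a cross-compiling multi-stage Dockerfile can look like. This isn’t the actual Trow Dockerfile – the binary name (myapp), the choice of cross-linker package and the stage layout are assumptions for illustration; the important parts are the cargo --target flag and the --platform flag on the final FROM:

# Build stage runs natively on the build machine
FROM rust:1 AS build
# Add the Rust target and a C cross-linker for armv7 (package name assumed for Debian-based images)
RUN rustup target add armv7-unknown-linux-gnueabihf && \
    apt-get update && apt-get install -y gcc-arm-linux-gnueabihf
WORKDIR /src
COPY . .
# Tell cargo which linker to use for this target
ENV CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER=arm-linux-gnueabihf-gcc
RUN cargo build --release --target armv7-unknown-linux-gnueabihf

# Final stage runs under emulation, so we can configure the image for the target platform
FROM --platform=linux/arm/v7 debian:stable-slim
COPY --from=build /src/target/armv7-unknown-linux-gnueabihf/release/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]

Note that only the final stage runs under emulation; the compile stage runs natively on the build machine, which is where the speed win over a fully emulated build comes from.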

Manifest Lists

In the above advice about emulation, you might have noticed we used the --platform argument to set the build platform, but we left the image specified in the FROM line as debian:buster. It might seem this doesn’t make sense – surely the platform depends on the base image and how it was built, not what the user decides at a later stage?

What is happening here is Docker is using something called manifest lists. These are lists for a given image that contain pointers to images for different architectures. Because the official debian image has a manifest list defined, when I pull the image on my laptop, I automagically get the amd64 image and when I pull it on my Raspberry Pi, I get the armv7 image.
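
You can see this for yourself by inspecting the manifest list for the official debian image (this uses the same experimental CLI features we enabled earlier). The output contains a manifests array with one entry per supported platform, each pointing at a platform-specific image digest:

$ docker manifest inspect debian:buster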

To keep our users happy, we can create manifest lists for our own images. If we go back to our earlier example, first we need to rebuild and push the images to a repository:

$ docker buildx build --platform linux/arm/v7 -t amouat/arch-test:armv7 .
…
$ docker push amouat/arch-test:armv7
…
$ docker buildx build -t amouat/arch-test:amd64 .
…
$ docker push amouat/arch-test:amd64

Next, we create a manifest list that points to these two separate images and push that:

$ docker manifest create amouat/arch-test:blog amouat/arch-test:amd64 amouat/arch-test:armv7
Created manifest list docker.io/amouat/arch-test:blog
$ docker manifest push amouat/arch-test:blog
sha256:039dd768fc0758fbe82e3296d40b45f71fd69768f21bb9e0da02d0fb28c67648

Now Docker will pull and run the appropriate image for the current platform:

$ docker run amouat/arch-test:blog
Unable to find image 'amouat/arch-test:blog' locally
blog: Pulling from amouat/arch-test
Digest: sha256:039dd768fc0758fbe82e3296d40b45f71fd69768f21bb9e0da02d0fb28c67648
Status: Downloaded newer image for amouat/arch-test:blog
x86_64

Somebody with a Raspberry Pi to hand can try running the image and confirm that it does indeed work on that platform as well!
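
As an aside, buildx can also take care of this whole dance in one step – if you pass several platforms along with --push, it builds an image for each and pushes a manifest list for you. This generally requires a builder using the docker-container driver (see docker buildx create earlier), and would look something like:

$ docker buildx build --platform linux/amd64,linux/arm/v7 -t amouat/arch-test:blog --push .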

To recap: not all users of Docker images run amd64. With buildx and QEMU, it’s possible to support these users with only a small amount of extra work.

Happy Birthday, Docker!