Setting Up Multi-Architecture Docker Builds with QEMU Emulation for ARM64 Raspberry Pi Deployments from x86 Hosts

Why I Needed Cross-Architecture Docker Builds

I run most of my infrastructure on x86 hardware—a Proxmox cluster and a few Intel NUCs. But I also have Raspberry Pi devices scattered around for specific tasks: a Pi 4 handling DNS filtering, another managing sensor data collection, and a Pi Zero W running a minimal monitoring agent.

The problem hit me when I started containerizing everything. I'd build a Docker image on my x86 development machine, push it to my registry, pull it on the Pi, and watch it fail with cryptic exec format errors. The container was built for x86_64. The Pi runs ARM64 (or armv7 on older models). They're fundamentally incompatible.
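
A quick way to confirm that kind of mismatch is to compare what the image was built for against what the host actually is. A minimal check, using a hypothetical myapp:latest tag:

# what architecture was this image built for?
docker image inspect --format '{{.Os}}/{{.Architecture}}' myapp:latest

# what architecture is this host?
uname -m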

I could have maintained separate build machines or used GitHub Actions with multiple runners. But I wanted a simpler workflow: build once from my main workstation, deploy everywhere. That meant cross-architecture builds using QEMU emulation.

My Initial Setup and What I Learned

I started with Docker Desktop on my Linux workstation. Modern Docker includes buildx, which wraps BuildKit and supports multi-platform builds. The key piece is QEMU—a processor emulator that lets x86 systems run ARM binaries.

First, I verified buildx was available:

docker buildx version

It was already installed. Then I registered QEMU support with the kernel:

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

This command registers QEMU binary formats with the kernel's binfmt_misc system. It tells Linux: "When you see an ARM64 binary, run it through QEMU instead of failing." The --reset flag clears any stale registrations, and -p yes sets the persistent flag so the QEMU interpreter stays loaded and keeps working inside build containers.

I confirmed it worked by inspecting the registered formats:

ls /proc/sys/fs/binfmt_misc/

I saw entries for various ARM architectures. Good sign.
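
For a closer look, you can read one of the entries directly. A small sketch, assuming the qemu-aarch64 handler that qemu-user-static registers:

# should show "enabled" plus an interpreter path pointing at the qemu-aarch64 binary
cat /proc/sys/fs/binfmt_misc/qemu-aarch64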

Creating a Multi-Platform Builder

Docker buildx uses "builders"—isolated build environments. The default builder doesn't support multiple platforms simultaneously. I created a new one:

docker buildx create --name multiarch --driver docker-container --use

The docker-container driver runs builds inside a dedicated container, which gives better isolation and supports advanced features. The --use flag makes it the active builder.

I verified it:

docker buildx inspect --bootstrap

The output showed supported platforms: linux/amd64, linux/arm64, linux/arm/v7, and others. The --bootstrap flag starts the builder container if it's not running.
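
A quick smoke test at this point is to run a foreign-architecture container directly. If binfmt_misc and QEMU are wired up correctly, uname reports the emulated machine rather than the host:

# on an x86_64 host this should still print aarch64
docker run --rm --platform linux/arm64 alpine:3.19 uname -m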

Writing a Dockerfile That Works Across Architectures

Not all base images support multiple architectures. I learned this the hard way when a build succeeded locally but failed on the Pi because the base image only had an x86 variant.

I started using official multi-arch images. For example, Alpine Linux publishes images for both x86_64 and ARM64 under the same tag:

FROM alpine:3.19

Docker automatically pulls the correct variant based on the target platform. The same applies to official Python, Node.js, and Debian images.
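
Before settling on a base image, it's worth confirming which platforms its tag actually publishes. One way is the buildx imagetools subcommand:

# lists every platform-specific manifest published under this tag
docker buildx imagetools inspect alpine:3.19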

For a Python service I run on both architectures, my Dockerfile looks like this:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]

Nothing architecture-specific here. The Python base image handles the differences.

Building for Multiple Platforms

The build command specifies target platforms:

docker buildx build --platform linux/amd64,linux/arm64 -t registry.vipinpg.com/myapp:latest --push .

Breaking this down:

  • --platform linux/amd64,linux/arm64 builds for both x86_64 and ARM64
  • -t registry.vipinpg.com/myapp:latest tags the image
  • --push pushes to my registry immediately (required for multi-platform builds; a single-platform alternative for local testing is shown below)
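
For local iteration there's a lighter variant: build a single platform and load it straight into the local image store with --load instead of pushing. A sketch, with an illustrative dev tag:

# build only the native variant for quick testing; --load works for one platform at a time
docker buildx build --platform linux/amd64 -t myapp:dev --load .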

The build takes significantly longer than a single-platform build. QEMU emulation is slow—sometimes 5-10x slower than native compilation. A build that takes 2 minutes on x86 might take 15 minutes when emulating ARM64.

I watched the build logs carefully the first few times. BuildKit runs each platform build in parallel when resources allow, but the ARM64 portion always lagged behind.
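
If the rebuild times start to hurt, buildx can also reuse layers from a registry-backed cache so unchanged steps don't get re-emulated on every run. A minimal sketch, assuming the registry accepts cache manifests and using an illustrative buildcache tag:

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --cache-from type=registry,ref=registry.vipinpg.com/myapp:buildcache \
  --cache-to type=registry,ref=registry.vipinpg.com/myapp:buildcache,mode=max \
  -t registry.vipinpg.com/myapp:latest \
  --push .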

What Broke and How I Fixed It

My first real failure came with a Go application. The build completed, but the ARM64 container crashed immediately on the Pi with a segmentation fault.

The issue was CGO. My Go code imported a package that used C bindings. When the C toolchain runs under QEMU emulation, CGO can behave unpredictably. I fixed it by disabling CGO:

FROM golang:1.21-alpine AS builder

WORKDIR /app
COPY . .

ENV CGO_ENABLED=0
RUN go build -o myapp .

FROM alpine:3.19
COPY --from=builder /app/myapp /usr/local/bin/
CMD ["myapp"]

Setting CGO_ENABLED=0 forces pure Go compilation. The binary became slightly larger but worked reliably across architectures.
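
One way to sanity-check the ARM64 variant before it ever reaches the Pi is to pull the binary out of that variant and look at it with file. A rough sketch; the container name is arbitrary and the path matches the Dockerfile above:

# create (without starting) a container from the arm64 variant and extract the binary
docker create --platform linux/arm64 --name arm64-check registry.vipinpg.com/myapp:latest
docker cp arm64-check:/usr/local/bin/myapp ./myapp-arm64
docker rm arm64-check

# should report something like: ELF 64-bit LSB executable, ARM aarch64, statically linked
file ./myapp-arm64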

Another problem: architecture-specific package installations. I had a Dockerfile that installed system packages without checking availability:

RUN apt-get update && apt-get install -y some-x86-only-package

This failed on ARM64 because the package didn't exist for that architecture. I switched to packages with universal support or added conditional logic:

RUN apt-get update && \
    if [ "$(uname -m)" = "x86_64" ]; then \
        apt-get install -y some-x86-only-package; \
    fi

Not elegant, but it worked when I genuinely needed architecture-specific dependencies.
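
The uname check works because, under QEMU, the build container sees the target architecture rather than the host's. On Debian-based images, dpkg --print-architecture behaves the same way and reports the package architecture names instead. A quick way to see what each platform reports:

# what dpkg reports inside each emulated platform (assuming binfmt is registered)
docker run --rm --platform linux/amd64 debian:bookworm-slim dpkg --print-architecture   # amd64
docker run --rm --platform linux/arm64 debian:bookworm-slim dpkg --print-architecture   # arm64
docker run --rm --platform linux/arm/v7 debian:bookworm-slim dpkg --print-architecture  # armhf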

Registry Configuration and Manifest Lists

When you push a multi-platform image, Docker creates a manifest list—a single tag that points to multiple architecture-specific images. My self-hosted registry (running on Synology) handled this automatically, but I had to verify it supported Docker's manifest format.

I checked by inspecting an image:

docker buildx imagetools inspect registry.vipinpg.com/myapp:latest

The output showed multiple manifests:

Name:      registry.vipinpg.com/myapp:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:abc123...

Manifests:
  Name:      registry.vipinpg.com/myapp:latest@sha256:def456...
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64

  Name:      registry.vipinpg.com/myapp:latest@sha256:789abc...
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64

When I pull this image on the Pi, Docker automatically selects the ARM64 variant. On my x86 machines, it pulls the amd64 version. Same tag, different binaries.
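
It's also possible to force a specific variant out of the manifest list, which is a handy way to confirm the resolution from any machine:

# explicitly request the arm64 variant of the tag, then confirm what actually landed
docker pull --platform linux/arm64 registry.vipinpg.com/myapp:latest
docker image inspect --format '{{.Architecture}}' registry.vipinpg.com/myapp:latest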

Performance Reality Check

QEMU emulation is not fast. For simple images—Alpine with a few Python packages—the slowdown is tolerable. For complex builds involving compilation, it's painful.

I measured a Node.js application with native dependencies:

  • Native x86_64 build: 3 minutes
  • Emulated ARM64 build: 22 minutes

That's a 7x difference. For development iteration, this was unacceptable. I adopted a hybrid approach: build x86 images locally for testing, then trigger multi-platform builds on a schedule or before deployment.

I also experimented with native ARM64 builds on a Pi 4, but the 4GB RAM and slower CPU made it impractical for anything beyond trivial images. The emulated builds on my workstation, despite being slower, still outperformed the Pi.

Automating Builds with n8n

I integrated multi-platform builds into my n8n automation workflow. When I push code to my Gitea instance, n8n triggers a webhook that:

  1. Clones the repository
  2. Runs the buildx command
  3. Pushes to my registry
  4. Sends a notification to my monitoring system

The n8n workflow uses an Execute Command node:

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --build-arg VERSION={{ $json["version"] }} \
  -t registry.vipinpg.com/{{ $json["image"] }}:{{ $json["tag"] }} \
  --push \
  {{ $json["context"] }}

I pass variables from the Git webhook payload. This setup means I never manually build images anymore—commit, push, wait.

Storage Considerations

Multi-platform images consume more registry storage. A single-platform image might be 150MB. The same image built for two platforms is roughly 300MB (slightly less due to shared layers).

My Synology registry filled up faster than expected. I implemented a cleanup policy that deletes untagged manifests older than 30 days and then runs the registry's garbage collector to reclaim the space:

docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml

I run this weekly via cron. It reclaims space from old builds and failed pushes.
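
For reference, the schedule is just a cron entry on the registry host. A minimal sketch, assuming the container is named registry and a log file is wanted (on Synology this kind of job can also live in the Task Scheduler):

# run the registry garbage collector every Sunday at 03:00
0 3 * * 0 docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml >> /var/log/registry-gc.log 2>&1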

When Emulation Isn't Enough

Some workloads don't run well under QEMU. I encountered this with a Rust project that used heavy macro expansion during compilation. The emulated ARM64 build hung for over an hour before I killed it.

For these cases, I switched to native ARM64 builds using GitHub Actions with ARM runners. It's more complex to set up, but the build time dropped from "unusable" to 8 minutes.

I only use QEMU emulation for projects where the slowdown is acceptable. Anything involving heavy compilation, I offload to native runners.

Key Takeaways

Multi-architecture Docker builds with QEMU work, but they're not magic. The emulation overhead is real, and you'll feel it on complex builds. For simple images—scripting languages, minimal binaries—it's a practical solution.

I use this approach for most of my self-hosted services because the convenience outweighs the build time cost. I develop on x86, deploy to both x86 and ARM64, and the images just work. No separate build pipelines, no architecture-specific tags.

The biggest lesson: test your images on actual ARM hardware before relying on them in production. Emulation can hide issues—missing packages, incorrect binaries, subtle runtime bugs—that only appear on real hardware. I learned this by having containers crash on the Pi after successful emulated builds.

If you're running a mixed-architecture environment like mine, this workflow saves time once it's set up. Just be prepared for slower builds and occasional architecture-specific surprises.