Tech Expert & Vibe Coder

With 15+ years of experience, I specialize in self-hosting, AI automation, and Vibe Coding – building applications using AI-powered tools like Google Antigravity, Dyad, and Cline. From homelabs to enterprise solutions.

Setting Up Docker Buildkit Remote Cache with MinIO: Speeding Up Multi-Stage Builds in Homelab CI Pipelines

Why I Worked on This

I run a small CI pipeline in my homelab using Cronicle and Docker. Most of my builds are multi-stage Node.js or Python images, and watching them rebuild the same dependency layers over and over was frustrating. My build agents are short-lived VMs on Proxmox—they spin up, run a job, and get destroyed. That means no persistent local cache between builds.

I needed a way to share build cache across these ephemeral agents without depending on external services. I already run MinIO for other storage needs, so using it as a Docker BuildKit remote cache backend made sense. This isn’t about squeezing milliseconds—it’s about cutting 5-minute builds down to 30 seconds when nothing meaningful changed.

My Real Setup

Here’s what I’m working with:

  • Proxmox cluster running multiple VMs
  • MinIO instance (self-hosted S3-compatible storage) running in Docker
  • Cronicle for job scheduling
  • Docker with BuildKit enabled on each build agent VM
  • Private container registry (also self-hosted)

Each build agent VM is templated and provisioned on-demand. They have Docker installed but start with an empty BuildKit cache. Without remote caching, every build starts from scratch.
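One provisioning detail worth calling out: the default docker build driver can’t export cache to remote backends, so each agent creates a dedicated buildx builder on first boot. A minimal bootstrap sketch (the builder name and credential values are placeholders, not my actual secrets):

```shell
# The docker-container driver supports remote cache exporters;
# the default docker driver does not.
docker buildx create --name ci-builder --driver docker-container --use

# MinIO credentials for the cache bucket, injected at provision time.
# BuildKit's S3 cache backend picks these up from the standard AWS variables.
export AWS_ACCESS_KEY_ID="buildcache-user"
export AWS_SECRET_ACCESS_KEY="change-me"
```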

MinIO Configuration

I created a dedicated bucket in MinIO called docker-buildcache. The access credentials are stored as environment variables on each build agent. MinIO is accessible over HTTP within my homelab network—no TLS because it’s internal-only traffic.
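For reference, the one-time bucket setup with MinIO’s mc client looks roughly like this (the homelab alias and the credential variables are placeholders for my environment):

```shell
# Register the internal MinIO endpoint under an alias (plain HTTP, internal-only).
mc alias set homelab http://minio.local:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"

# Create the dedicated cache bucket.
mc mb homelab/docker-buildcache
```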

BuildKit Setup

I’m using BuildKit’s s3 cache exporter, not the inline one. The inline exporter embeds cache metadata into the final image, which bloats the image and only helps if you’re pulling that exact image later. The s3 exporter writes cache blobs and manifests directly into an S3-compatible bucket (exactly what MinIO provides), and BuildKit can query them independently of any image.

My typical build command looks like this:

docker buildx build \
  --cache-from type=s3,endpoint_url=http://minio.local:9000,bucket=docker-buildcache,name=myapp,region=us-east-1,use_path_style=true \
  --cache-to type=s3,endpoint_url=http://minio.local:9000,bucket=docker-buildcache,name=myapp,region=us-east-1,use_path_style=true,mode=max \
  --tag myapp:latest \
  --push \
  .

The mode=max setting tells BuildKit to export layers from every stage, not just those in the final image. This matters for multi-stage builds where intermediate stages get reused.

What Worked

Once configured, the cache hit rate improved dramatically. A clean VM pulling from the remote cache could rebuild an image in under a minute instead of five. The cache metadata downloads are tiny—just JSON manifests—so there’s almost no overhead checking if a layer exists.

BuildKit is smart about this. It doesn’t eagerly download cached layers. It only pulls them when it needs to build a subsequent layer that wasn’t cached. In cases where the entire image is cached remotely, BuildKit assembles the final manifest and pushes it without downloading anything. That’s the 30-second rebuild scenario I mentioned.

Using MinIO instead of a public registry also meant I wasn’t hitting rate limits or paying egress fees. The cache storage is just disk space on my NAS, which I already have.

Multi-Stage Builds

This setup shines with multi-stage Dockerfiles. My Node.js builds have a stage that installs dependencies, another that runs tests, and a final stage that copies only production artifacts. With mode=max, all those intermediate stages get cached. If I only change application code, the dependency layer stays cached and the build skips straight to the final stage.
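As a sketch, the Dockerfile shape I’m describing (the package names and commands are illustrative, not my actual build):

```dockerfile
# Stage 1: dependencies. Cached until the package manifests change.
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: tests. Reuses the dependency layer from stage 1.
FROM deps AS test
COPY . .
RUN npm test

# Stage 3: final image. Copies only production artifacts forward.
FROM node:20-alpine AS final
WORKDIR /app
ENV NODE_ENV=production
COPY --from=deps /app/node_modules ./node_modules
COPY src/ ./src/
CMD ["node", "src/server.js"]
```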

What Didn’t Work

Getting the cache configuration right took trial and error. I initially tried pointing BuildKit at s3://docker-buildcache, which failed: the s3 cache backend doesn’t take a URL. It expects the bucket, endpoint, and region as separate attributes, and MinIO additionally needs use_path_style=true, because it serves buckets under the URL path rather than as subdomains.

I also ran into issues with cache expiration. BuildKit’s local garbage collection doesn’t apply to remote caches. If you keep exporting cache without ever cleaning it up, the bucket grows indefinitely. I ended up writing a simple script that runs weekly to delete cache manifests older than 30 days. Not elegant, but it works.
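The weekly cleanup amounts to a few lines of mc in a Cronicle shell job, something along these lines (the alias name is a placeholder):

```shell
#!/bin/sh
# Remove cache objects not modified in the last 30 days.
# --older-than accepts duration suffixes like 30d, 12h, 45m.
mc rm --recursive --force --older-than 30d homelab/docker-buildcache
```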

Another gotcha: if your Dockerfile changes in ways that invalidate early layers (like updating the base image), the entire cache becomes useless for that build. BuildKit still checks the remote cache, but it won’t find hits. This is expected behavior, but it means you can’t rely on remote caching to save you from poorly structured Dockerfiles.

Network Bottlenecks

My homelab network is gigabit, but MinIO storage sits on spinning rust, not SSDs. Large layer uploads (like a 500MB dependency archive) are slower than I’d like. If I were doing this at scale, I’d move MinIO to faster storage. For my use case, it’s acceptable.

Key Takeaways

Remote caching with BuildKit and MinIO is straightforward once you understand the registry cache format. It’s not magic—you’re just storing layer blobs and manifests in a place where multiple agents can access them.

Use mode=max for multi-stage builds. The default mode=min only caches the final layers, which defeats the purpose if your build has expensive intermediate stages.

Don’t expect remote caching to fix a bad Dockerfile. If your layers invalidate frequently, you’ll still rebuild often. The cache helps when your build is already well-structured.

Monitor your cache storage. Without cleanup, it will grow until it fills your disk. A simple cron job deleting old cache entries is enough.

If you’re running ephemeral build agents, remote caching isn’t optional—it’s the only way to avoid rebuilding everything from scratch every time. Local caching alone won’t help you.
