Tech Expert & Vibe Coder

With 14+ years of experience, I specialize in self-hosting, AI automation, and Vibe Coding – building applications using AI-powered tools like Google Antigravity, Dyad, and Cline. From homelabs to enterprise solutions.

Configuring Docker Content Trust and Image Signing to Prevent Supply Chain Attacks

Why I Started Using Docker Content Trust

I run a mix of self-hosted services on Proxmox and Docker, pulling images from Docker Hub, private registries, and sometimes building my own. At some point, I realized I had no real way to verify that the images I was pulling were actually what they claimed to be. If someone compromised a registry or performed a man-in-the-middle attack, I'd have no idea.

This became more than theoretical when I started automating deployments with n8n and Cronicle. I needed a way to ensure that the images being pulled in automated workflows hadn't been tampered with. That's when I started exploring Docker Content Trust (DCT).

My Real Setup

I run Docker on a few different hosts:

  • A Proxmox VM running Ubuntu with Docker installed
  • A Synology NAS with Docker support
  • A dedicated build server for CI/CD experiments

Most of my images come from Docker Hub, but I also maintain a small private registry for internal tools. I wanted to sign images I built myself and verify signatures on images I pulled from public sources.

My initial goal was simple: enable DCT for my own images and see if I could enforce signature verification in production without breaking existing workflows.

Enabling Docker Content Trust

DCT is controlled by a single environment variable:

export DOCKER_CONTENT_TRUST=1

Once enabled, Docker signs images as part of every push and refuses to pull images that lack valid trust data. This sounds great in theory, but in practice, it immediately broke several things.

The first issue: many images on Docker Hub are not signed. When I tried to pull them with DCT enabled, Docker rejected them outright. This meant I couldn't use DCT globally without manually auditing every image I depended on.
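One workaround is to scope DCT to a single command instead of the whole shell, so unsigned public images stay usable by default. A minimal sketch (the image names are just examples):

# Verify only this pull, without enabling DCT for the whole shell
DOCKER_CONTENT_TRUST=1 docker pull alpine:latest

# Conversely, with DCT exported globally, skip verification for one pull
docker pull --disable-content-trust ubuntu:latest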

The second issue: enabling DCT for the first time triggers key generation, and if you're not prepared, it's easy to lose track of where those keys are stored. By default, Docker stores them in ~/.docker/trust/, but I had to be careful about backups.
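Before generating anything important, it's worth snapshotting that directory so the keys can't be silently lost. A minimal sketch, assuming a mounted backup location at /mnt/backup (a placeholder):

# Archive the entire trust store, private keys included
tar czf docker-trust-$(date +%F).tar.gz -C ~/.docker trust
cp docker-trust-$(date +%F).tar.gz /mnt/backup/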

Signing My First Image

I started by signing a simple internal tool I'd containerized. The process looked like this:

# Build the internal tool image as usual
docker build -t myregistry.local/internal-tool:v1 .

# Enable signing for this shell; Docker signs the image on push
export DOCKER_CONTENT_TRUST=1
docker push myregistry.local/internal-tool:v1

The first time I ran this, Docker prompted me to create a root key and a repository key. The root key is critical: it's the top-level key that anchors trust for every repository you sign, not just this one. Losing it means losing the ability to create and manage trust for your repositories.

I was asked to set a passphrase for the root key, and then another passphrase for the repository key. I stored both passphrases in my password manager immediately, because I knew I'd forget them otherwise.

After that, the image was signed and pushed. I verified it by pulling it on another machine with DCT enabled:

export DOCKER_CONTENT_TRUST=1
docker pull myregistry.local/internal-tool:v1

It worked. Docker verified the signature and pulled the image without issue.
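To see exactly what was signed and by whom, docker trust inspect shows the signers and signed digests for a repository:

# Human-readable summary of signed tags and signers
docker trust inspect --pretty myregistry.local/internal-tool:v1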

What Didn't Work

The biggest problem I ran into was key management. The first time I enabled DCT, I didn't fully understand the difference between the root key and the repository key. I assumed I could regenerate them if needed, but that's not the case—the root key is irreplaceable.

I also made the mistake of enabling DCT globally on a production host before auditing which images were signed. This broke several services that depended on unsigned images from Docker Hub. I had to disable DCT temporarily, identify the problematic images, and either find signed alternatives or build my own signed versions.
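What I'd do now before enabling DCT on a host is audit everything already running there. A rough sketch of that audit, assuming docker trust inspect exits non-zero when a repository has no trust data:

#!/usr/bin/env bash
# Check every image on the host for the presence of trust data
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>'); do
  if docker trust inspect "$img" > /dev/null 2>&1; then
    echo "signed:   $img"
  else
    echo "unsigned: $img"
  fi
done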

Another issue: DCT doesn't work well with all registries. I tried using it with a self-hosted Harbor instance, and while it technically worked, the integration was clunky. Harbor has its own signing mechanisms, and mixing the two created confusion.

Finally, DCT doesn't protect against vulnerabilities in the image itself—it only verifies that the image hasn't been tampered with. I still needed to run vulnerability scans separately using tools like Trivy.
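The two checks compose naturally: verify the signature on pull, then scan the verified image. For example:

# Pull with verification enforced, then scan the result
DOCKER_CONTENT_TRUST=1 docker pull myregistry.local/internal-tool:v1
trivy image --severity HIGH,CRITICAL myregistry.local/internal-tool:v1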

Integrating DCT into Automated Workflows

Once I had DCT working manually, I wanted to integrate it into my CI/CD pipeline. I use a mix of custom scripts and n8n workflows to build and deploy Docker images.

The key was making sure the build environment had access to the repository signing keys. I stored the keys in a secure location and mounted them into the build container using Docker volumes:

docker run --rm \
  -v /secure/keys:/root/.docker/trust \
  -e DOCKER_CONTENT_TRUST=1 \
  my-build-image
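Mounting the keys alone isn't quite enough, because signing normally prompts for a passphrase. Docker supports supplying it through an environment variable, which is what makes non-interactive signing possible. An extended sketch (REPO_KEY_PASSPHRASE is a placeholder that should come from a CI secret store, never be hardcoded):

# Same as above, plus the repository key passphrase for non-interactive signing
docker run --rm \
  -v /secure/keys:/root/.docker/trust \
  -e DOCKER_CONTENT_TRUST=1 \
  -e DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE="$REPO_KEY_PASSPHRASE" \
  my-build-image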

This worked, but it introduced a new problem: if the keys were ever compromised, every image I'd signed would be at risk. I started looking into key delegation, which allows you to create separate keys for CI/CD without exposing the root key.

Key delegation worked, but it added complexity. I had to generate a delegation key, add it to the Notary server, and configure my CI/CD scripts to use it instead of the root key. The documentation on this was sparse, and I spent a lot of time debugging permissions issues.
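For reference, the delegation setup itself comes down to a few docker trust commands. Roughly, with ci as the delegation name:

# Generate a delegation key pair named "ci" (writes ci.pub to the current directory)
docker trust key generate ci

# Register the public half as a signer on the repository
docker trust signer add --key ci.pub ci myregistry.local/internal-tool

# CI can now sign tags with the delegation key instead of the root key
docker trust sign myregistry.local/internal-tool:v1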

Practical Limitations

DCT has real limitations that aren't always obvious:

  • It only works with registries that support Notary (Docker Hub, some private registries). Many cloud registries don't support it fully.
  • Signature metadata isn't visible in the output of docker images; you have to run docker trust inspect to see it.
  • If you lose your root key, you lose the ability to manage trust for that repository. There's no recovery process.
  • DCT is off unless you explicitly enable it, so it's easy to bypass accidentally: any shell or pipeline without DOCKER_CONTENT_TRUST=1 set will pull unsigned images without complaint.

I also found that DCT doesn't play well with image caching. If you pull an unsigned image with DCT disabled, then enable DCT and try to pull the same image again, Docker will use the cached version without verification. This defeated the purpose of enabling DCT in the first place.
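One workaround is to evict the cached copy and force a fresh, verified pull. Something like:

# Remove the cached image, then re-pull with verification enforced
docker rmi myregistry.local/internal-tool:v1
DOCKER_CONTENT_TRUST=1 docker pull myregistry.local/internal-tool:v1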

Key Takeaways

DCT is useful, but it's not a silver bullet. Here's what I learned:

  • Back up your root key immediately. Store it offline if possible. I keep mine on an encrypted USB drive that I only plug in when I need to create new repositories.
  • Don't enable DCT globally until you've audited your images. Start with a small subset of critical images and expand from there.
  • Use key delegation for CI/CD. Don't expose your root key in automated workflows.
  • DCT only verifies signatures—it doesn't scan for vulnerabilities. You still need a separate tool for that.
  • Not all registries support DCT. Check compatibility before committing to it.

I still use DCT for my most critical internal images, but I don't enforce it globally. For public images, I rely on a combination of signature verification (when available) and vulnerability scanning. DCT is a useful layer of defense, but it's not a replacement for good security practices overall.

What I'd Do Differently

If I were starting over, I'd spend more time understanding key management before enabling DCT. I'd also set up a dedicated Notary server instead of relying on Docker Hub, since that gives me more control over the signing process.

I'd also automate key backups. Right now, I have to manually copy my keys to a secure location, which is error-prone. A better approach would be to script this as part of my backup routine.
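A sketch of what that script could look like, with /mnt/backup/docker-trust as a placeholder destination:

#!/usr/bin/env bash
# Snapshot the Docker trust store as part of a scheduled backup run
set -euo pipefail

BACKUP_DIR=/mnt/backup/docker-trust
STAMP=$(date +%F)

mkdir -p "$BACKUP_DIR"
tar czf "$BACKUP_DIR/trust-$STAMP.tar.gz" -C "$HOME/.docker" trust

# Keep the 30 most recent snapshots, drop the rest
ls -1t "$BACKUP_DIR"/trust-*.tar.gz | tail -n +31 | xargs -r rm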

Finally, I'd document my key rotation process before I actually need it. I haven't rotated keys yet, and I'm not looking forward to figuring it out under pressure.