Setting up automated Docker image provenance verification with cosign and admission webhooks in Portainer

Why I Started Looking at Image Verification

I run a mix of self-hosted services on Proxmox and Docker, and over time I've become more cautious about what I pull from registries. Not paranoid—just practical. When you're running containers that handle sensitive data or have network access, you want some confidence that what you deployed is what the maintainer actually built.

I'd heard about image signing and attestations but never implemented verification in any automated way. Most of my deployments go through Portainer, which makes container management easier but doesn't natively enforce signature checks. So when I came across Docker's cosign-based attestations and admission webhook patterns, I decided to test if I could actually enforce verification at deploy time.

My Setup and What I Actually Used

Here's what I was working with:

  • Portainer Community Edition running as a container on one of my Proxmox VMs
  • A handful of production stacks (n8n, monitoring tools, internal services)
  • Docker Engine 24.x on the host
  • No Kubernetes—just plain Docker and Docker Compose

I wasn't dealing with Docker Hardened Images specifically. My goal was to understand the verification flow using cosign and see if I could build a lightweight admission control layer that Portainer could respect.

Tools I Actually Installed

I installed cosign directly on the Docker host. The binary is small and doesn't require much setup. I also set up a simple webhook server using a Python Flask app—nothing fancy, just something that could intercept container creation requests and run verification logic before allowing them through.
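
For reference, installing cosign is just grabbing the release binary; here's a minimal sketch assuming a linux/amd64 host and the latest GitHub release:

# Download the cosign release binary (adjust the filename for your architecture)
curl -fsSL -o cosign \
  https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64
chmod +x cosign
sudo mv cosign /usr/local/bin/cosign
cosign version   # sanity check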

Portainer doesn't have built-in support for admission webhooks like Kubernetes does, so I had to work around that limitation by intercepting Docker API calls at a different layer.

What I Tried First (and Why It Didn't Work)

My initial idea was to use Docker's authorization plugins to intercept container creation events. I found some examples online and tried to adapt them, but quickly hit two problems:

First, Docker's authz plugin interface is designed for access control, not image validation. You can block or allow API calls, but you don't get clean hooks for "verify this image before starting the container." You'd have to parse the request payload yourself and extract the image reference, which felt fragile.

Second, Portainer makes its own Docker API calls, and those don't flow through the same paths I was expecting. Some operations worked, others bypassed my plugin entirely. I spent a few hours debugging this before accepting it wasn't going to be reliable.
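
For context, authorization plugins are wired in through the daemon config rather than anything per-container; the plugin name below is hypothetical, just to show where the hook lives:

# /etc/docker/daemon.json (plugin name is hypothetical):
#   { "authorization-plugins": ["img-authz-plugin"] }
# Restart the daemon after editing so the plugin takes effect
sudo systemctl restart docker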

What Actually Worked

I switched to a simpler approach: a wrapper script that Portainer could call instead of directly invoking docker run or docker-compose up.

Here's the basic flow I ended up with:

  1. Portainer triggers a stack deployment
  2. Instead of going straight to Docker, it calls my wrapper script
  3. The script extracts the image references from the compose file
  4. For each image, it runs cosign verify against a public key I trust
  5. If verification passes, the script proceeds with the actual deployment
  6. If verification fails, it logs the error and exits without starting anything

This wasn't true admission control in the Kubernetes sense, but it gave me a checkpoint I could enforce before containers started.

The Verification Script

I wrote a small bash script that does the following:

#!/bin/bash
set -e

COMPOSE_FILE=$1
PUBLIC_KEY="/path/to/trusted.pub"

# Extract image references from the compose file
# (naive match: skips commented-out lines and strips surrounding quotes)
IMAGES=$(grep -E "^[[:space:]]*image:" "$COMPOSE_FILE" | awk '{print $2}' | tr -d "\"'")

for IMAGE in $IMAGES; do
  echo "Verifying $IMAGE..."
  cosign verify --key "$PUBLIC_KEY" "$IMAGE" || {
    echo "Verification failed for $IMAGE"
    exit 1
  }
done

# If we get here, all images passed verification
docker-compose -f "$COMPOSE_FILE" up -d

This is simplified—my actual version handles more edge cases and logs to a file—but the core logic is the same.
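
Invocation is just the compose file path; the script name and stack path here are placeholders:

# Deploy a stack only if every image in it verifies
./verify-and-deploy.sh /opt/stacks/n8n/docker-compose.yml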

Integrating with Portainer

Portainer doesn't have a native way to inject custom scripts into its deployment flow, so I used a workaround. I created a custom stack template that calls my wrapper script instead of running docker-compose directly. This means I have to use that template for any stack where I want verification enforced.

It's not automatic across all deployments, but it works for the services I care most about. For less critical containers, I still deploy them the normal way.

What I Learned About cosign and Attestations

Cosign itself is straightforward once you understand the basics (there's a quick end-to-end example after this list). You need:

  • A public key for the images you want to verify
  • Images that have been signed with the corresponding private key
  • Network access to the registry where signatures are stored
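
If you control both ends, the full loop from key generation to verification looks roughly like this; the registry and image reference are placeholders, and signing only applies to images you build and push yourself:

# Generate a key pair (writes cosign.key / cosign.pub, prompts for a password)
cosign generate-key-pair

# Sign an image you built and pushed yourself (placeholder reference)
cosign sign --key cosign.key registry.example.com/myapp:1.0

# Verify against the public key; this is the call the wrapper script makes
cosign verify --key cosign.pub registry.example.com/myapp:1.0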

One thing that tripped me up initially: cosign doesn't verify the image itself in the sense of scanning its contents. It verifies that the image digest matches what was signed. If someone swaps out the image after signing, the digest changes and verification fails. But if the image was signed with malicious content to begin with, cosign won't catch that—it only proves authenticity, not safety.
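
One way to make that authenticity check stick is to deploy by the exact digest the signature covers. Cosign prints the signed manifest digest in its JSON output on stdout, so a sketch like this (assuming jq is available, and reusing the script's $PUBLIC_KEY and $IMAGE variables) can pin it:

# Capture the digest the signature covers (JSON goes to stdout, human-readable output to stderr)
DIGEST=$(cosign verify --key "$PUBLIC_KEY" "$IMAGE" 2>/dev/null \
  | jq -r '.[0].critical.image."docker-manifest-digest"')

# Pull by digest so a later tag move can't swap the content underneath you
docker pull "${IMAGE%%@*}@${DIGEST}"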

I also learned that not all registries store signatures the same way. Docker Hub, for example, stores them as separate manifest entries. Some private registries I tested didn't support this at all, which meant I couldn't verify images from those sources even if I wanted to.
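
If you're unsure whether a registry stores cosign signatures at all, the triangulate subcommand shows where cosign expects to find one for a given image:

# Print the reference where the signature for this image would be stored
# (typically <repo>:sha256-<digest>.sig in the same repository)
cosign triangulate nginx:latest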

Handling Missing Signatures

Most of the images I pull don't have signatures. Official images from Docker Hub sometimes do, but the majority of community images don't. For those, my script just skips verification and logs a warning. I could make it stricter and block unsigned images entirely, but that would break too many of my existing stacks.

This is the trade-off I'm living with: verification is opt-in for images that support it, but not enforced globally.
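
In script terms that policy is just a non-fatal branch. Here's a sketch of how the loop could separate hard-fail from warn-and-continue; ENFORCE_ALL is just a name for the toggle, not anything cosign provides:

ENFORCE_ALL=${ENFORCE_ALL:-false}   # set to true to block unsigned images outright

for IMAGE in $IMAGES; do
  if cosign verify --key "$PUBLIC_KEY" "$IMAGE" >/dev/null 2>&1; then
    echo "Verified: $IMAGE"
  elif [ "$ENFORCE_ALL" = "true" ]; then
    echo "BLOCKED: $IMAGE is unsigned or failed verification"
    exit 1
  else
    echo "WARNING: no valid signature for $IMAGE, deploying anyway"
  fi
done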

Performance and Practical Impact

Verification adds a few seconds to deployment time. For each image, cosign has to:

  • Fetch the signature from the registry
  • Validate it against the public key
  • Check the image digest

On my network, this usually takes 2-5 seconds per image. For a stack with three or four images, that's 10-20 seconds of overhead. Not a big deal for planned deployments, but noticeable if you're iterating quickly during development.

I also found that if the registry is slow or unreachable, verification times out and the deployment fails. This happened once when Docker Hub had an outage, and my automated stack updates stopped working until I temporarily disabled verification.
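
One small mitigation is to put a hard timeout around the verify call so a slow or unreachable registry fails fast instead of hanging the deployment; a sketch using coreutils timeout (the 30-second limit is arbitrary):

# Fail fast if the registry is slow or unreachable (30s is an arbitrary limit)
if ! timeout 30 cosign verify --key "$PUBLIC_KEY" "$IMAGE" >/dev/null 2>&1; then
  echo "Verification failed or timed out for $IMAGE"
  exit 1
fi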

What I'd Do Differently

If I were starting over, I'd probably look into running a local registry mirror with cached signatures. That would speed up verification and make it more resilient to upstream registry issues.
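
For what it's worth, a pull-through cache for Docker Hub is a single container using the stock registry image; whether cosign's signature artifacts actually get cached depends on how the mirror is used, so treat this as a starting point (the port and cache path are placeholders):

# Run a local pull-through cache for Docker Hub (registry:2 in proxy mode)
docker run -d --name registry-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v /srv/registry-cache:/var/lib/registry \
  registry:2

# Then add "registry-mirrors": ["http://localhost:5000"] to /etc/docker/daemon.json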

I'd also want a better way to integrate this with Portainer. Right now, using a custom template works but feels clunky. If Portainer added native support for pre-deployment hooks or image policy enforcement, that would make this whole setup cleaner.

Key Takeaways

Image verification with cosign is practical if you're willing to work around tooling limitations. It's not plug-and-play, especially outside Kubernetes, but the core verification logic is solid and doesn't require much infrastructure.

The biggest limitation I hit was the lack of native support in both Portainer and Docker's authorization model. Admission webhooks work well in Kubernetes because the API server has explicit extension points for them. Docker's API doesn't have the same hooks, so you end up building workarounds.

For my use case—self-hosted services where I control the deployment flow—the wrapper script approach is good enough. It's not enterprise-grade policy enforcement, but it adds a meaningful security check without breaking my existing workflows.

If you're running critical services and want stronger guarantees about image provenance, this kind of verification is worth the effort. Just be prepared to write some glue code and accept that not every image you use will be signed.