Configuring Caddy as a Reverse Proxy for Local AI API Endpoints with mTLS Certificate Pinning

Why I Started Using Caddy for Local AI Endpoints

I run several AI models locally through Ollama, along with Open WebUI and n8n for workflow automation. These services expose HTTP APIs that I access from different machines on my network. As I started experimenting with remote access and wanted to add basic authentication, I realized I needed something more structured than port forwarding and raw IP addresses.

I'd heard about Caddy's automatic HTTPS and simple configuration, so I decided to test it as a reverse proxy for my local AI stack. My goal wasn't just TLS termination—I wanted to understand how to properly secure API endpoints that handle sensitive data, especially when accessed over untrusted networks.

My Real Setup

I run everything on Proxmox, with most services containerized in Docker. My primary AI services include:

  • Ollama (port 11434) for LLM inference
  • Open WebUI (port 8080) for chat interfaces
  • n8n (port 5678) for workflow automation
  • Flowise (port 3001) for low-code AI workflows

These services were initially accessible only via direct IP and port combinations. I wanted to expose them through clean hostnames with HTTPS, but without opening everything to the public internet.

I installed Caddy as a Docker container alongside my existing services. My initial Caddyfile was extremely simple—just basic reverse proxy directives pointing to internal ports.
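
For reference, a minimal sketch of the container setup. I actually manage it with Docker Compose, but a docker run command shows the moving parts; the volume names are illustrative:

# Host networking lets "localhost:11434" in the Caddyfile reach
# services running directly on the host
docker run -d --name caddy \
    --network host \
    -v "$PWD/Caddyfile:/etc/caddy/Caddyfile:ro" \
    -v caddy_data:/data \
    caddy:2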

Basic Reverse Proxy Configuration

My first working Caddyfile looked like this:

ollama.local {
    reverse_proxy localhost:11434
}

webui.local {
    reverse_proxy localhost:8080
}

I used .local hostnames because I wasn't exposing these to the internet. For internal names like these, Caddy skips the public ACME flow and issues certificates from its own local CA, which worked fine for testing but meant adding Caddy's root CA certificate to each client's trust store.
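
Getting other machines to resolve and trust these names took two extra steps. A sketch, assuming Caddy's default data directory inside a container named caddy and a Debian-style trust store:

# Point the hostnames at the machine running Caddy (IP is an example)
echo "192.168.1.50 ollama.local webui.local" | sudo tee -a /etc/hosts

# Export Caddy's local root CA from the container and trust it
docker cp caddy:/data/caddy/pki/authorities/local/root.crt caddy-root.crt
sudo cp caddy-root.crt /usr/local/share/ca-certificates/caddy-root.crt
sudo update-ca-certificates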

This setup worked immediately. I could access https://ollama.local instead of http://192.168.1.50:11434. The browser complained about the self-signed certificate, but the connection was encrypted.

What I Learned About Host Headers

One issue I hit early was with services that validate the Host header. Open WebUI would sometimes reject requests because it expected localhost:8080 but received webui.local. I fixed this by explicitly setting the Host header:

webui.local {
    reverse_proxy localhost:8080 {
        header_up Host localhost:8080
    }
}

This wasn't needed for Ollama, which doesn't care about the Host header, but it was critical for web applications with CSRF protection.
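
The quickest way I found to confirm this class of problem is to hit the backend directly and spoof the Host header; if the service rejects the spoofed value, the proxy needs header_up. Exact responses depend on the app, so treat this as a sketch:

# Reproduce the rejection: send the backend the proxied hostname
curl -si -H "Host: webui.local" http://localhost:8080/ | head -n 5

# Send the hostname the backend expects
curl -si -H "Host: localhost:8080" http://localhost:8080/ | head -n 5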

Adding mTLS Certificate Pinning

The next step was restricting access to specific clients. I didn't want just any device on my network to query these APIs—only my trusted machines.

I decided to implement mutual TLS (mTLS), where both the server and client present certificates. This meant:

  1. Caddy verifies the client's certificate before allowing the connection
  2. Only clients with certificates signed by my internal CA can access the endpoints

Generating Client Certificates

I created a simple internal CA using OpenSSL. This isn't production PKI—it's just a self-signed CA for my lab environment.

# Generate CA private key
openssl genrsa -out ca-key.pem 4096

# Create CA certificate
openssl req -new -x509 -days 3650 -key ca-key.pem -out ca-cert.pem \
    -subj "/CN=Internal CA"

# Generate client private key
openssl genrsa -out client-key.pem 4096

# Create client certificate signing request
openssl req -new -key client-key.pem -out client.csr \
    -subj "/CN=trusted-client"

# Sign client certificate with CA
openssl x509 -req -days 365 -in client.csr \
    -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial \
    -out client-cert.pem

I stored these files in a certs/ directory and mounted it into the Caddy container.
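
One thing the commands above don't cover: the Caddyfile in the next section also references a server certificate, which I issued from the same CA. The SAN entry matters, since curl and browsers ignore the CN on its own; the hostnames here are mine, so adjust them to yours:

# Generate the server key and a CSR for the proxy hostname
openssl genrsa -out server-key.pem 4096
openssl req -new -key server-key.pem -out server.csr \
    -subj "/CN=ollama.local"

# Sign it with the CA, adding the SANs modern clients require
openssl x509 -req -days 365 -in server.csr \
    -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial \
    -out server-cert.pem \
    -extfile <(printf "subjectAltName=DNS:ollama.local,DNS:webui.local,DNS:n8n.local")

# Sanity-check both chains before touching Caddy
openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem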

Configuring Caddy for mTLS

The critical part was telling Caddy to require and verify client certificates. Here's the updated Caddyfile:

{
    auto_https off
}

ollama.local {
    tls /certs/server-cert.pem /certs/server-key.pem {
        client_auth {
            mode require_and_verify
            trusted_ca_cert_file /certs/ca-cert.pem
        }
    }
    reverse_proxy localhost:11434
}

Key points from this configuration:

  • auto_https off disables Caddy's automatic certificate management because I'm providing my own
  • tls directive specifies the server certificate and key
  • client_auth block enforces mTLS with require_and_verify mode
  • trusted_ca_cert_file points to my internal CA certificate
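
After each Caddyfile change, I got into the habit of validating before reloading. Both are stock caddy subcommands, run here through the container:

# Catch syntax errors before they take the proxy down
docker exec caddy caddy validate --config /etc/caddy/Caddyfile

# Apply the new config without dropping existing connections
docker exec caddy caddy reload --config /etc/caddy/Caddyfile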

Testing mTLS Connections

To verify it worked, I tested with curl:

# Without client certificate - should fail
curl https://ollama.local/api/tags

# With client certificate - should succeed
curl --cert client-cert.pem --key client-key.pem \
     --cacert ca-cert.pem \
     https://ollama.local/api/tags

The first command failed with a TLS handshake error. The second succeeded and returned the list of available models.

Certificate Pinning Implementation

mTLS alone validates that the client certificate is signed by a trusted CA, but it doesn't restrict access to specific certificates. I wanted to go further and pin to exact certificates—essentially whitelisting individual clients.

Caddy doesn't have built-in certificate pinning, so I implemented it using request matchers and certificate fingerprints.

Extracting Certificate Fingerprints

I generated SHA-256 fingerprints for each authorized client certificate:

openssl x509 -in client-cert.pem -noout -fingerprint -sha256

This output looks like:

SHA256 Fingerprint=A1:B2:C3:D4:E5:F6:...

I stored these fingerprints in environment variables for easy management.
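
Caddy compares against lowercase, colon-free hex (a format mismatch I cover below), so I normalized at extraction time. One way to do it:

# Strip the colons and lowercase the hex to match Caddy's placeholder
openssl x509 -in client-cert.pem -noout -fingerprint -sha256 \
    | cut -d'=' -f2 | tr -d ':' | tr 'A-F' 'a-f'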

Implementing Pinning in Caddy

Caddy exposes the client certificate fingerprint via the {http.request.tls.client.fingerprint} placeholder. I used this to create a whitelist:

ollama.local {
    tls /certs/server-cert.pem /certs/server-key.pem {
        client_auth {
            mode require_and_verify
            trusted_ca_cert_file /certs/ca-cert.pem
        }
    }
    
    @authorized_client {
        expression {http.request.tls.client.fingerprint} == "a1b2c3d4e5f6..."
    }
    
    handle @authorized_client {
        reverse_proxy localhost:11434
    }
    
    handle {
        respond "Unauthorized" 403
    }
}

This configuration:

  1. Requires a valid client certificate signed by my CA
  2. Checks the certificate fingerprint against a hardcoded value
  3. Only proxies requests if the fingerprint matches
  4. Returns 403 for any other certificate, even if validly signed
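
To convince myself the pinning actually added something beyond plain mTLS, I issued a throwaway second certificate from the same CA and confirmed it is rejected despite passing CA verification (file names are illustrative):

# A perfectly valid certificate from the same CA, but not whitelisted
openssl genrsa -out other-key.pem 4096
openssl req -new -key other-key.pem -subj "/CN=other-client" \
    | openssl x509 -req -days 365 -CA ca-cert.pem -CAkey ca-key.pem \
        -CAcreateserial -out other-cert.pem

# Handshake succeeds, fingerprint check fails: expect HTTP 403
curl --cert other-cert.pem --key other-key.pem --cacert ca-cert.pem \
     https://ollama.local/api/tags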

Managing Multiple Authorized Clients

For multiple clients, I used Caddy's expression language to check against a list:

@authorized_client {
    expression {http.request.tls.client.fingerprint} in ["a1b2c3...", "d4e5f6...", "9a8b7c..."]
}

I kept the fingerprint list in an environment variable and spliced it into the Caddyfile with Caddy's parse-time {$ALLOWED_FINGERPRINTS} substitution instead of hardcoding values.
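
Building that variable is straightforward if each client's normalized fingerprint lives in its own file; the fingerprints.d/ layout is just my convention:

# Join one-fingerprint-per-file into a quoted, comma-separated CEL list
export ALLOWED_FINGERPRINTS=$(sed 's/.*/"&"/' fingerprints.d/*.txt | paste -sd, -)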

What Didn't Work

Initial Certificate Path Issues

My first attempt failed because I didn't properly mount the certificate directory into the Caddy container. The error was cryptic—just "TLS handshake error"—but checking Caddy's logs showed it couldn't read the certificate files.

I fixed this by explicitly mounting the certs directory in my Docker Compose file:

volumes:
  - ./certs:/certs:ro

The :ro flag makes it read-only, which is good practice for certificate storage.
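
Two commands would have saved me most of the guessing; both assume the container is named caddy:

# Confirm the certificates are actually visible inside the container
docker exec caddy ls -l /certs

# Watch for TLS errors while reproducing the failing handshake
docker logs -f caddy 2>&1 | grep -i tls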

Fingerprint Comparison Failures

Initially, my fingerprint matching didn't work because I was comparing the wrong format. OpenSSL prints fingerprints as uppercase hex pairs separated by colons, while Caddy's placeholder uses lowercase hex with no separators. I had to normalize by stripping the colons and lowercasing:

# OpenSSL format
A1:B2:C3:D4:E5:F6

# Caddy format
a1b2c3d4e5f6

This was annoying to debug because the error message just said "expression evaluated to false" without showing what values were being compared.
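
In hindsight, computing both values side by side makes the mismatch obvious. Hashing the DER-encoded certificate reproduces what the placeholder reports, at least in my testing:

# OpenSSL's pretty-printed fingerprint (uppercase, colon-separated)
openssl x509 -in client-cert.pem -noout -fingerprint -sha256

# SHA-256 over the raw DER bytes: lowercase, no separators,
# matching {http.request.tls.client.fingerprint}
openssl x509 -in client-cert.pem -outform DER | sha256sum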

Performance with Large Fingerprint Lists

When I tested with 20+ authorized fingerprints, I noticed slightly higher request latency. Caddy evaluates the matcher expression on every request, even though the client certificate cannot change for the lifetime of a TLS session.

I didn't find a perfect solution for this, but it wasn't a real problem in my setup since I only have a handful of authorized clients.

Integration with n8n Workflows

One practical use case was securing n8n webhook endpoints that trigger AI workflows. These webhooks accept external data and pass it to Ollama for processing.

I configured Caddy to accept client certificates as optional at the TLS layer while enforcing pinning on the webhook paths:

n8n.local {
    tls /certs/server-cert.pem /certs/server-key.pem {
        client_auth {
            mode verify_if_given
            trusted_ca_cert_file /certs/ca-cert.pem
        }
    }
    
    @webhook path /webhook/*
    
    handle @webhook {
        @authorized_client {
            expression {http.request.tls.client.fingerprint} in [{$ALLOWED_FINGERPRINTS}]
        }
        
        handle @authorized_client {
            reverse_proxy localhost:5678
        }
        
        handle {
            respond "Unauthorized webhook access" 403
        }
    }
    
    handle {
        reverse_proxy localhost:5678
    }
}

This setup keeps the n8n UI reachable from a normal browser: with verify_if_given, presenting a certificate is optional, but any certificate that is presented must chain to my CA, and webhook calls are rejected unless they carry a pinned fingerprint. It's not perfect (ideally I'd put separate authentication on the UI), but it works for my needs.
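
A quick check of both paths; the webhook URL is a placeholder for whatever n8n generates:

# Webhook path: client certificate required, fingerprint must be pinned
curl --cert client-cert.pem --key client-key.pem --cacert ca-cert.pem \
     -X POST -H "Content-Type: application/json" -d '{"text":"hello"}' \
     https://n8n.local/webhook/example

# UI path: reachable with only the CA for server verification
curl --cacert ca-cert.pem https://n8n.local/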

Monitoring and Logging

Caddy's logs showed every TLS handshake attempt, which was useful for debugging but verbose in production. I filtered the logs to only show failed authentication attempts:

{
    log {
        output file /var/log/caddy/access.log
        format json
        level ERROR
    }
}

This reduced log noise significantly. Caddy's file output rotates logs on its own by default, and I capped the container's stdout logs with Docker's logging driver so nothing grows indefinitely.
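
For a quick overview of what is failing, the JSON format pairs well with jq; the field names below are Caddy's defaults as I observed them:

# Summarize error messages from the structured log
jq -r 'select(.level == "error") | .msg' /var/log/caddy/access.log \
    | sort | uniq -c | sort -rn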

Limitations and Trade-offs

This setup works well for my lab environment, but it has clear limitations:

  • Certificate distribution: I have to manually copy client certificates to each authorized device. There's no automated enrollment or renewal.
  • Revocation: If a client certificate is compromised, I have to manually remove its fingerprint from the whitelist and reload Caddy. There's no CRL or OCSP.
  • Performance overhead: mTLS adds latency to every connection. For high-throughput API calls, this might be noticeable.
  • Complexity: Managing certificates and fingerprints is more work than simple API keys. For casual use, it's probably overkill.

I'm okay with these trade-offs because I value the security model and don't need enterprise-grade PKI. But this wouldn't scale to hundreds of clients.

Key Takeaways

  • Caddy makes basic reverse proxying trivial, but mTLS requires manual certificate management
  • Certificate pinning adds defense-in-depth but isn't natively supported—you have to implement it with expressions
  • Fingerprint format differences between tools (OpenSSL vs Caddy) will waste your time if you're not careful
  • mTLS is excellent for securing local API endpoints, but only if you're willing to manage certificates properly
  • For production use, consider a proper PKI solution with automated certificate lifecycle management

This configuration gives me strong authentication for AI API endpoints without relying on application-level auth mechanisms. It's not perfect, but it works reliably for my self-hosted setup.