Why I Set This Up
I run several services on my home network—Ollama for local LLMs, ChromaDB for vector storage, monitoring tools, and a few databases. I wanted HTTPS for all of them, but I’m stuck behind CGNAT. My ISP doesn’t give me a public IPv4 address, so port forwarding isn’t an option.
I could have set up a local certificate authority, but that means manually trusting the CA's root certificate on every device I use. I wanted real, trusted certificates that just work everywhere—on my phone, laptop, and any browser without security warnings.
That’s where Caddy with Cloudflare DNS-01 challenge came in. It lets me get valid Let’s Encrypt certificates for internal services without exposing anything to the internet.
My Setup
Here’s what I’m working with:
- A Proxmox host running Docker containers for all my services
- A domain registered with Cloudflare (DNS managed there)
- All services on a static internal IP (192.168.x.x)
- No public IP, no port forwarding, everything stays local
I use Caddy as a reverse proxy. It sits in front of all my services and handles HTTPS termination. The key piece is the Cloudflare DNS plugin for Caddy, which allows DNS-01 ACME challenges.
How DNS-01 Works in This Context
Normally, Let’s Encrypt uses HTTP-01 challenges, which require your server to be reachable from the internet on port 80. That doesn’t work behind CGNAT.
DNS-01 challenges work differently. Let’s Encrypt asks you to prove domain ownership by creating a specific TXT record in your DNS. Caddy does this automatically using the Cloudflare API—no public exposure needed.
Here’s the flow:
- Caddy requests a certificate for `ollama.mydomain.com`
- Let’s Encrypt tells Caddy to create a TXT record at `_acme-challenge.ollama.mydomain.com`
- Caddy uses the Cloudflare API to add that record
- Let’s Encrypt verifies the record exists
- Certificate issued
The entire process happens without any inbound traffic to my network.
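For the curious, the value Let’s Encrypt looks for in that TXT record is defined by RFC 8555: the base64url-encoded SHA-256 hash of the ACME key authorization (the challenge token joined to the account key’s thumbprint). A rough sketch of that computation, with placeholder values rather than real challenge data:

```shell
# Sketch of how the DNS-01 TXT record value is derived (RFC 8555, section 8.4).
# TOKEN comes from Let's Encrypt; THUMBPRINT is derived from the ACME account key.
# Both values below are made-up placeholders.
TOKEN="example-challenge-token"
THUMBPRINT="example-account-key-thumbprint"

# key authorization = token + "." + thumbprint
# TXT value = base64url(SHA-256(key authorization)), with padding stripped
TXT_VALUE=$(printf '%s.%s' "$TOKEN" "$THUMBPRINT" \
  | openssl dgst -sha256 -binary \
  | base64 | tr '+/' '-_' | tr -d '=')

echo "_acme-challenge.ollama.mydomain.com TXT $TXT_VALUE"
```

Caddy computes and publishes this value for you; the sketch just shows what Let’s Encrypt is checking when it queries the record.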
Cloudflare DNS Configuration
In Cloudflare, I created A records for each service subdomain pointing to my internal IP:
- `ollama.mydomain.com` → 192.168.x.x
- `chroma.mydomain.com` → 192.168.x.x
- `pihole.mydomain.com` → 192.168.x.x
These records are set to “DNS only” mode (gray cloud icon). This means Cloudflare just resolves the DNS—it doesn’t proxy traffic. The traffic stays entirely within my network.
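One way to sanity-check that a record really is in “DNS only” mode is to resolve it (e.g. with `dig +short`) and confirm the answer is an RFC 1918 private address rather than a Cloudflare edge IP. A small helper along those lines; the 192.168.1.10 value is just an example:

```shell
# is_private: succeeds if an IPv4 address falls in RFC 1918 private ranges.
# A record accidentally flipped to proxied (orange cloud) mode resolves to
# Cloudflare edge IPs instead, so this check would fail for it.
is_private() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
    *) return 1 ;;
  esac
}

# Feed in the answer from `dig +short ollama.mydomain.com`:
if is_private "192.168.1.10"; then
  echo "record points inside the LAN"
else
  echo "record resolves to a public address"
fi
```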
Caddy Configuration
I built a custom Caddy Docker image with the Cloudflare DNS plugin. The Dockerfile looks like this:
FROM caddy:2.8.4-builder AS builder
RUN xcaddy build --with github.com/caddy-dns/cloudflare

FROM caddy:2.8.4 AS caddy
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
My Caddyfile is straightforward. Each service gets a block with the subdomain, reverse proxy directive, and TLS configuration:
{
    acme_dns cloudflare {env.CLOUDFLARE_API_TOKEN}
}

ollama.mydomain.com {
    reverse_proxy http://192.168.x.x:11434
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
}

chroma.mydomain.com {
    reverse_proxy http://192.168.x.x:8000
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
}

pihole.mydomain.com {
    reverse_proxy http://192.168.x.x:80
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
}
I pass the Cloudflare API token as an environment variable. The token needs permissions to edit DNS records for the domain.
My docker-compose.yml:
version: "3.8"

services:
  caddy:
    build:
      context: .
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    ports:
      - "443:443"
      - "80:80"
    restart: unless-stopped

volumes:
  caddy_data:
  caddy_config:
I expose both ports 80 and 443. Caddy automatically redirects HTTP to HTTPS.
What Worked
Once I had everything configured, Caddy pulled certificates for all my subdomains within minutes. No manual intervention needed. Renewals happen automatically—I’ve never had to think about it.
Every device on my network can access these services over HTTPS without any certificate warnings. My phone, laptop, and even random browsers all trust the certificates because they’re issued by Let’s Encrypt.
Adding a new service is trivial. I just add a DNS record in Cloudflare, add a block to the Caddyfile, and reload Caddy. The certificate gets issued automatically.
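As a concrete example, adding a hypothetical Grafana instance would mean appending a block like this and reloading. The hostname and port 3000 are assumptions for illustration, not part of my actual setup:

```shell
# Append a site block for a new (hypothetical) service to the Caddyfile.
# The heredoc is quoted so {env.CLOUDFLARE_API_TOKEN} is written literally
# and expanded by Caddy at runtime, not by the shell here.
cat >> Caddyfile <<'EOF'

grafana.mydomain.com {
    reverse_proxy http://192.168.x.x:3000
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
}
EOF

# Then reload Caddy inside the container (no restart needed):
# docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
```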
Performance
Caddy is lightweight. On my setup, it uses less than 50MB of RAM and barely touches the CPU. The reverse proxy overhead is negligible—I haven’t noticed any latency compared to hitting services directly.
What Didn’t Work
Initial API Token Permissions
My first attempt failed because I didn’t give the Cloudflare API token enough permissions. I initially scoped it to a single zone, but Caddy needs the ability to read zone information and edit DNS records. I had to regenerate the token with proper permissions.
The required permissions are:
- Zone – DNS – Edit
- Zone – Zone – Read
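Cloudflare exposes a token verification endpoint that’s handy for checking a token before wiring it into Caddy. A quick sketch that skips the call when the variable isn’t set:

```shell
# Verify a Cloudflare API token against Cloudflare's token verify endpoint.
# Requires CLOUDFLARE_API_TOKEN in the environment; skips otherwise.
if [ -z "${CLOUDFLARE_API_TOKEN:-}" ]; then
  RESULT="skipped: CLOUDFLARE_API_TOKEN not set"
else
  RESULT=$(curl -s -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
    "https://api.cloudflare.com/client/v4/user/tokens/verify")
fi
echo "$RESULT"
```

A valid token comes back with `"status": "active"` in the JSON body. Note this only proves the token itself works; it doesn’t confirm it carries the DNS edit permissions listed above.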
Docker Network Issues
I initially ran Caddy in a separate Docker network from my services. This caused connectivity problems: Caddy couldn’t reach the backend containers at their internal addresses. I fixed it by putting Caddy on the host network; making sure all containers share the same bridge network works too.
Rate Limits During Testing
While testing, I hit Let’s Encrypt’s rate limits a few times by restarting Caddy too often. Let’s Encrypt allows 5 duplicate certificate requests per week. I switched to their staging environment for testing by adding this to the global options:
{
acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
Once everything worked, I removed that line to use the production CA.
Key Takeaways
This setup works well for internal services that need real HTTPS certificates. It’s particularly useful if you’re behind CGNAT or don’t want to expose services publicly.
A few things to keep in mind:
- DNS-01 challenges require API access to your DNS provider. Cloudflare makes this easy, but other providers may not.
- Certificates are still public. They’ll appear in Certificate Transparency logs, so your subdomain names are visible to anyone who looks. The CT logs themselves don’t contain IPs, but remember that the public A records map those subdomains to your internal 192.168.x.x addresses (which aren’t routable from outside anyway).
- If Cloudflare goes down or your API token gets revoked, certificate renewals will fail. Keep backups of your Caddy data directory.
- This doesn’t work for truly offline networks. You need internet access for ACME challenges and certificate issuance.
For my use case—running AI tools, databases, and monitoring dashboards on my home network—this setup has been reliable. I don’t think about certificates anymore. They just work.