Why I Needed This Setup
I run multiple self-hosted services on my Proxmox cluster—n8n, monitoring dashboards, internal tools, and test environments. Each service needs its own subdomain, and I wanted proper HTTPS without manually managing certificates for every new service I spin up.
The problem: my ISP blocks inbound traffic on ports 80 and 443. I also don't want to expose my home IP directly to the internet. Cloudflare Tunnels solved the exposure issue, but I still needed a reverse proxy that could handle dynamic subdomains without pre-configuring each one.
That's where Caddy's on-demand TLS came in. It lets me create subdomains on the fly—just point the DNS record and Caddy handles the certificate automatically when the first request arrives.
My Actual Setup
Here's what I'm working with:
- Domain managed through Cloudflare (DNS only, not proxied)
- Cloudflare Tunnel running in a Docker container on Proxmox
- Caddy running in another Docker container on the same host
- Internal services running in separate containers or VMs
- All traffic flows: Internet → Cloudflare Tunnel → Caddy → Internal Service
The tunnel terminates inside my network and forwards everything to Caddy on port 443. Caddy then routes requests to the appropriate backend service based on the subdomain.
Cloudflare Tunnel Configuration
I installed cloudflared using Docker. My docker-compose.yml for the tunnel looks like this:
```yaml
version: '3.8'

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=your_tunnel_token_here
    networks:
      - caddy_network

networks:
  caddy_network:
    external: true
```
I created the tunnel through Cloudflare's dashboard (Zero Trust → Networks → Tunnels). The key configuration in the tunnel settings is a single wildcard route:
- Subdomain: `*`
- Domain: `yourdomain.com`
- Service: `https://caddy:443`
This tells Cloudflare to forward all subdomain requests to Caddy's internal hostname on port 443. The https:// prefix is important—it ensures the tunnel connects to Caddy over TLS.
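The matching behavior is simple to reason about: one catch-all rule sends every `*.yourdomain.com` hostname to the same origin. A minimal Python sketch of that routing logic (an illustration of the idea, not cloudflared's actual implementation):

```python
from fnmatch import fnmatch

# Ingress rules as (hostname pattern, origin service) pairs.
# A single wildcard rule plus a fallback mirrors the tunnel config above.
RULES = [
    ("*.yourdomain.com", "https://caddy:443"),
]
FALLBACK = "http_status:404"

def route(hostname: str) -> str:
    """Return the origin service a given hostname would be forwarded to."""
    for pattern, service in RULES:
        if fnmatch(hostname, pattern):
            return service
    return FALLBACK
```

Every subdomain of the configured domain lands on Caddy; anything else falls through to the 404 fallback.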
Caddy Configuration
Caddy's on-demand TLS feature is what makes this work. Instead of listing every subdomain explicitly, I use a wildcard pattern and let Caddy request certificates only when traffic actually arrives.
My Caddyfile (note that the global options block must come first, before any site blocks):

```
{
    on_demand_tls {
        ask http://caddy-validator:8000/check
        interval 2m
        burst 5
    }
}

*.yourdomain.com {
    tls {
        on_demand
    }

    @n8n host n8n.yourdomain.com
    handle @n8n {
        reverse_proxy n8n:5678
    }

    @dashboard host dashboard.yourdomain.com
    handle @dashboard {
        reverse_proxy grafana:3000
    }

    @test host test.yourdomain.com
    handle @test {
        reverse_proxy test-service:8080
    }

    handle {
        respond "Service not configured" 404
    }
}
```
The `on_demand` directive tells Caddy to obtain certificates dynamically. The global `on_demand_tls` block includes an `ask` endpoint—this is critical for security. Without it, anyone could point a subdomain at your server and force Caddy to request certificates, potentially hitting Let's Encrypt rate limits.
The Validation Endpoint
I wrote a simple validation service in Python that checks if a requested subdomain is allowed. This runs in its own container:
```python
from flask import Flask, request
import os

app = Flask(__name__)

# Comma-separated allow-list, e.g. "n8n,dashboard,test"
ALLOWED_SUBDOMAINS = os.getenv('ALLOWED_SUBDOMAINS', '').split(',')

@app.route('/check')
def check():
    # Caddy sends the full hostname, e.g. ?domain=n8n.yourdomain.com
    domain = request.args.get('domain', '')
    subdomain = domain.split('.')[0]
    if subdomain in ALLOWED_SUBDOMAINS:
        return '', 200
    return '', 404

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
```
Docker Compose for the validator:
```yaml
version: '3.8'

services:
  caddy-validator:
    build: ./validator
    container_name: caddy-validator
    restart: unless-stopped
    environment:
      - ALLOWED_SUBDOMAINS=n8n,dashboard,test
    networks:
      - caddy_network

networks:
  caddy_network:
    external: true
```
When Caddy receives a request for a new subdomain, it queries this endpoint. If the subdomain isn't in the allowed list, Caddy returns an error instead of requesting a certificate.
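The decision itself boils down to a few lines. Pulled out as a pure function (the same logic as the Flask handler above), it's easy to test and reason about edge cases like an empty domain:

```python
def is_allowed(domain: str, allowed: set[str]) -> bool:
    """Return True if the left-most DNS label is in the allow-list.

    Caddy passes the full hostname as the `domain` query parameter,
    e.g. "n8n.yourdomain.com"; only the first label is checked.
    """
    if not domain:
        return False
    subdomain = domain.split(".")[0]
    return subdomain in allowed
```

Anything not explicitly listed—including the bare domain or a typo'd subdomain—gets rejected, so no certificate is requested for it.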
Docker Networking
All containers share a custom Docker network:
```bash
docker network create caddy_network
```
This lets containers reference each other by service name. When Caddy proxies to n8n:5678, Docker's internal DNS resolves that to the n8n container's IP.
My Caddy docker-compose.yml:
```yaml
version: '3.8'

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    # No host ports published: cloudflared reaches Caddy over caddy_network
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - caddy_network

volumes:
  caddy_data:
  caddy_config:

networks:
  caddy_network:
    external: true
```

No ports are published on the host, so Caddy's port 443 is reachable only from inside the Docker network. The Cloudflare Tunnel is the only entry point from the internet.
DNS Configuration
In Cloudflare's DNS settings, I add A records for each subdomain pointing to a dummy IP (like 192.0.2.1). The actual IP doesn't matter because the tunnel handles routing.
Important: the DNS records must not be proxied (orange cloud disabled). Cloudflare's proxy would interfere with the tunnel's routing.
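Adding these records can be scripted against Cloudflare's v4 API (`POST /zones/{zone_id}/dns_records`). A sketch that just builds the request body—actually sending it is left out, and the default domain is a placeholder:

```python
def dns_record_payload(subdomain: str, domain: str = "yourdomain.com") -> dict:
    """Build the JSON body for a Cloudflare v4 DNS record creation call.

    Content is the dummy IP 192.0.2.1 (TEST-NET-1, reserved for docs),
    and proxied is False (gray cloud) because the tunnel, not
    Cloudflare's proxy, handles routing.
    """
    return {
        "type": "A",
        "name": f"{subdomain}.{domain}",
        "content": "192.0.2.1",
        "ttl": 1,          # 1 means "automatic" in Cloudflare's API
        "proxied": False,  # orange cloud must stay off for this setup
    }
```

The important invariant is `proxied: False`—flip that to true and you're back in the 525-error territory described below.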
What Worked
Once everything was running, I could add new services without touching the tunnel configuration:

- Add the subdomain to the validator's `ALLOWED_SUBDOMAINS` environment variable
- Restart the validator container
- Add a DNS A record in Cloudflare
- Add a `handle` block in the Caddyfile for the new service
- Reload Caddy: `docker exec caddy caddy reload --config /etc/caddy/Caddyfile`
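The first step is just string surgery on a comma-separated list. A small helper keeps it idempotent (hypothetical—not part of my actual setup, but the kind of thing you'd use to script the update):

```python
def add_subdomain(allowed_env: str, name: str) -> str:
    """Append a subdomain to an ALLOWED_SUBDOMAINS-style value.

    Duplicates and stray whitespace are ignored, so running it twice
    (or against a hand-edited value) is harmless.
    """
    names = [s.strip() for s in allowed_env.split(",") if s.strip()]
    if name not in names:
        names.append(name)
    return ",".join(names)
```

Write the result back into the validator's environment and restart it, and the new subdomain passes the `ask` check.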
The first request to the new subdomain triggers certificate issuance. It takes 2-3 seconds, then subsequent requests are instant.
Certificate renewal happens automatically. Caddy checks expiration and renews before certificates expire.
What Didn't Work
My first attempt used Cloudflare's proxied DNS (orange cloud enabled). This broke everything because Cloudflare's proxy expects to handle TLS termination, but my tunnel was already doing that. Requests timed out or returned 525 errors.
I also initially forgot the ask endpoint in Caddy's on-demand config. Within an hour, I had dozens of certificate requests from random subdomains that attackers had pointed at my IP. Let's Encrypt rate-limited my domain for a week.
Another mistake: I tried using http://caddy:443 in the tunnel configuration instead of https://. The tunnel couldn't establish a connection because Caddy was expecting TLS.
The validator service initially crashed under load because Flask's development server isn't production-ready. I switched to running it with Gunicorn:
```dockerfile
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "app:app"]
```
Limitations and Trade-offs
On-demand TLS adds latency to the first request for a new subdomain. For internal tools, this is fine. For public-facing services with cold-start traffic, it might be noticeable.
The validator service is a single point of failure. If it goes down, Caddy can't issue new certificates. Existing certificates continue working, but new subdomains won't load. I monitor it with Uptime Kuma to get alerts if it stops responding.
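Any HTTP GET works as a probe here. A minimal stdlib sketch of the kind of check Uptime Kuma performs (the URL is an example from my setup, not something you'd hardcode):

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers with any 2xx status.

    Connection errors, timeouts, and non-2xx responses (urllib raises
    HTTPError, a URLError subclass, for those) all count as unhealthy.
    """
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        return False

# e.g. is_healthy("http://caddy-validator:8000/check?domain=n8n.yourdomain.com")
```

Probing `/check` with a known-good subdomain also exercises the allow-list logic, not just the process being up.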
Let's Encrypt rate limits still apply: at most 50 certificates per registered domain per week. For my use case, this is more than enough, but if you're spinning up dozens of test environments daily, you'll hit limits.
Cloudflare Tunnels add another layer of abstraction. When something breaks, debugging involves checking tunnel logs, Caddy logs, and the backend service. I've learned to keep detailed notes about the traffic flow to avoid confusion during troubleshooting.
Key Takeaways
This setup gives me the flexibility to add services without exposing ports or manually managing certificates. The combination of Cloudflare Tunnels and Caddy's on-demand TLS handles the complexity while keeping my home IP hidden.
The validation endpoint is non-negotiable. Without it, you're one DNS misconfiguration away from rate limit hell.
Docker networking makes internal routing clean. Services reference each other by name, and I don't have to manage IP addresses or port conflicts.
The first-request delay for new subdomains is real but acceptable for my workflow. If you need instant access, pre-configure certificates for critical services.
This approach scales well for personal infrastructure. I've been running it for eight months without major issues. Certificates renew automatically, new services take minutes to add, and I haven't had to touch my router configuration once.