Why I Started Using Caddy for Legacy Services
I run several older web applications in my homelab—tools I built years ago or inherited from previous projects. Most of them only speak HTTP. They were written before HTTPS was ubiquitous, and retrofitting SSL into each one would mean modifying application code, rewriting connection handlers, or dealing with certificate paths in languages I haven’t touched in years.
I didn’t want to touch the applications themselves. I just wanted HTTPS in front of them without the usual headaches of certificate management, renewal scripts, or complicated reverse proxy configs.
That’s when I moved to Caddy.
My Setup: Proxmox VM Running Caddy
I have a Proxmox host where most of my services run in LXC containers or VMs. One of those VMs runs Caddy as a dedicated reverse proxy. The legacy services themselves run on different containers, each listening on plain HTTP on internal ports like 8080, 3000, or 9000.
Caddy sits at the edge. It terminates HTTPS, handles certificates automatically, and forwards traffic to the backend services over plain HTTP on the local network.
Here’s what I installed on a Debian 12 VM:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
Caddy started immediately as a systemd service. No additional setup required.
The Caddyfile: Simpler Than I Expected
I expected reverse proxy configuration to be verbose. With nginx, I used to write location blocks, proxy headers, SSL certificate paths, and renewal hooks. With Caddy, I wrote this:
oldapp.vipinpg.com {
reverse_proxy 192.168.1.50:8080
}
That’s it. Two lines.
When I reloaded Caddy, it automatically:
- Reached out to Let’s Encrypt
- Obtained a certificate for oldapp.vipinpg.com
- Started serving HTTPS on port 443
- Redirected HTTP to HTTPS
- Forwarded requests to my backend service at 192.168.1.50:8080
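Caddy also lets you register a contact email with Let's Encrypt through a global options block at the top of the Caddyfile, which the ACME account uses for recovery and important notices. A minimal sketch — the address is a placeholder, not part of my setup:

```caddyfile
{
	# Contact email registered with the ACME account (placeholder)
	email admin@example.com
}

oldapp.vipinpg.com {
	reverse_proxy 192.168.1.50:8080
}
```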
I watched the logs:
sudo journalctl -u caddy -f
I saw the ACME challenge complete, the certificate get stored, and the service start listening. The entire process took about 10 seconds.
What Actually Happens Behind the Scenes
Caddy uses ACME (the protocol behind Let’s Encrypt) to prove I control the domain. It does this by responding to an HTTP challenge on port 80. Once verified, it downloads the certificate and stores it in /var/lib/caddy.
It also sets up automatic renewal. Let's Encrypt certificates are valid for 90 days, and Caddy renews them roughly 30 days before expiration. I don't have to set up cron jobs or remember to check certificate validity.
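You can still spot-check a certificate's expiry yourself with openssl. Here's a sketch using a throwaway self-signed certificate; to check a real one, point the last command at the relevant file under Caddy's data directory instead:

```sh
# Create a throwaway self-signed cert purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=demo.example.com" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Print the expiry date (notAfter) -- the same check works on
# the certificates Caddy stores under /var/lib/caddy
openssl x509 -enddate -noout -in /tmp/demo.crt
```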
The backend service never knows HTTPS exists. It continues to serve plain HTTP on its internal port. Caddy handles the encryption layer entirely.
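By default, Caddy's reverse_proxy already passes X-Forwarded-For and X-Forwarded-Proto headers upstream, so a backend that inspects them can still tell the original request came in over HTTPS. If a legacy app expects a different header, you can add one with header_up — a sketch, where X-Real-IP is an example header rather than something my apps required:

```caddyfile
oldapp.vipinpg.com {
	reverse_proxy 192.168.1.50:8080 {
		# Caddy sends X-Forwarded-For/Proto automatically;
		# add extra headers only if the legacy app needs them
		header_up X-Real-IP {remote_host}
	}
}
```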
Real Configuration I’m Using
Here’s my actual Caddyfile for multiple services:
notes.vipinpg.com {
reverse_proxy 192.168.1.51:3000
}
monitor.vipinpg.com {
reverse_proxy 192.168.1.52:9000
}
archive.vipinpg.com {
reverse_proxy 192.168.1.53:8080
}
Each service runs in its own LXC container. Each listens on HTTP only. Caddy gives them all HTTPS without me touching their configurations.
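When several sites share settings, Caddyfile snippets keep the file from repeating itself. A sketch under my own assumptions — the snippet name and the gzip choice are illustrative, not something the setup above requires:

```caddyfile
# Reusable snippet, defined once
(defaults) {
	encode gzip
}

notes.vipinpg.com {
	import defaults
	reverse_proxy 192.168.1.51:3000
}

monitor.vipinpg.com {
	import defaults
	reverse_proxy 192.168.1.52:9000
}
```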
What Didn’t Work: DNS and Port Access
The first time I tried this, Caddy failed to obtain certificates. The logs showed ACME challenge timeouts.
The problem: my DNS records weren’t pointing to the right IP yet. I had just set up the VM and forgot to update the A records for the subdomains. Let’s Encrypt couldn’t reach my server to verify domain ownership.
Once I fixed the DNS and waited for propagation, it worked immediately.
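Before pointing Caddy at a new domain, it's worth confirming the name actually resolves to your public IP. Something like this shows the idea, with localhost standing in for the real subdomain:

```sh
# Resolve a hostname the same way the system would.
# Replace 'localhost' with your actual subdomain (e.g. oldapp.vipinpg.com)
# and confirm the answer matches your public IP.
getent hosts localhost
```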
Another issue: ports 80 and 443 need to be open and accessible from the internet. I run Caddy behind my router, so I had to forward those ports to the VM’s internal IP. If you’re on a VPS, this isn’t a problem, but on a homelab setup, you need to check your firewall and router rules.
IPv6 Confusion
At one point, Caddy tried to bind to IPv6 addresses, but my ISP doesn’t provide stable IPv6. I saw errors in the logs about address binding failures.
I fixed this by explicitly binding to IPv4 only:
notes.vipinpg.com {
bind 0.0.0.0
reverse_proxy 192.168.1.51:3000
}
This forced Caddy to use IPv4 and stopped the errors.
Adding Services Without Downtime
One thing I appreciated: I can add new services to the Caddyfile and reload Caddy without interrupting existing traffic.
sudo systemctl reload caddy
Caddy reloads the configuration, obtains certificates for new domains, and keeps serving existing ones. No restart required.
This matters when you’re running production services. I’ve added new subdomains during the day without worrying about brief outages.
WebSocket Support (No Extra Config)
One of my services uses WebSockets for real-time updates. With nginx, I used to add specific headers to handle the connection upgrade. With Caddy, I didn’t have to do anything.
The same two-line config worked:
live.vipinpg.com {
reverse_proxy 192.168.1.54:5000
}
Caddy detected the WebSocket upgrade request and handled it automatically. The connection stayed open, and the service worked exactly as it did on plain HTTP.
What I Learned About Certificate Storage
Caddy stores certificates in /var/lib/caddy/.local/share/caddy/certificates. Each domain gets its own directory with the certificate and key files.
I initially worried about losing certificates if the VM crashed. Then I realized: it doesn’t matter much. Caddy will just request new ones on the next startup. Let’s Encrypt allows this (within its rate limits), and the process is fully automated.
I still back up that directory as part of my VM snapshots, but it’s not critical.
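The backup itself is a one-liner with tar. A sketch using a stand-in path — on a systemd install the real data directory is /var/lib/caddy/.local/share/caddy:

```sh
# Stand-in for Caddy's data directory; swap in the real path
CADDY_DATA=/tmp/caddy-demo
mkdir -p "$CADDY_DATA/certificates/example.com"

# Archive just the certificates subtree
tar -czf /tmp/caddy-certs-backup.tar.gz -C "$CADDY_DATA" certificates

# Verify the archive contents
tar -tzf /tmp/caddy-certs-backup.tar.gz
```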
When This Approach Doesn’t Make Sense
If your backend service already handles HTTPS well, adding Caddy in front adds an extra hop with no real benefit. Some modern frameworks (like Go services with built-in TLS) can manage certificates themselves using ACME libraries.
But for legacy apps—especially those written in older PHP, Python, or Ruby frameworks—this approach is far simpler than modifying the application.
Also, if you need advanced load balancing or complex routing logic, you might outgrow Caddy’s simplicity. But for straightforward proxying, it’s hard to beat.
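That said, Caddy's reverse_proxy does cover basic load balancing before you need anything heavier. A sketch with made-up upstream addresses:

```caddyfile
app.example.com {
	# Two upstreams, rotated in turn (addresses are illustrative)
	reverse_proxy 192.168.1.50:8080 192.168.1.55:8080 {
		lb_policy round_robin
	}
}
```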
Key Takeaways
- Caddy can add HTTPS to any HTTP service without modifying the application itself.
- Certificate management is fully automatic—no cron jobs, no manual renewals.
- Configuration is minimal: domain name and backend address are often enough.
- DNS must point to your server before ACME challenges can succeed.
- Ports 80 and 443 must be accessible from the internet for Let’s Encrypt validation.
- WebSocket and other connection upgrades work without extra configuration.
- Reloading the config doesn’t interrupt existing connections.
If you’re running legacy services that don’t support HTTPS, Caddy is the simplest way I’ve found to add it without touching the original code.