Why I Built a DNS-over-TLS Failover Chain
I run Pi-hole as my network-wide DNS resolver and ad blocker. It works well, but I kept hitting two problems:
- My ISP could still see every DNS query I made in plain text
- If my upstream DNS went down, my entire network lost resolution
I wanted encryption and redundancy without adding complexity I couldn’t maintain. After testing different setups, I settled on unbound forwarding queries over DNS-over-TLS (DoT) to Quad9, with Cloudflare as a fallback.
This isn’t about performance or privacy theater. It’s about keeping DNS working when one provider has issues, while keeping queries encrypted from my local network to the upstream resolver.
My Setup Context
I run Pi-hole on a Proxmox VM (Debian 12). The VM has 2GB RAM and 2 CPU cores, which is more than enough for DNS duties on a home network with ~30 devices.
Before this setup, I pointed Pi-hole directly at Cloudflare’s 1.1.1.1. It worked fine until Cloudflare had a brief outage. My entire network lost DNS for about 20 minutes. That’s when I decided redundancy mattered more than simplicity.
Why Unbound Instead of Cloudflared
I initially tried cloudflared as a DoH proxy, following Pi-hole’s official documentation. It worked, but I ran into two issues:
- Cloudflare deprecated the proxy-dns feature in cloudflared in November 2025
- Adding failover meant running multiple cloudflared instances, which felt messy
Unbound is a full recursive resolver that can also forward queries over DNS-over-TLS natively. It lets me define multiple upstream servers with automatic failover in a single configuration file.
Installing and Configuring Unbound
I installed unbound directly on the same VM running Pi-hole:
sudo apt update
sudo apt install unbound
The default unbound configuration conflicts with Pi-hole because both try to bind to port 53. I needed unbound to listen on a different port (5335 in my case) and forward queries over DoT.
I created a new configuration file at /etc/unbound/unbound.conf.d/pihole.conf:
server:
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-ip6: no
    do-udp: yes
    do-tcp: yes

    # Privacy settings
    hide-identity: yes
    hide-version: yes

    # Needed so unbound can validate the upstream TLS certificates
    tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"

    # Performance
    num-threads: 2
    msg-cache-size: 8m
    rrset-cache-size: 16m
    cache-min-ttl: 300
    cache-max-ttl: 86400

    # Logging (disabled after testing)
    verbosity: 0

forward-zone:
    name: "."
    # Quad9 DoT (primary)
    forward-addr: 9.9.9.9@853#dns.quad9.net
    forward-addr: 149.112.112.112@853#dns.quad9.net
    # Cloudflare DoT (fallback)
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
    forward-tls-upstream: yes
This configuration does a few things:
- Listens only on localhost port 5335 (so Pi-hole can query it)
- Disables IPv6 because my ISP doesn’t support it reliably
- Forwards all queries over TLS (port 853) to Quad9 and Cloudflare
- Uses reasonable cache settings to reduce upstream queries
After saving the file, I restarted unbound:
sudo systemctl restart unbound
sudo systemctl status unbound
I tested it with dig to confirm it was resolving queries:
dig @127.0.0.1 -p 5335 google.com
The response came back with the correct IP and showed the query time. That meant unbound was working and forwarding queries upstream.
Configuring Pi-hole to Use Unbound
In the Pi-hole web interface, I went to Settings → DNS and disabled all the default upstream DNS servers. Then I added a custom DNS entry:
127.0.0.1#5335
I also disabled the conditional forwarding option because unbound handles all forwarding now.
After saving, I tested DNS resolution from a client device on my network. Everything resolved normally, and Pi-hole’s query log showed requests being forwarded to the local unbound instance.
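For anyone who prefers editing files over the web UI: on Pi-hole v5 the same upstream setting lives in /etc/pihole/setupVars.conf (I'm assuming v5 here; v6 moved its settings to a TOML file). Roughly:

```
# /etc/pihole/setupVars.conf (Pi-hole v5): single upstream, the local unbound
PIHOLE_DNS_1=127.0.0.1#5335
```

Run pihole restartdns afterwards so FTL picks up the change.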
How Failover Actually Works
Unbound doesn’t use a strict “primary then fallback” model. It tries all configured forward-addr entries and picks the fastest one that responds. If one upstream is slow or unresponsive, unbound automatically routes queries to the others.
I tested this by temporarily blocking port 853 to Quad9 using iptables:
sudo iptables -A OUTPUT -p tcp --dport 853 -d 9.9.9.9 -j DROP
sudo iptables -A OUTPUT -p tcp --dport 853 -d 149.112.112.112 -j DROP
DNS queries still resolved, but they took about 2-3 seconds longer while unbound figured out Quad9 wasn’t responding. After that, queries went straight to Cloudflare with normal latency.
When I removed the iptables rules, unbound switched back to Quad9 within a few minutes without any manual intervention.
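The behavior I observed matches unbound's latency-based server selection. A toy sketch of the idea (this is not unbound's actual algorithm, which keeps smoothed RTT estimates and periodically re-probes servers marked slow; the RTT numbers below are invented for illustration):

```python
import random

# Toy sketch of RTT-based upstream selection among the four
# forward-addr entries from the unbound config above.
rtt_ms = {
    "9.9.9.9": 12.0,          # Quad9 primary
    "149.112.112.112": 14.0,  # Quad9 secondary
    "1.1.1.1": 18.0,          # Cloudflare primary
    "1.0.0.1": 19.0,          # Cloudflare secondary
}
DOWN_MS = 400.0  # illustrative cutoff; unbound tracks a per-server timeout

def pick_upstream(estimates):
    """Prefer the fastest upstream that still looks alive."""
    alive = {ip: rtt for ip, rtt in estimates.items() if rtt < DOWN_MS}
    if not alive:
        # Everything looks down: probe a random server rather than give up.
        return random.choice(list(estimates))
    return min(alive, key=alive.get)

print(pick_upstream(rtt_ms))  # fastest responder: 9.9.9.9
rtt_ms["9.9.9.9"] = rtt_ms["149.112.112.112"] = 10_000.0  # simulate Quad9 outage
print(pick_upstream(rtt_ms))  # automatic failover: 1.1.1.1
```

This is why blocking Quad9 only cost a few seconds of delay: once the estimates for both Quad9 addresses blew past the timeout, every subsequent query went straight to Cloudflare.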
What Didn’t Work
My first attempt used cloudflared with multiple systemd services, one for each upstream provider. The idea was to run cloudflared on different ports (5053 for Cloudflare, 5054 for Quad9) and configure Pi-hole to use both.
This technically worked, but Pi-hole doesn’t have smart failover logic. It round-robins between configured upstreams, which meant half my queries went to each provider regardless of which one was faster or more reliable.
I also tried using DNS-over-HTTPS instead of DNS-over-TLS in unbound, but the configuration was more complex and didn’t offer any real benefit. TLS on port 853 is simpler and just as encrypted.
Performance and Latency
I don’t have detailed benchmarks because I didn’t set up formal testing. What I can say from observation:
- Cached queries in Pi-hole respond in under 1ms (same as before)
- Uncached queries take 20-40ms on average (measured with dig)
- No noticeable difference in browsing speed or streaming
The extra hop through unbound adds a tiny bit of latency, but it’s not something I notice in day-to-day use.
Monitoring and Logs
I disabled verbose logging in unbound after confirming everything worked. The logs were filling up with routine query information that I didn’t need.
If something breaks, I can re-enable logging by changing verbosity: 0 to verbosity: 1 in the unbound config and restarting the service.
Pi-hole’s query log shows which queries were blocked or forwarded, which is enough for my needs.
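If I ever want a scripted health check beyond eyeballing Pi-hole's log, a raw DNS query can be sent with nothing but Python's standard library. A sketch, assuming unbound is listening on 127.0.0.1:5335 as configured above (packet layout per RFC 1035; build_query and check_resolver are my own helper names, not part of any tool):

```python
import socket
import struct

def build_query(name: str, qid: int = 0x1234) -> bytes:
    """Build a bare-bones DNS A/IN query packet (RFC 1035)."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD flag, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def check_resolver(host: str = "127.0.0.1", port: int = 5335) -> bool:
    """Return True if the resolver answers a query for example.com."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        try:
            sock.sendto(build_query("example.com"), (host, port))
            reply, _ = sock.recvfrom(512)
        except OSError:  # timeout or port unreachable: resolver is down
            return False
    # A sane reply echoes our query ID and has the QR (response) bit set.
    return len(reply) >= 12 and reply[:2] == b"\x12\x34" and reply[2] & 0x80 != 0

if __name__ == "__main__":
    print("unbound OK" if check_resolver() else "unbound NOT responding")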
Maintenance and Updates
Unbound updates through the normal Debian package manager, so I don’t need to manually track releases. I run apt update && apt upgrade weekly as part of my regular VM maintenance.
The configuration file hasn’t needed changes since I set it up. If Quad9 or Cloudflare change their DoH endpoints, I’d need to update the forward-addr lines, but that’s rare.
Key Takeaways
- Unbound handles DoH forwarding and failover in a single service
- Failover is automatic but not instant—expect a few seconds of delay when switching upstreams
- DNS-over-TLS (port 853) is simpler to configure than DNS-over-HTTPS
- Pi-hole doesn’t need to know about failover logic; unbound handles it
- This setup adds minimal latency and works reliably on a low-spec VM
If you’re running Pi-hole and want encrypted DNS with redundancy, this approach works without requiring constant babysitting.