Why I Had to Debug This
I run Pi-hole on my Proxmox server as my primary DNS filter. My Docker host uses systemd-resolved for DNS management, which seemed like a clean setup until containers started failing to resolve domains intermittently. The pattern was strange: some containers worked fine, others couldn’t resolve anything, and restarting Docker temporarily fixed it.
The real trigger was when I noticed Pi-hole’s query log showing blocked requests from 127.0.0.1. That shouldn’t happen—my containers should be querying Pi-hole directly, not localhost. Something in the DNS chain was breaking.
My Actual Setup
Here’s what I was running:
- Proxmox 8.x host with multiple VMs
- Ubuntu 22.04 VM running Docker Engine (not Docker Desktop)
- Pi-hole running in a separate LXC container on the same Proxmox host
- systemd-resolved enabled on the Docker host (Ubuntu default)
- Docker daemon configured to use Pi-hole’s IP directly in /etc/docker/daemon.json
My /etc/docker/daemon.json looked like this:

```json
{
  "dns": ["192.168.1.53"],
  "dns-search": ["local"]
}
```
Pi-hole was listening on 192.168.1.53, and I confirmed it was reachable from the Docker host with dig @192.168.1.53 google.com.
What Was Actually Breaking
The problem wasn’t Docker’s configuration—it was systemd-resolved intercepting DNS queries before they reached Pi-hole.
When systemd-resolved is active, it creates a stub resolver at 127.0.0.53 and modifies /etc/resolv.conf to point there. Even though Docker was configured to use Pi-hole directly, some containers were still using the host’s resolver stack, which went through systemd-resolved first.
Here’s what I found when checking the Docker host:
```
$ cat /etc/resolv.conf
nameserver 127.0.0.53
options edns0 trust-ad
search local
```
That 127.0.0.53 address is systemd-resolved’s stub. When containers inherited this (which some did, depending on how they were started), their DNS queries hit systemd-resolved, which then forwarded to upstream servers—but not necessarily Pi-hole.
Pi-hole saw these forwarded queries as coming from 127.0.0.1 because systemd-resolved was making the actual request, not the container. Since Pi-hole blocks localhost queries by default (a security measure), resolution failed.
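The stub takeover is easy to spot once you know what to look for. Here’s a quick sketch of the check I ended up scripting (`detect_stub` is my own helper name, not a standard tool):

```shell
# detect_stub: report whether a resolv.conf-style body points at
# systemd-resolved's stub listener (127.0.0.53) or a real upstream.
detect_stub() {
  if printf '%s\n' "$1" | grep -q '^nameserver 127\.0\.0\.53'; then
    echo "stub"    # systemd-resolved will intercept these queries
  else
    echo "direct"  # queries go straight to the listed nameserver
  fi
}

# On my Docker host this printed "stub", confirming the interception:
detect_stub "$(cat /etc/resolv.conf 2>/dev/null)"
```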
What I Tried First (That Didn’t Work)
My initial assumption was that Docker’s DNS settings weren’t being applied. I tried:
- Restarting the Docker daemon after every config change
- Explicitly setting DNS in docker-compose files with `dns: 192.168.1.53`
- Checking Docker’s embedded DNS server at `127.0.0.11` (it was working, but still routing through systemd-resolved)
None of this fixed the core issue because I was treating symptoms, not the cause.
I also considered disabling systemd-resolved entirely, but that breaks other things on Ubuntu—NetworkManager relies on it, and removing it cleanly requires more surgery than I wanted.
The Actual Fix
I needed systemd-resolved to forward DNS queries directly to Pi-hole instead of using its default upstream servers. This required editing systemd-resolved’s configuration.
I modified /etc/systemd/resolved.conf:
```ini
[Resolve]
DNS=192.168.1.53
FallbackDNS=
Domains=~.
DNSStubListener=yes
```
Key points:
- `DNS=192.168.1.53` tells systemd-resolved to use Pi-hole as the primary resolver
- `FallbackDNS=` (empty) prevents it from falling back to Google/Cloudflare DNS
- `Domains=~.` routes all DNS queries through this resolver, not just specific domains
- `DNSStubListener=yes` keeps the stub resolver active (some tools depend on it)
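If you’d rather not edit the packaged file, the same settings can live in a drop-in, which survives systemd upgrades. A sketch, assuming you create the directory first with `sudo mkdir -p /etc/systemd/resolved.conf.d` (the `pihole.conf` filename is arbitrary):

```ini
# /etc/systemd/resolved.conf.d/pihole.conf
[Resolve]
DNS=192.168.1.53
FallbackDNS=
Domains=~.
DNSStubListener=yes
```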
After editing, I restarted systemd-resolved:
```
sudo systemctl restart systemd-resolved
```
Then I verified the change:
```
$ resolvectl status
Global
       Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
     DNS Servers: 192.168.1.53
```
Now systemd-resolved was forwarding to Pi-hole correctly, and Pi-hole saw queries coming from the Docker host’s real IP (192.168.1.x), not localhost.
Why This Worked
Docker containers inherit the host’s DNS configuration by default unless you override it. Even with /etc/docker/daemon.json configured, some containers (especially those using host networking or specific runtime flags) still inherit the host’s /etc/resolv.conf.
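An easy way to see which camp a container falls into is to read the resolv.conf it actually received: a container behind Docker’s embedded DNS shows 127.0.0.11, while one on host networking shows the host’s file unchanged (commands assume a Linux host with Docker installed; alpine is just a small image for the check):

```
$ docker run --rm alpine cat /etc/resolv.conf
$ docker run --rm --network host alpine cat /etc/resolv.conf
```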
By making systemd-resolved forward correctly, I fixed DNS for:
- Containers that ignored Docker’s DNS settings
- Host-level DNS queries (useful for SSH, apt, etc.)
- Any service on the Docker host that relied on systemd-resolved
Pi-hole’s query log now showed the correct source IPs, and I could see which containers were making which requests—critical for debugging ad-blocking rules.
What Still Broke (And How I Handled It)
One container—running a custom Python script—was hardcoded to use Google’s DNS (8.8.8.8). This bypassed everything. I had to modify the script to respect the system resolver or explicitly pass Pi-hole’s IP as an environment variable.
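For the environment-variable route, the container’s entrypoint can choose the server explicitly. A minimal sketch of the selection logic (the `DNS_SERVER` variable name and `pick_dns` helper are mine, not anything the script originally had):

```shell
# pick_dns: prefer an explicit DNS_SERVER env var; otherwise fall back to
# the first nameserver in a resolv.conf-style file (default /etc/resolv.conf).
pick_dns() {
  if [ -n "${DNS_SERVER:-}" ]; then
    echo "$DNS_SERVER"
  else
    awk '/^nameserver/ { print $2; exit }' "${1:-/etc/resolv.conf}"
  fi
}
```

With `DNS_SERVER=192.168.1.53` set in the compose file, the script queries Pi-hole explicitly; unset, it follows whatever systemd-resolved hands the container.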
Another issue: Pi-hole’s web interface started showing duplicate queries because both the Docker daemon and systemd-resolved were configured to use it. This didn’t break anything, but made logs noisier. I removed the dns setting from /etc/docker/daemon.json since systemd-resolved was now handling it globally.
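For reference, the slimmed-down /etc/docker/daemon.json kept only the search domain (a sketch; merge with any other daemon options you already set, and restart Docker afterwards):

```json
{
  "dns-search": ["local"]
}
```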
Key Takeaways
- systemd-resolved intercepts DNS even when you think you’ve configured Docker to bypass it
- Pi-hole blocks localhost queries by default; if you see `127.0.0.1` in its logs, something upstream is forwarding incorrectly
- Docker’s DNS settings apply to the daemon, not necessarily all containers (depends on network mode and runtime flags)
- Fixing DNS at the host level (systemd-resolved) is cleaner than patching every container individually
- Always verify with `resolvectl status` and Pi-hole’s query log; don’t assume config files are being read
This setup has been stable for months now. Containers resolve correctly, Pi-hole filters as expected, and I can trace DNS queries back to their source without false positives from localhost.