
Debugging Docker DNS resolution failures when containers can't resolve external domains but host networking works fine

Why I Had to Debug This

I run multiple Docker containers on my Proxmox host for self-hosted services—n8n, Cronicle, monitoring tools, and a few experimental setups. One day, I spun up a new container and noticed it couldn't pull updates or reach external APIs. The error was clear: DNS resolution failed.

What confused me was that the host itself had no issues. I could ping google.com, resolve domains, and browse the web without problems. But inside the container, every external domain lookup timed out.

This wasn't a one-time fluke. I've hit this exact issue three times now across different setups—once on my main Proxmox node, once on a test VM, and once after changing my ISP's DNS settings. Each time, the symptoms were identical: host networking worked fine, container DNS failed.

What I Actually Saw

When I ran docker exec -it <container> ping google.com, I got:

ping: google.com: Temporary failure in name resolution

Same thing with curl:

curl: (6) Could not resolve host: google.com

But on the host:

$ ping google.com
PING google.com (142.250.185.46) 56(84) bytes of data.

No issues. The container was clearly not using the same DNS resolver as the host.

How I Debugged It

I started by checking what DNS server the container was actually using. Inside the container, I ran:

cat /etc/resolv.conf

The output showed:

nameserver 127.0.0.11

This is Docker's embedded DNS server. It answers container-name lookups itself and forwards everything else upstream, to whatever DNS Docker is configured to use or derives from the host's /etc/resolv.conf. But clearly, something was broken in that chain.
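One way to narrow down where the chain breaks is to query the embedded server explicitly. This assumes the image ships nslookup (the busybox version in Alpine images works):

# Ask Docker's embedded DNS directly, bypassing the default resolver path
docker exec -it <container> nslookup google.com 127.0.0.11

If this times out, the embedded server can't reach its upstream. If it answers, the problem is elsewhere in the container's resolver config.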

Next, I checked the host's /etc/resolv.conf:

nameserver 127.0.0.53

That's systemd-resolved's stub resolver, which is the default on Ubuntu-based systems. The problem is that Docker can't forward queries to 127.0.0.53: it's a loopback address, and inside a container 127.0.0.53 points at the container's own loopback, where nothing is listening.
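To see which upstream servers are actually hiding behind that stub, you can ask systemd-resolved directly:

# Show the real upstream DNS servers behind 127.0.0.53
resolvectl status

On older releases the equivalent command is systemd-resolve --status.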

I also checked Docker's daemon configuration:

cat /etc/docker/daemon.json

It was empty. No custom DNS settings. Docker was falling back to its defaults, which apparently weren't working.
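The daemon usually leaves a hint about this in its logs; when it discards loopback-only nameservers, it logs a warning along the lines of "No non-localhost DNS nameservers are left in resolv.conf" before falling back. Worth a quick grep:

# Look for DNS-related warnings from the Docker daemon
journalctl -u docker.service --no-pager | grep -i dns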

What Actually Fixed It

I tried several things. Here's what worked and what didn't.

Option 1: Set DNS in Docker Daemon Config

I edited /etc/docker/daemon.json and added:

{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
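One caution before restarting: a malformed daemon.json prevents Docker from starting at all. Recent Docker releases (23.0 and later, if I remember right) can validate the file first:

# Check that the daemon config parses cleanly before restarting
sudo dockerd --validate --config-file /etc/docker/daemon.json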

Then restarted Docker:

sudo systemctl restart docker

After this, new containers could resolve external domains. This worked because I explicitly told Docker to use Google's public DNS instead of relying on the host's systemd-resolved setup.
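A throwaway container is the quickest way to confirm the change took. One thing worth knowing: docker pull resolves registry names through the host's resolver, so you can pull a test image even while container DNS is broken.

# New containers pick up the daemon-level DNS automatically;
# containers created before the change may need recreating
docker run --rm alpine nslookup google.com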

The downside? This applies to all containers. If I wanted some containers to use a different DNS (like my local DNS server), I'd need to override it per container.

Option 2: Override DNS Per Container

For cases where I didn't want to change the global Docker config, I used the --dns flag when running a container:

docker run --dns=8.8.8.8 --dns=8.8.4.4 <image>

This worked fine for one-off containers or testing. But for my docker-compose stacks, I had to add it to the compose file:

services:
  myservice:
    image: myimage
    dns:
      - 8.8.8.8
      - 8.8.4.4

This gave me more control but required editing every compose file where I hit the issue.
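One gotcha here: DNS settings are applied when a container is created, so after editing the compose file a plain restart isn't enough; the containers have to be recreated:

# Recreate the stack so the new dns: entries actually apply
docker compose up -d --force-recreate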

Option 3: Fix systemd-resolved on the Host

Instead of working around the problem, I tried fixing the root cause. I edited /etc/systemd/resolved.conf and set:

[Resolve]
DNS=8.8.8.8
FallbackDNS=8.8.4.4

Then restarted the service:

sudo systemctl restart systemd-resolved

After this, Docker's embedded DNS started working without needing custom config. The host's DNS setup was now usable by containers.
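As far as I can tell, this works because Docker doesn't read the stub file on systemd-resolved hosts; it reads the non-stub file that systemd-resolved maintains with the real upstream servers. After the restart, that file should reflect the new config:

# The resolv.conf Docker actually consults on systemd-resolved hosts
cat /run/systemd/resolve/resolv.conf

If it now lists 8.8.8.8 instead of nothing (or a dead server), containers get a working upstream again.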

This felt cleaner because it fixed the issue at the system level instead of patching it in Docker.

What Didn't Work

Using Host Networking

I tried running a container with --network host:

docker run --network host <image>

This bypassed Docker's networking entirely and let the container use the host's DNS directly. It worked, but it broke other things—port conflicts, lack of isolation, and issues with services that expected their own network namespace.

I only use host networking now when I absolutely need it for debugging or when a service explicitly requires it.

Restarting Docker Without Fixing Config

Early on, I thought restarting Docker might flush some cache or reset something. It didn't. The issue persisted until I actually changed the DNS configuration.

Why This Happens

Docker's embedded DNS server (127.0.0.11) forwards external queries to whatever upstream DNS it derives from the host. When the host's /etc/resolv.conf points at systemd-resolved's stub on 127.0.0.53, that address is unreachable from inside a container, so Docker has to find the upstream some other way: it reads the non-stub file at /run/systemd/resolve/resolv.conf where available, and otherwise strips the loopback entry and falls back to public defaults. If that file is missing or lists broken servers, containers are left with nothing that works.

This is a known pain point on Ubuntu and Debian-based systems where systemd-resolved is the default. Docker doesn't always handle it gracefully out of the box.

The other common cause is firewall rules blocking DNS traffic on port 53. I didn't hit this myself, but I've seen it mentioned in forums. If you're running strict iptables rules or a custom firewall, check that UDP and TCP port 53 are open.
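If you suspect the firewall, two quick checks (assuming iptables; adapt for nftables or whatever frontend you run):

# Container traffic goes through the FORWARD chain;
# DOCKER-USER is where custom rules usually live
sudo iptables -L DOCKER-USER -n -v
sudo iptables -L FORWARD -n -v | grep -iE 'drop|reject'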

Key Takeaways

  • If containers can't resolve external domains but the host can, check what DNS server Docker is using.
  • Docker's embedded DNS (127.0.0.11) relies on the host's DNS config. If the host uses 127.0.0.53, Docker can't use it.
  • Setting explicit DNS servers in /etc/docker/daemon.json is the most reliable fix for global issues.
  • For per-container control, use --dns or add DNS settings to docker-compose files.
  • Fixing systemd-resolved on the host can solve the problem at the source, but requires understanding your system's DNS setup.
  • Host networking bypasses the issue but breaks container isolation and should only be used when necessary.

This issue isn't rare. I've seen it on fresh Ubuntu installs, after ISP changes, and on systems where someone else configured DNS without thinking about Docker. Once you understand what's happening, it's straightforward to fix. But the first time you hit it, it's frustrating because the host works fine and the error messages don't point to the real cause.