Fixing Docker Compose Service Communication Failures After IPv6 Enablement: Resolving Dual-Stack DNS and Routing Issues in Bridge Networks

Why I Had to Fix This

I run several Docker Compose stacks on my home server—mostly monitoring tools, automation services, and a few databases. Everything worked fine until I decided to enable IPv6 on my Docker daemon. I thought it would be straightforward: flip a setting, restart Docker, done.

Instead, my containers stopped talking to each other. Services that had been running for months suddenly couldn't resolve each other's names. My n8n workflows failed because they couldn't reach the database. My monitoring stack went dark. The logs showed DNS timeouts and connection refused errors.

I spent an entire evening tracking down what went wrong.

What I Was Running

My setup is an Ubuntu 22.04 LTS VM on a Proxmox host, running Docker Engine 24.0.7 and Docker Compose v2.23. I have about eight different compose stacks, each with its own custom bridge network. Most services communicate using container names as hostnames, as sketched below.
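
As a sketch of that pattern (service and image names here are placeholders, not my actual stack), each compose file looks roughly like this, with app reaching the database at the hostname db:

services:
  app:
    image: my-app:latest   # hypothetical application image
    depends_on:
      - db
  db:
    image: postgres:16

Within a compose project, Docker's embedded DNS resolves each service name to its container, so app connects to db:5432 without any hardcoded IPs.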

I enabled IPv6 because I wanted to experiment with dual-stack networking for some external services. I edited /etc/docker/daemon.json and added:

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/64"
}

Then I restarted Docker with sudo systemctl restart docker.
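
A quick sanity check at this point is to confirm the daemon actually applied the setting to the default bridge:

docker network inspect --format '{{.EnableIPv6}}' bridge

It should print true once the restart has taken effect.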

That's when everything broke.

The First Problem: Existing Networks Didn't Get IPv6

I assumed that enabling IPv6 in the daemon config would automatically apply to all networks. It doesn't. Existing bridge networks created before the change remained IPv4-only.

I confirmed this by running:

docker network inspect my_app_network

The output showed no IPv6 subnet. Containers on that network had no IPv6 addresses, but Docker's embedded DNS resolver was now trying to handle both IPv4 and IPv6 queries. This created a mismatch—DNS lookups sometimes returned AAAA records that pointed nowhere.
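
A faster way to spot the missing subnet is to print just the IPAM ranges (substitute your own network name):

docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}' my_app_network

An IPv4-only network prints a single IPv4 range; a dual-stack one prints the IPv6 subnet as well.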

The fix was to recreate the networks. I had to stop all containers in each stack, remove the network, and let Docker Compose recreate it on the next docker compose up. But there was a catch: I needed to explicitly define IPv6 in my compose files.

Here's what I added to each docker-compose.yml:

networks:
  my_app_network:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.20.0.0/16
        - subnet: fd00:1::/64

I used different IPv6 subnets for each stack to avoid conflicts. After recreating the networks, containers got both IPv4 and IPv6 addresses.
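
For reference, the recreation itself was just a down/up cycle per stack, followed by a check that containers actually picked up IPv6 addresses (the container name is a placeholder):

docker compose down    # removes the stack's containers and its networks
docker compose up -d   # recreates the network with the new IPv6 settings
docker inspect --format '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{end}}' my_container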

The Second Problem: DNS Resolution Was Inconsistent

Even after containers had IPv6 addresses, DNS was flaky. Sometimes ping db worked. Sometimes it didn't. I'd see errors like:

ping: db: Temporary failure in name resolution

I checked the embedded DNS server by running:

docker exec my_container cat /etc/resolv.conf

It showed nameserver 127.0.0.11, which is Docker's internal DNS. That part was correct. But when I tested resolution manually:

docker exec my_container nslookup db

It sometimes returned only an IPv6 address, and sometimes both IPv4 and IPv6. The inconsistency suggested a race condition or a caching issue inside Docker's DNS.
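
One way to see the flakiness clearly is to query each record type separately. Not all images ship dig, so you may need to install dnsutils (Debian/Ubuntu) or bind-tools (Alpine) inside the container first:

docker exec my_container dig +short db A
docker exec my_container dig +short db AAAA

If one record type intermittently comes back empty while the other succeeds, you're seeing the same inconsistency I hit.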

I tried forcing IPv4-only resolution by setting dns options in the compose file, but that didn't help. What finally worked was pinning explicit upstream resolvers and tightening the resolver options in /etc/docker/daemon.json:

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/64",
  "dns": ["8.8.8.8", "8.8.4.4"],
  "dns-opts": ["ndots:0"]
}

The ndots:0 option tells the resolver to try every name as fully qualified first, before appending any search domains, which reduced some of the lookup confusion. After restarting Docker again, DNS became stable.
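
To convince yourself it's actually stable rather than just lucky, hammer the resolver a few times in a row (getent is available in glibc-based images, less reliably in busybox ones):

for i in $(seq 1 10); do docker exec my_container getent hosts db; done

Every iteration should return the same set of addresses.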

The Third Problem: Routing Between Containers Failed

Even with working DNS, some containers couldn't actually connect to each other over IPv6. I'd get:

curl: (7) Failed to connect to db port 5432: No route to host

I checked the routing table inside a container:

docker exec my_container ip -6 route

It showed a default route, but packets weren't making it to the destination. I suspected the host's IPv6 forwarding was disabled.

I checked:

sysctl net.ipv6.conf.all.forwarding

It returned 0. I enabled it:

sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo sysctl -w net.ipv6.conf.default.forwarding=1

Then I made it permanent by adding these lines to /etc/sysctl.conf:

net.ipv6.conf.all.forwarding=1
net.ipv6.conf.default.forwarding=1
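
Since I had already set the values with sysctl -w, this was just making them survive a reboot, but sysctl -p reloads the file immediately if you skip that step:

sudo sysctl -p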

After that, IPv6 routing between containers worked.
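
To verify, you can force IPv6 explicitly from inside a container (service names are placeholders; some minimal images use ping6 instead of ping -6):

docker exec my_app ping -6 -c 3 db
docker exec my_app curl -6 -sS http://web:8080/

Repeating the same commands with -4 confirms the IPv4 path still works too.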

What Didn't Work

I tried a few things that seemed logical but didn't help:

  • Setting ip6tables: true in daemon.json. On Docker 24 this still required experimental mode anyway, and my kernel didn't have the required modules loaded, so I didn't want to mess with that.
  • Using network_mode: bridge in the compose file. This bypassed the custom network entirely and broke service discovery.
  • Manually assigning IPv6 addresses to containers. Docker's IPAM handled this better automatically once the network was configured correctly.

I also wasted time thinking the problem was with my firewall. I checked the iptables and ip6tables rules, but Docker manages the IPv4 rules itself, and since I hadn't enabled ip6tables in the daemon, Docker wasn't touching the IPv6 tables at all. The issue was purely in the network and DNS configuration.

Key Takeaways

Enabling IPv6 in Docker isn't just a daemon setting. You have to:

  • Recreate existing networks with explicit IPv6 subnets
  • Enable IPv6 forwarding on the host
  • Watch for DNS resolver quirks when dual-stack is enabled

If you're running multiple compose stacks, plan to take them down one at a time and recreate the networks. Don't assume Docker will handle the migration automatically.

Also, test both IPv4 and IPv6 connectivity after making changes. I used ping and curl from inside containers to verify that both address families worked.

The whole process took me about three hours, including the time spent reading Docker's networking documentation and testing different configurations. Now my stacks run on dual-stack networks without issues.