Why I Worked on This
I run Tailscale on my Proxmox host to access my self-hosted services remotely. At some point, I decided to move the Tailscale client into a Docker container instead of running it bare-metal on the host. The goal was to advertise subnet routes so I could reach my entire home network (192.168.x.x) through Tailscale without installing the client on every device.
What I didn't expect was that moving to Docker would break connectivity in subtle, intermittent ways. Some connections would hang. Large file transfers would stall. Package updates inside containers would timeout randomly. It took me longer than I'd like to admit to realize this wasn't a Tailscale problem—it was a Docker networking problem involving MTU mismatches and IPv6 path MTU discovery.
My Real Setup
My Proxmox host sits behind a Tailscale subnet router running in Docker. The physical network interface (ens18) has an MTU of 1450: my ISP's connection uses PPPoE, which adds per-packet overhead on top of the standard 1500-byte Ethernet MTU (the PPPoE header alone costs 8 bytes), and my provider's setup leaves an effective MTU of 1450.
Docker, by default, creates bridge networks with an MTU of 1500. This means:
- Host interface: MTU 1450
- Docker bridge (docker0): MTU 1500
- Tailscale interface inside container: MTU 1500
When a container tried to send a 1500-byte packet through Tailscale, it would hit the host's 1450-byte limit. The packet was too large, and because IPv6 Path MTU Discovery (PMTUD) wasn't working correctly across the Docker bridge and Tailscale layers, packets were silently dropped instead of being fragmented or triggering an ICMP "Packet Too Big" response.
This is what's called an MTU black hole—packets disappear without error messages, and connections just hang.
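One way to spot this sooner is to watch for the ICMPv6 "Packet Too Big" messages that PMTUD depends on: if large transfers stall and none ever show up on the bridge, you are probably in a black hole. A minimal capture sketch (assuming plain ICMPv6 with no extension headers, so the type byte sits right after the fixed 40-byte IPv6 header):
# Watch docker0 for ICMPv6 type 2 ("Packet Too Big")
sudo tcpdump -ni docker0 'icmp6 and ip6[40] == 2'
# IPv4 counterpart: ICMP type 3, code 4 ("fragmentation needed")
sudo tcpdump -ni docker0 'icmp[0] == 3 and icmp[1] == 4'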
What Worked
I fixed this by forcing Docker to use an MTU of 1450 to match my physical network interface. This required two changes:
1. Set MTU in Docker daemon
I edited /etc/docker/daemon.json on the Proxmox host and added:
{
  "mtu": 1450
}
Then restarted Docker:
sudo systemctl restart docker
This sets the default MTU for all new Docker networks.
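One way to confirm the daemon-level setting actually took effect is to start a throwaway container on the default bridge and look at its interface (using an alpine image here purely as an example):
# A fresh container on the default bridge should now report mtu 1450 on eth0
docker run --rm alpine ip link show eth0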
2. Set MTU in Docker Compose
For the Tailscale container specifically, I added MTU configuration to my docker-compose.yml:
version: '3.8'

services:
  tailscale:
    image: tailscale/tailscale:latest
    container_name: tailscale
    hostname: proxmox-ts
    environment:
      - TS_AUTHKEY=tskey-auth-xxxxx
      - TS_ROUTES=192.168.1.0/24,192.168.2.0/24
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./tailscale-state:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    privileged: true
    network_mode: host
    restart: unless-stopped

networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1450
The privileged: true flag is necessary for Tailscale to manage routing tables and advertise subnet routes. I initially tried using just cap_add: NET_ADMIN, but subnet route advertisement didn't work until I added privileged mode.
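Separately from container privileges, subnet routing also needs IP forwarding enabled on the machine doing the routing, and because the container runs with network_mode: host, the host's sysctls are what matter. These are the usual settings (the file name here is just an example):
# Enable IPv4 and IPv6 forwarding persistently on the Proxmox host
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf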
After restarting the container, I verified the MTU inside:
docker exec tailscale ip link show tailscale0
It showed MTU 1450, matching the host.
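Because the tailscale service itself uses network_mode: host, the driver_opts MTU mainly protects the other containers attached to the compose-created bridge. That network's options can be checked as well (myproject_default below is a placeholder for your compose project's network name):
# Confirm the compose-created network carries the MTU option
docker network inspect myproject_default --format '{{ json .Options }}'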
What Didn't Work
Trying to fix it after the fact
I initially tried setting MTU manually inside the running container using ip link set commands. This didn't persist across container restarts and didn't fix the issue because the Docker bridge itself still had the wrong MTU.
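For reference, this is roughly the kind of change I was making by hand (the interface name matches my setup); it takes effect immediately but evaporates on the next restart and leaves the bridge untouched:
# Inside the running container: temporary, and does nothing for docker0
docker exec tailscale ip link set dev tailscale0 mtu 1450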
Assuming IPv4 was the only problem
I spent time debugging IPv4 packet fragmentation before realizing that IPv6 was also in play. Tailscale assigns each node an IPv6 address alongside its 100.x.y.z IPv4 address and can carry traffic over IPv6. Unlike IPv4, IPv6 doesn't allow routers to fragment packets; only the sender can fragment. This means PMTUD must work correctly, or packets get dropped silently.
The Docker bridge was interfering with ICMP Packet Too Big messages that PMTUD relies on, creating the black hole.
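To rule out the host firewall as the thing eating those messages, listing ICMP-related rules is a quick sanity check (iptables/ip6tables shown; adapt if you use nftables):
# PMTUD breaks if ICMPv6 errors are dropped anywhere on the path
sudo ip6tables -S | grep -i icmp
# The IPv4 "fragmentation needed" messages matter too
sudo iptables -S | grep -i icmp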
Not checking MTU on all interfaces
I initially only checked the Tailscale interface MTU. I should have checked:
- Physical host interface
- Docker bridge
- Tailscale interface inside container
- Any VLANs or additional networks
The mismatch between any of these layers can cause problems.
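A quick way to compare all of these at once is a small loop over the interfaces involved (names here match my setup: ens18, docker0, tailscale0; adjust for yours):
# Print each layer's MTU side by side
for i in ens18 docker0 tailscale0; do
  ip -o link show "$i" | awk '{print $2, $4, $5}'
done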
Key Takeaways
- Docker's default MTU of 1500 assumes a standard Ethernet network. If your network uses PPPoE, VPNs, or tunnels, the MTU is likely lower.
- MTU mismatches cause silent packet drops, not clear errors. Symptoms include timeouts, stalled transfers, and working small requests but failing large ones.
- Check MTU on every network layer: physical interface, Docker bridge, container interfaces, and overlay networks like Tailscale.
- Set MTU in both /etc/docker/daemon.json and in docker-compose files. The daemon setting applies to new networks; compose settings apply to specific containers.
- Tailscale subnet routing in Docker requires privileged: true or very specific capabilities. I tried avoiding privileged mode but couldn't get routes to advertise reliably without it.
- IPv6 PMTUD is more fragile than IPv4 fragmentation. If you're using Tailscale (which uses IPv6 internally), MTU issues hit harder.
How to Check for This Problem
If you suspect MTU issues:
# Check host interface MTU
ip link show
# Check Docker bridge MTU
docker network inspect bridge | grep -i mtu
# Test connectivity with different packet sizes
ping -M do -s 1422 target_ip # 1450-byte packet: should pass a 1450 MTU
ping -M do -s 1472 target_ip # 1500-byte packet: fails if the path MTU is 1450
The -M do flag tells ping not to fragment packets; the payload size plus 28 bytes of IP and ICMP headers gives the packet size on the wire. If larger packets fail but smaller ones work, you have an MTU problem.
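To pin down the actual path MTU instead of guessing sizes, a small probe loop helps. This is only a sketch: it assumes Linux iputils ping (which understands -M do), and the target address is a placeholder for a host you reach through the tunnel.
#!/bin/sh
# Rough path-MTU probe: binary-search the largest ICMP payload that passes
# with "don't fragment" set, then add 28 bytes of IP/ICMP headers.
target=192.168.1.10   # placeholder: a host across the Tailscale route
probe() { ping -c 1 -W 2 -M do -s "$1" "$target" >/dev/null 2>&1; }
lo=1000; hi=1472
probe "$hi" && lo=$hi            # full 1500-byte packets already fit
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if probe "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "largest working payload: $lo bytes -> path MTU about $((lo + 28)) bytes"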
Why This Matters
This isn't just a Tailscale issue. Any containerized network service that routes traffic—VPN servers, reverse proxies, subnet routers—can hit this. The symptoms are confusing because small requests work fine (DNS, SSH, web page loads) while large transfers fail silently.
I lost several hours debugging "broken" services that were actually fine—the network layer was just dropping packets without telling anyone.