Why I Worked on This
I run Pi-hole in a Podman container on my home server, and I wanted to access it remotely through Tailscale’s subnet router feature. The local network worked perfectly—clients got DNS responses instantly. But the moment I tried using it over Tailscale from my laptop on another network, DNS queries just hung. The requests showed up in Pi-hole’s logs with the correct client IP, but the client never received a response.
This wasn’t a theoretical problem. I needed DNS filtering while traveling, and I’d already invested time in the rootless Podman setup with port forwarding through UFW. Something about the combination of WireGuard (which Tailscale uses), container networking, and DNS packet routing was breaking.
My Real Setup
Here’s what I was running:
- Host: Ubuntu 24.04.1 with UFW firewall
- Pi-hole: Running in a rootless Podman container
- Network mode: slirp4netns with a custom port handler to preserve real client IPs
- Port forwarding: UFW redirects port 53 to container port 30053
- Remote access: Tailscale subnet router advertising my home network
The UFW rules looked like this:
*nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p udp --dport 53 -d 192.168.0.126 -j DNAT --to-destination 192.168.0.126:30053
COMMIT
I specifically used slirp4netns:port_handler=slirp4netns instead of the default Podman NAT because I wanted to see real client IPs in Pi-hole’s logs, not just the container gateway address.
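For reference, the relevant part of a Quadlet .container unit with this setup would look something like this (a sketch, not my exact file; the image tag and port lines are illustrative):

```ini
# pihole.container (Quadlet systemd unit) - illustrative sketch
[Container]
Image=docker.io/pihole/pihole:latest
# slirp4netns with the slirp4netns port handler so Pi-hole logs real client IPs
Network=slirp4netns:port_handler=slirp4netns
# Pi-hole listens on 53 inside the container; the host exposes it on 30053,
# and UFW redirects port 53 to 30053 (see the nat rule above)
PublishPort=30053:53/udp
PublishPort=30053:53/tcp
```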
What Didn’t Work
I spent hours testing different parts of the stack:
- DNS over Tailscale without containers worked fine on another machine running dnsmasq directly
- Local network queries to the container’s custom port (dig @192.168.0.126 -p 30053) worked
- The same query over Tailscale timed out
- Switching back to default Podman networking didn’t help
The Pi-hole logs showed the query arriving. The Reply column showed either an IP or NXDOMAIN—never N/A. This meant Pi-hole was processing the request and generating a response. The response just never made it back to the client.
I tried setting explicit MTU values everywhere:
# Tailscale interface
sudo ip link set tailscale0 mtu 1500
# Podman container
--net=slirp4netns:port_handler=slirp4netns,mtu=1500
Still nothing. The problem persisted.
What Actually Fixed It
The issue was MTU mismatch causing packet fragmentation, but not in the way I expected.
WireGuard (which Tailscale uses) has a default MTU of 1420 bytes to account for encryption overhead. When Pi-hole sent back a DNS response larger than 1420 bytes through the Tailscale tunnel, the packet got fragmented. But here’s the problem: IPv6 doesn’t support in-transit fragmentation—it has to happen at the source.
The combination of:
- Podman’s slirp4netns network (MTU 1500)
- Host network (MTU 1500)
- Tailscale tunnel (MTU 1420)
…meant that large DNS responses were being dropped silently. Small responses (like A records for simple domains) might work, but anything requiring multiple records or DNSSEC responses would fail.
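The numbers line up once you do the arithmetic: WireGuard’s 1420 default comes from reserving worst-case encapsulation overhead (an outer IPv6 header, the UDP header carrying WireGuard, and WireGuard’s own framing) out of a standard 1500-byte link:

```shell
# Where WireGuard's 1420-byte default MTU comes from (worst-case IPv6 underlay).
link_mtu=1500        # host and slirp4netns default
outer_ipv6=40        # outer IPv6 header
outer_udp=8          # UDP header carrying the WireGuard packet
wg_framing=32        # type + receiver index + counter + Poly1305 auth tag
tunnel_mtu=$((link_mtu - outer_ipv6 - outer_udp - wg_framing))
echo "$tunnel_mtu"   # 1420
```

Any inner packet larger than that 1420-byte budget has to be fragmented before entering the tunnel, and over IPv6 that fragmentation never happens in transit.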
The fix was to lower the MTU at the container level to match Tailscale’s constraints:
Network=slirp4netns:port_handler=slirp4netns,mtu=1280
I chose 1280 instead of 1420 because:
- It’s the minimum MTU required by IPv6
- It leaves headroom for additional encapsulation layers
- It’s conservative enough to work across most VPN configurations
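As a sanity check on that choice: with a 1280-byte MTU, the largest unfragmented DNS-over-UDP payload over IPv6 is 1232 bytes, which is exactly the EDNS buffer size the 2020 "DNS flag day" recommended to avoid fragmentation:

```shell
# Max unfragmented DNS-over-UDP payload at the IPv6 minimum MTU.
container_mtu=1280
ipv6_header=40
udp_header=8
max_dns_payload=$((container_mtu - ipv6_header - udp_header))
echo "$max_dns_payload"   # 1232
```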
After this change, DNS queries over Tailscale worked immediately. No more timeouts.
Why This Was Hard to Debug
The problem was invisible in most tools:
- tcpdump on the host showed packets leaving correctly
- Pi-hole logs showed queries being answered
- No error messages anywhere
The only clue was that responses never arrived at the client. Packet fragmentation failures are silent—there’s no ICMP error, no log entry, just dropped packets.
What finally pointed me in the right direction was testing with dig +short for domains I knew would return small responses versus large ones. Small responses worked. Large ones didn’t. That pattern only makes sense with MTU issues.
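In hindsight, the pattern reduces to a simple size check: a UDP response only survives the tunnel if its payload plus headers fits within the tunnel MTU. A sketch of that reasoning (the byte counts are illustrative, not measured):

```shell
# Does a DNS response of a given payload size fit through a 1420-byte tunnel
# without fragmentation? (IPv6 + UDP headers assumed; sizes illustrative.)
tunnel_mtu=1420
fits() {
  local payload=$1 ipv6=40 udp=8
  if [ $((payload + ipv6 + udp)) -le "$tunnel_mtu" ]; then
    echo "ok"
  else
    echo "dropped"
  fi
}
fits 120    # small A-record answer: ok
fits 1600   # DNSSEC-sized answer: dropped
```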
Key Takeaways
- WireGuard/Tailscale MTU defaults to 1420—any layer above it needs to respect that limit
- Container networking adds another MTU boundary—slirp4netns defaults to 1500, which is too large
- IPv6 fragmentation doesn’t work like IPv4—oversized packets just get dropped
- DNS responses can be larger than you think—especially with DNSSEC or multiple A/AAAA records
- Set container MTU to 1280 when routing through VPNs—it’s conservative but reliable
If you’re running Pi-hole in a container and accessing it through any VPN, check your MTU settings first. It’s not obvious, it’s not logged, and it only breaks sometimes—which makes it incredibly frustrating to diagnose.
What I’d Do Differently
If I were setting this up again, I’d:
- Set the container MTU to 1280 from the start
- Test with known large DNS responses (like dig google.com) over the VPN immediately
- Use tracepath to check MTU along the entire path: tracepath -n [destination]
I’d also consider whether rootless Podman with slirp4netns is worth the complexity for this use case. It preserves client IPs in logs, which is nice, but it adds another network layer to troubleshoot. For a simpler setup, running Pi-hole directly on the host or using host networking mode would eliminate one potential failure point.
But for now, the MTU fix works, and I can use DNS filtering remotely without issues.