Debugging WireGuard MTU Black Holes When Routing IPv6 Traffic Through CGNAT Connections

Why I Had to Debug This

I run a WireGuard tunnel from my home network to a cloud VPS. My ISP uses CGNAT for IPv4, so I rely on IPv6 for incoming connections. Everything worked fine until it didn't—suddenly, my monitoring tools stopped reaching InfluxDB through the tunnel. Ping worked. SSH worked. But HTTP requests to InfluxDB just hung.

This is the kind of failure that makes you question everything: firewall rules, service configs, DNS. But the pattern was too specific. Small packets went through. Large ones disappeared. That pointed to one thing: MTU black holes.

My Setup and What Changed

My home router gets native IPv6 from the ISP. The WireGuard tunnel runs over IPv6 because CGNAT blocks IPv4 entirely for incoming traffic. The VPS endpoint is dual-stack, but the connection uses IPv6.

Here's what I didn't realize at first: my ISP's network has different MTU limits depending on the protocol. When I switched providers, the IPv6 path changed. The old provider gave me 1492 bytes (standard Ethernet minus PPPoE overhead). The new one uses DS-Lite for IPv4, which drops the MTU to 1452 bytes for IPv4 traffic—but IPv6 stayed at 1492.
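
A quick way to see what a new connection actually allows, before touching any WireGuard settings, is tracepath, which probes the path MTU hop by hop. A sketch, with the VPS hostname as a placeholder:

# Probe the path MTU of the underlying IPv6 connection to the VPS
# (vps.example.com stands in for the real endpoint)
tracepath -6 -n vps.example.com
# A PPPoE line typically reports "pmtu 1492" here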

WireGuard's default interface MTU is 1420 bytes. That should have been safe. But when routing IPv6 traffic through a tunnel that itself runs over IPv6, the math breaks down if you don't account for the full header stack.
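
You can check what wg-quick actually applied on a running tunnel:

# Show the MTU currently set on the WireGuard interface
ip link show dev wg0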

How MTU Black Holes Happen

A black hole occurs when packets are too large to pass through a link, but the network doesn't send back ICMP "Packet Too Big" messages. Path MTU Discovery (PMTUD) relies on those messages to adjust packet size. Without them, large packets just vanish.
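
One way to confirm the "Packet Too Big" messages really are missing is to watch for them on the uplink while a large transfer stalls. A sketch; eth0 is an assumption for the WAN interface:

# ICMPv6 "Packet Too Big" is type 2; ip6[40] is the ICMPv6 type field
# when there are no extension headers. Silence here while a transfer
# hangs points to a black hole.
tcpdump -ni eth0 'icmp6 and ip6[40] == 2'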

In my case:

  • The WireGuard interface was set to 1420 bytes
  • The underlying IPv6 link supported 1492 bytes
  • But WireGuard adds 32 bytes of overhead, plus the IPv6 header (40 bytes) and UDP header (8 bytes)
  • Total overhead: 80 bytes

So a 1420-byte WireGuard packet became a 1500-byte IPv6 packet on the wire. That's more than the 1492-byte PPPoE link can carry, and because the ICMPv6 "Packet Too Big" replies never made it back to trigger PMTUD, the oversized packets simply vanished and the connection broke.
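
The same math as a worked sum, with the 32 bytes of WireGuard overhead broken down into its data header and authentication tag:

  1420  WireGuard interface MTU (default)
+   40  outer IPv6 header
+    8  UDP header
+   32  WireGuard encapsulation (16-byte data header + 16-byte Poly1305 tag)
------
  1500  bytes on the wire, 8 more than the 1492-byte link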

What I Tried First (That Didn't Work)

I assumed it was a service problem. I rolled back InfluxDB and Telegraf to older versions using ZFS snapshots. No change. I checked firewall logs. Nothing blocked. I ran packet captures on the VPS. The SYN packets arrived, but the data transfer stalled.

Then I tested with curl and wget. Small requests worked. Large responses didn't. That confirmed MTU.
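
The test looked roughly like this; the tunnel address and database name are placeholders, and the request body is junk, since the only question is whether the transfer completes or stalls:

# Small request: completes (a handful of tiny packets)
curl -sv http://10.1.0.2:8086/ping

# Large request body: stalls once the segments hit full size
head -c 200000 /dev/urandom | curl -sv --data-binary @- \
  'http://10.1.0.2:8086/write?db=telegraf'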

Calculating the Right MTU

The formula for WireGuard over IPv6 is:

WireGuard MTU = Link MTU - IPv6 header - UDP header - WireGuard overhead
              = 1492 - 40 - 8 - 32
              = 1412 bytes

But if the tunnel can ever run over IPv4 instead (the VPS endpoint is dual-stack), you have to account for the worst case: that outer connection goes through the ISP's DS-Lite tunnel, where the path MTU is only 1452 bytes, and the outer headers are IPv4 (20 bytes) rather than IPv6:

WireGuard MTU = Link MTU - IPv4 header - UDP header - WireGuard overhead
              = 1452 - 20 - 8 - 32
              = 1392 bytes (if DS-Lite is in the path)

I went with 1392 bytes. That's lower than the pure IPv6 calculation, but it works whether the tunnel runs over IPv6 or falls back to IPv4 through DS-Lite, and nothing needs to fragment.

How I Fixed It

I changed the MTU on the WireGuard interface by editing /etc/wireguard/wg0.conf:

[Interface]
Address = 10.1.0.1/24
MTU = 1392
PrivateKey = ...
ListenPort = 51820

Then restarted the tunnel:

wg-quick down wg0
wg-quick up wg0
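
If you want to try a value before bouncing the tunnel, the MTU can also be changed on the live interface; the config edit is still what makes it stick across restarts:

# Lower the MTU on the running interface without restarting the tunnel
ip link set dev wg0 mtu 1392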

On the client side (my Proxmox host), I set the same MTU in that config's [Interface] section. After that, InfluxDB writes started working immediately.

MSS Clamping (And Why I Needed It)

MTU fixes the interface, but TCP connections negotiate their own Maximum Segment Size (MSS) during the handshake. If the client and server don't know about the reduced MTU, they'll still try to send oversized packets.

MSS clamping forces the router to rewrite the MSS value in outgoing SYN packets. I added this to my WireGuard config:

PostUp = iptables -A FORWARD -o wg0 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1352
PostUp = ip6tables -A FORWARD -o wg0 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1352
PostDown = iptables -D FORWARD -o wg0 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1352
PostDown = ip6tables -D FORWARD -o wg0 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1352
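
On a host that has moved to nftables, roughly the same clamp looks like this. A sketch only: the "inet filter" table and "forward" chain are assumptions about the local ruleset:

# Clamp MSS on SYNs forwarded into the tunnel (nftables equivalent)
nft add rule inet filter forward oifname "wg0" tcp flags syn tcp option maxseg size set 1352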

The MSS value is the WireGuard MTU minus the IP and TCP headers:

MSS (IPv4 inside the tunnel) = 1392 - 20 (IPv4 header) - 20 (TCP header) = 1352 bytes
MSS (IPv6 inside the tunnel) = 1392 - 40 (IPv6 header) - 20 (TCP header) = 1332 bytes

1352 is the value the rules above use. Strictly, the ip6tables rule should drop to 1332 (or use --clamp-mss-to-pmtu), because the IPv6 header costs another 20 bytes. Either way, clamping prevents oversized segments even if Path MTU Discovery fails completely.
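
To check that the clamp is actually rewriting handshakes, watch the SYN packets on the tunnel and look at the advertised MSS option. This classic BPF flag match only catches IPv4 SYNs, which is enough for a spot check:

# New IPv4 connections over the tunnel should advertise the clamped "mss 1352"
tcpdump -ni wg0 -v 'tcp[tcpflags] & tcp-syn != 0'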

How to Test If It's Actually Fixed

I used ping with the "Don't Fragment" flag to verify:

ping -M do -s 1364 10.1.0.2

The -s value is the ICMP payload; add 8 bytes of ICMP header and 20 bytes of IPv4 header and you get exactly 1392. If the packet is bigger than the path allows, this fails with a "message too long" or "Frag needed" error (or just times out in a true black hole); if the reply comes back, that size fits. I tested at different sizes to find the exact limit.
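
Instead of guessing, a small loop over payload sizes narrows it down quickly; 10.1.0.2 is the tunnel peer from the config above:

# Sweep payload sizes around the expected limit; with MTU 1392 the last
# size that succeeds should be 1364 (1364 + 8 ICMP + 20 IPv4 = 1392)
for size in 1356 1360 1364 1368 1372; do
  printf '%4d: ' "$size"
  ping -M do -c 1 -W 1 -s "$size" 10.1.0.2 >/dev/null 2>&1 && echo ok || echo "too big"
done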

For IPv6 the headers are larger (40-byte IPv6 header plus 8-byte ICMPv6 header), so the payload drops to 1344 for the same 1392-byte packet:

ping6 -M do -s 1344 2a00:6020:1000:33::1234

This confirmed that 1392 bytes was safe for both protocols.

What I Learned

MTU issues are invisible until they're catastrophic. Small packets (DNS, ping, SSH) work fine. Large ones (HTTP POST, database writes, file transfers) fail silently. If you're running WireGuard over CGNAT or DS-Lite, assume the MTU is lower than you think.

The default WireGuard MTU (1420 bytes) is safe for most networks, but not all. If your ISP uses IPv6 transition mechanisms (DS-Lite, 6rd, MAP-E), you need to recalculate. And if you're routing both IPv4 and IPv6 through the tunnel, use the lowest common denominator.
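
As a rough reference when recalculating, these are the usual per-mechanism overheads; the exact numbers depend on how the ISP deploys them:

PPPoE                            8 bytes   (1500 -> 1492)
DS-Lite / MAP-E (IPv4-in-IPv6)  40 bytes   (1492 -> 1452 for IPv4, as here)
6rd (IPv6-in-IPv4)              20 bytes   (subtract from the IPv4 path MTU)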

MSS clamping is not optional. Path MTU Discovery doesn't work reliably on the modern internet. Too many middleboxes drop ICMP. Clamping forces the right behavior at the TCP layer, which is the only layer most applications care about.

Key Takeaways

  • WireGuard's default MTU assumes a clean 1500-byte path. CGNAT and DS-Lite break that assumption.
  • IPv6 headers are 20 bytes larger than IPv4. That matters when calculating overhead.
  • If small packets work but large ones don't, it's MTU. Always.
  • MSS clamping is the only reliable way to prevent fragmentation when PMTUD fails.
  • Test with ping -M do to verify the actual working MTU before assuming it's correct.

This failure cost me two hours of debugging. Writing it down took twenty minutes. That's the trade-off.