Debugging DNS Leaks in WireGuard Split Tunnel Configurations: Forcing DNS Through Specific Interfaces with nftables

Why I Worked on This

I run WireGuard for selective routing — some traffic goes through the VPN, the rest uses my regular ISP connection. The standard approach is to hardcode IP ranges into AllowedIPs, but that falls apart immediately. CDNs rotate IPs constantly. Services like GitHub span multiple autonomous systems. You end up maintaining lists that go stale within hours.

What I actually wanted was simple: specify domains, not IPs. If I’m accessing github.com, route it through WireGuard. If I’m accessing anything else, use the regular connection. The system should handle DNS resolution and route traffic dynamically based on what domains resolve to, not what I manually configured last week.

I also needed DNS queries for tunneled domains to go through the VPN itself. My ISP intercepts port 53 traffic and returns fake IPs for blocked domains — even when I query 1.1.1.1 directly. The only reliable fix is to encrypt the DNS query by routing it through WireGuard.

My Real Setup

I’m running this on Debian 12 with:

  • WireGuard built into the kernel (5.10+)
  • dnsmasq 2.89 (built with nftset support)
  • nftables 1.0.6
  • A WireGuard endpoint running on my router (OpenWRT) with dnsmasq listening on 10.1.0.1:53

The stack works like this: dnsmasq intercepts DNS queries, forwards queries for tunneled domains through WireGuard to the VPN’s DNS server, adds resolved IPs to nftables sets, and nftables marks packets destined for those IPs so policy routing can send them through the tunnel.

How It Actually Works

DNS Resolution Path

When an application queries a domain I’ve marked for tunneling:

  1. The query hits dnsmasq listening on 127.0.0.53 (or gets redirected there if the app tries to bypass system DNS)
  2. dnsmasq checks its config and sees this domain should use the VPN’s DNS server (10.1.0.1)
  3. The DNS query routes through WireGuard because 10.1.0.1 is only reachable through the tunnel
  4. The response comes back with IPs
  5. dnsmasq adds those IPs to nftables sets (wg_domains4 for IPv4, wg_domains6 for IPv6)
  6. The application gets the DNS response normally
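
Once dnsmasq has processed a query, the effect of steps 4 and 5 is visible on the live system. As a rough check (the exact output depends on what has been resolved recently, and the nft command needs root):

```shell
# Resolve a tunneled domain, then list the set dnsmasq just populated.
dig +short github.com
sudo nft list set inet wg_routing wg_domains4
```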

Traffic Routing Path

When the application sends a packet to one of those IPs:

  1. nftables checks if the destination IP exists in wg_domains4 or wg_domains6
  2. If it matches, nftables applies a firewall mark (fwmark 0x1) to the packet
  3. A policy routing rule directs packets with fwmark 0x1 to routing table 100
  4. Table 100 has a default route through wg0
  5. WireGuard encrypts and sends the packet
  6. Return traffic comes back through the tunnel; connection tracking reverses the masquerade and delivers it to the original socket

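Stripped of the wg-quick wrappers shown later in this post, steps 2 through 4 come down to a couple of ip commands. This is a sketch of the moving parts only; the real setup wires them into the WireGuard config:

```shell
# fwmark 0x1 -> routing table 100 -> default route via wg0.
ip rule add fwmark 0x1 table 100 priority 100
ip route add default dev wg0 table 100

# Ask the kernel which way a marked packet would go
# (192.0.2.1 is a placeholder for an IP in the nftables set).
ip route get 192.0.2.1 mark 0x1
```
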
DNS Bypass Prevention

Applications that try to query DNS servers directly (like 8.8.8.8) get intercepted by an nftables NAT rule that redirects all port 53 traffic to 127.0.0.1:53 where dnsmasq is listening. The exception is traffic from root (UID 0) — dnsmasq runs as root and needs to query upstream servers without creating a redirect loop.

Configuration Files

/etc/resolv.conf

This must point to dnsmasq:

nameserver 127.0.0.53

If this points anywhere else (like your router’s IP), DNS queries bypass dnsmasq entirely and the nftables sets never get populated. I verified this by pointing it at my router, flushing the sets, and watching traffic go out my regular connection instead of the tunnel.
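
A quick way to confirm which resolver is actually answering is the SERVER line in dig's statistics output; it should show 127.0.0.53, not the router:

```shell
dig github.com +noall +stats | grep SERVER
```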

/etc/systemd/resolved.conf.d/no-stub.conf

systemd-resolved runs a stub listener on 127.0.0.53 by default. I need dnsmasq to bind there instead:

[Resolve]
DNSStubListener=no

/etc/wireguard/domains.toml

[dns]
tunneled = ["10.1.0.1"]
default = ["9.9.9.9", "149.112.112.112"]

[tunneled]
domains = [
  "github.com",
  "icanhazip.com",
]

The key insight: 10.1.0.1 is a private IP on the WireGuard subnet. It’s only reachable through the tunnel. DNS queries to this address naturally route through WireGuard without needing any special packet marking.

/etc/wireguard/generate-config.py

I wrote a Python script that reads domains.toml and generates the dnsmasq configuration. It creates server directives that forward queries for tunneled domains to 10.1.0.1, and nftset directives that populate the nftables sets with resolved IPs:

#!/usr/bin/env python3
import tomllib
from pathlib import Path

TOML_PATH = Path("/etc/wireguard/domains.toml")
OUTPUT_PATH = Path("/etc/dnsmasq.d/wireguard.conf")

def main():
  with open(TOML_PATH, "rb") as f:
    config = tomllib.load(f)

  tunneled_dns = config["dns"]["tunneled"]
  default_dns = config["dns"]["default"]
  domains = config["tunneled"]["domains"]

  lines = [
    "# Generated automatically - do not edit",
    "",
    "user=root",
    "",
  ]

  for domain in domains:
    lines.extend(f"server=/{domain}/{server}" for server in tunneled_dns)
    lines.append(f"nftset=/{domain}/.{domain}/4#inet#wg_routing#wg_domains4,6#inet#wg_routing#wg_domains6")

  lines.extend([
    "",
    "listen-address=127.0.0.1",
    "listen-address=127.0.0.53",
    "port=53",
    "bind-interfaces",
    "no-resolv",
    *[f"server={server}" for server in default_dns],
    "cache-size=1000",
  ])

  OUTPUT_PATH.write_text("\n".join(lines) + "\n")
  print(f"Generated {OUTPUT_PATH}")

if __name__ == "__main__":
  main()
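
To sanity-check the directive-generation loop without writing to /etc, the same logic can be run against an in-memory copy of the sample values from domains.toml. This is only a sketch mirroring the script above:

```python
# Mirror the generation loop with the sample values from domains.toml.
tunneled_dns = ["10.1.0.1"]
default_dns = ["9.9.9.9", "149.112.112.112"]
domains = ["github.com", "icanhazip.com"]

lines = []
for domain in domains:
    # One forwarding directive per tunneled DNS server...
    lines.extend(f"server=/{domain}/{server}" for server in tunneled_dns)
    # ...and one nftset directive feeding both the v4 and v6 sets.
    lines.append(
        f"nftset=/{domain}/.{domain}/"
        "4#inet#wg_routing#wg_domains4,6#inet#wg_routing#wg_domains6"
    )
lines.extend(f"server={server}" for server in default_dns)

for line in lines:
    print(line)
```

The first two output lines are the github.com forwarding and set-population directives; the last two are the fallback resolvers for everything else.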

/etc/systemd/system/dnsmasq.service.d/nftables.conf

This systemd drop-in sets up the nftables infrastructure before dnsmasq starts:

[Service]
ExecStartPre=/usr/sbin/nft add table inet wg_routing
ExecStartPre=/usr/sbin/nft add set inet wg_routing wg_domains4 "{ type ipv4_addr; flags interval,timeout; timeout 60m; }"
ExecStartPre=/usr/sbin/nft add set inet wg_routing wg_domains6 "{ type ipv6_addr; flags interval,timeout; timeout 60m; }"
ExecStartPre=/usr/sbin/nft add chain inet wg_routing output "{ type route hook output priority mangle; }"
ExecStartPre=/usr/sbin/nft add rule inet wg_routing output ip daddr @wg_domains4 ct state new meta mark set 0x1 ct mark set meta mark
ExecStartPre=/usr/sbin/nft add rule inet wg_routing output ip6 daddr @wg_domains6 ct state new meta mark set 0x1 ct mark set meta mark
ExecStartPre=/usr/sbin/nft add rule inet wg_routing output ct mark 0x1 meta mark set ct mark
ExecStartPre=/usr/sbin/nft add chain inet wg_routing postrouting "{ type nat hook postrouting priority srcnat; }"
ExecStartPre=/usr/sbin/nft add rule inet wg_routing postrouting oifname "wg0" masquerade
ExecStartPre=/usr/sbin/nft add chain inet wg_routing output_nat "{ type nat hook output priority -100; }"
ExecStartPre=/usr/sbin/nft add rule inet wg_routing output_nat meta skuid != 0 udp dport 53 redirect to :53
ExecStartPre=/usr/sbin/nft add rule inet wg_routing output_nat meta skuid != 0 tcp dport 53 redirect to :53
ExecStopPost=-/usr/sbin/nft delete table inet wg_routing

The sets have a 60-minute timeout. IPs that haven’t been queried in an hour get removed automatically. The ct mark rules preserve the fwmark across connection tracking so established connections continue routing through the tunnel even after the initial packet.
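
conntrack-tools can filter the kernel's connection table by mark, which makes it easy to see which flows the ct mark rules have claimed for the tunnel (assuming the conntrack utility is installed):

```shell
# List tracked connections carrying ct mark 1.
sudo conntrack -L --mark 1
```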

/etc/wireguard/wg0.conf

[Interface]
Address = 10.1.0.2/24
PrivateKey = 
MTU = 1420
Table = off

PostUp = ip rule del fwmark 0x1 table 100 priority 100 2>/dev/null || true
PostUp = ip rule add fwmark 0x1 table 100 priority 100
PostUp = ip route add default dev wg0 table 100
PostUp = sysctl -w net.ipv4.conf.all.rp_filter=2

PostDown = ip rule del fwmark 0x1 table 100 priority 100 2>/dev/null || true
PostDown = ip route del default dev wg0 table 100 2>/dev/null || true

[Peer]
PublicKey = 
Endpoint = :51820
PersistentKeepalive = 25

Table = off is critical. Without it, WireGuard adds its own routing rules that interfere with policy routing. The rp_filter=2 setting allows packets with source IPs from the local network to route through WireGuard — without this, reverse path filtering drops them.

Activation

sudo systemctl daemon-reload
sudo systemctl restart systemd-resolved
sudo python3 /etc/wireguard/generate-config.py
sudo systemctl restart dnsmasq
sudo wg-quick up wg0

What Worked

Traffic to tunneled domains routes through WireGuard automatically. I can verify this by running:

curl icanhazip.com

It returns my VPN’s IP, not my real one. If I remove icanhazip.com from domains.toml, regenerate the config, and restart dnsmasq, it returns my real IP.

DNS queries for tunneled domains go through the VPN. I confirmed this by capturing traffic on wg0 while querying github.com — the DNS query shows up in the WireGuard traffic.

Applications trying to bypass system DNS by querying 8.8.8.8 directly get redirected to dnsmasq. I tested this with:

dig @8.8.8.8 github.com

The query still populates the nftables sets and traffic routes through WireGuard.

Long-lived connections stay tunneled. The ct mark rules preserve the fwmark across connection tracking, so SSH sessions and long downloads don’t break when IPs age out of the sets.

What Didn’t Work

DNS-over-HTTPS bypasses everything. Firefox and Chrome can use DoH to send DNS queries over port 443 instead of port 53. The nftables redirect doesn’t catch this, so queries for tunneled domains don’t populate the sets and traffic goes out the regular connection. I had to disable DoH in both browsers.

Initial implementation used iptables instead of nftables. The syntax for populating sets from dnsmasq is different — iptables uses ipset with a separate daemon, while nftables has native set support. I switched to nftables because the integration is cleaner and the timeout handling is more reliable.

I initially set the set timeout to 5 minutes. This caused problems with long-running downloads — IPs would age out mid-transfer and subsequent packets would route through the regular connection instead of WireGuard. Increasing the timeout to 60 minutes fixed this.

The rp_filter setting was initially set to 1 (strict mode). This caused packets from local network IPs (192.168.x.x) to be dropped when routing through WireGuard because the return path didn’t match the incoming interface. Setting it to 2 (loose mode) fixed this.

Real Limitations

This only works for domains I explicitly list. There’s no way to automatically tunnel all traffic to a specific country or ASN without maintaining IP lists.

Related domains need explicit configuration. If I add github.com to the list, api.github.com works automatically because dnsmasq's domain matching covers subdomains. But github.io is a separate domain entirely, so I need to add it on its own.
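
Covering github.io therefore means listing it in domains.toml alongside github.com; the generated dnsmasq directives for it come out looking like this:

```
server=/github.io/10.1.0.1
nftset=/github.io/.github.io/4#inet#wg_routing#wg_domains4,6#inet#wg_routing#wg_domains6
```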

The DNS redirect doesn’t catch DoH or DoT. Applications using encrypted DNS protocols bypass the entire setup unless I block port 853 (DoT) and somehow identify DoH traffic on port 443.

Connection tracking state persists across configuration changes. If I remove a domain from the list and restart dnsmasq, existing connections to that domain continue routing through WireGuard until they close. I have to manually flush conntrack entries with:

sudo conntrack -F
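
A full flush drops every tracked connection on the machine, which is heavy-handed. When only one destination is stale, conntrack can delete entries by address instead (192.0.2.1 stands in for whatever IP the removed domain resolved to):

```shell
sudo conntrack -D -d 192.0.2.1
```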

The 60-minute timeout is arbitrary. I picked it based on my usage patterns. If you have connections that stay idle longer than that, you might need to increase it.

Key Takeaways

Domain-based split tunneling is possible on Linux without maintaining static IP lists. The combination of dnsmasq, nftables, and policy routing handles dynamic IP resolution transparently.

Routing DNS through the VPN prevents ISP interception. Querying 1.1.1.1 over port 53 doesn’t help if your ISP does deep packet inspection — you need to encrypt the query by sending it through the tunnel.

Connection tracking state matters. The ct mark rules are not optional — without them, established connections break when IPs age out of the nftables sets.

DNS-over-HTTPS is a real problem. There’s no clean way to intercept it without breaking other HTTPS traffic. The only solution is to disable DoH at the application level.

Table = off in WireGuard config is critical. Without it, WireGuard’s automatic routing rules conflict with policy routing and traffic doesn’t route correctly.
