
# Implementing Split-Horizon DNS with Pi-hole and dnsmasq to Serve Different Records for LAN vs VPN Clients on Tailscale

## Why I Needed This

I've been running Tailscale as my VPN for a while now. It's a mesh network built on WireGuard, and it's significantly faster than the OpenVPN setup I used to run. When I first configured it, I took a shortcut: I had my reverse proxy advertise a route to its own LAN IP. This worked, but it meant Tailscale clients were still hitting services via the LAN address, which created unnecessary routing complexity.

The real problem was access control. My reverse proxy sits in front of multiple services, and for WAN connections it requires an extra authentication layer before passing traffic through to the backing service. This is intentional: I don't want the wider internet directly touching service authentication stacks. But some clients, like Nextcloud's desktop sync client, don't support this additional auth. They use the `Authorization` header for their own bearer tokens, and there's no way to inject custom headers.

I also have geo-blocking in place after dealing with some persistent compromise attempts. It blocks connections from unexpected regions, which is fine until I'm traveling or connecting from an IP that's mislocated in the GeoDB.

I needed a way to give Tailscale clients the same privileged access that LAN clients get, without opening up WAN access or breaking geo-restrictions.

## How Split-Horizon DNS Works in Pi-hole

Pi-hole runs on a fork of dnsmasq called `pihole-FTL`, which has the dnsmasq setting `localise-queries` enabled by default. This setting lets dnsmasq return different DNS responses depending on which network interface received the query. If you have multiple A records for the same hostname in `/etc/hosts` or Pi-hole's `custom.list`, dnsmasq returns only the record that matches the subnet of the receiving interface. If no record matches, it returns all of them.

For example, if I have:

```
192.168.3.33  foo.example.com
100.100.3.2   foo.example.com
```

A query received on an interface in `192.168.3.0/24` returns `192.168.3.33`. A query received on an interface in `100.100.0.0/16` (Tailscale's range) returns `100.100.3.2`. A query from any other subnet returns both.

This is exactly what I needed: LAN clients would get LAN IPs, and Tailscale clients would get Tailscale IPs.
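The subnet-matching behavior can be sketched in a few lines of Python. This is my own simulation of the record-selection logic, not Pi-hole or dnsmasq code:

```python
import ipaddress

# Hosts-file records, as they'd appear in /etc/hosts or custom.list:
# hostname -> list of A records
RECORDS = {
    "foo.example.com": ["192.168.3.33", "100.100.3.2"],
}

def localise(hostname, iface_network):
    """Mimic dnsmasq's localise-queries: return only the records that
    fall inside the receiving interface's subnet, or all records if
    none of them match."""
    net = ipaddress.ip_network(iface_network)
    records = RECORDS.get(hostname, [])
    local = [ip for ip in records if ipaddress.ip_address(ip) in net]
    return local or records

# A query arriving on the LAN interface gets the LAN record,
# one arriving on the Tailscale interface gets the Tailscale record,
# and one arriving anywhere else gets both.
print(localise("foo.example.com", "192.168.3.0/24"))  # ['192.168.3.33']
print(localise("foo.example.com", "100.100.0.0/16"))  # ['100.100.3.2']
print(localise("foo.example.com", "10.0.0.0/8"))      # both records
```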

## The Docker Networking Problem

My Pi-hole was running in a Docker container with bridged networking:

```
docker run \
  -d \
  --name=pihole \
  -p 53:53 -p 53:53/udp \
  -p 8080:80 \
  -v $PWD/pihole/conf:/etc/pihole \
  -v $PWD/pihole/dnsmasq.d:/etc/dnsmasq.d/ \
  pihole/pihole
```

This broke the split-horizon setup. With bridged networking, Pi-hole only sees the Docker bridge interface; it has no visibility into which physical interface a query actually arrived on. All queries look like they came from the same place.

I killed the container and switched to host networking:

```
docker run \
  -d \
  --network=host \
  -e WEB_PORT=8080 \
  -v $PWD/pihole/conf:/etc/pihole \
  -v $PWD/pihole/dnsmasq.d:/etc/dnsmasq.d/ \
  pihole/pihole
```

The `WEB_PORT` environment variable was necessary because Pi-hole's web interface defaults to port 80, which was already in use on the host.
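If you prefer Compose, the same host-networking setup might look like this. A sketch based on the flags above; the service name and restart policy are my own choices:

```yaml
# docker-compose.yml -- host networking so pihole-FTL can see real interfaces
services:
  pihole:
    image: pihole/pihole
    network_mode: host          # bridged networking breaks localise-queries
    environment:
      WEB_PORT: "8080"          # host port 80 is already in use
    volumes:
      - ./pihole/conf:/etc/pihole
      - ./pihole/dnsmasq.d:/etc/dnsmasq.d/
    restart: unless-stopped
```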

## DNS Stopped Responding

The container started, but DNS queries weren't being answered. I could see `pihole-FTL` listening on port 53:

```
tcp   0   0 0.0.0.0:53   0.0.0.0:*   LISTEN   pihole-FTL
udp   0   0 0.0.0.0:53   0.0.0.0:*            pihole-FTL
```

Packet captures showed queries arriving, but no responses going out. Pi-hole's query log was completely silent. I reverted to bridged networking temporarily to restore service, then dug into the configuration.

Eventually I found the issue in Pi-hole's web interface under "Interface settings." Pi-hole was configured to respond only on interface `eth0`. My server doesn't have an `eth0`; it has `enp0s25` (thanks, udev). With host networking, Pi-hole was listening on all interfaces but refusing to respond because it didn't recognize them. I changed the setting to "Permit all origins" and restarted the container. DNS started working immediately.
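For reference, on a v5-era Pi-hole this web UI setting is persisted as a key in `setupVars.conf`; a sketch, assuming that version (newer Pi-hole releases store configuration differently):

```
# /etc/pihole/setupVars.conf (Pi-hole v5-era; path and key assumed)
DNSMASQ_LISTENING=all    # "Permit all origins" in the web interface
```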

## Configuring the Split

I migrated the DNS records I wanted to split from dnsmasq-format files into `/etc/pihole/custom.list` in hosts format:

```
192.168.3.33  service.example.com
100.100.3.2   service.example.com
```

Then I tested from my laptop (which is on the Tailscale network):

```
$ dig +short service.example.com @100.100.3.2
100.100.3.2
$ dig +short service.example.com @192.168.3.13
192.168.3.33
```

It worked. Queries to the Tailscale IP returned the Tailscale record. Queries to the LAN IP returned the LAN record.

I also removed the route advertisement from my reverse proxy's Tailscale configuration, since it was no longer needed:

```
sudo tailscale down
sudo tailscale set --advertise-routes=
sudo tailscale up
```

## Making Tailscale Clients Use Pi-hole

Tailscale has a feature called Split DNS that lets you specify which DNS server should handle queries for specific domains. I logged into Tailscale's admin console and added a split DNS entry pointing queries for my domain at Pi-hole's Tailscale IP (`100.100.3.2`).

On Linux clients, I had to explicitly accept the DNS configuration:

```
sudo tailscale up --accept-dns
```

The Android app has a toggle for this in its settings, which I enabled. After that, DNS queries from Tailscale clients started hitting Pi-hole via the Tailscale interface, and they received Tailscale IPs in response.
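Conceptually, Split DNS is just suffix-based resolver selection: match the query name against a domain, and hand it to that domain's nameserver. A minimal sketch of the idea (my own illustration, not Tailscale code; the upstream resolver is a placeholder):

```python
# Split DNS routing table: domain suffix -> resolver that handles it
SPLIT_DNS = {
    "example.com": "100.100.3.2",   # Pi-hole's Tailscale IP
}
DEFAULT_RESOLVER = "1.1.1.1"        # placeholder for the default upstream

def resolver_for(hostname):
    """Pick the resolver whose domain suffix matches the query name,
    falling back to the default for everything else."""
    for suffix, server in SPLIT_DNS.items():
        if hostname == suffix or hostname.endswith("." + suffix):
            return server
    return DEFAULT_RESOLVER

print(resolver_for("service.example.com"))  # 100.100.3.2
print(resolver_for("github.com"))           # 1.1.1.1
```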

## What This Solved

Tailscale clients now connect to services using their Tailscale IPs. This means:

- They bypass geo-blocking (Tailscale's subnet is allow-listed, just like the LAN)
- They skip the additional authentication layer at the reverse proxy
- They don't need to route through the LAN gateway unnecessarily

I was also able to close off WAN access to several services that previously needed to be exposed. The only exceptions are services that interact with Chromecasts, because Chromecasts ignore local DNS and always use Google's resolvers.

## What Didn't Work

The initial attempt with bridged Docker networking was a dead end. I should have anticipated this; it's obvious in hindsight that a container can't see host interfaces through a bridge.

The interface restriction in Pi-hole also caught me off guard. I'd never noticed the setting before because the default (`eth0`) happened to match the interface name in previous environments. With host networking, that assumption broke.

I also had to migrate records from dnsmasq-format files to the hosts-style `custom.list`. This wasn't difficult, but it was manual work, and I can't automate it easily if I want to keep using `localise-queries`.

## Key Takeaways

Split-horizon DNS with Pi-hole and Tailscale is straightforward once you understand how `localise-queries` works. The key requirement is that Pi-hole must be able to see the actual network interfaces receiving queries; bridged Docker networking breaks this.

The setup gives me transparent access to services when I'm off-net, without exposing those services to the wider internet or fighting geo-blocking. It also means I can write stricter firewall rules, since I'm no longer relying on WAN access for most things.

Tailscale's mesh architecture helps here too: devices on the same local network connect directly to each other instead of routing through a coordinator, so there's no performance penalty when I'm at home.