# Building a Split-Horizon DNS Setup with Pi-hole and dnsmasq
## Why I Needed This
I run Pi-hole in a Docker container on my home network. It handles DNS for everything—laptops, phones, servers, the usual mess of IoT devices that refuse to die.
For months, I had a problem: some of my self-hosted services needed to resolve differently depending on where the request came from. LAN clients should hit the local IP directly. WAN clients should go through Cloudflare Workers, which handles caching, basic DDoS protection, and keeps my home IP out of public DNS records.
I didn't want to maintain two separate DNS servers. I also didn't want to rely on NAT hairpinning, which my router handles inconsistently. What I needed was split-horizon DNS—one name, multiple answers, depending on the source.
## My Setup Before This
Pi-hole was running in Docker with bridged networking. The container looked like this:
```bash
docker run -d \
  --name=pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80 \
  -v ./pihole/conf:/etc/pihole \
  -v ./pihole/dnsmasq.d:/etc/dnsmasq.d \
  pihole/pihole
```
Most of my custom DNS records lived in dnsmasq config files under `/etc/dnsmasq.d/`, formatted like this:
```
address=/service.example.com/192.168.1.50
```
This worked fine for basic overrides, but it couldn't handle returning different IPs based on where the query came from.
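For completeness, this is the sanity check I run against that kind of override; `192.168.1.5` is the Pi-hole's own address in these examples. Every client gets the same answer, no matter which network it sits on:

```bash
# Ask the Pi-hole directly; the address= override returns one fixed answer.
dig +short service.example.com @192.168.1.5
# 192.168.1.50
```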
## How dnsmasq's `localise-queries` Works
Pi-hole uses a fork of dnsmasq called `pihole-FTL`. It inherits a setting called `localise-queries`, which is enabled by default.
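If you want to confirm the option is actually present, it's easy to grep the generated dnsmasq config inside the container. The path below is where the v5-era image writes its config; newer releases generate it differently, so treat this as a sketch:

```bash
# Check for localise-queries in the config pihole-FTL actually loads
# (container name "pihole", as in the docker run command above).
docker exec pihole grep -R localise-queries /etc/dnsmasq.d/
```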
The idea is simple: if you have multiple A records for the same hostname in `/etc/hosts`, dnsmasq will return only the one that matches the subnet of the interface where the query arrived.
For example, if `/etc/pihole/custom.list` contains:
```
192.168.1.50   service.example.com
203.0.113.10   service.example.com
```
Then:
- A query from `192.168.1.0/24` gets `192.168.1.50`
- A query from anywhere else gets `203.0.113.10`
This only works if dnsmasq can see the actual interface the query arrived on. That became the problem.
## The Problem with Bridged Networking
In my original setup, Docker was using bridged networking. Pi-hole saw every query as coming from the bridge interface, not the physical NIC. This meant `localise-queries` couldn't work—it had no way to know whether the request came from LAN or WAN.
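You can see the shape of the problem from the host: published ports deliver queries to the container over the Docker bridge, so from dnsmasq's point of view they all arrive on an interface in the bridge's subnet rather than on the LAN. Watching the bridge makes that obvious (`docker0` is the default bridge name; user-defined networks get `br-...` names):

```bash
# Everything headed for the container crosses the Docker bridge; that bridge
# subnet is the only "local" network pihole-FTL can match records against.
sudo tcpdump -ni docker0 udp port 53
```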
I had to switch to host networking.
## Switching to Host Networking
I stopped the container and recreated it with `--network=host`:
```bash
docker run -d \
  --network=host \
  --name=pihole \
  -e WEB_PORT=8080 \
  -v ./pihole/conf:/etc/pihole \
  -v ./pihole/dnsmasq.d:/etc/dnsmasq.d \
  pihole/pihole
```
I had to set `WEB_PORT=8080` because something else on the host was already using port 80.
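If you're not sure what's already holding a port before you pick an alternative, this one-liner answers it:

```bash
# Show whatever process is listening on port 80 on the host.
sudo ss -ltnp 'sport = :80'
```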
The container started, but DNS stopped working.
## DNS Stopped Responding
Pi-hole was running. `pihole-FTL` was listening on port 53. Queries were arriving. But no responses were going out.
I checked with tcpdump:
```bash
sudo tcpdump -i any port 53
```
Queries were visible, but Pi-hole wasn't replying.
I eventually found the issue in the web interface under Settings → DNS. There's an option called "Interface listening behavior," and it was set to respond only on `eth0`.
My server's NIC is named `enp0s25`, not `eth0`. Pi-hole was ignoring queries because they weren't coming from the interface it expected.
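In hindsight, a pair of queries would have surfaced this much faster: if loopback answers while the LAN address doesn't, the daemon is filtering by interface rather than being down (`pi.hole` is Pi-hole's built-in name for itself, and `192.168.1.5` is the server's LAN address in these examples):

```bash
# Loopback vs. the NIC address: a split result points at interface filtering.
dig +short +time=2 pi.hole @127.0.0.1
dig +short +time=2 pi.hole @192.168.1.5
```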
I changed the setting to "Permit all origins" and restarted the container. DNS came back.
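To keep the fix from depending on a web-UI click, the listening mode can also be set when the container is created. On the v5-era image the documented knob is the `DNSMASQ_LISTENING` environment variable (newer images spell it `FTLCONF_dns_listeningMode`); a sketch of the same run command with it added:

```bash
# DNSMASQ_LISTENING=all mirrors "Permit all origins" (v5 image variable;
# check your image's docs for the current name).
docker run -d \
  --network=host \
  --name=pihole \
  -e WEB_PORT=8080 \
  -e DNSMASQ_LISTENING=all \
  -v ./pihole/conf:/etc/pihole \
  -v ./pihole/dnsmasq.d:/etc/dnsmasq.d \
  pihole/pihole
```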
## Migrating Records to `/etc/hosts` Format
The `localise-queries` feature only works with records in `/etc/hosts` format, not dnsmasq's `address=` syntax.
I had to move the records I wanted to split from `/etc/dnsmasq.d/` into `/etc/pihole/custom.list`, formatted like this:
```
192.168.1.50   service.example.com
203.0.113.10   service.example.com
```
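Edits to `custom.list` don't require recreating the container; reloading the resolver is enough. The `pihole restartdns` subcommand is what I use on v5 (double-check the name on newer releases):

```bash
# Reload pihole-FTL so it re-reads /etc/pihole/custom.list.
docker exec pihole pihole restartdns
```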
I tested from a LAN client:
```bash
dig +short service.example.com @192.168.1.5
192.168.1.50
```
That worked. But I didn't have an easy way to test the WAN side from inside my network.
## Testing from a Different Subnet
I spun up a VM on a different VLAN to simulate an external query. From there:
```bash
dig +short service.example.com @192.168.1.5
203.0.113.10
```
It returned the WAN IP. The split-horizon was working.
## Routing WAN Traffic Through Cloudflare Workers
For the WAN side, I didn't want to expose my home IP directly. I set up a Cloudflare Worker to act as a reverse proxy.
The Worker script looks like this:
```js
// Service-worker-style Worker: intercept every request and proxy it
// to the home origin.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Rewrite only the origin; the path and query string pass through unchanged.
  const url = new URL(request.url)
  url.hostname = 'my-home-ip.example.com'
  url.port = '443'

  // Forward the original method, headers, and body to the home server.
  return fetch(url, {
    method: request.method,
    headers: request.headers,
    body: request.body
  })
}
```
The WAN-side record in the examples above (`203.0.113.10`) points at Cloudflare rather than at my home connection. Cloudflare handles SSL termination, and the Worker forwards requests to my actual home IP over HTTPS.
This setup keeps my home IP out of public DNS and adds a layer of caching and rate limiting.
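A simple way to confirm that outside traffic really is going through Cloudflare rather than straight to the house is to check the response headers from somewhere off the LAN:

```bash
# Cloudflare-served responses carry "server: cloudflare" and a cf-ray header;
# a direct hit on the home box would not.
curl -sI https://service.example.com | grep -iE '^(server|cf-ray):'
```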
## What Worked
- LAN clients resolve to the local IP and connect directly
- WAN clients resolve to the Cloudflare Worker IP
- No need for NAT hairpinning or multiple DNS servers
- Pi-hole's existing blocklists and logging still work
- The setup survives container restarts
## What Didn't Work
I initially tried to keep using dnsmasq's `address=` syntax. That doesn't work with `localise-queries`. The migration to `/etc/hosts` format was mandatory.
I also wasted time troubleshooting DNS failures before realizing Pi-hole was ignoring queries because of the interface name mismatch. The web interface doesn't make this obvious.
Another limitation: this only works for IPv4. The dnsmasq documentation explicitly states that `localise-queries` doesn't support IPv6. I don't have IPv6 enabled on my LAN yet, so this didn't affect me.
## Key Takeaways
- Split-horizon DNS with Pi-hole requires host networking, not bridge mode
- Records must be in `/etc/hosts` format, not dnsmasq syntax
- Check Pi-hole's interface listening settings if DNS stops responding after switching to host networking
- Cloudflare Workers can act as a simple reverse proxy for WAN access without exposing your home IP
- This setup doesn't work for IPv6
The system has been stable for several months now. I haven't had to touch it since the initial setup.