Why I Needed This
I run multiple VLANs at home—one for trusted devices, one for IoT gadgets, and another for guests. My ISP doesn't respect privacy, and I wanted certain traffic to route through a WireGuard VPN while keeping IoT devices on the local gateway. IoT devices are chatty, often poorly secured, and I didn't want their constant cloud connections clogging my VPN tunnel or triggering rate limits.
The goal was simple: route my main VLAN (192.168.1.0/24) through WireGuard, but leave IoT (192.168.3.0/24) and guest networks untouched. This required policy-based routing—not just flipping a switch to tunnel everything.
My Setup
I'm running this on a Proxmox VM with Debian 12. The VM acts as my router, handling VLANs, firewall rules, and now WireGuard. My physical network has:
- Main VLAN: 192.168.1.0/24 (trusted devices)
- IoT VLAN: 192.168.3.0/24 (smart home junk)
- Guest VLAN: 192.168.2.0/24 (visitors)
The WireGuard server is hosted externally (a cheap VPS I rent). I already had WireGuard working for a single device, but extending it to an entire VLAN required routing tables and firewall marks.
Installing WireGuard
First, I installed WireGuard on the Debian VM:
apt update
apt install wireguard wireguard-tools
Then I generated keys for the client (my router VM):
wg genkey | tee privatekey | wg pubkey > publickey
I added the public key to my VPS's WireGuard config and noted the VPS's public key for my client config.
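One hardening note (my addition, not part of the original steps): wg genkey writes the private key through tee into a file that inherits the default umask, which usually makes it world-readable. Tightening the umask first keeps the key private from the moment it touches disk:

```shell
# Restrict file permissions before the private key is written
# (assumption: running as root inside /etc/wireguard).
umask 077
wg genkey | tee privatekey | wg pubkey > publickey

# Sanity check: privatekey should be mode 600, owned by root.
ls -l privatekey
cat publickey   # this is the key to paste into the VPS peer config
```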
WireGuard Client Configuration
I created /etc/wireguard/wg0.conf on the router VM:
[Interface]
PrivateKey = <client_private_key>
Address = 10.8.0.2/24
Table = off
[Peer]
PublicKey = <server_public_key>
Endpoint = <vps_ip>:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
The key here is Table = off. By default, WireGuard modifies the main routing table, which would route all traffic through the tunnel. I didn't want that. I needed manual control using policy routing.
Creating a Custom Routing Table
I defined a new routing table in /etc/iproute2/rt_tables:
200 wg0_table
This table would hold routes specific to WireGuard traffic. Then I added a default route in that table:
ip route add default dev wg0 table wg0_table
This tells the kernel: "If something uses table 200, send it through wg0."
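The kernel's view of the new table can be inspected directly to confirm the route landed where intended (these verification commands are my addition):

```shell
# Dump the custom table; it should contain exactly one default route
# pointing at wg0.
ip route show table wg0_table

# The name in rt_tables is just an alias for 200, so this is equivalent:
ip route show table 200
```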
Marking Traffic with iptables
Next, I used firewall marks to tag traffic from my main VLAN. This is how I told the system which packets should use the custom routing table:
iptables -t mangle -A PREROUTING -s 192.168.1.0/24 -j MARK --set-mark 200
This marks all packets from 192.168.1.0/24 with the value 200. Then I created a rule to route marked packets through the custom table:
ip rule add fwmark 200 table wg0_table
Now, any packet marked with 200 uses the wg0_table, which routes it through WireGuard.
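A quick way to check the mark-to-table plumbing without sending real traffic is `ip route get`, which can simulate a lookup with a firewall mark applied (a verification sketch, not from the original post):

```shell
# Unmarked lookup: should resolve via the main table (local gateway).
ip route get 1.1.1.1

# Same destination with fwmark 200 set: should resolve via dev wg0.
ip route get 1.1.1.1 mark 200

# The policy rule itself should also appear in the rule list:
ip rule show | grep wg0_table
```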
Masquerading and NAT
WireGuard alone doesn't handle NAT. I needed to masquerade outgoing traffic so the VPS could route responses back correctly:
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
This rewrites the source IP of packets leaving through wg0 to match the WireGuard interface's IP (10.8.0.2).
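Once traffic is flowing, the translation can be observed from the router (my addition; the conntrack command assumes the conntrack package is installed):

```shell
# List tracked flows whose reply destination is the tunnel address,
# i.e. connections that were masqueraded to 10.8.0.2.
conntrack -L -q 10.8.0.2

# Alternatively, watch the MASQUERADE rule's packet counters climb:
iptables -t nat -L POSTROUTING -v -n
```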
Starting WireGuard
I brought up the interface:
wg-quick up wg0
And verified it was running:
wg show
I saw the handshake with the server and confirmed the tunnel was active.
Testing the Split Routing
From a device on 192.168.1.0/24, I checked my public IP:
curl ifconfig.me
It returned my VPS's IP. Good—traffic was tunneled.
From an IoT device on 192.168.3.0/24, the same command returned my home ISP's IP. The IoT VLAN was bypassing the VPN as intended.
What Didn't Work Initially
My first attempt failed because I forgot to relax reverse path filtering. Debian's strict rp_filter default was dropping packets that didn't match the expected routes, so I switched it to loose mode (the value 2) in /etc/sysctl.conf:
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
Then applied it:
sysctl -p
Without this, return traffic from the VPN was being rejected because it didn't match the main routing table.
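One subtlety worth checking (my note, not from the original): the kernel applies the maximum of the "all" and per-interface rp_filter values, so all=2 forces loose mode everywhere, but it's cheap to confirm what actually took effect:

```shell
# Effective per-interface value is max(conf.all, conf.<iface>);
# with all=2, both of these should report 2.
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.wg0.rp_filter
```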
Making It Persistent
To ensure everything survived reboots, I added the routing rules to a script in /etc/wireguard/wg0-up.sh:
#!/bin/bash
ip route add default dev wg0 table wg0_table
ip rule add fwmark 200 table wg0_table
iptables -t mangle -A PREROUTING -s 192.168.1.0/24 -j MARK --set-mark 200
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
I made it executable:
chmod +x /etc/wireguard/wg0-up.sh
And referenced it in wg0.conf:
[Interface]
PrivateKey = <client_private_key>
Address = 10.8.0.2/24
Table = off
PostUp = /etc/wireguard/wg0-up.sh
Now, when WireGuard starts, it automatically applies the routing rules.
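One wrinkle with the script as written (my observation, not from the original post): every wg-quick restart re-runs PostUp, so the iptables rules get appended again each time and duplicates pile up. A hedged idempotent variant using check-before-add:

```shell
#!/bin/bash
# Idempotent version: safe to run on every wg-quick up.

# 'replace' succeeds whether or not the route already exists.
ip route replace default dev wg0 table wg0_table

# iproute2 prints fwmark in hex (0xc8 == 200); add only if missing.
ip rule list | grep -q "fwmark 0xc8 lookup wg0_table" || \
    ip rule add fwmark 200 table wg0_table

# -C checks for an existing rule; append only when the check fails.
iptables -t mangle -C PREROUTING -s 192.168.1.0/24 -j MARK --set-mark 200 2>/dev/null || \
    iptables -t mangle -A PREROUTING -s 192.168.1.0/24 -j MARK --set-mark 200
iptables -t nat -C POSTROUTING -o wg0 -j MASQUERADE 2>/dev/null || \
    iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```

A matching PostDown script that deletes the same rules would keep teardown symmetric, but the check-before-add pattern alone is enough to stop the duplication.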
Performance and Latency
Tunneling added about 15-20ms of latency compared to my direct ISP connection. For browsing and SSH, it's negligible. Video calls occasionally stuttered, but that's more about my VPS's bandwidth than WireGuard itself. IoT devices, which stayed on the local gateway, had no latency impact.
Limitations and Trade-offs
This setup works, but it has quirks. If the VPN drops, traffic from 192.168.1.0/24 stops routing until WireGuard reconnects. I haven't implemented a failover to the local gateway because I'd rather know when the VPN is down than silently leak traffic.
Also, some services (like Netflix) block VPN traffic. Devices on my main VLAN can't access those services unless I temporarily disable WireGuard or add exceptions.
Key Takeaways
- Policy-based routing with fwmark and custom tables gives precise control over which traffic uses a VPN.
- Setting Table = off in WireGuard's config is critical to avoid automatic route changes.
- Reverse path filtering (rp_filter) will break asymmetric routing if not adjusted.
- IoT devices don't need VPN protection—they're already isolated, and tunneling their traffic wastes bandwidth.
- Always test from devices on each VLAN to confirm routing works as expected.
This approach keeps my main devices private while letting IoT junk do its thing without interference. It's not bulletproof, but it's practical and works for my threat model.