Why I Worked on This
I run several services on Proxmox VE 8.3 using Docker containers. For months, everything ran on the default Docker bridge network without issues. Then I decided to switch to macvlan networking because I wanted my containers to appear as first-class devices on my home network—each with its own IP address, accessible directly without port mapping gymnastics.
The migration seemed straightforward. I created the macvlan network, moved my containers over, and everything appeared to work. Containers got IP addresses from my router's DHCP pool. I could access them from other devices on my network. But then I noticed something: responses were sluggish. Database queries that used to return instantly now took 200-300ms. Web interfaces felt laggy. Something was wrong, and it wasn't obvious from the logs.
My Real Setup
The environment where this happened:
- Proxmox VE 8.3 running on a Dell R720
- Ubuntu 22.04 LTS VM dedicated to Docker workloads
- Docker Engine 24.0.7
- Physical network: 1Gbps home network, Ubiquiti EdgeRouter, managed switches
- Containers: PostgreSQL, n8n, Syncthing, Cronicle, several custom Python services
The original bridge network configuration was simple—Docker's default bridge with standard port mappings. The macvlan setup I created looked like this:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=ens18 \
  macvlan0
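For reference, attaching a container to that network looks roughly like this; the container name and image here are placeholders:
docker run -d \
  --name postgres \
  --network macvlan0 \
  postgres:16
An address can also be pinned explicitly with --ip instead of letting it be assigned.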
I then recreated containers on this network, letting them pull DHCP addresses from my router. Everything connected. Everything worked. Just slowly.
What I Found During Debugging
Initial Symptoms
The latency wasn't consistent. Some requests were fine. Others would hang for 200-500ms before completing. This pattern didn't match a simple misconfiguration—it felt like something intermittent or queue-related.
I started with basic connectivity tests from inside the containers:
docker exec -it postgres bash
ping 192.168.1.1   # Gateway
ping 8.8.8.8       # Internet
Ping times were normal—1-2ms to the gateway, 15-20ms to external hosts. No packet loss. So the network path itself was fine.
DNS Resolution Issues
Next, I checked DNS. Containers were using my router's DNS server (192.168.1.1), which forwards to Cloudflare. I tested resolution:
nslookup google.com
This is where things got interesting. DNS queries were slow—sometimes taking 100-200ms. Not every time, but often enough to be noticeable. I ran a quick test with dig to see query times:
dig google.com
The "Query time" line showed 150ms, 80ms, 200ms across multiple runs. That's not normal for a local DNS server on the same network.
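To get more than a couple of samples, a short loop over dig makes the pattern easier to see. A rough sketch, assuming dig is available where you run it (it isn't in every container image):
for i in $(seq 1 20); do
  dig @192.168.1.1 google.com +noall +stats | grep 'Query time'
  sleep 1
done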
The Proxmox Bridge Factor
Here's what I had missed: my Ubuntu VM's network interface (ens18) was connected to a Proxmox bridge (vmbr0), not directly to the physical interface. The macvlan network was created on top of ens18, which meant:
Container → macvlan (on ens18) → vmbr0 (Proxmox bridge) → Physical NIC
This works, but it adds complexity. Proxmox bridges use Linux bridging, which has its own forwarding logic. When macvlan traffic hits the bridge, it has to be processed and forwarded correctly. I suspected something in this path was causing delays.
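To confirm what the bridge was actually carrying, its membership can be listed on the Proxmox host. A quick check, assuming the default vmbr0 name:
# On the Proxmox host
ip link show master vmbr0   # interfaces enslaved to the bridge
bridge link show            # per-port state from iproute2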
MTU Mismatches
I checked MTU settings across the chain:
# On the host VM
ip link show ens18

# Inside a container
ip link show eth0
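The bridge MTU can be checked the same way on the Proxmox host, again assuming it's called vmbr0:
# On the Proxmox host
ip link show vmbr0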
The VM's ens18 interface had MTU 1500. The Proxmox bridge (vmbr0) also had MTU 1500. But inside the containers, the macvlan interfaces were also 1500. No mismatch there.
Still, I tested with different packet sizes to see if fragmentation was an issue:
ping -M do -s 1400 192.168.1.1
ping -M do -s 1450 192.168.1.1
ping -M do -s 1472 192.168.1.1
All worked fine. No fragmentation needed. MTU wasn't the problem.
ARP and MAC Address Handling
With macvlan, each container gets its own MAC address. I checked the ARP table on my router to see if it was learning these addresses correctly:
# On the router
show arp
The MAC addresses were there, mapped to the correct IPs. But I noticed something: the ARP entries had short timeouts and were frequently marked as "incomplete" before resolving again. This suggested the router was having trouble maintaining stable ARP entries for the macvlan interfaces.
I captured traffic on the host to see what was happening:
tcpdump -i ens18 arp
There were a lot of ARP requests—more than I expected. Containers were constantly re-requesting MAC addresses for the gateway and other hosts. This explained the intermittent delays: every few seconds, a container would need to re-resolve an address, adding 50-100ms to the first packet in a new connection.
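To put a rough number on the chatter, counting ARP packets over a fixed window works well enough. A sketch using tcpdump on the same interface:
# Count ARP packets seen on ens18 over 60 seconds
timeout 60 tcpdump -i ens18 -nn arp 2>/dev/null | wc -l
A steady trickle is normal; a constant stream of requests for the same gateway address is the pattern I was seeing.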
Proxmox Bridge Filtering
Linux bridges, including Proxmox's vmbr interfaces, can pass bridged packets through iptables when the "bridge-nf-call-iptables" sysctl (provided by the br_netfilter module) is enabled. This is useful for filtering but adds overhead to every forwarded frame. I checked whether it was enabled:
cat /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
Both returned "1" (enabled). I tried disabling them temporarily:
echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 0 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
This made a noticeable difference. Latency dropped significantly. The issue wasn't just ARP—it was also the bridge processing every packet through netfilter, adding microseconds that accumulated into visible delays under load.
What Actually Worked
Solution 1: Disable Bridge Netfilter (Temporary)
Disabling bridge netfilter helped, but writing to /proc isn't permanent: the values don't survive a reboot, and Proxmox can reset them when the network configuration changes. To make the change persistent, I added this to /etc/sysctl.conf on the Proxmox host:
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
Then applied it:
sysctl -p
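One caveat: the bridge-nf-call-* entries only exist while the br_netfilter module is loaded, so on some setups the boot-time sysctl pass runs before the module appears and the values silently fail to apply. A sketch of a more robust layout, assuming the standard modules-load.d and sysctl.d directories:
# Load br_netfilter at boot so the sysctls exist when they're applied
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Keep the netfilter hooks off for bridged traffic
cat > /etc/sysctl.d/99-bridge-nf.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
It's also worth checking that nothing else on the host, such as the Proxmox firewall, expects these hooks to stay enabled before turning them off.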
This reduced latency but didn't eliminate all the delays. ARP issues remained.
Solution 2: Static ARP Entries
To stop the constant ARP requests, I added static ARP entries on the host for the most frequently accessed devices (router, NAS, other critical services):
arp -s 192.168.1.1 [gateway-mac-address]
arp -s 192.168.1.10 [nas-mac-address]
This helped for outbound traffic from containers to known hosts. But containers still had to learn each other's MAC addresses dynamically, which caused delays in container-to-container communication.
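As a side note, arp -s comes from the legacy net-tools package and its entries don't survive a reboot. The iproute2 equivalent looks like this, with the MAC addresses as placeholders just like above:
ip neigh replace 192.168.1.1 lladdr [gateway-mac-address] dev ens18 nud permanent
ip neigh replace 192.168.1.10 lladdr [nas-mac-address] dev ens18 nud permanent
Either way, the entries have to be reapplied at boot from a script or network hook if you want them to stick.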
Solution 3: Switch to IPVLAN (What Worked Best)
After fighting with macvlan for several days, I tried IPVLAN instead. IPVLAN is similar to macvlan but doesn't create a separate MAC address for each container: all containers share the parent interface's MAC address and differ only by IP.
I created a new network:
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=ens18 \
  -o ipvlan_mode=l2 \
  ipvlan0
Then moved containers to this network. The difference was immediate. Latency dropped to normal levels—single-digit milliseconds for local traffic, no more random delays. ARP issues disappeared because there was only one MAC address to track.
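Moving a container over can be done by recreating it on the new network or by reconnecting it in place. A sketch of the latter, with a placeholder container name and an example address; pinning it with --ip is optional but keeps things predictable:
docker network disconnect macvlan0 postgres
docker network connect --ip 192.168.1.50 ipvlan0 postgres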
The trade-off: IPVLAN requires the upstream network (router/switch) to handle multiple IPs on the same MAC address. Most modern equipment does this fine, but older or misconfigured switches might have issues. Mine worked without problems.
What Didn't Work
Adjusting MTU
I tried lowering the MTU to 1400, thinking fragmentation might be happening somewhere invisible. It made no difference. The problem wasn't packet size.
Switching to Host Networking
I briefly considered using --network host for critical containers. This eliminates all Docker networking overhead by using the host's network stack directly. It worked, but it's not a real solution—you lose isolation and can't run multiple containers that bind to the same ports.
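For reference, this is all it takes, which is exactly why it's tempting; the container name and image are placeholders, and published ports are simply ignored in this mode:
docker run -d --name syncthing --network host syncthing/syncthing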
Custom iptables Rules
I spent time trying to optimize iptables rules on the Proxmox bridge, thinking I could reduce processing overhead. This was a dead end. The real issue was the interaction between macvlan, the Proxmox bridge, and ARP handling—not specific firewall rules.
Key Takeaways
Switching from Docker bridge to macvlan on Proxmox isn't as simple as creating a new network and moving containers. The Proxmox bridge adds a layer that interacts poorly with macvlan's MAC address handling, especially under load.
If you're running Docker on a Proxmox VM and want containers on your LAN:
- Try IPVLAN first, not macvlan. It avoids MAC address proliferation and ARP complexity.
- Disable bridge netfilter on the Proxmox host if you don't need packet filtering at the bridge level.
- Test latency under realistic load, not just with ping. Application-level delays often don't show up in simple connectivity tests (see the sketch after this list).
- Be prepared for ARP issues if you stick with macvlan. Static ARP entries help but don't solve everything.
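On the load-testing point, a simple way to surface application-level stalls is to time a real request in a loop rather than relying on ping. A rough sketch against one container's web interface; the address and port are placeholders:
for i in $(seq 1 20); do
  curl -s -o /dev/null -w '%{time_total}\n' http://192.168.1.50:5678/
done
Mostly-fast results with occasional multi-hundred-millisecond outliers is exactly the pattern described above, and it never shows up in plain ping.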
The most frustrating part was that everything "worked"—containers had connectivity, services responded, logs showed no errors. The problem was purely performance-related, which made it harder to diagnose. If I had started with IPVLAN instead of macvlan, I would have saved several days of troubleshooting.
For my setup, IPVLAN in L2 mode was the right answer. Your network might differ, but if you're seeing similar symptoms—intermittent latency, slow DNS, sluggish container responses—check your ARP behavior and consider IPVLAN as an alternative.