Why I Set Up a Docker Socket Proxy
I run Portainer to manage my Docker containers on Proxmox. It makes things easier when I need to check logs, restart services, or adjust container settings without SSHing into the host every time. But Portainer needs access to the Docker socket to work, and that always bothered me.
The Docker socket is essentially root access to the host. Any container that can talk to it can spin up new containers with full privileges, mount host directories, or mess with running services. Portainer doesn't need all of that power—it just needs to read container states and perform basic management tasks.
I wanted a way to give Portainer only what it actually needs, without handing over complete control. That's when I started looking into socket proxies.
What Docker Socket Proxy Actually Does
The tecnativa/docker-socket-proxy image sits between Portainer (or any other service) and the actual Docker socket. It uses HAProxy to filter API requests based on environment variables you set.
When a container tries to access the Docker API through the proxy, HAProxy checks the request against the rules you configured. If the request matches an allowed API section, it gets forwarded to the real socket. If not, the proxy returns a 403 Forbidden response.
This means I can allow read-only operations like listing containers or checking logs, while blocking dangerous actions like creating new containers with privileged flags or accessing Docker secrets.
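That allow/deny decision can be sketched as a tiny shell function. This is illustrative only — the real proxy implements it as HAProxy ACLs, not shell — and it assumes a read-only configuration like mine (containers, images, info, networks, and volumes enabled; POST blocked):

```shell
# Illustrative sketch of the proxy's allow/deny decision, NOT the real
# HAProxy config. Assumes read-only sections enabled and POST=0.
is_allowed() {
  method=$1
  path=$2
  # With POST=0, anything other than a read is rejected outright
  case "$method" in
    GET|HEAD) ;;
    *) return 1 ;;
  esac
  # Strip an optional API version prefix such as /v1.41
  path=$(printf '%s' "$path" | sed 's|^/v[0-9][0-9.]*||')
  # Allow only the enabled read-only sections
  case "$path" in
    /containers*|/images*|/info|/version|/networks*|/volumes*) return 0 ;;
    *) return 1 ;;
  esac
}

is_allowed GET  /v1.41/containers/json   && echo allowed || echo "403 Forbidden"  # → allowed
is_allowed POST /v1.41/containers/create && echo allowed || echo "403 Forbidden"  # → 403 Forbidden
is_allowed GET  /v1.41/secrets           && echo allowed || echo "403 Forbidden"  # → 403 Forbidden
```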
My Setup and Configuration
I run the socket proxy as a separate container on the same Docker network as Portainer. Here's the compose file I'm using:
```yaml
services:
  dockerproxy:
    image: tecnativa/docker-socket-proxy:latest
    container_name: dockerproxy
    restart: unless-stopped
    privileged: true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - CONTAINERS=1
      - IMAGES=1
      - INFO=1
      - NETWORKS=1
      - VOLUMES=1
      - SERVICES=1
      - TASKS=1
      - POST=0
      - BUILD=0
      - COMMIT=0
      - SECRETS=0
      - SWARM=0
    networks:
      - proxy
    ports:
      - "127.0.0.1:2375:2375"

  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    volumes:
      - portainer_data:/data
    environment:
      - DOCKER_HOST=tcp://dockerproxy:2375
    networks:
      - proxy
    ports:
      - "9000:9000"

networks:
  proxy:
    driver: bridge

volumes:
  portainer_data:
```
The key parts here:
- The proxy mounts the Docker socket as read-only (`:ro`), though the proxy itself still needs privileged mode to access it in some SELinux contexts
- I bind the proxy port to localhost only, so nothing outside the host can reach it directly
- Portainer connects to `tcp://dockerproxy:2375` instead of the real socket
- Both containers share a dedicated bridge network
Environment Variables I Actually Use
I enabled these API sections because Portainer needs them for basic functionality:
- `CONTAINERS=1` – List, inspect, and manage containers
- `IMAGES=1` – View image information
- `INFO=1` – Get system info and Docker version
- `NETWORKS=1` – View network configurations
- `VOLUMES=1` – Inspect volumes
- `SERVICES=1` and `TASKS=1` – For Swarm mode (even though I'm not using it yet)
And I explicitly blocked these:
- `POST=0` – This is critical. It blocks all write operations, making the entire API read-only
- `BUILD=0` – No building images through the API
- `COMMIT=0` – Can't create new images from containers
- `SECRETS=0` – No access to Docker secrets
- `SWARM=0` – Can't modify swarm configuration
The `POST=0` setting is the most important one. With all write operations blocked, Portainer can view everything but can't actually change anything. For my use case, that's mostly fine; I use Portainer as a monitoring dashboard more than a management tool.
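The read-only behavior is easy to spot-check with curl from the Docker host, assuming the proxy is listening on 127.0.0.1:2375 as in my compose file (the `|| true` just keeps a scripted check from aborting if the proxy happens to be down):

```shell
# Spot-check the read-only behavior from the Docker host.
# Assumes the proxy from the compose file above is up on 127.0.0.1:2375.
PROXY=http://127.0.0.1:2375

# A read is forwarded to the real socket (expect 200)
curl -s -o /dev/null -w 'GET /version -> %{http_code}\n' "$PROXY/version" || true

# A write is rejected by the proxy before it reaches Docker (expect 403)
curl -s -o /dev/null -w 'POST /containers/create -> %{http_code}\n' \
  -X POST "$PROXY/containers/create" || true
```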
What Worked
After starting both containers, Portainer connected to the proxy without any issues. The dashboard loaded normally, showing all my containers, images, and networks. I could view logs, inspect container details, and check resource usage.
When I tried to restart a container through Portainer, it failed with a 403 error. That's exactly what I wanted. The proxy blocked the POST request, and Portainer showed a permission denied message.
The setup is transparent to Portainer—it doesn't know it's talking to a proxy instead of the real socket. No special configuration needed on the Portainer side beyond changing the DOCKER_HOST variable.
What Didn't Work (And Trade-Offs)
The biggest limitation is that Portainer becomes mostly read-only. I can't use it to quickly restart a failed container or adjust resource limits anymore. For those tasks, I have to SSH into the host or use my automation scripts.
Initially, I tried enabling ALLOW_RESTARTS=1 to permit container restarts while keeping everything else locked down. But that variable didn't exist in the version I was using. The proxy's granularity isn't perfect—you can enable broad API sections, but you can't easily cherry-pick specific operations within those sections.
I also discovered that some Portainer features silently fail. The UI doesn't always make it obvious when an action was blocked by the proxy versus when something actually went wrong. I had to check the proxy logs to confirm requests were being rejected as expected.
Another issue: the privileged: true flag on the proxy container feels wrong. I'm trying to limit privileges, but the proxy itself runs privileged to access the Docker socket. This is apparently necessary in some SELinux environments, but it still bothers me. The socket is mounted read-only, which helps, but it's not a perfect solution.
Monitoring and Debugging
I keep an eye on the proxy logs to see what requests are being made and blocked:
```shell
docker logs -f dockerproxy
```
HAProxy logs every request, including the HTTP method, path, and response code. When Portainer tries something it shouldn't, I see a 403 response in the logs immediately.
This visibility is useful. I can see exactly what Portainer is trying to do and adjust the proxy rules if needed. For example, I noticed Portainer was making frequent requests to /info and /version, which are harmless, so I made sure those were allowed.
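To pull just the rejected requests out of that stream, I pipe the logs through grep. On a live host the command is `docker logs dockerproxy 2>&1 | grep ' 403 '`; the two log lines below are hypothetical stand-ins (the exact HAProxy access-log format varies) so the pipeline can be shown end to end:

```shell
# On a live host: docker logs dockerproxy 2>&1 | grep ' 403 '
# The printf lines are hypothetical HAProxy-style entries standing in
# for the real log stream; only the 403 line survives the grep.
printf '%s\n' \
  '172.18.0.3:52210 [07/May:10:15:02] docker 200 "GET /v1.41/containers/json HTTP/1.1"' \
  '172.18.0.3:52211 [07/May:10:15:09] docker 403 "POST /v1.41/containers/abc/restart HTTP/1.1"' \
  | grep ' 403 '
```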
Key Takeaways
- The socket proxy works as advertised—it blocks unauthorized API access effectively
- Read-only mode (`POST=0`) is the safest option but limits Portainer's usefulness as a management tool
- The proxy adds a small layer of latency, but it's not noticeable in practice
- You need to understand the Docker API to configure the proxy properly—guessing which variables to enable doesn't work well
- This isn't a complete security solution. The proxy container itself has privileged access, and if someone compromises it, they still have a path to the socket
For my setup, the proxy is a reasonable compromise. Portainer can't accidentally (or maliciously) do anything destructive, but I still get the convenience of a web UI for monitoring. If I need to actually manage containers, I fall back to the command line or my automation tools, which connect directly to the socket on the host.
If you're running services that need Docker socket access, especially anything exposed to the network, putting a proxy in front of it is worth the effort. Just be realistic about what it can and can't protect against.