Why I Migrated to Docker Secrets
I’ve been running my self-hosted stack on Proxmox for years now. Most services run in Docker containers managed by Compose files. For a long time, I used environment variables for secrets—database passwords, API keys, tokens for n8n workflows, credentials for my Synology backup scripts. It worked fine until I started noticing how exposed everything was.
The breaking point came when I was debugging a container issue and realized my database password was sitting in plain text in the Docker inspect output. Anyone with access to the host could see it. I also had `.env` files scattered across different compose directories, which made rotation painful and tracking access impossible.
I needed a better approach, but I couldn’t afford downtime. My n8n workflows run critical automations, my monitoring stack feeds into Cronicle jobs, and my DNS setup depends on several containers staying up. So I had to migrate secrets without breaking anything.
My Setup Before Migration
Here’s what I was working with:
- Proxmox host running multiple LXC containers and VMs
- Docker Compose managing about 15 services across different stacks
- Secrets stored in `.env` files or hardcoded in compose files
- No centralized secret management
- Docker Compose version 2.32.1 (important: very old releases don't handle file-based secrets properly)
A typical compose file looked like this:
```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USER}
    volumes:
      - pgdata:/var/lib/postgresql/data
```
The problem: I could run `docker inspect` on that container and see `POSTGRES_PASSWORD` in plain text. Same with `docker exec postgres printenv`. Not great.
How Docker Secrets Actually Work
Docker secrets mount as files inside the container at `/run/secrets/<secret_name>`. The application reads from that file instead of an environment variable. This means:
- Secrets aren’t visible in environment variable dumps
- They don’t show up in logs by default
- You can control file permissions inside the container
- Each service only gets the secrets it explicitly needs
The catch: your application must support reading secrets from files. Many official Docker images (Postgres, MySQL, MariaDB) already support this through the `_FILE` suffix convention — though note that Postgres and MySQL treat the plain variable and its `_FILE` variant as mutually exclusive and refuse to start if both are set. For custom apps, I had to add that logic myself.
Migration Strategy Without Downtime
I couldn’t just swap everything at once. Here’s the approach that worked:
Step 1: Verify Application Support
First, I checked which services could read secrets from files. Postgres and MySQL already supported `POSTGRES_PASSWORD_FILE` and `MYSQL_PASSWORD_FILE`. For my custom Python services (including some n8n integration scripts), I added this pattern:

```python
import os

def get_secret(key):
    """Prefer the KEY_FILE path if set, else fall back to the plain KEY variable."""
    file_key = f"{key}_FILE"
    if file_key in os.environ:
        with open(os.environ[file_key]) as f:
            return f.read().strip()
    return os.environ.get(key, '')
```
This checks for the `_FILE` variant first, then falls back to the plain environment variable. That fallback was critical for zero-downtime migration.
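To sanity-check both code paths, here's a standalone demo (the helper is repeated so the snippet runs on its own; the variable names and temp file are just placeholders):

```python
import os
import tempfile

def get_secret(key):
    """Prefer the KEY_FILE path if set, else fall back to the plain KEY variable."""
    file_key = f"{key}_FILE"
    if file_key in os.environ:
        with open(os.environ[file_key]) as f:
            return f.read().strip()
    return os.environ.get(key, '')

# Simulate a mounted secret file.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write("file-secret\n")  # the trailing newline is stripped by the helper

os.environ["DB_PASSWORD"] = "env-secret"
os.environ["DB_PASSWORD_FILE"] = tmp.name
print(get_secret("DB_PASSWORD"))  # file-secret: the _FILE variant wins

del os.environ["DB_PASSWORD_FILE"]
print(get_secret("DB_PASSWORD"))  # env-secret: fallback for not-yet-migrated services
```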
Step 2: Create Secret Files
I created a secrets/ directory next to each compose file:
```shell
mkdir -p secrets
printf '%s' "my_db_password" > secrets/db_password.txt  # printf avoids a trailing newline
chmod 600 secrets/db_password.txt
```
Important: these files stay on the host. Docker mounts them into containers at runtime.
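Instead of typing a password into a shell command, the secret can be generated and written with the right permissions in one step. A sketch using only Python's standard library (the path is illustrative; `O_EXCL` makes it refuse to clobber an existing secret):

```python
import os
import secrets

secret_path = "secrets/db_password.txt"
os.makedirs("secrets", exist_ok=True)

token = secrets.token_urlsafe(32)  # 43 URL-safe characters from 32 random bytes

# Create the file with mode 600 before any bytes are written,
# and fail if a secret already exists at that path.
fd = os.open(secret_path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    f.write(token)  # no trailing newline

print(oct(os.stat(secret_path).st_mode & 0o777))  # 0o600
```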
Step 3: Update Compose File Gradually
Here’s the key: I kept both methods active during migration wherever the image allowed it. For my custom services, the `get_secret` fallback meant the plain variable and its `_FILE` variant could coexist. The official Postgres image is stricter: its entrypoint exits with an error if both `POSTGRES_PASSWORD` and `POSTGRES_PASSWORD_FILE` are set, so for Postgres the swap had to happen in a single deploy. My updated compose file looked like this:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password  # replaces POSTGRES_PASSWORD
    secrets:
      - db_password
    volumes:
      - pgdata:/var/lib/postgresql/data

secrets:
  db_password:
    file: ./secrets/db_password.txt
```

For an already-initialized database this is low-risk: Postgres only consults the password variable during first-time initialization, so an existing cluster keeps its stored credentials either way. For my custom services, the `_FILE` variant simply took precedence over the plain variable, which meant I could deploy the change without breaking anything.
Step 4: Rolling Deploy
I updated services one at a time:
```shell
docker compose up -d postgres
```
Postgres restarted, read the secret from `/run/secrets/db_password`, and came back up. No downtime. I verified it worked:

```shell
docker exec postgres-container psql -U postgres -c "SELECT 1"
```
Once confirmed, I moved to the next service. Each time, I checked logs and tested connectivity before continuing.
Step 5: Remove Environment Variables
After all services were reading their secrets from files successfully, I removed the remaining plain environment variables from the compose files. The Postgres service ended up like this:
```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - pgdata:/var/lib/postgresql/data
```
Then I removed them from the `.env` files. Final deploy:

```shell
docker compose up -d
```
Everything came back cleanly. Compose recreated only the containers whose definitions had changed, and since each one was already reading its file-based secret, they all came straight back up.
What Worked Well
The fallback pattern was essential. By supporting both methods temporarily, I could migrate without risk. If something broke, the service would fall back to environment variables automatically.
Testing one service at a time caught issues early. I discovered that one of my custom scripts wasn’t reading the `_FILE` variable correctly: it was looking for the wrong key name. Fixed it before it caused problems elsewhere.
File permissions mattered. I set secrets to 600 on the host, and Docker respected that inside containers. Only the service user could read them.
What Didn’t Work
I initially tried using Docker Swarm secrets, thinking they’d work with Compose. They don’t, at least not in standalone mode: without a Swarm cluster, Compose simply bind-mounts plain files into the container, while the encrypted, cluster-stored Swarm secrets require a Swarm cluster, which I’m not running.
I also tried mounting secrets from a separate encrypted volume. That added complexity without much benefit. The host filesystem is already encrypted (LUKS), so double-encrypting didn’t help.
One mistake: I forgot to update my backup scripts. They were still referencing `.env` files. After migration, backups failed silently for a day before I noticed. Now I explicitly back up the `secrets/` directory.
Handling Multi-Service Secrets
Some secrets are shared across services. For example, my n8n instance and a custom webhook processor both need the same API token. I defined it once and granted access to both:
```yaml
services:
  n8n:
    image: n8nio/n8n
    secrets:
      - api_token

  webhook:
    image: my-webhook-processor
    secrets:
      - api_token

secrets:
  api_token:
    file: ./secrets/api_token.txt
```
Both services get `/run/secrets/api_token` mounted. If I need to rotate the token, I update one file and restart both services.
Secret Rotation Process
Rotating secrets is now simpler:
- Generate the new secret value
- Update `secrets/secret_name.txt`
- Restart the affected services: `docker compose up -d --force-recreate service_name` (plain `up -d` won’t recreate a container when only the secret file’s contents changed)
- Verify the services are healthy
- Update any external systems using the old secret

No need to edit compose files or environment variables. Just update the file and restart.
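One hazard in that process: a service could read the secret file mid-write. Writing the new value to a temporary file and renaming it into place makes the swap atomic. A sketch (my own helper, not part of any Docker tooling; the restart stays a comment since it depends on the stack):

```python
import os
import secrets
import tempfile

def rotate_secret(path):
    """Atomically replace the secret file at `path` with a fresh random value."""
    new_value = secrets.token_urlsafe(32)
    directory = os.path.dirname(path) or "."
    # Stage in the same directory so os.replace is an atomic rename, not a copy.
    fd, staging = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(new_value)
        os.chmod(staging, 0o600)
        os.replace(staging, path)  # readers see old or new, never a partial write
    except BaseException:
        os.unlink(staging)
        raise
    return new_value

# rotate_secret("secrets/db_password.txt")
# then recreate the consumers, e.g.: docker compose up -d --force-recreate postgres
```

Because Compose bind-mounts the secret file into the container, a replaced file is only guaranteed to be picked up after the service is recreated, which is why the restart step is not optional.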
Limitations and Trade-offs
Docker Compose secrets are not encrypted at rest. They’re just files on the host. If someone gets root access to the Proxmox host, they can read them. For my threat model, that’s acceptable—physical security and host hardening are already in place.
Secrets are bind-mounted at container start. If you change a secret file while the container is running, it won’t pick up the new value until restart. This is different from Kubernetes secrets, which can be updated dynamically.
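For custom services, one way to soften this is to re-read the secret file whenever its modification time changes instead of caching it at startup. A sketch (the class is mine; it only helps when the mounted file is updated in place, since a rename on the host won't propagate through a single-file bind mount):

```python
import os

class SecretFile:
    """Returns the secret's current value, re-reading the file when it changes."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._value = None

    @property
    def value(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:  # first access, or the file was rotated in place
            with open(self.path) as f:
                self._value = f.read().strip()
            self._mtime = mtime
        return self._value

# db_password = SecretFile("/run/secrets/db_password")
# read db_password.value at connection time, not once at startup
```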
You can’t use secrets in build-time arguments easily. For that, you need build secrets, which work differently and require BuildKit. I haven’t needed that yet.
Key Takeaways
Migrating to Docker secrets improved my security posture without adding much complexity. The biggest win: secrets are no longer visible in `docker inspect` output or environment variable dumps.
The fallback pattern made migration safe. Supporting both environment variables and file-based secrets during the transition meant zero downtime.
Not every application supports reading secrets from files out of the box. I had to modify a few custom scripts. That took time but was worth it.
Docker Compose secrets are not a complete secret management solution. They’re better than environment variables but still rely on host filesystem security. For more sensitive environments, I’d look at external secret managers like Vault or cloud provider solutions.
For my self-hosted setup, though, this was the right balance of security and simplicity. Everything still runs on one Proxmox host, secrets are better protected, and rotation is straightforward.