Why I Moved from Docker Volumes to Bind Mounts
I run most of my services on Hetzner VPS instances. They're reliable and reasonably priced, but storage is limited—my current setup maxes out at 320GB. For most containers, that's fine. But when you're running PostgreSQL databases with growing datasets, automated backups, and multiple service stacks, you start hitting walls.
I used to rely on Docker volumes for everything. They're the default, they work, and Docker manages them. But when I wanted to integrate my backup workflow with rsync and rclone, I ran into friction. Docker volumes live in /var/lib/docker/volumes/, buried under Docker's management layer. Backing them up required either stopping containers or using docker cp, which felt clunky for scheduled, automated syncs.
Bind mounts solved that. They're just directories on the host filesystem that get mounted into containers. No abstraction, no special tooling—just files I can back up like anything else.
What Bind Mounts Actually Give You
Before I made the switch, I needed to understand what I was trading off.
Docker Volumes
Volumes are isolated and managed by Docker. They persist even if you delete the container. They're portable across Docker environments and have built-in lifecycle management. For most use cases, they're the right choice.
But they're harder to back up directly. You can't point rsync at a volume by name; the data sits under Docker's internal storage, so you either stop the container and copy from there, or lean on Docker's own tooling. Either way breaks the flow of my existing backup scripts.
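For context, the usual workaround is to mount the volume into a throwaway container and tar it out. A minimal sketch (the volume name and backup path here are illustrative):

```bash
# Back up a named volume by mounting it read-only into a throwaway container
docker run --rm \
  -v postgres-data:/source:ro \
  -v /mnt/backup:/backup \
  alpine tar czf /backup/postgres-data.tar.gz -C /source .
```

It works, but it's exactly the kind of Docker-specific detour I wanted out of my backup path.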
Bind Mounts
Bind mounts are simpler. You specify a host directory in your compose file, and Docker mounts it into the container. No magic, no hidden paths. The data lives where you put it.
This means:
- rsync can access it directly
- rclone can sync it to remote storage without extra steps
- File permissions and ownership are transparent
- You can inspect or modify files without entering the container
The downside is that you're responsible for managing the directory. If you mess up permissions or delete it accidentally, Docker won't save you. But for my setup, that trade-off made sense.
The Migration Process
I started with PostgreSQL because it's the most critical service I run. If I could migrate that cleanly, everything else would be easier.
My Original Setup
Here's what the compose file looked like before the change:
```yaml
services:
  postgres:
    image: postgres:15
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
```
The data lived in a Docker-managed volume. To find it on the host, I had to run docker volume inspect postgres-data and dig through the JSON output. Not terrible, but not convenient.
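To be fair, docker volume inspect takes a format string that prints just the path, if you know to ask for it:

```bash
# Print only the host path of the named volume
docker volume inspect postgres-data --format '{{ .Mountpoint }}'
# => /var/lib/docker/volumes/postgres-data/_data
```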
Creating the Bind Mount Directory
I decided to store all bind-mounted data under /mnt/data/ to keep it separate from system files. For PostgreSQL:
```bash
mkdir -p /mnt/data/postgres
```
Next, I needed to match the ownership to what PostgreSQL expects inside the container. With the existing container still running, I checked from inside it:
```bash
docker exec -it postgres_container /bin/sh
ls -al /var/lib/postgresql/
id postgres
```
The output showed:
```
uid=999(postgres) gid=999(postgres)
```
So I set the same ownership on the host directory:
```bash
sudo chown -R 999:999 /mnt/data/postgres
```
This step is critical. If the permissions don't match, PostgreSQL won't start, or worse, it'll start but fail silently when trying to write data.
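A quick sanity check from the host before starting the container (assuming GNU stat):

```bash
# Should print 999:999
stat -c '%u:%g' /mnt/data/postgres
```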
Copying the Data
With the directory ready, I stopped the container and copied the volume contents:
```bash
docker-compose down
sudo rsync -axPS /var/lib/docker/volumes/postgres-data/_data/ /mnt/data/postgres/
```
I used rsync with:
- `-a` to preserve permissions, timestamps, and ownership
- `-x` to stay on the same filesystem
- `-P` to show progress and allow resuming if interrupted
- `-S` to handle sparse files efficiently
The trailing slashes matter. Without them, rsync creates a nested directory instead of copying the contents directly.
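To make the difference concrete:

```bash
# Trailing slash: the contents of _data/ land directly in the target
sudo rsync -axPS /var/lib/docker/volumes/postgres-data/_data/ /mnt/data/postgres/

# No trailing slash: rsync creates a nested /mnt/data/postgres/_data/ instead
sudo rsync -axPS /var/lib/docker/volumes/postgres-data/_data /mnt/data/postgres/
```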
Updating the Compose File
I changed the volume definition to a bind mount:
```yaml
services:
  postgres:
    image: postgres:15
    volumes:
      - /mnt/data/postgres:/var/lib/postgresql/data
```
No more named volume. Just a direct path.
I brought the container back up:
```bash
docker-compose up -d
```
PostgreSQL started without errors. I checked the logs to confirm it was using the existing data and ran a few queries to verify nothing was corrupted. Everything worked.
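My checks were roughly the following; the database and table names are placeholders for your own:

```bash
# Confirm a clean start and that the old data is visible
docker logs postgres_container --tail 50
docker exec -it postgres_container psql -U postgres -c '\l'
# mydb and mytable are hypothetical; query a table you know should exist
docker exec -it postgres_container psql -U postgres -d mydb -c 'SELECT count(*) FROM mytable;'
```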
Integrating with Backup Workflows
This was the whole point of the migration. With bind mounts, my backup scripts became straightforward.
rsync for Local Backups
I already had a script that backed up critical directories to an external drive. Adding the bind mount was one line:
```bash
rsync -axPS --delete /mnt/data/ /mnt/backup/data/
```
No need to stop containers. No need to use docker cp. Just a normal directory sync.
rclone for Remote Backups
For off-site backups, I use rclone to sync to Backblaze B2. The command is similarly simple:
```bash
rclone sync /mnt/data/ b2:my-bucket/data/ --progress
```
I run this on a schedule with cron. Because the data is just files on disk, rclone handles it like any other backup target.
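The crontab entry is unremarkable; a sketch, assuming a nightly 03:30 run and a log path of your choosing:

```
# m  h  dom mon dow  command
30 3 * * * rclone sync /mnt/data/ b2:my-bucket/data/ --log-file /var/log/rclone-backup.log
```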
Handling Live Databases
One thing I learned: backing up a live PostgreSQL data directory can result in inconsistent snapshots. PostgreSQL writes to multiple files during transactions, and if rsync catches it mid-write, the backup might be unusable.
For PostgreSQL specifically, I added a pre-backup step using pg_dump:
```bash
docker exec postgres pg_dumpall -U postgres > /mnt/data/postgres-backup.sql
```
This creates a consistent SQL dump that I can restore even if the raw data directory is corrupted. I back up both the dump and the data directory, so I have options if something goes wrong.
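Put together, the nightly job looks roughly like this; a sketch, assuming the container is reachable as postgres, as in the command above:

```bash
#!/usr/bin/env bash
# Nightly backup: consistent SQL dump first, then the file-level syncs
set -euo pipefail

# 1. Logical dump; restorable even if the raw data directory is inconsistent
docker exec postgres pg_dumpall -U postgres > /mnt/data/postgres-backup.sql

# 2. Local mirror to the external drive
rsync -axPS --delete /mnt/data/ /mnt/backup/data/

# 3. Off-site copy to Backblaze B2
rclone sync /mnt/data/ b2:my-bucket/data/
```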
What Went Wrong (and How I Fixed It)
The first time I tried this, I skipped the ownership step. PostgreSQL started, but it couldn't write to the data directory. The logs showed permission errors, and the container kept restarting.
I also initially forgot the trailing slash in the rsync command, which created a nested _data directory inside /mnt/data/postgres. PostgreSQL couldn't find the expected structure and treated it as an empty database. I had to delete the directory, re-run rsync correctly, and start over.
Another issue: I didn't verify the data before deleting the old volume. After migrating, I ran a few test queries, but I should have done a more thorough check. Luckily, nothing was lost, but it was a reminder to always validate before cleaning up.
Migrating Other Services
Once PostgreSQL worked, I repeated the process for other containers:
- n8n: Moved the data directory to /mnt/data/n8n
- Redis: Migrated the persistence files to /mnt/data/redis
- Nginx: Kept configs and SSL certs in /mnt/data/nginx
Each one followed the same pattern: create the directory, set ownership, copy data, update the compose file. The only differences were the user IDs and the specific paths inside each container.
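In script form, the pattern looks something like this; a sketch, where the service name, volume name, and UID:GID are placeholders you'd adjust per service:

```bash
SERVICE=redis        # hypothetical example service
OWNER=999:999        # whatever `id` reports inside that container

sudo mkdir -p "/mnt/data/$SERVICE"
sudo chown -R "$OWNER" "/mnt/data/$SERVICE"
docker-compose down
sudo rsync -axPS "/var/lib/docker/volumes/${SERVICE}-data/_data/" "/mnt/data/$SERVICE/"
# ...update the compose file to bind-mount /mnt/data/$SERVICE, then:
docker-compose up -d
```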
Key Takeaways
Bind mounts simplified my backup workflow. I can now back up all service data with standard tools, without worrying about Docker-specific commands or stopping containers unnecessarily.
The trade-off is that I'm responsible for managing those directories. If I misconfigure permissions or accidentally delete something, Docker won't protect me. But for my setup, that's acceptable. I'd rather have direct control and simpler backups.
If you're running services with data you need to back up regularly, and you're already comfortable managing filesystems, bind mounts are worth considering. Just make sure you get the permissions right and verify your data after migrating.