Why I Built This
I run several Docker containers on Proxmox—databases, monitoring tools, a few automation services. Most of them live on internal networks, accessible only through Tailscale. That works fine for SSH access and web UIs, but backups were a different problem.
I needed encrypted, offsite copies of specific Docker volumes without opening ports, setting up cloud storage APIs, or relying on third-party backup services. I already use Tailscale across my devices, so I wanted a solution that would:
- Encrypt volume data before it leaves the host
- Send backups to another machine on my Tailnet via Taildrop
- Run automatically without manual intervention
- Keep the backup process simple and auditable
This isn’t about disaster recovery at scale. It’s about protecting container data I care about—configuration files, small databases, application state—and storing encrypted copies on a separate physical machine I control.
My Setup
Here’s what I’m working with:
- Proxmox host running Docker containers (Ubuntu-based LXC)
- Tailscale installed on the host and authenticated
- A Synology NAS also connected to my Tailnet, acting as the backup target
- Docker volumes I want to back up: Postgres data, n8n workflows, configuration directories
- A simple Bash script triggered by cron
I didn’t use Docker Compose for the Tailscale container itself. Tailscale is installed directly on the host because I need it for more than just backups. The containers themselves don’t need Tailscale—they’re not joining the mesh network. Only the host needs access to send files via Taildrop.
How It Works
The backup process has six steps:
1. Stop the Container (If Necessary)
For databases like Postgres, I stop the container before copying the volume. This ensures a consistent snapshot. For stateless containers or those with append-only logs, I skip this step.
docker stop my-postgres-container
2. Create a Tarball of the Volume
I use tar to archive the volume directory. Docker volumes are typically stored in /var/lib/docker/volumes/, so I target the specific _data subdirectory:
tar -czf /tmp/postgres-backup.tar.gz -C /var/lib/docker/volumes/postgres_data/_data .
This creates a compressed archive in /tmp. I use -C to change into the volume directory so the archive doesn’t include the full path.
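To see what -C buys you, here's a self-contained sketch using throwaway temp paths (not a real volume):

```shell
# Demo of tar's -C flag: entries in the archive come out relative
# ("./config.txt"), not prefixed with the absolute host path, so a
# restore extracts cleanly into any target directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/_data"
echo "hello" > "$tmp/_data/config.txt"
tar -czf "$tmp/backup.tar.gz" -C "$tmp/_data" .
tar -tzf "$tmp/backup.tar.gz"   # list the archive contents
```

If you'd rather not hard-code the volume path, `docker volume inspect postgres_data --format '{{ .Mountpoint }}'` should print it for you.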
3. Encrypt the Tarball
I encrypt the archive using gpg with a symmetric passphrase. This keeps the backup unreadable if someone gains access to the target machine:
gpg --symmetric --cipher-algo AES256 --batch --yes --passphrase-file /root/.backup-passphrase /tmp/postgres-backup.tar.gz
The passphrase is stored in a file readable only by root. I know this isn’t perfect—if the host is compromised, the passphrase is exposed—but it’s good enough for my threat model. The alternative would be managing GPG keys, which adds complexity I don’t need right now.
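For reference, a one-time setup sketch for that passphrase file. It writes to a temp path for illustration; the backup script expects /root/.backup-passphrase:

```shell
# One-time passphrase setup sketch (temp path here; the real file lives
# at /root/.backup-passphrase on my host).
PASSPHRASE_FILE="${PASSPHRASE_FILE:-/tmp/backup-passphrase-demo}"
umask 077                                  # files created below get mode 600
head -c 32 /dev/urandom | base64 > "$PASSPHRASE_FILE"
ls -l "$PASSPHRASE_FILE"                   # should show -rw-------
```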
4. Send the Encrypted File via Taildrop
Taildrop is Tailscale’s built-in file-sharing feature. It works like AirDrop but over your private mesh network. I use the Tailscale CLI to send the encrypted backup to my NAS:
tailscale file cp /tmp/postgres-backup.tar.gz.gpg synology-nas:
The file lands in the Taildrop folder on the NAS (/volume1/Taildrop by default on Synology). From there, I move it to a dedicated backup directory using a separate script that runs on the NAS.
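The NAS-side mover is nothing special. A sketch of it, with directory names assumed from the Synology defaults mentioned above:

```shell
#!/bin/sh
# Sketch of the NAS-side mover script. Directory names are assumptions
# based on Synology defaults; adjust for your NAS.
move_backups() {
    src="$1"; dst="$2"
    mkdir -p "$dst"
    for f in "$src"/*.gpg; do
        [ -e "$f" ] || continue      # glob matched nothing; skip
        mv "$f" "$dst"/
    done
}

# Only meaningful on the NAS itself
if [ -d /volume1/Taildrop ]; then
    move_backups /volume1/Taildrop /volume1/Backups/docker
fi
```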
5. Restart the Container
If I stopped the container, I start it again:
docker start my-postgres-container
6. Clean Up Temporary Files
I delete the local tarball and encrypted file to avoid filling up the host’s disk:
rm /tmp/postgres-backup.tar.gz /tmp/postgres-backup.tar.gz.gpg
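These cleanup and restart steps assume every earlier step succeeded. A refinement I've considered but haven't needed yet is a shell trap, so the container restarts and temp files are removed even if tar or gpg fails partway through. A sketch (the DOCKER indirection is my addition so the snippet can be exercised without a Docker daemon; it's not in my actual script):

```shell
#!/bin/sh
# Sketch: run cleanup via an EXIT trap so the container is restarted and
# temp files are removed even when a step fails midway.
DOCKER="${DOCKER:-docker}"               # stub-friendly indirection (assumption)
CONTAINER_NAME="my-postgres-container"
BACKUP_FILE="/tmp/postgres-backup-demo.tar.gz"

cleanup() {
    "$DOCKER" start "$CONTAINER_NAME" || true   # restart no matter what
    rm -f "$BACKUP_FILE" "${BACKUP_FILE}.gpg"
}
trap cleanup EXIT    # fires on normal exit, error, or interrupt

# ...docker stop / tar / gpg / tailscale steps would go here...
```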
The Full Script
Here’s the actual script I use. It’s not fancy, but it works:
#!/bin/bash
CONTAINER_NAME="my-postgres-container"
VOLUME_PATH="/var/lib/docker/volumes/postgres_data/_data"
BACKUP_FILE="/tmp/postgres-backup-$(date +%Y%m%d-%H%M%S).tar.gz"
ENCRYPTED_FILE="${BACKUP_FILE}.gpg"
PASSPHRASE_FILE="/root/.backup-passphrase"
TARGET_DEVICE="synology-nas"
# Stop container
docker stop "$CONTAINER_NAME"
# Create tarball
tar -czf "$BACKUP_FILE" -C "$VOLUME_PATH" .
# Encrypt
gpg --symmetric --cipher-algo AES256 --batch --yes --passphrase-file "$PASSPHRASE_FILE" "$BACKUP_FILE"
# Send via Taildrop
tailscale file cp "$ENCRYPTED_FILE" "$TARGET_DEVICE":
# Restart container
docker start "$CONTAINER_NAME"
# Clean up
rm "$BACKUP_FILE" "$ENCRYPTED_FILE"
echo "Backup completed: $ENCRYPTED_FILE sent to $TARGET_DEVICE"
I run this via cron every night at 2 AM:
0 2 * * * /root/scripts/docker-volume-backup.sh >> /var/log/backup.log 2>&1
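My volumes are small enough that a run has never outlasted the cron interval, but if yours might, flock is a cheap overlap guard. This is an addition I'd make, not something in my crontab today (lock path is hypothetical):

```shell
# Overlap guard sketch: flock -n exits immediately if a previous run still
# holds the lock, instead of starting a second backup on top of it.
# Crontab form would be:
#   0 2 * * * flock -n /tmp/docker-backup.lock /root/scripts/docker-volume-backup.sh
flock -n /tmp/docker-backup.lock echo "lock acquired, backup would run here"
```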
What Worked
Taildrop is simple and reliable. It just works. No cloud APIs, no S3 buckets, no authentication tokens. The file shows up on the target device every time.
Encryption is straightforward. GPG with a symmetric passphrase is easy to script and doesn’t require key management. Decrypting a backup is one command:
gpg --decrypt postgres-backup-20250115-020000.tar.gz.gpg > postgres-backup.tar.gz
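A related trick: you can verify an encrypted backup without restoring it by piping the decrypted stream through tar's list mode. The demo below is self-contained (it builds a throwaway encrypted tarball first); --pinentry-mode loopback is my addition for newer GnuPG versions and isn't in the original commands:

```shell
# Verify an encrypted backup without restoring it: decrypt to stdout and
# have tar list the archive. Corruption surfaces as a nonzero exit.
# Self-contained demo: builds a throwaway encrypted tarball first.
tmp=$(mktemp -d)
echo "demo-passphrase" > "$tmp/pass"
echo "data" > "$tmp/file.txt"
tar -czf "$tmp/b.tar.gz" -C "$tmp" file.txt
gpg --symmetric --cipher-algo AES256 --batch --yes --pinentry-mode loopback \
    --passphrase-file "$tmp/pass" "$tmp/b.tar.gz"

# The actual check:
gpg --decrypt --batch --quiet --pinentry-mode loopback \
    --passphrase-file "$tmp/pass" "$tmp/b.tar.gz.gpg" \
    | tar -tzf - > /dev/null && echo "backup verifies"
```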
The script is auditable. I can see exactly what it’s doing. No black-box backup tools, no hidden dependencies.
Stopping the container ensures consistency. For databases, this matters. I’ve tried volume snapshots without stopping the container, and I’ve ended up with corrupted backups. A few seconds of downtime at 2 AM is worth it.
What Didn’t Work
Taildrop doesn’t overwrite files. If a file with the same name already exists in the Taildrop folder, the transfer fails silently. I fixed this by adding a timestamp to the backup filename.
No built-in retention policy. Taildrop just dumps files into a folder. I had to write a second script on the NAS to delete backups older than 30 days. It runs daily and looks like this:
find /volume1/Backups/docker -name "*.gpg" -mtime +30 -delete
The passphrase file is a weak point. If someone gets root on the host, they can decrypt the backups. I accept this risk because my threat model is “protect against accidental deletion and hardware failure,” not “defend against nation-state actors.” If I needed stronger security, I’d use GPG key pairs and store the private key offline.
Large volumes take time. My Postgres volume is only a few hundred megabytes, so compression and encryption take seconds. For multi-gigabyte volumes, this would be slower. I haven’t tested it at scale because I don’t need to.
Restoring a Backup
I’ve tested this twice—once intentionally, once because I broke something.
The process:
- Copy the encrypted file from the NAS to the host
- Decrypt it:
gpg --decrypt backup.tar.gz.gpg > backup.tar.gz
- Stop the container:
docker stop my-postgres-container
- Clear the volume:
rm -rf /var/lib/docker/volumes/postgres_data/_data/*
- Extract the backup:
tar -xzf backup.tar.gz -C /var/lib/docker/volumes/postgres_data/_data
- Restart the container:
docker start my-postgres-container
It worked both times. The second time was after I accidentally deleted a table in Postgres. The backup was from the night before, so I lost a few hours of data, but the restore process itself was clean.
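For reference, the restore steps collapsed into one script. Paths and names are from my setup; treat this as a template, not a drop-in tool (the :? expansion is a safeguard I'd add so an empty VOLUME_PATH aborts instead of wiping the wrong directory):

```shell
#!/bin/bash
# Restore sketch combining the steps above. Adjust names and paths.
set -e
CONTAINER_NAME="my-postgres-container"
VOLUME_PATH="/var/lib/docker/volumes/postgres_data/_data"
ENCRYPTED_BACKUP="$1"                 # e.g. postgres-backup-20250115-020000.tar.gz.gpg
PASSPHRASE_FILE="/root/.backup-passphrase"

gpg --decrypt --batch --passphrase-file "$PASSPHRASE_FILE" "$ENCRYPTED_BACKUP" > /tmp/restore.tar.gz
docker stop "$CONTAINER_NAME"
rm -rf "${VOLUME_PATH:?}"/*           # :? aborts if VOLUME_PATH is unset/empty
tar -xzf /tmp/restore.tar.gz -C "$VOLUME_PATH"
docker start "$CONTAINER_NAME"
rm /tmp/restore.tar.gz
```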
Why Not Use a Real Backup Tool?
I looked at tools like Restic, Borg, and Duplicati. They’re powerful, but they’re also complex. I don’t need deduplication, incremental backups, or multi-cloud support. I just need encrypted tarballs sent to a machine I control.
The script I wrote is 20 lines. I understand every part of it. If something breaks, I can fix it. That’s worth more to me than features I don’t use.
Key Takeaways
- Taildrop works well for small-to-medium file transfers over Tailscale. It’s not designed for this, but it’s reliable enough.
- Stopping containers before backing up databases is non-negotiable. Snapshots aren’t a substitute for consistency.
- Symmetric encryption with GPG is simple and effective for most self-hosted use cases. If your threat model requires more, use key pairs.
- Scripts don’t need to be perfect. They need to work and be maintainable.
- Test your restores. A backup you can’t restore is useless.
This setup isn’t elegant, but it solves the problem I had. It runs every night without supervision, and I know where my backups are. That’s enough.