Why I Set This Up
I run a Proxmox server with ZFS datasets containing VMs, container configs, and media files. My initial backup strategy was just Sanoid snapshots on the same pool—which protects against accidental deletion but does nothing if the drives fail or the server catches fire.
I needed offsite replication to my Synology NAS sitting in a different room. The goal was automated, incremental backups that only transfer what changed, with alerts if something breaks.
My Setup
Source machine: Proxmox server running ZFS on rpool/data
Destination: Synology DS920+ with a ZFS pool called backup-pool
Network: Both machines connected via Tailscale
I already had Sanoid running on the Proxmox side, creating hourly/daily/weekly snapshots. I didn't want Syncoid making extra snapshots—just use what Sanoid already creates.
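For context, the source-side config looks roughly like this (a sketch; the retention numbers here are illustrative, not my exact values):

# /etc/sanoid/sanoid.conf on the Proxmox side
[rpool/data]
hourly = 24
daily = 7
weekly = 4
autosnap = yes
autoprune = yes
recursive = yes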
Installing Sanoid/Syncoid on Synology
Synology doesn't package Sanoid, so I installed it manually. SSH into the NAS, then:
sudo -i
cd /volume1/@appstore
git clone https://github.com/jimsalterjrs/sanoid.git
cd sanoid
ln -s /volume1/@appstore/sanoid/syncoid /usr/local/bin/syncoid
I also installed pv, mbuffer, and lzop via Synology's community packages to improve transfer speeds and monitoring. These aren't required but make a noticeable difference on larger datasets.
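Syncoid picks these helpers up automatically when they're on the PATH, so there's nothing to configure after installing; a quick sanity check:

which pv mbuffer lzop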
Setting Up Non-Root Users
I didn't want to run this as root on either end. On the Proxmox side, I created a user called syncoid-sender:
sudo adduser --system --no-create-home syncoid-sender
sudo zfs allow syncoid-sender hold,send,release rpool/data
On the Synology, I created syncoid-receiver through the DSM web interface, then set ZFS permissions:
sudo zfs create backup-pool/proxmox-backups
sudo zfs set readonly=on backup-pool/proxmox-backups
sudo zfs allow syncoid-receiver create,mount,receive,hold,release backup-pool/proxmox-backups
I set up SSH keys so the Synology could pull from Proxmox without passwords. Generated a key on the Synology as syncoid-receiver, added the public key to ~/.ssh/authorized_keys on the Proxmox syncoid-sender account.
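In rough strokes, it was something like this (a sketch; DSM doesn't always ship ssh-copy-id, in which case append the public key to authorized_keys by hand):

# on the Synology, as syncoid-receiver
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub [email protected]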
Initial Replication
The first sync transfers the entire dataset. I ran this manually to make sure it worked before automating:
syncoid \
  --sendoptions=raw \
  --no-privilege-elevation \
  --no-sync-snap \
  --no-rollback \
  --use-hold \
  [email protected]:rpool/data \
  backup-pool/proxmox-backups/data
Breaking this down:
- --sendoptions=raw keeps the data encrypted during transfer. I don't load encryption keys on the Synology.
- --no-privilege-elevation prevents sudo attempts, since I delegated ZFS permissions.
- --no-sync-snap tells Syncoid to use existing Sanoid snapshots instead of creating its own.
- --no-rollback stops it from rolling back the destination if source snapshots disappear.
- --use-hold places ZFS holds on snapshots being used for incremental sends, preventing Sanoid from pruning them mid-transfer.
The initial sync took about 4 hours for 800GB. Subsequent runs only transfer deltas and finish in minutes.
Automating with Cron
I set up a cron job on the Synology to pull updates every 6 hours:
# /etc/cron.d/syncoid-backup
0 */6 * * * syncoid-receiver /usr/local/bin/syncoid --sendoptions=raw --no-privilege-elevation --no-sync-snap --no-rollback --use-hold [email protected]:rpool/data backup-pool/proxmox-backups/data >> /volume1/logs/syncoid.log 2>&1
I also added a Sanoid config on the Synology to prune old snapshots from the backup:
# /etc/sanoid/sanoid.conf
[backup-pool/proxmox-backups/data]
frequently = 0
hourly = 24
daily = 7
monthly = 3
yearly = 0
autoprune = yes
autosnap = no
monitor = yes
daily_warn = 48h
daily_crit = 72h
Then enabled the Sanoid timer:
sudo systemctl enable --now sanoid.timer
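Two quick checks confirm the schedule took and that the monitor thresholds from the config are evaluated (the sanoid path matches where I cloned the repo earlier):

systemctl list-timers sanoid.timer
/volume1/@appstore/sanoid/sanoid --monitor-snapshots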
Slack Notifications for Failures
I wanted alerts if replication failed. I use n8n for automation, but a simple shell script works too.
I wrapped the Syncoid command in a script that checks the exit code:
#!/bin/bash
LOGFILE="/volume1/logs/syncoid-$(date +%Y%m%d-%H%M%S).log"

/usr/local/bin/syncoid \
  --sendoptions=raw \
  --no-privilege-elevation \
  --no-sync-snap \
  --no-rollback \
  --use-hold \
  [email protected]:rpool/data \
  backup-pool/proxmox-backups/data > "$LOGFILE" 2>&1

if [ $? -ne 0 ]; then
  curl -X POST -H 'Content-type: application/json' \
    --data "{\"text\":\"Syncoid backup failed. Check $LOGFILE\"}" \
    https://hooks.slack.com/services/YOUR/WEBHOOK/URL
fi
Replace the Slack webhook URL with your own. I keep a week of log files and rotate them with a separate cleanup script.
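The cleanup script is essentially a single find invocation (a sketch, assuming GNU find; adjust the path and retention to taste):

# delete syncoid logs older than 7 days
find /volume1/logs -name 'syncoid-*.log' -mtime +7 -delete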
What Worked
The setup has been running for 6 months without manual intervention. Incremental transfers are fast—usually under 5 minutes for a few GB of changes. The encrypted send means I don't expose decryption keys on the backup side.
Using --no-sync-snap keeps snapshot management clean. Sanoid handles retention on both ends, and Syncoid just moves what exists.
Slack alerts caught two failures: once when Tailscale went down during a network change, and once when I accidentally filled the Synology pool with other data.
What Didn't Work
Initially, I tried running Syncoid as root on both sides. This worked but felt sloppy. Switching to delegated permissions took a few tries—I forgot to add the release permission at first, which caused holds to pile up and block snapshot pruning.
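If you hit the same thing, zfs holds shows what's stuck, and releasing the stale holds by hand unblocks pruning. The snapshot name and hold tag below are examples; use the tag from the zfs holds output:

# list holds on a suspect snapshot
zfs holds rpool/data@autosnap_2024-06-01_00:00:00_daily
# release the stale hold by its tag
sudo zfs release syncoid rpool/data@autosnap_2024-06-01_00:00:00_daily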
I also tried using --compress=zstd-fast but saw no speed improvement over the default lzo, and it added CPU load on the Proxmox side during VM activity. Removed it.
The Synology's cron setup was confusing. DSM has its own task scheduler in the GUI, but it doesn't handle environment variables well for scripts. I ended up using /etc/cron.d/ directly and making sure the script had full paths to everything.
Verifying Backups
Every few weeks, I spot-check the backup by temporarily loading the encryption key on the Synology and mounting a dataset:
sudo zfs load-key backup-pool/proxmox-backups/data
sudo zfs mount backup-pool/proxmox-backups/data
ls /backup-pool/proxmox-backups/data
sudo zfs unmount backup-pool/proxmox-backups/data
sudo zfs unload-key backup-pool/proxmox-backups/data
I've never had to restore from this backup, but I did test it once by spinning up a VM from a replicated snapshot. Worked as expected.
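A lighter-weight version of that test can be done on the backup side by cloning a replicated snapshot into a writable dataset (a sketch; the snapshot name is an example, the key must be loaded as above, and paths assume default mountpoints):

sudo zfs clone backup-pool/proxmox-backups/data@autosnap_2024-06-01_00:00:00_daily backup-pool/restore-test
ls /backup-pool/restore-test
sudo zfs destroy backup-pool/restore-test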
Key Takeaways
- Syncoid with --no-sync-snap integrates cleanly with Sanoid's snapshot policies.
- Delegated ZFS permissions work but require careful setup: missing one permission breaks things silently.
- Tailscale makes offsite replication trivial without port forwarding or VPN complexity.
- Slack webhooks are a low-effort way to catch failures without checking logs manually.
- Compression options don't always help—test them with your actual workload.
This setup gives me automated, encrypted, incremental backups with minimal maintenance. It's not perfect, but it's reliable enough that I stopped worrying about losing data.