Why I Set This Up
I've been running TrueNAS as my primary storage system for a while, and it works well for local backups. But I needed a proper offsite backup that didn't involve manual intervention or hoping cloud storage would be fast enough over my home connection. The 3-2-1 rule kept nagging at me: three copies, two different media types, one offsite.
My main NAS handles daily snapshots and local replication to a second pool. That covers two copies on different drives. For the third copy, I built a second TrueNAS system using spare hardware and USB drives—not ideal, but what I had available—and placed it at a family member's house. The goal was encrypted ZFS replication over Tailscale so the data stays private in transit and at rest, without exposing SSH to the internet.
My Actual Setup
Source system: TrueNAS SCALE running in a Proxmox VM with the drives passed through. Multiple datasets organized by data type (documents, media, backups, etc.). All datasets use ZFS native encryption with passphrases.
Destination system: A janky TrueNAS CORE box built from old desktop parts, with two USB 3.0 external drives in a mirrored pool. Not fast, not pretty, but functional for receiving snapshots once a day.
Network: Both systems connected via Tailscale mesh network. No port forwarding, no dynamic DNS, no firewall rules to manage. Just two nodes on the same virtual network with stable IPs.
Dataset Structure
I reorganized my datasets before starting replication. Previously I had one massive dataset with everything mixed together. That made it impossible to apply different snapshot schedules or replication policies. I split it into:
- Critical data (documents, configs, photos) - replicated daily
- Media library (movies, music) - replicated weekly
- VM backups - replicated after each backup job runs
- Scratch space - not replicated at all
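If you're doing the same split from the shell rather than the TrueNAS UI, it looks roughly like this. The pool and dataset names are examples, not my real ones, and each zfs create prompts for a passphrase since everything here is encrypted:

# Each dataset gets its own passphrase and its own snapshot/replication policy
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/critical     # documents, configs, photos
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/media        # movies, music
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/vm-backups   # VM backup images
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/scratch      # local only, never replicated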
Moving data between datasets while preserving snapshots was painful. Old snapshots held references to moved files, bloating space usage. I had to delete most snapshots before the move, then rebuild the snapshot schedule afterward.
Encryption Configuration
Source datasets were already encrypted with ZFS native encryption using passphrases. I did not want the destination system to automatically unlock these datasets—if someone steals the offsite box, the data should stay encrypted.
TrueNAS replication can preserve encryption properties, meaning the destination dataset inherits the same encryption settings as the source. The replicated snapshots arrive encrypted, and you need the passphrase or key from the source to unlock them.
I exported the encryption keys from each source dataset and stored them in my password manager. The destination datasets remain locked unless I manually unlock them for a restore.
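Before relying on the offsite copy, it's worth confirming how each source dataset is actually keyed. A quick check from the shell, with example names:

# keyformat should read "passphrase"; keystatus shows whether the key is currently loaded
zfs get -r encryption,keyformat,keystatus tank/critical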
SSH Connection Setup
TrueNAS replication uses SSH for remote transfers. I needed to establish a keypair-based connection between the two systems over Tailscale.
On the Source System
I went to Credentials > Backup Credentials > SSH Keypairs and generated a new keypair. TrueNAS creates both private and public keys. I copied the public key to a text file for the next step.
On the Destination System
I went to Credentials > Backup Credentials > SSH Connections and added the source system's public key. The destination needs to trust the source's key to accept incoming replication connections.
I also enabled Allow Password Authentication temporarily to test the connection, then disabled it once keypair auth worked.
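Under the hood this is ordinary SSH public-key authentication, so the UI steps boil down to roughly the following if done by hand (the paths are common defaults, not something TrueNAS guarantees):

# On the source: generate a keypair (the TrueNAS SSH Keypairs form does the equivalent)
ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa -N ""

# On the destination: trust the source's public key
cat id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys

On TrueNAS the middleware manages authorized_keys, so doing it through the UI as described above is the safer route; the commands are just to show what's happening.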
Testing the Connection
From the source system, I used the TrueNAS shell to test SSH:
ssh -i /root/.ssh/id_rsa root@[destination-tailscale-ip]
It connected without asking for a password. If it failed, I checked:
- Tailscale connectivity (can I ping the destination IP?)
- SSH service running on destination
- Correct public key added to destination
- Firewall rules (Tailscale usually bypasses this, but worth checking)
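The same checklist from the source system's shell, roughly (the destination IP is a placeholder, as above):

tailscale status                                              # is this node connected to the tailnet?
tailscale ping [destination-tailscale-ip]                     # can Tailscale reach the other node?
ping -c 3 [destination-tailscale-ip]                          # basic reachability over the tunnel
ssh -v -i /root/.ssh/id_rsa root@[destination-tailscale-ip]   # -v shows exactly where authentication fails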
Replication Task Configuration
With SSH working, I created the replication tasks. TrueNAS has a wizard for simple cases, but I used the manual configuration to control every setting.
First Task: Critical Data
I went to Data Protection > Replication Tasks > Add and configured:
- Source: Selected the critical data dataset on the source system
- Destination: Remote system, using the SSH connection I created earlier
- Destination Dataset: Specified the path on the destination pool where snapshots should land
- Recursive: Enabled, to replicate all child datasets
- Encryption: Left this disabled because the source datasets were already encrypted and I wanted to preserve those properties
- Include Dataset Properties: Enabled, so encryption settings transfer to the destination
- Schedule: Daily at 2 AM
- Retention Policy: Keep 7 daily snapshots on destination
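Conceptually, the task boils down to a raw, recursive ZFS send piped over the SSH connection, which is why the data stays encrypted end to end. A rough hand-rolled equivalent, with made-up pool, dataset, and snapshot names:

# Initial full send: -w (raw) keeps the stream encrypted, -R includes child datasets and properties
zfs send -w -R tank/critical@auto-2024-01-01_02-00 | \
  ssh root@[destination-tailscale-ip] zfs receive -u offsite/replica/critical

# Later runs send only the delta between the last common snapshot and the newest one
zfs send -w -R -I tank/critical@auto-2024-01-01_02-00 tank/critical@auto-2024-01-02_02-00 | \
  ssh root@[destination-tailscale-ip] zfs receive -u offsite/replica/critical

I never run this by hand; the replication engine handles the bookkeeping. But knowing what the task is doing underneath made the failures below much easier to debug.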
I saved the task and ran it manually to test. It failed.
First Failure: Dataset Properties
The error message said something about incompatible properties. After digging through the logs, I found that TrueNAS was trying to set properties on a destination dataset that didn't exist yet; the destination pool hadn't been set up with a matching structure.
I manually created the destination dataset first, then re-ran the replication task. It worked.
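From the shell, the fix amounted to something like this on the destination, using the same example names as above:

# Give the replication task an existing parent to receive into
zfs create offsite/replica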
Second Failure: Snapshot Conflicts
A few days later, the replication task started failing with errors about existing snapshots. The source already had periodic snapshot tasks taking snapshots on their own schedule, and the replication task was configured to create its own as well, so the two conflicted.
I disabled Replicate Specific Snapshots and instead pointed the replication task at the existing snapshot schedule. Now it just replicates whatever snapshots already exist, rather than trying to create new ones.
Handling Encryption at the Destination
The replicated datasets arrived encrypted. I tested unlocking one to make sure I could actually restore data if needed.
On the destination system, I went to Datasets, selected the replicated dataset, and clicked Unlock. It asked for the passphrase. I entered the passphrase from my password manager, and the dataset unlocked.
I browsed the files to confirm everything was there, then locked it again. The destination datasets stay locked by default, which is what I want for offsite storage.
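The Unlock and Lock buttons map onto ordinary ZFS key operations, so the same restore test can be done from the destination's shell (dataset name follows the earlier examples):

zfs load-key offsite/replica/critical     # prompts for the passphrase
zfs mount offsite/replica/critical        # mount it and browse the files

# when done, lock it back up
zfs unmount offsite/replica/critical
zfs unload-key offsite/replica/critical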
Key Management Problem
I initially used hex keys instead of passphrases for some datasets. Unlocking with a hex key requires downloading the key file from the source system and uploading it to the destination. This was annoying.
I switched all datasets to passphrase-based encryption. Passphrases are easier to store in a password manager and easier to type when unlocking datasets remotely.
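If you're making the same switch, it doesn't involve re-encrypting anything: zfs change-key only re-wraps the existing data key under a new passphrase. A minimal sketch with an example dataset name; in TrueNAS the same change is available from the dataset's encryption options:

# Swap the wrapping key from a hex key file to an interactive passphrase;
# the data on disk is untouched, only the key protecting the master key changes
zfs change-key -o keyformat=passphrase tank/critical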
Monitoring and Failures
Replication tasks don't always succeed. Network hiccups, destination pool full, source dataset locked—plenty of reasons for failure.
TrueNAS sends email alerts when replication tasks fail. I configured SMTP with my Gmail account to receive these alerts. The setup was straightforward in System Settings > Email.
Common Failures I've Seen
- Destination pool out of space: The offsite USB drives filled up faster than expected. I adjusted retention policies to keep fewer snapshots.
- Tailscale connection dropped: The destination system occasionally lost its Tailscale connection. I added a cron job to restart Tailscale if it goes down (see the sketch after this list).
- Source dataset locked: If I manually lock a dataset for maintenance and forget to unlock it, replication fails. No automatic fix for this—just have to remember to unlock it.
- SSH key replaced: TrueNAS regenerated its SSH keys after a system update, so I had to re-add the public key to the destination system.
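The Tailscale watchdog mentioned above is nothing fancy, just a cron entry along these lines. The restart command is an assumption that depends on how Tailscale is installed (an rc.d service on the CORE box, a systemd unit on SCALE), so treat it as a sketch:

# Every 15 minutes: if tailscale can't report status, bounce the daemon
*/15 * * * * tailscale status > /dev/null 2>&1 || service tailscaled restart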
Bandwidth and Transfer Times
The initial replication took forever. I had about 2 TB to transfer, and my upload speed is capped at 20 Mbps. It ran for nearly a week before finishing.
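The back-of-the-envelope math agrees: 2 TB is roughly 16 million megabits, and 16,000,000 Mb divided by 20 Mbps comes to about 800,000 seconds, a bit over nine days at full line rate. Compression reducing what actually crosses the wire is what brought it in at just under a week.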
After the initial sync, incremental replication is much faster. Daily changes are usually under 10 GB, which transfers in about an hour. Weekly media replication is larger—sometimes 50-100 GB—but still manageable overnight.
I scheduled replication tasks during off-peak hours to avoid saturating my connection during the day. TrueNAS doesn't have built-in bandwidth throttling for replication, so I relied on scheduling alone.
Compression Helps
ZFS compression (lz4) is enabled on all datasets. This reduces the amount of data transferred during replication. For text-heavy datasets like documents and configs, compression ratios are around 2:1. For media, it's closer to 1:1 since video files are already compressed.
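The compressratio property shows what that's worth per dataset (pool name is an example):

# Expect roughly 2.00x on text-heavy datasets and close to 1.00x on media
zfs get -r compression,compressratio tank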
What I Would Do Differently
USB drives are slow and unreliable for long-term storage. If I were building this again, I'd use internal SATA drives in the destination system. But I'm working with what I have, and USB 3.0 is good enough for a once-a-day sync.
I should have organized datasets better from the start. Moving data between datasets while preserving snapshots is a pain. If you're setting this up fresh, think carefully about dataset structure before creating snapshots.
Tailscale works great, but I wish TrueNAS had better integration for it. I had to manually configure Tailscale on both systems outside of the TrueNAS UI. It would be nice to manage it from the web interface.
Key Takeaways
- ZFS replication with encryption works well for offsite backups, but you need to manage keys carefully
- Tailscale makes remote replication simple without exposing SSH to the internet
- Test your restore process—unlock the destination datasets and make sure you can actually access the data
- Initial replication takes a long time; plan for it and don't expect instant results
- Snapshot conflicts are common if you're not careful about how you schedule snapshots and replication tasks
- Email alerts are essential for catching replication failures before they become a bigger problem
This setup isn't perfect, but it's functional and runs without constant babysitting. The data is encrypted, offsite, and replicated automatically. If my main NAS dies, I can restore from the offsite box. That's the goal.