Why I Moved VMs from Proxmox to TrueNAS SCALE
I've been running Proxmox for years—mostly on older hardware that still had life left in it. My main Proxmox host was a 2012 ASUS laptop, which sounds ridiculous but worked fine for the handful of VMs I needed. The problem wasn't the laptop dying. It was that I kept adding more VMs, and I knew I'd eventually need to rebuild the setup on something more capable.
At the same time, I had a TrueNAS SCALE server sitting there with better specs and room to grow. I didn't want to wait until the laptop failed or until I had time to build a proper Proxmox box. I wanted to move some VMs over now, with nothing more than a brief shutdown and no rebuilding from scratch.
This wasn't about chasing the latest features or doing some elaborate migration project. It was about shifting workloads to where they made more sense, using what I already had running.
My Setup Before the Migration
Here's what I was working with:
- Proxmox VE running on an old laptop
- VM disks stored as ZVOLs on TrueNAS SCALE, exposed via iSCSI
- TrueNAS SCALE Electric Eel (24.10) with KVM virtualization enabled
- A small NVMe pool on TrueNAS dedicated to VM storage
The key detail here is that my VM disks were already on TrueNAS. I wasn't using local Proxmox storage. Every VM disk was a ZVOL on TrueNAS, presented to Proxmox over iSCSI. This meant the actual data didn't need to move—it was already where I wanted it.
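For context, each VM disk was just a ZFS volume on the pool, named by Proxmox's usual vm-&lt;id&gt;-disk-&lt;n&gt; convention. You can see them all from a TrueNAS shell with:
zfs list -t volume -r store-fast1    # lists every zvol on the pool, one per Proxmox VM disk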
The challenge was getting TrueNAS to recognize and boot those existing disks as VMs, without converting disk formats or losing the guests' installed state.
What I Tried First (and Why It Didn't Work)
My first instinct was to just create a new VM in TrueNAS and point it at the existing ZVOL. The disk was already there under /dev/zvol/store-fast1/, so in theory, it should have been simple.
It wasn't.
When I tried to attach the existing ZVOL as a disk during VM creation, TrueNAS didn't show it as an option. The ZVOL was still bound to the iSCSI extent, so the system treated it as "in use" even though Proxmox wasn't actively connected.
I could have removed the iSCSI extent and tried again, but I was nervous about breaking the link to Proxmox before I had a working replacement. I didn't want to orphan the disk or corrupt the VM.
The Approach That Actually Worked
After some trial and error, here's the process I followed:
1. Shut Down the VM in Proxmox
I cleanly powered off the VM I wanted to migrate. No snapshots, no suspend—just a normal shutdown. This ensured the disk was in a consistent state.
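On the Proxmox side this is just the standard qm tooling. The zvol name vm-100-disk-0 later in this post implies VM ID 100, so:
qm shutdown 100    # clean ACPI shutdown from the host
qm status 100      # wait for it to report: status: stopped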
2. Disconnect the iSCSI Target
On the TrueNAS side, I went into the iSCSI configuration and removed the extent associated with that VM's ZVOL. This released the lock on the disk so TrueNAS could use it directly.
I didn't delete the ZVOL itself—just the iSCSI mapping.
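I did this in the UI, but for reference it can also be scripted with TrueNAS's midclt middleware client. A sketch, assuming the extent's ID turns out to be 5:
midclt call iscsi.extent.query      # find the extent whose "disk" field points at the zvol
midclt call iscsi.extent.delete 5   # removes only the iSCSI mapping, not the zvol itself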
3. Create a New VM in TrueNAS
In TrueNAS SCALE, I created a new VM with these settings:
- Same CPU and RAM allocation as the Proxmox VM
- VirtIO network adapter (same as Proxmox)
- No new disk created—I left the storage section empty for now
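At this point the VM exists but has no storage. Since TrueNAS SCALE VMs are ordinary libvirt domains under the hood, you can confirm it registered from a shell:
virsh list --all    # the new domain should appear here, shut off and diskless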
4. Manually Attach the Existing ZVOL
This is where it got hands-on. TrueNAS doesn't have a clean UI option for attaching an existing ZVOL to a VM, so I had to edit the VM's configuration manually.
I SSH'd into TrueNAS and located the VM's libvirt XML file under /etc/libvirt/qemu/. I added a disk entry pointing to the ZVOL path:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/zvol/store-fast1/vm-100-disk-0'/>
  <target dev='vda' bus='virtio'/>
</disk>
After saving the file, I reloaded the VM definition:
virsh define /etc/libvirt/qemu/my-vm.xml
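As a side note, virsh edit my-vm would accomplish the same change without touching the file directly. Either way, it's worth confirming the disk actually attached before booting:
virsh domblklist my-vm    # the zvol path should show up against target vda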
5. Boot the VM
I started the VM through the TrueNAS UI. It came up without issues. The OS inside didn't even realize it had moved to a different hypervisor—it just saw the same disk, same network, same everything.
What Went Wrong (and How I Fixed It)
Not everything worked on the first try.
Network Didn't Come Up
The VM booted, but it had no network connectivity. The culprit was a renamed interface: systemd's predictable naming scheme derives names like these from the virtual PCI slot, which differs between hypervisors, so the NIC that Proxmox had presented as ens18 came up as ens3 under TrueNAS's KVM.
I logged into the VM via the TrueNAS console and updated the network configuration to match the new interface name. After a reboot, networking worked.
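The exact change depends on the guest OS. For a Debian-style guest using ifupdown, for example, it would boil down to something like this (adjust for netplan, NetworkManager, and so on):
ip link                                          # confirm the new interface name (ens3 here)
sed -i 's/ens18/ens3/g' /etc/network/interfaces  # swap the old name for the new one
reboot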
VM Wouldn't Start After a TrueNAS Reboot
At one point, I rebooted TrueNAS and the VM wouldn't start. The error log showed that the ZVOL path wasn't available at boot time.
I fixed this by adding a systemd ordering dependency so that ZFS volumes are fully imported before libvirt tries to start any VMs. This meant creating a drop-in at /etc/systemd/system/libvirtd.service.d/override.conf containing:
[Unit]
After=zfs-import.target
After reloading systemd and restarting libvirt, the issue was gone.
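For completeness: the drop-in directory has to exist before you write the file, and the change only takes effect after a reload:
mkdir -p /etc/systemd/system/libvirtd.service.d
systemctl daemon-reload       # pick up the new override.conf
systemctl restart libvirtd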
What I Learned
This process taught me a few things:
- ZVOLs are portable, but the tooling isn't always helpful. The disk format was fine—it was raw, so no conversion needed—but getting TrueNAS to recognize it required manual work.
- iSCSI locks are real. Even when Proxmox wasn't connected, TrueNAS still treated the ZVOL as "in use" because of the extent mapping.
- Libvirt is powerful but unforgiving. One typo in the XML file and the VM won't start. But once you get it right, it's rock solid.
- Zero downtime is possible, but only if you plan it. I could have done this live by temporarily running both Proxmox and TrueNAS side by side and syncing the disk in near real time (a rough sketch follows after this list). I didn't need that level of complexity for my setup, but it's doable.
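For the curious, the live variant would boil down to incremental ZFS replication while the VM keeps running, with one short final sync during a brief shutdown window. A rough, hypothetical sketch (tank2 is an invented target pool):
zfs snapshot store-fast1/vm-100-disk-0@sync1
zfs send store-fast1/vm-100-disk-0@sync1 | zfs recv tank2/vm-100-disk-0
# ...later, with the VM briefly stopped, send only what changed since sync1:
zfs snapshot store-fast1/vm-100-disk-0@sync2
zfs send -i @sync1 store-fast1/vm-100-disk-0@sync2 | zfs recv -F tank2/vm-100-disk-0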
Would I Do This Again?
Yes, but only in specific situations.
If I already had the storage on TrueNAS and just needed to shift the hypervisor, this approach worked well. It saved me from having to clone disks or rebuild VMs from scratch.
If I were starting fresh, I'd probably just create new VMs in TrueNAS and migrate the data over the network using something like rsync or a live migration tool. That would avoid the manual XML editing and iSCSI cleanup.
But for moving existing workloads with minimal disruption, this method got the job done.
Key Takeaways
- If your VM disks are already on TrueNAS as ZVOLs, you can reuse them directly—no format conversion needed.
- iSCSI extents must be removed before TrueNAS can attach the ZVOL to a VM.
- Manual libvirt XML editing is required to attach existing ZVOLs—the TrueNAS UI doesn't support this natively.
- Network interface names will likely change, so be ready to reconfigure the guest OS.
- Ensure ZFS pools are imported before libvirt starts, or VMs won't boot after a reboot.
This wasn't a polished, one-click migration. It required SSH access, manual configuration, and some troubleshooting. But it worked, and the VMs are now running where they make more sense for my setup.