Why I Worked on This
I run a Proxmox-based homelab with several isolated VLANs. One of them is completely air-gapped—no internet access by design. I wanted to run local LLMs there using Ollama, but I needed a way to version models, verify their integrity, and update them without breaking the air gap every time.
The standard Ollama workflow assumes you can just pull models from the internet. That doesn't work when your network is intentionally cut off. I needed a process that let me:
- Download models on a machine with internet access
- Transfer them securely to the air-gapped environment
- Verify nothing got corrupted or tampered with
- Keep track of which model version I'm actually running
This isn't theoretical. I actually use this setup for testing AI workflows on sensitive data that never leaves my lab.
My Real Setup
Here's what I'm working with:
- Proxmox host running several VMs
- One VM with internet access (my "staging" machine)
- One VM on an isolated VLAN with no internet (the "production" air-gapped machine)
- Ollama running in Docker containers on both
- A NAS for storing exported model volumes
The staging machine is where I pull models from Ollama's registry. The air-gapped machine is where I actually use them. The NAS acts as the transfer point between the two.
What Worked
Exporting and Importing Ollama Volumes
Ollama stores models in /root/.ollama inside the container. I mount this as a Docker volume. To transfer models, I export the entire volume as a tarball.
On the staging machine:
docker run -d \
-v ollama-staging:/root/.ollama \
-p 11434:11434 \
--name ollama-staging \
ollama/ollama
docker exec ollama-staging ollama pull mistral:latest
docker exec ollama-staging ollama pull llama3.2:latest
# Docker Engine has no built-in volume export, so pack the volume
# with a throwaway container instead
docker run --rm \
  -v ollama-staging:/volume:ro \
  -v "$PWD":/backup \
  alpine tar -cf /backup/ollama-staging.tar -C /volume .
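One extra guard worth adding at this point: checksum the tarball itself before it leaves the staging machine, so corruption introduced during the NAS copy shows up immediately. A minimal sketch, using the filenames from the example above:
sha256sum ollama-staging.tar > ollama-staging.tar.sha256
# later, on the air-gapped machine, after copying both files over:
sha256sum -c ollama-staging.tar.sha256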
I copy the tarball to my NAS, then import it on the air-gapped machine:
docker volume create ollama-airgap
# unpack the tarball into the new volume, again via a throwaway container
docker run --rm \
  -v ollama-airgap:/volume \
  -v "$PWD":/backup \
  alpine tar -xf /backup/ollama-staging.tar -C /volume
docker run -d \
-v ollama-airgap:/root/.ollama \
-p 11434:11434 \
--name ollama-airgap \
ollama/ollama
This works. The models are now available offline.
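A quick way to confirm the import actually took, assuming the container names used above:
docker exec ollama-airgap ollama list
# should list mistral:latest and llama3.2:latest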
Verifying Model Integrity with SHA256 Hashes
Ollama stores model layers as blobs whose filenames embed their SHA256 digests (each blob is named sha256-<digest>). I can verify these manually to make sure nothing got corrupted during transfer.
Inside the container:
docker exec ollama-staging find /root/.ollama/models/blobs -type f -exec sha256sum {} \;
I save this output to a file on the staging machine, transfer it with the tarball, and run the same command on the air-gapped machine. If the hashes match, the transfer was clean.
I haven't automated this end to end yet, but I've scripted the hash dump for the models I care about. It's tedious but necessary when you're working with sensitive data.
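For the comparison itself, here's a minimal sketch, assuming the two hash dumps were saved as hashes-staging.txt and hashes-airgap.txt (hypothetical names). Only the digests matter, so strip the paths before diffing:
cut -d' ' -f1 hashes-staging.txt | sort > staging.sorted
cut -d' ' -f1 hashes-airgap.txt | sort > airgap.sorted
diff staging.sorted airgap.sorted && echo "all blobs match"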
Tracking Model Versions
Ollama doesn't have a built-in versioning system for offline use. Models are just stored by their digest. I track versions manually by naming the exported tarballs with timestamps and model tags:
ollama-staging-2025-01-15-mistral-latest.tar
ollama-staging-2025-01-15-llama3.2-latest.tar
Inside the tarball, I also keep a manifest.txt file that lists which models were pulled and when:
mistral:latest - pulled 2025-01-15 14:23 UTC
llama3.2:latest - pulled 2025-01-15 14:25 UTC
This is manual work, but it's the only way I've found to keep things organized without internet access.
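The manifest itself doesn't have to be typed by hand, though. A small sketch that generates one from the staging container (container name as above):
{
  date -u '+# generated %Y-%m-%d %H:%M UTC'
  docker exec ollama-staging ollama list
} > manifest.txt
The format differs slightly from my hand-written one, but ollama list includes each model's digest prefix and size, which is handy when reconciling against the blob hashes later.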
Isolating the Air-Gapped Network
I use Proxmox's VLAN tagging to isolate the air-gapped VM. No default gateway, no DNS servers. The VM can only talk to other machines on the same VLAN.
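For reference, the relevant line of the VM's Proxmox config looks roughly like this (bridge name, VLAN tag, and MAC here are placeholders, not my actual values):
# /etc/pve/qemu-server/<vmid>.conf
net0: virtio=BC:24:11:XX:XX:XX,bridge=vmbr1,tag=50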
I also verified the isolation by trying to ping external addresses from inside the Ollama container. It fails as expected:
docker exec ollama-airgap ping -c 3 8.8.8.8
# No route to host
This confirms the network-level air gap is working.
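The same check is worth repeating from the VM itself, outside Docker, since the container's traffic is routed through the VM anyway. With no gateway configured, there is nothing to route through:
ip route show default   # prints nothing: no default gateway
cat /etc/resolv.conf    # should list no external nameservers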
What Didn't Work
Using Docker's Built-in Volume Export
Docker Desktop has a UI for exporting volumes, but it requires a paid license. I tried it briefly and it worked, but I'm not paying for Docker Desktop just for this. Plain Docker Engine on Linux doesn't ship a docker volume export command at all, which is why I pack and unpack the volume with a throwaway container, as shown above.
Automating Hash Verification
I wanted to automate the hash comparison, but the file paths inside the tarball don't always match the paths on the destination. Ollama's blob storage structure is consistent, but extracting and comparing hashes across two systems turned out to be more fragile than I expected.
I still do it manually for now. A proper solution would involve writing a script that understands Ollama's internal structure, but I haven't had time to build that yet.
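One approach that sidesteps the path problem entirely: since each blob's filename embeds its own digest, each machine can verify its blobs independently, with no cross-system comparison at all. A sketch, run inside either container:
#!/bin/sh
# verify every blob against the digest in its own filename
cd /root/.ollama/models/blobs || exit 1
for f in sha256-*; do
  want=${f#sha256-}
  got=$(sha256sum "$f" | cut -d' ' -f1)
  [ "$want" = "$got" ] || echo "MISMATCH: $f"
done
This catches corruption but not a blob that went missing in transit, so I'd still pair it with a file count or the manifest.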
Using Ollama's Registry Mirror Feature
There's an OLLAMA_REGISTRY environment variable that's supposed to let you point Ollama at a custom registry. I tried setting it up with a local Docker registry, but Ollama's pull mechanism doesn't actually work with standard OCI registries; it expects Ollama's own registry format.
I spent a few hours trying to make this work and gave up. The volume export method is simpler and more reliable.
Key Takeaways
- Ollama's volume structure is straightforward. You can export and import it as a tarball without losing anything.
- SHA256 hashes are already part of Ollama's storage. Use them to verify transfers.
- Manual versioning is tedious but necessary. There's no built-in way to track model versions offline.
- Air-gapping at the network level (VLAN isolation, no gateway) is more reliable than trying to use Docker's internal network modes.
- Custom registries don't work the way you'd expect. Stick with volume exports unless Ollama adds better registry support.
This setup isn't elegant, but it works. I've been using it for a few months now without issues. If you're running LLMs in a truly isolated environment, this is the most practical approach I've found.