
Migrating Windows VM Workloads to Linux Containers on Proxmox: Converting Legacy Services to Docker After Windows 10 EOL

Why I Started Moving Windows VMs to Linux Containers

I run a Proxmox cluster at home that’s accumulated several Windows VMs over the years. Most of them started as quick solutions—a small service here, a utility there—but they became permanent fixtures consuming resources I didn’t want to keep allocating. When Windows 10 EOL was announced for October 2025, I had a decision to make: upgrade these VMs to Windows 11 (which isn’t even officially supported in VMs without workarounds), move to Windows Server licenses I don’t want to pay for, or find another path.

I chose the third option. Most of these services didn’t need Windows at all. They were just running there because that’s where I built them initially. The EOL deadline gave me the push to finally containerize what I could and eliminate the Windows licensing and update overhead entirely.

What I Was Actually Running

My Windows VMs fell into a few categories:

  • A Windows 10 VM running a Python-based monitoring script that checked external APIs and sent alerts
  • Another VM hosting a small Node.js service for webhook processing
  • A third running a .NET Framework 4.8 application that generated reports from a SQL Server database
  • One VM that existed solely because I needed PowerShell scripts to run on a schedule

Each VM was allocated 2-4GB of RAM and 40-60GB of disk. Together, they consumed about 12GB of RAM just sitting idle. The update cycles were a constant annoyance—Windows updates breaking things, reboots required, and the general weight of maintaining multiple Windows instances.

I knew containers would be lighter, but I needed to figure out which services could actually move and what that process would look like in practice.

What Worked: The Easy Migrations

Python and Node.js Services

The Python monitoring script was the easiest win. I copied the script and its dependencies list to my Proxmox host, created a simple Dockerfile based on python:3.11-slim, and had it running in a container within an hour. The container uses about 50MB of RAM compared to the 2GB VM it replaced.

My Dockerfile looked like this:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY monitor.py .
CMD ["python", "monitor.py"]

The Node.js webhook service was similarly straightforward. I used the official node:18-alpine image, copied my application code, and ran npm install during the build. The entire container is under 200MB, and it starts in seconds instead of the minute-plus boot time the VM needed.

PowerShell Scripts

The VM running scheduled PowerShell scripts was harder to justify. I didn’t want to keep Windows around just for PowerShell, so I rewrote the scripts in bash and Python. This wasn’t a direct migration—it was a rewrite—but the scripts were simple enough that it took an afternoon. I now run them via cron in a Debian-based LXC container, not even a full Docker container. They execute faster and use a fraction of the resources.
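The Task Scheduler jobs became plain crontab entries inside the Debian LXC container. A sketch, with script paths and schedules invented for illustration:

```shell
# Write the schedule to a file; inside the container,
# `crontab cron-jobs` installs it for the current user
cat > cron-jobs <<'EOF'
# nightly cleanup at 02:30, logging output for later review
30 2 * * * /opt/scripts/cleanup.sh >> /var/log/cleanup.log 2>&1
# API poller every five minutes
*/5 * * * * /usr/bin/python3 /opt/scripts/poll.py
EOF
```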

What Didn’t Work: The .NET Framework Problem

The .NET Framework 4.8 application was a different story. This was a legacy reporting tool that connected to SQL Server, ran queries, generated Excel files, and emailed them out. It was built years ago and hadn’t been touched since.

I initially thought I could run it in a Windows container, but I quickly hit walls:

  • Windows containers require a Windows host (whether using process or Hyper-V isolation), which I didn’t want to maintain
  • The application had dependencies on specific Windows libraries that didn’t exist in any container base image I tried
  • Even if I got it running, I’d still be managing Windows—just in a container instead of a VM

I considered rewriting it in .NET Core (now just .NET 6+), which runs natively on Linux, but the effort didn’t justify the outcome. Instead, I kept this VM running for now. It’s isolated, it works, and I’ll revisit it when I have time to rebuild it properly. Not everything needs to be containerized immediately.

The Actual Migration Process

Here’s what the process looked like for each service I successfully moved:

Step 1: Audit the Service

I logged into each Windows VM and documented:

  • What the service actually does
  • What runtime it needs (Python, Node.js, etc.)
  • What external dependencies it has (databases, APIs, file shares)
  • How it’s currently started (Task Scheduler, Windows Service, manual)
  • Where it stores data or state

Step 2: Test Locally

Before touching Proxmox, I tested each service in a Docker container on my laptop. I wanted to catch dependency issues, configuration problems, and runtime errors in a safe environment. This step saved me from several mistakes, like hardcoded Windows paths and assumptions about file system case sensitivity.

Step 3: Build the Container Image

I wrote a Dockerfile for each service, built the image, and pushed it to my local Docker registry running on Proxmox. I don’t use Docker Hub for internal services—I run a simple registry container that stores images on my NAS via NFS.
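The registry itself is just the official registry:2 image with its storage directory bind-mounted from the NAS. A sketch, assuming the NFS share is mounted at /mnt/nas and the registry answers at registry.lan:5000 (both names are placeholders, adjust to your setup):

```shell
# Local registry with image data persisted on the NFS-mounted NAS share
docker run -d --name registry --restart unless-stopped \
  -p 5000:5000 \
  -v /mnt/nas/registry:/var/lib/registry \
  registry:2

# Tag a locally built image for the registry, then push it
docker tag monitor:latest registry.lan:5000/monitor:latest
docker push registry.lan:5000/monitor:latest
```

Since a registry like this speaks plain HTTP on the LAN, each Docker host also needs it listed under insecure-registries in /etc/docker/daemon.json before it will push or pull.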

Step 4: Deploy on Proxmox

I created a Debian 12 LXC container on Proxmox specifically for running Docker. I installed Docker inside the LXC container (not directly on the Proxmox host), pulled my images from the local registry, and started them with docker-compose. This gave me a clean separation: Proxmox manages the LXC container, and Docker manages the application containers inside it.
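Running Docker inside an LXC container requires nesting to be enabled on the Proxmox side. With a hypothetical container ID of 200, that's one pct command:

```shell
# Allow nested containerization (plus keyctl, which Docker's systemd
# units expect) on the LXC container, then restart it to apply
pct set 200 --features nesting=1,keyctl=1
pct reboot 200
```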

Step 5: Migrate Data and Configuration

For services that needed persistent data, I mounted volumes from my NAS. For configuration, I used environment variables passed through docker-compose.yml. I avoided baking secrets into images—everything sensitive lives in .env files that aren’t committed to version control.
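A minimal compose file following that pattern might look like this, written here via heredocs so the whole sketch is one script (the service name, image path, and variable names are invented):

```shell
# docker-compose.yml: configuration via an env file, data on the NAS
cat > docker-compose.yml <<'EOF'
services:
  monitor:
    image: registry.lan:5000/monitor:latest
    restart: unless-stopped
    env_file: .env
    volumes:
      - /mnt/nas/monitor/data:/app/data
EOF

# Secrets live next to the compose file but never in version control
cat > .env <<'EOF'
ALERT_WEBHOOK_URL=https://example.invalid/hook
API_TOKEN=replace-me
EOF
printf '.env\n' >> .gitignore
```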

Step 6: Monitor and Validate

I let each service run in parallel with its Windows VM for a week. I compared outputs, checked logs, and verified that nothing broke. Only after I was confident did I shut down the Windows VM.
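Most of that validation was simply diffing what the two environments produced. The core of the check, with two inline sample files standing in for the real per-day VM and container logs:

```shell
# Stand-ins for the real output files from the VM and the container
printf 'alert A\nalert B\n' > vm-output.log
printf 'alert B\nalert A\n' > container-output.log

# Ordering can differ between runs, so compare sorted contents
sort vm-output.log > vm.sorted
sort container-output.log > ct.sorted
if diff -q vm.sorted ct.sorted >/dev/null; then
  echo "outputs match"
else
  echo "MISMATCH - keep the VM running and investigate"
fi
```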

What I Learned About Containers vs. VMs

Containers Are Not VMs

This sounds obvious, but it’s easy to forget in practice. I initially tried to treat containers like lightweight VMs—installing multiple services in one container, using systemd, expecting persistent storage by default. None of that works well. Containers are designed to run a single process, be stateless, and be disposable. Fighting that design makes everything harder.

State and Data Need External Storage

Containers are ephemeral. If a container crashes or gets recreated, anything stored inside it is gone. I learned this the hard way when I lost a day’s worth of logs because I hadn’t mounted a volume. Now, anything that needs to persist—logs, data files, configuration—lives outside the container on my NAS or in a dedicated volume.
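In docker run terms, that means an explicit bind mount for anything worth keeping (the paths are examples from my layout):

```shell
# Logs survive container recreation because they live on the NAS,
# not in the container's writable layer
docker run -d --name monitor \
  -v /mnt/nas/monitor/logs:/app/logs \
  monitor:latest
```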

Networking Is Different

Windows VMs got their own IP addresses from my network’s DHCP server. Containers, by default, live on a Docker bridge network and aren’t directly accessible from my LAN. I had to decide whether to use port forwarding, put containers on a macvlan network, or use a reverse proxy. I ended up using Traefik as a reverse proxy for HTTP services and port forwarding for everything else. It works, but it required rethinking how I access services.
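For the services that needed a real LAN address, a macvlan network is the closest match to how the VMs behaved. A sketch assuming a 192.168.1.0/24 LAN and eth0 as the host NIC (both placeholders):

```shell
# Create a macvlan network bridged onto the physical NIC, then attach
# a container with its own LAN IP so it's reachable like the old VM was
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 lan

docker run -d --name webhook --network lan --ip 192.168.1.50 webhook:latest
```

One known macvlan quirk: the Docker host itself cannot reach containers on the macvlan network unless you add an extra macvlan interface on the host.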

Not Everything Should Be Containerized

The .NET Framework application taught me this. If migrating something to a container requires more effort than the benefit it provides, it’s okay to leave it as-is. Containers are a tool, not a mandate. I still have one Windows VM running, and that’s fine.

Resource Savings

After migrating three of my four Windows VMs to containers, I freed up about 10GB of RAM and roughly 150GB of disk space. The containers use a combined 300MB of RAM and about 2GB of disk. Boot times went from minutes to seconds. Updates are now docker pull and docker-compose up -d instead of waiting for Windows Update to finish.

The one remaining Windows VM still consumes 2GB of RAM, but I can live with that for now.

Key Takeaways

  • Windows 10 EOL is a forcing function, but it’s also an opportunity to eliminate unnecessary Windows dependencies.
  • Services built on cross-platform runtimes (Python, Node.js, Go) are trivial to containerize. Legacy .NET Framework applications are not.
  • Containers require a different mindset than VMs. Treat them as stateless, single-purpose, and disposable.
  • Not everything needs to be migrated immediately. Prioritize based on effort vs. benefit.
  • Running Docker inside an LXC container on Proxmox works well and keeps the host clean.
  • Test locally before deploying to production. Catch the easy mistakes on your laptop, not on your server.

I’m not done with this process. I still have that .NET application to deal with, and I’m sure I’ll find other services that could benefit from containerization. But the progress so far has been worth it—less overhead, faster deployments, and no more Windows Update interruptions for services that never needed Windows in the first place.
