
Migrating from Docker Compose to Podman with systemd for rootless container management

Why I Moved from Docker Compose to Podman

I've been running containers on my Proxmox homelab for years, mostly with Docker Compose. It worked fine—until I started thinking about what happens when the Docker daemon crashes or needs an update. Every container stops. Everything goes down at once.

That's when I looked at Podman. The pitch was simple: rootless containers, no daemon, and direct systemd integration. I wanted to see if it could replace my Docker Compose setup without adding complexity.

This isn't a theoretical comparison. I migrated real services—monitoring tools, automation workflows, and a few internal apps—and ran them in production for months. Some things worked better than expected. Others required workarounds I didn't anticipate.

My Setup Before the Migration

I had about a dozen services running across multiple Docker Compose files on an Ubuntu VM inside Proxmox. Each service had its own compose file in /opt/services/<name>, and I used systemd units to start them on boot with docker-compose up -d.

The pattern looked like this:

[Unit]
Description=Some Service
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/services/some-service
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target

It worked. But every service depended on the Docker daemon being healthy. If I needed to restart Docker for any reason, everything stopped.

Installing Podman and podman-compose

I installed Podman directly from Ubuntu's repos:

sudo apt install podman

Then I added podman-compose via pip. I used a virtual environment to keep it isolated:

python3 -m venv ~/podman-venv
source ~/podman-venv/bin/activate
pip install podman-compose

I didn't remove Docker yet. I wanted to run both in parallel during the migration so I could compare behavior and fall back if something broke.

Converting the First Service

I picked a simple service first: a single-container app with no volumes or networks. My existing compose file looked like this:

version: '3'
services:
  app:
    image: someimage:latest
    ports:
      - "8080:8080"
    environment:
      - VAR=value

I copied the directory, renamed it, and ran:

podman-compose up -d

It worked immediately. No changes to the compose file needed. The container started, and I could access it on port 8080.

But I noticed something: Podman created a pod automatically. When I listed containers with podman ps, I saw two—my app container and an "infra" container. The infra container exists just to hold the network namespace for the pod. It doesn't run anything.

This is different from Docker, which doesn't use pods at all. In Podman, every podman-compose deployment creates a pod, even if you only have one container.

Handling Multi-Container Services

Next, I tried a service with multiple containers: a web app, a database, and a Redis instance. The compose file had a custom network and named volumes:

version: '3'
services:
  web:
    image: webapp:latest
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
  redis:
    image: redis:7
    volumes:
      - redis-data:/data

volumes:
  db-data:
  redis-data:

networks:
  default:
    name: webapp-net

I ran podman-compose up -d without changes. It worked, but I had to adjust my expectations.

Podman created a pod with all three containers inside it. They shared the same network namespace, so localhost worked for inter-container communication. I didn't need to use service names like db or redis—everything was reachable on 127.0.0.1.

This is actually simpler than Docker's approach, where each service gets its own network interface and you rely on Docker's DNS to resolve service names.
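In practice that meant the web app's connection strings pointed at 127.0.0.1 instead of the service names. A sketch of what that looks like (the variable names and credentials here are hypothetical):

```yaml
services:
  web:
    image: webapp:latest
    environment:
      # Inside the pod, db and redis share web's network namespace,
      # so loopback reaches them directly—no service-name DNS needed.
      - DATABASE_URL=postgres://app:secret@127.0.0.1:5432/app
      - REDIS_URL=redis://127.0.0.1:6379/0
```

The original service-name hostnames may still resolve depending on the podman-compose version, but loopback is what the shared namespace guarantees.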

But there's a catch: all containers in the pod share the same port space. If two containers try to expose the same port, it fails. I ran into this with a service that had two web interfaces both trying to use port 8080. I had to change one of them to 8081 in the compose file.
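A minimal sketch of that fix, with hypothetical image names: since every container in the pod shares one port space, the second service has to actually listen on a different port, not just map to a different host port.

```yaml
services:
  ui-a:
    image: ui-a:latest
    ports:
      - "8080:8080"
  ui-b:
    image: ui-b:latest
    environment:
      # Hypothetical env var; the app itself must listen on 8081,
      # because 8080 is already taken inside the shared namespace.
      - HTTP_PORT=8081
    ports:
      - "8081:8081"
```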

Switching to systemd for Real

The big advantage of Podman is native systemd integration. Instead of using podman-compose as a long-running process, I could generate systemd units and let systemd manage everything.

I started a pod with podman-compose, then generated the unit files:

podman generate systemd --new --files --name <pod-name>

This created multiple unit files—one for the pod and one for each container. I moved them to ~/.config/systemd/user/ because I was running rootless:

mkdir -p ~/.config/systemd/user
mv *.service ~/.config/systemd/user/
systemctl --user daemon-reload

Then I enabled and started the pod:

systemctl --user enable --now pod-<pod-name>.service

It worked. The pod started on boot, and I could manage it with systemctl like any other service. No more docker-compose up or down.
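For reference, the pod unit that podman generate systemd --new produces looks roughly like this (abridged, with a hypothetical pod name; the exact flags and pid-file plumbing vary between Podman versions):

```ini
# pod-mypod.service (abridged; generated content varies by Podman version)
[Unit]
Description=Podman pod-mypod.service
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
TimeoutStopSec=70
# --new means the pod is created fresh on every start and removed on stop
ExecStartPre=/usr/bin/podman pod create --infra --name mypod
ExecStart=/usr/bin/podman pod start mypod
ExecStop=/usr/bin/podman pod stop -t 10 mypod
ExecStopPost=/usr/bin/podman pod rm -f mypod
Type=forking

[Install]
WantedBy=default.target
```

The container units generated alongside it declare BindsTo= and After= on the pod unit, so stopping the pod stops its containers as well.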

But there's an important detail: user services don't start until you log in. To make them start at boot, I had to enable lingering:

sudo loginctl enable-linger $USER

Without this, the services would only start when I SSH'd into the machine.

What Didn't Work

Not everything translated cleanly.

Volume permissions: Rootless Podman maps UIDs inside the container into a subordinate UID range on the host. If a container writes files as UID 1000, they can show up on the host owned by something like UID 100999, and a bind mount created by my regular user wasn't writable from inside the container.

My first attempt was the :Z volume option, but that only relabels files with SELinux contexts; on Ubuntu, which uses AppArmor rather than SELinux, Podman effectively ignores it. What fixed the permissions was the :U option, which tells Podman to recursively chown the mounted content to match the container's user:

volumes:
  - ./data:/data:U

Alternatively, podman unshare chown -R 1000:1000 ./data changes ownership from the host side using the same UID mapping the container sees.

Networking quirks: Some services expected to bind directly to the host's interfaces, which the pod's network namespace prevents. I had to set network_mode: host in the compose file to bypass the pod network entirely.
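For those cases the compose change is a one-liner (service and image names hypothetical):

```yaml
services:
  discovery-app:
    image: discovery-app:latest
    # Skip the pod network; the container sees the host's interfaces directly.
    # Note: with host networking, any ports: section is ignored.
    network_mode: host
```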

Image pulls: Podman doesn't default to Docker Hub. It checks multiple registries, and sometimes it picked the wrong one. I had to fully qualify image names like docker.io/library/postgres:15 instead of just postgres:15.
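Instead of qualifying every image, you can also restrict the search list so unqualified names resolve to Docker Hub the way Docker does, via registries.conf (system-wide or per-user):

```toml
# /etc/containers/registries.conf, or ~/.config/containers/registries.conf
# for a rootless per-user override
unqualified-search-registries = ["docker.io"]
```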

Running This in Production

After migrating a few services, I switched my entire homelab to Podman. I kept Docker installed but stopped using it for new deployments.

The main benefit I noticed: isolation. Each service runs as its own systemd unit. If one crashes, the others keep running. If I need to restart a service, I use systemctl --user restart instead of docker-compose restart.

I also liked that I could see container resource usage in systemd-cgtop, which integrates better with my monitoring setup.

But there are trade-offs. Podman is less mature than Docker in some areas. Documentation assumes you're running on Red Hat or Fedora. Some third-party tools (like Portainer) don't support Podman well. And podman-compose isn't as feature-complete as Docker Compose—some advanced options just don't work.

Key Takeaways

Podman works. It's not a drop-in replacement for Docker, but it's close enough that most services migrate without major changes.

The rootless model is genuinely better for security. No daemon running as root. No single point of failure. And systemd integration means I don't need a separate orchestration layer for simple multi-container apps.

But you have to adjust your expectations. Podman's pod model changes how networking works. Volume permissions require more attention. And some Docker-specific tools won't work at all.

If you're running a homelab or self-hosting on a single machine, Podman is worth trying. If you're deeply invested in Docker tooling or need features like Docker Swarm, the migration will be harder.

I'm still using Podman. It's been stable, and I haven't hit any issues that made me want to switch back. But I also know it's not the right choice for everyone.