Why I Started Using Git-Based Stack Versioning
I run multiple Docker stacks across different environments—some on my Proxmox host, others on a dedicated Docker node. Managing these stacks through Portainer’s UI was fine at first, but I kept running into the same problem: no clear history of what changed, when, or why.
When a stack broke after an update, I had to dig through container logs and try to remember what I changed. Rolling back meant manually editing the compose file in Portainer’s editor and hoping I didn’t miss something. It worked, but it felt fragile.
I needed a system where:
- Every change was tracked
- Rollbacks were predictable
- I could see exactly what was running in production
- Updates didn’t require me to be in the Portainer UI
That’s when I moved my stack definitions into Git and connected them to Portainer using its GitOps feature.
My Setup
I already had Portainer running as a Docker container on my main host. I’m using the Enterprise Edition because I needed the GitOps integration—Portainer offers a free 3-node license, which covers my setup.
Here’s the compose file I use to run Portainer:
services:
  portainer:
    image: portainer/portainer-ee:2.31.3
    container_name: portainer
    restart: unless-stopped
    environment:
      TZ: Europe/Amsterdam
      PUID: 1000
      PGID: 1000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer:/data
    ports:
      - 8000:8000
      - 9443:9443

volumes:
  portainer:
    name: portainer
I store all my stack definitions in a private GitHub repository. Each stack gets its own directory with a docker-compose.yml file and any related config files.
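The layout is simple. The directory names below are illustrative (only the nginx one appears in my examples later); the pattern is one directory per stack:

```
stacks-repo/
├── nginx/
│   ├── docker-compose.yml
│   └── conf.d/              # related config files for this stack
├── portainer/
│   └── docker-compose.yml
└── ...
```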
Connecting Git to Portainer
To let Portainer access my GitHub repo, I created a fine-grained personal access token with read access to metadata and read/write access to code. I didn’t use a classic token because I wanted tighter control over what the token could do.
In Portainer, I added a new stack and chose “Git repository” as the build method. I entered:
- My GitHub username
- The personal access token
- The HTTPS clone URL of my repo
- The path to the compose file inside the repo (e.g., nginx/docker-compose.yml)
I enabled “GitOps updates” and set the sync interval to 5 minutes. This means Portainer checks the repo every 5 minutes and redeploys the stack if it detects changes.
What Worked
Once the stack was deployed, any change I pushed to the repo automatically triggered a redeploy. I didn’t have to log into Portainer or manually click “Update the stack.” The workflow became:
- Edit the compose file locally
- Commit and push to GitHub
- Wait a few minutes for Portainer to sync
- Verify the stack redeployed correctly
Rollbacks became trivial. If something broke, I reverted the commit in Git and pushed. Portainer picked up the change and redeployed the previous version.
I also started using Git tags to mark stable releases. This made it easy to see which version was running in production and which changes were still in testing.
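The rollback and tagging steps are plain Git commands. Here's a self-contained sketch that replays the flow in a throwaway repo (the file contents and tag name are just examples); in the real workflow, the final push is what Portainer picks up on its next sync:

```shell
#!/bin/sh
# Demo of the rollback + tagging flow in a disposable local repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "image: nginx:1.27" > docker-compose.yml
git add . && git commit -qm "good version"
git tag -a v1.0.0 -m "stable"       # mark the known-good state

echo "image: nginx:broken" > docker-compose.yml
git commit -qam "bad update"

git revert --no-edit HEAD           # roll back the bad commit
grep image docker-compose.yml       # back to nginx:1.27
# In the real repo: git push, then Portainer redeploys on the next sync.
```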
What Didn’t Work
The sync interval isn’t instant. If I pushed a critical fix, I still had to wait up to 5 minutes for Portainer to detect it. I could manually trigger an update from the UI, but that defeated the purpose of automation.
I initially tried storing environment variables in .env files inside the repo, but Portainer doesn’t read those files during GitOps sync. I had to define environment variables directly in Portainer’s stack settings, which meant they weren’t version-controlled. This was a limitation I didn’t expect.
Another issue: if a stack failed to deploy, Portainer didn’t roll back to the previous working version. It just left the stack in a failed state. I had to manually revert the Git commit and wait for the next sync. There’s no automatic rollback on failure.
Webhook Triggers
To solve the sync delay, I set up a webhook in Portainer. This gave me a URL I could call to force an immediate update without waiting for the next scheduled sync.
I added that URL as a push webhook on my GitHub repository. Now, every time I push a commit, GitHub sends a POST request to Portainer, and the stack redeploys within seconds.
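You can also trigger the webhook by hand, which is handy for testing before wiring up GitHub. The hostname and UUID below are hypothetical; Portainer shows the real URL in the stack's settings once the webhook is enabled:

```shell
# Hypothetical webhook URL — copy the real one from Portainer's stack settings.
WEBHOOK_URL="https://portainer.example.com:9443/api/stacks/webhooks/REPLACE-WITH-UUID"

# A plain POST forces an immediate redeploy; -k tolerates a self-signed
# certificate. Commented out here so the sketch runs without a live Portainer:
# curl -ksf -X POST "$WEBHOOK_URL"

# The same URL goes into GitHub's webhook "Payload URL" field.
echo "Payload URL: $WEBHOOK_URL"
```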
This made the workflow much faster, but it also introduced a new problem: if I pushed a bad commit, it deployed immediately. I had to be more careful about testing changes locally before pushing.
Handling Config Files
Some of my stacks rely on external config files—like Nginx configs or application settings. I stored these in the same Git repo under a subdirectory for each stack.
In Portainer’s stack settings, I added the config directory path under “Local filesystem paths.” This tells Portainer to sync those files along with the compose file.
This worked, but it’s not obvious from the UI. If you forget to add the path, Portainer deploys the stack without the config files, and the containers fail to start. There’s no warning or validation.
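For reference, this is the shape of a stack that mounts configs from the repo. The image tag and paths are examples based on my nginx stack; the relative path resolves inside Portainer's checkout of the repo:

```yaml
services:
  nginx:
    image: nginx:1.27
    volumes:
      # Relative to the compose file in the cloned repo. This only works
      # if the directory is also listed under "Local filesystem paths"
      # in the stack settings — otherwise it silently isn't synced.
      - ./conf.d:/etc/nginx/conf.d:ro
    ports:
      - 80:80
```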
Key Takeaways
- GitOps with Portainer gives you version control and audit logs, but it’s not a complete CI/CD pipeline
- Webhook triggers are essential if you want fast deployments—scheduled syncs are too slow for anything time-sensitive
- Environment variables aren’t version-controlled in this setup, which is a gap I haven’t fully solved
- Failed deployments don’t roll back automatically—you need to handle that manually
- Config files need to be explicitly listed in Portainer’s settings, or they won’t sync
This setup works well for managing stacks across multiple hosts, but it’s not foolproof. You still need to test changes before pushing, and you need a plan for handling failed deployments.