Why I Built This
I run a mixed environment at home: Debian VMs on Proxmox, Alpine containers for lightweight services, and a handful of Docker Compose stacks scattered across different hosts. Updates were a mess. I’d pull new container images on one machine, forget to update system packages on another, and occasionally break something because a host-level dependency changed while a container expected the old version.
I needed a way to coordinate updates across all of this without manually SSH-ing into each host. I already had n8n running for other automation tasks, so I decided to build a workflow that could trigger Docker Compose pulls and system package upgrades in a somewhat synchronized way—not perfectly atomic, but predictable enough that I could trust it.
My Setup
I have n8n running in a Docker container on one of my Debian VMs. It’s accessible over my local network and handles various automation tasks already. The hosts I needed to manage are:
- Two Debian 12 VMs running Docker with several Compose stacks
- One Alpine 3.19 LXC container running a few standalone Docker containers
All of them have key-based SSH authentication already configured.
I didn’t want to install agents or additional software on each host. SSH and existing package managers were enough.
What I Built
The workflow I created in n8n does the following:
- Runs on a schedule (I set it to weekly, Sunday mornings)
- Connects to each host via n8n’s SSH node and runs the check commands remotely
- Checks for available system package updates (apt on Debian, apk on Alpine)
- Pulls the latest Docker Compose images for specified stacks
- Logs the output so I can review what changed
- Sends me a summary notification when it’s done
The SSH Execute Command Node
n8n has an SSH node that lets you run commands on remote hosts. I configured separate SSH credentials for each host in n8n’s credential manager. Each credential stores the hostname, username, and the path to my private key.
For Debian hosts, the command I run looks like this:
sudo apt update && sudo apt list --upgradable && cd /path/to/compose/stack && docker compose pull
For Alpine, it’s similar but uses apk:
sudo apk update && sudo apk list --upgradable && cd /path/to/compose/stack && docker compose pull
I don’t automatically apply upgrades. The workflow just checks what’s available and pulls new images. I review the output and decide whether to actually run the upgrades and restart services.
Handling Multiple Hosts
I used a Loop Over Items node to iterate through a list of hosts. Each item in the list contains the hostname, the path to the Compose stack, and the package manager type (apt or apk). The SSH node runs inside the loop, so it executes once per host.
This approach isn’t elegant, but it works. I hardcoded the host list in a Set node at the start of the workflow. If I add a new host, I just update that node.
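In my workflow the host list lives in a Set node, but the same shape can be produced by a Code node, which is easier to show here. This is a sketch with placeholder hostnames, paths, and field names (`host`, `stackPath`, `pkg` are my naming, not anything n8n requires):

```javascript
// Hypothetical host inventory for the Loop Over Items node.
// Every value below is an example; adjust to your own hosts.
const hosts = [
  { host: "debian-vm-1", stackPath: "/opt/stacks/media", pkg: "apt" },
  { host: "debian-vm-2", stackPath: "/opt/stacks/monitoring", pkg: "apt" },
  { host: "alpine-lxc", stackPath: "/opt/stacks/misc", pkg: "apk" },
];

// n8n items are objects with a `json` property; wrap each host entry.
const items = hosts.map((h) => ({ json: h }));
// In an actual n8n Code node, you would end with: return items;
```

Downstream nodes can then reference fields with expressions like `{{ $json.stackPath }}` when building the SSH command for each host.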
Logging and Notifications
I added a Code node after the SSH execution to parse the output and extract useful information—like how many packages have updates available or which Docker images were pulled. The parsed data gets written to a file on the n8n host itself (a simple append to a log file via n8n’s local Execute Command node).
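The parsing ended up as two branches keyed on the package manager. A simplified sketch of the idea—the function name and the `stdout`/`pkg` inputs are my own conventions, and the line formats are based on typical `apt list --upgradable` and `apk list --upgradable` output:

```javascript
// Extract the names of upgradable packages from the remote command's stdout.
// apt lines look like: "name/suite new-version arch [upgradable from: old]"
// apk lines look like: "name-version-rN arch {repo} (license) [upgradable from: ...]"
function upgradablePackages(stdout, pkg) {
  const lines = stdout
    .split("\n")
    .filter((l) => l.includes("[upgradable from:"));
  if (pkg === "apt") {
    // apt: the package name is everything before the first "/"
    return lines.map((l) => l.split("/")[0]);
  }
  // apk: drop the trailing "-version-rN" from the first field
  return lines.map((l) => l.split(" ")[0].replace(/-\d[^ ]*$/, ""));
}
```

The apk branch is the fragile one: package names can contain digits, so the regex only strips from the first hyphen that is immediately followed by a digit.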
At the end, I use the Send Email node to send myself a summary. It includes the host name, the number of updates available, and any errors that occurred during the SSH connection or command execution.
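Assembling the email body is just string building in a Code node. A rough sketch, assuming each item carries `host`, `updateCount`, and `error` fields produced by the earlier parsing step (again, my field names, not n8n’s):

```javascript
// Build a plain-text email body from per-host results.
// A missing `error` field means the host was checked successfully.
function buildSummary(results) {
  return results
    .map((r) =>
      r.error
        ? `${r.host}: FAILED (${r.error})`
        : `${r.host}: ${r.updateCount} package update(s) available`
    )
    .join("\n");
}
```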
What Worked
The workflow runs reliably every week. I get a clear picture of what needs attention without having to manually check each host. The SSH approach is simple and doesn’t require installing anything new on the target machines.
Separating the check from the actual upgrade was the right call. I’ve caught a few situations where a package update would have broken a container dependency, and I was able to handle it manually before things went sideways.
Using n8n’s credential manager for SSH keys made it easy to rotate keys when I needed to. I didn’t have to dig through the workflow logic—just updated the credential and everything kept working.
What Didn’t Work
The first version of this workflow tried to be too clever. I attempted to parse the output of apt list --upgradable and apk list --upgradable in a single Code node, assuming the formats would be consistent. They weren’t. Debian’s output includes extra metadata that Alpine’s doesn’t, and my regex broke constantly. I ended up writing separate parsing logic for each package manager, which made the Code node messier but more reliable.
I also tried to automatically restart Docker Compose stacks after pulling new images. That was a mistake. Some stacks have dependencies on others, and restarting them out of order caused temporary outages. I removed that step and now handle restarts manually after reviewing the log.
Error handling was harder than I expected. If an SSH connection fails, the workflow stops unless you explicitly configure the node to continue on error. I had to add error outputs and route them to a separate notification path so I’d know when something failed, rather than just getting silence.
Timing Issues
Running docker compose pull on multiple hosts simultaneously can saturate my network, especially if several large images are being downloaded at once. I added a Wait node between hosts to stagger the pulls by a few minutes. It’s not perfect, but it keeps things from grinding to a halt.
Key Takeaways
This workflow isn’t a replacement for proper configuration management or orchestration tools. It’s a lightweight coordination layer that works for my small setup. If I had dozens of hosts, I’d probably look at something like Ansible instead.
The value here is visibility. I know what’s out of date, and I can make informed decisions about when to apply updates. The automation handles the tedious part—checking each host—and leaves the risky part—actually upgrading—in my hands.
n8n’s SSH node is straightforward but limited. You can’t easily stream output or handle interactive prompts. Everything has to be non-interactive, which means you need passwordless sudo configured on the target hosts. That’s fine for my environment, but it’s worth noting.
If you’re running a similar mixed environment and want to coordinate updates without adding complexity, this approach might work for you. Just don’t expect it to be fully automated—think of it as a reporting tool that happens to pull Docker images along the way.