
Automating Container Security Patching with Watchtower and Ntfy Notifications: Selective Auto-updates with Rollback Triggers for Production Homelab Services

Why I Built Selective Auto-Updates for My Homelab

I run about 20 Docker containers in my homelab—n8n for automation, AdGuard for DNS filtering, a few monitoring tools, and several services I depend on daily. Security patches matter, but so does uptime. I needed a way to keep containers updated without waking up to broken services or silent failures.

Manual updates felt safe but slow. Full automation felt fast but reckless. I wanted something in between: automatic updates for low-risk containers, notifications for everything, and a way to quickly roll back when things broke.

This is how I set up Watchtower with ntfy notifications and selective update policies that work for my production homelab.

My Setup: Watchtower, Ntfy, and Docker Compose

I run everything on Proxmox, with Docker containers managed through Portainer. My notification system is a self-hosted ntfy instance running in its own container. I chose to self-host ntfy because I didn’t want update notifications going through a public service, and I already had the infrastructure.

Here’s the core of my Watchtower configuration in Docker Compose:

version: "3.8"

services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_INCLUDE_RESTARTING=true
      - WATCHTOWER_POLL_INTERVAL=21600
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      - WATCHTOWER_NOTIFICATION_URL=ntfy://ntfy.local.domain/watchtower
      - TZ=America/New_York

A few things I learned while setting this up:

  • WATCHTOWER_CLEANUP=true removes old images after updates. Without this, my disk filled up fast.
  • WATCHTOWER_POLL_INTERVAL=21600 checks every 6 hours. Daily felt too slow for security patches; hourly felt excessive.
  • The notification URL uses shoutrrr format, which Watchtower supports natively. I point it at my local ntfy instance.
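Before pointing Watchtower at the topic, it's worth confirming that ntfy actually delivers. A plain HTTP POST publishes a message; this is a sketch using my internal hostname (`ntfy.local.domain`), which you'd replace with your own. Note that once auth is enabled (covered below), an unauthenticated publish like this will be rejected and you'd need to add `-u user:password`.

```shell
# Sanity check: publish a test message to the watchtower topic and
# confirm it shows up in the ntfy app / browser tab.
# ntfy.local.domain is my internal hostname; substitute yours.
curl -d "Watchtower notification test" http://ntfy.local.domain/watchtower
```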

Selective Updates: What Gets Auto-Updated and What Doesn’t

I don’t auto-update everything. Some containers are too critical or too fragile. I use labels to control what Watchtower touches.

For containers I trust to update automatically, I add this label:

labels:
  - "com.centurylinklabs.watchtower.enable=true"
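One detail worth spelling out: on its own, that label doesn't restrict anything. For the enable label to act as an opt-in allowlist, Watchtower needs to run in label-enable mode; otherwise it updates every container it can see. A rough sketch of both sides (the `homepage` service is just an example from my stack):

```yaml
# Sketch: opt-in mode. With WATCHTOWER_LABEL_ENABLE=true, Watchtower only
# touches containers carrying the enable label; unlabeled containers are ignored.
watchtower:
  environment:
    - WATCHTOWER_LABEL_ENABLE=true

# An example low-risk service that I allow to auto-update:
homepage:
  image: ghcr.io/gethomepage/homepage:latest
  labels:
    - "com.centurylinklabs.watchtower.enable=true"
```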

For containers I want monitored but not auto-updated, I use monitor-only mode. I run a second Watchtower instance just for this:

watchtower-monitor:
  image: containrrr/watchtower
  container_name: watchtower-monitor
  restart: unless-stopped
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    - WATCHTOWER_MONITOR_ONLY=true
    - WATCHTOWER_POLL_INTERVAL=21600
    - WATCHTOWER_NOTIFICATIONS=shoutrrr
    - WATCHTOWER_NOTIFICATION_URL=ntfy://ntfy.local.domain/watchtower-monitor
  command: n8n adguard-home

This instance watches my n8n and AdGuard containers but never updates them. I get a notification when updates are available, then I update manually during a maintenance window.
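When one of those notifications comes in, the manual update during the maintenance window is just the standard compose workflow. A sketch, run from the directory holding my compose file (`n8n` is the service name in that file):

```shell
# Manual maintenance-window update for a monitor-only service.
docker compose pull n8n        # fetch the new image
docker compose up -d n8n       # recreate only this service
docker compose logs -f n8n     # watch startup to confirm it came back healthy
```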

Running two Watchtower instances felt redundant at first, but it solved a real problem: I needed different policies for different services.

Ntfy Notifications: What Actually Gets Sent

When Watchtower updates a container, ntfy sends me a push notification. I have the ntfy app on my phone and a browser tab open on my desktop.

A typical notification looks like this:

Watchtower
Updated container: homepage (ghcr.io/gethomepage/homepage:latest)

For monitor-only mode, I get:

Watchtower Monitor
Update available: n8n (n8nio/n8n:latest)

The notifications are plain but functional. I know what updated and when. If something breaks, I can check the timestamp and correlate it with the update.

One limitation: Watchtower doesn’t tell you what changed in the update. You still need to check release notes if you care about specifics.

Setting Up Ntfy

I run ntfy in a container with a simple config file mounted as a volume:

ntfy:
  image: binwiederhier/ntfy
  container_name: ntfy
  restart: unless-stopped
  ports:
    - "8080:80"
  volumes:
    - ./ntfy/server.yml:/etc/ntfy/server.yml
    - ./ntfy/cache:/var/cache/ntfy
  command: serve

My server.yml enables basic auth for the topics I care about:

base-url: "http://ntfy.local.domain"
cache-file: "/var/cache/ntfy/cache.db"
auth-default-access: "deny-all"

I created a user with write access to the watchtower and watchtower-monitor topics. Without auth, anyone on my network could spam notifications.
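The user and topic grants can be created with ntfy's own CLI inside the container. This is a sketch of the one-time setup; `watchtower-bot` and the topic names are from my setup, so adjust to yours:

```shell
# One-time auth setup, run inside the ntfy container.
docker exec -it ntfy ntfy user add watchtower-bot   # prompts for a password
docker exec ntfy ntfy access watchtower-bot watchtower write-only
docker exec ntfy ntfy access watchtower-bot watchtower-monitor write-only
```

With auth enabled, the Watchtower notification URL needs credentials too — something like `ntfy://watchtower-bot:<password>@ntfy.local.domain/watchtower` in shoutrrr's URL format.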

What Worked

This setup has been running for about eight months. Here’s what actually worked:

  • Automatic updates for stable containers: Things like homepage, Uptime Kuma, and a few monitoring tools update without issues. I’ve had zero breakages in this category.
  • Notifications are reliable: I’ve never missed an update notification. Ntfy just works.
  • Monitor-only mode for critical services: Knowing when n8n or AdGuard has an update available, without auto-applying it, gives me control without constant manual checking.
  • Cleanup prevents disk bloat: Before enabling WATCHTOWER_CLEANUP, I had dozens of old images eating 40+ GB. Now it’s automatic.

What Didn’t Work

Not everything went smoothly:

  • No built-in rollback: Watchtower doesn’t have a rollback feature. When a container breaks after an update, I have to manually pull the previous image tag and redeploy. I keep a text file with the last known good image tags for critical services, but it’s a manual process.
  • Breaking changes in updates: One container (a custom dashboard tool) updated and changed its config file format. The container started but didn’t work. I only noticed because I happened to check it. Notifications tell you an update happened, not whether it succeeded functionally.
  • No pre-update health checks: Watchtower doesn’t verify a container is healthy before updating it. If a container is already in a weird state, the update can make it worse.
  • Timing conflicts: I run backups at 2 AM. Watchtower sometimes updates containers during the backup window, which caused one corrupted backup. I had to adjust the poll interval to avoid overlap.
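For the timing conflict specifically, an alternative to nudging the poll interval is a fixed schedule. Watchtower accepts a cron expression instead of a rolling poll; a sketch of how I'd pin checks to 4 AM, safely after the 2 AM backup window (note the expression has six fields, with seconds first, and replaces the poll interval — don't set both):

```yaml
# Sketch: fixed daily check at 04:00 instead of a rolling 6-hour poll,
# so updates can never land inside the 2 AM backup window.
environment:
  - WATCHTOWER_SCHEDULE=0 0 4 * * *   # sec min hour day month weekday
```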

Rollback Strategy: What I Actually Do

Since Watchtower doesn’t handle rollbacks, I built a simple process:

  1. I keep a versions.txt file in my Docker Compose directory with the last known good image tags for critical containers.
  2. When I get a notification that a critical container updated (via monitor-only mode), I test it within an hour.
  3. If something breaks, I edit the compose file to pin the previous tag, then run docker compose up -d.

For example, if n8n updates from 1.20.0 to 1.21.0 and breaks, I change:

image: n8nio/n8n:latest

to:

image: n8nio/n8n:1.20.0

This works but requires manual intervention. I looked into using Watchtower’s WATCHTOWER_LIFECYCLE_HOOKS to run health checks post-update, but I haven’t implemented it yet. It’s on my list.
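The versions.txt lookup from step 1 is simple enough to script. Here's a minimal sketch — the file format (`service=image:tag`) and the tags shown are illustrative, not my actual pins:

```shell
#!/bin/sh
# Sketch of the rollback lookup. versions.txt maps each critical service
# to its last known good image, one "service=image:tag" line per service.

# Sample file for demonstration (in practice this is maintained by hand):
cat > versions.txt <<'EOF'
n8n=n8nio/n8n:1.20.0
adguard-home=adguard/adguardhome:v0.107.43
EOF

rollback_tag() {
  # Print the pinned image for a service; exits non-zero if not recorded.
  grep "^$1=" versions.txt | cut -d= -f2-
}

rollback_tag n8n   # prints n8nio/n8n:1.20.0
```

From there, rolling back is editing the compose file to that tag and running `docker compose up -d`, same as before.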

Lessons from Eight Months of Automated Updates

  • Not all containers are equal: Auto-update policies should match the risk profile of each service. A homepage widget can break without consequence. DNS filtering cannot.
  • Notifications are not monitoring: Knowing an update happened is not the same as knowing it succeeded. I still check critical services manually after updates.
  • Cleanup is mandatory: Without it, you’ll run out of disk space faster than you expect.
  • Self-hosted ntfy is worth it: I didn’t want my homelab update notifications going through a public service. Running ntfy myself took 10 minutes to set up and has been completely reliable.
  • Rollback is still manual: This is the biggest gap. I accept it because the time saved on routine updates outweighs the occasional manual rollback.

What I’d Change If I Started Over

If I rebuilt this setup today, I’d add a few things:

  • Post-update health checks: Use lifecycle hooks to ping each container’s health endpoint after an update. If it fails, send a high-priority notification.
  • Automated image tag logging: Script something to log the previous image tag before Watchtower updates, so rollback is one command instead of hunting through versions.txt.
  • Separate update windows: Stagger updates for different container groups to avoid everything updating at once.
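The first item maps onto Watchtower's lifecycle hooks. Roughly, it would look like this — a sketch I haven't run yet, where `/usr/local/bin/health-check.sh` is a hypothetical script baked into the container's image:

```yaml
# Sketch: enable lifecycle hooks on the Watchtower side...
watchtower:
  environment:
    - WATCHTOWER_LIFECYCLE_HOOKS=true

# ...then label each container with a command to run inside it after an update.
# A non-zero exit from the hook surfaces in Watchtower's logs and notification.
homepage:
  labels:
    - "com.centurylinklabs.watchtower.lifecycle.post-update=/usr/local/bin/health-check.sh"
```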

I haven’t done these yet because the current setup works well enough. But they’re on my roadmap.

Final Thoughts

Automating container updates with Watchtower and ntfy solved a real problem for me: keeping my homelab secure without constant manual work. It’s not perfect—rollback is still manual, and breaking changes can slip through—but it’s far better than the alternative of ignoring updates or spending hours each week checking for them.

The key was accepting that not everything should auto-update. Critical services get monitored, low-risk services get automated, and I get notifications for both. That balance works for my setup.

If you run a homelab with more than a handful of containers, this approach might work for you too. Just don’t expect it to be fully hands-off. You’re still responsible for knowing what’s running and how it breaks.
