
Implementing automated MongoDB 8.0 security patching with Watchtower and health checks after December 2025 CVE

Why I Needed Automated MongoDB Patching

I run MongoDB 8.0 in Docker containers on my Proxmox homelab. When CVE-2025-14847 dropped in late December 2025, I was traveling. The vulnerability allowed unauthenticated remote attackers to read uninitialized heap memory through a zlib compression flaw: low attack complexity, no credentials needed.

My MongoDB instance wasn't exposed to the internet, but it was accessible from my internal network where multiple services connect to it. I couldn't risk waiting days to manually pull and test a new image. I needed a system that would:

  • Detect the patched MongoDB 8.0.17 image as soon as it was available
  • Pull and restart the container automatically
  • Verify the database came back up correctly
  • Alert me if something broke

This wasn't theoretical. I had to solve it while away from my keyboard.

My MongoDB Setup Before Automation

I run MongoDB 8.0 as a single-node instance in Docker, managed through Portainer on Proxmox. The container uses:

  • Official mongo:8.0 image from Docker Hub
  • Persistent volume mounted at /data/db
  • Custom network with other internal services (n8n, a few Python scrapers)
  • No external exposure—only accessible via internal DNS

My deployment was stable but entirely manual. When updates came out, I would:

  1. Check MongoDB release notes
  2. Pull the new image
  3. Stop the container
  4. Start it again with the new image
  5. Manually verify connections worked

This worked fine until I wasn't home to do it.

Implementing Watchtower for Automated Updates

I already used Watchtower to monitor a few other containers, but I had excluded MongoDB because I wanted manual control over database updates. The CVE changed that calculation.

Here's the Watchtower configuration I deployed:

version: '3.8'
services:
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_POLL_INTERVAL=3600
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_INCLUDE_STOPPED=false
      - WATCHTOWER_LABEL_ENABLE=true
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      - WATCHTOWER_NOTIFICATION_URL=generic+https://n8n.internal.vipinpg.com/webhook/watchtower-alerts

Key decisions I made:

  • Poll interval of 1 hour: MongoDB's official images update within hours of a release, not minutes. Checking every hour was aggressive enough without hammering Docker Hub.
  • Label-based monitoring: I added com.centurylinklabs.watchtower.enable=true to my MongoDB container. This way, Watchtower only touches containers I explicitly allow.
  • Cleanup enabled: Old images get removed automatically to avoid filling my storage.
  • Custom webhook for notifications: I route Watchtower alerts through n8n instead of email or Slack because I wanted to add conditional logic later.

I updated my MongoDB container labels:

labels:
  - "com.centurylinklabs.watchtower.enable=true"

That was it. Watchtower now monitors MongoDB for new images and pulls them automatically.
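For context, the full MongoDB service definition ended up looking roughly like this. Treat it as a sketch: the container name, volume, and network names are from my setup, and yours will differ.

```yaml
services:
  mongodb:
    image: mongo:8.0
    container_name: mongodb
    restart: unless-stopped
    volumes:
      - mongo_data:/data/db          # persistent data volume
    networks:
      - internal                     # shared with n8n and the scrapers
    labels:
      - "com.centurylinklabs.watchtower.enable=true"   # opt in to Watchtower
    # no ports: section -- the container is only reachable on the internal network

volumes:
  mongo_data:

networks:
  internal:
    external: true
```

The important part is what's absent: with `WATCHTOWER_LABEL_ENABLE=true` set on the Watchtower side, any container without that label is left alone.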

Building the Health Check System

Watchtower restarts containers after pulling new images, but it doesn't verify the service inside actually works. I needed a health check that would:

  • Confirm MongoDB accepts connections
  • Verify authentication works
  • Test a basic read operation
  • Run immediately after the container restarts

I wrote a Python script that runs as a separate container on the same Docker network:

import pymongo
import sys
import time

def check_mongodb_health():
    max_retries = 5
    retry_delay = 10
    
    for attempt in range(max_retries):
        try:
            client = pymongo.MongoClient(
                "mongodb://mongodb:27017/",
                username="admin",
                password="[redacted]",
                serverSelectionTimeoutMS=5000
            )
            
            # Force connection
            client.admin.command('ping')
            
            # Test read operation
            db = client['healthcheck']
            collection = db['status']
            collection.find_one()
            
            print(f"MongoDB health check passed on attempt {attempt + 1}")
            return True
            
        except Exception as e:
            print(f"Attempt {attempt + 1} failed: {str(e)}")
            if attempt < max_retries - 1:
                time.sleep(retry_delay)
            else:
                return False
    
    return False

if __name__ == "__main__":
    success = check_mongodb_health()
    sys.exit(0 if success else 1)

I packaged this in a lightweight Alpine-based container and set it to run via Cronicle (my job scheduler) every 5 minutes. If MongoDB restarts due to a Watchtower update, the health check catches it within that window.

The health check container is configured with:

depends_on:
  - mongodb
restart: "no"

It runs once per execution, exits, and gets cleaned up. No persistent processes.
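The corresponding service definition is short. Note that the health check container itself carries no Watchtower label, so it never gets auto-updated mid-run. The image name here is mine; it's just the script above baked into an Alpine-based Python image.

```yaml
services:
  mongo-healthcheck:
    image: mongo-healthcheck:latest   # my Alpine-based image with the script above
    container_name: mongo-healthcheck
    restart: "no"                     # run once per invocation, then exit
    depends_on:
      - mongodb
    networks:
      - internal                      # same network as MongoDB
```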

Connecting Health Checks to Alerts

I route both Watchtower notifications and health check failures through n8n. Here's the workflow I built:

  1. Watchtower webhook: When Watchtower updates MongoDB, it sends a JSON payload to my n8n webhook.
  2. Parse and log: n8n extracts the container name, old image tag, and new image tag. It writes this to a simple SQLite database I keep for audit logs.
  3. Wait 2 minutes: n8n pauses to give MongoDB time to fully start.
  4. Trigger health check: n8n calls the Cronicle API to run the MongoDB health check immediately (instead of waiting for the next scheduled run).
  5. Check result: If the health check fails, n8n sends me a Telegram message with the error details and the specific CVE that triggered the update (if I've logged it).

This took about an hour to set up in n8n. The workflow is not complicated, but it required manually testing each step to make sure the timing worked.
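The parse-and-lookup logic from that workflow (steps 2, 4, and 5) is simple enough to sketch in Python. The payload field names and the CVE table are assumptions specific to my setup, since the webhook body depends on how the shoutrrr URL is configured; the Cronicle endpoint path follows its HTTP API docs, but verify it against your Cronicle version.

```python
# Manually maintained mapping of MongoDB image tags to the CVEs they patch.
# This is the fragile lookup table mentioned below -- it only knows what I tell it.
CVE_LOOKUP = {
    "8.0.17": ["CVE-2025-14847"],
}

def handle_watchtower_event(payload):
    """Extract update details from a Watchtower notification payload.

    The field names ("container", "old_image", "new_image") are assumptions
    about how I shaped the webhook body in n8n, not a fixed Watchtower schema.
    """
    old_tag = payload["old_image"].split(":")[-1]
    new_tag = payload["new_image"].split(":")[-1]
    return {
        "container": payload["container"],
        "from_version": old_tag,
        "to_version": new_tag,
        "cves": CVE_LOOKUP.get(new_tag, []),
    }

def cronicle_run_url(base_url, event_id, api_key):
    """Build the URL to fire a Cronicle event on demand (step 4).

    Endpoint path is from Cronicle's HTTP API documentation; event_id and
    api_key come from the Cronicle UI.
    """
    return f"{base_url}/api/app/run_event/v1?id={event_id}&api_key={api_key}"

def format_alert(event, health_ok):
    """Build the Telegram message body (step 5)."""
    lines = [
        f"{event['container']} updated: {event['from_version']} → {event['to_version']}",
        f"Health check: {'PASSED' if health_ok else 'FAILED'}",
    ]
    if event["cves"]:
        lines.append("CVE patched: " + ", ".join(event["cves"]))
    return "\n".join(lines)
```

In n8n these live as small Function nodes rather than a standalone script, but the logic is the same.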

What Worked

When MongoDB 8.0.17 was released on December 24, 2025, Watchtower detected it within 90 minutes. It pulled the image, stopped the old container, and started the new one. The health check ran automatically 2 minutes later and passed on the first attempt.

I received a Telegram notification:

MongoDB updated: 8.0.16 → 8.0.17
Health check: PASSED
CVE patched: CVE-2025-14847

I didn't have to touch anything. The system handled it while I was offline.

Other things that worked:

  • No downtime for connected services: My n8n workflows and scrapers reconnected automatically. MongoDB's connection pooling handled the brief restart without errors.
  • Audit trail: I have a record of every update in my SQLite log, including timestamps and image versions.
  • Low resource overhead: Watchtower uses minimal CPU and RAM. The health check container runs for less than 10 seconds per execution.

What Didn't Work

The first time I tested this setup (before the CVE), the health check failed because MongoDB hadn't fully initialized its replica set status, even though it was accepting connections. The script passed the ping command but failed on the find_one() operation.

I fixed this by adding retries with a fixed delay in the health check script: five attempts, roughly 10 seconds apart, so it waits up to about 50 seconds before declaring failure.

Another issue: Watchtower's notification payload doesn't include CVE information. I had to manually log which CVEs corresponded to which MongoDB versions in a separate lookup table in n8n. This works, but it's fragile—if MongoDB releases a patch for multiple CVEs at once, I have to update the table manually.

I also learned that Watchtower doesn't respect Docker Compose depends_on relationships. If multiple containers need to update in sequence, Watchtower updates them in parallel. This hasn't broken anything yet, but it's a limitation I'm aware of.

Key Takeaways

  • Automated patching is not optional for internet-facing or network-accessible databases. Even internal services need this if you're not always available to respond.
  • Health checks must account for slow startup times. A simple connection test is not enough—you need to verify the service is actually ready to handle queries.
  • Watchtower is reliable for simple update workflows, but it doesn't replace proper monitoring. You still need to verify updates succeeded.
  • Audit logs are worth the effort. When something breaks weeks later, knowing exactly when and how a container was updated saves hours of debugging.
  • CVE tracking requires manual work. There's no automated way to link a Docker image update to the specific vulnerabilities it patches unless you maintain that mapping yourself.

This system has been running since late December 2025. It's handled two MongoDB updates so far without issues. I'm not claiming it's perfect, but it solved the problem I had: keeping my database patched when I'm not around to do it manually.