Creating n8n workflows to scrape CVE feeds and auto-patch vulnerable home automation containers before exploit publication

Why I Built This (And Why It Matters)

I run a home automation stack on Proxmox with multiple Docker containers handling everything from lighting control to climate monitoring. These containers run images pulled from public registries, and those images often bundle dependencies I never asked for—libraries, binaries, and runtime components that sit there quietly until someone finds a vulnerability.

The problem hit me after a critical OpenSSL CVE dropped. I had containers running affected versions for days before I even knew to check. By the time I manually patched everything, exploit code was already circulating. That gap—between CVE publication and my awareness—felt unacceptable for systems controlling physical infrastructure in my home.

I needed a way to monitor CVE feeds, identify which of my running containers were affected, and trigger updates before public exploits became widespread. This had to run automatically, because I don't check security bulletins every morning.

What I Actually Built

I created an n8n workflow that scrapes CVE data feeds, cross-references them against my running Docker containers, and automatically pulls updated images when vulnerabilities are detected. The workflow runs on my existing n8n instance inside Proxmox.

The Core Components

My setup uses:

  • n8n running in a dedicated LXC container on Proxmox
  • Docker API access from n8n to query running containers
  • NVD CVE feed (National Vulnerability Database JSON API)
  • Docker Hub API to check for updated images
  • Watchtower as the actual update mechanism
  • Gotify for push notifications when patches are applied

The workflow does not scan container contents or perform deep vulnerability analysis. It relies on CVE metadata and image tags to make patching decisions.

How the Workflow Operates

Every 6 hours, a schedule trigger in n8n kicks off the workflow:

  1. Fetch recent CVEs from NVD API (last 7 days of published vulnerabilities)
  2. Query Docker API to list all running containers with their base images and tags
  3. Parse CVE descriptions to extract affected software names and versions
  4. Match CVEs to containers by comparing software names in CVE data against image names and known packages
  5. Check for newer image tags on Docker Hub that might contain patches
  6. Trigger Watchtower to pull and restart affected containers if updates exist
  7. Send notification via Gotify with details of what was patched

This is not a perfect vulnerability scanner. It's a pragmatic early-warning system that acts on publicly disclosed risks before exploit code becomes common.

The Real Setup Details

n8n Configuration

I run n8n in an unprivileged LXC container (Debian 12) with 2GB RAM and Docker socket access. The container is configured to mount the Docker socket from the Proxmox host:

lxc.mount.entry: /var/run/docker.sock var/run/docker.sock none bind,create=file 0 0

This lets n8n query the Docker API without running Docker inside the container itself. I tried running n8n in a Docker container initially, but nested Docker access introduced permission issues I didn't want to debug.

NVD API Access

The National Vulnerability Database provides a free JSON API at https://services.nvd.nist.gov/rest/json/cves/2.0. I query it with a time filter to fetch CVEs published in the last 7 days:

?pubStartDate=2024-01-01T00:00:00.000&pubEndDate=2024-01-08T00:00:00.000

The API is rate-limited to 5 requests per 30 seconds without an API key. I added a 10-second delay between requests in the n8n workflow to stay under this limit. I don't use an API key because the free tier is sufficient for my polling frequency.

The API returns CVE metadata including:

  • CVE ID
  • Description text
  • CVSS severity scores
  • Affected software configurations (CPE data)
  • Publication and modification dates
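
For illustration, here is a standalone Python sketch of that fetch (standard library only), using the same 7-day window and 10-second pacing as the workflow; the field names follow the NVD 2.0 response format:

# Sketch: pull the last 7 days of CVEs from the NVD 2.0 API and flatten the
# fields the matching step needs. Dates use the same format as the query above.
import json, time, urllib.request
from datetime import datetime, timedelta, timezone

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)
query = (f"?pubStartDate={start:%Y-%m-%dT%H:%M:%S.000}"
         f"&pubEndDate={end:%Y-%m-%dT%H:%M:%S.000}")

with urllib.request.urlopen(NVD_URL + query, timeout=60) as resp:
    data = json.load(resp)
time.sleep(10)  # pacing only matters if you page through results without an API key

cves = []
for item in data.get("vulnerabilities", []):
    cve = item["cve"]
    desc = next((d["value"] for d in cve["descriptions"] if d["lang"] == "en"), "")
    v31 = cve.get("metrics", {}).get("cvssMetricV31", [])
    score = v31[0]["cvssData"]["baseScore"] if v31 else None
    cves.append({"id": cve["id"], "description": desc, "cvss": score})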

Docker API Queries

From n8n, I use the HTTP Request node to call the Docker API via the Unix socket:

GET http://unix:/var/run/docker.sock:/containers/json

This returns a JSON array of all running containers with their image names, tags, and creation timestamps. I filter this list to only include containers in my home automation stack by checking image names against a predefined list:

  • homeassistant/home-assistant
  • nodered/node-red
  • eclipse-mosquitto
  • zigbee2mqtt/zigbee2mqtt
  • esphome/esphome

I exclude system containers like Portainer and monitoring tools because those get updated through different processes.
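
Outside n8n, the same query and filter take only a few lines of Python against the Docker socket; this sketch uses the standard library and the watchlist above:

# Sketch: list running containers over the Docker Unix socket and keep only
# the home automation images. Standard library only; paths match the LXC setup above.
import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

WATCHLIST = ("homeassistant/home-assistant", "nodered/node-red", "eclipse-mosquitto",
             "zigbee2mqtt/zigbee2mqtt", "esphome/esphome")

conn = UnixHTTPConnection("/var/run/docker.sock")
conn.request("GET", "/containers/json")
containers = json.loads(conn.getresponse().read())

stack = [c for c in containers if any(c["Image"].startswith(w) for w in WATCHLIST)]
for c in stack:
    print(c["Names"][0], c["Image"], c["ImageID"])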

CVE Matching Logic

This is the weakest part of the workflow, and I know it. Matching CVE descriptions to Docker images is imprecise because:

  • CVE descriptions don't always mention the exact package name
  • Docker images bundle multiple software components
  • Version numbers in CVEs don't always map cleanly to image tags

My approach is deliberately broad: if a CVE description contains a keyword that matches an image name (e.g., "mosquitto" in the description triggers a check for the eclipse-mosquitto image), I flag it as potentially affected.

This generates false positives. I get notifications about CVEs that don't actually impact my setup. But the cost of a false positive is just an unnecessary image pull, which Watchtower handles gracefully by detecting that the running image is already current.

The alternative—false negatives where I miss a real vulnerability—is worse.
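
The matching itself is little more than a case-insensitive keyword scan. A sketch of the idea (the keyword-to-image map is illustrative, not my exact list):

# Sketch of the deliberately broad match: if a CVE description mentions a keyword
# tied to one of my images, flag that image as potentially affected.
KEYWORDS = {                     # keyword in CVE text -> image (illustrative mapping)
    "home assistant": "homeassistant/home-assistant",
    "node-red": "nodered/node-red",
    "mosquitto": "eclipse-mosquitto",
    "zigbee2mqtt": "zigbee2mqtt/zigbee2mqtt",
    "esphome": "esphome/esphome",
}

def affected_images(description):
    text = description.lower()
    return sorted({image for kw, image in KEYWORDS.items() if kw in text})

# Broad on purpose: a false positive costs one redundant image pull.
print(affected_images("Heap overflow in Eclipse Mosquitto before 2.0.18 allows ..."))
# -> ['eclipse-mosquitto']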

Image Update Checks

For flagged containers, I query the Docker Hub API to check if a newer image tag exists:

GET https://hub.docker.com/v2/repositories/{image}/tags/?page_size=10

I compare the digest of the currently running image against the digest of the latest tag. If they differ, I assume an update is available.

This doesn't guarantee the update contains a CVE patch. It only tells me the image has been rebuilt. But in practice, maintainers of popular images (Home Assistant, Node-RED, Mosquitto) rebuild and push updated images within hours of dependency CVEs being disclosed.
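
A sketch of that digest comparison against the Docker Hub tags endpoint; the running digest would come from the Docker API (the sha256 part of RepoDigests), and official images need the library/ prefix:

# Sketch: fetch recent tags for an image from Docker Hub and compare digests
# against what is currently running. A differing digest means "rebuilt", not "patched".
import json, urllib.request

def hub_tags(image, page_size=10):
    repo = image if "/" in image else f"library/{image}"   # official images live under library/
    url = f"https://hub.docker.com/v2/repositories/{repo}/tags/?page_size={page_size}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)["results"]

def update_available(image, running_digest, tag="latest"):
    for t in hub_tags(image):
        if t["name"] == tag:
            return t.get("digest") not in (None, running_digest)
    return False

# running_digest is the sha256:... part of the container's RepoDigests entry.
print(update_available("eclipse-mosquitto", "sha256:0000placeholder"))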

Watchtower Integration

I don't have n8n directly pull and restart containers. Instead, I use Watchtower, which runs as a separate container and monitors for image updates.

When n8n detects a vulnerable container, it triggers Watchtower via its HTTP API:

POST http://watchtower:8080/v1/update
Authorization: Bearer {token}

Watchtower pulls the latest image, stops the old container, and starts a new one with the same configuration. This keeps the update process consistent with how I normally manage containers.

Watchtower is configured to only update containers with the com.centurylinklabs.watchtower.enable=true label. This prevents accidental updates to containers I want to control manually.
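
The trigger itself is one authenticated POST. A Python equivalent of the n8n HTTP Request node, with placeholder host and token:

# Sketch: ask Watchtower's HTTP API to run an update cycle. Assumes the API and a
# token are enabled on the Watchtower container; URL and token are placeholders.
import urllib.request

WATCHTOWER_URL = "http://watchtower:8080/v1/update"
TOKEN = "replace-with-your-token"

req = urllib.request.Request(WATCHTOWER_URL, method="POST",
                             headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(req, timeout=300) as resp:
    print(resp.status)   # 200 once the update run has been accepted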

Notification Setup

I use Gotify for push notifications because it's self-hosted and doesn't require external services. When a container is patched, n8n sends a message to my Gotify instance:

POST https://gotify.local/message
X-Gotify-Key: {app_token}
{
  "title": "Container Patched",
  "message": "Updated homeassistant/home-assistant due to CVE-2024-12345",
  "priority": 5
}

I get these notifications on my phone via the Gotify Android app. High-severity CVEs (CVSS score above 7.0) trigger priority 8 notifications, which override Do Not Disturb.
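
The notification step, including that CVSS-to-priority mapping, fits in a short sketch (host and app token are placeholders):

# Sketch: push a Gotify message, bumping priority to 8 for CVSS above 7.0 so it
# overrides Do Not Disturb. Host and app token are placeholders.
import json, urllib.request

GOTIFY_URL = "https://gotify.local/message"
APP_TOKEN = "replace-with-app-token"

def notify_patch(image, cve_id, cvss):
    payload = {
        "title": "Container Patched",
        "message": f"Updated {image} due to {cve_id}",
        "priority": 8 if (cvss or 0) > 7.0 else 5,
    }
    req = urllib.request.Request(
        GOTIFY_URL,
        data=json.dumps(payload).encode(),
        headers={"X-Gotify-Key": APP_TOKEN, "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=30)

notify_patch("homeassistant/home-assistant", "CVE-2024-12345", 9.8)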

What Worked

Early Detection Actually Works

The workflow caught a critical vulnerability in the Mosquitto MQTT broker about 18 hours after the CVE was published. An updated image was available within another 6 hours, and Watchtower applied the patch before any public exploits appeared in the wild.

I would not have known about this CVE until days later through normal channels. The automation gave me a real head start.

False Positives Are Manageable

I get about 3-5 false positive alerts per week—CVEs that mention software in my stack but don't actually affect the versions I'm running. These are annoying but not disruptive.

The key is that Watchtower handles unnecessary update attempts gracefully. If it pulls an image and finds the digest matches what's already running, it does nothing. No restart, no downtime.

Watchtower as the Update Layer

Using Watchtower instead of having n8n directly manage Docker was the right choice. Watchtower already handles:

  • Image pulling with retry logic
  • Container recreation with existing volumes and networks
  • Cleanup of old images
  • Rollback if a new container fails health checks

I didn't want to reimplement any of that in n8n workflows.

NVD API Is Reliable

The National Vulnerability Database API has been stable. I've had zero downtime or rate limit issues in 8 months of running this workflow. The data quality is high—CVE descriptions are detailed enough for keyword matching to work most of the time.

What Didn't Work

CPE Matching Is Too Fragile

I initially tried to use CPE (Common Platform Enumeration) data from the NVD API to precisely match CVEs to container images. CPE strings look like:

cpe:2.3:a:eclipse:mosquitto:2.0.15:*:*:*:*:*:*:*

The problem is that Docker images don't expose CPE identifiers. I would have to scan image contents to extract installed package versions and map them to CPE strings. That's possible with tools like Trivy or Grype, but it's heavyweight and slow.

I tried running Trivy scans from n8n, but scanning 10+ container images took 15-20 minutes and consumed significant CPU. For a workflow that runs every 6 hours, that overhead wasn't acceptable.

I fell back to simple keyword matching in CVE descriptions. It's less accurate, but it's fast and doesn't require scanning container filesystems.

Version Comparison Is Hard

Even when I can extract a version number from a CVE (e.g., "affects versions before 2.0.18"), comparing that to a Docker image tag is unreliable.

Docker tags don't follow consistent versioning schemes. Some images use semantic versioning (2.0.18), others use date-based tags (2024.1.1), and some use arbitrary labels (latest, stable, edge).

I gave up on precise version comparison. If a CVE mentions software I'm running, I flag it and let Watchtower decide if an update is available. This over-triggers updates, but it's safer than missing a real vulnerability.
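
To show why I dropped it: even a permissive parser handles semver-style tags, but date-based tags compare in ways that say nothing about the CVE, and symbolic tags don't compare at all. This sketch uses the third-party packaging library:

# Sketch of why tag-vs-CVE version comparison is unreliable.
# Needs the third-party "packaging" library (pip install packaging).
from packaging.version import Version, InvalidVersion

def older_than_fix(tag, fixed_in):
    try:
        return Version(tag) < Version(fixed_in)
    except InvalidVersion:
        return None   # symbolic tags like "latest", "stable", "edge" have no ordering

print(older_than_fix("2.0.15", "2.0.18"))    # True: semver-style tags behave
print(older_than_fix("2024.1.1", "2.0.18"))  # False: a date tag "wins" but says nothing about the CVE
print(older_than_fix("latest", "2.0.18"))    # None: cannot be compared at all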

Some Images Don't Get Timely Updates

Not all container maintainers respond quickly to CVEs. I've seen cases where a vulnerability is disclosed, but the affected image doesn't get rebuilt for weeks.

For critical infrastructure (Home Assistant, Mosquitto), updates are fast. For smaller projects or community-maintained images, I've waited 2-3 weeks for patches.

In those cases, the workflow alerts me, but there's no automated fix available. I have to manually assess the risk and decide whether to pin an older version, switch to a different image, or accept the vulnerability temporarily.

No Rollback on Failed Updates

Watchtower will restart a container after pulling a new image, but it doesn't automatically roll back if the new version breaks something. I've had two cases where an updated image introduced bugs that broke my home automation:

  • Home Assistant 2023.8.0 had a regression in the Zigbee integration that caused devices to disconnect
  • Node-RED 3.0.0 broke a custom node I rely on for MQTT handling

In both cases, I had to manually roll back to the previous image. Watchtower doesn't keep old images by default, so I had to pull the previous tag from Docker Hub and redeploy.

I now have Watchtower configured to keep the last 2 image versions, but rollback is still a manual process.

Notification Fatigue Is Real

Getting 3-5 false positive alerts per week adds up. I've tuned the workflow to only send high-priority notifications for CVEs with CVSS scores above 7.0, but even then, I sometimes ignore alerts because I assume they're false positives.

I haven't solved this. The trade-off is between alert fatigue and missing real vulnerabilities. I'm currently erring on the side of too many alerts.

Key Lessons

Automation Doesn't Mean Hands-Off

This workflow reduces manual effort, but it doesn't eliminate the need for judgment. I still review alerts, check if updates are actually relevant, and occasionally intervene when automated patches break things.

The value is in speed, not in removing humans from the loop. I can respond to CVEs in hours instead of days.

Imperfect Matching Is Better Than No Matching

My CVE-to-container matching logic is crude. It generates false positives. But it's fast, simple, and catches real vulnerabilities early enough to matter.

I tried building a more sophisticated system with Trivy integration and CPE matching. It was slower, more complex, and didn't meaningfully reduce false positives. Simpler is better here.

Layer Your Defenses

This workflow is one layer in my security setup. It doesn't replace:

  • Network segmentation (my home automation VLAN is isolated from the internet)
  • Regular backups (I snapshot Proxmox VMs daily)
  • Monitoring (I track container restarts and alert on unexpected behavior)
  • Manual security reviews (I still read release notes for major updates)

Automated patching helps, but it's not a complete security strategy.

Know When to Stop