Why I Built a Docker Security Pipeline
I run multiple Docker containers across my self-hosted infrastructure—everything from n8n automation to Synology services. Over time, I noticed something uncomfortable: I had no systematic way to know when vulnerabilities appeared in my images. I'd rebuild containers occasionally, but only when I remembered to check for updates.
That reactive approach felt wrong. If a critical CVE dropped in a base image I was using, I wanted to know immediately—not weeks later when I happened to rebuild. I needed automated scanning that would alert me without requiring manual checks.
My Setup: Trivy, Dockle, and Slack
I chose Trivy for vulnerability scanning because it's fast, comprehensive, and actually maintained. It scans both OS packages and application dependencies, which matters when you're running custom images with multiple layers.
I added Dockle for Docker best practices—things like checking if images run as root, whether secrets are baked in, or if unnecessary ports are exposed. Trivy finds CVEs; Dockle catches configuration mistakes.
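To get a feel for what each tool reports before wiring them into a pipeline, they can be run by hand against a single image (the image name below is just a placeholder):

# Trivy: CVEs in OS packages and application dependencies (placeholder image)
trivy image --severity CRITICAL,HIGH nginx:latest

# Dockle: configuration checks (root user, baked-in secrets, exposed ports, ...)
dockle nginx:latest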
For notifications, I use Slack. I already have channels for monitoring alerts, so adding security findings there made sense. Email would get buried; Slack gets seen.
The Docker Compose Structure
I run this as a scheduled service using Cronicle (my job scheduler), but the core is a Docker Compose setup that scans images and sends results to Slack.
version: '3.8'

services:
  security-scanner:
    image: aquasec/trivy:latest
    container_name: security-scanner
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./scan-results:/results
      - ./scripts:/scripts:ro
    environment:
      - SLACK_WEBHOOK=${SLACK_WEBHOOK}
    command: /scripts/scan-and-notify.sh
    restart: "no"
The scanner mounts the Docker socket to access running containers, writes results to a local directory, and runs a custom script that handles both scanning and notification.
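For reference, the directory layout this assumes looks roughly like this (my convention, inferred from the volume mounts and the Cronicle job further down):

/opt/security-scanner/
  docker-compose.yml
  .env                  # SLACK_WEBHOOK=...
  scripts/
    scan-and-notify.sh  # must be executable (chmod +x)
  scan-results/         # Trivy and Dockle JSON reports land here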
The Scanning Script
This script does the actual work. It scans the image behind each running container, runs Dockle checks, and formats findings for Slack.
#!/bin/bash
set -euo pipefail
RESULTS_DIR="/results"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
SLACK_WEBHOOK="${SLACK_WEBHOOK:-}"
# Create results directory
mkdir -p "$RESULTS_DIR"
# Get list of running containers
CONTAINERS=$(docker ps --format '{{.Image}}' | sort -u)
if [ -z "$CONTAINERS" ]; then
echo "No running containers found"
exit 0
fi
# Initialize summary
CRITICAL_COUNT=0
HIGH_COUNT=0
TOTAL_ISSUES=0
# Scan each container
while IFS= read -r IMAGE; do
  echo "Scanning $IMAGE..."

  # Trivy scan
  TRIVY_OUTPUT="$RESULTS_DIR/trivy_${IMAGE//\//_}_$TIMESTAMP.json"
  trivy image \
    --severity CRITICAL,HIGH \
    --format json \
    --output "$TRIVY_OUTPUT" \
    "$IMAGE" || true

  # Count vulnerabilities
  if [ -f "$TRIVY_OUTPUT" ]; then
    CRITICAL=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="CRITICAL")] | length' "$TRIVY_OUTPUT")
    HIGH=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity=="HIGH")] | length' "$TRIVY_OUTPUT")
    CRITICAL_COUNT=$((CRITICAL_COUNT + CRITICAL))
    HIGH_COUNT=$((HIGH_COUNT + HIGH))

    if [ "$CRITICAL" -gt 0 ] || [ "$HIGH" -gt 0 ]; then
      TOTAL_ISSUES=$((TOTAL_ISSUES + 1))
      echo "  Found: $CRITICAL CRITICAL, $HIGH HIGH"
    fi
  fi

  # Dockle scan
  DOCKLE_OUTPUT="$RESULTS_DIR/dockle_${IMAGE//\//_}_$TIMESTAMP.json"
  dockle --format json --output "$DOCKLE_OUTPUT" "$IMAGE" || true
done <<< "$CONTAINERS"
# Send Slack notification if issues found
if [ "$TOTAL_ISSUES" -gt 0 ] && [ -n "$SLACK_WEBHOOK" ]; then
SLACK_MESSAGE=$(cat <
What This Script Does
It lists the images of all running containers, scans each with Trivy for CRITICAL and HIGH severity CVEs, runs Dockle for configuration issues, saves detailed JSON results locally, and sends a Slack notification if problems are found.
I only alert on CRITICAL and HIGH. MEDIUM and LOW findings are logged but don't trigger notifications—otherwise I'd get too much noise.
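One caveat: as written, the Trivy call in the script filters to CRITICAL and HIGH, so lower severities never reach the JSON at all. If you want MEDIUM and LOW captured locally while still alerting only on the top two, a small variation (a sketch, not exactly what I run) is to drop the severity flag and let the existing jq counts do the gating:

# Record all severities in the local JSON; the jq CRITICAL/HIGH counts still drive alerting
trivy image --format json --output "$TRIVY_OUTPUT" "$IMAGE" || true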
Setting Up Slack Notifications
I created a Slack webhook in my monitoring channel. The webhook URL goes in a .env file:
SLACK_WEBHOOK=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
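It's worth verifying the webhook with a one-off curl before relying on it; Slack's incoming webhooks accept a simple JSON payload:

# Quick sanity check using the webhook from .env
source .env
curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Security scanner webhook test"}' \
  "$SLACK_WEBHOOK"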
The notification format is simple: counts of CRITICAL and HIGH issues, number of affected images, and a timestamp. I don't dump full CVE details into Slack—that would be unreadable. The detailed JSON files are stored locally for investigation.
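When an alert comes in, the saved Trivy report can be sliced with jq to see what actually fired. Something like this works (the report path is a placeholder for whichever file the script wrote):

# Pick one of the saved reports (placeholder name)
REPORT="scan-results/trivy_myimage_20250101_020000.json"

# CVE ID, package, installed version, and fixed version for each CRITICAL finding
jq -r '.Results[]?.Vulnerabilities[]?
       | select(.Severity=="CRITICAL")
       | "\(.VulnerabilityID)  \(.PkgName)  \(.InstalledVersion) -> " + (.FixedVersion // "no fix yet")' \
  "$REPORT"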
Scheduling with Cronicle
I run this scan daily at 2 AM using Cronicle. The job definition looks like this:
{
  "title": "Docker Security Scan",
  "enabled": 1,
  "timing": {
    "hours": [2],
    "minutes": [0]
  },
  "plugin": "shellplug",
  "params": {
    "script": "cd /opt/security-scanner && docker compose up --abort-on-container-exit"
  }
}
This starts the scanner container, runs the scan, sends notifications if needed, and exits. The container doesn't stay running—it's a one-shot job.
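To test the job outside Cronicle, the same one-shot behavior can be triggered by hand:

cd /opt/security-scanner
# Run the scanner service once in the foreground and remove the container afterwards
docker compose run --rm security-scanner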
What Worked
The pipeline catches real issues. I've gotten alerts for CVEs in base images I was using—things I wouldn't have noticed otherwise. One alert caught a CRITICAL vulnerability in a Debian package that affected three of my containers. I rebuilt them the same day.
Dockle has been useful for catching configuration problems. It flagged containers running as root when they didn't need to, and found a case where I'd accidentally left a test password in an environment variable.
Slack notifications work well. They're visible but not overwhelming. I get a summary, and if I need details, I check the JSON files.
What Didn't Work
My first version scanned every image in my Docker registry, not just running containers. That was too noisy—I'd get alerts for old images I wasn't even using anymore. Limiting scans to running containers fixed that.
I initially tried sending full CVE lists to Slack. The messages were huge and unreadable. Switching to summary counts with local file storage was much better.
Trivy's database updates can be slow. If the container doesn't have a recent vulnerability database cached, the first scan takes several minutes. I worked around this by running a weekly database update job separately, so daily scans are faster.
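The weekly refresh is essentially just Trivy's database download with a persistent cache. A sketch of what that job can look like (the named volume is my choice, not part of the setup above, and the scanner would need the same volume mounted to benefit from it):

# Weekly: pull the vulnerability DB into a named cache volume so scans reuse it
docker run --rm \
  -v trivy-cache:/root/.cache/trivy \
  aquasec/trivy:latest image --download-db-only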
False Positives
Not every finding is actionable. Some CVEs don't apply to how I'm using a package. Trivy doesn't know my context—it just reports what's in the database. I've learned to review findings before acting, rather than blindly rebuilding everything.
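For findings I've reviewed and decided don't apply, Trivy supports an ignore file so they stop resurfacing in every scan. A minimal sketch (the CVE ID is purely illustrative):

# .trivyignore: one CVE ID per line, with a note on why it was dismissed
# Not exploitable in how we use the package
CVE-2023-12345

The scan command then takes --ignorefile pointing at that file, or Trivy picks up a .trivyignore in its working directory automatically.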
Key Takeaways
Automated scanning is necessary if you run containers long-term. Manual checks don't scale and get forgotten.
Scan running containers, not your entire image library. Focus on what's actually deployed.
Combine vulnerability scanning (Trivy) with configuration checks (Dockle). They catch different problems.
Keep notifications concise. Summary counts in Slack, detailed results in files.
Not every CVE requires immediate action. Review findings in context.
This setup gives me visibility into security issues without requiring constant manual work. It's not perfect—no automated system is—but it's significantly better than hoping I remember to check for updates.