Tech Expert & Vibe Coder

With 14+ years of experience, I specialize in self-hosting, AI automation, and Vibe Coding – building applications using AI-powered tools like Google Antigravity, Dyad, and Cline. From homelabs to enterprise solutions.

Building an n8n Workflow to Auto-Archive Docker Container Logs to S3-Compatible Storage with Compression and 90-Day Retention Policies

Why I Built This Workflow

I run multiple Docker containers on my Proxmox host, and logs pile up fast. Without rotation or cleanup, they eat disk space and make troubleshooting harder. I needed a way to automatically archive old logs, compress them, ship them to S3-compatible storage, and enforce a 90-day retention policy—all without manual intervention.

I already use n8n for automation, so building a workflow to handle this made sense. It keeps everything in one place and lets me see exactly what's happening at each step.

My Real Setup

Here's what I'm working with:

  • Proxmox host running Docker containers (n8n, monitoring tools, DNS services)
  • n8n instance in Docker with access to the host filesystem via bind mounts
  • MinIO as my S3-compatible storage backend (self-hosted on the same Proxmox node)
  • Docker logs stored in /var/lib/docker/containers/[container-id]/

I wanted logs older than 7 days compressed and shipped to S3, the local copies deleted once the upload succeeded, and the archived copies removed from storage after 90 days.

How the Workflow Actually Works

Step 1: Scanning for Log Files

I use an n8n Execute Command node to list container log files. The command looks like this:

find /var/lib/docker/containers -name "*-json.log" -mtime +7

This finds all JSON-formatted Docker logs older than 7 days. I parse the output as a list so n8n can process each file individually.
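The parsing step is just string splitting. In a Function node it comes down to something like this (the path field name is my own choice, and I'm assuming the Execute Command output lands in json.stdout):

// Function node: turn the Execute Command stdout into one item per log file path
const stdout = items[0].json.stdout || '';
return stdout
  .split('\n')
  .map(line => line.trim())
  .filter(line => line.length > 0)
  .map(path => ({ json: { path } }));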

Step 2: Compressing Logs

For each log file, I run another Execute Command node to compress it with gzip:

gzip -c /var/lib/docker/containers/[container-id]/[container-id]-json.log > /tmp/[container-id]-$(date +%Y%m%d).log.gz

I save the compressed file to /tmp with a datestamp in the filename. This keeps things organized and makes it easy to identify when the log was archived.
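Since the container ID and datestamp get reused later for the upload key and the cleanup, I carry them along as item fields instead of retyping them. A rough sketch of the command builder, reusing the path field from the scan step:

// Function node: build a gzip command per log file, named by container ID and date
return items.map(item => {
  const path = item.json.path;                  // /var/lib/docker/containers/<id>/<id>-json.log
  const parts = path.split('/');
  const containerId = parts[parts.length - 2];  // parent directory name is the container ID
  const stamp = new Date().toISOString().slice(0, 10).replace(/-/g, ''); // YYYYMMDD
  const command = `gzip -c ${path} > /tmp/${containerId}-${stamp}.log.gz`;
  return { json: { path, containerId, stamp, command } };
});

The Execute Command node that follows just references {{ $json.command }} as its command.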

Step 3: Uploading to S3

I use the n8n AWS S3 node (which works with any S3-compatible service). I configured it to point at my MinIO instance:

  • Endpoint: https://minio.local:9000
  • Bucket: docker-logs
  • Region: Doesn't matter for MinIO, but I set it to us-east-1 anyway
  • Access/Secret keys from MinIO

The node uploads the compressed log file from /tmp and uses the filename as the S3 object key. I organize files by container ID and date in the bucket structure.
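The key layout is nothing fancy; it comes from the same fields, roughly like this (adjust the field name to whatever your upload node reads):

// Function node: build the object key as <container-id>/<container-id>-<YYYYMMDD>.log.gz
return items.map(item => {
  const { containerId, stamp } = item.json;
  return { json: { ...item.json, key: `${containerId}/${containerId}-${stamp}.log.gz` } };
});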

Step 4: Deleting Local Files

After a successful upload, I delete the compressed file from /tmp and the original log file from the Docker directory:

rm /tmp/[container-id]-$(date +%Y%m%d).log.gz
rm /var/lib/docker/containers/[container-id]/[container-id]-json.log

This prevents logs from taking up space twice.
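Both removals use fields already carried through the workflow, so a single Execute Command node can handle them. A rough sketch of the cleanup command builder:

// Function node: build one cleanup command per item; with &&, the original log is only
// removed if the /tmp copy was deleted cleanly
return items.map(item => {
  const { path, containerId, stamp } = item.json;
  return { json: { command: `rm -f /tmp/${containerId}-${stamp}.log.gz && rm -f ${path}` } };
});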

Step 5: Enforcing 90-Day Retention

I added a separate workflow that runs daily to clean up old files from MinIO. It uses the AWS S3 node again to list objects in the bucket, filters for files older than 90 days based on their LastModified timestamp, and deletes them.

The logic looks like this in a Function node:

const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - 90);   // anything older than this gets deleted

return items.filter(item => {
  // LastModified comes from the S3 list operation
  const lastModified = new Date(item.json.LastModified);
  return lastModified < cutoffDate;
});

Then I pass the filtered list to an S3 delete operation.

What Worked

This setup has been running for about three months without issues. Logs get archived on schedule, storage usage stays predictable, and I can pull old logs from MinIO when I need to debug something that happened weeks ago.

Using n8n for this made sense because I already had it running, and I didn't want to maintain a separate cron job or script. Everything is visible in the workflow UI, so I can see if something fails and why.

Compression works well—most log files shrink to about 10-15% of their original size. A 500MB log file becomes ~50MB after gzip, which makes a real difference over time.

What Didn't Work

I initially tried to compress files directly in the Docker log directory instead of copying them to /tmp first. This caused permission issues because n8n's Docker user didn't have write access to that location. Moving the compression step to /tmp solved it.

I also ran into a problem with the S3 upload node timing out on very large log files (over 1GB). I added a check to skip files above a certain size and handle them manually. It doesn't happen often, but it's something to watch for.
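The size check is just a filter near the start of the chain. It assumes a sizeBytes field collected during the scan (for example from find's -printf '%s %p' output instead of plain paths):

// Function node: skip anything over ~1 GB so the S3 upload doesn't time out
const maxBytes = 1024 * 1024 * 1024;
return items.filter(item => (item.json.sizeBytes || 0) <= maxBytes);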

The retention cleanup workflow initially deleted files one at a time, which was slow when there were hundreds of old logs. I switched to batching deletes in groups of 50 using n8n's Split In Batches node, and that sped things up significantly.
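For reference, the grouping itself is simple enough to express in a Function node too, if you'd rather see it as code (assuming each listed object carries its key in json.Key, alongside the LastModified field used above):

// Function node: group object keys into batches of 50 for deletion
const batchSize = 50;
const keys = items.map(item => item.json.Key);
const batches = [];
for (let i = 0; i < keys.length; i += batchSize) {
  batches.push({ json: { keys: keys.slice(i, i + batchSize) } });
}
return batches;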

Key Takeaways

  • Automating log archival saves disk space and keeps systems running smoothly without manual cleanup.
  • Compression is essential—gzip shrinks logs by 85-90%, making storage costs trivial.
  • S3-compatible storage like MinIO works perfectly for this and gives you full control over retention policies.
  • n8n's flexibility makes it easy to build workflows like this without writing custom scripts, but you still need to understand filesystem permissions and command-line basics.
  • Testing the workflow on a small set of logs first is critical—deleting the wrong files or breaking Docker logging can cause real problems.

Final Notes

This workflow isn't complicated, but it solves a real problem I was facing. If you're running Docker in production or even just at home, log management matters. Letting logs grow unchecked will eventually cause disk space issues or make troubleshooting harder than it needs to be.

I built this because I needed it to work reliably without constant attention. If you're in a similar situation, the basic structure here should translate directly to your setup—just adjust the paths, bucket names, and retention periods to match your environment.