Why I Built This Workflow
I run a self-hosted mapping service for offline navigation when traveling through areas with poor connectivity. The problem was simple: OpenStreetMap tiles go stale. Roads change, buildings appear, and my cached maps slowly become outdated. Manually downloading fresh tiles every few weeks felt ridiculous for something that should just happen automatically.
I already had n8n running in Docker on my Proxmox server for other automation tasks, so it made sense to handle this there. The goal was weekly tile updates, stored locally on my Synology NAS, with basic version control so I could roll back if a tile set came down corrupted or incomplete.
My Setup
Here's what I was working with:
- n8n running in Docker on Proxmox (version 1.x at the time)
- Synology NAS mounted via NFS to the Proxmox host
- Local tile server (I use a simple nginx container serving static files)
- Bash scripts for tile downloading (using wget with proper rate limiting)
- Git repository for tracking tile metadata and versions
I didn't use any fancy tile download tools. Just wget with a carefully constructed URL pattern pointing to OpenStreetMap's tile servers, respecting their usage policy of no more than 2 requests per second.
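For reference, OSM's raster tiles follow the standard `https://tile.openstreetmap.org/{z}/{x}/{y}.png` scheme, so a single fetch looks roughly like this (coordinates, output path, and the User-Agent string are placeholders, not my actual script):

```bash
# Placeholder tile coordinates; the real ones come from the bounds calculation
# described later. A descriptive User-Agent is part of being a polite client.
z=13; x=4277; y=2864
mkdir -p "tiles/$z/$x"
wget -q --user-agent="offline-tile-sync/1.0 (you@example.com)" \
     -O "tiles/$z/$x/$y.png" \
     "https://tile.openstreetmap.org/$z/$x/$y.png"
sleep 2   # fixed delay between requests in the surrounding loop
```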
How the Workflow Actually Works
Triggering the Download
The workflow starts with a Cron node in n8n set to fire every Sunday at 2 AM. Nothing fancy.
The first step checks if the previous download is still running. I learned this the hard way when a slow download overlapped with the next scheduled run and created duplicate processes fighting over the same files. Now I write a lockfile to the NAS at workflow start and check for its existence before proceeding.
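The check itself is trivial. A minimal sketch of what that first Execute Command node could run (the NAS path is illustrative, not my exact layout):

```bash
#!/usr/bin/env bash
set -euo pipefail

LOCKFILE="/mnt/nas/tiles/.download.lock"   # illustrative NFS path

# Bail out if a previous run is still in progress.
if [ -e "$LOCKFILE" ]; then
    echo "Previous tile download still running (lockfile present); aborting." >&2
    exit 1
fi

# Claim the lock; a later node in the workflow removes it once everything,
# including the failure branch, has finished.
echo "$$ $(date -Iseconds)" > "$LOCKFILE"
```

The workflow branches on the outcome of this node, so a held lock simply ends the run early.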
Downloading Tiles
The actual download happens through an Execute Command node running a bash script (a rough sketch follows the list). The script:
- Calculates which tiles I need based on geographic bounds (my travel region)
- Checks existing tiles and only downloads what's missing or older than 30 days
- Pulls tiles from OSM's servers with proper rate limiting (2-second delay between requests)
- Saves everything to a timestamped directory on the NAS
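Stripped down, the logic looks something like this. The bounds, zoom level, paths, and contact address are placeholders, and the freshness check in this sketch reuses still-fresh tiles from the currently active set rather than re-downloading them:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Illustrative bounds and paths, not my real region.
MIN_LAT=47.0; MAX_LAT=47.5; MIN_LON=8.0; MAX_LON=8.7
ZOOM=13
CURRENT="/mnt/nas/tiles/current"            # symlink to the active tile set
OUT="/mnt/nas/tiles/$(date +%Y%m%d)"        # new timestamped directory
MAX_AGE_DAYS=30

# Standard slippy-map conversion: lon/lat -> tile x/y at a given zoom.
lon2x() { awk -v lon="$1" -v z="$2" 'BEGIN { print int((lon + 180) / 360 * 2^z) }'; }
lat2y() { awk -v lat="$1" -v z="$2" 'BEGIN {
    pi = 3.14159265358979; r = lat * pi / 180;
    print int((1 - log(sin(r)/cos(r) + 1/cos(r)) / pi) / 2 * 2^z) }'; }

x_min=$(lon2x "$MIN_LON" "$ZOOM"); x_max=$(lon2x "$MAX_LON" "$ZOOM")
y_min=$(lat2y "$MAX_LAT" "$ZOOM"); y_max=$(lat2y "$MIN_LAT" "$ZOOM")  # y grows southwards

for x in $(seq "$x_min" "$x_max"); do
  for y in $(seq "$y_min" "$y_max"); do
    old="$CURRENT/$ZOOM/$x/$y.png"
    new="$OUT/$ZOOM/$x/$y.png"
    mkdir -p "$(dirname "$new")"

    # Reuse tiles from the active set that are newer than 30 days.
    if [ -f "$old" ] && [ -z "$(find "$old" -mtime +"$MAX_AGE_DAYS")" ]; then
      cp "$old" "$new"
      continue
    fi

    wget -q --user-agent="offline-tile-sync/1.0 (you@example.com)" \
         -O "$new" "https://tile.openstreetmap.org/$ZOOM/$x/$y.png"
    sleep 2   # polite delay between actual requests
  done
done
```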
I originally tried to do this directly in n8n using HTTP Request nodes in a loop, but it was absurdly slow and the workflow became unreadable. A bash script with parallel wget was 10x faster and easier to debug.
Version Control
This part is simpler than it sounds. After the download completes, another Execute Command node:
- Creates a metadata file listing all downloaded tiles with their timestamps and checksums
- Commits this metadata to a local git repository on the NAS
- Tags the commit with the download date
I'm not versioning the actual tile images (that would be massive). Just the metadata. If I need to roll back, I can compare metadata files and re-download specific tiles that changed.
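In sketch form (paths and the date stamp are illustrative), the metadata step boils down to something like this:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Illustrative paths; the timestamped set comes from the download step.
TILESET="/mnt/nas/tiles/20240107"
METADATA_REPO="/mnt/nas/tiles-metadata"     # small git repo, metadata only
STAMP="$(basename "$TILESET")"

cd "$TILESET"

# Checksums in standard sha256sum format so they can be re-verified later
# with `sha256sum -c`.
find . -type f -name '*.png' | sort | xargs sha256sum \
  > "$METADATA_REPO/checksums-$STAMP.txt"

# Tile modification times, one line per tile.
find . -type f -name '*.png' -printf '%TY-%Tm-%TdT%TH:%TM:%TS %p\n' | sort -k2 \
  > "$METADATA_REPO/mtimes-$STAMP.txt"

# Commit and tag the metadata, not the tiles themselves.
cd "$METADATA_REPO"
git add "checksums-$STAMP.txt" "mtimes-$STAMP.txt"
git commit -m "tile sync $STAMP"
git tag "tiles-$STAMP"
```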
Updating the Active Tile Set
Once everything downloads successfully, the workflow:
- Runs a checksum verification on the new tiles
- If checksums pass, creates a symlink pointing my nginx tile server to the new directory
- Keeps the previous 3 tile sets on disk and deletes anything older
The symlink approach means my tile server never goes down during updates. I just atomically swap the symlink once the new tiles are ready.
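Sketched out, the swap step looks roughly like this, assuming the verification reuses the checksum list written during the metadata step (paths and the date stamp are again illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail

TILE_ROOT="/mnt/nas/tiles"
NEW_SET="$TILE_ROOT/20240107"                               # freshly downloaded set
CHECKSUMS="/mnt/nas/tiles-metadata/checksums-20240107.txt"  # from the metadata step
KEEP=3                                                      # tile sets to retain

# Verify every tile against the recorded checksums; with `set -e`, any
# mismatch aborts the script before the swap happens.
cd "$NEW_SET"
sha256sum --quiet -c "$CHECKSUMS"

# Atomic swap: build the new symlink beside the old one, then rename over it.
ln -sfn "$NEW_SET" "$TILE_ROOT/current.tmp"
mv -T "$TILE_ROOT/current.tmp" "$TILE_ROOT/current"

# Keep the newest $KEEP timestamped sets, delete the rest.
ls -1d "$TILE_ROOT"/20* | sort -r | tail -n +$((KEEP + 1)) | xargs -r rm -rf
```

nginx only ever follows the `current` symlink, so readers see either the old set or the new one, never a half-written directory.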
What Didn't Work
My first version tried to be clever with n8n's built-in nodes for everything. I used HTTP Request nodes to download tiles one by one. It took 14 hours to download tiles for a small region. Completely impractical.
I also tried storing tiles directly in Docker volumes instead of NFS-mounted storage. Bad idea. Docker volumes filled up fast, and managing space became a nightmare. Moving to NFS-mounted NAS storage solved that immediately.
The version control initially used full git commits of tile directories. This created multi-gigabyte git repositories that were painful to work with. Switching to metadata-only commits fixed it.
I wasted time trying to use n8n's error handling to retry failed tile downloads. It never worked reliably. Now the bash script handles retries with exponential backoff, and n8n just reports success or failure of the entire batch.
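The retry wrapper is nothing exotic. Something along these lines, wrapped around the wget call inside the download loop (function and variable names are illustrative):

```bash
# Fetch one tile with retries and exponential backoff.
fetch_tile() {
  local url="$1" out="$2"
  local attempt=1 max_attempts=5 delay=2
  while [ "$attempt" -le "$max_attempts" ]; do
    if wget -q --user-agent="offline-tile-sync/1.0 (you@example.com)" \
            -O "$out" "$url"; then
      return 0
    fi
    echo "attempt $attempt failed for $url, retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))      # 2s, 4s, 8s, 16s, ...
    attempt=$((attempt + 1))
  done
  echo "giving up on $url after $max_attempts attempts" >&2
  return 1
}
```

n8n only ever sees the exit status of the whole batch, which keeps the workflow itself flat and readable.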
Current State and Limitations
The workflow runs every Sunday and takes about 2-3 hours to complete for my region (roughly 50,000 tiles). It's been stable for about 8 months now.
Limitations I'm aware of:
- No validation that tiles are actually correct, just that they downloaded without errors
- If OSM's tile servers are down, the workflow fails entirely (no fallback source)
- Rate limiting is hardcoded; if I expand my region significantly, downloads will take much longer
- The metadata git repo will grow indefinitely; I'll need to prune old history eventually
I don't monitor tile freshness in real-time. If a critical road closes and OSM updates it, I won't know until the next Sunday download. For my use case (recreational travel), that's fine. For something mission-critical, it wouldn't be.
Key Takeaways
Use n8n for orchestration, not heavy lifting. The workflow coordinates steps, but bash scripts do the actual work. This keeps n8n workflows readable and lets you optimize the heavy parts independently.
Don't version large binary data in git. Version the metadata that describes it. You can always reconstruct or re-download if needed.
Lockfiles prevent overlapping runs. Check for them at the start of any long-running workflow.
Atomic updates (like symlink swaps) mean your services never see partial or broken data during updates.
Respect rate limits on public services. OSM's tile servers are free and community-run. Hammering them is both rude and a quick way to get blocked.