Implementing n8n workflows to auto-deploy OpenWorkers functions from GitHub releases to self-hosted Rust runtime environments

Why I Built This System

I maintain a self-hosted Rust runtime environment that runs serverless functions for various automation tasks. These functions need to be updated whenever I push new releases to GitHub, but I didn't want to SSH into the server every time or write custom deployment scripts that would break when the setup changed.

I already had n8n running for other workflows, so I decided to use it to bridge GitHub releases and my deployment process. The goal was simple: when I tag a new release, n8n should pull the binary, verify it, and deploy it to the correct runtime slot without me touching anything.

My Setup

Here's what I'm actually running:

  • n8n hosted on Proxmox in a Docker container, connected to Traefik for SSL (a minimal compose sketch follows this list)
  • A Rust-based serverless runtime I built that runs functions as isolated processes
  • GitHub repositories where I version my OpenWorkers functions
  • A simple HTTP API on the runtime that accepts deployments via POST requests
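
Since I called out the n8n-behind-Traefik piece above, here's roughly what that container definition looks like. This is a minimal sketch, assuming a Traefik entrypoint named websecure and a cert resolver named letsencrypt; the domain, volume, and resolver names are placeholders, not my exact config:

services:
  n8n:
    image: n8nio/n8n
    restart: unless-stopped
    environment:
      - N8N_HOST=n8n.mydomain.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.mydomain.com/
    volumes:
      - n8n_data:/home/node/.n8n
    labels:
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`n8n.mydomain.com`)
      - traefik.http.routers.n8n.entrypoints=websecure
      - traefik.http.routers.n8n.tls.certresolver=letsencrypt
      - traefik.http.services.n8n.loadbalancer.server.port=5678

volumes:
  n8n_data: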

The runtime expects a JSON payload with the binary URL, function name, and a hash for verification. If the hash matches, it downloads the binary, places it in the correct directory, and restarts the function process.
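
As a concrete example, a deployment request to the runtime looks something like this (the values are made up; the field names are the same ones used in the deploy step later in the workflow):

{
  "function_name": "image-resizer",
  "binary_url": "https://github.com/example/image-resizer/releases/download/v1.4.2/image-resizer.wasm",
  "hash": "<sha256 of the binary>",
  "tag": "v1.4.2"
}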

How I Set Up the n8n Workflow

Triggering on GitHub Releases

I used n8n's webhook node to listen for GitHub release events. GitHub sends a POST request to my webhook URL whenever a new release is published. The payload includes the release tag, asset URLs, and metadata.
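
Trimmed down to the fields this workflow actually reads, a published-release payload looks roughly like this (the repo, asset, and tag names are illustrative):

{
  "action": "published",
  "release": {
    "tag_name": "v1.4.2",
    "body": "Release notes...\nSHA256: <64-char hex digest>",
    "assets": [
      {
        "name": "image-resizer.wasm",
        "browser_download_url": "https://github.com/example/image-resizer/releases/download/v1.4.2/image-resizer.wasm"
      }
    ]
  },
  "repository": {
    "name": "image-resizer"
  }
}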

I configured the webhook in my GitHub repository settings under "Webhooks" and pointed it to:

https://n8n.mydomain.com/webhook/github-deploy

I set the content type to application/json and selected only the "Releases" event to avoid unnecessary triggers.

Extracting Release Data

The GitHub webhook payload is large and nested. I used an n8n Function node to extract what I actually needed:

  • Release tag name
  • Asset download URL (filtered to match .wasm or .so files)
  • Repository name
  • SHA256 checksum (if included in release notes)

Here's the JavaScript I used in the Function node:

// The Webhook node wraps the incoming request, so the GitHub payload sits under `body`.
const payload = items[0].json.body || items[0].json;
const release = payload.release;
const assets = release.assets || [];

const binary = assets.find(asset =>
  asset.name.endsWith('.wasm') || asset.name.endsWith('.so')
);

if (!binary) {
  throw new Error('No valid binary found in release assets');
}

// Release notes can be empty, so guard before matching the checksum line.
const checksum = (release.body || '').match(/SHA256:\s*([a-f0-9]{64})/i);

return [{
  json: {
    tag: release.tag_name,
    repo: payload.repository.name,
    url: binary.browser_download_url,
    checksum: checksum ? checksum[1] : null
  }
}];

This fails loudly if no binary is found, which is what I want. I don't want partial deployments.

Verifying the Binary

I added an HTTP Request node to download the binary temporarily, then computed its SHA256 hash in another Function node. Hashing binary data wasn't something I could do cleanly with n8n's standard nodes, so I reached for Node.js crypto:

// Using a built-in module inside a Function node may require
// NODE_FUNCTION_ALLOW_BUILTIN=crypto to be set for the n8n instance.
const crypto = require('crypto');

// The HTTP Request node stores the downloaded file as a base64 string
// under the binary property's `data` field.
const binaryData = items[0].binary.data.data;

const hash = crypto
  .createHash('sha256')
  .update(Buffer.from(binaryData, 'base64'))
  .digest('hex');

const expected = items[0].json.checksum;

if (expected && hash !== expected) {
  throw new Error(`Hash mismatch: expected ${expected}, got ${hash}`);
}

return [{
  json: {
    ...items[0].json,
    verified_hash: hash
  }
}];

If the checksum doesn't match, the workflow stops. I log this to a file so I can review failures later.

Deploying to the Runtime

Once verified, I used another HTTP Request node to POST the deployment payload to my Rust runtime API:

POST https://runtime.mydomain.com/api/deploy
Content-Type: application/json

{
  "function_name": "{{ $json.repo }}",
  "binary_url": "{{ $json.url }}",
  "hash": "{{ $json.verified_hash }}",
  "tag": "{{ $json.tag }}"
}

The runtime downloads the binary, verifies the hash again (redundant, but safer), and swaps the old function with the new one. If the new binary fails to start, it rolls back to the previous version.

Logging and Notifications

I added a final node that writes deployment results to a log file and sends a notification to my self-hosted Gotify instance. This way, I know immediately if something succeeded or failed.
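
The Gotify call is just another HTTP Request node. Gotify's message endpoint takes a title, a message, and a priority; mine looks roughly like this (the hostname and wording are placeholders, and the application token comes from Gotify's application settings):

POST https://gotify.mydomain.com/message?token=<application-token>
Content-Type: application/json

{
  "title": "Deploy {{ $json.repo }} {{ $json.tag }}",
  "message": "Deployed with hash {{ $json.verified_hash }}",
  "priority": 5
}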

What Worked

The workflow has been running for three months without manual intervention. Every time I tag a release, the function deploys within 30 seconds. The hash verification caught one corrupted download early on, which would have caused a runtime crash if deployed.

Using n8n meant I didn't need to write custom CI/CD scripts or set up GitHub Actions with self-hosted runners. The visual workflow made debugging easier because I could see exactly how the data was transformed at each step.

The rollback mechanism in my runtime API saved me twice when new binaries had startup bugs. The old version stayed active until the new one passed health checks.

What Didn't Work

My first attempt used n8n's built-in GitHub node instead of webhooks. It polled the API every minute, which was slow and hit rate limits during testing. Switching to webhooks made deployments instant and removed the polling overhead.

I initially tried to handle binary downloads inside n8n's HTTP Request node, but large files (over 50MB) caused memory issues. I moved the actual download to the runtime API and only passed the URL through n8n.

Error handling was weak at first. If the runtime API was unreachable, the workflow would hang indefinitely. I added a timeout setting to the HTTP Request node and a retry mechanism with exponential backoff.
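
The timeout and basic retry are options on the HTTP Request node itself; the exponential part I handled separately. Conceptually, the retry logic amounts to the following (a plain Node.js sketch of the idea rather than the exact node configuration; the URL, attempt count, and timings are placeholders):

// Retry a deploy request with exponential backoff (Node 18+ for global fetch).
async function deployWithBackoff(payload, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch('https://runtime.mydomain.com/api/deploy', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
        signal: AbortSignal.timeout(15000), // per-attempt timeout
      });
      if (res.ok) return await res.json();
      throw new Error(`Runtime returned ${res.status}`);
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err;
      // Wait 1s, 2s, 4s, 8s ... capped at 30s between attempts.
      const wait = Math.min(1000 * 2 ** attempt, 30000);
      await new Promise(resolve => setTimeout(resolve, wait));
    }
  }
}

Capping the backoff keeps a flaky runtime from stalling a deployment for minutes at a time while still giving it room to recover.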

I also learned that n8n's Function nodes don't persist state between executions. I tried to cache checksums for reuse but had to move that logic to the runtime API instead.

Key Takeaways

Webhooks are faster and more reliable than polling for real-time deployments. If the service supports them, use them.

Always verify binaries before deployment, even if you trust the source. Corrupted downloads happen more often than expected.

Keep deployment logic in the target system, not the automation tool. n8n should orchestrate, not execute.

Log everything. When deployments fail at 2 AM, detailed logs are the only way to diagnose issues without reproducing them.

Test rollback mechanisms under failure conditions. A deployment system that can't recover from bad releases is worse than manual deployments.