Why I Built Rate Limiting for Home Assistant Webhooks
I run Home Assistant on my home network with several automations triggered by webhooks. One day, I accidentally created a feedback loop: an automation triggered a webhook that fired another automation, which triggered the first one again. Within seconds, my nginx logs showed thousands of requests hammering the server. Home Assistant stayed up, but my system was clearly misbehaving.
I needed a way to stop these runaway loops at the edge—before they even reached Home Assistant. I didn't want to rely on automation-level delays or scripts because those solutions only work after the request has already been processed. I wanted protection at the reverse proxy layer.
My Setup
I run Home Assistant in Docker on a Proxmox host. All external traffic goes through nginx as a reverse proxy. I already had Redis running for other services (like n8n state management), so adding rate limiting using Redis as the backend made sense.
The goal was simple: limit webhook endpoints to a reasonable number of requests per time window. If something goes wrong, nginx should return a 429 status code and stop the flood before it reaches Home Assistant.
What I Actually Configured
Installing nginx with the limit_req Module
nginx includes the ngx_http_limit_req_module by default, which handles rate limiting using shared memory zones. I didn't need Redis for this; the built-in approach is simpler, so I started there.
In my nginx configuration, I added a rate limit zone in the http block:
```nginx
http {
    limit_req_zone $binary_remote_addr zone=webhook_limit:10m rate=10r/m;
    ...
}
```
This creates a shared memory zone called webhook_limit that tracks requests by IP address. The rate=10r/m means 10 requests per minute, which nginx enforces as one request every 6 seconds rather than 10 at once. The 10m allocates 10 megabytes of memory to store request state, enough to track tens of thousands of client IPs.
Then, in the server block for Home Assistant, I applied the limit to webhook paths:
```nginx
location /api/webhook/ {
    limit_req zone=webhook_limit burst=5 nodelay;
    limit_req_status 429;  # nginx rejects with 503 by default; this returns 429 instead
    proxy_pass http://homeassistant:8123;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```
The burst=5 allows up to 5 requests beyond the steady rate before nginx starts rejecting. The nodelay option forwards those burst requests immediately instead of spacing them out to match the rate.
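To make the rate/burst interaction concrete, here is a small Python model of the accounting nginx does. This is a simplification of ngx_http_limit_req_module's leaky-bucket logic, not its actual code:

```python
def classify(timestamps, rate_per_sec, burst):
    """Return True/False per request: accepted or rejected (nodelay model).

    Simplified model of nginx limit_req accounting: each request adds 1 to
    an "excess" counter that drains at rate_per_sec; a request is rejected
    when the excess already exceeds the burst allowance.
    """
    excess, last, results = 0.0, None, []
    for t in timestamps:
        if last is not None:
            excess = max(0.0, excess - (t - last) * rate_per_sec)
        last = t
        if excess > burst:          # burst allowance used up: reject
            results.append(False)
        else:                       # within rate + burst: accept
            excess += 1.0
            results.append(True)
    return results

# Ten webhooks arriving in the same instant at rate=10r/m (one per 6 s)
# with burst=5: the first six get through, the rest are rejected.
print(classify([0.0] * 10, rate_per_sec=10 / 60, burst=5))
```

With burst=0 the same model accepts only requests spaced at least 6 seconds apart, which is exactly the over-aggressive behavior described later.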
Why This Worked (Mostly)
This setup stopped the automation loop immediately. When the webhook fired too many times, nginx returned a 429 status code, and Home Assistant never saw the excess requests. The burst parameter was useful because some legitimate automations might fire a few webhooks in quick succession (like motion sensors triggering multiple zones).
But there was a limitation: the rate limit was per IP address. If I had multiple devices or services triggering webhooks from the same IP (which I do, since everything is behind my router), they all shared the same limit. This wasn't ideal, but it was good enough to prevent runaway loops.
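One mitigation I considered but did not deploy: the zone key can be any nginx variable, so a sketch like the following (zone name hypothetical) would give each IP-plus-webhook-path combination its own counter instead of one shared bucket:

```nginx
# Hypothetical zone: concatenating variables in the key gives each
# IP + webhook path pair its own counter instead of one shared bucket.
limit_req_zone $binary_remote_addr$request_uri zone=webhook_per_path:10m rate=10r/m;
```

The trade-off is memory: every distinct key consumes its own slot in the zone, so a 10m zone fills faster with compound keys.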
Why I Didn't Use Redis (Yet)
I considered using Redis for more granular rate limiting—like per webhook path or per automation—but I didn't actually implement it. Here's why:
- The built-in nginx solution worked for my use case.
- Adding Redis would require the lua-nginx-module or a third-party module, which complicates the setup.
- I didn't need distributed rate limiting across multiple nginx instances.
If I had multiple nginx servers or needed more complex logic (like rate limiting per webhook endpoint instead of per IP), Redis would make sense. But for a single home server, shared memory zones were simpler and faster.
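For comparison, here is roughly what a Redis-backed limiter would compute. This is a hedged Python sketch: a plain dict stands in for Redis (where you would use INCR plus EXPIRE on a per-window key), and the webhook path is hypothetical:

```python
import time

class FixedWindowLimiter:
    """Sketch of a Redis-style fixed-window rate limiter.

    In production the counter would live in Redis (INCR + EXPIRE on a
    key like "rl:<path>:<window>"); here a dict stands in so the logic
    is visible and self-contained.
    """
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}   # (key, window index) -> request count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))   # fixed window index
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] <= self.limit

limiter = FixedWindowLimiter(limit=10, window_seconds=60)
# Eleven requests to the same webhook path within one window:
# the first ten pass, the eleventh is rejected.
results = [limiter.allow("/api/webhook/motion", now=5.0) for _ in range(11)]
```

Because the key is the webhook path rather than the client IP, this is the per-endpoint granularity that the shared memory zone keyed on $binary_remote_addr can't give you across multiple nginx instances.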
What Didn't Work
Trying to Rate Limit Inside Home Assistant
Before adding nginx rate limiting, I tried to handle this inside Home Assistant using automation delays. The problem is that delays only work after the automation has already started. If the trigger fires 100 times in one second, Home Assistant queues all 100 executions. Adding a delay at the end of the action doesn't stop the queue from growing.
Some people suggest turning off the automation as the first action, adding a delay, then re-enabling it. This works, but it's fragile. If the automation crashes or the delay is interrupted, the automation stays disabled. I didn't want to risk that.
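For reference, that pattern looks roughly like this in Home Assistant YAML (the alias, entity id, and webhook id are all hypothetical). Note that automation.turn_off stops running actions by default, so the pattern needs stop_actions: false to avoid killing its own sequence, which is part of why it's fragile:

```yaml
# Hypothetical sketch of the disable/delay/re-enable pattern.
automation:
  - alias: "Throttled webhook handler"
    trigger:
      - platform: webhook
        webhook_id: throttled_hook          # hypothetical id
    action:
      - service: automation.turn_off        # ignore re-triggers while we work
        data:
          stop_actions: false               # don't stop this running sequence
        target:
          entity_id: automation.throttled_webhook_handler
      # ... the actual webhook handling goes here ...
      - delay: "00:00:30"
      - service: automation.turn_on         # if this step never runs,
        target:                             # the automation stays disabled
          entity_id: automation.throttled_webhook_handler
```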
Using Scripts Instead of Automations
Scripts in Home Assistant have a mode setting that can prevent parallel execution. Setting mode: single means the script won't run again until the previous execution finishes. This is useful, but it doesn't stop the trigger from firing. The automation still processes the event—it just doesn't run the script multiple times.
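The mode setting looks like this (script name and action are hypothetical); with mode: single, Home Assistant logs a warning and simply drops any new invocation while one is already running:

```yaml
script:
  handle_webhook:          # hypothetical script name
    mode: single           # drop new invocations while one is running
    sequence:
      - service: persistent_notification.create
        data:
          message: "Webhook processed"
```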
This didn't solve my problem because the issue was at the network level. I needed to stop the requests before they even reached Home Assistant.
Misunderstanding the burst Parameter
At first, I set burst=0 thinking it would enforce a strict rate limit. Instead, it rejected every request that arrived faster than the configured rate (one every six seconds at 10r/m), even legitimate ones. The burst parameter exists to absorb short spikes in traffic without rejecting valid requests; setting it too low made the system too aggressive.
I eventually settled on burst=5 after testing with real automations. This allowed a few rapid-fire webhooks (like multiple motion sensors triggering at once) without blocking them, but still stopped runaway loops.
Key Takeaways
- Rate limiting at the reverse proxy level is more effective than trying to handle it inside Home Assistant.
- nginx's built-in limit_req module is simple and works well for single-server setups.
- The burst parameter is important: set it too low and you'll block legitimate traffic.
- Redis-based rate limiting is overkill for most home setups unless you need distributed limiting or per-endpoint rules.
- Always test rate limits with real automations. What seems reasonable in theory might block legitimate use cases.
If you're running Home Assistant behind a reverse proxy, I recommend adding basic rate limiting to webhook endpoints. It's a simple safeguard that can save you from debugging runaway automations at 2 AM.