Why I Started Using Uptime Kuma with Cloudflare Tunnels
I run several services at home—mostly for my family and myself. Some are behind Cloudflare Tunnels because I don’t want to expose my home IP or deal with dynamic DNS and port forwarding. For a while, I used Uptime Kuma to monitor these services and get notified when something went down.
Then I hit a problem: Cloudflare was lying to my monitoring.
When a service behind a tunnel went offline, Cloudflare would return a 200 HTTP status code along with its own “service unavailable” page. Uptime Kuma saw the 200 and assumed everything was fine. I’d only find out something was broken when someone complained.
This wasn’t acceptable. I needed accurate monitoring, not false “up” readings manufactured by a proxy layer.
My Setup and the Problem
I run Uptime Kuma in a Docker container on my Proxmox host. Most of my self-hosted services sit behind Cloudflare Tunnels—things like my family’s shared calendar, a few automation dashboards, and some internal tools.
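For context, the Uptime Kuma side of this is nothing exotic. A minimal compose file in the spirit of the project’s standard example might look like this (the volume path and container name are illustrative):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - ./uptime-kuma-data:/app/data   # persists monitors and history
    ports:
      - "3001:3001"                    # web UI
    restart: unless-stopped
```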
Uptime Kuma was configured to do simple HTTP checks every few minutes. If a service didn’t respond, I’d get a Discord notification via webhook. This worked perfectly for services not behind Cloudflare.
But for tunneled services, Cloudflare’s status page broke the chain. The tunnel itself was up, so Cloudflare returned a 200. The actual service could be dead, and I’d never know.
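In other words, the default HTTP check reduces to roughly this logic (a simplified sketch, not Uptime Kuma’s actual code):

```python
# Status-code-only health check: any 2xx response counts as "up".
def naive_is_up(status_code: int) -> bool:
    return 200 <= status_code < 300

# The trap: when a tunneled origin dies, Cloudflare can serve its own
# error page with a 200 status, so this check still reports "up".
assert naive_is_up(200)       # Cloudflare's error page trips this too
assert not naive_is_up(503)   # only an honest error status would alert
```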
What Worked: Service Tokens and Bypass Rules
The solution came from Cloudflare Zero Trust’s service authentication feature. Instead of relying on HTTP status codes alone, I configured Uptime Kuma to authenticate as a trusted service using Cloudflare’s service tokens.
Here’s what I did:
Step 1: Create a Service Token
I went to my Cloudflare Zero Trust dashboard and created a service token under Access > Service Auth. This gave me two values:
- CF-Access-Client-Id
- CF-Access-Client-Secret
I saved these immediately because Cloudflare only shows the secret once.
Step 2: Create an Access Group
I created a new access group specifically for monitoring. Under Define Group Criteria, I selected:
- Selector: Service Token
- Value: The name of the service token I just created
This group represents “requests coming from my monitoring system.”
Step 3: Add a Bypass Policy
For each application behind a tunnel that I wanted to monitor, I added a new policy at the top of the policy list:
- Action: Service Auth
- Assign a Group: My monitoring access group
- Enable: Return 401 on failure
This tells Cloudflare: “If a request comes in with valid service token headers, let it through. If the service is actually down, return a 401 instead of a 200.”
Step 4: Configure Uptime Kuma
In Uptime Kuma, I edited each monitor for tunneled services and added custom headers:
```json
{
  "CF-Access-Client-Id": "my-actual-client-id",
  "CF-Access-Client-Secret": "my-actual-secret"
}
```
Now when Uptime Kuma checks a service, it authenticates as a trusted client. If the service is down, Cloudflare returns a 401, and Uptime Kuma correctly marks it as offline.
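Conceptually, the monitor’s check now behaves like this sketch (standard-library Python; the header names are the real ones Cloudflare expects, everything else is a placeholder):

```python
import urllib.request
from urllib.error import HTTPError, URLError

def check_service(url: str, client_id: str, client_secret: str) -> bool:
    """Probe a tunneled service with Cloudflare Access service token
    headers attached, then trust the status code that comes back."""
    req = urllib.request.Request(url, headers={
        "CF-Access-Client-Id": client_id,          # placeholder value
        "CF-Access-Client-Secret": client_secret,  # placeholder value
    })
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            # A genuine 2xx from the origin means the service is up.
            return 200 <= resp.status < 300
    except HTTPError:
        # 401 (per the Access policy) or any other error status: down.
        return False
    except (URLError, OSError):
        # Connection and DNS failures also count as down.
        return False
```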
What Didn’t Work
Before I figured this out, I tried a few things that failed:
Keyword monitoring: I configured Uptime Kuma to look for specific text on the page instead of relying on status codes. This worked, but it was fragile. Any change to the page layout broke monitoring, and it added unnecessary load.
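For illustration, keyword monitoring reduces to a check like this (a simplified sketch; the page text is made up), which shows why it breaks as soon as someone rewords the page:

```python
# Keyword check: "up" only if the expected text appears in the body.
def keyword_is_up(body: str, keyword: str) -> bool:
    return keyword in body

assert keyword_is_up("<h1>Family Calendar</h1>", "Family Calendar")
# One copy change and the monitor reports a false outage:
assert not keyword_is_up("<h1>Our Calendar</h1>", "Family Calendar")
```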
TCP checks: I tried monitoring the tunnel’s TCP port directly, but that only told me if the tunnel process was running—not whether the actual service behind it was responding.
Ignoring Cloudflare entirely: I briefly considered bypassing Cloudflare for monitoring by checking services directly on my local network. But that defeated the purpose—I wanted to know if the public-facing endpoint was working, not just the internal one.
Key Takeaways
Cloudflare’s service tokens exist specifically for this use case, but they’re poorly documented. Once configured, they work reliably.
The trick is understanding that Cloudflare sits between your monitor and your service. If you don’t authenticate properly, you’re monitoring Cloudflare’s availability, not your service’s.
Service tokens let you bypass that layer and get accurate status codes. The setup takes about ten minutes per application, and it’s been stable for me ever since.
If you’re using Cloudflare Tunnels and external monitoring, don’t trust HTTP 200 responses. Use service tokens.