Configuring Tailscale Funnel with Caddy reverse proxy for exposing local LLM APIs without opening firewall ports or VPS tunnels

Why I Needed This Setup

I run several LLM instances locally—Ollama, LM Studio, and a few custom models I've fine-tuned for specific tasks. These run on my home server (Proxmox host with GPU passthrough to a dedicated VM). I wanted to access these APIs remotely without the usual headaches: no port forwarding on my router, no firewall rules, no VPS tunneling setup, and definitely no exposing raw IP addresses to the internet.

I'd been using Tailscale for a while to access my homelab, but I wanted something more flexible than just VPN access. I needed:

  • Public HTTPS endpoints for specific services
  • No changes to my home network configuration
  • Automatic TLS certificates
  • A reverse proxy I could actually understand and modify quickly

That's when I looked into Tailscale Funnel combined with Caddy. Funnel lets you expose services from your tailnet to the public internet without opening firewall ports. Caddy handles the reverse proxy layer with automatic HTTPS.

My Actual Setup

Here's what I'm running:

  • Home server: Proxmox node with an Ubuntu VM running Docker
  • LLM services: Ollama on port 11434, LM Studio API on port 1234
  • Tailscale: Installed directly on the Ubuntu VM (not in Docker—this matters)
  • Caddy: Running in Docker on the same VM

The VM is connected to my tailnet with the hostname llm-server. My Tailscale machine name becomes llm-server.tail[xxxxx].ts.net.
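
If you ever need to confirm the exact name, it's in the status output; a quick sketch, assuming jq is installed:

# prints the machine's full MagicDNS name (the trailing dot is normal)
tailscale status --json | jq -r '.Self.DNSName'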

Installing Tailscale on the VM

I installed Tailscale directly on the host VM, not inside a container. This is important because Funnel needs to control the Tailscale daemon, and doing that from inside Docker gets messy fast.

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
sudo tailscale funnel status

At this point, Funnel isn't enabled yet—just checking that the command works. (If Funnel hasn't been allowed for your tailnet yet, the CLI will point you to the admin console setting that turns it on.)

Caddy in Docker

I run Caddy in Docker because I like keeping my reverse proxy isolated and easily replaceable. My docker-compose.yml looks like this:

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    # host networking: Caddy binds directly on the host, so no ports: mapping is needed
    network_mode: host
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:

I used network_mode: host so Caddy can reach services on localhost without Docker networking complications. This means Caddy can proxy to 127.0.0.1:11434 (Ollama) directly. It also means Docker port mappings don't apply, so Caddy's listen port comes straight from the Caddyfile.
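
Before involving Caddy at all, a quick sanity check from the VM confirms both backends answer on localhost (a sketch, assuming Ollama and LM Studio's server are already running):

# Ollama lists its installed models here
curl http://127.0.0.1:11434/api/tags

# LM Studio's OpenAI-compatible server lists loaded models here
curl http://127.0.0.1:1234/v1/models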

Caddyfile Configuration

My Caddyfile is extremely simple. Caddy can handle TLS automatically, which is one reason I picked it over nginx, but in this setup Tailscale terminates TLS, so Caddy just listens on plain HTTP on port 8080 (more on that in the What Didn't Work section).

http://:8080 {
    handle_path /ollama/* {
        reverse_proxy localhost:11434
    }
    handle_path /lmstudio/* {
        reverse_proxy localhost:1234
    }
}

This listens on plain HTTP on port 8080 and routes requests to my LLM APIs based on the URL path. The handle_path directive strips the /ollama or /lmstudio prefix before proxying, so each backend sees the paths it expects. For example:

  • https://llm-server.tail[xxxxx].ts.net/ollama/api/generate → Ollama
  • https://llm-server.tail[xxxxx].ts.net/lmstudio/v1/chat/completions → LM Studio
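
A quick way to verify the routing locally before exposing anything (run on the VM):

# through Caddy on 8080: the /ollama prefix is stripped, so Ollama sees /api/tags
curl http://127.0.0.1:8080/ollama/api/tags

# same idea for LM Studio: /lmstudio/v1/models arrives as /v1/models
curl http://127.0.0.1:8080/lmstudio/v1/models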

Enabling Tailscale Funnel

Now comes the key part. I needed to expose Caddy (listening locally on port 8080) to the public internet through Tailscale Funnel on port 8443.

sudo tailscale serve https:8443 / http://127.0.0.1:8080
sudo tailscale funnel 8443 on

This tells Tailscale:

  • Serve HTTPS traffic on port 8443
  • Forward it to Caddy's HTTP port (8080) locally
  • Enable Funnel to allow public internet access

I chose port 8443 instead of 443 because I already had other services using 443 on the same machine. Tailscale Funnel supports 443, 8443, and 10000.
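
To double-check what's being served and whether Funnel is actually on, the status subcommands cover it:

sudo tailscale serve status
sudo tailscale funnel status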

What Worked

Once everything was running, I could access my LLM APIs from anywhere:

curl https://llm-server.tail[xxxxx].ts.net:8443/ollama/api/generate \
  -d '{"model": "llama2", "prompt": "Why is the sky blue?"}'

This worked from my phone, my laptop at a coffee shop, and even from a friend's network when I was testing cross-network behavior. No VPN connection required on the client side—just a public HTTPS URL.
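
The LM Studio route works the same way through its OpenAI-compatible endpoint; this is a sketch, with the model name as a placeholder for whatever model is actually loaded:

curl https://llm-server.tail[xxxxx].ts.net:8443/lmstudio/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Hello"}]}'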

TLS certificates were handled automatically by Tailscale. I didn't touch Let's Encrypt, didn't configure cert renewal, didn't deal with DNS challenges. It just worked.

The reverse proxy layer (Caddy) let me route different paths to different services without exposing raw ports. I could add more LLM backends or switch models without changing the public URL structure.
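
Adding another backend is just one more handle_path block in the site above. For example, a hypothetical vLLM instance on port 8000 (both the path and the port are made up for illustration):

handle_path /vllm/* {
    reverse_proxy localhost:8000
}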

What Didn't Work

My first attempt used Caddy inside Docker with bridge networking. This failed because Caddy couldn't reach localhost:11434—it was isolated in its own network namespace. I had to switch to network_mode: host.

I initially tried running Tailscale inside a Docker container using the tailscale/tailscale image. This was a mistake for my setup: Funnel needs direct control over the Tailscale daemon, and I couldn't get tailscale serve or tailscale funnel to behave properly from inside the container. Installing Tailscale on the host VM was much simpler.

I also tried using Funnel on port 443 directly, but that conflicted with another service I had running. Moving to 8443 fixed it, but it means I have to specify the port in my URLs. Not ideal, but acceptable.

There's a quirk with Caddy's automatic HTTPS: with a bare hostname in the site address, Caddy tries to provision its own certificates even though traffic arrives through Tailscale, which already handles TLS. I had to explicitly configure Caddy to listen on plain HTTP internally (the http://:8080 site address in the Caddyfile above) and let Tailscale terminate TLS. This took me a while to figure out because the error messages weren't clear.

Key Takeaways

Tailscale Funnel is not a VPN feature—it's a way to expose specific services to the public internet without opening firewall ports. This distinction matters. Your LLM APIs become publicly accessible, so you need authentication at the application layer.
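
One low-effort option, shown here as a sketch rather than something from my original setup, is to gate everything in Caddy itself. Recent Caddy versions spell the directive basic_auth (older ones use basicauth), and the username and hash below are placeholders; generate a real hash with caddy hash-password:

http://:8080 {
    basic_auth {
        # placeholder credentials; replace the hash with real output from `caddy hash-password`
        apiuser $2a$14$replace-with-a-real-bcrypt-hash
    }
    handle_path /ollama/* {
        reverse_proxy localhost:11434
    }
    handle_path /lmstudio/* {
        reverse_proxy localhost:1234
    }
}

Clients then have to send the credentials with every request, for example with curl's -u flag.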

Running Tailscale on the host (not in Docker) is essential for Funnel to work. The tailscale serve and tailscale funnel commands need direct daemon access.

Caddy is a good match for this setup because it handles TLS automatically and has a simple config syntax. But you have to be careful about network modes in Docker. Host networking is the easiest path.

This setup works well for personal use or small teams. I wouldn't use it for production workloads with strict SLAs, but for accessing my own LLM APIs remotely, it's perfect. No monthly VPS costs, no tunneling services, no firewall rules to maintain.

If you're already using Tailscale for your homelab, Funnel + Caddy is a clean way to selectively expose services without the usual infrastructure overhead.