Implementing Nginx stream module SNI routing to multiplex multiple HTTPS services on a single public IP without TLS termination

Why I Needed SNI Routing Without TLS Termination

I run multiple HTTPS services behind a single public IP on my home network. Each service manages its own TLS certificates and has its own security requirements. Some are Synology services with built-in HTTPS, others are Docker containers with their own certificate handling.

The problem: I didn't want a central reverse proxy terminating TLS for everything. That would mean:

  • Managing all certificates in one place
  • Trusting the proxy with decrypted traffic
  • Reconfiguring the proxy every time a backend service updates its certificate

I needed a way to route incoming HTTPS connections to the right backend service based on the requested domain name, without decrypting the traffic. That's where Nginx's stream module with SNI routing came in.

My Setup and Requirements

I'm running Nginx in a Docker container on my Proxmox host. The container listens on my single public IP address (port 443) and needs to route traffic to:

  • Synology NAS on my local network (handling its own certificates)
  • Various Docker containers running web services
  • Self-hosted apps with their own HTTPS implementations

All these services sit behind my router, which forwards port 443 to the Nginx container. My DNS has wildcard A records pointing subdomains to my public IP.
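
Since routing only works if DNS resolves correctly, a quick sanity check for the wildcard record (with my placeholder domain) is:

dig +short nas.mydomain.com A
dig +short some-random-subdomain.mydomain.com A   # the wildcard should answer this too

Both should return the same public IP before any Nginx configuration comes into play.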

The key requirement: Nginx must read the SNI field from the TLS handshake to determine where to route the connection, but not decrypt it.

What Actually Worked

I used Nginx's stream module with ssl_preread enabled. This tells Nginx to peek at the TLS handshake without terminating it.

Here's the configuration I'm actually running:

stream {
  # pick a backend from the SNI hostname in the TLS ClientHello
  map $ssl_preread_server_name $backend {
    nas.mydomain.com      192.168.1.10:5001;
    n8n.mydomain.com      192.168.1.20:5678;
    vault.mydomain.com    192.168.1.30:8200;
    default               127.0.0.1:8080;
  }

  server {
    listen 443;
    proxy_pass $backend;
    ssl_preread on;               # peek at the handshake, don't terminate it
    proxy_connect_timeout 5s;     # backends are local, fail fast
    proxy_timeout 30s;            # allow long-lived connections (websockets)
  }
}

This sits in /etc/nginx/nginx.conf inside my Nginx container, outside the http block. The stream module operates at layer 4 (TCP), not layer 7 (HTTP).
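
For orientation, this is roughly how the file is structured – stream sits next to http at the top level (a simplified skeleton, not my full config):

# /etc/nginx/nginx.conf – simplified skeleton
worker_processes auto;

events {
  worker_connections 1024;
}

# layer 4: SNI-based TCP routing (the stream block shown above goes here)
stream {
  # map + server from above
}

# layer 7: any plain-HTTP or TLS-terminating vhosts on other ports
http {
  # unrelated sites, if any
}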

When a client connects to https://nas.mydomain.com, Nginx:

  1. Receives the TLS ClientHello message
  2. Reads the SNI field (the requested domain name)
  3. Matches it against the map
  4. Opens a TCP connection to the backend
  5. Passes all bytes through unchanged

The backend service completes the TLS handshake directly with the client. Nginx never sees the decrypted traffic.

Why This Configuration

I set proxy_connect_timeout to 5 seconds because my backend services are all local. If they don't respond quickly, something is wrong.

The proxy_timeout of 30 seconds is higher because some of my services maintain long-lived connections (like websockets in n8n). I found 30 seconds worked without breaking legitimate connections.

The default entry catches anything that doesn't match. I point it to a local service that returns a basic error page. This prevents connection hangs when someone hits an undefined subdomain.
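
One detail worth spelling out: since the stream module forwards the still-encrypted bytes, whatever listens on 127.0.0.1:8080 has to terminate TLS itself before it can serve that error page. A minimal catch-all vhost for the job could look roughly like this (placeholder certificate paths, not necessarily what I run):

# inside the http block: catch-all for unmatched SNI names
server {
  listen 127.0.0.1:8080 ssl;
  server_name _;

  # any self-signed pair works here – placeholder paths
  ssl_certificate     /etc/nginx/certs/catchall.crt;
  ssl_certificate_key /etc/nginx/certs/catchall.key;

  location / {
    return 404;
  }
}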

What Didn't Work

My first attempt used the http block with proxy_pass, thinking I could just forward HTTPS connections. That failed immediately because Nginx tried to parse the TLS handshake as HTTP.
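
For illustration, that kind of config looks roughly like this:

# broken: without "ssl" on the listen, Nginx expects plaintext HTTP on 443,
# so the incoming TLS ClientHello is garbage to the HTTP parser
http {
  server {
    listen 443;
    server_name nas.mydomain.com;

    location / {
      proxy_pass https://192.168.1.10:5001;
    }
  }
}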

I then tried mixing ssl_preread with certificate configuration in the same server block:

server {
  listen 443 ssl;
  ssl_preread on;
  ssl_certificate /path/to/cert;
  ...
}

Nginx rejected this outright. You can't enable ssl_preread and also configure SSL parameters. It's one or the other.

I also wasted time trying to use a resolver directive for local IPs. The documentation mentions resolver for DNS lookups, but I found it unnecessary for static local addresses. It only matters if you're using variables that need DNS resolution at runtime.
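
A resolver would only come into play if the map pointed at hostnames instead of static IPs – something like this hypothetical variant (the .lan name and the DNS server address are made up):

stream {
  # needed so Nginx can resolve the hostnames below at runtime
  resolver 192.168.1.1 valid=30s;

  map $ssl_preread_server_name $backend {
    nas.mydomain.com   nas.internal.lan:5001;   # hostname, not a static IP
    default            127.0.0.1:8080;
  }

  server {
    listen 443;
    proxy_pass $backend;
    ssl_preread on;
  }
}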

The Certificate Mismatch Problem

One backend service was using a self-signed certificate with the wrong CN. Even though Nginx wasn't terminating TLS, clients still saw the certificate error from the backend. I had to regenerate that service's certificate with the correct domain name.

This is important: SNI routing doesn't hide certificate problems. If your backend presents a cert for the wrong domain, clients will reject it. Nginx just routes the connection; it doesn't fix certificate issues.
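
To see exactly what a backend presents, you can ask it directly and bypass the proxy (using the NAS entry from the map above):

openssl s_client -connect 192.168.1.10:5001 -servername nas.mydomain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject

The subject shows the CN the certificate was issued for; if it doesn't match the domain clients use, that's your mismatch.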

Limitations I Hit

You can't route based on URL paths with this approach. SNI only contains the hostname, not the full URL. If you need path-based routing, you must terminate TLS and use the http module.
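
If I ever needed that, it would mean a TLS-terminating server in the http block on another port, roughly along these lines (hostname, port, and certificate paths are placeholders, not part of my current setup):

# http block: path-based routing requires terminating TLS first
server {
  listen 8443 ssl;
  server_name apps.mydomain.com;

  ssl_certificate     /etc/nginx/certs/apps.crt;    # placeholders
  ssl_certificate_key /etc/nginx/certs/apps.key;

  location /n8n/   { proxy_pass http://192.168.1.20:5678/; }
  location /vault/ { proxy_pass http://192.168.1.30:8200/; }
}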

I also can't inspect or modify the traffic in any way. No logging of HTTP requests, no adding headers, no rate limiting at the application layer. This is pure TCP proxying.

Port 443 is locked to stream mode. If I want to terminate TLS for some services, I need a separate Nginx instance or a different port. I can't mix stream and http on the same port.

Monitoring and Debugging

Stream module logging is minimal. I added this inside the stream block to see connection attempts:

log_format stream_routing '$remote_addr [$time_local] '
                          '$protocol $status $bytes_sent $bytes_received '
                          '$session_time "$ssl_preread_server_name"';

access_log /var/log/nginx/stream.log stream_routing;

This shows which domain was requested and whether the connection succeeded. It helped me catch typos in my map.

For debugging, I used openssl s_client to test connections:

openssl s_client -connect mydomain.com:443 -servername nas.mydomain.com

This lets me verify that SNI routing works and check which certificate the backend presents.
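
curl can run the same check end to end by forcing the name to resolve to the public IP (placeholder address here):

curl -vI --resolve nas.mydomain.com:443:203.0.113.10 https://nas.mydomain.com/

The verbose output confirms the connection reached the right backend and shows the certificate it served.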

Key Takeaways

SNI routing with ssl_preread works well when you want backend services to handle their own TLS. It's simpler than managing all certificates centrally.

The stream module is not the http module. Don't try to mix them or expect http features to work.

Certificate management stays with each backend service. This is good for isolation but means you can't fix certificate problems at the proxy layer.

Logging and debugging are limited compared to full reverse proxying. You won't see HTTP-level details.

If you need any application-layer inspection or modification, this approach won't work. You'll need to terminate TLS instead.

When I'd Use This Again

This setup makes sense when:

  • Backend services already handle HTTPS properly
  • You don't need centralized certificate management
  • You want to minimize trust in the proxy
  • You're routing to a small number of known services

For dynamic environments or when I need request inspection, I'd terminate TLS at the proxy instead.