I need routing/load balancing to backend(s) based on subdomain names, with TLS/SSL termination and WebSocket support (today I do this with HAProxy). I'm looking into Harvester to replace VMware-based solutions (because of a move from VMs to containers, though VMs and Docker containers will co-exist for some years).
I can of course install a VM with HAProxy and route through it (that's how it's done today in my VMware environments), but I would really like to squeeze as much as possible out of the box from Harvester to reduce complexity and CPU/memory/traffic overhead.
Any thoughts are valuable.
//Rickard
Harvester itself does not natively provide SNI-based routing, TLS termination, or WebSocket-aware load balancing. Here is what the official docs say IS available:
What Harvester provides natively:
- Harvester Load Balancer (via the harvester-load-balancer add-on): provides L4 load balancing (TCP/UDP) for VMs using IP-pool-based VIPs. It supports IP-based routing but does NOT provide SNI-based (L7) routing or TLS termination.
- VLAN/untagged networks: VMs connect to L2 networks, and you can assign static or DHCP IPs.
- Kube-OVN overlay networks (experimental as of v1.6): provides VPC/subnet-level networking, but still no built-in L7 routing.
What Harvester does NOT provide natively:
- SNI-based (subdomain) routing
- TLS/SSL termination
- WebSocket-aware load balancing
- HTTP/HTTPS ingress controller for VM traffic
Recommended approach:
Your existing approach, running HAProxy (or NGINX) in a VM, is actually the recommended way to achieve this in Harvester. You create a dedicated VM running HAProxy/NGINX with a static IP or VIP, configure it for SNI-based routing, TLS termination, and WebSocket support, and route external traffic through it.
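As a rough sketch of what such a proxy VM's config could look like (the hostnames, certificate path, and backend IPs below are placeholders, not from your environment):

```haproxy
# /etc/haproxy/haproxy.cfg -- illustrative sketch only
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/   # TLS terminated here; dir of PEM bundles
    mode http
    option forwardfor
    # With TLS terminated, plain Host-header ACLs handle per-subdomain routing.
    # WebSocket Upgrade requests are proxied transparently in http mode.
    acl host_app hdr(host) -i app.example.com
    acl host_ws  hdr(host) -i ws.example.com
    use_backend be_app if host_app
    use_backend be_ws  if host_ws
    default_backend be_app

backend be_app
    mode http
    server app1 10.0.10.11:8080 check

backend be_ws
    mode http
    timeout tunnel 1h     # keep idle WebSocket tunnels alive
    server ws1 10.0.10.21:8081 check
```

The key WebSocket-specific knob is `timeout tunnel`, which governs established upgraded connections; everything else is standard Host-based routing.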
Alternatively, if your workloads run in a Rancher-managed downstream Kubernetes cluster on Harvester, you can deploy an ingress controller (e.g., ingress-nginx) within that cluster for L7 routing, but that applies to containerized workloads, not bare VMs.
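For the containerized path, the per-subdomain routing would look roughly like the following ingress-nginx resource (the names, host, and TLS secret are illustrative assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                     # illustrative name
  annotations:
    # ingress-nginx proxies WebSocket upgrades by default; long-lived
    # connections may need the read/send timeouts raised:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls               # TLS terminated at the controller
  rules:
    - host: app.example.com             # subdomain-based routing
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-svc
                port:
                  number: 8080
```

This gives you the same SNI/Host routing, TLS termination, and WebSocket behavior as the HAProxy VM, but only for services running inside that cluster.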
Conclusion: For bare VM workloads, a dedicated reverse-proxy VM remains the right architecture. Harvester's built-in load balancer is L4 only.
Reference: Official Harvester docs > Networking > Load Balancer