For your use case — a bare metal OVH server running Harvester 1.3.2, wanting all VM/cluster nodes to communicate via NAT and reach the internet — here is the recommended approach based on the official Harvester networking documentation:
Understanding Harvester’s network types:
Harvester offers three VM network types:
- Management Network: uses the Calico/Canal overlay (Kubernetes internal). VMs get an internal cluster IP only, with no direct external access; internet traffic goes out via masquerade NAT through the host.
- VLAN Network: bridges VMs directly onto the physical network (requires a VLAN-capable switch/NIC); a manifest sketch follows this list.
- Untagged Network: similar to a VLAN network, but without an explicit VLAN ID tag.
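For context, Harvester models VLAN/untagged networks as Multus NetworkAttachmentDefinitions using the bridge CNI. The sketch below is illustrative only: the network name, namespace, cluster network label, bridge name, and VLAN ID are assumptions, and the exact config string Harvester generates may differ slightly.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100                     # hypothetical VM network name
  namespace: default
  labels:
    network.harvesterhci.io/clusternetwork: mgmt   # cluster network this VM network belongs to (assumed)
spec:
  # Bridge CNI config: attach VM interfaces to the mgmt bridge, tagged with VLAN 100
  config: '{"cniVersion":"0.3.1","type":"bridge","bridge":"mgmt-br","promiscMode":true,"vlan":100,"ipam":{}}'
```

You normally never write this by hand; the Harvester UI creates it when you define a VM network, but it is useful to know what sits underneath.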
For your OVH bare metal scenario (single public IP, NAT required):
The Management Network with the masquerade interface type is your best option (see the manifest sketch after the list below). It works like this:
- VMs get a private IP on the management overlay network.
- Traffic is masqueraded (NAT’d) through the Harvester host’s public IP for internet access.
- VMs on the same cluster can communicate with each other directly.
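As a minimal sketch of what this looks like at the KubeVirt layer (which Harvester uses under the hood), assuming a hypothetical VM name and an illustrative container disk image:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                    # hypothetical VM name
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        memory:
          guest: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}       # NAT the VM behind the pod/host IP
      networks:
        - name: default
          pod: {}                  # attach to the cluster's pod (management) network
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04   # illustrative image
```

Creating the VM through the Harvester UI with the management network selected should produce an equivalent spec: a masquerade interface bound to the pod network.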
Steps:
- When creating VMs, attach them to the Management Network.
- Set the network interface type to masquerade (this is the default for the management network).
- VMs will get a 10.x.x.x IP, and internet traffic will be NAT’d through the host.
For Rancher-provisioned downstream cluster nodes:
When using the Harvester Node Driver in Rancher to provision RKE2/K3s clusters, the VMs are automatically placed on the management network with masquerade by default. No additional firewall configuration is needed on the OVH side, since the NAT is handled at the Harvester/KubeVirt level.
Limitation: VMs on the management network are not directly reachable from outside the cluster without additional configuration (e.g., setting up a Kubernetes Service with a VIP using the Harvester Load Balancer addon, or port forwarding).
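For the port-forwarding route, a plain Kubernetes Service can expose a single port of a masqueraded VM. A minimal sketch, assuming the hypothetical demo-vm from above and the vm.kubevirt.io/name label that KubeVirt puts on the VM's virt-launcher pod; type: LoadBalancer assumes a load balancer (e.g., the Harvester LB addon) is available, otherwise NodePort also works:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-vm-ssh                # hypothetical Service name
  namespace: default
spec:
  type: LoadBalancer               # swap to NodePort if no load balancer is available
  selector:
    vm.kubevirt.io/name: demo-vm   # matches the VM's virt-launcher pod
  ports:
    - name: ssh
      protocol: TCP
      port: 22
      targetPort: 22
```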
Reference: Official Harvester docs > Networking > VM Network