This is a valid use case but touches on the boundary between Harvester’s HCI layer and KubeVirt’s internals. Here’s what the official docs and KubeVirt support:
The challenge:
Harvester manages VMs through KubeVirt. The virt-launcher pod runs the QEMU/KVM process, and attaching arbitrary host files to it requires modifying the VM spec to mount volumes into the pod. Harvester’s UI provides a limited set of volume types (disk images, container disks, etc.) and doesn’t expose raw ConfigMap/Secret volume mounts directly in its interface.
Supported approaches via KubeVirt (kubectl/YAML):
- ConfigMap volumes: As you found, you can add a ConfigMap as a volume in the VM spec. The ConfigMap data is mounted as files inside the `virt-launcher` pod (not inside the VM guest OS). This is the supported KubeVirt mechanism for injecting files into the pod. Example in the VM spec:

  ```yaml
  volumes:
    - name: my-config
      configMap:
        name: my-configmap
  ```

  Reference it in `domain.devices.disks` or directly as a volume reference, depending on your use case.

- Secret volumes: Similar to ConfigMap, you can use Kubernetes Secrets to inject files.
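A Secret volume follows the same shape as the ConfigMap case. Here is a minimal sketch, assuming a Secret named `my-credentials` already exists in the VM's namespace (both names are illustrative placeholders):

```yaml
volumes:
  - name: my-credentials
    secret:
      secretName: my-credentials
```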
The Harvester CSI conflict:
You mentioned that when using harvester-csi-provider, Harvester overwrites or rejects manual YAML edits. This is a known limitation — Harvester’s controller reconciles the VM spec and may overwrite certain fields. The workaround is to use annotations or to apply changes after the VM is stopped, before it starts.
Practical recommendation:
- Edit the VM YAML directly using `kubectl edit vm <name> -n <namespace>` with the cluster kubeconfig.
- Add your ConfigMap or Secret volume mount to the VM spec.
- Stop and restart the VM to apply the changes.
- Note: The Harvester UI may not show or preserve these custom volume mounts after future edits via the UI.
- Reference: Official Harvester docs > VM Management, and the KubeVirt documentation on volumes.
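Putting the pieces together, a trimmed `VirtualMachine` manifest might look like the sketch below. The VM name, namespace, and ConfigMap name are placeholders, and fields from your existing spec (root disk, networks, run strategy, and so on) are omitted for brevity:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
  namespace: default
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            # Expose the ConfigMap volume as a disk the guest can mount
            - name: my-config
              disk:
                bus: virtio
      volumes:
        - name: my-config
          configMap:
            name: my-configmap
```

After saving, stop and start the VM so KubeVirt rebuilds the `virt-launcher` pod with the new volume attached.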