Thank you, Philip. This matches what we found in our testing, and we feel that SUSE Virtualization should document this situation better. Alternatively, the upstream product could apply a formula to warn the user when a given Harvester/Longhorn disk configuration, combined with a given number of VM disks of a given size, will not be sustainable.

Many of our production VMs running under SLE 15 KVM on local RAID10 have local disks above 1 TB, and they perform extremely well on modern hardware. Migrating these same VMs en masse into a new 1.5.x Harvester/SUSE Virtualization cluster sized to match CPU cores, RAM, and total available Longhorn disk space works initially, but the VMs quickly begin to suffer significant disk corruption and multiple other issues due to lack of space, because the VM disks in our testing had a very high rate of modification.

I would appreciate any thoughts or guidance on how we should configure these nodes, and which trade-offs we should simply accept if we want to migrate to SUSE Virtualization/Harvester from straight KVM.
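To illustrate the kind of warning formula I have in mind, here is a rough back-of-the-envelope sketch. The replica count of 3 is Longhorn's default, but the churn factor is purely my own assumption to account for snapshot and rebuild overhead on heavily modified disks; neither the function nor its parameters come from any official tool.

```python
# Rough Longhorn capacity sanity check -- a sketch, not an official formula.
# Assumptions (mine, not from Longhorn docs):
#   replicas: Longhorn's default replica count of 3
#   churn_factor: extra headroom for snapshot/rebuild churn on disks
#     with a high rate of modification (a guess; tune to your workload)

def required_raw_storage_gib(vm_disk_sizes_gib, replicas=3, churn_factor=0.5):
    """Estimate the raw Longhorn storage needed for a set of VM disks."""
    logical = sum(vm_disk_sizes_gib)
    return logical * replicas * (1 + churn_factor)

# Example: ten VMs, each with a single 1.5 TiB disk
needed = required_raw_storage_gib([1536] * 10)
print(f"~{needed:,.0f} GiB of raw storage needed")  # ~69,120 GiB
```

Even this crude estimate shows why matching only the total logical disk space of the old KVM hosts is not enough: with 3 replicas plus churn headroom, the raw capacity requirement is several times the sum of the VM disk sizes.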