For future searchers with similar issues: in general, the maximum practical size of your VM disks on Harvester/Longhorn really depends on how much of your VM's data changes, because that churn drives the space required for rebuilding replicas with temporary snapshots. If a very large percentage of a VM's data is being modified, the space needed can outgrow the largest available NVMe disk on your Harvester node (I believe this is what the 25%-per-disk warning in Harvester is about, although it's only a general warning). As the Harvester admin, you really need to understand how much of your VM disk data is changing over time.

You can manually move or delete other replicas or other VM disks to free up space (as we have done), but it's ugly and takes a significant amount of time to do properly, and even then, for two test VMs with 5 TB disks, we had to fully restore them. Eventually, I feel this should be automated in Harvester's disk management, to the point where the admin has the option to enable automatic VM pausing/live migration to other nodes to prevent disk corruption.
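If it helps anyone, here is a minimal sketch of how you could keep an eye on per-volume usage as a rough proxy for that churn. It assumes the Longhorn `longhorn.io/v1beta2` Volume CRDs in the `longhorn-system` namespace and the official `kubernetes` Python client; the API version and fields like `status.actualSize` may differ between Longhorn releases, so treat it as a starting point rather than a drop-in tool.

```python
# Sketch: compare each Longhorn volume's provisioned size to its actual
# on-disk size, so you can spot volumes whose churn is eating node space
# before a rebuild pushes a disk over the edge.
# Assumptions: longhorn.io/v1beta2 CRDs, longhorn-system namespace,
# spec.size / status.actualSize fields (verify against your Longhorn version).
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run in-cluster
api = client.CustomObjectsApi()

volumes = api.list_namespaced_custom_object(
    group="longhorn.io", version="v1beta2",
    namespace="longhorn-system", plural="volumes",
)

for vol in volumes.get("items", []):
    name = vol["metadata"]["name"]
    spec_size = int(vol["spec"].get("size", 0))                 # provisioned bytes
    actual = int(vol.get("status", {}).get("actualSize", 0))    # bytes used on disk
    if spec_size:
        pct = 100.0 * actual / spec_size
        print(f"{name}: {actual / 2**30:.1f} GiB used of "
              f"{spec_size / 2**30:.1f} GiB provisioned ({pct:.0f}%)")
```

Running something like this on a schedule (and alerting when a volume's actual usage climbs quickly) would at least give you warning before a rebuild needs more temporary snapshot space than the node's disk can hold.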