Max VM Disk Size?

By corruption, I mean Longhorn disk corruption on the disk assigned to the VM. (Red alerts, and you can’t do anything, to the point where you need to delete the volume out of Longhorn and re-create it.) This seems to only happen on disks above 5TB assigned to VMs, and again it depends entirely on how much data changes for those “hidden” snapshot items you discussed. For our configuration, we have 140 nodes configured exactly as described in my original post. Under Harvester 1.6.x, we have so far found no Harvester/Longhorn corruption when the disks assigned to VMs are 1TB or under. As long as the VM is able to span multiple disks for the specific use case, this has worked great. We never reach the 25% warning on each NVMe, but I suspect the 5TB+ Longhorn disk effectively becomes “full” if the VM disk has a LOT of changes that need to be tracked. I hope I’m making sense here. So far, decreasing this to 1TB groups has eliminated the problem.
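To make the “becomes full” intuition concrete, here is a back-of-envelope sketch (my own arithmetic, not an official Longhorn formula; the function name and the churn ratio are illustrative assumptions) of how much physical space one volume can occupy once you count replicas plus the changed blocks retained by the snapshot chain:

```python
# Hypothetical back-of-envelope sketch: why a large Longhorn volume can
# consume far more physical space than its nominal size once replicas
# and "hidden" snapshot deltas are counted. All numbers are illustrative.

def physical_footprint_gib(volume_gib, replicas, snapshot_churn_ratio):
    """Worst-case physical space a volume can occupy across the cluster:
    each replica holds the live data plus the accumulated snapshot
    deltas (changed blocks retained by the snapshot chain)."""
    per_replica = volume_gib * (1 + snapshot_churn_ratio)
    return per_replica * replicas

# A 5 TiB VM disk with 3 replicas, where 60% of blocks have been
# rewritten since the oldest retained snapshot:
big = physical_footprint_gib(5 * 1024, replicas=3, snapshot_churn_ratio=0.6)
print(big)  # 24576.0 GiB, i.e. ~24 TiB of physical space for one "5 TiB" disk

# The same workload split into 1 TiB disks has the same total footprint,
# but no single replica can blow past one NVMe's free space on its own:
small = physical_footprint_gib(1 * 1024, replicas=3, snapshot_churn_ratio=0.6)
print(small)
```

That is why the per-NVMe free-space check never fires for us on 1TB volumes: each individual replica stays small even under heavy churn, while a single 5TB volume concentrates all of that delta on a handful of drives.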

[edit] I think the 25% warning on each NVMe (each Longhorn disk) should account for this hidden snapshot data. We don’t think it is accounting for it, and an earlier warning here might solve the problem for those with large VM disk files on Longhorn, especially when multiple NVMe drives are assigned to Longhorn on each node. When you look at the entire available pool of many dozens of TB in a clean, shiny new Harvester cluster, the admin’s “instinct” is to just assign each VM whatever it needs in one disk. There is no note or warning explaining that you need to account for replicas and hidden snapshots. The documentation does help a bit, but I think the interface should be a little stricter. Or perhaps we’re the only ones trying to use Harvester like we would VMware/ESXi. :slight_smile:
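For anyone else doing this kind of planning, here is the rule of thumb we now use (again, our own heuristic, not something from the Harvester docs; the 50% snapshot headroom is an assumption you should tune to your workload’s churn) for how much of a raw pool you can actually hand out as VM disks:

```python
# Rough capacity-planning heuristic (our rule of thumb, NOT an official
# Longhorn/Harvester formula). It subtracts the reserved free-space
# slice, divides by the replica count, and leaves headroom for snapshot
# deltas on write-heavy VM disks. The headroom factor is an assumption.

def provisionable_tib(raw_pool_tib, replicas=3,
                      minimal_available=0.25, snapshot_headroom=0.5):
    """TiB you can safely assign to VM disks out of a raw storage pool."""
    after_reserve = raw_pool_tib * (1 - minimal_available)  # keep 25% free
    per_replica = after_reserve / replicas                  # 3 copies of data
    return per_replica / (1 + snapshot_headroom)            # room for churn

# "Many dozens of TB": say 72 TiB of raw NVMe across the pool.
print(provisionable_tib(72))  # 12.0 TiB of actual VM disk capacity
```

The gap between 72 TiB raw and 12 TiB provisionable is exactly the gap between the admin’s “instinct” and what the cluster can really sustain, which is why we’d like the interface to surface it.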