I was under the impression that Harvester would let me know when an update was available and offer to download it for me. I'm on default settings, and the URL it checks is releases.rancher.com/harvester, which is essentially an empty file. Is that the correct default? I also see harvester-upgrade-responder.rancher.io/v1/checkupgrade referenced, but that doesn't work for me either.
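In case it helps anyone poking at the same thing, this is roughly how I've been checking the upgrade-checker configuration and the responder endpoint. The setting names and request shape are from my reading of the docs and the upgrade-responder project, so treat them as assumptions and verify against your release:

```bash
# Harvester settings that control the automatic update check
# (names as I understand them from the docs; verify for your version)
kubectl get settings.harvesterhci.io upgrade-checker-enabled upgrade-checker-url

# As far as I can tell the upgrade responder expects a POST with the current
# version, not a plain GET, which may be why it looks broken from a browser.
# Replace v1.x.y with your running Harvester version.
curl -s -X POST https://harvester-upgrade-responder.rancher.io/v1/checkupgrade \
  -H 'Content-Type: application/json' \
  -d '{"appVersion": "v1.x.y"}'
```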
I'm going to download and upgrade through the "manual" process and see where I end up. This is my lab and I'm not very far into testing, so it's not an issue if it destroys itself. I also saw something in the 1.8.0 docs about changing to a v2 upgrade; I'll have to read it again to understand what the shift from v1 to v2 upgrades actually does.
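For the manual route, my understanding from the docs is that you register the target release as a Version resource pointing at the ISO, and the upgrade then shows up in the UI. Something roughly like the sketch below; the field names, URL layout, and version are placeholders from memory, so double-check them against the upgrade docs for the actual target release:

```bash
# Register the target release so the upgrade appears in the UI
# (field names per my reading of the harvesterhci.io docs; values are placeholders)
cat <<'EOF' | kubectl apply -f -
apiVersion: harvesterhci.io/v1beta1
kind: Version
metadata:
  name: v1.x.y
  namespace: harvester-system
spec:
  isoURL: https://releases.rancher.com/harvester/v1.x.y/harvester-v1.x.y-amd64.iso
  isoChecksum: '<sha512 of the ISO>'
EOF
```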
[edit] Managed to delete all the Longhorn V2 stuff. CPU usage dropped from about 3 cores in use per host to 0.5 to 1 core per host, which pretty much makes up my mind for now; see below. [/edit]
On my little lab machines the upgrade took over an hour (I went to lunch partway through) and was only sort of successful. These are 4c/8t (AMD V1756B) processors with 64 GB of RAM, with the OS on an M.2 SATA drive. That's barely enough CPU to keep things working with a few VMs and not really a recommended setup; 8c/16t would be worlds better if the money is available, and even 6c/12t in a more modern processor would be nicer.
Longhorn V2 did not come back up. I'm getting `! READY ! Schedulable` on the V2 disk.
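For anyone following along, this is how I've been poking at the disk state from the CLI. The namespace and resource names assume a stock Harvester/Longhorn install, so treat it as a sketch rather than the definitive procedure:

```bash
# Longhorn tracks per-host disk state on its node objects
kubectl -n longhorn-system get nodes.longhorn.io

# Dump one node and read the diskStatus conditions to see why the
# v2 disk is flagged not-ready / unschedulable (host name is a placeholder)
kubectl -n longhorn-system get nodes.longhorn.io <host-name> -o yaml

# The v2 data engine runs its own instance-manager pods; if those are
# crash-looping, that's usually where the problem is
kubectl -n longhorn-system get pods | grep instance-manager
```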
I had an image and a VM on there, which means I can't even delete them and start again. I just did a full shutdown and restart hoping that would work out the problems, but LH V2 is still down. I'll give it a few days to see if I can find the cause, but it's looking like I'll reconfigure the disks, run everything on LH V1 for now, and wait for V2 to reach release.
The OS is on a 1 TB SATA drive and the V2 disk is a 1 TB NVMe. If V2 isn't going to get me anything, I'll swap the 256 GB SATA drive back in, let the 1 TB disk be the only data disk, and run it on LH V1. This also frees up some resources: LH V2 creates extra pods that take more processor, and I'm processor-limited on these boxes, so I should probably drop back to a simpler configuration.
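Before pulling the disk I want to confirm the V2 data engine really is what's holding the CPU. This is the Longhorn-level setting I believe controls it (setting name per the Longhorn docs; Harvester may also manage or override it at its own settings layer, so verify before flipping it on a cluster you care about):

```bash
# Check whether Longhorn's v2 (SPDK) data engine is enabled
kubectl -n longhorn-system get settings.longhorn.io v2-data-engine

# Disabling it should stop the extra v2 instance-manager pods and release the
# hugepages/CPU they reserve; only safe once no v2 disks or volumes remain
kubectl -n longhorn-system patch settings.longhorn.io v2-data-engine \
  --type merge -p '{"value": "false"}'
```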