Longhorn V2 from the ISO installer?

I am very new to Kubernetes, Rancher, and Harvester. I’ve started with Harvester which is probably the wrong way around, but here I am.

I was wondering if there is a way to select Longhorn V2 during the text UI installer for the data drive?

My lab is made up of old HP T740 thin clients, each with the following:

64GB of RAM

256GB M.2 SATA for the OS

1TB M.2 NVMe for data

dual-port 10GbE card in the only slot

As I read around the web, it doesn’t look like I can “remove” the v1 data drive and “replace” it with a v2 drive, or at least not via a process I can understand. The price of even a 500GB drive that could share OS and data is more than I can really spend on just an experiment. So is there a way to turn Longhorn v2 on and use it for the initial data drive setup while installing the OS? I’m working with the 1.7.1 release. I just want to see how much faster a VM will read/write under the newer NVMe data engine.

You’re correct: the data disk selected during install will only be used for LHv1 volumes, and it can’t be swapped out/replaced by LHv2. The only way to try LHv2 with your setup above would be to use your 256GB disk for both the OS and the data partition (which will work, BTW, but you’ll end up with only maybe ~80GB on the data partition), then after install, enable the LHv2 data engine and use the 1TB NVMe for that.

(You could try the other way around – OS / LHv1 data on the 1TB drive and use the 256GB for LHv2, but I reckon you really want the NVMe for LHv2, not the SSD)
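If it helps, the post-install step can be done at the Longhorn layer. This is only a sketch under the assumption that your Harvester build ships a Longhorn release with the v2 data engine preview; the setting and CRD names below come from the Longhorn docs and may differ in your version, and `<node-name>` plus the device path are placeholders you’d need to fill in:

```shell
# Sketch only: enable Longhorn's v2 data engine after install.
# Requires hugepages configured on each node; names may vary by release.

# 1. Turn on the v2 data engine setting.
kubectl -n longhorn-system patch settings.longhorn.io v2-data-engine \
  --type merge -p '{"value": "true"}'

# 2. Register the 1TB NVMe on a node as a block-type disk (the v2
#    engine consumes the raw block device, not a formatted mount).
kubectl -n longhorn-system patch nodes.longhorn.io <node-name> --type json -p '[
  {"op": "add", "path": "/spec/disks/nvme-data", "value": {
    "diskType": "block",
    "path": "/dev/nvme0n1",
    "allowScheduling": true
  }}
]'
```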

Thanks for the reply. I’ll have to check again with the installer; it pretty much didn’t let me split the SATA drive, giving me a persistent data size error and seemingly wanting all 256GB for the OS.

There is an alternative I might try… the installer seems to allow USB drives to be used for the initial data drive, and I have a bunch of 32GB drives I could liberate. The thought makes me shiver, though; not knowing exactly which parts of the system ride on that LHv1 dataset gave me pause.

I spent a bunch of time pricing drives last night, and I’m still not sure I’m going to upgrade. 500GB SATA drives are roughly $90 USD at the lowest tier, but mid-tier 1TB drives are only $140–$150, so the 500GB option certainly doesn’t make sense. I almost bought some lowest-tier 1TB drives for $116, but too many reviews reported early failures and warranties not being honored, so I chickened out. I have one mid-tier 1TB drive and could buy two more of the same model; that might be a way forward, but $300 USD to test something that’s said to be not for production because it lacks a few features is a bit much.

I did some speed tests to compare, let’s see if I can post a link to my rambling journey:
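For anyone wanting to run a similar comparison, a typical in-guest fio invocation might look like the following. The parameters here are purely illustrative (my own choices, not the exact tests behind the link):

```shell
# Illustrative 4K random read/write benchmark against a test file on
# the Longhorn-backed volume, mounted at /data in this example.
fio --name=randrw --filename=/data/fio.test --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based \
    --group_reporting
```

Running the same job file against a v1-backed and a v2-backed volume keeps the comparison apples-to-apples.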

I wish I had made the time to set up vSAN while I still had my vSphere license (stupid Broadcom messed that up; I didn’t have time to study for and pass an exam). That would have given a nice baseline for HCI on a very mature product, and it might have been exactly the same performance I’m seeing now. But LH v2 is supposed to be faster in NVMe mode, and I’d like to see what it can do in my little system. Going forward there will be few servers in the HCI world that aren’t running on NVMe; everything I’ve been pricing for my production system is based on that, and I’m a very little fish in this IT world.

Against my better judgment, I decided to sell my second kidney and buy two more 1TB drives that match the one I already have. I hate having to know whether something is better. I’m just hoping LH v2 reaches feature parity (ready for production) quickly. In the meantime, I’m going to play with the drivers; I’ve read that the SCSI driver might be slightly faster, and I think I had the other driver loaded when I tested. Might get this done before I reinstall on the larger drives, might not.

I had a bit of time on Friday and tested the SATA, SCSI, and virtio configurations for the VM. Speeds were all relatively close, with SATA averaging lower and SCSI and virtio similar. Hoping I have time to install the new drives, install and configure Harvester, and maybe test tomorrow. It’s going to be LH v1 on the OS disk (SATA interface) and LH v2 on the NVMe. I think network speed is going to be my next limitation; I only have 10GbE right now, so I might need to aggregate both connections for management/storage, or buy cards that support 25GbE and connect to the three ports I have on another switch in the system. It’s nice to know where the limitations are before designing a system.
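For reference, the SATA/SCSI/virtio choice in the VM form corresponds to a single field in the underlying KubeVirt VM spec, so it’s easy to flip between runs. This is just a fragment, and the disk name `rootdisk` is illustrative:

```yaml
# KubeVirt VirtualMachine spec fragment: the "bus" field selects the
# virtual disk controller presented to the guest.
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio   # alternatives: sata, scsi
```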

I’m also hoping LH v2 works in kind of a round-robin way: host 1 sends the data to its local disk and to host 2, and host 2 then forwards the data to host 3. That’s not entirely real-time like LH v1, which sends the data locally and to both hosts directly, but it would use half the bandwidth, or rather leave more bandwidth for the single handoff to keep things flowing faster. Guess I’ll find out soon enough.