Harvester v1.2.1 - Node is cordoned

Hi all, I have a 3-node Harvester cluster with only 3-4 VMs running. One day, one host automatically cordoned itself, and any VMs on it migrated to the other hosts. Looking at the Harvester UI, everything on the server reports healthy, but it's stuck in the cordoned state. Looking at the physical host, one disk had faulted; I have since replaced the bad disk, and all physical disks in the cluster are now healthy. After this I figured the host would notice and become available again, but that is not the case: nothing I do will get this host out of the "cordoned" state.
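
In case it's relevant, the cordon state can also be inspected and cleared from the CLI. A minimal sketch, assuming kubectl access via a kubeconfig downloaded from the Harvester UI; `<node-name>` is a placeholder for the cordoned host:

```bash
# The stuck host shows up as Ready,SchedulingDisabled
kubectl get nodes

# Equivalent of the UI's Uncordon action
kubectl uncordon <node-name>

# Check the node's conditions and recent events for clues
kubectl describe node <node-name>
```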

Also, I cannot SSH into any of the nodes, even with the default rancher/rancher creds.

Any help on either of these issues is greatly appreciated.

Fixed everything myself.
SSH was enabled; I was just using the wrong creds.
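In case anyone else trips on this: Harvester nodes log in as the `rancher` user with the password set for that node at install time, not a fixed default. Something like:

```bash
# <node-ip> is a placeholder; the password is the one chosen during installation
ssh rancher@<node-ip>
```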
After replacing the failed disk on one of the nodes, the node in question was stuck in the cordoned state, and neither rebooting the node nor anything in the UI (Uncordon or Maintenance Mode) made a difference.
Below the problem node was the message "Kubelet stopped posting node status". The fix was to restart that service, but kubelet.service does not exist on my nodes: on Harvester, the kubelet runs inside the RKE2 service, so the one I actually needed to restart was rke2-agent.service. Once I restarted it, the node came back to life and began its recovery, rebuilding the failed disks/volumes one by one.
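
For anyone searching later, roughly the commands involved (run over SSH on the stuck node; note that on control-plane nodes the kubelet lives under rke2-server.service instead):

```bash
# There is no standalone kubelet unit on an RKE2-based node
systemctl status kubelet.service
# -> Unit kubelet.service could not be found.

# The kubelet runs as a child process of the RKE2 service, so restart that
sudo systemctl restart rke2-agent.service   # rke2-server.service on control-plane nodes

# Watch the node recover
systemctl status rke2-agent.service
journalctl -u rke2-agent.service -f
```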