Hi all, I have a 3-node Harvester cluster with only 3-4 VMs running. One day one host automatically cordoned itself, and the VMs on it migrated to the other hosts. In the Harvester UI everything on that server shows healthy, but it's stuck in the cordoned state.

Looking at the physical host, one disk had faulted. I have since replaced the bad disk, and all physical disks in the cluster are now healthy. After this I figured the host would notice and become schedulable again, but that's not the case: nothing I do will get this host out of the "cordoned" state.
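In case it helps anyone answering: my understanding is that a Harvester host is a Kubernetes node under the hood, so if I could get shell or kubeconfig access I'd expect something like the following to clear the cordon manually (the node name below is just a placeholder for my actual host):

```shell
# List nodes and their scheduling status; a cordoned node
# shows "SchedulingDisabled" in the STATUS column
kubectl get nodes

# Mark the node schedulable again (replace with the real node name)
kubectl uncordon harvester-node-1
```

But since I can't SSH in, I haven't been able to try this, and I don't know whether Harvester would just re-cordon the node anyway if it still thinks something is wrong.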
Also, I cannot SSH into any of the nodes, even with the default rancher/rancher credentials.
Any help on either of these issues is greatly appreciated.