Hi,
I'm running Rancher v2.10.0. In a downstream RKE2 cluster I registered a new custom node with the worker role only, and in Cluster Management the machine is stuck at [Machine: custom-7b034c1ff937 Waiting for Node Ref].
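As I understand it, "Waiting for Node Ref" means the CAPI Machine object in the local (Rancher) cluster never gets its status.nodeRef populated with a matching downstream Node, so it stays in the Provisioning phase. I checked it roughly like this from the local cluster (namespace and names are from my setup):

kubectl get machines.cluster.x-k8s.io -n fleet-default
kubectl get machines.cluster.x-k8s.io custom-7b034c1ff937 -n fleet-default \
  -o jsonpath='{.status.phase}{"\n"}{.status.nodeRef}{"\n"}'

The nodeRef comes back empty and the phase stays Provisioning (full Machine YAML at the end of this post).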
Nodes from kubectl:
(⎈|dev1a:N/A)[srikacha@iaadev-cluster]~% k get nodes
NAME               STATUS   ROLES                              AGE    VERSION
kbc-001-adff10b3   Ready    control-plane,etcd,master,worker   44d    v1.31.3+rke2r1
kbc-002-adff10b3   Ready    control-plane,etcd,master,worker   44d    v1.31.3+rke2r1
kbc-003-adff10b3   Ready    control-plane,etcd,master,worker   44d    v1.31.3+rke2r1
kbw-001-ebfa2f8f   Ready    worker                             44d    v1.31.3+rke2r1
kbw-002-ebfa2f8f   Ready    worker                             44d    v1.31.3+rke2r1
kbw-003-ebfa2f8f   Ready    worker                             44d    v1.31.3+rke2r1
kbw-004-ebfa2f8f   Ready    worker                             44d    v1.31.3+rke2r1
kbw-005-ebfa2f8f   Ready    worker                             44d    v1.31.3+rke2r1
kbw-006-ebfa2f8f   Ready    worker                             44d    v1.31.3+rke2r1
kbw-007-ebfa2f8f   Ready    <none>                             100m   v1.31.3+rke2r1
kbw-008-ebfa2f8f   Ready    <none>                             100m   v1.31.3+rke2r1
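Note that both new nodes show ROLES <none>. The ROLES column just reflects the node-role.kubernetes.io/* labels, so unlike the older workers, kbw-007/kbw-008 never received the worker role label. Quick way to confirm:

kubectl get node kbw-007-ebfa2f8f --show-labels | tr ',' '\n' | grep node-role

This returns nothing for the new nodes.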
Log from rke2-agent:
Mar 12 15:17:20 kbw-007-ebfa2f8f rke2[1097]: time="2025-03-12T15:17:20Z" level=info msg="Annotations and labels have already set on node: kbw-007-ebfa2f8f"
Mar 12 15:17:20 kbw-007-ebfa2f8f rke2[1097]: time="2025-03-12T15:17:20Z" level=info msg="rke2 agent is up and running"
Mar 12 15:17:20 kbw-007-ebfa2f8f systemd[1]: Started Rancher Kubernetes Engine v2 (agent).
░░ Subject: A start job for unit rke2-agent.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://wiki.almalinux.org/Help-and-Support
░░
░░ A start job for unit rke2-agent.service has finished successfully.
░░
░░ The job identifier is 205.
Mar 12 15:17:20 kbw-007-ebfa2f8f rke2[1097]: time="2025-03-12T15:17:20Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=kbw-007-ebfa2f8f --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Mar 12 15:17:25 kbw-007-ebfa2f8f rke2[1097]: time="2025-03-12T15:17:25Z" level=info msg="Tunnel authorizer set Kubelet Port 0.0.0.0:10250"
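That excerpt is from the new node, collected with:

journalctl -xeu rke2-agent --no-pager

The unit starts cleanly, and nothing after this point looks like an error to me.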
Logs from deployment.apps/cattle-cluster-agent:
time="2025-03-12T15:18:21Z" level=info msg="Watching metadata for k3s.cattle.io/v1, Kind=Addon"
time="2025-03-12T15:18:21Z" level=info msg="Watching metadata for external-secrets.io/v1beta1, Kind=SecretStore"
time="2025-03-12T15:18:21Z" level=info msg="Watching metadata for druid.apache.org/v1alpha1, Kind=Druid"
time="2025-03-12T15:18:21Z" level=info msg="Watching metadata for k8s.streamnative.io/v1alpha1, Kind=OxiaCluster"
time="2025-03-12T15:18:21Z" level=info msg="Watching metadata for crd.projectcalico.org/v1, Kind=IPReservation"
W0312 15:18:21.283795 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
W0312 15:18:21.291059 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
time="2025-03-12T15:18:21Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2025-03-12T15:18:21Z" level=info msg="Updating TLS secret for cattle-system/serving-cert (count: 14): map[field.cattle.io/projectId:c-m-sjxbsl5x:p-jxqdc listener.cattle.io/cn-10.42.0.16:10.42.0.16 listener.cattle.io/cn-10.42.0.21:10.42.0.21 listener.cattle.io/cn-10.42.0.22:10.42.0.22 listener.cattle.io/cn-10.42.0.23:10.42.0.23 listener.cattle.io/cn-10.42.0.246:10.42.0.246 listener.cattle.io/cn-10.42.1.12:10.42.1.12 listener.cattle.io/cn-10.42.10.4:10.42.10.4 listener.cattle.io/cn-10.42.2.155:10.42.2.155 listener.cattle.io/cn-10.42.3.3:10.42.3.3 listener.cattle.io/cn-10.42.3.4:10.42.3.4 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-rancher.cattle-system:rancher.cattle-system listener.cattle.io/fingerprint:SHA1=3C9C9946CF9F6B0552373E5334BA16E55672ADD5]"
time="2025-03-12T15:18:21Z" level=info msg="Active TLS secret cattle-system/serving-cert (ver=66021066) (count 14): map[field.cattle.io/projectId:c-m-sjxbsl5x:p-jxqdc listener.cattle.io/cn-10.42.0.16:10.42.0.16 listener.cattle.io/cn-10.42.0.21:10.42.0.21 listener.cattle.io/cn-10.42.0.22:10.42.0.22 listener.cattle.io/cn-10.42.0.23:10.42.0.23 listener.cattle.io/cn-10.42.0.246:10.42.0.246 listener.cattle.io/cn-10.42.1.12:10.42.1.12 listener.cattle.io/cn-10.42.10.4:10.42.10.4 listener.cattle.io/cn-10.42.2.155:10.42.2.155 listener.cattle.io/cn-10.42.3.3:10.42.3.3 listener.cattle.io/cn-10.42.3.4:10.42.3.4 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-rancher.cattle-system:rancher.cattle-system listener.cattle.io/fingerprint:SHA1=3C9C9946CF9F6B0552373E5334BA16E55672ADD5]"
time="2025-03-12T15:18:27Z" level=info msg="[systemcharts] feature ManagedSystemUpgradeController = false, toEnable = true"
time="2025-03-12T15:18:27Z" level=info msg="[systemcharts] feature ManagedSystemUpgradeController = false, toEnable = true"
W0312 15:25:42.288907 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
W0312 15:33:25.289797 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
W0312 15:39:09.291181 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
W0312 15:45:41.292374 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
W0312 15:51:58.293880 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
W0312 16:01:42.295951 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
W0312 16:09:34.297762 39 warnings.go:70] v1beta2 Provider is deprecated, upgrade to v1beta3
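For completeness, those were pulled from the downstream cluster with:

kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=100

Apart from the recurring v1beta2 Provider deprecation warnings, I don't see the agent logging anything about the new machines.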
Node YAML:
k get node kbw-007-ebfa2f8f -oyaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    alpha.kubernetes.io/provided-node-ip: 10.0.0.125
    csi.volume.kubernetes.io/nodeid: '{"csi.trident.netapp.io":"kbw-007-2f8febfa.novalocal"}'
    flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"32:fa:c7:08:9a:d5"}'
    flannel.alpha.coreos.com/backend-type: vxlan
    flannel.alpha.coreos.com/kube-subnet-manager: "true"
    flannel.alpha.coreos.com/public-ip: 10.0.0.125
    management.cattle.io/pod-limits: '{"cpu":"1800m","memory":"2804Mi"}'
    management.cattle.io/pod-requests: '{"cpu":"1005m","memory":"1102Mi","pods":"9"}'
    node.alpha.kubernetes.io/ttl: "0"
    rke2.io/hostname: kbw-007-2f8febfa.novalocal
    rke2.io/internal-ip: 10.0.0.125
    rke2.io/node-args: '["agent","--node-label","cattle.io/os=linux","--node-label","infra.internal.example.com/node_group=worker","--node-label","rke.cattle.io/machine=9ecec270-662b-4c28-9824-a09ec3b639d0","--private-registry","/etc/rancher/rke2/registries.yaml","--server","https://10.0.0.80:9345","--token","********"]'
    rke2.io/node-config-hash: 2JL26JMMHA2BXIRMMHAWPUKKZRJVCEB7WMDNM2Y57P6SKFRPFQAA====
    rke2.io/node-env: '{"RKE2_DATA_DIR":"/var/lib/rancher/rke2"}'
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2025-03-12T14:23:17Z"
  finalizers:
  - wrangler.cattle.io/managed-etcd-controller
  - wrangler.cattle.io/node
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: rke2
    beta.kubernetes.io/os: linux
    cattle.io/os: linux
    infra.internal.example.com/node_group: worker
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: kbw-007-2f8febfa.novalocal
    kubernetes.io/os: linux
    node.kubernetes.io/instance-type: rke2
    plan.upgrade.cattle.io/system-agent-upgrader: 5d9d94769ab494551ff15f6edab83e362b34ab1b365cb3df3b131b48
    rke.cattle.io/machine: 9ecec270-662b-4c28-9824-a09ec3b639d0
  name: kbw-007-2f8febfa.novalocal
  resourceVersion: "66123899"
  uid: 6332ccc0-ceca-400b-b9d2-7d68a0696dd0
spec:
  podCIDR: 10.42.10.0/24
  podCIDRs:
  - 10.42.10.0/24
  providerID: rke2://kbw-007-2f8febfa.novalocal
status:
  addresses:
  - address: 10.0.0.125
    type: InternalIP
  - address: kbw-007-2f8febfa.novalocal
    type: Hostname
  allocatable:
    cpu: "20"
    ephemeral-storage: "48347984859"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 65300928Ki
    pods: "110"
  capacity:
    cpu: "20"
    ephemeral-storage: 49699820Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 65300928Ki
    pods: "110"
  conditions:
  - lastHeartbeatTime: "2025-03-12T15:17:38Z"
    lastTransitionTime: "2025-03-12T15:17:38Z"
    message: Flannel is running on this node
    reason: FlannelIsUp
    status: "False"
    type: NetworkUnavailable
  - lastHeartbeatTime: "2025-03-12T16:55:21Z"
    lastTransitionTime: "2025-03-12T15:17:24Z"
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: "2025-03-12T16:55:21Z"
    lastTransitionTime: "2025-03-12T15:17:24Z"
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: "2025-03-12T16:55:21Z"
    lastTransitionTime: "2025-03-12T15:17:24Z"
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: "2025-03-12T16:55:21Z"
    lastTransitionTime: "2025-03-12T15:17:24Z"
    message: kubelet is posting ready status
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - docker.io/rancher/nginx-ingress-controller:v1.10.4-hardened3
    sizeBytes: 725779238
  - names:
    - docker.io/rancher/mirrored-cilium-cilium:v1.16.1
    sizeBytes: 622824136
  - names:
    - docker.io/rancher/rancher-agent@sha256:b39ec39f78d63e0deead05f56feed50a04d431464904afbbca500fca1d67e11a
    - docker.io/rancher/rancher-agent:v2.10.0
    sizeBytes: 596500857
  - names:
    - docker.io/rancher/hardened-kubernetes:v1.30.5-rke2r1-build20240912
    sizeBytes: 576054634
  - names:
    - docker.io/rancher/hardened-calico:v3.28.1-build20240911
    sizeBytes: 575508054
  - names:
    - docker.io/rancher/rke2-runtime:v1.30.5-rke2r1
    sizeBytes: 328389947
  - names:
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/releases/3rdparty/splunk-otel-collector@sha256:3f206aa5722cc8f6301e348518fc454404b598f821e76eae05e296f899c9feb6
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/releases/3rdparty/splunk-otel-collector:0.105.0
    sizeBytes: 303475887
  - names:
    - docker.io/rancher/nginx-ingress-controller@sha256:5c6a29000084db28d4f7f073b8ab4312080e1f1ad662e3aad0edee998e6a00ee
    - docker.io/rancher/nginx-ingress-controller:v1.10.5-hardened4
    sizeBytes: 297175306
  - names:
    - docker.io/rancher/hardened-flannel:v0.25.6-build20240910
    sizeBytes: 223987240
  - names:
    - docker.io/rancher/hardened-calico@sha256:56925b33e377071d4826a199ed155befce69cfe655d8dd1cb733244f30ce0576
    - docker.io/rancher/hardened-calico:v3.29.0-build20241104
    sizeBytes: 208578086
  - names:
    - docker.io/rancher/klipper-helm:v0.9.2-build20240828
    sizeBytes: 198005642
  - names:
    - docker.io/rancher/mirrored-cilium-cilium-envoy:v1.29.7-39a2a56bbd5b3a591f69dbca51d3e30ef97e0e51
    sizeBytes: 165228247
  - names:
    - docker.io/rancher/hardened-kubernetes@sha256:f0d0a4f763fc2c41509d705786b0c88c653b4881e1dccc045f99093296ba1525
    - docker.io/rancher/hardened-kubernetes:v1.31.3-rke2r1-build20241121
    sizeBytes: 162106707
  - names:
    - docker.io/rancher/mirrored-cilium-operator-aws:v1.16.1
    sizeBytes: 128673385
  - names:
    - docker.io/rancher/mirrored-cilium-clustermesh-apiserver:v1.16.1
    sizeBytes: 110762054
  - names:
    - docker.io/rancher/mirrored-cilium-operator-azure:v1.16.1
    sizeBytes: 108033644
  - names:
    - docker.io/rancher/mirrored-cilium-operator-generic:v1.16.1
    sizeBytes: 104679030
  - names:
    - docker.io/rancher/hardened-cni-plugins:v1.5.1-build20240910
    sizeBytes: 97963178
  - names:
    - docker.io/rancher/mirrored-cilium-hubble-relay:v1.16.1
    sizeBytes: 90139437
  - names:
    - docker.io/rancher/hardened-flannel@sha256:8aaa85c1291afccd5a42d20733609c1ab8f58f5aff6bc3b28cbb8a3bcdc8245d
    - docker.io/rancher/hardened-flannel:v0.26.0-build20241024
    sizeBytes: 81091277
  - names:
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/releases/3rdparty/frrouting/frr@sha256:e7aef15009b0bec4c3ec16a64d3c7a6d2780b4990e5a4dba0a2545cc033ec93d
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/releases/3rdparty/frrouting/frr:9.1.0
    sizeBytes: 80291902
  - names:
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/releases/3rdparty/telegraf@sha256:b67f96e018ee71d05bdaac622bab43d5f004cc2307e0c6a0c81b35ab99d7a4d5
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/releases/3rdparty/telegraf:1.32-alpine
    sizeBytes: 76471113
  - names:
    - docker.io/rancher/hardened-coredns:v1.11.1-build20240910
    sizeBytes: 74104488
  - names:
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/3rdparty/netapp/trident@sha256:bf06e0342dc528151ccf49dcb630ef5be72629541c251b5de97da7e4e7ff4eac
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/3rdparty/netapp/trident:24.10.0
    sizeBytes: 69643800
  - names:
    - docker.io/rancher/rke2-cloud-provider:v1.30.4-build20240910
    sizeBytes: 66208317
  - names:
    - docker.io/rancher/mirrored-cilium-hubble-ui-backend:v0.13.1
    sizeBytes: 66116374
  - names:
    - docker.io/rancher/system-agent@sha256:865e14e9f6f8e2395af80a6ef4fd1d2cf7cad0104b797180e7c6da261c8b69fe
    - docker.io/rancher/system-agent:v0.3.11-suc
    sizeBytes: 63455331
  - names:
    - docker.io/rancher/hardened-k8s-metrics-server:v0.7.1-build20240910
    sizeBytes: 58887178
  - names:
    - docker.io/rancher/mirrored-sig-storage-snapshot-controller:v6.2.1
    sizeBytes: 58390267
  - names:
    - docker.io/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v1.4.1
    sizeBytes: 56758085
  - names:
    - docker.io/rancher/mirrored-cilium-certgen:v0.2.0
    sizeBytes: 54307082
  - names:
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/releases/3rdparty/metallb/speaker@sha256:1d7aa970c58c45380090c78124de5e003795a35407b23dc7845ce678ecd16778
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/releases/3rdparty/metallb/speaker:v0.14.8
    sizeBytes: 53142585
  - names:
    - docker.io/rancher/hardened-dns-node-cache:1.23.1-build20240910
    sizeBytes: 53012000
  - names:
    - docker.io/rancher/mirrored-sig-storage-snapshot-validation-webhook:v6.2.2
    sizeBytes: 48959246
  - names:
    - docker.io/rancher/hardened-etcd:v3.5.13-k3s1-build20240910
    sizeBytes: 45933192
  - names:
    - docker.io/rancher/hardened-cluster-autoscaler:v1.8.11-build20240910
    sizeBytes: 38323790
  - names:
    - docker.io/rancher/hardened-addon-resizer:1.8.20-build20240910
    sizeBytes: 32574468
  - names:
    - docker.io/rancher/mirrored-cilium-hubble-ui:v0.13.1
    sizeBytes: 31448478
  - names:
    - quay.io/brancz/kube-rbac-proxy@sha256:2c7b120590cbe9f634f5099f2cbb91d0b668569023a81505ca124a5c437e7663
    - quay.io/brancz/kube-rbac-proxy:v0.15.0
    sizeBytes: 25017413
  - names:
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/3rdparty/opentelemetry-operator/opentelemetry-operator@sha256:7a7d4f172ecc24db1884de59a0aa6fd6210338099967e26d35bf70f469691011
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/3rdparty/opentelemetry-operator/opentelemetry-operator:0.95.0
    sizeBytes: 23353816
  - names:
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/3rdparty/netapp/csi-node-driver-registrar@sha256:fe2f9f85a363f9ac9b9de655062f7283bb70d91978c952462c8557d90ffefbbe
    - qa-oci.internal.example.com/cc-internal-proxy/smi-fuse-docker-internal/3rdparty/netapp/csi-node-driver-registrar:v2.12.0
    sizeBytes: 14035564
  - names:
    - docker.io/rancher/klipper-lb:v0.4.9
    sizeBytes: 12651250
  - names:
    - docker.io/rancher/mirrored-pause:3.6
    sizeBytes: 685844
  nodeInfo:
    architecture: amd64
    bootID: 4aab0b38-5506-4748-8367-9b132d91680d
    containerRuntimeVersion: containerd://1.7.23-k3s2
    kernelVersion: 5.14.0-427.40.1.el9_4.x86_64
    kubeProxyVersion: v1.31.3+rke2r1
    kubeletVersion: v1.31.3+rke2r1
    machineID: e5d553b7db6d4bf090e18bd73bb5390c
    operatingSystem: linux
    osImage: AlmaLinux 9.4 (Seafoam Ocelot)
    systemUUID: 6b60f546-2afd-41ce-ab4a-2664f6619ec6
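One thing I noticed in the node object: the kubelet registered with spec.providerID rke2://kbw-007-2f8febfa.novalocal, and as far as I know the CAPI machine controller only sets nodeRef once the Machine itself carries the same spec.providerID (copied up from its infrastructure object). Checking on the local cluster:

kubectl get machines.cluster.x-k8s.io custom-7b034c1ff937 -n fleet-default \
  -o jsonpath='{.spec.providerID}{"\n"}'
kubectl get custommachines.rke.cattle.io custom-7b034c1ff937 -n fleet-default -o yaml

spec.providerID comes back empty, which matches the Machine YAML below.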
Machine YAML:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  annotations:
    objectset.rio.cattle.io/applied: >-
      H4sIAAAAAAAA/...
    objectset.rio.cattle.io/id: unmanaged-machine
    objectset.rio.cattle.io/owner-gvk: /v1, Kind=Secret
    objectset.rio.cattle.io/owner-name: custom-7b034c1ff937
    objectset.rio.cattle.io/owner-namespace: c-m-sjxbsl5x
  creationTimestamp: '2025-03-12T14:21:19Z'
  finalizers:
  - machine.cluster.x-k8s.io
  generation: 2
  labels:
    cattle.io/os: linux
    cluster.x-k8s.io/cluster-name: dev1a
    iaa.iotcc.cisco.com/node_group: worker
    objectset.rio.cattle.io/hash: c7863dd2baa8be1f86680836286dba4ca05ffa67
    rke.cattle.io/cluster-name: dev1a
    rke.cattle.io/machine-id: f5858fba110a72...
    rke.cattle.io/worker-role: 'true'
  managedFields:
  - apiVersion: cluster.x-k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:objectset.rio.cattle.io/applied: {}
          f:objectset.rio.cattle.io/id: {}
          f:objectset.rio.cattle.io/owner-gvk: {}
          f:objectset.rio.cattle.io/owner-name: {}
          f:objectset.rio.cattle.io/owner-namespace: {}
        f:labels:
          .: {}
          f:cattle.io/os: {}
          f:cluster.x-k8s.io/cluster-name: {}
          f:iaa.iotcc.cisco.com/node_group: {}
          f:objectset.rio.cattle.io/hash: {}
          f:rke.cattle.io/cluster-name: {}
          f:rke.cattle.io/machine-id: {}
          f:rke.cattle.io/worker-role: {}
      f:spec:
        .: {}
        f:bootstrap:
          .: {}
          f:configRef: {}
        f:clusterName: {}
        f:infrastructureRef: {}
    manager: rancher
    operation: Update
    time: '2025-03-12T14:21:19Z'
  - apiVersion: cluster.x-k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"machine.cluster.x-k8s.io": {}
        f:ownerReferences:
          .: {}
          k:{"uid":"f0caece5-ecba-4da9-8abe-fc20c804e090"}: {}
      f:spec:
        f:bootstrap:
          f:dataSecretName: {}
    manager: manager
    operation: Update
    time: '2025-03-12T14:21:20Z'
  - apiVersion: cluster.x-k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:bootstrapReady: {}
        f:lastUpdated: {}
        f:observedGeneration: {}
        f:phase: {}
    manager: manager
    operation: Update
    subresource: status
    time: '2025-03-12T14:21:20Z'
  - apiVersion: cluster.x-k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions: {}
        f:infrastructureReady: {}
    manager: rancher
    operation: Update
    subresource: status
    time: '2025-03-12T14:23:15Z'
  name: custom-7b034c1ff937
  namespace: fleet-default
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    name: dev1a
    uid: f0caece5-ecba-4da9-8abe-fc20c804e090
  resourceVersion: '384741610'
  uid: 6e4de58c-3603-4e5b-aa88-180c459c2e35
spec:
  bootstrap:
    configRef:
      apiVersion: rke.cattle.io/v1
      kind: RKEBootstrap
      name: custom-7b034c1ff937
      namespace: fleet-default
    dataSecretName: custom-7b034c1ff937-machine-bootstrap
  clusterName: dev1a
  infrastructureRef:
    apiVersion: rke.cattle.io/v1
    kind: CustomMachine
    name: custom-7b034c1ff937
    namespace: fleet-default
  nodeDeletionTimeout: 10s
status:
  bootstrapReady: true
  conditions:
  - lastTransitionTime: '2025-03-12T14:21:20Z'
    status: 'True'
    type: Ready
  - lastTransitionTime: '2025-03-12T14:21:20Z'
    status: 'True'
    type: BootstrapReady
  - lastTransitionTime: '2025-03-12T14:21:19Z'
    status: 'True'
    type: InfrastructureReady
  - lastTransitionTime: '2025-03-12T14:21:19Z'
    reason: WaitingForNodeRef
    severity: Info
    status: 'False'
    type: NodeHealthy
  - lastTransitionTime: '2025-03-12T14:21:19Z'
    status: 'True'
    type: PlanApplied
  - lastTransitionTime: '2025-03-12T14:23:15Z'
    status: 'True'
    type: Reconciled
  infrastructureReady: false
  lastUpdated: '2025-03-12T14:21:20Z'
  observedGeneration: 2
  phase: Provisioning
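One more discrepancy, in case it matters: judging by the --node-label argument in rke2.io/node-args above, Rancher links the downstream Node to the Machine through the rke.cattle.io/machine label, and here the label value does not match the Machine's UID (node queried on the downstream cluster, Machine on the local one):

kubectl get node kbw-007-2f8febfa.novalocal \
  -o jsonpath="{.metadata.labels['rke\.cattle\.io/machine']}"
# -> 9ecec270-662b-4c28-9824-a09ec3b639d0
kubectl get machines.cluster.x-k8s.io custom-7b034c1ff937 -n fleet-default \
  -o jsonpath='{.metadata.uid}'
# -> 6e4de58c-3603-4e5b-aa88-180c459c2e35

I don't know whether that label is actually supposed to hold the Machine UID (that's my assumption), but the mismatch stood out.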
Can someone help with how to troubleshoot this? Any pointers on why the nodeRef is never set for this machine would be appreciated.