Changing Cluster CIDR in RKE1

Hi Team,

I have running RKE1 and RKE2 clusters with the default CIDR 10.42.0.0/16. After a recent migration, the application servers have been assigned 10.42.x.x addresses, which now conflict with pod-to-pod communication. The only option left is to change the cluster CIDR without any data loss or disruption to user workloads.

Please suggest a recommended approach for this.

Thanks in advance.

Changing the cluster CIDR after an RKE1 cluster has been created is not possible; the CIDR is fixed at provisioning time, so the usual path is to migrate workloads to a new cluster created with a non-conflicting range.
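For context, these ranges are set in RKE1's cluster.yml and are only read when the cluster is provisioned. A minimal sketch of the relevant fields for a replacement cluster, assuming the 10.200.x/10.201.x ranges are free in your network (the values here are illustrative, not a recommendation):

```yaml
# cluster.yml (RKE1): CIDRs take effect at provisioning and cannot be
# changed on a running cluster.
services:
  kube-api:
    service_cluster_ip_range: 10.201.0.0/16   # illustrative; must not overlap 10.42.x.x
  kube-controller:
    cluster_cidr: 10.200.0.0/16               # illustrative pod CIDR
    service_cluster_ip_range: 10.201.0.0/16   # keep in sync with kube-api
```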

Hi!

I’m trying to deploy a cluster using AutoK3s:
autok3s create -d \
  --provider native \
  --name myk3s \
  --ssh-user ubuntu \
  --ssh-key-path ~/aws.pem \
  --master-ips 172.31.58.141,172.31.55.44,172.31.58.84 \
  --cluster \
  --master-extra-args '--disable=traefik --disable=servicelb --disable-kube-proxy --flannel-backend=none --disable-network-policy --cluster-cidr=10.100.0.0/18 --service-cidr=10.100.64.0/18'

However, I encountered the following error:
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details.
Could you please help me figure out how to fix it?
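One thing worth ruling out first: commands copied from a web page often have `--` silently converted to an en dash, which k3s rejects as an unknown flag and which makes k3s.service fail exactly like this (the journal from `journalctl -u k3s` would name the offending flag). A small sketch that scans an argument string for non-ASCII bytes; the `check_ascii` helper and the sample strings are illustrative:

```shell
#!/bin/sh
# check_ascii: return non-zero if the given string contains any byte
# outside printable ASCII (e.g. an en dash pasted in place of "--").
check_ascii() {
  printf '%s' "$1" | LC_ALL=C grep -q '[^ -~]' && return 1
  return 0
}

# A clean flag string passes; one with an en dash does not.
check_ascii '--cluster-cidr=10.100.0.0/18' && echo "clean"
check_ascii '–cluster-cidr=10.100.0.0/18' || echo "contains non-ASCII"
```

If the flags are clean, the next suspects are the extra args themselves: with `--flannel-backend=none` and `--disable-network-policy` a separate CNI must be installed afterwards, and the service log will show why startup aborted.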

Thanks a lot!