# Release v2.14.0
> [!IMPORTANT]
> If you are using Active Directory Federation Service (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues that require manual intervention. These issues occur because the AD FS Relying Party Trust is unable to pick up a signature verification certificate from the metadata. For more information, see #48655. These issues can be corrected by either of the following methods:
>
> - Updating the Relying Party Trust information from federation metadata (Relying Party Trust → Update from Federation Metadata…).
> - Directly adding the certificate (Relying Party Trust → Properties → Signature tab → Add → Select the certificate).
Rancher v2.14.0 is the latest minor release of Rancher. This is a Community version release that introduces new features, enhancements, and various updates.
## Rancher General

### Features and Enhancements

- Rancher now supports Kubernetes v1.35. See #52957 for information on Rancher support for Kubernetes v1.35. You can view the upstream Kubernetes changelog for v1.35 for a complete list of changes.
- Support for all Kubernetes resource types and quotas. Rancher now supports the use of arbitrary resource references and their quotas. **Important:** This support does not cover the resources in the `ext.cattle.io` group; see #52602. For more details, see #52403.
- To simplify the Rancher Helm chart, support and version-checking code for End-of-Life (EOL) and unsupported versions of cert-manager have been removed. The Rancher chart now only supports cert-manager versions that are compatible with the Kubernetes versions supported by Rancher. See #52922.
- Rancher v2.14 adds support for switching from Ingress to Gateway API when installing the Rancher Helm chart. Note that when using the Gateway API network exposure type, Rancher can create and manage its own Gateway resource. However, if you have an existing Gateway that you manage independently (for example, a shared Gateway used by multiple applications), you will need to create your own HTTPRoute resources to route traffic to Rancher. For more information, refer to Using an External Gateway with Rancher. See #52796.
- Rancher v2.14 now allows users to add custom labels and annotations to Rancher namespaces. At install time, users can provide their custom labels and annotations to the Rancher Helm chart. These labels and annotations are then applied to namespaces as they are created. This applies to both management and downstream clusters and helps identify namespaces necessary for Rancher to operate properly. See #53268.
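For the external-Gateway scenario above, the route you create is a standard Gateway API HTTPRoute. A minimal sketch, assuming an independently managed Gateway named `shared-gateway` in an `infra` namespace and Rancher's Service named `rancher` in `cattle-system` (the Gateway name, namespace, and hostname are illustrative, not taken from the Rancher chart):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: rancher
  namespace: cattle-system
spec:
  # Attach the route to the existing, independently managed Gateway.
  parentRefs:
    - name: shared-gateway        # assumed Gateway name
      namespace: infra            # assumed Gateway namespace
  hostnames:
    - rancher.example.com         # your Rancher hostname
  rules:
    - backendRefs:
        - name: rancher           # Rancher's Service in cattle-system
          port: 80
```

Verify the Service name and port against your installation before applying.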
### Behavior Changes

- Rancher v2.14 removes support for Kubernetes v1.32. See #53764.
- The Rancher Ingress resource no longer deploys `ingress-nginx`-specific annotations (`nginx.ingress.kubernetes.io/proxy-connect-timeout`, `nginx.ingress.kubernetes.io/proxy-read-timeout`, and `nginx.ingress.kubernetes.io/proxy-send-timeout`) by default. These annotations are no longer required for performance, and removing them improves compatibility with other ingress controllers following the announcement of the `ingress-nginx` retirement. See #53272.
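If you still run `ingress-nginx` and want the previous timeout behavior back, you can re-add the annotations yourself at install time. A sketch using the Rancher chart's `ingress.extraAnnotations` value (assuming your chart version exposes this value; the timeout values shown are illustrative, not the chart's former defaults):

```yaml
# values.yaml fragment for the Rancher Helm chart
ingress:
  extraAnnotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
```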
### Major Bug Fixes

- Fixed an issue where upgrading Rancher from v2.13.2 to v2.14 caused Rancher pods to enter a `CrashLoopBackOff` state and fail to start. See #53854.
- Support for custom claim mapping in Keycloak OIDC. The custom claim mechanism for Keycloak OIDC now works as expected. For custom names, the display name comes from the `name` claim if populated. The preference order for display names is:
  1. The `name` claim or custom name claim.
  2. The `email` claim or custom email claim.

  For groups, this may lead to additional permissions for newly detected group claims. See #52405.
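The display-name preference order above can be sketched as a small resolution function. This is an illustrative model, not Rancher's actual implementation; the configurable claim names mirror Keycloak OIDC custom claim mapping:

```python
def resolve_display_name(claims, name_claim="name", email_claim="email"):
    """Pick a display name using the documented preference order:
    the (possibly custom) name claim first, then the email claim.

    `claims` is the decoded token's claim dictionary; `name_claim` and
    `email_claim` stand in for custom claim mappings.
    """
    for claim in (name_claim, email_claim):
        value = claims.get(claim)
        if value:  # skip missing or empty claims
            return value
    return None
```

With a custom mapping such as `name_claim="preferred_username"`, the custom claim takes the place of `name` in the same preference order.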
## Rancher App (Global UI)

### Features and Enhancements

- In the Azure Cluster Provisioning form, the Use managed disks option for the Azure node driver now defaults to `true`. This change aligns with Azure's guidance around the deprecation of unmanaged disks. See #16184 and the related Azure documentation.
- Rancher now supports managing audit policies in the UI, under Policy > Audit Log Policies. See #12961.
### Major Bug Fixes

- Fixed an issue where the legacy `/v3` API-UI page failed to load, resulting in a blank page and console errors when users attempted to access it via the host URL `<RANCHER-SERVER-URL>/v3`. See #52911.
- Lists showing Custom Resources with Age columns of type Date are now handled correctly, similarly to other Age columns. See #15711.
- Fixed an issue where, when a standard user with the Cluster Owner role attempted to edit an Azure or AKS cluster, the Machine Pools section showed the error `Cannot read properties of undefined…`. See #15241.
## Rancher CLI

### Behavior Changes

- The Rancher CLI `token` command now supports authorization code flow with Microsoft Entra ID (formerly Azure AD) as an alternative to device code flow, which addresses environments where device code flow is blocked by security policies. Use `--auth-flow authcode` or set `CATTLE_OAUTH_AUTH_FLOW=authcode`. See #52404.
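Both selection mechanisms come from the note above; any other flags your environment needs (server URL, auth provider) are omitted here for brevity:

```shell
# Request a token using the authorization code flow instead of device code flow:
rancher token --auth-flow authcode

# Or select the flow via the environment variable:
export CATTLE_OAUTH_AUTH_FLOW=authcode
rancher token
```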
## Authentication

### Features and Enhancements

- Rancher now supports using access tokens issued for OAuth and OpenID Connect (OIDC) as standard Bearer tokens (per RFC 6750) to authenticate with the platform. Previously, only an ID token could be used to impersonate and authenticate a user. As part of this functionality, you can configure the allowed scopes for an OIDC provider to restrict the scopes that a client can request. If not configured, these default to `openid`, `profile`, and `offline_access`. See #52716.
- Rancher supports the Proof Key for Code Exchange (PKCE) extension (RFC 7636) for OIDC authentication providers. Enabling `S256` PKCE token verification allows you to mitigate authorization code interception attacks during OIDC authentication flows. See #50665 and #15637.
- A new feature flag is now available to disable the deprecated `/v3-public` API endpoints. This allows you to test UI changes and integrations exclusively against the newer `/v1-public` endpoints. See #52680.
- Rancher now includes Terraform resource support for the Amazon Cognito authentication provider. This allows you to programmatically manage the Amazon Cognito integration, such as enabling or disabling the authentication provider, through Infrastructure as Code (IaC) practices instead of configuring it manually via the UI or API. See #51058.
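With the RFC 6750 support described above, an OIDC access token can be presented directly in the `Authorization` header. A sketch, where the Rancher URL, endpoint, and token variable are illustrative:

```shell
# $ACCESS_TOKEN is an OAuth/OIDC access token issued by your configured
# provider, within the scopes allowed for that provider.
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
  https://rancher.example.com/v3/users
```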
### Behavior Changes

- Deprecation of `tokens.management.cattle.io`:
  - Rancher v2.13 introduced a new type of token resource in the `ext.cattle.io` API group to serve as Rancher's public API for tokens. The previous token resources in the `management.cattle.io` API group (now referred to as legacy tokens, Norman tokens, or v3 tokens, depending on the context) are being phased out.
  - Previously, the new `tokens.ext.cattle.io` resources were only accessible via `kubectl`. Starting in Rancher v2.14, the Rancher UI provides basic support for these tokens, allowing you to create, view, list, and delete them.
  - In future releases, all uses of `tokens.management.cattle.io` will incrementally transition to `tokens.ext.cattle.io`, eventually leading to the complete removal of support for `tokens.management.cattle.io`.
  - See #2220.
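The two token generations can be inspected side by side with `kubectl`, assuming a kubeconfig pointed at the Rancher management (local) cluster with sufficient permissions:

```shell
# New-style tokens (Rancher's public token API since v2.13):
kubectl get tokens.ext.cattle.io

# Legacy tokens being phased out:
kubectl get tokens.management.cattle.io
```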
### Major Bug Fixes

- Fixed an issue where the selection of pods in the Rancher UI (Workloads > Pods) would reset when the status of a pod changed. See #15847.
- Fixed an issue where users without sufficient permissions could see the Edit Config option for their user object in the Users & Authentication section. Attempting to use this option to edit values, such as a password, would result in a `Forbidden: permission denied` error. See #49436.
- Reduced the number of failed network requests when fetching resources shown in lists in the Rancher UI, by shortening the URL used in the request. This failure was more likely to happen when viewing resources in the local cluster as an administrator. See #16215.
- Fixed an issue where OIDC-based auth provider settings could be lost when upgrading from Rancher v2.12.x to v2.13.x or later. A cleanup process for unused OIDC secrets could partially overwrite the AuthConfig, leading to the loss of critical authentication settings like endpoints and client IDs. See #53995.
## Cluster Provisioning

### Features and Enhancements

- Rancher v2.14 adds support for Cluster API v1beta2 by upgrading to cluster-api v1.12.2 (from v1.10.6). Rolling back to Rancher v2.13.x, which supports Cluster API v1beta1, introduces compatibility issues and requires alternative rollback steps. Refer to the Rancher documentation for the required procedure, and to Upgrading to CAPI v1beta2 for more details on the changes introduced in this new API version. See #52034 and #53334.
- Rancher now supports the provisioning and management of dual-stack (IPv4/IPv6) networking for EKS clusters. See #53584.
### Behavior Changes

- Embedded Cluster API removed: The built-in Rancher Provisioning Cluster API functionality (`rancher-provisioning-capi`) has been removed in Rancher v2.14.0, as Rancher Turtles is now the default mechanism for deploying CAPI CRDs and controllers. This includes:
  - The `embedded-cluster-api` feature flag.
  - The `rancher-provisioning-capi` Helm chart.
  - Related webhooks and controllers.

  Migration path: The migration to Rancher Turtles happens automatically during the upgrade to v2.14.0. Rancher Turtles is now the only supported method for Cluster API (CAPI) integration with Rancher. Unless you are using a certified CAPI provider installed through Turtles, no manual action is required.

  However, if you had previously disabled Rancher Turtles, you will need to manually re-enable it. A warning during Rancher startup will let you know if Turtles is disabled and that cluster provisioning will not operate as expected.

  For more information on installing and using Rancher Turtles, refer to the Cluster API integration documentation. See #53291.
### Major Bug Fixes

- Fixed an issue where the `capi-controller-manager` image was not properly pulled and failed to come up as `Active` when installing Rancher in an air-gapped environment. See #52816.
- Fixed an issue where users were unable to provision or import downstream EKS clusters in an IPv6 Rancher environment. See #52154.
- Fixed an issue where users were unable to successfully provision downstream K3s custom or node-driver clusters in an IPv6-only environment. In these scenarios, the clusters would get stuck during the provisioning process and fail to reach an `Active` state. See #51990.
- Fixed an issue where the `rke2-server` service on etcd-only nodes would continuously crash with a "failed to get runtime classes" fatal error when provisioning an IPv6-only RKE2 custom cluster. This caused the etcd node to get stuck repeatedly flipping between the `Waiting for Node Ref` and `Reconciling` states. See #51851.
- Fixed an issue where the Cluster page showed the Registration tab when updating or upgrading a hosted cluster. See #8524.
## K3s and RKE2 Provisioning

### Features and Enhancements

- K3s and RKE2-provisioned clusters now support specifying different retention policies for S3 and local snapshots. See #53046.
- Technology preview: native CAPI infrastructure provider (for example, CAPA and CAPV) support for Rancher-provisioned RKE2 clusters. Rancher v2.14 can provision RKE2 clusters using native CAPI infrastructure providers, such as CAPA (Cluster API Provider AWS) and CAPV (Cluster API Provider vSphere), in addition to the previously supported method of provisioning using Rancher Node Drivers (`rancher/machine`). See #51561.

  This is an experimental feature made available as a technology preview for evaluation. It should not be used for production clusters, and some configuration fields are subject to change. Future versions of this feature may be incompatible with this version.

  This method is distinct from provisioning using Rancher Turtles, for example, through CAPA + CAPRKE2 or Kubeadm.

  Note that these clusters cannot yet be provisioned via the Rancher UI, and the cluster management UI is disabled for them (the UI for exploring cluster resources remains available).

  For examples of how to create clusters using this method with CAPA and CAPV, refer to the documentation.
### Known Issues

- After updating an RKE2 or K3s cluster to a supported Kubernetes version, the S3 snapshot retention defaults to `5`, ignoring any previously set `snapshotRetention` values. This is caused by a known issue in RKE2 and K3s, tracked in #13769, which is targeted for a fix in a future KDM update. As a workaround, Rancher v2.14.0 introduces the ability to explicitly specify `etcd-s3-retention` to manage S3 snapshot retention independently. See also #53046.
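As a sketch of the workaround, the new option can be supplied alongside other server arguments in the cluster spec. The placement under `machineGlobalConfig` is an assumption based on where other RKE2/K3s server arguments live; confirm the exact location against the Rancher documentation for your version:

```yaml
# provisioning.cattle.io/v1 Cluster (fragment, placement assumed)
spec:
  rkeConfig:
    machineGlobalConfig:
      etcd-s3-retention: 10   # keep 10 S3 snapshots, independent of local retention
```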
## Backup/Restore

### Behavior Changes

- When performing a rollback from Rancher v2.14 to Rancher v2.13 using the Backup/Restore Operator, the restore process requires all Rancher-related resources to be cleaned up on the upstream (local) cluster. For more information on the required rollback steps, refer to the Rancher documentation. See also the above release note on Cluster API v1beta2, introduced by the cluster-api v1.12.2 upgrade, and #916.
## Continuous Delivery (Fleet)

### Behavior Changes

- Fleet v0.15.0 has been migrated from Helm v3 to Helm v4. For more details, see #4351.
- The Fleet `imagescan` feature, which has been provided as experimental, is now disabled by default and will be deprecated in a future release. With this feature disabled, Git repository paths referenced by `GitRepo` resources will fail to deploy if they contain `imageScans` in their `fleet.yaml` or equivalent configuration files. If you still require this feature, you can explicitly enable it by setting `imagescan.enabled=true` when installing Fleet. For more information, refer to the Image Scan documentation and the guide on enabling imagescan. See #4671.
### Known Issues

- Under certain circumstances, drift correction in Fleet v0.15.0 may not work as expected with Helm v4. See the v0.15.0 changelog and #4878.
## Install/Upgrade Notes

If you're installing Rancher for the first time, your environment must fulfill the installation requirements.

> [!IMPORTANT]
> Rancher now requires the cluster it runs on to have the Kubernetes API Aggregation Layer enabled. This is because Rancher extends Kubernetes with additional APIs by registering its own extension API server. Note that all Kubernetes versions supported by this Rancher version's K8s distributions (RKE2/K3s) have the aggregation layer configured and enabled by default.
>
> - Refer to the Extension API Server documentation and #50400 for more information.

Rancher Kubernetes Engine (RKE/RKE1) reached end of life on July 31, 2025. Rancher versions 2.12.0 and later no longer support provisioning or managing downstream RKE1 clusters. We recommend replatforming RKE1 clusters to RKE2 to ensure continued support and security updates. Learn more about the transition here.

- Rancher now has a pre-upgrade validation check for RKE1 resources, which fails and lists the RKE1 resources if any are present. Refer to #50286 for more information.

It is crucial that you review the available disk space on your nodes and plan accordingly before upgrading to Rancher v2.12.0 or later, to avoid potential disk pressure and pod eviction issues.

- For additional information, refer to the UI Server Side Pagination - Disk Space documentation.

Rancher now has an enablement option called `AUDIT_LOG_ENABLED` for API Audit Logs for a Rancher installation.

- In Rancher versions 2.11.x and earlier, only the `AUDIT_LEVEL` could be set, and the default log level (0) would disable the audit log. In Rancher versions 2.12.x and later, the default log level (0) now only contains the request and response metadata, and can be set when configuring `AUDIT_LOG_ENABLED`. If installing or upgrading via Helm, you can enable the API Audit Logs and specify the log level by applying the following settings to your Helm command: `--set auditLog.enabled=true --set auditLog.level=0`. See the Enabling the API Audit Log to Record System Events documentation and #48941.
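Put together, an upgrade command enabling audit logging might look like the following. The repository alias, hostname, and any other values are illustrative; only the two `auditLog` settings come from the note above:

```shell
helm upgrade --install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.enabled=true \
  --set auditLog.level=0
```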
### Changes in Image Artifacts

Image artifact digests were renamed in Rancher v2.12.0, v2.11.4, and v2.10.8. Until this change, separate image digest files for each operating system and architecture were maintained for compatibility reasons. With this change, only one file per operating system is provided:

- The `rancher-images-digests-linux-amd64.txt` and `rancher-images-digests-linux-arm64.txt` files are renamed to `rancher-images-digests-linux.txt`.
- The `rancher-images-digests-windows-ltsc2019.txt` and `rancher-images-digests-windows-ltsc2022.txt` files are renamed to `rancher-images-digests-windows.txt`.
### Upgrade Requirements

- Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes made after the upgrade will not be included after the restore.
- Helm version requirements:
  - To manage Rancher 2.12.x and later, you must upgrade your Helm client to version 3.18 or newer.
  - This change is required to reflect the addition of Kubernetes 1.33 support with this release.
  - Currently, the official Helm Version Support Policy dictates that only Helm 3.18 supports the proper Kubernetes version range for Rancher 2.12.
- CNI requirements:
  - For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #28840.
  - When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0, which supports nf_tables. See Flannel #1317.
- Requirements for air-gapped environments:
  - When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to `NO_PROXY`. See the documentation and issue #2725.
  - When installing Rancher with Docker in an air-gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command, as shown in the K3s documentation. If the registry has certificates, you'll also need to supply those. See #28969.
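The `registries.yaml` supplied to the Docker install follows the K3s registry configuration format. A minimal sketch for a private mirror; the registry address and certificate path are illustrative:

```yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry.internal.example.com:5000"
configs:
  "registry.internal.example.com:5000":
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem   # CA for the registry's certificate
```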
## Versions

### Images

- rancher/rancher:v2.14.0

### Tools

- CLI - v2.14.0

### Kubernetes Versions for RKE2/K3s

- v1.35.2 (Default)
- v1.34.5
- v1.33.9
### Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This prevents simultaneous upstream changes and Rancher changes from causing conflicting version increments. It also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #32294.
## Previous Rancher Behavior Changes

### Previous Rancher Behavior Changes - Rancher General

- Rancher v2.13.0:
  - Official support for Kubernetes v1.31 and older versions has been removed. You can no longer provision new RKE2 or K3s clusters using Kubernetes versions that fall outside of the supported range (v1.32 - v1.34). See #51253.

### Previous Rancher Behavior Changes - Rancher App (Global UI)

- Rancher v2.13.0:
  - The Rancher UI for provisioning hosted Kubernetes clusters (AKS, EKS, and GKE) has been updated to align with the new Cluster Provisioning v2 (kev2) framework. This change replaces the reliance on the older kontainerdriver (kev1) resources to determine which hosted providers are available for display. The UI now uses a new setting to manage the visibility of these providers, ensuring consistency and future compatibility. See #15391.
  - Rancher's session inactivity logic has been moved from the UI to the backend. A new session TTL setting, `auth-user-session-idle-ttl-minutes`, was introduced; it sets the maximum time a user is allowed to be idle within a browser session before the session expires. To enable the idle timeout feature, you must supply `auth-user-session-idle-ttl-minutes` and set it to a value lower than the existing absolute session limit, `auth-user-session-ttl-minutes`. This new backend-driven mechanism, along with its associated TTL setting, replaces the previous session timeout configuration in the UI under Global Settings > Performance. See #12552.
## Future Rancher Behavior Changes

### Retention Policy for Rancher App Charts

To improve repository performance, Rancher is introducing a lifecycle management policy for charts available in the Apps feature of Rancher, specifically in the "Rancher" repository.

- The policy: Rancher will transition from a cumulative model (retaining all historical versions forever) to a retention model that preserves chart versions for the seven (7) most recent Rancher minor releases (approximately a 2.5-year window).
- Timeline:
  - Rancher v2.13 & v2.14: Legacy chart versions (older than the 7-version window) remain available.
  - Rancher v2.15: This will be the first version to enforce the policy. Versions falling outside the 7-version window and older than two years will no longer be available.
- Impact: This change is non-destructive for existing Rancher installations. Historical versions will remain accessible but will not be available in newer release branches once they age out of the 7-version window. You are advised to upgrade your applications before upgrading to Rancher v2.15. Uninstallation after v2.15, and replacement with an updated version, will still be possible.
## Long-standing Known Issues

### Long-standing Known Issues - Cluster Provisioning

- Not all cluster tools can be installed on a hardened cluster.
- Rancher v2.8.1:
  - When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message `[ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again.` As a workaround, you can unpause the cluster by running `kubectl edit clusters.cluster clustername -n fleet-default` and setting `spec.unpaused` to `false`. See #43735.
### Long-standing Known Issues - RKE2 Provisioning

- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Rancher v2.7.6:
### Long-standing Known Issues - K3s Provisioning

- Rancher v2.7.6:
- Rancher v2.7.2:
  - Clusters remain in an `Updating` state even when they contain nodes in an `Error` state. See #39164.
### Long-standing Known Issues - Rancher App (Global UI)

- Rancher v2.10.0:
  - After deleting a Namespace or Project in the Rancher UI, the Namespace or Project remains visible. As a workaround, refresh the page. See #12220.
- Rancher v2.9.2:
  - Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error, enter a minimum node count at least equal to the node count. See #11922.
- Rancher v2.7.7:
  - When creating a cluster, the Rancher UI does not allow the use of an underscore `_` in the Cluster Name field. See #9416.
### Long-standing Known Issues - EKS

- Rancher v2.7.0:
  - EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #39392.
### Long-standing Known Issues - Authentication

- Rancher v2.9.0:
  - There are some known issues with the OpenID Connect provider support:
    - When the generic OIDC auth provider is enabled and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior, as the OIDC auth provider alone is not searchable. See #46104.
    - When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster or project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #46105.
    - When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: `projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg"`. However, the project is still created. See #46106.
### Long-standing Known Issues - Rancher Webhook

- Rancher v2.7.2:
  - A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
    - If you roll back from Rancher v2.7.2 or later to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behavior downstream. The Rancher team has developed a script which should be used after the rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). It removes the webhook from affected downstream clusters. See #40816.
### Long-standing Known Issues - Virtualization Management (Harvester)

- Rancher v2.13.1:
  - When upgrading to Rancher v2.13.1 or higher while using Harvester v1.6.1, users may encounter an issue with their load balancers in downstream clusters using the Harvester Cloud Provider, and must perform a workaround to instruct Calico not to use any of the IPs/interfaces managed by `kube-vip`. See #9767.
- Rancher v2.7.2:
  - If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #3750.
### Long-standing Known Issues - Backup/Restore

- When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
- Rancher v2.13.0:
  - When performing a rollback from Rancher v2.13.0 to v2.12.3 using the backup and restore operator (BRO), the restore does not complete successfully. See #844. To work around this issue, you must scale down your Rancher deployment and uninstall the Webhook chart before performing the restore. For details, refer to this Knowledge Base article.
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.