Release v2.11.0
> [!CAUTION]
> If you are using Active Directory Federation Service (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues: the AD FS Relying Party Trust may fail to pick up a signature verification certificate from the federation metadata, which requires manual intervention. To correct this, either update the Relying Party Trust information from the federation metadata (Relying Party Trust → Update from Federation Metadata…) or add the certificate directly (Relying Party Trust → Properties → Signature tab → Add → Select the certificate). For more information, see #48655.
Important: Rancher Kubernetes Engine (RKE/RKE1) will reach end of life on July 31, 2025. Rancher 2.12.0 and later will no longer support provisioning or managing downstream RKE1 clusters. We recommend replatforming RKE1 clusters to RKE2 to ensure continued support and security updates. Learn more about the transition here.
Important: Rancher-Istio will be deprecated in Rancher v2.12.0. For enhanced security, use the SUSE Application Collection build of Istio instead (included in SUSE Rancher Prime subscriptions). Detailed information can be found in this announcement.
Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.
Rancher v2.11.0 is the latest minor release of Rancher. This is a Community version release that introduces new features, enhancements, and various updates.
Rancher General
Features and Enhancements
- Rancher now supports Kubernetes v1.32. See #47934 for information on Rancher support for Kubernetes v1.32. You can view the upstream Kubernetes changelogs for v1.32 for a complete list of changes.
- Rancher has a new feature, `imperative-api-extension`, enabled by default, that adds additional APIs using Kubernetes’ aggregation layer. See #47010.
- You can now enable the automatic deployment of a Priority Class and Pod Disruption Budget for the `cattle-cluster-agent`. When this feature is enabled, all newly provisioned node driver, custom, and imported RKE2/K3s clusters automatically deploy a Priority Class and Pod Disruption Budget during provisioning. Existing clusters can be gradually updated to this behavior through the Rancher UI or by setting a specific annotation on cluster objects. See #13068 and #48995.
Behavior Changes
- Kubernetes v1.28 and v1.29 are no longer supported. Before upgrading to Rancher v2.11.0, ensure all clusters are running Kubernetes v1.30 or later. See #48628.
Major Bug Fixes
- Fixed a memory leak issue where memory usage increased on new installations of Rancher. See #47694.
- Fixed an issue where `cattle-cluster-agent` wouldn’t exit cleanly (exit code 0) after receiving a SIGTERM signal from the host OS. See #43020.
- Fixed an issue where the agent image field was not updated for imported clusters, because Rancher overrides the `desiredAgentImage` value set on the cluster object. See #47593.
Rancher App (Global UI)
Major Bug Fixes
- Fixed an issue where extensions were not loading after logging into Rancher through an LDAP authentication provider. See #13499.
- Fixed an issue where clusters could be created or edited with a broken add-on YAML file. See #12323.
- Fixed an issue where cluster labels could not be removed or updated. See #9563.
- Fixed an issue where using both the Rancher UI and `kubectl` to drain and cordon a node caused the UI to fall out of sync with the actual state of the node. See #39867.
Known Issues
- After deleting a Namespace or Project in the Rancher UI, the Namespace or Project remains visible. As a workaround, refresh the page. See #12220.
Behavior Changes
- Replaced instances of v-tooltip with v-clean-tooltip to fix an issue where the UI did not sanitize cluster description inputs, allowing the possibility of changes to a cluster (local or downstream) description to cause a stored XSS attack. For more information, see CVE-2024-52281 and #12564.
Cluster Provisioning
Features and Enhancements
- Cluster-level private registries are now supported for generic Kubernetes imported clusters. See #35192.
Behavior Changes
- Generic Kubernetes imported clusters now use the v3 management cluster object (`cluster.management.cattle.io`) for both initial creation and updates (POST and PUT API calls, respectively). See #13151.
RKE2/K3s Provisioning
Features and Enhancements
- You can now disable Rancher-side Kubernetes version management for imported RKE2/K3s clusters at both the global and per-cluster levels. See #41010.
Behavior Changes
- etcd snapshots are now populated to Rancher by listing the `etcdsnapshotfile.k3s.cattle.io` resources in the downstream cluster, instead of periodically scraping the CLI and the `rke2/k3s-etcd-snapshots` configmap. See #44452.
Rancher CLI
Features and Enhancements
- Improvements have been made to the `cluster list-members` and `project list-members` commands. See #47692.
Major Bug Fixes
- Rancher CLI now respects proxy settings and the HTTP timeout in the `token` command. See #48631.
- Rancher CLI now respects proxy settings and sets the HTTP timeout in the login and SSH commands when it downloads certificates and an SSH key, respectively. It is also now possible to set an explicit proxy URL and HTTP timeout for the server in the configuration file. See #48321.
Behavior Changes
- CLI commands corresponding to the multi-cluster app legacy feature are no longer available. See #48252.
- The deprecated `globaldns` subcommand was removed from the Rancher CLI. See #48129.
Authentication
Features and Enhancements
- A new configuration option, “User Login Filter,” was introduced for LDAP (OpenLDAP and FreeIPA) and Active Directory authentication providers that allows you to provide an LDAP filter expression that narrows down who can log in to Rancher based on user LDAP attributes. See #48071.
- A new feature flag, `clean-stale-secrets`, has been added to remove stale secrets from the `cattle-impersonation-system` namespace. See #48312.
- Rancher now allows users to enable the `GroupSearchEnabled` option when configuring an OIDC authentication provider, allowing users to search for groups in the Rancher UI. See #48145.
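As an example of the new “User Login Filter” option above, a filter restricting login to members of a particular group might look like the following sketch; the object class and group DN are illustrative assumptions, not values from the release note:

```
(&(objectClass=inetOrgPerson)(memberOf=cn=rancher-users,ou=groups,dc=example,dc=com))
```

Only directory entries matching this LDAP filter (RFC 4515 syntax) would be permitted to log in; adjust the attributes to your directory schema.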
Role-Based Access Control (RBAC)
Features and Enhancements
- A new status field has been added to the `GlobalRoleBinding`, providing information about the status of resources created in upstream and downstream clusters. See #44668.
- A new status field has been added to the `ClusterRoleTemplateBinding`, providing information about the status of resources created in upstream and downstream clusters. See #44663.
- Rancher now supports GlobalRoleBindings (GRBs) that specify a `userPrincipalId`, for example `userPrincipalId: "activedirectory_user://CN=test,CN=Users,DC=ad,DC=ians,DC=farm"`. A `User` will be created with the provided `userPrincipalId`. Additionally, the created user can have their `displayName` supplemented with an annotation on the `GlobalRoleBinding`. The following example configures the created user’s display name:

  ```yaml
  kind: GlobalRoleBinding
  metadata:
    annotations:
      auth.cattle.io/principal-display-name: "test user"
  ```

  See #47359.
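For context, a fuller manifest combining the principal ID with the display-name annotation might look like the sketch below. The `globalRoleName` value and the `generateName` prefix are illustrative assumptions, not part of the release note:

```yaml
# Hypothetical sketch: bind an AD principal (who has not yet logged in)
# to a global role, pre-creating the User with a friendly display name.
apiVersion: management.cattle.io/v3
kind: GlobalRoleBinding
metadata:
  generateName: grb-                 # assumed naming convention
  annotations:
    auth.cattle.io/principal-display-name: "test user"
globalRoleName: user                 # assumed: a built-in global role
userPrincipalId: "activedirectory_user://CN=test,CN=Users,DC=ad,DC=ians,DC=farm"
```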
Behavior Changes
- The `Restricted Admin` role has been removed. Existing users with the `Restricted Admin` role will have the privileges associated with this role revoked upon upgrade. See #47875.
Continuous Delivery (Fleet)
Behavior Changes
- Fleet now honors custom certificate authority (CA) bundles configured in Rancher. This removes the need to copy your CA bundles to all `GitRepos` and/or Helm secrets referenced by those `GitRepos`. Instead, you can configure those bundles directly through a single secret already known to Rancher, which Fleet transparently uses as a fallback. See the Fleet documentation and fleet#2750.
- Agent failover has improved since the move from a StatefulSet to a Deployment and the introduction of leader election for the agents. When testing failover by shutting down a node running a Fleet agent, we observed that the pods from that node stay in the `terminating` state. This is not a Fleet problem: it is how Kubernetes behaves when a node becomes unreachable. Failover works correctly even if those pods remain in the `terminating` state. See fleet#3096 and kubernetes/kubernetes#72226.
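As a sketch of the “single secret already known by Rancher” mentioned above: Rancher’s documented convention for additional CAs is a `tls-ca-additional` secret in `cattle-system`. Assuming Fleet falls back to that same secret (confirm against the Fleet documentation), it would look roughly like:

```yaml
# Sketch following Rancher's tls-ca-additional convention; the certificate
# body is a placeholder. Verify the exact secret Fleet consults in the docs.
apiVersion: v1
kind: Secret
metadata:
  name: tls-ca-additional
  namespace: cattle-system
type: Opaque
stringData:
  ca-additional.pem: |
    -----BEGIN CERTIFICATE-----
    ...your CA bundle here...
    -----END CERTIFICATE-----
```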
Apps & Marketplace
Behavior Changes
- The multi-cluster app legacy feature is no longer available. See #39525.
Observability and Backup
Features and Enhancements
- `rancher-resource-set-basic` and `rancher-resource-set-full` are ResourceSets, specifically for backing up Rancher, that are officially maintained and created by default by the `rancher-backup` operator. `rancher-resource-set-basic` does not include any secrets in the backup files, to safeguard confidential information, while `rancher-resource-set-full` includes all essential secrets in the backup files to ensure Rancher continues running smoothly after a restore or migration. See #607 and the documentation.
- Backup and restore metrics are now available in Prometheus. See #352.
Monitoring
Features and Enhancements
- Rancher Monitoring now uses kube-prometheus-stack-66.7.1. See #48992.
Behavior Changes
- The `rancher-alerting-drivers` app now uses `rancher/kuberlr-kubectl`, improving how alerts are sent and received. See #48849.
Windows
Major Bug Fixes
- Fixed an issue where upgrading imported RKE2 clusters with Windows nodes got stuck in an “Upgrading” state. See #45030. Note that Windows nodes are now skipped during an upgrade; we recommend disabling Rancher-side Kubernetes version management for imported RKE2 clusters with Windows nodes. See the documentation for more information.
Install/Upgrade Notes
- If you’re installing Rancher for the first time, your environment must fulfill the installation requirements.
Upgrade Requirements
- Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
- CNI requirements:
  - For Kubernetes v1.19 and later, disable firewalld, as it’s incompatible with various CNI plugins. See #28840.
  - When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0, which supports nf_tables. See Flannel #1317.
- Requirements for air-gapped environments:
  - When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to `NO_PROXY`. See the documentation and issue #2725.
  - When installing Rancher with Docker in an air-gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command, as shown in the K3s documentation. If the registry has certificates, you also need to supply those. See #28969.
- Requirements for general Docker installs:
  - When starting the Rancher Docker container, you must use the `privileged` flag. See the documentation.
  - When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container comes up and works as expected. See #33685.
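As a sketch of the `registries.yaml` mentioned in the air-gapped requirements above, a minimal file along the lines of the K3s schema might look like this; the registry hostname and CA path are illustrative assumptions:

```yaml
# Sketch: mirror docker.io through a private registry, with a custom CA.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem   # assumed path to the CA cert
```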
Versions
Please refer to the README for the latest and stable Rancher versions.
Please review our version documentation for more details on versioning and tagging conventions.
Images
- rancher/rancher:v2.11.0
Tools
Kubernetes Versions for RKE
- v1.32.3 (Default)
- v1.31.7
- v1.30.11
- v1.29.15
Kubernetes Versions for RKE2/K3s
- v1.32.3 (Default)
- v1.31.7
- v1.30.11
- v1.29.15
Rancher Helm Chart Versions
In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This prevents simultaneous upstream and Rancher changes from causing conflicting version increments, and it complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in its build metadata, for example: `100.0.0+up2.1.0`. See #32294.
Other Notes
Experimental Features
Rancher now supports using an OCI Helm chart registry for Apps & Marketplace. See the documentation on using OCI-based Helm chart repositories, and note that this feature is experimental. See #29105 and #45062.
Deprecated Upstream Projects
In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #29306 for details.
Removed Legacy Features
Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.
Also, `rancher-external-dns` and `rancher-global-dns` have been deprecated as of the Rancher v2.7 line.
The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #6864.
UI and Backend
- CIS Scans v1 (Cluster)
- Pipelines (Project)
- Istio v1 (Project)
- Logging v1 (Project)
- RancherD
UI
- Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.
Long-standing Known Issues
Long-standing Known Issues - Cluster Provisioning
- Not all cluster tools can be installed on a hardened cluster.
- Rancher v2.8.1:
  - When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message `[ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again`. As a workaround, you can unpause the cluster by running `kubectl edit clusters.cluster clustername -n fleet-default` and setting `spec.unpaused` to `false`. See #43735.
- Rancher v2.7.2:
  - If you upgrade or update any hosted cluster and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #8524.
Long-standing Known Issues - RKE Provisioning
- Rancher v2.9.0:
  - The Weave CNI plugin is deprecated for RKE v1.27 and later, because the plugin is deprecated for upstream Kubernetes v1.27 and later. RKE cluster creation with Weave will not proceed, as it raises a validation warning. See #11322.
Long-standing Known Issues - RKE2 Provisioning
- Rancher v2.9.0:
  - When adding the `provisioning.cattle.io/allow-dynamic-schema-drop` annotation through the cluster config UI, the annotation disappears before the value field can be added. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the ⋮ dropdown attached to your respective cluster in the Cluster Management view. See #11435.
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve `Active` status after a migration. If a downstream cluster is still updating or in an error state immediately after a migration, let it attempt to resolve itself. This might take up to an hour. See #34518 and #42834.
- Rancher v2.7.2:
  - When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, `spec.rkeConfig.machineGlobalConfig.profile` is set to `null`, which is an invalid configuration. See #8480.
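The annotation workaround described under Rancher v2.9.0 above can be sketched as the following Edit Yaml fragment; the cluster name is illustrative:

```yaml
# Sketch: set the annotation directly on the provisioning cluster object
# (Cluster Management → ⋮ → Edit Yaml), since the cluster config UI drops it.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-cluster                    # illustrative name
  annotations:
    provisioning.cattle.io/allow-dynamic-schema-drop: "true"
```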
Long-standing Known Issues - K3s Provisioning
- Rancher v2.7.2:
  - Clusters remain in an `Updating` state even when they contain nodes in an `Error` state. See #39164.
Long-standing Known Issues - Rancher App (Global UI)
- Rancher v2.9.2:
  - Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Entering a minimum node count of zero through the UI can cause cluster creation to fail with an invalid parameter error. To prevent this error, enter a minimum node count at least equal to the node count. See #11922.
- Rancher v2.7.7:
  - When creating a cluster in the Rancher UI, the `Cluster Name` field does not allow underscores (`_`). See #9416.
Long-standing Known Issues - Hosted Rancher
- Rancher v2.7.5:
- The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #8524.
Long-standing Known Issues - EKS
- Rancher v2.7.0:
- EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #39392.
Long-standing Known Issues - Authentication
- Rancher v2.9.0:
  - There are some known issues with the OpenID Connect provider support:
    - When the generic OIDC auth provider is enabled and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior, as the OIDC auth provider alone is not searchable. See #46104.
    - When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster or project by their username are unable to access resources after logging in. A user only has access to resources upon login if they were added by their userID. See #46105.
    - When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user sees the following error when attempting to create a Project: `projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg"`. However, the project is still created. See #46106.
Long-standing Known Issues - Rancher Webhook
- Rancher v2.7.2:
  - A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
    - If you roll back from Rancher v2.7.2 or later to a version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behavior downstream. The Rancher team has developed a script which should be used after the rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running); it removes the webhook from affected downstream clusters. See #40816.
Long-standing Known Issues - Harvester
- Rancher v2.7.2:
- If you’re using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won’t be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #3750.
Long-standing Known Issues - Backup/Restore
- When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve `Active` status after a migration. If a downstream cluster is still updating or in an error state immediately after a migration, let it attempt to resolve itself. This might take up to an hour. See #34518 and #42834.
Long-standing Known Issues - Continuous Delivery (Fleet)
- Rancher v2.10.0:
  - Target customization for namespace labels and annotations cannot modify or remove labels when updating. See #3064.
- In version 0.10, GitRepo resources provided a comprehensive list of all deployed resources across all clusters in their status. However, in version 0.11, this list has been modified to report resources only once until the feature is integrated into the Rancher UI. While this change addresses a UI freeze issue, it may result in potential inaccuracies in the list of resources and resource counts under some conditions. See #3027.