I’m new to Rancher and I’m trying to automate the import of an existing k3s (v1.25.10+k3s1) single-node cluster into my Rancher instance running on Azure AKS (k8s v1.30.4, Rancher v2.9.1). What I’m trying to do is move the cluster (once registered) from the fleet-default workspace to my custom “dev” workspace.
To do so, I’m using a bash script with the following API calls:
- Get LoginToken:

```bash
LOGINTOKEN=$(curl -s \
  "https://${SERVER_URL}/v3-public/localProviders/local?action=login" \
  -H 'content-type: application/json' \
  --data-binary "${AUTH_DATA}" | jq -r '.token')
```
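For reference, `AUTH_DATA` is a local-auth payload built along these lines (a minimal sketch; `RANCHER_USER` and `RANCHER_PASSWORD` are placeholder variables for the real credentials):

```bash
# Hypothetical login payload for the local auth provider.
# RANCHER_USER / RANCHER_PASSWORD are placeholders.
AUTH_DATA=$(jq -n \
  --arg user "${RANCHER_USER}" \
  --arg pass "${RANCHER_PASSWORD}" \
  '{ "username": $user, "password": $pass }')
```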
- Get API Token:

```bash
APITOKEN=$(curl -s \
  "https://${SERVER_URL}/v3/token" \
  -H 'content-type: application/json' \
  -H "Authorization: Bearer $LOGINTOKEN" \
  --data-binary "${JSON_DATA}" | jq -r '.token')
```
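Here `JSON_DATA` is the token-creation payload, something like this (a sketch; the description text and the `ttl` value are arbitrary choices, not requirements):

```bash
# Hypothetical token-creation payload for POST /v3/token.
# "description" is free text; "ttl" (milliseconds, 0 = no expiry) is optional.
JSON_DATA=$(jq -n '{ "type": "token", "description": "automation token", "ttl": 0 }')
```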
- Cluster Definition: Here I try to set the payload of the API call so that the cluster ends up in the “dev” workspace (and cluster-group, both already present).

```bash
JSON_DATA=$(jq -n --arg orderId "$ORDER_ID" \
  '{
    "type": "cluster",
    "name": $orderId,
    "import": true,
    "metadata": {
      "labels": {
        "environment": "dev",
        "fleet.cattle.io/workspace": "dev",
        "fleet.cattle.io/cluster-group": "dev"
      },
      "fleetWorkspaceName": "dev"
    },
    "annotations": {
      "fleet.cattle.io/workspace": "dev",
      "fleet.cattle.io/cluster-group": "dev"
    },
    "spec": {
      "fleetWorkspaceName": "dev"
    }
  }')

CLUSTERID=$(curl -s "https://${SERVER_URL}/v3/cluster" \
  -H 'content-type: application/json' \
  -H "Authorization: Bearer $APITOKEN" \
  --data-binary "${JSON_DATA}" | jq -r '.id')
```
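Right after the POST I can check what Rancher actually persisted for the new cluster (just a debugging sketch):

```bash
# Print the stored Fleet workspace and the cluster state for the new cluster.
curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" \
  -H "Authorization: Bearer $APITOKEN" \
  | jq '{id: .id, state: .state, fleetWorkspaceName: .fleetWorkspaceName}'
```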
- Cluster Registration:

```bash
JSON_DATA=$(jq -n --arg clusterId "${CLUSTERID}" \
  '{ "type": "clusterRegistrationToken", "clusterId": $clusterId }')

ID=$(curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}/clusterregistrationtoken" \
  -H 'content-type: application/json' \
  -H "Authorization: Bearer $APITOKEN" \
  --data-binary "${JSON_DATA}" | jq -r '.id')
```
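From that registration token I then pull the import manifest and apply it on the k3s node, roughly like this (a sketch; it assumes `kubectl` is pointed at the downstream k3s cluster and that the token object exposes a `manifestUrl` field, which may take a moment to be populated):

```bash
# Poll the registration token until Rancher fills in the import manifest URL,
# then apply it on the downstream k3s cluster.
MANIFEST_URL=""
while [ -z "${MANIFEST_URL}" ] || [ "${MANIFEST_URL}" = "null" ]; do
  sleep 5
  MANIFEST_URL=$(curl -s "https://${SERVER_URL}/v3/clusterregistrationtokens/${ID}" \
    -H "Authorization: Bearer $APITOKEN" | jq -r '.manifestUrl')
done
curl -sfL "${MANIFEST_URL}" | kubectl apply -f -
```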
So, at this point, the cluster is registered in Rancher, but it sits in the fleet-default workspace even though I tried to set fleetWorkspaceName=“dev”.
My next idea was to update the cluster with a PUT API call once the cluster is in the “Active” state.
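Waiting for the cluster to become “Active” is done with a simple polling loop along these lines (a sketch; I assume the `state` field of the v3 cluster object reports "active" once registration completes, and the 10-second interval is arbitrary):

```bash
# Poll until the imported cluster reports the "active" state.
STATE=""
while [ "${STATE}" != "active" ]; do
  sleep 10
  STATE=$(curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" \
    -H "Authorization: Bearer ${APITOKEN}" | jq -r '.state')
done
```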
Then, to do the update:
1 - Get the Cluster config:

```bash
current_config=$(curl -s \
  "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" \
  -H "Authorization: Bearer ${APITOKEN}")
```
2 - Update the Cluster config:

```bash
PUT_JSON=$(echo "$current_config" | jq --arg ws "dev" '.fleetWorkspaceName = $ws')
```
3 - PUT API call to push the new cluster config:
```bash
curl -s \
  -X PUT "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" \
  -H "Authorization: Bearer ${APITOKEN}" \
  -H "Content-Type: application/json" \
  --data-binary "${PUT_JSON}"
```
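After the PUT I verify whether anything changed (again, just a debugging check):

```bash
# Re-read the cluster and print the workspace field to see whether the PUT stuck.
curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" \
  -H "Authorization: Bearer ${APITOKEN}" | jq -r '.fleetWorkspaceName'
```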
Once all those commands have been executed, the cluster still remains in the fleet-default workspace. If I retrieve the cluster config again, inside the “conditions” field I can see the following element:
```json
{
  "lastUpdateTime": "2024-10-29T10:03:41Z",
  "status": "True",
  "type": "Updated"
}
```
I don’t know whether this condition refers to my last PUT API call, but either way the workspace did not change.
I have also enabled the “provisioningv2-fleet-workspace-back-population” feature flag from the Global Settings in the Rancher UI.
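The flag can also be inspected directly on the Rancher management (AKS) cluster, assuming kubectl access there (Rancher stores feature flags as Feature resources in the management.cattle.io API group):

```bash
# Inspect the feature flag on the Rancher management cluster
# (requires a kubeconfig pointing at the AKS cluster that runs Rancher).
kubectl get features.management.cattle.io provisioningv2-fleet-workspace-back-population -o yaml
```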
At this point, I don’t know whether the Rancher API supports this kind of operation or whether I’m missing something.