The 24-hour burn-in is a personal preference. The idea is that some issues don’t pop up right away. For example, a memory leak in the kube-apiserver takes time to surface, and if the k8s upgrade breaks one of Rancher’s controllers, the effect might not show until users start exercising it.
But the most important reason is that application teams’ smoke testing isn’t always thorough. Let’s say you upgrade from k8s v1.16 to v1.18, then right away upgrade again to v1.20. The next morning an application team comes to you and says, "My app is broken, your upgrade broke it, roll it back." The question becomes: which upgrade broke it? Was it v1.18 or v1.20?
So my rule is a minimum of 24 hours for most environments; this gives the environment time to settle down and app teams time to notice problems.
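To make the burn-in more than just waiting, I like to keep an eye on control-plane memory during that window. Below is a minimal sketch of that idea, assuming metrics-server is installed, the apiserver pods carry the label `component=kube-apiserver` (kubeadm-style; adjust the selector for your distro), and the `kubernetes` Python client is available. It polls the metrics API so a slow leak stands out over the 24 hours.

```python
# burn_in_watch.py - sketch: watch kube-apiserver memory during a burn-in window.
# Assumes metrics-server is installed and the apiserver pods carry the
# label component=kube-apiserver (kubeadm-style; adjust for your distro).
import time
from kubernetes import client, config

POLL_SECONDS = 300  # sample every 5 minutes

def apiserver_memory_mib(metrics_api):
    """Return {pod_name: memory_MiB} for kube-apiserver pods via metrics.k8s.io."""
    resp = metrics_api.list_namespaced_custom_object(
        group="metrics.k8s.io",
        version="v1beta1",
        namespace="kube-system",
        plural="pods",
        label_selector="component=kube-apiserver",
    )
    usage = {}
    for pod in resp["items"]:
        # Sum container memory; metrics-server reports values like "123456Ki".
        total_ki = sum(
            int(c["usage"]["memory"].rstrip("Ki"))
            for c in pod["containers"]
            if c["usage"]["memory"].endswith("Ki")
        )
        usage[pod["metadata"]["name"]] = total_ki // 1024
    return usage

def main():
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    metrics_api = client.CustomObjectsApi()
    while True:
        for name, mib in apiserver_memory_mib(metrics_api).items():
            print(f"{time.strftime('%H:%M:%S')} {name}: {mib} MiB")
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```

If the numbers climb steadily across the whole window instead of leveling off after startup, that’s the kind of problem a five-minute smoke test will never catch.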
Personally, for production and other mission-critical environments where uptime really matters, I recommend scheduling all upgrades a week apart. For example:

- Week 1: upgrade Rancher one version.
- Week 2: upgrade the upstream cluster.
- Week 3: upgrade the downstream dev clusters.
- Week 4: upgrade QA, staging, and non-mission-critical prod clusters.
- Week 5: upgrade the mission-critical prod clusters.

Then, about six months later, the whole process repeats.
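As an illustration of that cadence, here’s a small sketch that pins the week-apart waves to actual calendar dates. The wave names and start date are hypothetical, not prescriptive.

```python
# upgrade_waves.py - sketch: lay out the week-apart upgrade waves on a calendar.
# The wave names and start date below are illustrative, not prescriptive.
from datetime import date, timedelta

WAVES = [
    "Rancher (one version)",
    "Upstream cluster",
    "Downstream dev clusters",
    "QA / staging / non-critical prod clusters",
    "Mission-critical prod clusters",
]

def schedule(start: date):
    """Yield (date, wave) pairs, one wave per week."""
    for week, wave in enumerate(WAVES):
        yield start + timedelta(weeks=week), wave

if __name__ == "__main__":
    for day, wave in schedule(date(2021, 6, 7)):  # hypothetical start date
        print(f"{day.isoformat()}  {wave}")
```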
DevOps works best when you do lots of small steps instead of a few big jumps.