Kubernetes upgrade notes: 1.23.x to 1.24.x
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe for others who manage a K8s cluster on their own). I’ll only mention changes that might be relevant: either because they are interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or because they matter if you manage your own bare-metal/VM based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.
I have a general upgrade guide Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes that worked quite well for me for the last few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.24.x upgrade and WHAT was interesting for me.
As usual I don’t update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Nevertheless it’s important to test new releases (and even Beta or release candidates if possible) early in development or integration systems and report bugs!
I only upgrade from the latest version of the former major release. At the time of writing this blog post, 1.23.10 was the latest 1.23.x release. After reading the 1.23 CHANGELOG to figure out if any important changes were made between my current 1.23.x version and the latest 1.23.10 release, I didn’t see anything that prevented me from updating, and I didn’t need to change anything.
So I did the 1.23.10 update first. If you use my Ansible roles that basically only means changing the k8s_release variable to 1.23.10 and deploying the changes for the control plane and worker nodes as described in my upgrade guide.
After that everything still worked as expected so I continued with the next step.
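To make that concrete, the variable change is tiny. A sketch, assuming you keep your cluster variables in a group vars file (the file path is just an example, k8s_release is the variable from my roles):

```yaml
# group_vars/k8s.yml (example path - use wherever you define your cluster vars)
# First bump to the latest patch release of the old major version,
# deploy, verify everything works, and only then move on to 1.24.x:
k8s_release: "1.23.10"
```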
As it’s normally no problem to have a kubectl utility that is one major version ahead of the server version, I updated kubectl from the latest 1.23.x to the latest 1.24.x using my kubectl Ansible role.
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)
As always before a major upgrade, read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes with mostly default settings, there should be no need to adjust anything. For the K8s 1.24 release I actually couldn’t find any urgent notes that were relevant for my Ansible roles or my own on-prem setup. Nevertheless there are three notes that might be relevant for some people:
- 1.23.x was the last release that supported the Docker runtime via dockershim in the kubelet. If you still have that one running then there is no way to upgrade to Kubernetes 1.24.x yet. In that case you need to replace dockershim first. Please read my blog post Kubernetes: Replace dockershim with containerd and runc and act accordingly. Afterwards you can continue with the Kubernetes 1.24.x upgrade.
- The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Also see Service account token Secrets.
- The calculations for Pod topology spread skew now exclude nodes that don’t match the node affinity/selector. This may lead to unschedulable pods if you previously had pods matching the spreading selector on those excluded nodes (not matching the node affinity/selector), especially when the topologyKey is not node-level. Revisit the node affinity and/or pod selector in the topology spread constraints to avoid this scenario.
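To illustrate the topology spread change, here is a hedged, hypothetical Pod sketch (all names and labels are made up): since 1.24 the nodes filtered out by the node selector no longer count toward the skew calculation, so existing pods with the app label on excluded nodes are ignored.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example      # hypothetical example
  labels:
    app: myapp
spec:
  # Nodes that do NOT match this selector are now excluded from the
  # skew calculation - existing "myapp" pods on them no longer count.
  nodeSelector:
    disktype: ssd
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: myapp
  containers:
    - name: app
      image: nginx:1.21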
In What’s New (Major Themes) I found the following highlights that look most important to me:
- Already mentioned above but very important so I mention it once more: Dockershim Removed from kubelet
- Beta APIs Off by Default. That was a little bit surprising for me as Beta APIs had been enabled by default for quite some releases and only Alpha APIs were disabled. So if you want to use a new Beta API introduced with K8s v1.24 or higher you need to enable it explicitly (Beta APIs that were already enabled in previous releases stay enabled).
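New Beta APIs can still be turned on explicitly via the kube-apiserver runtime-config flag. A sketch only - the group/version placeholder must be replaced with the actual beta API you need:

```
# Sketch: enable a specific (new) beta API group/version on the kube-apiserver.
# Replace <group>/<version> with the real API group and version you need.
kube-apiserver \
  --runtime-config=<group>/<version>=true \
  ...   # all your other flags stay as they are
```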
A few interesting things I’ve found in Deprecation:
- kube-apiserver: the insecure address flags were removed.
- kube-controller-manager: the insecure address flags have had no effect since v1.20 and are removed in v1.24.
- kube-scheduler: the insecure flags were removed. You can use the secure serving flags instead.
- The node.k8s.io/v1alpha1 RuntimeClass API is no longer served. Use the node.k8s.io/v1 API version, available since v1.20.
- The cluster addon for dashboard was removed. To install dashboard, see here.
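If you still have RuntimeClass objects defined with the old API version, re-applying them with the v1 API is straightforward. A minimal sketch (the name and handler are hypothetical; the handler must match one configured in your container runtime):

```yaml
# node.k8s.io/v1 has been available since v1.20; v1alpha1 is gone in 1.24.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor     # hypothetical example name
handler: runsc     # must match a handler configured in your CRI runtime
```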
A few interesting API Changes also took place of course:
- Indexed Jobs graduated to stable.
- MaxUnavailable for StatefulSets allows a faster RollingUpdate by taking down more than one pod at a time. The number of pods you want to take down during a RollingUpdate is configurable via the maxUnavailable parameter.
- Pod affinity namespace selector and cross-namespace quota graduated to GA.
- The ServerSideFieldValidation feature has graduated to beta and is now enabled by default. kubectl 1.24 and newer will use server-side validation instead of client-side validation when writing to API servers with the feature enabled.
- The DynamicKubeletConfig feature has been removed from the kubelet.
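The StatefulSet maxUnavailable setting mentioned above is an alpha feature in 1.24 (behind the MaxUnavailableStatefulSet feature gate), so it has to be enabled explicitly. A hedged sketch of how the manifest would look (all names are hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web              # hypothetical example
spec:
  replicas: 5
  serviceName: web
  selector:
    matchLabels:
      app: web
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Take down up to 2 pods at a time instead of the default 1.
      # Requires the MaxUnavailableStatefulSet feature gate (alpha in 1.24).
      maxUnavailable: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
```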
And finally the interesting Features:
- Added completion for kubectl config set-context.
- Added a label selector flag to all kubectl rollout commands.
- Added support for running kubectl commands (e.g. kubectl port-forward) via a SOCKS5 proxy.
- kubectl now supports shell completion for the <type>/<name> format for specifying resources. kubectl now provides shell completion for container names following the -c flag of the exec command. kubectl’s shell completion now suggests resource types for commands that only apply to pods.
- Kubelet: the dockershim related flags were removed along with dockershim.
- Kubernetes 1.24 is built with Go 1.18, which no longer validates certificates signed with a SHA-1 hash algorithm by default. See https://golang.org/doc/go1.18#sha1 for more details.
- Move volume expansion feature to GA.
- Promoted graceful shutdown based on pod priority to beta.
- kubectl logs will now warn and default to the first container in a pod. This new behavior brings it in line with kubectl exec.
- The kubelet now creates an iptables chain named KUBE-IPTABLES-HINT in the mangle table. Containerized components that need to modify iptables rules in the host network namespace can use the existence of this chain to more reliably determine whether the system is using iptables-legacy or iptables-nft.
- The output of kubectl describe ingress now includes an IngressClass name if available.
- Updated kubectl apply -k to a newer Kustomize release.
- kubectl create token can now be used to request a service account token, and permission to request service account tokens is added to the corresponding aggregated RBAC roles.
- Reverted graceful node shutdown to match the 1.21 behavior of setting pods that have not yet successfully completed to the “Failed” phase if the GracefulNodeShutdown feature is enabled in the kubelet. The GracefulNodeShutdown feature is beta and must be explicitly configured via the kubelet config to be enabled in 1.21+. This changes 1.22 and 1.23 behavior on node shutdown to match 1.21. If you do not want pods to be marked terminated on node shutdown in 1.22 and 1.23, disable the GracefulNodeShutdown feature.
- Removed various deprecated flags and features (see the full CHANGELOG for details).
- Updated runc to a newer release.
- Users who look at iptables dumps will see some changes in the naming and structure of rules.
- The kubectl version long output will be replaced with the kubectl version --short style output. Users requiring the full output should use the --output flag (e.g. --output=yaml).
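The graceful node shutdown feature mentioned above is configured via the kubelet configuration file. A minimal sketch - the duration values are just examples, tune them for your workloads:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Set to false if you do NOT want pods marked as "Failed" on node shutdown.
  GracefulNodeShutdown: true
# Total time the node waits for pods to terminate during a shutdown,
# and the portion of that time reserved for critical pods:
shutdownGracePeriod: "60s"
shutdownGracePeriodCriticalPods: "20s"
```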
Also check what the latest supported and tested etcd version for Kubernetes 1.24 is.
If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container has a matrix showing which version you need at minimum and maximum, and which version is recommended for a given K8s version.
Nevertheless, if your K8s update to v1.24 worked fine, I would recommend also updating the CSI sidecar containers sooner or later.
Here are a few links that might be interesting regarding the new features in Kubernetes 1.24.
Now I finally upgraded the K8s controller and worker nodes to version
1.24.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
That’s it for today! Happy upgrading! ;-)