Kubernetes upgrade notes: 1.29.x to 1.30.x

Introduction
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage a K8s cluster on their own). I’ll only mention changes that might be relevant, either because they are interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or because they matter if you manage your own bare-metal/VM based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.
I have a general upgrade guide Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes that has worked quite well for me for the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.29.x to 1.30.x upgrade and WHAT was interesting for me.
As usual I don’t update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Nevertheless it’s important to test new releases (and even betas or release candidates if possible) early in development environments and report bugs!
Update to latest current release
I only upgrade from the latest version of the former major release. At the time of writing this blog post 1.29.9 was the latest 1.29.x release. After reading the 1.29 CHANGELOG to figure out if any important changes were made between my current 1.29.x version and the latest 1.29.9 release, I didn’t see anything that prevented me from updating, and I didn’t need to change anything.
So I did the 1.29.9 update first. If you use my Ansible roles, that basically just means changing the k8s_ctl_release variable from 1.29.x to 1.29.9 (for the controller nodes) and doing the same for k8s_worker_release (for the worker nodes). Deploy the changes for the control plane and worker nodes as described in my upgrade guide.
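A minimal sketch of what that looks like in practice (the inventory file names and the playbook/tag names are assumptions, adjust them to your own setup):

```bash
# Bump the Kubernetes release variables in the Ansible inventory
# (hypothetical file names; the variable names are the ones used by my roles):
sed -i 's/^k8s_ctl_release:.*/k8s_ctl_release: "1.29.9"/' group_vars/k8s_controller.yml
sed -i 's/^k8s_worker_release:.*/k8s_worker_release: "1.29.9"/' group_vars/k8s_worker.yml

# Roll out the control plane first, then the worker nodes
# (playbook and tag names are just placeholders):
ansible-playbook --tags=role-kubernetes-controller k8s.yml
ansible-playbook --tags=role-kubernetes-worker k8s.yml
```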
After that, everything still worked as expected, so I continued with the next step.
Upgrading kubectl
As it’s normally no problem to have a kubectl utility that is one major version ahead of the server version, I updated kubectl from 1.29.x to the latest 1.30.x using my kubectl Ansible role.
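A quick sanity check after the role has run (just to illustrate the expected version skew):

```bash
# The client should now report v1.30.x while the API server still reports v1.29.x,
# which is within the supported skew of one release:
kubectl version
```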
Release notes
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! 😉
Urgent Upgrade Notes
Funnily enough, I think the Kubernetes v1.30 release is the first one that doesn’t have any urgent upgrade notes 😉
What’s New (Major Themes)
All important stuff is listed in the Kubernetes v1.30: Uwubernetes release announcement.
The following listings of changes and features only contain stuff that I found useful and interesting. See the full Kubernetes v1.30 Changelog for all changes.
Graduated to stable
- Robust VolumeManager reconstruction after kubelet restart
- Prevent unauthorized volume mode conversion during volume restore
- Pod Scheduling Readiness
- Min domains in PodTopologySpread
- Validating Admission Policy Is Generally Available
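Since Validating Admission Policy is now GA, here is a minimal sketch of what such a policy could look like (the policy name, resource selection and CEL expression are made up for illustration; a ValidatingAdmissionPolicyBinding is additionally needed to actually enforce the policy):

```bash
kubectl apply -f - <<'EOF'
# Hypothetical policy: reject Deployments with more than 5 replicas
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"
    message: "replica count must not exceed 5"
EOF
```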
Graduated to beta
- Node log query: That’s one of the more interesting features IMHO. To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows fetching logs of services running on the node. Following the v1.30 release, this is now beta (you still need to enable the feature to use it, though). For Linux it is assumed that the service logs are available via journald. To get the logs of the kubelet on a node one can use this command: kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet". For more information, see the log query documentation. A small example of enabling and using the feature follows after this list.
- Beta Support For Pods With User Namespaces: This feature definitely increases the security of workloads quite a bit. User namespaces are a Linux feature that isolates the UIDs and GIDs of the containers from the ones on the host. The identifiers in the container can be mapped to identifiers on the host in a way where the host UIDs/GIDs used for different containers never overlap. Furthermore, the identifiers can be mapped to unprivileged, non-overlapping UIDs and GIDs on the host. This feature needs Linux kernel >= 6.3. Sadly containerd v1.7 currently isn’t supported, so one has to wait for containerd v2.0 (at least…). Basically only CRI-O with crun is currently supported, but not CRI-O with runc. Currently it looks like using runc with this feature is a general problem. Hopefully it’ll work some day as this is a really interesting feature.
- Contextual logging
- Make Kubernetes aware of the LoadBalancer behavior
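As mentioned above, the node log query feature still needs to be enabled explicitly. A minimal sketch of what that involves (the kubelet config snippet and the node name are assumptions; see the log query documentation for the authoritative details):

```bash
# 1) In the KubeletConfiguration of the node the feature gate and the log query
#    handler need to be enabled (shown here as comments, merge into your config):
#
#      featureGates:
#        NodeLogQuery: true
#      enableSystemLogHandler: true
#      enableSystemLogQuery: true
#
# 2) After restarting the kubelet, logs of a systemd service on that node can be
#    fetched through the API server (node name is just a placeholder):
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
```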
Alpha features
- Speed up recursive SELinux label change
- Recursive Read-only (RRO) mounts
- Job success/completion policy
- Traffic distribution for services
- Storage Version Migration
Further reading
- What’s New in Kubernetes 1.30?
- Exploring Kubernetes v1.30: Enhancements Relevant to MinIO Deployments
API changes
- kubelet allowed specifying a custom root directory for pod logs (instead of the default /var/log/pods) using the podLogsDir key in the kubelet configuration.
- AppArmor profiles can now be configured through fields on the PodSecurityContext and container SecurityContext. The beta AppArmor annotations are deprecated, and AppArmor status is no longer included in the node ready condition.
- Contextual logging is now in beta and enabled by default.
- In the kubelet configuration, the .memorySwap.swapBehavior field now accepts a new value NoSwap, which becomes the default if unspecified (see the sketch after this list). The previously accepted UnlimitedSwap value has been dropped.
- ValidatingAdmissionPolicy was promoted to GA and will be enabled by default.
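To illustrate the two kubelet related items above, this is roughly what the new fields look like in a KubeletConfiguration (the values are made up; merge them into your existing kubelet config):

```bash
# New KubeletConfiguration fields in v1.30 (shown as comments):
#
#   podLogsDir: /data/pod-logs      # default: /var/log/pods
#   memorySwap:
#     swapBehavior: NoSwap          # new default; UnlimitedSwap has been dropped
#
# The currently active kubelet configuration of a node can be inspected like this
# (node name is a placeholder):
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/configz" | jq .
```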
Features
- Added a Timezone column to the output of the kubectl get cronjob command.
- Changed --nodeport-addresses behavior to default to “primary node IP(s) only” rather than “all node IPs”.
- kubectl get job now displays the status for the listed jobs.
- kubectl port-forward over websockets (tunneling SPDY) can now be enabled using an Alpha feature flag environment variable: KUBECTL_PORT_FORWARD_WEBSOCKETS=true. The API Server being communicated to must also have an Alpha feature flag enabled: PortForwardWebsockets (see the example after this list).
- Graduated “Forensic Container Checkpointing” (KEP #2008) from Alpha to Beta.
- Graduated HorizontalPodAutoscaler support for per-container metrics to stable.
- Introduced a new alpha feature gate, SELinuxMount, which can now be enabled to accelerate SELinux relabeling.
- kubelet now supports configuring the IDs used to create user namespaces.
- Promoted KubeProxyDrainingTerminatingNodes to beta.
- The kubelet now rejects creating the pod if hostUserns=false and the CRI runtime does not support user namespaces.
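A quick example of the websockets based port-forward opt-in mentioned above (pod name and ports are placeholders; the API server additionally needs the PortForwardWebsockets feature gate enabled):

```bash
# Opt in on the client side via the alpha environment variable:
export KUBECTL_PORT_FORWARD_WEBSOCKETS=true
# Then port-forward as usual:
kubectl port-forward pod/my-pod 8080:80
```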
Other
- etcd: Updated to v3.5.11
CSI
If you use CSI then also check the CSI Sidecar Containers documentation. For every sidecar container there is a matrix showing the minimum and maximum supported version and which version is recommended for a given K8s version.
Nevertheless, if your K8s update to v1.30 worked fine, I would recommend also updating the CSI sidecar containers sooner or later.
Upgrade Kubernetes
Now I finally upgraded the K8s controller and worker nodes to version 1.30.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
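After the rollout I usually do a quick sanity check that all nodes actually report the new version:

```bash
# Every node should now report a v1.30.x kubelet in the VERSION column:
kubectl get nodes -o wide
```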
That’s it for today! Happy upgrading! 😉