Kubernetes upgrade notes: 1.24.x to 1.25.x

If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe for others who manage a K8s cluster on their own too). I’ll only mention changes that are either interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or relevant if you manage your own bare-metal/VM based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.

I have a general upgrade guide Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes that worked quite well for me for the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post here is specifically about the 1.24.x to 1.25.x upgrade and WHAT was interesting for me.

As usual I don’t update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Nevertheless it’s important to test new releases (and even beta versions or release candidates if possible) early on in development environments and to report bugs!

I only upgrade from the latest version of the former major release. At the time of writing this blog post, 1.24.9 was the latest 1.24.x release. After reading the 1.24 CHANGELOG to figure out if any important changes were made between my current 1.24.x release and the latest 1.24.9 release, I didn’t see anything that prevented me from updating, and I didn’t need to change anything.

So I did the 1.24.9 update first. If you use my Ansible roles that basically only means changing the k8s_release variable from 1.24.x to 1.24.9 and deploying the changes for the control plane and worker nodes as described in my upgrade guide.
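
Just to illustrate, here is a minimal sketch of what that variable change looks like. I’m assuming the variable lives in a group_vars file; the exact file name and location of course depends on your inventory layout:

```yaml
# group_vars/all.yml (or wherever you keep your cluster-wide Ansible variables)
# Step one of the upgrade: latest patch release of the current 1.24 branch
k8s_release: "1.24.9"
# Later, for the actual 1.25 upgrade, this gets bumped to the latest 1.25.x release.
```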

After that everything still worked as expected so I continued with the next step.

As it’s normally no problem to have a kubectl utility that is only one major version ahead of the server version, I updated kubectl from 1.24.x to the latest 1.25.x using my kubectl Ansible role.

Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)

As always before a major upgrade, read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, there should be no need to adjust any settings. For the K8s 1.25 release I actually couldn’t find any urgent notes that were relevant for my Ansible roles or my own on-prem setup. Nevertheless there are three notes that might be noteworthy for some people:

  • The 1.25 release stopped serving the following deprecated API versions: CronJob batch/v1beta1, EndpointSlice discovery.k8s.io/v1beta1, Event events.k8s.io/v1beta1, HorizontalPodAutoscaler autoscaling/v2beta1, PodDisruptionBudget policy/v1beta1, PodSecurityPolicy policy/v1beta1 and RuntimeClass node.k8s.io/v1beta1. For more information see the Deprecated API migration guide. (A migration example follows after this list.)
  • Support for the in-tree volume plugins flocker, quobyte and storageos was completely removed from Kubernetes.
  • There is a new OCI image registry (registry.k8s.io) that can be used to pull Kubernetes images. The old registry (k8s.gcr.io) will continue to be supported for the foreseeable future, but the new name should perform better because it frontends equivalent mirrors in other clouds.
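
To give one concrete example of such an API migration (the resource name and values below are made up): a HorizontalPodAutoscaler that still uses autoscaling/v2beta1 has to be migrated to autoscaling/v2 before the upgrade, because the old API version is no longer served in 1.25. Besides the apiVersion the metrics syntax changed slightly:

```yaml
# Old, no longer served in 1.25:
#   apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        # v2beta1 used "targetAverageUtilization" here, v2 uses a "target" object
        target:
          type: Utilization
          averageUtilization: 80
```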

In What’s New (Major Themes) I found the following highlights that look most important to me:

  • PodSecurityPolicy is removed, Pod Security Admission graduates to stable. If you are currently relying on PodSecurityPolicy, please follow the instructions for migration to Pod Security Admission. (A minimal namespace example follows after this list.)
  • Ephemeral Containers graduate to stable. That’s actually a very useful feature when it comes to debugging pod issues where kubectl exec won’t work because the running containers don’t contain any debug tools (which is best practice anyways).
  • Support for cgroups v2 Graduates to Stable (also Kubernetes 1.25: cgroup v2 graduates to GA)
  • Promoted SeccompDefault to Beta. I’ve written about that already in my Kubernetes 1.22 to 1.23 upgrade notes when this feature was in Alpha state. As this is a really useful security-related feature, please also check Enable default seccomp profile which was part of the upgrade notes mentioned above. It contains some more useful information, esp. how to test things once this feature is enabled.
  • Promoted Local Ephemeral Storage Capacity Isolation to stable. It provides support for capacity isolation of local ephemeral storage between pods, such as emptyDir volumes, so that a pod can be hard-limited in its consumption of shared resources: the pod gets evicted if its consumption of local ephemeral storage exceeds that limit. (See the resource limits example after this list.)
  • Promoted CSI Ephemeral Volume to stable. The CSI Ephemeral Volume feature allows CSI volumes to be specified directly in the pod specification for ephemeral use cases. (See the pod example after this list.)
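
Here is a minimal sketch of how Pod Security Admission is configured on the namespace level (the namespace name is made up, and which profiles you enforce/warn/audit of course depends on your workloads):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                                     # hypothetical namespace
  labels:
    # Reject pods that violate the "baseline" Pod Security Standard
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.25
    # Only warn and audit (but don't reject) if a pod violates the stricter "restricted" profile
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```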
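
Next, a sketch of the local ephemeral storage capacity isolation mentioned above (image and sizes are just examples): if the container writes more than the ephemeral-storage limit to its writable layer, logs or emptyDir volumes, the pod gets evicted.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-demo      # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:
          ephemeral-storage: "1Gi"
        limits:
          ephemeral-storage: "2Gi"  # pod gets evicted if it uses more than 2Gi
```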
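
And finally a sketch of a CSI ephemeral volume declared inline in the pod spec. The driver name is purely hypothetical; the CSI driver you use must actually support the ephemeral volume lifecycle mode, and the supported volumeAttributes are driver specific:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-ephemeral-demo          # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      csi:
        driver: inline.csi.example.com   # hypothetical driver that supports ephemeral volumes
        volumeAttributes:
          size: "1Gi"                    # attribute names/values depend on the driver
```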

A few interesting things I found in the Deprecation section:

  • Support for the alpha seccomp annotations seccomp.security.alpha.kubernetes.io/pod and container.seccomp.security.alpha.kubernetes.io, deprecated since 1.19, was partially removed. Kubelets no longer support the annotations, use of the annotations in static pods is no longer supported, and the seccomp annotations are no longer auto-populated when pods with seccomp fields are created. Auto-population of the seccomp fields from the annotations is planned to be removed in 1.27. Pods should use the corresponding pod or container securityContext.seccompProfile field instead (see the example after this list).
  • The beta PodSecurityPolicy admission plugin, deprecated since 1.21, is removed (also see above).
  • The in-tree volume plugins GlusterFS and Portworx are deprecated in 1.25.
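
As a sketch of what that seccomp migration looks like in practice (pod name and image are made up), the deprecated annotation is replaced by the securityContext.seccompProfile field:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo                # hypothetical name
  # Deprecated and no longer supported by the kubelet:
  # annotations:
  #   seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault          # use the container runtime's default seccomp profile
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
```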

A few interesting API Changes also took place of course:

  • There is a new feature gate ContainerCheckpoint that enables support for checkpointing containers. If enabled, it is possible to checkpoint a container using the new kubelet API (/checkpoint/{podNamespace}/{podName}/{containerName}).
  • Alpha support for user namespaces in pods phase 1 (KEP 127, feature gate: UserNamespacesStatelessPodsSupport)
  • Promoted CronJob’s TimeZone support to beta (see the CronJob example after this list)
  • Promoted StatefulSet minReadySeconds to GA
  • Promoted DaemonSet MaxSurge to GA
  • The endPort field in Network Policy is now promoted to GA. (Please be aware that the endPort field MUST be supported by your Network Policy provider! See the NetworkPolicy example after this list.)
  • The PodSecurity admission plugin has graduated to GA and is enabled by default (see above)
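
Here a quick sketch of the CronJob TimeZone support (the job itself is just a dummy). Since the feature is beta in 1.25 the CronJobTimeZone feature gate should be enabled by default:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup              # hypothetical name
spec:
  schedule: "30 2 * * *"
  timeZone: "Europe/Berlin"         # interpret the schedule in this time zone instead of UTC
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: busybox:1.36
              command: ["sh", "-c", "echo backup done"]
```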
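
And a sketch of the endPort field in a NetworkPolicy (names, namespace and the port range are made up). Again: your Network Policy provider must support endPort, otherwise this won’t work as expected:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-port-range     # hypothetical name
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768            # allows the whole TCP port range 32000-32768
```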

And finally the interesting Features:

  • Graduated SeccompDefault to beta. The kubelet feature gate is now enabled by default, but the configuration option/CLI flag still defaults to false (see the kubelet configuration example after this list)
  • Update CoreDNS to v1.9.3
  • etcd: Update to v3.5.5
  • kubectl diff now ignores managed fields by default, and a new --show-managed-fields flag has been added to allow you to include managed fields in the diff
  • Updated cAdvisor to v0.45.0
  • maxUnavailable for StatefulSets allows faster rolling updates by taking down more than one pod at a time. The number of pods you want to take down during a RollingUpdate is configurable using the maxUnavailable parameter (see the StatefulSet example after this list).
  • Feature gates that graduated to GA in 1.23 or earlier and were unconditionally enabled have been removed: CSIServiceAccountToken, ConfigurableFSGroupPolicy, EndpointSlice, EndpointSliceNodeName, EndpointSliceProxying, GenericEphemeralVolume, IPv6DualStack, IngressClassNamespacedParams, StorageObjectInUseProtection, TTLAfterFinished, VolumeSubpath, WindowsEndpointSliceProxying.
  • Improved kubectl run and kubectl debug error messages upon attaching failures.
  • In the event that more than one IngressClass is designated “default”, the DefaultIngressClass admission controller will choose one rather than fail.
  • Cleaning up IPTables Chain Ownership - Starting with v1.25, the Kubelet will gradually move towards not creating the following iptables chains in the nat table: KUBE-MARK-DROP, KUBE-MARK-MASQ and KUBE-POSTROUTING.

Latest supported and tested etcd version for Kubernetes v1.25 is v3.5.5.

If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container has a compatibility matrix that tells you which version you need at a minimum, which version is the maximum supported and which version is recommended for a given K8s version.
Nevertheless, if your K8s update to v1.25 worked fine, I would recommend updating the CSI sidecar containers sooner or later as well.

Here are a few links that might be interesting regarding the new features in Kubernetes 1.25:

Now I finally upgraded the K8s controller and worker nodes to version 1.25.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.

That’s it for today! Happy upgrading! ;-)