Kubernetes upgrade notes: 1.24.x to 1.25.x
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe for others that manage a K8s cluster on their own too). I’ll only mention changes that are either interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or relevant if you manage your own bare-metal/VM based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.
I have a general upgrade guide Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes that worked quite well for me for the past K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.25.x upgrade and WHAT was interesting for me.
As usual I don’t update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Nevertheless it’s important to test new releases (and even beta or release candidates if possible) early in development environments and report bugs!
I only upgrade from the latest version of the former major release. At the time of writing this blog post 1.24.9 was the latest 1.24.x release. After reading the 1.24 CHANGELOG to figure out if any important changes were made between my current 1.24.x version and the latest 1.24.9 release, I didn’t see anything that prevented me from updating, and I didn’t need to change anything.
So I did the 1.24.9 update first. If you use my Ansible roles that basically just means setting the `k8s_release` variable to `1.24.9` and deploying the changes for the control plane and worker nodes as described in my upgrade guide.
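Just as a minimal sketch of what that change looks like (the file path below is only an example; the `k8s_release` variable comes from my Ansible roles):

```yaml
# group_vars/k8s.yml (example path - put it wherever you keep your cluster vars)
# Version of the Kubernetes binaries my roles should deploy:
k8s_release: "1.24.9"
```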
After that everything still worked as expected so I continued with the next step.
As it’s normally no problem to have a `kubectl` utility that is only one major version ahead of the server version, I updated `kubectl` from the latest 1.24.x to the latest 1.25.x release using my kubectl Ansible role.
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)
As always before a major upgrade, read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, there should be no need to adjust any settings. For the K8s 1.25 release I actually couldn’t find any urgent notes that were relevant for my Ansible roles or my own on-prem setup. But nevertheless there are three notes that might be noteworthy for some people:
- The 1.25 release stopped serving the following deprecated API version: RuntimeClass in `node.k8s.io/v1beta1` (see the example after this list). For more information see the Deprecated API migration guide.
- The in-tree volume plugins `flocker`, `quobyte` and `storageos` were completely removed from Kubernetes.
- There is a new OCI image registry (registry.k8s.io) that can be used to pull Kubernetes images. The old registry (k8s.gcr.io) will continue to be supported for the foreseeable future, but the new name should perform better because it frontends equivalent mirrors in other clouds.
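Regarding the removed API version mentioned in the first note: if you still have RuntimeClass manifests around, migrating is just a matter of switching the `apiVersion`. A minimal sketch (name and handler are examples):

```yaml
# node.k8s.io/v1beta1 is no longer served in 1.25 - use node.k8s.io/v1:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor    # example name
handler: runsc    # example handler; must match your container runtime config
```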
In What’s New (Major Themes) I found the following highlights that look most important to me:
- Pod Security Admission graduates to stable. If you are currently relying on PodSecurityPolicy, please follow the instructions for migration to Pod Security Admission (see the sketch after this list).
- Ephemeral Containers graduate to stable. That’s actually a very useful feature when it comes to debugging pod issues where `kubectl exec` won’t work because the running containers don’t contain any debug tools (which is best practice anyway).
- Support for cgroups v2 Graduates to Stable (also Kubernetes 1.25: cgroup v2 graduates to GA)
- Promoted SeccompDefault to beta. I’ve written about that already in my Kubernetes 1.23 upgrade notes when this feature was in alpha state. As this is a really useful security related feature, please also check Enable default seccomp profile which was part of the upgrade notes mentioned above. It contains some more useful information, especially how to test once this feature is enabled.
- Promoted Local Ephemeral Storage Capacity Isolation to stable. It provides support for capacity isolation of local ephemeral storage between pods, such as EmptyDir, so that a pod can be hard limited in its consumption of shared resources by evicting Pods if its consumption of local ephemeral storage exceeds that limit.
- Promoted CSI Ephemeral Volume to stable. The CSI Ephemeral Volume feature allows CSI volumes to be specified directly in the pod specification for ephemeral use cases.
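Regarding Pod Security Admission mentioned above: since the admission plugin is enabled by default in 1.25, enforcing one of the Pod Security Standards for a namespace is just a matter of setting labels. A minimal sketch (the namespace name is an example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app    # example namespace
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard:
    pod-security.kubernetes.io/enforce: restricted
    # Additionally warn the user and add an audit log entry on violations:
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```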
A few interesting things I’ve found in Deprecation:
- Support for the alpha seccomp annotations `container.seccomp.security.alpha.kubernetes.io`, deprecated since 1.19, was partially removed. Kubelets no longer support the annotations, use of the annotations in static pods is no longer supported, and the seccomp annotations are no longer auto-populated when pods with seccomp fields are created. Auto-population of the seccomp fields from the annotations is planned to be removed in 1.27. Pods should use the corresponding pod or container `securityContext.seccompProfile` field instead (see the sketch after this list).
- The beta `PodSecurityPolicy` admission plugin, deprecated since 1.21, is removed (also see above).
- The in-tree `Portworx` volume plugin was deprecated in 1.25.
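As mentioned in the seccomp deprecation note above, pods should use the `securityContext` field instead of the old annotations. A minimal sketch (pod name and image are examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo    # example name
spec:
  securityContext:
    # Replaces the deprecated seccomp.security.alpha.kubernetes.io annotations:
    seccompProfile:
      type: RuntimeDefault    # use the container runtime's default profile
  containers:
    - name: app
      image: nginx:1.23    # example image
```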
A few interesting API Changes also took place of course:
- New feature gate `CheckpointRestore` to enable support for checkpointing containers. If enabled it is possible to checkpoint a container using the new kubelet API.
- Alpha support for user namespaces in pods phase 1 (KEP 127, feature gate: `UserNamespacesStatelessPodsSupport`)
- Promoted CronJob’s TimeZone support to beta
- The `endPort` field in Network Policy is now promoted to GA (please be aware that the `endPort` field MUST BE SUPPORTED by the Network Policy provider! See the example after this list.)
- The `PodSecurity` admission plugin has graduated to GA and is enabled by default (see above)
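Here is a minimal NetworkPolicy sketch using the now GA `endPort` field (names and CIDR are examples; again, your Network Policy provider has to support it):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-port-range    # example name
  namespace: my-app                # example namespace
spec:
  podSelector: {}    # selects all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24    # example CIDR
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768    # allows the whole port range 32000-32768
```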
And finally the interesting Features:
- Promoted `SeccompDefault` to beta. The kubelet feature gate is now enabled by default and the configuration/CLI flag still defaults to false
- etcd: Updated to v3.5.4
- `kubectl diff` changed to ignore managed fields by default, and a new `--show-managed-fields` flag has been added to allow you to include managed fields in the diff
- StatefulSets: allows faster `RollingUpdate` by taking down more than 1 pod at a time. The number of pods you want to take down during a `RollingUpdate` is configurable using `maxUnavailable` (see the sketch after this list)
- Feature gates that graduated to GA in 1.23 or earlier and were unconditionally enabled have been removed: `CSIServiceAccountToken`, `ConfigurableFSGroupPolicy`, `EndpointSlice`, `EndpointSliceNodeName`, `EndpointSliceProxying`, `GenericEphemeralVolume`, `IPv6DualStack`, `IngressClassNamespacedParams`, `StorageObjectInUseProtection`, `TTLAfterFinished`, `VolumeSubpath`, `WindowsEndpointSliceProxying`
- Improved `kubectl debug` error messages upon attaching failures
- In the event that more than one `IngressClass` is designated “default”, the DefaultIngressClass admission controller will choose one rather than fail
- Cleaning up IPTables Chain Ownership - Starting with v1.25, the kubelet will gradually move towards not creating the following iptables chains in the nat table: `KUBE-MARK-DROP`, `KUBE-MARK-MASQ` and `KUBE-POSTROUTING`
- Latest supported and tested etcd version for Kubernetes 1.25 is v3.5.4
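As promised above, here is a sketch of the faster StatefulSet `RollingUpdate` (names and image are examples; the `MaxUnavailableStatefulSet` feature gate has to be enabled for this to work):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web    # example name
spec:
  replicas: 5
  serviceName: web
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Take down up to 2 pods at a time during a rolling update
      # instead of the default 1:
      maxUnavailable: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx:1.23    # example image
```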
If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container page contains a matrix of which version you need at minimum, which is the maximum supported one, and which version is recommended for whatever K8s version you run.
Nevertheless, if your K8s update to v1.25 worked fine, I’d recommend updating the CSI sidecar containers too sooner or later.
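Updating a sidecar usually just means bumping the image tag of the relevant container in the CSI driver’s Deployment or DaemonSet. A hypothetical fragment (the tag below is just an example; check the compatibility matrix for the correct one):

```yaml
# Fragment of a CSI controller Deployment (hypothetical):
spec:
  template:
    spec:
      containers:
        - name: csi-provisioner
          # Bump the tag to a release recommended for your K8s version:
          image: registry.k8s.io/sig-storage/csi-provisioner:v3.2.1  # example tag
```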
Here are a few links that might be interesting regarding the new features in Kubernetes 1.25:
- Kubernetes v1.25: Combiner
- Kubernetes Removals and Major Changes In 1.25
- Kubernetes v1.25: Pod Security Admission Controller in Stable
- PodSecurityPolicy: The Historical Context
- The Hitchhiker’s Guide to Pod Security - Lachlan Evenson, Microsoft - Video
- Kubernetes Version 1.25: An Overview
- Kubernetes 1.25: alpha support for running Pods with user namespaces. This is a major improvement for running secure workloads in Kubernetes. Each pod will have access only to a limited subset of the available UIDs and GIDs on the system, thus adding a new security layer to protect from other pods running on the same system.
- Kubernetes version 1.25 – everything you should know
Now I finally upgraded the K8s controller and worker nodes to version 1.25.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
That’s it for today! Happy upgrading! ;-)