Kubernetes upgrade notes: 1.25.x to 1.26.x

If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe for others too who manage a K8s cluster on their own). I’ll only mention changes that are either interesting for most K8s administrators anyway (even those who run a fully managed Kubernetes deployment) or relevant if you manage your own bare-metal/VM-based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.

I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.25.x to 1.26.x upgrade and WHAT was interesting for me.

As usual I don’t update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Nevertheless it’s important to test new releases (and even betas or release candidates if possible) in development environments early and report bugs!

I only upgrade from the latest version of the former major release. At the time of writing this blog post, 1.25.9 was the latest 1.25.x release. After reading the 1.25 CHANGELOG to figure out if any important changes were made between my current 1.25.x version and the latest 1.25.9 release, I didn’t see anything that prevented me from updating, and I didn’t need to change anything.

So I did the 1.25.9 update first. If you use my Ansible roles, that basically only means changing the k8s_release variable from 1.25.x to 1.25.9 and deploying the changes for the control plane and worker nodes as described in my upgrade guide.
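
Just as a minimal sketch, assuming you keep that variable in a group_vars file (the file name below is just an example):

```yaml
# group_vars/all.yml (file name is just an example)
# First bump to the latest 1.25.x patch release and roll it out.
# Only after verifying that everything still works, bump this
# to the wanted 1.26.x release for the actual upgrade.
k8s_release: "1.25.9"
```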

After that everything still worked as expected, so I continued with the next step.

As it’s normally no problem to have a kubectl utility that is one minor release ahead of the server version, I updated kubectl from 1.25.x to the latest 1.26.x using my kubectl Ansible role.

Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)

As always before a major upgrade, read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, there should be no need to adjust any settings. For the K8s 1.26 release I actually couldn’t find any urgent notes that were relevant for my Ansible roles or my own on-prem setup. Nevertheless there are three notes that might be noteworthy for some people:

  • The GlusterFS in-tree storage driver, which was deprecated in the Kubernetes v1.25 release, has been removed entirely in v1.26.
  • As with most major Kubernetes releases, some APIs were removed. See the Deprecated API Migration Guide for v1.26. Especially the HorizontalPodAutoscaler autoscaling/v2beta2 API version is no longer served as of v1.26. Migrate to autoscaling/v2 (a sketch follows after this list). Also migrate from flowcontrol.apiserver.k8s.io/v1beta1 to flowcontrol.apiserver.k8s.io/v1beta3 if you use that API.
  • The in-tree cloud provider for OpenStack (and the cinder volume provider) has been removed.
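
For most HPA manifests the migration from autoscaling/v2beta2 to autoscaling/v2 should only require changing the apiVersion field. Here is a minimal sketch of an HPA that already uses autoscaling/v2 (the Deployment name and the numbers are just placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp              # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp            # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale up above 80% average CPU
```

If in doubt, the kubectl convert plugin can help with converting existing manifests between API versions.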

In What’s New (Major Themes) I found the following highlights that look most important to me:

  • New container image registry registry.k8s.io
  • CRI v1alpha2 removed: This means that containerd minor version 1.5 and older are not supported in Kubernetes 1.26; if you use containerd, you will need to upgrade to containerd version 1.6.0 or later before you upgrade that node to Kubernetes v1.26.
  • Delegate FSGroup to CSI Driver graduated to stable: Starting with this release, CSI drivers have the option to apply the fsGroup settings during attach or mount time of the volumes.
  • CEL in Admission Control graduates to alpha: Validating admission policies offer a declarative, in-process alternative to validating admission webhooks. The ValidatingAdmissionPolicy describes the abstract logic of a policy (think: “this policy makes sure a particular label is set to a particular value”). A ValidatingAdmissionPolicyBinding links the above resources together and provides scoping. If you only want to require an owner label to be set for Pods, the binding is where you would specify this restriction. A parameter resource provides information to a ValidatingAdmissionPolicy to make it a concrete statement (think “the owner label must be set to something that ends in .company.com”). A native type such as ConfigMap or a CRD defines the schema of a parameter resource. ValidatingAdmissionPolicy objects specify what Kind they are expecting for their parameter resource (a sketch follows after this list).
  • Pod scheduling improvements: PodSchedulingReadiness graduates to alpha. This feature introduces a .spec.schedulingGates field to Pod’s API, to indicate whether the Pod is allowed to be scheduled or not. External users/controllers can use this field to hold a Pod from scheduling based on their policies and needs.
  • Service objects now support spec.internalTrafficPolicy to optimize in-cluster traffic: setting it to Local restricts clusterIP routing to node-local endpoints. Valid values are Cluster and Local (see KEP-2086).
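
A minimal sketch of what such a validating admission policy and its binding could look like (close to the example in the upstream documentation; the policy name, the replica limit and the namespace label are placeholders, and the ValidatingAdmissionPolicy feature gate plus the admissionregistration.k8s.io/v1alpha1 API need to be enabled as mentioned further below):

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-policy.example.com
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated in-process by the kube-apiserver:
    - expression: "object.spec.replicas <= 5"
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: demo-binding.example.com
spec:
  policyName: demo-policy.example.com
  # Scope the policy to namespaces labeled environment=test:
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test
```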

A few interesting things can also be found in the Deprecation section of the CHANGELOG (also see Kubernetes Removals, Deprecations, and Major Changes in 1.26).

A few interesting API changes (and other notable changes) also took place, of course:

  • Added a ResourceClaim API (in the resource.k8s.io/v1alpha1 API group and behind the DynamicResourceAllocation feature gate). The new API is now more flexible than the existing Device Plugins feature of Kubernetes because it allows Pods to request (claim) special kinds of resources, which can be available at node level, cluster level, or following any other model you implement.
  • Container preStop and postStart lifecycle handlers using httpGet now honor the specified scheme and headers fields.
  • PodDisruptionBudget now adds an alpha spec.unhealthyPodEvictionPolicy field. When the PDBUnhealthyPodEvictionPolicy feature gate is enabled in kube-apiserver, setting this field to AlwaysAllow allows pods to be evicted if they do not have a ready condition, regardless of whether the PodDisruptionBudget is currently healthy (a sketch follows after this list).
  • Added an auth API to get self subject attributes (a new selfsubjectreviews API was added). The corresponding kubectl command is kubectl auth whoami.
  • Added a feature that allows a StatefulSet to start numbering replicas from an arbitrary non-negative ordinal, using the .spec.ordinals.start field (a sketch follows after this list).
  • Added a kube-proxy flag (--iptables-localhost-nodeports, default true) to allow disabling NodePort services on loopback addresses. Note: this only applies to iptables mode and IPv4.
  • In kube-proxy: The userspace proxy mode (deprecated for over a year) is no longer supported on either Linux or Windows. Users should use iptables or ipvs on Linux, or kernelspace on Windows.
  • Introduced v1alpha1 API for validating admission policies, enabling extensible admission control via CEL expressions (KEP 3488: CEL for Admission Control). To use, enable the ValidatingAdmissionPolicy feature gate and the admissionregistration.k8s.io/v1alpha1 API via --runtime-config (also see above).
  • The kubelet external Credential Provider feature moved to GA. The Credential Provider Plugin and Credential Provider Config APIs were updated from v1beta1 to v1 with no API changes.
  • A new Pod API field .spec.schedulingGates was introduced to enable users to control when a Pod is considered ready for scheduling (a sketch follows after this list).
  • DynamicKubeletConfig feature gate has been removed from the API server. Dynamic kubelet reconfiguration now can’t be used even when older nodes are still attempting to rely on it. This is aligned with the Kubernetes version skew policy.
  • The kubectl wait command with the jsonpath flag now waits for the target path until the timeout is reached.
  • Added selector validation to HorizontalPodAutoscaler: when multiple HPAs select the same set of Pods, scaling now will be disabled for those HPAs with the reason AmbiguousSelector. This change also covers a case when multiple HPAs point to the same deployment.
  • Added categories column to the kubectl api-resources command’s wide output (-o wide). Added --categories flag to the kubectl api-resources command, which can be used to filter the output to show only resources belonging to one or more categories.
  • Graduated Kubelet CPU Manager to GA.
  • Graduated Kubelet Device Manager to GA.
  • If more than one StorageClass is designated as default (via the storageclass.kubernetes.io/is-default-class annotation), the newest one is chosen instead of throwing an error.
  • kubectl shell completions for the bash shell now include descriptions.
  • Promoted kubectl alpha events to kubectl events.
  • Shell completion now shows plugin names when appropriate. Furthermore, shell completion will work for plugins that provide such support.
  • The ExpandedDNSConfig feature has graduated to beta and is enabled by default. Note that this feature requires container runtime support.
  • The spec.internalTrafficPolicy field in Service, which allows clusterIP routing to be node local, graduated to GA (also see above).
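
A minimal sketch of the new PodDisruptionBudget field mentioned above (the PDBUnhealthyPodEvictionPolicy feature gate must be enabled in kube-apiserver; name, labels and numbers are placeholders):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb          # placeholder name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp           # placeholder label
  # Alpha in v1.26: allow evicting pods that never became ready
  # even if the budget above would otherwise block the eviction.
  unhealthyPodEvictionPolicy: AlwaysAllow
```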
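
A minimal sketch of the new StatefulSet start ordinal (alpha in v1.26 behind the StatefulSetStartOrdinal feature gate; all names and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp              # placeholder name
spec:
  serviceName: myapp       # placeholder headless Service name
  replicas: 3
  # Alpha in v1.26: replicas will be named myapp-5, myapp-6 and
  # myapp-7 instead of starting at ordinal 0.
  ordinals:
    start: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9   # placeholder image
```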
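
And finally a minimal sketch of the new .spec.schedulingGates field (the PodSchedulingReadiness feature gate, alpha in v1.26, must be enabled). The Pod below stays unscheduled until something, e.g. an external controller, removes the gate (the gate name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod          # placeholder name
spec:
  # The scheduler will not consider this Pod until the gate
  # below has been removed from the list:
  schedulingGates:
    - name: example.com/wait-for-quota   # placeholder gate name
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
```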

etcd stays at v3.5.5 as it was for Kubernetes v1.25.

If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container page contains a compatibility matrix that shows the minimum, maximum and recommended version to use with a given K8s version.

Nevertheless, if your K8s update to v1.26 worked fine, I would recommend also updating the CSI sidecar containers sooner or later.

By using Trivy to scan your manifests for misconfigurations, or the Trivy Operator for your cluster, you will also get information about deprecated Kubernetes resources.

Here are a few links that might be interesting regarding the new features in Kubernetes 1.26:

Now I finally upgraded the K8s controller and worker nodes to version 1.26.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.

That’s it for today! Happy upgrading! ;-)