Kubernetes upgrade notes: 1.25.x to 1.26.x
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage their own K8s cluster). I'll only mention changes that might be relevant: either because they are interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment), or because they matter if you manage your own bare-metal/VM-based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.
I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the past K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the `1.26.x` upgrade and WHAT was interesting for me.
As usual I don't update a production system before the `.2` release of a new major version is out. In my experience the `.1` releases are just too buggy. Nevertheless it's important to test new releases (and even beta or release candidates if possible) early in development environments and report bugs!
I only upgrade from the latest version of the former major release. At the time of writing this blog post, `1.25.9` was the latest `1.25.x` release. After reading the 1.25 CHANGELOG to figure out if any important changes were made between my current `1.25.x` version and the latest `1.25.9` release, I didn't see anything that prevented me from updating, and I didn't need to change anything.
So I did the `1.25.9` update first. If you use my Ansible roles, that basically only means changing the `k8s_release` variable to `1.25.9` and deploying the changes for the control plane and worker nodes as described in my upgrade guide.
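With my Ansible roles the version bump boils down to a one-line variable change; a sketch (the file path in the comment is just an example, adjust it to your inventory layout):

```yaml
# e.g. in a group_vars file of your inventory (example location)
# Bump this value and re-run the playbooks for the control plane
# and worker nodes:
k8s_release: "1.25.9"
```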
After that everything still worked as expected so I continued with the next step.
As it's normally no problem to have a newer `kubectl` utility that is only one major version ahead of the server version, I updated `kubectl` from `1.25.x` to the latest `1.26.x` using my kubectl Ansible role.
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)
As always before a major upgrade: read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and used most of the default settings, then there should be no need to adjust any settings. For the K8s `1.26` release I actually couldn't find any urgent notes that were relevant for my Ansible roles or my own on-prem setup. But nevertheless there are three notes that might be noteworthy for some people:
- The `GlusterFS` in-tree storage driver, which was deprecated in the Kubernetes `v1.25` release, has now been removed entirely in the `v1.26` release.
- As with most major Kubernetes releases, some APIs were removed. See the Deprecated API Migration Guide for v1.26. Especially the `autoscaling/v2beta2` API version is no longer served as of `v1.26`. Migrate to `autoscaling/v2`. Also migrate to `flowcontrol.apiserver.k8s.io/v1beta3` if you use that API.
- The in-tree cloud provider for OpenStack (and the cinder volume provider) has been removed.
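For the `autoscaling/v2` migration, a `HorizontalPodAutoscaler` manifest mainly needs its `apiVersion` bumped and its metrics section restructured. A minimal sketch (all names are made up):

```yaml
apiVersion: autoscaling/v2          # was: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                    # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80      # scale out above 80% average CPU usage
```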
In What's New (Major Themes) I found the following highlights that look most important to me:
- New container image registry: container images for Kubernetes are now published to `registry.k8s.io` instead of `k8s.gcr.io`.
- CRI v1alpha2 removed: This means that containerd `1.5` and older are not supported in Kubernetes `1.26`; if you use containerd, you will need to upgrade to containerd `1.6.0` or later before you upgrade that node to Kubernetes `1.26`.
- Delegate FSGroup to CSI Driver graduated to stable: Starting with this release, CSI drivers have the option to apply the `fsGroup` settings during attach or mount time of the volumes.
- CEL in Admission Control graduates to alpha: Validating admission policies offer a declarative, in-process alternative to validating admission webhooks. A `ValidatingAdmissionPolicy` describes the abstract logic of a policy (think: "this policy makes sure a particular label is set to a particular value"). A `ValidatingAdmissionPolicyBinding` links the above resources together and provides scoping. If you only want to require an `owner` label to be set for `Pods`, the binding is where you would specify this restriction. A parameter resource provides information to a `ValidatingAdmissionPolicy` to make it a concrete statement (think "the owner label must be set to something that ends in `.company.com`"). A native type or a CRD defines the schema of a parameter resource. `ValidatingAdmissionPolicy` objects specify what `Kind` they are expecting for their parameter resource.
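To make the `owner` label example a bit more concrete, here is a rough sketch of such a policy and its binding, assuming the `admissionregistration.k8s.io/v1alpha1` API and the feature gate are enabled (all names, labels and selectors are made up):

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-owner-label          # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  # CEL expression: reject Pods that don't carry an "owner" label
  - expression: "has(object.metadata.labels) && 'owner' in object.metadata.labels"
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-owner-label-binding  # hypothetical name
spec:
  policyName: require-owner-label
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test            # made-up scoping label
```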
- Pod scheduling improvements: Pod scheduling readiness graduated to alpha. This feature introduces a `.spec.schedulingGates` field to the Pod API, to indicate whether the Pod is allowed to be scheduled or not. External users/controllers can use this field to hold a Pod from scheduling based on their policies and needs. Also `Service` objects can now set `spec.internalTrafficPolicy` to optimize cluster traffic. Valid values are `Cluster` and `Local`.
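A Pod using a scheduling gate might look like the following sketch; the gate name is made up, and the alpha `PodSchedulingReadiness` feature gate must be enabled. The Pod is kept out of scheduling until some controller removes the gate entry:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                      # hypothetical name
spec:
  schedulingGates:
  - name: example.com/wait-for-quota  # made-up gate; remove it to let the Pod schedule
  containers:
  - name: app
    image: nginx:1.25                 # example image
```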
A few interesting things I’ve found in Deprecation (also see Kubernetes Removals, Deprecations, and Major Changes in 1.26):
- CRI v1alpha2 removed: see above and here
- Removal of `flowcontrol.apiserver.k8s.io/v1beta1`: see above and here and here.
- Removal of kube-proxy userspace modes
- Removal of dynamic kubelet configuration
- `kube-apiserver`: the unused `--master-service-namespace` flag was deprecated and will be removed in `v1.27`. See Deprecations for kube-apiserver command line arguments.
- Several `kubectl run` flags were deprecated; they are ignored if set.
A few interesting API Changes also took place, of course:
- Added a ResourceClaim API (in the `resource.k8s.io/v1alpha1` API group and behind the `DynamicResourceAllocation` feature gate). The new API is more flexible than the existing Device Plugins feature of Kubernetes because it allows Pods to request (claim) special kinds of resources, which can be available at node level, cluster level, or following any other model you implement.
- `postStart` lifecycle handlers using `httpGet` now honor the specified scheme and headers fields.
- `PodDisruptionBudget` now adds an alpha `spec.unhealthyPodEvictionPolicy` field. When the `PDBUnhealthyPodEvictionPolicy` feature gate is enabled in `kube-apiserver`, setting this field to `AlwaysAllow` allows pods to be evicted if they do not have a ready condition, regardless of whether the `PodDisruptionBudget` is currently healthy.
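A sketch of how that field could look in a `PodDisruptionBudget` (names and labels are made up; requires the `PDBUnhealthyPodEvictionPolicy` feature gate):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb                       # hypothetical name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app                        # made-up label
  # Alpha in 1.26: allow evicting not-ready pods even if the PDB is not healthy
  unhealthyPodEvictionPolicy: AlwaysAllow
```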
- Added an auth API to get self subject attributes (a new `selfsubjectreviews` API was added). The corresponding `kubectl` command is provided as well: `kubectl auth whoami`.
- Added a feature that allows a `StatefulSet` to start numbering replicas from an arbitrary non-negative ordinal, using the `.spec.ordinals.start` field (alpha, behind the `StatefulSetStartOrdinal` feature gate).
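A sketch of the start ordinal feature (names are made up; requires the alpha `StatefulSetStartOrdinal` feature gate):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web               # hypothetical name
spec:
  serviceName: web
  replicas: 3
  ordinals:
    start: 5              # replicas get named web-5, web-6, web-7
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25 # example image
```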
- Added a new kube-proxy flag `--iptables-localhost-nodeports` (default `true`) to allow disabling `NodePort` services on loopback addresses. Note: this only applies to iptables mode and IPv4.
- The `userspace` proxy mode (deprecated for over a year) is no longer supported on either Linux or Windows. Users should use `iptables` or `ipvs` on Linux, or `kernelspace` on Windows.
- Added the `admissionregistration.k8s.io/v1alpha1` API for validating admission policies, enabling extensible admission control via CEL expressions (KEP 3488: CEL for Admission Control). To use, enable the `ValidatingAdmissionPolicy` feature gate and the `admissionregistration.k8s.io/v1alpha1` API via `--runtime-config` (also see above).
- The kubelet external Credential Provider feature moved to GA. The Credential Provider Plugin and Credential Provider Config APIs were updated from `v1beta1` to `v1` with no API changes.
- A new Pod API field `.spec.schedulingGates` was introduced to enable users to control when to mark a Pod as ready for scheduling.
- The `DynamicKubeletConfig` feature gate has been removed from the API server. Dynamic kubelet reconfiguration now can't be used even when older nodes are still attempting to rely on it. This is aligned with the Kubernetes version skew policy.
- The `kubectl wait` command with the `jsonpath` flag will wait for the target path until timeout.
- Added selector validation to `HorizontalPodAutoscaler`: when multiple HPAs select the same set of Pods, scaling is now disabled for those HPAs with the reason `AmbiguousSelector`. This change also covers the case when multiple HPAs point to the same deployment.
- Added a categories column to the `kubectl api-resources` command's wide output (`-o wide`). Added a `--categories` flag to the `kubectl api-resources` command, which can be used to filter the output to show only resources belonging to one or more categories.
- Graduated Kubelet CPU Manager to GA.
- Graduated Kubelet Device Manager to GA.
- If more than one `StorageClass` is designated as default (via the `storageclass.kubernetes.io/is-default-class` annotation), the newest one is chosen instead of throwing an error.
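For reference, that annotation on a `StorageClass` looks like this (the class name and provisioner are made up):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.example.com    # made-up CSI provisioner
```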
- `kubectl` shell completions for the bash shell now include descriptions.
- Graduated `kubectl alpha events` to `kubectl events`.
- Shell completion now shows plugin names when appropriate. Furthermore, shell completion will work for plugins that provide such support.
- The `ExpandedDNSConfig` feature has graduated to beta and is enabled by default. Note that this feature requires container runtime support.
- Introduced a new field `spec.internalTrafficPolicy` in `Service` that allows clusterIP routing to be node local.
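A `Service` using that field might look like this sketch (names and labels are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service      # hypothetical name
spec:
  selector:
    app: my-app         # made-up label
  ports:
  - port: 80
  # Only route in-cluster traffic to endpoints on the same node:
  internalTrafficPolicy: Local
```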
etcd stays at `v3.5.5`, as it was for Kubernetes `v1.25`.
If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container has a matrix of which version you need at a minimum and maximum, and which version is recommended for use with a given K8s version. Nevertheless, if your K8s update to `v1.26` worked fine, I would recommend also updating the CSI sidecar containers sooner or later.
By using Trivy to check for misconfigurations in your manifests, or the Trivy Operator for your cluster, you will get information about deprecated Kubernetes resources.
Here are a few links that might be interesting regarding new features in Kubernetes 1.26:
- Kubernetes 1.26: Electrifying
- Kubernetes 1.26 – What’s new?
- Kubernetes Version 1.26: An Overview
- Kubernetes 1.26: Support for Passing Pod fsGroup to CSI Drivers At Mount Time
- Kubernetes 1.26: GA Support for Kubelet Credential Providers
- Kubernetes 1.26: Introducing Validating Admission Policies
- Kubernetes Removals, Deprecations, and Major Changes in 1.26
- Kubernetes 1.26: Alpha support for cross-namespace storage data sources
- Kubernetes 1.26: Eviction policy for unhealthy pods guarded by PodDisruptionBudgets
Now I finally upgraded the K8s controller and worker nodes to version `1.26.x` as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
That’s it for today! Happy upgrading! ;-)