Kubernetes upgrade notes: 1.28.x to 1.29.x
Introduction
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others that manage a K8s cluster on their own). I'll only mention changes that are either interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or relevant if you manage your own bare-metal/VM-based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.
I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the last few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.28.x to 1.29.x upgrade and WHAT was interesting for me.
As usual I don't update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Nevertheless it's important to test new releases (and even betas or release candidates if possible) in development environments early and report bugs!
Important upgrade notes for my Ansible Kubernetes roles
With version 22.0.0+1.27.8 and up of my kubernetes_controller role and version 24.0.0+1.27.8 and up of my kubernetes_worker role quite some refactoring took place. So please read the kubernetes_controller CHANGELOG and kubernetes_worker CHANGELOG carefully if you upgrade from earlier role versions!
If you are already using version 23.0.0+1.28.5 and up of kubernetes_controller and version 25.0.0+1.28.5 and up of kubernetes_worker, you can skip the following text and continue with the next paragraph.
This refactoring was needed to make it possible to have the githubixx.kubernetes_controller and githubixx.kubernetes_worker roles deployed on the same host, for example. There were some intersections between the two roles that had to be fixed. Also, security for kube-apiserver, kube-scheduler and kube-controller-manager was increased by using systemd options that limit the exposure of the system towards the unit's processes.
Basically, if you keep the new defaults of k8s_ctl_conf_dir and k8s_worker_conf_dir, you can delete the following directories after you have upgraded a node to the new role version (see the cleanup sketch after the list):
On the controller nodes:
- /var/lib/kube-controller-manager
- /var/lib/kube-scheduler
On the worker nodes:
- /var/lib/kube-proxy
On both types of nodes:
- /var/lib/kubernetes
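If you want to automate that cleanup, a minimal Ansible sketch could look like the one below. The host group names k8s_controller and k8s_worker are just assumptions here, so adjust them to whatever groups your inventory defines:

```yaml
# Hypothetical cleanup play - adjust host groups to your inventory.
- hosts: k8s_controller
  become: true
  tasks:
    - name: Remove obsolete controller directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - /var/lib/kube-controller-manager
        - /var/lib/kube-scheduler
        - /var/lib/kubernetes

- hosts: k8s_worker
  become: true
  tasks:
    - name: Remove obsolete worker directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - /var/lib/kube-proxy
        - /var/lib/kubernetes
```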
Before this role version there was only k8s_conf_dir: /usr/lib/kubernetes, which was valid for both node types. This variable is gone. The new defaults are k8s_ctl_conf_dir: /etc/kubernetes/controller (for the kubernetes_controller role) and k8s_worker_conf_dir: /etc/kubernetes/worker (for the kubernetes_worker role).
Basically, all kubernetes_controller related variables now start with k8s_ctl_ and all kubernetes_worker related variables with k8s_worker_.
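For illustration, this is roughly how the new defaults would look if you set them explicitly in your group_vars. The values below are just the role defaults mentioned above, so you only need this if you want different paths:

```yaml
# group_vars for the controller hosts
k8s_ctl_conf_dir: "/etc/kubernetes/controller"

# group_vars for the worker hosts
k8s_worker_conf_dir: "/etc/kubernetes/worker"
```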
The kubernetes_worker role contains a Molecule scenario that sets up a fully functional Kubernetes cluster. You don't need to deploy all the VMs, but the Molecule configuration files might give you a good hint about which variables might need to be adjusted for your own deployment.
Also, my containerd role had quite some changes recently with version 0.11.0+1.7.8. So please consult the CHANGELOG of this role too. Especially note that runc and the CNI plugins are no longer installed by this role. Please use the runc role and cni role accordingly.
And finally the etcd role had quite some changes too with version 13.0.0+3.5.9. So please read that CHANGELOG too.
Update to latest current release
I only upgrade from the latest version of the former major release. At the time of writing this blog post, 1.28.8 was the latest 1.28.x release. After reading the 1.28 CHANGELOG to figure out if any important changes were made between the current 1.28.x and the latest 1.28.8 release, I didn't see anything that prevented me from updating, and I didn't need to change anything.
So I did the 1.28.8 update first. If you use my Ansible roles, that basically only means changing the k8s_ctl_release variable from 1.28.x to 1.28.8 (for the controller nodes) and the same for k8s_worker_release (for the worker nodes). Deploy the changes for the control plane and worker nodes as described in my upgrade guide.
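In group_vars that's basically just a one-line change per role, e.g.:

```yaml
# Controller nodes
k8s_ctl_release: "1.28.8"

# Worker nodes
k8s_worker_release: "1.28.8"
```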
After that everything still worked as expected so I continued with the next step.
Upgrading kubectl
As it's normally no problem to have a newer kubectl utility that is only one major version ahead of the server version, I updated kubectl from 1.28.x to the latest 1.29.x using my kubectl Ansible role.
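If you manage kubectl with that role too, the bump is again a single variable. The variable name kubectl_version is an assumption on my part here, so check the role's defaults for the exact name:

```yaml
# Assumed variable name - verify against the kubectl role's defaults.
kubectl_version: "1.29.x"  # replace x with the current 1.29 patch release
```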
Release notes
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! 😉
Urgent Upgrade Notes
As always, before a major upgrade read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, there should be no need to adjust any settings. For the K8s 1.29 release I actually couldn't find any urgent notes that were relevant for my Ansible roles or my own on-prem setup. This time it was only about kubeadm, which is not used in my roles.
Besides that:
- The deprecated flowcontrol.apiserver.k8s.io/v1beta2 API version of FlowSchema and PriorityLevelConfiguration is no longer served in Kubernetes v1.29. If you have manifests or client software that uses the deprecated beta API group, you should change these before you upgrade to v1.29 (see the sketch after this list).
- Kubernetes 1.29: Cloud Provider Integrations Are Now Separate Components (not relevant for on-premise clusters)
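To migrate, it should normally be enough to bump the apiVersion in your manifests to a version that is still served in v1.29 (v1beta3, or the new v1 introduced with v1.29). A minimal sketch of a PriorityLevelConfiguration, with the name and concurrency value being placeholders:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3  # was: flowcontrol.apiserver.k8s.io/v1beta2
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level      # placeholder name
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 10    # called assuredConcurrencyShares in the old v1beta2 API
    limitResponse:
      type: Reject
```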
What’s New (Major Themes)
All important stuff is listed in the Kubernetes v1.29: Mandala release announcement.
Graduated to stable
- ReadWriteOncePod PersistentVolume access mode
- Node volume expansion Secret support for CSI drivers
- KMS v2 encryption at rest generally available
Graduated to beta
- Node lifecycle separated from taint management
- Clean up for legacy Secret-based ServiceAccount tokens
Alpha features
- Define Pod affinity or anti-affinity using matchLabelKey
- NICE nftables backend for kube-proxy
- NICE APIs to manage IP address ranges for Services - Services are an abstract way to expose an application running on a set of Pods. Services can have a cluster-scoped virtual IP address that is allocated from a predefined CIDR defined in the kube-apiserver flags. However, users may want to add, remove, or resize existing IP ranges allocated for Services without having to restart the kube-apiserver (see the ServiceCIDR sketch after this list).
- Add support to containerd/kubelet/CRI to support image pull per runtime class
- In-place updates for Pod resources, for Windows Pods
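The new ServiceCIDR API mentioned above is what makes extending the Service IP range at runtime possible. A minimal sketch (alpha in v1.29, so it needs the MultiCIDRServiceAllocator feature gate enabled; name and CIDR are placeholders):

```yaml
apiVersion: networking.k8s.io/v1alpha1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr      # placeholder name
spec:
  cidrs:
    - 10.96.100.0/24            # additional range for Service ClusterIPs
```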
Further reading
- Kubernetes 1.29: CSI Storage Resizing Authenticated and Generally Available in v1.29 - Kubernetes version v1.29 brings generally available support for authentication during CSI (Container Storage Interface) storage resize operations.
- Kubernetes 1.29: VolumeAttributesClass for Volume Modification - The v1.29 release of Kubernetes introduced an alpha feature to support modifying a volume by changing the volumeAttributesClassName that was specified for a PersistentVolumeClaim (PVC). With the feature enabled, Kubernetes can handle updates of volume attributes other than capacity. Allowing volume attributes to be changed without managing it through different providers' APIs directly simplifies the current flow.
- Kubernetes 1.29: New (alpha) Feature, Load Balancer IP Mode for Services
- Kubernetes 1.29: Single Pod Access Mode for PersistentVolumes Graduates to Stable - ReadWriteOncePod is an access mode for PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) introduced in Kubernetes v1.22. This access mode enables you to restrict volume access to a single pod in the cluster, ensuring that only one pod can write to the volume at a time. This can be particularly useful for stateful workloads that require single-writer access to storage (see the PVC sketch after this list).
- Kubernetes 1.29: Decoupling taint-manager from node-lifecycle-controller
- Kubernetes 1.29: PodReadyToStartContainers Condition Moves to Beta
- Contextual logging in Kubernetes 1.29: Better troubleshooting and enhanced logging
- Kubernetes 1.29: the security perspective
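For reference, using the now stable ReadWriteOncePod access mode is just a matter of requesting it in the PVC. The storage class name below is a placeholder and the volume has to be provided by a CSI driver that supports this access mode:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-data               # placeholder name
spec:
  accessModes:
    - ReadWriteOncePod                    # only one Pod in the cluster may use this volume
  resources:
    requests:
      storage: 1Gi
  storageClassName: my-csi-storageclass   # placeholder - must be backed by a CSI driver
```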
Deprecation
- Creation of new CronJob objects containing TZ or CRON_TZ in .spec.schedule, accidentally enabled in v1.22, is now disallowed. Use the .spec.timeZone field instead, supported in v1.25+ clusters in default configurations (see the sketch below).
For more information see:
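So instead of embedding CRON_TZ or TZ in the schedule, set the dedicated field. A minimal sketch of a CronJob using .spec.timeZone (job name, image, schedule and time zone are just placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-job               # placeholder name
spec:
  schedule: "0 3 * * *"           # no CRON_TZ/TZ prefix here anymore
  timeZone: "Europe/Berlin"       # IANA time zone, supported since v1.25
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: busybox:1.36
              command: ["sh", "-c", "echo 'hello from the CronJob'"]
```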
API changes
- kube-apiserver: adds the --authentication-config flag for reading AuthenticationConfiguration files. The --authentication-config flag is mutually exclusive with the existing --oidc-* flags. The alpha StructuredAuthorizationConfiguration feature flag must be enabled for --authorization-config to be specified.
- kube-scheduler component config (KubeSchedulerConfiguration) kubescheduler.config.k8s.io/v1beta3 is removed in v1.29. Migrate kube-scheduler configuration files to kubescheduler.config.k8s.io/v1.
- A new sleep action for the PreStop lifecycle hook was added, allowing containers to pause for a specified duration before termination (see the sketch after this list).
- Added ImageMaximumGCAge field to the kubelet configuration, which allows a user to set the maximum age an image can be unused before it's garbage collected.
- Added a new ServiceCIDR type that allows to dynamically configure the cluster range used to allocate Service ClusterIP addresses.
- Graduated the Job BackoffLimitPerIndex feature to beta.
- Promoted the PodReadyToStartContainers condition to beta.
- kube-proxy now has a new nftables-based mode, available by running kube-proxy --feature-gates NFTablesProxyMode=true --proxy-mode nftables. This is currently an alpha-level feature and while it probably will not eat your data, it may nibble at it a bit. (It passes e2e testing but has not yet seen real-world use.)
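The new PreStop sleep action mentioned in the list above replaces the common "exec a sleep command" workaround. A minimal sketch (alpha in v1.29, so the PodLifecycleSleepAction feature gate has to be enabled; Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-sleep-demo        # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25           # placeholder image
      lifecycle:
        preStop:
          sleep:
            seconds: 5            # pause 5 seconds before the container is terminated
```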
Features
- Added a new --init-only command line flag to kube-proxy. Setting the flag makes kube-proxy perform its initial configuration that requires privileged mode and then exit. The --init-only mode is intended to be executed in a privileged init container, so that the main container may run with a stricter securityContext.
- Graduated the ReadWriteOncePod feature gate to GA.
- The --interactive flag in kubectl delete is now visible to all users by default.
- kubelet allows pods to use the net.ipv4.tcp_fin_timeout, net.ipv4.tcp_keepalive_intvl and net.ipv4.tcp_keepalive_probes sysctls by default; Pod Security Admission allows these sysctls in v1.29+ versions of the baseline and restricted policies (see the sketch after this list).
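The last item means that a Pod spec like the following sketch should no longer require allowlisting those sysctls via kubelet flags in v1.29; the Pod name, image and values are just examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo               # placeholder name
spec:
  securityContext:
    sysctls:
      - name: net.ipv4.tcp_fin_timeout
        value: "30"               # example value
      - name: net.ipv4.tcp_keepalive_intvl
        value: "60"               # example value
  containers:
    - name: app
      image: nginx:1.25           # placeholder image
```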
Other
- etcd: Updated to v3.5.10
CSI
If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container has a matrix showing which version you need at a minimum, at a maximum, and which version is recommended to use with whatever K8s version.
Nevertheless, if your K8s update to v1.29 worked fine, I would recommend updating the CSI sidecar containers too sooner or later.
Upgrade Kubernetes
Now I finally upgraded the K8s controller and worker nodes to version 1.29.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
That’s it for today! Happy upgrading! 😉