Kubernetes upgrade notes: 1.26.x to 1.27.x

If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage a K8s cluster on their own). I'll only mention changes that are relevant either for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or for those who manage their own bare-metal/VM based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.

I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.26.x to 1.27.x upgrade and WHAT was interesting for me about it.

As usual I don't update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Nevertheless it's important to test new releases (and even betas or release candidates if possible) in development environments early and report bugs!

With version 22.0.0+1.27.8 of my kubernetes_controller role and version 24.0.0+1.27.8 of my kubernetes_worker role quite a bit of refactoring took place. So please read the kubernetes_controller CHANGELOG and the kubernetes_worker CHANGELOG carefully!

This refactoring was needed to make it possible to deploy the githubixx.kubernetes_controller and githubixx.kubernetes_worker roles on the same host, for example. There were some overlaps between the two roles that had to be resolved. Also, security for kube-apiserver, kube-scheduler and kube-controller-manager was increased by using systemd options that limit the exposure of the system to the unit's processes.
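
Just to illustrate what this kind of systemd hardening looks like: below is a hypothetical unit file excerpt using standard systemd sandboxing directives (see systemd.exec(5)). The actual options the roles set may differ, so treat this as a sketch and check the role templates:

```ini
[Service]
# Standard systemd sandboxing options - reduce what the unit's
# processes can see and do on the host (illustrative selection only):
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
ProtectKernelModules=true
ProtectKernelTunables=true
RestrictSUIDSGID=true
```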

Basically, if you keep the new defaults of k8s_ctl_conf_dir and k8s_worker_conf_dir, you can delete the following directories after you have upgraded a node to the new role version (see the example commands after the lists):

On the controller nodes:

  • /var/lib/kube-controller-manager
  • /var/lib/kube-scheduler

On the worker nodes:

  • /var/lib/kube-proxy

On both type of nodes:

  • /var/lib/kubernetes
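
So a possible cleanup, assuming you kept the new defaults and the node works fine after the upgrade, could look like this:

```bash
# On controller nodes:
sudo rm -rf /var/lib/kube-controller-manager /var/lib/kube-scheduler /var/lib/kubernetes

# On worker nodes:
sudo rm -rf /var/lib/kube-proxy /var/lib/kubernetes
```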

Before this role version there was only k8s_conf_dir: /var/lib/kubernetes, which was valid for both node types. This variable is gone. The new defaults are k8s_ctl_conf_dir: /etc/kubernetes/controller (for the kubernetes_controller role) and k8s_worker_conf_dir: /etc/kubernetes/worker (for the kubernetes_worker role).
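
In YAML terms the new defaults are simply (taken from the role defaults as described above):

```yaml
# kubernetes_controller role
k8s_ctl_conf_dir: "/etc/kubernetes/controller"

# kubernetes_worker role
k8s_worker_conf_dir: "/etc/kubernetes/worker"
```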

Basically all kubernetes_controller related variables now start with k8s_ctl_ and all kubernetes_worker related variables with k8s_worker_.

The kubernetes_worker role contains a Molecule scenario that sets up a fully functional Kubernetes cluster. You don't need to deploy all the VMs, but the Molecule configuration files might give you a good hint about which variables need to be adjusted for your own deployment.

My containerd role also had quite a few changes recently with version 0.11.0+1.7.8. So please consult the CHANGELOG of that role too. Especially note that runc and the CNI plugins are no longer installed by this role. Please use the runc role and cni role instead.
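
If you manage your roles via a requirements file, the change could look like the sketch below. I'm assuming the usual githubixx naming scheme on Ansible Galaxy here, so please verify the role names and version tags yourself:

```yaml
# requirements.yml (role names/versions are assumptions - please verify)
roles:
  - name: githubixx.containerd
    version: "0.11.0+1.7.8"
  - name: githubixx.runc
  - name: githubixx.cni
```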

And finally the etcd role had quite a few changes too with version 13.0.0+3.5.9. So please read that CHANGELOG as well.

I only upgrade from the latest version of the former major release. At the time of writing this blog post, 1.26.8 was the latest 1.26.x release. After reading the 1.26 CHANGELOG to figure out if any important changes were made between my current 1.26.x and the latest 1.26.8 release, I didn't see anything that prevented me from updating, and I didn't need to change anything.

So I did the 1.26.8 update first. If you use my Ansible roles, that basically only means changing the k8s_release variable from 1.26.x to 1.26.8 and deploying the changes for the control plane and worker nodes as described in my upgrade guide.
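
In other words, the intermediate step is basically this one-line change in your group_vars/host_vars (or wherever you keep the variable), followed by running the playbooks as usual:

```yaml
k8s_release: "1.26.8"
```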

After that everything still worked as expected so I continued with the next step.

As it's normally no problem to have a kubectl utility that is one major version ahead of the server version, I updated kubectl from 1.26.x to the latest 1.27.x using my kubectl Ansible role.
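
A quick way to verify the version skew after the rollout is to compare client and server versions:

```bash
# Prints both the kubectl (client) and the API server version
kubectl version
```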

Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! 😉

As always before a major upgrade: read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, there should be no need to adjust any settings. For the K8s 1.27 release I actually couldn't find any urgent notes that were relevant for my Ansible roles or my own on-prem setup.

In general K8s 1.27 seems to be quite “conservative” when it comes to breaking changes if you manage a K8s cluster yourself 😉 Two removals are still worth mentioning:

  • CSIStorageCapacity: The storage.k8s.io/v1beta1 API version of CSIStorageCapacity will no longer be served in v1.27. Migrate manifests and API clients to use the storage.k8s.io/v1 API version, available since v1.24.
  • Support for the alpha seccomp annotations seccomp.security.alpha.kubernetes.io/pod and container.seccomp.security.alpha.kubernetes.io, deprecated since v1.19, has now been completely removed. The seccomp fields are no longer auto-populated when pods with seccomp annotations are created. Pods should use the corresponding pod or container securityContext.seccompProfile field instead (see the example below).
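
If you still have pods or templates using the old annotations, the migration basically means moving the value into securityContext. A minimal sketch (pod name and image are just placeholders; the container-level field works the same way):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo          # hypothetical example pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault    # replaces seccomp.security.alpha.kubernetes.io/pod: runtime/default
  containers:
    - name: app
      image: nginx:1.25       # placeholder image
```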

Besides that, here are a few other changes I found interesting (not a complete list, so also see the full CHANGELOG for more information):

  • Added warnings about workload resources (Pods, ReplicaSets, Deployments, Jobs, CronJobs, or ReplicationControllers) whose names are not valid DNS labels.
  • Adds feature gate NodeLogQuery which provides cluster administrators with a streaming view of logs using kubectl without having to implement a client-side reader or log into the node.
  • Encryption of API server data at rest: the configuration now allows the use of wildcards in the list of resources. For example, '*.*' can be used to encrypt all resources, including all current and future custom resources (see the sketch after this list).
  • Graduated Kubelet Topology Manager to GA.
  • Graduated KubeletTracing (Kubelet OpenTelemetry Tracing) to beta, which means that the feature gate is now enabled by default.
  • Graduated seccomp profile defaulting to GA. Also see Enable default seccomp profile and seccomp tutorial
  • Promoted CronJobTimeZone feature to GA
  • The PodDisruptionBudget spec.unhealthyPodEvictionPolicy field has graduated to beta and is enabled by default. On servers with the feature enabled, this field may be set to AlwaysAllow to always allow unhealthy pods covered by the PodDisruptionBudget to be evicted.
  • APIServerTracing feature gate is now enabled by default. Tracing in the API Server is still disabled by default, and requires a config file to enable.
  • PodSpec.Container.Resources became mutable for CPU and memory resource types. That's a pretty interesting in-place Pod vertical scaling feature as it allows changing CPU and memory settings without restarting a pod (see the example after this list). Also see In-Place Update of Pod Resources.
  • kubelet: migrated --container-runtime-endpoint and --image-service-endpoint to kubelet config
  • Allow StatefulSet to control start replica ordinal numbering graduating to beta. Also see Kubernetes 1.27: StatefulSet Start Ordinal Simplifies Migration
  • kubectl will now display SeccompProfile for pods, containers and ephemeral containers, if values were set.
  • Kubelet TCP and HTTP probes now use networking resources (conntrack entries, sockets) more efficiently. This is achieved by reducing the TIME-WAIT state of the connection to 1 second instead of the default 60 seconds. This allows the kubelet to free the socket and the associated conntrack entry and ephemeral port much sooner.
  • kubectl now uses HorizontalPodAutoscaler v2 by default.
  • kubelet: remove deprecated flag --container-runtime
  • CoreDNS: Updated to v1.10.1
  • etcd: Updated to v3.5.7
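
Regarding the wildcard support in the encryption-at-rest configuration mentioned above: a minimal EncryptionConfiguration sketch could look like the one below. The key material is of course just a placeholder, and the provider choice (aescbc here) is only an example:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - '*.*'          # wildcard: all resources, incl. current and future CRDs
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>   # placeholder
      - identity: {}   # fallback so still-unencrypted data stays readable
```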
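
And for the in-place Pod vertical scaling: keep in mind this is an alpha feature in 1.27, so the InPlacePodVerticalScaling feature gate needs to be enabled first. A minimal sketch of a pod that allows resizing CPU and memory without a container restart (pod name and image are just placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo       # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired   # resize CPU without restart
        - resourceName: memory
          restartPolicy: NotRequired   # resize memory without restart
      resources:
        requests:
          cpu: 250m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
```

With the feature gate enabled you should then be able to change the resource values of the running pod with a normal pod patch instead of recreating it.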

If you use CSI then also check the CSI Sidecar Containers documentation. For every sidecar container there is a matrix that tells you which version you need at a minimum, which at a maximum, and which version is recommended for a given K8s version. Nevertheless, if your K8s update to v1.27 worked fine, I'd recommend updating the CSI sidecar containers too sooner or later.

Now I finally upgraded the K8s controller and worker nodes to version 1.27.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
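
Again this basically boils down to one variable change, followed by rolling through the control plane and worker nodes one by one (1.27.8 matches the role versions mentioned at the beginning):

```yaml
k8s_release: "1.27.8"
```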

That’s it for today! Happy upgrading! 😉