Kubernetes upgrade notes: 1.20.x to 1.21.x

If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage a K8s cluster on their own, e.g. on-premises).

I have a general upgrade guide Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes that has worked quite well for me for the past K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.20.x to 1.21.x upgrade and WHAT was interesting for me about it.

First: As usual I don’t update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy (and to be honest, sometimes it’s even better to wait for the .5 release ;-) ). Of course it is still important to test new releases early in development or integration systems and report bugs!

Second: I only upgrade from the latest version of the former major release. In my case I was running 1.20.8, and at the time of writing this text 1.20.10 was the latest 1.20.x release. After reading the 1.20.x changelog to see if any important changes were made between 1.20.8 and 1.20.10, I didn’t see anything that prevented me from updating, and I didn’t need to change anything. So I did the 1.20.8 to 1.20.10 upgrade first. If you use my Ansible roles that basically only means changing the k8s_release variable from 1.20.8 to 1.20.10 and rolling the changes out for the control plane and worker nodes as described in my upgrade guide. After that everything still worked as expected, so I continued with the next step.
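
With my roles the version bump really is just that one variable. A minimal sketch, assuming your cluster-wide variables live in a file like group_vars/all.yml (the file location is an assumption, use whatever your inventory layout dictates):

    # group_vars/all.yml (hypothetical location)
    # Bump this and re-run the control plane and worker playbooks:
    k8s_release: "1.20.10"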

Here are a few links that might be interesting regarding the new features in Kubernetes 1.21:

  • Kubernetes 1.21 CHANGELOG
  • Kubernetes 1.21: Power to the Community
  • What’s new in Kubernetes 1.21 - SysDig blog

Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)

As it is normally no problem to have a kubectl utility that is only one (major) version ahead of the server version, I also updated kubectl from 1.20.x to 1.21.x using my kubectl Ansible role.

As always: Before a major upgrade read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, there should be no need to adjust any settings.

In the Urgent Upgrade Notes there is one notable change:

  • Kube-proxy’s IPVS proxy mode no longer sets the net.ipv4.conf.all.route_localnet sysctl parameter.
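
If you have workloads that relied on kube-proxy setting this sysctl (as far as I know it was mainly needed to reach NodePort services via 127.0.0.1 in IPVS mode), you now have to set it yourself. A minimal Ansible sketch, assuming the ansible.posix collection is available:

    # Hypothetical task: restore the sysctl that kube-proxy no longer sets.
    - name: Keep net.ipv4.conf.all.route_localnet enabled
      ansible.posix.sysctl:
        name: net.ipv4.conf.all.route_localnet
        value: "1"
        state: present
        sysctl_set: true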

There are also a few interesting features listed in the What’s New section of the changelog.

A few interesting things I found in the Deprecation section:

  • Deprecation of PodSecurityPolicy
  • kube-proxy: remove the deprecated --cleanup-ipvs flag of kube-proxy, and make the --cleanup flag always flush IPVS
  • kubectl: The deprecated kubectl alpha debug command is removed. Use kubectl debug instead.
  • The batch/v2alpha1 CronJob type definitions and clients are deprecated and removed.

A few interesting API Changes also took place, of course:

  • Adds support for the endPort field in NetworkPolicy (alpha feature; see the first sketch after this list)
  • DaemonSets accept a MaxSurge integer or percent on their rolling update strategy that will launch the updated pod on nodes and wait for those pods to become ready before marking the old out-of-date pods as deleted. This allows workloads to avoid downtime during upgrades when deployed using DaemonSets. This feature is alpha and is behind the DaemonSetUpdateSurge feature gate (see the DaemonSet sketch after this list).
  • Kubelet Graceful Node Shutdown feature graduates to Beta and is enabled by default.
  • One new field InternalTrafficPolicy in Service is added. It specifies if the cluster internal traffic should be routed to all endpoints or node-local endpoints only. Cluster routes internal traffic to a Service to all endpoints. Local routes traffic to node-local endpoints only, and traffic is dropped if no node-local endpoints are ready. The default value is Cluster (see the Service sketch after this list).
  • Promote CronJobs to batch/v1
  • Promote Immutable Secrets/ConfigMaps feature to Stable. This allows setting the immutable field in a Secret or ConfigMap object to mark their contents as immutable (see the ConfigMap sketch after this list).
  • Services can specify loadBalancerClass to use a custom load balancer
  • Support for Indexed Job: a Job that is considered completed when Pods associated with indexes from 0 to (.spec.completions-1) have succeeded (see the Job sketch after this list).
  • The PodDisruptionBudget API has been promoted to policy/v1 with no schema changes. The only functional change is that an empty selector ({}) written to a policy/v1 PodDisruptionBudget now selects all pods in the namespace. The behavior of the policy/v1beta1 API remains unchanged. The policy/v1beta1 PodDisruptionBudget API is deprecated and will no longer be served in 1.25+.
  • The controller.kubernetes.io/pod-deletion-cost annotation can be set to offer a hint on the cost of deleting a Pod compared to other pods belonging to the same ReplicaSet. Pods with lower deletion cost are deleted first. This is an alpha feature (see the Pod sketch after this list).
  • Users can specify the kubectl.kubernetes.io/default-exec-container annotation in a Pod to preselect a container for kubectl commands.
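
To make some of these API changes more tangible, here are a few minimal manifest sketches (my own examples, not from the changelog). All names, labels and images are made up, and the alpha features of course need their feature gates enabled first. First the endPort field, which allows a NetworkPolicy rule to target a whole port range (NetworkPolicyEndPort feature gate):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-port-range          # hypothetical name
    spec:
      podSelector:
        matchLabels:
          app: my-app                 # hypothetical label
      policyTypes:
        - Ingress
      ingress:
        - ports:
            - protocol: TCP
              port: 32000
              endPort: 32768          # allow the whole range 32000-32768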
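
A DaemonSet using the new MaxSurge option on its rolling update strategy (DaemonSetUpdateSurge feature gate); note that maxUnavailable has to be 0 in this case:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: my-daemon                 # hypothetical name
    spec:
      selector:
        matchLabels:
          app: my-daemon
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1                 # start the new pod before the old one is removed
          maxUnavailable: 0
      template:
        metadata:
          labels:
            app: my-daemon
        spec:
          containers:
            - name: my-daemon
              image: registry.example.com/my-daemon:1.0.0   # hypothetical image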
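
A Service that keeps cluster internal traffic on the node it originates from (ServiceInternalTrafficPolicy feature gate):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service                # hypothetical name
    spec:
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
      internalTrafficPolicy: Local    # default is Cluster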
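
An immutable ConfigMap (works the same for Secrets); once created, its data can not be changed anymore, you can only delete and recreate the whole object:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config                # hypothetical name
    immutable: true                   # any further update to data will be rejected
    data:
      LOG_LEVEL: "info"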
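
An Indexed Job (IndexedJob feature gate); each Pod gets its index in the batch.kubernetes.io/job-completion-index annotation, which can be passed into the container e.g. via the downward API:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: indexed-job               # hypothetical name
    spec:
      completions: 3
      parallelism: 3
      completionMode: Indexed         # pods get the indexes 0, 1 and 2
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: worker
              image: busybox          # hypothetical workload
              command: ["sh", "-c", "echo processing item $JOB_COMPLETION_INDEX"]
              env:
                - name: JOB_COMPLETION_INDEX
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']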
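
And the pod-deletion-cost annotation (PodDeletionCost feature gate), which gives the ReplicaSet controller a hint which Pods to remove first when scaling down:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod                    # hypothetical name
      annotations:
        # Pods with lower deletion cost are deleted first on scale down.
        controller.kubernetes.io/pod-deletion-cost: "-100"
    spec:
      containers:
        - name: app
          image: nginx                # hypothetical image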

And finally a few Features:

  • Add a --permit-address-sharing flag to kube-apiserver to listen with SO_REUSEADDR. Besides allowing to listen on wildcard IPs like 0.0.0.0 and specific IPs in parallel, this avoids waiting for the kernel to release sockets in the TIME_WAIT state and hence considerably reduces kube-apiserver restart times under certain conditions.
  • Adds alpha feature VolumeCapacityPriority which makes the scheduler prioritize nodes based on the best matching size of statically provisioned PVs across multiple topologies.
  • CoreDNS was updated to v1.8.0.
  • The RunAsGroup feature has been promoted to GA in this release.
  • Update the latest validated version of Docker to 20.10.
  • Upgrade node local DNS to 1.17.0 for better IPv6 support.
  • Upgrades IPv6DualStack to Beta and turns it on by default.

Before I upgraded Kubernetes I upgraded CoreDNS to 1.8.x. I had already adjusted my CoreDNS playbook accordingly. There is a very handy tool that helps you upgrade CoreDNS’s configuration file (the Corefile). Read more about it at CoreDNS Corefile Migration for Kubernetes. The binary releases of that tool can be downloaded here: corefile-migration.
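
If I remember the tool’s CLI correctly, migrating a Corefile is a one-liner along the lines of corefile-tool migrate --from 1.7.0 --to 1.8.0 --corefile /path/to/Corefile (versions and path are placeholders of course), which prints the migrated Corefile so you can put it into your ConfigMap or playbook.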

If you use CSI then also check the CSI Sidecar Containers documentation. For every sidecar container there is a matrix that shows which version you need at a minimum and maximum, and which version is recommended for a given K8s version. Nevertheless, if your K8s update to v1.21 worked fine I would recommend also updating the CSI sidecar containers sooner or later, because a) lots of changes are happening in this area at the moment and b) you might require the newer versions for the next K8s version anyway.

Now it was finally time to update the K8s controller and worker nodes to version 1.21.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.

That’s it for today! Happy upgrading! ;-)