Kubernetes upgrade notes: 1.20.x to 1.21.x
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage their own K8s cluster, e.g. on premises).
I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.20.x to 1.21.x upgrade and WHAT was interesting for me.
First: As usual I don’t update a production system before the .2 release of a new minor version (like 1.21) is out. In my experience the .0 and .1 releases are just too buggy (and to be honest, sometimes it’s even better to wait for the .5 release ;-) ). Of course it is still important to test new releases early in development or integration systems and report bugs!
Second: I only upgrade from the latest version of the former minor release. In my case I was running 1.20.8, and at the time of writing 1.20.10 was the latest 1.20.x release. After reading the 1.20.x changelog to see if any important changes were made between 1.20.8 and 1.20.10, I didn’t see anything that prevented me from updating, and I didn’t need to change anything. So I did the 1.20.8 to 1.20.10 upgrade first. If you use my Ansible roles that basically just means changing the `k8s_release` variable from `1.20.8` to `1.20.10` and rolling the changes out for the control plane and worker nodes as described in my upgrade guide (see the sketch below). After that everything still worked as expected, so I continued with the next step.
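For illustration, the version bump is just a one-line change, assuming you keep your cluster-wide variables in a `group_vars` file (the exact file name and layout depend on your setup):

```yaml
# group_vars/all.yml (or wherever your cluster variables live)
# First the patch upgrade within the old release ...
k8s_release: "1.20.10"

# ... and after that worked, the upgrade to the new release
# (replace "x" with the latest 1.21 patch release):
# k8s_release: "1.21.x"
```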
Here are a few links that might be interesting regarding the new features in Kubernetes 1.21:
Kubernetes 1.21 CHANGELOG
Kubernetes 1.21: Power to the Community
What’s new in Kubernetes 1.21 - SysDig blog
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)
As it is normally no problem to have a kubectl utility that is one minor version ahead of the server version, I also updated kubectl from 1.20.x to 1.21.x using my kubectl Ansible role (see the sketch below).
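Again just a variable change. A sketch, assuming the role exposes the version via a variable (the name below is an assumption, check the defaults of the role version you use):

```yaml
# Assumed variable name for the kubectl role - verify against the role
# defaults. Replace "x" with the latest 1.21 patch release.
kubectl_version: "1.21.x"
```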
As always, before a minor version upgrade read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, there should be no need to adjust any settings.
In the Urgent Upgrade Notes there is one notable change:
- Kube-proxy’s IPVS proxy mode no longer sets the `net.ipv4.conf.all.route_localnet` sysctl parameter.
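In case you actually depended on that sysctl (e.g. for reaching NodePort services via 127.0.0.1), you now have to set it yourself. A minimal sketch as an Ansible task using the `ansible.posix.sysctl` module (only apply this if you really need the old behavior):

```yaml
# Restores the sysctl that kube-proxy's IPVS mode no longer sets.
# Only needed if your workloads rely on it - new nodes won't get it anymore.
- name: Set route_localnet formerly set by kube-proxy IPVS mode
  ansible.posix.sysctl:
    name: net.ipv4.conf.all.route_localnet
    value: "1"
    sysctl_set: true
    state: present
```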
The What’s New section of the changelog also mentions a few interesting features.
A few interesting things I’ve found in Deprecation:
- Deprecation of `PodSecurityPolicy`
- kube-proxy: remove the deprecated `--cleanup-ipvs` flag of `kube-proxy`, and make the `--cleanup` flag always flush IPVS
- kubectl: The deprecated `kubectl alpha debug` command is removed. Use `kubectl debug` instead.
- The `batch/v2alpha1` CronJob type definitions and clients are deprecated and removed (see the example manifest after this list).
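If you still have CronJob manifests on one of the old API versions, this is a good time to migrate them. A minimal, made-up example using the `batch/v1` API that CronJobs were promoted to in 1.21:

```yaml
# Hypothetical CronJob - the interesting part is the apiVersion:
# "batch/v1" replaces the removed "batch/v2alpha1" (and the
# deprecated "batch/v1beta1").
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cleanup
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: busybox:1.33
              command: ["sh", "-c", "echo cleaning up"]
```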
Of course a few interesting API changes also took place:
- Adds support for the `endPort` field in `NetworkPolicy` (Alpha feature)
- `DaemonSets` accept a `MaxSurge` integer or percent on their rolling update strategy that will launch the updated pod on nodes and wait for those pods to go ready before marking the old out-of-date pods as deleted. This allows workloads to avoid downtime during upgrades when deployed using `DaemonSets`. This feature is alpha and is behind the `DaemonSetUpdateSurge` feature gate.
- The `Kubelet Graceful Node Shutdown` feature graduates to Beta and is enabled by default.
- One new field `InternalTrafficPolicy` in `Service` is added. It specifies if the cluster internal traffic should be routed to all endpoints or node-local endpoints only. `Cluster` routes internal traffic to a `Service` to all endpoints. `Local` routes traffic to node-local endpoints only, and traffic is dropped if no node-local endpoints are ready. The default value is `Cluster` (see the example manifest after this list).
- Promote CronJobs to `batch/v1`
- Promote the `Immutable Secrets/ConfigMaps` feature to Stable. This allows setting the `immutable` field in a `Secret` or `ConfigMap` object to mark their contents as immutable.
- `Services` can specify `loadBalancerClass` to use a custom load balancer
- Support for `Indexed Job`: a Job that is considered completed when Pods associated to indexes from 0 to (`.spec.completions - 1`) have succeeded.
- The `PodDisruptionBudget` API has been promoted to `policy/v1` with no schema changes. The only functional change is that an empty selector (`{}`) written to a `policy/v1` `PodDisruptionBudget` now selects all pods in the namespace. The behavior of the `policy/v1beta1` API remains unchanged. The `policy/v1beta1` `PodDisruptionBudget` API is deprecated and will no longer be served in 1.25+ (see the example manifest after this list).
- The `controller.kubernetes.io/pod-deletion-cost` annotation can be set to offer a hint on the cost of deleting a Pod compared to other pods belonging to the same `ReplicaSet`. Pods with lower deletion cost are deleted first. This is an alpha feature.
- Users might specify the `kubectl.kubernetes.io/default-exec-container` annotation in a `Pod` to preselect the container for `kubectl` commands.
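Two of these are easy to show in a manifest. Below is a made-up Service using the new `internalTrafficPolicy` field (alpha in 1.21, behind the `ServiceInternalTrafficPolicy` feature gate) and a made-up PodDisruptionBudget moved to the stable `policy/v1` API:

```yaml
# Hypothetical Service: cluster-internal traffic only goes to endpoints
# on the same node (alpha in 1.21, needs the ServiceInternalTrafficPolicy
# feature gate to be enabled).
apiVersion: v1
kind: Service
metadata:
  name: example-node-local
spec:
  selector:
    app: example
  internalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 8080
---
# Hypothetical PodDisruptionBudget on the promoted policy/v1 API.
# Careful: with policy/v1 an empty selector ({}) selects ALL pods
# in the namespace.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example
```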
And finally a few Features:
- Add the `--permit-address-sharing` flag to `kube-apiserver` to listen with `SO_REUSEADDR`. While allowing to listen on wildcard IPs like `0.0.0.0` and specific IPs in parallel, it avoids waiting for the kernel to release sockets in `TIME_WAIT` state and hence considerably reduces `kube-apiserver` restart times under certain conditions (see the sketch after this list).
- Adds the alpha feature `VolumeCapacityPriority` which makes the scheduler prioritize nodes based on the best matching size of statically provisioned PVs across multiple topologies.
- `CoreDNS` was updated to `v1.8.0`.
- The `RunAsGroup` feature has been promoted to GA in this release.
- Update the latest validated version of `Docker` to `20.10`.
- Upgrade `node local DNS` to `1.17.0` for better IPv6 support.
- Upgrades `IPv6Dualstack` to Beta and turns it on by default.
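If you want to try the new `--permit-address-sharing` flag with my roles: the kube-apiserver flags are generated from a settings dictionary, so a sketch could look like this (the variable name `k8s_apiserver_settings_user` is an assumption, check the defaults of the controller role version you use):

```yaml
# Assumed settings dict of the kubernetes-controller role - flags are
# usually specified without the leading "--". Verify the variable name
# against the role defaults before using this.
k8s_apiserver_settings_user:
  "permit-address-sharing": "true"
```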
Before I upgraded Kubernetes I upgraded CoreDNS to 1.8.x. I already adjusted my CoreDNS playbook accordingly. There is a very handy tool that helps you upgrade CoreDNS’s configuration file, the Corefile. Read more about it at CoreDNS Corefile Migration for Kubernetes. The binary releases of that tool can be downloaded here: corefile-migration.
If you use CSI then also check the CSI Sidecar Containers documentation. For every sidecar container there is a matrix showing the minimum, maximum and recommended version to use with a given K8s version.
Nevertheless, if your K8s update to v1.21 worked fine, I would recommend also updating the CSI sidecar containers sooner rather than later, because a) a lot is changing in this area at the moment and b) you might need the newer versions for the next K8s release anyway.
Finally I updated the K8s controller and worker nodes to version 1.21.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
That’s it for today! Happy upgrading! ;-)