Kubernetes upgrade notes: 1.18.x to 1.19.x
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage their own K8s cluster, e.g. on premises).
I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the past K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.18.x to 1.19.x upgrade and WHAT was interesting for me.
First: As usual I don't update a production system before the .2 release of a new major version is out. In my experience the .1 releases are just too buggy (and to be honest, sometimes it's even better to wait for the .5 release ;-) ). Of course it is still important to test new releases early in development or integration systems and report bugs!
Second: I only upgrade from the latest version of the former major release. In my case I was running 1.18.6, and at the time of writing 1.19.4 was the latest 1.19.x release and 1.18.12 the latest 1.18.x release. After reading the 1.18.x changelog to see if any important changes were made between 1.18.6 and 1.18.12, I didn't see anything that prevented the update, and I didn't need to change anything. So I did the upgrade to 1.18.12 first. If you use my Ansible roles, that basically only means changing the k8s_release variable to 1.18.12 and rolling the changes out for the control plane and worker nodes as described in my upgrade guide. After that everything still worked as expected, so I continued with the next step.
Here are two links that might be interesting regarding new features in Kubernetes 1.19:
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)
As it is normally no problem to have a newer kubectl utility that is one major version ahead of the server version, I also updated kubectl to 1.19.4 using my kubectl Ansible role.
As always before a major upgrade read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and used most of the default settings then there should be no need to adjust any settings.
CoreDNS was upgraded to 1.7.0. I had already adjusted my CoreDNS playbook accordingly and replaced deprecated options when I upgraded to CoreDNS v1.6.7. There is a very handy tool that helps you upgrade CoreDNS's configuration file, the Corefile. Read more about it at CoreDNS Corefile Migration for Kubernetes. The binary releases of that tool can be downloaded here: https://github.com/coredns/corefile-migration/releases
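A quick sketch of how the migration tool can be used, assuming the `corefile-tool` binary from the releases page is in your `PATH` and your Corefile lives at `/etc/coredns/Corefile` (adjust the paths and versions to your setup):

```shell
# Print a migrated Corefile for the 1.6.7 -> 1.7.0 jump to stdout
# (the original file is not modified in place):
corefile-tool migrate --from 1.6.7 --to 1.7.0 \
  --corefile /etc/coredns/Corefile --deprecations true
```

Review the output, then replace your Corefile (or update the template in your playbook) with the migrated version before rolling out CoreDNS 1.7.0.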
CNI was upgraded to v0.8.5, but besides this patch release version bump no configuration changes are needed.
Further interesting notes:
- KubeSchedulerConfiguration graduates to beta (see https://kubernetes.io/docs/reference/scheduling/config ).
- Ingress and IngressClass resources have graduated to networking.k8s.io/v1. The Ingress and IngressClass types in the networking.k8s.io/v1beta1 API version are deprecated and will no longer be served in 1.22+. Persisted objects can be accessed via the networking.k8s.io/v1 API. So after upgrading to K8s 1.19 you should update all your Ingress resources accordingly (at least before you upgrade to K8s 1.22 ;-) ). The v1 version includes several changes, e.g. the serviceName and servicePort backend fields are now service.name and service.port. Check the Kubernetes 1.19 documentation for more details.
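For illustration, here is what an Ingress looks like in the new v1 API (hostnames and service names are made up for this example):

```yaml
# networking.k8s.io/v1 Ingress. In v1beta1 the backend was specified
# as serviceName/servicePort; in v1 it is nested under backend.service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.org
      http:
        paths:
          - path: /
            pathType: Prefix   # pathType is required in v1
            backend:
              service:
                name: example-service
                port:
                  number: 80
```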
- Changes to kubectl run (#90569, @brianpursley) [SIG CLI]
- The componentstatus API is deprecated. This API provided status of etcd, kube-scheduler, and kube-controller-manager components, but only worked when those components were local to the API server, and when kube-scheduler and kube-controller-manager exposed unsecured health endpoints. Instead of this API, etcd health is included in the kube-apiserver health check and kube-scheduler/kube-controller-manager health checks can be made directly against those components' health endpoints. (#93570, @liggitt) [SIG API Machinery, Apps and Cluster Lifecycle]. This affects e.g. kubectl get componentstatuses.
- Expanded CLI support for debugging workloads and nodes: since these new workflows don't require any new cluster features, they're available for experimentation with your existing clusters via kubectl alpha debug.
  - The kubectl alpha debug command now supports debugging pods by creating a copy of the original one. (#90094, @aylei) [SIG CLI]
  - kubectl alpha debug now supports debugging nodes by creating a debugging container running in the node's host namespaces. (#92310, @verb) [SIG CLI]
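A quick sketch of both workflows (pod and node names are made up; the command is alpha in 1.19, so flags may still change):

```shell
# Debug by copy: create a copy of "mypod" with an extra debug
# container and attach to it interactively.
kubectl alpha debug mypod -it --image=busybox --copy-to=mypod-debug

# Node debugging: run a container in the host namespaces of node
# "worker01"; the node's root filesystem is mounted into the container.
kubectl alpha debug node/worker01 -it --image=ubuntu
```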
- seccomp support has graduated to GA. A new seccompProfile field is added to pod and container securityContext objects. Support for the container.seccomp.security.alpha.kubernetes.io/... annotations is deprecated and will be removed in v1.22.
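A minimal example of the new GA field (pod name and image are made up), which replaces the deprecated seccomp annotations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    # Use the container runtime's default seccomp profile;
    # alternatively "Localhost" with localhostProfile: <path>.
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginx:1.19
```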
- ConfigMaps can be marked as immutable, which significantly reduces load on the API server if there are many ConfigMap volumes in the cluster. A new immutable field on the ConfigMap object marks its contents as immutable.
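For illustration, an immutable ConfigMap (name and data are made up). Once set, the data can no longer be changed; to update it you have to delete and recreate the object:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
immutable: true
```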
- storage.k8s.io/v1beta1 is deprecated in favor of storage.k8s.io/v1 (#90671, @deads2k) [SIG Storage]
- Increase Kubernetes support window to one year
- Warning mechanism for use of deprecated APIs
- Structured logging
- Allow users to set a pod's hostname to its fully qualified domain name (FQDN): setHostnameAsFQDN is a new field in PodSpec. When set to true, the fully qualified domain name (FQDN) of a pod is set as the hostname of its containers. In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname).
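A sketch of the new field (pod name, subdomain, and image are made up; the feature is new in 1.19 and sits behind the SetHostnameAsFQDN feature gate):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  hostname: demo
  subdomain: my-subdomain
  # With this set, the container's hostname becomes the pod's FQDN
  # (hostname.subdomain.<namespace>.svc.<cluster-domain>) instead of
  # just "demo".
  setHostnameAsFQDN: true
  containers:
    - name: app
      image: busybox:1.32
      command: ["sleep", "3600"]
```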
- Node Topology Manager
- Generic ephemeral volumes, a new alpha feature under the GenericEphemeralVolume feature gate, provide a more flexible alternative to EmptyDir volumes: as with EmptyDir, volumes are created and deleted for each pod automatically by Kubernetes. But because the normal provisioning process is used (PersistentVolumeClaim), storage can be provided by third-party storage vendors and all of the usual volume features work. Volumes don't need to be empty; for example, restoring from a snapshot is supported.
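A sketch of a pod using a generic ephemeral volume (pod name, image, and the "fast-storage" StorageClass name are made up; alpha in 1.19 behind the GenericEphemeralVolume feature gate):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: app
      image: busybox:1.32
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      # Kubernetes creates a PVC from this template for the pod's
      # lifetime and deletes it together with the pod.
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: fast-storage
            resources:
              requests:
                storage: 1Gi
```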
- New --logging-format flag to support structured logging (e.g. --logging-format=json; the default text format stays unchanged).
If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container has a matrix of which version you need at a minimum, which at a maximum, and which version is recommended for a given K8s version. Since this is quite new stuff, basically all CSI sidecar containers work with K8s 1.19. The first releases of these sidecar containers only require K8s 1.10, but I wouldn't use those old versions. So there is at least no urgent need to upgrade the CSI sidecar containers at the moment.
Nevertheless, if your K8s update to v1.19 worked fine, I would recommend also updating the CSI sidecar containers sooner or later because a) lots of changes are happening in this area at the moment and b) you might require the newer versions for the next K8s version anyway.
Now I finally updated the K8s controller and worker nodes to version
1.19.4 as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
If you see errors like
kube-controller-manager: E0405 18:58:30.109867 3375 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
while upgrading the controller nodes, this seems to be okay. The errors should go away once all controller nodes are running the new Kubernetes version (also see https://github.com/gardener/gardener/issues/1879).
That’s it for today! Happy upgrading! ;-)