Kubernetes upgrade notes: 1.13.x to 1.14.x

If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage a K8s cluster on their own). I wanted to do this for every major version upgrade since K8s v1.5.x, when I started with my Ansible playbooks, but somehow never managed it ;-)

I have a general upgrade guide Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes that has worked quite well for me for the last few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.13.x to 1.14.x upgrade and WHAT I changed.

But first: I usually don’t upgrade before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Now that 1.14.2 was released I risked the upgrade.

Second: I only upgrade from the latest version of the former major release. In my case I was running 1.13.2, and at the time of writing this text 1.13.5 was the latest 1.13.x release. After reading the 1.13.x changelog to see if any important changes were made between 1.13.2 and 1.13.5, I didn’t see anything that prevented me from updating and I didn’t need to change anything. So I did the 1.13.2 to 1.13.5 upgrade first. If you use my Ansible roles that basically only means changing the k8s_release variable from 1.13.2 to 1.13.5 and rolling the changes out for the control plane and worker nodes as described in my upgrade guide. After that everything still worked as expected so I continued with the next step.
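Just as a rough sketch of what that rollout looks like for me (the playbook name and the tags below are only examples and depend on how your Ansible setup is organized):

# set k8s_release to "1.13.5" in your group_vars first, then e.g.:
ansible-playbook --tags=role-kubernetes-controller k8s.yml
ansible-playbook --tags=role-kubernetes-worker k8s.yml

# afterwards check that all nodes report the new version
kubectl get nodes -o wide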

As it is normally no problem to have a kubectl utility that is one version ahead of the server version, I also updated kubectl from 1.13.x to 1.14.2 using my kubectl Ansible role.
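A quick way to check the resulting version skew between client and server (the client should be at most one version ahead of the API server):

kubectl version --short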

Next I read the Urgent Upgrade Notes for the 1.14.x release and also the Known Issues.

Here are the findings that were important for me, but your setup and requirements may differ of course.

Starting with the Known Issues I saw that CoreDNS 1.3.1 crashes when the K8s API shuts down while CoreDNS is still connected. That was fixed with CoreDNS 1.4.0. I normally stay with the versions mentioned in the External Dependencies, which would be CoreDNS 1.3.1 for K8s 1.14.x. So I made an exception to the rule and upgraded from 1.2.6 (which I was using for K8s 1.13) to 1.4.0 after I had read the release notes for CoreDNS 1.3.0, 1.3.1 and 1.4.0. With that information I was confident that I could keep my current CoreDNS config. So after changing the container version in the deployment I rolled out the new CoreDNS deployment straight away. Also see Install CoreDNS in Kubernetes with Ansible.
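If you don’t manage the CoreDNS deployment via Ansible, bumping the image in place is also an option. A minimal sketch, assuming the deployment and its container are both called coredns (as in the default manifests):

kubectl -n kube-system set image deployment/coredns coredns=coredns/coredns:1.4.0
kubectl -n kube-system rollout status deployment/coredns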

The next relevant entry for me on the External Dependencies list was the Docker dependency. For K8s 1.13 the latest supported Docker version was 18.06. For K8s 1.14 it’s 18.09. As K8s 1.14 supports both mentioned Docker versions (and even way older ones) I postponed this upgrade until after the K8s upgrade to 1.14 is done.

Also on the External Dependencies list is the update of the CNI plugins from v0.6.0 to v0.7.5. According to the CNI plugins releases page there were no breaking changes. So I changed the k8s_cni_plugin_version variable to 0.7.5 in my K8s worker Ansible role. That way the new CNI plugins will be rolled out together with the new K8s worker binaries later (see further down below).

The final relevant external dependency for me on the list was etcd. Its version changed from 3.2.24 to 3.3.10. As this is quite a critical component in the whole K8s setup, great care should be taken. So I first read the etcd 3.2 to 3.3 upgrade guide. This document says: In the general case, upgrading from etcd 3.2 to 3.3 can be a zero-downtime, rolling upgrade .... That’s at least a good start ;-) Reading further, upgrading to 3.3 should be possible without any further changes. For more information on how to upgrade etcd read Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes if you use my etcd Ansible role.
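Before and after the rolling upgrade I like to check the member versions and the cluster health with etcdctl. The endpoints and certificate paths below are just examples, adjust them to your setup:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.8.0.101:2379,https://10.8.0.102:2379,https://10.8.0.103:2379 \
  --cacert=/etc/etcd/ca-etcd.pem \
  --cert=/etc/etcd/cert-etcd.pem \
  --key=/etc/etcd/cert-etcd-key.pem \
  endpoint status --write-out=table

The table output contains the etcd version of every endpoint listed, so it’s easy to verify that all members run 3.3.10 after the upgrade.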

The next thing to check are the Urgent Upgrade Notes. They contain important breaking changes and the like. In my case I didn’t find anything that affected my K8s cluster, so I could skip that part.

Next I had a look at the Deprecations. That was more “interesting”. The extensions/v1beta1 API endpoints for Ingress and a few other resources are deprecated. So after upgrading to K8s 1.14 you should migrate a few kinds to their new API groups as the old ones will go away in K8s 1.16:

  • Ingress: migrate to networking.k8s.io/v1beta1
  • NetworkPolicy: migrate to networking.k8s.io/v1
  • PodSecurityPolicy: migrate to policy/v1beta1
  • DaemonSet: migrate to apps/v1
  • Deployment: migrate to apps/v1
  • ReplicaSet: migrate to apps/v1
  • PriorityClass: migrate to scheduling.k8s.io/v1

To check which API versions your resources currently use, you can run the following commands:

kubectl get ingresses.extensions -A
kubectl get ingresses.networking.k8s.io -A

kubectl get networkpolicies.extensions -A
kubectl get networkpolicies.networking.k8s.io -A

kubectl get podsecuritypolicies.extensions -A
kubectl get podsecuritypolicies.policy -A

kubectl get daemonsets.extensions -A
kubectl get daemonsets.apps -A

kubectl get deployments.extensions -A
kubectl get deployments.apps -A

kubectl get replicasets.extensions -A
kubectl get replicasets.apps -A

kubectl get priorityclasses.scheduling.k8s.io -A
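To actually migrate a resource you basically just change the apiVersion in its manifest and apply it again. To find manifests in your repository that still use the old API group, something like this works (the manifests/ directory and the file name are of course just examples):

grep -rl "apiVersion: extensions/v1beta1" manifests/
kubectl convert -f manifests/my-ingress.yml --output-version networking.k8s.io/v1beta1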

Further notes that I found interesting in the changelog:

  • Added kustomize as a subcommand in kubectl (see the example after this list)
  • The default CoreDNS configuration now uses the forward plugin instead of the proxy plugin to forward queries upstream.
  • If you are running the cloud-controller-manager and you have the pvlabel.kubernetes.io alpha Initializer enabled, you must now enable PersistentVolume labeling using the PersistentVolumeLabel admission controller instead. You can do this by adding PersistentVolumeLabel in the --enable-admission-plugins kube-apiserver flag.
  • CSINodeInfo and CSIDriver CRDs have been installed in the local cluster.
  • Added alpha support for ephemeral CSI inline volumes that are embedded in pod specs.
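The new kustomize integration mentioned above can be used like this (the directory is just an example and needs to contain a kustomization.yaml):

kubectl kustomize ./my-app/overlays/production
kubectl apply -k ./my-app/overlays/production

The first command only renders the resulting manifests to stdout, the second one builds and applies them in one step.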

Now I finally updated the K8s controller and worker nodes to version 1.14.2 as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.

As mentioned earlier, the time had finally come to upgrade Docker to version 18.09. So I upgraded it accordingly using my Ansible Docker role. Have a look there for more information about the update.
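After the rollout you can quickly verify on a worker node that the Docker daemon really runs the new version:

docker version --format '{{.Server.Version}}'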

That’s it! Happy upgrading! ;-)