Kubernetes upgrade notes: 1.14.x to 1.15.x

Some upgrade notes that might be useful if you use my Kubernetes Ansible playbooks

September 11, 2019

If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe for others who manage a K8s cluster on their own).

I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.14.x to 1.15.x upgrade and WHAT I changed.

First: As usual I don’t upgrade before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. By the time I wanted to upgrade to 1.15.2, 1.15.3 was already released, so of course we use that one right away. Also, the changes from v1.15.1 to v1.15.3 didn’t contain anything but bugfixes (as it should be, but sometimes even the “dot releases” contain important information that needs to be taken care of). So the changelog for v1.15.0 was the important one.

Second: I only upgrade from the latest version of the former major release. In my case I was running 1.14.2, and at the time of writing 1.14.6 was the latest 1.14.x release. After reading the 1.14.x changelog to see if any important changes were made between 1.14.2 and 1.14.6, I didn’t see anything that prevented me from updating, and I didn’t need to change anything. So I did the 1.14.2 to 1.14.6 upgrade first. If you use my Ansible roles, that basically just means changing the k8s_release variable from 1.14.2 to 1.14.6 and rolling the change out to the control plane and worker nodes as described in my upgrade guide. After that everything still worked as expected, so I continued with the next step.
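The variable bump can be sketched like this (the group_vars file path is just an example; the k8s_release variable name comes from my roles, adjust both to your inventory layout):

```shell
# Sketch: bump the k8s_release variable in an Ansible vars file.
# Create a sample vars file just to make this runnable stand-alone:
mkdir -p group_vars
printf 'k8s_release: "1.14.2"\n' > group_vars/all.yml

# Change 1.14.2 -> 1.14.6 in place:
sed -i 's/^k8s_release: .*/k8s_release: "1.14.6"/' group_vars/all.yml

# Verify the change before rolling it out with ansible-playbook:
grep '^k8s_release' group_vars/all.yml
```

After that, roll the change out to the control plane and worker nodes with your usual playbook run.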

As it is normally no problem to have a kubectl utility that is one major version ahead of the server version, I also updated kubectl from 1.14.x to 1.15.2 using my kubectl Ansible role.

Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)

Before moving on, there is one VERY important thing that was already mentioned in the K8s 1.14.x changelog and becomes even more important for K8s 1.15.x. If you have a look at the Deprecations and Removals you’ll notice that a few important objects won’t be served at some endpoints anymore in K8s 1.16.x. The extensions/v1beta1 endpoint for Ingress resources is deprecated, for example. So after upgrading to K8s 1.15 you should REALLY migrate a few kinds, as they’ll go away for some endpoints in K8s 1.16, 1.17 or 1.19 at the latest. These should be changed:

  • Ingress: migrate to networking.k8s.io/v1beta1
  • NetworkPolicy: migrate to networking.k8s.io/v1
  • PodSecurityPolicy: migrate to policy/v1beta1
  • DaemonSet: migrate to apps/v1
  • Deployment: migrate to apps/v1
  • ReplicaSet: migrate to apps/v1
  • PriorityClass: migrate to scheduling.k8s.io/v1
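For most of these the migration is just a matter of changing the apiVersion in your manifests and re-applying them. A minimal sketch for an Ingress (my-ingress.yaml and the object name are examples; the file stands in for the output of kubectl get ingress <name> -o yaml):

```shell
# Sketch: switch an exported Ingress manifest to the newer API group.
# Create a stand-in for an exported manifest so this runs on its own:
cat > my-ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
EOF

# Rewrite the apiVersion line:
sed -i 's|^apiVersion: extensions/v1beta1|apiVersion: networking.k8s.io/v1beta1|' my-ingress.yaml
head -n 1 my-ingress.yaml
```

The re-apply step (kubectl apply -f my-ingress.yaml) of course needs a live cluster, so it is left out here.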

To check what versions you currently use you can use the following commands:

kubectl get ingresses.extensions -A
kubectl get ingress.networking.k8s.io -A

kubectl get networkpolicies.extensions -A
kubectl get networkpolicies.networking.k8s.io -A

kubectl get podsecuritypolicies.extensions -A
kubectl get podsecuritypolicies.policy -A

kubectl get daemonsets.extensions -A
kubectl get daemonsets.apps -A

kubectl get deployment.extensions -A
kubectl get deployment.apps -A

kubectl get replicasets.extensions -A
kubectl get replicasets.apps -A

kubectl get priorityclasses.scheduling.k8s.io -A
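If you keep your manifests in Git anyway, a plain grep over the repository also finds leftovers of the deprecated API groups without touching the cluster (the manifests/ directory and app.yaml file below are examples created just to make the sketch self-contained):

```shell
# Sketch: scan a local manifest directory for deprecated apiVersions.
# Create an example manifest so the scan has something to find:
mkdir -p manifests
printf 'apiVersion: extensions/v1beta1\nkind: Deployment\n' > manifests/app.yaml

# List every file and line still using the deprecated group:
grep -rn 'apiVersion: extensions/v1beta1' manifests/
```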

A final tip on this topic: there is a video, TGI Kubernetes 084: Kubernetes API removal and you, about the API removals. For some APIs this is your last chance to migrate to avoid trouble! So act now before it’s too late ;-) This is also mentioned in Additional Notable Feature Updates.

Next I read the Urgent Upgrade Notes for the 1.15.x release and also the Known Issues. Here are the findings that were important for me, but your mileage may vary of course.

The Known Issues of the CHANGELOG contained no important information for me.

In the Urgent Upgrade Notes, the Network section contains the information that the deprecated flag --conntrack-max has been removed from kube-proxy and replaced by --conntrack-min and --conntrack-max-per-core. Also, --cleanup-iptables was removed without replacement.
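If your kube-proxy unit still passes the removed flag, the invocation has to change along these lines (the conntrack values below are example numbers to show the shape of the flags, not recommendations):

```shell
# Before (flag removed in kube-proxy 1.15):
#   kube-proxy --conntrack-max=1048576

# After: size the conntrack table per core instead (example values):
kube-proxy \
  --conntrack-min=131072 \
  --conntrack-max-per-core=32768
```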

The urgent upgrade notes for Node mention that the deprecated Kubelet flag --allow-privileged has been removed.
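A quick way to spot the removed flag before rolling out the new kubelet (the unit file below is a stand-in created for the sketch; point the grep at your real systemd unit or kubelet config instead):

```shell
# Sketch: check whether a kubelet unit still passes the removed flag.
# Create an example unit fragment so this runs stand-alone:
mkdir -p unit-demo
printf 'ExecStart=/usr/local/bin/kubelet --allow-privileged=true --v=2\n' > unit-demo/kubelet.service

# grep prints the offending line if the flag is still configured:
grep -- '--allow-privileged' unit-demo/kubelet.service
```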

If you use CSI, the Node.Status.Volumes.Attached.DevicePath field is now unset for CSI volumes. You must update any external controllers that depend on this field (see Storage). Also check the CSI Sidecar Containers documentation. Every sidecar container has a matrix of which version you need at a minimum, which at a maximum, and which version is recommended for whatever K8s version. Since this is quite new stuff, basically all CSI sidecar containers work with K8s 1.13 to 1.15. The first releases of these sidecar containers only need K8s 1.10, but I wouldn’t use such old versions. So there is at least no urgent need to upgrade the CSI sidecar containers at the moment. Nevertheless, if your K8s update to v1.15 worked fine I would recommend updating the CSI sidecar containers sooner or later too, because a) lots of changes are happening in this area at the moment and b) you might need the newer versions for the next K8s release anyway.

And to mention it again: have a look at Deprecations and Removals, especially the API section.

In the Dependencies list (this topic was called External Dependencies in K8s 1.14) I found nothing that was relevant for me this time (so no Docker, etcd, … upgrade this time :-) ).

Now I finally updated the K8s controller and worker nodes to version 1.15.3 as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.

That’s it for today! Happy upgrading! ;-)