Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes

Upgrading Kubernetes services: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet, etcd

September 9, 2018



  • added hint about an error message that can occur during the upgrade of master nodes while not all master/controller nodes use the same Kubernetes version


  • added text about upgrading etcd cluster

If you followed my Kubernetes the Not So Hard Way With Ansible blog posts so far and have a Kubernetes cluster running you’ll sooner or later want to upgrade to the next version. With this setup it’s pretty easy.

My experience so far with upgrading to a new major Kubernetes release is that it's a good idea to wait for at least the .2 release. E.g. if K8s v1.10.0 is the freshest release and you run v1.9.6 at the moment, I would strongly recommend waiting at least for v1.10.2 before upgrading. The .0 releases often contain bugs that are fixed in later minor releases, and those can really hurt in production. But even minor releases sometimes contain changes that you wouldn't expect. Having a development K8s cluster which is pretty close to the production cluster is very helpful to find issues before they hit you in production…


BEFORE upgrading to a new major release (e.g. 1.9.x -> 1.10.x) make sure that you have upgraded to the latest minor release of your currently running major release! E.g. if you currently run 1.9.3 and want to upgrade to 1.10.3, first upgrade 1.9.3 to the latest 1.9.x release, e.g. 1.9.7 if that's the latest 1.9.x release. Afterwards you can do the major release upgrade. That's important! Otherwise you risk a corrupt cluster state in etcd or other strange behavior.

The first thing you should do is read the CHANGELOG of the version you want to upgrade to. E.g. if you upgrade from v1.8.0 to v1.8.1 you only need to read CHANGELOG-1.8. Watch out for Action Required headlines. E.g. between v1.8.0 and v1.8.1 there was a change that required action. That shouldn't happen for minor releases, but sometimes it can't be avoided. If you want to upgrade the major version, e.g. from v1.7.x to v1.8.0, read CHANGELOG-1.8. The same advice as above applies of course.

As the whole Kubernetes cluster state is stored in etcd you should also consider creating a backup of the etcd data. Have a look at the etcd Admin Guide for how to do this. This is especially true if you're upgrading to a new major release. Also Heptio's Ark could be an option. Heptio Ark is a utility for managing disaster recovery, specifically for your Kubernetes cluster resources and persistent volumes.
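A backup of the etcd v3 data can be taken with etcdctl's snapshot subcommands. A minimal sketch (the backup path and certificate file names are assumptions, adjust them to your environment):

```shell
# Assumed paths - adjust to your setup (see "k8s_ca_conf_directory" variable)
export CERTIFICATE_DIR="/path/where/your/certificates/are/located"

# Take a snapshot of the etcd keyspace on one of the etcd nodes
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://etcd-node1:2379 \
  --cacert=${CERTIFICATE_DIR}/ca-etcd.pem \
  --cert=${CERTIFICATE_DIR}/cert-etcd.pem \
  --key=${CERTIFICATE_DIR}/cert-etcd-key.pem

# Verify the snapshot (prints hash, revision, total keys, total size)
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```

Keep the snapshot somewhere off the etcd node itself; it's what you would restore from if the upgrade goes wrong.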


If you have taken care of the above prerequisites we're ready to go. Whether you do a minor release update (e.g. v1.8.0 -> v1.8.1) or a major release update (v1.7.x -> v1.8.0), the steps are basically the same. First we update the controller nodes node by node and afterwards the worker nodes.

One additional hint: skipping a major release while upgrading is a bad idea and calls for trouble ;-) So if you want to upgrade from v1.6.x to v1.8.0 your upgrade path should be v1.6.x -> v1.7.x -> v1.8.0.
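The "never skip a release" rule can also be checked mechanically before you touch anything. A small illustrative sketch in shell (the function names are mine, not part of the Ansible roles, and only the minor version component is compared):

```shell
# Extract the minor version component from a version string like "1.7.4"
minor() { echo "$1" | cut -d. -f2; }

# Returns success (0) if the target release is at most one minor
# release ahead of the currently running release.
upgrade_path_ok() {
  current_minor=$(minor "$1")
  target_minor=$(minor "$2")
  [ $((target_minor - current_minor)) -le 1 ]
}

upgrade_path_ok "1.7.4" "1.8.0" && echo "ok: direct upgrade possible"
upgrade_path_ok "1.6.2" "1.8.0" || echo "not ok: upgrade via 1.7.x first"
```

This is of course just a sanity check; the changelogs remain the authoritative source for what an upgrade path requires.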

Also please upgrade/use the roles ansible-role-kubernetes-controller and ansible-role-kubernetes-worker with version/tag v1.0.0_v1.8.2 or above.


From time to time the recommended and therefore tested/supported version of etcd changes. This is the case if you upgrade from K8s v1.10 to v1.11: the recommended etcd version for K8s v1.10 was v3.1.12, for K8s v1.11 it's v3.2.18 (see CHANGELOG-1.11.md). So before we upgrade K8s we'll upgrade etcd first.

Have a look at the etcd upgrade guides. In our example it’s Upgrade etcd from 3.1 to 3.2. The first line in the upgrade guide is: In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, rolling upgrade. That’s cool because in that case we can upgrade node by node.

First check the cluster state. Before you continue make sure that the cluster is in a healthy state! Log into one of the etcd nodes and check the cluster status, e.g.:

# The value of this variable should be the value of Ansible variable "k8s_ca_conf_directory"
export CERTIFICATE_DIR="/path/where/your/certificates/are/located"

# Note: etcdctl also needs the client key; the file name below is an
# assumption - adjust it to match your certificate setup
ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://etcd-node1:2379,https://etcd-node2:2379,https://etcd-node3:2379 \
  --cacert=${CERTIFICATE_DIR}/ca-etcd.pem \
  --cert=${CERTIFICATE_DIR}/cert-etcd.pem \
  --key=${CERTIFICATE_DIR}/cert-etcd-key.pem

Of course replace etcd-node(1-3) with your etcd node names or IPs. You should now see output similar to this:

https://etcd-node1:2379 is healthy: successfully committed proposal: took = 9.416983ms
https://etcd-node2:2379 is healthy: successfully committed proposal: took = 6.206849ms
https://etcd-node3:2379 is healthy: successfully committed proposal: took = 8.409447ms

You can also check the current etcd API version (this will change if ALL etcd members are upgraded):

ETCDCTL_API=3 etcdctl version \
  --endpoints=https://etcd-node1:2379,https://etcd-node2:2379,https://etcd-node3:2379 \
  --cacert=${CERTIFICATE_DIR}/ca-etcd.pem \
  --cert=${CERTIFICATE_DIR}/cert-etcd.pem \
  --key=${CERTIFICATE_DIR}/cert-etcd-key.pem

etcdctl version: 3.1.12
API version: 3.1

If we have a healthy cluster we can continue. Next set etcd_version: "3.2.18" in group_vars/all.yml (or wherever it makes sense for you).
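In group_vars/all.yml this is just a one-line change (variable name as used by the etcd role):

```yaml
# group_vars/all.yml
etcd_version: "3.2.18"
```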

Now we upgrade the first node e.g.:

ansible-playbook --tags=role-etcd --limit=controller01.i.domain.tld k8s.yml

If this was successful restart the etcd daemon on that node:

ansible -m command -a "systemctl restart etcd" controller01.i.domain.tld

Also keep an eye on the etcd logs e.g.:

ansible -m command -a 'journalctl --since="15m ago" -t etcd' controller01.i.domain.tld

If the logs are ok do the same for the remaining etcd nodes. Once all etcd daemons are updated you should finally see something like this in the logs:

Sep 26 23:07:50 controller03 etcd[25519]: updating the cluster version from 3.1 to 3.2
Sep 26 23:07:50 controller03 etcd[25519]: updated the cluster version from 3.1 to 3.2
Sep 26 23:07:50 controller03 etcd[25519]: enabled capabilities for version 3.2

Again we can check the API version:

ETCDCTL_API=3 etcdctl version \
  --endpoints=https://etcd-node1:2379,https://etcd-node2:2379,https://etcd-node3:2379 \
  --cacert=${CERTIFICATE_DIR}/ca-etcd.pem \
  --cert=${CERTIFICATE_DIR}/cert-etcd.pem \
  --key=${CERTIFICATE_DIR}/cert-etcd-key.pem

etcdctl version: 3.2.18
API version: 3.2

So now you have an etcd cluster running version v3.2 ;-) Afterwards consider restarting all kube-apiserver instances.

Other components

Keep an eye on the external dependencies in the changelog. That's mainly CNI, Docker and kube-dns/CoreDNS. You may need to upgrade them too. If you need to upgrade CNI and/or Docker I would recommend draining the node for this kind of upgrade (see further down the text). This way you can easily upgrade node by node.


Updating the kubectl utility before you upgrade the controller and worker nodes makes sense. Normally a newer client version can talk to older server versions; the other way round isn't always true. When I update my K8s controller and worker roles I also update my kubectl role. So have a look if you can find the version you are looking for and upgrade kubectl locally first, e.g.:

ansible-playbook --tags=role-kubectl k8s.yml
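Afterwards you can verify the client/server version skew with kubectl itself (the output lines are just placeholders, not actual values):

```shell
# Shows the client (local kubectl) and server (kube-apiserver) versions
kubectl version --short
# Client Version: ...
# Server Version: ...
```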

Controller nodes

Update your inventory cache with ansible -m setup all.

The next thing to do is to set k8s_release. Let's assume we currently have k8s_release: "1.8.0" set and want to upgrade to 1.8.2, so we set k8s_release: "1.8.2" in group_vars/all.yml (or wherever you defined this variable).
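Again a one-line change in group_vars/all.yml:

```yaml
# group_vars/all.yml
k8s_release: "1.8.2"
```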

Normally my Kubernetes controller role also has various releases tagged, e.g. v1.0.0_v1.8.2. In that case you can just update the role, have a look at the changelog to see what changed, and adjust your variables and maybe other things accordingly. If you don't find a tag for the K8s release you need, you have to adjust the settings yourself according to the K8s changelog.

Next we deploy the controller role one by one to every controller node e.g.:

ansible-playbook --tags=role-kubernetes-controller --limit=controller01.i.domain.tld k8s.yml

Of course replace controller01.i.domain.tld with the hostname of your first controller node. This will download the Kubernetes binaries, update the old ones and finally restart kube-apiserver, kube-controller-manager and kube-scheduler. As in our current setup all worker services communicate only with the Kubernetes controller01 (we have no load balancer for the kube-apiserver yet), the API server will be unavailable for a short time. But that only affects deployments/updates that would take place during this short window. All pods running on the workers keep working as usual.

After the role is deployed you should have a look at the log files (with journalctl e.g.) on controller01 to verify everything worked well. Also check if the services are still listening on the ports they usually do (netstat -tlpn e.g.). You could also do a small Kubernetes test deployment via kubectl to see if everything still works.
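Such a smoke test could, for example, look like this (the deployment name and image are arbitrary examples; in K8s 1.8 kubectl run creates a Deployment):

```shell
# Create a throwaway deployment, wait until it's rolled out, then clean up
kubectl run nginx-smoke-test --image=nginx:alpine --replicas=1
kubectl rollout status deployment/nginx-smoke-test
kubectl delete deployment nginx-smoke-test
```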

If you see errors like this one

v1beta1.apiextensions.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.apiextensions.k8s.io": the object has been modified; please apply your changes to the latest version and try again

that should be ok at the moment. At this point you have one controller node with a newer version of K8s and two other nodes with an older K8s version. This message should disappear once you have updated all controller nodes.

If everything is ok go ahead and update controller02 and controller03 e.g.:

ansible-playbook --tags=role-kubernetes-controller --limit=controller02.i.domain.tld k8s.yml

# Wait until controller role is deployed on controller02...

ansible-playbook --tags=role-kubernetes-controller --limit=controller03.i.domain.tld k8s.yml

Now your controller nodes should be up to date!

Worker nodes

As with the Kubernetes controller role mentioned above I also tag the Kubernetes worker role accordingly.

For the worker nodes it's basically the same as with the controller nodes. We start with worker01:

ansible-playbook --tags=role-kubernetes-worker --limit=worker01.i.domain.tld k8s.yml

Of course replace worker01.i.domain.tld with the hostname of your first worker node. This will download the Kubernetes binaries, update the old ones and finally restart kube-proxy and kubelet. While the two services are restarting they won't be able to start new pods or change network settings. But that's only true while the services are restarted, which takes only a few seconds, and they will catch up on the changes afterwards. Shouldn't be a big deal as long as you don't have a few thousand pods running ;-)

You can also drain a node before you start upgrading that node to avoid possible problems (see Safely Drain a Node while Respecting Application SLOs). You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
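Draining and re-enabling a node could look like this (the node name is an example; --ignore-daemonsets is usually needed if you run DaemonSets):

```shell
# Evict all pods from the node (respects PodDisruptionBudgets) and
# mark it unschedulable
kubectl drain worker01.i.domain.tld --ignore-daemonsets

# ... upgrade the node ...

# Allow scheduling of new pods on the node again
kubectl uncordon worker01.i.domain.tld
```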

Again check the logs and if everything is ok continue with the other nodes:

ansible-playbook --tags=role-kubernetes-worker --limit=worker02.i.domain.tld k8s.yml

# Wait until worker role is deployed on worker02...

ansible-playbook --tags=role-kubernetes-worker --limit=worker03.i.domain.tld k8s.yml

If the worker role is deployed to all worker nodes we’re basically done with the Kubernetes upgrade!

Next up: Kubernetes the Not So Hard Way With Ansible - Network policies with kube-router