Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes



  • etcd: cert-etcd.pem was renamed to cert-etcd-server.pem
  • etcd: cert-etcd-key.pem was renamed to cert-etcd-server-key.pem
  • updated text


  • Use etcd version 3.4.7 as example
  • Use ansible command to figure out etcd status and version


  • Heptio Ark is now called Velero


  • added hint about an error message that can occur during the upgrade of master nodes while not all master/controller nodes are using the same Kubernetes version


  • added text about upgrading etcd cluster

If you followed my Kubernetes the Not So Hard Way With Ansible blog posts so far and have a Kubernetes cluster running you’ll sooner or later want to upgrade to the next version. With this setup it’s pretty easy.

My experience so far with upgrading to a new major Kubernetes release is that it is a good idea to wait for at least the .2 release. E.g. if K8s v1.17.0 is the freshest release and you run v1.16.6 at the moment I would strongly recommend to wait at least for v1.17.2 before upgrading. The .0 releases often contain bugs that are fixed in later minor releases and that can really hurt in production. But even minor releases sometimes contain changes that you wouldn’t expect. Having a development K8s cluster which is pretty close to the production cluster is very helpful to find issues before they hit you in production…

Of course if everyone waits for the .2 release nobody would test the releases before ;-) So if you have a development or staging environment, test new releases as early as possible and open bug tickets if you find any issues.


BEFORE upgrading to a new major release (e.g. 1.16.x -> 1.17.x) make sure that you have upgraded to the latest minor release of the major release you currently run! E.g. if you currently run 1.16.6 and want to upgrade to 1.17.3, first upgrade 1.16.6 to the latest 1.16.x release (e.g. 1.16.7 if that’s the latest one). Afterwards you can do the major release upgrade. That’s important! Otherwise you risk a corrupt cluster state in etcd or other strange behavior.

The first thing you should do is read the CHANGELOG of the version you want to upgrade to. E.g. if you upgrade from v1.18.1 to v1.18.2 you only need to read CHANGELOG-1.18. Watch out for Urgent Upgrade Notes headlines. They shouldn’t happen for patch releases but sometimes they can’t be avoided (the Kubernetes version schema doesn’t follow SemVer btw.). If you want to upgrade the major version e.g. from v1.18.x to v1.19.0, read CHANGELOG-1.19. The same advice as above applies of course.

As the whole Kubernetes cluster state is stored in etcd you should also consider creating a backup of the etcd data. Have a look at the etcd Admin Guide on how to do this. This is especially true if you’re upgrading to a new major release. Also Velero (formerly Heptio Ark) could be an option. Velero is a utility for managing disaster recovery, specifically for your Kubernetes cluster resources and persistent volumes.
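Creating such a backup can be done with etcdctl’s snapshot save command. A minimal sketch via Ansible, assuming the same WireGuard interface and certificate paths as the commands further down — the snapshot path /var/tmp/etcd-backup.db and the node name etcd-node1 are arbitrary examples:

```shell
# Hedged sketch: take an etcd snapshot on one member via Ansible's shell module.
# Hostname, snapshot path and certificate paths are assumptions; adjust them
# to your inventory and etcd role configuration.
ansible -m shell -e "etcd_conf_dir=/etc/etcd" -a 'ETCDCTL_API=3 etcdctl snapshot save /var/tmp/etcd-backup.db \
--endpoints=https://{{ ansible_wg0.ipv4.address }}:2379 \
--cacert={{ etcd_conf_dir }}/ca-etcd.pem \
--cert={{ etcd_conf_dir }}/cert-etcd-server.pem \
--key={{ etcd_conf_dir }}/cert-etcd-server-key.pem' etcd-node1
```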


If you have considered the above prerequisites we’re ready to go. Whether you do a minor release update (e.g. v1.18.0 -> v1.18.1) or a major release update (v1.18.x -> v1.19.0), the steps are basically the same. First we update the controller nodes node by node and afterwards the worker nodes.

One additional hint: Skipping a major release during an upgrade is a bad idea and calls for trouble ;-) So if you want to upgrade from v1.17.x to v1.19.0 your upgrade path should be v1.17.x -> v1.18.x -> v1.19.0.


From time to time the recommended and therefore tested/supported version of etcd changes. This is the case if you upgrade from K8s v1.16 to v1.17 e.g. The recommended etcd version for K8s v1.17 was 3.4.3. As the etcd releases up to 3.4.7 contained important bug fixes I used 3.4.7 right away. So before we upgrade K8s we’ll upgrade etcd first.

Have a look at the etcd upgrade guides. In our example it’s Upgrade etcd from 3.3 to 3.4. The first line in the upgrade guide is: In the general case, upgrading from etcd 3.3 to 3.4 can be a zero-downtime, rolling upgrade. That’s cool because in that case we can upgrade node by node. But before moving on make sure to read the upgrade guide as a whole to catch e.g. changes regarding flags. Also the CHANGELOG-3.4 might contain important information.

First check the cluster state. Before you continue make sure that the cluster is in a healthy state! Since we’re using Ansible we can do that like so:

ansible -m shell -e "etcd_conf_dir=/etc/etcd" -a 'ETCDCTL_API=3 etcdctl endpoint health \
--endpoints=https://{{ ansible_wg0.ipv4.address }}:2379 \
--cacert={{ etcd_conf_dir }}/ca-etcd.pem \
--cert={{ etcd_conf_dir }}/cert-etcd-server.pem \
--key={{ etcd_conf_dir }}/cert-etcd-server-key.pem' k8s_etcd

I use Ansible’s shell module here. I also set a variable etcd_conf_dir which points to the directory where the etcd certificate files are located. That should be the same value as the etcd_conf_dir variable of the etcd role. Since my etcd processes listen on the WireGuard interface I use ansible_wg0.ipv4.address here as wg0 is the name of my WireGuard interface. If you use a different port than 2379 then of course you need to change that one too. You should now see output similar to this:

etcd-node1 | CHANGED | rc=0 >> is healthy: successfully committed proposal: took = 2.807665ms
etcd-node2 | CHANGED | rc=0 >> is healthy: successfully committed proposal: took = 2.682864ms
etcd-node3 | CHANGED | rc=0 >> is healthy: successfully committed proposal: took = 10.169332ms

You can also check the current etcd API version (this will change if ALL etcd members are upgraded):

ansible -m shell -e "etcd_conf_dir=/etc/etcd" -a 'ETCDCTL_API=3 etcdctl version \
--endpoints=https://{{ ansible_wg0.ipv4.address }}:2379 \
--cacert={{ etcd_conf_dir }}/ca-etcd.pem \
--cert={{ etcd_conf_dir }}/cert-etcd-server.pem \
--key={{ etcd_conf_dir }}/cert-etcd-server-key.pem' k8s_etcd

which produces an output like this:

etcd-node1 | CHANGED | rc=0 >>
etcdctl version: 3.3.13
API version: 3.3
etcd-node2 | CHANGED | rc=0 >>
etcdctl version: 3.3.13
API version: 3.3
etcd-node3 | CHANGED | rc=0 >>
etcdctl version: 3.3.13
API version: 3.3

If we have a healthy cluster we can continue. Next set etcd_version: "3.4.7" in group_vars/all.yml (or wherever it makes sense for you).

Now we upgrade the first node e.g.:

ansible-playbook --tags=role-etcd --limit=controller01.i.domain.tld k8s.yml

If this was successful restart the etcd daemon on that node:

ansible -m command -a "systemctl restart etcd" controller01.i.domain.tld

Also keep an eye on the etcd logs e.g.:

ansible -m command -a 'journalctl --since=-15m -t etcd' controller01.i.domain.tld
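The per-node procedure (deploy the etcd role, restart etcd, check the logs) can be sketched as one loop over the remaining nodes — the hostnames below are the ones used throughout this post, so adjust them to your inventory:

```shell
# Sketch: roll the etcd upgrade over the remaining nodes one by one.
# Hostnames are examples taken from this post.
for node in controller02.i.domain.tld controller03.i.domain.tld; do
  # Deploy the updated etcd role to this node only.
  ansible-playbook --tags=role-etcd --limit=${node} k8s.yml
  # Restart the etcd daemon so the new binary is picked up.
  ansible -m command -a "systemctl restart etcd" ${node}
  # Inspect the last 15 minutes of etcd logs before moving on.
  ansible -m command -a 'journalctl --since=-15m -t etcd' ${node}
done
```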

If the logs are ok, do the same for the remaining etcd nodes. Once all etcd daemons are updated you should finally see something like this in the logs:

Apr 05 12:25:10 controller03 etcd[4598]: {"level":"info","ts":"2020-04-05T12:25:10.006+0200","caller":"etcdserver/server.go:2520","msg":"updating cluster version","from":"3.3","to":"3.4"}
Apr 05 12:25:10 controller03 etcd[4598]: {"level":"info","ts":"2020-04-05T12:25:10.011+0200","caller":"membership/cluster.go:546","msg":"updated cluster version","cluster-id":"74a0a98d4800b35d","local-member-id":"49d4179221cb766f","from":"3.3","to":"3.4"}
Apr 05 12:25:10 controller03 etcd[4598]: {"level":"info","ts":"2020-04-05T12:25:10.012+0200","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
Apr 05 12:25:10 controller03 etcd[4598]: {"level":"info","ts":"2020-04-05T12:25:10.013+0200","caller":"etcdserver/server.go:2543","msg":"cluster version is updated","cluster-version":"3.4"}

Again we can check the API version:

ansible -m shell -e "etcd_conf_dir=/etc/etcd" -a 'ETCDCTL_API=3 etcdctl version \
--endpoints=https://{{ ansible_wg0.ipv4.address }}:2379 \
--cacert={{ etcd_conf_dir }}/ca-etcd.pem \
--cert={{ etcd_conf_dir }}/cert-etcd-server.pem \
--key={{ etcd_conf_dir }}/cert-etcd-server-key.pem' k8s_etcd

etcd-node1 | CHANGED | rc=0 >>
etcdctl version: 3.4.7
API version: 3.4
etcd-node2 | CHANGED | rc=0 >>
etcdctl version: 3.4.7
API version: 3.4
etcd-node3 | CHANGED | rc=0 >>
etcdctl version: 3.4.7
API version: 3.4

Now you have a shiny new etcd cluster running v3.4. ;-) Afterwards consider restarting all kube-apiserver instances and make one or two test deployments. I have had cases in the past where everything looked fine after the upgrade, e.g. kubectl get pods -o wide -A worked fine, but later I figured out that it was not possible to create or change deployments…
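Restarting the kube-apiserver processes can again be done with an ad-hoc Ansible command. This is only a sketch: the inventory group name k8s_controller and the systemd unit name kube-apiserver are assumptions about your setup:

```shell
# Restart kube-apiserver on the controller nodes one at a time. --forks=1
# serializes the restarts so the API servers don't all go down at once.
# Group name "k8s_controller" and unit name "kube-apiserver" are assumptions.
ansible -m command -a "systemctl restart kube-apiserver" --forks=1 k8s_controller
```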

Other components

Keep an eye on the external dependencies in the changelog. That’s mainly CNI, Docker and kube-dns/CoreDNS (Docker is deprecated as of K8s v1.20.0 btw.). You may need to upgrade them too. If you need to upgrade CNI and/or Docker I would recommend draining the node before doing this kind of upgrade (see further down the text). This way you can easily upgrade node by node.


Updating the kubectl utility before you upgrade the controller and worker nodes makes sense. Normally a newer client version can talk to older server versions; the other way round isn’t always true. When I update my K8s controller and worker roles I also update my kubectl role. So have a look if you can find the version you need and upgrade kubectl locally first e.g.

ansible-playbook --tags=role-kubectl k8s.yml
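Afterwards you can verify the client version (and the skew against the API server) with kubectl itself:

```shell
# Print client and server versions; the client should be at most one minor
# version ahead of the server.
kubectl version --short
```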

Controller nodes

Update your inventory cache with ansible -m setup all.

The next thing to do is to set k8s_release. Let’s assume we currently have set k8s_release: "1.18.12" and want to upgrade to 1.19.4 so we set k8s_release: "1.19.4" in group_vars/all.yml (or whatever place you defined this variable).

Normally my Kubernetes controller role also has various releases tagged, e.g. 13.0.0+1.19.4. In that case you can just update the role, have a look at the CHANGELOG to see what changed and adjust your variables and maybe other things accordingly. If you don’t find a tag for the K8s release you need, you have to adjust the settings yourself according to the K8s changelog.

Next we deploy the controller role one by one to every controller node e.g.:

ansible-playbook --tags=role-kubernetes-controller --limit=controller01.i.domain.tld k8s.yml

Of course replace controller01.i.domain.tld with the hostname of your first controller node. This will download the Kubernetes binaries, update the old ones and finally restart kube-apiserver, kube-controller-manager and kube-scheduler. As in our current setup all worker services communicate only with the Kubernetes controller01 (we have no load balancer for the kube-apiserver yet), the API server will be briefly unavailable. But that only affects deployments/updates that would take place in this short time. All pods running on the workers keep working as usual.

After the role is deployed you should have a look at the log files (with journalctl e.g.) on controller01 to verify everything worked well. Also check if the services are still listening on the ports they usually do (netstat -tlpn e.g.). You could also do a small Kubernetes test deployment via kubectl to check if this still works.
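Such a test deployment could look like this — the deployment name and the image are arbitrary examples, not part of the original setup:

```shell
# Hedged smoke test: create a throwaway deployment, wait for the rollout to
# finish, then clean it up again.
kubectl create deployment upgrade-test --image=nginx:stable
kubectl rollout status deployment/upgrade-test --timeout=120s
kubectl delete deployment upgrade-test
```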

If you see errors like this one:

failed with: Operation cannot be fulfilled on "": the object has been modified; please apply your changes to the latest version and try again

that should be ok at the moment. At this point you have one controller node with a newer version of K8s and two other nodes with an older K8s version. This message should disappear once you have updated all controller nodes.

If everything is ok go ahead and update controller02 and controller03 e.g.:

ansible-playbook --tags=role-kubernetes-controller --limit=controller02.i.domain.tld k8s.yml

# Wait until controller role is deployed on controller02...

ansible-playbook --tags=role-kubernetes-controller --limit=controller03.i.domain.tld k8s.yml

Now your controller nodes should be up2date!

Worker nodes

As with the Kubernetes controller role mentioned above I also tag the Kubernetes worker role accordingly.

For the worker nodes it’s basically the same as with the controller nodes. We start with worker01:

ansible-playbook --tags=role-kubernetes-worker --limit=worker01.i.domain.tld k8s.yml

Of course replace worker01.i.domain.tld with the hostname of your first worker node. This will download the Kubernetes binaries, update the old ones and finally restart kube-proxy and kubelet. While the two services are updated they won’t be able to start new pods or change network settings. But that’s only true while the services are restarting, which takes only a few seconds, and they will catch up on the changes afterwards. Shouldn’t be a big deal as long as you don’t have a few thousand pods running ;-)

You can also drain a node before you start upgrading that node to avoid possible problems (see Safely Drain a Node while Respecting Application SLOs). You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
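A sketch of that drain workflow for one worker (hostname as used in this post):

```shell
# Evict all pods from the node before upgrading it. --ignore-daemonsets is
# usually required because DaemonSet-managed pods can't be evicted.
kubectl drain worker01.i.domain.tld --ignore-daemonsets
# Deploy the updated worker role while the node is cordoned.
ansible-playbook --tags=role-kubernetes-worker --limit=worker01.i.domain.tld k8s.yml
# Make the node schedulable again.
kubectl uncordon worker01.i.domain.tld
```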

Again check the logs and if everything is ok continue with the other nodes:

ansible-playbook --tags=role-kubernetes-worker --limit=worker02.i.domain.tld k8s.yml

# Wait until worker role is deployed on worker02...

ansible-playbook --tags=role-kubernetes-worker --limit=worker03.i.domain.tld k8s.yml

If the worker role is deployed to all worker nodes we’re basically done with the Kubernetes upgrade!