Kubernetes the not so hard way with Ansible (at Scaleway) - Part 4 - Certificate authority (CA) [updated for Kubernetes v1.9]

Set up certificate authorities for etcd and Kubernetes plus the certificates needed for the Kubernetes components

December 28, 2016



  • Added variables available for ansible-role-kubectl
  • Removed unneeded host names from k8s_apiserver_cert_hosts variable
  • Changed default location of k8s_encryption_config_directory to {{k8s_config_directory}}


  • Changed default of k8s_ca_conf_directory to {{ '~/k8s/certs' | expanduser }}. By default this will expand to the user’s LOCAL $HOME (the home of the user that runs ansible-playbook ...) plus /k8s/certs. That means if the user’s $HOME directory is e.g. /home/da_user then k8s_ca_conf_directory will have a value of /home/da_user/k8s/certs. As the user normally has write access to his own $HOME directory we don’t rely on the parent directory’s permissions if we deploy the role without root permissions. If you defined this variable with a different value before this change you don’t need to worry about this change.


  • Corrected links to “Kubernetes The Hard Way” at Github (updated for K8s 1.8)
  • Clarified permissions of the k8s_ca_conf_directory directory.
  • Updated links to Kubernetes diagram at Digital Ocean.
  • New variables needed for Kubernetes worker, kube-proxy and admin user certificates
  • New tasks to generate kubeconfigs and encryption key

This post is based on Kelsey Hightower’s Kubernetes The Hard Way - Installing the Client Tools and Kubernetes The Hard Way - Provisioning a CA and Generating TLS Certificates.

Now that we’ve done some preparation for our Kubernetes cluster (Part 1 - The basics / Part 2 - Harden the instances / Part 3 - Peervpn) we need a PKI (public key infrastructure) to secure the communication between the Kubernetes components. We’ll use CloudFlare’s CFSSL PKI toolkit to bootstrap the certificate authorities and generate TLS certificates. ansible-role-cfssl will generate a few files for that purpose. You can generate the files on any host you want, but I’ll use a directory on localhost (my workstation that runs Ansible) because other roles need to copy a few of the generated files to our Kubernetes hosts later, so it makes sense to keep the files in a place Ansible can access.

First we install the most important Kubernetes utility called kubectl. We’ll need it later. I’ve created an Ansible role to install kubectl locally. Add the following content to Ansible’s hosts file:

localhost ansible_connection=local

Then install the role with

ansible-galaxy install githubixx.kubectl

The role has a few variables you can change if you like (just add the variables and values you want to change to group_vars/all.yml or wherever it fits best for you):

# "kubectl" version to install
kubectl_version: "1.9.1"
# SHA256 checksum of the archive (see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md
# for the checksums)
kubectl_checksum: "sha256:fe8fe40148df404b33069931ea30937699758ed4611ef6baddb4c21b7b19db5e"
# Where to install "kubectl" binary
kubectl_bin_directory: "/usr/local/bin"
# Directory to temporarily store the downloaded archive
kubectl_tmp_directory: "{{lookup('env', 'TMPDIR') | default('/tmp',true)}}"
# Owner of "kubectl" binary
kubectl_owner: "root"
# Group of "kubectl" binary
kubectl_group: "root"
# Operating system "kubectl" should run on
kubectl_os: "linux" # use "darwin" for MacOS X, "windows" for Windows
# Processor architecture "kubectl" should run on
kubectl_arch: "amd64" # other possible values: "386","arm64","arm","ppc64le","s390x"
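
The tag used below also needs a matching entry for the role in your k8s.yml playbook. A minimal sketch (assuming you run the role against localhost, analogous to the other roles in this series) could look like this:

# Sketch: entry in k8s.yml for the kubectl role (the hosts pattern is an assumption)
- hosts: localhost
  roles:
    - role: githubixx.kubectl
      tags: role-kubectl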

To install the kubectl binary simply run

ansible-playbook --tags=role-kubectl k8s.yml

Next we add an additional entry to the Ansible hosts file:

[k8s_ca]
localhost ansible_connection=local

k8s_ca (short for kubernetes certificate authority) is an Ansible host group (in this case the group contains only one host). As you can see the real hostname is localhost. To make the role work on my workstation I need to add the ansible_connection=local parameter which tells Ansible to run the role locally without connecting via SSH.

Now we install the cfssl role via

ansible-galaxy install githubixx.cfssl


and add

- hosts: k8s_ca
  roles:
    - role: githubixx.cfssl
      tags: role-cfssl

to your k8s.yml file. This applies the role githubixx.cfssl to the host group k8s_ca (which contains only one host in your case). Have a look at the defaults/main.yml file of that role for all the variables you can change. Basically the important ones are:

# Specifies the version of CFSSL toolkit we want to download and use
cfssl_version: "R1.2"
# The directory where CFSSL binaries will be installed
cfssl_bin_directory: "/usr/local/bin"

Now we can install the cfssl binaries locally via

ansible-playbook --tags=role-cfssl k8s.yml

Next we can generate the certificate authorities (CAs) for etcd and the Kubernetes API server and the certificates to secure the communication between the components. DigitalOcean provides a good diagram of the Kubernetes operations flow (from https://blog.digitalocean.com/vault-and-kubernetes/). Have a look at the diagram to get a better understanding of the K8s communication workflow. As always I’ve prepared an Ansible role to generate the CAs and certificates. Install the role via

ansible-galaxy install githubixx.kubernetes-ca

Add the role to k8s.yml:

- hosts: k8s_ca
  roles:
    - role: githubixx.kubernetes-ca
      tags: role-kubernetes-ca

As with the CFSSL role this role will also be applied to the Ansible k8s_ca host (which is localhost as you may remember from above). Since the CAs and certificates will be stored locally we need to define the variables for this role in host_vars/localhost or put them into group_vars/all.

This role has quite a few variables. But that’s mainly information needed for the certificates. In contrast to Kelsey Hightower’s guide Provisioning a CA and Generating TLS Certificates we create separate certificate authorities for etcd and the Kubernetes API server. Since only the Kubernetes API server talks to etcd directly it makes sense not to use the same CA to sign the certificates for both. This adds an additional layer of security.

k8s_ca_conf_directory: "{{ '~/k8s/certs' | expanduser }}"
k8s_ca_certificate_owner: "root"
k8s_ca_certificate_group: "root"

k8s_ca_conf_directory tells Ansible where to store the CAs and certificate files. To enable Ansible to read the files in later runs you should specify a user and group in k8s_ca_certificate_owner / k8s_ca_certificate_group which have the needed permissions (in most cases this will be the user you use on your workstation). The important thing here is that Ansible needs write access to the parent directory. So choose a directory where Ansible is able to create the k8s/certs directory (use the default value to store the files in your $HOME directory if you don’t need to share the certificates).
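
For example, if you want to keep the certificates outside of your $HOME (maybe to share them with colleagues), an override could look like this (the path is hypothetical and da_user is just the example user from above; Ansible needs write access to the parent directory /opt/ansible):

# Hypothetical override in host_vars/localhost or group_vars/all
k8s_ca_conf_directory: "/opt/ansible/k8s/certs"
k8s_ca_certificate_owner: "da_user"
k8s_ca_certificate_group: "da_user"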

ca_etcd_expiry: "87600h"

ca_etcd_expiry sets the expiry date for the etcd root CA.

ca_etcd_csr_cn: "Etcd"
ca_etcd_csr_key_algo: "rsa"
ca_etcd_csr_key_size: "2048"
ca_etcd_csr_names_c: "DE"
ca_etcd_csr_names_l: "The_Internet"
ca_etcd_csr_names_o: "Kubernetes"
ca_etcd_csr_names_ou: "BY"
ca_etcd_csr_names_st: "Bayern"

These variables are used to create the CSR (certificate signing request) of the CA (certificate authority) which we use to sign certificates for etcd.

ca_k8s_apiserver_expiry: "87600h"

ca_k8s_apiserver_expiry sets the expiry date for the Kubernetes API server root CA.

ca_k8s_apiserver_csr_cn: "Kubernetes"
ca_k8s_apiserver_csr_key_algo: "rsa"
ca_k8s_apiserver_csr_key_size: "2048"
ca_k8s_apiserver_csr_names_c: "DE"
ca_k8s_apiserver_csr_names_l: "The_Internet"
ca_k8s_apiserver_csr_names_o: "Kubernetes"
ca_k8s_apiserver_csr_names_ou: "BY"
ca_k8s_apiserver_csr_names_st: "Bayern"

These variables are used to create the CSR (certificate signing request) of the CA (certificate authority) which we use to sign certificates for the Kubernetes API server.

etcd_csr_cn: "Etcd"
etcd_csr_key_algo: "rsa"
etcd_csr_key_size: "2048"
etcd_csr_names_c: "DE"
etcd_csr_names_l: "The_Internet"
etcd_csr_names_o: "Kubernetes"
etcd_csr_names_ou: "BY"
etcd_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate that is used to secure the etcd communication.

k8s_apiserver_csr_cn: "Kubernetes"
k8s_apiserver_csr_key_algo: "rsa"
k8s_apiserver_csr_key_size: "2048"
k8s_apiserver_csr_names_c: "DE"
k8s_apiserver_csr_names_l: "The_Internet"
k8s_apiserver_csr_names_o: "Kubernetes"
k8s_apiserver_csr_names_ou: "BY"
k8s_apiserver_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate that is used to secure the Kubernetes API server communication.

k8s_admin_csr_cn: "admin"
k8s_admin_csr_key_algo: "rsa"
k8s_admin_csr_key_size: "2048"
k8s_admin_csr_names_c: "DE"
k8s_admin_csr_names_l: "The_Internet"
k8s_admin_csr_names_o: "system:masters" # DO NOT CHANGE!
k8s_admin_csr_names_ou: "BY"
k8s_admin_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate we need to authenticate the admin user, which we’ll use later with the kubectl utility.

k8s_worker_csr_key_algo: "rsa"
k8s_worker_csr_key_size: "2048"
k8s_worker_csr_names_c: "DE"
k8s_worker_csr_names_l: "The_Internet"
k8s_worker_csr_names_o: "system:nodes" # DO NOT CHANGE!
k8s_worker_csr_names_ou: "BY"
k8s_worker_csr_names_st: "Bayern"

The kubelet process (a.k.a. Kubernetes worker) also needs to authenticate itself against the API server. The variables are used to create the CSR file which in turn is used to create the kubelet certificate.

k8s_kube_proxy_csr_cn: "system:kube-proxy" # DO NOT CHANGE!
k8s_kube_proxy_csr_key_algo: "rsa"
k8s_kube_proxy_csr_key_size: "2048"
k8s_kube_proxy_csr_names_c: "DE"
k8s_kube_proxy_csr_names_l: "The_Internet"
k8s_kube_proxy_csr_names_o: "system:node-proxier" # DO NOT CHANGE!
k8s_kube_proxy_csr_names_ou: "BY"
k8s_kube_proxy_csr_names_st: "Bayern"

And finally kube-proxy must also authenticate itself against the API server. As above, these variables will be used to create the CSR file which in turn is used to create the kube-proxy certificate.

etcd_cert_hosts:
  - etcd0
  - etcd1
  - etcd2

Here you add all etcd hosts. The task “Generate list of IP addresses and hostnames needed for etcd certificate” will automatically add the hostname, the fully qualified domain name (FQDN), the internal IP address and the PeerVPN IP address of your etcd hosts to a list which is needed to create the etcd certificate. etcd_cert_hosts allows you to specify additional hostnames/IPs. In general I would recommend to always add 127.0.0.1 and/or localhost. If you plan to expand your etcd cluster from 3 to 5 hosts later and already know the hostname, the FQDN, the internal IP address and the PeerVPN IP address of those hosts up front, add them here too. This will save you work later.
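
If you follow that advice, a hypothetical expanded list could look like this (all extra names and addresses are examples only):

etcd_cert_hosts:
  - localhost
  - 127.0.0.1
  - etcd0
  - etcd1
  - etcd2
  - etcd3             # hypothetical future cluster member
  - etcd3.example.com # its FQDN, if already known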

k8s_apiserver_cert_hosts:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local

As with the etcd hosts above, a task will automatically add the hostname, the fully qualified domain name (FQDN), the internal IP address and the PeerVPN IP address of your Kubernetes API hosts to a list which is needed to create the API server certificate. Additionally I recommend adding the first IP address of the service cluster IP range (that’s the service address of kubernetes.default.svc.cluster.local, which will be resolved by kube-dns for all pods that want to access the API) and localhost / 127.0.0.1 here. If you know that you will add more workers later, add them here in advance to save you work later.
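
Again a hypothetical expanded list, assuming e.g. a service cluster IP range of 10.32.0.0/16 (its first IP would then be 10.32.0.1; adjust this to whatever range you actually use):

k8s_apiserver_cert_hosts:
  - localhost
  - 127.0.0.1
  - 10.32.0.1          # first IP of the (assumed) service cluster IP range
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local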

Now we can generate the CSRs and the certificates via

ansible-playbook --tags=role-kubernetes-ca k8s.yml

We only run our kubernetes-ca role which we tagged as role-kubernetes-ca. After the role has finished you will find quite a few files in k8s_ca_conf_directory. The filenames should give you a good hint about what each file contains and what it is used for.

Next we need to generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API servers (also see Generating Kubernetes Configuration Files for Authentication). To create these files I’ve prepared a few playbooks. Switch to a directory where you want to save the playbooks (e.g. on the same directory level as the Ansible roles directory) and get them via

git clone https://github.com/githubixx/ansible-kubernetes-misc

Switch to directory kubeauthconfig and run the playbooks with

ansible-playbook kubelets.yml

and

ansible-playbook kube-proxy.yml
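
The playbooks do the heavy lifting for you. Just to illustrate what generating a kubeconfig boils down to, here is a rough, hypothetical sketch of a single task wrapping the usual kubectl config calls (the file names, the user name and the API server address are assumptions, not the playbooks’ actual content; replace <api-server-ip> with one of your API server addresses):

# Hypothetical sketch only - NOT the actual content of kubelets.yml
- hosts: k8s_ca
  tasks:
    - name: Assemble a kubeconfig for one worker (all paths/names are assumptions)
      shell: |
        kubectl config set-cluster kubernetes \
          --certificate-authority={{ k8s_ca_conf_directory }}/ca-k8s-apiserver.pem \
          --embed-certs=true \
          --server=https://<api-server-ip>:6443 \
          --kubeconfig=worker0.kubeconfig
        kubectl config set-credentials system:node:worker0 \
          --client-certificate={{ k8s_ca_conf_directory }}/cert-worker0.pem \
          --client-key={{ k8s_ca_conf_directory }}/cert-worker0-key.pem \
          --embed-certs=true \
          --kubeconfig=worker0.kubeconfig
        kubectl config set-context default --cluster=kubernetes \
          --user=system:node:worker0 --kubeconfig=worker0.kubeconfig
        kubectl config use-context default --kubeconfig=worker0.kubeconfig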

Finally we need to generate the data encryption config and key. Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest. This playbook will generate an encryption config containing the encryption key. For this to work Ansible needs your encryption key which we’ll put into group_vars/all (host_vars/localhost should be fine too), e.g.:

# Same as "k8s_config_directory" in this case but could be a
# different directory if permissions are ok.
k8s_encryption_config_directory: "{{k8s_config_directory}}"
k8s_encryption_config_key: "YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk="
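
A suitable key can be generated e.g. by base64-encoding 32 random bytes (head -c 32 /dev/urandom | base64). The playbook renders the encryption config from these variables; the resulting file typically looks roughly like this (the exact template is up to the playbook, the key below is just the example value from above):

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk=
      - identity: {}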

Now switch to directory kubeencryptionconfig and execute the playbook:

ansible-playbook kubeencryptionconfig.yml

The last two steps generated a few new files in k8s_config_directory and k8s_encryption_config_directory (which point to the same directory by default).

That’s it for part 4. In the next chapter we’ll install the etcd cluster and we’ll use the first CA and certificates that we generated in this part.