Kubernetes the not so hard way with Ansible (at Scaleway) - Part 4 - Certificate authority (CA) [updated for Kubernetes v1.10.x]

Setup certificate authorities for etcd and Kubernetes plus certificates needed for Kubernetes components

December 28, 2016



  • I’ll no longer update this text as I migrated my hosts to Hetzner Online because of constant network issues with Scaleway. I’ve created a new blog series about how to set up a Kubernetes cluster at Hetzner Online, but since my Ansible playbooks are not provider dependent the blog text should work for Scaleway too if you still want to use it. The new blog post is here.


  • Updated to kubectl v1.10.4
  • Added variables k8s_controller_manager_csr_* needed for kube-controller-manager client certificate
  • Added variables k8s_controller_manager_sa_csr_* needed for kube-controller-manager
  • Added variables k8s_scheduler_csr_* needed for kube-scheduler client certificate
  • Renamed certificate files cert-kube-proxy* -> cert-k8s-proxy* to be consistent with the other certificate file names
  • Make sure to set k8s_apiserver_secure_port variable in group_vars/all.yml
  • kubeauthconfig now also needs to be generated for kube-(controller-manager|scheduler|admin-user)


  • Added variables available for ansible-role-kubectl
  • Removed unneeded host names from k8s_apiserver_cert_hosts variable
  • Changed default location of k8s_encryption_config_directory to {{k8s_config_directory}}


  • Changed default of k8s_ca_conf_directory to {{ '~/k8s/certs' | expanduser }}. By default this will expand to the user’s LOCAL $HOME (of the user that runs ansible-playbook ...) plus /k8s/certs. That means if the user’s $HOME directory is e.g. /home/da_user then k8s_ca_conf_directory will have a value of /home/da_user/k8s/certs. As the user normally has write access to his $HOME directory, we don’t rely on the parent directory’s permissions if we deploy the role without root permissions. If you defined this variable with a different value before this change you don’t need to care about it.


  • Corrected links to “Kubernetes The Hard Way” at Github (updated for K8s 1.8)
  • Clarify permissions of k8s_ca_conf_directory directory.
  • Updated links to Kubernetes diagram at Digital Ocean.
  • New variables needed for Kubernetes worker, kube-proxy and admin user certificates
  • New tasks to generate kubeconfigs and encryption key

This post is based on Kelsey Hightower’s Kubernetes The Hard Way - Installing the Client Tools and Kubernetes The Hard Way - Provisioning a CA and Generating TLS Certificates.

Now that we’ve done some preparation for our Kubernetes cluster (Part 1 - The basics / Part 2 - Harden the instances / Part 3 - Peervpn) we need a PKI (public key infrastructure) to secure the communication between the Kubernetes components. We’ll use CloudFlare’s CFSSL PKI toolkit to bootstrap certificate authorities and generate TLS certificates. ansible-role-cfssl will generate a few files for that purpose. You can generate the files on any host you want, but I’ll use a directory on localhost (my workstation that runs Ansible) because other roles need to copy a few of the generated files to our Kubernetes hosts later, so it makes sense to keep the files in a place where Ansible has access to them.

First we install the most important Kubernetes utility, called kubectl. We’ll configure it later; for now we just install it. I’ve created an Ansible role to install kubectl locally. Add the following content to Ansible’s hosts file:

localhost ansible_connection=local

Then install the role with

ansible-galaxy install githubixx.kubectl

The role has a few variables you can change if you like (just add the variables and values you want to change to group_vars/all.yml or wherever it fits best for you):

# "kubectl" version to install
kubectl_version: "1.10.4"
# SHA256 checksum of the archive (see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
# for the checksums)
kubectl_checksum: "sha256:2831fe621bf1542a1eac38b8f50aa40a96b26153e850b3ff7155e5ce4f4f400e"
# Where to install "kubectl" binary
kubectl_bin_directory: "/usr/local/bin"
# Directory to store the kubectl archive
kubectl_tmp_directory: "{{lookup('env', 'TMPDIR') | default('/tmp',true)}}"
# Owner of "kubectl" binary
kubectl_owner: "root"
# Group of "kubectl" binary
kubectl_group: "root"
# Operating system on which "kubectl" should run
kubectl_os: "linux" # use "darwin" for MacOS X, "windows" for Windows
# Processor architecture "kubectl" should run on
kubectl_arch: "amd64" # other possible values: "386","arm64","arm","ppc64le","s390x"

To install the kubectl binary simply run

ansible-playbook --tags=role-kubectl k8s.yml

Next we add an additional entry to the Ansible hosts file:

[k8s_ca]
localhost ansible_connection=local

k8s_ca (short for Kubernetes certificate authority) is an Ansible host group (in this case the group contains only one host). As you can see the real hostname is localhost. To make the role work on my workstation I need to add the ansible_connection=local parameter, which tells Ansible to run the role locally without connecting via SSH.

Now we install the cfssl role via

ansible-galaxy install githubixx.cfssl


Then add

- hosts: k8s_ca
  roles:
    - role: githubixx.cfssl
      tags: role-cfssl

to your k8s.yml file. This adds the role githubixx.cfssl to the host group k8s_ca (which contains only one host in our case). Have a look at the defaults/main.yml file of that role for all the variables you can change. Basically the important ones are:

# Specifies the version of CFSSL toolkit we want to download and use
cfssl_version: "R1.2"
# The directory where CFSSL binaries will be installed
cfssl_bin_directory: "/usr/local/bin"

Now we can install the cfssl binaries locally via

ansible-playbook --tags=role-cfssl k8s.yml

Next we can generate the certificate authorities (CA) for etcd and the Kubernetes API server as well as the certificates to secure the communication between the components. DigitalOcean provides a good diagram of the Kubernetes operations flow: Kubernetes operations flow (from https://blog.digitalocean.com/vault-and-kubernetes/). Have a look at the diagram to get a better understanding of the K8s communication workflow. As always I’ve prepared an Ansible role to generate the CAs and certificates. Install the role via

ansible-galaxy install githubixx.kubernetes-ca

Add the role to k8s.yml:

- hosts: k8s_ca
  roles:
    - role: githubixx.kubernetes-ca
      tags: role-kubernetes-ca

As with the CFSSL role, this role will also be applied to the Ansible k8s_ca host (which is localhost, as you may remember from above). Since the CAs and certificates will be installed locally, we need to define the variables for this role in host_vars/localhost or put them into group_vars/all.

This role has quite a few variables, but that’s mainly information needed for the certificates. In contrast to Kelsey Hightower’s guide Provisioning a CA and Generating TLS Certificates we create separate certificate authorities for etcd and the Kubernetes API server. Since only the Kubernetes API server talks to etcd directly, it makes sense not to use the same CA for etcd and the Kubernetes API server to sign certificates. This adds an additional layer of security.

k8s_ca_conf_directory: "{{ '~/k8s/certs' | expanduser }}"
k8s_ca_certificate_owner: "root"
k8s_ca_certificate_group: "root"

k8s_ca_conf_directory tells Ansible where to store the CA and certificate files. To enable Ansible to read the files in later runs you should specify a user and group in k8s_ca_certificate_owner / k8s_ca_certificate_group that have the needed permissions (in most cases this will be the user you use on your workstation). The important thing here is that Ansible needs write access to the parent directory. So choose a directory where Ansible is able to create the k8s/certs directory (use the default value to store the files in your $HOME directory if you don’t need to share the certificates).
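For example, if your local workstation user were da_user (the example user from above), the host_vars/localhost entries could look like this (the values are just illustrations, adjust them to your setup):

```yaml
# host_vars/localhost - example values only
k8s_ca_conf_directory: "{{ '~/k8s/certs' | expanduser }}"
k8s_ca_certificate_owner: "da_user"
k8s_ca_certificate_group: "da_user"
```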

ca_etcd_expiry: "87600h"

ca_etcd_expiry sets the expiry date for the etcd root CA.
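Internally CFSSL consumes such an expiry value via a signing configuration. A minimal sketch of what a generated ca-config.json might look like (the file name and profile name are assumptions for illustration, not taken from the role):

```json
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "etcd": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "87600h"
      }
    }
  }
}
```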

ca_etcd_csr_cn: "Etcd"
ca_etcd_csr_key_algo: "rsa"
ca_etcd_csr_key_size: "2048"
ca_etcd_csr_names_c: "DE"
ca_etcd_csr_names_l: "The_Internet"
ca_etcd_csr_names_o: "Kubernetes"
ca_etcd_csr_names_ou: "BY"
ca_etcd_csr_names_st: "Bayern"

These variables are used to create the CSR (certificate signing request) of the CA (certificate authority) which we use to sign certificates for etcd.
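For illustration, CFSSL maps such variables into a CSR JSON file roughly like the following (a sketch of the CFSSL CSR format, not the role’s exact template):

```json
{
  "CN": "Etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    {
      "C": "DE",
      "L": "The_Internet",
      "O": "Kubernetes",
      "OU": "BY",
      "ST": "Bayern"
    }
  ]
}
```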

ca_k8s_apiserver_expiry: "87600h"

ca_k8s_apiserver_expiry sets the expiry date for the Kubernetes API server root CA.

ca_k8s_apiserver_csr_cn: "Kubernetes"
ca_k8s_apiserver_csr_key_algo: "rsa"
ca_k8s_apiserver_csr_key_size: "2048"
ca_k8s_apiserver_csr_names_c: "DE"
ca_k8s_apiserver_csr_names_l: "The_Internet"
ca_k8s_apiserver_csr_names_o: "Kubernetes"
ca_k8s_apiserver_csr_names_ou: "BY"
ca_k8s_apiserver_csr_names_st: "Bayern"

These variables are used to create the CSR (certificate signing request) of the CA (certificate authority) which we use to sign certificates for the Kubernetes API server.

etcd_csr_cn: "Etcd"
etcd_csr_key_algo: "rsa"
etcd_csr_key_size: "2048"
etcd_csr_names_c: "DE"
etcd_csr_names_l: "The_Internet"
etcd_csr_names_o: "Kubernetes"
etcd_csr_names_ou: "BY"
etcd_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate that is used to secure the etcd communication.

k8s_apiserver_csr_cn: "Kubernetes"
k8s_apiserver_csr_key_algo: "rsa"
k8s_apiserver_csr_key_size: "2048"
k8s_apiserver_csr_names_c: "DE"
k8s_apiserver_csr_names_l: "The_Internet"
k8s_apiserver_csr_names_o: "Kubernetes"
k8s_apiserver_csr_names_ou: "BY"
k8s_apiserver_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate that is used to secure the Kubernetes API server communication.

k8s_admin_csr_cn: "admin"
k8s_admin_csr_key_algo: "rsa"
k8s_admin_csr_key_size: "2048"
k8s_admin_csr_names_c: "DE"
k8s_admin_csr_names_l: "The_Internet"
k8s_admin_csr_names_o: "system:masters" # DO NOT CHANGE!
k8s_admin_csr_names_ou: "BY"
k8s_admin_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate we need to authenticate the admin user, which we’ll use later with the kubectl utility.

k8s_worker_csr_key_algo: "rsa"
k8s_worker_csr_key_size: "2048"
k8s_worker_csr_names_c: "DE"
k8s_worker_csr_names_l: "The_Internet"
k8s_worker_csr_names_o: "system:nodes" # DO NOT CHANGE!
k8s_worker_csr_names_ou: "BY"
k8s_worker_csr_names_st: "Bayern"

The kubelet process (a.k.a. Kubernetes worker) also needs to authenticate itself against the API server. These variables are used to create the CSR file which in turn is used to create the kubelet certificate. Kubernetes uses a special-purpose authorization mode called Node Authorizer that specifically authorizes API requests made by kubelets. In order to be authorized by the Node Authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>.
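So for a worker called e.g. worker01 (a hypothetical node name used only for illustration) the resulting per-node CSR would roughly contain:

```json
{
  "CN": "system:node:worker01",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    {
      "C": "DE",
      "L": "The_Internet",
      "O": "system:nodes",
      "OU": "BY",
      "ST": "Bayern"
    }
  ]
}
```

Note the CN carries the node name while the O (organization) field maps to the system:nodes group the Node Authorizer expects.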

k8s_controller_manager_csr_cn: "system:kube-controller-manager" # DO NOT CHANGE!
k8s_controller_manager_csr_key_algo: "rsa"
k8s_controller_manager_csr_key_size: "2048"
k8s_controller_manager_csr_names_c: "DE"
k8s_controller_manager_csr_names_l: "The_Internet"
k8s_controller_manager_csr_names_o: "system:kube-controller-manager" # DO NOT CHANGE!
k8s_controller_manager_csr_names_ou: "BY"
k8s_controller_manager_csr_names_st: "Bayern"

These variables are needed to generate the CSR for the kube-controller-manager client certificate.

k8s_scheduler_csr_cn: "system:kube-scheduler" # DO NOT CHANGE!
k8s_scheduler_csr_key_algo: "rsa"
k8s_scheduler_csr_key_size: "2048"
k8s_scheduler_csr_names_c: "DE"
k8s_scheduler_csr_names_l: "The_Internet"
k8s_scheduler_csr_names_o: "system:kube-scheduler" # DO NOT CHANGE!
k8s_scheduler_csr_names_ou: "BY"
k8s_scheduler_csr_names_st: "Bayern"

These variables are needed to generate the CSR for the kube-scheduler client certificate.

k8s_controller_manager_sa_csr_cn: "service-accounts"
k8s_controller_manager_sa_key_algo: "rsa"
k8s_controller_manager_sa_csr_key_size: "2048"
k8s_controller_manager_sa_csr_names_c: "DE"
k8s_controller_manager_sa_csr_names_l: "The_Internet"
k8s_controller_manager_sa_csr_names_o: "Kubernetes"
k8s_controller_manager_sa_csr_names_ou: "BY"
k8s_controller_manager_sa_csr_names_st: "Bayern"

CSR parameters for the kube-controller-manager service account key pair. The kube-controller-manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.

k8s_kube_proxy_csr_cn: "system:kube-proxy" # DO NOT CHANGE!
k8s_kube_proxy_csr_key_algo: "rsa"
k8s_kube_proxy_csr_key_size: "2048"
k8s_kube_proxy_csr_names_c: "DE"
k8s_kube_proxy_csr_names_l: "The_Internet"
k8s_kube_proxy_csr_names_o: "system:node-proxier" # DO NOT CHANGE!
k8s_kube_proxy_csr_names_ou: "BY"
k8s_kube_proxy_csr_names_st: "Bayern"

Finally the kube-proxy must also authenticate itself against the API server. As above, these variables are used to create the CSR file, which in turn is used to create the kube-proxy certificate.

The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used later to generate client authentication configuration files. Only the resulting .kubeconfig files will be copied to the nodes.
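To give you an idea what such a generated kubeconfig looks like, here is a sketch for kube-proxy (the server address is an assumption for a PeerVPN API server IP, and the base64 placeholders obviously need real certificate data; the cert-k8s-proxy* file names follow this role’s naming scheme):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    # base64-encoded content of ca-k8s-apiserver.pem
    certificate-authority-data: <base64 CA certificate>
    server: https://10.8.0.101:6443   # assumed PeerVPN IP of an API server host
contexts:
- name: default
  context:
    cluster: kubernetes
    user: system:kube-proxy
current-context: default
users:
- name: system:kube-proxy
  user:
    # base64-encoded content of cert-k8s-proxy.pem / cert-k8s-proxy-key.pem
    client-certificate-data: <base64 client certificate>
    client-key-data: <base64 client key>
```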

etcd_cert_hosts:
  - etcd0
  - etcd1
  - etcd2

Here you add all etcd hosts. The task Generate list of IP addresses and hostnames needed for etcd certificate in this role will automatically add the hostname, the fully qualified domain name (FQDN), the internal IP address and the PeerVPN IP address of your etcd hosts to the list that is needed to create the etcd certificate. etcd_cert_hosts allows you to specify additional hostnames/IPs. In general I would recommend to always add localhost. If you plan to expand your etcd cluster from 3 to 5 hosts later and know the hostname, the fully qualified domain name (FQDN), the internal IP address and the PeerVPN IP address of those hosts upfront, add them here too. This will save you work later.

k8s_apiserver_cert_hosts:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local

As with the etcd hosts above, a task will automatically add the hostname, the fully qualified domain name (FQDN), the internal IP address and the PeerVPN IP address of your Kubernetes API hosts to the list that is needed to create the API server certificate. Additionally I recommend to add the first IP address of the service cluster IP range here (it is the service address of kubernetes.default.svc.cluster.local, which kube-dns resolves for all pods that want to access the API). If you know that you will add more workers later, add them here in advance to save yourself work later.
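As a sketch: assuming a service cluster IP range of 10.32.0.0/16 (an assumption for illustration; use your own range), the k8s_apiserver_cert_hosts variable could look like this:

```yaml
k8s_apiserver_cert_hosts:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 10.32.0.1   # first IP of the assumed service cluster IP range
  - 127.0.0.1
```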

Now we can generate the CSRs and the certificates via

ansible-playbook --tags=role-kubernetes-ca k8s.yml

We only run our kubernetes-ca role, which we tagged as role-kubernetes-ca. After the role has finished you will find quite a few files in k8s_ca_conf_directory. The filenames should give you a good hint about the content of each file and what it is used for.


Next we need to generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API server (see also Generating Kubernetes Configuration Files for Authentication). To create the files I created a few playbooks. Switch to a directory where you want to save the playbooks (e.g. on the same directory level as the Ansible roles directory) and get them via

git clone https://github.com/githubixx/ansible-kubernetes-playbooks

Make sure to set k8s_apiserver_secure_port: "6443" (or whatever port you’ve chosen as the kube-apiserver secure port) in group_vars/all.yml.

Switch to directory kubeauthconfig and run the playbooks:

ansible-playbook kubelets.yml
ansible-playbook kube-proxy.yml
ansible-playbook kube-controller-manager.yml
ansible-playbook kube-scheduler.yml
ansible-playbook kube-admin-user.yml

Finally we need to generate the data encryption config and key. Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest. This playbook will generate an encryption config containing the encryption key. For this to work Ansible needs your encryption key, which we’ll put into group_vars/all (host_vars/localhost should be fine too), e.g.:

# Same as "k8s_config_directory" in this case but could be different
# if permissions are ok.
k8s_encryption_config_directory: "{{k8s_config_directory}}"
k8s_encryption_config_key: "YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk="
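Don’t reuse the example key above. You can generate your own 32-byte, base64 encoded key e.g. like this (any cryptographically secure random source works):

```shell
# Read 32 random bytes and base64 encode them; the result is
# a key suitable for Kubernetes encryption at rest.
head -c 32 /dev/urandom | base64
```

Put the generated value into k8s_encryption_config_key.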

Now switch to directory kubeencryptionconfig and execute the playbook:

ansible-playbook kubeencryptionconfig.yml

The last two steps generated a few new files in {{k8s_config_directory}} and {{k8s_encryption_config_directory}}.

That’s it for part 4. In the next chapter we’ll install the etcd cluster and we’ll use the first CA and certificates that we generated in this part.