Kubernetes the not so hard way with Ansible - Certificate authority (CA)

Set up certificate authorities for etcd and Kubernetes plus the certificates needed for the Kubernetes components

September 4, 2018

This post is based on Kelsey Hightower’s Kubernetes The Hard Way - Installing the Client Tools and Kubernetes The Hard Way - Provisioning a CA and Generating TLS Certificates.

Now that we’ve done some preparation for our Kubernetes cluster (The basics / Harden the instances / WireGuard) we need a PKI (public key infrastructure) to secure the communication between the Kubernetes components. We’ll use CloudFlare’s CFSSL PKI toolkit to bootstrap certificate authorities and generate TLS certificates. ansible-role-cfssl will generate a few files for that purpose. You can generate the files on any host you want, but I’ll use a directory on localhost (my workstation that runs Ansible) because other roles need to copy a few of the generated files to our Kubernetes hosts later, so it makes sense to keep the files in a place where Ansible has access to them.

First we install the most important Kubernetes utility: kubectl. We’ll configure it later; at the moment we just install it. I’ve created an Ansible role to install kubectl locally. Add the following content to Ansible’s hosts file:
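For example (the group name k8s_kubectl is just an assumption on my part; any inventory entry that matches the host you run the role on will do):

```ini
[k8s_kubectl]
workstation
```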


workstation is the hostname of my local workstation/laptop. Of course you need to change it. You may also need to add an entry to /etc/hosts like localhost workstation to make name resolution work. An Ansible host_vars file could look like this (host_vars/workstation):

wireguard_address: ""
wireguard_endpoint: ""
ansible_connection: local
ansible_become_user: root
ansible_become: true
ansible_become_method: sudo

As already mentioned in the previous part my workstation is part of the WireGuard fully meshed network that connects every Kubernetes node to all the other nodes. So I can access the Kubernetes API server via VPN and don’t need SSH forwarding or stuff like that to make kubectl work.

Then install the role with

ansible-galaxy install githubixx.kubectl

The role has a few variables you can change if you like (just add the variables and values you want to change to group_vars/all.yml or wherever it fits best for you):

# "kubectl" version to install
kubectl_version: "1.10.4"
# SHA256 checksum of the archive (see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
# for the checksums)
kubectl_checksum: "sha256:2831fe621bf1542a1eac38b8f50aa40a96b26153e850b3ff7155e5ce4f4f400e"
# Where to install "kubectl" binary
kubectl_bin_directory: "/usr/local/bin"
# Directory to store the kubeclient archive
kubectl_tmp_directory: "{{lookup('env', 'TMPDIR') | default('/tmp',true)}}"
# Owner of "kubectl" binary
kubectl_owner: "root"
# Group of "kubectl" binary
kubectl_group: "root"
# Operating system on which "kubectl" should run
kubectl_os: "linux" # use "darwin" for MacOS X, "windows" for Windows
# Processor architecture "kubectl" should run on
kubectl_arch: "amd64" # other possible values: "386","arm64","arm","ppc64le","s390x"

To install the kubectl binary, simply run

ansible-playbook --tags=role-kubectl k8s.yml

Next we add an additional entry to the Ansible hosts file:
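For example (assuming your workstation is called workstation as above):

```ini
[k8s_ca]
workstation
```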


k8s_ca (short for Kubernetes certificate authority) is an Ansible host group (in this case the group contains only one host). As you can see, my workstation will also hold all the certificate authority files.

Now we install the cfssl role via

ansible-galaxy install githubixx.cfssl


Add

- hosts: k8s_ca
  roles:
    - role: githubixx.cfssl
      tags: role-cfssl

to your k8s.yml file. This adds the role githubixx.cfssl to the host group k8s_ca (which contains only one host in your case, as already mentioned). Have a look at the defaults/main.yml file of that role for all the variables you can change. Basically the important ones are:

# Specifies the version of CFSSL toolkit we want to download and use
cfssl_version: "R1.2"
# The directory where CFSSL binaries will be installed
cfssl_bin_directory: "/usr/local/bin"

Now we can install the cfssl binaries locally via

ansible-playbook --tags=role-cfssl k8s.yml

Next we can generate the certificate authorities (CA) for etcd and Kubernetes API server and the certificates to secure the communication between the components. DigitalOcean provides a good diagram of the Kubernetes operations flow: Kubernetes operations flow (from https://blog.digitalocean.com/vault-and-kubernetes/). Have a look at the diagram to get a better understanding of the K8s communication workflow. As always I’ve prepared a Ansible role to generate the CA’s and certificates. Install the role via

ansible-galaxy install githubixx.kubernetes-ca

Add the role to k8s.yml:

- hosts: k8s_ca
  roles:
    - role: githubixx.kubernetes-ca
      tags: role-kubernetes-ca

As with the CFSSL role this role will also be applied to the Ansible k8s_ca host (which is your workstation as you may remember from above).

This role has quite a few variables, but most of them are just information needed for the certificates. In contrast to Kelsey Hightower’s guide Provisioning a CA and Generating TLS Certificates, we create separate certificate authorities for etcd and the Kubernetes API server. Since only the Kubernetes API server talks to etcd directly, it makes sense not to use the same CA to sign certificates for both. This adds an additional layer of security.

k8s_ca_conf_directory: "{{ '~/k8s/certs' | expanduser }}"
k8s_ca_certificate_owner: "root"
k8s_ca_certificate_group: "root"

k8s_ca_conf_directory tells Ansible where to store the certificate authority (CA) and certificate files. To enable Ansible to read the files in later runs, you should specify a user and group in k8s_ca_certificate_owner / k8s_ca_certificate_group that have the needed permissions (in most cases this will be the user you use on your workstation).

ca_etcd_expiry: "87600h"

ca_etcd_expiry sets the expiry date for the etcd root CA.

ca_etcd_csr_cn: "Etcd"
ca_etcd_csr_key_algo: "rsa"
ca_etcd_csr_key_size: "2048"
ca_etcd_csr_names_c: "DE"
ca_etcd_csr_names_l: "The_Internet"
ca_etcd_csr_names_o: "Kubernetes"
ca_etcd_csr_names_ou: "BY"
ca_etcd_csr_names_st: "Bayern"

These variables are used to create the CSR (certificate signing request) of the CA (certificate authority) which we use to sign certificates for etcd.
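Internally these values end up in a CFSSL CSR file. For the defaults above, the generated etcd CA CSR JSON would look roughly like this (illustrative sketch — the role renders the actual file, and the exact filename and layout may differ):

```json
{
  "CN": "Etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "The_Internet",
      "O": "Kubernetes",
      "OU": "BY",
      "ST": "Bayern"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
```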

ca_k8s_apiserver_expiry: "87600h"

ca_k8s_apiserver_expiry sets the expiry date for the Kubernetes API server root CA.

ca_k8s_apiserver_csr_cn: "Kubernetes"
ca_k8s_apiserver_csr_key_algo: "rsa"
ca_k8s_apiserver_csr_key_size: "2048"
ca_k8s_apiserver_csr_names_c: "DE"
ca_k8s_apiserver_csr_names_l: "The_Internet"
ca_k8s_apiserver_csr_names_o: "Kubernetes"
ca_k8s_apiserver_csr_names_ou: "BY"
ca_k8s_apiserver_csr_names_st: "Bayern"

These variables are used to create the CSR (certificate signing request) of the CA (certificate authority) which we use to sign certificates for the Kubernetes API server.

etcd_csr_cn: "Etcd"
etcd_csr_key_algo: "rsa"
etcd_csr_key_size: "2048"
etcd_csr_names_c: "DE"
etcd_csr_names_l: "The_Internet"
etcd_csr_names_o: "Kubernetes"
etcd_csr_names_ou: "BY"
etcd_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate that is used to secure the etcd communication.

k8s_apiserver_csr_cn: "Kubernetes"
k8s_apiserver_csr_key_algo: "rsa"
k8s_apiserver_csr_key_size: "2048"
k8s_apiserver_csr_names_c: "DE"
k8s_apiserver_csr_names_l: "The_Internet"
k8s_apiserver_csr_names_o: "Kubernetes"
k8s_apiserver_csr_names_ou: "BY"
k8s_apiserver_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate that is used to secure the Kubernetes API server communication.

k8s_admin_csr_cn: "admin"
k8s_admin_csr_key_algo: "rsa"
k8s_admin_csr_key_size: "2048"
k8s_admin_csr_names_c: "DE"
k8s_admin_csr_names_l: "The_Internet"
k8s_admin_csr_names_o: "system:masters" # DO NOT CHANGE!
k8s_admin_csr_names_ou: "BY"
k8s_admin_csr_names_st: "Bayern"

These variables are used to create the CSR for the certificate we need to authenticate the admin user, which we’ll use later with the kubectl utility.

k8s_worker_csr_key_algo: "rsa"
k8s_worker_csr_key_size: "2048"
k8s_worker_csr_names_c: "DE"
k8s_worker_csr_names_l: "The_Internet"
k8s_worker_csr_names_o: "system:nodes" # DO NOT CHANGE!
k8s_worker_csr_names_ou: "BY"
k8s_worker_csr_names_st: "Bayern"

The kubelet process (a.k.a. Kubernetes worker) also needs to authenticate itself against the API server. These variables are used to create the CSR file which in turn is used to create the kubelet certificate. Kubernetes uses a special-purpose authorization mode called Node Authorizer that specifically authorizes API requests made by kubelets. In order to be authorized by the Node Authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:&lt;nodeName&gt;.
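For a hypothetical worker host named worker01 the resulting CSR would therefore carry a CN of system:node:worker01 and the system:nodes organization, roughly like this (illustrative, the hostname is made up):

```json
{
  "CN": "system:node:worker01",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "The_Internet",
      "O": "system:nodes",
      "OU": "BY",
      "ST": "Bayern"
    }
  ]
}
```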

k8s_controller_manager_csr_cn: "system:kube-controller-manager" # DO NOT CHANGE!
k8s_controller_manager_csr_key_algo: "rsa"
k8s_controller_manager_csr_key_size: "2048"
k8s_controller_manager_csr_names_c: "DE"
k8s_controller_manager_csr_names_l: "The_Internet"
k8s_controller_manager_csr_names_o: "system:kube-controller-manager" # DO NOT CHANGE!
k8s_controller_manager_csr_names_ou: "BY"
k8s_controller_manager_csr_names_st: "Bayern"

These variables are needed to generate the CSR for the kube-controller-manager client certificate.

k8s_scheduler_csr_cn: "system:kube-scheduler" # DO NOT CHANGE!
k8s_scheduler_csr_key_algo: "rsa"
k8s_scheduler_csr_key_size: "2048"
k8s_scheduler_csr_names_c: "DE"
k8s_scheduler_csr_names_l: "The_Internet"
k8s_scheduler_csr_names_o: "system:kube-scheduler" # DO NOT CHANGE!
k8s_scheduler_csr_names_ou: "BY"
k8s_scheduler_csr_names_st: "Bayern"

These variables are needed to generate the CSR for the kube-scheduler client certificate.

k8s_controller_manager_sa_csr_cn: "service-accounts"
k8s_controller_manager_sa_key_algo: "rsa"
k8s_controller_manager_sa_csr_key_size: "2048"
k8s_controller_manager_sa_csr_names_c: "DE"
k8s_controller_manager_sa_csr_names_l: "The_Internet"
k8s_controller_manager_sa_csr_names_o: "Kubernetes"
k8s_controller_manager_sa_csr_names_ou: "BY"
k8s_controller_manager_sa_csr_names_st: "Bayern"

CSR parameters for the kube-controller-manager service account key pair. The kube-controller-manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.

k8s_kube_proxy_csr_cn: "system:kube-proxy" # DO NOT CHANGE!
k8s_kube_proxy_csr_key_algo: "rsa"
k8s_kube_proxy_csr_key_size: "2048"
k8s_kube_proxy_csr_names_c: "DE"
k8s_kube_proxy_csr_names_l: "The_Internet"
k8s_kube_proxy_csr_names_o: "system:node-proxier" # DO NOT CHANGE!
k8s_kube_proxy_csr_names_ou: "BY"
k8s_kube_proxy_csr_names_st: "Bayern"

And finally, kube-proxy must also authenticate itself against the API server. As above, these variables will be used to create the CSR file which in turn is used to create the kube-proxy certificate.

The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used later to generate client authentication configuration files. Only the resulting .kubeconfig files will be copied to the nodes.

etcd_hosts:
  - etcd0
  - etcd1
  - etcd2

Here you add all your etcd hosts. The task Generate list of IP addresses and hostnames needed for etcd certificate in this role will automatically add the hostname, the fully qualified domain name (FQDN), the internal IP address and the VPN IP address of your etcd hosts to a list which is needed to create the etcd certificate.

The VPN IP is the IP of the WireGuard interface if you used WireGuard as the VPN solution I suggested in the previous part of this blog series. But you can also use PeerVPN or whatever you like, as long as every Kubernetes/etcd host can talk to every other Kubernetes/etcd host via the interface you specify in k8s_interface. The Kubernetes CA Ansible role used here will fetch and use the IP address of the interface specified in the

k8s_interface: "wg0"

variable. As you can see in this case the interface is called wg0 which is the default WireGuard interface and which we created in the previous post with the wireguard Ansible role. The IP address of the k8s_interface will be included in the Kubernetes and etcd certificates to authenticate all the hosts.
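To double-check which address the role will pick up on a node, you can inspect the interface yourself (just a sanity check, not something the role requires):

```shell
# Show the IPv4 address assigned to the WireGuard interface "wg0"
ip -4 addr show wg0 | awk '/inet / {print $2}'
```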

etcd_cert_hosts allows you to specify additional hostnames/IPs. In general I would recommend to always add 127.0.0.1 and/or localhost. If you plan to expand your etcd cluster from 3 to 5 hosts later and already know the hostname, the fully qualified domain name (FQDN), the internal IP address and especially the VPN IP address (the WireGuard IP) of those hosts upfront, add them here too. This will save you a lot of work later.

k8s_apiserver_cert_hosts:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local

As with the etcd hosts above, a further task will automatically add the hostname, the fully qualified domain name (FQDN), the internal IP address and the VPN IP address of your Kubernetes API hosts to the list that is needed to create the API server certificate.

Additionally I recommend adding the first IP address of the service cluster IP range here as well (that IP is the service address of kubernetes.default.svc.cluster.local, which will be resolved by kube-dns for all pods that want to access the API). If you know that you will add more workers later, add them here in advance to save yourself work later.

Now we can generate the CSRs and the certificates via

ansible-playbook --tags=role-kubernetes-ca k8s.yml

We only run our kubernetes-ca role, which we tagged as role-kubernetes-ca. After the role has finished you will find quite a few files in k8s_ca_conf_directory. The filenames should give you a good hint about the content of each file and what it is used for. Here’s an overview of the files you should at least get:
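Roughly speaking, for every CA and every component there should be a CSR JSON, the certificate itself and the matching private key. The names below are only indicative (check the role’s actual output for the authoritative list):

```text
ca-etcd.pem, ca-etcd-key.pem                        # etcd CA certificate + private key
ca-k8s-apiserver.pem, ca-k8s-apiserver-key.pem      # Kubernetes CA certificate + private key
cert-etcd.pem, cert-etcd-key.pem                    # etcd server certificate + key
cert-k8s-apiserver.pem, cert-k8s-apiserver-key.pem  # API server certificate + key
cert-admin.pem, cert-admin-key.pem                  # admin client certificate + key
plus client certificates/keys for kube-controller-manager, kube-scheduler,
kube-proxy, the service account key pair and one per worker node
```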


Next we need to generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API servers (see also Generating Kubernetes Configuration Files for Authentication). To create the files I created a few playbooks. Switch to a directory where you want to save the playbooks (e.g. the same directory level as the Ansible roles directory) and get them via

git clone https://github.com/githubixx/ansible-kubernetes-playbooks

Make sure to set k8s_apiserver_secure_port: "6443" (or whatever port you’ve chosen for the kube-apiserver secure port) in group_vars/all.yml.

Switch to directory kubeauthconfig and run the playbooks:

ansible-playbook kubelets.yml
ansible-playbook kube-proxy.yml
ansible-playbook kube-controller-manager.yml
ansible-playbook kube-scheduler.yml
ansible-playbook kube-admin-user.yml

Finally we need to generate the data encryption config and key. Kubernetes stores a variety of data, including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest. This playbook will generate an encryption config containing the encryption key. For this to work Ansible needs your encryption key, which we’ll put into group_vars/all.yml, e.g.:

# Same as "k8s_config_directory" in this case but could be different.
k8s_encryption_config_directory: "{{k8s_config_directory}}"
k8s_encryption_config_key: "YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk="
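Do not reuse the key from this example. One way to generate your own 32-byte, base64-encoded key (assuming a Linux workstation with coreutils) is:

```shell
# Read 32 random bytes and base64-encode them -- the format
# expected for the encryption key.
head -c 32 /dev/urandom | base64
```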

Now switch to directory kubeencryptionconfig and execute the playbook:

ansible-playbook kubeencryptionconfig.yml
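The generated file should have the shape Kubernetes expected for encryption at rest in the 1.10 timeframe, roughly like this (a sketch based on the Kubernetes encryption-at-rest docs of that era; the secret value comes from k8s_encryption_config_key):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk=
      - identity: {}
```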

The last two steps generated a few new files in {{k8s_config_directory}} and {{k8s_encryption_config_directory}}.

That’s it for now. In the next chapter we’ll install the etcd cluster and we’ll use the first CA and certificates that we generated in this part.