Kubernetes the not so hard way with Ansible (at Scaleway) - Part 4 - Certificate authority (CA) [updated for K8s v1.8]

December 28, 2016

CHANGELOG

2017-10-08

  • Corrected links to “Kubernetes The Hard Way” at GitHub (updated for K8s 1.8)
  • Clarified permissions of the k8s_ca_conf_directory directory
  • Updated links to the Kubernetes diagram at DigitalOcean
  • New variables needed for Kubernetes worker, kube-proxy and admin user certificates
  • New tasks to generate kubeconfigs and the encryption key

This post is based on Kelsey Hightower’s Kubernetes The Hard Way - Installing the Client Tools and Kubernetes The Hard Way - Provisioning a CA and Generating TLS Certificates.

Now that we’ve done some preparation for our Kubernetes cluster (Part 1 - The basics / Part 2 - Harden the instances / Part 3 - Peervpn) we need a PKI (public key infrastructure) to secure the communication between the Kubernetes components. We’ll use CloudFlare’s CFSSL PKI toolkit to bootstrap certificate authorities and generate TLS certificates. ansible-role-cfssl will generate a few files for that purpose. You could generate the files on any host you want, but I’ll use a directory on localhost (my workstation that runs Ansible) because other roles need to copy a few of the generated files to our Kubernetes hosts later, so it makes sense to keep the files in a place Ansible can access.

First we install the most important Kubernetes utility: kubectl. We’ll need it later. I’ve created an Ansible role to install kubectl locally. Add the following content to Ansible’s hosts file:

[k8s_kubectl]
localhost ansible_connection=local

Then install the role with

ansible-galaxy install githubixx.kubectl

To install the kubectl binary, run

ansible-playbook --tags=role-kubectl k8s.yml

Next we add an additional entry to the Ansible hosts file:

[k8s_ca]
localhost ansible_connection=local

k8s_ca (short for Kubernetes certificate authority) is an Ansible host group (in this case the group contains only one host). As you can see, the real hostname is localhost. To make the role work on my workstation I need to add the ansible_connection=local parameter, which tells Ansible to run the role locally without connecting via SSH.

Now we install the cfssl role via

ansible-galaxy install githubixx.cfssl

Add

- hosts: k8s_ca
  roles:
    - 
      role: githubixx.cfssl 
      tags: role-cfssl

to your k8s.yml file. This assigns the role githubixx.cfssl to the host group k8s_ca (which contains only one host in our case). Have a look at the defaults/main.yml file of that role for all variables you can change. The important ones are:

cfssl_version: R1.2
cfssl_bin_directory: /usr/local/bin

The variables are pretty self-explanatory: cfssl_version specifies the version of the CFSSL toolkit we want to download and use, and cfssl_bin_directory is the directory where the CFSSL binaries will be installed.

Now we can install the cfssl binaries locally via

ansible-playbook --tags=role-cfssl k8s.yml

Next we can generate the certificate authorities (CAs) for etcd and the Kubernetes API server, plus the certificates to secure the communication between the components. DigitalOcean provides a good diagram of the Kubernetes operations flow: Kubernetes operations flow (from https://blog.digitalocean.com/vault-and-kubernetes/). Have a look at the diagram to get a better understanding of the K8s communication workflow. As always, I’ve prepared an Ansible role to generate the CAs and certificates. Install the role via

ansible-galaxy install githubixx.kubernetes-ca

Add the role to k8s.yml:

- hosts: k8s_ca
  roles:
    -
      role: githubixx.kubernetes-ca
      tags: role-kubernetes-ca

Like the CFSSL role, this role will be applied to the Ansible k8s_ca host (which is localhost, as you may remember from above). Since the CAs and certificates will be installed locally, we need to define the variables for this role in host_vars/localhost.
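At this point the relevant parts of k8s.yml should look roughly like this (a sketch combining the role entries added so far; the kubectl entry is assumed to follow the same pattern as the others):

```yaml
---
- hosts: k8s_kubectl
  roles:
    - role: githubixx.kubectl
      tags: role-kubectl

- hosts: k8s_ca
  roles:
    - role: githubixx.cfssl
      tags: role-cfssl
    - role: githubixx.kubernetes-ca
      tags: role-kubernetes-ca
```

The tags let us run a single role at a time with `--tags=...`, as we do throughout this series.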

This role has quite a few variables, but they’re mainly information needed for the certificates. In contrast to Kelsey Hightower’s guide Provisioning a CA and Generating TLS Certificates, we create separate certificate authorities for etcd and the Kubernetes API server. Since only the Kubernetes API server talks to etcd directly, it makes sense not to use the same CA to sign certificates for both. This adds an additional layer of security.

k8s_ca_conf_directory: /etc/k8s-certs
k8s_ca_certificate_owner: root
k8s_ca_certificate_group: root

k8s_ca_conf_directory tells Ansible where to store the CAs and certificate files. To enable Ansible to read the files in later runs, specify a user and group in k8s_ca_certificate_owner / k8s_ca_certificate_group that has the needed permissions (in most cases this will be the user you use on your workstation). The important thing here is that Ansible needs write access to the parent directory. In the example above the parent directory would be /etc, and it’s unlikely that an ordinary user has write access to it. So choose a directory where Ansible is able to create the k8s-certs directory (maybe your $HOME directory if you don’t need to share the certificates).
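For example, to keep everything below your home directory so no root permissions are needed, you could put something like this into host_vars/localhost (user name and path are placeholders, adjust them to your environment):

```yaml
k8s_ca_conf_directory: "/home/yourusername/k8s/certs"
k8s_ca_certificate_owner: "yourusername"
k8s_ca_certificate_group: "yourusername"
```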

ca_etcd_expiry: 87600h

ca_etcd_expiry sets the expiry date for the etcd root CA.

ca_etcd_csr_cn: Etcd
ca_etcd_csr_key_algo: rsa
ca_etcd_csr_key_size: 2048
ca_etcd_csr_names_c: DE
ca_etcd_csr_names_l: The_Internet
ca_etcd_csr_names_o: Kubernetes
ca_etcd_csr_names_ou: BY
ca_etcd_csr_names_st: Bayern

These variables are used to create the CSR (certificate signing request) of the CA (certificate authority) which we use to sign certificates for etcd.

ca_k8s_apiserver_expiry: 87600h

ca_k8s_apiserver_expiry sets the expiry date for the Kubernetes API server root CA.

ca_k8s_apiserver_csr_cn: Kubernetes
ca_k8s_apiserver_csr_key_algo: rsa
ca_k8s_apiserver_csr_key_size: 2048
ca_k8s_apiserver_csr_names_c: DE
ca_k8s_apiserver_csr_names_l: The_Internet
ca_k8s_apiserver_csr_names_o: Kubernetes
ca_k8s_apiserver_csr_names_ou: BY
ca_k8s_apiserver_csr_names_st: Bayern

These variables are used to create the CSR of the CA which we use to sign certificates for the Kubernetes API server.

etcd_csr_cn: Etcd
etcd_csr_key_algo: rsa
etcd_csr_key_size: 2048
etcd_csr_names_c: DE
etcd_csr_names_l: The_Internet
etcd_csr_names_o: Kubernetes
etcd_csr_names_ou: BY
etcd_csr_names_st: Bayern

These variables are used to create the CSR for the certificate that secures the etcd communication.

k8s_apiserver_csr_cn: Kubernetes
k8s_apiserver_csr_key_algo: rsa
k8s_apiserver_csr_key_size: 2048
k8s_apiserver_csr_names_c: DE
k8s_apiserver_csr_names_l: The_Internet
k8s_apiserver_csr_names_o: Kubernetes
k8s_apiserver_csr_names_ou: BY
k8s_apiserver_csr_names_st: Bayern

These variables are used to create the CSR for the certificate that secures the Kubernetes API server communication.

k8s_admin_csr_cn: admin
k8s_admin_csr_key_algo: rsa
k8s_admin_csr_key_size: 2048
k8s_admin_csr_names_c: DE
k8s_admin_csr_names_l: The_Internet
k8s_admin_csr_names_o: system:masters # DO NOT CHANGE!
k8s_admin_csr_names_ou: BY
k8s_admin_csr_names_st: Bayern

These variables are used to create the CSR for the certificate we need to authenticate the admin user, which we’ll use later with the kubectl utility.

k8s_worker_csr_key_algo: rsa
k8s_worker_csr_key_size: 2048
k8s_worker_csr_names_c: DE
k8s_worker_csr_names_l: The_Internet
k8s_worker_csr_names_o: system:nodes # DO NOT CHANGE!
k8s_worker_csr_names_ou: BY
k8s_worker_csr_names_st: Bayern

The kubelet process (a.k.a. Kubernetes worker) also needs to authenticate itself against the API server. These variables are used to create the CSR file which in turn is used to create the kubelet certificate.

k8s_kube_proxy_csr_cn: system:kube-proxy # DO NOT CHANGE!
k8s_kube_proxy_csr_key_algo: rsa
k8s_kube_proxy_csr_key_size: 2048
k8s_kube_proxy_csr_names_c: DE
k8s_kube_proxy_csr_names_l: The_Internet
k8s_kube_proxy_csr_names_o: system:node-proxier # DO NOT CHANGE!
k8s_kube_proxy_csr_names_ou: BY
k8s_kube_proxy_csr_names_st: Bayern

And finally the kube-proxy must also authenticate itself against the API server. As above, these variables will be used to create the CSR file, which in turn is used to create the kube-proxy certificate.

etcd_cert_hosts:
  - 127.0.0.1
  - etcd0
  - etcd1
  - etcd2

Here you add all etcd hosts. The task https://github.com/githubixx/ansible-role-kubernetes-ca/blob/1a412e0322d633cb31af69359ad919f9c03ae4b3/tasks/main.yml#L26-L38 will automatically add the hostname, the fully qualified domain name (FQDN), the internal IP address and the PeerVPN IP address of your etcd hosts to a list which is needed to create the etcd certificate. etcd_cert_hosts allows you to specify additional hostnames/IPs. In general I recommend always adding 127.0.0.1 and/or localhost. If you plan to expand your etcd cluster from 3 to 5 hosts later and already know the hostname, FQDN, internal IP address and PeerVPN IP address of those hosts, add them here too. This will save you work later.
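For example, if you run three etcd hosts today and already know the names of two more you might add later, the variable could look like this (hostnames are placeholders, use your own):

```yaml
etcd_cert_hosts:
  - 127.0.0.1
  - localhost
  - etcd0
  - etcd1
  - etcd2
  - etcd3   # planned future member
  - etcd4   # planned future member
```

Listing the future members now means the certificate won’t have to be regenerated and redistributed when the cluster grows.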

k8s_apiserver_cert_hosts:
  - 127.0.0.1
  - 10.32.0.1
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local

As with the etcd hosts above, a task will automatically add the hostname, the FQDN, the internal IP address and the PeerVPN IP address of your Kubernetes API hosts to a list which is needed to create the API server certificate. Additionally I recommend adding 10.32.0.1 (the first IP address of the service cluster IP range; it is the service address of kubernetes.default.svc.cluster.local and will be resolved by kube-dns for all pods that want to access the API) and 127.0.0.1 here. If you already know that you will add more API server hosts later, add them here in advance to save yourself work later.

Now we can generate the CSRs and the certificates via

ansible-playbook --tags=role-kubernetes-ca k8s.yml

We only run our kubernetes-ca role, which we tagged as role-kubernetes-ca. After the role has finished you will find quite a few files in k8s_ca_conf_directory. The file names should give you a good hint about the content of each file and what it is used for.

Next we need to generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API servers (also see Generating Kubernetes Configuration Files for Authentication). To create the files I created a few playbooks. Switch to a directory where you want to store the playbooks (e.g. on the same directory level as the Ansible roles directory) and get them via git clone https://github.com/githubixx/ansible-kubernetes-misc. Switch to the directory kubeauthconfig and run the playbooks with

ansible-playbook kubelets.yml

and

ansible-playbook kube-proxy.yml

Finally we need to generate the data encryption config and key. Kubernetes stores a variety of data, including cluster state, application configurations, and secrets, and supports encrypting cluster data at rest. This playbook will generate an encryption config containing the encryption key. For this to work Ansible needs your encryption key, which we’ll put into group_vars/all (host_vars/localhost is fine too), e.g.:

k8s_encryption_config_directory: "~/k8s/config_encryption"
k8s_encryption_config_key: "YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk=" # CHANGE THIS VALUE!
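Don’t reuse the example value above. A fresh key can be generated with the same command “Kubernetes The Hard Way” uses, a random 32-byte value, base64 encoded:

```shell
# Generate a random 32-byte encryption key, base64 encoded.
# Paste the output into k8s_encryption_config_key.
head -c 32 /dev/urandom | base64
```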

Now switch to directory kubeencryptionconfig and execute the playbook:

ansible-playbook kubeencryptionconfig.yml
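For reference, the generated file follows the upstream EncryptionConfig format of Kubernetes 1.8. With the variables above it should look roughly like this (a sketch; the actual template lives in the playbook):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk=  # your k8s_encryption_config_key
      - identity: {}
```

The aescbc provider listed first means new secrets are written encrypted; the identity provider as fallback lets the API server still read any data that was stored unencrypted before.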

The last two steps generated a few new files in k8s_config_directory (or whatever value you specified for this variable).

That’s it for part 4. In the next chapter we’ll install the etcd cluster, using the etcd CA and certificates we generated in this part.