Kubernetes the not so hard way with Ansible - Certificate authority (CA) - (K8s v1.27)

2023-09-10

  • rename Ansible Galaxy role_name from kubernetes-ca to kubernetes_ca
  • fix outdated URLs

2021-09-12

  • updated links

2020-08-06

  • adjust blog post to match the latest version of kubernetes-ca role

2020-02-25

  • update kubectl variables
  • update kubernetes-ca role variables

2019-11-04

  • update cfssl utilities to v1.4.0
  • update cfssl Ansible role variables
  • update kubectl_* variables

2019-01-31

  • update kubectl variables
  • update cfssl variables
  • update links to Kelsey’s docs

This post is based on Kelsey Hightower’s Kubernetes The Hard Way - Installing the Client Tools and Kubernetes The Hard Way - Provisioning a CA and Generating TLS Certificates.

Now that we’ve done some preparation for our Kubernetes cluster

we need a PKI (public key infrastructure) to secure the communication between the Kubernetes components.

We’ll use CloudFlare’s CFSSL PKI toolkit to bootstrap certificate authorities and generate TLS certificates. ansible-role-cfssl will generate a few files for that purpose. You can generate the files on any host you want, but I’ll use a directory on localhost (my workstation that runs Ansible) because other roles need to copy a few of the generated files to the Kubernetes hosts later. So it makes sense to keep the files at a place Ansible has access to (but of course you can also use a network share or something like that).

First we install the most important Kubernetes utility called kubectl. We’ll configure it later; at the moment we just install it. I’ve created an Ansible role to install kubectl locally. Add the following content to Ansible’s hosts file:

[k8s_kubectl]
workstation

workstation is the hostname of my local workstation/laptop. Of course you need to change it. You may also need to add an entry to /etc/hosts like 127.0.0.1 localhost workstation to make name resolution work. An Ansible host_vars file could look like this (host_vars/workstation):

---
wireguard_address: "10.8.0.2/24"
wireguard_endpoint: ""
ansible_connection: local
ansible_become_user: root
ansible_become: true
ansible_become_method: sudo

As already mentioned in the previous part, my workstation is part of the fully meshed WireGuard network that connects every Kubernetes node to all the other nodes. So I can access the Kubernetes API server via VPN and don’t need SSH port forwarding or similar tricks to make kubectl work.

Then install the role with

ansible-galaxy install githubixx.kubectl

The role has a few variables you can change if you like (just add the variables and values you want to change to group_vars/all.yml or wherever it fits best for you). To get an overview see the kubectl role homepage on GitHub.
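
For example, to pin the kubectl version or change the installation directory, something like this in group_vars/all.yml might be all you need (variable names and values shown here are illustrative - please check the role’s README for the authoritative ones):

# kubectl version to install (illustrative value):
kubectl_version: "1.27.5"
# Where to install the kubectl binary (illustrative value):
kubectl_bin_directory: "/usr/local/bin"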

To install the kubectl binary simply run

ansible-playbook --tags=role-kubectl k8s.yml
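
Afterwards a quick check that the binary was installed and works:

kubectl version --client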

Next we add an additional entry to the Ansible hosts file:

[k8s_ca]
workstation

k8s_ca (short for Kubernetes certificate authority) is an Ansible host group (in this case the group contains only one host). As you can see my workstation will also store all the certificate authority files.

Now we install the cfssl role via

ansible-galaxy install githubixx.cfssl

Add

- hosts: k8s_ca
  roles:
    -
      role: githubixx.cfssl
      tags: role-cfssl

to your k8s.yml file. This adds the role githubixx.cfssl to the hosts group k8s_ca (which contains only one host in our case as already mentioned). Have a look at the defaults/main.yml file of that role for all variables you can change.
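
For example, to pin the cfssl version or change the installation directory you could add something like this to group_vars/all.yml (variable names and values are illustrative - check the role’s defaults/main.yml for the authoritative ones):

# cfssl release to install (illustrative value):
cfssl_version: "1.6.4"
# Where to put the cfssl/cfssljson binaries (illustrative value):
cfssl_bin_directory: "/usr/local/bin"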

Now we can install the cfssl binaries locally via

ansible-playbook --tags=role-cfssl k8s.yml
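
If everything worked, the cfssl and cfssljson binaries should now be available on the CA host (the installation directory is defined by the role, typically something like /usr/local/bin). A quick check:

cfssl version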

Next we can generate the certificate authorities (CAs) for etcd and the Kubernetes API server plus the certificates needed to secure the communication between the components. DigitalOcean provides a good diagram of the Kubernetes operations flow:

Kubernetes operations flow (from Using Vault as a Certificate Authority for Kubernetes):
https://assets.digitalocean.com/blog/static/vault-and-kubernetes/communication_paths.png

Have a look at the diagram to get a better understanding of the K8s communication workflow.

As always I’ve prepared an Ansible role to generate the CAs and certificates. Install the role via

ansible-galaxy install githubixx.kubernetes_ca

Add the role to k8s.yml:

- hosts: k8s_ca
  roles:
    -
      role: githubixx.kubernetes_ca
      tags: role-kubernetes-ca

As with the CFSSL role this role will also be applied to the Ansible k8s_ca host (which is the workstation as you may remember from above).

This role has quite a few variables, but that’s mainly information needed for the certificates. In contrast to Kelsey Hightower’s guide Provisioning a CA and Generating TLS Certificates we create separate certificate authorities for etcd and the Kubernetes API server. Since only the Kubernetes API server talks to etcd directly, it makes sense not to use the same CA to sign certificates for both. This adds an additional layer of security. All variables are documented at the kubernetes_ca role homepage on GitHub, so I won’t repeat them here.
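
The most important variable is probably k8s_ca_conf_directory, which defines where all the generated CA and certificate files end up (that’s the directory referenced further below). A minimal sketch for group_vars/all.yml (the expiry settings are only illustrative - please check the role’s README for the exact variable names and defaults):

# Directory on the Ansible controller where all CA/certificate files
# will be stored (illustrative value):
k8s_ca_conf_directory: "{{ '~/k8s/certs' | expanduser }}"
# Illustrative certificate expiry settings - variable names may differ:
ca_etcd_expiry: "87600h"
ca_k8s_apiserver_expiry: "87600h"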

The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used to generate client authentication configuration files, which we’ll do later. Only the resulting .kubeconfig files will then be copied to the nodes.

The list of generated files further below also contains certificate files for Cilium and Traefik, which I’ll install later. The certificates for Cilium and Traefik are generated because I set this Ansible variable (it’s documented in the role’s README):

etcd_additional_clients:
  - traefik
  - cilium
  - k8s-apiserver-etcd

I’ll install Cilium (needed for Kubernetes networking) and Traefik (needed for Ingress, which allows external users to access services running in the Kubernetes cluster) later. Like Kubernetes (the API server) these two services also need an etcd key/value store to keep their state. So instead of installing a separate etcd for each of these services, they’ll use the etcd that gets installed for Kubernetes (or kube-apiserver to be more specific) anyway. To secure the communication to etcd some certificates are needed.

To accomplish this the role has a variable called etcd_additional_clients as already mentioned above. By default only k8s-apiserver-etcd is specified in the list, which creates the etcd client certificate for the Kubernetes API server. If traefik and cilium are added, additional files will be generated for these two services, e.g.:

etcd_additional_clients:
  - k8s-apiserver-etcd
  - traefik
  - cilium

Once you’re done setting all the variables, the CSRs and certificates can be generated via

ansible-playbook --tags=role-kubernetes-ca k8s.yml

This only runs the Ansible kubernetes_ca role, which was tagged as role-kubernetes-ca. After running the role there will be quite a few files in k8s_ca_conf_directory. The filenames should give a good hint what the content of a file is and what it is used for (also see the defaults/main.yml file of the role for more information). Here is an overview of the files you should at least get:

ca-etcd-config.json
ca-etcd.csr
ca-etcd-csr.json
ca-etcd-key.pem
ca-etcd.pem
ca-k8s-apiserver-config.json
ca-k8s-apiserver.csr
ca-k8s-apiserver-csr.json
ca-k8s-apiserver-key.pem
ca-k8s-apiserver.pem
cert-admin.csr
cert-admin-csr.json
cert-admin-key.pem
cert-admin.pem
cert-cilium.csr
cert-cilium-csr.json
cert-cilium-key.pem
cert-cilium.pem
cert-etcd-peer.csr
cert-etcd-peer-csr.json
cert-etcd-peer-key.pem
cert-etcd-peer.pem
cert-etcd-server.csr
cert-etcd-server-csr.json
cert-etcd-server-key.pem
cert-etcd-server.pem
cert-k8s-apiserver.csr
cert-k8s-apiserver-csr.json
cert-k8s-apiserver-etcd.csr
cert-k8s-apiserver-etcd-csr.json
cert-k8s-apiserver-etcd-key.pem
cert-k8s-apiserver-etcd.pem
cert-k8s-apiserver-key.pem
cert-k8s-apiserver.pem
cert-k8s-controller-manager.csr
cert-k8s-controller-manager-csr.json
cert-k8s-controller-manager-key.pem
cert-k8s-controller-manager.pem
cert-k8s-controller-manager-sa.csr
cert-k8s-controller-manager-sa-csr.json
cert-k8s-controller-manager-sa-key.pem
cert-k8s-controller-manager-sa.pem
cert-k8s-proxy.csr
cert-k8s-proxy-csr.json
cert-k8s-proxy-key.pem
cert-k8s-proxy.pem
cert-k8s-scheduler.csr
cert-k8s-scheduler-csr.json
cert-k8s-scheduler-key.pem
cert-k8s-scheduler.pem
cert-traefik.csr
cert-traefik-csr.json
cert-traefik-key.pem
cert-traefik.pem
cert-worker01.i.domain.tld.csr
cert-worker01.i.domain.tld-csr.json
cert-worker01.i.domain.tld-key.pem
cert-worker01.i.domain.tld.pem
cert-worker02.i.domain.tld.csr
cert-worker02.i.domain.tld-csr.json
cert-worker02.i.domain.tld-key.pem
cert-worker02.i.domain.tld.pem
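
If you want to inspect one of the generated certificates, e.g. to double check the subject, the SANs or the expiry date, openssl can decode it (this is just a generic check and assumes openssl is installed on your workstation):

openssl x509 -in cert-k8s-apiserver.pem -text -noout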

Next we need to generate a few Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API server (see also Generating Kubernetes Configuration Files for Authentication). To create the files I’ve prepared a few playbooks. Switch to a directory where you want to save the playbooks (e.g. on the same directory level as the Ansible roles directory) and get them via

git clone https://github.com/githubixx/ansible-kubernetes-playbooks

Make sure to set k8s_apiserver_secure_port: "6443" (or whatever port you’ve chosen for the kube-apiserver secure port) in group_vars/all.yml.

Also set a few more variables (e.g. in group_vars/all.yml) and adjust the sample values below:

k8s_config_cluster_name: "your-clustername"
k8s_config_directory: "{{ '~/k8s/configs' | expanduser }}"
k8s_config_directory_perm: "0770"
k8s_config_file_perm: "0660"
k8s_config_owner: "the-owner"
k8s_config_group: "the-group"

Switch to the kubeauthconfig directory and run the playbooks (you may need to set ANSIBLE_CONFIG to point to the directory where your ansible.cfg is located):

ansible-playbook kubelets.yml
ansible-playbook kube-proxy.yml
ansible-playbook kube-controller-manager.yml
ansible-playbook kube-scheduler.yml
ansible-playbook kube-admin-user.yml
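
If Ansible doesn’t pick up your inventory or settings while running these playbooks, point ANSIBLE_CONFIG at your configuration file first (the path below is just an example):

export ANSIBLE_CONFIG=/path/to/your/ansible.cfg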

Finally we need to generate the data encryption configuration and key. Kubernetes stores a variety of data, including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest. This playbook will generate an encryption configuration containing the encryption key. For this to work Ansible needs your encryption key, which we’ll put into group_vars/all.yml (as this is a secret it of course makes sense to think about managing it with ansible-vault), e.g.:

# Same as "k8s_config_directory" in this case but could be different.
k8s_encryption_config_directory: "{{k8s_config_directory}}"
# CHANGE THIS VALUE!
k8s_encryption_config_key: "YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk="
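
If you need to create your own key: a suitable 32 byte, base64 encoded value like the one above can be generated e.g. with

head -c 32 /dev/urandom | base64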

Now switch to the kubeencryptionconfig directory and execute the playbook:

ansible-playbook kubeencryptionconfig.yml
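
The generated encryption configuration basically follows the standard Kubernetes EncryptionConfiguration format. Just as a sketch of what to expect (the exact template the playbook uses may differ slightly):

kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              # the value of "k8s_encryption_config_key" ends up here
              secret: "YLXdi1xnNLOM4+IUd5aeNO6ps6JaKYTCDaMYJRrD+gk="
      - identity: {}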

The last two steps generated a few new files in {{k8s_config_directory}} and {{k8s_encryption_config_directory}}.

That’s it for now. In the next chapter we’ll install the etcd cluster and we’ll use the first CA and certificates that we generated in this part.