Kubernetes the not so hard way with Ansible - Control plane - (K8s v1.23)


  • update k8s_release to 1.23.3
  • add parameters authentication-kubeconfig, authorization-kubeconfig and requestheader-client-ca-file to k8s_scheduler_settings (see K8s Deprecations 1.23)
  • remove healthzBindAddress and metricsBindAddress from kube-scheduler.yaml.j2 (deprecated)


  • update k8s_release to 1.22.5
  • add parameters authentication-kubeconfig, authorization-kubeconfig and requestheader-client-ca-file to k8s_controller_manager_settings (see K8s Deprecations 1.22)
  • removed kubelet-https: "true" from k8s_apiserver_settings as it is no longer supported by kube-apiserver (see: Mark --kubelet-https deprecated)


  • update k8s_release to 1.21.8


  • update k8s_release to 1.21.4


  • update k8s_release to 1.20.8
  • --service-account-issuer, --service-account-key-file and --service-account-signing-key-file are now required kube-apiserver flags (see Kubernetes upgrade notes: 1.19.x to 1.20.x)



  • update k8s_release to 1.18.6
  • renamed cert-etcd.pem/cert-etcd-key.pem to cert-k8s-apiserver-etcd.pem/cert-k8s-apiserver-etcd-key.pem. This was also adjusted in the etcd_certificates list. The changed name makes it more obvious that this is a client certificate for kube-apiserver used to connect to a TLS secured etcd cluster. In fact, kube-apiserver is just an etcd client like any other. In my ansible-role-kubernetes-ca this was also changed accordingly (see the etcd_additional_clients list). ansible-role-kubernetes-ca is now able to generate client certificates for other services like Traefik or Cilium which are often used in a Kubernetes cluster. So the etcd cluster that already exists for Kubernetes (especially for kube-apiserver) can be reused for other components.
  • replaced "cluster-signing-cert-file": "{{k8s_conf_dir}}/ca-k8s-apiserver.pem" with "cluster-signing-cert-file": "{{k8s_conf_dir}}/cert-k8s-apiserver.pem" in k8s_controller_manager_settings
  • removed deprecated port setting in k8s_controller_manager_settings which was replaced by secure-port setting (default value 10257)
  • removed k8s_apiserver_secure_port as it makes no sense. The value 6443 can be set in k8s_apiserver_settings ("secure-port": "6443") as it is not used elsewhere
  • kubescheduler.config.k8s.io/v1alpha1 changed to kubescheduler.config.k8s.io/v1alpha2 in kube-scheduler.yaml.j2 (see: CHANGELOG-1.18.md)
  • added "allocate-node-cidrs": "true" to k8s_controller_manager_settings because otherwise cluster-cidr won’t be used


  • update k8s_release to 1.17.4
  • rbac.authorization.k8s.io/v1beta1 changed to rbac.authorization.k8s.io/v1
  • update runtime-config to api/all=true (now requires a boolean value)


  • update k8s_release to 1.16.3
  • remove deprecated enable-swagger-ui option from kube-apiserver


  • update k8s_release to 1.15.3


  • update k8s_release to 1.14.2
  • add all admissions plugins to enable-admission-plugins option that are enabled by default in K8s 1.14
  • remove Initializers admission plugin (no longer available in 1.14)


  • update k8s_release to 1.13.2
  • kube-apiserver: --experimental-encryption-provider-config flag is deprecated and replaced in favor of --encryption-provider-config
  • kube-apiserver: the configuration file referenced by --encryption-provider-config now uses kind: EncryptionConfiguration and apiVersion: apiserver.config.k8s.io/v1. Support for kind: EncryptionConfig and apiVersion: v1 is deprecated and will be removed in a future release. See kubeencryptionconfig and Kubernetes the not so hard way with Ansible - Certificate authority (search for kubeencryptionconfig.yml). To avoid deprecation warnings it makes sense to create a new encryption-config.yaml before running this role to update kube-apiserver.
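For orientation, a minimal sketch of the new file format (the provider choice and the secret shown here are placeholders; the real file is generated by the kubeencryptionconfig.yml playbook mentioned above):

```yaml
# Assumption: "aescbc" is used as the encryption provider. The secret below
# is a placeholder for a base64-encoded, 32-byte random key - do not use a
# literal value like this in a real cluster.
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      # "identity" as fallback allows reading secrets that are still unencrypted
      - identity: {}
```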


  • update k8s_release to 1.12.3 for Kubernetes v1.12.3
  • kube-apiserver: added Priority admission plugin
  • kube-scheduler: deprecated group version changed from componentconfig/v1alpha1 to kubescheduler.config.k8s.io/v1alpha1
  • kube-controller-manager: replace deprecated --address setting with --bind-address


  • update k8s_release to 1.11.3 for Kubernetes v1.11.3

This post is based on Kelsey Hightower’s Bootstrapping the Kubernetes Control Plane.

This time we install a three-node Kubernetes controller cluster (that's the Kubernetes API server, scheduler and controller manager). All these components will run on every node. In Kubernetes certificate authority we installed our PKI (public key infrastructure) in order to secure communication between our Kubernetes components/infrastructure. As with the etcd cluster we use that certificate authority and the generated certificates, but for the Kubernetes API server we generated a separate CA and certificate. If you used the default values in the other playbooks so far, you most likely don't need to change any of the default variable settings, which are:

# The directory to store the K8s certificates and other configuration
k8s_conf_dir: "/var/lib/kubernetes"

# The directory to store the K8s binaries
k8s_bin_dir: "/usr/local/bin"

# K8s release
k8s_release: "1.23.3"

# The interface on which the K8s services should listen on. As all cluster
# communication should use a VPN interface the interface name is
# normally "wg0" (WireGuard),"peervpn0" (PeerVPN) or "tap0".
k8s_interface: "wg0"

# The directory from where to copy the K8s certificates. By default this
# will expand to user's LOCAL $HOME (the user that runs "ansible-playbook ..."
# plus "/k8s/certs". That means if the user's $HOME directory is e.g.
# "/home/da_user" then "k8s_ca_conf_directory" will have a value of
# "/home/da_user/k8s/certs".
k8s_ca_conf_directory: "{{ '~/k8s/certs' | expanduser }}"

# Directory where kubeconfig for Kubernetes worker nodes and kube-proxy
# is stored among other configuration files. Same variable expansion
# rule applies as with "k8s_ca_conf_directory"
k8s_config_directory: "{{ '~/k8s/configs' | expanduser }}"

# K8s control plane binaries to download
k8s_controller_binaries:
  - kube-apiserver
  - kube-controller-manager
  - kube-scheduler
  - kubectl

# K8s kube-(apiserver|controller-manager-sa) certificates
k8s_certificates:
  - ca-k8s-apiserver.pem
  - ca-k8s-apiserver-key.pem
  - cert-k8s-apiserver.pem
  - cert-k8s-apiserver-key.pem
  - cert-k8s-controller-manager-sa.pem
  - cert-k8s-controller-manager-sa-key.pem

# K8s API daemon settings (can be overridden, or additional settings added,
# by defining "k8s_apiserver_settings_user")
k8s_apiserver_settings:
  "advertise-address": "{{hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address}}"
  "bind-address": "{{hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address}}"
  "secure-port": "6443"
  "enable-admission-plugins": "NodeRestriction,NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,PersistentVolumeClaimResize,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
  "allow-privileged": "true"
  "apiserver-count": "3"
  "authorization-mode": "Node,RBAC"
  "audit-log-maxage": "30"
  "audit-log-maxbackup": "3"
  "audit-log-maxsize": "100"
  "audit-log-path": "/var/log/audit.log"
  "event-ttl": "1h"
  "kubelet-preferred-address-types": "InternalIP,Hostname,ExternalIP"  # "--kubelet-preferred-address-types" defaults to:
                                                                       # "Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP"
                                                                       # Needs to be changed to make "kubectl logs" and "kubectl exec" work.
  "runtime-config": "api/all=true"
  "service-cluster-ip-range": ""
  "service-node-port-range": "30000-32767"
  "client-ca-file": "{{k8s_conf_dir}}/ca-k8s-apiserver.pem"
  "etcd-cafile": "{{k8s_conf_dir}}/ca-etcd.pem"
  "etcd-certfile": "{{k8s_conf_dir}}/cert-k8s-apiserver-etcd.pem"
  "etcd-keyfile": "{{k8s_conf_dir}}/cert-k8s-apiserver-etcd-key.pem"
  "encryption-provider-config": "{{k8s_conf_dir}}/encryption-config.yaml"
  "kubelet-certificate-authority": "{{k8s_conf_dir}}/ca-k8s-apiserver.pem"
  "kubelet-client-certificate": "{{k8s_conf_dir}}/cert-k8s-apiserver.pem"
  "kubelet-client-key": "{{k8s_conf_dir}}/cert-k8s-apiserver-key.pem"
  "service-account-key-file": "{{k8s_conf_dir}}/cert-k8s-controller-manager-sa.pem"
  "service-account-signing-key-file": "{{k8s_conf_dir}}/cert-k8s-controller-manager-sa-key.pem"
  "service-account-issuer": "https://{{ groups.k8s_controller|first }}:6443"
  "tls-cert-file": "{{k8s_conf_dir}}/cert-k8s-apiserver.pem"
  "tls-private-key-file": "{{k8s_conf_dir}}/cert-k8s-apiserver-key.pem"

# The directory to store controller manager configuration.
k8s_controller_manager_conf_dir: "/var/lib/kube-controller-manager"

# K8s controller manager settings (can be overridden, or additional settings
# added, by defining "k8s_controller_manager_settings_user")
k8s_controller_manager_settings:
  "bind-address": "{{hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address}}"
  "secure-port": "10257"
  "cluster-cidr": ""
  "allocate-node-cidrs": "true"
  "cluster-name": "kubernetes"
  "authentication-kubeconfig": "{{k8s_controller_manager_conf_dir}}/kube-controller-manager.kubeconfig"
  "authorization-kubeconfig": "{{k8s_controller_manager_conf_dir}}/kube-controller-manager.kubeconfig"
  "kubeconfig": "{{k8s_controller_manager_conf_dir}}/kube-controller-manager.kubeconfig"
  "leader-elect": "true"
  "service-cluster-ip-range": ""
  "cluster-signing-cert-file": "{{k8s_conf_dir}}/cert-k8s-apiserver.pem"
  "cluster-signing-key-file": "{{k8s_conf_dir}}/cert-k8s-apiserver-key.pem"
  "root-ca-file": "{{k8s_conf_dir}}/ca-k8s-apiserver.pem"
  "requestheader-client-ca-file": "{{k8s_conf_dir}}/ca-k8s-apiserver.pem"
  "service-account-private-key-file": "{{k8s_conf_dir}}/cert-k8s-controller-manager-sa-key.pem"
  "use-service-account-credentials": "true"

# The directory to store scheduler configuration.
k8s_scheduler_conf_dir: "/var/lib/kube-scheduler"

# kube-scheduler settings
k8s_scheduler_settings:
  "bind-address": "{{hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address}}"
  "config": "{{k8s_scheduler_conf_dir}}/kube-scheduler.yaml"
  "authentication-kubeconfig": "{{k8s_scheduler_conf_dir}}/kube-scheduler.kubeconfig"
  "authorization-kubeconfig": "{{k8s_scheduler_conf_dir}}/kube-scheduler.kubeconfig"
  "requestheader-client-ca-file": "{{k8s_conf_dir}}/ca-k8s-apiserver.pem"

# The port the control plane components use to connect to the etcd cluster
etcd_client_port: "2379"

# The interface the etcd cluster is listening on
etcd_interface: "tap0"

# The etcd certificates needed for the control plane components to be able
# to connect to the etcd cluster.
etcd_certificates:
  - ca-etcd.pem
  - ca-etcd-key.pem
  - cert-k8s-apiserver-etcd.pem
  - cert-k8s-apiserver-etcd-key.pem

The kube-apiserver settings defined in k8s_apiserver_settings can be overridden by defining a variable called k8s_apiserver_settings_user. You can also add additional settings for the kube-apiserver daemon by using this variable. E.g. to override the audit-log-maxage and audit-log-maxbackup default values and add the watch-cache option, add the following settings to group_vars/all.yml (or wherever it fits best for you):

k8s_apiserver_settings_user:
  "audit-log-maxage": "40"
  "audit-log-maxbackup": "4"
  "watch-cache": "false"

The same is true for the kube-controller-manager: add entries to the k8s_controller_manager_settings_user variable. For kube-scheduler add entries to the k8s_scheduler_settings_user variable to override settings in the k8s_scheduler_settings dictionary or to add new ones.
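For example, a hypothetical override for the controller manager could look like this (the flags exist in kube-controller-manager, but the values here are purely illustrative, not recommendations):

```yaml
k8s_controller_manager_settings_user:
  "node-monitor-grace-period": "30s"
  "terminated-pod-gc-threshold": "1000"
```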

As you can see, we install Kubernetes 1.23.x by default. The role will search for the certificates we created in the certificate authority post in the directory you specified in k8s_ca_conf_directory on the host where you run Ansible.

The encryption config file will also be used; this role expects to find it in k8s_encryption_config_directory (which is the same as k8s_config_directory in my case). The CA and certificate files used here are listed in k8s_certificates.

The binaries listed in k8s_controller_binaries will be downloaded and stored into the directory you specify in k8s_bin_dir. If you followed my guide so far the interface for the VPN is again wg0 for k8s_interface.

If you ask yourself "why do we need to specify etcd_certificates here again?": Well, the Kubernetes API server needs to communicate with the Kubernetes components AND the etcd cluster, as you may remember. That's the reason why it must be aware of both CAs and certificates. But since we store all group variables in group_vars/all.yml it's of course sufficient to specify each variable only once there, even if you see the same variable in different roles. Just make sure you've set the variables to valid values.

Now add an entry for your controller hosts into Ansible's hosts file, e.g. (of course you need to change controller0[1:3].i.domain.tld to your own hostnames):

[k8s_controller]
controller01.i.domain.tld
controller02.i.domain.tld
controller03.i.domain.tld

Install the role via

ansible-galaxy install githubixx.kubernetes-controller

Next add the role ansible-role-kubernetes-controller to the k8s.yml playbook file e.g.:

- hosts: k8s_controller
  roles:
    - role: githubixx.kubernetes-controller
      tags: role-kubernetes-controller

Apply the role via

ansible-playbook --tags=role-kubernetes-controller k8s.yml

After the role is applied you can basically check the status of the components with:

kubectl get componentstatuses

BUT first we need to configure kubectl ;-) We already installed kubectl locally in the harden the instances part of my tutorial. I've prepared a playbook to do the kubectl configuration. You should already have cloned my ansible-kubernetes-playbooks repository. I recommend placing it at the same directory level as Ansible's roles directory (git clone https://github.com/githubixx/ansible-kubernetes-playbooks). Switch to the ansible-kubernetes-playbooks/kubectlconfig directory.

There is now one thing you may need to change in kubectlconfig.yml. A somewhat complicated-looking line there gets the first hostname in our [k8s_controller] host group and uses the IP address of that host's VPN interface as the API server address for kubectl (kubectl is basically the frontend utility for the API server). My laptop has WireGuard installed and is part of this fully meshed Kubernetes WireGuard VPN. This allows kubectl on my laptop to contact the API server.
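To illustrate, a sketch of what such a line could look like, assuming the same hostvars lookup used in the k8s_apiserver_settings dictionary above (the variable name and the actual content of kubectlconfig.yml may differ):

```yaml
# Hypothetical: resolve the VPN IP of the first host in the "k8s_controller"
# group and use it as kubectl's API server address.
k8s_apiserver: "https://{{ hostvars[groups.k8s_controller|first]['ansible_' + k8s_interface].ipv4.address }}:6443"
```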

But that may not work for you if your workstation is not part of the WireGuard VPN. Either do the same, or set up SSH port forwarding to one of the controller nodes' VPN interface (port 6443 by default) and then use --server=https://localhost:6443, or do something completely different ;-) You could also copy the $HOME/.kube directory (once the configs have been generated in a moment) to one of the Kubernetes hosts and work from there.

Now generate the kubectl configuration (again, you might need to set the ANSIBLE_CONFIG variable to the path where ansible.cfg is located) with

ansible-playbook kubectlconfig.yml

If you have your Ansible variables all in place as I suggested in my previous posts it should just work. The playbook will configure kubectl using the admin certificates we created with the Ansible role role-kubernetes-ca.

If you now run kubectl cluster-info you should see output similar to this:

kubectl cluster-info

Kubernetes control plane is running at
CoreDNS is running at

To further debug and diagnose cluster problems, use ‘kubectl cluster-info dump’.

Now it’s time to setup the Kubernetes worker.