- updated etcd to 3.2.8
- introduced `k8s_ca_conf_directory` and changed its default location
- smaller changes needed for Kubernetes v1.8
This post is based on Kelsey Hightower’s Kubernetes The Hard Way - Bootstrapping the etcd cluster.
In part 4 we installed our PKI (public key infrastructure) in order to secure communication between our Kubernetes components/infrastructure. Now we use the certificate authorities (CA) and the generated keys for the first and very important component: the etcd cluster. etcd is basically a distributed key/value database. The Kubernetes components themselves are stateless; all state is stored in etcd, so you should take good care of your etcd cluster in production. If you lose all etcd nodes you lose the whole Kubernetes state…
I want to mention that if your etcd nodes won't join, a possible reason could be the certificate. If it isn't your firewall blocking traffic between your etcd nodes, the certificate's host list could be the problem. The error message isn't clear about this issue.
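To rule that out you can inspect which hosts are actually baked into the etcd certificate with `openssl`. The path below assumes the default `k8s_ca_conf_directory` of `/etc/k8s/certs` and the certificate file name from `etcd_certificates`; adjust it to your setup:

```shell
# Print the Subject Alternative Names of the etcd certificate.
# Every etcd node's IP address/hostname must be listed here,
# otherwise the nodes will refuse each other's TLS connections.
openssl x509 -in /etc/k8s/certs/cert-etcd.pem -noout -text \
  | grep -A1 "Subject Alternative Name"
```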
As usual we add the role `ansible-role-etcd` to the `k8s.yml` file, e.g.:
```yaml
- hosts: k8s_etcd
  roles:
    - role: githubixx.etcd
      tags: role-etcd
```
Next install the role via

```shell
ansible-galaxy install githubixx.etcd
```

(or just clone the Github repo, whatever you like). Basically you don't need to change a lot of variables. These are the defaults:
```yaml
k8s_ca_conf_directory: /etc/k8s/certs
etcd_version: 3.2.8
etcd_client_port: 2379
etcd_peer_port: 2380
etcd_interface: tap0
etcd_initial_cluster_token: etcd-cluster-0
etcd_initial_cluster_state: new
etcd_name: etcd_kubernetes
etcd_conf_dir: /etc/etcd
etcd_download_dir: /opt/etcd
etcd_bin_dir: /usr/local/bin
etcd_data_dir: /var/lib/etcd
etcd_certificates:
  - ca-etcd.pem
  - ca-etcd-key.pem
  - cert-etcd.pem
  - cert-etcd-key.pem
```
The playbook will search for the certificates we created in part 4 in the directory you specify in `k8s_ca_conf_directory` on the host you run Ansible on. The files used here are listed in `etcd_certificates`. If you used a different name for the PeerVPN interface we created in part 3, you want to change `etcd_interface`.
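If you do need to override defaults, the usual Ansible variable precedence applies. A minimal sketch of a hypothetical `group_vars/k8s_etcd.yml` (the interface name and directory are example values, not from this setup):

```yaml
# group_vars/k8s_etcd.yml -- hypothetical example values
etcd_interface: "wg0"                            # if your VPN interface isn't tap0
k8s_ca_conf_directory: "/home/deploy/k8s/certs"  # where the part 4 certificates live on the Ansible host
```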
We run the playbook via

```shell
ansible-playbook --tags=role-etcd k8s.yml
```
This will only install the etcd cluster. Have a look at the logs of your etcd hosts to verify that everything worked and the etcd nodes are connected. Use `journalctl --no-pager` or `journalctl -f` to check the systemd journal. Log into one of the etcd nodes and check the cluster status via `ETCDCTL_API=3 etcdctl member list` (you should see output similar to this):
```
645277c31f2e59fe, started, k8s-controller1, https://10.3.0.201:2380, https://10.3.0.201:2379
a81925033e34d269, started, k8s-controller2, https://10.3.0.202:2380, https://10.3.0.202:2379
ecf70543fa3a5935, started, k8s-controller3, https://10.3.0.203:2380, https://10.3.0.203:2379
```
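Besides listing the members you can also query the health of each node. A sketch, assuming the certificates were deployed to the default `etcd_conf_dir` of `/etc/etcd` under the file names from `etcd_certificates` (adjust the endpoints and paths to your setup):

```shell
# Check the health of every etcd endpoint over TLS.
ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://10.3.0.201:2379,https://10.3.0.202:2379,https://10.3.0.203:2379 \
  --cacert=/etc/etcd/ca-etcd.pem \
  --cert=/etc/etcd/cert-etcd.pem \
  --key=/etc/etcd/cert-etcd-key.pem
```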
Next we’ll install the Kubernetes control plane.