
Kubernetes the not so hard way with Ansible - The basics - (K8s v1.21)

2021-07-05

  • Added new links

2021-01-19

  • Updated a few links

2020-07-23

  • Fixed some typos

2020-05-09

  • Updated and added a few links
  • Updated text to reflect current state of technology

I created a series of posts about running Kubernetes (K8s for short) managed by Ansible. Part of my hosts run at Hetzner Cloud. I also run some of my Kubernetes cluster VMs on my local machine at home to save costs, which also makes it possible to pass through my second graphics card to a K8s VM, e.g. for doing some machine learning. That's all possible because all VMs are connected securely through a WireGuard VPN. In general you should be able to use the playbooks with minor or no modifications for other providers, e.g. Scaleway or DigitalOcean. I'll only test this with Ubuntu 20.04 LTS, but with no or minimal modifications it should work with all systemd based Linux operating systems and maybe also with Ubuntu 16.04 LTS.

I used Kelsey Hightower's wonderful guide Kubernetes the hard way as a starting point. My goal is to install a Kubernetes cluster with Ansible which could be used in production and is maintainable. It's not H/A at the moment because the Kubernetes components currently communicate with only one kube-apiserver (there are three kube-apiserver instances running, but there is just no load balancing in place yet; that can easily be implemented, e.g. with nginx). So at the moment there are still a few TODOs besides making requests to the kube-apiserver H/A. One idea to make the requests from a K8s worker node to the kube-apiserver H/A is to install the nginx webserver as a load balancer on the worker node and balance the requests between the three kube-apiserver instances. In this case kube-proxy and kubelet on the worker nodes talk to the local nginx instance, and if one kube-apiserver fails nginx automatically routes the requests to the next healthy kube-apiserver. nginx itself could also run as a DaemonSet, which would take care of restarting nginx if needed. But that's just an idea ATM ;-)
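Just to make that idea a bit more tangible, here is a rough sketch of what deploying such a local load balancer with Ansible could look like. This is not part of the playbooks in this series; the group name, the VPN IP addresses and the listen port are placeholders:

# Hypothetical sketch only - not part of this blog series.
# Deploys a minimal nginx "stream" configuration on the worker nodes that
# forwards TCP connections to three kube-apiserver instances.
- hosts: k8s_worker          # placeholder group name
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Configure nginx as local kube-apiserver load balancer
      copy:
        dest: /etc/nginx/nginx.conf
        content: |
          worker_processes 1;
          events { worker_connections 1024; }
          stream {
            upstream kube_apiserver {
              server 10.8.0.101:6443;   # placeholder VPN IPs of the controller nodes
              server 10.8.0.102:6443;
              server 10.8.0.103:6443;
            }
            server {
              listen 127.0.0.1:16443;   # kubelet/kube-proxy would point here
              proxy_pass kube_apiserver;
            }
          }
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted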

If you need something fast, maybe have a look at one of these projects:

Minikube
kubeadm
hetzner-kube
kubespray
kind (Kubernetes IN Docker)
MicroK8s

To enable the Kubernetes services to communicate securely between the hosts I'll use WireGuard. Linux kernel 5.6 (currently available with Arch Linux, for example) and also Ubuntu 20.04 LTS, which ships kernel 5.4, include the wireguard module. Other distributions need Dynamic Kernel Module Support (DKMS). My WireGuard Ansible role supports other OSes like Debian, CentOS and Fedora (more about that in a later blog post). Kelsey Hightower uses Google Cloud, which offers some nice networking options, but we don't have these features. WireGuard helps us compensate for this a little bit as it gives us a flat, fully encrypted network between all hosts, and it's easy to install (e.g. with the WireGuard Ansible role mentioned above).
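If you're curious how that role is driven, here is a minimal, purely illustrative example of per-host variables for one node. The variable names are the ones the role's README documented at the time of writing; double check them against the version you actually install:

# host_vars/controller01.i.domain.tld -- illustrative example only.
# Variables consumed by the githubixx.ansible_role_wireguard role
# (names may differ between role versions, see the role's README).
wireguard_address: "10.8.0.101/24"             # this node's IP inside the VPN
wireguard_port: 51820                          # UDP port WireGuard listens on
wireguard_endpoint: "controller01.example.com" # public address the other peers connect to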

Start your engines

If you want to do something real with your Kubernetes cluster you'll need at least 4 or 5 instances: three for the Kubernetes controller nodes, which will also host etcd (for high availability), plus at least one or two worker nodes (the nodes that will run the Docker containers and do the actual work). For smaller workloads CX11 instances at Hetzner Cloud (one 64-bit core, 2 GB RAM, 20 GB SSD each) are sufficient for the controller nodes. I try to keep costs low. If you run production load you should distribute the services over more hosts and use bigger instances for the workers (maybe something like CX31 or even bigger).

As a side note: we'll install the etcd cluster on the controller nodes (i.e. where the API server, the K8s scheduler and the K8s controller manager run) to save costs. For production, however, it's recommended to install etcd on its own hosts, so you may set up three additional hosts just for etcd. Fast storage is also recommended for etcd, so using at least SSDs or, even better, NVMe disks for the etcd hosts makes a lot of sense in production.

To set up the Kubernetes hosts at Hetzner you can use the hcloud modules included with Ansible. It makes sense to always use the latest Ansible version (e.g. >= 2.9) as some of these modules were only added in the last few releases. Of course you'll also find the Scaleway modules there. Besides the other cloud modules you can also manage VMs created via libvirt with the virt module, e.g. to deploy VMs that run locally on KVM/QEMU.
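As a rough example (not part of the playbooks used later in this series), creating a single instance with the hcloud_server module could look like the sketch below. It needs the hcloud Python library on the machine running Ansible and a Hetzner Cloud API token; all names and values are placeholders:

# Hypothetical sketch: create one CX11 instance at Hetzner Cloud.
# Requires the "hcloud" Python library (pip install hcloud) and an API token.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create controller node
      hcloud_server:
        api_token: "{{ lookup('env', 'HCLOUD_TOKEN') }}"
        name: controller01
        server_type: cx11
        image: ubuntu-20.04
        location: nbg1
        ssh_keys:
          - my-ssh-key          # name of an SSH key already uploaded to Hetzner Cloud
        state: present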

Another possibility is to just use the Hetzner Cloud Console UI to set up the hosts, or a tool like HashiCorp's Terraform. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently, and there is a Hetzner Cloud provider available for it. I won't go into detail about how to set up hosts here as it depends on your provider. For local testing a few VMs set up with HashiCorp's Vagrant could also be an option.

Personally I try to stick with Ansible wherever possible as one can basically manage everything with it.

Prepare Ansible

If you have never heard of Ansible: Ansible is a powerful IT automation engine. Instead of managing and handling your instances or deployments by hand, Ansible will do this for you. This is less error prone and everything is repeatable. To do something like installing a package you create an Ansible task. These tasks are organized in playbooks. The playbooks can be adjusted via variables for hosts, host groups, and so on. A very useful feature of Ansible is roles. Say you want to install the Apache webserver on ten hosts: you just assign an Apache role to those ten hosts, maybe adjust some host group variables, and roll out the Apache webserver on all the hosts you specified. Very easy! For more information read Getting started with Ansible. But I'll add some comments in my blog posts about what's going on in the roles/playbooks we use.
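To make the Apache example concrete, a minimal playbook could look like the following sketch. The webservers group and the apache role are hypothetical and only serve to illustrate how roles are assigned to hosts:

# site.yml -- hypothetical example.
# Applies the (hypothetical) "apache" role to all hosts in the
# "webservers" inventory group.
- hosts: webservers
  become: true
  roles:
    - apache

You would then run it with ansible-playbook -i hosts site.yml and Ansible takes care of the rest.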

For Ansible beginners: Also have a look here: ANSIBLE BEST PRACTICES: THE ESSENTIALS

I was also thinking about using ImmutableServer and Immutable infrastructure but decided to go with Ansible for now. These immutable server/infrastructure concepts have some real advantages and we're using them very successfully in my company together with Google Cloud. Using virtual machines like Docker containers and throwing them away at any time is quite cool :-) . VM images can be created with HashiCorp's Packer, for example, and rolled out with Ansible, Terraform, or whatever you prefer. When a server starts up, a startup script or cloud-init can set up the services by reading the instance metadata. But that's going too far for now. I just wanted to mention it ;-)

Setup Ansible

If you haven’t already setup a Ansible directory which holds the hosts file, the roles and so on then do so now. The default directory for Ansible roles is /etc/ansible/roles. To add an additional roles directory adjust the Ansible configuration /etc/ansible/ansible.cfg and add your roles path to roles_path setting (separated by :). It’s also possible to have ansible.cfg in different places. See The Ansible configuration file for more information. I’m keen on of having everything in one place so I put everything Ansible related into /opt/ansible directory. So the roles path for me is roles_path /opt/ansible/roles:/etc/ansible/roles.

A tool for installing Ansible roles is ansible-galaxy which is included if you install Ansible. Also have a look at https://galaxy.ansible.com/ for more information (you can also browse the available roles there).
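For example, the roles used later in this series (see the directory layout below) could be pulled in with a requirements.yml file. The file below only lists a few of them and is meant as an illustration:

# requirements.yml -- install with: ansible-galaxy install -r requirements.yml
- src: githubixx.harden-linux
- src: githubixx.ansible_role_wireguard
- src: githubixx.etcd
- src: githubixx.kubernetes-controller
- src: githubixx.kubernetes-worker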

My Ansible directory structure will look like this when everything is setup at the end of the blog series (also see Ansible directory layout best practice):

.
├── group_vars
│   └── all.yml
├── hosts
├── host_vars
│   ├── controller01.i.domain.tld
│   ├── controller02.i.domain.tld
│   ├── controller03.i.domain.tld
│   ├── worker01.i.domain.tld
│   ├── worker02.i.domain.tld
│   └── workstation
├── k8s.yml
├── playbooks
│   └── kubernetes-misc
│       ├── coredns
│       ├── kubeauthconfig
│       ├── kubectlconfig
│       ├── kubeencryptionconfig
│       ├── kube-router
│       ├── LICENSE
│       └── README.md
└── roles
    ├── githubixx.ansible_role_wireguard
    ├── githubixx.cilium_kubernetes
    ├── githubixx.cfssl
    ├── githubixx.docker
    ├── githubixx.etcd
    ├── githubixx.harden-linux
    ├── githubixx.kubectl
    ├── githubixx.kubernetes-ca
    ├── githubixx.kubernetes-controller
    ├── githubixx.kubernetes-worker
    └── githubixx.traefik_kubernetes
Don't worry if the directories don't contain all the files yet; we'll get there. Just make sure that at least the top level directories like group_vars, host_vars, playbooks and roles exist. As you can see from the output, group_vars, host_vars, playbooks and roles are directories, while hosts and k8s.yml are files. I'll explain what these directories and files are good for as you walk through the blog posts, and I'll also tell you a little bit more about Ansible along the way.
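Just to give you a rough idea, the hosts inventory file could look something like the following YAML sketch (an INI-style inventory works just as well). The group names are made up here; the groups actually used are introduced step by step in the following posts:

# hosts -- purely illustrative sketch, group names are placeholders.
all:
  children:
    k8s_controller:
      hosts:
        controller01.i.domain.tld:
        controller02.i.domain.tld:
        controller03.i.domain.tld:
    k8s_worker:
      hosts:
        worker01.i.domain.tld:
        worker02.i.domain.tld: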

Hint: quite a few variables are needed by more than one role and by the playbooks. Put this kind of variable into group_vars/all.yml. Especially variables needed by the playbooks (not the roles) fit well there. But it's up to you where you place the variables as long as the roles/playbooks find them when they're needed ;-) Throughout the tutorial I'll put all common variables into group_vars/all.yml as it makes things more straightforward. You may organize variables differently (also see Using variables).
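Just to illustrate the idea, such a group_vars/all.yml could start out with a few settings that really apply to every host. The values below are only examples, not ones this series depends on:

# group_vars/all.yml -- illustrative example only.
ansible_user: deploy                           # user Ansible connects as on every host
ansible_python_interpreter: /usr/bin/python3   # Python interpreter on the managed hosts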

That’s it for the basics. Continue with harden the instances.