Developers, be KIND to your containers! It’s easy with Kubernetes IN Docker stack ;)

Here at NubiSoft, most of our CI/CD processes are configured in the cloud (after all, the name of our company obliges us to something). But sometimes it is useful to deploy locally, e.g. for development purposes. For solutions built around the Kubernetes ecosystem, that used to be relatively problematic ...at least until recently.

So in this post, we will demonstrate how to build a configuration for deploying a Kubernetes cluster locally. Solutions like Minikube or kubeadm-dind-cluster have long tried to address this need, but they are limited to the extent that their practical usage (beyond learning purposes) was disputable. That is why I got so excited after a talk at a conference last year where a completely new approach, KIND, was presented. The months have passed, and I have finally concluded that it is mature enough to be adopted by our team.

When it comes to computing resources, everything I show below is run on my laptop – DELL XPS 13 with 16GB RAM and i7 processor, but in a VirtualBox environment. This environment runs Ubuntu 18 (Bionic Beaver) and has 8GB of RAM and 4 CPUs allocated. So let’s roll up our sleeves and get to work!

First, we have to upgrade our Ubuntu Bionic OS.

sudo apt update
sudo apt -y upgrade

As Kubernetes itself and KIND are implemented in Golang, we need to install the Go platform first. At the time of writing this post, the minimum version of Go required by KIND is 1.13. Let us download the proper binaries.

wget https://dl.google.com/go/go1.13.3.linux-amd64.tar.gz

Now we need to extract the downloaded archive to a shared place in the filesystem so it is available to every user,

sudo tar -xvf go1.13.3.linux-amd64.tar.gz
sudo mv go /usr/local

and configure environment variables – I prefer doing that for a local profile:

nano ~/.profile

At the end of the .profile file, we have to add the following lines:

export GOROOT=/usr/local/go
export PATH=$GOROOT/bin:~/go/bin:$PATH
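If you'd rather script this edit than open an editor, here is a small idempotent sketch (my own convenience, not part of the official setup) that appends the lines only if they are not already there:

```shell
# Append the Go environment setup to ~/.profile, but only once
# (safe to re-run; it checks for the GOROOT line first)
grep -qxF 'export GOROOT=/usr/local/go' ~/.profile 2>/dev/null || cat >> ~/.profile <<'EOF'
export GOROOT=/usr/local/go
export PATH=$GOROOT/bin:~/go/bin:$PATH
EOF
```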

After re-reading the profile file, we can check whether the Go language is installed and accessible.

source ~/.profile
go version

Now it is time to install Docker. I prefer doing that using a dedicated repository.

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt install docker-ce

Normally, Docker commands require sudo, but to be able to use KIND commands we have to configure Docker to be usable without sudoing.

sudo usermod -aG docker $USER

And to apply the above configuration without re-logging to the OS, we do the following:

su - $USER

Now it is time to install KIND. There is not much work to do here 😉

GO111MODULE="on" go get sigs.k8s.io/kind@v0.7.0

And finally, we’re ready to launch our Kubernetes cluster. Please note that the first execution will take a while, because kind has to pull (i.e. download) the proper Docker node image. To start, let’s initialize our Kubernetes cluster with default settings.

kind create cluster

After that, we can verify the default single-node configuration of the just-created cluster. The name of the node is kind-control-plane.

kind get nodes
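As a side note, the default invocation is, as far as I can tell, equivalent to passing kind a minimal config file with a single control-plane node; a sketch using the same schema we will use later for multi-node clusters:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
```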

To be able to interact with our cluster we need to install one more tool, namely kubectl. There are many ways of doing that, but while I’m a big fan of containerization technologies, on client machines I try to avoid snap packages. So I am doing it my way (note the use of the xenial repo, as at the time of writing this post there is no bionic one yet):

sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

Now we are able to verify what pods are running in our cluster:

kubectl get po --all-namespaces

And thanks to the --all-namespaces flag we see quite a lot of them, because this command also lists all infrastructural pods.

Ok, what if we would like to create an HA cluster with more than one node?

With KIND it is possible and easy. All we need is to prepare a configuration file (saved as kind-multinode.yaml) like the one shown below.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
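One handy extension worth knowing about (an assumption of mine, not something the multi-node setup above requires): KIND's node config also accepts extraPortMappings, which forward a host port to a node port so that a NodePort service becomes reachable from the host browser. A sketch:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # NodePort of a service inside the cluster
    hostPort: 8080         # port exposed on the host machine
    protocol: TCP
- role: worker
```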

Now it is enough to recreate our cluster:

kind delete cluster
kind create cluster --config kind-multinode.yaml

Let’s verify if all nodes were created:

kubectl get nodes

… and voilà 😉

In the end, I will show you how to run the Kubernetes Dashboard in such an environment. I must point out that I personally do not use this tool, and that for supporting production environments kubectl is more than enough. But this post is dedicated to developers, not DevOps, and developers like graphical environments after all 😉

The most important thing to know is that the Dashboard runs as a pod, so all you need to do is execute:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

This will install the Dashboard but grant no user access to it. Kubernetes implements the RBAC model, and granting privileges on production environments should be done with extreme caution. Many users hit problems when starting to use the Dashboard, running into errors like ‘User “system:anonymous” cannot list resource’. This happens because the user they try to log in as lacks the privileges required to access cluster metadata. But here we have a local environment, so we’ll take a shortcut and grant the admin role to the default service account.

kubectl create clusterrolebinding default-sa-admin --clusterrole cluster-admin --user system:serviceaccount:default:default
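The same binding can also be expressed declaratively; a sketch of what I believe is the equivalent manifest (the name default-sa-admin matches the command above), which you could kubectl apply instead:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-sa-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```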

We will log in to the Dashboard using a secret token, so we have to retrieve it from our cluster configuration.

kubectl get secret

From the listed secrets, pick the one named default-token-&lt;suffix&gt; and copy its token to the clipboard (kubectl describe secret &lt;name&gt; prints it decoded).
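If you prefer not to copy the token by hand, the lookup can be scripted; a sketch, assuming the secret is the default ServiceAccount token in the default namespace (the helper name dashboard_token is mine, not a standard command):

```shell
# Print the decoded bearer token of a secret, ready to paste into the
# Dashboard login form. Secret data is stored base64-encoded, hence the decode.
dashboard_token() {
  kubectl get secret "$1" -o jsonpath='{.data.token}' | base64 -d
}

# Usage: pass the default-token-<suffix> name listed by 'kubectl get secret',
# e.g.  dashboard_token default-token-abc12
```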

Then we have to start proxying the dashboard

kubectl proxy

and type the proper URL as an address in your web browser

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

And finally, we see that access is granted.

That is all. And you? What techniques do you use to deploy Kubernetes clusters locally?

A few other angles

  1. If so far you have used the kubeadm-dind-cluster solution, please note that its authors state it is already deprecated: < NOTE: This project is deprecated in favor of kind. Try kind today, it’s great! >
  2. It is worth mentioning that the alternative approach to deploying Kubernetes clusters locally, implemented by Minikube, requires virtualization technology (VirtualBox or KVM), which may lead to problems when you want to set up such a cluster inside a virtual environment itself.
  3. I must admit that KIND also has some limitations compared to a real cluster, so nothing can replace the final tests of the implemented software on the target cluster.
  4. Be aware that a cluster created using KIND is (like all containers) ephemeral. This means the cluster won’t survive a system reboot and must be manually recreated afterwards. This is a known issue, reported here, and here. The KIND developers declare some improvements there for the future.
