Manually creating a Kubernetes cluster with kubeadm

I’ve talked about Pivotal Container Service (PKS) before and now work for Pivotal, so I’ve frequently got K8s on my mind.  I’ve discussed at length the benefits of PKS and the creation of K8s clusters, but didn’t have much of a point of reference for alternatives.  I know about Kelsey Hightower’s book and was looking for something a little less in-the-weeds.

Enter kubeadm, included in recent versions of K8s.  With this tool, you’re better able to understand what goes into a cluster, how the master and workers are related, and how the networking is organized.  I really wanted to stand up a K8s cluster alongside a PKS-managed cluster in order to better understand the differences (if any).  This is also a part of the Linux Foundation training “Kubernetes Fundamentals”.  I don’t want to spoil the course for you, but I will point to some of the docs on kubernetes.io.

 

Getting Ready

I used VMware Fusion on my MacBook to create and run two Ubuntu 18.04 VMs.  Each was a linked clone with 2GB RAM and 1 vCPU.  I had to make sure that they had different MAC addresses, IPs, UUIDs and hostnames.  I’m sure you can use nearly any virtualization tool to get your VMs running.  Once they’re running, be sure you can SSH into each.
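A minimal sketch of that per-VM prep, assuming hostnames of k8s-master and k8s-worker (pick your own):

# Set a unique hostname on each VM (k8s-master shown; use k8s-worker on the other)
sudo hostnamectl set-hostname k8s-master

# Verify the MAC address and product_uuid differ between the two VMs
ip link show
sudo cat /sys/class/dmi/id/product_uuid

# kubeadm requires swap to be off; disable it now and in /etc/fstab
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab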

Install Docker

I thought, “hey I’ve done this before” and just installed Docker as per usual, but that method does not configure the systemd cgroup driver that kubeadm expects, so we’ll want to install Docker following the container-runtime instructions on kubernetes.io.
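If you’d rather keep an existing Docker install, the underlying fix is pointing Docker at the systemd cgroup driver.  A minimal sketch, following the container-runtime notes on kubernetes.io:

# Tell Docker to use the systemd cgroup driver, as kubeadm expects
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker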

Install Tools

Once again, the kubernetes.io site provides commands to properly install kubeadm, kubelet and kubectl on our Ubuntu nodes.  Use the commands on that page to ensure the tools are installed and held at the correct version.
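For reference, at the time of writing those commands looked roughly like this – check the current page, since the package repository has moved over time:

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Hold the packages so an apt upgrade doesn't move the cluster version out from under us
sudo apt-mark hold kubelet kubeadm kubectl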

Choose your CNI pod network

Ok, what?  CNI, the Container Network Interface, is a specification for networking add-ons for K8s, and kubeadm requires that we use a pod network add-on that implements the CNI spec.  The pod network – we may have only one per k8s cluster – is the network that the pods communicate on; think of it as an overlay, distinct from the network you’ve actually assigned to the Ubuntu nodes.  This can be confusing, because this address space is separate from the service CIDR, which is the range used when we “expose” a service.  Which pod-network-cidr you assign depends on which network add-on you select.  In my case, I went with Canal, as it seems to be both powerful and flexible.  Also, the default pod-network CIDR used by Calico is “192.168.0.0/16”, which is already in use in my home lab – it may not have actually been a conflict, but it certainly would be confusing if the range were in use twice.

Create the master node
SSH into your designated “master” Ubuntu VM and make sure that you’ve installed kubeadm, kubelet, kubectl and Docker from the steps above.  If you also chose Canal, you’ll initialize the master node (not the worker node – we’ll have a different command for that one) by running

kubeadm init --pod-network-cidr=10.244.0.0/16

Use exactly that CIDR: Canal uses Flannel for pod networking, and 10.244.0.0/16 is Flannel’s default pod network.  It’ll take a few minutes to download, install and configure the k8s components.  When the initialization has completed, you’ll see a message like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.129:6443 --token zdjzrp.5jad4gihqjo46olg \
--discovery-token-ca-cert-hash sha256:90d1b349aa93a7130ee91668e4e763a4c29e5fc1502060191b38ea0e31d3cec8

Following those instructions, we’ll exit the root shell and run the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Make a note of the kubeadm join command at the bottom, as we’ll need it in order to join our worker to the newly-formed cluster.

Sanity-check:

On the master, run kubectl get nodes.  You’ll notice that we have 1 node and it’s NotReady.
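The output will look something like this (the node name and version here are illustrative; yours will differ):

NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   2m    v1.13.1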

Install the pod network add-on

Referencing the docs, you’ll note that Canal has a couple of YAML files to be applied to our cluster.  So, again on our master node, we’ll run these commands to install and configure Canal:


kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml
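You can watch the Canal pods come up before re-checking the node (Ctrl-C to stop watching):

kubectl get pods --all-namespaces -w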

Sanity-check:

On the master, run kubectl get nodes again.  You’ll notice that we still have 1 node, but now it’s Ready.
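Illustrative output again:

NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   10m   v1.13.1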

Enable pod placement on master
By default, the cluster will not schedule pods on the master node.  If you want a single-node cluster, or want to use the master node’s compute capacity for pods, you’ll need to remove the taint.

Default taint on master
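A quick way to see this taint – assuming a master hostname of k8s-master; substitute your own – is

kubectl describe node k8s-master | grep -i taints

which should return something like

Taints:             node-role.kubernetes.io/master:NoSchedule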

We’ll remove the taint with the command

kubectl taint nodes --all node-role.kubernetes.io/master-

Join worker to cluster
You saved the output from kubeadm init earlier, right?  We’ll need that now to join the worker to the cluster.  On the worker VM, become root via sudo su - and run the kubeadm join command you saved.

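If the join succeeds, kubeadm ends with a confirmation along these lines:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.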

Now, back on the master, we run kubectl get nodes and can see both nodes!

Master and Worker Ready
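Illustrative output (names, ages and versions will vary):

NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   25m   v1.13.1
k8s-worker   Ready    <none>   2m    v1.13.1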

Summary and Next Steps

At this point, we have a functional Kubernetes cluster with 1 master and 1 worker.  Next in this series, we’ll deploy some applications and compare the behavior to a PKS-managed Kubernetes cluster.

 
