Raspberry Pi 5 - a Kubernetes cluster

Raspberry Pi - The everything computer.

The basic setup of a Raspberry Pi:

If configured properly you should be able to connect to the Raspberry Pi via SSH. Log in and get the system up to date before we start the actual installation:

sudo apt update && sudo apt upgrade -y

There are many flavors of, and ways to install, a Kubernetes cluster... Since we want hands-on experience with the device for the CKA and CKS exams, we'll use the setup tool kubeadm.

Installing CRI plus CNIs

First things first: Let's go through the checklist and prerequisites...

We need a container runtime that implements the CRI (Container Runtime Interface) plus a set of CNI (Container Network Interface) plugins on the device. A combination that worked for us was containerd with the stock CNI plugins package:

sudo apt-get install containerd containernetworking-plugins -y

Note: The sandbox (pause) image should be in sync with the Kubernetes version you are using; kubeadm config images list prints the tag your kubeadm release expects.

...and configure the device as specified for containerd. Note that on Debian-based systems the containernetworking-plugins package installs the CNI binaries under /usr/lib/cni rather than containerd's default /opt/cni/bin, hence the bin_dir override in the snippet below.

Tip: You can check or dump the default configuration of containerd with containerd config default.

A snippet with the relevant parts:

version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/usr/lib/cni"
    conf_dir = "/etc/cni/net.d"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

[plugins."io.containerd.internal.v1.opt"]
  path = "/var/lib/containerd/opt"

If everything looks good...then restart containerd:

sudo systemctl restart containerd

Installing kubeadm

Installing kubeadm:

sudo apt-get install -y apt-transport-https ca-certificates curl gpg

sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
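Note that the minor Kubernetes release line is pinned in the repository URL itself. A small sketch (the v1.30 target is only an illustrative example) of the one substitution needed when moving to a newer release line:

```shell
# The release line (v1.29) is part of the pkgs.k8s.io URL; switching to a
# newer line is a single substitution in the apt source entry.
# v1.30 below is an example target, not a recommendation.
line='deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /'
echo "$line" | sed 's|/v1\.29/|/v1.30/|'
```

After changing the real /etc/apt/sources.list.d/kubernetes.list the same way, re-run sudo apt-get update.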

sudo apt-get update
sudo apt-mark unhold kubelet kubeadm kubectl
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Configure cgroup and iptables

A few more configurations to go. First, enable the cgroup features cpuset and memory via the kernel command line:

cgroup="$(head -n1 /boot/firmware/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1"
echo "$cgroup" | sudo tee /boot/firmware/cmdline.txt
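The kernel expects cmdline.txt to be exactly one line, so it is worth dry-running the append on a throwaway copy first (the boot parameters below are made up, not from a real Pi):

```shell
# Dry run on a scratch file; the real target is /boot/firmware/cmdline.txt.
echo "console=serial0,115200 root=PARTUUID=0000 rootwait" > /tmp/cmdline.txt
# Same append as above, against the scratch file:
cgroup="$(head -n1 /tmp/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1"
echo "$cgroup" > /tmp/cmdline.txt
cat /tmp/cmdline.txt   # still a single line, now ending in the cgroup flags
```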

Forwarding IPv4 and letting iptables see bridged traffic

Check whether the required kernel modules (overlay and br_netfilter) are already loaded:

lsmod | grep br_netfilter
lsmod | grep overlay

If they are not present, add them to /etc/modules-load.d/containerd.conf. This creates the file and persists the configuration across reboots:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Tip: You can load the kernel modules without a reboot

sudo modprobe overlay
sudo modprobe br_netfilter

...iptables tuning is next:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Tip: Like the previous configuration this can be applied without a reboot

sudo sysctl --system

Check the result with:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

All three keys should report the value 1.

Turn off swap

The following commands can check the current swap configuration/status:

sudo swapon --summary
sudo service dphys-swapfile status

Deactivate or remove the swap:

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove
sudo systemctl disable dphys-swapfile

kubeadm init

Finally the actual setup of the Kubernetes cluster with kubeadm:

Pull the images (optional; kubeadm init in the next step pulls them anyway):

sudo kubeadm config images pull

...the actual initialization:

Note: Creating this configuration file seems to be obsolete: since v1.22, kubeadm defaults the kubelet's cgroupDriver to systemd anyway (see the Kubernetes documentation on configuring a cgroup driver).

Create the configuration file kubeadm-config.yaml (with the proper version!):

# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.29.1
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

After some failed experiments, the only parameter needed (for a smooth installation of the flannel CNI) is the pod network CIDR; flannel's default is 10.244.0.0/16:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Install a CNI (flannel)

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Prepare the local user environment

Check the output of kubeadm init for the setup steps suitable for your installation. The following snippets were captured during the (third, and finally successful) setup session for this article:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
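The kubeadm init output also mentions an alternative for root shells or quick tests: point kubectl at the admin kubeconfig via an environment variable instead of copying it:

```shell
# Alternative to copying admin.conf: export KUBECONFIG for the current
# shell session only (does not persist across logins).
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```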

Tips for single node clusters

To easily schedule pods on a single-node cluster (only a control-plane node), e.g. for testing, remove the taint node-role.kubernetes.io/control-plane:NoSchedule from the (only) node. The trailing minus in the command removes the taint:

kubectl taint nodes <controlplane-node> node-role.kubernetes.io/control-plane:NoSchedule-



With the manual setup done once, there is a good chance we'll automate the process with Ansible :-) Stay tuned...

Image by DALL-E

Can you create a black and white sketch of a Raspberry Pi with a bookworm working through the case and a Kubernetes steering wheel attached to it?