# Raspberry Pi - The everything computer
The basic setup of a Raspberry Pi:
- The most convenient way to provision the SD card is the Raspberry Pi Imager.
If configured properly, you should be able to connect to the Raspberry Pi via SSH. Log in and bring the system up to date before starting the actual installation:

```shell
sudo apt update && sudo apt upgrade -y
sudo apt autoremove
```
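Before going further, it is worth confirming that the Pi runs a 64-bit OS - the arm64 Kubernetes and containerd images assume it. A quick, non-destructive check:

```shell
# Print the machine hardware name; on a Pi 4/5 with 64-bit Raspberry Pi OS
# this should report aarch64 (a 32-bit image reports armv7l instead).
uname -m
```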
Sure thing, there are a lot of flavors and ways to install a Kubernetes cluster... Since we want hands-on experience with the device for the CKA and CKS, we'll use the setup tool `kubeadm`.
## Installing a CRI and CNI plugins
First things first: Let's go through the checklist of prerequisites... We need to install a CRI (Container Runtime Interface) implementation and CNI (Container Network Interface) plugins on the device.
One runtime that worked for us is `containerd`:

```shell
sudo apt-get install containerd containernetworking-plugins -y
```
> Note: The sandbox image should be in sync with the Kubernetes version you are using.
...and configure the device as specified for `containerd`.

> Tip: You can start from the full default configuration (which might solve some issues with missing values):

```shell
containerd config default | sudo tee /etc/containerd/config.toml
```

A snippet with the relevant parts of /etc/containerd/config.toml:
```toml
version = 2

[plugins]
  ...
  [plugins."io.containerd.grpc.v1.cri"]
    ...
    sandbox_image = "registry.k8s.io/pause:3.9"
    ...
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/usr/lib/cni"
      conf_dir = "/etc/cni/net.d"
    ...
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  ...
  [plugins."io.containerd.internal.v1.opt"]
    path = "/var/lib/containerd/opt"
...
```
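If you started from the default configuration, the change that usually matters is flipping `SystemdCgroup` from `false` to `true`. A one-line `sed` does it - sketched here on a temporary copy; on the Pi you would run it with `sudo` against /etc/containerd/config.toml:

```shell
# Temporary file standing in for /etc/containerd/config.toml.
cfg=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]\n  SystemdCgroup = false\n' > "$cfg"

# Flip the cgroup driver to systemd (the setting kubeadm expects by default).
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"

grep SystemdCgroup "$cfg"   # should now show: SystemdCgroup = true
```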
If everything looks good, restart `containerd`:

```shell
sudo systemctl restart containerd
```

> Note: You can check the final effective configuration with `containerd config dump`.
## Installing kubeadm
```shell
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-mark unhold kubelet kubeadm kubectl
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
## Configure cgroup and iptables

A few more configurations to go. Enable the cgroup features `cpuset` and `memory`:

```shell
cgroup="$(head -n1 /boot/firmware/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1"
echo "$cgroup" | sudo tee /boot/firmware/cmdline.txt
```

> Note: A reboot is required for the new kernel command line to take effect.
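Note that /boot/firmware/cmdline.txt must stay a single line, and running the snippet above twice appends the flags twice. A slightly more defensive, idempotent variant - demonstrated on a temporary copy with hypothetical boot parameters; on the Pi, point it at the real file:

```shell
# Stand-in for /boot/firmware/cmdline.txt with hypothetical boot parameters.
cmdline=$(mktemp)
echo "console=tty1 root=PARTUUID=12345678-02 rootfstype=ext4 fsck.repair=yes rootwait" > "$cmdline"

# Append the cgroup flags only if they are not there yet (keeps the file single-line).
if ! grep -q 'cgroup_enable=memory' "$cmdline"; then
  sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' "$cmdline"
fi

cat "$cmdline"
```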
### Forwarding IPv4 and letting iptables see bridged traffic
Check whether the required kernel modules are loaded:

```shell
lsmod | grep br_netfilter
lsmod | grep overlay
```
If they are not present, add them to /etc/modules-load.d/containerd.conf:

```shell
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
```

This creates the following file (/etc/modules-load.d/containerd.conf) and persists the configuration across reboots:

```
overlay
br_netfilter
```
> Tip: You can load the kernel modules without a reboot:

```shell
sudo modprobe overlay
sudo modprobe br_netfilter
```
...iptables tuning is next:

```shell
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
```
> Tip: Like the previous configuration, this can be applied without a reboot:

```shell
sudo sysctl --system
```

Check the result with:

```shell
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```
## Turn off swap

The following commands check the current swap configuration/status:

```shell
sudo swapon --summary
sudo service dphys-swapfile status
```

Deactivate and remove the swap:

```shell
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove
sudo systemctl disable dphys-swapfile
```
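The kubelet (with its default `failSwapOn` setting) refuses to start while swap is active, so after a reboot it is worth double-checking that nothing re-enabled it:

```shell
# SwapTotal should read "0 kB" once dphys-swapfile is gone.
grep SwapTotal /proc/meminfo
```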
## kubeadm init

Finally, the actual setup of the Kubernetes cluster with `kubeadm`:

Pull the images (optional; this is included in the second step):

```shell
sudo kubeadm config images pull
```
...and the actual initialization:

> Note: The creation of the configuration file seems obsolete - see Configuring a cgroup driver.

After some failed experiments, the only parameter needed (for a smooth installation of the ~~flannel CNI~~ Calico) is the `pod-network-cidr`:

```shell
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
```

> Warning: Please check the CIDR range for conflicts with your local network!
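As a rough sanity check, you can compare your LAN prefix against the pod CIDR before running `kubeadm init`. A minimal sketch, assuming the LAN address is known - here hard-coded as a hypothetical 192.168.1.x home network; in practice take it from `ip route`:

```shell
pod_cidr="192.168.0.0/16"
lan_prefix="192.168.1"   # hypothetical LAN prefix, e.g. taken from `ip route`

# First two octets of the pod CIDR (its /16 prefix).
pod_prefix=${pod_cidr%.*.*/*}

case "$lan_prefix" in
  "$pod_prefix".*) echo "conflict: choose another --pod-network-cidr, e.g. 10.244.0.0/16" ;;
  *)               echo "no conflict" ;;
esac
```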
(Outdated) Create the configuration file kubeadm-config.yaml (with the proper version!):

```yaml
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.29.3
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```
## Prepare the local user environment

Check the output of `kubeadm init` for the setup suitable for your installation. The following snippets were captured during the (third, and finally successful) setup session for this article:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
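For a quick root-shell session, the `kubeadm init` output also suggests an alternative to copying the file: point `KUBECONFIG` directly at the admin config:

```shell
# Per-shell alternative to copying admin.conf into ~/.kube/config.
# (Reading /etc/kubernetes/admin.conf still requires root on the Pi.)
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```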
## Install a CNI (~~flannel~~ Calico)

Installation as posted in:

```shell
sudo curl -L -o /usr/lib/cni/calico https://github.com/projectcalico/cni-plugin/releases/download/v3.14.0/calico-arm64
sudo chmod 755 /usr/lib/cni/calico
sudo curl -L -o /usr/lib/cni/calico-ipam https://github.com/projectcalico/cni-plugin/releases/download/v3.14.0/calico-ipam-arm64
sudo chmod 755 /usr/lib/cni/calico-ipam
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/custom-resources.yaml
```
> Beware: Outdated information below - we switched to Calico.

```shell
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```
## Look around with crictl

Show all running containers:

```shell
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
```

> Tip: You can configure the runtime endpoint for `crictl` in /etc/crictl.yaml:

```yaml
# /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
```
## Tips for single node clusters
To easily schedule pods on a single-node cluster (only a control-plane node), e.g. for testing, you can remove the taint `node-role.kubernetes.io/control-plane:NoSchedule` from the (only) node with:
```shell
kubectl taint nodes <controlplane-node> node-role.kubernetes.io/control-plane:NoSchedule-
```
## Add more nodes to the cluster

On the control-plane node, the following command generates a token for joining the cluster:

```shell
kubeadm token create --print-join-command
```

...and on the node to be added, the output of the previous command can be used to join the cluster:

```shell
sudo kubeadm join <control-plane-host>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
## Resources
- Building a Raspberry Pi Kubernetes Cluster on Ubuntu 20.04 using kubeadm and Containerd as the runtime.
- How to install Kubernetes on Raspberry PI
## Outlook

With the manual setup done once, there is a good chance we'll automate the process with Ansible :-) Stay tuned...
Image by DALL-E:

> Can you create a black and white sketch of a Raspberry Pi with a bookworm working through the case and a Kubernetes steering wheel attached to it?