
Kubernetes Cluster on Raspberry Pi using Ubuntu 22.04 LTS, K3s, and MetalLB

#kubernetes #k3s #raspberrypi #metallb

*This post documents the first K3s build of my minilab. In 2023, I rebuilt the cluster, introducing an Intel-based master/worker node and using Cilium as my CNI. See this post: [[Kubernetes Cluster on Raspberry Pi using Ubuntu 22.04 LTS, K3s, and Cilium!]]*

Building a Kubernetes cluster on Raspberry Pi is a great way to get started with Kubernetes, and K3s, a lightweight Kubernetes distribution, is a great fit for the Pi. This post documents my experience setting up a K3s cluster on Raspberry Pi using Ubuntu 22.04.

(Photos: minilab v1, side and back views)

Introduction

Why Raspberry Pi?

The Raspberry Pi is a Single-Board Computer (SBC) that was originally designed as an educational tool to help people learn how to code, but it has since become very popular in the hardware and hacking communities, with people using the device for hardware projects, home automation, robotics, and other applications.

For me, it just seems like a fun way to learn Kubernetes and an excuse to tinker with physical computers again.

Why Ubuntu?

Ubuntu is a popular Linux distribution on PCs, and it is available as a pre-built release for the Raspberry Pi.

The most important reason I am using Ubuntu for this project is my experience with the Linux distribution. cloud-init is a significant benefit since we can establish the initial setup (creating SSH keys for the default user, as well as installing necessary software) without requiring additional monitors or laborious manual processes.

The OS is not the most important piece of this project; use whatever Linux distribution you prefer on the Raspberry Pi.

At the time of writing, Ubuntu 22.04 is the latest LTS release of Ubuntu. See the Ubuntu release notes for the release schedule and what’s new in this release.

Why K3s?

Initially, I set up “Kubernetes the hard way” on Raspberry Pis, and it was an excellent opportunity to prepare for my CKA. However, I ran into difficulties with the OS wiping itself off cheap microSD cards (which I threw away), and it was a pain going through all of the manual steps to re-deploy my Kubernetes cluster.

K3s is a lightweight, “stripped-down” Kubernetes distribution that is well suited to Raspberry Pi hardware because it is quick to install. In fact, K3s has been designed to work on ARM systems. Its simplicity makes it a breeze to install, remove, and re-install when things go wrong.

Hardware Requirements

We need a few Raspberry Pi single-board computers to construct our Raspberry Pi Kubernetes cluster. I paid $75 for each Raspberry Pi 4 Model B (8 GB RAM) in 2020. With more or fewer boards, you can build a larger or smaller Kubernetes cluster.

Generally, we need:

I used the following building components (at prices in 2020):

All my components were purchased from the Pi Shop, Amazon, and eBay.

Operating System Setup

We must first set up and configure the Ubuntu Linux Operating System on each node of the future Kubernetes cluster.

Our cluster will include four machines (I use Raspberry Pi device, machine, node, and host interchangeably), each with its own name and IP address:

  1. k8s0 - Master Node (10.0.0.190)
  2. k8s1 - Worker Node (10.0.0.191)
  3. k8s2 - Worker Node (10.0.0.192)
  4. k8s3 - Worker Node (10.0.0.193)

The Master Node is the cluster’s primary node, in charge of orchestration. Although it is uncommon in a multi-node cluster, the Master Node may also function as a worker and execute apps if required.

A Worker node is a computer dedicated to running our applications only, and it is managed remotely by the Master node.

This is my setup:

(Diagram: k3s cluster topology)

I’m not sure which distributed file system is supported or works best with K3s on the Raspberry Pi. Perhaps GlusterFS? Or Ceph? We will leave that problem for another day.

Onwards we go…

Flash Ubuntu OS onto the Micro SD cards

I’m running Ubuntu Server 22.04 LTS 64-bit. This edition does not include a desktop environment or recommended packages, so we begin with a completely clean, light, and fresh installation.

Raspberry Pi Imager is the easiest way to flash an OS onto your SD card, and you can download the target Ubuntu image from within the tool.

  1. Open Raspberry Pi Imager > Other General Purpose OS > Ubuntu > Ubuntu Server 22.04 LTS - make sure to select the 64-bit image

    (Screenshot: Raspberry Pi Imager)

  2. Plug a Micro SD Card into your local machine

  3. Select the target Micro SD card and select ‘WRITE’

Configuring your instance during boot

Once the SD cards are flashed with Ubuntu, re-mount the SD card if necessary (eject and re-insert it), open it in your file manager, and edit the two files user-data and cmdline.txt. These set the initial configuration of each Raspberry Pi: a static IP address, a hostname, and some prerequisite software.

As a reference or template to edit from, here are the two files I used:

Edit cmdline.txt

  1. Inspect the cmdline.txt file. There likely won’t be anything to modify here; note that the Kubernetes install prerequisites have already been appended to the boot command line:

    cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
    

    Check out my cmdline.txt as an example; a representative full line is sketched below.
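
For reference, cmdline.txt is a single line of boot parameters; if the cgroup flags are ever missing, they simply get appended to the end of that one line. The leading parameters below are only illustrative and will differ per image; the trailing cgroup flags are the part Kubernetes needs:

```
root=LABEL=writable rootfstype=ext4 rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
```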

Add user-data user-specific configurations (consistent on all nodes)

Edit the user-data file for your user-specific configurations (consistent on all nodes).

If you are not familiar with this file, feel free to inspect and use my user-data file as a template to edit from. Take a look at the file and its comments; here are some of the user-specific configurations in my example:

  1. Change the following to suit your needs

Feel free to add your own parameters to build your own setup.
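
As a rough illustration of the kind of user-specific settings involved (the username, SSH key, and package list below are placeholders, not my actual user-data file), a minimal cloud-init user-data might look like this:

```yaml
#cloud-config
# Minimal illustrative user-data: non-root sudo user, key-based SSH login, a few packages
package_update: true
packages:
  - vim
  - curl
users:
  - name: ubuntu                       # default admin user
    groups: sudo
    sudo: ALL=(ALL) NOPASSWD:ALL       # passwordless sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... your-key-comment   # replace with your own public key
ssh_pwauth: false                      # disable password SSH logins
```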

user-data node-specific configurations (unique per machine)

Further edit the user-data file with your node-specific configurations (unique to each machine):

  1. For each node, change the node-specific details - hostname and fqdn:

     hostname: k8s0
     fqdn: k8s0.your-network.com
    
  2. For each node, change the node-specific network interface configuration, i.e. the static IP address(es):

IMPORTANT: Known Issue: Kubelet in Kubernetes supports no more than three DNS server entries in  /etc/resolv.conf on each node in the cluster.

```yaml
- path: /etc/netplan/50-cloud-init.yaml # Set Static IP
  permissions: '0644'
  content: |
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: no
          addresses:
            - 10.0.0.191/24
          gateway4: 10.0.0.1
          nameservers:
            search: [your-network.com]
            addresses: [10.0.0.2, 1.1.1.1, 8.8.8.8]
```
  3. For each node, change the entries in the hosts file, i.e. the hostname and static IP address assignments:

     - path: /etc/hosts # Hosts file
       content: |
         127.0.0.1 localhost k8s1 k8s1.your-network.com k8s-1 k8s-1.your-network.com
         ::1 localhost
         10.0.0.2 dns dns.your-network.com
         10.0.0.190 k8s0 k8s0.your-network.com
         10.0.0.191 k8s1 k8s1.your-network.com
         10.0.0.192 k8s2 k8s2.your-network.com
         10.0.0.193 k8s3 k8s3.your-network.com
    

Cloud-init lets you do a lot. You can set up network configurations and SSH public keys for immediate remote access, go the extra mile of setting up Kubernetes and k3s prerequisites, or even completely build out your system from scratch. Refer to the online documentation for details on all options.

In my setup (user-data), I added a few tools for convenience and some cosmetic items like MOTD. Most importantly, though, is enabling SSH access so we avoid plugging in a monitor and keyboard to configure each node. Adding a non-root admin user with sudo access is also recommended. I’ll perform the other configurations manually for documentation purposes, like installing k3s and other Kubernetes tools.

Fire up the Raspberry PIs and connect via SSH

We’ll connect from our local computer to each node via SSH after the devices have been powered up. If you’re using Linux, macOS, or a similar system, all you have to do is open a new terminal. Windows users can download and install PuTTY as an SSH client.

The following steps need to be done on each Raspberry Pi node. Using a tool like tmux and its Synchronize Panes feature makes this easy to do in one go. This magical tool is not covered here, but you can learn more about it here.
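
If you want to try it, here is a minimal sketch (the session name and hostnames are simply the ones used in this post):

```bash
# Start a tmux session, then split into one pane per node
# (Ctrl-b %  splits vertically, Ctrl-b "  splits horizontally)
tmux new-session -s pis

# In each pane, SSH to a different node:
#   ssh ubuntu@k8s0    ssh ubuntu@k8s1    ssh ubuntu@k8s2    ssh ubuntu@k8s3

# Mirror keystrokes to every pane in the window: press Ctrl-b, then type
#   :setw synchronize-panes on
```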

In the previous step we set the default user (“ubuntu”) password, a static IP address, a unique hostname, and static entries in the hosts file. We now assume each node is reachable via SSH and that the nodes can resolve each other by hostname, so we can skip these basic network and user configurations. A quick connectivity check is sketched below.
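
A minimal sanity check (assuming the hostnames and addresses configured earlier) is to ping each node by name, from any node or from a workstation that shares those host entries:

```bash
# Confirm every node resolves and responds by name
for host in k8s0 k8s1 k8s2 k8s3; do
  ping -c 1 "$host" > /dev/null && echo "$host is reachable" || echo "$host is NOT reachable"
done
```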

Note: If you opt to use DHCP, your router will assign an arbitrary IP address when a device joins the network. To find the address assigned to a device, check your router admin panel or use a network scanning tool such as nmap or Angry IP Scanner.

Install linux-modules-extra-raspi extra package

Through a lot of frustration, I discovered that the Ubuntu installation was missing a kernel module, which always left my nodes in STATUS: NotReady after K3s was installed. Only after installing this extra modules package did my Ubuntu Raspberry Pi nodes reach STATUS: Ready. Evidently, this only affects the Raspberry Pi install of Ubuntu.

On each Raspberry Pi, install the linux-modules-extra-raspi package, a specific requirement for k3s on Ubuntu 21.10+ on Raspberry Pis.

  1. Run the following to install the package:

     sudo apt install linux-modules-extra-raspi
    
  2. A reboot is required for the change to take effect:

     sudo reboot
    

After the reboot, we are good to go to install K3s.
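
If you want to double-check before installing K3s, one module this package provides that K3s' default Flannel VXLAN backend relies on is vxlan (my understanding of why the package matters; the exact module set may differ on your image):

```bash
# Load and verify the vxlan module used by Flannel's default VXLAN backend
sudo modprobe vxlan
lsmod | grep vxlan
```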

Setup the Master k3s Node

The Master node (k8s0) gets the initial k3s installation and will serve as the control plane for our Kubernetes cluster. I disabled the bundled Klipper service load balancer and the Traefik ingress controller in my install because I prefer non-bundled alternatives such as MetalLB and NGINX Ingress, which, in my opinion, provide more features.

  1. Install K3s with flags that make /etc/rancher/k3s/k3s.yaml world-readable and disable the bundled service load balancer (Klipper) and Traefik:

     export K3S_KUBECONFIG_MODE="644"
     export INSTALL_K3S_EXEC=" --disable servicelb --disable traefik"
     curl -sfL https://get.k3s.io | sh -
    
     [INFO] Finding release for channel stable
     [INFO] Using v1.23.6+k3s1 as release
     [INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.23.6+k3s1/sha256sum-amd64.txt
     [INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.23.6+k3s1/k3s
     [INFO] Verifying binary download
     [INFO] Installing k3s to /usr/local/bin/k3s
     [INFO] Skipping installation of SELinux RPM
     [INFO] Skipping /usr/local/bin/kubectl symlink to k3s, already exists
     [INFO] Creating /usr/local/bin/crictl symlink to k3s
     [INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
     [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
     [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
     [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
     [INFO] systemd: Creating service file /etc/systemd/system/k3s.service
     [INFO] systemd: Enabling k3s unit
     Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
     [INFO] systemd: Starting k3s
    
    
  2. Check that the k3s service installed successfully:

     systemctl status k3s
    
     ● k3s.service - Lightweight Kubernetes
    
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-05-24 10:48:12 MDT; 36s ago
     Docs: https://k3s.io
     Process: 1979 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
     Process: 1981 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
     Process: 1982 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
     Main PID: 1983 (k3s-server)
     Tasks: 88
     Memory: 803.5M
     CPU: 55.037s
     CGroup: /system.slice/k3s.service
     ├─1983 /usr/local/bin/k3s server
     ├─2004 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent>
     ├─2588 /var/lib/rancher/k3s/data/8c2b0191f6e36ec6f3cb68e2302fcc4be850c6db31ec5f8a74e4b3be403101d8/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1a9d266ca9f4ce8e62b49294ce5>
     ├─2613 /var/lib/rancher/k3s/data/8c2b0191f6e36ec6f3cb68e2302fcc4be850c6db31ec5f8a74e4b3be403101d8/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7ede00e8f0dc750dc8c92ae15ea>
     └─2643 /var/lib/rancher/k3s/data/8c2b0191f6e36ec6f3cb68e2302fcc4be850c6db31ec5f8a74e4b3be403101d8/bin/containerd-shim-runc-v2 -namespace k8s.io -id 8a589604b9b0183db0d109d1e44>
    
  3. Check that the master node is working. At this point, there is only one node:

     k3s kubectl get node
    
     NAME STATUS ROLES AGE VERSION
     k8s0 Ready control-plane,master 75s v1.23.6+k3s1
    

Connect to your K3s Kubernetes Config on the Master node

In order to manage the Kubernetes cluster, you have to let kubectl know where to find the kubeconfig. You can do this either by pointing an environment variable at the kubeconfig file or by copying it to the default path at ~/.kube/config. Here we do the latter; the environment-variable alternative is sketched after these steps.

  1. Copy k3s.yaml to ~/.kube/config

     mkdir -p ~/.kube
     cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
    
  2. k3s also installed the Kubernetes Command Line Tools, so it is now possible to start querying the cluster using kubectl as it looks for the Kubernetes config at ~/.kube/config by default

     kubectl get nodes
    
     NAME STATUS ROLES AGE VERSION
     k8s0 Ready control-plane,master 3m50s v1.23.6+k3s1
    

If Applicable: Allow ports on the firewall

If you have a firewall (ufw) enabled, we need to open the ports the worker nodes use to communicate with the master.

  1. Enable ports on ufw

     # Allow the ports used for communication between the master and the worker nodes: 6443 and 443.
     sudo ufw allow 6443/tcp
     sudo ufw allow 443/tcp
    

Prepare token for adding working nodes

You need to extract the K3S_TOKEN from the Master node that will be used to join the Worker nodes to the Master Node.

  1. On the Master node, make a note of the k3s join token:

    sudo cat /var/lib/rancher/k3s/server/node-token
    

    You will then obtain a token that looks like:

    K103cc1634360ddec824fd7afa3beea11c4e733fe6f642752ec928419bf949f29bb::server:cbba8f488b3ab6dcb61438f8d48c43a9
    

Install k3s on Worker nodes and connect them to the Master Node

The next step is to install k3s on the Kubernetes Worker nodes (k8s1, k8s2, and k8s3), providing the join token during installation. Remember to replace the Master node IP address and token with those from your deployment.

  1. Set variables for K3S_URL and K3S_TOKEN and run the k3s installation script

     # curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> sh -s
    
     export K3S_KUBECONFIG_MODE="644"
     export K3S_URL="https://10.0.0.190:6443"
     export K3S_TOKEN="K103cc1634360ddec824fd7....f642752ec928419bf949f29bb::server:cbba...."
    
     curl -sfL https://get.k3s.io | sh -
    
    
  2. We can verify that the k3s-agent service on the Worker nodes is running:

     sudo systemctl status k3s-agent
    
  3. If the service failed to start, you may need to restart and check again:

     sudo systemctl restart k3s-agent
     sudo systemctl status k3s-agent
    
     ● k3s-agent.service - Lightweight Kubernetes
    
     Loaded: loaded (/etc/systemd/system/k3s-agent.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-05-24 11:59:49 CDT; 465ms ago
     Docs: https://k3s.io
     Process: 4620 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-se>
     Process: 4622 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
     Process: 4623 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
     Main PID: 4624 (k3s-agent)
     Tasks: 8
     Memory: 18.5M
     CPU: 829ms
     CGroup: /system.slice/k3s-agent.service
     └─4624 "/usr/local/bin/k3s " "" "" "" "" ""
     May 24 11:59:49 k8s2 systemd[1]: Starting Lightweight Kubernetes...
     May 24 11:59:49 k8s2 sh[4620]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
     May 24 11:59:49 k8s2 sh[4621]: Failed to get unit file state for nm-cloud-setup.service: No su>
     May 24 11:59:49 k8s2 systemd[1]: Started Lightweight Kubernetes.
    
    
  4. To verify that our Worker nodes have successfully been added to the k3s cluster, run this kubectl command back on the Master Node where kubectl has been installed:

     kubectl get nodes -o wide
     # or 
     k3s kubectl get nodes -o wide
    	
    
     NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
    
     k8s0 Ready control-plane,master 35m v1.23.6+k3s1 10.0.0.190 <none> Ubuntu 22.04 LTS 5.15.0-1008-raspi containerd://1.5.11-k3s2
    
     k8s3 Ready <none> 26m v1.23.6+k3s1 10.0.0.193 <none> Ubuntu 22.04 LTS 5.15.0-1008-raspi containerd://1.5.11-k3s2
     k8s2 Ready <none> 26m v1.23.6+k3s1 10.0.0.192 <none> Ubuntu 22.04 LTS 5.15.0-1008-raspi containerd://1.5.11-k3s2
     k8s1 Ready <none> 26m v1.23.6+k3s1 10.0.0.191 <none> Ubuntu 22.04 LTS 5.15.0-1008-raspi containerd://1.5.11-k3s2
    
    

Connect remotely to the k3s cluster from your local machine

Using kubectl, we can manage our Kubernetes cluster remotely from our local machine. If you have not done so already, install kubectl by following the instructions in the Kubernetes documentation, e.g. Install kubectl binary with curl on Linux.

The following steps assume you have installed kubectl on your local machine and are already managing other Kubernetes clusters, so we will merge the new cluster into the existing local config (e.g. ~/.kube/config).

By default, the k3s cluster is called "default", and if you already have a cluster named "default" in your local kubeconfig, we won't be able to merge the new k3s config with the existing configuration. There are several ways to resolve this; the following approach deletes the existing, conflicting "default" cluster configuration before merging in the new k3s config:

  1. Add the new Kubernetes cluster config to the local machine. In this example, I replace localhost (127.0.0.1) with the remote address of the master node (10.0.0.190) and replace my "default" cluster by first deleting the existing cluster config:

    
     # Copy Kube config from master node to local machine
     scp ubuntu@10.0.0.190:~/.kube/config otherconfig
    	
     # Replace 127.0.0.1 with external IP or Hostname
     sed -i '' 's/127\.0\.0\.1/10\.0\.0\.190/g' otherconfig
    	
     # Backup current kube config
     cp ~/.kube/config ~/.kube/config_BACKUP
    
     # Delete potential conflicting "default" cluster config
     kubectl config delete-cluster default
    
     # Merge config
     konfig=$(KUBECONFIG=~/.kube/config:otherconfig kubectl config view --flatten)
     echo "$konfig" > ~/.kube/config
    
    
  2. If you get the warning "WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/xxx/.kube/config", you can set the correct permissions:

     chmod go-r ~/.kube/config
    
  3. Change kubectl cluster context to k3s (“default”)

    
     # Get a list of Kubernetes clusters in your local Kube config
    
     kubectl config get-clusters
    
    	
     NAME
     default
     do-sfo2-doks-armand-sfo2
     docker-desktop
     arn:aws:eks:us-west-2:832984185795:cluster/eks-armand-uswest2
    
     # Set context to our k3s cluster "default"
     kubectl config use-context default
    
     # Check which context you are currently targeting
     kubectl config current-context
    
    
     # Get Nodes in the target Kubernetes cluster
    
     kubectl get nodes
    
     kubectl get nodes
     NAME      STATUS   ROLES                  AGE   VERSION
     k8s0   Ready    control-plane,master   26h   v1.23.6+k3s1
     k8s1      Ready    <none>                 26h   v1.23.6+k3s1
     k8s2      Ready    <none>                 26h   v1.23.6+k3s1
     k8s3      Ready    <none>                 26h   v1.23.6+k3s1
    
    

Install MetalLB

MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. It hooks into your Kubernetes cluster and provides load balancing, allowing you to create Kubernetes Services of type "LoadBalancer" in your own cluster, just as on a cloud provider's managed Kubernetes platform.

When you create a Kubernetes Service of type LoadBalancer, MetalLB dedicates a virtual IP from an address pool to act as the load-balancer address for that application.
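
For illustration only (the name, selector, and ports below are placeholders, not part of the manifest used later in this post), such a Service has this general shape:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb            # illustrative name
spec:
  type: LoadBalancer         # MetalLB assigns an external IP from its address pool
  selector:
    app: my-app              # match your application's pod labels
  ports:
    - port: 80
      targetPort: 8080
```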

This step assumes Helm is installed on your local machine. If you do not have Helm installed, you can do so now by following the install instructions: Install Helm.

To install MetalLB from Helm, we simply need to run a helm install command with a few flags and values.
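
As a rough sketch rather than my exact values: with the MetalLB Helm chart available at the time (0.12.x), the address pool could be supplied inline via the chart's configInline value. The 10.0.0.230-10.0.0.250 range below is an example matching my LAN and the external IP seen in the test further down; note that newer MetalLB releases (0.13+) configure pools with IPAddressPool custom resources instead:

```bash
# Add the MetalLB chart repo and install it into its own namespace
helm repo add metallb https://metallb.github.io/metallb
helm repo update

# Illustrative values: a single layer2 address pool on the local network
cat <<'EOF' > metallb-values.yaml
configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        - 10.0.0.230-10.0.0.250
EOF

helm install metallb metallb/metallb \
  --namespace metallb-system --create-namespace \
  -f metallb-values.yaml
```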

Time to test: Deploy an example application

If you have got this far without issues then we are ready to deploy our first application!

I have provided a yaml manifest here that will deploy a sample application and expose it outside of your Kubernetes cluster with MetalLB.

  1. Apply the yaml manifest using kubectl

     kubectl apply -f the-moon-all-in-one.yaml
    
  2. We can see everything being created using the watch command in combination with kubectl

     watch kubectl get all -n solar-system -o wide
    
     NAME                        READY   STATUS              RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
     pod/moon-856d78f799-874lp   0/1     ContainerCreating   0          3s    <none>       k8s2   <none>           <none>
     pod/moon-856d78f799-pqqb5   1/1     Running             0          3s    10.42.4.14   k8s2   <none>           <none>
     pod/moon-856d78f799-zvc92   1/1     Running             0          3s    10.42.3.11   k8s3   <none>           <none>
     pod/moon-856d78f799-8gbtp   1/1     Running             0          3s    10.42.1.12   k8s1   <none>           <none>
    
     NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
     service/moon-lb-svc   LoadBalancer   10.43.116.33    10.0.0.230    80:31555/TCP   3s    app=moon
     service/moon-svc      ClusterIP      10.43.158.140   <none>        80/TCP         3s    app=moon
    
     NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                SELECTOR
     deployment.apps/moon   3/4     4            3           4s    moon         armsultan/solar-system:moon-nonroot   app=moon
    
     NAME                              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                SELECTOR
     replicaset.apps/moon-856d78f799   4         4         3       4s    moon         armsultan/solar-system:moon-nonroot   app=moon,pod-template-hash=856d78f799
    
    

In the example output above, my metallb service LoadBalancer has exposed my “moon” application on the IP address 10.0.0.230

…and that, my friend, is the moon. (Screenshot: the moon)

  1. If you would like to delete this deployment, we can simply delete everything in the namespace "solar-system":

     kubectl delete namespace solar-system
    

We have our K3s cluster up and running!
