Introduction
If you’re using an older version of k3s and are eager to upgrade to a new stable release, this guide is hopefully useful for you. In this blog, we’ll walk through the automated method for upgrading k3s and contrast it with the manual approach.
Prerequisites
- A running k3s cluster
- [kubectl](https://kubernetes.io/docs/reference/kubectl/) installed, with the KUBECONFIG environment variable pointing to your Kubernetes config file
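If KUBECONFIG isn’t set yet, something like the following works; the path below is the default location where k3s writes its kubeconfig on the server node, so adjust it if your setup differs:
# Point kubectl at the k3s kubeconfig (default server-node path; adjust if yours differs)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# Confirm kubectl can reach the cluster
kubectl cluster-info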
First, connect to your k3s cluster and list all the Kubernetes nodes.
# Ensure KUBECONFIG env is pointing to your kubernetes config file
$ kubectl get nodes
NAME      STATUS   ROLES                  AGE    VERSION
k8s2      Ready    <none>                 240d   v1.29.6+k3s2
k8s3      Ready    <none>                 240d   v1.29.6+k3s2
k8s1      Ready    <none>                 240d   v1.29.6+k3s2
k8s4      Ready    <none>                 240d   v1.29.6+k3s2
beelink   Ready    control-plane,master   240d   v1.29.6+k3s2
In my cluster, you can see all the nodes are running v1.29.6+k3s2.
Install the system-upgrade-controller
See the k3s docs for more details.
First, we must deploy the system-upgrade-controller and its associated Custom Resource Definitions (CRDs) into the cluster. This process sets up a service account, a ClusterRoleBinding, and a ConfigMap. A few things to keep in mind:
- Create a namespace called system-upgrade, as the plans must be created in the same namespace where the controller is deployed.
- Configure and customize the controller via its ConfigMap (you can inspect it as shown after the commands below). Remember that changes only take effect when you redeploy the controller.
- Finally, to apply the plans, ensure the system-upgrade-controller CRD is deployed.
To get started, use the following commands:
# Create the designated Namespace
kubectl create ns system-upgrade
# Deploy the system-upgrade-controller
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
# Deploy the CRD
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
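Before moving on, it’s worth confirming that the Plan CRD registered and taking a quick look at the controller’s ConfigMap. Note that the ConfigMap name used below, default-controller-env, is taken from the upstream manifest and may differ between releases:
# Confirm the Plan CRD is registered
kubectl get crd plans.upgrade.cattle.io
# Inspect the controller's ConfigMap (name assumed from the upstream manifest; adjust if yours differs)
kubectl -n system-upgrade get configmap default-controller-env -o yaml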
You’ll notice the upgrade controller starting in the system-upgrade
namespace, as shown below.
kubectl get all -n system-upgrade
NAME                                             READY   STATUS    RESTARTS   AGE
pod/system-upgrade-controller-66c9f76ffc-m8sx4   1/1     Running   0          2m5s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/system-upgrade-controller   1/1     1            1           2m5s

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/system-upgrade-controller-66c9f76ffc   1         1         1       2m5s
Now the upgrade controller is deployed and ready on our k3s cluster.
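If the controller pod isn’t coming up, its logs are the first place to look:
# Tail the controller logs to make sure it started cleanly
kubectl -n system-upgrade logs deployment/system-upgrade-controller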
Configure Plans
The Plan defines upgrade policies and requirements. The controller schedules upgrades by monitoring these plans and selecting nodes to run upgrade jobs on. So let’s create two plans: one for the server (master) nodes and one for the agent (worker) nodes. The following two example plans will upgrade your cluster to k3s v1.30.2+k3s2. (See the k3s releases page for all releases and their release notes.)
Here’s an example of my plan for upgrading from 1.29 to 1.30, saved as plan.yaml:
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
    version: v1.30.2+k3s2
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  prepare:
    args:
      - prepare
      - server-plan
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
    version: v1.30.2+k3s2
There are a few key fields to pay attention to in the above plans:
- We have two plans, server-plan and agent-plan, which define upgrade policies for the two types of nodes in k3s.
- The concurrency field controls how many nodes can be upgraded simultaneously.
- The nodeSelector field lets us specify which nodes a particular plan targets. Since I simply want to upgrade every node, I use the existence of the node-role.kubernetes.io/control-plane label to target the master nodes and the absence of that label to target the worker nodes. You can check which nodes each selector matches, as shown after this list.
- In the agent-plan, the prepare field ensures that its upgrade jobs wait for the server-plan to complete before they execute. This guarantees that all master nodes are upgraded before the worker nodes.
- The image: rancher/k3s-upgrade field specifies the k3s-upgrade image, which performs the upgrade on behalf of the System Upgrade Controller by replacing the k3s binary with the new version and restarting k3s.
- The version field specifies the k3s version to upgrade to. In our case, it’s set to v1.30.2+k3s2.
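To sanity-check which nodes each plan will pick up before applying anything, you can query the same label the selectors use:
# Nodes that server-plan will match (control-plane label set to "true")
kubectl get nodes -l 'node-role.kubernetes.io/control-plane=true'
# Nodes that agent-plan will match (control-plane label absent)
kubectl get nodes -l '!node-role.kubernetes.io/control-plane'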
Feel free to dive into these details and modify this for your own projects.
Apply plans
Now apply the plans defined above using the kubectl apply command.
kubectl apply -f plan.yaml
plan.upgrade.cattle.io/server-plan created
plan.upgrade.cattle.io/agent-plan created
As soon as you apply the plans, the controller will detect them and begin the upgrade process. Likewise, if you update an existing plan, the controller re-evaluates it and determines whether another upgrade is needed.
Verify the upgrade
You can monitor the progress of an upgrade by viewing the plan and jobs via kubectl:
kubectl -n system-upgrade get plans -o yaml
kubectl -n system-upgrade get jobs -o yaml
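If you want a quicker, human-readable summary than the full YAML output, a plain get of the plans and their upgrade jobs also works:
# Compact overview of plans and the jobs the controller has created
kubectl -n system-upgrade get plans,jobs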
Once the upgrade is done, verify the newly installed version on all the nodes using kubectl:
# Watch the upgrades roll out
$ kubectl get nodes -w
NAME      STATUS   ROLES                  AGE    VERSION
beelink   Ready    control-plane,master   240d   v1.30.2+k3s2
k8s1      Ready    <none>                 240d   v1.30.2+k3s2
k8s2      Ready    <none>                 240d   v1.30.2+k3s2
k8s3      Ready    <none>                 240d   v1.30.2+k3s2
k8s4      Ready    <none>                 240d   v1.30.2+k3s2
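As a final sanity check after the rollout, I like to make sure nothing got stuck in a non-running state. This is just a rough filter, so adapt it to your own workloads:
# Rough post-upgrade health check: list pods that are not Running or Completed
kubectl get pods -A --no-headers | grep -vE 'Running|Completed'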
Conclusion
For my homelab, the k3s-upgrade image and the system-upgrade-controller make upgrading k3s much simpler and more efficient than the manual method. They keep downtime minimal by upgrading nodes in a rolling fashion, which saves time and keeps your workloads running smoothly throughout the upgrade process.