How to build a CI/CD pipeline for your Raspberry Pi 4

Because why not?

Flora Thiebaut
6 min read · Sep 30, 2021
Close-up of my Raspberry Pi 4

So I recently got my hands on a Raspberry Pi 4 kit to play around with on rainy days. This is my first project with it and the idea is quite simple: can I set up a simple CI/CD pipeline in GitLab which deploys a web app to my Pi 4?

Disclaimer: This is not a step-by-step guide. Instead, I tried to detail here the tricky parts I came across while working on this project and how I solved them. Should you want to do something similar with your own Pi, I hope that this post makes it much easier for you than it was for me!

1. Setting up the Raspberry Pi 4

Let’s get started with our little computing box and see how I set it up.

My Raspberry Pi 4 kit

First, I picked Ubuntu Server 20.04 as the OS for this experiment and then installed microk8s to get a single-node Kubernetes cluster.
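
A minimal sketch of that setup, assuming you install microk8s from snap on Ubuntu Server (the channel below is an assumption; pick whichever Kubernetes version you want):

$ sudo snap install microk8s --classic --channel=1.21/stable
$ sudo usermod -a -G microk8s $USER
$ microk8s enable dns helm3 ingress metrics-server rbac storage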

Here are the add-ons running on my microk8s:

$ microk8s status                          
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dns # CoreDNS
ha-cluster # Configure high availability on the current node
helm3 # Helm 3 - Kubernetes package manager
ingress # Ingress controller for external access
metrics-server # K8s Metrics Server for API access to service metrics
rbac # Role-Based Access Control for authorisation
storage # Storage class; allocates storage from host directory
disabled:
dashboard # The Kubernetes dashboard
helm # Helm 2 - the package manager for Kubernetes
host-access # Allow Pods connecting to Host services smoothly
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
metallb # Loadbalancer for your Kubernetes cluster
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
registry # Private image registry exposed on localhost:32000
traefik # traefik Ingress controller for external access

Since I’m running a single-node microk8s cluster, I’m exposing the ports of the Nginx ingress controller directly on the host network.
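
On microk8s, the ingress addon runs Nginx as a DaemonSet in the ingress namespace, so a quick way to check how ports 80/443 end up on the host is to look at that DaemonSet (the resource name below is what recent microk8s releases use, so treat it as an assumption):

$ microk8s.kubectl -n ingress get daemonset
$ microk8s.kubectl -n ingress get daemonset nginx-ingress-microk8s-controller -o yaml | grep -E 'hostNetwork|hostPort'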

At this point, we can already see the default backend page from Nginx:

$ curl http://localhost 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

2. The web app we’re going to deploy

Getting a fancy app deployed is not the focus here, so I’m just going with the starter Next.js app for this project.
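
If you want to start from scratch instead of cloning, the Next.js starter is generated with create-next-app (a sketch; the exact version and flags don’t matter much here):

$ npx create-next-app candybox-playground
$ cd candybox-playground
$ npm run dev    # local sanity check on http://localhost:3000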

Here is the link to the repository so you can follow along: https://gitlab.com/leafty/candybox-playground.

Now we come across the first tricky part: getting our pipeline to build a Docker 🐳️ image suitable for the Pi’s arm64 CPU.

To do so, we can make use of BuildKit:

# From .gitlab/build.sh
docker buildx build \
  --platform linux/arm64/v8 \
  --cache-from "type=registry,ref=${image_previous}" \
  --cache-from "type=registry,ref=${image_latest}" \
  --cache-from "type=registry,ref=${image_main}" \
  -f "${DOCKERFILE_PATH}" \
  $AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS \
  --build-arg APP_VERSION="${CI_COMMIT_SHA}" \
  --tag "${image_tagged}" \
  --tag "${image_latest}" \
  --progress=plain \
  --push \
  .
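
One caveat: if the build job runs on GitLab’s shared amd64 runners rather than on the Pi, buildx also needs arm64 emulation and a builder instance before the command above works. A minimal sketch (the binfmt image is the commonly used one and the builder name is arbitrary):

# Register arm64 emulation and create a buildx builder, before running build.sh
$ docker run --rm --privileged tonistiigi/binfmt --install arm64
$ docker buildx create --name candybox-builder --use
$ docker buildx inspect --bootstrap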

At this point, it’s probably a good idea to check if the Docker image works as intended:

$ docker login registry.gitlab.com
$ docker pull registry.gitlab.com/leafty/candybox-playground/main
$ CONTAINER_ID=$(docker run --rm -d -p 5000:5000 registry.gitlab.com/leafty/candybox-playground/main)
$ curl -v http://localhost:5000/
# We get HTTP 200 and our index page
$ docker kill "$CONTAINER_ID"

3. It’s pipeline time!

Now we’re tackling the main piece of this project. First, we need to figure out how the GitLab pipeline will interact with the microk8s cluster hosted on the Raspberry Pi 4.

The answer is relatively simple: we’ll deploy a GitLab runner instance in the cluster, and it will run the deployment jobs of our pipeline.

Deploying the runner

Here we start by following GitLab’s documentation to deploy the runner on our microk8s cluster with Helm. Our Helm values read something like this:

# values.yaml
gitlabUrl: "https://gitlab.com/"
runnerRegistrationToken: REGISTRATION_TOKEN_HERE
concurrent: 1
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "alpine:latest"
        helper_image = "registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:arm64-${CI_RUNNER_REVISION}"
  name: "candybox-runner-1"
  tags: "candybox"
  runUntagged: false
rbac:
  create: true

Let’s deploy this configuration:

$ microk8s.kubectl create namespace gitlab-runner
$ microk8s.helm3 repo add gitlab https://charts.gitlab.io
$ microk8s.helm3 upgrade --install --namespace gitlab-runner -f values.yaml gitlab-runner gitlab/gitlab-runner
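
Once the chart is installed, it’s worth a quick check that the runner pod is up and that the runner shows up as online in the project’s CI/CD settings on GitLab:

$ microk8s.kubectl -n gitlab-runner get pods
# Expect a single gitlab-runner pod in Running state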

So how is the runner going to be able to interact with microk8s and deploy our app? The first piece is that containers running in Kubernetes can use the in-cluster configuration and are already configured to talk to the k8s API. In other words, invoking `helm` from our deployment job running inside the cluster will interact with the cluster itself.
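
To see what that means in practice, a throwaway pod can query the API using only its mounted service account token, with no kubeconfig at all. This is just an illustration (the image and pod name are arbitrary); with RBAC enabled it will actually be denied, which is exactly the gap we close next:

$ microk8s.kubectl -n gitlab-runner run api-check --rm -it --restart=Never \
    --image=bitnami/kubectl -- get pods
# Reaches the API through the in-cluster config, but gets "Forbidden" until we grant permissions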

The only missing piece for this to work is to make sure that the deployment jobs will have the permission to manage the resources on the cluster. Let’s update the runner to enable this.

$ microk8s.kubectl -n gitlab-runner create serviceaccount gitlab-deployer

# clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab-deployer
    namespace: gitlab-runner

$ microk8s.kubectl apply -f clusterrolebinding.yaml
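
To confirm the binding took effect before wiring it into the runner, kubectl can impersonate the service account and query a permission (a quick optional sanity check):

$ microk8s.kubectl auth can-i create deployments \
    --as=system:serviceaccount:gitlab-runner:gitlab-deployer -n default
# Should now answer "yes"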

Lastly, edit values.yaml and update the runner deployment:

# values.yaml
gitlabUrl: "https://gitlab.com/"
runnerRegistrationToken: REGISTRATION_TOKEN_HERE
concurrent: 1
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "alpine:latest"
        helper_image = "registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:arm64-${CI_RUNNER_REVISION}"
        service_account = "gitlab-deployer"
  name: "candybox-runner-1"
  tags: "candybox"
  runUntagged: false
rbac:
  create: true

$ microk8s.helm3 upgrade --namespace gitlab-runner -f values.yaml gitlab-runner gitlab/gitlab-runner

With this configuration, the jobs spawned by our GitLab runner will run with cluster-admin permissions, letting them manage all resources in the cluster.

The last piece: the deployment job

Our GitLab runner is ready for some action, so let’s write that deployment job:

# .gitlab-ci.yml
.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:${AUTO_DEPLOY_IMAGE_VERSION}"
  dependencies: []
  before_script:
    - export KUBE_INGRESS_BASE_DOMAIN="candybox"
    - export KUBE_NAMESPACE="$CI_PROJECT_NAME-$CI_PROJECT_ID-$CI_ENVIRONMENT_SLUG"
    - echo "$KUBE_INGRESS_BASE_DOMAIN"
    - echo "$KUBE_NAMESPACE"

.production: &production_template
  extends: .auto-deploy
  stage: production
  tags:
    - candybox
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://candybox
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always

production_manual:
  <<: *production_template
  allow_failure: false
  rules:
    - if: '$CI_DEPLOY_FREEZE != null'
      when: never
    - if: '$INCREMENTAL_ROLLOUT_ENABLED'
      when: never
    - if: '$INCREMENTAL_ROLLOUT_MODE'
      when: never
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual

As can be seen in the job definition, we use the candybox tag to make sure the production_manual job runs on our runner inside the microk8s cluster.

This job deploys a Helm chart which consists of a deployment, a service and an ingress to expose our web app.

Important note here: running a microk8s cluster is quite a stretch for the Raspberry Pi (even my 8 GiB RPi 4), and the deployment jobs frequently fail because the Kubernetes API becomes unresponsive under load.

Once the deployment job has run successfully, we can inspect and hit our web app:

$ microk8s.kubectl get namespaces
# Find the deployment namespace, it should be similar to `candybox-playground-1234567-production`
$ microk8s.kubectl -n candybox-playground-29546654-production get all
$ curl -v http://localhost -H "Host: candybox"
# We get HTTP 200 and our index page
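
To reach the app from another machine on the LAN instead of from the Pi itself, the candybox hostname just needs to resolve to the Pi’s address, for example with curl’s --resolve flag (the IP below is an assumption for your network) or a matching /etc/hosts entry:

$ curl -v http://candybox/ --resolve candybox:80:192.168.1.50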

Conclusion

Success! We managed to create a basic CI/CD pipeline which we can use to build and deploy a simple web app to the Raspberry Pi 4.

This was an interesting project: it shows that arm64 is a viable platform, even if it still needs a bit more love before we can use it completely seamlessly.

I’m looking forward to starting new projects to build for my little compute box, so stay tuned for more!
