Rancher 2.0, RKE, and some Raspberry Pi 3s

Kubernetes and Arm

Getting rke and Rancher set up to run Kubernetes on arm is interesting. There is no official support from Rancher yet, although there is interest and some work has been done towards it. This is my attempt at getting a cluster of 3 Pis (2 3Bs and 1 3B+) provisioned and registered to a Rancher 2 server.

Prep

I've successfully completed this with both Hypriot OS 1.9.0 and the arm64 builds from https://github.com/DieterReuter/image-builder-rpi64. Both times I used the same basic cloud-init setup (included at the end of this document).

NOTE: I have since abandoned support for arm and now focus only on arm64; a few packages don't have 32-bit binaries available.

Docker 18.05 Bug

With Hypriot 1.9.0, Docker 18.05 is installed. Unfortunately, it has a rather annoying bug that causes bind mounts to fail. The fix is a quick fstab edit:

/var/lib/rancher /var/lib/rancher none defaults,bind 0 0
/var/lib/docker /var/lib/docker none defaults,bind 0 0
/var/lib/kubelet /var/lib/kubelet none defaults,bind 0 0
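
These entries bind-mount each directory onto itself, which turns the paths into their own mount points and sidesteps the bug. A minimal way to apply them without a reboot (my sketch, assuming the directories may not all exist yet):

sudo mkdir -p /var/lib/rancher /var/lib/docker /var/lib/kubelet
sudo mount -a   # picks up the new fstab entries immediately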

Hypriot arm64 images

Edit /boot/cmdline.txt and add cgroup_enable=memory in order for kubelet to start.
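
A one-liner for that edit (my sketch; cmdline.txt must stay a single line, so append to it rather than adding a new line), followed by a reboot:

grep -q cgroup_enable=memory /boot/cmdline.txt || \
  sudo sed -i 's/$/ cgroup_enable=memory/' /boot/cmdline.txt
sudo reboot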

Building Arm Images

I had to build images for both rke-tools and flannel-cni. This was as simple as cloning the repos and building them. On a Pi:

# Install Dapper, needed for building some images
sudo curl -sL https://releases.rancher.com/dapper/latest/dapper-`uname -s`-`uname -m` -o /usr/local/bin/dapper;
sudo chmod +x /usr/local/bin/dapper

export REPO=ags131
export ARCH=arm64
export DAPPER_HOST_ARCH=arm # the rancher repos aren't currently configured to build `arm64` binaries, so we use `arm` instead
docker login # Login to DockerHub so we can push

git clone https://github.com/coreos/flannel-cni
cd flannel-cni
git checkout v0.3.0
sed -i 's/amd64/'$ARCH'/' scripts/build-image.sh
sed -i 's_quay.io/coreos_'$REPO'_' scripts/build-image.sh
scripts/build-image.sh
docker tag $REPO/flannel-cni:v0.3.0-dirty $REPO/flannel-cni:v0.3.0-$ARCH
docker push $REPO/flannel-cni:v0.3.0-$ARCH
cd ..

git clone https://github.com/rancher/rke-tools.git
cd rke-tools
git checkout v0.1.10
sed -i 's/amd64/'$ARCH'/' package/Dockerfile
sed -i 's|https://get.docker.com/builds/Linux/x86_64/docker-1.12.3.tgz|https://download.docker.com/linux/static/stable/aarch64/docker-17.09.1-ce.tgz|' package/Dockerfile
sed -i 's/3\.0\.17/3.3.8/' package/Dockerfile
dapper
docker tag $REPO/rke-tools:dev $REPO/rke-tools:v0.1.10_$ARCH
docker push $REPO/rke-tools:v0.1.10_$ARCH
cd ..

# Skip this one if not importing into Rancher 2
git clone https://github.com/rancher/rancher
cd rancher
git checkout v2.0.4
dapper
docker push $REPO/rancher-agent:v2.0.4_arm
cd ..
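
As an optional sanity check (mine, not part of the original steps), confirm the images you just pushed really are arm64 before referencing them in cluster.yml:

docker image inspect --format '{{.Os}}/{{.Architecture}}' $REPO/flannel-cni:v0.3.0-$ARCH
docker image inspect --format '{{.Os}}/{{.Architecture}}' $REPO/rke-tools:v0.1.10_$ARCH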

Provision Cluster

I used a basic cluster.yml with rke for provisioning; the full file is included at the end of this document, and a minimal rke up run is sketched after the list below. You can use kubeadm, but why, when rke is so much nicer? :)

The important parts are:

  • ignore_docker_version: true prevents rke from complaining about the Docker version; I did not want to deal with downgrading Docker in Hypriot.
  • network plugin: flannel, since it already has arm images; AFAIK, canal/calico does not.
  • system_images: this configures all the system images for the cluster; since we are running arm, we need arm images. If using an arm64 OS, change all the images except rke-tools from arm to arm64.
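
A minimal provisioning run looks roughly like this, assuming the rke binary is installed on a workstation with SSH access to the nodes listed in cluster.yml (the exact invocation is my sketch, not from the original write-up):

rke up --config cluster.yml

# rke writes the kubeconfig next to cluster.yml when it finishes
export KUBECONFIG=$PWD/kube_config_cluster.yml
kubectl get nodes   # all nodes should eventually report Ready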

Register to Rancher

I run a central Rancher server for my clusters, so I registered this cluster on it.

In Rancher, create a new imported cluster. Don't run the kubectl apply line directly; we want to edit the yaml first. Save the yaml and change both instances of image to your arm image ($REPO/rancher-agent:v2.0.4_$ARCH from the earlier steps).
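
A sketch of that edit, assuming the registration yaml was saved as rancher-agent.yaml and that the stock image reference is rancher/rancher-agent:v2.0.4 (both the filename and the stock tag are my assumptions; check what your Rancher instance actually emits), reusing $REPO and $ARCH from the build step:

# swap the stock amd64 agent image for the arm build everywhere it appears
sed -i "s|rancher/rancher-agent:v2.0.4|$REPO/rancher-agent:v2.0.4_$ARCH|g" rancher-agent.yaml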

Here is where it gets tricky: if we apply the yaml as-is, it will deploy and register with Rancher correctly; however, Rancher will then attempt to 'upgrade' it back to the original amd64 images.

Since we don't want that, we need to edit the cluster's settings through the Rancher API:

  1. Open your cluster in Rancher (it should still be in the unavailable/provisioning state).
  2. Go to the cluster tab, and on the far-right menu select View in API.
  3. Click Edit.
  4. In each of the large empty textboxes for the environment configs, enter {}.
  5. In desiredAgentImage, insert your image name from earlier.
  6. Select Show Request, then Send Request.
  7. Back in the shell, run kubectl apply -f <filename> using the edited yaml (a quick check is sketched below).
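
Once the apply has run, you can check that the agents are actually using your image rather than the stock one (the cattle-system namespace and cattle-cluster-agent name are the Rancher 2.0 defaults for imported clusters; adjust if yours differ):

kubectl -n cattle-system get pods -o wide
kubectl -n cattle-system get deploy cattle-cluster-agent \
  -o jsonpath='{.spec.template.spec.containers[0].image}'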

After a few moments, Rancher should take over and be ready to go. You may have to go into the projects/namespaces settings and move the default namespace into the Default project.

I do not know if this will survive a Rancher upgrade; when 2.0.5 or higher comes out, you will need to build a newer agent image and possibly update the API again.

cluster.yml

nodes:
- address: pi-cluster-1
  user: adam
  role:
  - controlplane
  - etcd
  - worker
  labels:
    pi-model: 3bplus
- address: pi-cluster-2
  user: adam
  role:
  - worker
  labels:
    pi-model: 3b
- address: pi-cluster-3
  user: adam
  role:
  - worker
  labels:
    pi-model: 3b
- address: pi-cluster-4
  user: adam
  role:
  - worker
  labels:
    pi-model: 3b
authentication:
  strategy: x509
  sans: []
ignore_docker_version: true
network:
  plugin: flannel
system_images:
  kubernetes: k8s.gcr.io/hyperkube-arm64:v1.10.5 # rancher/hyperkube:v1.10.3-rancher2
  etcd: k8s.gcr.io/etcd-arm64:3.1.17 # rancher/coreos-etcd:v3.1.12
  alpine: ags131/rke-tools:v0.1.10_arm64
  nginx_proxy: ags131/rke-tools:v0.1.10_arm64
  cert_downloader: ags131/rke-tools:v0.1.10_arm64
  kubernetes_services_sidecar: ags131/rke-tools:v0.1.10_arm64
  kubedns: k8s.gcr.io/k8s-dns-kube-dns-arm64:1.14.8
  dnsmasq: k8s.gcr.io/k8s-dns-dnsmasq-nanny-arm64:1.14.8
  kubedns_sidecar: k8s.gcr.io/k8s-dns-sidecar-arm64:1.14.8
  kubedns_autoscaler: k8s.gcr.io/cluster-proportional-autoscaler-arm64:1.0.0
  pod_infra_container: k8s.gcr.io/pause:3.1
  # Flannel Networking Options
  flannel: quay.io/coreos/flannel:v0.9.1-arm64
  flannel_cni: ags131/flannel-cni:v0.3.0-arm64
  # Ingress Options
  ingress: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.10.2
  ingress_backend: k8s.gcr.io/defaultbackend-arm64:1.4
addons: |-
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - 192.168.0.200-192.168.0.210
addons_include:
- https://raw.githubusercontent.com/google/metallb/v0.6.2/manifests/metallb.yaml
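
The addons section above preloads a MetalLB config with a small layer2 address pool, and addons_include pulls in the MetalLB v0.6.2 manifest itself. Just to illustrate what that buys you (my example, not part of the original setup), any Service of type LoadBalancer should then get an address from 192.168.0.200-210:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: lb-demo           # hypothetical demo service
spec:
  type: LoadBalancer      # MetalLB hands out an IP from the default pool
  selector:
    app: lb-demo
  ports:
  - port: 80
    targetPort: 80
EOF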

cloud-init user-data

#cloud-config
hostname: pi-cluster-1
manage_etc_hosts: true
users:
- name: adam
  sudo: ALL=(ALL) NOPASSWD:ALL
  shell: /bin/zsh
  groups: users,docker,video
  ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDlPf3egS4avuZs9+BCqO7mW1/uk1UOIBLX5oj9qtO3IHbHAJCXCAKcRmZPc6uGQpv2HZjcpkSnr1pxGT3mubcc8/tFR6JO3ZeTMfA6UcrOQjPJXv+/5w8sopdPjFETnnsaXxBKkjKh7aswiYzYoiXTYkUTuSIvh50uAs2HI+C18xYkKSMLOF+G6CQTMRFD+ZaqAZW1M0/L4gWvA/A2r6kzJzXrTLQTqaJ62KfuRbVL5YqYziO/cuXxbvnq2qP6bfk/6i+K7VnC7DZNu17XIYjU4ajy5YWBns7GksE5MopMUyOhLFuGRYGgNtqf1q621fcz+7b13OfM4hLCCU/N7oVB adam@IMS-ADAM
package_update: true
package_upgrade: true
package_reboot_if_required: true
packages:
- ntp
- zsh
- htop
locale: "en_US.UTF-8"
timezone: "America/Chicago"
write_files:
- path: "/etc/docker/daemon.json"
  owner: "root:root"
  content: |
    {
      "labels": [ "os=linux", "arch=arm64" ],
      "experimental": true
    }
runcmd:
- [ systemctl, restart, avahi-daemon ]
- [ systemctl, restart, docker ]