Getting rke and Rancher set up to run Kubernetes on ARM is interesting. There is no official support from Rancher yet, although there is interest and some work has been done toward it. This is my attempt at getting a cluster of 3 Pis (two 3Bs and one 3B+) provisioned and registered to a Rancher 2 server.
I've successfully completed this with both Hypriot OS 1.9.0 and the arm64 builds from https://github.com/DieterReuter/image-builder-rpi64. Both times I used the same basic cloud-init setup.
NOTE: I have since abandoned support for arm and now focus only on arm64; a few required packages don't have 32-bit binaries available.
With Hypriot 1.9.0, Docker 18.05 is installed. Unfortunately, there is a rather annoying bug that causes bind mounts to fail. Solution: a quick /etc/fstab edit:
/var/lib/rancher /var/lib/rancher none defaults,bind 0 0
/var/lib/docker /var/lib/docker none defaults,bind 0 0
/var/lib/kubelet /var/lib/kubelet none defaults,bind 0 0
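The directories need to exist before the bind mounts can be applied; a minimal sketch (assuming nothing is stored in them yet):
# Create the mount points if they don't already exist, then apply everything in fstab
sudo mkdir -p /var/lib/rancher /var/lib/docker /var/lib/kubelet
sudo mount -a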
Edit /boot/cmdline.txt and add cgroup_enable=memory, otherwise kubelet will not start.
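cmdline.txt is a single line, so the flag has to be appended to the existing line rather than added on a new one. A sketch that does this in place (assumes the flag isn't already present):
# Append cgroup_enable=memory to the kernel command line, then reboot for it to take effect
sudo sed -i 's/$/ cgroup_enable=memory/' /boot/cmdline.txt
sudo reboot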
I had to build images for both rke-tools and flannel-cni. This was as simple as cloning the repos and building them. On a Pi:
# Install Dapper, needed for building some images
sudo curl -sL https://releases.rancher.com/dapper/latest/dapper-`uname -s`-`uname -m` -o /usr/local/bin/dapper;
sudo chmod +x /usr/local/bin/dapper
export REPO=ags131
export ARCH=arm64
export DAPPER_HOST_ARCH=arm # the rancher repos aren't currently configured to build `arm64` binaries, so we use `arm` instead
docker login # Login to DockerHub so we can push
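# Build flannel-cni for arm: patch the build script for our arch and repo, then build, tag and push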
git clone https://github.com/coreos/flannel-cni
cd flannel-cni
git checkout v0.3.0
sed -i 's/amd64/'$ARCH'/' scripts/build-image.sh
sed -i 's_quay.io/coreos_'$REPO'_' scripts/build-image.sh # retag under our own repo
scripts/build-image.sh
docker tag $REPO/flannel-cni:v0.3.0-dirty $REPO/flannel-cni:v0.3.0-$ARCH
docker push $REPO/flannel-cni:v0.3.0-$ARCH
cd ..
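# Build rke-tools for arm: patch the Dockerfile for our arch and an aarch64 Docker binary, then build with dapper and push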
git clone https://github.com/rancher/rke-tools.git
cd rke-tools
git checkout v0.1.10
sed -i 's/amd64/'$ARCH'/' package/Dockerfile
sed -i 's|https://get.docker.com/builds/Linux/x86_64/docker-1.12.3.tgz|https://download.docker.com/linux/static/stable/aarch64/docker-17.09.1-ce.tgz|' package/Dockerfile
sed -i 's/3\.0\.17/3.3.8/' package/Dockerfile
dapper
docker tag $REPO/rke-tools:dev $REPO/rke-tools:v0.1.10_$ARCH
docker push $REPO/rke-tools:v0.1.10_$ARCH
cd ..
# Skip this one if not importing into Rancher 2
git clone https://github.com/rancher/rancher
cd rancher
git checkout v2.0.4
dapper
docker push $REPO/rancher-agent:v2.0.4_arm
cd ..
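Before wiring these into a cluster, it's worth sanity-checking that the freshly built images really are arm builds; a quick check on the build host:
# Both should report arm64 (or arm, depending on $ARCH)
docker image inspect --format '{{.Architecture}}' $REPO/flannel-cni:v0.3.0-$ARCH
docker image inspect --format '{{.Architecture}}' $REPO/rke-tools:v0.1.10_$ARCH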
I used a basic cluster.yml with rke for provisioning. You can use kubeadm, but why when rke is so much nicer? :)
The important parts are:
- ignore_docker_version: true. This prevents rke from complaining about the Docker version; I did not want to deal with downgrading Docker on Hypriot.
- network plugin: flannel, since it already has arm images; AFAIK, canal/calico does not.
- system_images: this configures all the system images for the cluster. Since we are running arm, we need arm images. If using an arm64 OS, change all the images except rke-tools from arm to arm64 (see the sketch below).
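A minimal sketch of the relevant cluster.yml pieces, assuming the images built above under my ags131 repo; node entries and the remaining system_images are placeholders you will need to fill in for your own setup:
ignore_docker_version: true

network:
  plugin: flannel

system_images:
  # the rke-tools image built earlier backs several helper containers
  alpine: ags131/rke-tools:v0.1.10_arm64
  nginx_proxy: ags131/rke-tools:v0.1.10_arm64
  cert_downloader: ags131/rke-tools:v0.1.10_arm64
  # the flannel-cni image built earlier
  flannel_cni: ags131/flannel-cni:v0.3.0-arm64
  # ... point the rest (etcd, kubernetes, flannel, kubedns, etc.) at arm/arm64 builds as well
With cluster.yml in place, running rke up provisions the cluster.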
I run a central Rancher server for my clusters, so I registered this cluster with it.
In Rancher, create a new imported cluster. Don't run the kubectl apply line directly; we want to edit the YAML first. Save the YAML and change both instances of image to your arm image ($REPO/rancher-agent:v2.0.4_$ARCH from the earlier steps).
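Roughly, that looks like this (a sketch: the import URL Rancher shows contains a cluster-specific token, so the URL below is a placeholder, and the sed pattern assumes the manifest references rancher/rancher-agent:v2.0.4):
# Save the registration manifest instead of piping it straight into kubectl
curl -sfL "https://rancher.example.com/v3/import/<token>.yaml" -o cluster-import.yaml
# Point both image: fields at the arm build of rancher-agent
sed -i 's|image: rancher/rancher-agent:v2.0.4|image: ags131/rancher-agent:v2.0.4_arm|g' cluster-import.yaml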
Here is where it gets tricky: if we apply the YAML as is, the agent will deploy and register with Rancher correctly; however, Rancher will then attempt to 'upgrade' it back to the original amd64 images.
Since we don't want that, we need to edit the Rancher settings:
- Open your cluster in Rancher (it should still be in the unavailable/provisioning state).
- Go to the cluster tab, and on the far right menu select View in API.
- Click Edit.
- In each of the large empty textboxes that hold environment configs, enter {}.
- In desiredAgentImage, insert your image name from earlier.
- Select Show Request, then Send Request.
- Back in a shell, run kubectl apply -f <filename> using the edited YAML.
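Once applied, you can watch the agents come up; a sketch, assuming the edited manifest from above and the cattle-system namespace Rancher uses for imported-cluster agents:
kubectl apply -f cluster-import.yaml
# Watch the cluster and node agents start
kubectl -n cattle-system get pods -w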
After a few moments, Rancher should take over and be ready to go. You may have to go into the projects/namespaces settings and move the default namespace into the default project.
I do not know whether this setup will survive a Rancher upgrade; when 2.0.5 or higher comes out, you will need to build a newer agent image and possibly update the API setting again.