
@triangletodd
Last active November 11, 2024 23:01
k3s in LXC on Proxmox

On the host

Ensure bridge netfilter is enabled

Check that the br_netfilter module is loaded; the following should print 1:

cat /proc/sys/net/bridge/bridge-nf-call-iptables
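If that file is missing or reads 0, the br_netfilter module isn't loaded. A minimal sketch for loading it on the host, assuming a Debian-based Proxmox install (the modules-load.d filename is arbitrary):

```shell
# Run as root on the Proxmox host.
# Load br_netfilter now and persist it across reboots.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Should now print 1, meaning bridged traffic traverses iptables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
```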

Disable swap

sysctl vm.swappiness=0
swapoff -a
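Note that swapoff -a only lasts until the next boot. A sketch for making both settings persistent, assuming swap is mounted via /etc/fstab:

```shell
# Persist the swappiness setting across reboots
echo 'vm.swappiness=0' > /etc/sysctl.d/99-k3s.conf

# Comment out any swap entries so they aren't remounted at boot
sed -i '/\sswap\s/s/^/#/' /etc/fstab
```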

Enable IP Forwarding

The first time I tried to get this working, the traefik pods went into CrashLoopBackOff once the cluster was up, because IP forwarding was disabled. Since LXC containers share the host's kernel, this has to be enabled on the host.

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl --system

Create the k3s container

Uncheck unprivileged container

[Screenshot: general.png]

Set swap to 0

[Screenshot: memory.png]

Enable DHCP

[Screenshot: network.png]

Results

[Screenshot: confirm.png]

Back on the Host

Edit the config file for the container (/etc/pve/lxc/$ID.conf) and add the following:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
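On hosts running a pure cgroup v2 layout (the default on newer Proxmox releases), the device rule is spelled with the cgroup2 key instead; check which hierarchy your host mounts before picking one:

```
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
```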

In the container

/etc/rc.local

/etc/rc.local doesn't exist in the default Ubuntu 20.04 LXC template provided by Proxmox. Create it with these contents:

#!/bin/sh -e

# Kubeadm 1.15 needs /dev/kmsg, which doesn't exist in LXC;
# /dev/console works as a stand-in.
# see: https://github.com/kubernetes-sigs/kind/issues/662
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi

# https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
mount --make-rshared /

Then run this:

chmod +x /etc/rc.local
reboot
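After the reboot, it's worth confirming that both workarounds took effect; a quick check from inside the container:

```shell
# /dev/kmsg should now be a symlink (pointing at /dev/console)
if [ -L /dev/kmsg ]; then
    echo "/dev/kmsg symlink present"
else
    echo "/dev/kmsg symlink missing"
fi

# The root mount's propagation should report "shared"
findmnt -o PROPAGATION / 2>/dev/null || echo "findmnt not available"
```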

Installing k3s

k3sup Installation

Assuming $HOME/bin is in your PATH:

curl -sLS https://get.k3sup.dev | sh
mv k3sup ~/bin/k3sup && chmod +x ~/bin/k3sup

k3s Installation

k3sup install --ip $CONTAINER_IP --user root
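By default k3sup writes a standalone kubeconfig file into the current directory. A sketch using the optional merge flags (the IP address and context name below are placeholders, not values from this guide):

```shell
# Placeholder address; substitute your container's DHCP lease
CONTAINER_IP=192.168.1.50

# --merge/--context fold the new cluster into an existing kubeconfig
# instead of writing ./kubeconfig in the working directory
k3sup install \
  --ip "$CONTAINER_IP" \
  --user root \
  --merge \
  --local-path "$HOME/.kube/config" \
  --context k3s-lxc
```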

Test

KUBECONFIG=kubeconfig kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-7566d596c8-zm7tj          1/1     Running     0          69m
kube-system   local-path-provisioner-6d59f47c7-ldbcl   1/1     Running     0          69m
kube-system   helm-install-traefik-glt48               0/1     Completed   0          69m
kube-system   coredns-7944c66d8d-67lxp                 1/1     Running     0          69m
kube-system   traefik-758cd5fc85-wzcst                 1/1     Running     0          68m
kube-system   svclb-traefik-cwd9h                      2/2     Running     0          42m


@ky-bd

ky-bd commented Jul 11, 2023

> I was able to use unprivileged containers too, but I'm not sure cgroup:rw is necessary. I didn't use it, but everything seems to be working.
>
> Scratch that. There are too many apps that error out when trying to do this unprivileged, like Rancher.
>
> 2023/07/10 06:33:16 [INFO] Applying CRD machinesets.cluster.x-k8s.io
> 2023/07/10 06:33:23 [FATAL] error running the jail command: exit status 2
>
> Privileged works though.

Yeah, I found that unprivileged LXC fails to mount block devices, so Longhorn and probably other CSI drivers won't work. I gave up and turned to VMs instead.

@glassman81

> I was able to use unprivileged containers too, but I'm not sure cgroup:rw is necessary. I didn't use it, but everything seems to be working.
>
> Scratch that. There are too many apps that error out when trying to do this unprivileged, like Rancher.
>
> 2023/07/10 06:33:16 [INFO] Applying CRD machinesets.cluster.x-k8s.io
> 2023/07/10 06:33:23 [FATAL] error running the jail command: exit status 2
>
> Privileged works though.
>
> Yeah, I found that unprivileged LXC fails to mount block devices, so Longhorn and probably other CSI drivers won't work. I gave up and turned to VMs instead.

I'm having the same problem even with privileged LXCs. Longhorn goes through a cycle of constantly attaching/detaching when the frontend is a block device. When it's iSCSI, it doesn't even attempt to attach, though I think that's because the CSI driver doesn't support iSCSI mode.

Did you ever get Longhorn to work with privileged LXCs, or did it just not work at all?

@glassman81

Well, it seems that in its current state, Longhorn won't work with LXCs:

longhorn/longhorn#2585
longhorn/longhorn#3866

This is not to say that it can't, just that someone hasn't figured it out yet. Maybe if someone like @timothystewart6 is interested (hopefully), he can have a go at it. His pretty awesome work led me here in the first place, so I can only hope.

@ky-bd

ky-bd commented Jul 14, 2023

> Well, it seems that in its current state, Longhorn won't work with LXCs:
>
> longhorn/longhorn#2585 longhorn/longhorn#3866
>
> This is not to say that it can't, just that someone hasn't figured it out yet. Maybe if someone like @timothystewart6 is interested (hopefully), he can have a go at it. His pretty awesome work led me here in the first place, so I can only hope.

I read those issues before, and that's part of the reason why I gave up before trying privileged LXC.
