@Drallas
Last active December 25, 2024 14:46

Docker Swarm in LXC Containers

Part of collection: Hyper-converged Homelab with Proxmox

After struggling for some days, and since I really needed this to work (ignoring the "it can't be done" vibe everywhere), I managed to get Docker to work reliably in privileged Debian 12 LXC Containers on Proxmox 8.

(Unfortunately, I couldn't get anything to work in unprivileged LXC Containers)

There are NO modifications required on the Proxmox host or the /etc/pve/lxc/xxx.conf file; everything is done on the Docker Swarm host. So the only obvious candidate that could break this setup is a future Docker Engine update!

Host Setup

My hosts are Debian 12 LXC containers, installed via tteck's Proxmox VE Helper Scripts.

Install the LXC via the Proxmox VE Helper Script

bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/debian.sh)"

Backing filesystems

docker info shows I'm using overlay2, which is the recommended storage driver for Debian. This storage driver requires XFS or EXT4 as the backing filesystem.

docker info | grep -A 7 "Storage Driver:"

 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd

Set userns-remap

As Neuer_User pointed out, running the Docker Containers unprivileged on a privileged LXC seems the best compromise to run the containers in a relatively secure way.

To do so, add a daemon.json on the Docker Servers that are part of the Swarm.

mkdir /etc/docker
nano /etc/docker/daemon.json
{
  "userns-remap": "root"
}

And reboot the Docker Host.

(This moves everything below /var/lib/docker/ to the folder /var/lib/docker/0.0/, so existing workloads disappear; hence it's a step to do before installing Docker!)
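Once Docker is installed (next section), a quick way to confirm the remap is active is to check the daemon's security options and the remapped data root (paths assume the "root" remap from the daemon.json above):

docker info --format '{{.SecurityOptions}}'   # should list name=userns
ls -d /var/lib/docker/0.0                     # the remapped data root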

Install Docker

The get-docker.sh script is the most convenient way to quickly install the latest Docker-CE release!

curl -fsSL https://get.docker.com -o get-docker.sh
chmod +x get-docker.sh
./get-docker.sh
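
Optionally verify the installation before continuing:

docker --version
systemctl is-active docker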

Create / Join the Docker Swarm

Without this step, the next step(s) fail!

# Manager Node
docker swarm init

# Add a node (run on the joining node; the manager address comes from the "swarm init" output)
docker swarm join --token <some-very-long-token> <manager-ip>:2377

# Display Join token again
docker swarm join-token worker
docker swarm join-token manager
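
Once the nodes have joined, a quick check from a manager node should list them all with STATUS "Ready":

# Run on a manager node
docker node ls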

Enable net.ipv4.ip_forward for ingress_sbox

For Docker in LXC to work, the only thing needed is to execute:

nsenter --net=/run/docker/netns/ingress_sbox sysctl -w net.ipv4.ip_forward=1

on each of the Docker Swarm servers.
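
If you manage several nodes, a small loop from a workstation can apply it everywhere (the node names below are placeholders; assumes root SSH access):

for node in swarm-01 swarm-02 swarm-03; do
  ssh root@"$node" 'nsenter --net=/run/docker/netns/ingress_sbox sysctl -w net.ipv4.ip_forward=1'
done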

Make it permanent

This doesn't survive reboots, so I created a oneshot systemd service to make sure the setting is applied after each reboot.

Create a Bash Script

First, we need a Bash script to be executed by the service.

nano /usr/local/bin/ipforward.sh

#!/bin/bash
nsenter --net=/run/docker/netns/ingress_sbox sysctl -w net.ipv4.ip_forward=1

Make it executable

chmod +x /usr/local/bin/ipforward.sh

Create a Systemd Service

This service is of type oneshot: during startup it waits for docker.service to be started, and then another 10 seconds for run-docker-netns-ingress_sbox.mount to be loaded. Only after that can net.ipv4.ip_forward=1 be applied.

nano /etc/systemd/system/ingress-sbox-ipforward.service
[Unit]
Description = Set net.ipv4.ip_forward for ingress_sbox namespace
After = docker.service
Wants = docker.service

[Service]
Type = oneshot
RemainAfterExit = yes
ExecStartPre = /bin/sleep 10
ExecStart = /usr/local/bin/ipforward.sh

[Install]
WantedBy = multi-user.target

Start the service and check if it's healthy

systemctl daemon-reload
systemctl enable ingress-sbox-ipforward.service
systemctl start ingress-sbox-ipforward.service
systemctl status ingress-sbox-ipforward.service

Final Checks

Without net.ipv4.ip_forward set to 1, the ingress networking of the Docker Swarm is not active, so it's important to verify that the value was applied successfully.

Manual check if ipv4.ip_forward is set to 1

systemctl status ingress-sbox-ipforward.service | grep ipforward.sh

# Or in a script via:

current_value=$(nsenter --net=/run/docker/netns/ingress_sbox sysctl -n net.ipv4.ip_forward)
echo $current_value
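
The same check can be wrapped into a small script that warns when the value is missing, e.g. for a cron job or a monitoring hook (a minimal sketch):

#!/bin/bash
# Warn if forwarding is still disabled inside the ingress_sbox namespace
current_value=$(nsenter --net=/run/docker/netns/ingress_sbox sysctl -n net.ipv4.ip_forward)
if [ "$current_value" != "1" ]; then
  echo "WARNING: net.ipv4.ip_forward is 0 in ingress_sbox; swarm ingress will not work" >&2
  exit 1
fi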

(Now, Docker in LXC seems to behave like Docker in a VM.)

Issues

  1. Service in docker-compose resolves the wrong IP

To fix this, you need to add a hostname entry for each swarm service; to keep things logical I also add a service_ prefix to the service names.

services:
  service_nginx: # Prefix service_
    image: nginx
    hostname: nginx
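
To see what a name actually resolves to on the overlay network, something like the following can help (assumes the image ships nslookup; tasks.<service> is the swarm DNS entry that lists the individual task IPs):

# Run from inside any container attached to the same overlay network
nslookup service_nginx        # the service name, normally the swarm VIP
nslookup tasks.service_nginx  # the individual task/container IPs
nslookup nginx                # the explicit hostname from the workaround above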

Screenshot

(Screenshot 2023-09-21 at 14 38 45)

@scyto commented Sep 24, 2023

There is something very wrong in docker on lxc if that ip issue is an issue. Reading the linked gist issue everything is working as it should - the service name resolves to the docker VIP - I think the issue is people using weird network approaches - like host networking (don't do in swarm) and using same ranges on VIP network and host network....

Also not sure why you have to create a system service - isn't needed on real Debian? It all makes me nervous docker on lxc is very fragile...

@Drallas (author) commented Sep 24, 2023

This was pre my Virtiofs discovery, now I can move Docker Swarm to VMs. 😀

The service is needed to set net.ipv4.ip_forward=1, which can only be done after run-docker-netns-ingress_sbox.mount becomes active.

Overall this approach is pretty ok, no weird host config, but only one simple setting inside the docker host.

I couldn’t find anyone with a better solution I could work off.

@scyto commented Sep 24, 2023

Oh to be clear, I am darn impressed; reading all the horror stories on the forum around docker in LXC made me assume it wasn't really possible.
It is the beauty of Linux that it is so customizable.

@dlasher commented Jan 27, 2024

So there's a couple of subtle things:

  1. The VE helper scripts above, if you accept defaults, set up UNPRIV LXC containers - which make docker inside fail in unpretty ways. You mention it both ways, wouldn't hurt to put something in bold/red. I went through this a dozen times and missed that point. (And if you use the docker scripts from https://tteck.github.io/Proxmox/ - they are all UNPRIV as well)

  2. I wrote a little startup script to make sure the ingress_sbox is active, and then set net.ipv4.ip_forward=1. (Posted on the proxmox forum, but will share it here)

#!/bin/bash
for lp in {1..60}; do
        if [ -f /run/docker/netns/ingress_sbox ]; then
                nsenter --net=/run/docker/netns/ingress_sbox sysctl -w net.ipv4.ip_forward=1
                exit
        else
                echo "waiting $lp/60 - ingress_sbox does not exist"
                sleep 1
        fi
done
  3. You can use other backing storage than XFS/ZFS, but it takes a little more work, and some help from fuse-overlayfs. Using your guide, I got docker swarm happy on a full proxmox 8.1.x cluster, with CEPH as the backing store. (https://c-goes.github.io/posts/proxmox-lxc-docker-fuse-overlayfs/)

Thanks for putting this together - it got me 99% of the way there, much appreciated.

@SbMan1 commented May 5, 2024

Very interesting reads. When you have time, is this all still working?

@Drallas (author) commented May 5, 2024

@SbMan1 My setup is still working, but depending on your OS / setup it might need some tweaking; see the comments for some help.

@00Asgaroth00 commented May 17, 2024

Interesting! I've just managed to get this going with pve 8.2. I've also bind mounted a cephfs mount into the lxc containers. While testing (with the portainer swarm compose deployed), if I drain the portainer app off of a manager node, when it restarts on another manager it claims that the portainer.db file is invalid. The only thing restoring the service is if I delete the portainer.db file and re-setup portainer again, which is mildly annoying. Did you ever encounter this issue? I have the portainer volumes mounted in as local directories off of the cephfs mount in the lxc container.

Edit: I'm using the rocky lxc image for the lxc OS (I'm more familiar with it). I tried alpine but I could not get it to work reliably at all.

@Drallas (author) commented May 17, 2024

@00Asgaroth00 I tested this with Debian 12 LXC’s, not sure what’s different on Rocky. Is the Docker Volume with the portainer.db on a shared volume that all swarm nodes can RWX to?

@00Asgaroth00

Hi, yes, the cephfs is mounted across all swarm nodes; the mount is defined in the lxc conf file as follows:

mp0: /mnt/pve/cephfs/swarm_data,mp=/data,shared=1

Where the pve mount for cephfs is /mnt/pve/cephfs, the "swarm_data" is a directory under that mount point on the pve host itself.

I can see the data on all lxc nodes and I can "cat" text files on all nodes and the data appears correct.
The portainer.db file is a boltdb data file so I cannot easily see its data to see where it is going wrong :/

@Drallas (author) commented May 17, 2024

Not sure, I'm not using this anymore; I will see if I have a backup I can restore to test.

@00Asgaroth00 commented May 17, 2024

Did you end up moving over to VMs with the virtiofs option (I see the heading there on your main page)? I may switch to that if it has fewer headaches than running swarm in lxc. I find creating LXCs much simpler than QEMU VMs (I currently use ansible to automate the lot for lxc); however, if I switch to VMs I'll need to hook in packer to create a template first before cloning the VMs. Anyhoo, I might switch to VMs and look into virtiofs for the cephfs shared filesystem; this is where I hoped the bind mounts for lxc would have sufficed...

no need to do a restore, thanks for commenting though!

EDIT:

For reference, this is the error message I get from the portainer app when the app fails over to another node while testing:

[root@swarm-manager-01 portainer]# docker logs abfb8f866522
2024/05/17 04:05PM INF github.com/portainer/portainer/api/cmd/portainer/main.go:369 > encryption key file not present | filename=portainer
2024/05/17 04:05PM INF github.com/portainer/portainer/api/cmd/portainer/main.go:392 > proceeding without encryption key |
2024/05/17 04:05PM INF github.com/portainer/portainer/api/database/boltdb/db.go:125 > loading PortainerDB | filename=portainer.db
2024/05/17 04:05PM FTL github.com/portainer/portainer/api/cmd/portainer/main.go:98 > failed opening store | error="invalid database"

@Drallas (author) commented May 17, 2024

I did run into this with Portainer and it might happen on VM’s too, need to check my documentation for details.

Do other Containers persist data correctly when you move them over to another node?

@00Asgaroth00 commented May 17, 2024

I've not tested any other container to be honest; I might try something small like adguard just to see if it exhibits the same issue.

EDIT:

Just tried it out with adguardhome, and it looks like I have the same issue there as well. First start is okay; as soon as I drain the node to force a failover, the container fails to read the data on startup on the new node. I am however able to see the text files' contents on all nodes and I can create new files on each of the nodes.

Adguardhome's specific error message:

[root@swarm-worker-02 ~]# docker logs 591ded60ee2b
2024/05/17 18:21:55.592321 [info] AdGuard Home, version v0.107.48
2024/05/17 18:21:55.593258 [info] tls: using default ciphers
2024/05/17 18:21:55.594867 [info] safesearch default: reset 253 rules
2024/05/17 18:21:55.693092 [info] Initializing auth module: /opt/adguardhome/work/data/sessions.db
2024/05/17 18:21:55.700098 [error] auth: open DB: /opt/adguardhome/work/data/sessions.db: invalid database
2024/05/17 18:21:55.700108 [fatal] initializing auth module failed

@Drallas (author) commented May 24, 2024

Perhaps AdGuard isn't closing / shutting down the DB in a clean state.

@00Asgaroth00 commented May 24, 2024

I'm not sure; it seems to happen with both portainer and adguard. Both databases get corrupted when I test a "failover", i.e. drain the node the containers are running on and wait for them to be rescheduled elsewhere.

virtiofsd debug logs don't seem to indicate any issues either :(

@jimbothigpen

@00Asgaroth00 : Have you made any progress debugging this issue? I'm bumping up against the same problem. Privileged LXCs 3 node swarm, Portainer works after the service first places and starts the container, but when the service restarts the container on another node, I have the same errors you're seeing. Portainer's /data directory is a bind mount to a cephfs directory, readable and writable by all swarm members.

@Drallas (author) commented May 28, 2024

@jimbothigpen How did you install Portainer and the agent?

I ran into similar issues, but didn’t document it at the time.

All I remember is that following this guide helped me.

@00Asgaroth00

@00Asgaroth00 : Have you made any progress debugging this issue? I'm bumping up against the same problem. Privileged LXCs 3 node swarm, Portainer works after the service first places and starts the container, but when the service restarts the container on another node, I have the same errors you're seeing. Portainer's /data directory is a bind mount to a cephfs directory, readable and writable by all swarm members.

Hi, no, I did not make any progress with this; using lxc with bind mounts on the cephfs directory results in an invalid database when testing failover. It's as if file state is not sync'd in time before failover completes on the secondary node.

All I remember is that following this guide helped me.

That is exactly the guide I followed to start portainer in both lxc's and vm's.

With VMs and using virtiofs I can actually remove the files and do an ls on the remaining nodes and the files still show up, implying that [i|d]node entries are not synced between instances. I'm still trying different parameters on virtiofsd, for example cache=never|none, to see if I can force it to re-read directly from the filesystem, but I've had no luck with it so far. At this point I'm starting to consider older tech like glusterfs with gfs2/ocfs2 filesystems for this. Although, knowing that cephfs is available is messing with my OCD; I want to use that mount.

@jimbothigpen

Portainer & portainer agent were both installed using the same stack file you pointed to, just changed the /data volume to a bind mount aimed at the proper directory on the shared cephfs mount.

Frustrating thing is that I know this worked in the recent past. I've had this setup running for a while -- docker swarm in privileged lxc with a cephfs mount for persistent container data. Portainer was happily chugging along for the better part of a year, with dozens of host restarts and service relocations just working as expected. At some point in the last 6 weeks I noticed the portainer service failing (not 100% certain when it stopped working, as my attention has been elsewhere, and hadn't actually tried to log on to the portainer interface for a while). Seems entirely unrelated to the docker or portainer versions (I've tried multiple versions of docker and portainer recently, trying to get it to work as expected).

I also tried removing the mount point from the LXC and installing the ceph client inside the container and mounting via fstab. Same behavior.

Gave up hope this morning. Since I already have an NFS server exporting a couple of the cephfs directories, I just used that -- mounted the same directory via NFS on the docker hosts. Portainer now behaves as expected -- service is able to move to any host w/out complaint.

But yeah -- the added (admittedly minor) complexity of using NFS mounts inside the docker containers instead of a cephfs mount on the LXC makes my eye twitch a bit.

@00Asgaroth00

I also tried removing the mount point from the LXC and installing the ceph client inside the container and mounting via fstab. Same behavior.

I just mounted the cephfs filesystem using the ceph client within the virtual machine's fstab and portainer/adguard are working properly now. I did not try this in an lxc container though. I had to create a local bridge on each hypervisor and NAT out traffic over the point-to-point link to get the virtual machine running on host 1 to communicate with monitors on host 2 and 3, but it is working away nicely now.
