MongoDB Community Kubernetes Operator on ARM64

This gist aims to provide the shortest path to deploying the MongoDB Community Kubernetes Operator on ARM64 machines, with some clarifications in response to the following open issues about ARM64 support on the official repository:

  1. #1514
  2. #1420

Prerequisite

  • Any ARM64 machine running a Linux operating system that supports the snap daemon and can run Kubernetes. I will be doing this on, you guessed it, an Orange Pi 5B

You are free to use any Kubernetes installer of your choice. I am using MicroK8s since it's zero-ops and the lightest, most elegant Kubernetes installer that exists.

Step 1: Deploy the K8s cluster

Reference material: https://microk8s.io/docs/getting-started

Estimated Time: 3 min

# Install Kubernetes
sudo snap install microk8s --classic --channel=1.30

# Add your user to the microk8s group and restart your session
sudo usermod -a -G microk8s $USER
mkdir -p ~/.kube
chmod 0700 ~/.kube
su - $USER

# Check for your K8s cluster being ready
microk8s status --wait-ready

# (Optional/Recommended) Enable Metrics Server so we can monitor the node usage
sudo microk8s enable metrics-server
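
# (Optional) once metrics-server is up you can check node usage with the
# bundled kubectl; it may take a minute before metrics become available
microk8s kubectl top nodes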

# Alias the packaged tools
# And/Or install them if you are using a different K8s installer
alias kubectl="microk8s kubectl"
alias helm="microk8s helm"
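
Note that these aliases only last for the current shell session. If you want them to persist across logins, one option (assuming a bash setup that sources ~/.bash_aliases, as Ubuntu does by default) is to append them there:

# Persist the aliases for future sessions (adjust for your shell)
echo 'alias kubectl="microk8s kubectl"' >> ~/.bash_aliases
echo 'alias helm="microk8s helm"' >> ~/.bash_aliases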

Step 2: Provision a storage provider

If you are just testing this out, feel free to use local storage instead.
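
On MicroK8s, one quick option for that is the built-in hostpath-storage addon, which provisions volumes on the node's local disk (not resilient, so suitable for testing only):

# (Testing only) enable simple local-path dynamic provisioning
sudo microk8s enable hostpath-storage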

Set up dynamic volume provisioning using single-node Ceph

Reference Material: https://microk8s.io/docs/how-to-ceph

Estimated Time: 4-6 min

# Install Microceph
sudo snap install microceph --channel=latest/edge
# Hold the snap: Ceph shouldn't be auto-upgraded without a disaster-recovery plan
sudo snap refresh --hold microceph
# Bootstrap the cluster
sudo microceph cluster bootstrap
# Add three 4 GiB file-backed (loop) OSDs
sudo microceph disk add loop,4G,3

# Check the cluster status
sudo microceph status
sudo microceph.ceph status

# Enable the rook-ceph operator
sudo microk8s enable rook-ceph

# Connect the k8s cluster to the ceph cluster
sudo microk8s connect-external-ceph

# Patch the ceph-rbd storage class to be the default so stateful applications can auto-provision volumes
kubectl patch sc ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

With this approach your storage is now resilient. It is not highly available though, since everything runs on a single node - but you can easily grow a Ceph cluster into an HA setup. Refer to my other GitHub Gists to see how.
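
You can quickly confirm that ceph-rbd is now the default storage class:

kubectl get storageclass
# ceph-rbd should show "(default)" next to its name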

Step 3: Deploy MongoDB Community K8s Operator on Arm64 using Helm

Reference Material: https://github.com/mongodb/mongodb-kubernetes-operator/tree/master

Estimated Time: 5-7 min

  • Add the Helm Repo first:

    helm repo add mongodb https://mongodb.github.io/helm-charts
  • Grab the latest values and tweak them as necessary. The most important settings are watchNamespace and the agent image (mongodb-agent-ubi) name and version for the mongodb-agent container

    helm show values mongodb/community-operator > values.yaml

    Feel free to copy over the following values without worrying about having to tweak them

    ## Reference to one or more secrets to be used when pulling images
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    imagePullSecrets: []
    # - name: "image-pull-secret"
    ## Operator
    operator:
      # Name that will be assigned to most of internal Kubernetes objects like
      # Deployment, ServiceAccount, Role etc.
      name: mongodb-kubernetes-operator
    
      # Name of the operator image
      operatorImageName: mongodb-kubernetes-operator
    
      # Name of the deployment of the operator pod
      deploymentName: mongodb-kubernetes-operator
    
      # Version of mongodb-kubernetes-operator
      version: 0.9.0
    
      # Watch all namespaces (required when databases are deployed to other namespaces)
      watchNamespace: "*"
    
      # Resources allocated to Operator Pod
      resources:
        limits:
          cpu: 750m
          memory: 750Mi
        requests:
          cpu: 200m
          memory: 200Mi
    
      # replicas deployed for the operator pod. Running 1 is optimal and suggested.
      replicas: 1
    
      # Additional environment variables
      extraEnvs: []
      # environment:
      # - name: CLUSTER_DOMAIN
      #   value: my-cluster.domain
    
      podSecurityContext:
        runAsNonRoot: true
        runAsUser: 2000
    
      securityContext: {}
    
    ## Operator's database
    database:
      name: mongodb-database
      # set this to the namespace where you would like
      # to deploy the MongoDB database,
      # Note if the database namespace is not same
      # as the operator namespace,
      # make sure to set "watchNamespace" to "*"
      # to ensure that the operator has the
      # permission to reconcile resources in other namespaces
      # namespace: mongodb-database
    
    agent:
      # This is the important bit: without the -arm64 agent image the deployment will fail on an ARM64 machine
      name: mongodb-agent-ubi
      version: 107.0.6.8587-1-arm64
    versionUpgradeHook:
      name: mongodb-kubernetes-operator-version-upgrade-post-start-hook
      version: 1.0.8
    readinessProbe:
      name: mongodb-kubernetes-readinessprobe
      version: 1.0.17
    mongodb:
      name: mongo
      repo: docker.io
    
    registry:
      agent: quay.io/mongodb
      versionUpgradeHook: quay.io/mongodb
      readinessProbe: quay.io/mongodb
      operator: quay.io/mongodb
      pullPolicy: Always
    
    # Set to false if CRDs have been installed already. The CRDs can be installed
    # manually from the code repo: github.com/mongodb/mongodb-kubernetes-operator or
    # using the `community-operator-crds` Helm chart.
    community-operator-crds:
      enabled: true
  • Now let's deploy it.

    # let's deploy it in the default namespace. Since we are setting it to watch 
    # all namespaces - it will be fine. 
    kubectl config set-context --current --namespace=default
    
    # Deploy the cluster operator
    helm install community-operator mongodb/community-operator -f values.yaml
    
    # Check status
    kubectl get po -n default
    NAME                                           READY   STATUS    RESTARTS        AGE
    mongodb-kubernetes-operator-5f5b89f5df-6r9mb   1/1     Running   1 (3h25m ago)   3d8h
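
    If the operator pod is not Running, its logs are the first place to look. A quick way to tail them, assuming the default deployment name from values.yaml above:

    # Tail the operator logs
    kubectl logs deployment/mongodb-kubernetes-operator -n default -f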

Step 4: Deploy Replica Sets for MongoDB deployments

I am going to show you how to deploy resources in one namespace. The steps have to be repeated for each namespace in which you want the operator to manage resources.

In this case I will use the preview namespace.
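
If the preview namespace does not exist yet, create it first:

kubectl create namespace preview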

  • First install the RBAC configurations

    kubectl config set-context --current --namespace=preview
    git clone https://github.com/mongodb/mongodb-kubernetes-operator.git
    cd mongodb-kubernetes-operator
    kubectl apply -k config/rbac --namespace preview
  • Create a secrets.yaml file where we are going to store credentials for use by the operator.

    It is recommended that you delete these secrets once all the deployment steps are complete. After the MongoDB stateful set starts up and the SCRAM credentials have been generated, you no longer need to keep the plaintext secrets in Kubernetes (see the cleanup snippet at the end of Step 4).

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: admin-credential
    type: Opaque
    stringData:
      password: <your-admin-password>
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: preview-readonly-minimal-credential
    type: Opaque
    stringData:
      password: <your-readonly-password>
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: preview-readwrite-credential
    type: Opaque
    stringData:
      password: <your-read-write-password>

    Now create the secrets

    kubectl apply -f secrets.yaml -n preview

    Now let's create a standalone (single-member replica set) deployment spec. Copy the following into a file named standalone.yaml:

    ---
    apiVersion: mongodbcommunity.mongodb.com/v1
    kind: MongoDBCommunity
    metadata:
      name: preview-mongodb
    spec:
      members: 1
      type: ReplicaSet
      version: "6.0.5"
      security:
        authentication:
          modes: ["SCRAM"]
        roles: # custom roles are defined here
          - role: dbTestReadOnly
            db: test
            privileges:
              - resource:
                  db: "test"
                  collection: "" # an empty string indicates any collection
                actions:
                  - find
            roles: []
      users:
        - name: admin
          db: admin
          passwordSecretRef: # a reference to the secret that will be used to generate the user's password. One we created earlier
            name: admin-credential
          connectionStringSecretName: admin-connection-string
          roles:
            - name: clusterAdmin
              db: admin
            - name: userAdminAnyDatabase
              db: admin
            - name: readWriteAnyDatabase
              db: admin
            - name: dbAdminAnyDatabase
              db: admin
          scramCredentialsSecretName: admin-scram
        - name: preview-readonly-minimal
          db: test
          passwordSecretRef: # a reference to the secret that will be used to generate the user's password
            name: preview-readonly-minimal-credential
          connectionStringSecretName: preview-readonly-minimal-connection-string
          roles:
            - name: dbTestReadOnly
              db: test
          scramCredentialsSecretName: preview-readonly-minimal-scram
        - name: preview-readwrite
          db: test
          passwordSecretRef: # a reference to the secret that will be used to generate the user's password
            name: preview-readwrite-credential
          connectionStringSecretName: preview-backend-readwrite-connection-string
          roles:
            - name: readWrite
              db: test
          scramCredentialsSecretName: preview-backend-readwrite-scram
      additionalMongodConfig:
        storage.wiredTiger.engineConfig.journalCompressor: zlib
      statefulSet:
        spec:
          volumeClaimTemplates:
            - metadata:
                name: data-volume
              spec:
                accessModes: 
                  - "ReadWriteOnce"
                resources:
                  requests:
                    storage: 3Gi
            - metadata:
                name: logs-volume
              spec:
                accessModes: 
                  - "ReadWriteOnce"
                resources:
                  requests:
                    storage: 512Mi
          template:
            spec:
              containers:
                - name: mongodb-agent
                  readinessProbe:
                    failureThreshold: 40
                    initialDelaySeconds: 5
    

Explanation:

In the above YAML we are:

  1. Defining an admin user with all elevated privileges.
  2. Defining a custom role dbTestReadOnly with read-only rights to a single database called test. We can use this for a frontend API workload that only serves GET/read-style requests.
  3. Defining a user with read/write permissions.
  4. Defining some volume specs for our underlying storage.

Let's go ahead and deploy it now:

kubectl apply -f standalone.yaml -n preview

# Check the status
kubectl get sts

# Check volumes provisioned
kubectl get pv,pvc
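
# Check the MongoDBCommunity resource created by the operator
# (the PHASE column should eventually report Running)
kubectl get mongodbcommunity -n preview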

And voila! You now have MongoDB deployed.
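
As recommended earlier, once the replica set is up and the operator has generated the SCRAM credentials, you can delete the plaintext password secrets from secrets.yaml:

kubectl delete secret admin-credential preview-readonly-minimal-credential preview-readwrite-credential -n preview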

Step 5: Validate MongoDB Deployment

We are simply going to use the mongosh tool to validate access:

# Get the connection string 
kubectl get secret preview-backend-readwrite-connection-string -o json | jq -r '.data | with_entries(.value |= @base64d)'

# Copy the Value for key: connectionString.standardSrv from the above output

# Connect to the mongodb pod
kubectl exec --stdin --tty preview-mongodb-0 -- /bin/bash

# Connect to mongosh 
mongosh "<connectionString.standardSrv>"


And boom. Feel free to write queries and play around now.
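
For example, a couple of throwaway commands inside mongosh (connected as the read/write user) to confirm reads and writes work:

// switch to the test database and do a quick write/read round trip
use test
db.smoke.insertOne({ hello: "world" })
db.smoke.find()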

Conclusion

At this point you can simply mount those credentials as secrets for your pods to consume, and access the MongoDB deployment using whichever client library you like. The documentation for deploying this operator with different levels of permissions is really scant, especially for ARM64 architectures. I hope this gist is helpful to people trying to deploy it on a Raspberry Pi, Orange Pi, or Radxa-like device.
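
For instance, here is a minimal sketch of a Pod that consumes the generated connection-string secret as an environment variable. The Pod name and image are placeholders for illustration; the secret name and key come from the operator output used in Step 5:

apiVersion: v1
kind: Pod
metadata:
  name: mongo-client-demo            # hypothetical name, for illustration only
  namespace: preview
spec:
  containers:
    - name: app
      image: mongo:6.0.5             # placeholder image; substitute your application image
      command: ["sleep", "infinity"]
      env:
        - name: MONGODB_URI          # read this however your client library expects
          valueFrom:
            secretKeyRef:
              name: preview-backend-readwrite-connection-string
              key: connectionString.standardSrv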
