apt-get update
apt-get upgrade
apt-get install curl
# Check VXLAN exists
curl -sSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh | bash
apt-get install -y docker.io
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
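Docker only reads daemon.json at startup, so restart it and check the driver took effect (assuming systemd):
systemctl restart docker
docker info | grep -i 'cgroup driver'   # should now report cgroupfs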
apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
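Optionally pin these packages so a routine apt-get upgrade doesn't move the cluster version under you:
apt-mark hold kubelet kubeadm kubectl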
Comment out your swap lines in /etc/fstab
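Commenting out the fstab entries only covers the next boot; kubelet also wants swap off right now. A minimal sketch (GNU sed; check what it matches before trusting it):
swapoff -a                                  # turn swap off immediately
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap lines, keeping a .bak copy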
Set hostname to node1.
First, comment out the definition of KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; this disables the use of the CNI networking plugin.
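systemd won't pick up the edited drop-in until it's reloaded:
systemctl daemon-reload
systemctl restart kubelet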
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=node1
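kubeadm prints the next steps at the end of init; you need the kubeconfig copy before kubectl will talk to the new cluster, and on a single-node cluster you also have to let workloads schedule on the master (taint name as used by kubeadm of this era):
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-   # allow pods on node1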
Flannel seems to drop forwarded traffic, so fix it:
iptables -P FORWARD ACCEPT
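That policy change doesn't survive a reboot; one way to persist it on Ubuntu is the iptables-persistent package (an assumption here, use whatever firewall persistence you already have):
apt-get install -y iptables-persistent   # offers to save the current rules during install
netfilter-persistent save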
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
This is only a single-admin cluster, so give the dashboard admin rights:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
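Apply it like any other manifest; dashboard-admin.yaml is just a made-up name for the file above:
kubectl apply -f dashboard-admin.yaml
kubectl proxy &   # one way to reach the dashboard, served on localhost:8001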
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
helm init
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl create clusterrolebinding ks-default --clusterrole=cluster-admin --serviceaccount=kube-system:default
Note that the last line gives Helm admin privileges. This means that anyone with helm access has admin access to the cluster. Fine for a personal cluster, but do something more robust otherwise.
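A quick sanity check that Tiller actually came up (the label selector assumes the default helm init deployment):
kubectl -n kube-system get pods -l app=helm,name=tiller
helm version   # should report both Client and Server versions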
Follow these instructions, and don't forget the auth: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs
Use the following storage class:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: developerapp.com/nfs
parameters:
  mountOptions: "vers=4.1"
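To confirm the provisioner is wired up, apply the class and bind a throwaway claim against it (the file and claim names here are made up; since the class is default, the PVC doesn't need to name it):
kubectl apply -f nfs-storageclass.yaml
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc nfs-test   # should reach Bound once the provisioner reacts
kubectl delete pvc nfs-test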
Reference: https://github.com/gluster/gluster-kubernetes
sudo mkfs.xfs /dev/md4
sudo mkdir -p /data/gluster
sudo mount /dev/md4 /data/gluster
Add /dev/md4 /data/gluster xfs defaults 0 0 to /etc/fstab.
Actually, no: heketi wants a raw, unformatted device (it runs pvcreate against it, as the troubleshooting below shows), so skip the mkfs/mount steps and just list /dev/md4 in the topology.
git clone https://github.com/gluster/gluster-kubernetes.git
cd gluster-kubernetes/deploy
Create topology.json as follows:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "node1"
              ],
              "storage": [
                "79.137.68.39"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/md4"
          ]
        }
      ]
    }
  ]
}
RBAC reference: https://github.com/coreos/quartermaster/tree/master/examples/glusterfs/auth/rbac
kubectl create namespace gluster
./gk-deploy --deploy-gluster --namespace gluster --object-capacity 2Ti
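gk-deploy takes a while; it helps to watch the pods it creates from another shell:
kubectl -n gluster get pods -w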
If this hangs when creating nodes, you probably need to enter the gluster node and clear a stuck pvcreate:
ps lax
kill -9 1234 # PID of pvcreate command
rm /run/lock/lvm/P_orphans # Remove the lock
pvcreate --metadatasize=128M --dataalignment=256K /dev/md4
Create defaultstorageclass.yaml:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-heketi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://deploy-heketi.gluster.svc.cluster.local:8080"
  restuser: "ignore"
  restuserkey: "ignore"
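Apply it, and note that if the earlier nfs class is still marked default you now have two defaults; one way to flip nfs back (assuming you kept the name nfs):
kubectl apply -f defaultstorageclass.yaml
kubectl patch storageclass nfs -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'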
helm install stable/openvpn --name vpn --namespace vpn
helm upgrade --set service.type=NodePort vpn stable/openvpn
Now:
kubectl -nvpn edit svc vpn-openvpn
Update spec to look like this:
spec:
  ...
  type: NodePort
  ports:
  - name: openvpn
    nodePort: 30443
    port: 443
    protocol: TCP
    targetPort: 443
  ...
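If you'd rather not hand-edit the service, the same pin can be applied as a patch; this assumes openvpn is the first (only) entry in spec.ports:
kubectl -n vpn patch svc vpn-openvpn --type=json -p '[{"op":"replace","path":"/spec/ports/0/nodePort","value":30443}]'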
Create makeClientKey.sh:
#!/bin/bash -e
if [ $# -ne 1 ]
then
  echo "Usage: $0 <CLIENT_KEY_NAME>"
  exit 1
fi
KEY_NAME=$1
NAMESPACE=$(kubectl get pods --all-namespaces -l type=openvpn -o jsonpath='{.items[0].metadata.namespace}')
POD_NAME=$(kubectl get pods -n $NAMESPACE -l type=openvpn -o jsonpath='{.items[0].metadata.name}')
SERVICE_NAME=$(kubectl get svc -n $NAMESPACE -l type=openvpn -o jsonpath='{.items[0].metadata.name}')
SERVICE_IP=79.137.68.39 # CUSTOMISE
kubectl -n $NAMESPACE exec -it $POD_NAME /etc/openvpn/setup/newClientCert.sh $KEY_NAME $SERVICE_IP
kubectl -n $NAMESPACE exec -it $POD_NAME cat /etc/openvpn/certs/pki/$KEY_NAME.ovpn > $KEY_NAME.ovpn
./makeClientKey.sh joebloggs
Copy the created config to your local machine and load it into Tunnelblick.
export HELM_HOST=tiller-deploy.kube-system.svc.cluster.local:44134 # Probably put in .bashrc or some such
helm init --client-only
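helm only knows about the stable repo out of the box, so incubator/docker-registry needs the incubator repo added first; the URL below is the one that was in use at the time (it has since been deprecated):
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm repo update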
helm install --name registry --namespace registry incubator/docker-registry --set persistentVolume.enabled=true,persistentVolume.storageClass=nfs
On the node, edit /etc/docker/daemon.json to add:
"insecure-registries" : ["registry.developerapp.net"]
Note: This assumes you set up CoreDNS to resolve registry.developerapp.net to a CNAME for the registry service created above.
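daemon.json has to stay valid JSON, so the new key sits alongside the cgroup driver setting from earlier, and Docker needs a restart to notice it. Roughly:
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "insecure-registries": ["registry.developerapp.net"]
}
systemctl restart docker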
Assuming you have an NFS share you can mount as a backup target:
apt-get install rsnapshot
sudo apt-get install postfix # Needed for cron to tell you about errors
Add backup drive to /etc/fstab:
server:path /backup nfs rsize=65536,wsize=65536,timeo=30,intr,nfsvers=4
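Create the mountpoint and mount it (mount /backup picks the entry up from fstab):
mkdir -p /backup
mount /backup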
Edit /etc/rsnapshot.conf (fields must be separated by tabs, not spaces). In particular:
snapshot_root /backup/rsnapshot/
...
retain daily 6
retain weekly 12
...
backup /home/ localhost/
backup /etc/ localhost/
backup /usr/local/ localhost/
backup /data/ localhost/
backup /root/ localhost/
backup /var/lib/etcd/ localhost/
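Worth validating before cron runs it for real:
rsnapshot configtest   # validates the config syntax
rsnapshot -t daily     # dry run: prints the commands it would execute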
Update crontab (crontab -e):
MAILTO=[email protected]
00 03 * * * /usr/bin/rsnapshot daily
00 06 * * 6 /usr/bin/rsnapshot weekly