@superbrothers
Last active July 4, 2024 06:41
Metrics proxy server for Kubernetes components
# based on https://github.com/kubermatic/kubeone/issues/1215#issuecomment-992471229
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-proxy-config
  namespace: monitoring
data:
  haproxy.cfg: |
    defaults
      mode http
      timeout connect 5000ms
      timeout client 5000ms
      timeout server 5000ms
      default-server maxconn 10

    frontend kube-controller-manager
      bind ${NODE_IP}:10257
      mode tcp
      default_backend kube-controller-manager

    backend kube-controller-manager
      mode tcp
      server kube-controller-manager 127.0.0.1:10257

    frontend kube-scheduler
      bind ${NODE_IP}:10259
      mode tcp
      default_backend kube-scheduler

    backend kube-scheduler
      mode tcp
      server kube-scheduler 127.0.0.1:10259

    frontend kube-proxy
      bind ${NODE_IP}:10249
      http-request deny if !{ path /metrics }
      default_backend kube-proxy

    backend kube-proxy
      server kube-proxy 127.0.0.1:10249

    frontend etcd
      bind ${NODE_IP}:2381
      http-request deny if !{ path /metrics }
      default_backend etcd

    backend etcd
      server etcd 127.0.0.1:2381
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metrics-proxy
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: metrics-proxy
  template:
    metadata:
      labels:
        app: metrics-proxy
    spec:
      containers:
      - env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        image: docker.io/haproxy:2.5
        name: haproxy
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 99 # 'haproxy' user
        volumeMounts:
        - mountPath: /usr/local/etc/haproxy
          name: config
      hostNetwork: true
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      volumes:
      - configMap:
          name: metrics-proxy-config
        name: config
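With the DaemonSet running, each component's metrics endpoint is re-exposed on the node IP via the four haproxy.cfg frontends above. A quick verification sketch — the node address here is a placeholder assumption, not part of the manifest:

```shell
# Placeholder node address (assumption); substitute a real control-plane node IP.
NODE_IP="${NODE_IP:-192.0.2.10}"

# The four scrape targets the proxy exposes on the node IP,
# matching the haproxy.cfg frontends above.
for port in 10257 10259 10249 2381; do
  echo "${NODE_IP}:${port}/metrics"
done

# Against a live cluster you can probe each one, e.g.:
#   curl -sk "https://${NODE_IP}:10257/metrics" | head   # kube-controller-manager (HTTPS)
#   curl -s  "http://${NODE_IP}:10249/metrics" | head    # kube-proxy (plain HTTP)
```

Note that kube-controller-manager and kube-scheduler serve HTTPS (hence `mode tcp` pass-through in the config), while the kube-proxy and etcd frontends speak plain HTTP and can restrict requests to `/metrics`.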
@superbrothers (Author)

If you built your cluster with kubeadm, the etcd metrics endpoint listens on a different port than the kube-prometheus-stack default, so the following change to values.yaml is required:

kubeEtcd:
  service:
    port: 2381
    targetPort: 2381
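The same override can be passed on the Helm command line instead of editing values.yaml — a sketch, assuming the release is named kube-prometheus-stack and lives in the monitoring namespace (both assumptions; adjust for your installation):

```shell
# Hypothetical release name and namespace; adjust for your installation.
HELM_ARGS="--namespace monitoring --set kubeEtcd.service.port=2381 --set kubeEtcd.service.targetPort=2381"

if command -v helm >/dev/null 2>&1; then
  # Requires a reachable cluster; the '|| echo' keeps this sketch non-fatal.
  helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
    $HELM_ARGS || echo "helm upgrade failed (is a cluster reachable?)"
else
  echo "helm not found; run: helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack $HELM_ARGS"
fi
```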
