@samelie
Last active November 10, 2022 12:20
Project Contour, grpc-web, GKE, cert-manager, tanka

The goal is to get a project off the ground, from just a simple domain to an application running a gRPC server accessible via grpc-web on GKE.

Clone the repository so you have access to each individual yaml file. https://github.com/projectcontour/contour/blob/9c14f3d4a7/examples/contour/README.md

There is only one line to add to make this work on GKE.

Copy the files from examples/contour to a new directory, e.g. my-dir.
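If you're starting from a fresh checkout, the clone-and-copy step might look like this (the paths are assumptions; adjust to wherever you cloned the repo):

```shell
# Clone Contour and copy its example manifests into a working directory.
git clone https://github.com/projectcontour/contour.git
mkdir my-dir
cp contour/examples/contour/*.yaml my-dir/
```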

touch my-dir/my-envoy-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: MY-ENVOY-CONFIG
data:
  config.yaml: |
    domain: contour
    admin:
      access_log_path: /tmp/admin_access.log
      address:
        socket_address: { address: 0.0.0.0, port_value: 9901 }

    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              codec_type: auto
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: local_service
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/" }
                    route:
                      cluster: echo_service
                      timeout: 0s
                      max_stream_duration:
                        grpc_timeout_header_max: 0s
                  cors:
                    allow_origin_string_match:
                    - prefix: "*"
                    allow_methods: GET, PUT, DELETE, POST, OPTIONS
                    allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                    max_age: "1728000"
                    expose_headers: custom-header-1,grpc-status,grpc-message
              http_filters:
              - name: envoy.filters.http.grpc_web
              - name: envoy.filters.http.cors
              - name: envoy.filters.http.router
      clusters:
      - name: echo_service
        connect_timeout: 0.25s
        type: LOGICAL_DNS
        http2_protocol_options: {}
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: echo_service
          endpoints:
            - lb_endpoints:
                - endpoint:
                    address:
                      socket_address:
                        address: node-server
                        port_value: MY-PORT

Notice the name MY-ENVOY-CONFIG and the final port_value. The latter should be the port of your gRPC application.

In my-dir/02-service-envoy.yaml, add loadBalancerIP.

  externalTrafficPolicy: Local
  loadBalancerIP: 203.0.113.10
  ports:
    ...

To obtain an external IP address that will work on GKE, go to the VPC Network settings in the Google Cloud console and create a regional static external IP address, whose value you can then use:

gcloud compute addresses create example-ip --project=MY-PROJECT --region=us-central1

Finally, kubectl apply -f my-dir
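Before moving on, it's worth a couple of sanity checks that the install is healthy and the Envoy service picked up the static IP (the example manifests deploy into the projectcontour namespace):

```shell
# Contour and Envoy pods should be Running, and the EXTERNAL-IP of the
# envoy service should match the loadBalancerIP you set above.
kubectl get pods -n projectcontour
kubectl get svc envoy -n projectcontour
```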

Now that Project Contour is installed, we need to create the Ingress rules that route to our application.

Let's Encrypt and Ingress

DNS

Before doing anything we need to make sure that DNS is set up correctly.

To accomplish this on GKE, we assume that you control your domain and that your nameservers point to the Cloud DNS nameservers.

First, create a Cloud DNS managed zone. You can do this in the Cloud console UI or with the command line:

gcloud beta dns --project=MY-PROJECT managed-zones create example-com --description="" --dns-name="example.com." --visibility="public" --dnssec-state="off"

Next we need to create two A records with the static IP address from the previous step. You can do this in the UI or on the command line:

gcloud beta dns --project=MY-PROJECT record-sets transaction start --zone="example-com"

gcloud beta dns --project=MY-PROJECT record-sets transaction add <MY-STATIC-IP-USED-IN-LOADBALANCER> --name="*.example.com." --ttl="300" --type="A" --zone="example-com"

gcloud beta dns --project=MY-PROJECT record-sets transaction add <MY-STATIC-IP-USED-IN-LOADBALANCER> --name="example.com." --ttl="300" --type="A" --zone="example-com"

gcloud beta dns --project=MY-PROJECT record-sets transaction execute --zone="example-com"

E.g. MY-STATIC-IP-USED-IN-LOADBALANCER=203.0.113.10

That's it. This is necessary because cert-manager, using the secret we're about to create, will perform a DNS-01 challenge to give Let's Encrypt the assurance it needs to issue our certificates.
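You can sanity-check the records before continuing; once propagation finishes (a few minutes at a 300s TTL) dig should return the static IP:

```shell
# Both the apex and wildcard records should resolve to the static IP.
dig +short example.com
dig +short my-grpc-app.example.com

# Or list the records straight from Cloud DNS.
gcloud dns record-sets list --zone="example-com" --project=MY-PROJECT
```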

Cert-manager

We need TLS on our Ingress routes. To do this we are going to use Let's Encrypt, managed by the cert-manager project.

Follow the installation instructions for Helm 3: https://cert-manager.io/docs/installation/helm/

The final command for the Helm install should look something like this:

$ helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.5.4 \
  --set installCRDs=true

Creating the service-account secret

Before we create the certificate and the issuer, we need to create a secret that cert-manager can use to interact with Cloud DNS in our project.

Go to the service account page in the Google Cloud console and create a new service account whose only role is DNS Administrator. Create a new key and download the JSON key to my-dir. Rename it to dns-admin-key.json. Be mindful of where you copy this key.
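If you prefer the command line, the same service account can be sketched with gcloud (the dns-admin name is an assumption; any name works as long as the key file ends up as dns-admin-key.json):

```shell
# Create the service account, grant it DNS admin only, and download a JSON key.
gcloud iam service-accounts create dns-admin --project=MY-PROJECT

gcloud projects add-iam-policy-binding MY-PROJECT \
  --member="serviceAccount:dns-admin@MY-PROJECT.iam.gserviceaccount.com" \
  --role="roles/dns.admin"

gcloud iam service-accounts keys create my-dir/dns-admin-key.json \
  --iam-account="dns-admin@MY-PROJECT.iam.gserviceaccount.com"
```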

A quick note about namespaces in Kubernetes

In this guide we're assuming that a project belongs to a single namespace, so we are going to create all our resources in that namespace.

kubectl create ns my-namespace

Creating the Kubernetes secret that cert-manager will use

cd my-dir

kubectl create secret generic clouddns-service-account --from-file=dns-admin-key.json -n my-namespace

Now we can create the cert-manager certificate and issuer for TLS on our domains.

First we're going to set up the Tanka project, in order to get some templating benefits for the following YAML.

Tanka - cert-manager + Contour

The following will implement the Let's Encrypt cert-manager resources and the Project Contour Ingress within a Tanka project.

Before going any further, go to Tanka and follow their tutorial to get a good idea of the benefits that it brings and how a Tanka project is set up.

Copy this folder contents to my-dir/tanka https://github.com/grafana/tanka/tree/main/examples/prom-grafana

cd my-dir/tanka

Our Tanka project is going to be named my-grpc-app. Substitute this for yours.

mkdir environments/my-grpc-app && mkdir lib/my-grpc-app

The following files we're going to create are the library files used by our environments. You'll notice that there are quite a few config values passed into these jsonnet functions; we will complete the config when we actually implement our application in Tanka.

This guide assumes that you have a Service and a Deployment created for the gRPC server application. In addition, as you will see, it is advisable to have an additional application that will be exposed to the web.

The service must have the following annotation. E.g. MY-GRPC-APP-PORT=50051

annotations: {
  'contour.heptio.com/upstream-protocol.h2': 'MY-GRPC-APP-PORT,grpc',
},
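For context, here is a minimal sketch of the kind of Service this refers to (the node-server name and port 50051 are assumptions carried over from the Envoy config above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-server
  namespace: my-namespace
  annotations:
    # Tells Contour to speak HTTP/2 (required for gRPC) to the listed upstream ports.
    contour.heptio.com/upstream-protocol.h2: "50051,grpc"
spec:
  selector:
    app: node-server
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
```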

touch lib/contour/ingress.libsonnet

{

 // create the object for contour if not present
  _contour+:: {

    domain: {
      new(c): {
        apiVersion: 'projectcontour.io/v1',
        kind: 'HTTPProxy',
        metadata: {
          annotations: std.mergePatch({
            'cert-manager.io/cluster-issuer': 'letsencrypt',
            'kubernetes.io/ingress.class': 'contour',
            'ingress.kubernetes.io/custom-response-headers': 'Access-Control-Allow-Origin:* || Access-Control-Allow-Methods:POST, GET, HEAD, OPTIONS, PUT, DELETE',
            'ingress.kubernetes.io/force-ssl-redirect': "true",
            'projectcontour.io/upstream-protocol.h2c': "443",
            'kubernetes.io/ingress.global-static-ip-name': $._config.staticIpName,
            'kubernetes.io/tls-acme': 'true',
          }, c.annotations),
          name: c.name,
        },
        spec: {
          virtualhost: std.prune({
            fqdn: c.domain,
            corsPolicy: if 'cors' in c then c.cors else null,
            tls: c.tls,
          }),
          routes: c.routes,
        },
      },
    },
  },
}

touch lib/letsencrypt.libsonnet

{

  local secret = $.core.v1.secret,

  _le+:: {
    issuerStaging: {
      new(): {
        apiVersion: 'cert-manager.io/v1alpha2',
        kind: 'Issuer',
        metadata:
          {
            name: 'letsencrypt-staging',
          },
        spec:
          {
            acme:
              {
                email: $._config.email,
                privateKeySecretRef:
                  {
                    name: 'letsencrypt-staging-secret',
                  },
                server: 'https://acme-staging-v02.api.letsencrypt.org/directory',
                solvers: [
                  {
                    dns01:
                      {
                        clouddns:
                          {
                            project: $._config.projectId,
                            serviceAccountSecretRef:
                              {
                                key: $._config.serviceAccountSecretRef,
                                name: 'clouddns-service-account',
                              },
                          },
                      },
                  },
                ],
              },
          },
      },
    },
    issuerProduction: {
      new(): {
        apiVersion: 'cert-manager.io/v1alpha2',
        kind: 'Issuer',
        metadata:
          {
            name: 'letsencrypt-production',
          },
        spec:
          {
            acme:
              {
                email: $._config.email,
                privateKeySecretRef:
                  {
                    name: 'letsencrypt-production-secret',
                  },
                server: 'https://acme-v02.api.letsencrypt.org/directory',
                solvers: [
                  {
                    dns01:
                      {
                        clouddns:
                          {
                            project: $._config.projectId,
                            serviceAccountSecretRef:
                              {
                                key: $._config.serviceAccountSecretRef,
                                name: 'clouddns-service-account',
                              },
                          },
                      },
                  },
                ],
              },
          },
      },
    },
    certificate: {
      new(domains, env="staging"): {
        apiVersion: 'cert-manager.io/v1alpha2',
        kind: 'Certificate',
        metadata: {
          name: env + '-cert',
        },
        spec: {
          dnsNames: domains,
          issuerRef: {
            kind: 'Issuer',
            name: 'letsencrypt-'+env ,
          },
          secretName: 'letsencrypt-'+ env +'-tls',
        },
      },
    },
  },
}

touch environments/my-grpc-app/main.jsonnet

(import 'ksonnet-util/kausal.libsonnet') +
(import 'letsencrypt.libsonnet') +
(import 'contour/ingress.libsonnet') +
// THE CONFIG !!!!!!!
{
  _config+:: {
    namespace: 'my-namespace',
    appName:'my-grpc-app',
    projectId: 'MY-PROJECT',
    email: '[email protected]',
    environment: 'prod',
    serviceAccountSecretRef: 'dns-admin-key.json',
    clouddnsServiceAccount: 'clouddns-service-account',
    staticIpName: 'example-ip',
  },

  local name = $._config.namespace,

  my_namespace: {
    apiVersion: 'v1',
    kind: 'Namespace',
    metadata: {
      name: $._config.namespace,
    },
  },

  
  ingGrpcApp: $._contour.domain.new(
    {
      name: $._config.appName,
      annotations: {},
      domain: 'my-grpc-app.example.com',
      tls: { secretName: 'letsencrypt-production-tls' },
      cors: {
        allowCredentials: true,
        allowOrigin: ['*'],
        allowMethods: ['OPTIONS', 'POST'],
        allowHeaders: [
          'Accept',
          'Accept-Version',
          'Content-Length',
          'Access-Control-Allow-Origin',
          'origin',
          'keep-alive',
          'user-agent',
          'cache-control',
          'content-type',
          'content-transfer-encoding',
          'custom-header-1',
          'x-accept-content-transfer-encoding',
          'x-accept-response-streaming',
          'x-user-agent',
          'x-grpc-web',
          'grpc-timeout',
          'access-control-request-method',
          'access-control-expose-headers',
          'Access-Control-Allow-Credentials',
          'Content-MD5',
          'Authorization',
          'Content-Type',
          'Date',
          'TE',
          'trailers',
          'trailer',
          'grpc-status',
          'grpc-message',
          'grpc-timeout',
          'X-Auth-Token',
        ],
        exposeHeaders: [
          'Content-Type',
          'X-Auth-Token',
          'origin',
          'access-control-allow-credentials',
          'date',
          'TE',
          'trailers',
          'trailer',
          'vary',
          'authorization',
          'grpc-accept-encoding',
          'Access-Control-Allow-Origin',
          'grpc-status',
          'grpc-timeout',
          'grpc-message',
        ],
      },
      routes: [{
        conditions: [{
          prefix: '/',
        }],
        services: [
          {
            name: 'MY-GRPC-APP-SERVICE-NAME',
            port: 50051,  // MY-GRPC-APP-SERVICE-PORT-NUMBER
            protocol: 'h2c',
          },
        ],
      }],
    }
  ),


  issuerStaging: $._le.issuerStaging.new(),
  issuerProduction: $._le.issuerProduction.new(),
  certificateStaging: $._le.certificate.new([
    'example.com',
    '*.example.com',
  ]),
  certificateProduction: $._le.certificate.new([
    'example.com',
    '*.example.com',
  ], 'production'),

}

Double-check that all the config variables match across the files.
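Before applying anything, Tanka can render the manifests locally so you can eyeball them (tk is the Tanka CLI binary):

```shell
# Render the environment to plain YAML, then show what would change on the cluster.
tk show environments/my-grpc-app
tk diff environments/my-grpc-app
```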

Fixing Let's Encrypt for new domains

Let's Encrypt will only issue a certificate if there is an application running at the root (/) of your domain that returns a response.

Separately from this guide, create a Service and a Deployment for such an application if you don't already have one.

To make an Ingress that routes the root of the domain to that service, add this block to the main file.

  ingRoot: $._contour.domain.new(
    {
      name: 'root-app',
      annotations: {},
      domain: 'example.com',
      tls: { secretName: 'letsencrypt-production-tls' },
      routes: [{
        conditions: [{
          prefix: '/',
        }],
        services: [
          {
            name: 'root-app-svc',
            port: 80,
          },
        ],
      }],
    }
  ),

It may take a little while for Let's Encrypt to issue the certificates, but you can check their progress with kubectl describe commands.
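For example, assuming the names from the main file above (the Certificate is named production-cert because of the env + '-cert' convention):

```shell
# Follow cert-manager's progress through the DNS-01 challenge.
kubectl get certificates -n my-namespace
kubectl describe certificate production-cert -n my-namespace
kubectl get challenges -n my-namespace
```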

Conclusion

And that's it, we've set it up. This main file represents our application's config, and we're now ready to create these resources within our namespace on the cluster.

If there is a Deployment resource exposing port 50051, then the service will connect to it.

Here is the final command that will apply these resources to the cluster; you'll have a chance to verify the diff before continuing.

tk apply environments/my-grpc-app

Your gRPC application will now be accessible via grpc-web at my-grpc-app.example.com.
