In this lab, we will see how to automatically generate signed SSL certificates for the HTTP applications running in your Kubernetes cluster. To do this, we will deploy a tool called cert-manager. This awesome tool, developed by Jetstack, automates the issuance of signed SSL certificates via Let's Encrypt.
Requirements
For this lab, you will need a working Kubernetes cluster. If you don't already have one, you can follow the Install and configure a multi-master Kubernetes cluster with kubeadm article, or the Install and manage automatically a Kubernetes cluster on VMware vSphere with Terraform and Kubespray article if you are using VMware vSphere.
You will also need a load balancer for the Kubernetes services. I suggest using MetalLB if your Kubernetes cluster is deployed on premises. You can follow the article Install and configure MetalLB as a load balancer for Kubernetes if needed.
The last requirement is a public domain name whose DNS configuration you are able to edit.
Lab architecture
As you can see in the diagram, I am using OPNSense as the firewall for my lab, but it is possible to use other products for this. I am also using OPNSense as a private DNS server for all my Kubernetes nodes.
Configuration of the public DNS
We need to configure two subdomains in our public DNS zone, k8s.inkubate.io and *.k8s.inkubate.io. Both of these records will point to the public IP of the OPNSense firewall.
I am using Gandi as my public DNS provider, and my OPNSense public IP is 213.239.213.126. Here is a screenshot of my configuration:
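If your provider exposes the zone file directly, the equivalent records would look something like this (standard zone-file syntax with an arbitrary 1800 second TTL; adapt the names and the IP to your own setup):
k8s    1800 IN A 213.239.213.126
*.k8s  1800 IN A 213.239.213.126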
Deploying the Nginx ingress controller
We are going to deploy Nginx as a Kubernetes ingress controller. This will allow us to route incoming traffic to different Kubernetes services depending on the subdomain requested by the end user.
1- On your client machine, create the ingress-controller.yaml file.
$ vim ingress-controller.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.4
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-system
  labels:
    app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: default-http-backend
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-system
  labels:
    app: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-system
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-system
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: ingress-nginx
2- Deploy the Nginx ingress controller.
$ kubectl apply -f ingress-controller.yaml
3- Check that the Nginx ingress pod is running.
$ kubectl get pods -n ingress-system
NAME                                    READY     STATUS    RESTARTS   AGE
default-http-backend-846b65fb5f-2nc62   1/1       Running   0          11s
nginx-ingress-controller-dc94q          1/1       Running   0          11s
4- Check that the Nginx ingress service got an external IP and note this IP.
$ kubectl get services -n ingress-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
default-http-backend   ClusterIP      10.233.40.88    <none>         80/TCP                       11s
ingress-nginx          LoadBalancer   10.233.41.188   192.168.10.1   80:31052/TCP,443:32342/TCP   11s
In my case the Nginx ingress controller service got the IP 192.168.10.1 from the MetalLB load balancer.
Configuring NAT on OPNSense
1- Connect to your OPNSense web interface.
2- Browse to Firewall, NAT, Port Forward, and click on Add.
3- Create a port forward for the HTTP traffic. Leave every option at its default except the Destination, the Destination port range, the Redirect target IP, and the Redirect target port. Set the Redirect target IP to the external IP of your Nginx ingress service.
4- Create the same port forward but for the HTTPS traffic.
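For reference, the two rules in my lab end up looking like this (I am assuming the Destination is set to the WAN address; the redirect target is the MetalLB IP noted earlier):
Interface: WAN   Protocol: TCP   Destination: WAN address   Destination port: 80 (HTTP)    Redirect target IP: 192.168.10.1   Redirect port: 80
Interface: WAN   Protocol: TCP   Destination: WAN address   Destination port: 443 (HTTPS)  Redirect target IP: 192.168.10.1   Redirect port: 443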
5- Apply the new configuration.
Configuring the private DNS wildcard entry on OPNSense
1- Browse to Services, Unbound DNS, General, and add the following custom options to configure the wildcard DNS entry.
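The options below create the wildcard entry: a local-zone of type redirect makes Unbound answer for k8s.inkubate.io and every subdomain of it with the A record given in local-data, which points to the external IP of the Nginx ingress service (adapt the domain and the IP to your environment):
server:
  local-zone: "k8s.inkubate.io" redirect
  local-data: "k8s.inkubate.io 86400 IN A 192.168.10.1"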
2- Save the new configuration and apply the changes.
Testing the Nginx ingress controller
1- Create the nginx-ingress-test.yaml file and change nginx.k8s.inkubate.io to match your domain name.
$ vim nginx-ingress-test.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:1.15.2-alpine
          name: nginx
          ports:
            - containerPort: 80
              name: http
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
    - host: nginx.k8s.inkubate.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
2- Deploy the test application.
$ kubectl apply -f nginx-ingress-test.yaml
3- Open a web browser and browse to http://nginx.k8s.inkubate.io (modify the URL to reflect your own domain name).
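Alternatively, you can test from the command line (again, substitute your own domain); the response should be an HTTP/1.1 200 OK served by Nginx:
$ curl -I http://nginx.k8s.inkubate.io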
4- Delete the test application.
$ kubectl delete -f nginx-ingress-test.yaml
Deploying the certificate manager application
1- Create the file cert-manager.yaml.
$ vim cert-manager.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: "cert-manager"
  labels:
    name: "cert-manager"
    certmanager.k8s.io/disable-validation: "true"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cert-manager
  namespace: "cert-manager"
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.certmanager.k8s.io
  annotations:
    "helm.sh/hook": crd-install
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Certificate
    plural: certificates
    shortNames:
      - cert
      - certs
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: challenges.certmanager.k8s.io
  annotations:
    "helm.sh/hook": crd-install
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  names:
    kind: Challenge
    plural: challenges
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.certmanager.k8s.io
  annotations:
    "helm.sh/hook": crd-install
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  names:
    kind: ClusterIssuer
    plural: clusterissuers
  scope: Cluster
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: issuers.certmanager.k8s.io
  annotations:
    "helm.sh/hook": crd-install
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  names:
    kind: Issuer
    plural: issuers
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: orders.certmanager.k8s.io
  annotations:
    "helm.sh/hook": crd-install
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  names:
    kind: Order
    plural: orders
  scope: Namespaced
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: cert-manager
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
rules:
  - apiGroups: ["certmanager.k8s.io"]
    resources: ["certificates", "issuers", "clusterissuers", "orders", "challenges"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["configmaps", "secrets", "events", "services", "pods"]
    verbs: ["*"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cert-manager
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-manager
subjects:
  - name: cert-manager
    namespace: "cert-manager"
    kind: ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cert-manager-view
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["certmanager.k8s.io"]
    resources: ["certificates", "issuers"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cert-manager-edit
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["certmanager.k8s.io"]
    resources: ["certificates", "issuers"]
    verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: cert-manager
  namespace: "cert-manager"
  labels:
    app: cert-manager
    chart: cert-manager-v0.6.0-dev.2
    release: cert-manager
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-manager
      release: cert-manager
  template:
    metadata:
      labels:
        app: cert-manager
        release: cert-manager
      annotations:
    spec:
      serviceAccountName: cert-manager
      containers:
        - name: cert-manager
          image: "quay.io/jetstack/cert-manager-controller:v0.5.0"
          imagePullPolicy: IfNotPresent
          args:
            - --cluster-resource-namespace=$(POD_NAMESPACE)
            - --leader-election-namespace=$(POD_NAMESPACE)
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
2- Deploy the certificate manager.
$ kubectl apply -f cert-manager.yaml
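Before creating the issuers, check that the cert-manager pod is up and running:
$ kubectl get pods -n cert-manager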
3- Create a staging certificate issuer. Replace the email address with your own; Let's Encrypt uses it for expiry notices.
$ vim letsencrypt-staging.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    email: k8s@inkubate.io
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
$ kubectl apply -f letsencrypt-staging.yaml
4- Create a production certificate issuer, again with your own email address.
$ vim letsencrypt-production.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
  namespace: cert-manager
spec:
  acme:
    email: k8s@inkubate.io
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-production
    server: https://acme-v02.api.letsencrypt.org/directory
$ kubectl apply -f letsencrypt-production.yaml
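Both issuers should now register an ACME account against their respective Let's Encrypt endpoints. You can confirm this by describing them; the status section should indicate that the ACME account was registered:
$ kubectl get clusterissuers
$ kubectl describe clusterissuer letsencrypt-staging letsencrypt-production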
Testing the staging certificate manager
1- Create the nginx-staging.yaml file and change nginx.k8s.inkubate.io to match your domain name.
$ vim nginx-staging.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:1.15.2-alpine
          name: nginx
          ports:
            - containerPort: 80
              name: http
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-staging"
spec:
  rules:
    - host: nginx.k8s.inkubate.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
  tls:
    - secretName: nginx-staging-tls
      hosts:
        - nginx.k8s.inkubate.io
2- Deploy the application.
$ kubectl apply -f nginx-staging.yaml
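Behind the scenes, the ingress-shim component of cert-manager sees the annotations on the ingress and creates a Certificate resource named after the secretName. You can follow the issuance with:
$ kubectl get certificates
$ kubectl describe certificate nginx-staging-tls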
3- After a minute or two, open a web browser and browse to https://nginx.k8s.inkubate.io (modify the URL to reflect your own domain name).
4- Accept the certificate warning. The certificate is not trusted by your browser because it was issued by the Let's Encrypt staging environment, but it proves that the whole signing workflow works.
5- Delete the test application.
$ kubectl delete -f nginx-staging.yaml
Testing the production certificate manager
1- Create the nginx-production.yaml file and change nginx.k8s.inkubate.io to match your domain name.
$ vim nginx-production.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:1.15.2-alpine
          name: nginx
          ports:
            - containerPort: 80
              name: http
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
spec:
  rules:
    - host: nginx.k8s.inkubate.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
  tls:
    - secretName: nginx-production-tls
      hosts:
        - nginx.k8s.inkubate.io
2- Deploy the application.
$ kubectl apply -f nginx-production.yaml
3- After a minute or two, open a web browser and browse to https://nginx.k8s.inkubate.io (change the URL to reflect your own domain name). This time the certificate is signed by the Let's Encrypt production CA and you should see a green lock.
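You can also verify the issuer from the command line; the issuer field should name the Let's Encrypt production CA (at the time of writing, something like Let's Encrypt Authority X3):
$ openssl s_client -connect nginx.k8s.inkubate.io:443 -servername nginx.k8s.inkubate.io </dev/null 2>/dev/null | openssl x509 -noout -issuer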
4- Delete the test application.
$ kubectl delete -f nginx-production.yaml
Thanks to cert-manager and Let's Encrypt, you now have a way to automatically secure the web applications running in your Kubernetes cluster.