Admission control is the process of allowing - or blocking - workloads from running in the cluster. You can use this to enforce your own rules. You might want to block all containers unless they're using an image from an approved registry, or block Pods which don't include resource limits in the spec.
You can do this with admission controller webhooks - HTTP servers which run inside the cluster and get invoked by the API server to apply rules when objects are created. Admission controllers can use your own logic, or a standard tool like Open Policy Agent.
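The exchange between the API server and the webhook uses the AdmissionReview object from the `admission.k8s.io/v1` API. In outline it looks like this (the UID and message here are illustrative):

```yaml
# request POSTed by the API server to the webhook (abbreviated)
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
request:
  uid: 705ab4f5-6393-11e8-b7cc-42010a800002   # illustrative UID
  kind: {group: "", version: "v1", kind: "Pod"}
  operation: CREATE
  object: {}   # the full Pod spec under review (omitted here)
---
# response the webhook returns
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: 705ab4f5-6393-11e8-b7cc-42010a800002   # must echo the request UID
  allowed: false
  status:
    message: "Pod rejected by policy"
```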
Admission controller webhooks give you the most flexibility, because you can run any code you like. You'll typically run them inside the cluster using standard Deployment and Service objects.
But the Kubernetes API server will only call webhooks if they're served over HTTPS using a trusted certificate. A nice way to do that is with cert-manager, a CNCF project which generates TLS certificates and creates them as Secrets.
kubectl apply -f labs/admission/specs/cert-manager
Cert-manager uses Issuers to define how certificates get created. You can configure this to use a real certificate provider - e.g. Let's Encrypt - but we'll use a self-signed issuer:
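A self-signed Issuer is about as minimal as a cert-manager resource gets - the spec in the lab folder will look something like this (the name `selfsigned` matches the commands below):

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}   # certificates from this issuer sign themselves - fine for internal use
```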
📋 Create the issuer and print the details.
It's a custom resource, but it's all YAML to Kubernetes:
kubectl apply -f labs/admission/specs/cert-manager/issuers
And you work with it in the usual way:
kubectl get issuers

kubectl describe issuer selfsigned
You'll see a lot of output, including the status showing the issuer is ready to use.
Our admission controller is a NodeJS web app, built to match the webhook API spec (source code on GitHub):
- `webhook-server/admission-webhook.yaml` - defines a Deployment and Service. There's no RBAC or special permissions - this is a standalone web server - but it does run over HTTPS, expecting to find the TLS cert in a Secret
- `webhook-server/certificate.yaml` - will create the certificate using the self-signed issuer and store it in the Secret the Pod expects to use. cert-manager will take care of creating and rotating this cert.
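A Certificate resource for this setup would look roughly like this - a sketch only, with names assumed to match the lab (the Secret name `admission-webhook-cert` and the Service DNS name both appear again below):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: admission-webhook
spec:
  secretName: admission-webhook-cert    # Secret where cert-manager stores the TLS keypair
  dnsNames:
    - admission-webhook.default.svc     # must match the Service's DNS name
  issuerRef:
    name: selfsigned
    kind: Issuer
```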
📋 Deploy the webhook server and confirm the certificate Secret gets created.
The specs are in the `labs/admission/specs/webhook-server` folder:
kubectl apply -f labs/admission/specs/webhook-server
Check the certificate objects:
kubectl get certificates
You should see that the cert is Ready and the output shows the Secret name where it is stored:
kubectl describe secret admission-webhook-cert
The Secret contains the TLS certificate and key, and the CA certificate for the issuer.
This is just a standard web server, so we can test the HTTPS setup by running a sleep Pod:
kubectl apply -f labs/admission/specs/sleep

# you'll get a security error here:
kubectl exec sleep -- curl https://admission-webhook.default.svc
The error means the certificate has been applied, but curl doesn't trust the self-signed issuer.
The admission controller is running, but it's not doing anything yet. It needs to be configured as a webhook for the Kubernetes API server to call:
- `validating-webhook.yaml` - registers the server's `/validate` path to be called when Pods are created or updated; the annotation is there to configure the self-signed cert as trusted.

This is a validating webhook - the logic in the server will block any Pods from being created where the spec does not set the `automountServiceAccountToken` field to `false`.
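The registration object would look along these lines - a sketch, where the `inject-ca-from` value assumes the Certificate is named `admission-webhook` in the `default` namespace (the webhook and configuration names are taken from the output later in the lab):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: servicetokenpolicy
  annotations:
    cert-manager.io/inject-ca-from: default/admission-webhook   # cert-manager fills in the caBundle
webhooks:
  - name: servicetokenpolicy.courselabs.co
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: admission-webhook
        namespace: default
        path: /validate
    sideEffects: None
    admissionReviewVersions: ["v1"]
```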
Apply the validating webhook:
kubectl apply -f labs/admission/specs/validating-webhook
Check the details and you'll see cert-manager has applied the CA cert from the certificate it generated:
kubectl describe validatingwebhookconfiguration servicetokenpolicy
Now that the webhook is registered, Kubernetes won't run any Pods that don't meet the rules - like the ones in this whoami app Deployment.
Create the application objects:
kubectl apply -f labs/admission/specs/whoami
📋 The app won't run. Debug it to find the error message generated by the admission controller.
Check the Deployment:
kubectl get deploy whoami

kubectl describe deploy whoami
There should be two Pods, but none are ready. The events show the ReplicaSet has been created and scaled up, so there are no errors here.
Check the RS:
kubectl describe rs -l app=whoami
Here you see the message from the admission controller: Error creating: admission webhook "servicetokenpolicy.courselabs.co" denied the request: automountServiceAccountToken must be set to false
This app won't fix itself - the ReplicaSet will keep trying to create Pods and they will keep getting rejected by the admission controller.
To get it running you need to change the Pod spec - you can edit the Deployment or apply a new spec which meets the validation rules:
kubectl apply -f labs/admission/specs/whoami/fix

kubectl get po -l app=whoami --watch
Now the Pods get created.
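The change that matters is a single field in the Deployment's Pod template - something along these lines (the container details are placeholders):

```yaml
# Pod template inside the Deployment spec
spec:
  automountServiceAccountToken: false   # the field the validating webhook checks for
  containers:
    # ...existing container spec unchanged
```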
Validating webhooks are a powerful way of ensuring your apps meet your policies - any object type can be targeted and the whole spec is sent to the webhook, so you can use it for security, performance or reliability rules.
Validating webhooks either allow an object to be created or block it. The other type of admission control silently edits the incoming object spec, using a mutating webhook.
The webhook server we're running has mutation logic too, served from the `/mutate` endpoint on the server.
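Mutating webhooks respond with a JSON Patch, base64-encoded in the AdmissionReview response (with `patchType: JSONPatch`). A hypothetical patch matching this policy's behaviour could be prepared like this - the real server's patch may differ:

```shell
# a JSON Patch a mutating webhook might return (hypothetical example)
PATCH='[{"op":"add","path":"/spec/securityContext","value":{"runAsNonRoot":true}}]'

# the AdmissionReview response carries the patch base64-encoded
echo -n "$PATCH" | base64
```

The API server decodes and applies the patch to the object before it gets stored, which is why the running Pod can differ from the spec you submitted.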
Deploy the new webhook:
kubectl apply -f labs/admission/specs/mutating-webhook

kubectl describe mutatingwebhookconfiguration nonrootpolicy
There's no information about what this policy actually does...
Try running another app - using this spec for the Pi website:
kubectl apply -f labs/admission/specs/pi
📋 This app won't run either. Check the objects and the spec to try to find out what went wrong.
Look at the Pods:
kubectl get po -l app=pi-web
You'll see the status is CreateContainerConfigError. Check the Pod details:
kubectl describe po -l app=pi-web
You'll see an error message in the events: Error: container has runAsNonRoot and image will run as root.
That means the container image uses the root user by default, but the Pod spec is set with a security context so it won't run containers as root.
The Pod spec in the Deployment doesn't say anything about non-root users - that setting has been applied by the mutating webhook.
You can get the app running by applying this updated spec:
kubectl apply -f labs/admission/specs/pi/fix
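One way to satisfy a `runAsNonRoot` check is to give the container an explicit non-root user ID in the Pod spec - the fix will do something along these lines (the UID here is illustrative, and the container details are placeholders):

```yaml
# Pod template - an explicit non-root UID passes the runAsNonRoot check
spec:
  securityContext:
    runAsUser: 1001   # any non-zero UID counts as non-root
  containers:
    # ...existing container spec unchanged
```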
Custom webhooks have two drawbacks: you need to write the code yourself, which adds to your maintenance estate; and their rules are not discoverable through the cluster, so you'll need external documentation.
OPA Gatekeeper is an alternative which implements admission control using generic rule descriptions (in a language called Rego).
We'll deploy admission rules with Gatekeeper - first delete all of the custom webhooks (ours and cert-manager's):
kubectl delete ns,all,ValidatingWebhookConfiguration,MutatingWebhookConfiguration -l kubernetes.courselabs.co=admission

kubectl delete crd,ValidatingWebhookConfiguration,MutatingWebhookConfiguration -l app.kubernetes.io/instance=cert-manager
OPA Gatekeeper is another complex component, so you trade the overhead of managing it against the work of building and running your own webhook servers:
kubectl apply -f labs/admission/specs/opa-gatekeeper
📋 What custom resource types does Gatekeeper install?
Check the CustomResourceDefinitions:
kubectl get crd
You'll see a few - the main one we work with is the ConstraintTemplate.
There are two parts to applying rules with Gatekeeper:
- Create a ConstraintTemplate which defines a generic constraint (e.g. containers in a given namespace can only use a given image registry)
- Create a Constraint from the template (e.g. containers in the `apod` namespace can only use images from `courselabs` repos on Docker Hub)
The rule definition is done with the Rego generic policy language:
- `requiredLabels-template.yaml` - defines a simple (!) template to require labels on an object
- `resourceLimits-template.yaml` - defines a more complex template requiring container objects to have resources set
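In outline, a required-labels template pairs a parameters schema with a Rego rule - this is a simplified sketch, not the lab's exact template:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: requiredlabels
spec:
  crd:
    spec:
      names:
        kind: RequiredLabels          # becomes the kind of each Constraint
      validation:
        openAPIV3Schema:              # schema for the Constraint's parameters
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package requiredlabels

        # one violation per required label missing from the object
        violation[{"msg": msg}] {
          required := input.parameters.labels[_]
          not input.review.object.metadata.labels[required]
          msg := sprintf("missing required label: %v", [required])
        }
```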
Create the templates:
kubectl apply -f labs/admission/specs/opa-gatekeeper/templates
📋 Check the custom resources again; how do you think Gatekeeper stores constraints in Kubernetes?
kubectl get crd
You see new CRDs for the constraint templates:
Gatekeeper creates a CRD for each constraint template, so each constraint becomes a Kubernetes resource.
Here are the constraints which use the templates:
- `requiredLabels.yaml` - requires `version` labels on Pods, and a `kubernetes.courselabs.co` label on namespaces
- `resourceLimits.yaml` - requires resources to be specified for any Pods in the namespaces it targets
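A Constraint built from the template is much simpler - a sketch of what the namespace-label rule could look like (the lab's actual spec may differ; the object name matches the commands below):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: RequiredLabels                   # the kind defined by the ConstraintTemplate
metadata:
  name: requiredlabels-ns
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]           # this constraint only checks namespaces
  parameters:
    labels: ["kubernetes.courselabs.co"]
```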
Deploy the constraints:
kubectl apply -f labs/admission/specs/opa-gatekeeper/constraints
📋 Print the details of the required labels namespace constraint. Is it clear what it's enforcing?
The constraint type is a CRD so you can list objects in the usual way:
kubectl get requiredlabels

kubectl describe requiredlabels requiredlabels-ns
You'll see all the existing violations of the rule, and it should be clear what's required - the label on each namespace.
Now we have OPA Gatekeeper in place, we can see how it works.
Try deploying the APOD app from the specs for this lab:
kubectl apply -f labs/admission/specs/apod
It will fail because the resources don't meet the constraints we have in place. Your job is to fix up the specs and get the app running - without making any changes to policies :)
Remove all the lab's namespaces:
kubectl delete ns -l kubernetes.courselabs.co=admission
And the CRDs:
kubectl delete crd -l gatekeeper.sh/system

kubectl delete crd -l gatekeeper.sh/constraint