You don't usually create Pods directly because that isn't flexible - you can't change a Pod to release application updates, and you can only scale them by manually deploying new Pods. Don't do that.
Instead you'll use a controller - a Kubernetes object which manages other objects. The controller you'll use most for Pods is the Deployment, which has features to support upgrades and scale.
Deployments use a template to create Pods, and a label selector to identify the Pods they own.
Deployment definitions have the usual metadata.
The spec is more interesting - it includes a label selector but also a Pod spec. The Pod spec is the same format you would use to define a Pod in YAML, except you don't include a name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: app
          image: sixeyed/whoami:21.04.01
The labels in the Pod metadata must include the labels in the selector for the Deployment, or you'll get an error when you try to apply the YAML.
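For example, a spec like this would be rejected, because the Pod labels don't include the selector's app=whoami label (a hypothetical broken spec, for illustration only):

spec:
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami-v2   # does not include app=whoami, so the apply fails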
- spec.selector - list of labels to find Pods
- spec.template - the template to use to create Pods
- spec.template.metadata - metadata for the Pods - no name field
- spec.template.metadata.labels - labels to apply to Pods, must include those in the selector
- spec.template.spec - full Pod spec

Your cluster should be empty if you cleared down the last lab. The spec above describes a Deployment which creates a whoami Pod.
Create the Deployment and it will create the Pod:
kubectl apply -f labs/deployments/specs/deployments/whoami-v1.yaml
kubectl get pods -l app=whoami
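You'll see the new Pod listed - something like this, with a different name on your cluster:

NAME                      READY   STATUS    RESTARTS   AGE
whoami-75f4d4875c-x6k9j   1/1     Running   0          20s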
Deployments apply their own naming system when they create Pods - the names end with a random string
Deployments are first-class objects; you work with them in Kubectl in the usual way.
📋 Print the details of the Deployment.
kubectl get deployments
kubectl get deployments -o wide
kubectl describe deploy whoami
The events talk about another object called a ReplicaSet - we'll get to that soon
The Deployment knows how to create Pods from the template in the spec. You can create as many replicas - different Pods created from the same Pod spec - as your cluster can handle.
You can scale imperatively with Kubectl:
kubectl scale deploy whoami --replicas 3
kubectl get pods -l app=whoami
But now your running Deployment object is different from the spec you have in source control. This is bad.
It's better to make the changes declaratively in YAML.
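The whoami-v1-scale.yaml spec adds a replicas field to the same Deployment - a minimal sketch, assuming only the replica count differs from v1:

spec:
  replicas: 2   # desired Pod count - the rest of the spec matches v1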
📋 Update the Deployment using that spec and check the Pods again.
kubectl apply -f labs/deployments/specs/deployments/whoami-v1-scale.yaml
kubectl get pods -l app=whoami
The Deployment removes one Pod, because the current state (3 replicas) does not match the desired state in the YAML (2 replicas)
Because Pod names are random, the easiest way to manage them with Kubectl is to use labels. We've done that with get, and it works for logs too:
kubectl logs -l app=whoami
And if you need to run commands in the Pod, you can use exec at the Deployment level:
# this will fail
kubectl exec deploy/whoami -- /app/whoami
Kubernetes runs the command, but it errors. You can't run two copies of this app in one container, because they both try to bind to the same port
The Pod spec in the Deployment template applies a label.
📋 Print details - including IP address and labels - for all Pods with the label app=whoami.
kubectl get pods -o wide --show-labels -l app=whoami
The label selector in this lab's Services matches that label too.
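The relevant part of each Service spec will look like this - a sketch showing just the selector:

spec:
  selector:
    app: whoami   # matches the label set by the Deployment's Pod template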
Deploy the Services and check the Pod IP endpoints:
kubectl apply -f labs/deployments/specs/services/
kubectl get endpoints whoami-np whoami-lb
So you can still access the app from your machine:
# either
curl http://localhost:8080
# or
curl http://localhost:30010
Application updates usually mean a change to the Pod spec - a new container image, or a configuration change. You can't change the spec of a running Pod, but you can change the Pod spec in a Deployment. It makes the change by starting up new Pods and terminating the old ones.
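An update usually means a new image tag in the Pod template - a sketch of the v2 change, assuming that's what this update does (the tag here is a placeholder, not the real one):

spec:
  template:
    spec:
      containers:
        - name: app
          image: sixeyed/whoami:v2   # placeholder tag - see the lab's v2 spec for the real value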
# open a new terminal to monitor the Pods:
kubectl get po -l app=whoami --watch
# apply the change:
kubectl apply -f labs/deployments/specs/deployments/whoami-v2.yaml
You'll see new Pods created, and when they're running the old Pods are terminated. Try the app again - you'll see a smaller output, and if you repeat the request you'll see responses load-balanced between the new Pods:
# either
curl http://localhost:8080
# or
curl http://localhost:30010
Deployments store previous specifications in the Kubernetes database, and you can easily roll back if your release is broken:
kubectl rollout history deploy/whoami
kubectl rollout undo deploy/whoami
kubectl get po -l app=whoami
Try the app again and you'll see we're back to the full output
Rolling updates aren't always what you want - they mean the old and new versions of your app are running at the same time, both processing requests.
You may want a blue-green deployment instead, where you have both versions running but only one is receiving traffic.
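A common way to do that is with an extra version label on the Pods, so the Service selector pins one version - a sketch, assuming a hypothetical version label:

spec:
  selector:
    app: whoami
    version: v1   # change this to v2 and re-apply the Service to switch traffic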
Write your own Deployment and Service YAMLs to create a blue-green update for the whoami app. Start by running two replicas for v1 and two for v2, but only the v1 Pods should receive traffic.
Then make your update to switch traffic to the v2 Pods without any changes to Deployments.
Did you notice a pattern in the Pod names in the rollback exercise? When you rolled back your update, you might have seen that the new Pods had the same prefix as the previous set of Pods.
Deployments create the Pod names but they're not totally random - the pattern is [deployment-name]-[template-hash]-[random-suffix]. You can update a Deployment spec without changing the Pod spec (e.g. to set replicas), and that doesn't cause Pod replacement.
When you change the Pod spec in the template, that does mean new Pods - and the Deployment delegates responsibility for creating Pods to a ReplicaSet:
kubectl get replicaset
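You'll see two ReplicaSets after the update and rollback - something like this, with only the current spec scaled up (the hashes are illustrative):

NAME               DESIRED   CURRENT   READY   AGE
whoami-75f4d4875c  2         2         2       15m
whoami-54f78dc589  0         0         0       10m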
The name is the Deployment name plus the template hash
Deployments manage updates by creating ReplicaSets and managing the number of desired Pods for each ReplicaSet. Replaced specs are scaled down to 0, but if a new update matches an old spec, the original ReplicaSet gets re-used.
# in a new terminal:
kubectl get rs --watch
kubectl apply -f labs/deployments/specs/deployments/whoami-v2.yaml
You'll see the rolling update in action - the new ReplicaSet is scaled up incrementally, while the old one is scaled down
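The pace of the rollout is tunable in the Deployment spec. This lab's YAML doesn't set these fields, so the Kubernetes defaults apply - a sketch of what they look like:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%         # extra Pods allowed above the desired count during the update
      maxUnavailable: 25%   # Pods that can be below the desired count during the update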
Cleanup by removing objects with this lab's label:
kubectl delete deploy,svc -l kubernetes.courselabs.co=deployments