While resources such as ReplicaSets, ReplicationControllers, and Services are useful for maintaining the desired state of an application running in pods, they are tied to a particular version of the application. They are not helpful for rolling out a new version. The Deployment object exists to control the release of new versions of the application.
Deployments help you move from one version of an application to another using a user-configurable rollout process. They also use health checks to ensure that the new version of the application is operating correctly, and they enable additional options such as stopping the rollout if too many failures occur.
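The health checks mentioned above are the probes defined on the pod's containers; the deployment controller only counts a new pod as available once its readiness probe passes. As a minimal sketch (the /healthz path here is a hypothetical endpoint, not something the demo application is known to expose), such a probe would sit inside the container definition of the pod template:

```yaml
# Hypothetical readiness probe for a container in the pod template.
# The /healthz path is an assumed endpoint for illustration only.
readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 5   # wait before the first probe
  periodSeconds: 10        # probe every 10 seconds
```

Without such a probe, a pod is considered ready as soon as its containers start, which weakens the safety net a rollout provides.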
Creating a Deployment
Like all objects in Kubernetes, you use a YAML manifest to create a deployment. Below is a sample deployment object that creates three replicas of the application demo-ui01:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ui-deployment
  labels:
    version: "1.0.0"
    app: demoui
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demoui
  template:
    metadata:
      labels:
        app: demoui
    spec:
      containers:
      - name: demoui
        image: docker.io/mohitgoyal/demo-ui01:1.0.0
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
In the above example, kind represents the Kubernetes object type, i.e. Deployment; metadata.name is the name of the deployment object, demo-ui-deployment; and metadata.labels holds the labels assigned to demo-ui-deployment.
The spec.selector field defines how the deployment finds which pods to manage. Here, we have defined that pods matching the label app: demoui will be selected and managed by demo-ui-deployment. The spec.template is the regular pod template used for pod creation and label assignment.
Let's go ahead and create this object:
# create a namespace to segregate our work
cloud_user@d7bfd02ab81c:~$ kubectl create namespace deployment-demo
namespace/deployment-demo created

# create deployment using manifest
cloud_user@d7bfd02ab81c:~/workspace$ kubectl apply -f deployment-simple.yaml -n deployment-demo
deployment.apps/demo-ui-deployment created

# get the deployment object
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get deployment -n deployment-demo
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
demo-ui-deployment   3/3     3            3           9m55s
To see the deployment rollout status, you can use the kubectl rollout status command:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl rollout status deployment demo-ui-deployment -n deployment-demo
deployment "demo-ui-deployment" successfully rolled out
To see all objects (replicasets and pods) created by the deployment object, we can use the following:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get all -n deployment-demo
NAME                                      READY   STATUS    RESTARTS   AGE
pod/demo-ui-deployment-5b8db9bb7b-dnmvp   1/1     Running   0          28m
pod/demo-ui-deployment-5b8db9bb7b-82s86   1/1     Running   0          28m
pod/demo-ui-deployment-5b8db9bb7b-b7r2m   1/1     Running   0          28m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo-ui-deployment   3/3     3            3           28m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-ui-deployment-5b8db9bb7b   3         3         3       28m
We can see that it has created a replicaset named demo-ui-deployment-5b8db9bb7b and 3 replicas of the pod. To see the labels automatically generated for each pod, run kubectl get pods --show-labels:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get pods --show-labels -n deployment-demo
NAME                                  READY   STATUS    RESTARTS   AGE   LABELS
demo-ui-deployment-5b8db9bb7b-dnmvp   1/1     Running   0          31m   app=demoui,pod-template-hash=5b8db9bb7b
demo-ui-deployment-5b8db9bb7b-82s86   1/1     Running   0          31m   app=demoui,pod-template-hash=5b8db9bb7b
demo-ui-deployment-5b8db9bb7b-b7r2m   1/1     Running   0          31m   app=demoui,pod-template-hash=5b8db9bb7b
Notice that we specified only one label, app=demoui, but each pod has two labels. The other label, pod-template-hash=5b8db9bb7b, is added by the deployment controller to every replicaset that it creates or updates. This ensures that the child replicasets of a deployment do not overlap. The pod names, in turn, are formed by appending a unique id to this hash.
Do make sure that the labels defined in spec.selector do not overlap with those of other deployment objects and statefulsets.
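For instance, if a second deployment ran in the same namespace, a sketch of a non-overlapping selector could look like the following (demo-api-deployment and its image are hypothetical names used purely for illustration):

```yaml
# A second, hypothetical deployment whose selector does not overlap
# with the app: demoui selector used by demo-ui-deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demoapi             # distinct from app: demoui
  template:
    metadata:
      labels:
        app: demoapi
    spec:
      containers:
      - name: demoapi
        image: docker.io/mohitgoyal/demo-api01:1.0.0   # hypothetical image
```

If both deployments selected app: demoui, each controller would try to adopt the other's pods and fight over the replica count.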
Scaling a Deployment
We can scale a deployment object to increase or decrease the number of pod replicas. We can do this imperatively using the kubectl scale command, or declaratively by updating the deployment manifest and applying the updated manifest.
For example, we can increase the number of replicas from 3 to 5 in the deployment demo-ui-deployment created above:
# scale replicas from 3 to 5
cloud_user@d7bfd02ab81c:~/workspace$ kubectl scale deployment demo-ui-deployment --replicas=5 -n deployment-demo
deployment.apps/demo-ui-deployment scaled

# verify the deployment status after scaling
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get deployment demo-ui-deployment -n deployment-demo
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
demo-ui-deployment   5/5     5            5           162m

# verify the pods running for the deployment
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get pods -n deployment-demo
NAME                                  READY   STATUS    RESTARTS   AGE
demo-ui-deployment-5b8db9bb7b-dnmvp   1/1     Running   0          162m
demo-ui-deployment-5b8db9bb7b-82s86   1/1     Running   0          162m
demo-ui-deployment-5b8db9bb7b-b7r2m   1/1     Running   0          162m
demo-ui-deployment-5b8db9bb7b-5ldbl   1/1     Running   0          26s
demo-ui-deployment-5b8db9bb7b-nx4nr   1/1     Running   0          26s

# scale down replicas from 5 to 2
cloud_user@d7bfd02ab81c:~/workspace$ kubectl scale deployment demo-ui-deployment --replicas=2 -n deployment-demo
deployment.apps/demo-ui-deployment scaled

# verify the pods running for the deployment
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get pods -n deployment-demo
NAME                                  READY   STATUS        RESTARTS   AGE
demo-ui-deployment-5b8db9bb7b-dnmvp   1/1     Running       0          162m
demo-ui-deployment-5b8db9bb7b-82s86   1/1     Running       0          162m
demo-ui-deployment-5b8db9bb7b-b7r2m   0/1     Terminating   0          162m
demo-ui-deployment-5b8db9bb7b-nx4nr   0/1     Terminating   0          54s

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get pods -n deployment-demo
NAME                                  READY   STATUS    RESTARTS   AGE
demo-ui-deployment-5b8db9bb7b-dnmvp   1/1     Running   0          163m
demo-ui-deployment-5b8db9bb7b-82s86   1/1     Running   0          163m
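The same change can be made declaratively. As a sketch, only the replicas field in the existing manifest needs to change before re-applying it:

```yaml
# In deployment-simple.yaml, change the replica count in the spec...
spec:
  replicas: 5   # was 3
# ...then re-apply the manifest:
#   kubectl apply -f deployment-simple.yaml -n deployment-demo
```

The declarative route keeps the manifest as the source of truth, so a later kubectl apply will not silently undo an imperative scale.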
Updating a Deployment
To release a new version of the software, we need to update the container image in the deployment object. To do this, we modify the spec.template in the deployment manifest and apply the new manifest. For example, let's say we want to update the container image from demo-ui01:1.0.0 to demo-ui01:1.0.1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ui-deployment
  labels:
    version: "1.0.1"
    app: demoui
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demoui
  template:
    metadata:
      labels:
        app: demoui
    spec:
      containers:
      - name: demoui
        image: docker.io/mohitgoyal/demo-ui01:1.0.1
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
# create new release by updating deployment object
cloud_user@d7bfd02ab81c:~/workspace$ kubectl apply -f deployment-simple-v1.0.1.yaml -n deployment-demo
deployment.apps/demo-ui-deployment configured

# check deployment rollout status
cloud_user@d7bfd02ab81c:~/workspace$ kubectl rollout status deployment demo-ui-deployment -n deployment-demo
deployment "demo-ui-deployment" successfully rolled out

# check deployment object status
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get deployment -n deployment-demo
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
demo-ui-deployment   3/3     3            3           3h
We can run kubectl get rs to see that the deployment updated the pods by creating a new replicaset and scaling it up to 3 replicas, while scaling the old replicaset down to 0 replicas:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get rs -n deployment-demo
NAME                            DESIRED   CURRENT   READY   AGE
demo-ui-deployment-68bd594749   3         3         3       4m35s
demo-ui-deployment-5b8db9bb7b   0         0         0       3h4m

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get rs -n deployment-demo -o wide
NAME                            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                 SELECTOR
demo-ui-deployment-68bd594749   3         3         3       5m16s   demoui       docker.io/mohitgoyal/demo-ui01:1.0.1   app=demoui,pod-template-hash=68bd594749
demo-ui-deployment-5b8db9bb7b   0         0         0       3h4m    demoui       docker.io/mohitgoyal/demo-ui01:1.0.0   app=demoui,pod-template-hash=5b8db9bb7b
We can get further details about the deployment with the kubectl describe command:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl describe deployment -n deployment-demo
Name:                   demo-ui-deployment
Namespace:              deployment-demo
CreationTimestamp:      Sun, 09 May 2021 12:42:18 +0000
Labels:                 app=demoui
                        version=1.0.1
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=demoui
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=demoui
  Containers:
   demoui:
    Image:        docker.io/mohitgoyal/demo-ui01:1.0.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   demo-ui-deployment-68bd594749 (3/3 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  18m                deployment-controller  Scaled up replica set demo-ui-deployment-5b8db9bb7b to 5
  Normal  ScalingReplicaSet  52s (x2 over 3h)   deployment-controller  Scaled up replica set demo-ui-deployment-5b8db9bb7b to 3
  Normal  ScalingReplicaSet  52s                deployment-controller  Scaled up replica set demo-ui-deployment-68bd594749 to 1
  Normal  ScalingReplicaSet  48s (x2 over 17m)  deployment-controller  Scaled down replica set demo-ui-deployment-5b8db9bb7b to 2
  Normal  ScalingReplicaSet  48s                deployment-controller  Scaled up replica set demo-ui-deployment-68bd594749 to 2
  Normal  ScalingReplicaSet  45s                deployment-controller  Scaled down replica set demo-ui-deployment-5b8db9bb7b to 1
  Normal  ScalingReplicaSet  45s                deployment-controller  Scaled up replica set demo-ui-deployment-68bd594749 to 3
  Normal  ScalingReplicaSet  42s                deployment-controller  Scaled down replica set demo-ui-deployment-5b8db9bb7b to 0
Pausing and Resuming a Deployment
By default, a deployment, being a controller object, continually watches and ensures that the desired state of the application is maintained. You can pause a deployment before triggering one or more updates and then resume it. This allows you to apply multiple fixes between pausing and resuming without triggering unnecessary rollouts.
For example, let's get the current state of demo-ui-deployment:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl get deployment -n deployment-demo
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
demo-ui-deployment   3/3     3            3           3h12m

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get rs -n deployment-demo
NAME                            DESIRED   CURRENT   READY   AGE
demo-ui-deployment-68bd594749   3         3         3       12m
demo-ui-deployment-5b8db9bb7b   0         0         0       3h12m
We can now pause the deployment using the kubectl rollout pause command:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl rollout pause deployment demo-ui-deployment -n deployment-demo
deployment.apps/demo-ui-deployment paused
While paused, the deployment controller ignores changes to the deployment object and does not create a new rollout:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl set image deployment demo-ui-deployment demoui=docker.io/mohitgoyal/demo-ui01:1.0.2 -n deployment-demo
deployment.apps/demo-ui-deployment image updated

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get rs -n deployment-demo -o wide
NAME                            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                 SELECTOR
demo-ui-deployment-68bd594749   3         3         3       21m     demoui       docker.io/mohitgoyal/demo-ui01:1.0.1   app=demoui,pod-template-hash=68bd594749
demo-ui-deployment-5b8db9bb7b   0         0         0       3h20m   demoui       docker.io/mohitgoyal/demo-ui01:1.0.0   app=demoui,pod-template-hash=5b8db9bb7b
We can make as many changes as we like, and once finished, we can resume the deployment using the kubectl rollout resume command:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl rollout resume deployment demo-ui-deployment -n deployment-demo
deployment.apps/demo-ui-deployment resumed

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get rs -n deployment-demo -o wide
NAME                            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                 SELECTOR
demo-ui-deployment-5b8db9bb7b   0         0         0       3h43m   demoui       docker.io/mohitgoyal/demo-ui01:1.0.0   app=demoui,pod-template-hash=5b8db9bb7b
demo-ui-deployment-68bd594749   3         3         3       44m     demoui       docker.io/mohitgoyal/demo-ui01:1.0.1   app=demoui,pod-template-hash=68bd594749
demo-ui-deployment-d84bbdd88    1         1         0       79s     demoui       docker.io/mohitgoyal/demo-ui01:1.0.2   app=demoui,pod-template-hash=d84bbdd88
Check Rollout History of Deployment
Kubernetes deployments maintain a history of rollouts, which can be useful both for understanding the previous state of the deployment and to roll back to a specific version.
You can see the deployment history by running the kubectl rollout history command:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl rollout history deployment demo-ui-deployment -n deployment-demo
deployment.apps/demo-ui-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
The revision history is given in oldest to newest order. A unique revision number is incremented for each new rollout.
Note that the CHANGE-CAUSE field is empty in our case. This field records why a deployment object was updated. It is copied from the deployment annotation kubernetes.io/change-cause to its revisions upon creation.
You can specify the change-cause message by:
- Annotating the deployment, e.g. kubectl annotate deployment demo-ui-deployment kubernetes.io/change-cause="image updated to 1.0.1" -n deployment-demo
- Appending the --record flag to save the kubectl command that is making changes to the resource.
- Manually editing the manifest of the resource.
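The third option, editing the manifest manually, amounts to adding the annotation directly under metadata. A sketch, reusing the deployment from earlier:

```yaml
# Setting the change-cause annotation declaratively in the manifest.
metadata:
  name: demo-ui-deployment
  annotations:
    kubernetes.io/change-cause: "image updated to 1.0.1"
```

On the next kubectl apply, the new revision's CHANGE-CAUSE column will show this message instead of <none>.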
If we want more details about a particular revision, we can get it as below:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl rollout history deployment demo-ui-deployment --revision=3 -n deployment-demo
deployment.apps/demo-ui-deployment with revision #3
Pod Template:
  Labels:  app=demoui
           pod-template-hash=d84bbdd88
  Containers:
   demoui:
    Image:        docker.io/mohitgoyal/demo-ui01:1.0.2
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Rollback a Deployment
If, for some reason, the rollout of a new release of the application is not working as expected, you can perform a rollback as well. For example, consider the current state of our deployment object demo-ui-deployment after updating the container image version to 1.0.2:
cloud_user@d7bfd02ab81c:~$ kubectl get deployment -n deployment-demo
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
demo-ui-deployment   3/3     1            3           3h54m

cloud_user@d7bfd02ab81c:~$ kubectl rollout status deployment demo-ui-deployment -n deployment-demo
error: deployment "demo-ui-deployment" exceeded its progress deadline

cloud_user@d7bfd02ab81c:~$ kubectl describe deployment demo-ui-deployment -n deployment-demo
Name:                   demo-ui-deployment
Namespace:              deployment-demo
CreationTimestamp:      Sun, 09 May 2021 12:42:18 +0000
Labels:                 app=demoui
                        version=1.0.1
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=demoui
Replicas:               3 desired | 1 updated | 4 total | 3 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
...
...
We can see that the rollout is marked as failed. From the kubectl describe output, we can see that the deployment now contains 4 pods, of which 3 are available and 1 is not working.
To get to the root cause, let's check the events associated with our deployment:
33m     Normal   Started                pod/demo-ui-deployment-68bd594749-np6ms   Started container demoui
33m     Normal   TaintManagerEviction   pod/demo-ui-deployment-68bd594749-np6ms   Cancelling deletion of Pod deployment-demo/demo-ui-deployment-68bd594749-np6ms
16m     Normal   ScalingReplicaSet      deployment/demo-ui-deployment             Scaled up replica set demo-ui-deployment-d84bbdd88 to 1
16m     Normal   SuccessfulCreate       replicaset/demo-ui-deployment-d84bbdd88   Created pod: demo-ui-deployment-d84bbdd88-hq9nc
16m     Normal   Scheduled              pod/demo-ui-deployment-d84bbdd88-hq9nc    Successfully assigned deployment-demo/demo-ui-deployment-d84bbdd88-hq9nc to k3d-worker1-0
15m     Normal   Pulling                pod/demo-ui-deployment-d84bbdd88-hq9nc    Pulling image "docker.io/mohitgoyal/demo-ui01:1.0.2"
15m     Warning  Failed                 pod/demo-ui-deployment-d84bbdd88-hq9nc    Failed to pull image "docker.io/mohitgoyal/demo-ui01:1.0.2": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/mohitgoyal/demo-ui01:1.0.2": failed to resolve reference "docker.io/mohitgoyal/demo-ui01:1.0.2": docker.io/mohitgoyal/demo-ui01:1.0.2: not found
15m     Warning  Failed                 pod/demo-ui-deployment-d84bbdd88-hq9nc    Error: ErrImagePull
6m47s   Warning  Failed                 pod/demo-ui-deployment-d84bbdd88-hq9nc    Error: ImagePullBackOff
111s    Normal   BackOff                pod/demo-ui-deployment-d84bbdd88-hq9nc    Back-off pulling image "docker.io/mohitgoyal/demo-ui01:1.0.2"
We can see that it is failing to pull the image docker.io/mohitgoyal/demo-ui01:1.0.2, which is expected, as this image does not exist. So now we want to roll back this deployment to the previous working version.
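The progress deadline that flagged this rollout as failed is itself configurable via spec.progressDeadlineSeconds in the manifest (the Kubernetes default is 600 seconds). A sketch of tightening it:

```yaml
# Mark the rollout as failed sooner if it makes no progress.
spec:
  progressDeadlineSeconds: 120   # default is 600; 120 is an illustrative value
```

Note that exceeding the deadline only surfaces the failure in the deployment's status and in kubectl rollout status; it does not automatically roll the deployment back.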
We can roll back to a specific revision by specifying it with --to-revision:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl rollout undo deployment demo-ui-deployment --to-revision=2 -n deployment-demo
deployment.apps/demo-ui-deployment rolled back

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get deployment demo-ui-deployment -n deployment-demo
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
demo-ui-deployment   3/3     3            3           4h3m

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get rs -n deployment-demo
NAME                            DESIRED   CURRENT   READY   AGE
demo-ui-deployment-5b8db9bb7b   0         0         0       4h4m
demo-ui-deployment-68bd594749   3         3         3       64m
demo-ui-deployment-d84bbdd88    0         0         0       21m

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get pods -n deployment-demo
NAME                                  READY   STATUS    RESTARTS   AGE
demo-ui-deployment-68bd594749-np6ms   1/1     Running   1          64m
demo-ui-deployment-68bd594749-sq259   1/1     Running   1          64m
demo-ui-deployment-68bd594749-4gl6n   1/1     Running   1          64m
An alternative, and preferred, way to undo a rollout is to revert your YAML file and kubectl apply the previous version. That way, your source-controlled version more closely reflects what is really running in the cluster.
Revision History Limit
By default, the revision history of a deployment is kept attached to the deployment object itself, and we can get the details using the kubectl rollout history command. It keeps track of the last 10 revisions. You can increase or decrease this number as per your needs using the spec.revisionHistoryLimit field in the deployment manifest.
Explicitly setting this field to 0 cleans up all the history of your deployment, so that deployment will no longer be able to roll back.
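A sketch of limiting the history to the last 3 revisions (3 is an illustrative value, not a recommendation):

```yaml
# Keep only the last 3 old replicasets available for rollback.
spec:
  revisionHistoryLimit: 3
```

Old replicasets beyond this limit are garbage-collected, which keeps kubectl get rs output tidy at the cost of fewer rollback targets.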
Deployment Strategies
If we take a look at the output of the kubectl describe deployment command, there is a property called StrategyType. It is controlled by the spec.strategy field in the deployment manifest and has two possible values:
- Recreate – This is not the default. It guarantees that all existing pods are killed before new ones are created.
- RollingUpdate – This is the default. It works by updating a few pods at a time, moving incrementally until all of the pods are running the new version of the application. You can specify the maxUnavailable and maxSurge properties to control the rolling update process:
  - maxUnavailable – An optional field that specifies the maximum number of pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired pods (for example, 10%). The absolute number is calculated from the percentage by rounding down. The value cannot be 0 if spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.
  - maxSurge – An optional field that specifies the maximum number of pods that can be created over the desired number of pods. The value can be an absolute number (for example, 5) or a percentage of desired pods (for example, 10%). The value cannot be 0 if maxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.
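Put together, a sketch of tuning these knobs in the deployment manifest (the values 1/1 are illustrative, not recommendations):

```yaml
# Explicit rolling update tuning in the deployment spec.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod below the desired count during the update
      maxSurge: 1         # at most 1 pod above the desired count during the update
```

With 3 replicas, this caps the rollout at 4 total pods while guaranteeing at least 2 are always available.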
Setting maxSurge to 100% and maxUnavailable to 0% is equivalent to a blue/green deployment. The deployment controller first scales the new version up to 100% of the desired replica count alongside the old version. Once the new version is healthy, it immediately scales the old version down to 0%.
Setting maxUnavailable to 100% is equivalent to the Recreate strategy. The deployment controller will reduce the number of available pods to 0 and then create new pods from the specified pod template.
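For completeness, selecting the Recreate strategy itself is a one-line change in the manifest; note that it takes no rollingUpdate sub-fields:

```yaml
# Kill all existing pods before creating new ones (brief downtime).
spec:
  strategy:
    type: Recreate
```

This is mainly useful when two versions of the application cannot safely run side by side, for example when they share a volume or a database schema migration.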
Deleting a Deployment
We can delete a deployment imperatively using the kubectl delete deployment command, or pass the manifest to the kubectl delete command to delete it in a more declarative way:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl delete -f deployment-simple.yaml -n deployment-demo
deployment.apps "demo-ui-deployment" deleted

cloud_user@d7bfd02ab81c:~/workspace$ kubectl get deployment -n deployment-demo
No resources found in deployment-demo namespace.
Lastly, since we no longer need the deployment-demo namespace, we can delete it as well:
cloud_user@d7bfd02ab81c:~/workspace$ kubectl delete namespace deployment-demo
namespace "deployment-demo" deleted
Alternatively, if we had deleted the namespace first, it would have deleted all objects under it as well, including deployments, replicasets, and pods.