In the Kubernetes v1.20 release, the development team marked dockershim as deprecated. There was initially some shock and confusion, as it was perceived that the team was moving away from Docker completely; however, that is not the case. As it turns out, the team is steering Kubernetes away from the Docker-specific parts of Docker Engine, often just called Docker. Docker Engine is composed of many sub-components such as dockerd, containerd and runc, many of which were initially developed by Docker Inc. and then donated to the community, where they were later standardized and maintained.
The Kubernetes community has written a detailed blog post about the deprecation, along with a dedicated FAQ page. This post aims to explain the impact and what needs to be done about it. Depending on how you use Docker, you may or may not have to do anything at all, and there is no reason for sleepless nights.
Architecture Evolutions of Docker and Kubernetes
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node. The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. Here’s the basic architecture of Kubernetes:

The kubelet is the primary component on a worker node; it is responsible for creating and managing Pods and their containers.
What we need to understand here is that Kubernetes was not designed to run containers itself, and it still is not. It was designed as a mechanism to orchestrate containers, so the job of actually running them was delegated to the popular container runtime of the time: Docker. Thanks to its ease of use, rich feature set and low learning curve, Docker has remained the popular choice throughout these years.
However, Docker was initially written as a monolith. Over time, Docker re-invented its architecture and split it into smaller components such as the docker CLI, dockerd, containerd and runc. runc was designed to run containers and was donated to the OCI; containerd was initially designed to manage containers and was donated to the CNCF, later picking up additional functionality such as image push/pull, networking and so on.
Since containerd was the piece that managed containers, an additional mechanism was needed so that the kubelet could call dockerd through its API. dockershim was developed as a plugin to let the kubelet talk to dockerd, plugging that gap. The dockershim adapter allows the kubelet to interact with Docker as if Docker were a CRI-compatible runtime:

However, it was always viewed as a temporary solution to a larger problem, so the Kubernetes team later standardized this interface and called it the Container Runtime Interface, or simply the CRI standard. This allows the flexibility of using multiple container runtimes, such as dockerd, frakti, containerd, etc., inside Kubernetes. For the initial releases of containerd, a separate daemon called cri-containerd filled this gap:

In containerd 1.1, the cri-containerd daemon was refactored to be a containerd CRI plugin. The CRI plugin is built into containerd 1.1, and enabled by default. Unlike cri-containerd, the CRI plugin interacts with containerd through direct function calls. This new architecture makes the integration more stable and efficient, and eliminates another gRPC hop in the stack:

Eliminating the Middleman – Docker, or should we say, dockershim?
As you can see, CRI-compatible container runtimes (such as CRI-O, containerd, etc.) are sufficient for Kubernetes to manage and run containers, which allows the removal of middlemen such as dockershim and dockerd.
This also allows the Kubernetes development team to shrink the kubelet code base by removing vendor-specific code, with all the convenience and benefits that come from maintaining a smaller code base.
Not only this, switching to containerd makes pod creation faster, lowers resource usage and promises better speed and stability in operation. That is not just because two hops are removed from the path, but also because Kubernetes gains full control of the containers: users can no longer manipulate them manually behind Kubernetes' back.
But now, since containers are scheduled directly by the container runtime, they are not visible to Docker. So any Docker tooling or fancy UI you might have used before to check on these containers is no longer available. You cannot get container information using the docker ps or docker inspect commands, and since you cannot list containers, you also cannot get logs, stop containers, or execute something inside a container using docker exec.
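If you administer nodes directly, crictl is the CRI-level counterpart to those Docker commands. A minimal sketch, assuming containerd as the runtime and its default socket path (adjust the endpoint for CRI-O or a non-default install):

    # point crictl at the containerd CRI socket (socket path is an assumption)
    echo 'runtime-endpoint: unix:///run/containerd/containerd.sock' | sudo tee /etc/crictl.yaml

    # rough equivalents of the Docker commands mentioned above
    sudo crictl ps                          # ~ docker ps
    sudo crictl inspect <container-id>      # ~ docker inspect
    sudo crictl logs <container-id>         # ~ docker logs
    sudo crictl exec -it <container-id> sh  # ~ docker exec -it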
However, Docker Engine, or simply Docker, is not just dockerd. It is a lot of things to a lot of people. Docker is a way to run containers, but it is also a whole toolkit for building, managing and interacting with containers. The long-ago split between containerd and Docker, surfaced through CRI, effectively separated the runtime from the user interface, and in a Kubernetes cluster, Kubernetes is really intended to be the user interface.
With all that history discussed, let's try to address these concerns and what needs to be done about them. This brings us to our next section.
Addressing Concerns and Questions
Using Docker in Kubernetes 1.20
Docker is still available in Kubernetes 1.20; it is only marked as deprecated. The only thing that changes in 1.20 is a single warning log printed at kubelet startup if Docker is being used as the runtime.
What about future versions of Kubernetes?
Dockershim is planned to be removed in v1.23, with the plan re-evaluated as time progresses. One consideration is that the kubelet's CRI support is still in alpha, which means it is not yet considered ready for production workloads; it should graduate to GA before dockershim is removed from the kubelet. This work is tracked under KEP 2041.
Building Images with Docker
Docker Inc. is also credited with developing the Dockerfile format as well as storage and image-building standards for containers. After the Open Container Initiative (OCI) was established, a container image standard was published under it; images conforming to the OCI Image Specification are called OCI container images.
Docker Inc., as one of the founding members of the OCI, developed and donated much of the initial OCI code and has been instrumental in defining the OCI runtime and image specifications as a maintainer of the project. So Docker images have been compliant with the OCI standard for a long time. The image that Docker produces isn't really a Docker-specific image; it's an OCI (Open Container Initiative) image, and any OCI-compliant image, regardless of the tool used to build it, looks the same to Kubernetes.
Again, the way you write Dockerfiles remains the same, so you can continue to build images using Docker, and they will continue to work.
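For example, a plain Docker build works exactly as before and the resulting image runs on any CRI runtime; a minimal sketch, where the image name and registry are placeholders:

    # build and push with Docker as usual (image name and registry are placeholders)
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # the resulting OCI-compliant image runs regardless of the node's container runtime
    kubectl run myapp --image=registry.example.com/myapp:1.0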
Alternatives for building container images
One such popular tool is Buildah, which facilitates building OCI container images. It is a complementary tool to Podman, and podman build uses Buildah under the hood to perform container image builds. Buildah makes it possible to build images from scratch, from existing images, and from Dockerfiles. OCI images built with the Buildah command line tool and the underlying OCI-based technologies are portable and can therefore also run in a Docker Engine environment.
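As a rough sketch, building the same Dockerfile with Buildah or Podman looks very similar to the Docker workflow (the image name is a placeholder; newer Buildah releases also accept build in place of bud):

    # build from an existing Dockerfile (image name is a placeholder)
    buildah bud -t registry.example.com/myapp:1.0 .

    # or via Podman, which uses Buildah under the hood
    podman build -t registry.example.com/myapp:1.0 .
    podman push registry.example.com/myapp:1.0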
Other tools are also available, such as img, k3c, etc. A more detailed read is available here.
What about building images using containers, or using Docker inside Docker?
A lot of people knowingly or unknowingly use containers to build container images. This pattern is sometimes called Docker inside Docker. One such use case is explained here. To make it work, people mount the Docker socket from the host machine into the container; through this socket they can then build images and push them to image registries. This functionality was never officially supported, even though it was possible to use.
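For illustration only, this is roughly what that socket-mounting pattern looks like; the docker:latest image and the image tag are placeholders for whatever your pipeline actually uses:

    # mount the host's Docker socket into a container that has the docker CLI
    docker run --rm -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      docker:latest sh

    # inside that container, builds and pushes go through the host daemon
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0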
With Docker gone as the runtime, the Docker Engine may or may not be installed on worker nodes to support this pattern. Depending on how much control you have over the configuration of worker node images, this functionality may or may not be available to your toolchain and applications down the line.
Determine if your application is dependent on Docker Runtime
Some users currently access the Docker Engine on a node from within privileged Pods. It is recommended that you update your workloads so that they do not rely on Docker directly. For example, if you currently extract application logs or monitoring data from Docker Engine using docker logs or docker top, consider using runtime-agnostic alternatives for logging and monitoring instead. Likewise, if you are making Docker API calls to decide whether to run specific containers or trigger specific behaviors, consider migrating away from them.
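A small sketch of runtime-agnostic replacements for those checks (pod, container and ID values are placeholders; kubectl top requires metrics-server):

    # instead of docker logs on the node, go through the Kubernetes API
    kubectl logs my-pod -c my-container
    kubectl top pod my-pod              # resource usage, needs metrics-server

    # on the node itself, crictl works against any CRI runtime
    sudo crictl logs <container-id>
    sudo crictl stats <container-id>    # rough stand-in for docker stats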
Check that scripts and apps running on nodes outside of the Kubernetes infrastructure do not execute Docker commands. These might include (see the sketch after this list):
- SSH to nodes to troubleshoot;
- Node startup scripts;
- Monitoring and security agents installed on nodes directly.
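A quick way to spot such dependencies on a node is to grep the usual script locations and check what is talking to the Docker socket; the paths below are assumptions that depend on your node image:

    # look for docker CLI usage in startup scripts and systemd units (paths are assumptions)
    sudo grep -rIl 'docker' /etc/systemd/system /usr/local/bin 2>/dev/null

    # see which processes, if any, are connected to the Docker socket
    sudo ss -xp | grep docker.sock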
If you are using third-party agents for monitoring, logging or security, check with their vendors to determine whether they depend on the Docker runtime.
Make sure there are no indirect dependencies on dockershim behavior. This is an edge case and unlikely to affect your application, but some tooling may be configured to react to Docker-specific behaviors, for example, raising alerts on specific metrics or searching for a specific log message as part of troubleshooting instructions. If you have such tooling configured, test its behavior on a test cluster before migrating.
Also be aware of kubectl plugins that require the docker CLI or the Docker control socket for their functionality.
Other than this, if you have customized your dockerd configuration, say for registry mirrors or insecure registries, you will need to adapt that configuration for your new container runtime where possible.
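For instance, with containerd the equivalent of a daemon.json registry mirror lives in /etc/containerd/config.toml. A minimal sketch, where the mirror URL is a placeholder and the exact TOML keys depend on your containerd version:

    # append a docker.io mirror to the CRI plugin's registry config (URL is a placeholder)
    printf '%s\n' \
      '[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]' \
      '  endpoint = ["https://mirror.example.com"]' \
      | sudo tee -a /etc/containerd/config.toml
    sudo systemctl restart containerd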
What about other container runtimes?
Besides Docker, other popular container runtimes are CRI-O and containerd. You need to install a container runtime on each node in the cluster so that Pods can run there.
Note that the CRI-O major and minor versions must match the Kubernetes major and minor versions. For more information, see the CRI-O compatibility matrix.
You can read steps to install various container runtimes here.
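Once a runtime is installed and the kubelet is pointed at it, you can verify what each node is actually running; a quick sketch:

    # the CONTAINER-RUNTIME column shows, e.g., docker://19.3.x or containerd://1.4.x
    kubectl get nodes -o wide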
People using managed (PaaS) Kubernetes platforms such as AKS, GKE, EKS, Rancher, etc. need to check with their vendor about migrating away from the Docker runtime and installing alternative runtimes on worker nodes.
Testing Migration in a Live Kubernetes Cluster
As of v1.20, the container runtime cannot be swapped out in place on a live Kubernetes cluster. The details of switching the kubelet to a specific runtime vary depending on how you bootstrap your cluster. This behavior may or may not change in future releases.
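As a rough per-node sketch of what the switch usually involves (flag names reflect v1.20-era kubelets, the socket path assumes containerd, and the exact mechanics depend on your installer, e.g. kubeadm):

    # drain the node, then point the kubelet at the new runtime's CRI socket
    kubectl drain <node-name> --ignore-daemonsets
    # on the node, add kubelet flags (with kubeadm, often via /var/lib/kubelet/kubeadm-flags.env):
    #   --container-runtime=remote
    #   --container-runtime-endpoint=unix:///run/containerd/containerd.sock
    sudo systemctl restart kubelet
    kubectl uncordon <node-name>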
Can we plan for rollback / upgrade, etc.?
Not supported, and there are no plans to support it yet as of the writing of this blog post. Keep an eye on the Kubernetes community discussions if you would like to follow this.
We use Kubernetes on Windows. What about us?
There has been some anxiety about containerd in Kubernetes for Windows, simply because there have been fewer Windows releases of Kubernetes for it to be tested in. But the Windows team had the advantage of starting late: they could see where the Kubernetes roadmap was headed and what was planned, and build accordingly. So the containerd interface has effectively always been the interface for Windows containers, even though Docker was the only supported container runtime for Windows before Kubernetes v1.18, and the first stable release with containerd support for Windows is v1.20.
What about Docker-Swarm and docker-compose?
docker-compose and Docker Swarm are other popular ways to run containers. They live outside the Kubernetes stack and are not affected by this change in any way.
What about OpenShift / Rancher / K3s, etc.?
The same is largely true for Kubernetes-based platforms like OpenShift, Rancher, K3s, etc., which already ship with CRI-compliant runtimes such as CRI-O or containerd as their container management runtime, so they are not affected by this change. However, do check with your Kubernetes vendor for more details.
What happens to dockershim anyway?
It is set to live on outside Kubernetes for now. Mirantis has announced that it will work with Docker to maintain the shim code as a standalone project outside of Kubernetes, serving as a conformant CRI interface for the Docker Engine API.