It is very easy to spin up a new container in Docker from an image. However, when you exit the container, it is not deleted: Docker keeps the stopped container around so that you can restart it at any time later.
When you first build an image, Docker executes each instruction in the Dockerfile and commits the result as a layer of a new image, which is stored under '/var/lib/docker' by default. On subsequent builds, Docker reuses unchanged layers from its build cache and only creates and commits new layers for the instructions that changed. Containers are then created from these images.
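To make the layering concrete, here is a minimal, hypothetical Dockerfile (the file names are illustrative); each instruction produces one cached layer:

```dockerfile
# Hypothetical Dockerfile -- each instruction below becomes one image layer.
FROM ubuntu:latest              # base layer, pulled once and then served from cache
RUN apt-get update              # new layer; reused on rebuilds if this line is unchanged
COPY app.sh /usr/local/bin/     # layer rebuilt whenever app.sh changes
CMD ["/usr/local/bin/app.sh"]   # metadata-only layer
```

Building it twice with 'docker build -t myapp .' shows the cache at work: the second build reports the unchanged steps as cached instead of re-executing them.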
Over time, the disk space occupied by these images and containers can become significant, so you need to keep tabs on disk usage. If you no longer need exited containers, you can use the command below to free up disk space:
$ docker rm $(docker ps -q -f status=exited)
To remove all containers, use the command below (running containers will be skipped with an error unless you also pass the -f flag to force-remove them):
$ docker rm $(docker ps -a -q)
Now, when you run docker ps -a, you will no longer see the old containers.
Every Docker image has a repository name and tag associated with it. For instance, an Ubuntu Docker image may have 'ubuntu' as the repository name and 'latest' as the tag.
As the name suggests, a dangling image in Docker is an image layer that has no tag and is not referenced by any tagged image or container.
The 'repo:tag' for a dangling image is shown as <none>:<none> when you run 'docker images'. Since dangling images waste disk space, they should be deleted periodically so the server keeps functioning efficiently. You can list untagged images with the command below:
$ docker images --filter "dangling=true"
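On a host that has dangling layers, the output looks roughly like this (the image ID, age, and size shown here are illustrative, not real):

```
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
<none>       <none>   4a1f3c2b9d8e   2 weeks ago   310MB
```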
You can clean up dangling images with the command below:
$ docker rmi $(docker images -f "dangling=true" -q)
If any container is still using an image, Docker will refuse to remove it and print an error message when you try.
Clearing the cache and images by hand every now and then is tiresome and easy to forget. Instead, you could wrap these commands in a cron job, or run them as part of your build process, and let them run on the servers in question. You could also set up disk-space monitoring on the servers, so that you are alerted whenever usage crosses a certain threshold and can then run these commands.
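As a sketch of the cron-job approach, the two cleanup commands above can be wrapped in a small script; the file path and schedule below are assumptions for illustration, not fixed conventions:

```shell
#!/bin/sh
# docker-cleanup.sh -- illustrative cleanup script (path and schedule are assumptions).
# Skip silently on hosts where Docker is not installed.
command -v docker >/dev/null 2>&1 || exit 0

# Remove all exited containers (errors are suppressed when there are none).
docker rm $(docker ps -q -f status=exited) 2>/dev/null

# Remove dangling images (errors are suppressed when there are none).
docker rmi $(docker images -f "dangling=true" -q) 2>/dev/null

exit 0
```

Installed as, say, /usr/local/bin/docker-cleanup.sh, a crontab entry such as '0 2 * * 0 /usr/local/bin/docker-cleanup.sh' would run it every Sunday at 02:00.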