Working with ReplicaSets in Kubernetes

In Kubernetes, a pod is the smallest deployable unit of compute. A pod can contain one or more containers for an application. However, you often want multiple instances of a pod running in parallel, for reasons such as scalability or sharding. A ReplicaSet ensures that a specified number of pod replicas (one or more) are running at any given time.

ReplicaSets are the building blocks used by higher-level concepts such as Deployments and Horizontal Pod Autoscalers. By ensuring that the specified number of pod replicas is running, they provide self-healing for applications under certain failure conditions such as node failures or network partitions. Most of the time, you would use a higher-level abstraction like a Deployment instead of using ReplicaSets directly.
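
As a minimal sketch of what this looks like in practice (the names, labels, and image below are placeholders, not taken from the original post), a ReplicaSet manifest keeping three replicas of an nginx pod can be applied directly with kubectl:

```bash
# Keep 3 replicas of an nginx pod running at all times.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
EOF

# Verify the desired vs. current replica counts.
kubectl get rs web-rs
kubectl get pods -l app=web
```

If one of these pods is deleted or its node fails, the ReplicaSet controller creates a replacement to restore the desired count.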

Read More »

Configure Pods to run as systemd services with Podman

This blog post continues from where we left off in our earlier post, where we discussed how systemd and Podman fit together to run and manage containers as systemd services. We discussed how to do this for specific containers and learned to create generic systemd unit files. We also discussed a few use cases where this integration is useful. As we know, pods in Podman are a way to group and manage multiple application containers as one: we can start them together, manage them together, and remove them together once done. If needed, we can still manage them individually as well. The basics of pods are covered here.
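
As a rough sketch of the workflow for a pod (the pod name, container name, and image are placeholders, and the exact flags and generated file names may differ across Podman versions, which increasingly favour Quadlet for this):

```bash
# Create a pod with a single example container.
podman pod create --name mypod -p 8080:80
podman run -d --pod mypod --name web docker.io/library/nginx

# Generate systemd unit files for the pod and its container(s).
# --new makes the units create the containers on start and remove them on stop.
podman generate systemd --new --files --name mypod

# Install and enable the units for the current (rootless) user.
mkdir -p ~/.config/systemd/user
mv pod-mypod.service container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now pod-mypod.service
```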

Read More »

Running Containers as systemd services with Podman

systemd has long been the de facto standard for managing services and their dependencies on Linux. While it is good to run applications within containers, both to provide a particular piece of functionality and to avoid installing packages on the host OS, availability and reliability have been an issue. Before you start using an application packaged inside a container, you need to make sure that container is up and running. And what if your application consists of multiple containers that need to be started in a certain order? In fact, a growing set of applications is distributed as containers precisely so that users can bypass the headaches associated with installation and setup. So there is a clear use case for systemd to take control and manage containers as native services.
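
As a quick, hedged sketch of the idea for a single container (the container name, image, and port are placeholders), Podman can emit a systemd unit file that systemd then manages like any other service:

```bash
# Run an example container once, so Podman knows its configuration.
podman run -d --name web -p 8080:80 docker.io/library/nginx

# Generate a unit file; --new recreates the container on every service start.
podman generate systemd --new --name web \
    > ~/.config/systemd/user/container-web.service

# Let systemd manage it from now on.
systemctl --user daemon-reload
systemctl --user enable --now container-web.service
systemctl --user status container-web.service
```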

Read More »

Create Multi Node Kubernetes Cluster with k3d

k3d is a lightweight wrapper that runs k3s in Docker. Unlike plain k3s, where nodes run directly on the host, k3d runs each Kubernetes node as a Docker container. This makes it easy to create local multi-node clusters on a single machine, much like kind. It is particularly useful for developers and testers alike, as they do not need to deal with the complications of setting up a multi-node Kubernetes cluster.
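
Once Docker and k3d are installed (see the setup notes below), creating a local multi-node cluster is a single command; the cluster name and node counts here are arbitrary examples:

```bash
# One server (control-plane) node and two agent (worker) nodes,
# each running as a Docker container.
k3d cluster create demo --servers 1 --agents 2

# The kubeconfig is merged automatically; verify the nodes.
kubectl get nodes -o wide

# Tear the cluster down when done.
k3d cluster delete demo
```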

Installation and Setup for k3d

One of the prerequisites for k3d to work is Docker. You can refer to the official instructions, or to one of our previous posts on installing Docker in rootless mode.
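
A rough sketch of the setup, assuming the install script URL documented by the k3d project (verify it against the official docs before piping it to a shell):

```bash
# Confirm Docker is installed and the daemon is reachable.
docker info >/dev/null && echo "docker is ready"

# Install k3d via the project's install script.
curl -sfL https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Check the installed version.
k3d version
```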

Read More »

Create and Restore Container Checkpoints with CRIU, buildah, Podman and Docker

CRIU (short for Checkpoint and Restore in Userspace) is a utility that enables you to checkpoint a running container or an individual application and store its state to disk. The saved data can be used to restore the container, even after a reboot, at the same point in time at which it was checkpointed. This makes it possible to perform operations like container live migration, snapshots, and remote debugging.

CRIU is integrated with major container engines such as Docker, Podman, LXC/LXD, and OpenVZ to implement this functionality. It is also available in the package repositories of most Linux distributions.
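
A minimal sketch of the Podman workflow (the container name and archive path are placeholders; checkpointing typically requires root and the criu package installed):

```bash
# Start a long-running example container that counts upward.
sudo podman run -d --name counter docker.io/library/alpine \
    sh -c 'i=0; while true; do echo $i; i=$((i+1)); sleep 1; done'

# Checkpoint it and export the state to an archive, then remove the
# local copy to simulate restoring on a fresh host (or after a reboot).
sudo podman container checkpoint counter --export /tmp/counter-chkpt.tar.gz
sudo podman rm counter

# Restore from the archive; the counter resumes where it left off.
sudo podman container restore --import /tmp/counter-chkpt.tar.gz
sudo podman logs counter | tail
```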

Read More »

Control CPU and Memory Resources consumed by Pods in Kubernetes

By default, containers and pods in a Kubernetes cluster are allocated unbounded resources. This allows them to consume as many resources as they need on a given node. However, this is not a desirable scenario for cluster administrators. With resource quotas, admins can restrict the amount of CPU and memory available on a per-namespace basis. Within a namespace, the resources per container or pod can be controlled using limit ranges.
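
As an illustrative sketch (the namespace name and the numbers are arbitrary), a ResourceQuota capping the aggregate CPU and memory a namespace may consume could look like this:

```bash
kubectl create namespace team-a

kubectl apply -n team-a -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
EOF

# Inspect current usage against the quota.
kubectl describe resourcequota team-a-quota -n team-a
```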

A LimitRange policy can also be used to enforce minimum and maximum storage requests per PersistentVolumeClaim.

If a pod or container does not define resource requests or limits, a LimitRange policy can be used to apply default allocations.
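
A sketch of a LimitRange combining both ideas, reusing the example namespace from above (all values are arbitrary): per-container defaults and bounds, plus minimum and maximum storage per PersistentVolumeClaim.

```bash
kubectl apply -n team-a -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-limits
spec:
  limits:
  # Defaults and bounds applied to containers that do not set their own.
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: 500m
      memory: 512Mi
    min:
      cpu: 50m
      memory: 64Mi
    max:
      cpu: "2"
      memory: 2Gi
  # Bounds on the storage requested per PersistentVolumeClaim.
  - type: PersistentVolumeClaim
    min:
      storage: 1Gi
    max:
      storage: 10Gi
EOF

kubectl describe limitrange team-a-limits -n team-a
```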

Read More »

Manage Passwords in Linux Ecosystem with Pass Utility

Password-based authentication is very common. However, storing and securing passwords is a hassle. There are already plenty of offline-only and online-only services that do this work for you, some of them far more feature-rich. If you work in an offline (air-gapped, disconnected from the internet) environment, you can use a simple open source utility called pass. It stores each password as a separate GPG-encrypted file. It is CLI based, but GUI extensions are available and it has a lot of support in the community.

With git, you can also choose to sync the encrypted passwords to internal source repositories, so that you get the benefits of GitOps as well.
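
A brief sketch of typical usage (the GPG key ID, entry names, and remote URL are placeholders):

```bash
# Initialize the store with an existing GPG key; every entry is stored
# as a separate GPG-encrypted file under ~/.password-store.
pass init "ABCD1234EF567890"

# Add, read, and generate entries.
pass insert email/example.com
pass email/example.com
pass generate web/internal-portal 20

# Optionally keep the store in git and sync it to an internal remote.
pass git init
pass git remote add origin git@git.internal.example.com:me/password-store.git
pass git push -u origin HEAD
```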

Read More »

Create and Manage Pods in Kubernetes

A pod is a group of one or more containers in Kubernetes, and it is the smallest deployable unit of compute. The containers in a pod live in their own cgroups but share a number of Linux namespaces. Applications running in the same pod share the same IP address and port space (network namespace), have the same hostname (UTS namespace), and can communicate using native interprocess communication channels such as System V IPC or POSIX message queues (IPC namespace).

The containers in a pod are not managed individually; they are managed at the pod level. Besides the containers running the actual application processes, a pod may also include init containers, sidecar containers, and ephemeral containers.
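
As a small sketch (names and images are placeholders), the following pod runs two containers that share one network namespace, so the sidecar reaches the main container over localhost:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  # Shares the pod's network namespace, so localhost:80 is the nginx above.
  - name: probe-sidecar
    image: curlimages/curl:8.8.0
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null && echo ok; sleep 10; done"]
EOF

kubectl get pod web-with-sidecar
kubectl logs web-with-sidecar -c probe-sidecar --tail=3
```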

Read More »