Exposing Application Workloads to the Outside World Using Ingress in Kubernetes

Almost all application workloads need to interact with the world outside the Kubernetes cluster to function as intended. Exposing Service objects using the LoadBalancer or NodePort type is one of the common ways to do this. However, at times you may need functionality such as SSL/TLS termination or name-based virtual hosting. For such features, you can use the Ingress resource type. It exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster, with traffic routing controlled by rules defined on the Ingress resource.

Do note that you would need an Ingress controller to implement the Ingress resource. There is no standard Ingress controller built into Kubernetes, so you must install one of the many available implementations. You can think of Ingress controllers as pluggable mechanisms.
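As a minimal sketch, an Ingress rule routing HTTP traffic to a backend Service might look like the following; the hostname and Service name are illustrative assumptions, and an Ingress controller must already be installed for the rule to take effect:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress      # illustrative name
spec:
  rules:
  - host: app.example.com    # assumed hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service # assumed existing Service
            port:
              number: 80
```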

Read More »

Connecting Applications with the Outside World with Services in Kubernetes

In Kubernetes, a Service is an abstract way to expose an application workload to the outside world as a network service. Services most commonly abstract access to Kubernetes pods, but they can also abstract other kinds of backends. The set of pods targeted by a Service is usually determined by a selector.

Since pods are considered disposable, non-permanent resources, you would also need a controller resource such as a ReplicaSet or a Deployment to maintain the desired number of pod replicas.
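A minimal Service manifest selecting pods by label might look like this; the name, label and ports are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service     # illustrative name
spec:
  type: ClusterIP      # use NodePort or LoadBalancer to expose externally
  selector:
    app: my-app        # assumed pod label
  ports:
  - port: 80           # port the Service listens on
    targetPort: 8080   # assumed container port
```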

Read More »

Releasing application versions with the Deployment resource in Kubernetes

While resources such as ReplicaSets, ReplicationControllers and Services are useful for maintaining a desired state of the application using pods, they are tied to a particular version of the application and are not helpful for rolling out a new one. The Deployment object helps to control the release of new versions of the application.

Deployments help to move from one version of an application to another using a user-configurable rollout process. They also use health checks to ensure that the new version of the application is operating correctly, and support additional options such as stopping the rollout if too many failures occur.
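As a sketch of what such a rollout configuration looks like, here is a minimal Deployment with a rolling-update strategy and a readiness check; the names, image and health endpoint are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one pod down during the rollout
      maxSurge: 1            # at most one extra pod during the rollout
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:v2     # assumed image; changing this triggers a rollout
        readinessProbe:      # health check gating the rollout
          httpGet:
            path: /healthz   # assumed health endpoint
            port: 8080
```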

Read More »

Working with ReplicaSets in Kubernetes

In Kubernetes, pods are the smallest unit of deployment for compute. A pod can contain one or more containers for the application. However, you would often want multiple instances of a pod running in parallel, for reasons such as scalability or sharding. A ReplicaSet ensures that a specified number of pod replicas (one or more) are running at any given time.

ReplicaSets are the building blocks for higher-level concepts such as Deployments and Horizontal Pod Autoscalers. By ensuring that the specified number of pod replicas is running, they provide self-healing for applications under certain failure conditions, such as node failures or network partitions. Most of the time, you would use higher-level concepts like Deployments instead of using ReplicaSets directly.
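A minimal ReplicaSet manifest illustrating the replica count and selector might look like this; the name, label and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-rs     # illustrative name
spec:
  replicas: 3         # desired number of pod replicas
  selector:
    matchLabels:
      app: my-app     # must match the pod template's label
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx:1.25   # assumed image
```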

Read More »

Configure Pods to run as systemd services with Podman

This blog post continues from where we left off in our earlier blog post, where we discussed how systemd and Podman fit together to run and manage containers as systemd services. We discussed how to do this for specific containers and learned to create generic systemd unit files, and we covered a few use cases where this integration is useful. As we know, pods in Podman are a way to group and manage multiple application containers as one: we can start them together, manage them together and remove them together once done, while still being able to manage them individually if needed. Basics of pods are covered here.
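As a rough sketch of the workflow, assuming a pod named `mypod` (the pod, container and port values below are purely illustrative), generating systemd units for a whole pod might look like:

```shell
# create a pod and add a container to it (names and ports are assumptions)
podman pod create --name mypod -p 8080:80
podman create --pod mypod --name web nginx

# generate unit files for the pod and its containers in the current directory
podman generate systemd --new --files --name mypod

# install and enable them as user services
mv *.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now pod-mypod.service
```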

Read More »

Running Containers as systemd services with Podman

systemd has long been the de facto standard for managing services and their dependencies on Linux. While it is good to run applications within containers, to provide certain functionality and to avoid installing packages on the host OS, availability and reliability have been an issue. Before you go ahead and start using an application packaged inside a container, you need to make sure that the container is up and running. And what if your application consists of multiple containers that need to be started in a certain order? In fact, there is a growing set of applications distributed as containers so that users can bypass the headaches associated with installation and setup. So there is a clear use case for letting systemd take control and manage containers as native services.
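A minimal sketch of such a unit file, where the service name, container name and image are purely illustrative assumptions, might look like:

```ini
# ~/.config/systemd/user/container-web.service (illustrative path and name)
[Unit]
Description=Run the web container as a systemd service
Wants=network-online.target
After=network-online.target

[Service]
# --rm plus a fresh container on each start keeps the unit stateless
ExecStart=/usr/bin/podman run --rm --name web -p 8080:80 nginx
ExecStop=/usr/bin/podman stop web
Restart=on-failure

[Install]
WantedBy=default.target
```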

Read More »

Create a Multi-Node Kubernetes Cluster with k3d

k3d is a lightweight wrapper to run k3s in Docker. Unlike plain k3s, Docker containers are used as the Kubernetes nodes, which makes it easy to create local multi-node clusters on a single machine, much like kind. This is particularly useful for developers and testers alike, as they do not need to deal with the complications of setting up a multi-node Kubernetes cluster.

Installation and Setup for k3d

One of the prerequisites for k3d to work is Docker. You can refer to the official instructions for the same, or to one of our previous posts on installing Docker in rootless mode.
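Once Docker is available, creating a multi-node cluster is essentially a one-liner. As a sketch, with an illustrative cluster name and node counts:

```shell
# install k3d (see the official docs for the current install method)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# create a cluster with 1 server and 2 agent nodes
k3d cluster create demo --servers 1 --agents 2

# verify the nodes from kubectl's point of view
kubectl get nodes
```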

Read More »

Create and Restore Container Checkpoints with CRIU, buildah, Podman and Docker

CRIU (Checkpoint and Restore in Userspace) is a utility that enables you to checkpoint a running container or an individual application and store its state to disk. You can use the saved data to restore the container, even after a reboot, at the same point in time at which it was checkpointed. This makes operations like container live migration, snapshots and remote debugging possible.

CRIU is integrated with major container engines such as Docker, Podman, LXC/LXD and OpenVZ to implement the associated functionality. It is also available in the package repositories of the respective Linux distributions.
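With Podman, for example, the checkpoint/restore workflow can be sketched as follows; the container name, image and export path are illustrative assumptions, and checkpointing generally requires root privileges and CRIU installed on the host:

```shell
# start a container to checkpoint (name and image are assumptions)
sudo podman run -d --name web nginx

# checkpoint it and export the state to an archive
sudo podman container checkpoint web --export /tmp/web-checkpoint.tar.gz

# later, or on another host: restore from the archive
sudo podman container restore --import /tmp/web-checkpoint.tar.gz
```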

Read More »