Exposing Application Workloads to the Outside World using Ingress in Kubernetes

Almost all application workloads need to interact with the world outside the Kubernetes cluster to function as intended. Exposing Service objects using the LoadBalancer or NodePort type is one of the common ways to do this. However, at times you may need functionality like SSL/TLS termination or name-based virtual hosting. For such features, you can use the Ingress resource type. It exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster, and traffic routing is controlled by rules defined on the Ingress resource.

Do note that you need an Ingress controller to implement the Ingress resource. There is no standard Ingress controller built into Kubernetes, so you must install one of the many available implementations. You can think of Ingress controllers as pluggable mechanisms.
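As a minimal sketch of what such a resource can look like (the host, Service name and Secret name below are illustrative, not from the post):

```yaml
# Illustrative Ingress: routes HTTPS traffic for example.com to a Service.
# Assumes an Ingress controller (e.g. ingress-nginx) is installed and a
# TLS Secret named "example-tls" exists in the same namespace.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx        # selects which controller handles this Ingress
  tls:
  - hosts:
    - example.com
    secretName: example-tls      # TLS termination happens at the controller
  rules:
  - host: example.com            # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service    # hypothetical Service in the cluster
            port:
              number: 80
```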

Read More »

Connect application with outside world with Services in Kubernetes

In Kubernetes, Services are an abstract way to expose application workloads to the outside world as a network service. Services most commonly abstract access to Kubernetes pods, but they can also abstract other kinds of backends. The set of pods targeted by a Service is usually determined by a selector.

Since pods are considered disposable, non-permanent resources, you would also need a controller resource such as a ReplicaSet or Deployment to maintain the desired number of pod replicas.
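A minimal sketch of a Service with a selector might look like this (the names and ports are illustrative):

```yaml
# Illustrative Service: targets pods labeled app=web and exposes them
# inside the cluster on port 80, forwarding to container port 8080.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP        # default; NodePort/LoadBalancer expose it externally
  selector:
    app: web             # the set of pods targeted by the Service
  ports:
  - port: 80
    targetPort: 8080
```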

Read More »

Releasing application versions with Deployment resource in Kubernetes

While resources such as ReplicaSets, ReplicationControllers and Services are useful to maintain a desired state of the application using pods, they are tied to a particular version of the application. To roll out a new version of the application, these resources are not helpful. The Deployment object helps to control the release of new versions of the application.

Deployments help move from one version of an application to another using a user-configurable rollout process. They also use health checks to ensure that the new version of the application is operating correctly, and enable additional options such as stopping the rollout if too many failures occur.
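As a hedged sketch, a Deployment with a configurable rolling update and a health check might look like this (image, labels and probe path are illustrative):

```yaml
# Illustrative Deployment: changing the image tag triggers a rolling update,
# gated by the readiness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # rollout pace is user-configurable
      maxSurge: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # bump this tag to roll out a new version
        readinessProbe:          # health check used during the rollout
          httpGet:
            path: /
            port: 80
```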

Read More »

Working with ReplicaSets in Kubernetes

In Kubernetes, pods are the smallest unit of deployment for compute. A pod can contain one or more containers for the application. However, you would often want one or more instances of a pod running in parallel for reasons such as scalability or sharding. A ReplicaSet ensures that a specified number of pod replicas (one or more) are running at any given time.

ReplicaSets are the building blocks for higher-level concepts such as Deployments and horizontal pod autoscalers. By ensuring that the specified number of pod replicas is running, they provide self-healing for applications under certain failure conditions such as node failures or network partitions. Most of the time, you would use higher-level concepts like Deployments instead of using ReplicaSets directly.
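A minimal ReplicaSet sketch (labels and image are illustrative) could look like:

```yaml
# Illustrative ReplicaSet: keeps three replicas of the pod template running,
# replacing pods lost to node failures or deletions.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3            # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```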

Read More »

Create Multi Node Kubernetes Cluster with k3d

k3d is a lightweight wrapper to run k3s in Docker. With k3d, Docker containers are used as Kubernetes nodes, which makes it easy to create local multi-node clusters on a single machine, much like kind. This is particularly useful for developers and testers alike, as they do not need to deal with the complexity of a full multi-node Kubernetes setup.

Installation and Setup for k3d

One of the prerequisites for k3d is Docker. You can refer to the official instructions, or to one of our previous posts, for installing Docker in rootless mode.
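As a rough sketch, a k3d cluster config file can describe a multi-node cluster declaratively (the schema version and cluster name below are assumptions and may differ across k3d releases):

```yaml
# Hypothetical k3d config: one server (control-plane) node and two agent
# (worker) nodes, each running as a Docker container.
# Create the cluster with: k3d cluster create --config k3d-config.yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: demo-cluster
servers: 1
agents: 2
```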

Read More »

Control CPU and Memory Resources consumed by Pods in Kubernetes

By default, containers and pods are allocated unbounded resources in a Kubernetes cluster. This allows them to consume as many resources as they need on a given node. However, this is not an ideal scenario for cluster administrators. With resource quotas, admins can restrict the amount of CPU and memory resources available on a per-namespace basis. Within a namespace, the resources per container or pod can be controlled using limit ranges.

A LimitRange policy can also be used to enforce minimum and maximum storage requests per PersistentVolumeClaim.

If no resource requests or limits are defined on a pod or container, a LimitRange policy can supply default allocations.
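A minimal sketch combining these ideas (the specific CPU, memory and storage values are illustrative):

```yaml
# Illustrative LimitRange: per-container defaults and caps, plus storage
# bounds per PersistentVolumeClaim, all scoped to one namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    default:             # applied when a container sets no limits
      cpu: 500m
      memory: 256Mi
    defaultRequest:      # applied when a container sets no requests
      cpu: 250m
      memory: 128Mi
    max:
      cpu: "1"
      memory: 512Mi
  - type: PersistentVolumeClaim
    min:
      storage: 1Gi
    max:
      storage: 10Gi
```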

Read More »

Create and Manage Pods in Kubernetes

A pod is a group of one or more containers in Kubernetes, and it is the smallest unit of deployment for compute. The containers in a pod live in their own cgroups but share a number of Linux namespaces. Applications running in the same pod share the same IP address and port space (network namespace), have the same hostname (UTS namespace), and can communicate using native interprocess communication channels such as System V IPC or POSIX message queues (IPC namespace).

The containers in a pod are not managed individually; they are managed at the pod level. Besides the containers running the actual application processes, a pod may also include init containers, sidecar containers and ephemeral containers.
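A small sketch of a multi-container pod (the images and commands are illustrative):

```yaml
# Illustrative two-container pod: both containers share the network
# namespace, so the sidecar can reach the app over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  initContainers:
  - name: setup               # runs to completion before app containers start
    image: busybox:1.36
    command: ["sh", "-c", "echo initializing"]
  containers:
  - name: web
    image: nginx:1.25
  - name: log-sidecar         # auxiliary container managed with the pod
    image: busybox:1.36
    command: ["sh", "-c", "sleep infinity"]
```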

Read More »

Working with pods with podman generate and podman play

Podman pods are a way to manage a group of application containers together as one pod, similar to Kubernetes pods. While you can add as many containers as you need to a pod, it is easier if you can export and import pod manifests entirely. This allows you to create a pod with the requisite containers in one step rather than running a series of commands. You can also use the generated manifest to create Kubernetes pods. podman generate produces a pod definition manifest in YAML format; similarly, podman play imports a pod definition and spins up the pod for you.
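The kind of manifest involved is Kubernetes-style pod YAML; a sketch of what `podman generate kube` might emit is shown below (the pod name and image are illustrative, and the exact output varies by podman version):

```yaml
# Roughly the shape of output from: podman generate kube <pod-name>
# Replaying it with: podman play kube pod.yaml
# recreates the pod and its containers.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:latest
```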

Read More »