k3d is a lightweight wrapper to run k3s in Docker. With k3d, Docker containers act as Kubernetes nodes, which makes it easy to create local multi-node clusters on a single machine, much like kind. This is particularly useful for developers and testers alike, as it spares them the complications of setting up a real multi-node Kubernetes environment.
Installation and Setup for k3d
One of the prerequisites for k3d is docker. You can refer to the official instructions, or to one of our previous posts about installing docker in rootless mode.
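Before installing k3d, it's worth a quick sanity check that docker is working and reachable from your user (the exact output will vary with your setup):

# verify that the docker daemon is reachable without sudo
docker version
# optionally, run a throwaway container end to end
docker run --rm hello-world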
Distribution-specific instructions for installing k3d can be found in the official docs. Installing k3d is as simple as grabbing the script at https://raw.githubusercontent.com/rancher/k3d/main/install.sh and running it:
cloud_user@d7bfd02ab81c:~$ wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
Preparing to install k3d into /usr/local/bin
k3d installed into /usr/local/bin/k3d
Run 'k3d --help' to see what you can do with it.
cloud_user@d7bfd02ab81c:~$ k3d --version
k3d version v4.4.2
k3s version v1.20.6-k3s1 (default)
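The script installs the latest release by default. If you need to pin a specific release (for example, to match the transcripts in this post), the install script supports selecting one via the TAG environment variable, per the official install instructions:

# pin the k3d release installed by the script
wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.4.2 bash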
Install Kubectl
We’ll need kubectl to work with the Kubernetes cluster, in case it’s not already installed. For this, we can use the commands below:
cloud_user@d7bfd02ab81c:~$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   154  100   154    0     0    564      0 --:--:-- --:--:-- --:--:--   564
100 44.2M  100 44.2M    0     0  42.4M      0  0:00:01  0:00:01 --:--:--  169M
cloud_user@d7bfd02ab81c:~$ chmod +x kubectl
cloud_user@d7bfd02ab81c:~$ sudo mv kubectl /usr/local/bin
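Optionally, before moving the binary into place, you can validate it against the published checksum (this is the verification flow from the official kubectl install docs):

# download the checksum for the same release and verify the binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# expected output: kubectl: OK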
Create Kubernetes Cluster with k3d
Creating a cluster with k3d is as simple as running the k3d cluster create command:
cloud_user@8e8f37ca841c:~$ k3d cluster create mycluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster' (9e707271808c5c08e7fa7ba286fffe7a44d7797844951b115bde05ce89283173)
INFO[0000] Created volume 'k3d-mycluster-images'
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0004] Pulling image 'docker.io/rancher/k3s:v1.20.6-k3s1'
INFO[0017] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0020] Pulling image 'docker.io/rancher/k3d-proxy:v4.4.2'
INFO[0025] Starting cluster 'mycluster'
INFO[0025] Starting servers...
INFO[0025] Starting Node 'k3d-mycluster-server-0'
INFO[0033] Starting agents...
INFO[0033] Starting helpers...
INFO[0033] Starting Node 'k3d-mycluster-serverlb'
INFO[0034] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
WARN[0038] Failed to patch CoreDNS ConfigMap to include entry '172.18.0.1 host.k3d.internal': Exec process in node 'k3d-mycluster-server-0' failed with exit code '1'
INFO[0038] Successfully added host record to /etc/hosts in 2/2 nodes
INFO[0038] Cluster 'mycluster' created successfully!
INFO[0038] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0038] You can now use it like this:
kubectl config use-context k3d-mycluster
kubectl cluster-info
By default, and contrary to the documentation, k3d directly switches the default kubeconfig’s current-context to the new cluster’s context, so ~/.kube/config is updated automatically. We can see the cluster information with kubectl:
cloud_user@d7bfd02ab81c:~$ kubectl get nodes -o wide
NAME                     STATUS   ROLES                  AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION   CONTAINER-RUNTIME
k3d-mycluster-server-0   Ready    control-plane,master   2m25s   v1.20.6+k3s1   172.18.0.2    <none>        Unknown    5.4.0-1038-aws   containerd://1.4.4-k3s1
cloud_user@d7bfd02ab81c:~$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:39165
CoreDNS is running at https://0.0.0.0:39165/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:39165/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
To avoid this, we can use the --kubeconfig-update-default=false flag while creating the cluster.
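In that case the cluster’s kubeconfig can still be fetched or merged later with the k3d kubeconfig subcommands; a minimal sketch (check k3d kubeconfig --help for the exact flags in your version):

# create the cluster without touching ~/.kube/config
k3d cluster create mycluster --kubeconfig-update-default=false
# print the cluster's kubeconfig to stdout
k3d kubeconfig get mycluster
# write/merge it into a kubeconfig file when you actually need it
k3d kubeconfig merge mycluster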
If we run docker ps at this point, we can see that two containers were created for this single-node Kubernetes cluster. The second container, k3d-mycluster-serverlb, is a load balancer:
cloud_user@d7bfd02ab81c:~$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED        STATUS         PORTS                             NAMES
025bfb9d3533   rancher/k3d-proxy:v4.4.2   "/bin/sh -c nginx-pr…"   30 hours ago   Up 9 minutes   80/tcp, 0.0.0.0:39165->6443/tcp   k3d-mycluster-serverlb
b4fc98027ce1   rancher/k3s:v1.20.6-k3s1   "/bin/k3s server --t…"   30 hours ago   Up 9 minutes                                     k3d-mycluster-server-0
cloud_user@d7bfd02ab81c:~$
We can also see that local port 39165 is mapped to the API server port 6443 of the Kubernetes cluster, with the help of the load balancer container. Acting as the cluster's entry point, the load balancer takes care of proxying requests to the appropriate server node. This holds true for multi-node clusters as well.
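Since all nodes live inside containers, any additional port we want to reach from the host must be published explicitly at creation time. A sketch (the host ports 6550 and 8080 here are arbitrary choices):

# expose the Kubernetes API on a fixed host port, and map host port 8080
# to port 80 on the load balancer for ingress traffic
k3d cluster create mycluster --api-port 6550 --port 8080:80@loadbalancer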
Creating Multi-Server Clusters
Creating a multi-node cluster is as easy as passing the desired number of servers (control-plane nodes) and agents (workers) to the k3d cluster create command:
cloud_user@d7bfd02ab81c:~$ k3d cluster create multinode --agents 2 --servers 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-multinode' (55da3f39ab800281392fec338e7ab2acb4c6719935d8e5c1eb062dcac1fe0658)
INFO[0000] Created volume 'k3d-multinode-images'
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-multinode-server-0'
INFO[0001] Creating node 'k3d-multinode-server-1'
INFO[0001] Creating node 'k3d-multinode-agent-0'
INFO[0001] Creating node 'k3d-multinode-agent-1'
WARN[0001] You're creating 2 server nodes: Please consider creating at least 3 to achieve quorum & fault tolerance
INFO[0001] Creating LoadBalancer 'k3d-multinode-serverlb'
INFO[0001] Starting cluster 'multinode'
INFO[0001] Starting the initializing server...
INFO[0001] Starting Node 'k3d-multinode-server-0'
INFO[0003] Starting servers...
INFO[0003] Starting Node 'k3d-multinode-server-1'
INFO[0027] Starting agents...
INFO[0027] Starting Node 'k3d-multinode-agent-0'
INFO[0036] Starting Node 'k3d-multinode-agent-1'
INFO[0045] Starting helpers...
INFO[0045] Starting Node 'k3d-multinode-serverlb'
INFO[0047] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0055] Successfully added host record to /etc/hosts in 5/5 nodes and to the CoreDNS ConfigMap
INFO[0055] Cluster 'multinode' created successfully!
INFO[0055] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0056] You can now use it like this:
kubectl config use-context k3d-multinode
kubectl cluster-info
cloud_user@d7bfd02ab81c:~$ kubectl get nodes
NAME                     STATUS   ROLES                       AGE   VERSION
k3d-multinode-agent-0    Ready    <none>                      54s   v1.20.6+k3s1
k3d-multinode-agent-1    Ready    <none>                      45s   v1.20.6+k3s1
k3d-multinode-server-0   Ready    control-plane,etcd,master   75s   v1.20.6+k3s1
k3d-multinode-server-1   Ready    control-plane,etcd,master   62s   v1.20.6+k3s1
cloud_user@d7bfd02ab81c:~$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS                             NAMES
f9226e70ed06   rancher/k3d-proxy:v4.4.2   "/bin/sh -c nginx-pr…"   3 minutes ago   Up 2 minutes   80/tcp, 0.0.0.0:34821->6443/tcp   k3d-multinode-serverlb
b945f568c333   rancher/k3s:v1.20.6-k3s1   "/bin/k3s agent"         3 minutes ago   Up 3 minutes                                     k3d-multinode-agent-1
df68cdcf55bd   rancher/k3s:v1.20.6-k3s1   "/bin/k3s agent"         3 minutes ago   Up 3 minutes                                     k3d-multinode-agent-0
0321f88031bc   rancher/k3s:v1.20.6-k3s1   "/bin/k3s server --t…"   3 minutes ago   Up 3 minutes                                     k3d-multinode-server-1
75a5a9d2a289   rancher/k3s:v1.20.6-k3s1   "/bin/k3s server --c…"   3 minutes ago   Up 3 minutes                                     k3d-multinode-server-0
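Note the warning in the output above: with an embedded etcd datastore, two server nodes give no fault tolerance, since etcd needs a majority (quorum) to operate. For an HA-style experiment, you would start from an odd number of servers, e.g.:

# hypothetical HA-style cluster with three control-plane nodes
k3d cluster create ha-cluster --servers 3 --agents 2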
We can also dynamically add or remove nodes in a running cluster to accommodate changing application requirements:
# add a control-plane node to cluster multinode
cloud_user@d7bfd02ab81c:~$ k3d node create server --role server --cluster multinode
INFO[0000] Starting Node 'k3d-server-0'

# add a worker node to cluster multinode
cloud_user@d7bfd02ab81c:~$ k3d node create myagent --role agent --cluster multinode
INFO[0000] Starting Node 'k3d-myagent-0'

# list existing nodes in the cluster. Alternatively, use kubectl get nodes
cloud_user@d7bfd02ab81c:~$ k3d node list
NAME                     ROLE           CLUSTER     STATUS
k3d-multinode-agent-0    agent          multinode   running
k3d-multinode-agent-1    agent          multinode   running
k3d-multinode-server-0   server         multinode   running
k3d-multinode-server-1   server         multinode   running
k3d-multinode-serverlb   loadbalancer   multinode   running
k3d-myagent-0            agent          multinode   running
k3d-mycluster-server-0   server         mycluster   running
k3d-mycluster-serverlb   loadbalancer   mycluster   running
k3d-server-0             server         multinode   running

# remove the server node from multinode cluster
cloud_user@d7bfd02ab81c:~$ k3d node delete k3d-server-0
INFO[0000] Deleted k3d-server-0

# verify that the node removal was complete
cloud_user@d7bfd02ab81c:~$ k3d node list
NAME                     ROLE           CLUSTER     STATUS
k3d-multinode-agent-0    agent          multinode   running
k3d-multinode-agent-1    agent          multinode   running
k3d-multinode-server-0   server         multinode   running
k3d-multinode-server-1   server         multinode   running
k3d-multinode-serverlb   loadbalancer   multinode   running
k3d-myagent-0            agent          multinode   running
k3d-mycluster-server-0   server         mycluster   running
k3d-mycluster-serverlb   loadbalancer   mycluster   running
cloud_user@d7bfd02ab81c:~$
Lifecycle of a k3d Cluster
We can view the current list of clusters with k3d cluster list, stop a cluster with k3d cluster stop, start it again with k3d cluster start, and delete one with the k3d cluster delete command, as below:
# list existing clusters and their status
cloud_user@d7bfd02ab81c:~$ k3d cluster list
NAME        SERVERS   AGENTS   LOADBALANCER
multinode   2/2       3/3      true
mycluster   1/1       0/0      true

# stop cluster mycluster
cloud_user@d7bfd02ab81c:~$ k3d cluster stop mycluster
INFO[0000] Stopping cluster 'mycluster'

# verify that cluster is stopped and now has no active servers/agents
cloud_user@d7bfd02ab81c:~$ k3d cluster list
NAME        SERVERS   AGENTS   LOADBALANCER
multinode   2/2       3/3      true
mycluster   0/1       0/0      true

# start cluster mycluster
cloud_user@d7bfd02ab81c:~$ k3d cluster start mycluster
INFO[0000] Starting cluster 'mycluster'
INFO[0000] Starting servers...
INFO[0000] Starting Node 'k3d-mycluster-server-0'
INFO[0007] Starting agents...
INFO[0007] Starting helpers...
INFO[0007] Starting Node 'k3d-mycluster-serverlb'

# verify that cluster is started and all nodes are running
cloud_user@d7bfd02ab81c:~$ k3d cluster list
NAME        SERVERS   AGENTS   LOADBALANCER
multinode   2/2       3/3      true
mycluster   1/1       0/0      true

# delete cluster mycluster -- no safety switch here, as you can delete a running cluster
cloud_user@d7bfd02ab81c:~$ k3d cluster delete mycluster
INFO[0000] Deleting cluster 'mycluster'
INFO[0000] Deleted k3d-mycluster-serverlb
INFO[0004] Deleted k3d-mycluster-server-0
INFO[0004] Deleting cluster network 'k3d-mycluster'
INFO[0004] Deleting image volume 'k3d-mycluster-images'
INFO[0004] Removing cluster details from default kubeconfig...
INFO[0004] Removing standalone kubeconfig file (if there is one)...
INFO[0004] Successfully deleted cluster mycluster!

# verify that the cluster is removed
cloud_user@d7bfd02ab81c:~$ k3d cluster list
NAME        SERVERS   AGENTS   LOADBALANCER
multinode   2/2       3/3      true
cloud_user@d7bfd02ab81c:~$
Deploy Application Containers
Deploying an application container works just like deploying workloads to any other Kubernetes cluster:
# create a simple deployment for nginx
cloud_user@d7bfd02ab81c:~$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

# expose deployment with service
cloud_user@d7bfd02ab81c:~$ kubectl create service clusterip nginx --tcp=8080:80
service/nginx created

# expose service with nginx ingress
cloud_user@d7bfd02ab81c:~$ cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
>   name: nginx
>   annotations:
>     ingress.kubernetes.io/ssl-redirect: "false"
> spec:
>   defaultBackend:
>     service:
>       name: nginx
>       port:
>         number: 8080
> EOF
ingress.networking.k8s.io/nginx created

cloud_user@d7bfd02ab81c:~$ kubectl get all -n default
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-9vqg6   1/1     Running   0          16s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP    56m
service/nginx        ClusterIP   10.43.119.164   <none>        8080/TCP   16s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           16s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6799fc88d8   1         1         1       16s

# verify that service is mapped with the nginx pods
cloud_user@d7bfd02ab81c:~$ kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Families:       <none>
IP:                10.43.119.164
IPs:               10.43.119.164
Port:              8080-80  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.42.5.4:80
Session Affinity:  None
Events:            <none>

# verify that ingress is matching the correct service port
cloud_user@d7bfd02ab81c:~$ kubectl describe ingress nginx
Name:             nginx
Namespace:        default
Address:          172.19.0.2
Default backend:  nginx:8080 (10.42.5.4:80)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           *     nginx:8080 (10.42.5.4:80)
Annotations:  ingress.kubernetes.io/ssl-redirect: false
Events:       <none>

# verify that we can access nginx
cloud_user@d7bfd02ab81c:~$ curl http://localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
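Note that the final curl to http://localhost:8080 only works if host port 8080 is published to port 80 of the load balancer, where Traefik (the ingress controller bundled with k3s) listens; the transcript above was presumably captured on a cluster created with such a mapping. If yours was not, recreate it with one:

# map host port 8080 to the load balancer's port 80 at creation time
k3d cluster create mycluster --port 8080:80@loadbalancer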
Create Cluster with a Specific Kubernetes Version
One may want to create a Kubernetes cluster with a specific version to accommodate certain requirements. The list of available node images can be found in the k3s Docker repository. We can specify the image with the --image option at cluster creation time:
cloud_user@d7bfd02ab81c:~$ k3d cluster create v121-dev --port 8080:80@loadbalancer --port 8443:443@loadbalancer --image rancher/k3s:v1.21.0-k3s1
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-v121-dev' (e57e9044185de6f0e1c5c6ff418e29fa9aec5d9a7c4ca8806e891420e39a138d)
INFO[0000] Created volume 'k3d-v121-dev-images'
INFO[0001] Creating node 'k3d-v121-dev-server-0'
INFO[0004] Pulling image 'rancher/k3s:v1.21.0-k3s1'
INFO[0010] Creating LoadBalancer 'k3d-v121-dev-serverlb'
INFO[0010] Starting cluster 'v121-dev'
INFO[0010] Starting servers...
INFO[0010] Starting Node 'k3d-v121-dev-server-0'
INFO[0019] Starting agents...
INFO[0019] Starting helpers...
INFO[0019] Starting Node 'k3d-v121-dev-serverlb'
INFO[0021] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0025] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0025] Cluster 'v121-dev' created successfully!
INFO[0025] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0026] You can now use it like this:
kubectl config use-context k3d-v121-dev
kubectl cluster-info
cloud_user@d7bfd02ab81c:~$ kubectl get nodes -o wide
NAME                    STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION   CONTAINER-RUNTIME
k3d-v121-dev-server-0   Ready    control-plane,master   16s   v1.21.0+k3s1   172.20.0.2    <none>        Unknown    5.4.0-1045-aws   containerd://1.4.4-k3s1
cloud_user@d7bfd02ab81c:~$
Infrastructure as Code with k3d clusters
Instead of passing multiple command-line options to the k3d cluster create command, we can define everything we would otherwise set via CLI flags in a tidy YAML config file. Using a config file is as easy as putting it in a well-known place on your file system and referencing it via the --config flag (a sample config is sketched after the list below):
- All options in config file: k3d cluster create --config /home/me/my-awesome-config.yaml (must be .yaml/.yml)
- With CLI override (name): k3d cluster create somename --config /home/me/my-awesome-config.yaml
- With CLI override (extra volume): k3d cluster create --config /home/me/my-awesome-config.yaml --volume '/some/path:/some:path@server[0]'
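A minimal sketch of such a config file follows. The field names are from the v1alpha2 config schema used by k3d v4; treat this as an illustrative example and check the schema for your k3d version:

# my-awesome-config.yaml -- hypothetical example
apiVersion: k3d.io/v1alpha2
kind: Simple
name: multinode
servers: 2
agents: 2
image: rancher/k3s:v1.20.6-k3s1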
More details are available in the official documentation.