What is Kubernetes?
Kubernetes, or k8s or “kube”, if you’re into brevity, is an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications. In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters. These clusters can span hosts across public, private, or hybrid clouds.
Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.)
Google generates more than 2 billion container deployments a week—all powered by an internal platform: Borg. Borg was the predecessor to Kubernetes and the lessons learned from developing Borg over the years became the primary influence behind much of the Kubernetes technology.
Why Kubernetes?
Real production apps span multiple containers. Those containers must be deployed across multiple server hosts. Kubernetes gives you the orchestration and management capabilities required to deploy containers, at scale, for these workloads. Kubernetes orchestration allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time.
With Kubernetes you can:
- Orchestrate containers across multiple hosts.
- Make better use of hardware to maximize resources needed to run your enterprise apps.
- Control and automate application deployments and updates.
- Mount and add storage to run stateful apps.
- Scale containerized applications and their resources on the fly.
- Declaratively manage services, which guarantees the deployed applications are always running how you deployed them.
- Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.
In this blog post, we’ll walk through creating a Kubernetes cluster on CentOS. This guide applies to CentOS 7 and later only.
Prerequisites
To run Kubernetes, you’ll need at least two machines: one to act as a master node and another as a slave node. There is no limit to the number of slave nodes; add as many as you need to manage your environment effectively. Together, these machines form a Kubernetes cluster. You’ll also need root privileges on each machine.
For this post’s purposes, we’ll use a three-machine cluster: one master and two slave nodes.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy. These services are managed by systemd, and their configuration resides in a central location: /etc/kubernetes. We will split the services across the hosts. The first host, controller-01, will be the Kubernetes master and will run kube-apiserver, kube-controller-manager, and kube-scheduler; it will also run etcd. The remaining hosts, controller02 and controller03, will be the nodes and will run kubelet, kube-proxy, cadvisor, and docker. All hosts run flanneld as the networking overlay.
Preparing the master node
First, we need the IP addresses of all the Linux machines that will be part of the cluster. Make sure they can all reach one another. Then, log in to the master machine and edit /etc/hosts to add each machine’s IP address and hostname:
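The original listing didn’t survive here; as an illustration (the 192.168.1.x addresses are placeholders — substitute your machines’ real IPs), the entries added to /etc/hosts would look something like:

```
192.168.1.10    controller-01
192.168.1.11    controller02
192.168.1.12    controller03
```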

Since we have three machines in our cluster, we have added their information to the /etc/hosts file. This step is not required if hostnames are already in DNS.
Now, create a repository file for Kubernetes at /etc/yum.repos.d/virt7-docker-common-release.repo. We can use touch to create it:
touch /etc/yum.repos.d/virt7-docker-common-release.repo
Once it’s created, edit the file and enter the following:
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
Next, we’ll enable this repository and install Kubernetes, etcd and flannel. This will also pull in docker and cadvisor:
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
Next, we need to edit the /etc/kubernetes/config file so it looks like the following:
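The original file listing is missing here; based on the stock /etc/kubernetes/config shipped with the CentOS packages, the edited file would look roughly like this (the key change is pointing KUBE_MASTER at the master host):

```ini
# /etc/kubernetes/config — configuration shared by all Kubernetes services
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
# Point every component at the master's API server
KUBE_MASTER="--master=http://controller-01:8080"
```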
Here, controller-01 is the name of the master machine; replace it with your own master’s hostname.
We also need to disable the firewall on the master and all the nodes, as docker does not play well with other firewall rule managers. CentOS won’t let you disable the firewall while SELinux is enforcing, so SELinux needs to be switched to permissive mode first. To do this, run the commands below:
setenforce 0
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
Now, we need to edit /etc/etcd/etcd.conf to appear as below:
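The listing is missing from the original; a representative /etc/etcd/etcd.conf with the two modified values would be:

```ini
# /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# Listen and advertise on all interfaces
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
```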
Here, we have modified the values of ETCD_LISTEN_CLIENT_URLS and ETCD_ADVERTISE_CLIENT_URLS to use the address 0.0.0.0 so that etcd listens for requests on all interfaces. Everything else in etcd.conf remains the same.
Next, we need to edit /etc/kubernetes/apiserver to appear as below:
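The listing didn’t survive extraction; drawing on the stock /etc/kubernetes/apiserver from the CentOS packages, the edited file would look roughly like the following (the service cluster IP range 10.254.0.0/16 is the package default — adjust it if it clashes with your network):

```ini
# /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
# etcd runs on the master itself
KUBE_ETCD_SERVERS="--etcd-servers=http://controller-01:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
```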
Here, controller-01 is the name of the master machine; replace it with your own master’s hostname.
Now we need to start the etcd service on the master and use it to hold the network overlay configuration. Note that the network used below (172.30.0.0/16) must be unused in your network infrastructure:
systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
Now, configure flannel to overlay the Docker network by editing /etc/sysconfig/flanneld on the master:
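The listing is missing from the original; the exact variable names vary slightly between flannel versions, but a representative /etc/sysconfig/flanneld pointing at the etcd key created above would be:

```ini
# /etc/sysconfig/flanneld
# etcd endpoint and the key prefix written in the previous step
FLANNEL_ETCD_ENDPOINTS="http://controller-01:2379"
FLANNEL_ETCD_PREFIX="/kube-centos/network"
#FLANNEL_OPTIONS=""
```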

Finally, we need to start appropriate services on the master and make sure they are running fine:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done
If any of these services crashes, re-check the steps above thoroughly.
Preparing the slave nodes
Part of preparing a slave node is making sure it points to the master node and shares the common configuration, so the first few steps are the same as for the master.
First, we need the IP addresses of all the Linux machines that will be part of the cluster. Make sure they can all reach one another. Then, log in to the slave machine and edit /etc/hosts to add each machine’s IP address and hostname:
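As on the master, the listing is missing; with placeholder 192.168.1.x addresses (substitute your real IPs), the /etc/hosts entries would look something like:

```
192.168.1.10    controller-01
192.168.1.11    controller02
192.168.1.12    controller03
```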

Since we have three machines in our cluster, we have added their information to the /etc/hosts file. This step is not required if hostnames are already in DNS.
Now, create a repository file for Kubernetes at /etc/yum.repos.d/virt7-docker-common-release.repo. We can use touch to create it:
touch /etc/yum.repos.d/virt7-docker-common-release.repo
Once it’s created, edit the file and enter the following:
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
Next, we’ll enable this repository and install Kubernetes, etcd and flannel. This will also pull in docker and cadvisor:
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
Next, we need to edit the /etc/kubernetes/config file so it looks like the following:
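As on the master, the listing is missing; based on the stock /etc/kubernetes/config from the CentOS packages, the edited file would look roughly like:

```ini
# /etc/kubernetes/config — configuration shared by all Kubernetes services
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
# Point every component at the master's API server
KUBE_MASTER="--master=http://controller-01:8080"
```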
Here, controller-01 is the name of the master machine; replace it with your own master’s hostname.
We also need to disable the firewall on the master and all the nodes, as docker does not play well with other firewall rule managers. CentOS won’t let you disable the firewall while SELinux is enforcing, so SELinux needs to be switched to permissive mode first. To do this, run the commands below:
setenforce 0
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
Now, we need to edit /etc/kubernetes/kubelet to appear as below:
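The listing is missing from the original; a representative /etc/kubernetes/kubelet for the node controller02 would be (the --api-servers flag belongs to kubelet versions of this era):

```ini
# /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# This node's own hostname
KUBELET_HOSTNAME="--hostname-override=controller02"
# The master's API server
KUBELET_API_SERVER="--api-servers=http://controller-01:8080"
KUBELET_ARGS=""
```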

Here, controller-01 is the name of the master machine and controller02 is the hostname of the slave node on which you are editing this file.
Now, configure flannel to overlay the Docker network by editing /etc/sysconfig/flanneld on the slave node:
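As on the master, the listing is missing; the variable names vary slightly between flannel versions, but a representative /etc/sysconfig/flanneld would be:

```ini
# /etc/sysconfig/flanneld
# etcd runs on the master; the prefix matches the key created there
FLANNEL_ETCD_ENDPOINTS="http://controller-01:2379"
FLANNEL_ETCD_PREFIX="/kube-centos/network"
#FLANNEL_OPTIONS=""
```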

Finally, we need to start the appropriate services on the slave node and make sure they are running fine:
for SERVICES in kube-proxy kubelet flanneld docker; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done
If any of these services crashes, re-check the steps above thoroughly.
Configure kubectl on master node
Run the commands below on the master machine:
kubectl config set-cluster default-cluster --server=http://controller-01:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
Check if all is well!
For this, we can run a simple kubectl command on the master machine:
kubectl get nodes
It should produce output like the following:
[root@controller-01 ~]# kubectl get nodes
NAME           STATUS    AGE
controller02   Ready     7m
controller03   Ready     2m
It should show the status of all slave nodes you added to the cluster.