
Kubernetes Installation in an On-Premises Environment


Kubernetes is the leader in deploying applications across multiple platforms and is a significant improvement over traditional deployment methods. It supports a more comprehensive approach to continuous integration (CI) and continuous deployment (CD), and the Kubernetes API allows for robust resource management. This blog will show you how to set up and configure Kubernetes on bare metal (on-premises).

Prerequisites:

This setup uses one master node and two worker nodes.

ISO image: Ubuntu 20.04.2 LTS (all nodes)

VirtualBox/hypervisor: to deploy the VMs

Internet access

Hardware specifications:

Master node: 8 GB RAM, 4 CPUs

Worker nodes: 4 GB RAM and 2 CPUs each

Assuming you have created virtual machines according to the above specifications, let’s look at the steps for setting up Kubernetes.

Docker installation:

Install Docker on all three machines:

# sudo apt-get update

# apt-get install -y curl

# curl -fsSL get.docker.com -o get-docker.sh

# sudo sh get-docker.sh

# docker version
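Optionally, if you want to run docker commands without sudo, one common convenience step (not required for the rest of this guide) is to add your regular user to the docker group; log out and back in for the change to take effect:

# sudo usermod -aG docker $USER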


For a better understanding of the environment, update the hostnames on all nodes.

# apt-get update

# apt-get install -y vim

# vi /etc/hosts


Add the IP addresses and hostnames of all nodes to /etc/hosts.
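For example (the worker IP addresses and the hostname "master" below are placeholders; the master IP shown matches the one used in the join command later):

$ cat /etc/hosts
10.211.55.7 master
10.211.55.8 worker1
10.211.55.9 worker2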

Next, update the hostname in /etc/hostname on each node.

Restart the machines for the changes to take effect.

Kubernetes Installation:

Follow these steps on all nodes (kubectl is strictly required only on the master node):

# swapoff -a

This turns off swap memory; Kubernetes requires swap to be disabled on all nodes.

# apt-get update && apt-get install -y apt-transport-https

# snap install kubelet --classic

# snap install kubectl --classic

# snap install kubeadm --classic
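Note that swapoff -a only disables swap until the next reboot. A common way to keep swap disabled permanently is to comment out the swap entry in /etc/fstab, for example:

# sed -i '/ swap / s/^/#/' /etc/fstab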


Now check Docker’s status. Docker should now be running on all nodes.

# systemctl status docker

On the master node, initialize the cluster:

# kubeadm init --pod-network-cidr=10.211.0.0/16

If the master node has multiple network interfaces, you can set the API server's advertise address explicitly:

# kubeadm init --pod-network-cidr=10.211.0.0/16 --apiserver-advertise-address=<master-node-ip>

After the cluster has been initialized, the message "Your Kubernetes control-plane has initialized successfully!" appears. Save the kubeadm join command and its token (for example, in a notepad).
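If you misplace the join command, you can regenerate it later on the master node:

# kubeadm token create --print-join-command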


To start using your cluster, run the following commands on the master node:

# mkdir -p $HOME/.kube

# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

# export KUBECONFIG=/etc/kubernetes/admin.conf
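As an optional check (not part of the original steps), you can confirm that the control-plane pods are starting up:

# kubectl get pods -n kube-system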

You can join any number of worker nodes by running the following command as root on each worker node:

# kubeadm join 10.211.55.7:6443 --token r58v9e.fue165xq18n4tzxw \
--discovery-token-ca-cert-hash sha256:979e8898318e056047146cfc1132d6ca49e2e705b2b91c1b5cdfe9f55d3ed270

Now, issue the following command on the master node to check the status of the Kubernetes nodes:

# kubectl get nodes -o wide

You might encounter an issue while adding a node. If this happens, you will need to configure the cgroup driver as follows:

Change Docker's cgroup driver to systemd and reload the Docker daemon; we make this change because Kubernetes recommends systemd as the cgroup driver.

Add the following content to /etc/docker/daemon.json:

$ cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

After making the change, reload the systemd daemon and restart Docker, starting on the master node.

# systemctl daemon-reload

# systemctl restart docker

Do the same for all nodes.
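To confirm that the change took effect, check the cgroup driver that Docker reports (it should now show systemd):

# docker info | grep -i "cgroup driver"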

 

To verify or adjust the setting another way, you can also check the Docker systemd unit file:

# vi /usr/lib/systemd/system/docker.service

 

Look for the following line:

ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd

Run the join command on each worker node.

Once you've added all the worker nodes to the cluster, assign the worker role to each node and then check their status:

# kubectl label node worker1 node-role.kubernetes.io/worker=

# kubectl label node worker2 node-role.kubernetes.io/worker=

# kubectl get nodes -o wide

The nodes will show as NotReady until a pod network add-on is installed. Here, Flannel is used to provide the cluster network:

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

For better performance and reliability, however, Calico can be used instead of Flannel.
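If you choose Calico, the installation is similar. For example (the manifest URL below is the one commonly referenced in the Calico documentation; verify it against the current docs before applying):

# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml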


The cluster is ready.
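As an optional smoke test (a hypothetical nginx deployment, not part of the setup itself), you can verify that workloads get scheduled onto the worker nodes:

# kubectl create deployment nginx --image=nginx

# kubectl get pods -o wide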

Kubeadm troubleshooting:

# systemctl status kubelet

# journalctl -xeu kubelet

# kubectl cluster-info dump
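If initialization fails and you need to start over, kubeadm can undo the changes it made so that kubeadm init can be run again:

# kubeadm reset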
