Install Kubernetes Cluster on CentOS 7 with kubeadm
This guide will teach you how to deploy a minimum viable Kubernetes cluster on CentOS 7 using the kubeadm tool. Kubeadm is a command-line tool created to help users bootstrap a Kubernetes cluster that conforms to best practices. The tool supports cluster lifecycle management functions such as bootstrap tokens and cluster upgrades.
For Debian installation: Deploy Kubernetes Cluster on Debian 10 with Kubespray
For Rocky Linux 8: Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
The next sections will discuss in detail the process of deploying a minimal Kubernetes cluster on CentOS 7 servers. This installation is for a single control-plane cluster. We have other guides on deployment of highly available Kubernetes cluster with RKE and Kubespray.
Step 1: Prepare Kubernetes Servers
The minimal server requirements for the servers used in the cluster are:
- 2 GiB or more of RAM per machine–any less leaves little room for your apps.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster — Can be private or public
Since this setup is meant for development purposes, I have servers with the following details:
- Master: k8s-master01.computingpost.com (4GB RAM, 2 vCPUs)
- Worker: k8s-worker01.computingpost.com (4GB RAM, 2 vCPUs)
- Worker: k8s-worker02.computingpost.com (4GB RAM, 2 vCPUs)
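If these hostnames are not yet resolvable via DNS, you can map them locally on every server. This is only a sketch using placeholder IP addresses; replace them with your own:
$ sudo vim /etc/hosts
# placeholder addresses, adjust to match your network
172.29.20.10 k8s-master01.computingpost.com
172.29.20.11 k8s-worker01.computingpost.com
172.29.20.12 k8s-worker02.computingpost.com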
Login to all servers and update the OS.
sudo yum -y update && sudo systemctl reboot
Step 2: Install kubelet, kubeadm and kubectl
Once the servers are rebooted, add Kubernetes repository for CentOS 7 to all the servers.
sudo tee /etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Then install required packages.
sudo yum clean all && sudo yum -y makecache
sudo yum -y install epel-release vim git curl wget kubelet kubeadm kubectl --disableexcludes=kubernetes
Confirm installation by checking the version of kubeadm and kubectl.
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Step 3: Disable SELinux and Swap
If you have SELinux in enforcing mode, turn it off or use Permissive mode.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
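Confirm the current SELinux mode:
$ getenforce
Permissive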
Turn off swap.
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
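Verify that swap is now disabled; the command below should print nothing:
sudo swapon --show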
Configure sysctl.
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
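You can verify that the settings were applied:
$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1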
Step 4: Install Container runtime
To run containers in Pods, Kubernetes uses a container runtime. Supported container runtimes are:
- Docker
- CRI-O
- Containerd
NOTE: Choose only one container runtime per node.
Using CRI-O Container Runtime
For CRI-O below are the installation steps.
# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter
# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
# Add CRI-O repo
OS=CentOS_7
VERSION=1.22
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
# Install CRI-O
sudo yum remove docker-ce docker-ce-cli containerd.io
sudo yum install cri-o
# Update CRI-O Subnet
sudo sed -i 's/10.85.0.0/192.168.0.0/g' /etc/cni/net.d/100-crio-bridge.conf
# Start and enable Service
sudo systemctl daemon-reload
sudo systemctl start crio
sudo systemctl enable crio
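Confirm that the CRI-O service is active; it should report active (running):
sudo systemctl status crio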
Using Docker Container runtime
When using the Docker container engine, run the commands below to install it:
# Install packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io
# Create required directories
sudo mkdir /etc/docker
sudo mkdir -p /etc/systemd/system/docker.service.d
# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# Start and enable Services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
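Confirm that Docker picked up the systemd cgroup driver; the output should include the line below:
$ sudo docker info | grep -i cgroup
 Cgroup Driver: systemd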
Using Containerd
Below are the installation steps for Containerd.
# Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter
# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload configs
sudo sysctl --system
# Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Add Docker repo
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install containerd
sudo yum update -y && sudo yum install -y containerd.io
# Configure containerd and start service
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
To use the systemd cgroup driver, set plugins.cri.systemd_cgroup = true in /etc/containerd/config.toml. When using kubeadm, manually configure the cgroup driver for the kubelet.
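Note that on newer containerd releases the generated config.toml uses the v2 config format, where the equivalent setting is the SystemdCgroup key under the runc runtime options. A hedged sketch of enabling it (check the key name in your own config.toml first):
# Applies only if your config.toml contains the SystemdCgroup key (v2 config format)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd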
Step 5: Configure Firewalld
I recommend you disable firewalld on your nodes:
sudo systemctl disable --now firewalld
If you have an active firewalld service, there are a number of ports that need to be enabled.
Master Server ports:
sudo firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
Worker Node ports:
sudo firewall-cmd --add-port={10250,30000-32767,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
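Confirm the ports were opened:
sudo firewall-cmd --list-ports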
Step 6: Initialize your control-plane node
Login to the server to be used as master and make sure that the br_netfilter module is loaded:
$ lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 2 br_netfilter,ebtable_broute
Enable kubelet service.
sudo systemctl enable kubelet
We now want to initialize the machine that will run the control plane components, which include etcd (the cluster database) and the API server.
Pull container images:
$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.22.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.22.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.22.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.22.2
[config/images] Pulled k8s.gcr.io/pause:3.5
[config/images] Pulled k8s.gcr.io/etcd:3.5.0-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.4
These are the basic kubeadm init options used to bootstrap the cluster:
- --control-plane-endpoint: set the shared endpoint for all control-plane nodes. Can be a DNS name or an IP address
- --pod-network-cidr: used to set the Pod network add-on CIDR
- --cri-socket: use if you have more than one container runtime, to set the runtime socket path
- --apiserver-advertise-address: set the advertise address for this particular control-plane node's API server
Set the cluster endpoint DNS name, or add a record to the /etc/hosts file.
$ sudo vim /etc/hosts
172.29.20.5 k8s-cluster.computingpost.com
Create cluster:
sudo kubeadm init \
--pod-network-cidr=192.168.0.0/16 \
--upload-certs \
--control-plane-endpoint=k8s-cluster.computingpost.com
Note: If 192.168.0.0/16 is already in use within your network you must select a different pod network CIDR, replacing 192.168.0.0/16 in the above command.
Container runtime sockets:
- Docker: /var/run/docker.sock
- containerd: /run/containerd/containerd.sock
- CRI-O: /var/run/crio/crio.sock
You can optionally pass the runtime socket file and an advertise address, depending on your setup.
sudo kubeadm init \
--pod-network-cidr=192.168.0.0/16 \
--cri-socket /var/run/crio/crio.sock \
--upload-certs \
--control-plane-endpoint=k8s-cluster.computingpost.com
Here is the output of my initialization command.
....
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0611 22:34:23.276374 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0611 22:34:23.278380 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.008181 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.computingpost.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01.computingpost.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zoy8cq.6v349sx9ass8dzyj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s-cluster.computingpost.com:6443 --token zoy8cq.6v349sx9ass8dzyj \
--discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-cluster.computingpost.com:6443 --token zoy8cq.6v349sx9ass8dzyj \
--discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24
Configure kubectl using commands in the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
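Alternatively, if you are running as the root user, you can point kubectl at the admin kubeconfig directly:
export KUBECONFIG=/etc/kubernetes/admin.conf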
Check cluster status:
$ kubectl cluster-info
Kubernetes master is running at https://k8s-cluster.computingpost.com:6443
KubeDNS is running at https://k8s-cluster.computingpost.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Additional Master nodes can be added using the command in installation output:
kubeadm join k8s-cluster.computingpost.com:6443 \
--token zoy8cq.6v349sx9ass8dzyj \
--discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24 \
--control-plane
Step 7: Install network plugin
In this guide we’ll use Calico. You can choose any other supported network plugin.
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
You should see the following output.
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
.....
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
Confirm that all of the pods are running:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-767ddd5576-t55pr 1/1 Running 0 37s
calico-system calico-node-nkhqf 1/1 Running 0 37s
calico-system calico-typha-955b599b6-2w426 1/1 Running 0 37s
kube-system coredns-78fcd69978-v4hff 1/1 Running 0 2m12s
kube-system coredns-78fcd69978-x489k 1/1 Running 0 2m12s
kube-system etcd-centos.hirebestengineers.com 1/1 Running 0 2m25s
kube-system kube-apiserver-centos.hirebestengineers.com 1/1 Running 0 2m25s
kube-system kube-controller-manager-centos.hirebestengineers.com 1/1 Running 0 2m25s
kube-system kube-proxy-6bvmp 1/1 Running 0 2m12s
kube-system kube-scheduler-centos.hirebestengineers.com 1/1 Running 0 2m25s
tigera-operator tigera-operator-59f4845b57-s9tfz 1/1 Running 0 46s
Confirm master node is ready:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
centos.hirebestengineers.com Ready control-plane,master 2m58s v1.22.2 142.93.14.175 <none> CentOS Linux 7 (Core) 3.10.0-1160.42.2.el7.x86_64 docker://20.10.9
Step 8: Add worker nodes
With the control plane ready you can add worker nodes to the cluster for running scheduled workloads.
If the endpoint address is not in DNS, add a record to /etc/hosts.
$ sudo vim /etc/hosts
172.29.20.5 k8s-cluster.computingpost.com
The join command that was given is used to add a worker node to the cluster.
kubeadm join k8s-cluster.computingpost.com:6443 \
--token zoy8cq.6v349sx9ass8dzyj \
--discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24
Output:
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.22" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run the command below on the control-plane node to confirm the node joined the cluster.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.computingpost.com Ready master 18m v1.22.2
k8s-worker01.computingpost.com Ready <none> 98s v1.22.2
If the join token is expired, refer to our guide on how to join worker nodes.
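You can also generate a fresh join command on the control-plane node at any time:
sudo kubeadm token create --print-join-command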
Step 9: Deploy application on cluster
For a single-node cluster, check out our guide on how to run pods on control-plane nodes.
We need to validate that our cluster is working by deploying an application.
kubectl apply -f https://k8s.io/examples/pods/commands.yaml
Check to see if the pod started:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
command-demo 0/1 Completed 0 40s
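The example pod runs a short command and exits, so the Completed status is expected. You can inspect its output with:
kubectl logs command-demo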
Step 10: Install Kubernetes Dashboard (Optional)
Kubernetes dashboard can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources.
Refer to our guide for installation: How To Install Kubernetes Dashboard with NodePort
Step 11: Install Nginx Ingress Controller
If Nginx is your preferred Ingress controller for Kubernetes workloads, you can use our guide in the following link for the installation process: