Install and configure k8s cluster v1.29
Cluster configuration
- control-plane: 192.168.56.100
- node-01: 192.168.56.110
- node-02: 192.168.56.120
- Pods network: 192.168.111.0/24
Note: There is another article, [[k8s - Install and configure a Kubernetes cluster]], which was written some time ago; this one is new. Later I need to compare the two articles and merge them into a single complete one.
Go to the official kubeadm installation documentation (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) and read about the prerequisites and how to install kubeadm on a system.
I don't think it's necessary to copy and paste all the information from the official documentation here, because it is all already there.
Maybe the better way would be to create an Ansible role for that? A rough sketch follows.
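For illustration only, a minimal sketch of such a role's tasks file (the role name k8s-common and its layout are hypothetical; it covers just two of the steps from this article):
roles/k8s-common/tasks/main.yml
# Hypothetical sketch of an Ansible role automating a couple of the steps below
- name: Disable swap for the running system
  ansible.builtin.command: swapoff -a

- name: Ensure the br_netfilter module is loaded
  community.general.modprobe:
    name: br_netfilter
    state: present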
Before you begin
Hostnames (all nodes)
- control-plane, control-plane.local: 192.168.56.100
- node-01, node-01.local: 192.168.56.110
- node-02, node-02.local: 192.168.56.120
Check the current hostname:
hostnamectl
cat /etc/hostname
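If a hostname needs to be changed, set it with hostnamectl on the corresponding node, e.g. on the control-plane node:
sudo hostnamectl set-hostname control-plane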
Static IP configuration (all nodes)
Ubuntu
https://ostechnix.com/configure-static-ip-address-ubuntu/
For control-plane:
/etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp0s3:   # NAT adapter (assumed interface name), keeps DHCP
      dhcp4: true
    enp0s8:   # host-only adapter with the static cluster IP
      dhcp4: false
      addresses:
        - 192.168.56.100/24
To validate the configuration:
sudo netplan try
Apply changes:
sudo netplan apply
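Verify that the static address was applied:
ip -br addr show enp0s8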
/etc/hosts file or DNS Server configuration (all nodes)
Add the following entries to the /etc/hosts file on every node:
/etc/hosts
192.168.56.100 k8s-cp control-plane
192.168.56.110 k8s-n1 node-01
192.168.56.120 k8s-n2 node-02
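Verify name resolution between the nodes:
ping -c 2 node-01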
Disable Firewall (all nodes)
Ubuntu
Stop and disable the firewall:
sudo systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl status ufw
Check that there are no leftover firewall rules:
sudo iptables -nL
Disable Swap (all nodes)
sudo swapoff -a
# comment out the swap entry so swap stays disabled after reboot
sudo vim /etc/fstab
Verify that no swap is in use:
free
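A non-interactive alternative, a minimal sketch assuming the swap entry in /etc/fstab is a line containing the word swap surrounded by whitespace:
# comment out any fstab line mentioning swap
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab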
Configure NTP Server (all nodes)
Change the default time zone to your region:
timedatectl list-timezones
sudo timedatectl set-timezone Europe/Madrid
date
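Enable NTP time synchronization:
sudo timedatectl set-ntp true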
Maybe it’s a good idea to setup a NTP Server within local network ([[NTP Server (Chrony)]]).
Network configuration
Enable IPv4 packet forwarding (all nodes)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
Verify that the br_netfilter and overlay modules are loaded and that the sysctl parameters are set to 1:
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables \
net.bridge.bridge-nf-call-ip6tables \
net.ipv4.ip_forward
Install a container runtime (all nodes)
Install containerd as the container runtime.
Go through the container runtimes page in the documentation to get familiar with the available runtimes and do the necessary configuration in the context of networking and forwarding. Then go to the containerd section and follow the instructions on getting started with containerd.
There are different ways to install containerd. I've chosen the apt-get method, and my OS is Ubuntu, so I follow the Docker official documentation.
Remove any old or conflicting packages first:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
Set up Docker's repository:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y containerd.io
Generate the containerd configuration file:
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
In the /etc/containerd/config.toml file, under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options], change SystemdCgroup from false to true:
SystemdCgroup = true
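A non-interactive alternative, a minimal sketch assuming SystemdCgroup appears only once in the generated config:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml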
Restart the containerd daemon and check its status.
sudo systemctl restart containerd
sudo systemctl status containerd
The runtime is ready.
Now install kubeadm, kubelet and kubectl as described in the documentation. After that you are ready to provision a Kubernetes cluster with the kubeadm command, following the "Creating a cluster with kubeadm" page.
Install kubeadm, kubelet and kubectl (all nodes)
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Note: if the directory /etc/apt/keyrings does not exist, create it before running the curl command:
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list.
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
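Verify the installed versions:
kubeadm version
kubectl version --client
kubelet --version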
Creating a cluster with kubeadm (control-plane node)
IPADDR="192.168.56.100"
NODENAME=$(hostname -s)
POD_CIDR="192.168.111.0/24"
The following command needs to be run on the control-plane node only (it uses the variables defined above):
sudo kubeadm init \
    --apiserver-advertise-address="$IPADDR" \
    --pod-network-cidr="$POD_CIDR" \
    --node-name="$NODENAME"
Configure kubectl access for your user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now you can execute kubectl commands (the node will report NotReady until a CNI plugin is installed):
kubectl get node
kubectl get pods -A
Install CNI Plugin (control-plane only)
Go to the link https://kubernetes.io/docs/concepts/cluster-administration/addons/ and choose the CNI plugin appropriate for your setup.
Calico
For more information, see the documentation: https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises
Install the Tigera operator on your cluster:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
Download the custom resources necessary to configure Calico:
curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml -O
In the downloaded file, change the cidr value to match the pod network CIDR passed to kubeadm init:
custom-resources.yaml
...
cidr: 192.168.111.0/24
...
Create the manifest to install Calico:
kubectl create -f custom-resources.yaml
Verify the Calico installation in your cluster:
watch kubectl get pods -n calico-system
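Once all the calico-system pods are Running, the nodes should report Ready:
kubectl get nodes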
Weave Net
kubectl apply -f https://reweave.azurewebsites.net/k8s/v1.29/net.yaml
Check:
watch kubectl get pods -A
Add Worker Nodes to the cluster (worker nodes)
sudo kubeadm join 192.168.56.100:6443 \
--token ... \
--discovery-token-ca-cert-hash ...
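If you no longer have the join command printed by kubeadm init, generate a new one on the control-plane node:
sudo kubeadm token create --print-join-command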
To run kubectl from a worker node, copy the kubeconfig from the control-plane node:
mkdir -p $HOME/.kube
touch $HOME/.kube/config
# copy the config file from the control-plane node, then fix ownership
sudo chown $(id -u):$(id -g) $HOME/.kube/config
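For example, a minimal sketch of the copy step over SSH (assuming the same user exists on the control-plane node; adjust the user and address to your setup):
scp 192.168.56.100:~/.kube/config $HOME/.kube/config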
kubectl get nodes
kubectl get pods -A
Troubleshooting
If you see errors like this:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Try running sudo kubeadm reset and then repeat the sudo kubeadm init command with your --apiserver-advertise-address and --pod-network-cidr values.
Clean up
If you need to clean up, refer to the documentation.
sudo kubeadm reset
Reset iptables:
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
Check:
sudo iptables -nL