How to set up a Kubernetes cluster on Debian 12

Install Kubeadm to configure a multi-node Kubernetes cluster.

This example is based on the following environment.

As a system requirement, each Node must have a unique Hostname, MAC address, and Product_uuid.
The MAC address and Product_uuid are generally already unique if you installed the OS on a physical or virtual machine with a standard procedure.
You can check the Product_uuid with the command [dmidecode -s system-uuid].
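
For example, you can check these values on each Node as follows (this assumes the interface name is [eth0], as in the diagram below):

root@ctrl:~# hostname
root@ctrl:~# ip link show eth0 | grep link/ether
root@ctrl:~# dmidecode -s system-uuid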

-----------+---------------------------+--------------------------+------------
           |                           |                          |
       eth0|10.0.0.25              eth0|10.0.0.71             eth0|10.0.0.72
+----------+-----------+   +-----------+-----------+   +-----------+-----------+
|  [ ctrl.srv.world ]  |   |  [snode01.srv.world]  |   |  [snode02.srv.world]  |
|     Control Plane    |   |      Worker Node      |   |      Worker Node      |
+----------------------+   +-----------------------+   +-----------------------+

[1] Install Containerd and apply the prerequisite settings on all Nodes.

root@ctrl:~# apt -y install containerd iptables apt-transport-https gnupg2 curl sudo
root@ctrl:~# cat > /etc/sysctl.d/99-k8s-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
root@ctrl:~# modprobe overlay; modprobe br_netfilter
root@ctrl:~# echo -e "overlay\nbr_netfilter" > /etc/modules-load.d/k8s.conf
root@ctrl:~# sysctl --system
# kube-proxy needs the [iptables-legacy] backend for iptables
# if the nftables backend (iptables-nft) is currently selected, change it to [iptables-legacy]
root@ctrl:~# update-alternatives --config iptables
There are 2 choices for the alternative iptables (providing /usr/sbin/iptables).

  Selection    Path                       Priority   Status
------------------------------------------------------------
* 0            /usr/sbin/iptables-nft      20        auto mode
  1            /usr/sbin/iptables-legacy   10        manual mode
  2            /usr/sbin/iptables-nft      20        manual mode

Press <enter> to keep the current choice[*], or type selection number: 1
update-alternatives: using /usr/sbin/iptables-legacy to provide /usr/sbin/iptables (iptables) in manual mode
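
# if IPv6 is also used on the Nodes, you may want to switch [ip6tables] to the legacy backend in the same way
root@ctrl:~# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy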

# disable swap
root@ctrl:~# swapoff -a
root@ctrl:~# vi /etc/fstab
# comment out
#/dev/mapper/debian--vg-swap_1 none            swap    sw              0       0

# switch to Cgroup v1 (v2 is the default)
root@ctrl:~# vi /etc/default/grub
# line 10 : add
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"
root@ctrl:~# update-grub
root@ctrl:~# reboot
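
# after the reboot, you can verify the settings took effect like this
# ([tmpfs] for /sys/fs/cgroup means cgroup v1, [swapon --show] should print nothing, and both modules should be listed)
root@ctrl:~# stat -fc %T /sys/fs/cgroup
root@ctrl:~# swapon --show
root@ctrl:~# lsmod | grep -e overlay -e br_netfilter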

[2] Install Kubeadm, Kubelet, Kubectl on all Nodes.

root@ctrl:~# curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg -o /etc/apt/keyrings/kubernetes-keyring.key
root@ctrl:~# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-keyring.key] https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
root@ctrl:~# apt update
root@ctrl:~# apt -y install kubelet=1.26.6-00 kubeadm=1.26.6-00 kubectl=1.26.6-00
root@ctrl:~# ln -s /opt/cni/bin /usr/lib/cni
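
# (optional) hold the packages at the installed version to prevent unintended upgrades
root@ctrl:~# apt-mark hold kubelet kubeadm kubectl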

[3] Configure initial setup on Control Plane Node.

For [control-plane-endpoint], specify the Hostname or IP address on which etcd and the Kubernetes API server run.

For the [--pod-network-cidr] option, specify the network that the Pod network uses.
There are several plugins for the Pod network (refer to the link below for details).

⇒ https://kubernetes.io/docs/concepts/cluster-administration/networking/

This example uses Calico.
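
As a sketch, the same settings could also be put into a kubeadm configuration file instead of being passed as command line flags (the file name [kubeadm-config.yaml] is just an example); you would then run [kubeadm init --config kubeadm-config.yaml] instead of the command below.

root@ctrl:~# cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: 10.0.0.25
networking:
  podSubnet: 192.168.0.0/16
EOF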

root@ctrl:~# kubeadm init --control-plane-endpoint=10.0.0.25 --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///run/containerd/containerd.sock
I0727 19:05:02.936396    1489 version.go:256] remote version is much newer: v1.27.4; falling back to: stable-1.26
[init] Using Kubernetes version: v1.26.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ctrl.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ctrl.srv.world localhost] and IPs [10.0.0.25 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ctrl.srv.world localhost] and IPs [10.0.0.25 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.501887 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ctrl.srv.world as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ctrl.srv.world as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: q1xuoi.v6meo9irmuv9g0y1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.0.25:6443 --token q1xuoi.v6meo9irmuv9g0y1 \
        --discovery-token-ca-cert-hash sha256:1b96782423120e014e212197d34c56b029af1a8db85bf40a6602e19881bc7db1 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.25:6443 --token q1xuoi.v6meo9irmuv9g0y1 \
        --discovery-token-ca-cert-hash sha256:1b96782423120e014e212197d34c56b029af1a8db85bf40a6602e19881bc7db1

# set the cluster admin user
# if you use a regular user as the cluster admin, log in as that user and run the [sudo cp/chown ***] commands shown above
root@ctrl:~# mkdir -p $HOME/.kube
root@ctrl:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@ctrl:~# chown $(id -u):$(id -g) $HOME/.kube/config
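
# (optional) enable kubectl bash completion for the admin user (assumes the [bash-completion] package is installed)
root@ctrl:~# kubectl completion bash > /etc/bash_completion.d/kubectl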

[4] Configure Pod Network with Calico.

root@ctrl:~# wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml
root@ctrl:~# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
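
# (optional) wait until the Calico DaemonSet has finished rolling out before checking the node status
root@ctrl:~# kubectl -n kube-system rollout status daemonset/calico-node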

# show state : OK if STATUS = Ready
root@ctrl:~# kubectl get nodes
NAME             STATUS   ROLES           AGE     VERSION
ctrl.srv.world   Ready    control-plane   4h41m   v1.26.6

# show state : OK if all pods are Running
root@ctrl:~# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56dd5794f-r7fvl   1/1     Running   0          4h36m
kube-system   calico-node-89s9h                         1/1     Running   0          4h36m
kube-system   coredns-787d4945fb-6nn4p                  1/1     Running   0          4h41m
kube-system   coredns-787d4945fb-qwxb5                  1/1     Running   0          4h41m
kube-system   etcd-ctrl.srv.world                       1/1     Running   0          4h41m
kube-system   kube-apiserver-ctrl.srv.world             1/1     Running   0          4h41m
kube-system   kube-controller-manager-ctrl.srv.world    1/1     Running   0          4h41m
kube-system   kube-proxy-fbfgj                          1/1     Running   0          4h41m
kube-system   kube-scheduler-ctrl.srv.world             1/1     Running   0          4h41m

[5] Join the Kubernetes Cluster that was initialized on the Control Plane Node.
The join command is the [kubeadm join ***] command shown at the bottom of the output from the initial cluster setup.

root@snode01:~# kubeadm join 10.0.0.25:6443 --token q1xuoi.v6meo9irmuv9g0y1 \
--discovery-token-ca-cert-hash sha256:1b96782423120e014e212197d34c56b029af1a8db85bf40a6602e19881bc7db1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# OK if the message above is shown
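
If the bootstrap token shown by [kubeadm init] has already expired (tokens are valid for 24 hours by default), you can generate a new join command on the Control Plane Node like this:

root@ctrl:~# kubeadm token create --print-join-command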

[6] Verify the status on the Control Plane Node. It is OK if all STATUS values are Ready.

root@ctrl:~# kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
ctrl.srv.world      Ready    control-plane   4h54m   v1.26.6
snode01.srv.world   Ready    <none>          2m12s   v1.26.6
snode02.srv.world   Ready    <none>          83s     v1.26.6

# configure crictl to use the containerd socket
root@ctrl:~# crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock --set image-endpoint=unix:///run/containerd/containerd.sock
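
As a simple smoke test that scheduling and the Pod network work on the Worker Nodes, you could deploy a sample Deployment (the name [test-nginx] is just an example) and remove it afterwards:

root@ctrl:~# kubectl create deployment test-nginx --image=nginx
root@ctrl:~# kubectl get pods -o wide
root@ctrl:~# kubectl delete deployment test-nginx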
