Build Kubernetes Cluster on Proxmox VE

[Image: Kubernetes cluster architecture (/images/kubernetes-cluster-architecture.svg)]

This documentation describes the steps required to set up a Kubernetes cluster on bare metal, step by step from the command line. We will need:

  • At least 2 Ubuntu Server instances (18.04 or 22.04 both work). We will build a single cluster with one instance acting as the Kubernetes controller and the other as a worker node.
  • Make sure every instance has a static IP address; you can use Terraform to create the instances on Proxmox for this tutorial.
  • The controller should have at least 2 CPU cores and 4 GB of memory.
  • Each worker node should have at least 1 CPU core and 2 GB of memory.
  • Make sure all instances are reachable over SSH.

Apply to All Instances

The first thing we need to do is update the operating system by executing apt update and apt upgrade:

sudo apt update && sudo apt upgrade

Install containerd, which manages the complete container lifecycle on its host system, from image transfer and storage to container execution and supervision, low-level storage, network attachments, and beyond:

sudo apt install -y containerd

Create the initial containerd configuration:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Update the /etc/containerd/config.toml configuration to enable SystemdCgroup:

...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    ...
    SystemdCgroup = true
...
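After saving the configuration, restart containerd so the SystemdCgroup change takes effect (the reboot later in this guide would also apply it):

sudo systemctl restart containerd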

Create /etc/crictl.yaml to configure the containerd runtime endpoint:

echo "runtime-endpoint: unix:///var/run/containerd/containerd.sock" | sudo tee /etc/crictl.yaml

Ensure swap is disabled

sudo swapoff -a
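Note that swapoff -a only disables swap until the next reboot. Assuming swap is configured in /etc/fstab (some cloud images have no swap entry at all), you can keep it disabled permanently by commenting out the swap line, for example:

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab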

Update /etc/sysctl.conf to enable IP forwarding by uncommenting this line:

net.ipv4.ip_forward=1
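To apply the change right away instead of waiting for the reboot, reload the sysctl settings and verify the value:

sudo sysctl -p
sysctl net.ipv4.ip_forward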

Enable the br_netfilter kernel module required by Kubernetes, then reboot the instances:

echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
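The module will be loaded on the next boot; if you want to load and verify it immediately, you can also run:

sudo modprobe br_netfilter
lsmod | grep br_netfilter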

After the reboot, add the Kubernetes apt keyring (create the /etc/apt/keyrings directory first if it doesn't exist):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the Kubernetes repository source list:

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update

Install the Kubernetes packages:

sudo apt install -y kubeadm kubectl kubelet
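Optionally, hold the packages at the installed version so a routine apt upgrade does not move the cluster components unexpectedly:

sudo apt-mark hold kubeadm kubectl kubelet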

Controller Node

With everything so far in place, we can initialize the Kubernetes cluster by deploying the controller node:

sudo kubeadm init --apiserver-advertise-address=<k8s-master-ip> --apiserver-cert-extra-sans=<k8s-master-ip> --control-plane-endpoint=<k8s-master-ip> --node-name <node-name> --pod-network-cidr=10.244.0.0/16

Note:

  • Replace <k8s-master-ip> with your controller (master) node IP
  • Replace <node-name> with your preferred node name
  • --pod-network-cidr sets the pod network range; we use 10.244.0.0/16 and will configure Calico with the same CIDR below

Calico is a networking and security solution that enables Kubernetes workloads and non-Kubernetes/legacy workloads to communicate seamlessly and securely.

Three commands will be shown in the output from the previous command, and these commands will give our user account access to manage our cluster. Here are those related commands to save you from having to search the output for them:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
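To confirm that kubectl can reach the cluster, list the nodes; at this point the controller will report NotReady because no pod network has been installed yet:

kubectl get nodes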

Install the Calico operator using kubectl on the controller (master) node:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml
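The operator runs in the tigera-operator namespace; you can check that it is up before continuing:

kubectl get pods -n tigera-operator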

Create the Calico custom resource file calico-custom-resource.yaml based on the Calico documentation:

# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Note that ipPools[].cidr must match the --pod-network-cidr we defined earlier, which is 10.244.0.0/16.

Apply the custom resource:

kubectl apply -f calico-custom-resource.yaml
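Calico components are created in the calico-system namespace; wait until all pods there are Running before moving on:

watch kubectl get pods -n calico-system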

Generate the cluster join command:

kubeadm token create --print-join-command

Worker Node

On each worker node we only need to join the cluster using the join command generated on the controller (run it with root privileges), for example:

kubeadm join 192.168.56.10:6443 --token 9ihiei.ocmvmcmrrndvqx15 --discovery-token-ca-cert-hash sha256:0b4f058dc00796282339680f1d97513129800c6b957875093b4f3c7bb10e2ee8

Run that join command on every worker node you want to add. Finally, we have a Kubernetes cluster with Calico networking.
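Back on the controller, you can verify that every worker has joined and eventually reports Ready:

kubectl get nodes -o wide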