Build Kubernetes Cluster on Proxmox VE
This documentation describes, step by step, how to set up a Kubernetes cluster on bare metal using the command line. We will need:
- At least 2 Ubuntu Server instances, either 18.04 or 22.04. We will build a single cluster with one instance as the Kubernetes controller and the other as a worker node.
- Make sure every instance has a static IP address. You can use Terraform to create the instances on Proxmox for this tutorial.
- The controller should have at least 2 CPU cores and 4 GB of memory.
- The node instance should have at least 1 CPU core and 2 GB of memory.
- Make sure all instances are reachable over SSH.
Apply to All Instances
The first thing we need to do is update our operating system by executing apt update and apt upgrade:
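Assuming a user with sudo access on each instance, that looks like:

```shell
# Refresh package lists and upgrade installed packages
sudo apt update
sudo apt upgrade -y
```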
Install containerd, which manages the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage, network attachments, and beyond.
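On Ubuntu this can be installed straight from the default repositories:

```shell
# Install the containerd runtime
sudo apt install -y containerd
```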
Create the containerd initial configuration:
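containerd can dump its default configuration, which we write to the standard path:

```shell
# Generate the default configuration file
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```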
Update the /etc/containerd/config.toml configuration to enable SystemdCgroup:
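One way to flip the flag; the sed shortcut assumes the default configuration generated in the previous step:

```shell
# In /etc/containerd/config.toml, under
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
# change SystemdCgroup from false to true:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```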
Create /etc/crictl.yaml to configure the containerd runtime endpoint:
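A minimal version of that file points crictl at the containerd socket:

```shell
# Point crictl at containerd's socket
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
```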
Ensure swap is disabled:
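Disable it immediately, and comment out any swap entry in /etc/fstab so it stays off after a reboot:

```shell
# Turn off swap now
sudo swapoff -a
# Comment out any swap line in /etc/fstab so swap stays disabled after reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```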
Update /etc/sysctl.conf to enable IP forwarding by uncommenting this line:
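The line in question, and a sed one-liner to uncomment it in place:

```shell
# In /etc/sysctl.conf, uncomment:
#   net.ipv4.ip_forward=1
sudo sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
```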
Enable the Kubernetes net filter kernel module, then reboot our instances:
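This loads br_netfilter on every boot:

```shell
# Load the br_netfilter module at boot, then reboot
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
sudo reboot
```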
After the reboot, add the Kubernetes keyrings. Create the /etc/apt/keyrings directory if it doesn't exist:
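For example, following the upstream pkgs.k8s.io layout. The v1.29 release line used here is only an example; substitute the version you want:

```shell
# Fetch and dearmor the Kubernetes signing key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```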
Add the Kubernetes repository source list:
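Matching the keyring and release line from the previous step (again, v1.29 is an example):

```shell
# Register the Kubernetes apt repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
```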
Install Kubernetes:
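Install the three core packages and hold them so routine upgrades don't move the cluster version unexpectedly:

```shell
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
# Pin the versions so apt upgrade doesn't bump them
sudo apt-mark hold kubelet kubeadm kubectl
```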
Controller Node
Now that everything so far is in place, we can initialize the Kubernetes cluster by deploying the Kubernetes controller node:
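A typical kubeadm init invocation for this setup, using the placeholders and the pod network CIDR explained in the notes:

```shell
sudo kubeadm init \
  --apiserver-advertise-address=<k8s-master-ip> \
  --node-name <node-name> \
  --pod-network-cidr=10.244.0.0/16
```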
Note:
- Replace <k8s-master-ip> with your master node IP.
- Replace <node-name> with your preferred node name.
- --pod-network-cidr sets the pod network CIDR that Calico will use, because we will use Calico as the network plugin.
Calico is a networking and security solution that enables Kubernetes workloads and non-Kubernetes/legacy workloads to communicate seamlessly and securely.
Three commands will be shown in the output of the previous command; they give our user account access to manage the cluster. Here they are, to save you searching the output for them:
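These are the standard kubeconfig commands printed by kubeadm init:

```shell
# Copy the admin kubeconfig into our user's home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```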
Install the calico network using kubectl on the controller (master) node:
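For example, by applying the Tigera operator manifest from the Calico project. The v3.27.0 version here is an example; check the Calico docs for the current release:

```shell
# Install the Tigera operator, which manages the Calico installation
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
```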
Create the calico custom resource file calico-custom-resource.yaml based on the Calico documentation:
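A sketch of that file, following the Installation resource from the Calico docs with the cidr adjusted to our cluster:

```yaml
# calico-custom-resource.yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
```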
Note that ipPools[].cidr must match the --pod-network-cidr we defined before, which is 10.244.0.0/16.
Install the custom resource:
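Apply the file we just created:

```shell
kubectl create -f calico-custom-resource.yaml
```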
Generate the join cluster command:
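Run this on the controller; it prints a ready-to-use join command for the workers:

```shell
kubeadm token create --print-join-command
```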
Worker Node
For the worker nodes we only need to join the cluster with the join command generated on the Kubernetes controller, for example:
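The shape of the generated command looks like this; the token and hash placeholders below are illustrative, so use the exact command your controller printed:

```shell
sudo kubeadm join <k8s-master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```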
Run that join command on every worker node you want to add. Finally, we have a Kubernetes cluster with Calico networking.