Build a Kubernetes Cluster with Kubeadm on Ubuntu 22.04
Hey everyone! 👋 Ever wanted to dive into the world of Kubernetes? It’s the go-to platform for automating the deployment, scaling, and management of containerized applications. And trust me, once you get the hang of it, you’ll be amazed at its power and flexibility. Today, we’re going to walk through the process of setting up your own Kubernetes cluster using `kubeadm` on Ubuntu 22.04. Don’t worry if you’re new to this; I’ll break it down step by step to make it as smooth as possible. We’ll cover everything from preparing your servers to deploying a sample application. Ready to get started? Let’s go!
Prerequisites: Setting the Stage
Before we dive into the nitty-gritty of Kubernetes cluster creation, let’s make sure we have everything we need.

First and foremost, you’ll need access to at least two Ubuntu 22.04 servers. One will act as your control plane (the master node), and the others will be worker nodes. Think of the control plane as the brains of your operation, managing all the Kubernetes resources; the worker nodes are where your applications will actually run. Each server should have a minimum of 2 CPU cores, 2 GB of RAM, and at least 20 GB of free disk space. All servers must also be able to communicate with each other over the network, which is essential for your cluster to function correctly. This usually means putting them on a common network and allowing traffic between the nodes. Make sure your firewall isn’t blocking any essential ports (such as the Kubernetes API server on 6443 or the kubelet API on 10250) — checking this up front can save you a lot of head-scratching later on.

You should have `sudo` privileges on all your servers, as we’ll be making changes that require elevated permissions. A stable internet connection on each server is also a must, since we’ll be downloading and installing several packages. One often overlooked but critical step is disabling swap: by default, the kubelet refuses to start while swap is enabled, so we’ll turn it off. And finally, although not strictly a requirement, a basic understanding of Linux commands and networking concepts will be helpful — things like navigating the command line, using `ssh` to connect to your servers, and reading basic network configurations. Don’t worry, even if you’re a beginner, I’ll provide clear instructions and explanations along the way. Now, let’s get down to the actual preparation!
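The exact firewall rules depend on your environment, but assuming `ufw` is your firewall (an assumption — adapt these to whatever you use), the well-known Kubernetes ports can be opened like this:

```shell
# On the control-plane node
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd client/peer API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler

# On worker nodes
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort services
```

Skip the rules that don’t apply to a given node’s role, and remember that your pod network add-on may need additional ports of its own.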
Server Preparation: Getting Ready to Rumble
Now that we have all the prerequisites sorted, it’s time to prepare your Ubuntu servers for the Kubernetes installation. This involves updating packages, configuring networking, and installing some essential tools.

First, update your package lists and upgrade the installed packages on all servers so you have the latest security patches and bug fixes: run `sudo apt update` followed by `sudo apt upgrade -y`.

Next up, disable swap. This is a crucial step because the kubelet will refuse to start by default while swap is enabled. Run `sudo swapoff -a` to disable it immediately. Then, to make the change permanent, comment out the swap entry in the `/etc/fstab` file. You can edit this file with a text editor like `nano` or `vim` — open it with `sudo nano /etc/fstab`, comment out the line that references swap, then save the file and exit.

Moving on, configure the networking. Ensure that your servers have static IP addresses, or set up DHCP reservations in your network so each server always receives the same address. This is critical because Kubernetes components need to reach each other at stable IP addresses. It’s also a good idea to set a hostname on each server to make it easier to identify in the cluster: set it with the `hostnamectl set-hostname` command and verify it with `hostname`. For example, on your master node you might set the hostname to `k8s-master`, and on your worker nodes to `k8s-worker-1`, `k8s-worker-2`, and so on.

Finally, install `containerd`, the container runtime Kubernetes will use: `sudo apt install containerd -y`. Then generate its default configuration with `sudo mkdir -p /etc/containerd` and `sudo containerd config default | sudo tee /etc/containerd/config.toml`, and restart the service with `sudo systemctl restart containerd`.

Once you have completed these steps, your servers are ready to have Kubernetes installed. Remember to perform this preparation on all your servers, both the master node and the worker nodes. This sets the foundation for a smooth Kubernetes installation and configuration.
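Two more prep items that `kubeadm`’s preflight checks and pod networking depend on: loading the `overlay` and `br_netfilter` kernel modules with IP forwarding enabled, and switching containerd’s cgroup driver to `systemd` to match the kubelet’s default on Ubuntu 22.04. A sketch — the `sed` one-liner assumes the stock `config.toml` generated by `containerd config default` above:

```shell
# Load the kernel modules container networking needs, now and at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and allow IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# Match containerd's cgroup driver to the kubelet's default (systemd)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```

If the cgroup drivers of the kubelet and containerd don’t match, nodes tend to flap between `Ready` and `NotReady`, so this small step is worth doing on every node.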
Installing Kubernetes Components: The Heart of the Matter
Alright, it’s time to install the Kubernetes components: `kubeadm`, `kubelet`, and `kubectl`. These are the core tools you’ll need to manage your cluster. Start by adding the Kubernetes apt repository to your system. A quick note: the legacy Google-hosted repository (`packages.cloud.google.com` / `apt.kubernetes.io`) has been deprecated and frozen, so we’ll use the community-owned `pkgs.k8s.io` repository instead. Run these commands: `sudo apt-get update`, `sudo apt-get install -y apt-transport-https ca-certificates curl gpg`, `sudo mkdir -p /etc/apt/keyrings`, then `curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg`, and finally `echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list` (replace `v1.30` with whichever minor release you want to track). Then, update the package list again with `sudo apt update`. Now, install the `kubeadm`, `kubelet`, and `kubectl` packages using the command `sudo apt install -y kubeadm kubelet kubectl`
. These three packages must be kept in version lockstep with each other and with your cluster, so after installation, hold all of them to prevent automatic upgrades: `sudo apt-mark hold kubeadm kubelet kubectl`. This ensures that an unattended upgrade can’t introduce a version skew between the `kubelet` and the control plane, which could break your cluster; upgrades should instead be performed deliberately, with `kubeadm` driving the process
. Next, initialize the Kubernetes control plane on your master node using `kubeadm init`. This command sets up the necessary components on the master node and prints the command you’ll need to join worker nodes to the cluster. When you run `kubeadm init`, it’s crucial to specify the `--pod-network-cidr` option, which defines the IP address range for the pods in your cluster. Choose a CIDR block that doesn’t overlap with your existing network; a common choice is `10.244.0.0/16`. The full command might look something like this: `sudo kubeadm init --pod-network-cidr=10.244.0.0/16`. Make a note of the output of this command, as it contains important information, including the `kubeadm join` command you’ll use to add worker nodes to the cluster.

After the initialization, you’ll need to set up your `kubectl` configuration to connect to the cluster. Run the commands provided in the output of `kubeadm init`; they typically involve creating a `.kube` directory in your home directory and copying the admin `config` file into it. You must also deploy a networking solution: Kubernetes relies on a pod network add-on to enable communication between pods. We’ll be using Calico for this, but there are other options available. You can deploy Calico with the following command: `kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml`. Finally, verify that everything is working correctly by checking the status of your nodes with `kubectl get nodes`. You should see your master node in the `Ready` state. Congratulations! You’ve successfully installed the core Kubernetes components and initialized your control plane.
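For reference, the kubeconfig setup that `kubeadm init` prints typically looks like the following — copy the exact lines from your own output, since paths can vary:

```shell
# Run as your regular (non-root) user on the control-plane node
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

After this, `kubectl` commands run as your normal user will talk to the new cluster without needing `sudo`.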
Joining Worker Nodes: Expanding Your Horizon
Once the control plane is set up, the next step is to add worker nodes to your Kubernetes cluster. This is where your applications will actually run. Remember that `kubeadm join` command you got from the output of `kubeadm init`? Now’s the time to use it. Copy the `kubeadm join` command from the master node and run it on each of your worker nodes. This command registers the worker nodes with the control plane and makes them part of the cluster. Before running it, make sure the `containerd` service is running and configured correctly on each worker, and that you have completed all the server preparation steps we discussed earlier, including disabling swap and configuring networking.

After running `kubeadm join` on a worker node, give it a few moments to register with the cluster. You can check the status of your nodes by running `kubectl get nodes` on the master node. The worker nodes should eventually appear in the `Ready` state, indicating that they have successfully joined the cluster. If a worker node is not showing up as `Ready`, double-check the following:

- Can the worker node communicate with the master node?
- Are any firewall rules blocking the required ports?
- Did you disable swap on the worker node?
- Did you correctly configure the container runtime (`containerd`)?
- Are you running the exact `kubeadm join` command copied from the `kubeadm init` output on the master node?

Once all your worker nodes are in the `Ready` state, your Kubernetes cluster is up and running. You can now deploy your applications to the cluster and start taking advantage of the power of Kubernetes.
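For illustration, a join command has the following shape — the address, token, and hash here are placeholders, so always use the exact line printed by your own `kubeadm init`. Join tokens expire after 24 hours by default, so if yours has lapsed, you can generate a fresh command on the master:

```shell
# Placeholders only -- substitute the real values from your kubeadm init output
sudo kubeadm join 192.168.1.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Run on the control-plane node to print a new, ready-to-paste join command
kubeadm token create --print-join-command
```

The `--discovery-token-ca-cert-hash` lets the joining node verify it is talking to the right control plane, which is why you should copy it rather than retype it.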
Deploying a Sample Application: Putting It All to the Test
Now for the exciting part: deploying an application to your newly created Kubernetes cluster. Let’s deploy a simple