☑️Day 32: Creating Multi Node Cluster in Kubernetes🚀


Kubernetes multi-node clusters are essential for deploying scalable applications across multiple machines. In this newsletter, I’ll guide you step-by-step through the process of setting up a multi-node Kubernetes cluster on CentOS using VMware.


What is a Multi-Node Kubernetes Cluster?

  • A multi-node cluster consists of one master node (control plane) and multiple worker nodes.

  • The master node manages the cluster (API server, scheduler, cluster state), while the worker nodes actually run your application workloads.


✅Steps for Setting Up a Multi-Node Kubernetes Cluster


Step 1: Set Up Virtual Machines (VMs) on VMware

  1. Create VMs:

    • VM1: Master node (control plane)

    • VM2 & VM3: Worker nodes

    • Allocate at least 2 GB RAM and 2 vCPUs to each node.

    • Install CentOS on all nodes and give each VM a unique hostname (see the sketch right after this list).
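
It also helps if the three nodes can resolve each other by name. A minimal sketch, assuming example hostnames (k8s-master, k8s-worker1, k8s-worker2) and placeholder IPs — substitute your own values:

     hostnamectl set-hostname k8s-master    # run on VM1; use k8s-worker1 / k8s-worker2 on VM2 and VM3

     # Map all three nodes in /etc/hosts on every VM (IPs below are placeholders)
     echo "192.168.1.10 k8s-master"  >> /etc/hosts
     echo "192.168.1.11 k8s-worker1" >> /etc/hosts
     echo "192.168.1.12 k8s-worker2" >> /etc/hosts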


Step 2: Prepare Nodes

  1. Update all systems: On each node, run:

     yum update -y
    
  2. Disable swap (the kubelet will not run with swap enabled by default):

     swapoff -a
     sed -i '/swap/d' /etc/fstab
    
  3. Disable the firewall (fine for a lab setup; in production, open the required Kubernetes ports instead):

     systemctl stop firewalld
     systemctl disable firewalld
    
  4. Enable IP forwarding (see the persistence note just below):

     echo 1 > /proc/sys/net/ipv4/ip_forward
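
Note that echoing into /proc only enables forwarding until the next reboot. A rough sketch of making it persistent, and of loading the br_netfilter module that kubeadm's preflight checks expect, could look like this:

     # Load the bridge netfilter module now and on every boot
     modprobe br_netfilter
     echo "br_netfilter" > /etc/modules-load.d/k8s.conf

     # Persist the forwarding/bridging sysctls across reboots
     echo "net.ipv4.ip_forward = 1"                > /etc/sysctl.d/k8s.conf
     echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf
     sysctl --system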
    

Step 3: Install Docker

  1. Install Docker on all nodes:

     yum install docker -y
     systemctl start docker
     systemctl enable docker
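
Before moving on, it is worth confirming that Docker is running and checking which cgroup driver it uses; the common recommendation is to have Docker and the kubelet both use systemd. The daemon.json tweak below is a general-purpose approach, not something specific to this guide:

     systemctl status docker          # should report "active (running)"
     docker info | grep -i cgroup     # shows the cgroup driver currently in use

     # Optional: switch Docker to the systemd cgroup driver
     echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' > /etc/docker/daemon.json
     systemctl restart docker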
    

Step 4: Install Kubernetes

  1. Install kubeadm, kubelet, and kubectl on all nodes:

     cat <<EOF > /etc/yum.repos.d/kubernetes.repo
     [kubernetes]
     name=Kubernetes
     baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
     enabled=1
     gpgcheck=1
     repo_gpgcheck=1
     gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
     EOF
    
     yum install -y kubelet kubeadm kubectl
     systemctl enable kubelet
     systemctl start kubelet
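
A quick sanity check that the packages landed on every node. Don't worry if the kubelet keeps restarting at this stage; it stays in a crash loop until kubeadm init or kubeadm join runs:

     kubeadm version
     kubectl version --client
     systemctl status kubelet    # restarts repeatedly until the node is initialized or joined -- expected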
    

Step 5: Initialize Kubernetes Master Node

  1. On the master node (VM1), initialize Kubernetes:

     kubeadm init --pod-network-cidr=192.168.0.0/16
    
  2. Configure kubectl on the master node:

     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
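
At this point you can confirm the control plane is up. The master node will usually show as NotReady until the pod network from Step 6 is installed:

     kubectl cluster-info
     kubectl get nodes                  # master likely NotReady until the pod network is applied
     kubectl get pods -n kube-system    # core components (etcd, apiserver, scheduler) should be Running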
    

Step 6: Set Up a Pod Network

  1. Install Calico as the pod network (on the master node):

     kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
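
Give Calico a minute or two to start, then watch the pods until they are all Running; the master should flip from NotReady to Ready once the network is up:

     kubectl get pods -n kube-system -w    # wait for the calico-* pods to reach Running
     kubectl get nodes                     # the master node should now report Ready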
    

Step 7: Join Worker Nodes to the Cluster

  1. When kubeadm init finishes on the master node, it prints a join command containing a token. It looks like this:

     kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    
  2. Run this command on both worker nodes (VM2 & VM3) to join them to the cluster:

     kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
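
If you lose this command or the token expires (tokens are only valid for 24 hours by default), you can print a fresh join command on the master node:

     kubeadm token create --print-join-command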
    

Step 8: Verify the Cluster

  1. On the master node, check if all nodes have joined successfully:

     kubectl get nodes
    

    You should see the master and worker nodes listed.
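
The output will look roughly like the sample below; hostnames, ages, and versions are illustrative and depend on your setup. Worker nodes show <none> in the ROLES column by default, and labeling them is purely cosmetic:

     NAME          STATUS   ROLES           AGE   VERSION
     k8s-master    Ready    control-plane   15m   v1.xx.x
     k8s-worker1   Ready    <none>          3m    v1.xx.x
     k8s-worker2   Ready    <none>          3m    v1.xx.x

     # Optional: give the workers a readable role label
     kubectl label node k8s-worker1 node-role.kubernetes.io/worker=worker
     kubectl label node k8s-worker2 node-role.kubernetes.io/worker=worker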


✅Important Commands Recap

  • Install Kubernetes components on each node:

       yum install -y kubelet kubeadm kubectl
    
  • Disable swap on all nodes:

       swapoff -a
    
  • Initialize the master node:

       kubeadm init --pod-network-cidr=192.168.0.0/16
    
  • Join worker nodes to the cluster:

       kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    

✅Key Concepts to Remember:

  • Master Node: Controls the Kubernetes cluster.

  • Worker Nodes: Run the application containers.

  • Pod Network: Allows communication between Kubernetes pods across nodes.

  • Calico: A popular CNI plugin that provides the pod network in Kubernetes clusters.


✅Real-Time Use Case:

A multi-node Kubernetes cluster can be used for high-availability applications. For example, in a production environment, a web application can be deployed across multiple worker nodes. If one node fails, Kubernetes reschedules the affected pods onto the remaining healthy nodes, so the application stays available.
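
As a quick illustration on the cluster built above, you could spread a small test deployment across the workers and watch where the pods land; the deployment name and image here are just examples:

     kubectl create deployment web --image=nginx --replicas=3
     kubectl get pods -o wide                      # the NODE column shows pods spread across the workers
     kubectl scale deployment web --replicas=6     # extra pods get scheduled onto available nodes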


🚀Thanks for joining me on Day 32! Let’s keep learning and growing together!

Happy Learning! 😊

#90DaysOfDevOps

💡
Follow for more updates on LinkedIn, GitHub and Twitter (X)