Creating a Deployment file for auto-scaling, auto-healing, and Namespaces on a kubeadm cluster


Welcome to day 32 of the #90DaysOfDevOps challenge initiated by Shubham Londhe. In today's blog post, we'll dive into the process of launching a Kubernetes cluster using kubeadm, with one master node and one worker node. Once the cluster is up and running, we'll also deploy a node-todo application on the cluster through a deployment.yml file. Let's get started!

Task-1:

Create one Deployment file to deploy a sample todo-app on K8s using the "Auto-healing" and "Auto-Scaling" features.

In this section of the blog post, we will explore how to deploy a sample todo-app on Kubernetes using the powerful features of auto-healing and auto-scaling. We will accomplish this by creating a deployment file and applying it to a Kubernetes cluster using the kubectl command.

Prerequisites: Before proceeding, ensure you have the following in place:

  1. A running Kubernetes cluster (kubeadm)

  2. kubectl command-line tool installed and configured to communicate with your cluster

    If you are new to Kubernetes and don't know how to set up these prerequisites, you can check my previous blog for a step-by-step guide.

    https://hashnode.com/edit/clivk3ry9001509lddifj6orc
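Before moving on, you can quickly confirm that kubectl is configured correctly and can reach your cluster:

kubectl cluster-info
kubectl get nodes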

Creating the Deployment File (deployment.yml):

To deploy our sample todo-app with auto-healing and auto-scaling, we need to create a deployment file. This file describes the desired state of our application and how it should be managed by Kubernetes. Let's create a file named 'deployment.yml' and add the following contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-todo-deployment
  labels:
    app: node-todo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-todo
  template:
    metadata:
      labels:
        app: node-todo
    spec:
      containers:
      - name: node-todo
        image: shubhambmatere/node-todo-cicd:latest
        ports:
        - containerPort: 8000
        env:
        - name: AUTO_SCALE
          value: "true"

kubectl apply -f deployment.yml
kubectl get nodes
kubectl get pods        # or: kubectl get pods -o wide
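Optionally, you can also wait for the rollout to finish and confirm that all three replicas are available:

kubectl rollout status deployment/node-todo-deployment
kubectl get deployment node-todo-deployment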

In parallel, you can check that the image has been pulled and the containers are running on the worker node as well.
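For example, on the worker node you can list the locally pulled images. Which command applies depends on the container runtime your cluster uses, so treat this as a sketch:

# If the worker node runs Docker as the container runtime
sudo docker images | grep node-todo-cicd

# If the worker node runs containerd or CRI-O
sudo crictl images | grep node-todo-cicd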

Understanding Auto-healing and Auto-scaling:

  1. Auto-healing: Auto-healing in Kubernetes refers to the ability of the system to automatically detect and recover from failures or issues within the cluster. When a pod or container becomes unresponsive or fails, Kubernetes detects this through health checks called probes. It then automatically restarts or replaces the failed pod or container to restore the desired state of the application. This helps ensure that your applications are highly available and resilient, as Kubernetes constantly monitors and maintains the health of your application components.

  2. Auto-scaling: Auto-scaling in Kubernetes enables the system to dynamically adjust the number of running instances (pods) of an application based on workload demands. Kubernetes provides Horizontal Pod Autoscaling (HPA), which automatically scales the number of replicas of a Deployment based on CPU utilization, memory usage, or custom metrics. When the workload increases, the HPA automatically increases the number of replicas to handle the additional load. Conversely, when the workload decreases, the HPA reduces the number of replicas, optimizing resource utilization and cost efficiency. Auto-scaling allows your applications to efficiently handle varying levels of traffic or workload without manual intervention, ensuring optimal performance and resource allocation.
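A quick way to see the auto-healing behaviour described above in action is to delete one of the running pods and watch the Deployment's ReplicaSet recreate it immediately:

kubectl get pods
kubectl delete pod <any-node-todo-pod-name>
kubectl get pods -w

For probe-based healing, a liveness probe can also be added under the node-todo container in deployment.yml. The snippet below is a minimal sketch and assumes the app answers HTTP requests on / at port 8000; adjust the path to whatever health endpoint your image actually serves:

        livenessProbe:
          httpGet:
            path: /            # hypothetical health endpoint; adjust for your app
            port: 8000
          initialDelaySeconds: 15
          periodSeconds: 10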

A detailed explanation of each term and configuration used:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-todo-deployment
  • apiVersion: Specifies the version of the Kubernetes API being used. In this case, it is "apps/v1", which indicates the Apps v1 API version.

  • kind: Defines the type of Kubernetes resource. Here, it is a Deployment, which manages the lifecycle of replicated pods.

  • metadata: Contains metadata information for the Deployment resource.

  • name: Specifies the name of the Deployment. In this case, it is "node-todo-deployment".

      spec:
        replicas: 3
        selector:
          matchLabels:
            app: node-todo
    
  • spec: Specifies the desired state of the Deployment.

  • replicas: Defines the desired number of replicas for the application. In this example, it is set to 3, meaning Kubernetes will ensure that three instances of the application are running.

  • selector: Specifies the label selector used to identify the pods controlled by this Deployment.

  • matchLabels: Sets the labels used for matching pods. In this case, the pods must have the label "app" with a value of "node-todo" to be managed by this Deployment.

      template:
          metadata:
            labels:
              app: node-todo
          spec:
            containers:
            - name: node-todo
              image: shubhambmatere/node-todo-cicd:latest
              ports:
              - containerPort: 8000
              env:
              - name: AUTO_SCALE
                value: "true"
    
  • template: Describes the pod template used for creating new pods.

  • metadata: Contains metadata for the pod template.

  • labels: Specifies the labels applied to the pods created from this template. Here, the label "app" with a value of "node-todo" is set, matching the Deployment's selector.

  • spec: Defines the specification of the pod template.

  • containers: Specifies the list of containers within the pod.

  • name: Sets the name of the container. In this case, it is "node-todo".

  • image: Specifies the image used for the container. Here it is shubhambmatere/node-todo-cicd:latest; replace it with the image name and tag of the app you want to deploy.

  • ports: Defines the ports exposed by the container.

    • containerPort: Specifies the port on which the container listens for incoming traffic. Here, it is set to 8000.

  • env: Passes environment variables to the container. Here the AUTO_SCALE variable is set to "true" for the application itself to read. Note that Kubernetes does not act on this variable directly; actual replica scaling is handled by a Horizontal Pod Autoscaler, as shown in the example below.
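As a hedged example, assuming the metrics-server add-on is installed in the cluster and CPU requests are added to the container spec (the manifest above does not set them yet), a Horizontal Pod Autoscaler for this Deployment can be created imperatively:

kubectl autoscale deployment node-todo-deployment --cpu-percent=70 --min=3 --max=10
kubectl get hpa

With this in place, Kubernetes scales the replica count between 3 and 10 based on average CPU utilization.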

Task 2:

Create a Namespace for Deployment.

  1. Open a terminal and use the command kubectl create namespace <namespace-name> to create a Namespace. Replace <namespace-name> with the name of your desired Namespace.
kubectl create namespace node-todo-namespace

  2. Update the deployment.yml file to include the Namespace. You can add the namespace field under the metadata section of the YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-todo-deployment
  namespace: node-todo-namespace
  labels:
    app: node-todo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-todo
  template:
    metadata:
      labels:
        app: node-todo
    spec:
      containers:
      - name: node-todo
        image: shubhambmatere/node-todo-cicd:latest
        ports:
        - containerPort: 8000
        env:
        - name: AUTO_SCALE
          value: "true"

  3. Apply the updated deployment using the command kubectl apply -f deployment.yml -n <namespace-name>.

  4. Replace <namespace-name> with the name of the Namespace you created in step 1.

kubectl apply -f deployment.yml -n <namespace-name>
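You can then verify that the Deployment and its pods were created inside the new Namespace:

kubectl get deployments -n node-todo-namespace
kubectl get pods -n node-todo-namespace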

So, that is it for this blog. We can conclude that this YAML file defines a scalable deployment for a todo-app running in Kubernetes. Combined with auto-healing and the Horizontal Pod Autoscaler, the deployment can automatically replace failed pods and adjust the number of replicas based on demand, ensuring that the application stays available and responsive to users.

In the next blog post, we will explore more advanced topics in the realm of DevOps. I will also be deploying two projects using K8s. So, stay tuned, and let me know if there are any corrections.