Learn Kubernetes Basics

Maleesha Thalagala
4 min read · Jul 18, 2022

Introduction to Kubernetes

Originally built by Google, Kubernetes is a powerful open-source system for managing containerized applications in a clustered environment. It aims to provide better ways of managing related, distributed components and services across varied infrastructure.

We’ll go through some of the fundamental ideas behind Kubernetes in this guide. We will discuss the system’s architecture, the issues it resolves, and the strategy it employs to manage scalability and containerized deployments.

What Is Kubernetes, and What Does Its Architecture Look Like?

At its core, Kubernetes is a framework for managing the execution of containerized applications across a cluster of computers. Using techniques that offer predictability, scalability, and high availability, it is a platform created to fully manage the life cycle of containerized applications and services.

As a Kubernetes user, you can define how your applications should run and how they should be able to communicate with one another and with the outside world. You can scale your services up or down, carry out smooth rolling updates, and switch traffic between different versions of your applications to test features or roll back problematic deployments. Kubernetes’ APIs and composable platform primitives let you define and manage your applications with a high degree of flexibility, power, and reliability.
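To make this declarative model concrete, here is a minimal sketch of a Deployment manifest. The names, image, and replica count are illustrative, not taken from the article:

```yaml
# deployment.yaml -- declares the desired state: three replicas of an
# nginx container. Kubernetes continuously works to make reality match.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` and later re-applying it with a new `image` tag is what triggers the rolling updates described above.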

To understand how Kubernetes provides these capabilities, it helps to get a sense of how it is designed and organized at a high level. Kubernetes can be viewed as a system built in layers, with each higher layer abstracting the complexity found in the levels below it.

At its base, Kubernetes brings individual physical or virtual machines together into a cluster by connecting them over a shared network. This cluster is the physical platform on which all Kubernetes capabilities, workloads, and components are configured. One server acts as the cluster’s primary point of contact, the control plane (historically called the master); the other machines in the cluster are known as nodes: servers responsible for accepting and running workloads using local and external resources. Because Kubernetes runs applications and services in containers to aid isolation, management, and flexibility, each node must be equipped with a container runtime. A node receives work instructions from the control plane, creates or destroys containers as necessary, and adjusts networking rules to route and forward traffic appropriately.

Kubernetes Components

When you deploy Kubernetes, you get a cluster. A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications; every cluster has at least one worker node. The worker nodes host the Pods that make up the application workload, while the control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple machines and a cluster usually contains multiple nodes, providing fault tolerance and high availability.

Source: https://kubernetes.io/docs/concepts/overview/components/

Terms in Kubernetes

As with many technologies, the terminology specific to Kubernetes can be a barrier to entry. To help you understand Kubernetes better, let’s define some of the more commonly used terms.

Control plane: The collection of processes that controls Kubernetes nodes. This is where all task assignments originate.

Nodes: The machines that carry out the tasks assigned by the control plane.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage away from the underlying containers, which makes it easier to move containers around the cluster.
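As an illustrative sketch (the names and images below are hypothetical), a two-container Pod shares one network identity, so the containers can reach each other over localhost:

```yaml
# pod.yaml -- two containers in one Pod; both share the Pod's single
# IP address and hostname, and are always scheduled onto the same node.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: app
    image: nginx:1.25
  - name: sidecar
    image: busybox:1.36
    command: ["sh", "-c", "sleep infinity"]  # placeholder sidecar process
```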

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.
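For illustration, a ReplicationController spec simply pins a desired replica count; note that in current practice a Deployment (which manages ReplicaSets) is generally preferred for this job. The names here are made up:

```yaml
# rc.yaml -- Kubernetes keeps exactly three pods matching the selector
# alive, replacing any that fail or are deleted.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```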

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically route service requests to the right pod, no matter where it moves within the cluster or even if it has been replaced.
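A minimal Service manifest sketches this decoupling: requests go to whatever pods currently carry a matching label, not to a fixed address. The label and port values are illustrative:

```yaml
# service.yaml -- routes traffic on port 80 to any pod labelled
# "app: web", regardless of which node that pod is scheduled on.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```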

Kubelet: This service runs on each node, reads the container manifests, and ensures that the defined containers are started and running.

kubectl: The command-line configuration tool for Kubernetes.

Benefits of Kubernetes

  • Reducing timeframe for development and release
    Kubernetes substantially streamlines the development, release, and deployment processes, for instance by enabling container integration and by simplifying access to storage resources from different providers. In architectures built on microservices, the application is divided into functional units that communicate with one another via APIs, so the development team can be split into smaller groups, each focused on a particular feature. This organization lets IT teams work with greater focus and efficiency, shortening the time between releases.
  • Optimizing costs

Through dynamic and intelligent container administration, Kubernetes can help enterprises cut the cost of managing their ecosystems while maintaining scalability across environments. Thanks in part to native autoscaling logic (HPA, VPA) and to integrations with the major cloud vendors, which can provision resources dynamically, resource allocation is automatically adjusted to actual application needs, and low-level manual operations on the infrastructure are significantly reduced.

This automation frees IT personnel from a large share of routine system-management duties, so they can be deployed on work that brings value to the business. Because applications run consistently across all environments, organizations are free to choose, for each individual workload, whichever resources are most convenient.
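As a sketch of the native autoscaling mentioned above, a Horizontal Pod Autoscaler can be declared against a Deployment; the target name and thresholds below are hypothetical:

```yaml
# hpa.yaml -- scales the (hypothetical) "web" Deployment between 2 and
# 10 replicas, targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

With this in place, capacity follows demand automatically instead of being provisioned by hand, which is the cost-optimization mechanism described above.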

  • Flexibility in multi-cloud environments

One of the solution’s major advantages is that containerization and Kubernetes deliver on the promise of the new hybrid and multi-cloud environments, ensuring that applications run in any public or private environment without functional or performance degradation. This likewise reduces the risk of vendor lock-in.
