Kubernetes, often abbreviated as K8s, is a powerful open-source platform designed to automate deploying, scaling, and operating containerized applications. To fully grasp Kubernetes, it's essential to understand its architecture, which involves various components that work in unison to manage and orchestrate containers. This blog will explore the key components of Kubernetes architecture and trace the flow of a command from kubectl through the entire system.
Kubernetes Architecture: I made the diagram using eraser.io.
1. Overview of Kubernetes Components
Control Plane
The control plane is the brain of the Kubernetes cluster. It manages the cluster's lifecycle, including starting and stopping workloads, scaling applications, and ensuring the desired state of the cluster is maintained. The main components of the control plane include:
API Server
ETCD
Scheduler
Controller Manager
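In a kubeadm-style cluster (an assumption; managed clusters may hide these from you), the control plane components typically run as static Pods in the kube-system namespace, so you can see them with kubectl:

```bash
# List control plane components running as Pods (kubeadm-style clusters)
kubectl get pods -n kube-system -o wide

# Typical names include kube-apiserver-<node>, etcd-<node>,
# kube-scheduler-<node>, and kube-controller-manager-<node>
```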
Worker Node
Worker nodes are the machines where your applications (containers) run. Each node has the necessary services to run and manage containers, including:
Kubelet
Kube Proxy
Pod
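You can list the worker nodes in your cluster and inspect one in detail:

```bash
# Show all nodes with their roles, versions, and internal IPs
kubectl get nodes -o wide

# Inspect a single node (capacity, conditions, Pods running on it)
kubectl describe node <node-name>
```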
Kubectl
kubectl is the command-line tool used to interact with the Kubernetes API server. It is the user's gateway to managing the Kubernetes cluster.
2. Detailed Component Breakdown
Kubectl
kubectl is a CLI tool for communicating with the Kubernetes API server. It allows users to deploy applications, inspect and manage cluster resources, and view logs.
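A few everyday kubectl commands, just as an illustration of those three tasks (the manifest file name is hypothetical):

```bash
# Deploy an application from a manifest
kubectl apply -f deployment.yaml

# Inspect and manage cluster resources
kubectl get pods -o wide
kubectl describe pod <pod-name>

# View logs from a container
kubectl logs <pod-name>
```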
API Server
The API server is the front end of the Kubernetes control plane. It exposes the Kubernetes API and processes API requests, including authentication, validation, and data changes.
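Since every kubectl command becomes an HTTP request to the API server, you can watch that traffic by raising kubectl's verbosity (the exact output format may vary by version):

```bash
# -v=7 prints the HTTP method and URL of each API call kubectl makes
kubectl get pods -v=7

# You can also call the API directly using kubectl's credentials
kubectl get --raw /api/v1/namespaces/default/pods
```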
ETCD
ETCD is a consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data. It stores the entire state of the cluster.
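If you have access to an etcd member and its client certificates (the paths below are kubeadm defaults and are an assumption for your cluster), you can peek at the keys Kubernetes stores under /registry:

```bash
# List the keys etcd holds for Pods in the default namespace (keys only)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/pods/default/ --prefix --keys-only
```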
Scheduler
The scheduler watches for newly created Pods that have no assigned node and selects a node for them to run on based on various constraints and policies.
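Once the scheduler has placed a Pod, the assignment is visible in the Pod's status and events:

```bash
# The NODE column shows where the scheduler placed each Pod
kubectl get pods -o wide

# The events include a "Scheduled" entry from default-scheduler
kubectl describe pod <pod-name>
```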
Controller Manager
The controller manager runs controllers, which are loops that watch the state of the cluster through the API server and make changes to achieve the desired state. Examples include the node controller, replication controller, and endpoints controller.
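You can watch a controller reconcile desired and actual state by deleting a Pod that a Deployment owns; the ReplicaSet controller immediately creates a replacement (nginx-demo is just a hypothetical name):

```bash
# Create a Deployment with 3 replicas
kubectl create deployment nginx-demo --image=nginx --replicas=3

# Delete one of its Pods and watch a new one appear
kubectl delete pod <one-nginx-demo-pod>
kubectl get pods -w
```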
Worker Node Components
Kubelet: The kubelet is an agent that runs on each worker node. It ensures that containers are running in a Pod and communicates with the API server to manage workloads.
Kube Proxy: Kube Proxy maintains network rules on nodes. These rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Pod: A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your cluster and can contain one or more containers.
Container: Containers are the actual applications running inside Pods. Kubernetes supports various container runtimes like Docker, containerd, and CRI-O.
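If you can SSH into a worker node (assuming a systemd-based setup with a CRI runtime such as containerd), you can see these components at work directly on the node:

```bash
# The kubelet runs as a systemd service on the node
systemctl status kubelet

# The container runtime's view of running containers (containerd/CRI-O)
crictl ps

# kube-proxy's rules, if it is running in the default iptables mode
iptables-save | grep KUBE
```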
3. Flow of a Command from Kubectl to Container
Let's trace the journey of a command issued through kubectl to the final state where the application is running in a container; a short hands-on example for observing this flow follows the steps below.
Flow of Control inside Kubernetes Architecture: I made the diagram using eraser.io.
Step-by-Step Flow:
Kubectl Command:
- A user issues a command using kubectl, such as kubectl run nginx --image=nginx.
API Server Receives the Request:
- kubectl communicates with the API server. The command is translated into an API request (e.g., to create a Pod).
API Server Processes the Request:
- The API server authenticates, validates, and processes the request. If valid, it writes the new state to ETCD.
ETCD Stores the State:
- ETCD stores the configuration and state of the newly created Pod.
Scheduler Schedules the Pod:
- The scheduler detects the new unscheduled Pod and assigns it to a suitable worker node based on resource availability and other constraints.
Node Assignment:
- The scheduler updates the API server with the node assignment.
Kubelet on the Assigned Node:
- The kubelet on the selected worker node gets the Pod specification from the API server and ensures the containers described in the Pod are started.
Kube Proxy Sets Up Networking:
- Kube Proxy sets up the necessary networking rules to allow communication to and from the Pods.
Container Runtime Starts the Containers:
- The container runtime (e.g., Docker) pulls the required image (nginx) from the container registry and starts the container(s) inside the Pod.
Pod and Container Run:
- The Pod is now running on the worker node, and the application (nginx) is live and accessible as per the defined networking rules.
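Here is a minimal way to watch this flow happen on a real cluster, using only kubectl (the event names shown are typical but can vary slightly between versions):

```bash
# Issue the command and print the HTTP calls kubectl makes to the API server
kubectl run nginx --image=nginx -v=7

# See which node the scheduler assigned the Pod to
kubectl get pod nginx -o wide

# The Pod's events show the rest: Scheduled, then image Pulling/Pulled,
# then container Created and Started
kubectl describe pod nginx
```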
Conclusion
Kubernetes architecture is a sophisticated system that orchestrates the deployment and management of containerized applications across a cluster of machines. By understanding the flow from kubectl command to running containers, you can appreciate the seamless integration and coordination of Kubernetes components that ensure your applications are running efficiently and reliably.
That's it for now. Did you like this blog? Please let me know.
You can Buy Me a Coffee if you want to, and please don't forget to follow me on YouTube, Twitter, and LinkedIn as well.
#40daysofkubernetes