How to Deploy Kubernetes Cluster

Kubernetes has become the de facto standard for container orchestration in modern cloud-native environments. Whether you're managing microservices, scaling web applications, or automating deployment pipelines, deploying a Kubernetes cluster is a foundational skill for DevOps engineers, site reliability engineers (SREs), and cloud architects. This tutorial provides a comprehensive, step-by-step guide to deploying a Kubernetes cluster from scratch, covering everything from infrastructure preparation to cluster validation and optimization. By the end of this guide, you will understand not only how to deploy Kubernetes, but also why each step matters, how to avoid common pitfalls, and how to scale your deployment for production-grade workloads.

The importance of mastering Kubernetes deployment cannot be overstated. With over 80% of enterprises now using containers in production (per the 2023 Cloud Native Computing Foundation survey), the ability to reliably deploy, manage, and secure Kubernetes clusters is no longer optional; it's essential. This guide is designed for intermediate users familiar with Linux, Docker, and basic networking concepts. If you're new to containers, consider learning Docker first. But if you're ready to take the next step, let's begin.

Step-by-Step Guide

Step 1: Understand Kubernetes Architecture

Before deploying a Kubernetes cluster, it's critical to understand its core components. Kubernetes operates on a master-worker architecture. The control plane (master nodes) manages the cluster's state, schedules workloads, and responds to cluster events. The worker nodes run the actual containerized applications.

The key components of the control plane include:

  • kube-apiserver: The front-end for the Kubernetes control plane. It exposes the API and handles all REST operations.
  • kube-controller-manager: Runs controllers that regulate the state of the cluster (e.g., node controller, replication controller).
  • kube-scheduler: Assigns newly created pods to worker nodes based on resource availability and constraints.
  • etcd: A consistent and highly-available key-value store used to store all cluster data.

On worker nodes, the essential components are:

  • kubelet: An agent that ensures containers are running in a pod.
  • kube-proxy: Maintains network rules on nodes to enable communication to and from pods.
  • Container Runtime: Software responsible for running containers (e.g., containerd, Docker Engine).

Understanding these components helps you troubleshoot issues during deployment and configure your cluster appropriately.
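
If you already have access to a running cluster, you can inspect most of these components directly. A quick sketch (assuming a kubeadm-based cluster, where the control plane runs as static pods):

# Control plane components (apiserver, scheduler, controller-manager, etcd) appear as pods in kube-system
kubectl get pods -n kube-system -o wide

# The kubelet runs as a systemd service on every node
systemctl status kubelet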

Step 2: Choose Your Deployment Method

There are multiple ways to deploy a Kubernetes cluster, each suited to different environments and use cases:

  • Managed Kubernetes Services: AWS EKS, Google GKE, Azure AKS; ideal for production with minimal operational overhead.
  • Self-Hosted (On-Premise or VM): Using kubeadm, kubespray, or Rancher; best for learning, hybrid cloud, or environments requiring full control.
  • Local Development: Minikube, Kind (Kubernetes in Docker); perfect for testing and development.

This guide focuses on deploying a self-hosted Kubernetes cluster using kubeadm on Ubuntu 22.04 LTS virtual machines. Kubeadm is the official Kubernetes tool for bootstrapping clusters and is widely used in production environments for its simplicity and reliability.

Step 3: Prepare Your Infrastructure

For a minimal production-ready cluster, you'll need at least three machines:

  • 1 Control Plane Node (Master)
  • 2 Worker Nodes

Each machine should meet the following minimum specifications:

  • 2 vCPUs
  • 2 GB RAM
  • 20 GB disk space
  • Ubuntu 22.04 LTS (or CentOS 8+/RHEL 8+)
  • Static IP addresses
  • Full network connectivity between nodes (ports 6443, 2379-2380, 10250, 10259, and 10257 open)

Ensure all nodes can resolve each other by hostname. Edit the /etc/hosts file on each machine:

192.168.1.10  k8s-master
192.168.1.11  k8s-worker1
192.168.1.12  k8s-worker2

Replace the IPs with your actual static IPs. Test connectivity using ping k8s-master from each node.
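
A small connectivity check you can run from each node (assumes netcat is installed; port 6443 will only answer after the control plane is initialized in Step 7):

# Verify hostname resolution and basic reachability
ping -c 3 k8s-master

# Verify the API server port is reachable (post-initialization)
nc -zv k8s-master 6443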

Step 4: Disable Swap and Configure System Settings

By default, the kubelet refuses to run with swap enabled, so kubeadm requires it to be off. Disable it permanently:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

Configure kernel parameters for Kubernetes networking:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system

These settings enable iptables to correctly handle traffic forwarded between containers and ensure proper network routing.
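
To confirm the changes took effect, verify the loaded modules and sysctl values:

# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'

# All three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward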

Step 5: Install Container Runtime (containerd)

Kubernetes requires a container runtime. While Docker was historically used, containerd is now the recommended runtime due to its lightweight nature and direct integration with the CRI (Container Runtime Interface).

Install containerd:

sudo apt update
sudo apt install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Configure systemd as the cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl restart containerd
sudo systemctl enable containerd

Verify installation:

sudo crictl ps

Ensure no errors appear. If you see a list of containers (even empty), containerd is running correctly.
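
If crictl instead warns about a missing runtime endpoint, point it at containerd explicitly. A minimal /etc/crictl.yaml:

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF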

Step 6: Install Kubernetes Components

Add the Kubernetes APT repository and install kubeadm, kubelet, and kubectl:

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

The apt-mark hold command prevents automatic updates that could break cluster compatibility. Always update Kubernetes components in coordination across all nodes.
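
To confirm the hold took effect and check the installed version:

apt-mark showhold          # should list kubeadm, kubectl, kubelet
kubeadm version -o short   # e.g., v1.29.x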

Step 7: Initialize the Control Plane

On the control plane node (k8s-master), initialize the cluster:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr flag specifies the IP range for pod networks. This value must match the CNI plugin you'll install later (Flannel uses 10.244.0.0/16 by default).
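
If you prefer a declarative setup, kubeadm also accepts a configuration file. A minimal sketch (the file name and version shown are examples):

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0
networking:
  podSubnet: 10.244.0.0/16

Then initialize with sudo kubeadm init --config kubeadm-config.yaml.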

After initialization completes, you'll see output similar to:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

Follow the instructions exactly:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the control plane is running:

kubectl get nodes

Initially, the node will show as NotReady because the network plugin hasn't been installed yet.

Step 8: Install a Container Network Interface (CNI)

Kubernetes requires a CNI plugin to enable pod-to-pod communication. The most popular options are Flannel, Calico, and Cilium.

For simplicity and compatibility, we'll use Flannel:

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Wait 1-2 minutes for the pods to start:

kubectl get pods -n kube-flannel

You should see kube-flannel-ds-xxxxx in the Running state (recent Flannel manifests install into their own kube-flannel namespace). Once all core components are ready, check the node status again:

kubectl get nodes

Now the control plane node should show as Ready.

Step 9: Join Worker Nodes to the Cluster

On the control plane node, retrieve the join command:

kubeadm token create --print-join-command

This outputs a command similar to:

kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef...

Copy this command and run it on each worker node (k8s-worker1 and k8s-worker2). You may need to use sudo:

sudo kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef...

Once joined, verify from the control plane:

kubectl get nodes

All three nodes should now appear with status Ready.
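
Optionally, label the workers so the ROLES column reads worker instead of <none> (purely cosmetic):

kubectl label node k8s-worker1 node-role.kubernetes.io/worker=worker
kubectl label node k8s-worker2 node-role.kubernetes.io/worker=worker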

Step 10: Deploy a Test Application

To validate your cluster is fully functional, deploy a simple Nginx deployment:

kubectl create deployment nginx --image=nginx:latest

kubectl expose deployment nginx --port=80 --type=NodePort

Check the service:

kubectl get services

Look for the assigned NodePort (in the 30000-32767 range). Access the application via any worker node's IP and the assigned port:

curl http://<worker-node-ip>:30000

If you see the Nginx welcome page, your Kubernetes cluster is successfully deployed and operational.
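
Once validated, you can remove the test resources:

kubectl delete service nginx
kubectl delete deployment nginx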

Best Practices

Use Role-Based Access Control (RBAC)

Always define granular RBAC policies. Avoid using the default cluster-admin role for everyday tasks. Create dedicated service accounts and roles for applications and users:

kubectl create serviceaccount myapp-sa
kubectl create role myapp-role --verb=get,list,watch --resource=pods
kubectl create rolebinding myapp-binding --role=myapp-role --serviceaccount=default:myapp-sa

This minimizes the risk of privilege escalation and follows the principle of least privilege.
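
You can verify the binding behaves as intended with kubectl auth can-i; given the role above, the first command should answer yes and the second no:

kubectl auth can-i list pods --as=system:serviceaccount:default:myapp-sa
kubectl auth can-i delete pods --as=system:serviceaccount:default:myapp-sa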

Enable Audit Logging

Kubernetes audit logs track all API requests. Enable them in the kube-apiserver configuration:

--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kube-apiserver/audit.log

Create a basic audit policy file:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata

Audit logs are invaluable for security compliance and incident investigation.

Apply Resource Limits and Requests

Always define resources.requests and resources.limits in your deployments. Without them, pods may consume excessive resources, destabilizing the cluster.

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"

Use the kubectl top pods command to monitor actual usage and refine these values over time.
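
Note that kubectl top depends on the metrics-server add-on, which kubeadm does not install by default. One common way to add it (on lab clusters with self-signed kubelet certificates, the metrics-server deployment may also need its --kubelet-insecure-tls flag):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml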

Use Namespaces for Isolation

Organize workloads into namespaces (e.g., production, staging, dev):

kubectl create namespace production

kubectl create deployment nginx-prod --image=nginx -n production

This prevents naming conflicts and simplifies access control and resource quotas.
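
To make the isolation enforceable, attach a ResourceQuota to the namespace. A sketch with example limits:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"

Apply it with kubectl apply -f and inspect usage with kubectl describe quota -n production.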

Regularly Update and Patch

Keep your Kubernetes version up to date. The Kubernetes release cycle is rapid: a new minor version ships roughly every four months. Always test upgrades in a staging environment first.

Use kubeadm upgrade plan to check available versions, then:

sudo kubeadm upgrade apply v1.29.0

Update kubelet and kubectl on all nodes afterward.
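
A sketch of the per-worker sequence (the version string is an example; run drain and uncordon from the control plane):

kubectl drain k8s-worker1 --ignore-daemonsets
sudo apt-mark unhold kubelet kubectl
sudo apt install -y kubelet='1.29.0-*' kubectl='1.29.0-*'
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon k8s-worker1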

Backup etcd Regularly

etcd stores the entire state of your cluster. Back it up frequently:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /backup/etcd-snapshot.db

Store backups securely and test restoration procedures periodically.
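
Restoring is the mirror operation; a minimal sketch that materializes the snapshot into a fresh data directory (you would then point etcd's --data-dir at it):

ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored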

Secure API Server Access

Disable anonymous access and ensure TLS is enforced:

--anonymous-auth=false
--authorization-mode=Node,RBAC

Use client certificates or OIDC integration for authentication. Never expose the API server directly to the public internet without a reverse proxy and WAF.

Tools and Resources

Essential CLI Tools

  • kubectl: The primary command-line tool for interacting with Kubernetes clusters.
  • kubeadm: Bootstraps clusters with minimal configuration.
  • kustomize: Customizes YAML manifests without templates; ideal for environment-specific configurations.
  • helm: Package manager for Kubernetes applications. Use Helm charts to deploy complex apps like PostgreSQL, Redis, or Prometheus.
  • k9s: Terminal-based UI for managing Kubernetes resources; excellent for rapid debugging.

Monitoring and Observability

  • Prometheus + Grafana: Collect metrics from kubelet, cAdvisor, and custom applications.
  • Loki: Log aggregation system optimized for Kubernetes.
  • Jaeger: Distributed tracing for microservices.

Install the Prometheus Operator via Helm for automated service discovery and alerting:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack

Infrastructure as Code (IaC)

Automate cluster provisioning using:

  • Terraform: Provision VMs, networks, and security groups on AWS, Azure, or GCP.
  • Ansible: Configure OS-level settings (swap, kernel params, Docker/containerd) across nodes.
  • Flux CD: GitOps tool that automatically syncs cluster state from a Git repository.

Example Terraform module for creating Ubuntu VMs on AWS:

resource "aws_instance" "k8s_master" {

ami = "ami-0c55b159cbfafe1f0"

instance_type = "t3.medium"

key_name = "k8s-key"

security_groups = ["k8s-cluster-sg"]

tags = {

Name = "k8s-master"

}

}

Real Examples

Example 1: Deploying a Multi-Tier Web Application

Let's deploy a full-stack application: a React frontend, a Node.js API, and a PostgreSQL database.

1. Create a namespace:

kubectl create namespace web-app

2. Deploy PostgreSQL with a persistent volume (this assumes your cluster has a default StorageClass that can satisfy the PVC):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: web-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "myapp"
            - name: POSTGRES_USER
              value: "user"
            - name: POSTGRES_PASSWORD
              value: "password"
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: web-app
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  type: ClusterIP

3. Deploy the Node.js API:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: your-registry/api:latest
          ports:
            - containerPort: 3000
          env:
            - name: DB_HOST
              value: "postgres"
            - name: DB_PORT
              value: "5432"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: web-app
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP

4. Deploy the React frontend:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: your-registry/frontend:latest
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: web-app
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort

5. Find the externally assigned port (the frontend Service above is already of type NodePort):

kubectl get svc frontend -n web-app

Access the frontend via any worker node's IP and the assigned NodePort. The API connects to PostgreSQL internally via the service name postgres, demonstrating Kubernetes' built-in service discovery.
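
You can confirm service discovery from inside the cluster with a throwaway pod (the pod name and busybox image are arbitrary choices):

kubectl run dns-test -n web-app --rm -it --restart=Never --image=busybox:1.36 -- nslookup postgres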

Example 2: Blue-Green Deployment with Ingress

Use an Ingress controller (e.g., NGINX Ingress) to route traffic between two versions of an app:

1. Install NGINX Ingress:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.0/deploy/static/provider/cloud/deploy.yaml

2. Deploy two versions of your app:

# Version 1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
      version: v1
  template:
    metadata:
      labels:
        app: app
        version: v1
    spec:
      containers:
        - name: app
          image: myapp:v1
          ports:
            - containerPort: 80
---

# Version 2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
      version: v2
  template:
    metadata:
      labels:
        app: app
        version: v2
    spec:
      containers:
        - name: app
          image: myapp:v2
          ports:
            - containerPort: 80

3. Create a Service for each version (a single Service selecting both versions would split traffic evenly, regardless of any canary weight):

apiVersion: v1
kind: Service
metadata:
  name: app-v1
spec:
  selector:
    app: app
    version: v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: app-v2
spec:
  selector:
    app: app
    version: v2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

4. Configure Ingress to route 90% of traffic to v1 and 10% to v2. With NGINX Ingress this takes two Ingress objects: a primary one for v1, and a canary one (marked with the canary annotations) for v2:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-v1
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-v2
                port:
                  number: 80

Gradually increase the canary weight to 100% as you monitor performance and errors. This is a safe, production-grade deployment strategy.
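
Adjusting the weight is a single command against the canary Ingress:

kubectl annotate ingress app-ingress-canary nginx.ingress.kubernetes.io/canary-weight="50" --overwrite

When v2 has handled 100% of traffic cleanly, point the primary Ingress at app-v2 and delete the canary.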

FAQs

Can I deploy Kubernetes on a single machine?

Yes, using tools like Minikube or Kind, you can run a single-node Kubernetes cluster on your laptop for development. However, this is not suitable for production due to lack of high availability and fault tolerance.

What's the difference between kubeadm, kops, and Rancher?

kubeadm is a lightweight tool for bootstrapping clusters manually. kops automates cluster creation and management, primarily on AWS. Rancher is a full-featured UI and management platform that supports multiple Kubernetes distributions and provides centralized monitoring and RBAC.

How do I scale my Kubernetes cluster?

Add more worker nodes using the same kubeadm join command. For automatic scaling, use Cluster Autoscaler with cloud providers (e.g., AWS Auto Scaling Groups or Azure VM Scale Sets). Ensure your CNI and storage backends support dynamic provisioning.

Is Kubernetes secure by default?

No. Kubernetes has many attack surfaces, and a default installation leaves several hardening steps to you: anonymous API discovery is enabled, pod-to-pod traffic is unrestricted, and it is easy to grant excessive privileges. Always enable RBAC, audit logging, network policies, and Pod Security admission (or OPA/Gatekeeper) to harden your cluster.

How do I troubleshoot a node stuck in NotReady state?

Run kubectl describe node <node-name> to check events; the sketch after this list collects a few quick diagnostics. Common causes include:

  • Failed container runtime (check systemctl status containerd)
  • Network plugin not installed or misconfigured
  • Insufficient resources (CPU/memory)
  • Time synchronization issues (ensure NTP is running)
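
A quick triage sketch to run on the affected node (assumes containerd as the runtime):

systemctl status containerd kubelet
journalctl -u kubelet --since "10 min ago" --no-pager | tail -n 50
timedatectl status    # clock skew breaks certificate validation
free -h && df -h /    # check for memory and disk pressure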

Can I run Kubernetes on bare metal?

Yes. Many enterprises run Kubernetes on physical servers using tools like MetalLB (for load balancing) and local-path-provisioner (for local storage). This is common in edge computing and high-performance environments.

What happens if the control plane fails?

In a single-control-plane setup, the cluster becomes unmanageable: you can't schedule new workloads or update configurations (existing pods keep running, but nothing can be changed). For production, always deploy a highly available (HA) control plane with 3 or 5 master nodes and an external etcd cluster.

How often should I back up my cluster?

At minimum, back up etcd before any major upgrade or configuration change. For mission-critical systems, schedule daily snapshots and store them offsite. Test restores quarterly.

Can I use Kubernetes without Docker?

Yes. Kubernetes 1.24 removed the built-in dockershim, so Docker Engine is no longer supported directly as a runtime (it can still be used via the external cri-dockerd adapter). Use containerd, CRI-O, or another CRI-compliant runtime instead.

What's the best way to learn Kubernetes deployment?

Start with Minikube to understand core concepts. Then deploy a 3-node cluster using kubeadm on virtual machines. Practice deploying real applications, breaking them, and fixing them. Use the Kubernetes documentation as your primary reference; it's exceptionally well written.

Conclusion

Deploying a Kubernetes cluster is more than a technical task; it's the foundation of modern infrastructure. By following this guide, you've not only learned how to install and configure a production-grade cluster, but also how to secure it, monitor it, and scale it responsibly. You now understand the importance of each component, the rationale behind best practices, and how to apply these principles to real-world applications.

Kubernetes is not a silver bullet. It introduces complexity, and with that comes operational responsibility. But the benefits of scalability, resilience, automation, and portability far outweigh the costs when implemented correctly. Whether you're managing a startup's web app or a Fortune 500's microservices ecosystem, mastering Kubernetes deployment empowers you to build systems that are reliable, efficient, and future-proof.

Continue to explore advanced topics: Helm charts, GitOps with Flux, service meshes like Istio, and multi-cluster management. The Kubernetes ecosystem evolves rapidly, and staying curious is your greatest asset. Your journey into cloud-native infrastructure has just begun; now go deploy something amazing.