How to Set Up an Ingress Controller
In modern cloud-native environments, managing external access to services running inside a Kubernetes cluster is a critical responsibility. This is where an Ingress Controller comes into play. An Ingress Controller is a specialized component that routes external HTTP and HTTPS traffic to internal Kubernetes services based on defined rules. Unlike a simple LoadBalancer service, an Ingress Controller provides advanced routing capabilities such as path-based routing, host-based routing, SSL termination, and load balancing across multiple services, all through a single IP address.
Setting up an Ingress Controller correctly is essential for securing, scaling, and optimizing web applications deployed on Kubernetes. Whether you're running a microservices architecture, a multi-tenant SaaS platform, or a high-traffic e-commerce backend, a properly configured Ingress Controller ensures efficient traffic management, improved security posture, and seamless integration with modern DevOps workflows.
This comprehensive guide walks you through every step of setting up an Ingress Controller, from choosing the right controller to validating your configuration. You'll learn best practices, real-world examples, and the tools that make the process reliable and repeatable. By the end, you'll have the knowledge to deploy and manage an Ingress Controller confidently in any production-ready Kubernetes environment.
Step-by-Step Guide
Step 1: Understand the Role of Ingress and Ingress Controller
Before deploying any Ingress Controller, it's vital to distinguish between two related but distinct Kubernetes resources: Ingress and Ingress Controller.
Ingress is a Kubernetes API object that defines rules for routing external traffic to services within the cluster. It's essentially a configuration file that specifies which hostnames and paths should be routed to which backend services. However, Ingress by itself does nothing; it's just a set of rules. To make those rules functional, you need an Ingress Controller.
The Ingress Controller is a separate piece of software that watches the Kubernetes API for Ingress resource changes and then configures a reverse proxy (like NGINX, Traefik, or HAProxy) to enforce those rules. Think of Ingress as the blueprint and the Ingress Controller as the construction crew that builds the actual road system based on that blueprint.
Common Ingress Controllers include:
- NGINX Ingress Controller: The most widely used, based on the NGINX web server.
- Traefik: Modern, dynamic, and designed for microservices with automatic service discovery.
- HAProxy Ingress: High-performance, enterprise-grade, ideal for heavy workloads.
- Envoy Ingress Controller: Built on the Envoy proxy, often used in service mesh environments like Istio.
- Contour: Uses Envoy and is optimized for Kubernetes-native workflows.
For this guide, we'll focus on the NGINX Ingress Controller due to its popularity, extensive documentation, and broad compatibility.
Step 2: Prepare Your Kubernetes Cluster
Before installing the Ingress Controller, ensure your Kubernetes cluster is ready:
- Verify that kubectl is installed and configured to communicate with your cluster: kubectl cluster-info
- Confirm that your cluster is running a supported version (v1.19 or later recommended).
- Ensure you have administrative access to deploy resources in the default or a dedicated namespace (e.g., ingress-nginx).
- If using a managed Kubernetes service (like EKS, GKE, or AKS), check whether an Ingress Controller is already installed by default. Some providers offer their own (e.g., GKE's HTTP(S) Load Balancer), which may conflict with manual installations.
Run the following command to list existing Ingress resources:
kubectl get ingress --all-namespaces
If you see any existing Ingress objects, determine whether they are managed by an existing controller. If so, you may need to uninstall the current one before proceeding.
Step 3: Install the NGINX Ingress Controller
The NGINX Ingress Controller is maintained by the Kubernetes SIG Network team and is available via official manifests on GitHub.
Use the following command to install the latest stable version:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml
This command deploys:
- A namespace named ingress-nginx
- Service Account, Role, and RoleBinding for RBAC permissions
- Deployment for the NGINX controller pods
- A ClusterIP Service for internal communication
- A LoadBalancer Service to expose the controller externally
Wait a few moments for the resources to be created. Monitor the rollout with:
kubectl get pods -n ingress-nginx -w
You should see one or more pods with status Running. If pods remain in ContainerCreating or ImagePullBackOff, check for image pull errors or insufficient resources.
Once the pods are running, check the external IP assigned to the LoadBalancer service:
kubectl get svc -n ingress-nginx
Look for the EXTERNAL-IP column under the ingress-nginx-controller service. If the IP remains <pending>, your cloud provider may not have provisioned the LoadBalancer yet (common on AWS, Azure, or GCP). You can force an update by deleting the service and reapplying:
kubectl delete svc ingress-nginx-controller -n ingress-nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml
On local environments like Minikube or Kind, the external IP may not be assigned. Instead, use the NodePort method or port-forwarding:
kubectl port-forward svc/ingress-nginx-controller -n ingress-nginx 8080:80
Now you can access your Ingress Controller via http://localhost:8080.
Step 4: Create a Sample Application
To test your Ingress setup, deploy a simple web application. We'll use a basic NGINX server serving a static page.
Create a deployment manifest called sample-app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
Apply the deployment:
kubectl apply -f sample-app-deployment.yaml
Now expose the deployment as a ClusterIP service with sample-app-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: sample-app-service
spec:
  selector:
    app: sample-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
Apply the service:
kubectl apply -f sample-app-service.yaml
Verify the service is running:
kubectl get svc sample-app-service
Step 5: Define an Ingress Resource
Now create an Ingress resource that routes traffic to your sample application. Create sample-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sample-app-service
            port:
              number: 80
Key elements explained:
- ingressClassName: nginx specifies which Ingress Controller should handle this resource (required in Kubernetes v1.19+).
- host: example.com is the domain name that will trigger this rule.
- path: / routes all requests to the root path to the service.
- pathType: Prefix matches any path starting with the defined value.
- backend.service.name is the Kubernetes service to forward traffic to.
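To make the pathType semantics concrete, here is a hedged sketch of a paths list mixing Exact and Prefix matching. The paths and service names (auth-service, shop-service) are hypothetical, used only to illustrate the difference:

```yaml
paths:
# Exact matches only the literal URL path, nothing below it.
- path: /login              # matches /login, but NOT /login/reset
  pathType: Exact
  backend:
    service:
      name: auth-service    # hypothetical service
      port:
        number: 80
# Prefix matches the path and any sub-paths, split on "/" elements.
- path: /shop               # matches /shop, /shop/cart, /shop/cart/items
  pathType: Prefix
  backend:
    service:
      name: shop-service    # hypothetical service
      port:
        number: 80
```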
Apply the Ingress:
kubectl apply -f sample-ingress.yaml
Verify the Ingress was created:
kubectl get ingress
You should see output similar to:
NAME CLASS HOSTS ADDRESS PORTS AGE
sample-ingress nginx example.com 34.120.150.23 80 2m
At this point, the Ingress Controller is actively routing traffic from the external IP to your sample application. However, since example.com doesn't resolve to your public IP, you'll need to test it locally.
Step 6: Test the Ingress Configuration
To test your setup without a real domain, modify your local /etc/hosts file (on macOS/Linux) or C:\Windows\System32\drivers\etc\hosts (on Windows) to map the domain to your Ingress Controller's external IP:
34.120.150.23 example.com
Save the file and test access:
curl -H "Host: example.com" http://34.120.150.23
You should receive the default NGINX welcome page. Alternatively, open a browser and navigate to http://example.com. You should see the same page.
To verify logs from the Ingress Controller, check the pod logs:
kubectl logs -n ingress-nginx ingress-nginx-controller-xxxxx
Look for entries like:
2024/05/15 10:30:15 [notice] 4747: *1234 [lua] access.lua:123: rewrite() - Rewriting request to / for host example.com
This confirms the Ingress Controller successfully processed your request.
Step 7: Configure SSL/TLS (Optional but Recommended)
For production environments, HTTPS is mandatory. To enable SSL, you need a TLS certificate. You can use a self-signed certificate for testing or obtain a valid one via Let's Encrypt using Cert-Manager (covered later).
First, generate a self-signed certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tls.key -out tls.crt -subj "/CN=example.com/O=My Organization"
Create a Kubernetes TLS secret:
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
Update your Ingress resource to use the TLS secret. Modify sample-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: tls-secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sample-app-service
            port:
              number: 80
Apply the updated Ingress:
kubectl apply -f sample-ingress.yaml
Test HTTPS access:
curl -k https://example.com
The -k flag bypasses certificate validation (since it's self-signed). In production, use a trusted certificate authority (CA) like Let's Encrypt.
Best Practices
Use Dedicated Namespaces
Always install the Ingress Controller in its own namespace (e.g., ingress-nginx) rather than default. This improves security, simplifies RBAC management, and makes it easier to delete or upgrade the controller without affecting other workloads.
Enable RBAC and Least Privilege
Ensure the Ingress Controller's Service Account has only the permissions it needs. The official manifests include appropriate RBAC roles, but if you customize them, follow the principle of least privilege. Avoid granting cluster-admin access unless absolutely necessary.
Configure Resource Limits and Requests
Always define CPU and memory limits and requests for the Ingress Controller pods. Without them, the controller may consume excessive resources, especially under high traffic. Example:
resources:
  requests:
    cpu: 100m
    memory: 90Mi
  limits:
    cpu: 200m
    memory: 200Mi
Adjust based on expected traffic volume. Monitor resource usage with tools like Prometheus and Grafana.
Use IngressClass for Multi-Controller Environments
If you have multiple Ingress Controllers in the same cluster (e.g., NGINX and Traefik), use the ingressClassName field to explicitly assign Ingress resources to the correct controller. This prevents conflicts and ensures predictable routing behavior.
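As a sketch, the IngressClass object that Ingress resources reference by name looks like this. The controller string shown is the one used by the community ingress-nginx project; other controllers register their own strings, so verify against your controller's documentation:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  # Each controller only reconciles Ingresses whose class
  # points at the controller string it owns.
  controller: k8s.io/ingress-nginx
```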
Implement Health Checks and Readiness Probes
Ensure the Ingress Controller has proper liveness and readiness probes configured. The default manifests include these, but if you customize the deployment, verify that:
- livenessProbe checks the controller's health endpoint (typically /healthz).
- readinessProbe ensures the controller is ready to accept traffic before being added to the service endpoint list.
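A minimal probe sketch, assuming the ingress-nginx default health-check port of 10254; check the port and timings against your actual deployed manifest before reusing them:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 10254          # ingress-nginx default health-check port
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 10254
  periodSeconds: 10
```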
Enable Access Logs and Monitoring
Enable NGINX access logs to capture request details. You can do this via annotations:
annotations:
  nginx.ingress.kubernetes.io/access-log-path: /var/log/nginx/access.log
  nginx.ingress.kubernetes.io/enable-access-log: "true"
Integrate logs with a centralized logging system like Fluentd, Loki, or Elasticsearch. Combine with metrics from Prometheus and Grafana to monitor request rates, latency, error codes, and backend health.
Rate Limiting and Security Policies
Use annotations to enforce security and performance policies:
- nginx.ingress.kubernetes.io/limit-rps: Limit requests per second per client.
- nginx.ingress.kubernetes.io/limit-connections: Limit concurrent connections per IP.
- nginx.ingress.kubernetes.io/secure-backends: Force HTTPS to backend services.
- nginx.ingress.kubernetes.io/ssl-redirect: Automatically redirect HTTP to HTTPS.
Example:
annotations:
  nginx.ingress.kubernetes.io/limit-rps: "10"
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/secure-backends: "true"
Regularly Update and Patch
Ingress Controllers are exposed to the public internet and are common targets for attacks. Subscribe to security advisories for your chosen controller. Use automated tools like Renovate or Dependabot to keep your manifests updated. Always test upgrades in staging before applying to production.
Use Canary Deployments and Blue/Green Strategies
When updating Ingress rules or switching backends, use canary deployments to gradually shift traffic. Tools like Flagger or Argo Rollouts can automate traffic shifting based on metrics like error rate and latency.
Tools and Resources
Official Documentation
- NGINX Ingress Controller Docs: Comprehensive guides, configuration options, and troubleshooting.
- Traefik Documentation: Excellent for dynamic environments and service mesh integration.
- Kubernetes Ingress API Reference: Official specification for Ingress resources.
Monitoring and Observability
- Prometheus + Grafana: Collect metrics from the NGINX Ingress Controller's metrics endpoint (/metrics).
- Fluentd + Loki + Grafana: Centralized log aggregation and visualization.
- OpenTelemetry: For distributed tracing across services behind the Ingress.
Automation and CI/CD
- Helm: Use the official Helm chart for easy deployment and versioning: helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
- Kustomize: For overlay-based configuration management across environments.
- Argo CD: GitOps-based continuous delivery of Ingress resources.
SSL/TLS Automation
- Cert-Manager: Automates issuance and renewal of Let's Encrypt certificates. Install via Helm (after adding the jetstack chart repository):
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.14.4 --set installCRDs=true
Then create an Issuer for Let's Encrypt:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
Reference this Issuer in your Ingress resource to auto-provision TLS certificates.
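For example, here is a hedged sketch of an Ingress that references the Issuer via the cert-manager.io/issuer annotation; cert-manager then obtains a certificate and stores it in the named secret. The hostname and secret name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    # Ties this Ingress to the Issuer defined above.
    cert-manager.io/issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls   # cert-manager creates and renews this secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sample-app-service
            port:
              number: 80
```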
Testing and Validation
- kube-nginx-test: Lightweight tool to simulate Ingress traffic.
- Postman or curl: Manual testing with custom headers.
- curl -v: View headers and response codes in detail.
- nghttp2: Test HTTP/2 support.
Real Examples
Example 1: Multi-Tenant SaaS Application
A SaaS platform hosts multiple customers, each with a custom subdomain (e.g., customer1.yourapp.com, customer2.yourapp.com). Each customer has their own backend service.
Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: saas-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - customer1.yourapp.com
    - customer2.yourapp.com
    secretName: saas-tls-secret
  rules:
  - host: customer1.yourapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: customer1-service
            port:
              number: 80
  - host: customer2.yourapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: customer2-service
            port:
              number: 80
This allows each customer to have a unique domain while sharing the same Ingress Controller and LoadBalancer IP.
Example 2: API Gateway with Path-Based Routing
An application exposes both a frontend and a backend API:
- / → Frontend (React app)
- /api/v1/ → Backend (Node.js API)
Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api/v1(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
The rewrite-target: /$2 annotation captures the second capture group (everything after /api/v1) and rewrites the path to send it correctly to the backend.
Example 3: Canary Deployment with Weighted Routing
Using NGINX annotations, you can route 10% of traffic to a new version of a service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-v2
            port:
              number: 80
Combine this with the primary Ingress pointing to app-v1 to gradually shift traffic. Monitor error rates and performance before increasing the weight to 50%, then 100%.
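For completeness, here is a sketch of what the matching primary (non-canary) Ingress could look like, assuming app-v1 is the stable service; the name primary-ingress is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # No canary annotations: this Ingress receives the remaining 90% of traffic.
  name: primary-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-v1
            port:
              number: 80
```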
FAQs
What's the difference between Ingress and a LoadBalancer service?
A LoadBalancer service exposes a single service to the internet using a cloud provider's load balancer. It operates at Layer 4 (TCP/UDP). In contrast, an Ingress Controller operates at Layer 7 (HTTP/HTTPS) and can route traffic to multiple services based on hostname, path, headers, or other criteria, all using a single IP address.
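For contrast, a minimal LoadBalancer Service sketch: it forwards raw TCP to exactly one backend, with no hostname or path awareness, and each such service typically provisions its own cloud load balancer (the name sample-app-lb is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-app-lb
spec:
  # Cloud provider provisions one L4 load balancer for this one service.
  type: LoadBalancer
  selector:
    app: sample-app
  ports:
  - port: 80
    targetPort: 80
```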
Can I use multiple Ingress Controllers in the same cluster?
Yes. You can install multiple Ingress Controllers (e.g., NGINX and Traefik) and assign each Ingress resource to a specific controller using the ingressClassName field. This is useful for isolating traffic between teams or applications with different requirements.
Why is my Ingress showing <pending> for the external IP?
This typically happens on cloud platforms when the LoadBalancer service is not yet provisioned. Check cloud provider quotas, network policies, or IAM permissions. On Minikube or Kind, use kubectl port-forward instead of relying on external IPs.
Do I need to use a domain name with Ingress?
No. You can use IP-based routing for internal services or testing. However, for public-facing applications, a domain name is required to use host-based routing and SSL certificates.
How do I troubleshoot a 502 Bad Gateway error?
Common causes:
- Backend service is not running or unreachable.
- Service port is misconfigured in the Ingress.
- Readiness probe is failing, so the endpoint is not registered.
- NetworkPolicy is blocking traffic.
Check pod logs, service endpoints (kubectl get endpoints), and Ingress controller logs for detailed error messages.
Is the NGINX Ingress Controller production-ready?
Yes. The NGINX Ingress Controller is used by thousands of production systems worldwide. It is actively maintained, well-documented, and integrates with enterprise tooling. Always follow best practices for resource limits, monitoring, and security.
Can I use Ingress with gRPC or WebSockets?
Yes. NGINX Ingress Controller supports WebSockets and gRPC out of the box. For gRPC, ensure you use HTTP/2 and set the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC".
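A hedged sketch of a gRPC Ingress, assuming a hypothetical grpc-service listening on port 50051. A tls block is included because most gRPC clients expect TLS when speaking HTTP/2 to a public endpoint; the host and secret names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    # Tells the controller to proxy to the backend over gRPC (HTTP/2).
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grpc.example.com
    secretName: grpc-tls-secret    # hypothetical TLS secret
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grpc-service     # hypothetical gRPC backend
            port:
              number: 50051
```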
How do I upgrade the Ingress Controller?
Use Helm for easy upgrades: helm upgrade ingress-nginx ingress-nginx/ingress-nginx. If using manifests, apply the latest version and ensure backward compatibility. Always test in staging first.
Conclusion
Setting up an Ingress Controller is a foundational skill for anyone managing applications on Kubernetes. It transforms a collection of internal services into a scalable, secure, and well-organized web application platform. By following the steps outlined in this guide, from choosing the right controller to implementing TLS, monitoring, and advanced routing, you've gained the knowledge to deploy a production-grade Ingress Controller confidently.
Remember that Ingress is not just about routing; it's about control, security, and observability. Use annotations wisely, monitor traffic patterns, automate certificate renewal, and integrate with your CI/CD pipeline. As your infrastructure grows, so should your Ingress strategy. Consider advanced patterns like canary deployments, service mesh integration, and multi-cluster routing using tools like Istio or Linkerd.
With a solid Ingress setup, you're not just exposing services; you're building a resilient, high-performance gateway to your entire application ecosystem. Keep learning, keep testing, and keep optimizing. The cloud-native future belongs to those who master the fundamentals, and Ingress is one of the most important.