Kubernetes networking can feel like magic until something breaks. This article demystifies the networking stack layer by layer, so you can debug issues and design better architectures.
Every pod gets its own IP address, and that IP is routable from every other pod in the cluster without NAT. Containers within a pod share the same network namespace and can communicate via localhost. This is fundamentally different from Docker's default bridge networking, where container IPs are private to a single host.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: app
    image: myapp:latest
    ports:
    - containerPort: 8080
  - name: sidecar
    image: envoy:latest
    ports:
    - containerPort: 9090
```
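Because app and sidecar share the pod's network namespace, the sidecar can reach the app on plain localhost. A minimal sketch of what that call looks like from inside the sidecar (the /healthz endpoint is assumed for illustration):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// From the sidecar, the app container is just localhost:8080,
	// because both containers share the pod's network namespace.
	// The /healthz path is illustrative; use whatever the app exposes.
	resp, err := http.Get("http://localhost:8080/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("app responded:", resp.Status)
}
```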
The Container Network Interface (CNI) plugin is responsible for assigning IP addresses to pods and configuring the network so that pods can communicate with each other across nodes. Popular CNI plugins include Calico, Cilium, and Flannel, each with different tradeoffs.
Services provide stable endpoints for a set of pods. When you create a Service, Kubernetes assigns it a cluster IP that remains constant even as pods come and go.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  selector:
    app: api-server
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
```
kube-proxy runs on every node and programs iptables or IPVS rules that redirect traffic destined for service IPs to healthy pod IPs. In iptables mode it builds a chain of rules that each match with a calibrated random probability, which spreads connections across endpoints. IPVS mode uses kernel-level load balancing backed by a hash table, so it scales better than linear iptables chains in clusters with many services.
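That random-probability chain is worth unpacking: with n endpoints, the first rule matches with probability 1/n, the second with 1/(n-1) of what remains, and the last matches unconditionally, which works out to a uniform choice across all endpoints. A small Go sketch of the same cascade (illustrative only, not kube-proxy's actual code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickEndpoint mimics kube-proxy's iptables cascade: rule i fires with
// probability 1/(n-i), so each endpoint is chosen uniformly overall.
func pickEndpoint(endpoints []string) string {
	n := len(endpoints)
	for i := 0; i < n-1; i++ {
		if rand.Float64() < 1.0/float64(n-i) {
			return endpoints[i]
		}
	}
	return endpoints[n-1] // final rule matches unconditionally
}

func main() {
	backends := []string{"10.244.0.5", "10.244.1.5", "10.244.2.7"}
	fmt.Println("chosen:", pickEndpoint(backends))
}
```

The sequence below traces a request from a client pod through that DNAT step to a target pod.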
```mermaid
sequenceDiagram
    participant PodA as Pod A
    participant Net as Network (iptables/IPVS)
    participant PodB as Pod B (Target)
    Note over PodA: App sends request to<br/>Service IP (10.96.0.1)
    PodA->>Net: Packet (Src: 10.244.0.5, Dst: 10.96.0.1)
    Note right of Net: DNAT Rule Matches<br/>Selects Endpoint Pod B
    Net->>PodB: Packet (Src: 10.244.0.5, Dst: 10.244.1.5)
    Note over PodB: App processes request
    PodB-->>PodA: Response
```
CoreDNS runs as a Deployment in the cluster and provides DNS resolution for services. Every service gets a record of the form <service>.<namespace>.svc.cluster.local; from pods in the same namespace, the short name (api-server) resolves too, thanks to the search domains kubelet writes into each pod's resolv.conf.
```go
// In your application code, just use the service name.
// grpc.WithInsecure is deprecated; the insecure credentials package
// (google.golang.org/grpc/credentials/insecure) is its replacement.
conn, err := grpc.Dial("api-server.default.svc.cluster.local:80",
	grpc.WithTransportCredentials(insecure.NewCredentials()),
)
```

Ingress resources define rules for routing external HTTP traffic to services within the cluster. The newer Gateway API provides a more expressive and extensible model; a rough equivalent is sketched after the Ingress example below.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: api-server
            port:
              number: 80
```
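For comparison, here is a sketch of roughly the same route expressed as a Gateway API HTTPRoute. It assumes a Gateway named example-gateway already exists in the cluster; nothing in this article defines one:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: example-gateway   # assumed Gateway, owned by the platform team
  hostnames:
  - "api.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /v1
    backendRefs:
    - name: api-server
      port: 80
```

Routes attach to Gateways via parentRefs, which cleanly separates ownership of the listener (platform teams) from ownership of the routes (application teams).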
Network policies are Kubernetes' firewall. By default, all pods can communicate with all other pods. Network policies let you restrict traffic based on pod labels, namespaces, and IP blocks. Note that enforcement requires a CNI plugin that supports them, which Calico and Cilium do.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-server-policy
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - port: 5432
```
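One gotcha with this policy: listing Egress in policyTypes denies all outbound traffic that is not explicitly allowed, including DNS, so the api-server pods would no longer resolve names. A sketch of an extra rule to append under the existing egress list, assuming CoreDNS runs in kube-system with the conventional k8s-app: kube-dns label:

```yaml
# Additional rule under the existing egress: list
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - port: 53
    protocol: UDP
  - port: 53
    protocol: TCP
```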
When pods cannot communicate, work through the stack in order:

1. Verify the CNI plugin is healthy and pods have IP addresses.
2. Check that kube-proxy is running and its iptables/IPVS rules exist.
3. Confirm DNS resolution works from inside a pod.
4. Inspect network policies for overly restrictive rules.
Kubernetes networking is complex but logical. Understanding the layers (pod, service, ingress) gives you the mental model needed to design and troubleshoot production clusters effectively.