Discover how the Envoy sidecar proxy revolutionizes microservices by intercepting traffic, gathering real-time metrics, and enabling autoscaling—all without modifying your application code. Dive into a step-by-step Kubernetes setup guide and unlock seamless service mesh capabilities.
### What is Envoy Sidecar Proxy?
Envoy is an open-source edge and service proxy designed for cloud-native applications. Developed by Lyft and now part of the Cloud Native Computing Foundation (CNCF), Envoy acts as a sidecar proxy—a companion container deployed alongside application containers in Kubernetes pods. It intercepts and manages all network traffic (inbound/outbound), providing advanced features like load balancing, observability, and security without requiring changes to the application itself.
Envoy’s sidecar pattern is a cornerstone of service meshes like Istio, enabling consistent communication policies, encryption, and telemetry across distributed systems.
### Problems Envoy Sidecar Proxy Solves
- Traffic Complexity: Manages retries, timeouts, and circuit-breaking in microservices.
- Observability Gaps: Collects metrics, logs, and traces without app instrumentation.
- Security Challenges: Enforces TLS/mTLS and access control policies.
- Autoscaling Readiness: Provides real-time metrics to drive Kubernetes autoscaling.
### How Envoy Intercepts Traffic
Envoy uses iptables rules to redirect traffic through the sidecar. In Kubernetes:
- Init Container: A lightweight container runs during pod initialization to configure iptables. It redirects all inbound (e.g., ports 80/443) and outbound traffic to Envoy’s listener ports (by convention, 15001 for outbound and 15006 for inbound).
- Transparent Proxying: Applications communicate as usual, unaware that Envoy handles routing.
Example iptables Rule:

```bash
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 15001
```
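The inbound side mentioned above has a matching rule in the PREROUTING chain, since the REDIRECT target is only valid in the nat table's OUTPUT and PREROUTING chains. A minimal sketch, reusing the conventional inbound port 15006:

```bash
# Redirect inbound TCP traffic to Envoy's inbound listener; 15006 is a convention,
# substitute whatever port your Envoy inbound listener actually binds.
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15006
```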
### Gathering Metrics Without Code Changes
Envoy automatically generates metrics for:
- HTTP: Request count, latency, status codes (4xx/5xx).
- TCP: Connections opened/closed, bytes transferred.
- gRPC: Streams and message counts.
How It Works:
- Built-in Stats: Envoy exposes metrics via its admin interface (port 9901) or Prometheus endpoint.
- Access Logs: Logs all requests/responses in configurable formats (e.g., JSON).
- Integration: Metrics are scraped by Prometheus and visualized in Grafana.
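To see these stats directly, you can query Envoy’s admin interface, which serves both a plain-text and a Prometheus-formatted view. A minimal sketch, assuming the admin port 9901 is reachable (for example via `kubectl port-forward` once the deployment later in this guide is running):

```bash
# Prometheus-formatted stats: request counts, response classes, latency histograms, ...
curl -s http://localhost:9901/stats/prometheus | grep envoy_http_downstream_rq

# The same counters in Envoy's plain-text stats format
curl -s http://localhost:9901/stats | grep downstream_rq_total
```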
A note on configuration: Envoy does not need a dedicated Prometheus stats sink for this. Once the admin interface is enabled (as in the ConfigMap below), Prometheus-formatted metrics are served at `/stats/prometheus` on the admin port.
### Step-by-Step: Install Envoy Sidecar Proxy in Kubernetes
1. Create Envoy Configuration (ConfigMap). Save this as `envoy-config.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-config
data:
  envoy.yaml: |
    admin:
      access_log_path: /tmp/admin_access.log
      address:
        socket_address: { address: 0.0.0.0, port_value: 9901 }
    static_resources:
      listeners:
      # Inbound listener: accepts HTTP on 8080 and routes everything to the local app
      - name: http_listener
        address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
              route_config:
                name: local_route
                virtual_hosts:
                - name: service
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/" }
                    route: { cluster: app_service }
      clusters:
      # Upstream: the application container in the same pod (localhost:3000)
      - name: app_service
        connect_timeout: 0.25s
        type: STATIC
        load_assignment:
          cluster_name: app_service
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: 127.0.0.1, port_value: 3000 }
```
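Optionally, you can sanity-check the proxy configuration before creating the ConfigMap. A sketch, assuming Docker is available locally and that you have also saved the inner `envoy.yaml` document to a standalone file:

```bash
# Parse and validate the Envoy config without starting the proxy (--mode validate),
# using the same image the deployment below runs.
docker run --rm -v "$(pwd)/envoy.yaml:/etc/envoy/envoy.yaml:ro" \
  envoyproxy/envoy:v1.24.0 --mode validate -c /etc/envoy/envoy.yaml
```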
2. Deploy Pod with Envoy Sidecar. Save this as `deployment.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-envoy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      initContainers:
      # Installs NAT rules that transparently redirect pod traffic to Envoy.
      # REDIRECT is only valid in the nat table's OUTPUT and PREROUTING chains.
      # Note: 15001/15006 follow the service-mesh convention; the sample Envoy config
      # above only listens on 8080, so adjust the redirect ports (or add matching
      # listeners) if you want full end-to-end interception.
      - name: iptables-config
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args:
          - "apk add iptables;
            iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 15001;
            iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15006"
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
      containers:
      - name: app
        image: my-rest-api:latest
        ports:
        - containerPort: 3000
      - name: envoy
        image: envoyproxy/envoy:v1.24.0
        ports:
        - containerPort: 8080   # HTTP listener
        - containerPort: 9901   # admin interface / metrics
        volumeMounts:
        - name: envoy-config
          mountPath: /etc/envoy
      volumes:
      - name: envoy-config
        configMap:
          name: envoy-config
```
3. Apply Configuration

```bash
kubectl apply -f envoy-config.yaml
kubectl apply -f deployment.yaml
```
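With both objects applied, it is worth confirming that the sidecar is up and exporting metrics before wiring up autoscaling. A sketch, using the labels and ports from the deployment above:

```bash
# The pod should show 2/2 containers ready (app + envoy)
kubectl get pods -l app=demo

# Forward Envoy's admin port, then check readiness and metrics
kubectl port-forward deploy/app-with-envoy 9901:9901 &
curl -s http://localhost:9901/ready
curl -s http://localhost:9901/stats/prometheus | head
```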
### Enable Real-Time Metrics for Autoscaling
- Deploy Prometheus: Use Helm or the Prometheus Operator to install Prometheus in your cluster.
- Configure Scraping: Add annotations to the pod template so Prometheus discovers Envoy's admin endpoint (Envoy serves Prometheus-formatted metrics at `/stats/prometheus`; the `prometheus.io/*` annotations assume your scrape config honors them, as the community Helm chart's default does):

```yaml
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9901"
  prometheus.io/path: "/stats/prometheus"
```

- Set Up HPA: Expose the request counter through the custom metrics API (for example with the Prometheus Adapter), then create a HorizontalPodAutoscaler that scales on it. Here `http_requests_total` stands for the name under which your adapter publishes Envoy's request counter:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-with-envoy
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_total
      target:
        type: AverageValue
        averageValue: 1000
```
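The HPA only scales if `http_requests_total` is actually served through the custom metrics API, which is the metrics adapter's job. A quick sanity check, plus a way to watch the autoscaler react (a sketch):

```bash
# Should list the pod metrics the adapter exposes (empty or an error means the
# adapter isn't installed or isn't mapping the Envoy counter yet)
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"

# Watch replica counts change as traffic (and the metric) rises and falls
kubectl get hpa app-autoscaler --watch
```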
### Top 3 Key Takeaways
- Traffic Interception Made Simple: Envoy sidecar proxies transparently manage network traffic via iptables, decoupling application logic from networking concerns.
- Zero-Code Observability: Built-in metrics and logs enable monitoring without altering application code.
- Autoscaling Ready: Real-time metrics from Envoy power Kubernetes autoscaling, ensuring efficient resource utilization.
Final Thought:
Envoy sidecar proxy is a game-changer for Kubernetes deployments, offering a unified approach to traffic management, security, and observability. By following this guide, you’ve unlocked the power of cloud-native networking—without rewriting a single line of application code.