🧱 Understanding Execution Units in Amazon EKS with Fargate
Amazon EKS (Elastic Kubernetes Service) with Fargate provides a fully managed, serverless compute engine for containers. But when you’re building applications on EKS with Fargate, it’s important to understand the core execution model: what is actually being scheduled and run under the hood?
In this article, we’ll break down the execution unit hierarchy in EKS (Fargate mode), starting from the container and working our way up.
🔽 Minimum Execution Unit: The Pod
The smallest unit of execution in EKS (and Kubernetes in general) is not a container — it’s a pod.
A pod is the atomic unit that the Kubernetes scheduler works with. All workloads in EKS are scheduled at the pod level.
Each pod can contain one or more containers that:
- Share the same network namespace (same IP, ports)
- Share volumes (such as EFS or ephemeral storage)
- Start and stop together
In Fargate, you cannot run a container by itself — it must be inside a pod. Fargate provisions the runtime and resources for pods, not individual containers.
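To make this concrete, here’s a minimal sketch of a single-container pod manifest (the name, namespace, and image are placeholders):

```yaml
# A minimal single-container pod. Fargate schedules and runs the pod as a
# whole; the container cannot be launched on its own.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # placeholder name
  namespace: demo          # must match a Fargate profile selector to land on Fargate
spec:
  containers:
    - name: app
      image: nginx:latest  # example image
      ports:
        - containerPort: 80
```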
🔁 What Happens When You Deploy?
When you run a Kubernetes Deployment, Job, or CronJob:
- Kubernetes expands it into a desired pod template
- The EKS Fargate scheduler matches the pod spec against a Fargate profile
- Fargate launches a dedicated microVM to run that pod
Even if your pod contains just one container (which is common), it is still wrapped and managed as a pod.
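As an illustration of that matching step, here’s a sketch of a Fargate profile declared with eksctl. The cluster name, region, namespace, and label are placeholders; your own profile will differ.

```yaml
# eksctl ClusterConfig excerpt: pods created in the "demo" namespace
# are matched by this profile and scheduled onto Fargate.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster         # placeholder cluster name
  region: us-east-1        # placeholder region
fargateProfiles:
  - name: fp-demo
    selectors:
      - namespace: demo
        labels:
          compute: fargate # optional: narrow matching further by label
```

Pods that don’t match any profile won’t land on Fargate and need another form of compute (for example, EC2 node groups).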
🧱 Execution Hierarchy in EKS (Fargate Mode)
Here’s how the hierarchy of execution and grouping works:
| Level | Unit | Description |
|---|---|---|
| 1 | Container | Single image runtime — shares IP and volumes with others in the pod |
| 2 | Pod (✅ Minimum unit) | Scheduled onto Fargate — has its own IP, lifecycle, and boundaries |
| 3 | Deployment | Manages pods via ReplicaSets, handles rolling updates |
| 4 | Namespace | Logical isolation — can group multiple apps or environments |
| 5 | Cluster | All nodes, namespaces, workloads, and control plane resources |
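To tie the levels together, here’s an illustrative Deployment (level 3) in a namespace (level 4) whose pod template (level 2) wraps a single container (level 1). All names and the image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # level 3: the Deployment manages the pods
  namespace: demo          # level 4: the Namespace groups the app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:                # level 2: the pod template Fargate schedules
    metadata:
      labels:
        app: web
    spec:
      containers:          # level 1: the container(s) inside each pod
        - name: app
          image: nginx:latest
```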
📦 Sidecars and Init Containers
Fargate fully supports multi-container pods, including sidecars and init containers.
- Sidecars are commonly used for logging agents, service mesh proxies, or monitoring tools.
- Init containers can handle startup tasks like waiting for dependencies or setting up volumes.
All containers in the same pod:
- Share the same lifecycle (start/stop together)
- Run in the same network namespace
- Share volumes and environment context
This means you can build rich application patterns (like sidecar-based observability or gateways) while still benefiting from Fargate’s simplicity.
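Here’s a sketch of such a pattern: an init container that waits for a hypothetical database service, plus a log-forwarding sidecar sharing an ephemeral volume with the app. Image tags and hostnames are illustrative.

```yaml
# A pod with an init container and a logging sidecar. All containers share
# the pod's network namespace and the "app-logs" volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
  namespace: demo
spec:
  initContainers:
    - name: wait-for-db    # runs to completion before the main containers start
      image: busybox:1.36
      command: ["sh", "-c", "until nslookup db.demo.svc.cluster.local; do echo waiting for db; sleep 2; done"]
  containers:
    - name: app
      image: nginx:latest
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/nginx
    - name: log-forwarder  # sidecar reads the shared log volume
      image: fluent/fluent-bit:2.2
      volumeMounts:
        - name: app-logs
          mountPath: /logs
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}         # ephemeral storage shared within the pod
```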
⚡ Limitations Specific to Fargate
While Fargate abstracts away a lot of complexity, it also imposes some limitations:
- ❌ No support for DaemonSets — since there are no persistent nodes
- ❌ No host-level access — you can’t inspect /var/log or access the host filesystem
- ⏱ Cold starts may occur as new microVMs are provisioned for pods
- ⚙️ Limited volume options — EFS and ephemeral only (no EBS)
- 📦 All containers in a pod share fixed CPU/memory — specified in the pod spec
For most use cases, these trade-offs are acceptable in exchange for operational simplicity and scalability.
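To show what the sizing and storage constraints look like in a manifest, here’s a sketch: Fargate derives the pod’s vCPU/memory from the container requests, and persistent data comes from EFS through a PersistentVolumeClaim (the claim name is a placeholder and assumes the EFS CSI driver is installed).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod
  namespace: demo
spec:
  containers:
    - name: app
      image: nginx:latest
      resources:
        requests:          # Fargate picks a vCPU/memory combination based on
          cpu: "500m"      # the totals requested across the pod's containers
          memory: 1Gi
        limits:
          cpu: "1"
          memory: 2Gi
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: efs-claim   # placeholder PVC backed by the EFS CSI driver
```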
📆 Why This Matters
- Networking and identity: IAM roles for service accounts (IRSA), security groups, and DNS are assigned per pod (see the sketch after this list).
- Logging: Logs are collected at the pod level (aggregating all container output).
- Scheduling: The Fargate control plane schedules pods, not containers.
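For the first point, here’s a hedged sketch of IRSA wiring: a ServiceAccount annotated with an IAM role ARN (the ARN is a placeholder), referenced from the pod spec so that the pod picks up the role’s credentials.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: demo
  annotations:
    # IRSA: pods using this service account assume this IAM role (placeholder ARN)
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-role
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-irsa
  namespace: demo
spec:
  serviceAccountName: app-sa   # credentials are granted per pod, not per container
  containers:
    - name: app
      image: amazon/aws-cli:latest
      command: ["aws", "sts", "get-caller-identity"]
```

Because the role is tied to the service account, every container in the pod shares the same AWS identity.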
Understanding that the pod is the minimum execution unit helps you:
- Design better Kubernetes manifests
- Debug more effectively (e.g. logs, exec, resource limits)
- Configure IAM, networking, and observability correctly
✅ TL;DR
| Concept | Answer |
|---|---|
| Minimum execution unit in EKS Fargate? | Pod ✅ |
| Can I run a single container? | Yes — inside a pod |
| Can a pod have multiple containers? | Yes (e.g. sidecars) |
| What does Fargate actually run? | A pod, not a container |
Understanding this model is foundational for building production-ready applications on EKS with Fargate. The pod is where everything starts — from logging to networking to execution itself.
Happy deploying! 🚀