Deploying an AWS EKS Cluster with Terraform and GitHub Actions

This guide shows you how to automate EKS cluster deployment using Terraform and GitHub Actions. You’ll learn to create a production-ready Kubernetes environment with networking, security, and scaling capabilities.

VPC and Network Architecture

Start by creating a dedicated VPC so the cluster's networking stays isolated from other workloads:

resource "aws_vpc" "eks_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "eks-vpc"
  }
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.eks_vpc.id
  cidr_block        = "10.0.${count.index + 1}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name                              = "eks-private-${count.index + 1}"
    "kubernetes.io/role/internal-elb" = "1"
  }
}

Private subnets host the worker nodes, while public subnets carry incoming traffic through internet-facing load balancers. The kubernetes.io/role/internal-elb tag lets Kubernetes automatically discover these subnets when placing internal load balancers.
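The example above defines only the private subnets. A minimal sketch of the matching public subnets, assuming the same two-AZ layout (the kubernetes.io/role/elb tag marks them for internet-facing load balancers; an internet gateway for the public subnets and a NAT gateway for private egress are also needed but omitted here):

resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.eks_vpc.id
  cidr_block              = "10.0.${count.index + 101}.0/24"
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name                     = "eks-public-${count.index + 1}"
    "kubernetes.io/role/elb" = "1"
  }
}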

EKS Cluster and Node Groups

The EKS cluster consists of a control plane for managing Kubernetes components and worker nodes for running workloads:

resource "aws_eks_cluster" "main" {
  name     = "eks-cluster"
  role_arn = aws_iam_role.eks_cluster.arn
  version  = "1.27"

  vpc_config {
    subnet_ids              = aws_subnet.private[*].id
    endpoint_private_access = true
    endpoint_public_access  = true
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks_cluster_policy
  ]
}

resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "eks-node-group"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = aws_subnet.private[*].id

  scaling_config {
    desired_size = 2
    max_size     = 4
    min_size     = 1
  }

  instance_types = ["t3.medium"]
}
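Both resources reference IAM roles that must be defined elsewhere in the configuration. A minimal sketch of those roles and the managed policies EKS requires (the role names here are illustrative):

resource "aws_iam_role" "eks_cluster" {
  name = "eks-cluster-role"

  # Allow the EKS service to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role" "eks_nodes" {
  name = "eks-node-role"

  # Worker nodes are EC2 instances, so EC2 assumes this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Nodes need the worker-node, CNI, and ECR read-only managed policies.
resource "aws_iam_role_policy_attachment" "eks_worker_node_policy" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "ecr_read_only" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}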

The EKS Blueprints for Terraform project provides production-ready modules that accelerate cluster deployment. Even without Blueprints, breaking the configuration into modules improves maintainability:

module "vpc" {
  source = "./modules/vpc"
  # VPC configuration parameters
}

module "eks" {
  source = "./modules/eks"
  # EKS configuration parameters
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnet_ids
}
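For this wiring to work, the VPC module must expose the IDs it creates as outputs. A sketch of modules/vpc/outputs.tf, assuming the module contains the VPC and subnet resources shown earlier:

output "vpc_id" {
  value = aws_vpc.eks_vpc.id
}

output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}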

Automating Deployments with GitHub Actions

Workflow File

Create a workflow file in .github/workflows to define the deployment process:

name: Terraform AWS Workflow

on:
  pull_request:
    branches: [ main ]
  push:
    branches: [ main ]

jobs:
  terraform:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions
          aws-region: us-west-2

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        if: github.event_name == 'pull_request'
        run: terraform plan -no-color

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -auto-approve
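The id-token: write permission only has an effect if AWS is configured to trust GitHub's OIDC provider; otherwise the credentials step fails. A minimal sketch of that trust setup, assuming a repository named my-org/my-repo (substitute your own; the github-actions role name matches the ARN used in the workflow above):

resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

resource "aws_iam_role" "github_actions" {
  name = "github-actions"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRoleWithWebIdentity"
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          # Scope the sub claim so only this repository can assume the role.
          "token.actions.githubusercontent.com:sub" = "repo:my-org/my-repo:*"
        }
      }
    }]
  })
}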

State Management

Store Terraform state in S3 so every workflow run and teammate works against the same state. Note that use_lockfile (available since Terraform 1.10) enables S3-native state locking and supersedes the older DynamoDB-based dynamodb_table option, so configure one or the other, not both:

terraform {
  backend "s3" {
    bucket       = "terraform-state-bucket"
    key          = "eks/terraform.tfstate"
    region       = "us-west-2"
    encrypt      = true
    use_lockfile = true
  }
}
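The backend block assumes the bucket already exists; Terraform cannot create the bucket that stores its own state, so bootstrap it once from a separate configuration (a DynamoDB lock table, if you use one, is bootstrapped the same way). A minimal sketch with versioning enabled to guard against accidental state corruption:

resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-state-bucket"
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}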

Managing Production

GitOps and Add-on Management

Deploy ArgoCD using the EKS Blueprints framework:

module "kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons"

  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  enable_argocd                       = true
  enable_metrics_server               = true
  enable_cluster_autoscaler           = true
  enable_aws_load_balancer_controller = true

  argocd_helm_config = {
    values = [templatefile("${path.module}/values.yaml", {})]
  }
}

Monitoring and Scaling

Implement automatic scaling based on resource utilization. Note that a target-tracking policy on the node group's Auto Scaling group is an alternative to the Cluster Autoscaler enabled above; running both against the same node group can cause them to compete over capacity, so choose one approach:

resource "aws_autoscaling_policy" "cluster_autoscaling" {
  name                   = "eks-cluster-autoscaling"
  policy_type            = "TargetTrackingScaling"
  autoscaling_group_name = aws_eks_node_group.main.resources[0].autoscaling_groups[0].name

  target_tracking_configuration {
    target_value = 75.0
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
  }
}
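Finally, a few outputs make it easy to connect to the new cluster once the pipeline has applied it (for example, by feeding the cluster name to aws eks update-kubeconfig):

output "cluster_name" {
  value = aws_eks_cluster.main.name
}

output "cluster_endpoint" {
  value = aws_eks_cluster.main.endpoint
}

output "cluster_ca_certificate" {
  value = aws_eks_cluster.main.certificate_authority[0].data
}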

Next Steps

Automating EKS deployment with Terraform and GitHub Actions gives you a repeatable, reviewable way to manage your cluster. For teams looking to improve their infrastructure workflows, check out Terrateam (OSS), which provides a GitOps-first approach to managing Terraform in GitHub.