Deploying the DevOps Stack to EKS

Prerequisites

  • Access to AWS API keys allowing you to create the required resources,

  • Access to GitLab or GitHub (the only supported CI/CD platforms for now),

  • Knowledge of Terraform basics

Create your Terraform root module

Camptocamp’s DevOps Stack is instantiated using a Terraform composition module.

Here is a minimal working example:

# terraform/main.tf

locals {
  cluster_name = "my-cluster"
}

data "aws_availability_zones" "available" {}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.66.0"

  name = local.cluster_name
  cidr = "10.0.0.0/16"

  azs = data.aws_availability_zones.available.names

  private_subnets = [
    "10.0.1.0/24",
    "10.0.2.0/24",
    "10.0.3.0/24",
  ]

  public_subnets = [
    "10.0.11.0/24",
    "10.0.12.0/24",
    "10.0.13.0/24",
  ]

  # NAT Gateway scenario: one NAT Gateway per availability zone
  enable_nat_gateway     = true
  single_nat_gateway     = false
  one_nat_gateway_per_az = true

  enable_dns_hostnames = true
  enable_dns_support   = true

  private_subnet_tags = {
    "kubernetes.io/cluster/default"   = "shared"
    "kubernetes.io/role/internal-elb" = "1"
  }

  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }
}

module "cluster" {
  source = "git::https://github.com/camptocamp/devops-stack.git//modules/eks/aws?ref=v0.26.0"

  cluster_name = local.cluster_name
  vpc_id       = module.vpc.vpc_id

  worker_groups = [
    {
      instance_type        = "m5a.large"
      asg_desired_capacity = 2
      asg_max_size         = 3
    }
  ]

  base_domain = "example.com"

  cognito_user_pool_id     = aws_cognito_user_pool.pool.id
  cognito_user_pool_domain = aws_cognito_user_pool_domain.pool_domain.domain
}

resource "aws_cognito_user_pool" "pool" {
  name = "pool"
}

resource "aws_cognito_user_pool_domain" "pool_domain" {
  domain       = "pool-domain"
  user_pool_id = aws_cognito_user_pool.pool.id
}

Terraform Outputs

Define outputs:

# terraform/outputs.tf

output "argocd_auth_token" {
  sensitive = true
  value     = module.cluster.argocd_auth_token
}

output "kubeconfig" {
  sensitive = true
  value     = module.cluster.kubeconfig
}

output "argocd_server" {
  value = module.cluster.argocd_server
}

output "grafana_admin_password" {
  sensitive = true
  value     = module.cluster.grafana_admin_password
}

Terraform Backend

If you wish to collaborate, or to deploy from pipelines, define a backend to store your state:

# terraform/versions.tf

terraform {
  backend "remote" {
    organization = "example_corp"

    workspaces {
      name = "my-app-prod"
    }
  }
}
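
Alternatively, since the infrastructure already lives in AWS, an S3 backend works just as well. A minimal sketch, assuming a pre-existing bucket (the bucket name, key and region below are placeholders):

# terraform/versions.tf

terraform {
  backend "s3" {
    bucket = "my-terraform-states"
    key    = "devops-stack/my-cluster.tfstate"
    region = "eu-west-1"
  }
}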

Deploying from your workstation

Even though one of the purposes of the DevOps Stack is to do everything in pipelines, you can deploy your cluster from your workstation using the Terraform CLI:

$ cd terraform
$ terraform init
$ terraform apply
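
Once the apply completes, the outputs defined earlier are available; for example, the Argo CD server URL (output name taken from the outputs.tf above):

$ terraform output argocd_server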

Deploying from pipelines

When using pipelines, the DevOps Stack runs a dry run on merge requests and applies the changes on commits to a protected branch.

GitLab CI

Push your code to a new project on GitLab

Create a new project on GitLab and push your Terraform files to it. You can use either gitlab.com or a self-hosted GitLab instance.

Protect your branch

The cluster creation pipeline is triggered only on protected branches, so you have to protect every branch that will define a cluster (in Settings ⇒ Repository ⇒ Protected Branches).

Add CI/CD variables

There are multiple ways to configure the Terraform AWS provider. Committing the credentials in your code carries a high risk of secret leakage; a simpler and safer approach is to define the required environment variables as CI/CD variables, which the provider picks up at runtime (see the sketch after the list below).

In your project’s Settings ⇒ CI/CD ⇒ Variables, add variables for:

  • AWS_ACCESS_KEY_ID

  • AWS_SECRET_ACCESS_KEY

  • AWS_REGION
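
With these variables in place, the AWS provider configuration in your Terraform code can stay free of hard-coded credentials, since the provider reads them from the environment at runtime. A minimal sketch (the file name is only a suggestion):

# terraform/providers.tf

provider "aws" {
  # Credentials and region are read from the AWS_ACCESS_KEY_ID,
  # AWS_SECRET_ACCESS_KEY and AWS_REGION environment variables.
}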

Create .gitlab-ci.yml

To use GitLab for continuous delivery, create a .gitlab-ci.yml file at the root of your project:

---
include:
  - https://raw.githubusercontent.com/camptocamp/devops-stack/v0.26.0/.gitlab-ci/pipeline.yaml

Trigger pipeline

The pipeline, and hence the cluster creation, will be triggered when you push to a protected branch, but you can trigger a pipeline manually if needed (in CI/CD ⇒ Pipelines ⇒ Run Pipeline).

GitHub Actions

Create a new project on GitHub

Create a new project on GitHub and push your Terraform files to it.

Add Actions secrets

There are multiple ways to configure the Terraform AWS provider. Committing the credentials in your code carries a high risk of secret leakage; a simpler and safer approach is to define the required environment variables as Actions secrets (you can also create them from the command line, as shown after the list below).

In your project’s Settings ⇒ Secrets ⇒ Actions, create secrets for these variables:

  • AWS_ACCESS_KEY_ID

  • AWS_SECRET_ACCESS_KEY

  • AWS_REGION
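
If you prefer the command line, the same secrets can be created with the GitHub CLI (assuming gh is installed and authenticated against the repository; the values are placeholders):

$ gh secret set AWS_ACCESS_KEY_ID --body "<YOUR_ACCESS_KEY_ID>"
$ gh secret set AWS_SECRET_ACCESS_KEY --body "<YOUR_SECRET_ACCESS_KEY>"
$ gh secret set AWS_REGION --body "<YOUR_REGION>"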

Create a GitHub Actions workflow for your project

Unfortunately, composite Actions have some limitations right now, so we can’t provide a single Action to declare in your workflow (as we do for the GitLab pipeline). Hence, you have to create a .github/workflows/terraform.yml file with this content:

---
name: 'Terraform'
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  terraform:
    name: Terraform
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: ${{ secrets.AWS_REGION }}
      CAMPTOCAMP_DEVOPS_STACK_VERSION: 0.26.0
      TF_ROOT: terraform
    defaults:
      run:
        working-directory: ${{ env.TF_ROOT }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: 0.14.6

      - name: Terraform Format
        run: terraform fmt -check -diff -recursive

      - name: Terraform Init
        run: terraform init

      - name: Terraform Validate
        run: terraform validate -no-color

      - name: Terraform Plan
        if: github.event_name == 'pull_request'
        run: terraform plan -no-color -out plan

      - name: Install aws-iam-authenticator
        if: github.event_name == 'push'
        run: |
          mkdir -p ${{ github.workspace }}/bin
          curl -o ${{ github.workspace }}/bin/aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/aws-iam-authenticator
          chmod +x ${{ github.workspace }}/bin/aws-iam-authenticator
          echo "PATH=${{ github.workspace }}/bin:$PATH" >> $GITHUB_ENV

      - name: Terraform Apply
        if: github.event_name == 'push'
        run: terraform apply -auto-approve

      - name: Terraform Plan
        if: github.event_name == 'push'
        run: terraform plan -detailed-exitcode -no-color

Recovering the kubeconfig for EKS

  1. Make sure your AWS credentials file (~/.aws/credentials on Linux) contains the access key that has been used to create the cluster:

    [<ACCOUNT_NAME>]
    aws_access_key_id = <YOUR_ACCESS_KEY_ID>
    aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
    region = <YOUR_REGION>
  2. Update your kubeconfig using the following command:

    aws --profile <ACCOUNT_NAME> eks --region <YOUR_REGION> update-kubeconfig --name <CLUSTER_NAME>

Then, you should be able to use kubectl.
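
For example, a quick sanity check that the credentials and kubeconfig work:

$ kubectl get nodes
$ kubectl get pods --all-namespaces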

Inspect the DevOps Stack Applications

You can view the ingress routes for the various DevOps Stack Applications with:

$ kubectl get ingress --all-namespaces

Access the URLs over HTTPS and log in via OIDC/OAuth2, using the admin account with the password previously retrieved.
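
Sensitive values such as the Grafana admin password were exposed as Terraform outputs earlier; if needed, they can be read back from the terraform directory, for example:

$ terraform output grafana_admin_password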

Destroy the cluster

$ terraform destroy

Reference