Mastering GKE: Terraform Magic and CI/CD Pipeline in Action!

April 20, 2024

Hey tech folks! Today we'll get hands-on with Google Kubernetes Engine (GKE): we'll provision a cluster with Terraform and set up an automated CI/CD pipeline on top of it.

Terraforming Our Way to Efficiency

Let's start with a tool that's been making waves in the world of Infrastructure as Code (IaC): Terraform. Instead of clicking resources together in the console for hours, we describe them in code and let Terraform create them in minutes, repeatably.

In my recent project, I've used Terraform to provision three key resources:

  1. Google Container Cluster
  2. Google Container Node Pool
  3. Google Compute Disk

Let's dive into the details of our Terraform script:

provider "google" {
  project     = "kubernetes-assignment-413919"
  region      = "us-central1"
}

resource "google_container_cluster" "primary" {
  name     = "my-gke-cluster"
  location = "us-central1"

  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    client_certificate_config {
      issue_client_certificate = false
    }
  }

  deletion_protection = false
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "my-node-pool"
  location   = "us-central1"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    preemptible  = true
    machine_type = "e2-micro"

    metadata = {
      disable-legacy-endpoints = "true"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/trace.append"
    ]

    disk_size_gb = 10
    disk_type = "pd-standard"
    image_type = "COS_CONTAINERD"
  }
}

resource "google_compute_disk" "default" {
  name  = "my-data-disk"
  type  = "pd-standard"
  size  = "20"
  zone  = "us-central1-a"
  image = "debian-8-jessie-v20170523"
}

In this script, we're setting up a Google Kubernetes Engine (GKE) cluster, a node pool, and a compute disk. We drop the default node pool and replace it with a single preemptible e2-micro node to keep costs down, and the standalone 20 GB disk will later back a PersistentVolume for our application.
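
If you want to try this yourself, applying the script takes only a few commands. Here's a rough sketch of the usual workflow, assuming the script above is saved as main.tf and you have both the gcloud CLI and Terraform installed:

# Let Terraform use your Google credentials (opens a browser login)
gcloud auth application-default login

# Initialize the working directory and download the Google provider
terraform init

# Preview what will be created
terraform plan

# Create the cluster, node pool, and disk
terraform apply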

Build and Deploy

Once the resources are successfully provisioned on Google Cloud Platform (GCP), we need to build and deploy our application onto Google Kubernetes Engine (GKE). For this, we've used the following GCP resources:

  1. Cloud Source Repository: This is where we store our application code and manage version control.
  2. Artifact Registry: This is our storage space for containerized images.
  3. Trigger: This starts the build and deployment process whenever there's a code push.
  4. Cloud Build: This is the tool we use to build and deploy images.

The flow goes like this: any push to the Cloud Source Repository fires the trigger, which uses Cloud Build to build the image with Docker and store it in the Artifact Registry. Once the image is safely in the registry, Cloud Build deploys it from there to our GKE cluster.
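
For completeness, here's roughly how the repository and trigger can be wired up from the command line. The repository name my-app-repo and the branch pattern are placeholders I've picked for illustration, not values from the actual project:

# Create the Cloud Source Repository that holds the application code
gcloud source repos create my-app-repo

# Create a trigger that runs cloudbuild.yaml on every push to main
gcloud builds triggers create cloud-source-repositories \
  --repo=my-app-repo \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml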

Let's take a closer look at the build and deployment files:

cloudbuild.yaml

This file specifies the build and deploy steps. It includes steps for building the container image, pushing the image to the Container Registry, installing kubectl and authenticating it against the cluster, authenticating Docker, rendering deployment.yaml from deployment.yaml.template with envsubst, and finally deploying to GKE.

steps:
  #  Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/container-one:$COMMIT_SHA', '.' ]
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/$PROJECT_ID/container-one:$COMMIT_SHA' ]
  # Install kubectl
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [ 'components', 'install', 'kubectl' ]
  # Authenticate kubectl
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [ 'container', 'clusters', 'get-credentials', 'my-gke-cluster', '--region', 'us-central1', '--project', '$PROJECT_ID' ]
  # Authenticate Docker
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [ 'auth', 'configure-docker', 'gcr.io' ]
  # Substitute environment variables into deployment.yaml
  - name: 'alpine'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        apk add --no-cache gettext
        envsubst < deployment.yaml.template > deployment.yaml
    env:
      - 'PROJECT_ID=$PROJECT_ID'
      - 'COMMIT_SHA=$COMMIT_SHA'
  # Deploy to GKE
  - name: 'gcr.io/cloud-builders/kubectl'
    args: [ 'apply', '-f', 'deployment.yaml' ]
    env:
      - 'CLOUDSDK_COMPUTE_REGION=us-central1'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-gke-cluster'
images:
  - 'gcr.io/$PROJECT_ID/container-one:$COMMIT_SHA'
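
One thing to keep in mind: $COMMIT_SHA is only filled in automatically for builds started by the trigger. If you want to test the pipeline by hand with gcloud builds submit, you can pass the value yourself; something like this should work:

# Manually submit a build, supplying COMMIT_SHA since there is no trigger context
gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=COMMIT_SHA=$(git rev-parse --short HEAD) .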

deployment.yaml.template

This file defines the Kubernetes objects we create on GKE: a PersistentVolume, a PersistentVolumeClaim, a Deployment, and a Service. The $PROJECT_ID and $COMMIT_SHA placeholders are filled in by the envsubst step in cloudbuild.yaml, which writes out the final deployment.yaml that gets applied.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: container-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: container-one
  template:
    metadata:
      labels:
        app: container-one
    spec:
      containers:
        - name: container-one
          image: gcr.io/$PROJECT_ID/container-one:$COMMIT_SHA
          ports:
            - containerPort: 6000
          volumeMounts:
            - mountPath: "/bhishman_PV_dir"
              name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: pv-claim

---
apiVersion: v1
kind: Service
metadata:
  name: service-one
spec:
  selector:
    app: container-one
  ports:
    - protocol: TCP
      port: 6000
      targetPort: 6000
  type: LoadBalancer

And that's it! With these files and the power of GCP, we can efficiently build and deploy our application onto GKE.
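
To sanity-check the rollout after a build, a couple of commands go a long way; the cluster, Deployment, and Service names below match the Terraform script and manifests above:

# Point kubectl at the cluster Terraform created
gcloud container clusters get-credentials my-gke-cluster --region us-central1

# Check that the Deployment is up and the Service has an external IP
kubectl get deployment container-one
kubectl get service service-one

# Once EXTERNAL-IP appears, the app answers on port 6000
curl http://<EXTERNAL-IP>:6000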

Link to the Source Code.
