DevOps Pipeline with AWS Developer Tools on EKS for CodeCart Technologies
AMJ Cloud Technologies implemented a CI/CD pipeline for CodeCart Technologies using AWS CodeBuild, CodePipeline, and EKS, deploying a frontend microservice with automated Docker image builds and Route 53 DNS.
AMJ Cloud Technologies delivered a robust CI/CD pipeline for CodeCart Technologies, an e-commerce company, to automate the deployment of a frontend microservice on Amazon Elastic Kubernetes Service (EKS). By integrating AWS CodeBuild, AWS CodePipeline, and GitHub, we streamlined the build, push, and deployment of Docker images to a private Elastic Container Registry (ECR) and EKS cluster. The pipeline included an approval stage with Amazon SNS notifications and automated DNS registration via External DNS, enabling secure access at frontend.codecarttechnologies.com and frontend-alt.codecarttechnologies.com.
Project Overview
CodeCart Technologies required an automated DevOps pipeline to deploy their e-commerce frontend microservice on EKS, replacing manual processes with a scalable, secure CI/CD workflow. AMJ implemented a pipeline built around two CodeBuild stages, Build (Docker image creation and ECR push) and Deploy (EKS deployment with STS-based authentication), separated by a manual approval gate. The solution used the AWS Load Balancer Controller for ALB Ingress, External DNS for Route 53 registration, and Amazon SNS for approval notifications, ensuring reliability and governance for their high-traffic platform.
Technical Implementation
Prerequisites
- Used CodeCart’s EKS cluster (ecommerce-cluster, version 1.31) with AWS CLI v2 and Docker installed:
```
aws --version
docker --version
```
- Configured AWS CLI credentials:
```
aws configure
```
- Installed and verified the AWS Load Balancer Controller (v2.8.0) and External DNS:
```
helm install load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
  --set clusterName=ecommerce-cluster \
  --set image.tag=v2.8.0
helm install external-dns external-dns/external-dns -n kube-system \
  --set provider=aws \
  --set aws.region=us-east-1
kubectl get pods -n kube-system
kubectl get pods
```
Create ECR Repository
- Created an ECR repository for the frontend microservice:
```
aws ecr create-repository --repository-name ecommerce-frontend --region us-east-1
aws ecr put-image-tag-mutability --repository-name ecommerce-frontend \
  --image-tag-mutability IMMUTABLE --region us-east-1
aws ecr put-image-scanning-configuration --repository-name ecommerce-frontend \
  --image-scanning-configuration scanOnPush=true --region us-east-1
```
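A quick check along these lines can confirm the immutability and scan-on-push settings took effect (the --query projection is illustrative, not part of the original runbook):
```
aws ecr describe-repositories --repository-names ecommerce-frontend --region us-east-1 \
  --query "repositories[0].{uri:repositoryUri,tagMutability:imageTagMutability,scanOnPush:imageScanningConfiguration.scanOnPush}"
```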
Set Up GitHub Repository
- Created a GitHub repository (codecart-eks-devops) and pushed the application files:
```
git clone git@github.com:<your-username>/codecart-eks-devops.git
git add .
git commit -m "Initial commit"
git push
```
- Files included:
  - frontend/index.html:
```
<!DOCTYPE html>
<html>
  <body style="background-color:rgb(228, 250, 210);">
    <h1>Welcome to CodeCart Technologies - V1</h1>
    <h3>DevOps with AWS EKS</h3>
    <p>Application: Frontend</p>
  </body>
</html>
```
  - Dockerfile:
```
FROM public.ecr.aws/nginx/nginx:latest
COPY frontend /usr/share/nginx/html/frontend
```
  - buildspec-build.yaml (Build stage for Docker image creation and ECR push).
  - buildspec-deploy.yaml (Deploy stage for EKS deployment).
  - Kubernetes manifests: frontend-deployment.yaml, frontend-service.yaml, ingress-frontend.yaml.
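For orientation, the repository layout looked roughly like the following; the kube-manifests/ and aws-auth/ directory names are inferred from the buildspecs and commands shown later, so treat this sketch as an assumption rather than an exact listing:
```
codecart-eks-devops/
├── Dockerfile
├── buildspec-build.yaml
├── buildspec-deploy.yaml
├── frontend/
│   └── index.html
├── kube-manifests/
│   ├── frontend-deployment.yaml
│   ├── frontend-service.yaml
│   └── ingress-frontend.yaml
└── aws-auth/            # backup and patched aws-auth ConfigMap files
```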
Configure Build Stage
- Created a GitHub connection in AWS Developer Tools:
  - Connected to the codecart-eks-devops repository via a GitHub App connection (eks-devops-github-connection); a CLI sketch for creating the connection follows this list.
- Configured the CodeBuild project (build-eks-devops):
```
aws codebuild create-project --name build-eks-devops \
  --source "{\"type\":\"GITHUB\",\"location\":\"https://github.com/<your-username>/codecart-eks-devops.git\",\"buildspec\":\"buildspec-build.yaml\"}" \
  --environment "{\"type\":\"LINUX_CONTAINER\",\"image\":\"aws/codebuild/amazonlinux-x86_64-standard:5.0\",\"computeType\":\"BUILD_GENERAL1_SMALL\"}" \
  --service-role arn:aws:iam::<account-id>:role/buildphase-codebuild-eks-devops-service-role
```
- Attached policies to the CodeBuild role (buildphase-codebuild-eks-devops-service-role):
```
aws iam attach-role-policy --role-name buildphase-codebuild-eks-devops-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam attach-role-policy --role-name buildphase-codebuild-eks-devops-service-role \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
```
- Buildspec for the Build stage (buildspec-build.yaml):
```
version: 0.2
env:
  variables:
    IMAGE_URI: "<account-id>.dkr.ecr.us-east-1.amazonaws.com/ecommerce-frontend"
  exported-variables:
    - IMAGE_URI
    - IMAGE_TAG
phases:
  pre_build:
    commands:
      - IMAGE_TAG="$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c1-7)"
      - aws ecr get-login-password | docker login --username AWS --password-stdin $IMAGE_URI
  build:
    commands:
      - docker build -t $IMAGE_URI:$IMAGE_TAG .
  post_build:
    commands:
      - docker push $IMAGE_URI:$IMAGE_TAG
      - echo "IMAGE_URI=$IMAGE_URI" >> $CODEBUILD_SRC_DIR/exported-vars.env
      - echo "IMAGE_TAG=$IMAGE_TAG" >> $CODEBUILD_SRC_DIR/exported-vars.env
artifacts:
  files:
    - exported-vars.env
    - buildspec-deploy.yaml
    - "**/kube-manifests/**/*"
```
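The GitHub connection itself was set up through the GitHub App flow; an equivalent CLI sketch is shown below. Note this is an assumption about how the connection could be created from the command line rather than the exact steps used, and the handshake still has to be completed in the console:
```
aws codestar-connections create-connection \
  --provider-type GitHub \
  --connection-name eks-devops-github-connection
# Returns a ConnectionArn in PENDING state; finish the GitHub App authorization
# in the console, after which the connection becomes AVAILABLE for CodePipeline.
```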
Configure Deploy Stage
- Created an IAM role for EKS access (EksCodeBuildKubectlRole):
```
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
aws iam create-role --role-name EksCodeBuildKubectlRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::'$ACCOUNT_ID':root"},"Action":"sts:AssumeRole"}]}'
aws iam put-role-policy --role-name EksCodeBuildKubectlRole --policy-name eks-describe \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"eks:Describe*","Resource":"*"}]}'
```
- Updated the aws-auth ConfigMap (the resulting mapRoles entry is sketched after this list):
```
ROLE_ARN="arn:aws:iam::$ACCOUNT_ID:role/EksCodeBuildKubectlRole"
kubectl get configmap aws-auth -n kube-system -o yaml > aws-auth/aws-auth-backup.yaml
kubectl get configmap aws-auth -n kube-system -o yaml | \
  awk -v role="    - rolearn: $ROLE_ARN\n      username: build\n      groups:\n        - system:masters" \
  '/mapRoles: \|/ {print; print role; next} 1' > aws-auth/aws-auth-patch.yaml
kubectl apply -f aws-auth/aws-auth-patch.yaml
```
- Configured the CodeBuild project (deploy-eks-devops):
```
aws codebuild create-project --name deploy-eks-devops \
  --source "{\"type\":\"CODEPIPELINE\",\"buildspec\":\"buildspec-deploy.yaml\"}" \
  --environment "{\"type\":\"LINUX_CONTAINER\",\"image\":\"aws/codebuild/amazonlinux-x86_64-standard:5.0\",\"computeType\":\"BUILD_GENERAL1_SMALL\"}" \
  --service-role arn:aws:iam::<account-id>:role/deployphase-codebuild-eks-devops-service-role
```
- Attached an STS AssumeRole policy to the deploy CodeBuild role:
```
aws iam put-role-policy --role-name deployphase-codebuild-eks-devops-service-role \
  --policy-name eks-codebuild-sts-assume-role \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"sts:AssumeRole","Resource":"arn:aws:iam::<account-id>:role/EksCodeBuildKubectlRole"}]}'
```
- Buildspec for the Deploy stage (buildspec-deploy.yaml):
```
version: 0.2
env:
  variables:
    EKS_CLUSTER_NAME: "ecommerce-cluster"
    EKS_KUBECTL_ROLE_ARN: "arn:aws:iam::<account-id>:role/EksCodeBuildKubectlRole"
phases:
  pre_build:
    commands:
      - source ./exported-vars.env
      - sed -i 's@CONTAINER_IMAGE@'"$IMAGE_URI:$IMAGE_TAG"'@' kube-manifests/frontend-deployment.yaml
  build:
    commands:
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID=$(echo $CREDENTIALS | jq -r '.Credentials.AccessKeyId')
      - export AWS_SECRET_ACCESS_KEY=$(echo $CREDENTIALS | jq -r '.Credentials.SecretAccessKey')
      - export AWS_SESSION_TOKEN=$(echo $CREDENTIALS | jq -r '.Credentials.SessionToken')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f kube-manifests/
      - kubectl rollout status deployment/frontend-deployment --timeout=180s
  post_build:
    commands:
      - kubectl get pods,svc,ingress -o wide
```
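For reference, after the patch above the aws-auth ConfigMap contains a mapRoles entry along these lines; surrounding fields and the pre-existing node-group mapping are omitted, and the exact indentation is an assumption based on a typical aws-auth layout:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # ...existing node instance role mapping stays above this entry...
    - rolearn: arn:aws:iam::<account-id>:role/EksCodeBuildKubectlRole
      username: build
      groups:
        - system:masters
```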
Configure Approval Stage
- Created an SNS topic and email subscription:
```
aws sns create-topic --name eks-devops-topic
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:<account-id>:eks-devops-topic \
  --protocol email --notification-endpoint devops@codecarttechnologies.com
```
- Added a manual approval stage to CodePipeline:
  - Stage: DeploymentApproval, Action: ManualApproval, SNS Topic: arn:aws:sns:us-east-1:<account-id>:eks-devops-topic.
- Updated the CodePipeline role:
```
aws iam attach-role-policy --role-name eks-devops-codepipeline-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSNSFullAccess
```
Create CodePipeline
- Configured the pipeline (eks-devops):
  - Source: GitHub (codecart-eks-devops, main branch, eks-devops-github-connection).
  - Build: CodeBuild (build-eks-devops).
  - Approval: ManualApproval with SNS notification.
  - Deploy: CodeBuild (deploy-eks-devops).
- Created the pipeline via the AWS CLI or Console (a JSON sketch of the definition follows below), ensuring the eks-devops-codepipeline-service-role role had AWSCodeBuildAdminAccess attached.
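When creating the pipeline from the CLI, the definition can be supplied as JSON. The sketch below reflects the stages described above; the artifact bucket name, connection ARN, and action names are placeholders and assumptions, not values from the original project:
```
{
  "pipeline": {
    "name": "eks-devops",
    "roleArn": "arn:aws:iam::<account-id>:role/eks-devops-codepipeline-service-role",
    "artifactStore": { "type": "S3", "location": "<artifact-bucket>" },
    "stages": [
      {
        "name": "Source",
        "actions": [{
          "name": "GitHubSource",
          "actionTypeId": { "category": "Source", "owner": "AWS", "provider": "CodeStarSourceConnection", "version": "1" },
          "configuration": {
            "ConnectionArn": "<connection-arn>",
            "FullRepositoryId": "<your-username>/codecart-eks-devops",
            "BranchName": "main"
          },
          "outputArtifacts": [{ "name": "SourceOutput" }]
        }]
      },
      {
        "name": "Build",
        "actions": [{
          "name": "Build",
          "actionTypeId": { "category": "Build", "owner": "AWS", "provider": "CodeBuild", "version": "1" },
          "configuration": { "ProjectName": "build-eks-devops" },
          "inputArtifacts": [{ "name": "SourceOutput" }],
          "outputArtifacts": [{ "name": "BuildOutput" }]
        }]
      },
      {
        "name": "DeploymentApproval",
        "actions": [{
          "name": "ManualApproval",
          "actionTypeId": { "category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1" },
          "configuration": { "NotificationArn": "arn:aws:sns:us-east-1:<account-id>:eks-devops-topic" }
        }]
      },
      {
        "name": "Deploy",
        "actions": [{
          "name": "Deploy",
          "actionTypeId": { "category": "Build", "owner": "AWS", "provider": "CodeBuild", "version": "1" },
          "configuration": { "ProjectName": "deploy-eks-devops" },
          "inputArtifacts": [{ "name": "BuildOutput" }]
        }]
      }
    ]
  }
}
```
The file would then be passed to aws codepipeline create-pipeline --cli-input-json file://eks-devops-pipeline.json.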
Deploy and Test
- Updated frontend/index.html (e.g., “V2”, “V3”) and pushed to GitHub:
```
git commit -am "Update to V2"
git push
```
- Monitored the CodePipeline execution and CodeBuild logs.
- Verified Kubernetes resources:
```
kubectl get pods,svc,ingress -o wide
```
- Checked External DNS logs and Route 53 records:
```
kubectl logs -f -n kube-system $(kubectl get po -n kube-system | egrep -o 'external-dns[A-Za-z0-9-]+')
```
- Tested the application:
```
https://frontend.codecarttechnologies.com/frontend/index.html
https://frontend-alt.codecarttechnologies.com/frontend/index.html
```
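A simple smoke test from the command line can confirm the HTTP-to-HTTPS redirect and the deployed version; the grep string below assumes the "V2" heading from the commit shown above:
```
curl -I http://frontend.codecarttechnologies.com/frontend/index.html    # expect a 301 redirect to HTTPS
curl -s https://frontend.codecarttechnologies.com/frontend/index.html | grep "V2"
curl -s https://frontend-alt.codecarttechnologies.com/frontend/index.html | grep "V2"
```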
Kubernetes Manifests
- Deployment (frontend-deployment.yaml):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: CONTAINER_IMAGE
          ports:
            - containerPort: 80
```
- Service (frontend-service.yaml):
```
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  labels:
    app: frontend
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /frontend/index.html
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
```
- Ingress (ingress-frontend.yaml):
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: ecommerce-ingress
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:<account-id>:certificate/<certificate-id>
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    external-dns.alpha.kubernetes.io/hostname: frontend.codecarttechnologies.com,frontend-alt.codecarttechnologies.com
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/success-codes: "200"
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
spec:
  ingressClassName: my-aws-ingress-class
  defaultBackend:
    service:
      name: frontend-service
      port:
        number: 80
```
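The ingress above references ingressClassName: my-aws-ingress-class, an IngressClass that points at the AWS Load Balancer Controller. A minimal sketch of that resource, assumed to have been created during the controller setup, looks like this:
```
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: my-aws-ingress-class
  annotations:
    # Optionally marks this class as the cluster default.
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  # Controller identifier handled by the AWS Load Balancer Controller.
  controller: ingress.k8s.aws/alb
```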
Technical Highlights
- Automated CI/CD Pipeline: Integrated GitHub, CodeBuild, and CodePipeline for seamless build and deployment.
- Secure EKS Access: Used STS AssumeRole and the aws-auth ConfigMap to enable CodeBuild to deploy to EKS securely.
- ECR Integration: Leveraged private ECR with tag immutability and scan-on-push for secure image management.
- Approval Workflow: Implemented SNS-based manual approval for governance.
- DNS Automation: Configured External DNS for Route 53, supporting multiple DNS records.
Client Impact
For CodeCart Technologies, this pipeline reduced deployment times, eliminated manual processes, and enhanced security with private ECR and HTTPS access. The automated workflow supported their e-commerce platform’s scalability and reliability.
Technologies Used
- AWS EKS
- AWS Elastic Container Registry (ECR)
- AWS CodeBuild
- AWS CodePipeline
- GitHub
- AWS Load Balancer Controller
- Kubernetes Ingress
- External DNS
- AWS Route 53
- AWS Certificate Manager
- Amazon SNS
- Docker
Need a Similar Solution?
We can help you design and implement similar cloud infrastructure and DevOps solutions for your organization.