
CodeTogether Intel on AWS EKS (Ingress Options: NGINX or ALB)

This document describes how to deploy CodeTogether Intel to Amazon EKS using one of two mutually exclusive ingress approaches:

  • Option A (NGINX + TLS Secret): Install the NGINX Ingress Controller in the cluster (which provisions an AWS ELB), and terminate TLS at NGINX using a pre-existing certificate stored as a Kubernetes TLS secret (no cert-manager required).
  • Option B (AWS Load Balancer Controller / ALB + ACM): Use the AWS Load Balancer Controller to provision an ALB directly from a Kubernetes Ingress, and terminate TLS at the ALB using an ACM certificate.
danger

Choose one ingress approach per hostname. Do not mix NGINX and ALB for the same host.

All names and domains are examples and should be replaced with organization-specific values.

Environment Summary

  • Cloud: AWS (EKS, Route 53, NLB/ALB)
  • DNS: Existing DNS provider (no delegation required for this path)
  • Cluster: EKS
  • Ingress: NGINX Ingress Controller or AWS Load Balancer Controller (ALB)
  • TLS: Option A: TLS secret / Option B: ACM
  • DNS automation: Optional (external-dns), not required
  • Data: Cassandra (single‑pod demonstration)
  • Application: Intel (Helm chart intel — replace with actual chart)
  • Public host: intel.example.com

Variables (replace as needed)

# DNS & cluster
export CLUSTER_NAME="intel-eks"
export AWS_REGION="us-east-1"
export NAMESPACE="default"
export DOMAIN_NAME="*.example.com" # or "example.com" / "intel.example.com"
export INTEL_HOST="intel.example.com"
export COLLAB_HOST="collab.example.com"

1 Prerequisites

  • AWS account with IAM permissions for EKS, EC2, ELB, and Route 53
  • A parent domain (example.com) hosted by any DNS provider
  • A public DNS zone for your domain (Route53 or any external DNS provider). Route53 delegation is optional and only required if you plan to use Route53/external-dns.
  • Local tooling: kubectl, eksctl, Helm, AWS CLI
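
A quick sanity check of the local tooling (a minimal sketch; version output formats vary by tool):

# Confirm the required CLIs are installed and on the PATH
kubectl version --client
eksctl version
helm version --short
aws --version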

2 Create the EKS Cluster (example)

# Create a small demo cluster (adjust versions and sizes as needed)
eksctl create cluster \
--name ${CLUSTER_NAME} \
--region ${AWS_REGION} \
--version 1.33 \
--nodegroup-name ng-ops \
--nodes 1 --nodes-min 1 --nodes-max 1 \
--node-type t3.medium \
--managed
# Confirm cluster access
kubectl get nodes -o wide
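
If kubectl cannot reach the new cluster (for example from another shell or machine), a minimal sketch to refresh the kubeconfig entry:

# Write/refresh the kubeconfig context for this cluster
aws eks update-kubeconfig --name "${CLUSTER_NAME}" --region "${AWS_REGION}"
kubectl config current-context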

EKS commonly uses the AWS EBS CSI Driver for dynamic volume provisioning. This section installs the EBS CSI Driver as an EKS Addon and configures IRSA so the controller can create EBS volumes. If you skip this, PVCs may remain Pending with events like "waiting for a volume to be created...".

eksctl utils associate-iam-oidc-provider \
  --cluster "${CLUSTER_NAME}" \
  --region "${AWS_REGION}" \
  --approve

eksctl create iamserviceaccount \
  --cluster "${CLUSTER_NAME}" \
  --region "${AWS_REGION}" \
  --namespace kube-system \
  --name ebs-csi-controller-sa \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --override-existing-serviceaccounts \
  --role-name AmazonEKS_EBS_CSI_DriverRole_${CLUSTER_NAME}


Install the EBS CSI Driver EKS Addon (recommended):

export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
aws eks create-addon \
  --cluster-name "${CLUSTER_NAME}" \
  --addon-name aws-ebs-csi-driver \
  --region "${AWS_REGION}" \
  --service-account-role-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole_${CLUSTER_NAME}" \
  --resolve-conflicts OVERWRITE

aws eks wait addon-active \
  --cluster-name "${CLUSTER_NAME}" \
  --addon-name aws-ebs-csi-driver \
  --region "${AWS_REGION}"

Verify the CSI driver is installed and pods are running:

kubectl get csidrivers | grep -i ebs || true
kubectl -n kube-system get pods | egrep -i 'ebs|csi' || true

Verify StorageClasses:

kubectl get storageclass -o wide
kubectl get storageclass -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.provisioner}{"\t"}{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}'
note
  • For this guide we use the CSI storage class (example: gp2) for Cassandra PVCs.
  • Ensure only one default StorageClass is marked as default.
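
If more than one StorageClass carries the default annotation, one way to fix it is with kubectl patch (a sketch; replace <other-class> with the class you want to demote, and gp2 with whichever class you keep as default):

# Demote the unwanted default
kubectl patch storageclass <other-class> \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# Promote the class used in this guide
kubectl patch storageclass gp2 \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'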

3 Cassandra Installation (demo)

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm upgrade --install cassandra bitnami/cassandra \
--namespace "$NAMESPACE" --create-namespace \
--set replicaCount=1 \
--set persistence.storageClass=gp2 \
--set global.imageRegistry=public.ecr.aws \
--set image.registry=public.ecr.aws \
--set image.repository=bitnami/cassandra \
--set image.tag=5.0.5-debian-12-r17 \
--set global.security.allowInsecureImages=true

kubectl -n "${NAMESPACE}" get pods -l app.kubernetes.io/name=cassandra -o wide
kubectl -n "${NAMESPACE}" get pvc -o wide
note
  • Some Bitnami Cassandra tags may not exist on Docker Hub; using public.ecr.aws/bitnami/* avoids ImagePull errors.
  • The chart may warn about “unrecognized images” when switching registries; global.security.allowInsecureImages=true is required to proceed.

3.1 Create the Intel keyspace

Get the Cassandra password created by the chart:

CASS_PASS="$(kubectl -n "${NAMESPACE}" get secret cassandra -o jsonpath='{.data.cassandra-password}' | base64 -d)"
echo "$CASS_PASS"

Exec into the pod and create the keyspace:

kubectl -n "${NAMESPACE}" exec -it cassandra-0 -- bash
# Inside the pod, $CASS_PASS from your local shell is not set; paste the password printed above
cqlsh -u cassandra -p '<CASSANDRA_PASSWORD>' cassandra.default.svc.cluster.local 9042

CREATE KEYSPACE intel WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
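
Alternatively, the keyspace can be created non-interactively from your workstation, where $CASS_PASS is already set (a sketch; pod and host names match the chart defaults used above):

# Create the keyspace in one shot and confirm it exists
kubectl -n "${NAMESPACE}" exec -i cassandra-0 -- \
  cqlsh -u cassandra -p "$CASS_PASS" -e "CREATE KEYSPACE IF NOT EXISTS intel WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
kubectl -n "${NAMESPACE}" exec -i cassandra-0 -- \
  cqlsh -u cassandra -p "$CASS_PASS" -e "DESCRIBE KEYSPACES;"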

3.2 Local datacenter name

Bitnami Cassandra commonly uses datacenter1 by default. Confirm with:

    kubectl -n "${NAMESPACE}" exec -it cassandra-0 -- nodetool status
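
To pull just the datacenter name out of that output (a sketch using awk), typically needed when setting the Cassandra local datacenter in the application values:

# Print only the datacenter name (e.g. datacenter1)
kubectl -n "${NAMESPACE}" exec cassandra-0 -- nodetool status | awk '/^Datacenter:/{print $2}'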

4 Ingress on AWS: NGINX+ELB vs ALB Controller (do not mix)

On AWS you will typically use one of these patterns:

note

Choose your ingress approach:

  • If you want NGINX + TLS Secret, follow Option A below and skip Option B.
  • If you want ALB + ACM, skip Option A and go directly to Option B.

Option A: NGINX Ingress Controller provisions an ELB (usually an NLB)

  • You install ingress-nginx (see "Install NGINX Ingress Controller" below)
  • Kubernetes creates an AWS Load Balancer for the ingress-nginx-controller Service
  • TLS terminates at NGINX using the Kubernetes TLS secret configured on your Ingress

This means your application Ingress should use the nginx ingress class (or the cluster default):

metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  tls:
    - secretName: <your-tls-secret>

Create/Confirm your TLS Secret (use your own certificate)

If you already have a certificate for ${INTEL_HOST}, create a TLS secret in the namespace where Intel will be deployed. Example (namespace default):

kubectl -n "${NAMESPACE}" create secret tls codetogether-io-tls \
  --cert=/path/to/tls.crt \
  --key=/path/to/tls.key

Verify:

    kubectl -n "${NAMESPACE}" get secret codetogether-io-tls -o yaml
note
  • Secret type must be kubernetes.io/tls with keys tls.crt and tls.key.
  • Use the same secret name in the Intel chart values (Ingress TLS section).
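
Before wiring the secret into the chart, it can help to confirm the certificate actually covers ${INTEL_HOST} and that the secret has the expected type (a sketch; the -ext flag needs OpenSSL 1.1.1 or newer):

# Inspect the certificate subject and SANs
openssl x509 -in /path/to/tls.crt -noout -subject -ext subjectAltName
# Confirm the secret type is kubernetes.io/tls
kubectl -n "${NAMESPACE}" get secret codetogether-io-tls -o jsonpath='{.type}{"\n"}'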

Install NGINX Ingress Controller

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--set controller.ingressClassResource.name=nginx \
--set controller.ingressClass=nginx
kubectl -n ingress-nginx get svc,deploy,pod -o wide
note

If you see multiple AWS load balancers being created, double-check that you are not exposing Intel/Collab via both 1) service.type=LoadBalancer and 2) ingress.enabled=true. Unless you explicitly want both, pick one exposure method to avoid certificate/hostname inconsistencies.

The controller will provision an AWS load balancer. The ADDRESS will look like: a6f242a393a644dbba40d11a5f1480f0-1015045504.us-east-1.elb.amazonaws.com.

You will use this ELB hostname as the DNS target for ${INTEL_HOST}.
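
A small sketch to capture that hostname into a shell variable for the DNS step below:

# Save the NGINX controller's ELB hostname for the DNS record
ELB_HOSTNAME="$(kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
echo "$ELB_HOSTNAME"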


Configure DNS (existing DNS provider)

Create a DNS record for ${INTEL_HOST} pointing to the NGINX ELB hostname.

Recommended:

  • CNAME: ${INTEL_HOST} → <ELB_HOSTNAME>

Get the ELB hostname from the Ingress controller service (or from your app Ingress once deployed):

    kubectl -n ingress-nginx get svc ingress-nginx-controller -o wide

Or later from the Intel ingress:

    kubectl -n "${NAMESPACE}" get ingress intel-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'

Validate DNS propagation using public resolvers (important if your ISP resolver returns NXDOMAIN):

dig @1.1.1.1 +short ${INTEL_HOST}
dig @8.8.8.8 +short ${INTEL_HOST}

If your local resolver still fails but public resolvers work, retry later or temporarily use a public resolver for validation.


Optional: external-dns

If your DNS is hosted in Route 53 and you want Kubernetes to manage records automatically, you may install external-dns. If you manage DNS elsewhere (DigitalOcean/Cloudflare/etc), skip this section and create records manually (as shown above).

Deploy Intel (Helm)

note

Prerequisites: cluster & ingress basics from the Kubernetes install guide, TLS secret creation (see the TLS section above), and any SSO specifics from SSO configuration.

This deployment produces a Deployment, Service (ClusterIP:1080), and Ingress (class nginx), terminating TLS with your existing secret. Deploy Intel by following the Kubernetes install guide for CodeTogether Intel.

After deployment, check:

kubectl get deploy,svc,ingress,pod
kubectl get ingress intel-server -o wide # shows ELB address
dig @1.1.1.1 +short ${INTEL_HOST}
curl -I https://${INTEL_HOST}

Deploy Collab on AWS EKS (Helm)

Collab is installed as a separate Helm release and must be configured to connect to Intel (URL + shared secret). For the Collab-specific values, follow Install the Collab Container via Kubernetes.

DNS + TLS for Collab

Choose a Collab host (example):

export COLLAB_HOST="collab.example.com"

Create a DNS record for ${COLLAB_HOST} pointing to the same NGINX ELB hostname you used for Intel (or a different one if you run a separate Ingress controller).

Create/confirm a TLS secret for ${COLLAB_HOST} in the namespace where Collab will be deployed. (You may reuse the same TLS secret as Intel if the certificate covers both hosts via SAN/wildcard.)
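
If Collab needs its own certificate (rather than reusing a wildcard/SAN certificate), a minimal sketch, assuming the secret name codetogether-collab-tls (any name works as long as the Collab values reference it):

# Create a TLS secret for the Collab hostname
kubectl -n "${NAMESPACE}" create secret tls codetogether-collab-tls \
  --cert=/path/to/collab-tls.crt \
  --key=/path/to/collab-tls.key
kubectl -n "${NAMESPACE}" get secret codetogether-collab-tls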

Install Collab

Deploy Collab by following the Collab section of the Kubernetes install guide above, ensuring:

  • intel.url points to your Intel URL (example: https://${INTEL_HOST})
  • intel.secret matches the Intel-side shared secret (hq.collab.secret)
  • Ingress is enabled and uses the nginx ingress class (since this guide uses NGINX Ingress Controller)
  • TLS references your secret for ${COLLAB_HOST}

After deployment, check:

kubectl get deploy,svc,ingress,pod
kubectl get ingress codetogether-collab -o wide
dig @1.1.1.1 +short ${COLLAB_HOST}
curl -I https://${COLLAB_HOST}

Option B: AWS Load Balancer Controller (ALB + ACM)

This section describes an alternative AWS-native ingress approach using the AWS Load Balancer Controller. Use this approach if you want AWS to provision an ALB directly from a Kubernetes Ingress.

Key differences vs the NGINX + TLS Secret approach:

  • Ingress class is alb (not nginx)
  • TLS is typically terminated at the ALB using ACM (not a Kubernetes TLS secret)
  • DNS points to the ALB hostname created for the Ingress
danger

Do not mix NGINX and ALB for the same hostname. Your application Ingress must use either ingressClassName: nginx OR ingressClassName: alb.

Prerequisites

  • EKS cluster with OIDC provider enabled (IRSA)
  • AWS permissions to manage EC2/ELBv2/IAM
  • An ACM certificate ARN for your public hostname (same region as the cluster/ALB)
  • kubectl, eksctl, helm, aws CLI configured
  • VPC subnets must be discoverable by the controller (correct tags), or specify subnets explicitly via alb.ingress.kubernetes.io/subnets

A) Check if the controller is already installed

kubectl -n kube-system get deploy aws-load-balancer-controller
kubectl -n kube-system get sa aws-load-balancer-controller \
-o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}{"\n"}'

If the Deployment exists and the ServiceAccount prints a role-arn, the controller is already installed; you can skip the controller installation steps (B-F) below.

B) Enable IRSA (OIDC provider)

eksctl utils associate-iam-oidc-provider \
--cluster "${CLUSTER_NAME}" \
--region "${AWS_REGION}" \
--approve

C) Create the controller IAM policy (official)

curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
export POLICY_ARN="$(aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json \
--query Policy.Arn --output text)"
echo "$POLICY_ARN"
note

If the policy already exists, reuse its ARN:

export POLICY_ARN="$(aws iam list-policies --scope Local \
--query "Policies[?PolicyName=='AWSLoadBalancerControllerIAMPolicy'].Arn | [0]" --output text)"
echo "$POLICY_ARN"

D) Create/Update the service account (IRSA)

eksctl create iamserviceaccount \
--cluster "${CLUSTER_NAME}" \
--region "${AWS_REGION}" \
--namespace kube-system \
--name aws-load-balancer-controller \
--attach-policy-arn "${POLICY_ARN}" \
--override-existing-serviceaccounts \
--approve

E) (Fix if needed) Attach policy to the actual role used by the service account

If you see errors like UnauthorizedOperation: ec2:DescribeAvailabilityZones, attach the policy to the role referenced in the service account annotation:

export LBC_ROLE_ARN="$(kubectl -n kube-system get sa aws-load-balancer-controller \
-o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}')"
export LBC_ROLE_NAME="${LBC_ROLE_ARN##*/}"

Attach the policy:

aws iam attach-role-policy \
--role-name "$LBC_ROLE_NAME" \
--policy-arn "$POLICY_ARN"
kubectl -n kube-system rollout restart deploy/aws-load-balancer-controller
kubectl -n kube-system rollout status deploy/aws-load-balancer-controller

F) Install AWS Load Balancer Controller (Helm)

Obtain your cluster's VPC ID

export VPC_ID="$(aws eks describe-cluster \
--name "${CLUSTER_NAME}" \
--region "${AWS_REGION}" \
--query "cluster.resourcesVpcConfig.vpcId" --output text)"
echo "$VPC_ID"
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName="${CLUSTER_NAME}" \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set region="${AWS_REGION}" \
--set vpcId="${VPC_ID}"

Verify

kubectl -n kube-system rollout status deploy/aws-load-balancer-controller
kubectl -n kube-system get pods -l app.kubernetes.io/name=aws-load-balancer-controller

Expected output

kubectl -n kube-system rollout status deploy/aws-load-balancer-controller
kubectl -n kube-system get pods -l app.kubernetes.io/name=aws-load-balancer-controller -o wide

You should see a successful rollout, and 2 controller pods in Running state (by default the chart runs 2 replicas), similar to:

deployment "aws-load-balancer-controller" successfully rolled out
NAME                                             READY   STATUS    RESTARTS   AGE   IP              NODE                            NOMINATED NODE   READINESS GATES
aws-load-balancer-controller-686dd965d4-gr6wd    1/1     Running   0          20s   192.168.29.85   ip-192-168-15-96.ec2.internal   <none>           <none>
aws-load-balancer-controller-686dd965d4-v8zqr    1/1     Running   0          20s   192.168.3.216   ip-192-168-15-96.ec2.internal   <none>           <none>

If the rollout does not complete or pods are not Running, check the controller logs:

kubectl -n kube-system logs deploy/aws-load-balancer-controller --tail=200

Common causes include missing IAM permissions (IRSA/policy not attached) or VPC subnet tagging issues.
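
For the subnet tagging point, a sketch to inspect the role tags the controller relies on for subnet discovery (public subnets are expected to carry kubernetes.io/role/elb=1, private ones kubernetes.io/role/internal-elb=1); this assumes VPC_ID is still exported from the step above:

# List subnets in the cluster VPC together with their role tags
aws ec2 describe-subnets \
  --region "${AWS_REGION}" \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query 'Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone,RoleTags:Tags[?starts_with(Key,`kubernetes.io/role`)]}' \
  --output json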

G) Prereq: have an ACM certificate ARN

Request a new certificate for ${DOMAIN_NAME} (DNS validation)

Request the certificate:

CERT_ARN="$(aws acm request-certificate \
  --region "${AWS_REGION}" \
  --domain-name "${DOMAIN_NAME}" \
  --validation-method DNS \
  --query CertificateArn --output text)"
export ACM_CERT_ARN="$CERT_ARN"
echo "$ACM_CERT_ARN"

Get the DNS record ACM asks you to create:

aws acm describe-certificate --region "${AWS_REGION}" --certificate-arn "$ACM_CERT_ARN" \
  --query "Certificate.DomainValidationOptions[0].ResourceRecord" \
  --output json

It will return something like:

  • Name: _xxxx.${DOMAIN_NAME}.
  • Type: CNAME
  • Value: _yyyy.acm-validations.aws.
Create that CNAME in your DNS

ACM DNS validation works even if your DNS is hosted outside AWS (e.g., DigitalOcean/Cloudflare). You just need to create the requested CNAME record in your DNS provider.

In the ${DOMAIN_NAME} DNS panel:

  • Type: CNAME
  • Host/Name: _xxxx... (omit the trailing .${DOMAIN_NAME} if your DNS provider auto-appends it; depends on the UI)
  • Value: _yyyy.acm-validations.aws.
Wait for the certificate to become ISSUED (validation)
aws acm wait certificate-validated --region "${AWS_REGION}" --certificate-arn "$ACM_CERT_ARN"
aws acm describe-certificate --region "${AWS_REGION}" --certificate-arn "$ACM_CERT_ARN" \
  --query "Certificate.Status" --output text

Once it says ISSUED, you can use it in the Ingress.
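
If you need to recover the ARN later, a sketch using list-certificates:

# Look up issued certificates for the domain
aws acm list-certificates --region "${AWS_REGION}" \
  --certificate-statuses ISSUED \
  --query "CertificateSummaryList[?DomainName=='${DOMAIN_NAME}'].CertificateArn" \
  --output text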

Make sure the certificate was requested in the same region as the ALB (${AWS_REGION} in this guide). Once you have it:

H) Install Intel (internal Service only on :1080, NO Ingress yet)

note

Prerequisites: cluster & ingress basics from the Kubernetes install guide, and any SSO specifics from SSO configuration.

ingress.enabled remains false because ingress will be handled by the AWS Load Balancer Controller through a separately managed ALB Ingress (so we don't deploy the chart-managed Ingress resource).

ingress:
  enabled: false
  annotations: {}

Deploy Intel by following the Kubernetes install guide for CodeTogether Intel.

I) Install Collab

Install Collab (Helm) and expose it via Service (ClusterIP)

Collab must be reachable via an internal Service.

ingress.enabled remains false because ingress will be handled by the AWS Load Balancer Controller through a separately managed ALB Ingress (so we don't deploy the chart-managed Ingress resource).

ingress:
  enabled: false
  annotations: {}

Deploy Collab by following the Install the Collab Container via Kubernetes for CodeTogether Collab.

J) Configure ALB Ingress (Intel + Collab in the same Ingress)

Since Intel and Collab are deployed in the same namespace, you can use a single ALB Ingress and add a second host rule for Collab. This keeps both services behind the same ALB while routing traffic by hostname.

Replace <COLLAB_SERVICE_NAME> and <COLLAB_SERVICE_PORT> with the actual Service name and port created by the Collab Helm install. Same for Intel.
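
One way to discover those values (a sketch; the exact names depend on the Helm releases you installed):

# List Services and their ports in the target namespace
kubectl -n "${NAMESPACE}" get svc \
  -o custom-columns=NAME:.metadata.name,TYPE:.spec.type,PORTS:.spec.ports[*].port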

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: codetogether-intel
  annotations:
    kubernetes.io/ingress.class: alb

    # Internet-facing ALB
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip

    # HTTPS listener + ACM cert
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/certificate-arn: ${ACM_CERT_ARN}
    alb.ingress.kubernetes.io/ssl-redirect: "443"
spec:
  ingressClassName: alb
  rules:
    - host: <INTEL_HOST>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <INTEL_SERVICE_NAME>
                port:
                  number: <INTEL_SERVICE_PORT>

    - host: <COLLAB_HOST>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <COLLAB_SERVICE_NAME>
                port:
                  number: <COLLAB_SERVICE_PORT>

Apply the Ingress:

kubectl apply -f codetogether-intel-ingress.yaml
kubectl describe ingress codetogether-intel
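
If kubectl describe shows the literal string ${ACM_CERT_ARN} in the certificate-arn annotation, remember that kubectl apply does not expand shell variables; one option (a sketch using envsubst from GNU gettext, after replacing the <...> placeholders by hand) is:

# Expand exported variables such as ${ACM_CERT_ARN} before applying
envsubst < codetogether-intel-ingress.yaml | kubectl apply -f -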
note

This single-Ingress approach assumes the Intel and Collab Services are in the same namespace. If they are in different namespaces, use two Ingress objects.

Wait for the ALB and validate the TargetGroupBindings

After applying the Ingress, you should see an ALB hostname and two TargetGroupBindings (one for Intel and one for Collab):

kubectl -n "${NAMESPACE}" get ingress codetogether-intel -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'
kubectl get targetgroupbindings -A

Expected example:

k8s-default-codetoge-c22dc0ac27-1682607233.us-east-1.elb.amazonaws.com

NAMESPACE   NAME                              SERVICE-NAME   SERVICE-PORT   TARGET-TYPE   AGE
default     k8s-default-intelcol-e461104dcb   intel-collab   443            ip            3m31s
default     k8s-default-intelser-50d060e977   intel-server   1080           ip            3m31s
note

If you see TargetGroupBindings for both services, it confirms the AWS Load Balancer Controller successfully created target groups and registered your Kubernetes Services behind the ALB.

Create/verify DNS records (external provider)

A CNAME record cannot coexist with other record types (A/AAAA/NS/etc.) at the same name. Remove any conflicting records for the hostname before creating the CNAME.

Get the ALB hostname:

kubectl get ingress codetogether-intel -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'

Both hostnames should point to the same ALB hostname:

  • <INTEL_HOST> → <ALB_HOSTNAME>
  • <COLLAB_HOST> → <ALB_HOSTNAME>

Validate routing

ALB="$(kubectl get ingress codetogether-intel -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
curl -Ik "https://$ALB/" -H "Host: <INTEL_HOST>"
curl -Ik "https://$ALB/" -H "Host: <COLLAB_HOST>"
note

Keep both Intel and Collab Services as ClusterIP when using ALB Ingress.
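
A quick sketch to confirm that (replace the placeholders with your actual Service names):

# Both Services should report TYPE ClusterIP
kubectl -n "${NAMESPACE}" get svc <INTEL_SERVICE_NAME> <COLLAB_SERVICE_NAME> \
  -o custom-columns=NAME:.metadata.name,TYPE:.spec.type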


5 Troubleshooting Notes

Helm fails with Ingress networking.k8s.io/v1beta1 mapping error

  • Your Kubernetes cluster supports networking.k8s.io/v1 Ingress (EKS 1.19+). Confirm:
    kubectl api-resources | grep -i ingress
  • If Helm still tries to install v1beta1, update the chart templates to select Ingress API by availability (prefer .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" over semver-only checks).

Ingress error: pathType: Required value

  • For networking.k8s.io/v1, pathType is required. Ensure your rendered Ingress includes:
  pathType: Prefix

ImagePullBackOff for Cassandra (Bitnami image tag not found on Docker Hub)

  • Use the public ECR registry for Bitnami images (public.ecr.aws/bitnami/...) as shown in the Cassandra section.

6 Useful Commands

# ELB address from Ingress
kubectl get ingress intel-server -o wide

# Public curl checks
curl -I https://${INTEL_HOST}

Notes on Production Hardening

  • Size node groups for expected workload; consider multi‑AZ and PodDisruptionBudgets
  • Define resource requests/limits; configure HPAs where applicable
  • Use IRSA for controller/service IAM; lock down IAM and security groups
  • Configure persistent storage and backups for Cassandra; plan for replication/availability
  • Add observability (CloudWatch, Prometheus/Grafana), log aggregation, and alerts