🌐 UB-Global-US Terraform Infrastructure

Overview

The ub-global-us environment is a production-grade AWS infrastructure deployed and managed entirely through Terraform. This environment hosts the Unibeam platform services across multiple availability zones with high availability and security configurations.

Deployment Method

All infrastructure operations are executed via GitHub Actions workflows. Manual Terraform commands are not recommended for production changes.


📁 Directory Structure

The repository follows a modular approach with clear separation of concerns:

managed-services/
├── components/          # Individual infrastructure components
├── modules/            # Reusable Terraform modules
├── providers.tf        # AWS provider configuration
├── variables.tf        # Input variable definitions
├── locals.tf          # Local variables and calculations
├── outputs.tf         # Output values
├── data.tf            # Data sources (VPC, subnets, etc.)
├── remote-state.tf    # S3 backend configuration
└── global-variables.tfvars  # Global variable values

Components Directory

Each component represents a distinct infrastructure layer:

components/
├── eks/                    # EKS cluster configuration
├── worker-nodes/          # EKS worker node groups
├── mongodb/               # MongoDB Atlas clusters
├── redislabs/            # Redis Cloud instances
├── s3/                   # S3 buckets with lifecycle policies
├── route53/              # Private hosted zones and records
├── firewall/             # Network firewall rules
├── security-groups/      # Security group definitions
│   ├── dmz/             # DMZ zone security groups
│   ├── domain/          # Domain-specific groups
│   └── whitelist/       # IP whitelist configurations
├── iam/                  # IAM roles and policies
└── monitoring/           # CloudWatch and monitoring

Modules Directory

Reusable modules that enforce consistency:

modules/
├── eks-cluster/          # EKS cluster module
├── iam-roles/           # IAM role templates
├── mongodb-atlas/       # MongoDB Atlas provisioning
├── redis-cloud/         # Redis Cloud provisioning
├── s3-bucket/          # S3 bucket with standards
├── security-group/      # Security group module
└── route53-zone/       # DNS zone module

🔧 Core Components

EKS Cluster

Location: components/eks/

The EKS cluster forms the foundation of the Unibeam platform.

Key Features:

  • Multi-AZ deployment for high availability
  • EKS addons (VPC CNI, CoreDNS, kube-proxy)
  • Pod Identity for AWS service access
  • Integration with AWS load balancers (ALB/NLB)

Files:

  • main.tf - Cluster definition
  • addons.tf - EKS addons configuration
  • backend.tf - State management

Pod Identity

All Unibeam components use Pod Identity for accessing AWS services (Secrets Manager, S3, MongoDB Atlas). Avoid using IRSA for new services.
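For orientation, a minimal Pod Identity association might look like the sketch below; the cluster name, namespace, and service account are hypothetical placeholders, not values from this repository.

# Hypothetical Pod Identity association (placeholder names)
resource "aws_eks_pod_identity_association" "example" {
  cluster_name    = "ub-global-us"       # assumed cluster name
  namespace       = "example-namespace"  # workload namespace (placeholder)
  service_account = "example-service"    # Kubernetes service account (placeholder)
  role_arn        = aws_iam_role.example_pod_identity.arn  # role sketched under IAM Roles below
}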

Worker Nodes

Location: components/worker-nodes/

Managed node groups for running workloads.

Configuration:

  • Instance types optimized for workload
  • Auto-scaling groups
  • Spot and On-Demand instance mix
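A hedged sketch of a managed node group with mixed capacity settings is shown below; the instance types, sizes, and names are assumptions for illustration only.

# Hypothetical managed node group (illustrative values only)
resource "aws_eks_node_group" "spot_workers" {
  cluster_name    = "ub-global-us"                 # assumed cluster name
  node_group_name = "workers-spot-example"
  node_role_arn   = aws_iam_role.worker_nodes.arn  # assumed node role reference
  subnet_ids      = data.aws_subnets.private.ids
  capacity_type   = "SPOT"                         # a second group would use "ON_DEMAND"
  instance_types  = ["m5.large", "m5a.large"]      # illustrative instance mix

  scaling_config {
    desired_size = 3
    min_size     = 3
    max_size     = 10
  }
}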

IAM Roles

Location: components/iam/ and modules/iam-roles/

Key Roles:

  • EKS Service Roles: Cluster operation permissions
  • Pod Identity Roles: Service-specific AWS access
    • Secrets Manager access
    • S3 bucket operations
    • MongoDB Atlas connections
  • Node Group Roles: Worker node permissions

Least Privilege

IAM policies follow least privilege principles. Reference existing roles before creating new ones.
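For reference, EKS Pod Identity roles trust the pods.eks.amazonaws.com service principal for sts:AssumeRole and sts:TagSession. The sketch below is a generic example with a hypothetical name, not a copy of an existing role in this repository.

# Hypothetical Pod Identity role (placeholder name)
resource "aws_iam_role" "example_pod_identity" {
  name = "example-pod-identity-role"

  # Trust policy required by the EKS Pod Identity agent
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Service = "pods.eks.amazonaws.com"
      }
      Action = ["sts:AssumeRole", "sts:TagSession"]
    }]
  })
}

# Attach only the permissions the service actually needs (least privilege),
# e.g. read access to a specific Secrets Manager secret or S3 prefix.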

MongoDB Atlas

Location: components/mongodb/

Production MongoDB clusters with private connectivity.

Configuration:

  • Connection: AWS PrivateLink or VPC Peering
  • Authentication: Pod Identity with IAM roles
  • Environments:
    • Production clusters (high availability)
    • Development clusters

Key Files:

  • cluster.tf - MongoDB cluster definition
  • privatelink.tf - Private endpoint configuration
  • variables.tf - Cluster-specific variables
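The following is a hedged illustration of a cluster definition, assuming the MongoDB Atlas provider; the project ID variable, region, and instance size are placeholders and do not reflect the actual cluster.tf.

# Hypothetical MongoDB Atlas cluster (placeholder values)
resource "mongodbatlas_advanced_cluster" "example" {
  project_id   = var.atlas_project_id  # assumed variable
  name         = "example-cluster"
  cluster_type = "REPLICASET"

  replication_specs {
    region_configs {
      provider_name = "AWS"
      region_name   = "US_EAST_1"
      priority      = 7

      electable_specs {
        instance_size = "M40"  # illustrative tier
        node_count    = 3
      }
    }
  }
}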

Redis Cloud

Location: components/redislabs/

Redis caching layer with secure connectivity.

Configuration:

  • Connection: VPC Peering
  • High Availability: Multi-AZ replication
  • Access: Through security groups and peering
  • ACLs: Role-based access control; usernames and passwords are stored in Secrets Manager and fetched via Pod Identity
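For illustration, the secret holding a Redis ACL user's credentials could be looked up like this in Terraform (workloads themselves read it at runtime through Pod Identity); the secret name and JSON keys are assumptions.

# Hypothetical lookup of Redis ACL credentials stored in Secrets Manager
data "aws_secretsmanager_secret_version" "redis_acl" {
  secret_id = "ub-global-us/redis/example-acl"  # placeholder secret name
}

locals {
  # Assumes the secret value is a JSON object with "username" and "password" keys
  redis_credentials = jsondecode(data.aws_secretsmanager_secret_version.redis_acl.secret_string)
}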

S3 Buckets

Location: components/s3/

Multiple S3 configurations for various purposes.

Features:

  • Lifecycle policies for cost optimization
  • Cross-region replication (where applicable)
  • Encryption at rest (KMS)
  • Versioning enabled

Common Buckets:

  • Application logs (Loki chunks)
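A minimal sketch of the bucket pattern described above, using standard AWS provider resources; the bucket name, KMS key variable, and lifecycle days are illustrative assumptions.

# Hypothetical S3 bucket with versioning, KMS encryption, and a lifecycle rule
resource "aws_s3_bucket" "logs" {
  bucket = "ub-global-us-example-logs"  # placeholder bucket name
}

resource "aws_s3_bucket_versioning" "logs" {
  bucket = aws_s3_bucket.logs.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = var.kms_key_arn  # assumed variable
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id
  rule {
    id     = "expire-old-chunks"
    status = "Enabled"
    filter {}  # applies to the whole bucket

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }
    expiration {
      days = 365
    }
  }
}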

Route53

Location: components/route53/

Private hosted zones for internal DNS resolution.

Configuration:

  • Private zones for the VPC
  • Service discovery records
  • MongoDB and Redis endpoint aliases
  • Internal load balancer DNS
  • Private DNS zones are associated with the VPCs for internal resolution (workload VPC and the DMZ VPC "vpn")
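Below is a hedged sketch of a private hosted zone shared across two VPCs, plus an endpoint alias record; the zone name, record name, and variables are assumptions.

# Hypothetical private hosted zone associated with the workload and DMZ VPCs
resource "aws_route53_zone" "internal" {
  name = "internal.example.local"  # placeholder zone name

  vpc {
    vpc_id = data.aws_vpc.main.id  # workload VPC
  }

  vpc {
    vpc_id = var.dmz_vpc_id        # assumed variable for the DMZ VPC ("vpn")
  }
}

resource "aws_route53_record" "mongodb_alias" {
  zone_id = aws_route53_zone.internal.zone_id
  name    = "mongodb.internal.example.local"    # placeholder record name
  type    = "CNAME"
  ttl     = 300
  records = [var.mongodb_private_endpoint_dns]  # assumed variable
}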

Firewall

Location: components/firewall/

Network firewall policies and rules.

Zones:

  • Workload Zone: External-facing traffic
    • HTTPS (443)
    • TCP 9506 (custom services)
    • Suricata rules for traffic inspection (message headers)
  • DMZ/SEC Zone: Internal traffic filtering and egress control
    • Restrictive rules for sensitive services
    • Logging and monitoring integration
    • Domain whitelisting for known services
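The sketch below shows how a stateful Suricata rule group for the workload zone could be declared with AWS Network Firewall; the capacity and rule content are illustrative assumptions, not the deployed rules.

# Hypothetical stateful rule group using Suricata-compatible rules
resource "aws_networkfirewall_rule_group" "workload_inspection" {
  name     = "workload-inspection-example"  # placeholder name
  type     = "STATEFUL"
  capacity = 100

  rule_group {
    rules_source {
      # Example rules only: allow HTTPS and the custom TCP 9506 service
      rules_string = <<-EOT
        pass tls $HOME_NET any -> $EXTERNAL_NET 443 (sid:1000001; rev:1;)
        pass tcp $HOME_NET any -> $EXTERNAL_NET 9506 (sid:1000002; rev:1;)
      EOT
    }
  }
}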

Security Groups

Location: components/security-groups/

Layered security group architecture.

Types:

  • DMZ/SEC Security Groups: External ingress/egress rules (WireGuard VPN access)
  • Whitelist Security Groups: IP-based access control
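A hedged example of a whitelist security group follows; the CIDR variable and the WireGuard default UDP port are assumptions for illustration.

# Hypothetical whitelist security group for VPN ingress (illustrative values)
resource "aws_security_group" "vpn_whitelist" {
  name        = "vpn-whitelist-example"  # placeholder name
  description = "Allow WireGuard ingress from whitelisted IPs"
  vpc_id      = data.aws_vpc.main.id

  ingress {
    description = "WireGuard VPN"
    from_port   = 51820                  # WireGuard default UDP port (assumed)
    to_port     = 51820
    protocol    = "udp"
    cidr_blocks = var.whitelisted_cidrs  # assumed variable
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}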


🔄 Terraform Configuration Patterns

File Organization Standard

Each component follows this structure:

component/
├── main.tf              # Primary resource definitions
├── variables.tf         # Input variables
├── locals.tf           # Local values
├── outputs.tf          # Output values
├── data.tf            # Data sources
├── backend.tf         # Remote state backend
└── {region}-vars.tfvars  # Region-specific values

Backend Configuration

State Management:

  • S3 backend for remote state
  • Separate state files per component

# backend.tf
terraform {
  backend "s3" {
    bucket         = "terraform-state-bucket"
    key            = "ub-global-us/component-name/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}

Variable Management

Global Variables (global-variables.tfvars):

  • Environment name
  • VPC ID and subnet IDs
  • Common tags
  • AWS region
  • EKS worker configurations
  • Redis and MongoDB settings
  • Roles/permissions for Pod Identity

Component Variables (component-specific variables.tf):

  • Resource-specific configurations
  • Size and capacity settings
  • Feature flags
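For orientation only, a global variable file might look like the following; every key and value here is a hypothetical placeholder, not the contents of global-variables.tfvars.

# global-variables.tfvars (hypothetical example values)
environment = "ub-global-us"
aws_region  = "us-east-1"
vpc_id      = "vpc-0123456789abcdef0"

private_subnet_ids = [
  "subnet-0123456789abcdef0",
  "subnet-0fedcba9876543210",
]

common_tags = {
  Environment = "production"
  ManagedBy   = "terraform"
}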

Data Sources

Common Data Sources:

# data.tf
data "aws_vpc" "main" {
  id = var.vpc_id
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }
  tags = {
    Type = "private"
  }
}


🚀 Deployment Workflow

GitHub Actions Integration

Two IAM roles are configured on the AWS side for GitHub Actions: the GithubActions role, used to execute operations with elevated permissions, and the GitHub-Actions-TF role, used for Terraform state updates. Both roles carry trust relationships that allow GitHub Actions to assume them for Terraform operations, with trust scoped by branch or environment (a hedged trust-policy sketch follows the workflow phases below). All Terraform operations execute through GitHub Actions:

  1. Plan Phase:
     • Triggered on pull request
     • Runs terraform plan
     • Posts plan output as PR comment

  2. Apply Phase:
     • Triggered on merge to main
     • Runs terraform apply
     • Updates infrastructure
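The branch- or environment-scoped trust mentioned above is typically expressed as a condition on the GitHub OIDC subject claim. The sketch below is a generic example; the OIDC provider variable, repository path, and role name are placeholders.

# Hypothetical trust policy for a GitHub Actions role via the GitHub OIDC provider
resource "aws_iam_role" "github_actions_example" {
  name = "GithubActions-example"  # placeholder; the actual roles are GithubActions / GitHub-Actions-TF

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = var.github_oidc_provider_arn  # assumed variable
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          # Restrict trust to a specific branch (or environment) of the repository
          "token.actions.githubusercontent.com:sub" = "repo:example-org/example-repo:ref:refs/heads/main"
        }
      }
    }]
  })
}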

Manual Operations

Never run Terraform commands manually in production. All changes must go through the CI/CD pipeline.

Deployment Steps

  1. Create Feature Branch

    git checkout -b feature/add-new-component

  2. Make Changes
     • Modify Terraform files
     • Update variables as needed
     • Follow existing patterns

  3. Submit Pull Request
     • GitHub Actions runs plan
     • Review plan output
     • Get approvals

  4. Merge to Main
     • GitHub Actions runs apply
     • Infrastructure updated
     • State files updated in S3

🔐 Security Considerations

IAM Best Practices

  • ✅ Use Pod Identity for AWS service access
  • ✅ Follow least privilege principle
  • ✅ Audit IAM policies regularly
  • ❌ Avoid using root credentials
  • ❌ Don't hardcode secrets

Network Security

Layered Approach:

  1. VPC Level: Public/Private subnet segregation
  2. Firewall Level: Network firewall rules
  3. Security Groups: Instance-level filtering
  4. Application Level: Service mesh policies (if applicable)

Secrets Management

  • AWS Secrets Manager: Application secrets
  • KMS Encryption: Data at rest
  • Pod Identity: Secure secret access

SCPs Applied

Organization-level Service Control Policies (SCPs) enforce additional security boundaries. Consult with security team before major changes.


📊 Monitoring Setup

CloudWatch Integration

Location: components/monitoring/

Configured Monitoring:

  • EKS cluster metrics
  • Node group health
  • Application logs forwarding
  • Custom metric dashboards
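As a hedged example, a node-level CPU alarm could be defined as below; this assumes Container Insights metrics are available and uses placeholder names, thresholds, and an SNS topic variable.

# Hypothetical CloudWatch alarm on node CPU (assumes Container Insights is enabled)
resource "aws_cloudwatch_metric_alarm" "node_cpu_high" {
  alarm_name          = "ub-global-us-node-cpu-high"  # placeholder name
  namespace           = "ContainerInsights"
  metric_name         = "node_cpu_utilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 3
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    ClusterName = "ub-global-us"  # assumed cluster name
  }

  alarm_actions = [var.alerts_sns_topic_arn]  # assumed SNS topic variable
}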

Kubernetes Monitoring

Namespace: monitoring

  • Kube-Prometheus-Stack: Metrics collection
  • Loki: Log aggregation (namespace: loki)
  • Promtail: Log shipping (namespace: promtail)

đŸ› ī¸ Common OperationsÂļ

Adding a New Service

  1. Create IAM Role (if AWS access needed)

    components/iam/new-service-role.tf
    

  2. Add Security Group (if unique networking needed)

    components/security-groups/new-service-sg.tf
    

  3. Update Route53 (for service discovery)

    components/route53/new-service-records.tf
    

  4. Deploy via GitHub Actions

Modifying Existing Resources

  1. Locate component in components/ directory
  2. Update relevant .tf files
  3. Update variables if needed
  4. Submit PR for review
  5. Merge to trigger apply

Scaling Resources

EKS Node Groups:

# components/worker-nodes/main.tf
scaling_config {
  desired_size = 5  # Update this
  max_size     = 10
  min_size     = 3
}

MongoDB Cluster:

# components/mongodb/cluster.tf
provider_instance_size_name = "M40"  # Update tier


📚 Additional Resources

Best Practices

  1. Always reference existing configurations before creating new ones
  2. Maintain consistency with naming conventions
  3. Document significant changes in PR descriptions
  4. Test in dev environment before production
  5. Use modules for reusable infrastructure patterns

âš ī¸ Important NotesÂļ

Region-Specific Deployments

If deploying to multiple regions, ensure proper variable file management and state isolation.

State Management

Each component has its own state file. This prevents state file conflicts and enables parallel development.

Module Reusability

Before creating new resources, check if a module exists in modules/ directory that can be reused.


🤝 Support

For questions or issues:

  • Check existing Terraform configurations for patterns
  • Review GitHub Actions workflow logs
  • Consult with the DevOps team for architecture decisions
  • Reference AWS documentation for service-specific details


Last Updated: [Current Date]
Maintained by: DevOps Team