UB-Global-US Terraform Infrastructure
Overview
The ub-global-us environment is a production-grade AWS infrastructure deployed and managed entirely through Terraform. This environment hosts the Unibeam platform services across multiple availability zones with high availability and security configurations.
Deployment Method
All infrastructure operations are executed via GitHub Actions workflows. Manual Terraform commands are not recommended for production changes.
Directory Structure
The repository follows a modular approach with clear separation of concerns:
managed-services/
├── components/             # Individual infrastructure components
├── modules/                # Reusable Terraform modules
├── providers.tf            # AWS provider configuration
├── variables.tf            # Input variable definitions
├── locals.tf               # Local variables and calculations
├── outputs.tf              # Output values
├── data.tf                 # Data sources (VPC, subnets, etc.)
├── remote-state.tf         # S3 backend configuration
└── global-variables.tfvars # Global variable values
Components Directory
Each component represents a distinct infrastructure layer:
components/
├── eks/              # EKS cluster configuration
├── worker-nodes/     # EKS worker node groups
├── mongodb/          # MongoDB Atlas clusters
├── redislabs/        # Redis Cloud instances
├── s3/               # S3 buckets with lifecycle policies
├── route53/          # Private hosted zones and records
├── firewall/         # Network firewall rules
├── security-groups/  # Security group definitions
│   ├── dmz/          # DMZ zone security groups
│   ├── domain/       # Domain-specific groups
│   └── whitelist/    # IP whitelist configurations
├── iam/              # IAM roles and policies
└── monitoring/       # CloudWatch and monitoring
Modules Directory
Reusable modules that enforce consistency:
modules/
├── eks-cluster/      # EKS cluster module
├── iam-roles/        # IAM role templates
├── mongodb-atlas/    # MongoDB Atlas provisioning
├── redis-cloud/      # Redis Cloud provisioning
├── s3-bucket/        # S3 bucket with standards
├── security-group/   # Security group module
└── route53-zone/     # DNS zone module
Core Components
EKS Cluster
Location: components/eks/
The EKS cluster forms the foundation of the Unibeam platform.
Key Features:
- Multi-AZ deployment for high availability
- EKS addons (VPC CNI, CoreDNS, kube-proxy)
- Pod Identity for AWS service access
- Integration with AWS load balancers (ALB/NLB)
Files:
- main.tf - Cluster definition
- addons.tf - EKS addons configuration
- backend.tf - State management
Pod Identity
All Unibeam components use Pod Identity for accessing AWS services (Secrets Manager, S3, MongoDB Atlas). Avoid using IRSA for new services.
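For illustration, a minimal sketch of a Pod Identity association, assuming hypothetical cluster, namespace, and service account names (the real definitions live in components/eks/ and components/iam/):
# Hypothetical sketch: bind an IAM role to a Kubernetes service account
# through EKS Pod Identity. All names are illustrative, not from this repo.
resource "aws_eks_pod_identity_association" "app" {
  cluster_name    = "ub-global-us"        # hypothetical cluster name
  namespace       = "unibeam"             # hypothetical namespace
  service_account = "app-service"         # hypothetical service account
  role_arn        = aws_iam_role.app.arn  # role trusting pods.eks.amazonaws.com
}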
Worker Nodes
Location: components/worker-nodes/
Managed node groups for running workloads.
Configuration:
- Instance types optimized for workload
- Auto-scaling groups
- Spot and On-Demand instance mix (see the sketch below)
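A managed node group carries a single capacity_type, so the Spot/On-Demand mix is usually expressed as separate groups. A minimal sketch of the Spot half, with hypothetical names and sizes:
# Hypothetical sketch: the Spot half of a Spot/On-Demand mix.
# An equivalent group with capacity_type = "ON_DEMAND" carries the baseline.
resource "aws_eks_node_group" "spot" {
  cluster_name    = "ub-global-us"   # hypothetical
  node_group_name = "workload-spot"  # hypothetical
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.private_subnet_ids
  capacity_type   = "SPOT"
  instance_types  = ["m6i.xlarge", "m5.xlarge"]  # multiple types improve Spot availability

  scaling_config {
    desired_size = 2
    max_size     = 10
    min_size     = 0
  }
}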
IAM Roles
Location: components/iam/ and modules/iam-roles/
Key Roles:
- EKS Service Roles: Cluster operation permissions
- Pod Identity Roles: Service-specific AWS access
  - Secrets Manager access
  - S3 bucket operations
  - MongoDB Atlas connections
- Node Group Roles: Worker node permissions
Least Privilege
IAM policies follow least privilege principles. Reference existing roles before creating new ones.
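A sketch of the trust policy a Pod Identity role typically carries; the role name is hypothetical:
# Hypothetical sketch: Pod Identity roles trust the EKS Pod Identity
# service principal rather than an OIDC provider (as IRSA does).
data "aws_iam_policy_document" "pod_identity_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole", "sts:TagSession"]

    principals {
      type        = "Service"
      identifiers = ["pods.eks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "app" {
  name               = "unibeam-app-pod-identity"  # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.pod_identity_trust.json
}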
MongoDB Atlas
Location: components/mongodb/
Production MongoDB clusters with private connectivity.
Configuration:
- Connection: AWS PrivateLink or VPC Peering (see the PrivateLink sketch below)
- Authentication: Pod Identity with IAM roles
- Environments:
  - Production clusters (high availability)
  - Development clusters
Key Files:
- cluster.tf - MongoDB cluster definition
- privatelink.tf - Private endpoint configuration
- variables.tf - Cluster-specific variables
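A minimal sketch of the PrivateLink wiring between the VPC and Atlas, assuming the mongodbatlas provider and hypothetical variable names:
# Hypothetical sketch: Atlas-side endpoint plus the AWS-side interface
# endpoint, linked together. All variables are illustrative.
resource "mongodbatlas_privatelink_endpoint" "this" {
  project_id    = var.atlas_project_id
  provider_name = "AWS"
  region        = "US_EAST_1"
}

resource "aws_vpc_endpoint" "atlas" {
  vpc_id             = var.vpc_id
  service_name       = mongodbatlas_privatelink_endpoint.this.endpoint_service_name
  vpc_endpoint_type  = "Interface"
  subnet_ids         = var.private_subnet_ids
  security_group_ids = [var.mongodb_sg_id]
}

resource "mongodbatlas_privatelink_endpoint_service" "this" {
  project_id          = mongodbatlas_privatelink_endpoint.this.project_id
  private_link_id     = mongodbatlas_privatelink_endpoint.this.private_link_id
  endpoint_service_id = aws_vpc_endpoint.atlas.id
  provider_name       = "AWS"
}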
Redis Cloud
Location: components/redislabs/
Redis caching layer with secure connectivity.
Configuration:
- Connection: VPC Peering
- High Availability: Multi-AZ replication
- Access: Through security groups and peering
- ACLs: Role-based access control; usernames and passwords are stored in Secrets Manager and fetched via Pod Identity (see the sketch below)
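A sketch of how an ACL user's credentials might be generated and stored in Secrets Manager for workloads to fetch at runtime via Pod Identity; the secret path and KMS variable are hypothetical:
# Hypothetical sketch: generate a password and store the credential pair
# in Secrets Manager; pods read it at runtime through Pod Identity.
resource "random_password" "redis_app_user" {
  length  = 32
  special = false
}

resource "aws_secretsmanager_secret" "redis_app_user" {
  name       = "ub-global-us/redis/app-user"  # hypothetical secret path
  kms_key_id = var.kms_key_arn                # hypothetical variable
}

resource "aws_secretsmanager_secret_version" "redis_app_user" {
  secret_id = aws_secretsmanager_secret.redis_app_user.id
  secret_string = jsonencode({
    username = "app-user"
    password = random_password.redis_app_user.result
  })
}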
S3 Buckets
Location: components/s3/
Multiple S3 configurations for various purposes.
Features:
- Lifecycle policies for cost optimization
- Cross-region replication (where applicable)
- Encryption at rest (KMS)
- Versioning enabled
Common Buckets:
- Application logs (Loki chunks)
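A sketch of a lifecycle configuration in the spirit of the Loki chunks bucket; the bucket name and retention windows are illustrative:
# Hypothetical sketch: tier log chunks to Standard-IA after 30 days,
# expire them after 90. Bucket name and windows are illustrative.
resource "aws_s3_bucket" "loki_chunks" {
  bucket = "ub-global-us-loki-chunks"  # hypothetical name
}

resource "aws_s3_bucket_lifecycle_configuration" "loki_chunks" {
  bucket = aws_s3_bucket.loki_chunks.id

  rule {
    id     = "expire-chunks"
    status = "Enabled"
    filter {}

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 90
    }
  }
}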
Route53
Location: components/route53/
Private hosted zones for internal DNS resolution.
Configuration:
- Private zones for VPC
- Service discovery records
- MongoDB and Redis endpoint aliases
- Internal load balancer DNS
- Private DNS zones are associated with the VPCs for internal resolution (the workload VPC and the DMZ VPC, "vpn"); see the sketch below
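A sketch of a private zone associated with both VPCs; the zone name and variables are hypothetical:
# Hypothetical sketch: a private hosted zone resolvable from both the
# workload VPC and the DMZ VPC. Zone name and variables are illustrative.
resource "aws_route53_zone" "internal" {
  name = "internal.ub-global-us.local"  # hypothetical zone name

  vpc {
    vpc_id = var.workload_vpc_id
  }

  vpc {
    vpc_id = var.dmz_vpc_id
  }
}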
Firewall
Location: components/firewall/
Network firewall policies and rules.
Zones:
- Workload Zone: External-facing traffic
  - HTTPS (443)
  - TCP 9506 (custom services)
  - Suricata rules inspect traffic (message headers); see the sketch below
- DMZ/SEC Zone: Internal traffic filtering and egress control
  - Restrictive rules for sensitive services
  - Logging and monitoring integration
  - Domain whitelisting for known services
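A sketch of a stateful rule group carrying Suricata-format rules for the workload zone's 443/9506 traffic; the name, sids, and capacity are illustrative:
# Hypothetical sketch: Suricata-format rules for HTTPS and TCP 9506
# egress from the workload zone. Names, sids, and capacity are illustrative.
resource "aws_networkfirewall_rule_group" "workload_egress" {
  name     = "workload-egress"
  type     = "STATEFUL"
  capacity = 100

  rule_group {
    rules_source {
      rules_string = <<-EOT
        pass tcp $HOME_NET any -> $EXTERNAL_NET 443 (msg:"allow HTTPS egress"; sid:100001; rev:1;)
        pass tcp $HOME_NET any -> $EXTERNAL_NET 9506 (msg:"allow custom service"; sid:100002; rev:1;)
      EOT
    }
  }
}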
Security Groups
Location: components/security-groups/
Layered security group architecture.
Types:
- DMZ/SEC Security Groups: External ingress/egress rules (WireGuard VPN access)
- Whitelist Security Groups: IP-based access control (see the sketch below)
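A sketch of a whitelist group using the granular rule resources; the CIDR is a documentation placeholder, not a real allow-list entry:
# Hypothetical sketch: IP-based allow-list as a security group plus a
# granular ingress rule. The CIDR is a placeholder documentation range.
resource "aws_security_group" "whitelist" {
  name_prefix = "ub-whitelist-"
  vpc_id      = var.vpc_id
}

resource "aws_vpc_security_group_ingress_rule" "office_https" {
  security_group_id = aws_security_group.whitelist.id
  cidr_ipv4         = "203.0.113.0/24"  # placeholder CIDR
  from_port         = 443
  to_port           = 443
  ip_protocol       = "tcp"
}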
Terraform Configuration Patterns
File Organization Standard
Each component follows this structure:
component/
├── main.tf              # Primary resource definitions
├── variables.tf         # Input variables
├── locals.tf            # Local values
├── outputs.tf           # Output values
├── data.tf              # Data sources
├── backend.tf           # Remote state backend
└── {region}-vars.tfvars # Region-specific values
Backend Configuration
State Management:
- S3 backend for remote state
- Separate state files per component
# backend.tf
terraform {
  backend "s3" {
    bucket         = "terraform-state-bucket"
    key            = "ub-global-us/component-name/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
Variable Management
Global Variables: global-variables.tfvars
- Environment name
- VPC ID and subnet IDs
- Common tags
- AWS region
- EKS Workers configurations
- Redis and MongoDB settings
- Roles/Permissions for Pod Identity
Component Variables: defined in each component's variables.tf
- Resource-specific configurations
- Size and capacity settings
- Feature flags
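For orientation, a sketch of what global-variables.tfvars might contain; every value below is a placeholder, not the environment's real configuration:
# global-variables.tfvars (all values are placeholders)
environment = "ub-global-us"
aws_region  = "us-east-1"
vpc_id      = "vpc-0123456789abcdef0"

common_tags = {
  Environment = "production"
  ManagedBy   = "terraform"
}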
Data Sources
Common Data Sources:
# data.tf
data "aws_vpc" "main" {
  id = var.vpc_id
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }

  tags = {
    Type = "private"
  }
}
Deployment Workflow
GitHub Actions Integration
On the AWS side, two IAM roles back the pipeline: GithubActions, used to execute operations with elevated permissions, and GitHub-Actions-TF, used for Terraform state updates. Each role's trust relationship allows GitHub Actions to assume it, scoped by branch or environment.
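A sketch of what such a trust relationship typically looks like for GitHub's OIDC provider, with branch-scoped assumption; the account ID and repository are placeholders:
# Hypothetical sketch: allow GitHub Actions to assume a role via OIDC,
# restricted to the main branch. Account ID and repo are placeholders.
data "aws_iam_policy_document" "github_actions_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = ["arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:unibeam/managed-services:ref:refs/heads/main"]  # placeholder repo
    }
  }
}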
All Terraform operations execute through GitHub Actions:
- Plan Phase:
  - Triggered on pull request
  - Runs terraform plan
  - Posts plan output as PR comment
- Apply Phase:
  - Triggered on merge to main
  - Runs terraform apply
  - Updates infrastructure
Manual Operations
Never run Terraform commands manually in production. All changes must go through the CI/CD pipeline.
Deployment Steps
1. Create Feature Branch
2. Make Changes
   - Modify Terraform files
   - Update variables as needed
   - Follow existing patterns
3. Submit Pull Request
   - GitHub Actions runs plan
   - Review plan output
   - Get approvals
4. Merge to Main
   - GitHub Actions runs apply
   - Infrastructure updated
   - State files updated in S3
Security Considerations
IAM Best Practices
- Use Pod Identity for AWS service access
- Follow least privilege principle
- Audit IAM policies regularly
- Avoid using root credentials
- Don't hardcode secrets
Network Security
Layered Approach:
1. VPC Level: Public/Private subnet segregation
2. Firewall Level: Network firewall rules
3. Security Groups: Instance-level filtering
4. Application Level: Service mesh policies (if applicable)
Secrets Management
- AWS Secrets Manager: Application secrets
- KMS Encryption: Data at rest
- Pod Identity: Secure secret access
SCPs Applied
Organization-level Service Control Policies (SCPs) enforce additional security boundaries. Consult with security team before major changes.
Monitoring Setup
CloudWatch Integration
Location: components/monitoring/
Configured Monitoring:
- EKS cluster metrics
- Node group health
- Application logs forwarding
- Custom metric dashboards
Kubernetes Monitoring
Namespace: monitoring
- Kube-Prometheus-Stack: Metrics collection
- Loki: Log aggregation (namespace: loki)
- Promtail: Log shipping (namespace: promtail)
Common Operations
Adding a New Service
1. Create IAM Role (if AWS access needed)
2. Add Security Group (if unique networking needed)
3. Update Route53 (for service discovery)
4. Deploy via GitHub Actions
Modifying Existing Resources
- Locate component in the components/ directory
- Update relevant .tf files
- Update variables if needed
- Submit PR for review
- Merge to trigger apply
Scaling Resources
EKS Node Groups:
# components/worker-nodes/main.tf
scaling_config {
  desired_size = 5 # Update this
  max_size     = 10
  min_size     = 3
}
MongoDB Cluster:
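Atlas cluster capacity is changed on the cluster definition itself. A minimal sketch assuming the mongodbatlas_advanced_cluster resource; names and sizes are illustrative, and the exact block syntax depends on the provider version in use:
# components/mongodb/cluster.tf (hypothetical sketch; names illustrative)
resource "mongodbatlas_advanced_cluster" "main" {
  project_id   = var.atlas_project_id
  name         = "ub-global-us"
  cluster_type = "REPLICASET"

  replication_specs {
    region_configs {
      provider_name = "AWS"
      region_name   = "US_EAST_1"
      priority      = 7

      electable_specs {
        instance_size = "M30"  # update the tier here to scale
        node_count    = 3
      }
    }
  }
}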
Additional Resources
Best Practices
- Always reference existing configurations before creating new ones
- Maintain consistency with naming conventions
- Document significant changes in PR descriptions
- Test in dev environment before production
- Use modules for reusable infrastructure patterns
Important Notes
Region-Specific Deployments
If deploying to multiple regions, ensure proper variable file management and state isolation.
State Management
Each component has its own state file. This prevents state file conflicts and enables parallel development.
Module Reusability
Before creating new resources, check if a module exists in modules/ directory that can be reused.
Support
For questions or issues:
- Check existing Terraform configurations for patterns
- Review GitHub Actions workflow logs
- Consult with DevOps team for architecture decisions
- Reference AWS documentation for service-specific details
Last Updated: [Current Date]
Maintained by: DevOps Team