The AWS CLI commands in this section are provided as examples. Your environment may require different configurations. Refer to the official AWS documentation for detailed instructions.

If using external managed services in production (RDS, DocumentDB, ElastiCache, Amazon MQ): start provisioning these in parallel with EKS cluster creation (Step 1). Managed databases typically take 10–20 minutes to become available, and they must be ready before the Helm install in Step 4. See the External AWS Services section for configuration details.
Create an EKS cluster that meets the cluster specifications. The example below uses eksctl:
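The cluster configuration below is a representative sketch: the instance type, node count, and volume size follow the sizing notes in this section, but the cluster name, region, Kubernetes version, and the `wellKnownPolicies` IAM shortcut for the EBS CSI driver are assumptions to adapt to your environment.

```yaml
# cluster.yaml — create with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: jitera            # placeholder cluster name
  region: ap-northeast-1  # cluster region (S3 buckets must be in ap-northeast-1 regardless)
  version: "1.30"

iam:
  withOIDC: true          # required so the EBS CSI driver addon can assume an IAM role

managedNodeGroups:
  - name: jitera-nodes
    instanceType: m6i.xlarge
    desiredCapacity: 3
    volumeSize: 200       # see the Amazon Linux 2023 disk-space warning below

addons:
  - name: vpc-cni         # verified CNI (see the EKS networking note in Step 6)
  - name: coredns
  - name: kube-proxy
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true  # grants EBS provisioning permissions via IRSA
```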
This example provisions 3x m6i.xlarge nodes (12 vCPU, 48 GB total), which meets the evaluation minimum. For production deployments, use m6i.2xlarge instances or increase node count to meet the production minimum (16 cores, 64 GB). See the Sizing Guide for tier recommendations.
The aws-ebs-csi-driver addon is required for persistent volume provisioning. Without it, database pods will fail to start with PVC binding errors.
On EKS with Amazon Linux 2023 (default for EKS 1.30+), ensure volumeSize is at least 200 GB per node. Jitera container images total approximately 30–50 GB of compressed layers, and containerd’s overlayfs storage multiplies this during extraction. A 100 GB volume will run out of space during initial image pulls, causing "no space left on device" errors. If nodes report this error, increase the volume size and replace the nodes.
This example uses eksctl’s default VPC, which automatically provisions a NAT Gateway for outbound connectivity from private subnets. If you use a custom VPC, you must configure a NAT Gateway yourself — without it, worker nodes cannot reach AWS services (ECR, S3, STS, etc.), causing image pulls and addon installs to fail during cluster bootstrap. See VPC requirements for Amazon EKS for details.
Create 4 S3 buckets with the access levels listed in Storage Configuration, and configure CORS on the default and public buckets.
All S3 buckets must be created in ap-northeast-1 (Tokyo). The application generates presigned URLs with a hardcoded ap-northeast-1 region for direct uploads. Buckets in other regions will cause jitera init (CLI project import) to fail with a 301 Moved Permanently redirect. S3 cross-region access works — the EKS cluster and other AWS services can be in a different region.
The public bucket must have public-read access enabled (S3 bucket policy or ACL). All other buckets should remain private.
CORS AllowedOrigins must include both your main domain and chat domain (e.g., https://app.example.com and https://chat.example.com). Missing origins will cause file upload failures in the application.
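As a sketch, the buckets and CORS rules above could be created with the AWS CLI as follows. The bucket names match the examples used throughout this section, and the origins are placeholders for your actual domains:

```bash
# Create the 4 buckets in ap-northeast-1 (names must be globally unique)
for bucket in jitera-storage-default jitera-storage-public \
              jitera-storage-export jitera-storage-ultron; do
  aws s3api create-bucket --bucket "$bucket" --region ap-northeast-1 \
    --create-bucket-configuration LocationConstraint=ap-northeast-1
done

# CORS for the default and public buckets — include BOTH domains
cat > cors.json <<'EOF'
{
  "CORSRules": [{
    "AllowedOrigins": ["https://app.example.com", "https://chat.example.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedHeaders": ["*"]
  }]
}
EOF
aws s3api put-bucket-cors --bucket jitera-storage-default --cors-configuration file://cors.json
aws s3api put-bucket-cors --bucket jitera-storage-public  --cors-configuration file://cors.json
```

Note that enabling public-read on the public bucket additionally requires relaxing Block Public Access and attaching a bucket policy; see the Amazon S3 documentation for those steps.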
The following is an example — replace the region, bucket names, and credentials with your actual values:
```yaml
storage:
  provider: S3
  secret:
    aws:
      AWS_ACCESS_KEY_ID: "<YOUR_ACCESS_KEY>"
      AWS_SECRET_ACCESS_KEY: "<YOUR_SECRET_KEY>"
      AWS_REGION: "ap-northeast-1"                       # must be ap-northeast-1 (see Warning above)
      AWS_BUCKET: "jitera-storage-default"               # your default bucket name
      AWS_PUBLIC_BUCKET: "jitera-storage-public"         # your public bucket name
      AWS_EXPORT_PROJECT_BUCKET: "jitera-storage-export" # your export bucket name
      AWS_ULTRON_BUCKET: "jitera-storage-ultron"         # your AI service bucket name
```
Create an IAM user with the following S3 permissions scoped to the 4 Jitera buckets. The following is an example — replace the bucket names with the ones you created in Step 2:
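A sketch of such a policy follows. The action list is an assumption covering typical object read/write and bucket listing; confirm the exact permissions Jitera needs against the Storage Configuration reference:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "JiteraS3Access",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::jitera-storage-default",
        "arn:aws:s3:::jitera-storage-default/*",
        "arn:aws:s3:::jitera-storage-public",
        "arn:aws:s3:::jitera-storage-public/*",
        "arn:aws:s3:::jitera-storage-export",
        "arn:aws:s3:::jitera-storage-export/*",
        "arn:aws:s3:::jitera-storage-ultron",
        "arn:aws:s3:::jitera-storage-ultron/*"
      ]
    }
  ]
}
```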
Generate an access key for this IAM user — the Helm chart requires static credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). See AWS IAM documentation for policy creation and access key management.
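Assuming the policy above is saved as s3-policy.json, a minimal CLI sequence (the user name is a placeholder):

```bash
aws iam create-user --user-name jitera-s3

aws iam put-user-policy --user-name jitera-s3 \
  --policy-name jitera-s3-access \
  --policy-document file://s3-policy.json

aws iam create-access-key --user-name jitera-s3  # returns the static credentials
```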
ACM provides managed certificates for use with AWS load balancers. For ACM, use a Subject Alternative Name (SAN) or wildcard certificate. See AWS Certificate Manager documentation for detailed instructions.
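For example, a DNS-validated certificate covering both hostnames could be requested as follows (the domains are placeholders; ACM certificates are regional, so request it in the same region as the load balancer):

```bash
aws acm request-certificate \
  --domain-name app.yourdomain.com \
  --subject-alternative-names chat.yourdomain.com \
  --validation-method DNS \
  --region ap-northeast-1
# Then create the returned CNAME validation records in your DNS zone.
```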
Set up Amazon SES for email delivery. For detailed and up-to-date instructions, see the Amazon SES documentation.
New SES accounts start in sandbox mode, which only allows sending to verified email addresses. In sandbox mode, the Org Owner invitation email will silently fail with 554 Message rejected: Email address is not verified — the error appears only in Rails logs, not in the UI. For testing/pilot environments, verify each recipient address before sending invitations:
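For example (the address is a placeholder):

```bash
aws ses verify-email-identity \
  --email-address owner@example.com \
  --region ap-northeast-1
# The recipient must click the link in the verification email
# before invitations will deliver.
```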
Example: Verify domain and create SMTP credentials
```bash
# Verify domain identity
aws ses verify-domain-identity \
  --domain yourdomain.com \
  --region ap-northeast-1

# Get verification DNS records
aws ses get-identity-verification-attributes \
  --identities yourdomain.com \
  --region ap-northeast-1
```
Add the TXT record to your DNS configuration.
```bash
# Create IAM user for SMTP
aws iam create-user --user-name jitera-ses-smtp

# Attach SES sending policy
aws iam attach-user-policy \
  --user-name jitera-ses-smtp \
  --policy-arn arn:aws:iam::aws:policy/AmazonSESFullAccess

# Create access key
aws iam create-access-key --user-name jitera-ses-smtp
```
The SMTP password is not the IAM secret access key. You must derive it using the SES-specific signing algorithm. Follow the official SES SMTP credentials conversion procedure, which includes a reference implementation in Python.
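For reference, a condensed version of AWS's published derivation is shown below; verify it against the current SES documentation before relying on it:

```python
import base64
import hashlib
import hmac

def sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def ses_smtp_password(secret_access_key: str, region: str) -> str:
    # Fixed inputs defined by the SES SMTP credential algorithm
    date, service, terminal = "11111111", "ses", "aws4_request"
    message, version = "SendRawEmail", 0x04
    signature = sign(("AWS4" + secret_access_key).encode("utf-8"), date)
    for part in (region, service, terminal, message):
        signature = sign(signature, part)
    return base64.b64encode(bytes([version]) + signature).decode("utf-8")

print(ses_smtp_password("<IAM_SECRET_ACCESS_KEY>", "ap-northeast-1"))
```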
The following is an example — replace the SMTP endpoint, credentials, and sender address with your actual values:
```yaml
mailer:
  smtp_settings:
    address: email-smtp.ap-northeast-1.amazonaws.com # SES endpoint for your region
    user_name: "<SES_SMTP_USERNAME>"
    password: "<SES_SMTP_PASSWORD>"
    default_from_email: "noreply@yourdomain.com"
```
The secret name (jitera-registry) must match the imagePullSecrets entry in your Helm values file. A mismatch will cause image pull failures across all pods.
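If you create the pull secret manually rather than through registryCredentials in the values file, a sketch (the namespace and credential placeholders are assumptions; the credentials are provided by Jitera):

```bash
kubectl create namespace jitera

kubectl create secret docker-registry jitera-registry \
  --namespace jitera \
  --docker-server="<REGISTRY_URL>" \
  --docker-username="<REGISTRY_USERNAME>" \
  --docker-password="<REGISTRY_PASSWORD>" \
  --docker-email="<REGISTRY_EMAIL>"
```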
Create a values file for your AWS deployment. All placeholder values (<...>) must be replaced with your actual configuration. Parameters not listed here use sensible defaults — see Helm Values Reference for the full list.
Load balancer and inbound IP restriction. Jitera provisions a Classic Load Balancer (CLB) by default via the Kong ingress controller. ALB is not supported — the Kong Ingress Controller does not offer ALB-mode ingress. NLB is not supported due to a Jitera-side limitation. For Layer-4 IP allow-listing, pre-create a Security Group in your infrastructure layer (console / CLI / IaC) and attach it to the Kong Service via the service.beta.kubernetes.io/aws-load-balancer-security-groups annotation. This keeps SG lifecycle decoupled from the Helm chart — the SG is not deleted by helm uninstall. A CLI sketch for creating the SG follows the notes below.
Do not use kong.proxy.loadBalancerSourceRanges — it works, but couples infrastructure-layer rules to application-layer Helm values.
Do not use service.beta.kubernetes.io/aws-load-balancer-source-ranges — silently ignored on the in-tree CLB.
The pre-created SG must include your NAT Gateway EIP(s) so pods hitting the public app domain (hairpin) aren’t dropped. See Inbound Access.
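A sketch of pre-creating such a Security Group. The VPC ID, ports, and CIDRs are placeholders; 203.0.113.0/24 and 198.51.100.7/32 stand in for your office/VPN range and NAT Gateway EIP:

```bash
aws ec2 create-security-group \
  --group-name jitera-inbound-allowlist \
  --description "Inbound allow-list for the Jitera CLB" \
  --vpc-id <VPC_ID>

# Office/VPN range on HTTPS
aws ec2 authorize-security-group-ingress \
  --group-id <SECURITY_GROUP_ID> \
  --protocol tcp --port 443 --cidr 203.0.113.0/24

# NAT Gateway EIP(s) so pod traffic to the public app domain (hairpin) is not dropped
aws ec2 authorize-security-group-ingress \
  --group-id <SECURITY_GROUP_ID> \
  --protocol tcp --port 443 --cidr 198.51.100.7/32
```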
AWS WAF. Neither CLB nor NLB natively integrates with AWS WAF. If you require WAF, place CloudFront + AWS WAF in front of the CLB (also gains edge caching and DDoS protection), or use a third-party CDN/WAF such as Cloudflare.
Trusted proxy. Narrow kong.env.trusted_ips (chart default 0.0.0.0/0,::/0) and Jitera’s application-level trusted proxies to your LB CIDR in production. See Trusted Proxy.
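For example, assuming your load balancer reaches the cluster from the VPC CIDR 10.0.0.0/16 (a placeholder):

```yaml
kong:
  env:
    trusted_ips: "10.0.0.0/16"  # replaces the permissive 0.0.0.0/0,::/0 default
```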
EKS networking. Jitera is verified with the Amazon VPC CNI add-on (the EKS default, already listed under addons: in the cluster config above). Pod IPs come from VPC subnets, so the inbound Security Group attached via the aws-load-balancer-security-groups annotation applies at the Service (CLB) frontend — independent of pod-level networking.
```yaml
# values-aws.yaml
# ============================================================
# Core Configuration — all parameters below must be overridden
# ============================================================

# --- Registry credentials (provided by Jitera) ---
registryCredentials:
  server: "<REGISTRY_URL>"
  username: "<REGISTRY_USERNAME>"
  password: "<REGISTRY_PASSWORD>"
  email: "<REGISTRY_EMAIL>"

# --- Domain ---
ingress:
  domainName: "app.yourdomain.com"
  chatDomainName: "chat.yourdomain.com"

# --- TLS — CLB with ACM certificate + pre-created Security Group ---
kong:
  proxy:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "kong-proxy-tls"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<ACM_CERTIFICATE_ARN>"
      # Pre-created Security Group that carries your inbound allow-list rules.
      # Create the SG in your infrastructure layer (console / CLI / IaC);
      # include NAT Gateway EIP(s) for pod hairpin. See the Warning above.
      service.beta.kubernetes.io/aws-load-balancer-security-groups: "<SECURITY_GROUP_ID>"
    tls:
      overrideServiceTargetPort: 8000

# --- JWT ---
jwt:
  secret: "<GENERATE_WITH_pwgen_64_1>"

# --- Internal secrets (generate unique values for each) ---
automation:
  env:
    PUBLIC_OPEN_AI_INTERNAL_SECRET: "<GENERATE_RANDOM_SECRET>"
ultron:
  secret:
    PUBLIC_OPEN_AI_INTERNAL_SECRET: "<SAME_VALUE_AS_ABOVE>"
  env:
    STORAGE_DISK: "s3" # Ultron storage backend; pairs with the S3 settings below
credentials:
  hasura:
    HASURA_GRAPHQL_ADMIN_SECRET: "<GENERATE_RANDOM_SECRET>"
  boost:
    JITERA_BOOST_API_KEY_MAIN: "<GENERATE_WITH_pwgen_32_1>"
    JITERA_BOOST_AUTO_API_KEY: "<SAME_AS_HASURA_ADMIN_SECRET>"
    JITERA_BOOST_OPENAI_KEY_LITELLM: "<GENERATE_WITH_pwgen_32_1>"
    JITERA_BOOST_ROLLBAR_ACCESS_TOKEN: "" # Set Rollbar token or leave empty
  html_conversion:
    BEARER_TOKEN: "<GENERATE_WITH_pwgen_32_1>"

# --- Database credentials (in-cluster) ---
postgresql:
  postgresql:
    username: "<DB_USERNAME>"
    password: "<DB_PASSWORD>"
    database: "<DB_NAME>"
    postgresPassword: "<POSTGRES_SUPERUSER_PASSWORD>"
pgvector:
  postgresql:
    username: "<PGVECTOR_USERNAME>"
    password: "<PGVECTOR_PASSWORD>"
    database: "<PGVECTOR_DB_NAME>"
mongodb:
  auth:
    databases: ["<MONGO_DB_NAME>"]
    usernames: ["<MONGO_USERNAME>"]
    passwords: ["<MONGO_PASSWORD>"]
rabbitmq:
  auth:
    password: "<RABBITMQ_PASSWORD>"
    erlangCookie: "<GENERATE_RANDOM_STRING>"

# --- Storage — AWS S3 ---
# (ultron.env.STORAGE_DISK is set under the ultron block above)
storage:
  provider: S3
  secret:
    aws:
      AWS_ACCESS_KEY_ID: "<AWS_ACCESS_KEY_ID>"
      AWS_SECRET_ACCESS_KEY: "<AWS_SECRET_ACCESS_KEY>"
      AWS_REGION: "ap-northeast-1" # must be ap-northeast-1 — see S3 bucket Warning
      AWS_BUCKET: "jitera-storage-default"
      AWS_PUBLIC_BUCKET: "jitera-storage-public"
      AWS_EXPORT_PROJECT_BUCKET: "jitera-storage-export"
      AWS_ULTRON_BUCKET: "jitera-storage-ultron"
document_converter:
  env:
    USE_AZURE: "false"

# --- Email — AWS SES ---
mailer:
  smtp_settings:
    address: "email-smtp.ap-northeast-1.amazonaws.com"
    user_name: "<SES_SMTP_USERNAME>"
    password: "<SES_SMTP_PASSWORD>"
    default_from_email: "noreply@yourdomain.com"

# --- Company ---
company:
  name: "<YOUR_COMPANY_NAME>"
  brand_name: "<YOUR_BRAND_NAME>"
  domain: "@yourdomain.com"
  language: "en"

# --- AI / LLM Primary Provider (choose one, see tabs below) ---
# See the AI provider tabs below this code block.

# ============================================================
# Optional Configuration
# ============================================================
# The following use sensible defaults. Override only if needed.
# See: /self-hosted-v26.02.16/reference/helm-values
#
# Integrations: credentials.github.*, credentials.gitlab.*, credentials.figma.*
# Sign-up control: automation.env.SECURED_SIGN_UP, frontend.env.REACT_APP_SECURED_SIGN_UP
# StorageClass: postgresql.persistence.storageClassName, mongodb.persistence.storageClass, etc.
# External databases: externalPostgres.*, externalRedis.*, externalMongodb.*, externalRabbitmq.*
# Monitoring: monitoring.*
# Error monitoring: credentials.rollbar.*
# Monitoring domains: ingress.grafana.domain, ingress.prometheus.domain
```
If your environment restricts Docker Hub access, you can redirect all image pulls to the Jitera ACR registry. See Container Registry for the configuration.
Choose one primary provider and add the corresponding configuration to your values file. The AI_MODE setting determines Ultron’s primary LLM routing. Additional providers (AWS Bedrock/Claude, Anthropic Direct API, Google Gemini, vLLM) can be configured alongside — see AI Configuration for details.
Azure OpenAI
OpenAI
```yaml
openai:
  AI_MODE: azure
  secretKeys:
    azure:
      AZURE_OPENAI_KEYS: '["<AZURE_OPENAI_KEY>"]'
      AZURE_OPENAI_INSTANCE_NAMES: '["<AZURE_OPENAI_INSTANCE_NAME>"]'
      AZURE_OPENAI_VERSION: "2024-10-21"
      AZURE_OPENAI_DEVELOPMENT_NAME: "gpt-4.1"
      AZURE_OPENAI_EMBEDDING_DEVELOPMENT_NAME: "text-embedding-ada-002"
      AZURE_OPENAI_VISION_DEVELOPMENT_NAME: "gpt-4o"
      AZURE_OPENAI_GPT_4O_DEVELOPMENT_NAME: "gpt-4o"
      AZURE_OPENAI_GPT_4O_MINI_DEVELOPMENT_NAME: "gpt-4o-mini"
    openai:
      OPENAI_MAIN_MODEL_NAME: "gpt-4.1"
credentials:
  boost:
    # Models you have deployed — set the full config string
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41_MINI: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1-mini,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41_NANO: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1-nano,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_4O: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4o,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_4O_MINI: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4o-mini,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_ADA: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/text-embedding-ada-002,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    # Models NOT deployed — must override with empty string to prevent startup crash
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O1: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O3: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O3_MINI: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O4_MINI: ""
```
See AI Configuration — Azure OpenAI for the full setup including custom subdomain requirements, all Boost config keys, model deployment, and SuperAdmin registration.
The Helm chart’s monitoring stack and in-cluster databases use PersistentVolumeClaims that rely on the cluster’s default StorageClass. On EKS, create a gp3 StorageClass and mark it as default:
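A typical manifest follows (gp2 is the pre-existing EKS default; demote it so only one class is marked default):

```yaml
# gp3-storageclass.yaml — apply with: kubectl apply -f gp3-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com   # served by the aws-ebs-csi-driver addon from Step 1
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

```bash
# Remove the default marker from the pre-existing gp2 class
kubectl patch storageclass gp2 -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```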
A default StorageClass is required even when databases are externalized — the monitoring stack (Grafana, Prometheus, Loki, Tempo) still needs persistent volumes. Without a default StorageClass, pods will remain in Pending state with unbound immediate PersistentVolumeClaims.
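A sketch of the install command, assuming the release name jitera, the namespace jitera used throughout this guide, and a chart reference of jitera/jitera (substitute your actual chart source):

```bash
helm install jitera jitera/jitera \
  --namespace jitera \
  --create-namespace \
  --values values-aws.yaml \
  --timeout 15m
```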
The --timeout 15m flag is important for initial installation. Database initialization and migrations can take several minutes. If the installation times out, check pod status with kubectl get pods -n jitera before retrying.
Create an RDS instance running PostgreSQL 14.x. See External Services for the validated version. Jitera requires the following PostgreSQL extensions: btree_gist, citext, cube, pg_stat_statements, pg_trgm, pgcrypto, uuid-ossp, vector. These are installed automatically during deployment. See Amazon RDS PostgreSQL extensions for details.
Create a separate RDS instance running PostgreSQL 16.x. See External Services for the validated version. Jitera requires the same set of PostgreSQL extensions as the primary database: btree_gist, citext, cube, pg_stat_statements, pg_trgm, pgcrypto, uuid-ossp, vector. These are installed automatically during deployment. See pgvector on Amazon RDS for details.
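A sketch covering both instances. Identifiers, instance class, storage size, and the exact minor engine versions are placeholders; only the major versions (14.x primary, 16.x pgvector) come from this guide:

```bash
# Primary application database (PostgreSQL 14.x)
aws rds create-db-instance \
  --db-instance-identifier jitera-postgres \
  --engine postgres \
  --engine-version 14.15 \
  --db-instance-class db.m6i.large \
  --allocated-storage 100 \
  --master-username jitera \
  --master-user-password '<DB_PASSWORD>' \
  --db-subnet-group-name <SUBNET_GROUP> \
  --vpc-security-group-ids <SG_ID> \
  --no-publicly-accessible

# Vector database (PostgreSQL 16.x): same shape, different engine version
aws rds create-db-instance \
  --db-instance-identifier jitera-pgvector \
  --engine postgres \
  --engine-version 16.6 \
  --db-instance-class db.m6i.large \
  --allocated-storage 100 \
  --master-username jitera \
  --master-user-password '<PGVECTOR_PASSWORD>' \
  --db-subnet-group-name <SUBNET_GROUP> \
  --vpc-security-group-ids <SG_ID> \
  --no-publicly-accessible
```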
The DocumentDB master password must be 8–100 characters and cannot contain /, ", or @. These characters are URI-reserved and will also break the MongoDB connection string. Generate with: openssl rand -base64 24 | tr -d '/+="@'
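A sketch of generating a compliant password and creating the cluster (identifiers, instance class, and networking are placeholders):

```bash
# Generate a DocumentDB-safe master password (strips /, +, =, ", @)
DOCDB_PASSWORD=$(openssl rand -base64 24 | tr -d '/+="@')

aws docdb create-db-cluster \
  --db-cluster-identifier jitera-docdb \
  --engine docdb \
  --master-username jitera \
  --master-user-password "$DOCDB_PASSWORD" \
  --db-subnet-group-name <SUBNET_GROUP> \
  --vpc-security-group-ids <SG_ID>

# Clusters need at least one instance to accept connections
aws docdb create-db-instance \
  --db-instance-identifier jitera-docdb-1 \
  --db-cluster-identifier jitera-docdb \
  --db-instance-class db.r6g.large \
  --engine docdb
```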
```bash
# Get the Kong proxy load balancer hostname
kubectl get svc -n jitera -l app.kubernetes.io/name=kong \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'
```
Create CNAME records pointing both hostnames (main domain and chat domain) to the Kong load balancer hostname. See Deployment Requirements for hostname details.
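If your DNS is hosted in Route 53, a sketch (the hosted zone ID and hostnames are placeholders; <KONG_LB_HOSTNAME> is the value returned by the command above):

```bash
aws route53 change-resource-record-sets \
  --hosted-zone-id <HOSTED_ZONE_ID> \
  --change-batch '{
    "Changes": [
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.yourdomain.com", "Type": "CNAME", "TTL": 300,
        "ResourceRecords": [{"Value": "<KONG_LB_HOSTNAME>"}]}},
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "chat.yourdomain.com", "Type": "CNAME", "TTL": 300,
        "ResourceRecords": [{"Value": "<KONG_LB_HOSTNAME>"}]}}
    ]
  }'
```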
```bash
# Check Kong proxy service status
kubectl get svc -n jitera -l app.kubernetes.io/name=kong

# Check Kong pod logs
kubectl logs -n jitera -l app.kubernetes.io/name=kong

# Common issues:
# - Subnet tagging issues
# - Security group restrictions
```
```bash
# Access Rails console
kubectl exec -it deploy/jitera-automation-rails -n jitera -- rails console
```

```ruby
# Send test email
ActionMailer::Base.mail(
  from: 'noreply@yourdomain.com',
  to: 'test@example.com',
  subject: 'Test Email',
  body: 'This is a test email from Jitera.'
).deliver_now
```
```bash
# Verify SES SMTP endpoint is reachable
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  telnet email-smtp.ap-northeast-1.amazonaws.com 587

# Ensure outbound port 587 is allowed in your security group
```
Ensure you converted the AWS secret access key to an SES SMTP password — these are not the same value. See AWS SES documentation for the conversion process.
This operation is irreversible. All in-cluster data (databases, caches, message queues) will be permanently deleted. Back up all data before proceeding.
```bash
# Uninstall the Helm release
helm uninstall jitera -n jitera

# Delete the namespace (removes all remaining resources)
kubectl delete namespace jitera
```
External resources (S3 buckets, RDS instances, DocumentDB clusters, ElastiCache, ACM certificates, Route 53 records) are not deleted by helm uninstall. These must be removed separately through the AWS Console, CLI, or Terraform.