
This guide walks you through deploying Jitera Self-Hosted on Amazon EKS with AWS-native services.
All examples in this guide use the following sample values. Replace them with your actual values:
  • Helm release name / namespace: jitera (i.e., helm install jitera ./charts/jitera --namespace jitera) — Kubernetes resource names are prefixed with this (e.g., jitera-automation-rails, jitera-ultron)
  • S3 bucket names: jitera-storage-default, jitera-storage-public, jitera-storage-export, jitera-storage-ultron
  • S3 bucket region: ap-northeast-1 (required — see Warning in Step 2)
  • AWS region (EKS, RDS, etc.): ap-northeast-1
  • Domain: jitera.yourdomain.com
  • Secret names: jitera-registry (image pull secret), jitera-tls (TLS certificate)

Prerequisites Checklist

Complete the Deployment Requirements checklist first, then confirm the following AWS-specific items:
  • AWS account with appropriate permissions
  • AWS CLI configured (aws configure)
  • eksctl installed (recommended) or Terraform

AWS Infrastructure Setup

The AWS CLI commands in this section are provided as examples; your environment may require different configurations. Refer to the official AWS documentation for detailed instructions. If you are using external managed services in production (RDS, DocumentDB, ElastiCache, Amazon MQ), start provisioning them in parallel with EKS cluster creation (Step 1). Managed databases typically take 10–20 minutes to become available, and they must be ready before the Helm install in Step 4. See the External AWS Services section for configuration details.

Step 1: Create EKS Cluster

Create an EKS cluster that meets the cluster specifications. The example below uses eksctl:
This example provisions 3x m6i.xlarge nodes (12 vCPU, 48 GB total), which meets the evaluation minimum. For production deployments, use m6i.2xlarge instances or increase node count to meet the production minimum (16 cores, 64 GB). See the Sizing Guide for tier recommendations.
# cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: jitera-cluster
  region: ap-northeast-1
  version: "1.35"

managedNodeGroups:
  - name: jitera-nodes
    instanceType: m6i.xlarge
    desiredCapacity: 3
    minSize: 3
    maxSize: 6
    volumeSize: 200
    volumeType: gp3
    iam:
      withAddonPolicies:
        ebs: true
        efs: true

iam:
  withOIDC: true

addons:
  - name: vpc-cni
  - name: coredns
  - name: kube-proxy
  - name: aws-ebs-csi-driver
eksctl create cluster -f cluster.yaml
The aws-ebs-csi-driver addon is required for persistent volume provisioning. Without it, database pods will fail to start with PVC binding errors.
On EKS with Amazon Linux 2023 (default for EKS 1.30+), ensure volumeSize is at least 200 GB per node. Jitera container images total approximately 30–50 GB of compressed layers, and containerd’s overlayfs storage multiplies this during extraction. A 100 GB volume will run out of space during initial image pulls, causing no space left on device errors. If nodes report this error, increase the volume size and replace the nodes.
This example uses eksctl’s default VPC, which automatically provisions a NAT Gateway for outbound connectivity from private subnets. If you use a custom VPC, you must configure a NAT Gateway yourself — without it, worker nodes cannot reach AWS services (ECR, S3, STS, etc.), causing image pulls and addon installs to fail during cluster bootstrap. See VPC requirements for Amazon EKS for details.
See Amazon EKS documentation for detailed cluster creation instructions.
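Once eksctl finishes, a quick sanity check is worth running before moving on. This sketch uses the sample names from this guide (jitera-cluster, ap-northeast-1); the ebs-csi-controller deployment name is what the EKS addon typically creates in kube-system:

```sh
# Cluster should report "ACTIVE" when ready
aws eks describe-cluster --name jitera-cluster \
  --region ap-northeast-1 --query 'cluster.status'

# Expect 3 Ready nodes (per the nodegroup above)
kubectl get nodes

# Confirm the EBS CSI driver addon is running (required for PVCs — see note above)
kubectl get deployment ebs-csi-controller -n kube-system
```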

Step 2: Create S3 Buckets

Create 4 S3 buckets with the access levels listed in Storage Configuration, and configure CORS on the default and public buckets.
All S3 buckets must be created in ap-northeast-1 (Tokyo). The application generates presigned URLs with a hardcoded ap-northeast-1 region for direct uploads; buckets in other regions will cause jitera init (CLI project import) to fail with a 301 Moved Permanently redirect. Only the buckets are region-constrained — cross-region S3 access works, so the EKS cluster and other AWS services can run in a different region.
  • The public bucket must have public-read access enabled (S3 bucket policy or ACL). All other buckets should remain private.
  • CORS AllowedOrigins must include both your main domain and chat domain (e.g., https://app.example.com and https://chat.example.com). Missing origins will cause file upload failures in the application.
  • See Object Storage — CORS Configuration for the required CORS rules for default and public buckets.
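As a sketch, the bucket creation, public-read policy, and CORS setup above can be scripted as follows. The origins and the exact CORS rule shape here are illustrative — use your real domains and the authoritative rules from Object Storage — CORS Configuration:

```sh
# Create the four buckets in ap-northeast-1 (required region — see the Warning above)
for b in jitera-storage-default jitera-storage-public \
         jitera-storage-export jitera-storage-ultron; do
  aws s3api create-bucket \
    --bucket "$b" \
    --region ap-northeast-1 \
    --create-bucket-configuration LocationConstraint=ap-northeast-1
done

# Allow a public-read bucket policy on the public bucket only
aws s3api put-public-access-block --bucket jitera-storage-public \
  --public-access-block-configuration BlockPublicPolicy=false,RestrictPublicBuckets=false
aws s3api put-bucket-policy --bucket jitera-storage-public --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::jitera-storage-public/*"
  }]
}'

# CORS on the default and public buckets (example origins — use your domains)
cat > cors.json <<'EOF'
{
  "CORSRules": [{
    "AllowedOrigins": ["https://app.example.com", "https://chat.example.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }]
}
EOF
aws s3api put-bucket-cors --bucket jitera-storage-default --cors-configuration file://cors.json
aws s3api put-bucket-cors --bucket jitera-storage-public  --cors-configuration file://cors.json
```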
See Amazon S3 documentation for bucket creation and CORS configuration.

Helm Configuration

The following is an example — replace the region, bucket names, and credentials with your actual values:
storage:
  provider: S3
  secret:
    aws:
      AWS_ACCESS_KEY_ID: "<YOUR_ACCESS_KEY>"
      AWS_SECRET_ACCESS_KEY: "<YOUR_SECRET_KEY>"
      AWS_REGION: "ap-northeast-1"                       # must be ap-northeast-1 (see Warning above)
      AWS_BUCKET: "jitera-storage-default"              # your default bucket name
      AWS_PUBLIC_BUCKET: "jitera-storage-public"        # your public bucket name
      AWS_EXPORT_PROJECT_BUCKET: "jitera-storage-export" # your export bucket name
      AWS_ULTRON_BUCKET: "jitera-storage-ultron"        # your AI service bucket name

Step 3: Create IAM User for S3 Access

Create an IAM user with the following S3 permissions scoped to the 4 Jitera buckets. The following is an example — replace the bucket names with the ones you created in Step 2:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::jitera-storage-default",
        "arn:aws:s3:::jitera-storage-default/*",
        "arn:aws:s3:::jitera-storage-public",
        "arn:aws:s3:::jitera-storage-public/*",
        "arn:aws:s3:::jitera-storage-export",
        "arn:aws:s3:::jitera-storage-export/*",
        "arn:aws:s3:::jitera-storage-ultron",
        "arn:aws:s3:::jitera-storage-ultron/*"
      ]
    }
  ]
}
Generate an access key for this IAM user — the Helm chart requires static credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). See AWS IAM documentation for policy creation and access key management.
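As a sketch (the user name is illustrative; the JiteraS3Access policy name matches the troubleshooting section later in this guide), the IAM user, policy, and access key can be created with:

```sh
# Save the policy JSON above as jitera-s3-policy.json, then:
aws iam create-policy \
  --policy-name JiteraS3Access \
  --policy-document file://jitera-s3-policy.json

aws iam create-user --user-name jitera-s3

aws iam attach-user-policy \
  --user-name jitera-s3 \
  --policy-arn "arn:aws:iam::<ACCOUNT_ID>:policy/JiteraS3Access"

# Returns AccessKeyId / SecretAccessKey for the Helm values
aws iam create-access-key --user-name jitera-s3
```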

Step 4: Obtain TLS Certificate

The certificate must cover both hostnames (main domain and chat domain).
Self-signed certificates are not supported.

Option 1: AWS Certificate Manager (ACM)

ACM provides managed certificates for use with AWS load balancers. For ACM, use a Subject Alternative Name (SAN) or wildcard certificate. See AWS Certificate Manager documentation for detailed instructions.
aws acm request-certificate \
  --domain-name jitera.yourdomain.com \
  --subject-alternative-names "chat.yourdomain.com" \
  --validation-method DNS \
  --region ap-northeast-1
Complete DNS validation as instructed. Use the certificate ARN in the Helm values ingress.annotations (see Step 3: Create Values File).
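After creating the DNS validation records, you can poll the certificate until it is issued (substitute your certificate ARN):

```sh
# Blocks until validation completes (or times out)
aws acm wait certificate-validated \
  --certificate-arn "<ACM_CERTIFICATE_ARN>" \
  --region ap-northeast-1

# Should report "ISSUED" once DNS validation succeeds
aws acm describe-certificate \
  --certificate-arn "<ACM_CERTIFICATE_ARN>" \
  --region ap-northeast-1 \
  --query 'Certificate.Status'
```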

Option 2: cert-manager with Let’s Encrypt

For automatic certificate management without ACM. See cert-manager documentation for installation instructions.

Step 5: Configure Email (SES)

Set up Amazon SES for email delivery. For detailed and up-to-date instructions, see the Amazon SES documentation.
New SES accounts start in sandbox mode, which only allows sending to verified email addresses. In sandbox mode, the Org Owner invitation email will silently fail with 554 Message rejected: Email address is not verified — the error appears only in the Rails logs, not in the UI. For testing/pilot environments, verify each recipient address before sending invitations:
aws ses verify-email-identity \
  --profile <your-aws-profile> \
  --region ap-northeast-1 \
  --email-address recipient@example.com
For production environments: request production access to remove the sandbox restriction. Approval typically takes up to 24 hours:
aws sesv2 put-account-details \
  --profile <your-aws-profile> \
  --region ap-northeast-1 \
  --production-access-enabled
See the Amazon SES sandbox documentation for details.
# Verify domain identity
aws ses verify-domain-identity \
  --domain yourdomain.com \
  --region ap-northeast-1

# Get verification DNS records
aws ses get-identity-verification-attributes \
  --identities yourdomain.com \
  --region ap-northeast-1
Add the TXT record to your DNS configuration.
# Create IAM user for SMTP
aws iam create-user --user-name jitera-ses-smtp

# Attach SES sending policy
aws iam attach-user-policy \
  --user-name jitera-ses-smtp \
  --policy-arn arn:aws:iam::aws:policy/AmazonSESFullAccess

# Create access key
aws iam create-access-key --user-name jitera-ses-smtp
The SMTP password is not the IAM secret access key. You must derive it using the SES-specific signing algorithm. Follow the official SES SMTP credentials conversion procedure, which includes a reference implementation in Python.
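The derivation can be sketched in shell. This mirrors the documented algorithm — a SigV4-style HMAC-SHA256 chain over the fixed date 11111111, your region, "ses", "aws4_request", and "SendRawEmail", with version byte 0x04 prepended — but treat it as a sketch and prefer the official reference implementation. The secret below is AWS's documentation example key, not a real credential; openssl, od, and xxd are assumed to be available:

```sh
secret="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # example key from AWS docs
region="ap-northeast-1"

hmac_hex() {  # hmac_hex <hex-key> <message> -> hex digest
  printf '%s' "$2" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$1" -binary \
    | od -An -tx1 | tr -d ' \n'
}

# HMAC chain: fixed date, region, service, terminator, message
key=$(printf 'AWS4%s' "$secret" | od -An -tx1 | tr -d ' \n')
for msg in 11111111 "$region" ses aws4_request SendRawEmail; do
  key=$(hmac_hex "$key" "$msg")
done

# Prepend version byte 0x04 and base64-encode -> SMTP password
smtp_password=$(printf '04%s' "$key" | xxd -r -p | base64)
echo "$smtp_password"
```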

Helm Configuration

The following is an example — replace the SMTP endpoint, credentials, and sender address with your actual values:
mailer:
  smtp_settings:
    address: email-smtp.ap-northeast-1.amazonaws.com  # SES endpoint for your region
    user_name: "<SES_SMTP_USERNAME>"
    password: "<SES_SMTP_PASSWORD>"
  default_from_email: "noreply@yourdomain.com"

Jitera Installation

Step 1: Install cert-manager (Optional)

If not using ACM for TLS, install cert-manager. See cert-manager documentation for installation instructions.

Step 2: Create Namespace and Registry Secret

kubectl create namespace jitera

kubectl create secret docker-registry jitera-registry \
  --namespace jitera \
  --docker-server=registry.jitera.com \
  --docker-username="your-username" \
  --docker-password="your-password"
The secret name (jitera-registry) must match the imagePullSecrets entry in your Helm values file. A mismatch will cause image pull failures across all pods.

Step 3: Create Values File

Create a values file for your AWS deployment. All placeholder values (<...>) must be replaced with your actual configuration. Parameters not listed here use sensible defaults — see Helm Values Reference for the full list.
Load balancer and inbound IP restriction. Jitera provisions a Classic Load Balancer (CLB) by default via the Kong ingress controller. ALB is not supported — the Kong Ingress Controller does not offer ALB-mode ingress — and NLB is not supported due to a Jitera-side limitation. For Layer-4 IP allow-listing, pre-create a Security Group in your infrastructure layer (console / CLI / IaC) and attach it to the Kong Service via the service.beta.kubernetes.io/aws-load-balancer-security-groups annotation. This keeps the SG lifecycle decoupled from the Helm chart — the SG is not deleted by helm uninstall.
  • Do not use kong.proxy.loadBalancerSourceRanges — it works, but couples infrastructure-layer rules to application-layer Helm values.
  • Do not use service.beta.kubernetes.io/aws-load-balancer-source-ranges — silently ignored on the in-tree CLB.
  • The pre-created SG must include your NAT Gateway EIP(s) so pods hitting the public app domain (hairpin) aren’t dropped. See Inbound Access.
AWS WAF. Neither CLB nor NLB natively integrates with AWS WAF. If you require WAF, place CloudFront + AWS WAF in front of the CLB (which also adds edge caching and DDoS protection), or use a third-party CDN/WAF such as Cloudflare.
Trusted proxy. Narrow kong.env.trusted_ips (chart default 0.0.0.0/0,::/0) and Jitera’s application-level trusted proxies to your LB CIDR in production. See Trusted Proxy.
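As a sketch of what narrowing the trusted proxies looks like in the values file (the CIDR is an example — use your actual LB/VPC CIDR, and confirm the exact key path in the Helm Values Reference):

```yaml
# values-aws.yaml — restrict Kong's trusted proxy CIDRs in production
kong:
  env:
    trusted_ips: "10.0.0.0/16"   # example: your LB/VPC CIDR, not 0.0.0.0/0
```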
EKS networking. Jitera is verified with the Amazon VPC CNI add-on (the EKS default, already listed under addons: in the cluster config above). Pod IPs come from VPC subnets, so the inbound Security Group attached via the aws-load-balancer-security-groups annotation applies at the Service (CLB) frontend — independent of pod-level networking.
# values-aws.yaml
# ============================================================
# Core Configuration — all parameters below must be overridden
# ============================================================

# --- Registry credentials (provided by Jitera) ---
registryCredentials:
  server: "<REGISTRY_URL>"
  username: "<REGISTRY_USERNAME>"
  password: "<REGISTRY_PASSWORD>"
  email: "<REGISTRY_EMAIL>"

# --- Domain ---
ingress:
  domainName: "app.yourdomain.com"
  chatDomainName: "chat.yourdomain.com"

# --- TLS — CLB with ACM certificate + pre-created Security Group ---
kong:
  proxy:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "kong-proxy-tls"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<ACM_CERTIFICATE_ARN>"
      # Pre-created Security Group that carries your inbound allow-list rules.
      # Create the SG in your infrastructure layer (console / CLI / IaC);
      # include NAT Gateway EIP(s) for pod hairpin. See the Warning above.
      service.beta.kubernetes.io/aws-load-balancer-security-groups: "<SECURITY_GROUP_ID>"
    tls:
      overrideServiceTargetPort: 8000

# --- JWT ---
jwt:
  secret: "<GENERATE_WITH_pwgen_64_1>"

# --- Internal secrets (generate unique values for each) ---
automation:
  env:
    PUBLIC_OPEN_AI_INTERNAL_SECRET: "<GENERATE_RANDOM_SECRET>"
ultron:
  secret:
    PUBLIC_OPEN_AI_INTERNAL_SECRET: "<SAME_VALUE_AS_ABOVE>"
credentials:
  hasura:
    HASURA_GRAPHQL_ADMIN_SECRET: "<GENERATE_RANDOM_SECRET>"
  boost:
    JITERA_BOOST_API_KEY_MAIN: "<GENERATE_WITH_pwgen_32_1>"
    JITERA_BOOST_AUTO_API_KEY: "<SAME_AS_HASURA_ADMIN_SECRET>"
    JITERA_BOOST_OPENAI_KEY_LITELLM: "<GENERATE_WITH_pwgen_32_1>"
    JITERA_BOOST_ROLLBAR_ACCESS_TOKEN: ""  # Set Rollbar token or leave empty
  html_conversion:
    BEARER_TOKEN: "<GENERATE_WITH_pwgen_32_1>"

# --- Database credentials (in-cluster) ---
postgresql:
  postgresql:
    username: "<DB_USERNAME>"
    password: "<DB_PASSWORD>"
    database: "<DB_NAME>"
    postgresPassword: "<POSTGRES_SUPERUSER_PASSWORD>"
pgvector:
  postgresql:
    username: "<PGVECTOR_USERNAME>"
    password: "<PGVECTOR_PASSWORD>"
    database: "<PGVECTOR_DB_NAME>"
mongodb:
  auth:
    databases: ["<MONGO_DB_NAME>"]
    usernames: ["<MONGO_USERNAME>"]
    passwords: ["<MONGO_PASSWORD>"]
rabbitmq:
  auth:
    password: "<RABBITMQ_PASSWORD>"
    erlangCookie: "<GENERATE_RANDOM_STRING>"

# --- Storage — AWS S3 ---
storage:
  provider: S3
  secret:
    aws:
      AWS_ACCESS_KEY_ID: "<AWS_ACCESS_KEY_ID>"
      AWS_SECRET_ACCESS_KEY: "<AWS_SECRET_ACCESS_KEY>"
      AWS_REGION: "ap-northeast-1"  # must be ap-northeast-1 — see S3 bucket Warning
      AWS_BUCKET: "jitera-storage-default"
      AWS_PUBLIC_BUCKET: "jitera-storage-public"
      AWS_EXPORT_PROJECT_BUCKET: "jitera-storage-export"
      AWS_ULTRON_BUCKET: "jitera-storage-ultron"
document_converter:
  env:
    USE_AZURE: "false"
ultron:
  env:
    STORAGE_DISK: "s3"

# --- Email — AWS SES ---
mailer:
  smtp_settings:
    address: "email-smtp.ap-northeast-1.amazonaws.com"
    user_name: "<SES_SMTP_USERNAME>"
    password: "<SES_SMTP_PASSWORD>"
  default_from_email: "noreply@yourdomain.com"

# --- Company ---
company:
  name: "<YOUR_COMPANY_NAME>"
  brand_name: "<YOUR_BRAND_NAME>"
  domain: "@yourdomain.com"
  language: "en"

# --- AI / LLM Primary Provider (choose one, see tabs below) ---
# See the AI provider tabs below this code block.

# ============================================================
# Optional Configuration
# ============================================================
# The following use sensible defaults. Override only if needed.
# See: /self-hosted-v26.02.16.2/reference/helm-values
#
# Integrations:       credentials.github.*, credentials.gitlab.*, credentials.figma.*
# Sign-up control:    automation.env.SECURED_SIGN_UP, frontend.env.REACT_APP_SECURED_SIGN_UP
# StorageClass:       postgresql.persistence.storageClassName, mongodb.persistence.storageClass, etc.
# External databases: externalPostgres.*, externalRedis.*, externalMongodb.*, externalRabbitmq.*
# Monitoring:         monitoring.*
# Error monitoring:   credentials.rollbar.*
# Monitoring domains: ingress.grafana.domain, ingress.prometheus.domain
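The GENERATE_WITH_pwgen and GENERATE_RANDOM_SECRET placeholders above can be filled with locally generated random strings. If pwgen is not installed, openssl equivalents work — this is a sketch; any CSPRNG output of the right length is fine:

```sh
# pwgen 64 1 equivalent (64 alphanumeric chars) — e.g. jwt.secret
jwt_secret=$(openssl rand -base64 64 | tr -d '/+=\n' | cut -c1-64)

# pwgen 32 1 equivalent (32 alphanumeric chars) — e.g. Boost API keys
api_key=$(openssl rand -base64 32 | tr -d '/+=\n' | cut -c1-32)

# Generic random secrets — e.g. PUBLIC_OPEN_AI_INTERNAL_SECRET, Hasura admin secret
internal_secret=$(openssl rand -hex 32)

# Random string — e.g. RabbitMQ Erlang cookie
erlang_cookie=$(openssl rand -hex 16)

echo "$jwt_secret"
```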
If your environment restricts Docker Hub access, you can redirect all image pulls to the Jitera ACR registry. See Container Registry for the configuration.

AI / LLM Primary Provider

Choose one primary provider and add the corresponding configuration to your values file. The AI_MODE setting determines Ultron’s primary LLM routing. Additional providers (AWS Bedrock/Claude, Anthropic Direct API, Google Gemini, vLLM) can be configured alongside — see AI Configuration for details.
openai:
  AI_MODE: azure
  secretKeys:
    azure:
      AZURE_OPENAI_KEYS: '["<AZURE_OPENAI_KEY>"]'
      AZURE_OPENAI_INSTANCE_NAMES: '["<AZURE_OPENAI_INSTANCE_NAME>"]'
      AZURE_OPENAI_VERSION: "2024-10-21"
      AZURE_OPENAI_DEVELOPMENT_NAME: "gpt-4.1"
      AZURE_OPENAI_EMBEDDING_DEVELOPMENT_NAME: "text-embedding-ada-002"
      AZURE_OPENAI_VISION_DEVELOPMENT_NAME: "gpt-4o"
      AZURE_OPENAI_GPT_4O_DEVELOPMENT_NAME: "gpt-4o"
      AZURE_OPENAI_GPT_4O_MINI_DEVELOPMENT_NAME: "gpt-4o-mini"
    openai:
      OPENAI_MAIN_MODEL_NAME: "gpt-4.1"
credentials:
  boost:
    # Models you have deployed — set the full config string
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41_MINI: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1-mini,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41_NANO: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1-nano,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_4O: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4o,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_4O_MINI: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4o-mini,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_ADA: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/text-embedding-ada-002,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    # Models NOT deployed — must override with empty string to prevent startup crash
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O1: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O3: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O3_MINI: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O4_MINI: ""
See AI Configuration — Azure OpenAI for the full setup including custom subdomain requirements, all Boost config keys, model deployment, and SuperAdmin registration.
See Helm Values Reference for the complete parameter reference.

Step 3.5: Create Default StorageClass

The Helm chart’s monitoring stack and in-cluster databases use PersistentVolumeClaims that rely on the cluster’s default StorageClass. On EKS, create a gp3 StorageClass and mark it as default:
# storageclass-gp3.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
kubectl apply -f storageclass-gp3.yaml
kubectl get storageclass   # Verify gp3 shows "(default)"
A default StorageClass is required even when databases are externalized — the monitoring stack (Grafana, Prometheus, Loki, Tempo) still needs persistent volumes. Without a default StorageClass, pods will remain in Pending state with unbound immediate PersistentVolumeClaims.
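EKS clusters typically ship with a gp2 StorageClass that may already carry the default annotation; if two classes claim default, PVC provisioning becomes ambiguous. One way to demote gp2 after applying the gp3 class:

```sh
# Remove the default marking from the pre-existing gp2 class (if present)
kubectl patch storageclass gp2 \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# Only gp3 should now show "(default)"
kubectl get storageclass
```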

Step 4: Install Jitera

# Extract the Jitera Helm chart zip
unzip jitera-helm-chart.zip

# Install Jitera
helm install jitera ./charts/jitera \
  --namespace jitera \
  --values values-aws.yaml \
  --wait \
  --timeout 15m
The --timeout 15m flag is important for initial installation. Database initialization and migrations can take several minutes. If the installation times out, check pod status with kubectl get pods -n jitera before retrying.

External AWS Services (Optional)

For production high-availability deployments, consider externalizing databases to managed services. See Amazon RDS documentation for setup procedures.

Using Amazon RDS (PostgreSQL)

Create an RDS instance running PostgreSQL 14.x. See External Services for the validated version. Jitera requires the following PostgreSQL extensions: btree_gist, citext, cube, pg_stat_statements, pg_trgm, pgcrypto, uuid-ossp, vector. These are installed automatically during deployment. See Amazon RDS PostgreSQL extensions for details.
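As a sketch, an instance can be provisioned with the CLI — the identifier, instance class, and storage size below are illustrative, and the engine version placeholder should be the validated 14.x release from External Services:

```sh
aws rds create-db-instance \
  --db-instance-identifier jitera-db \
  --engine postgres \
  --engine-version "<VALIDATED_14_X_VERSION>" \
  --db-instance-class db.m6i.large \
  --allocated-storage 100 \
  --master-username jitera \
  --master-user-password "<YOUR_RDS_PASSWORD>" \
  --vpc-security-group-ids "<SG_ALLOWING_EKS_NODES>" \
  --db-subnet-group-name "<YOUR_DB_SUBNET_GROUP>" \
  --no-publicly-accessible \
  --region ap-northeast-1
```

The security group must allow port 5432 from the EKS worker nodes.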
# values-aws.yaml additions
postgresql:
  enabled: false

externalPostgres:
  enabled: true
  host: jitera-db.cluster-xxxxx.ap-northeast-1.rds.amazonaws.com
  port: "5432"
  dbName: jitera
  username: jitera
  password: "your-rds-password"

Using Amazon RDS (PGVector)

Create a separate RDS instance running PostgreSQL 16.x. See External Services for the validated version. Jitera requires the same set of PostgreSQL extensions as the primary database: btree_gist, citext, cube, pg_stat_statements, pg_trgm, pgcrypto, uuid-ossp, vector. These are installed automatically during deployment. See pgvector on Amazon RDS for details.
# values-aws.yaml additions
pgvector:
  enabled: false

externalPgvector:
  enabled: true
  host: jitera-pgvector.cluster-xxxxx.ap-northeast-1.rds.amazonaws.com
  port: "5432"
  database: jitera_pgvector
  username: jitera
  password: "your-rds-pgvector-password"
  sslMode: disable
See External Services — PGVector for TLS limitations.

Using Amazon DocumentDB (MongoDB)

The DocumentDB master password must be 8–100 characters and cannot contain /, ", or @. These characters are URI-reserved and will also break the MongoDB connection string. Generate with: openssl rand -base64 24 | tr -d '/+="@'
See External Services — MongoDB for connection URI requirements.
# values-aws.yaml additions
mongodb:
  enabled: false

externalMongodb:
  enabled: true
  mongodb_uri: "mongodb://jitera:password@jitera-docdb.cluster-xxxxx.ap-northeast-1.docdb.amazonaws.com:27017/jitera?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
See External Services — MongoDB for TLS limitations and connection URI requirements.

Using Amazon ElastiCache (Redis)

# values-aws.yaml additions
redis:
  enabled: false

externalRedis:
  enabled: true
  host: jitera-redis.xxxxx.cache.amazonaws.com
  port: 6379
  username: ""       # Required — leave empty for ElastiCache (no ACL username)
  password: ""
  useTls: true

Using Amazon MQ (RabbitMQ)

# values-aws.yaml additions
rabbitmq:
  enabled: false

externalRabbitmq:
  enabled: true
  host: "b-xxxxx.mq.ap-northeast-1.amazonaws.com"
  port: "5671"
  username: "jitera"
  password: "your-rabbitmq-password"
  useTls: true

Post-Installation Verification

Step 1: Check Pod Status

# All pods should be Running
kubectl get pods -n jitera

# Expected output:
# NAME                           READY   STATUS    RESTARTS   AGE
# jitera-api-xxxxx               1/1     Running   0          5m
# jitera-web-xxxxx               1/1     Running   0          5m
# jitera-worker-xxxxx            1/1     Running   0          5m
# jitera-postgresql-0            1/1     Running   0          5m
# jitera-mongodb-0               1/1     Running   0          5m
# jitera-redis-master-0          1/1     Running   0          5m

Step 2: Check Load Balancer

# Get the Kong proxy load balancer hostname
kubectl get svc -n jitera -l app.kubernetes.io/name=kong -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'

Step 3: Configure DNS

Create CNAME records pointing both hostnames (main domain and chat domain) to the Kong load balancer hostname. See Deployment Requirements for hostname details.
app.example.com     CNAME  xxxxx.ap-northeast-1.elb.amazonaws.com
chat.example.com    CNAME  xxxxx.ap-northeast-1.elb.amazonaws.com
# Get Kong load balancer hostname
LB_HOSTNAME=$(kubectl get svc -n jitera -l app.kubernetes.io/name=kong -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')

# Create CNAME records
aws route53 change-resource-record-sets \
  --hosted-zone-id <YOUR_HOSTED_ZONE_ID> \
  --change-batch '{
    "Changes": [
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "'$LB_HOSTNAME'"}]
        }
      },
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "chat.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "'$LB_HOSTNAME'"}]
        }
      }
    ]
  }'
See Amazon Route 53 documentation for detailed instructions.

Troubleshooting

Pods Not Starting

# Check pod events
kubectl describe pod <pod-name> -n jitera

# Check logs
kubectl logs <pod-name> -n jitera

# Common issues:
# - Image pull errors: Verify registry credentials
# - Resource issues: Check node capacity
# - PVC issues: Verify EBS CSI driver is installed

Load Balancer Not Provisioning

# Check Kong proxy service status
kubectl get svc -n jitera -l app.kubernetes.io/name=kong

# Check Kong pod logs
kubectl logs -n jitera -l app.kubernetes.io/name=kong

# Common issues:
# - Subnet tagging issues
# - Security group restrictions

Database Connection Issues

# Test PostgreSQL connection
kubectl run -it --rm psql --image=postgres:15 --restart=Never -- \
  psql -h jitera-postgresql -U jitera -d jitera

# Check MongoDB connection
kubectl run -it --rm mongo --image=mongo:6 --restart=Never -- \
  mongosh "mongodb://jitera-mongodb:27017/jitera"

S3 Access Issues

# Check which storage provider is configured
kubectl get configmap jitera-ultron -n jitera -o jsonpath='{.data.STORAGE_DISK}'

# Check S3 credentials in the Ultron secret
kubectl get secret jitera-ultron -n jitera \
  -o jsonpath='{.data.S3_KEY}' | base64 -d

# Check storage config in the Automation secret (secrets.yml)
kubectl get secret jitera-automation -n jitera \
  -o jsonpath='{.data.secrets\.yml}' | base64 -d | grep -A 5 'storage_service\|aws:'

# Check S3 configuration in the Ultron secret
kubectl get secret jitera-ultron -n jitera -o json | \
  jq '.data | to_entries[] | select(.key | test("S3_")) | {(.key): (.value | @base64d)}'

# Test S3 connectivity from a pod
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  curl -s -o /dev/null -w "%{http_code}" https://s3.amazonaws.com

# Check IAM policy
aws iam get-policy-version \
  --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/JiteraS3Access \
  --version-id v1

S3 CORS Errors

If you see CORS errors in the browser console:
  1. Verify the CORS configuration includes your domain on the default and public buckets
  2. Check that the allowed methods include all required HTTP methods
  3. Ensure the S3 endpoint is accessible from the browser
aws s3api get-bucket-cors --bucket jitera-storage-default
aws s3api get-bucket-cors --bucket jitera-storage-public

Email Testing

# Access Rails console
kubectl exec -it deploy/jitera-automation-rails -n jitera -- rails console

# Send test email
ActionMailer::Base.mail(
  from: 'noreply@yourdomain.com',
  to: 'test@example.com',
  subject: 'Test Email',
  body: 'This is a test email from Jitera.'
).deliver_now
# Test SMTP connection to SES
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  nc -zv email-smtp.ap-northeast-1.amazonaws.com 587

# Check environment variables
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  env | grep -i smtp

# Check automation logs for email errors
kubectl logs deploy/jitera-automation-rails -n jitera | grep -i mail

Email Connection Refused

# Verify SES SMTP endpoint is reachable
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  telnet email-smtp.ap-northeast-1.amazonaws.com 587

# Ensure outbound port 587 is allowed in your security group

Email Authentication Failed

  1. Verify credentials are correct
  2. Ensure you converted the AWS secret access key to an SES SMTP password — these are not the same value. See AWS SES documentation for the conversion process.
  3. Check that the SMTP username format is correct
# Check secret values
kubectl get secret jitera-mailer -n jitera -o yaml

Emails Not Delivered

  1. Check spam/junk folder
  2. Verify domain SPF/DKIM records
  3. Check if the domain/email is verified in SES (SES requires sender verification)
  4. Review the SES sending dashboard for bounces and complaints
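For items 3 and 4, a couple of CLI checks help (region per the sample values):

```sh
# Is the account still in the SES sandbox? false = sandbox restrictions apply
aws sesv2 get-account --region ap-northeast-1 \
  --query 'ProductionAccessEnabled'

# Which identities (domains / addresses) are verified?
aws ses list-identities --region ap-northeast-1
```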

Deployment Deletion

Completely remove Jitera from the cluster.
This operation is irreversible. All in-cluster data (databases, caches, message queues) will be permanently deleted. Back up all data before proceeding.
# Uninstall the Helm release
helm uninstall jitera -n jitera

# Delete the namespace (removes all remaining resources)
kubectl delete namespace jitera
External resources (S3 buckets, RDS instances, DocumentDB clusters, ElastiCache, ACM certificates, Route 53 records) are not deleted by helm uninstall. These must be removed separately through the AWS Console, CLI, or Terraform.
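For example (destructive — identifiers follow this guide's sample values; verify each resource before deleting):

```sh
# S3 buckets (--force removes all objects first)
for b in jitera-storage-default jitera-storage-public \
         jitera-storage-export jitera-storage-ultron; do
  aws s3 rb "s3://$b" --force
done

# Managed databases, if provisioned (example identifier)
aws rds delete-db-instance --db-instance-identifier jitera-db \
  --skip-final-snapshot --region ap-northeast-1

# The EKS cluster itself
eksctl delete cluster -f cluster.yaml
```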
