
This guide walks you through deploying Jitera Self-Hosted on Azure AKS with Azure-native services.
All examples in this guide use the following sample values. Replace them with your actual values:
  • Helm release name / namespace: jitera (i.e., helm install jitera ./charts/jitera --namespace jitera) — Kubernetes resource names are prefixed with this (e.g., jitera-automation-rails, jitera-ultron)
  • Resource group: jitera-rg
  • Cluster name: jitera-cluster
  • Storage account: jiterastoragexxxx
  • Container names: jitera-default, jitera-public, jitera-export, jitera-ultron
  • Domain: jitera.yourdomain.com
  • Secret names: jitera-registry (image pull secret), jitera-tls (TLS certificate)

Prerequisites Checklist

Complete the Deployment Requirements checklist first, then confirm the following Azure-specific items:
  • Azure subscription with appropriate permissions
  • Azure CLI installed and configured (az login)

Azure Infrastructure Setup

The Azure CLI commands in this section are provided as examples; your environment may require different configurations. Refer to the official Azure documentation for detailed instructions.

If you are using external managed services in production (Azure Database for PostgreSQL, Cosmos DB, Azure Cache for Redis, Azure Communication Services), start provisioning them in parallel with AKS cluster creation (Step 1). Managed databases typically take 10–20 minutes to become available, and they must be ready before the Helm install. See the External Azure Services section for configuration details.
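For example, long-running provisioning can be started in the background so it completes while the cluster is created in Step 1. A sketch, assuming your CLI version supports --no-wait for these commands (server names, SKUs, and versions are illustrative placeholders; size them per the External Azure Services section):
# Start managed-service provisioning without blocking; Cosmos DB and
# Azure Communication Services can be started the same way
az postgres flexible-server create \
  --resource-group jitera-rg \
  --name jitera-postgres \
  --location japaneast \
  --tier GeneralPurpose \
  --sku-name Standard_D2s_v3 \
  --version 14 \
  --no-wait

az redis create \
  --resource-group jitera-rg \
  --name jitera-redis \
  --location japaneast \
  --sku Standard \
  --vm-size c1 \
  --no-wait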

Step 1: Create Resource Group and AKS Cluster

Create an AKS cluster that meets the cluster specifications. The example below uses Azure CLI:
This example provisions 3x Standard_D4s_v3 nodes (12 vCPU, 48 GB total), which meets the evaluation minimum. For production deployments, use Standard_D8s_v3 instances or increase node count to meet the production minimum (16 cores, 64 GB). See the Sizing Guide for tier recommendations.
# Set variables
export RESOURCE_GROUP="jitera-rg"
export LOCATION="japaneast"
export CLUSTER_NAME="jitera-cluster"

# Create resource group
az group create --name $RESOURCE_GROUP --location $LOCATION

# Create AKS cluster
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --kubernetes-version 1.35 \
  --enable-managed-identity \
  --generate-ssh-keys

# Get credentials
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
This example uses default networking for simplicity. For production deployments, configure a custom VNet with subnets and Network Security Groups (NSGs) for network isolation. See the AKS networking documentation for guidance.
Load balancer and inbound IP restriction. Jitera provisions an Azure Standard Load Balancer (SLB) by default via the Kong ingress controller. Application Gateway and Front Door are not supported as the ingress layer (but can sit in front of the SLB for WAF — see below). For Layer-4 IP allow-listing, pre-create a Network Security Group (NSG) and associate it with the AKS node subnet (a sketch follows after this list). This decouples the NSG lifecycle from the Helm chart and filters traffic at the subnet boundary, outside the SLB’s DSR session-tracking path.

Do not use kong.proxy.loadBalancerSourceRanges (or the service.beta.kubernetes.io/azure-allowed-ip-ranges annotation) on Azure. Setting a source-range filter on the Standard LB breaks same-VNet hairpin traffic — pod-to-own-LB-IP connections (Boost → Automation, jitera init, cert-manager HTTP-01 self-check) time out because the SYN is accepted but the SYN-ACK return path is dropped. The DSR session-tracking bug only triggers when the filter is applied at the LB frontend, not at the subnet NSG.

At minimum, the NSG inbound rules should allow:
  • 443/tcp from your trusted client CIDRs (office / VPN)
  • 443/tcp from the VirtualNetwork service tag (required for pod hairpin and intra-cluster traffic to the public ingress domain)
  • Everything else denied by an explicit Deny-All rule (or rely on the default deny)
See AKS network security documentation and NSG overview for details.
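A minimal sketch of the pre-created NSG described above. The CIDR, VNet, and subnet names are illustrative; with the default AKS-managed VNet the subnet lives in the MC_ node resource group:
# Create the NSG
az network nsg create --resource-group jitera-rg --name jitera-ingress-nsg

# Allow 443 from trusted client CIDRs (replace with your office/VPN ranges)
az network nsg rule create --resource-group jitera-rg --nsg-name jitera-ingress-nsg \
  --name allow-trusted-https --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 \
  --source-address-prefixes 203.0.113.0/24

# Allow 443 from within the VNet (pod hairpin / intra-cluster traffic)
az network nsg rule create --resource-group jitera-rg --nsg-name jitera-ingress-nsg \
  --name allow-vnet-https --priority 110 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 \
  --source-address-prefixes VirtualNetwork

# Explicit deny for everything else
az network nsg rule create --resource-group jitera-rg --nsg-name jitera-ingress-nsg \
  --name deny-all-inbound --priority 4096 --direction Inbound --access Deny \
  --protocol '*' --destination-port-ranges '*' --source-address-prefixes '*'

# Associate the NSG with the AKS node subnet
az network vnet subnet update --resource-group jitera-rg \
  --vnet-name <vnet-name> --name <subnet-name> \
  --network-security-group jitera-ingress-nsg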
Azure WAF. The Azure Standard Load Balancer is L4-only and does not provide WAF. For L7 WAF, place Azure Application Gateway with WAF_v2 or Azure Front Door with WAF in front of the SLB. These sit ahead of Kong — Kong remains the cluster-internal ingress.
Trusted proxy. Narrow kong.env.trusted_ips (chart default 0.0.0.0/0,::/0) and Jitera’s application-level trusted proxies to your LB / Application Gateway / Front Door source CIDRs in production. See Trusted Proxy.
AKS networking. Jitera is verified with Azure CNI (traditional / node-subnet mode) — not Azure CNI Overlay and not the Cilium dataplane. Pod IPs come from the node subnet, so the NSG attached to that subnet applies to both nodes and pods, which is what the inbound IP restriction above relies on. If your az aks create CLI version defaults to a different mode, pass --network-plugin azure explicitly.
Custom VNet requires Network Contributor role. When deploying AKS into a bring-your-own (BYO) VNet, the cluster’s managed identity does not automatically have permission to join the subnet. LoadBalancer Services then fail to provision public IPs and stay in <pending> state, with LinkedAuthorizationFailed errors visible in the service events.

Grant the Network Contributor role on the subnet to the AKS cluster’s system-assigned managed identity:
# Get the AKS managed identity principal ID
PRINCIPAL_ID=$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME \
  --query identity.principalId -o tsv)

# Get the subnet ID
SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP \
  --vnet-name <vnet-name> --name <subnet-name> --query id -o tsv)

# Grant Network Contributor role
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Network Contributor" \
  --scope "$SUBNET_ID"
Not required when using the default AKS-managed VNet.
See Azure Kubernetes Service documentation for detailed cluster creation instructions.

Step 2: Create Storage Account

Create a storage account with 4 containers matching the access levels listed in Storage Configuration, and configure CORS on the storage account (an example follows after this list).
  • The public container must have its access level set to Blob (anonymous read access for blobs only). All other containers should remain private.
  • CORS AllowedOrigins must include both your main domain and chat domain (e.g., https://app.example.com and https://chat.example.com). Missing origins will cause file upload failures in the application.
  • See Object Storage — CORS Configuration for the required CORS rules for default and public buckets.
See Azure Blob Storage documentation for storage account creation and CORS configuration.
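A sketch using Azure CLI with account-key auth and the sample names from this guide. The account name must be globally unique, and the exact CORS rules (methods, headers, max age) should come from Object Storage — CORS Configuration; the values here are placeholders:
# Create the storage account (anonymous blob access must be allowed at the
# account level for the public container to use the Blob access level)
az storage account create \
  --name jiterastoragexxxx \
  --resource-group jitera-rg \
  --location japaneast \
  --sku Standard_LRS \
  --allow-blob-public-access true

ACCOUNT_KEY=$(az storage account keys list \
  --account-name jiterastoragexxxx --resource-group jitera-rg \
  --query '[0].value' -o tsv)

# Create the 4 containers; only jitera-public gets Blob-level anonymous read
for c in jitera-default jitera-export jitera-ultron; do
  az storage container create --name "$c" \
    --account-name jiterastoragexxxx --account-key "$ACCOUNT_KEY"
done
az storage container create --name jitera-public --public-access blob \
  --account-name jiterastoragexxxx --account-key "$ACCOUNT_KEY"

# CORS: both the main and chat domains must be listed as origins
az storage cors add --services b \
  --origins https://app.example.com https://chat.example.com \
  --methods GET PUT POST DELETE HEAD OPTIONS \
  --allowed-headers '*' --exposed-headers '*' --max-age 3600 \
  --account-name jiterastoragexxxx --account-key "$ACCOUNT_KEY"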

Helm Configuration

storage:
  provider: AzureStorage
  secret:
    azure:
      STORAGE_ACCOUNT_NAME: "jiterastoragexxxx"
      STORAGE_ACCESS_KEY: "<YOUR_ACCESS_KEY>"
      CONTAINER: "jitera-default"
      PUBLIC_CONTAINER: "jitera-public"
      EXPORT_PROJECT_CONTAINER: "jitera-export"
      ULTRON_CONTAINER: "jitera-ultron"
The Helm chart requires a storage account access key (STORAGE_ACCOUNT_NAME and STORAGE_ACCESS_KEY). Workload Identity and Managed Identity are not currently supported for application storage. The access key provides full access to the storage account — ensure the storage account is dedicated to Jitera or restrict network access using Azure Storage firewalls.
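If you restrict network access as suggested, a sketch of scoping the storage account to the AKS node subnet (subnet names are illustrative; the subnet needs the Microsoft.Storage service endpoint):
# Deny by default, then allow the AKS node subnet
az storage account update \
  --name jiterastoragexxxx --resource-group jitera-rg \
  --default-action Deny

az network vnet subnet update --resource-group jitera-rg \
  --vnet-name <vnet-name> --name <subnet-name> \
  --service-endpoints Microsoft.Storage

az storage account network-rule add \
  --account-name jiterastoragexxxx --resource-group jitera-rg \
  --vnet-name <vnet-name> --subnet <subnet-name>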

Step 3: Configure Email Service

Set up Azure Communication Services for email delivery. For detailed and up-to-date instructions, see the Azure Communication Services documentation.
# Create Communication Service
az communication create \
  --name jitera-comm \
  --resource-group jitera-rg \
  --location global \
  --data-location unitedstates
  1. Navigate to Communication Services in Azure Portal
  2. Go to Email > Domains > Connect domain
  3. Select the Email Communication Service and connect AzureManagedDomain
  4. Note the MailFrom address from the connected domain (format: DoNotReply@<random-uuid>.azurecomm.net)
  5. Create SMTP credentials via an Entra ID application — see Azure Communication Services SMTP authentication
SMTP credentials are not the Communication Service connection string or access key. The SMTP username is constructed from three Entra ID values (<communication-service-name>.<entra-app-client-id>.<tenant-id>) and the password is the Entra ID application’s client secret. Do not use az communication list-key output for SMTP authentication.
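A sketch of assembling the SMTP username from the three Entra ID values described above (variable names are illustrative):
COMM_SERVICE_NAME="jitera-comm"
ENTRA_APP_CLIENT_ID="<entra-app-client-id>"
TENANT_ID=$(az account show --query tenantId -o tsv)

# SMTP username: <communication-service-name>.<entra-app-client-id>.<tenant-id>
ACS_SMTP_USERNAME="${COMM_SERVICE_NAME}.${ENTRA_APP_CLIENT_ID}.${TENANT_ID}"
echo "$ACS_SMTP_USERNAME"

# The SMTP password is the Entra ID application's client secret,
# not the Communication Service connection string or access key.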

Helm Configuration

mailer:
  smtp_settings:
    address: smtp.azurecomm.net
    user_name: "<ACS_SMTP_USERNAME>"
    password: "<ACS_SMTP_PASSWORD>"
  default_from_email: "DoNotReply@<random-uuid>.azurecomm.net"  # must match the MailFrom address from Azure Portal

Step 4: Set Up TLS Certificates

The certificate must cover both hostnames (main domain and chat domain).
Self-signed certificates are not supported.
Install cert-manager for automatic certificate management with Let’s Encrypt. See cert-manager documentation for installation and ClusterIssuer configuration.
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
# cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@yourdomain.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: kong
kubectl apply -f cluster-issuer.yaml
The HTTP-01 solver above requires Kong to correctly route ACME challenge requests to /.well-known/acme-challenge/. If you experience certificate issuance failures, consider using a DNS-01 solver instead, which avoids ingress routing dependencies entirely.
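A hedged sketch of a DNS-01 ClusterIssuer backed by Azure DNS and a managed identity. All IDs and zone names are placeholders, and the identity needs DNS Zone Contributor on the zone (not shown); see the cert-manager Azure DNS documentation for the authoritative fields:
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@yourdomain.com
    privateKeySecretRef:
      name: letsencrypt-dns
    solvers:
    - dns01:
        azureDNS:
          subscriptionID: <subscription-id>
          resourceGroupName: <dns-zone-resource-group>
          hostedZoneName: yourdomain.com
          environment: AzurePublicCloud
          managedIdentity:
            clientID: <managed-identity-client-id>
EOF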
The Helm values use the cert-manager.io/cluster-issuer: letsencrypt-prod annotation to request certificates automatically (see Step 3: Create Values File).

Jitera Installation

Step 1: Create Namespace and Registry Secret

kubectl create namespace jitera

kubectl create secret docker-registry jitera-registry \
  --namespace jitera \
  --docker-server=registry.jitera.com \
  --docker-username="your-username" \
  --docker-password="your-password"
The secret name (jitera-registry) must match the imagePullSecrets entry in your Helm values file. A mismatch will cause image pull failures across all pods.
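To confirm the secret contents before installing, you can decode it (a quick check, not a required step):
# Decode the docker config and confirm the server and username
kubectl get secret jitera-registry -n jitera \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq .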

Step 2: Create Values File

Create a values file for your Azure deployment. All placeholder values (<...>) must be replaced with your actual configuration. Parameters not listed here use sensible defaults — see Helm Values Reference for the full list.
# values-azure.yaml
# ============================================================
# Core Configuration — all parameters below must be overridden
# ============================================================

# --- Registry credentials (provided by Jitera) ---
registryCredentials:
  server: "<REGISTRY_URL>"
  username: "<REGISTRY_USERNAME>"
  password: "<REGISTRY_PASSWORD>"
  email: "<REGISTRY_EMAIL>"

# --- Domain ---
ingress:
  domainName: "app.yourdomain.com"
  chatDomainName: "chat.yourdomain.com"

# --- JWT ---
jwt:
  secret: "<GENERATE_WITH_pwgen_64_1>"

# --- Internal secrets (generate unique values for each) ---
automation:
  env:
    PUBLIC_OPEN_AI_INTERNAL_SECRET: "<GENERATE_RANDOM_SECRET>"
ultron:
  secret:
    PUBLIC_OPEN_AI_INTERNAL_SECRET: "<SAME_VALUE_AS_ABOVE>"
  env:
    STORAGE_DISK: "azure"  # Azure Blob Storage (see the storage section below)
credentials:
  hasura:
    HASURA_GRAPHQL_ADMIN_SECRET: "<GENERATE_RANDOM_SECRET>"
  boost:
    JITERA_BOOST_API_KEY_MAIN: "<GENERATE_WITH_pwgen_32_1>"
    JITERA_BOOST_AUTO_API_KEY: "<SAME_AS_HASURA_ADMIN_SECRET>"
    JITERA_BOOST_OPENAI_KEY_LITELLM: "<GENERATE_WITH_pwgen_32_1>"
    JITERA_BOOST_ROLLBAR_ACCESS_TOKEN: ""  # Set Rollbar token or leave empty
  html_conversion:
    BEARER_TOKEN: "<GENERATE_WITH_pwgen_32_1>"

# --- Database credentials (in-cluster) ---
postgresql:
  postgresql:
    username: "<DB_USERNAME>"
    password: "<DB_PASSWORD>"
    database: "<DB_NAME>"
    postgresPassword: "<POSTGRES_SUPERUSER_PASSWORD>"
pgvector:
  postgresql:
    username: "<PGVECTOR_USERNAME>"
    password: "<PGVECTOR_PASSWORD>"
    database: "<PGVECTOR_DB_NAME>"
mongodb:
  auth:
    databases: ["<MONGO_DB_NAME>"]
    usernames: ["<MONGO_USERNAME>"]
    passwords: ["<MONGO_PASSWORD>"]
rabbitmq:
  auth:
    password: "<RABBITMQ_PASSWORD>"
    erlangCookie: "<GENERATE_RANDOM_STRING>"

# --- Storage — Azure Blob Storage ---
storage:
  provider: AzureStorage
  secret:
    azure:
      STORAGE_ACCOUNT_NAME: "<STORAGE_ACCOUNT_NAME>"
      STORAGE_ACCESS_KEY: "<STORAGE_ACCESS_KEY>"
      CONTAINER: "jitera-default"
      ULTRON_CONTAINER: "jitera-ultron"
      EXPORT_PROJECT_CONTAINER: "jitera-export"
      PUBLIC_CONTAINER: "jitera-public"
document_converter:
  env:
    USE_AZURE: "true"

# --- Email — SMTP ---
mailer:
  smtp_settings:
    address: "<SMTP_HOST>"
    user_name: "<SMTP_USERNAME>"
    password: "<SMTP_PASSWORD>"
  default_from_email: "noreply@yourdomain.com"

# --- Company ---
company:
  name: "<YOUR_COMPANY_NAME>"
  brand_name: "<YOUR_BRAND_NAME>"
  domain: "@yourdomain.com"
  language: "en"

# --- AI / LLM Primary Provider (choose one, see tabs below) ---
# See the AI provider tabs below this code block.

# ============================================================
# Optional Configuration
# ============================================================
# The following use sensible defaults. Override only if needed.
# See: /self-hosted-v26.04.21/reference/helm-values
#
# Integrations:       credentials.github.*, credentials.gitlab.*, credentials.figma.*
# Sign-up control:    automation.env.SECURED_SIGN_UP, frontend.env.REACT_APP_SECURED_SIGN_UP
# StorageClass:       postgresql.persistence.storageClassName, mongodb.persistence.storageClass, etc.
# External databases: externalPostgres.*, externalRedis.*, externalMongodb.*, externalRabbitmq.*
# Monitoring:         monitoring.*
# Error monitoring:   credentials.rollbar.*
# Monitoring domains: ingress.grafana.domain, ingress.prometheus.domain
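One way to generate the random secrets referenced above, assuming pwgen is installed (openssl shown as a fallback):
# JWT secret (64 chars)
pwgen 64 1

# API keys / internal secrets (32 chars)
pwgen 32 1

# Fallback if pwgen is unavailable
openssl rand -hex 32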
If your environment restricts Docker Hub access, you can redirect all image pulls to the Jitera ACR registry. See Container Registry for the configuration.

AI / LLM Primary Provider

Choose one primary provider and add the corresponding configuration to your values file. The AI_MODE setting determines Ultron’s primary LLM routing. Additional providers (AWS Bedrock/Claude, Anthropic Direct API, Google Gemini, vLLM) can be configured alongside — see AI Configuration for details.
openai:
  AI_MODE: azure
  secretKeys:
    azure:
      AZURE_OPENAI_KEYS: '["<AZURE_OPENAI_KEY>"]'
      AZURE_OPENAI_INSTANCE_NAMES: '["<AZURE_OPENAI_INSTANCE_NAME>"]'
      AZURE_OPENAI_VERSION: "2024-10-21"
      AZURE_OPENAI_DEVELOPMENT_NAME: "gpt-4.1"
      AZURE_OPENAI_EMBEDDING_DEVELOPMENT_NAME: "text-embedding-ada-002"
      AZURE_OPENAI_VISION_DEVELOPMENT_NAME: "gpt-4o"
      AZURE_OPENAI_GPT_4O_DEVELOPMENT_NAME: "gpt-4o"
      AZURE_OPENAI_GPT_4O_MINI_DEVELOPMENT_NAME: "gpt-4o-mini"
    openai:
      OPENAI_MAIN_MODEL_NAME: "gpt-4.1"
credentials:
  boost:
    # Models you have deployed — set the full config string
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41_MINI: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1-mini,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41_NANO: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1-nano,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_4O: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4o,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_4O_MINI: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4o-mini,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_ADA: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/text-embedding-ada-002,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    # Models NOT deployed — must override with empty string to prevent startup crash
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O1: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O3: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O3_MINI: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O4_MINI: ""
See AI Configuration — Azure OpenAI for the full setup including custom subdomain requirements, all Boost config keys, model deployment, and SuperAdmin registration.
StorageClass is not specified in the example above — the cluster default will be used. On AKS, this is typically managed-csi or managed-premium. Verify with kubectl get storageclass. To override, add storageClassName to the postgresql.persistence, pgvector.persistence, and mongodb.persistence sections.
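For example, overriding at install time with --set (managed-premium is illustrative; confirm the exact key names against the Helm Values Reference):
helm install jitera ./charts/jitera \
  --namespace jitera \
  --values values-azure.yaml \
  --set postgresql.persistence.storageClassName=managed-premium \
  --set pgvector.persistence.storageClassName=managed-premium \
  --set mongodb.persistence.storageClass=managed-premium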
See Helm Values Reference for the complete parameter reference.

Step 2.5: Verify Default StorageClass

The Helm chart’s monitoring stack and in-cluster databases use PersistentVolumeClaims that rely on the cluster’s default StorageClass. On AKS, managed-csi is typically the default. Verify:
kubectl get storageclass   # Look for "(default)" next to managed-csi
If no default StorageClass exists, set one:
kubectl patch storageclass managed-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
A default StorageClass is required even when databases are externalized — the monitoring stack (Grafana, Prometheus, Loki, Tempo) still needs persistent volumes. Without a default StorageClass, pods will remain in Pending state with unbound immediate PersistentVolumeClaims. If your cluster uses managed-premium instead of managed-csi, substitute accordingly.

Step 3: Install Jitera

# Extract the Jitera Helm chart zip
unzip jitera-helm-chart.zip

# Install Jitera
helm install jitera ./charts/jitera \
  --namespace jitera \
  --values values-azure.yaml \
  --wait \
  --timeout 15m
The --timeout 15m flag is important for initial installation. Database initialization and migrations can take several minutes. If the installation times out, check pod status with kubectl get pods -n jitera before retrying.
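A quick diagnostic pass before retrying the install:
# Inspect what is still starting
kubectl get pods -n jitera
kubectl describe pod <pod-name> -n jitera

# Recent cluster events often show PVC, scheduling, or image-pull problems
kubectl get events -n jitera --sort-by=.lastTimestamp | tail -20

# Check the release state
helm status jitera -n jitera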

External Azure Services (Optional)

For production high-availability deployments, consider externalizing databases to managed services. Refer to the official Azure documentation for setup procedures:

Using Azure Database for PostgreSQL

Create an Azure Database for PostgreSQL Flexible Server running PostgreSQL 14.x. See External Services for the validated version. Jitera requires the following PostgreSQL extensions: btree_gist, citext, cube, pg_stat_statements, pg_trgm, pgcrypto, uuid-ossp, vector. These are installed automatically during deployment. On Azure Database for PostgreSQL Flexible Server, these extensions must be allow-listed via the azure.extensions server parameter before deployment. See PostgreSQL extensions in Azure Database for PostgreSQL for details.
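A sketch of the allow-listing with Azure CLI (server name illustrative; Azure expects the identifiers in the parameter's enumerated form, so confirm exact spellings against the linked Azure documentation):
az postgres flexible-server parameter set \
  --resource-group jitera-rg \
  --server-name jitera-postgres \
  --name azure.extensions \
  --value "BTREE_GIST,CITEXT,CUBE,PG_STAT_STATEMENTS,PG_TRGM,PGCRYPTO,UUID-OSSP,VECTOR"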
# values-azure.yaml additions
postgresql:
  enabled: false

externalPostgres:
  enabled: true
  host: jitera-postgres.postgres.database.azure.com
  port: "5432"
  dbName: jitera
  username: jiteraadmin
  password: "YourSecurePassword123!"

Using Azure Database for PostgreSQL (PGVector)

Create a separate Azure Database for PostgreSQL Flexible Server running PostgreSQL 16.x. See External Services for the validated version. Jitera requires the same set of PostgreSQL extensions as the primary database: btree_gist, citext, cube, pg_stat_statements, pg_trgm, pgcrypto, uuid-ossp, vector. These are installed automatically during deployment. On Azure Database for PostgreSQL Flexible Server, these extensions must be allow-listed via the azure.extensions server parameter. See pgvector on Azure Database for PostgreSQL for details.
# values-azure.yaml additions
pgvector:
  enabled: false

externalPgvector:
  enabled: true
  host: jitera-pgvector.postgres.database.azure.com
  port: "5432"
  database: jitera_pgvector
  username: jiteraadmin
  password: "YourSecurePassword123!"
  sslMode: disable
See External Services — PGVector for TLS limitations.

Using Azure Cosmos DB for MongoDB

See External Services — MongoDB for connection URI requirements.
# values-azure.yaml additions
mongodb:
  enabled: false

externalMongodb:
  enabled: true
  mongodb_uri: "mongodb://jitera-cosmos:xxxxx@jitera-cosmos.mongo.cosmos.azure.com:10255/jitera?ssl=true&replicaSet=globaldb"

Using Azure Cache for Redis

# values-azure.yaml additions
redis:
  enabled: false

externalRedis:
  enabled: true
  host: jitera-redis.redis.cache.windows.net
  port: 6380
  username: ""       # Required — leave empty for Azure Cache for Redis (no ACL username)
  password: "your-redis-access-key"
  useTls: true

Post-Installation Verification

Step 1: Check Pod Status

# All pods should be Running
kubectl get pods -n jitera

# Example output (abbreviated — the full deployment includes more pods):
# NAME                                READY   STATUS    RESTARTS   AGE
# jitera-automation-rails-xxxxx       1/1     Running   0          5m
# jitera-ultron-xxxxx                 1/1     Running   0          5m
# jitera-postgresql-0                 1/1     Running   0          5m
# jitera-mongodb-0                    1/1     Running   0          5m
# jitera-redis-master-0               1/1     Running   0          5m

Step 2: Get Load Balancer IP

kubectl get svc -n jitera -l app.kubernetes.io/name=kong
# Note the EXTERNAL-IP

Step 3: Configure DNS

Create A records pointing both hostnames (main domain and chat domain) to the load balancer IP. See Deployment Requirements for hostname details.
app.example.com     A  <load-balancer-ip>
chat.example.com    A  <load-balancer-ip>
# Get load balancer IP
LB_IP=$(kubectl get svc -n jitera -l app.kubernetes.io/name=kong -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')

# Create A records
az network dns record-set a add-record \
  --resource-group <YOUR_DNS_RESOURCE_GROUP> \
  --zone-name example.com \
  --record-set-name app \
  --ipv4-address $LB_IP

az network dns record-set a add-record \
  --resource-group <YOUR_DNS_RESOURCE_GROUP> \
  --zone-name example.com \
  --record-set-name chat \
  --ipv4-address $LB_IP
See Azure DNS documentation for detailed instructions.

Step 4: Verify TLS Certificate

# Check certificate is issued
kubectl get certificate -n jitera

# Should show READY = True
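You can also confirm from outside the cluster that the issued certificate covers both hostnames (domains illustrative):
# Inspect the subject alternative names served on 443
echo | openssl s_client -servername app.example.com \
  -connect app.example.com:443 2>/dev/null | \
  openssl x509 -noout -ext subjectAltName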

Troubleshooting

Pods Not Starting

# Check pod events
kubectl describe pod <pod-name> -n jitera

# Check logs
kubectl logs <pod-name> -n jitera

# Common issues:
# - Image pull errors: Verify registry credentials
# - Resource issues: Check node capacity
# - PVC issues: Verify storage class exists

Certificate Not Issuing

# Check certificate status
kubectl describe certificate jitera-tls -n jitera

# Check cert-manager logs
kubectl logs -n cert-manager -l app=cert-manager

# Common issues:
# - DNS not propagated
# - Ingress not accessible for HTTP-01 challenge
# - Rate limiting from Let's Encrypt

Load Balancer Issues

# Check Kong proxy service
kubectl get svc -n jitera -l app.kubernetes.io/name=kong

# Check Kong pod logs
kubectl logs -n jitera -l app.kubernetes.io/name=kong

# Common issues:
# - Quota limits exceeded
# - Network security group blocking traffic

Database Connection Issues

# Test PostgreSQL connection
kubectl run -it --rm psql --image=postgres:15 --restart=Never -- \
  psql -h jitera-postgresql -U jitera -d jitera

Storage Access Issues

# Check which storage provider is configured
kubectl get configmap jitera-ultron -n jitera -o jsonpath='{.data.STORAGE_DISK}'

# Check Azure credentials in the Ultron secret
kubectl get secret jitera-ultron -n jitera \
  -o jsonpath='{.data.AZURE_STORAGE_ACCOUNT_NAME}' | base64 -d

# Check storage config in the Automation secret (secrets.yml)
kubectl get secret jitera-automation -n jitera \
  -o jsonpath='{.data.secrets\.yml}' | base64 -d | grep -A 5 'storage_service\|azure:'

# Check storage configuration in the Ultron secret
kubectl get secret jitera-ultron -n jitera -o json | \
  jq '.data | to_entries[] | select(.key | test("AZURE_STORAGE")) | {(.key): (.value | @base64d)}'

# Test connectivity from a pod
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  curl -s -o /dev/null -w "%{http_code}" https://jiterastoragexxxx.blob.core.windows.net

# Verify storage account access
az storage container list \
  --account-name jiterastoragexxxx \
  --auth-mode key

Storage CORS Errors

If you see CORS errors in the browser console:
  1. Verify the CORS configuration includes your domain on the storage account
  2. Check that the allowed methods include all required HTTP methods
  3. Ensure the storage endpoint is accessible from the browser
az storage cors list --account-name jiterastoragexxxx --services b

Email Testing

# Access Rails console
kubectl exec -it deploy/jitera-automation-rails -n jitera -- rails console

# Send test email
ActionMailer::Base.mail(
  from: 'noreply@yourdomain.com',
  to: 'test@example.com',
  subject: 'Test Email',
  body: 'This is a test email from Jitera.'
).deliver_now
# Check environment variables
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  env | grep -i smtp

# Check automation logs for email errors
kubectl logs deploy/jitera-automation-rails -n jitera | grep -i mail

Email Connection Refused

# Verify SMTP server is reachable
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  telnet smtp.azurecomm.net 587

# Ensure outbound port 587 is allowed in your NSG rules

Email Authentication Failed

  1. Verify credentials are correct — the SMTP username format is <comm-service-name>.<entra-app-client-id>.<tenant-id>, and the password is the Entra ID app client secret (not the Communication Service connection string)
  2. Check that the email domain is connected in the Azure Portal (Communication Services > Email > Domains)
  3. Ensure default_from_email matches the MailFrom address shown in the Portal (DoNotReply@<random-uuid>.azurecomm.net)
# Check secret values
kubectl get secret jitera-mailer -n jitera -o yaml

Emails Not Delivered

  1. Check spam/junk folder
  2. Verify domain SPF/DKIM records (a spot-check follows after this list)
  3. Check if the domain is verified in Azure Communication Services
  4. Review the Azure Communication Services logs for delivery failures
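The DNS side can be spot-checked with dig. The DKIM selector comes from the connected domain's page in the Azure Portal; the one below is a placeholder:
# SPF record on the sending domain
dig TXT yourdomain.com +short

# DKIM record (selector shown in Azure Portal under the connected domain)
dig TXT <selector>._domainkey.yourdomain.com +short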

Deployment Deletion

Completely remove Jitera from the cluster.
This operation is irreversible. All in-cluster data (databases, caches, message queues) will be permanently deleted. Back up all data before proceeding.
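For the in-cluster PostgreSQL, a minimal dump before deletion might look like this (pod, user, and database names assume the defaults from this guide; see Backup and Restore for full procedures):
# Dump the primary database to a local file
kubectl exec -n jitera jitera-postgresql-0 -- \
  env PGPASSWORD="<DB_PASSWORD>" pg_dump -U <DB_USERNAME> <DB_NAME> \
  > jitera-backup.sql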
# Uninstall the Helm release
helm uninstall jitera -n jitera

# Delete the namespace (removes all remaining resources)
kubectl delete namespace jitera
External resources (storage accounts, Azure Database instances, Cosmos DB, Azure Cache for Redis, certificates, DNS records) are not deleted by helm uninstall. These must be removed separately through the Azure Portal, CLI, or Terraform.
