The Azure CLI commands in this section are provided as examples. Your environment may require different configurations. Refer to the official Azure documentation for detailed instructions.

If using external managed services in production (Azure Database for PostgreSQL, Cosmos DB, Azure Cache for Redis, Azure Communication Services): start provisioning these in parallel with AKS cluster creation (Step 1). Managed databases typically take 10–20 minutes to become available, and they must be ready before the Helm install. See the External Azure Services section for configuration details.
Create an AKS cluster that meets the cluster specifications. The example below uses Azure CLI:
This example provisions 3x Standard_D4s_v3 nodes (12 vCPU, 48 GB total), which meets the evaluation minimum. For production deployments, use Standard_D8s_v3 instances or increase node count to meet the production minimum (16 cores, 64 GB). See the Sizing Guide for tier recommendations.
```bash
# Set variables
export RESOURCE_GROUP="jitera-rg"
export LOCATION="japaneast"
export CLUSTER_NAME="jitera-cluster"

# Create resource group
az group create --name $RESOURCE_GROUP --location $LOCATION

# Create AKS cluster
# Confirm the Kubernetes version is supported in your region first:
#   az aks get-versions --location $LOCATION --output table
# --network-plugin azure pins Azure CNI node-subnet mode (see the AKS networking note below)
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --kubernetes-version 1.35 \
  --network-plugin azure \
  --enable-managed-identity \
  --generate-ssh-keys

# Get credentials
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
```
This example uses default networking for simplicity. For production deployments, configure a custom VNet with subnets and Network Security Groups (NSGs) for network isolation. See the AKS networking documentation for guidance.
Load balancer and inbound IP restriction. Jitera provisions an Azure Standard Load Balancer (SLB) by default via the Kong ingress controller. Application Gateway and Front Door are not supported as the ingress layer (but can sit in front of the SLB for WAF — see below). For Layer-4 IP allow-listing, pre-create a Network Security Group (NSG) and associate it with the AKS node subnet. This decouples the NSG lifecycle from the Helm chart and filters traffic at the subnet boundary, outside the SLB's DSR session-tracking path.

Do not use kong.proxy.loadBalancerSourceRanges (or the service.beta.kubernetes.io/azure-allowed-ip-ranges annotation) on Azure. Setting a source-range filter on the Standard LB breaks same-VNet hairpin traffic — pod-to-own-LB-IP connections (Boost → Automation, jitera init, cert-manager HTTP-01 self-check) time out because the SYN is accepted but the SYN-ACK return path is dropped. The DSR session-tracking bug only triggers when the filter is applied at the LB frontend, not at the subnet NSG.

At minimum, the NSG inbound rules should allow the following (a sketch of the setup follows the list):
443/tcp from your trusted client CIDRs (office / VPN)
443/tcp from the VirtualNetwork service tag (required for pod hairpin and intra-cluster traffic to the public ingress domain)
Everything else denied by an explicit Deny-All rule (or rely on the default deny)
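A sketch of this setup with Azure CLI; the NSG name, rule names, and priorities are illustrative, and <vnet-name>/<subnet-name> refer to the AKS node subnet:

```bash
# Create the NSG (lifecycle independent of the Helm chart)
az network nsg create --resource-group $RESOURCE_GROUP --name jitera-ingress-nsg

# Allow 443/tcp from trusted client CIDRs (office / VPN)
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name jitera-ingress-nsg \
  --name allow-https-trusted --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 \
  --source-address-prefixes <office-cidr> <vpn-cidr>

# Allow 443/tcp from within the VNet (pod hairpin / intra-cluster traffic)
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name jitera-ingress-nsg \
  --name allow-https-vnet --priority 110 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 \
  --source-address-prefixes VirtualNetwork

# Explicit deny for everything else inbound
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name jitera-ingress-nsg \
  --name deny-all-inbound --priority 4096 --direction Inbound --access Deny \
  --protocol '*' --destination-port-ranges '*' --source-address-prefixes '*'

# Associate the NSG with the AKS node subnet
az network vnet subnet update --resource-group $RESOURCE_GROUP \
  --vnet-name <vnet-name> --name <subnet-name> --network-security-group jitera-ingress-nsg
```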
Azure WAF. The Azure Standard Load Balancer is L4-only and does not provide WAF. For L7 WAF, place Azure Application Gateway with WAF_v2 or Azure Front Door with WAF in front of the SLB. These sit ahead of Kong — Kong remains the cluster-internal ingress.
Trusted proxy. Narrow kong.env.trusted_ips (chart default 0.0.0.0/0,::/0) and Jitera’s application-level trusted proxies to your LB / Application Gateway / Front Door source CIDRs in production. See Trusted Proxy.
AKS networking. Jitera is verified with Azure CNI (traditional / node-subnet mode) — not Azure CNI Overlay and not the Cilium dataplane. Pod IPs come from the node subnet, so the NSG attached to that subnet applies to both nodes and pods, which is what the inbound IP restriction above relies on. If your az aks create CLI version defaults to a different mode, pass --network-plugin azure explicitly.
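One way to confirm the mode on an existing cluster:

```bash
# Expect plugin "azure" and an empty mode (Overlay clusters report mode "overlay")
az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME \
  --query "networkProfile.{plugin:networkPlugin, mode:networkPluginMode}" -o table
```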
Custom VNet requires Network Contributor role. When deploying AKS into a bring-your-own (BYO) VNet, the cluster’s managed identity does not automatically have permission to join the subnet. LoadBalancer Services fail to provision public IPs and stay in <pending> state with LinkedAuthorizationFailed errors visible in service events.

Grant the Network Contributor role on the subnet to the AKS cluster’s system-assigned managed identity:
```bash
# Get the AKS managed identity principal ID
PRINCIPAL_ID=$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME \
  --query identity.principalId -o tsv)

# Get the subnet ID
SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP \
  --vnet-name <vnet-name> --name <subnet-name> --query id -o tsv)

# Grant Network Contributor role
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Network Contributor" \
  --scope "$SUBNET_ID"
```
Not required when using the default AKS-managed VNet.
Create a storage account with 4 containers matching the access levels listed in Storage Configuration, and configure CORS on the storage account.
The public container must have its access level set to Blob (anonymous read access for blobs only). All other containers should remain private.
CORS AllowedOrigins must include both your main domain and chat domain (e.g., https://app.example.com and https://chat.example.com). Missing origins will cause file upload failures in the application.
The Helm chart requires a storage account access key (STORAGE_ACCOUNT_NAME and STORAGE_ACCESS_KEY). Workload Identity and Managed Identity are not currently supported for application storage. The access key provides full access to the storage account — ensure the storage account is dedicated to Jitera or restrict network access using Azure Storage firewalls.
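A hedged sketch of this setup with Azure CLI; the account and container names are placeholders, and the four real names and access levels come from Storage Configuration:

```bash
# Create the storage account (dedicated to Jitera);
# --allow-blob-public-access is needed for the Blob access level below
az storage account create --resource-group $RESOURCE_GROUP \
  --name <storageaccountname> --location $LOCATION \
  --sku Standard_LRS --allow-blob-public-access true

# Retrieve the access key used by the Helm chart (STORAGE_ACCESS_KEY)
STORAGE_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP \
  --account-name <storageaccountname> --query "[0].value" -o tsv)

# The public container gets the Blob access level (anonymous read, blobs only)
az storage container create --name <public-container> --public-access blob \
  --account-name <storageaccountname> --account-key "$STORAGE_KEY"

# The other three containers stay private (omit --public-access)
az storage container create --name <private-container> \
  --account-name <storageaccountname> --account-key "$STORAGE_KEY"

# CORS must include both the main and chat domains
az storage cors add --services b \
  --methods GET HEAD PUT POST DELETE OPTIONS \
  --origins https://app.example.com https://chat.example.com \
  --allowed-headers '*' --exposed-headers '*' --max-age 3600 \
  --account-name <storageaccountname> --account-key "$STORAGE_KEY"
```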
SMTP credentials are not the Communication Service connection string or access key. The SMTP username is constructed from three Entra ID values (<communication-service-name>.<entra-app-client-id>.<tenant-id>) and the password is the Entra ID application’s client secret. Do not use az communication list-key output for SMTP authentication.
```yaml
mailer:
  smtp_settings:
    address: smtp.azurecomm.net
    user_name: "<ACS_SMTP_USERNAME>"
    password: "<ACS_SMTP_PASSWORD>"
  default_from_email: "DoNotReply@<random-uuid>.azurecomm.net"  # must match the MailFrom address from Azure Portal
```
The certificate must cover both hostnames (main domain and chat domain).
Self-signed certificates are not supported.
Install cert-manager for automatic certificate management with Let’s Encrypt. See cert-manager documentation for installation and ClusterIssuer configuration.
Example: Install cert-manager and create ClusterIssuer
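A minimal sketch, assuming a Helm install from the Jetstack repository and HTTP-01 challenges routed through the Kong ingress class; substitute a current cert-manager version and your own notification email:

```bash
# Install cert-manager with its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set crds.enabled=true   # older chart versions use --set installCRDs=true

# Create a Let's Encrypt ClusterIssuer named letsencrypt-prod
# (the name referenced by the Helm values annotation below)
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            ingressClassName: kong
EOF
```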
The HTTP-01 solver above requires Kong to correctly route ACME challenge requests to /.well-known/acme-challenge/. If you experience certificate issuance failures, consider using a DNS-01 solver instead, which avoids ingress routing dependencies entirely.
The Helm values use cert-manager.io/cluster-issuer: letsencrypt-prod annotation to request certificates automatically (see Step 3: Create Values File).
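Before creating the values file, create the image pull secret in the jitera namespace. A sketch, assuming Docker-registry credentials issued by Jitera (the server URL and credential values are placeholders):

```bash
# The namespace must exist before the secret can be created in it
kubectl create namespace jitera

# Create the pull secret referenced by imagePullSecrets in the Helm values
kubectl create secret docker-registry jitera-registry \
  --namespace jitera \
  --docker-server=<registry-server> \
  --docker-username=<registry-username> \
  --docker-password=<registry-password>
```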
The secret name (jitera-registry) must match the imagePullSecrets entry in your Helm values file. A mismatch will cause image pull failures across all pods.
Create a values file for your Azure deployment. All placeholder values (<...>) must be replaced with your actual configuration. Parameters not listed here use sensible defaults — see Helm Values Reference for the full list.
If your environment restricts Docker Hub access, you can redirect all image pulls to the Jitera ACR registry. See Container Registry for the configuration.
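A hedged skeleton assembled only from keys discussed in this guide; the authoritative key paths and full parameter set are in the Helm Values Reference:

```yaml
# Illustrative layout only; consult the Helm Values Reference for exact paths
imagePullSecrets:
  - name: jitera-registry            # must match the pull secret created above

kong:
  env:
    trusted_ips: "<lb-source-cidrs>" # narrow from the 0.0.0.0/0,::/0 default

mailer:
  smtp_settings:
    address: smtp.azurecomm.net
    user_name: "<ACS_SMTP_USERNAME>"
    password: "<ACS_SMTP_PASSWORD>"
```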
Choose one primary provider and add the corresponding configuration to your values file. The AI_MODE setting determines Ultron’s primary LLM routing. Additional providers (AWS Bedrock/Claude, Anthropic Direct API, Google Gemini, vLLM) can be configured alongside — see AI Configuration for details.
Azure OpenAI
```yaml
openai:
  AI_MODE: azure
  secretKeys:
    azure:
      AZURE_OPENAI_KEYS: '["<AZURE_OPENAI_KEY>"]'
      AZURE_OPENAI_INSTANCE_NAMES: '["<AZURE_OPENAI_INSTANCE_NAME>"]'
      AZURE_OPENAI_VERSION: "2024-10-21"
      AZURE_OPENAI_DEVELOPMENT_NAME: "gpt-4.1"
      AZURE_OPENAI_EMBEDDING_DEVELOPMENT_NAME: "text-embedding-ada-002"
      AZURE_OPENAI_VISION_DEVELOPMENT_NAME: "gpt-4o"
      AZURE_OPENAI_GPT_4O_DEVELOPMENT_NAME: "gpt-4o"
      AZURE_OPENAI_GPT_4O_MINI_DEVELOPMENT_NAME: "gpt-4o-mini"
    openai:
      OPENAI_MAIN_MODEL_NAME: "gpt-4.1"

credentials:
  boost:
    # Models you have deployed — set the full config string
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41_MINI: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1-mini,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_41_NANO: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4.1-nano,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_4O: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4o,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_4O_MINI: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/gpt-4o-mini,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_ADA: "behavior=azure,url=https://<INSTANCE>.openai.azure.com/openai/deployments/text-embedding-ada-002,headers={\"api-key\": \"<KEY>\"},query_params={\"api-version\": \"2024-10-21\"}"
    # Models NOT deployed — must override with empty string to prevent startup crash
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O1: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O3: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O3_MINI: ""
    JITERA_BOOST_API_CONFIG_AZURE_INSTANCE_1_O4_MINI: ""
```
See AI Configuration — Azure OpenAI for the full setup including custom subdomain requirements, all Boost config keys, model deployment, and SuperAdmin registration.
StorageClass is not specified in the example above — the cluster default will be used. On AKS, this is typically managed-csi or managed-premium. Verify with kubectl get storageclass. To override, add storageClassName to the postgresql.persistence, pgvector.persistence, and mongodb.persistence sections.
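For example, to pin the in-cluster databases to managed-premium (illustrative; only needed when overriding the cluster default):

```yaml
postgresql:
  persistence:
    storageClassName: managed-premium
pgvector:
  persistence:
    storageClassName: managed-premium
mongodb:
  persistence:
    storageClassName: managed-premium
```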
The Helm chart’s monitoring stack and in-cluster databases use PersistentVolumeClaims that rely on the cluster’s default StorageClass. On AKS, managed-csi is typically the default. Verify:
```bash
kubectl get storageclass
# Look for "(default)" next to managed-csi
```
A default StorageClass is required even when databases are externalized — the monitoring stack (Grafana, Prometheus, Loki, Tempo) still needs persistent volumes. Without a default StorageClass, pods will remain in Pending state with unbound immediate PersistentVolumeClaims. If your cluster uses managed-premium instead of managed-csi, substitute accordingly.
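The install itself follows the usual Helm pattern; a sketch, assuming the chart reference from your Jitera onboarding (placeholder below) and the values file from Step 3:

```bash
helm upgrade --install jitera <jitera-chart-reference> \
  --namespace jitera --create-namespace \
  --values values.yaml \
  --timeout 15m \
  --wait
```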
The --timeout 15m flag is important for initial installation. Database initialization and migrations can take several minutes. If the installation times out, check pod status with kubectl get pods -n jitera before retrying.
For production high-availability deployments, consider externalizing databases to managed services. Refer to the official Azure documentation for setup procedures:
Create an Azure Database for PostgreSQL Flexible Server running PostgreSQL 14.x. See External Services for the validated version.

Jitera requires the following PostgreSQL extensions: btree_gist, citext, cube, pg_stat_statements, pg_trgm, pgcrypto, uuid-ossp, vector. These are installed automatically during deployment. On Azure Database for PostgreSQL Flexible Server, these extensions must be allow-listed via the azure.extensions server parameter before deployment. See PostgreSQL extensions in Azure Database for PostgreSQL for details.
Create a separate Azure Database for PostgreSQL Flexible Server running PostgreSQL 16.x. See External Services for the validated version.

Jitera requires the same set of PostgreSQL extensions as the primary database: btree_gist, citext, cube, pg_stat_statements, pg_trgm, pgcrypto, uuid-ossp, vector. These are installed automatically during deployment. On Azure Database for PostgreSQL Flexible Server, these extensions must be allow-listed via the azure.extensions server parameter. See pgvector on Azure Database for PostgreSQL for details.
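A sketch of provisioning plus extension allow-listing with Azure CLI; the server name, SKU, and admin credentials are placeholders, and the same two commands with --version 16 cover the pgvector server:

```bash
# Provision the primary server (PostgreSQL 14)
az postgres flexible-server create \
  --resource-group $RESOURCE_GROUP \
  --name <pg-server-name> \
  --location $LOCATION \
  --version 14 \
  --tier GeneralPurpose \
  --sku-name Standard_D2s_v3 \
  --admin-user <admin-user> \
  --admin-password <admin-password>

# Allow-list the required extensions before the Helm install
az postgres flexible-server parameter set \
  --resource-group $RESOURCE_GROUP \
  --server-name <pg-server-name> \
  --name azure.extensions \
  --value "BTREE_GIST,CITEXT,CUBE,PG_STAT_STATEMENTS,PG_TRGM,PGCRYPTO,UUID-OSSP,VECTOR"
```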
```bash
# Access Rails console
kubectl exec -it deploy/jitera-automation-rails -n jitera -- rails console
```

```ruby
# Send test email (run inside the Rails console)
ActionMailer::Base.mail(
  from: 'noreply@yourdomain.com',
  to: 'test@example.com',
  subject: 'Test Email',
  body: 'This is a test email from Jitera.'
).deliver_now
```
```bash
# Verify SMTP server is reachable
kubectl exec -it deploy/jitera-automation-rails -n jitera -- \
  telnet smtp.azurecomm.net 587

# Ensure outbound port 587 is allowed in your NSG rules
```
Verify credentials are correct — the SMTP username format is <comm-service-name>.<entra-app-client-id>.<tenant-id>, and the password is the Entra ID app client secret (not the Communication Service connection string)
Check that the email domain is connected in the Azure Portal (Communication Services > Email > Domains)
Ensure default_from_email matches the MailFrom address shown in the Portal (DoNotReply@<random-uuid>.azurecomm.net)
This operation is irreversible. All in-cluster data (databases, caches, message queues) will be permanently deleted. Back up all data before proceeding.
```bash
# Uninstall the Helm release
helm uninstall jitera -n jitera

# Delete the namespace (removes all remaining resources)
kubectl delete namespace jitera
```
External resources (storage accounts, Azure Database instances, Cosmos DB, Azure Cache for Redis, certificates, DNS records) are not deleted by helm uninstall. These must be removed separately through the Azure Portal, CLI, or Terraform.
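If everything was created in a single dedicated resource group, one way to sweep the remainder (destructive; first confirm nothing shared lives in the group):

```bash
# Review what remains before deleting anything
az resource list --resource-group $RESOURCE_GROUP -o table

# Delete the entire resource group and everything in it
az group delete --name $RESOURCE_GROUP --yes
```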