This page covers upgrading an existing Jitera Self-Hosted deployment to a newer Helm chart version. The procedure is the same on AWS EKS, Azure AKS, and on-premises clusters — only the data-store snapshot commands differ (covered in Backup and Restore).

Before You Upgrade

Run this checklist before every upgrade. Most failed upgrades come from a missed step here.
  1. Back up the databases. Before any chart change, back up the PostgreSQL databases (Automation, PGVector) and MongoDB / DocumentDB / Cosmos DB. Database migrations are forward-only — the chart’s post-install hook runs db:migrate and data:migrate, neither of which has a downgrade path. The exact procedure depends on your topology; see Backup and Restore for the per-store commands (a hedged sketch also follows this checklist).
    Backups are mandatory for upgrades that span a major version. When the gap between versions is wide, the upgrade can include numerous table DROPs and data migrations. helm rollback alone does not revert the schema, so parts of the application (admin rake tasks, code paths that reference dropped tables, etc.) will fail with PG::UndefinedTable. Without a backup, there is no complete recovery path.
    Record the backup reference (snapshot ID or dump file path) in the release runbook / release ticket. Having it immediately accessible at rollback decision time is a prerequisite for minimizing downtime.
  2. Capture the current Helm release state. Save the live values and record the revision number — both help if you need to roll back.
    helm -n jitera get values jitera > values-backup-$(date +%Y%m%d).yaml
    helm -n jitera history jitera
    
  3. Diff the values.yaml between the old chart and the new chart, focusing on keys you override:
    diff <old-chart-dir>/charts/jitera/values.yaml \
         <new-chart-dir>/charts/jitera/values.yaml
    
    Common keys that have changed historically: externalPostgres.*, externalPgvector.*, externalMongodb.*, kong.proxy.annotations, boost.api_config.*, credentials.*.
  4. Freeze concurrent workloads. Pause CI/CD pipelines that hit the cluster, drain background jobs in flight, and notify users — depending on the upgrade scope, the rollout can take 10–30 minutes and includes brief windows where services are unavailable.
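A minimal sketch of the backups in item 1, assuming an AWS topology (RDS for the Postgres instances, self-managed MongoDB); the instance identifier, hosts, and paths are placeholders, and Backup and Restore remains the authoritative per-store reference:
# Managed snapshot of the Automation Postgres instance (identifier is a placeholder)
aws rds create-db-snapshot \
  --db-instance-identifier jitera-automation \
  --db-snapshot-identifier pre-upgrade-$(date +%Y%m%d)

# Logical dump as a second copy (custom format; restorable with pg_restore)
pg_dump -h <pg-host> -U <admin> -Fc -d jitera -f /backup/jitera.dump
# Repeat both Postgres steps for the PGVector instance

# MongoDB dump (URI is a placeholder)
mongodump --uri "mongodb://<mongo-host>:27017" --out /backup/mongo-$(date +%Y%m%d)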

Releases and Versioning

Jitera Helm chart versions follow vYY.MM.DD(.PATCH). Patch releases (the optional .PATCH suffix) bundle bug fixes and image updates without schema changes. Minor / month bumps may include schema migrations.
Upgrade one minor version at a time. Skip-version upgrades are not supported. Schema migrations for skipped versions will not run, leaving the database in an inconsistent state. Step through every released minor version in order.
The chart zip is delivered out-of-band by Jitera support. Confirm you have the artifact for the version you intend to deploy before starting.
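To confirm the currently deployed chart version (and therefore which intermediate versions you must step through), list the release; the example chart name below is hypothetical:
helm -n jitera list
# The CHART column shows the deployed version, e.g. jitera-v25.01.15.
# Deploy every released version between it and your target, oldest first.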

Upgrade Procedure

The high-level flow is the same on every platform; only the chart-extraction location and the surrounding Helm commands differ. Follow your platform’s specific commands in the deployment guides linked at the end of this page.
  1. Obtain the new chart zip from Jitera (e.g., jitera-helm-<VERSION>.zip).
  2. Extract alongside the current chart (don’t overwrite — keep the previous chart directory available for rollback):
    cp /path/to/jitera-helm-<VERSION>.zip .
    unzip jitera-helm-<VERSION>.zip
    
  3. Update your Helm values (or Terraform variable) to point at the new chart path:
    # Terraform
    helm_chart_path = "./jitera-helm-<VERSION>/charts/jitera"
    
    # Manual Helm — preview first (requires the helm-diff plugin)
    helm -n jitera diff upgrade jitera ./jitera-helm-<VERSION>/charts/jitera \
      -f values.yaml
    
    # Manual Helm — apply
    helm -n jitera upgrade jitera ./jitera-helm-<VERSION>/charts/jitera \
      -f values.yaml --timeout 15m
    
  4. Plan and apply (Terraform-managed) — target the helm release to avoid unrelated drift:
    terraform plan  -target=helm_release.jitera -out=upgrade.tfplan
    terraform apply upgrade.tfplan
    
  5. Monitor the rollout. The chart’s post-install hook runs db:create, db:migrate, and data:migrate. If this hook fails, downstream pods (Automation, Ultron) crash-loop against an incomplete schema. Wait for the migration job to show Complete before trusting the rest of the rollout (a hedged wait command follows this list).
    kubectl -n jitera get pods -w
    kubectl -n jitera logs -l app=jitera-automation-db --tail=200
    
  6. Verify CLI authentication if you use the Jitera CLI (the chart’s post-install hook may strip PEM newlines on some platforms — confirm the secret looks right):
    kubectl -n jitera get secret jitera-ultron \
      -o jsonpath='{.data.CLI_ZIPPER_PRIVATE_KEY}' | base64 -d | head -1
    # Expected: -----BEGIN PRIVATE KEY-----
    
  7. Smoke-test the upgraded deployment. Visit the Jitera Studio app domain (expect HTTP 200), invite a test user (expect successful email delivery), and run jitera init from a test project (expect successful authentication and upload).
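Two hedged helpers for steps 5 and 7: the migration job name and the Studio domain below are assumptions, so substitute the real job name (see kubectl -n jitera get jobs) and your app domain:
# Step 5: block until the migration job completes (job name is an assumption)
kubectl -n jitera wait --for=condition=complete --timeout=15m \
  job/jitera-automation-db-migrate

# Step 7: HTTP smoke test against the Studio domain
curl -s -o /dev/null -w '%{http_code}\n' https://<studio-domain>/
# Expected: 200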

Rollback

If the upgrade fails and Helm’s automatic rollback (cleanup_on_fail) doesn’t fully recover, first identify which rollback scenario applies to the release pair you’re rolling back, then choose the strategy. The cases where helm rollback alone is sufficient are clearly separated from those that require a DB snapshot restore.

Rollback Scenario Decision

| # | Scenario | Description | Schema diff | helm rollback alone? | Recommended procedure |
|---|----------|-------------|-------------|----------------------|------------------------|
| 1 | Patch update (no schema diff) | Between patches within the same minor version | None (image tag only) | ✅ Fully OK | helm rollback only |
| 2 | Minor / major update (forward-compatible) | Upgrade that spans a minor or major version | Table DROPs and similar, but old code only references them from admin rake tasks | ⚠️ User-facing features OK, admin paths fail | helm rollback works, but DB restore (Scenario 3 procedure) recommended |
| 3 | Minor / major update (integrity risk) | Upgrade that spans a minor or major version, or writes occur during the rollback window | Same as Scenario 2, plus dropped tables are referenced by live code, or writes occur between rollback and re-upgrade | ❌ Feature outage / data inconsistency | DB snapshot restore is required |
Decision steps:
  1. Diff charts/jitera/templates/ and values.yaml between the old and new charts
  2. List db/migrate/ and db/data/ inside both the old and new Rails images and diff the file names:
    docker run --rm <jitera_automation:OLD_TAG> ls /web/db/migrate/ > /tmp/old.txt
    docker run --rm <jitera_automation:NEW_TAG> ls /web/db/migrate/ > /tmp/new.txt
    diff /tmp/old.txt /tmp/new.txt
    
  3. The release is Scenario 1 only if both of the following hold:
    • No structural change to the chart (templates/ and values.yaml keys) — i.e. step 1 shows only an image-tag diff, and
    • No new db/migrate/* or db/data/* files in the new image — i.e. step 2 is empty
    Even when the chart has no structural changes, the Rails image can still ship new migrations (patch releases sometimes bundle a migration). Classifying a release as Scenario 1 from chart diff alone risks PG::UndefinedTable and similar errors after helm rollback, so always check the image’s db/migrate/ as well.
  4. If step 3 does not hold, inspect whether the diffed migrations contain destructive operations (Drop*Table, Drop*Column, NotNull, type changes)
  5. If destructive migrations are present, grep the old image to check whether the dropped tables / columns are referenced from user-facing code paths (hedged sketches for this and the previous step follow the note below)
  6. If writes can occur during the rollback window, or full data consistency is required, escalate to Scenario 3
When in doubt, default to Scenario 3. Issues from misclassifying Scenario 3 as Scenario 2 often surface days later (when an admin operation hits the missing table), and that detection lag makes them dangerous.
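Hedged sketches for steps 4 and 5, assuming the images ship grep and keep app code under /web (consistent with the /web/db/migrate path used in step 2); the table name is a placeholder:
# Step 4: flag migrations containing destructive operations
docker run --rm <jitera_automation:NEW_TAG> \
  grep -rlE 'drop_table|remove_column|change_column_null' /web/db/migrate/

# Step 5: check whether a dropped table is referenced by user-facing code
docker run --rm <jitera_automation:OLD_TAG> \
  grep -rn 'mobile_app_configs' /web/app /web/lib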

Scenario 1: Patch Update Rollback

With no chart structural change and no new migrations, helm rollback alone fully recovers. No DB restore or app scale-down is needed. Terraform-managed rollback:
# 1. Revert helm_chart_path in tfvars to the prior version
git checkout terraform.tfvars   # if committed
# or manually edit it back

# 2. Let Terraform downgrade the release
terraform apply -target=helm_release.jitera
Helm-direct rollback (if Terraform downgrade produces destructive diffs):
helm -n jitera history jitera
helm -n jitera rollback jitera <prev-revision>
kubectl -n jitera rollout status deploy/jitera-automation-rails
terraform refresh   # re-sync state if applicable

Scenario 2: Forward-Compatible Minor / Major Rollback

After running helm rollback, always verify:
# Run helm rollback
helm -n jitera rollback jitera <prev-revision>
kubectl -n jitera rollout status deploy/jitera-automation-rails

# Inspect "NO FILE" rows in db:migrate:status (cosmetic, but a hint at impact surface)
kubectl -n jitera exec deploy/jitera-automation-rails -- \
  bundle exec rails db:migrate:status | grep "NO FILE"

# Manually verify primary UI flows (sign-in, dashboard, project list)
Preconditions for treating a rollback as Scenario 2:
  • You don’t operate features that use the dropped tables / columns (mobile app, web email, etc.)
  • No writes (new project creation, etc.) occur during the rollback window
  • You can tolerate minor inconsistencies (admin-side PG::UndefinedTable, schema_migrations left at the new-version count, etc.)
If any of these don’t hold, use the Scenario 3 procedure (with DB snapshot restore).
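A hedged spot-check for the “no writes during the rollback window” precondition; Project is an assumed model name, so substitute a model your users actually create:
kubectl -n jitera exec deploy/jitera-automation-rails -- \
  bundle exec rails runner \
  "puts Project.where('created_at > ?', 30.minutes.ago).count"
# Expected: 0 if nothing was written during the window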

Scenario 3: Rollback with Integrity Risk (DB Snapshot Restore Required)

Three risks make helm rollback alone insufficient:
  • Risk A: Old code references tables that no longer exist, causing user-facing operations to fail with PG::UndefinedTable
  • Risk B: Writes that land during the rollback window are skipped on re-upgrade: Rails records each data migration as already run and never re-executes it, so rows created during the window keep NULL in newly introduced columns, breaking integrity
  • Risk C: pg_restore -c (clean mode) leaves new-version ENUM types and tables orphaned, causing the re-upgrade’s migration job to fail with PG::DuplicateObject

Data stores to restore

The procedure below uses the Automation PostgreSQL as the worked example. Restore every snapshot you captured in the pre-upgrade checklist at the same time. Per-store restore commands are documented in Backup and Restore:
| Data store | Restore procedure |
|------------|-------------------|
| Automation PostgreSQL | Restore PostgreSQL — same pattern as the procedure below |
| PGVector PostgreSQL | Restore PostgreSQL — apply the same procedure to the PGVector instance (returns the embedding vectors to a state consistent with the old version) |
| MongoDB / DocumentDB / Cosmos DB | Restore MongoDB — mongorestore or the cloud provider’s snapshot-restore flow |
Do not restore only Automation Postgres. data:migrate can also mutate MongoDB documents and PGVector embeddings, so rolling back one data store while leaving the others on the new-version state breaks cross-store consistency. Restore every data store from the same snapshot generation captured in the pre-upgrade checklist.
Do not use pg_restore -c (clean mode). pg_restore -c leaves new-version ENUM types and tables as orphans, and the next upgrade’s migration job fails with CREATE TYPE ... already exists. Use the restore-to-a-new-instance + repoint flow below instead.
helm rollback rolls back the entire manifest, including replicas on each Deployment, so any pods you scaled to 0 beforehand will come right back. Attempting DROP DATABASE / pg_restore against the same instance after that produces:
  • DROP DATABASE fails with database "jitera" is being accessed by other users
  • Even if the connection briefly drops and DROP succeeds, the revived old-version pods reconnect to the new-version schema and hit PG::UndefinedTable
  • Sidekiq retry queues and caches accumulate inconsistent state
To sidestep the race entirely, restore the snapshot to a new instance, repoint the chart at the new endpoint, and only then run helm rollback:
# === Prerequisite: snapshot taken before the upgrade ===
# See step 1 of "Before You Upgrade".

# === Rollback ===

# 1. Scale down the app tier (stop user traffic + release DB connections)
for d in jitera-automation-rails jitera-automation-rpc \
         jitera-automation-sidekiq jitera-automation-sidekiq-priority \
         jitera-hasura jitera-boost jitera-ultron jitera-ultron-public; do
  kubectl -n jitera scale deploy $d --replicas=0
done
kubectl -n jitera wait --for=delete --timeout=5m \
  -l 'app in (jitera-automation-rails,jitera-automation-rpc,jitera-automation-sidekiq,jitera-automation-sidekiq-priority,jitera-hasura,jitera-boost,jitera-ultron,jitera-ultron-public)' pod

# 2. Restore the snapshot to a new RDS / PostgreSQL Flexible Server instance
#    AWS example:
ROLLBACK_ID="jitera-rollback-$(date +%Y%m%d-%H%M)"   # capture once so both commands agree
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier "$ROLLBACK_ID" \
  --db-snapshot-identifier <PRE_UPGRADE_SNAPSHOT_ID>
aws rds wait db-instance-available \
  --db-instance-identifier "$ROLLBACK_ID"
#    For Azure / GCP equivalents, see [Backup and Restore].

# 3. Update the Helm values to point at the restored endpoint
#    e.g. set externalPostgres.host in values.yaml to the new endpoint
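#    Hedged helper using yq v4 (the key matches the chart's externalPostgres.*
#    values; the endpoint is a placeholder):
yq -i '.externalPostgres.host = "<RESTORED_ENDPOINT>"' values.yaml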

# 4. Re-deploy the previous chart version with the updated values
#    (helm rollback does not accept -f; use helm upgrade pointed at the chart
#    directory you kept from the previous release)
helm -n jitera upgrade jitera ./jitera-helm-<OLD_VERSION>/charts/jitera \
  -f values.yaml --timeout 15m

# 5. Wait for the rollout to complete
for d in jitera-automation-rails jitera-automation-rpc \
         jitera-automation-sidekiq jitera-automation-sidekiq-priority \
         jitera-hasura jitera-boost jitera-ultron jitera-ultron-public; do
  kubectl -n jitera rollout status deploy/$d --timeout=5m
done

# 6. Verify
kubectl -n jitera exec deploy/jitera-automation-rails -- \
  bundle exec rails db:migrate:status | grep "NO FILE" | wc -l
# Expected: 0

kubectl -n jitera exec deploy/jitera-automation-rails -- \
  bundle exec rails runner "puts Organization.count"
# Expected: numeric result (no PG::UndefinedTable)
For <OLD_VERSION>, reuse the previous chart directory you kept in step 2 of the upgrade procedure. In Scenarios 1 and 2, <prev-revision> is the revision number recorded in step 2 of “Before You Upgrade” (from helm history).
The scale-down list above targets workloads that hold DB connections. User-facing pods (Frontend, SWEF, Playwright, Document Converter, etc.) need to be isolated separately — show a maintenance page or switch to maintenance mode for those.
Estimated downtime:
| Step | Downtime |
|------|----------|
| App scale-down | 1–5 min |
| Snapshot restore (new instance creation) | 5–30 min (depends on DB size / cloud provider) |
| Endpoint switch + Helm re-deploy | 1–3 min |
| Rollout completion | 2–10 min |

Alternative: restore in place to the same instance (small-scale environments)

If you can’t create a new instance (cost / quota constraints) and must reuse the existing one, use the procedure below. The key difference is that you must scale down a second time after helm rollback — because the rollback resets replicas, the pods you scaled down in step 1 come back up, and DROP DATABASE will fail unless you re-drain them first.
# 1. Scale down the app tier
for d in jitera-automation-rails jitera-automation-rpc \
         jitera-automation-sidekiq jitera-automation-sidekiq-priority \
         jitera-hasura jitera-boost jitera-ultron jitera-ultron-public; do
  kubectl -n jitera scale deploy $d --replicas=0
done
kubectl -n jitera wait --for=delete --timeout=5m \
  -l 'app in (jitera-automation-rails,jitera-automation-rpc,jitera-automation-sidekiq,jitera-automation-sidekiq-priority,jitera-hasura,jitera-boost,jitera-ultron,jitera-ultron-public)' pod

# 2. Roll back Helm resources
helm -n jitera rollback jitera <prev-revision>

# 3. The rollback restored the replica counts; scale the app tier down again
for d in jitera-automation-rails jitera-automation-rpc \
         jitera-automation-sidekiq jitera-automation-sidekiq-priority \
         jitera-hasura jitera-boost jitera-ultron jitera-ultron-public; do
  kubectl -n jitera scale deploy $d --replicas=0
done
kubectl -n jitera wait --for=delete --timeout=5m \
  -l 'app in (jitera-automation-rails,jitera-automation-rpc,jitera-automation-sidekiq,jitera-automation-sidekiq-priority,jitera-hasura,jitera-boost,jitera-ultron,jitera-ultron-public)' pod

# 4. Wipe the DB and restore from snapshot
#    DROP / CREATE must run while connected to a different database (e.g. `postgres`),
#    not to the target database itself.
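#    Terminate any leftover sessions first so DROP DATABASE doesn't fail with
#    "is being accessed by other users" (hedged helper; skips your own backend):
psql -h <pg-host> -U <admin> -d postgres \
  -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'jitera' AND pid <> pg_backend_pid();"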
psql -h <pg-host> -U <admin> -d postgres -c "DROP DATABASE jitera"
psql -h <pg-host> -U <admin> -d postgres -c "CREATE DATABASE jitera OWNER jitera"
pg_restore -h <pg-host> -U <admin> -d jitera --no-owner --no-acl /backup/jitera.dump

# 5. Scale the app tier back up (match your configured replica counts; 1 shown)
for d in jitera-automation-rails jitera-automation-rpc \
         jitera-automation-sidekiq jitera-automation-sidekiq-priority \
         jitera-hasura jitera-boost jitera-ultron jitera-ultron-public; do
  kubectl -n jitera scale deploy $d --replicas=1
done
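The in-place flow ends without an explicit check; once the pods are back, run the same verification as step 6 of the new-instance flow:
kubectl -n jitera exec deploy/jitera-automation-rails -- \
  bundle exec rails db:migrate:status | grep "NO FILE" | wc -l
# Expected: 0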

AWS EKS Deployment

Full deployment guide, including the platform-specific upgrade procedure

Azure AKS Deployment

Full deployment guide, including the platform-specific upgrade procedure

Maintenance Overview

Backup, restore, and routine maintenance tasks

Troubleshooting

Incident workflow and common issues