This page covers upgrading an existing Jitera Self-Hosted deployment to a newer Helm chart version. The procedure is the same on AWS EKS, Azure AKS, and on-premises clusters — only the data-store snapshot commands differ (covered in Backup and Restore).
Before You Upgrade
Run this checklist before every upgrade. Most failed upgrades come from a missed step here.

1. Back up the databases. Before any chart change, back up the PostgreSQL databases (Automation, PGVector) and MongoDB / DocumentDB / Cosmos DB. Database migrations are forward-only — the chart’s post-install hook runs `db:migrate` and `data:migrate`, neither of which has a downgrade path. The exact procedure depends on your topology:
   - Managed databases (RDS / Cloud SQL / Cosmos DB, etc.): follow the cloud-provider snapshot steps in Backup and Restore.
   - In-cluster databases: follow the `pg_dump` / `mongodump` steps in PostgreSQL Backup and MongoDB Backup.
2. Capture the current Helm release state. Save the live values and record the revision number — both help if you need to roll back (see the sketch after this checklist).
3. Diff the `values.yaml` between the old chart and the new chart, focusing on keys you override. Common keys that have changed historically: `externalPostgres.*`, `externalPgvector.*`, `externalMongodb.*`, `kong.proxy.annotations`, `boost.api_config.*`, `credentials.*`.
4. Freeze concurrent workloads. Pause CI/CD pipelines that hit the cluster, drain background jobs in flight, and notify users — depending on the upgrade scope, the rollout can take 10–30 minutes and includes brief windows where services are unavailable.
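A minimal sketch for checklist steps 2 and 3. The release name, namespace, and chart directory layout (`charts/jitera/values.yaml` inside the extracted zip) are assumptions; substitute the names used in your environment:

```bash
# Assumed release and namespace names -- substitute your own.
RELEASE=jitera
NAMESPACE=jitera

# Step 2: save the live override values and record the current revision number.
helm get values "$RELEASE" -n "$NAMESPACE" > values-live-$(date +%Y%m%d).yaml
helm history "$RELEASE" -n "$NAMESPACE"

# Step 3: diff the default values shipped with the old and new charts,
# then check the keys you override against the result.
diff -u jitera-helm-<OLD_VERSION>/charts/jitera/values.yaml \
        jitera-helm-<NEW_VERSION>/charts/jitera/values.yaml
```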
Releases and Versioning
Jitera Helm chart versions follow `vYY.MM.DD(.PATCH)`. Patch releases (the optional `.PATCH` suffix) bundle bug fixes and image updates without schema changes. Minor / month bumps may include schema migrations.
Upgrade one minor version at a time. Skip-version upgrades are not supported. Schema migrations for skipped versions will not run, leaving the database in an inconsistent state. Step through every released minor version in order.
Upgrade Procedure
The high-level flow is the same on every platform; only the chart-extraction location and the surrounding Helm or Terraform commands differ. Follow your platform’s specific commands in the deployment guides linked under Related Documentation. A consolidated command sketch for steps 2–6 follows the list.

1. Obtain the new chart zip from Jitera (e.g., `jitera-helm-<VERSION>.zip`).
2. Extract it alongside the current chart (don’t overwrite — keep the previous chart directory available for rollback).
3. Update your Helm values (or Terraform variable) to point at the new chart path.
4. Plan and apply (Terraform-managed) — target the Helm release to avoid unrelated drift.
5. Monitor the rollout. The chart’s post-install hook runs `db:create`, `db:migrate`, and `data:migrate`. If this hook fails, downstream pods (Automation, Ultron) crash-loop against an incomplete schema. Wait for the migration job to show `Complete` before trusting the rest of the rollout.
6. Verify CLI authentication if you use the Jitera CLI (the chart’s post-install hook may strip PEM newlines on some platforms — confirm the secret looks right).
7. Smoke-test the upgraded deployment. Visit the Jitera Studio app domain (expect HTTP 200), invite a test user (expect successful email delivery), and run `jitera init` from a test project (expect successful authentication and upload).
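A consolidated sketch of steps 2–6 for a Terraform-managed deployment. The resource name `helm_release.jitera`, the variable `jitera_chart_path`, the chart subdirectory `charts/jitera`, the release/namespace name `jitera`, and the secret/key names in step 6 are all assumptions; prefer the exact commands in your platform’s deployment guide:

```bash
# Step 2: extract the new chart next to the current one (do not overwrite the old directory).
unzip jitera-helm-<VERSION>.zip -d jitera-helm-<VERSION>

# Steps 3-4: repoint the Terraform-managed Helm release at the new chart and apply,
# targeting only the release to avoid unrelated drift (resource/variable names assumed).
terraform plan  -target=helm_release.jitera -var 'jitera_chart_path=./jitera-helm-<VERSION>/charts/jitera'
terraform apply -target=helm_release.jitera -var 'jitera_chart_path=./jitera-helm-<VERSION>/charts/jitera'

# Step 5: wait for the post-install migration job to reach Complete, then check the rollout.
kubectl get jobs -n jitera
kubectl get pods -n jitera          # no CrashLoopBackOff on Automation / Ultron
helm status jitera -n jitera

# Step 6: confirm the CLI-auth secret still holds a well-formed PEM
# (secret and key names are placeholders -- check your chart's values).
kubectl get secret <cli-auth-secret> -n jitera -o jsonpath='{.data.<pem-key>}' \
  | base64 -d | head -n 2           # expect "-----BEGIN ..." on its own line
```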
Rollback
If the upgrade fails and Helm’s automatic rollback (`cleanup_on_fail`) doesn’t fully recover, first identify which rollback scenario applies to the release pair you’re rolling back, then choose the strategy. The cases where `helm rollback` alone is sufficient are clearly separated from those that require a DB snapshot restore.
Rollback Scenario Decision
| # | Scenario | Description | Schema diff | helm rollback alone? | Recommended procedure |
|---|---|---|---|---|---|
| 1 | Patch update (no schema diff) | Between patches within the same minor version | None (image tag only) | ✅ Fully OK | helm rollback only |
| 2 | Minor / major update (forward-compatible) | Upgrade that spans a minor or major version | Table DROPs and similar, but old code only references them from admin rake tasks | ⚠️ User-facing features OK, admin paths fail | helm rollback works, but DB restore (Scenario 3 procedure) recommended |
| 3 | Minor / major update (integrity risk) | Upgrade that spans a minor or major version, or writes occur during the rollback window | Same as Scenario 2, plus dropped tables are referenced by live code, or writes occur between rollback and re-upgrade | ❌ Feature outage / data inconsistency | DB snapshot restore is required |
1. Diff `charts/jitera/templates/` and `values.yaml` between the old and new charts.
2. List `db/migrate/` and `db/data/` inside both the old and new Rails images and diff the file names (see the sketch after this list).
3. The release is Scenario 1 only if both of the following hold:
   - No structural change to the chart (`templates/` and `values.yaml` keys) — i.e. step 1 shows only an image-tag diff, and
   - No new `db/migrate/*` or `db/data/*` files in the new image — i.e. step 2 is empty.
4. If step 3 does not hold, inspect whether the diffed migrations contain destructive operations (`Drop*Table`, `Drop*Column`, `NotNull`, type changes).
   - If destructive migrations are present, grep the old image to check whether the dropped tables / columns are referenced from user-facing code paths.
   - If writes can occur during the rollback window, or full data consistency is required, escalate to Scenario 3.
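A minimal sketch for steps 1 and 2, assuming the Rails application root is the image’s working directory and that you can pull the Automation images locally; the image references are placeholders:

```bash
# Step 1: chart structure diff between the old and new chart directories.
diff -ru jitera-helm-<OLD_VERSION>/charts/jitera/templates/ \
         jitera-helm-<NEW_VERSION>/charts/jitera/templates/
diff -u  jitera-helm-<OLD_VERSION>/charts/jitera/values.yaml \
         jitera-helm-<NEW_VERSION>/charts/jitera/values.yaml

# Step 2: list migration file names inside each Rails image and diff them
# (assumes the app root is the image's working directory).
docker run --rm --entrypoint ls <old-automation-image> db/migrate db/data | sort > old-migrations.txt
docker run --rm --entrypoint ls <new-automation-image> db/migrate db/data | sort > new-migrations.txt
diff -u old-migrations.txt new-migrations.txt   # lines only in new-migrations.txt are new migrations
```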
When in doubt, default to Scenario 3. Issues from misclassifying Scenario 3 as Scenario 2 often surface days later (when an admin operation hits the missing table), making detection lag dangerous.
Scenario 1: Patch Update Rollback
With no chart structural change and no new migrations, `helm rollback` alone fully recovers. No DB restore or app scale-down is needed.
Terraform-managed rollback:
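A minimal sketch, assuming the same resource and variable names as in the upgrade sketch (`helm_release.jitera`, `jitera_chart_path`); substitute your own, or fall back to `helm rollback` directly:

```bash
# Option A (Terraform-managed): point the chart path back at the previous chart
# directory and re-apply, targeting only the Helm release.
terraform plan  -target=helm_release.jitera -var 'jitera_chart_path=./jitera-helm-<OLD_VERSION>/charts/jitera'
terraform apply -target=helm_release.jitera -var 'jitera_chart_path=./jitera-helm-<OLD_VERSION>/charts/jitera'

# Option B: roll back with Helm directly, then reconcile Terraform state afterwards.
helm rollback jitera <prev-revision> -n jitera --wait
```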
Scenario 2: Forward-Compatible Minor / Major Rollback
After running `helm rollback`, always verify:
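A hedged verification sketch; the release, namespace, and Deployment names are assumptions:

```bash
helm history jitera -n jitera      # the rolled-back revision should be marked "deployed"
kubectl get pods -n jitera         # all pods Running / Ready, no CrashLoopBackOff
# Spot-check logs for missing-table errors from the old code (Deployment name assumed).
kubectl logs deploy/automation -n jitera --tail=200 | grep -i "UndefinedTable" || true
```

Note that admin-only paths referencing dropped tables can still fail even when these checks pass, which is why the decision table recommends the Scenario 3 DB restore for full recovery.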
Scenario 3: Rollback with Integrity Risk (DB Snapshot Restore Required)
Three risks make `helm rollback` alone insufficient:

- Risk A: Old code references tables that no longer exist, causing user-facing operations to fail with `PG::UndefinedTable`.
- Risk B: Writes made during the rollback window are skipped by data migrations on re-upgrade (Rails records those migrations as already run and skips them entirely), so rows created during the window remain `NULL` in newly-introduced columns, breaking integrity.
- Risk C: `pg_restore -c` (clean mode) leaves new-version ENUM types and tables orphaned, causing the re-upgrade’s migration job to fail with `PG::DuplicateObject`.
Data stores to restore
The procedure below uses the Automation PostgreSQL as the worked example. Restore every snapshot you captured in the pre-upgrade checklist at the same time. Per-store restore commands are documented in Backup and Restore:

| Data store | Restore procedure |
|---|---|
| Automation PostgreSQL | Restore PostgreSQL — same pattern as the procedure below |
| PGVector PostgreSQL | Restore PostgreSQL — apply the same procedure to the PGVector instance (returns the embedding vectors to a state consistent with the old version) |
| MongoDB / DocumentDB / Cosmos DB | Restore MongoDB — mongorestore or the cloud provider’s snapshot-restore flow |
Recommended procedure: restore the snapshot to a new instance and repoint the chart
`helm rollback` rolls back the entire manifest, including `replicas` on each Deployment, so any pods you scaled to 0 beforehand will come right back. Attempting `DROP DATABASE` / `pg_restore` against the same instance after that produces:

- `DROP DATABASE` fails with `database "jitera" is being accessed by other users`
- Even if the connection briefly drops and `DROP` succeeds, the revived old-version pods reconnect to the new-version schema and hit `PG::UndefinedTable`
- Sidekiq retry queues and caches accumulate inconsistent state
`helm rollback`:

For `<prev-revision>`, use the revision number recorded in step 2 of “Before You Upgrade” (from `helm history`).

The scale-down list above targets workloads that hold DB connections. User-facing pods (Frontend, SWEF, Playwright, Document Converter, etc.) need to be isolated separately — show a maintenance page or switch to maintenance mode for those.
| Step | Downtime |
|---|---|
| App scale-down | 1 – 5 min |
| Snapshot restore (new instance creation) | 5 – 30 min (depends on DB size / cloud provider) |
| Endpoint switch + helm rollback | 1 – 3 min |
| Rollout completion | 2 – 10 min |
Alternative: restore in place to the same instance (small-scale environments)
If you can’t create a new instance (cost / quota constraints) and must reuse the existing one, use the procedure below. The key difference is that you must scale down a second time after `helm rollback` — because the rollback resets `replicas`, the pods you scaled down in step 1 come back up, and `DROP DATABASE` will fail unless you re-drain them first.
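A rough sketch of the in-place flow under those constraints. The Deployment names, database host, admin user, and dump path are all assumptions, and the database name `jitera` is taken from the error message quoted above; follow Backup and Restore for the authoritative restore commands:

```bash
# 1. Scale down workloads that hold DB connections (Deployment names assumed).
kubectl scale deploy/automation deploy/sidekiq -n jitera --replicas=0

# 2. Roll the release back -- this resets replicas and revives the pods above.
helm rollback jitera <prev-revision> -n jitera --wait

# 3. Scale down a second time so DROP DATABASE is not blocked by live connections.
kubectl scale deploy/automation deploy/sidekiq -n jitera --replicas=0

# 4. Drop, recreate, and restore the database on the existing instance.
dropdb     -h <db-host> -U <admin-user> jitera
createdb   -h <db-host> -U <admin-user> jitera
pg_restore -h <db-host> -U <admin-user> -d jitera --no-owner <pre-upgrade-dump>

# 5. Scale the workloads back up to their original replica counts.
kubectl scale deploy/automation deploy/sidekiq -n jitera --replicas=<original-count>
```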
Related Documentation
AWS EKS Deployment
Full deployment guide, including the platform-specific upgrade procedure
Azure AKS Deployment
Full deployment guide, including the platform-specific upgrade procedure
Maintenance Overview
Backup, restore, and routine maintenance tasks
Troubleshooting
Incident workflow and common issues

