The Jitera Helm chart includes a full observability stack that deploys directly into your Kubernetes cluster. A single toggle — monitoring.enabled — deploys all core components. Individual sub-components can then be enabled or disabled independently.
Components
When monitoring.enabled: true, the following components are deployed:
| Component | Purpose |
|---|---|
| Prometheus | Metrics collection and alerting |
| Grafana | Dashboards and visualization |
| Loki | Log aggregation |
| Promtail | Log collection agent (DaemonSet) |
| Tempo | Distributed tracing backend |
| OpenTelemetry Collector | Unified traces, metrics, and logs pipeline |
| Blackbox Exporter | HTTP and TCP health probes for all services |
| Kube State Metrics | Kubernetes object metrics |
| Node Exporter | Host-level system metrics |
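Sub-components can be toggled individually with the enabled keys shown in the Helm Values Reference below; for example, to deploy the stack without distributed tracing:

monitoring:
  enabled: true
  tempo:
    enabled: false  # keep the rest of the stack, skip Tempo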
Enabling the Monitoring Stack
Add the following to your Helm values to enable in-cluster monitoring. The example below shows the minimum required settings — all other values use sensible defaults.
monitoring:
  enabled: true
  grafana:
    adminPassword: <GRAFANA_ADMIN_PASSWORD>  # Required: change from the default
adminPassword is honored only on Grafana's first boot. The Grafana subchart writes the password to its persistent SQLite DB (/var/lib/grafana/grafana.db on a 5 GiB PVC) when the pod first starts. After that, changing monitoring.grafana.adminPassword and running helm upgrade re-rolls the env var and Kubernetes Secret, but Grafana ignores them: the existing admin user keeps its old password.
To rotate the password (preserves dashboards and data sources):
kubectl -n jitera exec -it jitera-grafana-0 -c grafana -- \
  grafana-cli admin reset-admin-password '<new-password>'
Alternative: wipe Grafana state (custom dashboards, users, folders, and data sources stored in the DB are lost; provisioned dashboards are re-created from ConfigMaps):
kubectl -n jitera scale statefulset jitera-grafana --replicas=0
kubectl -n jitera delete pvc -l app.kubernetes.io/name=grafana
kubectl -n jitera scale statefulset jitera-grafana --replicas=1
Update values.yaml (and any GitOps source of truth) in the same change so future PVC recreations stay in sync with the live admin password. Reference: Grafana documentation, "Reset admin password".
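To confirm a rotated password, the Grafana HTTP API accepts basic auth; this check assumes a port-forward to Grafana on localhost:3000 (see Port Forward below):

curl -s -u admin:'<new-password>' http://localhost:3000/api/user
# Returns the admin user's profile as JSON when the credentials are valid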
Apply the configuration:
helm upgrade jitera ./charts/jitera \
  -f values.yaml \
  -n jitera
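To verify the rollout, list the pods; exact names vary with the release name (jitera here):

kubectl get pods -n jitera
# Expect Running pods for Prometheus, Grafana, Loki, Tempo, the OpenTelemetry
# Collector, Blackbox Exporter, and Kube State Metrics, plus one Promtail and
# one Node Exporter pod per node.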
Accessing Grafana
Ingress (Recommended)
Grafana and Prometheus can be exposed via Ingress. Two modes are supported:
Subdomain mode (default) — serves each tool on its own subdomain:
ingress:
  grafana:
    enabled: true
    type: subdomain
    domain: grafana.example.com  # Optional; defaults to grafana.<rootDomain>
  prometheus:
    enabled: true
    type: subdomain
    domain: prometheus.example.com  # Optional; defaults to prometheus.<rootDomain>
Path-based mode — serves each tool under a path on your main domain:
ingress:
  grafana:
    enabled: true
    type: path
    pathPrefix: /grafana
  prometheus:
    enabled: true
    type: path
    pathPrefix: /prometheus
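In path-based mode, Grafana itself must also know it is being served under a sub-path. If the chart does not configure this automatically, the standard Grafana settings can be supplied through monitoring.grafana.env (see the Helm Values Reference below); the domain here is a placeholder:

monitoring:
  grafana:
    env:
      GF_SERVER_ROOT_URL: "https://example.com/grafana"
      GF_SERVER_SERVE_FROM_SUB_PATH: "true"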
Port Forward
If Ingress is not configured, use kubectl port-forward:
kubectl port-forward -n jitera svc/jitera-grafana 3000:80
Access at http://localhost:3000. The default username is admin. Retrieve the password with:
kubectl get secret -n jitera jitera-grafana \
  -o jsonpath='{.data.admin-password}' | base64 -d
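Prometheus can be reached the same way; the service name and port below are assumptions based on the chart's naming and the Prometheus default port:

kubectl port-forward -n jitera svc/jitera-prometheus 9090:9090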
Pre-Built Dashboards
Grafana is automatically provisioned with the following dashboards:
| Dashboard | Description |
|---|---|
| Internal Service Monitoring | Blackbox probe results — HTTP/TCP health status for all Jitera services and data stores |
| APM Dashboard | Application Performance Monitoring — request rate, error rate, and latency (RED metrics) derived from OpenTelemetry traces |
| Kubernetes Service Monitoring | Pod CPU, memory, network I/O, and restart counts per service |
| Logs Exploration | Query and filter logs collected by Loki and Promtail |
| K8s Views Global | Cluster-wide view of node and workload resource usage |
Health Probes
Blackbox Exporter automatically probes the health endpoints of all Jitera services:
| Service | Endpoint | Probe Type |
|---|---|---|
| Frontend | /health | HTTP |
| Automation (Rails) | /health | HTTP |
| Ultron | /api/health | HTTP |
| Ultron Public | /api/health | HTTP |
| SWEF | /health | HTTP |
| Hasura | /healthz | HTTP |
| LiteLLM | /health/readiness | HTTP |
| Document Converter | port 80 | TCP |
| Boost | port 80 | TCP |
| PostgreSQL | configured port | TCP |
| PGVector | configured port | TCP |
| MongoDB | port 27017 | TCP |
| RabbitMQ | port 5672 | TCP |
Hasura, LiteLLM, and Boost probes are only active when those components are enabled in your Helm values.
Helm Values Reference
Prometheus
monitoring:
  prometheus:
    replicaCount: 1
    retention: "15d"              # How long to keep metrics
    scrapeInterval: "15s"
    evaluationInterval: "15s"
    persistence:
      enabled: true
      size: 50Gi
      storageClass: ""            # Uses the cluster default if empty
    resources:
      requests:
        memory: 512Mi
        cpu: 100m
      limits:
        memory: 2Gi
        cpu: 1000m
    externalLabels: {}            # Added to all metrics (useful for multi-cluster setups)
    additionalScrapeConfigs: []   # Custom Prometheus scrape targets
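Entries under additionalScrapeConfigs are assumed to use standard Prometheus scrape_config syntax. A sketch that scrapes a hypothetical in-cluster service:

monitoring:
  prometheus:
    additionalScrapeConfigs:
      - job_name: "my-app"      # hypothetical job name
        metrics_path: /metrics
        static_configs:
          - targets: ["my-app.jitera.svc.cluster.local:8080"]  # hypothetical service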
Grafana
monitoring:
  grafana:
    adminUser: admin
    adminPassword: <GRAFANA_ADMIN_PASSWORD>
    persistence:
      enabled: true
      size: 5Gi
      storageClass: ""
    resources:
      requests:
        memory: 256Mi
        cpu: 100m
    env: {}   # Optional environment variables
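As with the path-based Ingress example earlier, env is assumed to be passed through to the Grafana container, so any Grafana setting can be overridden via the standard GF_<SECTION>_<KEY> convention:

monitoring:
  grafana:
    env:
      GF_USERS_DEFAULT_THEME: "light"
      GF_SMTP_ENABLED: "true"   # enable email delivery for alert notifications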
Loki (Log Aggregation)
monitoring:
  loki:
    enabled: true               # Set to false to disable log aggregation
    retention: "30d"            # How long to keep logs
    storage:
      type: filesystem          # Options: filesystem, s3, azure
      # S3 configuration (when type: s3)
      s3:
        bucketName: ""
        region: ""
        accessKeyId: ""
        secretAccessKey: ""
      # Azure Blob configuration (when type: azure)
      azure:
        accountName: ""
        accountKey: ""
        containerName: ""
    persistence:
      enabled: true
      size: 100Gi
  promtail:
    enabled: true               # Set to false to use an existing Promtail in your cluster
When promtail.enabled: false, configure your existing Promtail to push logs to the Loki endpoint at http://<release-name>-loki:3100/loki/api/v1/push.
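If you run your own Promtail, its client section would point at that endpoint. A minimal sketch, assuming the release is named jitera:

clients:
  - url: http://jitera-loki:3100/loki/api/v1/push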
Tempo (Distributed Tracing)
monitoring:
  tempo:
    enabled: true       # Set to false to disable distributed tracing
    retention: 24h      # How long to keep traces
    logLevel: info
    storage:
      type: local       # Options: local, s3, azure, gcs
    persistence:
      enabled: true
      size: 10Gi
OpenTelemetry Collector
monitoring:
  opentelemetry:
    samplerRatio: "1.0"   # 1.0 = 100% sampling; lower for high-traffic production (e.g., 0.1)
    collector:
      resources:
        requests:
          memory: 256Mi
          cpu: 100m
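Applications send telemetry to the collector through the standard OpenTelemetry SDK environment variables. A Kubernetes container env sketch; the collector service name is an assumption based on the chart's naming convention:

env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://jitera-opentelemetry-collector:4317"  # OTLP over gRPC; service name assumed
  - name: OTEL_SERVICE_NAME
    value: "my-app"  # hypothetical application name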
Node Exporter and Kube State Metrics
Both components can be disabled if your cluster already runs them (e.g., via kube-prometheus-stack). When disabled, use the external block to point Prometheus at the existing services.
monitoring:
  nodeExporter:
    enabled: true   # Set to false to use an existing Node Exporter
    external:
      namespace: "kube-system"
      serviceName: "kube-prometheus-stack-prometheus-node-exporter"
  kubeStateMetrics:
    enabled: true   # Set to false to use an existing Kube State Metrics
    external:
      namespace: "kube-system"
      serviceName: "kube-prometheus-stack-kube-state-metrics"
Prometheus Remote Write
Prometheus can forward metrics to an external backend (e.g., Azure Managed Prometheus, AWS AMP, Grafana Cloud) using remote write. When remote write is enabled, Grafana automatically reads from the remote backend.
monitoring:
  prometheus:
    remoteWrite:
      enabled: true
      configs:
        - name: "example"
          enabled: true
          url: "https://your-prometheus-endpoint/api/v1/write"
          # Authentication: choose one
          basicAuth:
            username: ""
            password: ""
          # bearerToken: ""
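As a concrete sketch, a Grafana Cloud target uses HTTP basic auth with the stack's numeric instance ID as the username and an access-policy token as the password; every value below is a placeholder:

monitoring:
  prometheus:
    remoteWrite:
      enabled: true
      configs:
        - name: "grafana-cloud"
          enabled: true
          url: "https://prometheus-prod-XX.grafana.net/api/prom/push"  # placeholder endpoint
          basicAuth:
            username: "123456"                 # Grafana Cloud instance ID (placeholder)
            password: "<ACCESS_POLICY_TOKEN>"  # placeholder token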
For detailed remote write configuration including Azure Managed Prometheus with sidecar authentication, AWS AMP, and Grafana Cloud, see the Helm chart’s values.yaml.