Administration
This guide covers administration tasks for Mipiti, including user management (on-prem), usage monitoring, telemetry, cache management, backup signing, and cross-instance migration.
User Management (On-Prem)
When running Mipiti in on-prem mode (DEPLOYMENT_MODE=onprem), user accounts are managed locally rather than through OAuth providers.
First-Run Setup
On first launch with an empty database, navigate to the application URL. You'll be presented with a setup wizard to create the first administrator account. This wizard only appears when there are zero users in the database.
Creating Users
Administrators can create new user accounts from Settings > Users:
- Click Create User
- Enter email, display name, and password
- Select role: User (standard access), Admin (org-level administrative access — only available in orgs with org-level billing), or Superadmin (platform-wide access — create/delete orgs, crypto-shred)
- Click Create
There is no self-registration. All accounts must be created by an administrator.
Password Management
- Admin reset: Admins can reset any user's password from the Users page
- Self-service: Users can change their own password from Settings > Change Password
Password requirements: minimum 8 characters, at least one uppercase letter, one lowercase letter, and one digit.
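The stated policy can be expressed as a simple check, shown here for illustration (the helper name is ours, not Mipiti's API):

```python
import re

def meets_password_policy(password: str) -> bool:
    """Check the documented policy: at least 8 characters, with at least
    one uppercase letter, one lowercase letter, and one digit."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
    )
```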
Disabling Users
Click Disable next to a user in the Users page. Disabled users cannot log in. Click Enable to re-activate.
Multi-Factor Authentication (MFA)
Local auth users can protect their accounts with a second factor. TOTP and WebAuthn are independent methods — either or both can be enabled. OAuth users (GitHub, Google) already have MFA from their identity provider — this feature applies to local accounts only.
TOTP (Authenticator App)
Users enable TOTP from Settings > Account > Authenticator App:
- Click Enable Authenticator App
- Scan the QR code with an authenticator app (Google Authenticator, Authy, 1Password, etc.)
- Enter the 6-digit code to verify
- Save the recovery codes in a secure location — each code can only be used once
After enabling, login requires a 6-digit code from the authenticator app in addition to the password.
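Under the hood, TOTP codes follow RFC 6238: the shared secret plus the current 30-second time window feed an HMAC-SHA1, which is truncated to 6 digits. A stdlib-only sketch for illustration (not Mipiti's actual implementation; the function name is ours):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Generate an RFC 6238 TOTP code (SHA-1 variant) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if timestamp is None else timestamp) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"
```

The authenticator app and the server both compute this value independently; a login code is accepted only if it matches for the current window.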
To reconfigure the authenticator app (e.g., new phone), click Reconfigure Authenticator App. This requires both the current password and a code from the existing authenticator. The old secret stays active until the new code is verified (atomic swap).
Security Keys (WebAuthn)
Users can add hardware security keys (YubiKey, Titan) or platform authenticators (TouchID, Windows Hello) from Settings > Account > Security Keys. No other MFA method is required first.
- Click Add Security Key
- Follow the browser prompt to tap/verify the security key
- Optionally rename the key for identification
When both TOTP and WebAuthn are enabled, a method chooser is shown at login. For WebAuthn-only users, security key verification is triggered automatically.
Recovery Codes
- 10 recovery codes are generated when the first MFA method is enrolled (TOTP or WebAuthn)
- Each code can be used once as an alternative to a TOTP code or security key
- Remaining codes are shown on the MFA status page
- Users can regenerate codes from Settings (requires password)
Removing MFA Methods
Each method can be removed independently. Removing the last authentication factor is blocked for password-only users (they must keep at least one). OAuth users can always remove MFA since they have a login fallback.
Admin MFA Reset
If a user is locked out of their account (lost phone, lost security key, no recovery codes), an administrator can reset their MFA from the Admin panel. This removes both TOTP and all WebAuthn credentials. Admins cannot reset their own MFA — they must use the regular reconfigure flow in Settings.
Configuration
No additional configuration is required. MFA encryption keys are auto-generated on first boot and persisted in the data directory.
| Variable | Default | Description |
|---|---|---|
| MFA_ENCRYPTION_KEY | Auto-generated | Fernet key for encrypting TOTP secrets at rest |
| MFA_PRE_AUTH_EXPIRY_SECONDS | 300 | Pre-MFA token lifetime (5 minutes) |
| MFA_RECOVERY_CODE_COUNT | 10 | Number of recovery codes generated |
| MFA_REQUIRED_FOR_LOCAL | true | Require MFA enrollment for password-based accounts |
| WEBAUTHN_RP_ID | localhost | WebAuthn Relying Party ID (set to your domain) |
| WEBAUTHN_RP_NAME | Mipiti | WebAuthn Relying Party display name |
| WEBAUTHN_ORIGIN | http://localhost:5173 | Expected origin for WebAuthn verification |
Production deployment: Set WEBAUTHN_RP_ID to your domain (e.g., mipiti.example.com) and WEBAUTHN_ORIGIN to your frontend URL (e.g., https://mipiti.example.com).
Self-Service Signup (SaaS)
When SIGNUP_ENABLED=true, users can create accounts with any email address and a password. A 6-digit verification code is emailed to them. After verification, the account is auto-approved and the user is logged in.
This is useful for SaaS deployments where prospects or colleagues may be reluctant to OAuth with their corporate Google account. On-prem deployments typically keep signup disabled and manage users directly.
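For illustration, a code of this shape can be generated with Python's secrets module (a sketch; the helper name is ours):

```python
import secrets

def make_verification_code() -> str:
    """Return a uniformly random 6-digit code, zero-padded (e.g. '042517')."""
    return f"{secrets.randbelow(10**6):06d}"
```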
SMTP Configuration
Email verification requires an SMTP relay. Any SMTP server works — Resend, SendGrid, AWS SES, or an on-prem mail server.
| Variable | Default | Description |
|---|---|---|
| SMTP_HOST | (empty) | SMTP server hostname (e.g., smtp.resend.com) |
| SMTP_PORT | 587 | SMTP port (587 for STARTTLS, 465 for SSL) |
| SMTP_USERNAME | (empty) | SMTP login username |
| SMTP_PASSWORD | (empty) | SMTP login password or API key |
| SMTP_FROM_ADDRESS | noreply@send.mipiti.io | Sender email address |
| SMTP_FROM_NAME | Mipiti | Sender display name |
| SMTP_USE_TLS | true | Enable STARTTLS |
Signup Configuration
| Variable | Default | Description |
|---|---|---|
| SIGNUP_ENABLED | false | Enable self-service signup |
| LOCAL_AUTH_ENABLED | false | Must also be true for email/password login to work |
Security Notes
- Anti-enumeration: Signup and resend endpoints return identical generic responses regardless of whether the email exists.
- No user record until verified: Only a temporary verification record exists during the 15-minute verification window. This prevents email squatting.
- Rate limits: Max 3 verification code sends per email per 15-minute window. Max 5 verification attempts before the code is invalidated.
- Password requirements: Minimum 8 characters, at least one uppercase letter, one lowercase letter, and one digit.
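The send limit describes a per-address sliding 15-minute window. A minimal in-memory sketch of that check (names and storage are assumptions, not Mipiti's actual code):

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 15 * 60
MAX_SENDS = 3

_sends = defaultdict(list)  # email -> timestamps of recent sends

def allow_send(email, now=None):
    """Return True if another verification email may be sent to this address."""
    now = time.time() if now is None else now
    recent = [t for t in _sends[email] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_SENDS:
        _sends[email] = recent
        return False
    recent.append(now)
    _sends[email] = recent
    return True
```

A production implementation would persist this state (e.g. alongside the temporary verification record) rather than keep it in process memory.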
Usage Monitoring
Track per-user LLM token consumption to understand resource usage across your team.
Endpoint: GET /api/admin/usage?days=30 (admin only)
Returns aggregated token usage per user over the specified period:
{
"enabled": true,
"days": 30,
"users": [
{
"user_id": "...",
"email": "alice@company.com",
"display_name": "Alice",
"traces": 42,
"prompt_tokens": 150000,
"completion_tokens": 50000,
"total_tokens": 200000,
"cache_hits": 5
}
]
}
- traces: Number of distinct operations (each threat model generation, refinement, or query is one trace)
- cache_hits: LLM calls served from cache (no API cost)
- total_tokens: Total LLM tokens consumed (prompt + completion)
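As an example of consuming this endpoint, here is a short helper that totals usage across users of the returned payload (the shape follows the example above; authentication and fetching are omitted):

```python
def summarize_usage(report: dict) -> dict:
    """Sum token usage and cache hits across all users in a usage report."""
    users = report.get("users", [])
    return {
        "users": len(users),
        "total_tokens": sum(u["total_tokens"] for u in users),
        "cache_hits": sum(u["cache_hits"] for u in users),
    }
```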
Telemetry
Mipiti records structured telemetry for every LLM call, enabling debugging and cost monitoring.
Configuration
| Environment Variable | Default | Description |
|---|---|---|
| TELEMETRY_ENABLED | true | Enable/disable telemetry recording |
| TELEMETRY_LOG_LEVEL | INFO | Log level (DEBUG includes prompt previews) |
| TELEMETRY_LOG_FILE | (none) | Optional file path for telemetry JSON logs |
| TELEMETRY_RETENTION_DAYS | 30 | Auto-prune records older than this |
Endpoints
- GET /api/admin/telemetry/stats — Aggregate statistics (total calls, token counts, cache hit rate, error rate)
- GET /api/admin/telemetry/traces?limit=50 — List recent traces (newest first), optionally filtered by user_id
- GET /api/admin/telemetry/traces/{trace_id} — Detailed view of all LLM calls within a single operation
What's Recorded
Each LLM call records: model name, prompt template, token counts (prompt/completion/total), latency, cache hit status, and any errors. Calls are grouped into traces — one trace per user operation (generation, refinement, query).
Cache Management
The LLM response cache reduces API costs by reusing responses for identical prompts. Cache entries are isolated per user.
Configuration
| Environment Variable | Default | Description |
|---|---|---|
| LLM_CACHE_ENABLED | true | Enable/disable caching |
| LLM_CACHE_TTL_HOURS | 24 | Cache entry lifetime |
| LLM_CACHE_MAX_ENTRIES | 10000 | Max entries before LRU eviction |
Endpoints
- GET /api/admin/cache/stats — Cache statistics (total entries, token savings, oldest/newest entry)
- DELETE /api/admin/cache — Clear the entire cache
Backup & Restore
Downloading a Backup
Endpoint: GET /api/admin/backup (admin only)
Returns a signed, gzipped SQLite database file. The response includes a Content-Disposition header with a timestamped filename (e.g., mipiti-backup-20260220-143000.db.gz.signed).
Every backup is cryptographically signed with the instance's ECDSA P-256 private key. This ensures that backups cannot be tampered with and that the restore endpoint can verify the backup came from a trusted source.
Per-org mode: When ORG_DB_ISOLATION is enabled, the backup endpoint operates per-organization:
- No org_id parameter: backs up the caller's own org database
- org_id parameter (superadmin only): backs up a specific org's database
- The backup contains the org's encrypted database file (still encrypted with the org's DEK)
Personal workspace isolation: Personal workspace data is stored on the platform database, not in any org's encrypted database. Org backups do not include personal workspace threat models, sessions, or other data. This ensures org admins cannot access users' personal work through backup/restore.
Restoring a Backup
Endpoint: POST /api/admin/restore (admin only)
Restores the database from a previously downloaded backup. This is a destructive operation that replaces all data (or, in per-org mode, replaces only the target org's data).
Per-org mode: Include an org_id form field to specify the target organization. The same access rules apply — admins can restore their own org, superadmins can restore any org.
The restore endpoint verifies the backup's ECDSA signature before applying it. The backup must have been signed by either this instance or an instance whose public key has been imported as a trusted signer.
Safety requirements:
- Valid signature — the backup must be signed by a trusted instance
- Admin authentication — must be logged in as an admin
- Confirmation text — you must include the exact string CONFIRM RESTORE in the request body
- Password re-entry (on-prem only) — local accounts must re-enter their admin password. OAuth-only accounts (SaaS) skip this step since there is no local password.
Request: Multipart form upload with the backup file and confirmation fields:
POST /api/admin/restore
Content-Type: multipart/form-data
confirmation: CONFIRM RESTORE
password: <your-admin-password> (on-prem local accounts only)
backup_file: <mipiti-backup-*.db.gz.signed>
Using curl (on-prem with local account):
curl -X POST https://your-instance/api/admin/restore \
-F 'confirmation=CONFIRM RESTORE' \
-F 'password=YourAdminPassword' \
-F 'backup_file=@mipiti-backup-20260220-143000.db.gz.signed' \
-b cookies.txt
Using curl (SaaS / OAuth account):
curl -X POST https://your-instance/api/admin/restore \
-F 'confirmation=CONFIRM RESTORE' \
-F 'backup_file=@mipiti-backup-20260220-143000.db.gz.signed' \
-b cookies.txt
After restore: The web UI will automatically reload. If using the API directly, subsequent requests will use the restored database immediately — no restart required.
Backup Signing
Every backup is cryptographically signed with an ECDSA P-256 key. This provides two guarantees:
- Integrity — a tampered or corrupted backup will be rejected on restore
- Authenticity — only backups from trusted sources can be restored
Per-Org Signing Keys (with database isolation)
When ORG_DB_ISOLATION is enabled, each organization gets its own ECDSA P-256 signing key pair:
- Private key — encrypted with the org's DEK (AES-256-GCM) and stored in the platform database. No secrets on disk.
- Public key — stored in cleartext in the platform database, exportable via the API.
- Signing keys are generated automatically when an org is provisioned.
- Crypto-shred destroys the DEK, making the signing key (and all org data) unrecoverable.
Instance Signing Key (legacy / without database isolation)
Without database isolation, the instance uses a single ECDSA P-256 key pair:
- Private key — persisted to .backup_signing_key.pem on the data volume
- Public key — derived from the private key, exportable via the API
The key pair persists across restarts. To use a pre-existing key (e.g., from a backup or when migrating infrastructure), set the BACKUP_SIGNING_KEY environment variable to the PEM-encoded private key.
When migrating from legacy to per-org mode, the instance key is retained as a fallback for verifying pre-migration backups.
Export this instance's public key:
Navigate to Admin > Database > Signing Key to view and copy the public key PEM and fingerprint. Alternatively, use the API:
GET /api/admin/signing-key
Returns:
{
"fingerprint": "a1b2c3d4e5f6...",
"public_key_pem": "-----BEGIN PUBLIC KEY-----\n..."
}
The fingerprint is the SHA-256 hash of the DER-encoded public key, used to identify the signer in backup files.
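Because the fingerprint is simply the SHA-256 of the DER bytes inside the PEM, it can be recomputed independently with the standard library, e.g. to cross-check an exported key (a sketch; the function name is ours):

```python
import base64
import hashlib

def fingerprint_from_pem(public_key_pem: str) -> str:
    """SHA-256 over the DER bytes inside a PEM public key, as lowercase hex."""
    body = "".join(
        line for line in public_key_pem.strip().splitlines()
        if not line.startswith("-----")  # drop BEGIN/END armor lines
    )
    der = base64.b64decode(body)
    return hashlib.sha256(der).hexdigest()
```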
Cross-Instance Backup Migration (On-Prem)
When migrating data between Mipiti instances (hardware swap, disaster recovery, instance consolidation), backups signed by the source instance must be trusted by the destination instance.
Workflow:
- On the source instance, copy its public key from Admin > Database > Signing Key
- On the destination instance, go to Admin > Database > Trusted Signers, click Import Trusted Signer, paste the public key PEM, and give it a label (e.g., "Production instance")
- Download a backup from the source instance and restore it on the destination
Alternatively, use the API directly:
- Export the source's public key: GET /api/admin/signing-key
- Import on the destination: POST /api/admin/trusted-signers with {"public_key_pem": "...", "label": "..."}
- Download and restore the backup
Managing Trusted Signers
Trusted signers are other Mipiti instances whose backups this instance will accept. Only ECDSA P-256 public keys are accepted. Manage them from Admin > Database > Trusted Signers or via the API.
List trusted signers:
GET /api/admin/trusted-signers
Import a trusted signer:
POST /api/admin/trusted-signers
{
"public_key_pem": "-----BEGIN PUBLIC KEY-----\n...",
"label": "Staging instance"
}
The fingerprint is computed automatically from the public key. Duplicate fingerprints and importing the instance's own key are rejected.
Remove a trusted signer:
DELETE /api/admin/trusted-signers/{fingerprint}
Signed Backup Format
Backups use a trailer appended after the gzip payload:
[gzip payload][MIPIBAK\x02 magic (8 bytes)][signer fingerprint (32 bytes)][sig length (2 bytes BE)][ECDSA signature]
The ECDSA signature covers only the gzip payload. The trailer is verified and stripped during restore.
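Parsing the trailer takes a handful of byte reads. A sketch of the verification-side split (signature checking itself is omitted; the helper name is ours, the magic constant comes from the format above):

```python
import struct

MAGIC = b"MIPIBAK\x02"  # 8-byte magic from the trailer format above

def split_signed_backup(blob: bytes):
    """Split a signed backup into (gzip_payload, fingerprint, signature).

    Locates the trailer via the magic marker; raises ValueError if the
    trailer is missing or malformed. Signature verification is out of scope.
    """
    idx = blob.rfind(MAGIC)
    if idx < 0:
        raise ValueError("no MIPIBAK trailer found")
    fingerprint = blob[idx + 8 : idx + 40]  # 32-byte signer fingerprint
    (sig_len,) = struct.unpack(">H", blob[idx + 40 : idx + 42])  # 2 bytes BE
    signature = blob[idx + 42 : idx + 42 + sig_len]
    if idx + 42 + sig_len != len(blob) or len(signature) != sig_len:
        raise ValueError("malformed trailer")
    return blob[:idx], fingerprint, signature
```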
Organization Management
Organizations are the top-level tenant boundary in Mipiti. Only superadmins can create or delete organizations.
Creating an organization
- Navigate to Settings > Organizations
- Click + New Organization
- Enter a name (required), slug (optional, auto-generated from name), and description (optional)
- Click Create
The slug is a URL-friendly identifier (lowercase, hyphens) used in API paths. If omitted, it's auto-generated from the name. The slug cannot be changed after creation.
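For illustration, a slug of the kind described (lowercase, hyphen-separated) can be derived roughly like this (a sketch, not necessarily Mipiti's exact algorithm):

```python
import re

def slugify(name: str) -> str:
    """Lowercase the name, replace runs of non-alphanumerics with single
    hyphens, and trim leading/trailing hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return slug.strip("-")
```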
On-prem deployments with a license enforce a maximum number of organizations (max_orgs). If the limit is reached, creation is rejected with a 403 error.
Deleting an organization
- Navigate to Settings > Organizations and select the organization
- Click Delete Organization at the bottom of the settings panel
This permanently removes the organization record and all associated data. If per-org database isolation is enabled, the encrypted database and encryption key are destroyed.
This action cannot be undone. For GDPR-compliant data erasure without deleting the org record, use crypto-shred instead.
Roles
Mipiti has three role levels:
| Role | Plan tier | Threat models | Members | Org settings | Platform |
|---|---|---|---|---|---|
| Superadmin | Enterprise | Full access | Invite, remove, change roles | Edit, create, delete orgs | Manage all orgs, crypto-shred |
| Admin | Organization | Full access | Invite, remove, change roles | Edit name, description | — |
| User | Personal | Full access | View only | View only | — |
- Superadmin is the platform-level role for Mipiti operators. Superadmins have Enterprise tier with unlimited credits, and can create and delete organizations, crypto-shred org data, configure resource limits, and access all orgs. Set via the SUPERADMIN_EMAILS env var — not self-assignable.
- Admin is the org-level role, only available in organizations with org-level billing (Organization or Enterprise tier). Admins manage members, billing, and settings within their organization. Personal-billing orgs (e.g., Public SaaS) do not have admins — platform operators use the Superadmin role instead.
- User is the default role for all members. Their plan tier comes from their personal billing (Developer or Pro) or their org's plan if the org has org-level billing.
Per-Org Database Isolation
When enabled, each organization's data is stored in a separate SQLCipher-encrypted database with a unique encryption key. This provides cryptographic tenant isolation — even with full disk access, one org's data cannot be read without its key.
Configuration
| Environment Variable | Default | Description |
|---|---|---|
| ORG_DB_ISOLATION | false | Master switch for per-org encrypted databases |
| KMS_PROVIDER | none | Key management backend: none, static, aws, vault |
| ENCRYPTION_KEY | (none) | Base64-encoded 256-bit key (for static provider) |
KMS Providers
Static (on-prem)
Uses a single master key from the ENCRYPTION_KEY environment variable to wrap per-org DEKs. Suitable for single-server on-prem deployments where you manage your own key.
| Variable | Description |
|---|---|
| ENCRYPTION_KEY | Base64-encoded 256-bit key |
Generate a key: openssl rand -base64 32
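If openssl is not available, an equivalent key can be generated in Python (32 random bytes, base64-encoded; the helper name is illustrative):

```python
import base64
import os

def generate_encryption_key() -> str:
    """Return a base64-encoded 256-bit random key suitable for ENCRYPTION_KEY."""
    return base64.b64encode(os.urandom(32)).decode()
```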
AWS KMS (recommended for SaaS)
Uses AWS KMS to wrap/unwrap per-org DEKs. The master key never leaves AWS — only encrypted DEK blobs are stored in the platform database.
| Variable | Description |
|---|---|
| AWS_KMS_KEY_ID | KMS key ARN (e.g., arn:aws:kms:us-east-1:123456:key/abc-123) |
| AWS_KMS_REGION | AWS region (e.g., us-east-1) |
| AWS_ACCESS_KEY_ID | IAM access key |
| AWS_SECRET_ACCESS_KEY | IAM secret key |
Setup:
- Create a symmetric KMS key (Encrypt and decrypt) in the AWS region closest to your deployment
- Create an IAM user (e.g., mipiti-api) with no console access
- Attach an inline policy granting only kms:Encrypt and kms:Decrypt on the key:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["kms:Encrypt", "kms:Decrypt"],
    "Resource": "arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID"
  }]
}

- Create access keys for the IAM user (Security credentials > Create access key > "Application running outside AWS")
- Set the environment variables on your deployment
Cost: ~$1/month per KMS key + $0.03 per 10,000 API calls (negligible).
Security: The IAM user can only encrypt and decrypt with this one key — no admin, no delete, no access to any other AWS service. Access keys are injected as encrypted secrets at runtime.
Vault (future)
HashiCorp Vault Transit engine. Not yet implemented. Configure VAULT_ADDR, VAULT_TOKEN, VAULT_TRANSIT_KEY.
How it works
- Each organization gets a unique 256-bit data encryption key (DEK)
- The DEK is wrapped (encrypted) by the KMS provider and stored in the platform database
- At runtime, the DEK is unwrapped and cached in memory to open the org's SQLCipher database
- Each org also gets its own ECDSA P-256 signing key pair for backup signing — the private key is encrypted with the org's DEK (AES-256-GCM) and stored alongside the wrapped DEK
- All org-scoped data (threat models, controls, sessions, findings, Jira mappings, etc.) lives in the org's encrypted database
KMS master key (KEK)
└─ wraps per-org DEK
├─ encrypts org SQLCipher database
└─ encrypts org backup signing key (AES-256-GCM)
Platform-level data (users, organizations, API keys, billing) remains in the shared platform database. No secrets are stored on disk — all cryptographic material is protected by the KMS key hierarchy.
Data migration
When you enable ORG_DB_ISOLATION on an existing instance, the startup process automatically migrates data from the shared database into per-org encrypted databases. This is a one-time operation that:
- Copies each org's data to its encrypted database
- Uses INSERT OR IGNORE for idempotency (safe to re-run)
- Does not delete legacy data from the shared database (safety net)
- Logs migration progress and row counts
Crypto-shred
Superadmins can permanently destroy all data for an organization by deleting its encryption key:
DELETE /api/organizations/{org_id}/data
This is cryptographic erasure — the org's database file and DEK are both deleted. Even if the encrypted file were recovered from disk, it would be unreadable without the key. The org record itself is preserved in the platform database (it can be re-provisioned).
This is irreversible. Use for GDPR right-to-erasure or when decommissioning an org.
Integrations (Jira / Confluence)
Jira and Confluence integrations are org-scoped. See Organizations > Integrations for the authorization model and user-facing setup guide.
This section covers the on-prem server configuration required to enable the integration.
Configuration
| Variable | Default | Description |
|---|---|---|
| JIRA_CLIENT_ID | (empty) | Jira Cloud OAuth 2.0 app client ID |
| JIRA_CLIENT_SECRET | (empty) | Jira Cloud OAuth 2.0 app client secret |
| JIRA_REDIRECT_URI | (empty) | OAuth callback URL (e.g., https://mipiti.example.com/auth/jira/callback) |
| JIRA_TOKEN_ENCRYPTION_KEY | (empty) | Fernet key for encrypting OAuth tokens at rest |
| JIRA_WEBHOOK_SECRET | (empty) | HMAC-SHA256 secret for Jira webhook verification |
All variables are optional — if not set, Jira/Confluence integration is unavailable in the UI.
Generate a token encryption key: python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
Credit Management
Superadmins can add or deduct credits from any user or organization directly from the UI.
User credits
Navigate to Settings > Admin > Users. The Balance column shows each user's current credit balance. Click Adjust next to any user to expand the inline adjustment form:
- Enter the amount (positive to add, negative to deduct)
- Optionally enter a note (required for deductions)
- Click Add Credits or Deduct
Guards:
- Zero amounts are rejected
- Deductions require a written reason
- Cannot deduct more than the user's current balance
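Taken together, the guards amount to a small validation step, sketched here for illustration (the function name and error messages are ours):

```python
def validate_adjustment(amount: int, note: str, balance: int) -> None:
    """Raise ValueError if a credit adjustment violates the documented guards."""
    if amount == 0:
        raise ValueError("zero adjustments are rejected")
    if amount < 0 and not note.strip():
        raise ValueError("deductions require a written reason")
    if amount < 0 and -amount > balance:
        raise ValueError("cannot deduct more than the current balance")
```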
The user receives an email notification: congratulatory for grants, with the reason note for deductions. The adjustment is logged in the activity feed.
Organization credit pool
Navigate to Settings > Organizations, select an organization, and scroll to Credit Pool. Superadmins see an "Adjust Credit Pool" section with the same amount + note + Add/Deduct controls.
All org admins receive email notifications for pool adjustments.
Audit trail
Every credit adjustment is recorded as a credit_adjustment activity event, visible in the activity feed with a human-readable summary (e.g., "+100 credits to alice@example.com" or "-50 credits from Acme Corp (reason: trial expired)").
Per-Org Resource Limits
Superadmins can configure per-organization limits on workspaces and members. This is separate from on-prem license limits — per-org limits work in both SaaS and on-prem deployments.
| Field | Default | Description |
|---|---|---|
| max_workspaces | 0 (unlimited) | Maximum non-personal workspaces in the org |
| max_members | 0 (unlimited) | Maximum members in the org |
Set limits from Settings > Organizations > Limits & Usage (superadmin only), or via the API:
PATCH /api/organizations/{org_id}
{"max_workspaces": 10, "max_members": 50}
When a limit is reached, workspace creation or member invitation returns 403. Set to 0 to remove the limit.
See Organizations > Resource limits for the full usage monitoring guide.
License (On-Prem)
On-prem deployments require a license file for workspace and organization limit enforcement.
Setup
- Place your license file (e.g., license.jwt) in the data volume
- Set LICENSE_KEY_FILE=/data/license.jwt in your environment
Behavior
- Valid license: Application starts normally; workspace creation is limited to max_workspaces and organization creation is limited to max_orgs
- Expired/invalid license: Application refuses to start with an error message
- No license file: Application starts with a warning; limits are not enforced
License claims
| Claim | Default | Description |
|---|---|---|
| max_workspaces | (required) | Maximum workspaces per instance |
| max_orgs | 1 | Maximum organizations (backward-compatible default for legacy licenses) |
Security
JWT Signing Key
The application uses a secret key to sign and verify authentication tokens (JWTs).
- If AUTH_SECRET is set in the environment, that value is used
- If not set, a cryptographically secure key is auto-generated on first boot and persisted to .auth_secret on the data volume
- The key is reused on subsequent restarts, so existing sessions remain valid
For most on-prem deployments, the auto-generated key is sufficient. You only need to set AUTH_SECRET manually if you're running multiple backend instances that must share the same signing key.
Supply Chain Security
The CI pipeline includes automated dependency and container scanning:
- Python dependencies: pip-audit runs on every push/PR, checking installed packages against the OSV/PyPI advisory database. Fails CI on known vulnerabilities.
- npm dependencies: npm audit --audit-level=high --omit=dev runs on every push/PR, scanning production dependencies only. Dev-only advisories are tracked in the security risk registry (SECURITY-RISKS.md) rather than blocking CI.
- Container images: Trivy scans both backend and frontend Docker images for CRITICAL/HIGH CVEs during the release workflow. Blocks publishing if vulnerabilities are found.
- SBOM: CycloneDX Software Bill of Materials generated for both container images on release, available as downloadable build artifacts (90-day retention).
Accepted risks (dev-only vulnerabilities that cannot currently be resolved) are documented in SECURITY-RISKS.md at the repository root with severity, justification, mitigation, and review dates.
Backup Signing Key
Each instance has a dedicated ECDSA P-256 key pair for signing backups, separate from the JWT signing key.
- If BACKUP_SIGNING_KEY is set (PEM-encoded private key), that key is used
- If not set, a key pair is auto-generated on first boot and the private key is persisted to .backup_signing_key.pem on the data volume
- The public key and fingerprint are derived from the private key at startup
The backup signing key is intentionally separate from AUTH_SECRET — a leaked signed backup cannot be used to recover the JWT signing key or forge authentication tokens.
See Backup Signing above for cross-instance trust configuration.
Maintenance Mode (Circuit Breaker)
The platform includes an automatic circuit breaker that activates maintenance mode when the OpenAI API quota is exhausted. When tripped, all non-admin API requests receive a 503 Service Unavailable response, and non-admin users see a "System Under Maintenance" page.
How it works
- When any LLM call returns an insufficient_quota error from OpenAI, the circuit breaker trips automatically
- An urgent email alert is sent to the configured MAINTENANCE_ALERT_EMAIL address
- All API endpoints, MCP tools, and SSE streams are blocked for non-admin users
- Superadmins can still access the platform to investigate and resolve the issue
Important: The circuit breaker does not trigger when individual users or organizations run out of their internal credit balance. Only OpenAI platform-level quota errors activate it.
Configuration
| Variable | Default | Description |
|---|---|---|
| MAINTENANCE_ALERT_EMAIL | (empty) | Email address for urgent maintenance alerts. Leave empty to disable email alerts. |
SMTP must be configured (see Self-Service Signup for SMTP settings) for email alerts to be sent.
Admin endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/admin/maintenance | Returns {active, reason, tripped_at} (superadmin only) |
| POST | /api/admin/maintenance/clear | Clears maintenance mode and restores normal operation (superadmin only) |
Clearing maintenance mode
- Log in as a superadmin (superadmins bypass the maintenance gate)
- Resolve the underlying issue (e.g., increase OpenAI spending limit, add credits)
- Call POST /api/admin/maintenance/clear or use the admin panel to clear maintenance mode
- All users can now access the platform again