Integrations
Mipiti integrates with external tools to fit into your existing workflow. Configure integrations on the Settings page.
Jira
Export your threat model to Jira for tracking and implementation:
- Implementation Controls are pushed as Jira issues under the source issue that triggered the model (or a container issue for web-generated models)
- The issue type for exported controls is configurable (Story, Task, Sub-task, etc.)
- Status syncs back from Jira automatically via webhooks
- When you remove an asset or attacker and controls are dropped, their linked Jira issues are automatically transitioned to "Won't Do" with an explanatory comment
To get started:
- Ask your org admin to configure Jira integration (select a project, set up webhooks) in Settings > Integrations
- Connect your own Atlassian account from Settings > Integrations > Jira
- Use the "Export to Jira" button on any model
Note: Jira integration is an organization feature. It is not available for personal workspaces (users not in an organization). If you need Jira integration, join an organization where an admin has configured it.
MCP (Model Context Protocol)
Connect AI coding agents (Claude Code, Claude Desktop, Cursor) to Mipiti for programmatic threat modeling as part of your development workflow.
Authentication
Two ways to authenticate MCP connections:
- OAuth (recommended) — MCP clients with OAuth support automatically prompt you to log in via your browser. No API key needed — just point the client at the URL and approve access when prompted. Tokens refresh automatically.
- API key — For clients without OAuth support, or for headless/CI environments, create an API key in Settings and pass it via the `X-API-Key` header.
Connection options
- Hosted — connect directly to `https://api.mipiti.io/mcp`. No installation needed.
- Standalone — install the `mipiti-mcp` Python package locally for stdio-based connections.
The Settings page has configuration snippets for both options.
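The exact snippets live in Settings and vary by client. As a rough sketch, an MCP client configuration (Claude Desktop–style `mcpServers` JSON) might look like the following — the server names, the `headers`/`env` field placement, and the `MIPITI_API_KEY` variable are illustrative assumptions, not the official snippet:

```json
{
  "mcpServers": {
    "mipiti-hosted": {
      "url": "https://api.mipiti.io/mcp",
      "headers": { "X-API-Key": "mk_..." }
    },
    "mipiti-standalone": {
      "command": "mipiti-mcp",
      "env": { "MIPITI_API_KEY": "mk_..." }
    }
  }
}
```

Clients with OAuth support need only the `url` entry; they prompt for browser login on first use.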
MCP Tools Reference
49 tools are available, grouped by category:
| Category | Tools |
|---|---|
| Generation | generate_threat_model, refine_threat_model, regenerate_controls |
| Retrieval | list_threat_models, get_threat_model, get_controls, get_control_objectives, assess_model, get_review_queue |
| Manage | rename_threat_model, delete_threat_model |
| Queries | query_threat_model |
| Updates | update_control_status, refine_control, add_evidence, remove_evidence |
| Entity Management | add_asset, edit_asset, remove_asset, add_attacker, edit_attacker, remove_attacker |
| Evidence Verification | submit_assertions, list_assertions, delete_assertion, get_verification_report |
| Gap Discovery | submit_findings, list_findings, update_finding, get_scan_prompt |
| Compliance | list_compliance_frameworks, select_compliance_frameworks, get_compliance_report, map_control_to_requirement, auto_map_controls, suggest_compliance_remediation, apply_compliance_remediation |
| Workspaces | list_workspaces |
| Systems | list_systems, get_system, create_system, add_model_to_system |
| Export | export_threat_model (CSV, PDF, HTML) |
| Async | get_operation_status |
Pagination and Filtering
Retrieval tools return counts and summaries only by default to stay within AI context limits. For example, `get_controls` returns total/implemented/not_implemented counts without listing individual controls, and `assess_model` returns only the summary counts.
To drill into details:
- `offset` / `limit` — page through results (e.g., `offset=0, limit=20` for the first 20 items)
- `status` — filter by status (e.g., `status="not_implemented"` on `get_controls`, `status="at_risk"` on `assess_model`)
- `control_id` — get full details for a single control (e.g., `control_id="CTRL-01"` on `get_controls`)
- `co_id` — filter controls mapped to a specific control objective
- `include_cos` — include control objectives inline on `get_threat_model` (excluded by default)
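The paging parameters compose in the usual way: request fixed-size pages until a short page signals the end. A minimal sketch, with `fetch_page` standing in for a real `get_controls(status=..., offset=..., limit=...)` call:

```python
# Sketch: drain a filtered control list page by page.
# fetch_page is a local stand-in for a real get_controls call.
ALL_CONTROLS = [f"CTRL-{i:02d}" for i in range(1, 48)]  # pretend server-side data

def fetch_page(offset: int, limit: int) -> list[str]:
    return ALL_CONTROLS[offset:offset + limit]

def fetch_all(limit: int = 20) -> list[str]:
    items, offset = [], 0
    while True:
        page = fetch_page(offset, limit)
        items.extend(page)
        if len(page) < limit:   # a short page means we've reached the end
            return items
        offset += limit

controls = fetch_all()
```

In practice you rarely need to drain everything — the default summaries plus a targeted `status` or `control_id` filter usually keep responses small.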
Common Workflows
Generate and track implementation:
1. `generate_threat_model` — create a model from a feature description
2. `get_controls` with `status="not_implemented"` — see what needs implementing
3. `refine_control` — if a generated control is overly prescriptive, propose a refined description with justification; AI evaluates whether the mitigation group still satisfies all mapped COs
4. `update_control_status` — mark controls as implemented with notes
5. `assess_model` — check overall assurance posture
Evidence verification (prove controls are implemented):
1. `get_controls` with `status="implemented"` — find controls needing evidence
2. Analyze your codebase to identify implementing code artifacts
3. `submit_assertions` — create typed, machine-verifiable claims (e.g., function exists, test passes)
4. Run `mipiti-verify` in CI to independently verify assertions against the codebase (Tier 1 + Tier 2)
5. `get_verification_report` — check verification status and detect drift
See Evidence Verification for the full guide on assertion types, the mipiti-verify CLI, and CI integration.
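To make the "typed, machine-verifiable claims" step concrete, a `submit_assertions` payload might be shaped like the sketch below — the `type` names, `target` format, and field names are illustrative assumptions; the real schema is in the Evidence Verification guide:

```python
# Sketch: an illustrative submit_assertions payload for one control.
# Type names and target notation are hypothetical, not the official schema.
assertions = [
    {
        "control_id": "CTRL-01",
        "type": "function_exists",   # claim: this function is present in the code
        "target": "src/auth/session.py::rotate_session_token",
    },
    {
        "control_id": "CTRL-01",
        "type": "test_passes",       # claim: this test passes in CI
        "target": "tests/test_session.py::test_token_rotation",
    },
]
# submit_assertions(model_id="...", assertions=assertions)  # MCP tool call
```

The point of typed assertions is that `mipiti-verify` can re-check them mechanically on every CI run, so evidence does not rot silently.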
Gap discovery (find missing implementations):
1. `get_scan_prompt` — get a structured guide for scanning against not-implemented controls
2. Scan the codebase following the prompt, noting checked locations and missing patterns
3. `submit_findings` — record gaps with evidence of what was checked
4. `update_finding` — transition findings through acknowledge → remediate → verify
See Negative Findings for the full lifecycle and remediation bridge.
Compliance gap remediation:
1. `list_compliance_frameworks` — see available frameworks
2. `select_compliance_frameworks` — link a framework to your model
3. `get_compliance_report` with `status="uncovered"` — find gaps
4. `suggest_compliance_remediation` — propose missing assets and attackers (async, ~5 min)
5. `apply_compliance_remediation` — apply approved suggestions (sync, ~5 s)
6. `regenerate_controls` — generate controls for newly added control objectives
Async Operations
Long-running tools support background execution via the `async_mode` parameter. When enabled, the tool returns a `job_id` immediately instead of blocking, and you poll `get_operation_status` for progress and results.
Async by default (30–120 s): `generate_threat_model`, `refine_threat_model`, `suggest_compliance_remediation`. These return a `job_id` by default. Set `async_mode=false` to block instead.
Sync by default (5–60 s): `query_threat_model`, `get_controls`, `regenerate_controls`, `import_controls`, `check_control_gaps`, `auto_map_controls`. These block by default. Set `async_mode=true` for background execution.
Polling example:
1. `generate_threat_model(feature_description="User login with OAuth")` → `{"job_id": "job_a1b2c3d4e5f6"}`
2. `get_operation_status(job_id="job_a1b2c3d4e5f6")` → `{"status": "running", "progress": 2, "total": 5, "message": "Generating controls..."}`
3. `get_operation_status(job_id="job_a1b2c3d4e5f6")` → `{"status": "completed", "result": {...}}`
Jobs expire after 1 hour. Results are persisted in the database regardless of job expiry.
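A polling loop for this pattern can be sketched as follows. `get_operation_status` is stubbed here with canned responses so the control flow is the focus; the `result` contents and status progression are illustrative:

```python
import time
from itertools import count

# Canned responses standing in for real get_operation_status calls.
RESPONSES = [
    {"status": "running", "progress": 2, "total": 5, "message": "Generating controls..."},
    {"status": "running", "progress": 4, "total": 5, "message": "Mapping objectives..."},
    {"status": "completed", "result": {"model_id": "tm_123"}},
]

def get_operation_status(job_id: str, _calls=count()) -> dict:
    return RESPONSES[min(next(_calls), len(RESPONSES) - 1)]

def wait_for_job(job_id: str, poll_seconds: float = 0.0) -> dict:
    """Poll until the job reaches a terminal state (use a real interval in practice)."""
    while True:
        status = get_operation_status(job_id)
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(poll_seconds)

final = wait_for_job("job_a1b2c3d4e5f6")
```

Since jobs expire after an hour but results persist, a loop like this only needs to outlive the job itself, not the result.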
REST API
All features available via MCP are also available as REST API endpoints, callable directly with an API key (`X-API-Key` header) or OAuth Bearer token. This is useful for custom integrations, CI pipelines, or external tools that don't use MCP.
Finding endpoints (gap discovery):
| Method | Path | Description |
|---|---|---|
| POST | `/api/models/{id}/findings` | Submit one or more findings |
| GET | `/api/models/{id}/findings` | List findings (filter by `control_id`, `status`) |
| GET | `/api/models/{id}/findings/{finding_id}` | Get a single finding |
| PATCH | `/api/models/{id}/findings/{finding_id}` | Transition finding status |
| DELETE | `/api/models/{id}/findings/{finding_id}` | Delete a finding |
| GET | `/api/models/{id}/findings/summary` | Finding counts by status |
Assertion endpoints (evidence verification):
| Method | Path | Description |
|---|---|---|
| POST | `/api/models/{id}/assertions` | Submit assertions for a control |
| GET | `/api/models/{id}/assertions` | List assertions (filter by `control_id`) |
| DELETE | `/api/models/{id}/assertions/{assertion_id}` | Delete an assertion |
| GET | `/api/models/{id}/verification/report` | Verification report with drift detection |
| POST | `/api/models/{id}/verification/results` | Submit CI verification results |
See the full API reference for all available endpoints.
API Keys
Create API keys on the Settings page to authenticate MCP connections and direct API calls. Keys are shown once at creation and stored securely — they cannot be retrieved later.
Tip: If your MCP client supports OAuth (Claude Code, Claude Desktop, Cursor), you don't need an API key — just use the OAuth config from Settings and log in when prompted. API keys are still useful for headless/CI environments and direct REST API calls.
Key scopes
| Scope | Prefix | Who can create | Use case |
|---|---|---|---|
| Developer | `mk_` | Any user | MCP connections, API calls, model generation |
| Verifier | `mv_` | Workspace owners only | CI pipeline submissions, evidence verification, attestation |
Choose the scope when creating a key. Verifier keys are intended for CI pipelines that submit evidence verification results — they carry restricted permissions scoped to verification operations.
Workspace scoping
Every API key is scoped to a workspace, selected when the key is created. When the key is used for authentication, the workspace resolves automatically — no `X-Workspace-Id` header is needed.
This means a CI pipeline using a workspace-scoped verifier key can call `mipiti-verify run --all` to verify every threat model in that workspace in a single invocation. To change the workspace scope, revoke the key and create a new one — this ensures CI pipelines never silently switch context.
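In CI this might look like the sketch below (a GitHub Actions fragment). The secret name, the `MIPITI_API_KEY` environment variable, and the install command are assumptions — only `mipiti-verify run --all` comes from the docs above:

```yaml
# Hypothetical GitHub Actions step: verify every model in the key's workspace.
- name: Verify threat model evidence
  run: |
    pip install mipiti-verify        # assumed package name for the CLI
    mipiti-verify run --all
  env:
    MIPITI_API_KEY: ${{ secrets.MIPITI_VERIFIER_KEY }}   # mv_-scoped verifier key
```

Storing only the verifier key (not a developer key) in CI keeps the pipeline's permissions limited to verification operations.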
Confluence
Mipiti can sync threat models to Confluence pages automatically:
- Pages are created or updated on model save
- Content is rendered as structured XHTML
- The target Confluence space is configured by your org admin in Settings > Integrations
Note: Confluence integration is an organization feature. It is not available for personal workspaces.
Verification Badge
Add a dynamic badge to your README that shows your threat model's verification status. The badge updates automatically as controls are verified in CI.
Setup
1. Enable the badge on your threat model — go to the model detail page and toggle "Public Badge" on (or use `PATCH /api/models/{model_id}/badge`)
2. Add this to your `README.md`, replacing `MODEL_ID` with your threat model's ID:
[](https://mipiti.io/models/MODEL_ID)
Find your model ID in the model detail page URL: `mipiti.io/models/{MODEL_ID}`.
Badge states
| Badge | Meaning |
|---|---|
| | All COs fulfilled and verified by CI |
| | All controls implemented, verification in progress |
| | Some controls implemented, gaps remain |
| | Threat model exists but no controls generated |
Framework-specific badges
Show compliance status for a specific framework:
[](https://mipiti.io/models/MODEL_ID)
How it works
The badge endpoint (`/api/badge/{model_id}`) is unauthenticated — Shields.io fetches it publicly. It runs the assurance assessment and returns a Shields.io endpoint JSON response with the current status. Responses are cached for 5 minutes to prevent excessive load.
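Shields.io's endpoint schema is a small JSON object (`schemaVersion`, `label`, `message`, `color`). A sketch of the kind of response such an endpoint produces — the state names, label text, and status-to-color mapping here are illustrative, not Mipiti's actual values:

```python
# Sketch: shaping an assurance summary into a Shields.io endpoint response.
# State names and colors are illustrative; only the schema keys are Shields.io's.
COLORS = {
    "verified": "brightgreen",
    "pending_verification": "yellow",
    "gaps_remain": "orange",
    "no_controls": "lightgrey",
}

def badge_payload(state: str, fulfilled: int, total: int) -> dict:
    return {
        "schemaVersion": 1,                      # required by Shields.io
        "label": "threat model",
        "message": f"{fulfilled}/{total} COs fulfilled",
        "color": COLORS.get(state, "lightgrey"),
    }

payload = badge_payload("verified", 12, 12)
```

Because Shields.io re-fetches the endpoint, the badge image stays current without the README ever changing.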
The badge is a live attestation: it reflects the current verification state, not a point-in-time snapshot. If a CI run fails (assertion drift, code change breaks a control), the badge updates automatically on the next fetch.
Privacy: Badges are opt-in. The verification status (fulfilled/at-risk CO counts) is security-sensitive — it reveals where gaps are. Only enable badges for projects where you're comfortable showing this publicly (e.g., open-source projects). The endpoint returns identical "unknown" responses for non-existent and non-enabled models.