Integrations

Mipiti integrates with external tools to fit into your existing workflow. Configure integrations on the Settings page.

Jira

Export your threat model to Jira for tracking and implementation.

To get started:

  1. Ask your org admin to configure Jira integration (select a project, set up webhooks) in Settings > Integrations
  2. Connect your own Atlassian account from Settings > Integrations > Jira
  3. Use the "Export to Jira" button on any model

Note: Jira integration is an organization feature. It is not available for personal workspaces (users not in an organization). If you need Jira integration, join an organization where an admin has configured it.

MCP (Model Context Protocol)

Connect AI coding agents (Claude Code, Claude Desktop, Cursor) to Mipiti for programmatic threat modeling as part of your development workflow.

Authentication

Two ways to authenticate MCP connections:

  - OAuth — supported by MCP clients such as Claude Code, Claude Desktop, and Cursor; log in when prompted
  - API key — created on the Settings page; useful for headless and CI environments

Connection options

The Settings page has configuration snippets for both options.
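For an HTTP-based MCP client such as Claude Code, a snippet might look roughly like the following. This is a sketch: the server URL and the use of an X-API-Key header for MCP are assumptions, not documented values — copy the real snippet from Settings.

```json
{
  "mcpServers": {
    "mipiti": {
      "type": "http",
      "url": "https://api.mipiti.io/mcp",
      "headers": { "X-API-Key": "mk_your_key_here" }
    }
  }
}
```

With OAuth-capable clients, the headers entry is unnecessary — the client prompts you to log in instead.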

MCP Tools Reference

49 tools are available, grouped by category:

| Category | Tools |
| --- | --- |
| Generation | generate_threat_model, refine_threat_model, regenerate_controls |
| Retrieval | list_threat_models, get_threat_model, get_controls, get_control_objectives, assess_model, get_review_queue |
| Manage | rename_threat_model, delete_threat_model |
| Queries | query_threat_model |
| Updates | update_control_status, refine_control, add_evidence, remove_evidence |
| Entity Management | add_asset, edit_asset, remove_asset, add_attacker, edit_attacker, remove_attacker |
| Evidence Verification | submit_assertions, list_assertions, delete_assertion, get_verification_report |
| Gap Discovery | submit_findings, list_findings, update_finding, get_scan_prompt |
| Compliance | list_compliance_frameworks, select_compliance_frameworks, get_compliance_report, map_control_to_requirement, auto_map_controls, suggest_compliance_remediation, apply_compliance_remediation |
| Workspaces | list_workspaces |
| Systems | list_systems, get_system, create_system, add_model_to_system |
| Export | export_threat_model (CSV, PDF, HTML) |
| Async | get_operation_status |

Pagination and Filtering

Retrieval tools return counts and summaries only by default to stay within AI context limits. For example, get_controls returns total/implemented/not_implemented counts without listing individual controls, and assess_model returns only the summary counts.

To drill into details, pass filters — for example, get_controls with status="not_implemented" returns the matching controls themselves rather than just the counts.

Common Workflows

Generate and track implementation:

  1. generate_threat_model — create a model from a feature description
  2. get_controls with status="not_implemented" — see what needs implementing
  3. refine_control — if a generated control is overly prescriptive, propose a refined description with a justification; the AI evaluates whether the mitigation group still satisfies all mapped control objectives (COs)
  4. update_control_status — mark controls as implemented with notes
  5. assess_model — check overall assurance posture
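The steps above can be sketched against a generic client. The method names mirror the MCP tools; the MipitiClient class itself is a hypothetical stand-in, not a published SDK.

```python
# Sketch of the generate-and-track loop. MipitiClient is a stub whose
# methods mirror the MCP tools; replace each with a real MCP/REST call.

class MipitiClient:
    def generate_threat_model(self, feature_description):
        return {"model_id": "m1"}

    def get_controls(self, model_id, status=None):
        return {"controls": [{"id": "c1", "status": status or "not_implemented"}]}

    def update_control_status(self, model_id, control_id, status, notes=""):
        return {"id": control_id, "status": status, "notes": notes}

    def assess_model(self, model_id):
        return {"implemented": 1, "not_implemented": 0}

def track_implementation(client, description):
    model = client.generate_threat_model(feature_description=description)
    pending = client.get_controls(model["model_id"], status="not_implemented")
    for control in pending["controls"]:
        # ...implement the control in your codebase, then record it:
        client.update_control_status(model["model_id"], control["id"],
                                     status="implemented",
                                     notes="added rate limiting")
    # Finally, check the overall assurance posture.
    return client.assess_model(model["model_id"])

print(track_implementation(MipitiClient(), "User login with OAuth"))
```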

Evidence verification (prove controls are implemented):

  1. get_controls with status="implemented" — find controls needing evidence
  2. Analyze your codebase to identify implementing code artifacts
  3. submit_assertions — create typed, machine-verifiable claims (e.g., function exists, test passes)
  4. Run mipiti-verify in CI to independently verify assertions against the codebase (Tier 1 + Tier 2)
  5. get_verification_report — check verification status and detect drift
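Assertions are typed, machine-verifiable claims. A sketch of building a submit_assertions payload follows — the field names and file paths are illustrative assumptions; only the claim types ("function exists", "test passes") come from the docs.

```python
# Illustrative assertion payloads for submit_assertions. Field names are
# assumptions, not the documented schema.

def make_assertion(control_id, kind, target):
    """Build one typed claim that mipiti-verify could check in CI."""
    allowed = {"function_exists", "test_passes"}
    if kind not in allowed:
        raise ValueError(f"unknown assertion type: {kind}")
    return {"control_id": control_id, "type": kind, "target": target}

assertions = [
    make_assertion("ctrl-42", "function_exists", "auth/session.py::rotate_token"),
    make_assertion("ctrl-42", "test_passes", "tests/test_session.py::test_rotation"),
]
# submit_assertions(model_id="m1", assertions=assertions)  # then verify in CI
```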

See Evidence Verification for the full guide on assertion types, the mipiti-verify CLI, and CI integration.

Gap discovery (find missing implementations):

  1. get_scan_prompt — get a structured guide for scanning against not-implemented controls
  2. Scan the codebase following the prompt, noting checked locations and missing patterns
  3. submit_findings — record gaps with evidence of what was checked
  4. update_finding — transition findings through acknowledge → remediate → verify
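The lifecycle that update_finding enforces can be modeled as a small state machine. The status names below are assumptions inferred from the acknowledge → remediate → verify transitions; only the transition order comes from the docs.

```python
# Sketch of the finding lifecycle behind update_finding. Status names are
# assumed; the ordering (acknowledge -> remediate -> verify) is documented.

TRANSITIONS = {
    "open": {"acknowledged"},
    "acknowledged": {"remediated"},
    "remediated": {"verified"},
}

def transition(finding, new_status):
    """Return a copy of the finding moved to new_status, if legal."""
    if new_status not in TRANSITIONS.get(finding["status"], set()):
        raise ValueError(f"cannot move {finding['status']} -> {new_status}")
    return {**finding, "status": new_status}

finding = {"id": "f1", "control_id": "ctrl-7", "status": "open",
           "evidence": "checked src/auth/, no rate limiter found"}
for step in ("acknowledged", "remediated", "verified"):
    finding = transition(finding, step)
```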

See Negative Findings for the full lifecycle and remediation bridge.

Compliance gap remediation:

  1. list_compliance_frameworks — see available frameworks
  2. select_compliance_frameworks — link a framework to your model
  3. get_compliance_report with status="uncovered" — find gaps
  4. suggest_compliance_remediation — propose missing assets and attackers (async, ~5 min)
  5. apply_compliance_remediation — apply approved suggestions (sync, ~5s)
  6. regenerate_controls — generate controls for newly added control objectives

Async Operations

Long-running tools support background execution via the async_mode parameter. When enabled, the tool returns a job_id immediately instead of blocking, and you poll get_operation_status for progress and results.

Async by default (30-120s): generate_threat_model, refine_threat_model, suggest_compliance_remediation. These return a job_id by default. Set async_mode=false to block instead.

Sync by default (5-60s): query_threat_model, get_controls, regenerate_controls, import_controls, check_control_gaps, auto_map_controls. These block by default. Set async_mode=true for background execution.

Polling example:

  1. generate_threat_model(feature_description="User login with OAuth") → {"job_id": "job_a1b2c3d4e5f6"}
  2. get_operation_status(job_id="job_a1b2c3d4e5f6") → {"status": "running", "progress": 2, "total": 5, "message": "Generating controls..."}
  3. get_operation_status(job_id="job_a1b2c3d4e5f6") → {"status": "completed", "result": {...}}

Jobs expire after 1 hour. Results are persisted in the database regardless of job expiry.
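A polling loop for this pattern might look as follows. The get_operation_status responses here are stubbed with the values from the example above; swap the stub for the real MCP call.

```python
# Minimal polling loop for async tools. fake_status_feed() stands in for
# repeated get_operation_status(job_id=...) calls.

import time

def fake_status_feed():
    yield {"status": "running", "progress": 2, "total": 5,
           "message": "Generating controls..."}
    yield {"status": "completed", "result": {"model_id": "m1"}}

def poll(job_id, statuses, interval=0.0):
    """Consume status responses until the job finishes or the feed ends."""
    for status in statuses:
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval)  # back off between polls in real use
    raise TimeoutError(f"job {job_id} never finished (jobs expire after 1 hour)")

final = poll("job_a1b2c3d4e5f6", fake_status_feed())
```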

REST API

All features available via MCP are also available as REST API endpoints, callable directly with an API key (X-API-Key header) or OAuth Bearer token. This is useful for custom integrations, CI pipelines, or external tools that don't use MCP.

Finding endpoints (gap discovery):

| Method | Path | Description |
| --- | --- | --- |
| POST | /api/models/{id}/findings | Submit one or more findings |
| GET | /api/models/{id}/findings | List findings (filter by control_id, status) |
| GET | /api/models/{id}/findings/{finding_id} | Get a single finding |
| PATCH | /api/models/{id}/findings/{finding_id} | Transition finding status |
| DELETE | /api/models/{id}/findings/{finding_id} | Delete a finding |
| GET | /api/models/{id}/findings/summary | Finding counts by status |

Assertion endpoints (evidence verification):

| Method | Path | Description |
| --- | --- | --- |
| POST | /api/models/{id}/assertions | Submit assertions for a control |
| GET | /api/models/{id}/assertions | List assertions (filter by control_id) |
| DELETE | /api/models/{id}/assertions/{assertion_id} | Delete an assertion |
| GET | /api/models/{id}/verification/report | Verification report with drift detection |
| POST | /api/models/{id}/verification/results | Submit CI verification results |

See the full API reference for all available endpoints.
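A hedged example of calling the findings endpoint with an API key follows. The base URL is an assumption; the path and X-API-Key header come from the tables above. The request is only constructed here, not sent.

```python
# Construct (but do not send) a POST to the findings endpoint.
# https://api.mipiti.io is an assumed base URL.

import json
import urllib.request

def build_submit_findings_request(base_url, model_id, api_key, findings):
    body = json.dumps({"findings": findings}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/models/{model_id}/findings",
        data=body,
        method="POST",
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
    )

req = build_submit_findings_request(
    "https://api.mipiti.io", "m1", "mk_example",
    [{"control_id": "ctrl-7", "evidence": "no rate limiter in src/auth/"}],
)
# urllib.request.urlopen(req)  # would actually send it
```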

API Keys

Create API keys on the Settings page to authenticate MCP connections and direct API calls. Keys are shown once at creation and stored securely — they cannot be retrieved later.

Tip: If your MCP client supports OAuth (Claude Code, Claude Desktop, Cursor), you don't need an API key — just use the OAuth config from Settings and log in when prompted. API keys are still useful for headless/CI environments and direct REST API calls.

Key scopes

| Scope | Prefix | Who can create | Use case |
| --- | --- | --- | --- |
| Developer | mk_ | Any user | MCP connections, API calls, model generation |
| Verifier | mv_ | Workspace owners only | CI pipeline submissions, evidence verification, attestation |

Choose the scope when creating a key. Verifier keys are intended for CI pipelines that submit evidence verification results — they carry restricted permissions scoped to verification operations.

Workspace scoping

Every API key is scoped to a workspace, selected when the key is created. When the key is used for authentication, the workspace resolves automatically — no X-Workspace-Id header is needed.

This means a CI pipeline using a workspace-scoped verifier key can call mipiti-verify run --all to verify every threat model in that workspace in a single invocation. To change the workspace scope, revoke the key and create a new one — this ensures CI pipelines never silently switch context.
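In a CI pipeline this might look like the sketch below. The secret name and the environment variable the CLI reads are assumptions; mipiti-verify run --all is the only part taken from the docs.

```yaml
# Hypothetical GitHub Actions job using a workspace-scoped verifier key.
verify-threat-models:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Verify all threat models in the workspace
      env:
        MIPITI_API_KEY: ${{ secrets.MIPITI_VERIFIER_KEY }}  # an mv_ key
      run: mipiti-verify run --all
```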

Confluence

Mipiti can sync threat models to Confluence pages automatically.

Note: Confluence integration is an organization feature. It is not available for personal workspaces.

Verification Badge

Add a dynamic badge to your README that shows your threat model's verification status. The badge updates automatically as controls are verified in CI.

Setup

  1. Enable the badge on your threat model — go to the model detail page and toggle "Public Badge" on (or use PATCH /api/models/{model_id}/badge)
  2. Add this to your README.md, replacing MODEL_ID with your threat model's ID:
[![Mipiti Verified](https://img.shields.io/endpoint?url=https://api.mipiti.io/api/badge/MODEL_ID)](https://mipiti.io/models/MODEL_ID)

Find your model ID in the model detail page URL: mipiti.io/models/{MODEL_ID}.

Badge states

| Badge | Meaning |
| --- | --- |
| Verified | All COs fulfilled and verified by CI |
| Implemented | All controls implemented, verification in progress |
| Partial | Some controls implemented, gaps remain |
| No controls | Threat model exists but no controls generated |

Framework-specific badges

Show compliance status for a specific framework. Because the badge URL is nested inside Shields.io's url parameter, its ?framework= query string must be percent-encoded:

[![Mipiti SOC 2](https://img.shields.io/endpoint?url=https%3A%2F%2Fapi.mipiti.io%2Fapi%2Fbadge%2FMODEL_ID%3Fframework%3Dsoc2-tsc)](https://mipiti.io/models/MODEL_ID)
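Percent-encoding nested URLs by hand is error-prone, so a small helper can generate the badge markdown. This is a sketch; the label text and URL shapes mirror the examples in this section.

```python
# Build Shields.io endpoint-badge markdown, percent-encoding the inner
# Mipiti badge URL so a nested "?framework=..." query string survives.

from urllib.parse import quote

def badge_markdown(model_id, framework=None):
    inner = f"https://api.mipiti.io/api/badge/{model_id}"
    if framework:
        inner += f"?framework={framework}"
    shield = "https://img.shields.io/endpoint?url=" + quote(inner, safe="")
    return f"[![Mipiti]({shield})](https://mipiti.io/models/{model_id})"

print(badge_markdown("MODEL_ID", framework="soc2-tsc"))
```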

How it works

The badge endpoint (/api/badge/{model_id}) is unauthenticated — Shields.io fetches it publicly. It runs the assurance assessment and returns a Shields.io endpoint JSON response with the current status. Responses are cached for 5 minutes to prevent excessive load.

The badge is a live attestation: it reflects the current verification state, not a point-in-time snapshot. If a CI run fails (assertion drift, code change breaks a control), the badge updates automatically on the next fetch.

Privacy: Badges are opt-in. The verification status (fulfilled/at-risk CO counts) is security-sensitive — it reveals where gaps are. Only enable badges for projects where you're comfortable showing this publicly (e.g., open-source projects). The endpoint returns identical "unknown" responses for non-existent and non-enabled models.