Working with Models
Refining a threat model
After generating a model, you can refine it in the same chat session:
- "Add an insider threat attacker with database access"
- "Remove the availability property from the logging asset"
- "Add a trust boundary between the API gateway and backend services"
Each refinement creates a new version of the model. The original is preserved.
You can also refine from the Dashboard or Models page by clicking the refine button on any model card.
Versioning and diffs
Every model is versioned automatically. On the Models page you can:
- View all versions of a model
- Compare any two versions to see a structured diff — additions, removals, and modifications to assets, attackers, and control objectives are color-coded
- Navigate back to previous versions
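A structured diff of this kind reduces to a keyed comparison. The sketch below shows one way to compute additions, removals, and modifications; the entity shapes and field names are illustrative assumptions, not Mipiti's actual schema:

```python
# Minimal sketch of a structured diff between two model versions.
# Entities are assumed to be maps keyed by stable IDs.

def diff_entities(old: dict, new: dict) -> dict:
    """Diff two {entity_id: entity} maps into additions, removals, modifications."""
    added = [new[k] for k in new.keys() - old.keys()]
    removed = [old[k] for k in old.keys() - new.keys()]
    modified = [
        {"id": k, "before": old[k], "after": new[k]}
        for k in old.keys() & new.keys()
        if old[k] != new[k]
    ]
    return {"added": added, "removed": removed, "modified": modified}

v3_assets = {"A1": {"name": "OIDC Token", "impact": "High"}}
v4_assets = {
    "A1": {"name": "OIDC Token", "impact": "Medium"},  # modified
    "A2": {"name": "Audit Log", "impact": "High"},     # added
}
print(diff_entities(v3_assets, v4_assets))
```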
Sessions
The Sessions page shows your chat conversation history. Each session records:
- Your messages and Mipiti's responses
- Which threat model was created or refined
- The intent classification for each turn
Click any session to replay the full conversation.
Adding, editing, and removing entities
You can directly add, edit, or soft-delete individual assets, attackers, trust boundaries, components, and assumptions without going through chat refinement. In the Assurance tab, expand the "View Model" section:
- Click the + button in a section header to add a new entity
- Click the pencil icon on any row to edit it
- Click the trash icon to soft-delete it (with confirmation)
- A previously soft-deleted asset or attacker can be restored — its ID is reinstated along with every control it anchored
The same modal flow opens when an operator clicks a one-click resolution button on a coherence-report finding (e.g. Scope asset on a co_asset_unbounded finding opens the asset edit modal pre-filled with the component picker; Ground component on a component_unbound finding opens the component edit modal). The asset modal carries a multi-select component picker, and the assumption modal exposes an optional structured-exclusion-predicate section that auto-expands when launched from a finding, so the operator's prose context flows in without retyping.
Each direct edit creates a new version of the model. Mipiti never hard-removes assets or attackers, and never renumbers control-objective IDs. The lifecycle works like this:
- Soft-delete: the asset (or attacker) keeps its ID forever — never reused. Its linked control objectives are tombstoned (marked removed but retained so controls referencing them stay valid). Any control whose mapped COs are all tombstoned surfaces as orphaned — a derived state, shown as a distinct section of Assurance — so you can either remap it to a live CO or soft-delete the control explicitly. You cannot soft-delete the last live asset or the last live attacker (the cross-product would be empty, so there'd be nothing to threat-model).
- Restore: flip the entity back to live. Its CO tombstones revive with their original IDs, and orphaned controls re-attach automatically.
- Edit (identity-preserving): wording fixes, property tightening, description clarifications. Applied normally.
- Edit (identity-changing): Mipiti refuses. Changing "OIDC Token" → "CI Attestation Bundle" describes a different asset — the platform asks you to soft-delete the old one and add a new one under a new ID. The check runs through an LLM classifier that fails closed: HTTP 503 on LLM outage, HTTP 502 on a malformed response. Identity is never silently replaced.
- Re-adding a previously soft-deleted entity: if your proposed asset or attacker matches a soft-deleted one by identity, Mipiti reanimates it automatically (preserving its ID and every control tied to it) rather than creating a duplicate. If the match is plausible but not certain, the add is rejected with an explicit suggestion to restore the matched ID or pick a more distinctive name. The same reanimation path runs during bulk compliance remediation.
- Adding controls (auto-generation): when control auto-generation runs after a model edit (e.g., adding an attacker triggers controls for the new COs), the platform folds new controls into existing ones if they describe the same mechanism — preserving existing control IDs and unioning their co_ids and framework_refs (a minimal sketch follows this list). This is load-bearing because control IDs key into Jira mappings, compliance mappings, sufficiency signatures, and assertions; the prior bare-concat behavior would silently break those links on every iteration. Genuinely new controls receive fresh IDs starting from max(existing) + 1.
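Here is a minimal sketch of that fold-into-existing behavior, under assumed shapes. The same-mechanism test is shown as a plain string match; in the platform it is an LLM judgment:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    id: int
    mechanism: str                      # stand-in for the "same mechanism" identity
    co_ids: set = field(default_factory=set)
    framework_refs: set = field(default_factory=set)

def fold_controls(existing: list, generated: list) -> list:
    """Fold freshly generated controls into existing ones, preserving IDs."""
    by_mechanism = {c.mechanism: c for c in existing}
    next_id = max((c.id for c in existing), default=0) + 1
    for g in generated:
        match = by_mechanism.get(g.mechanism)
        if match:
            # Same mechanism: keep the existing ID, union the link sets.
            match.co_ids |= g.co_ids
            match.framework_refs |= g.framework_refs
        else:
            # Genuinely new control: fresh ID above the existing maximum.
            g.id = next_id
            next_id += 1
            existing.append(g)
            by_mechanism[g.mechanism] = g
    return existing

merged = fold_controls(
    existing=[Control(1, "mTLS between services", {"CO-1"}, {"ASVS-9.2.1"})],
    generated=[Control(0, "mTLS between services", {"CO-7"}, set())],
)
print(merged[0].id, merged[0].co_ids)   # 1 {'CO-1', 'CO-7'}
```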
Existing implementation controls are automatically carried forward to the new version. Because CO IDs are stable, controls' mappings stay valid without remapping heuristics. Controls whose underlying pair was removed become orphaned (not dropped). Jira integration mirrors the lifecycle: orphaned controls are auto-closed in Jira (without removing the ticket), and resurrected controls have their prior ticket reopened — ticket history is preserved across soft-delete / restore cycles.
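Because orphaning is a derived state rather than a stored flag, it can be computed at read time from the tombstone set. A minimal sketch under assumed shapes:

```python
# A control is orphaned when every CO it maps to is tombstoned.
# Nothing is stored; the state is derived at read time.

def orphaned_controls(controls: list, tombstoned_co_ids: set) -> list:
    return [
        c for c in controls
        if c["co_ids"] and set(c["co_ids"]) <= tombstoned_co_ids
    ]

controls = [
    {"id": 1, "co_ids": ["CO-1", "CO-2"]},
    {"id": 2, "co_ids": ["CO-3"]},
]
# Soft-deleting the asset behind CO-3 tombstones it; control 2 becomes orphaned.
print(orphaned_controls(controls, {"CO-3"}))   # [{'id': 2, 'co_ids': ['CO-3']}]
```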
The activity feed distinguishes each action (asset_added, asset_edited, asset_soft_deleted, asset_restored, and attacker equivalents) with the specific entity identifier, so the audit trail names what changed, not just that something changed.
Three ways to modify a model
| Method | How | LLM involved? | Best for |
|---|---|---|---|
| Direct CRUD | UI buttons (+, pencil, trash) | No | Quick edits — names, properties, risk ratings |
| Targeted edit | "Add a DDoS attacker" | Yes (single-entity) | Adding or editing with LLM reasoning |
| Full refinement | "Restructure the model for API security" | Yes (full model) | Broad structural changes |
When you ask to add, remove, or edit a single entity in the conversation, Mipiti automatically routes to a fast targeted operation instead of regenerating the entire model. For broader instructions, it falls back to full refinement.
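The sketch below illustrates that routing decision. Mipiti's actual classifier is LLM-based (it is what the Sessions page records as the intent classification for each turn); the keyword heuristic here is purely illustrative:

```python
# Illustrative routing heuristic: single-entity verbs take the fast
# targeted path, everything else falls back to full refinement.

TARGETED_VERBS = ("add", "remove", "edit", "rename")

def route(instruction: str) -> str:
    first_word = instruction.strip().lower().split()[0]
    mentions_one_entity = " and " not in instruction.lower()
    if first_word in TARGETED_VERBS and mentions_one_entity:
        return "targeted"           # touch one entity, keep the rest of the model
    return "full_refinement"        # broad instruction: regenerate the model

print(route("Add a DDoS attacker"))                     # targeted
print(route("Restructure the model for API security"))  # full_refinement
```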
Managing control groups
When controls are generated, the LLM assigns each control to a mitigation group for each CO it covers. See Control Alternatives and Defense-in-Depth for how groups work.
In the CO drill-down view on the Assurance page, each control shows its group assignment as a badge:
- Group N (indigo badge) — the control belongs to mitigation group N
- Defense-in-depth (gray badge) — the control has no group and doesn't affect mitigation status
The summary for each CO shows which groups are complete and which are incomplete. When multiple groups exist for the same CO, they represent alternative paths — completing any one group mitigates the CO.
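The summary logic reduces to AND within a group, OR across groups. A minimal sketch, assuming a group is complete when all of its controls are implemented and that defense-in-depth controls carry no group:

```python
# Per-CO mitigation summary: a CO is mitigated when any one group is complete.

def co_mitigation_summary(controls: list) -> dict:
    groups: dict = {}
    for c in controls:
        if c["group"] is not None:            # ignore defense-in-depth controls
            groups.setdefault(c["group"], []).append(c["implemented"])
    complete = {g for g, statuses in groups.items() if all(statuses)}
    return {
        "complete_groups": sorted(complete),
        "incomplete_groups": sorted(set(groups) - complete),
        "mitigated": bool(complete),          # any one complete group suffices
    }

print(co_mitigation_summary([
    {"group": 1, "implemented": True},
    {"group": 1, "implemented": True},     # group 1 complete
    {"group": 2, "implemented": False},    # group 2 incomplete
    {"group": None, "implemented": True},  # defense-in-depth, no effect
]))
```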
Editing risk ratings
Each asset has an impact rating and each attacker has a likelihood rating (High / Medium / Low). These ratings are composed deterministically from underlying factor decompositions — seven factors for asset impact, five for attacker likelihood. The LLM judges the factors during generation; you adjust ratings by editing the factors, not the H/M/L directly.
To edit, click any asset or attacker rating badge (or the pencil icon next to the entity) to open the edit modal. The modal shows every factor the LLM judged, the derived rating updates live as you change values, and you supply a change reason documenting why you're overriding the LLM's judgment. The change reason is required and persists to the audit trail (editor, timestamp, factor delta) — there's no path to change a rating without leaving an audit record.
When ratings change, all related control objectives recompute their risk tier immediately. Factor edits don't bump the model's version chain; they're recorded as append-only revisions on a side-table and surfaced via a read-time overlay, so the live view reflects current state while historical version reads (GET /api/models/{id}/versions/{N}) preserve the factors as they were at that version's creation. Each revision row carries an explicit model_version so you can attribute every factor change to the version it was applied against. For per-CO drill-down, GET /api/models/{id}/control-objectives/{co_id}/factor-history returns a chronological per-version timeline with the derived risk_tier at each boundary and the per-revision delta sequence (editor / rationale / change_reason) — useful for walking through how a CO's risk profile evolved across versions.
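A minimal sketch of that read-time overlay. The factor names (blast_radius, data_sensitivity) and the revision shape are hypothetical, not the actual factor list or schema:

```python
# Base factors live on the version row; revisions are append-only on a
# side-table, each stamped with the model_version it was applied against.

def read_factors(base_factors: dict, revisions: list, live: bool = True) -> dict:
    """Live reads overlay revisions in order; historical version reads skip them."""
    if not live:
        return dict(base_factors)        # factors as created at that version
    factors = dict(base_factors)
    for rev in revisions:                # append-only: apply in order
        factors.update(rev["factor_delta"])
    return factors

base = {"blast_radius": "Medium", "data_sensitivity": "High"}
revisions = [{
    "model_version": 4,
    "editor": "alice",
    "change_reason": "Asset now stores only hashed identifiers",
    "factor_delta": {"data_sensitivity": "Medium"},
}]
print(read_factors(base, revisions, live=True))    # overlay applied
print(read_factors(base, revisions, live=False))   # historical read, unchanged
```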
When you add an asset or attacker manually, you supply only identity-bearing fields (name / description / security properties for assets; capability / position / archetype for attackers). The platform then has the LLM reason out the factor decomposition using the same prompt the generation pipeline applies to LLM-introduced entities, so calibration is consistent regardless of who introduced the entity. You can override any factor afterward via the edit flow.
See the Methodology page for the full factor list, the composition rules, and worked examples.
Model names
When you generate a model, Mipiti automatically creates a concise 3-5 word name (e.g., "Payment Gateway API") from your feature description. This name appears on model cards, the assurance dashboard, system compliance pills, and exports.
To rename a model, hover over the name and click the pencil icon. Type the new name and press Enter (or click away) to save. Press Escape to cancel. Renaming is a metadata-only change — it does not create a new version.
Model names must be unique within a workspace (case-insensitive) — renaming to a name already in use returns a conflict error, and generation auto-suffixes when the LLM proposes a name that already exists. Different workspaces can independently use the same name.
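A minimal sketch of both rules. The " (2)" suffix format is an assumption, not necessarily what generation produces:

```python
# Case-insensitive uniqueness within a workspace: manual renames conflict,
# generation-time proposals get auto-suffixed.

def rename(existing: set, new_name: str) -> str:
    if new_name.lower() in {n.lower() for n in existing}:
        raise ValueError("conflict: name already in use in this workspace")
    return new_name

def auto_suffix(existing: set, proposed: str) -> str:
    taken = {n.lower() for n in existing}
    if proposed.lower() not in taken:
        return proposed
    n = 2
    while f"{proposed} ({n})".lower() in taken:
        n += 1
    return f"{proposed} ({n})"

names = {"Payment Gateway API"}
print(auto_suffix(names, "payment gateway api"))   # "payment gateway api (2)"
```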
You can also rename models via the MCP tool rename_threat_model.
Querying a model
You can ask questions about an existing model without changing it:
- "Which assets have the Usage property?"
- "How many control objectives cover the authentication token?"
- "What assumptions did you make about the network?"
Queries use the model as context but do not create new versions.
Selecting a compliance framework
You can link a compliance framework (e.g., OWASP ASVS 5.0) to any model:
- Open the Compliance tab on the model
- Select one or more frameworks
- If controls already exist, click Auto-Map Controls to create mappings
- If controls have not been generated yet, the next generation will include framework requirements automatically
Framework selection affects control generation — the LLM sees the framework's requirements and maps controls to them. See the Compliance page for gap analysis, remediation, and exclusions.