Working with Models

Refining a threat model

After generating a model, you can refine it in the same chat session:

Each refinement creates a new version of the model. The original is preserved.

You can also refine from the Dashboard or Models page by clicking the refine button on any model card.

Versioning and diffs

Every model is versioned automatically. On the Models page you can:

Sessions

The Sessions page shows your chat conversation history. Each session records:

Click any session to replay the full conversation.

Adding, editing, and removing entities

You can directly add, edit, or soft-delete individual assets, attackers, trust boundaries, components, and assumptions without going through chat refinement. In the Assurance tab, expand the "View Model" section:

The same modal flow opens when an operator clicks a one-click resolution button on a coherence-report finding. For example, Scope asset on a co_asset_unbounded finding opens the asset edit modal pre-filled with the component picker, and Ground component on a component_unbound finding opens the component edit modal. The asset modal carries a multi-select component picker. The assumption modal exposes an optional structured-exclusion-predicate section, auto-expanded when launched from a finding, so the operator's prose context carries over without retyping.

Each direct edit creates a new version of the model. Mipiti never hard-removes assets or attackers, and never renumbers control-objective IDs. The lifecycle works like this:

Existing implementation controls are automatically carried forward to the new version. Because CO IDs are stable, controls' mappings stay valid without remapping heuristics. Controls whose underlying pair was removed become orphaned (not dropped). Jira integration mirrors the lifecycle: orphaned controls are auto-closed in Jira (without removing the ticket), and resurrected controls have their prior ticket reopened — ticket history is preserved across soft-delete / restore cycles.
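The lifecycle above can be sketched in a few lines. This is an illustrative model only — the class, field, and status names are assumptions for the sketch, not Mipiti's actual schema or Jira integration code:

```python
# Illustrative sketch of the soft-delete lifecycle described above.
# Names and fields are assumptions, not Mipiti's real data model.

class Control:
    def __init__(self, co_id: str):
        self.co_id = co_id          # stable control-objective ID, never renumbered
        self.orphaned = False
        self.jira_status = "open"

def soft_delete_pair(controls: list[Control], co_id: str) -> None:
    """Removing the underlying pair orphans its controls (kept, not
    dropped) and auto-closes their Jira tickets without deleting them."""
    for c in controls:
        if c.co_id == co_id:
            c.orphaned = True
            c.jira_status = "closed"

def restore_pair(controls: list[Control], co_id: str) -> None:
    """Restoring the pair resurrects its controls and reopens the prior
    ticket, preserving history across the soft-delete/restore cycle."""
    for c in controls:
        if c.co_id == co_id:
            c.orphaned = False
            c.jira_status = "reopened"

controls = [Control("CO-3"), Control("CO-7")]
soft_delete_pair(controls, "CO-3")   # CO-3 controls orphaned, tickets closed
restore_pair(controls, "CO-3")       # same controls back, tickets reopened
```

Because the CO ID is the stable join key, no remapping step appears anywhere in the cycle.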

The activity feed distinguishes each action (asset_added, asset_edited, asset_soft_deleted, asset_restored, and attacker equivalents) with the specific entity identifier, so the audit trail names what changed, not just that something changed.

Three ways to modify a model

| Method | How | LLM involved? | Best for |
| --- | --- | --- | --- |
| Direct CRUD | UI buttons (+, pencil, trash) | No | Quick edits — names, properties, risk ratings |
| Targeted edit | "Add a DDoS attacker" | Yes (single-entity) | Adding or editing with LLM reasoning |
| Full refinement | "Restructure the model for API security" | Yes (full model) | Broad structural changes |

When you ask to add, remove, or edit a single entity in the conversation, Mipiti automatically routes to a fast targeted operation instead of regenerating the entire model. For broader instructions, it falls back to full refinement.
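The routing decision can be pictured as a simple classifier. Mipiti's actual router is LLM-driven; the keyword heuristic below is a toy stand-in that only illustrates the two paths:

```python
# Toy sketch of the targeted-vs-full routing described above.
# The real router is LLM-based; verbs and thresholds here are assumptions.

SINGLE_ENTITY_VERBS = ("add", "remove", "edit", "rename")

def route(instruction: str) -> str:
    words = instruction.lower().split()
    # A short instruction starting with a single-entity verb routes to the
    # fast targeted operation; anything broader falls back to refinement.
    if words and words[0] in SINGLE_ENTITY_VERBS and len(words) <= 8:
        return "targeted"
    return "full_refinement"

print(route("Add a DDoS attacker"))                     # targeted
print(route("Restructure the model for API security"))  # full_refinement
```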

Managing control groups

When controls are generated, the LLM assigns each control to a mitigation group for each CO it covers. See Control Alternatives and Defense-in-Depth for how groups work.

In the CO drill-down view on the Assurance page, each control shows its group assignment as a badge:

The summary for each CO shows which groups are complete and which are incomplete. When multiple groups exist for the same CO, they represent alternative paths — completing any one group mitigates the CO.
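The any-one-group rule can be expressed as a small predicate. This sketch assumes a group is "complete" when every control in it is implemented; the exact completion semantics are defined by Mipiti, not this code:

```python
# Illustrative sketch (not Mipiti's data model): a CO is mitigated when
# at least one of its mitigation groups is complete.

def group_complete(group: list[dict]) -> bool:
    """A group is complete when all of its controls are implemented."""
    return all(control["implemented"] for control in group)

def co_mitigated(groups: list[list[dict]]) -> bool:
    """Alternative paths: completing any single group mitigates the CO."""
    return any(group_complete(g) for g in groups)

groups = [
    [{"id": "C1", "implemented": True}, {"id": "C2", "implemented": False}],
    [{"id": "C3", "implemented": True}],
]
print(co_mitigated(groups))  # True: the second group is fully implemented
```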

Editing risk ratings

Each asset has an impact rating and each attacker has a likelihood rating (High / Medium / Low). These ratings are composed deterministically from underlying factor decompositions — seven factors for asset impact, five for attacker likelihood. The LLM judges the factors during generation; you adjust ratings by editing the factors, not the H/M/L directly.
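Deterministic composition means the H/M/L rating is a pure function of the factor scores. The Methodology page defines the real composition rules; the mean-of-factors rule and factor names below are placeholders that only illustrate the shape of the computation:

```python
# Illustrative composition sketch. Mipiti's actual rules are on the
# Methodology page; a simple mean-of-factors rule stands in for them.

RATINGS = {1: "Low", 2: "Medium", 3: "High"}

def compose(factors: dict[str, int]) -> str:
    """Derive an H/M/L rating deterministically from factor scores (1-3).
    The rating changes only when the underlying factors change."""
    mean = sum(factors.values()) / len(factors)
    return RATINGS[round(mean)]

# Seven illustrative asset-impact factors (names are placeholders):
asset_factors = {"f1": 3, "f2": 3, "f3": 2, "f4": 3, "f5": 2, "f6": 3, "f7": 3}
print(compose(asset_factors))  # High
```

Editing the H/M/L directly would break this determinism, which is why the UI only exposes the factors.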

To edit, click any asset or attacker rating badge (or the pencil icon next to the entity) to open the edit modal. The modal shows every factor the LLM judged, the derived rating updates live as you change values, and you supply a change reason documenting why you're overriding the LLM's judgment. The change reason is required and persists to the audit trail (editor, timestamp, factor delta) — there's no path to change a rating without leaving an audit record.

When ratings change, all related control objectives recompute their risk tier immediately. Factor edits don't bump the model's version chain; they're recorded as append-only revisions on a side-table and surfaced via a read-time overlay, so the live view reflects current state while historical version reads (GET /api/models/{id}/versions/{N}) preserve the factors as they were at that version's creation. Each revision row carries an explicit model_version so you can attribute every factor change to the version it was applied against. For per-CO drill-down, GET /api/models/{id}/control-objectives/{co_id}/factor-history returns a chronological per-version timeline with the derived risk_tier at each boundary and the per-revision delta sequence (editor / rationale / change_reason) — useful for walking how a CO's risk profile evolved across versions.
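The read-time overlay described above can be sketched as a fold of revisions over the version's base factors. Field names (`delta`, `editor`, `change_reason`) are assumptions for illustration, not Mipiti's actual side-table schema:

```python
# Sketch of the read-time overlay: historical version reads return the base
# factors as created; the live view applies append-only revisions in order.

def live_factors(base: dict[str, int], revisions: list[dict]) -> dict[str, int]:
    """Apply each revision's factor delta on top of the version's base
    factors. The base is never mutated, so version reads stay stable."""
    current = dict(base)
    for rev in revisions:              # chronological order
        current.update(rev["delta"])   # e.g. {"f3": 2}
    return current

base = {"f1": 3, "f2": 1, "f3": 1}     # as stored at version creation
revisions = [
    {"model_version": 4, "editor": "alice",
     "change_reason": "re-scoped asset", "delta": {"f3": 2}},
]
print(live_factors(base, revisions))   # {'f1': 3, 'f2': 1, 'f3': 2}
print(base)                            # unchanged: {'f1': 3, 'f2': 1, 'f3': 1}
```

Because revisions are append-only and carry their model_version, the factor-history endpoint can replay this fold per version to produce its timeline.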

When you add an asset or attacker manually, you supply only identity-bearing fields (name / description / security properties for assets; capability / position / archetype for attackers). The platform LLM-reasons the factor decomposition using the same prompt the generation pipeline applies to LLM-introduced entities — calibration is consistent regardless of who introduced the entity. You can override any factor afterward via the edit flow.

See the Methodology page for the full factor list, the composition rules, and worked examples.

Model names

When you generate a model, Mipiti automatically creates a concise 3-5 word name (e.g., "Payment Gateway API") from your feature description. This name appears on model cards, the assurance dashboard, system compliance pills, and exports.

To rename a model, hover over the name and click the pencil icon. Type the new name and press Enter (or click away) to save. Press Escape to cancel. Renaming is a metadata-only change — it does not create a new version.

Model names must be unique within a workspace (case-insensitive) — renaming to a name already in use returns a conflict error, and generation auto-suffixes when the LLM proposes a name that already exists. Different workspaces can independently use the same name.
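The uniqueness rules can be sketched as two small functions: a rename check that rejects case-insensitive duplicates, and the generation-side auto-suffix. The numeric-suffix format is an assumption; the source only states that a suffix is appended:

```python
# Sketch of per-workspace name uniqueness. The " 2", " 3", ... suffix
# format is an assumption for illustration.

def rename(existing: set[str], new_name: str) -> str:
    """Renaming to a name already in use (case-insensitive) conflicts."""
    if new_name.lower() in {n.lower() for n in existing}:
        raise ValueError("conflict: name already in use in this workspace")
    return new_name

def auto_suffix(existing: set[str], proposed: str) -> str:
    """Generation path: append a numeric suffix until the name is free."""
    taken = {n.lower() for n in existing}
    if proposed.lower() not in taken:
        return proposed
    i = 2
    while f"{proposed} {i}".lower() in taken:
        i += 1
    return f"{proposed} {i}"

names = {"Payment Gateway API"}
print(auto_suffix(names, "payment gateway api"))  # payment gateway api 2
```

Note that the comparison set is scoped to one workspace, which is why different workspaces can hold the same name independently.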

You can also rename models via the MCP tool rename_threat_model.

Querying a model

You can ask questions about an existing model without changing it:

Queries use the model as context but do not create new versions.

Selecting a compliance framework

You can link a compliance framework (e.g., OWASP ASVS 5.0) to any model:

  1. Open the Compliance tab on the model
  2. Select one or more frameworks
  3. If controls already exist, click Auto-Map Controls to create mappings
  4. If controls have not been generated yet, the next generation will include framework requirements automatically

Framework selection affects control generation — the LLM sees the framework's requirements and maps controls to them. See the Compliance page for gap analysis, remediation, and exclusions.