Discovery and Assurance

Mipiti separates AI-powered generation from deterministic evaluation — two architectural layers over the same data.

Generation

Generation is the AI-exploratory layer. When you generate or refine a threat model, you are using the generation layer.

Generation gives you a comprehensive starting point in minutes instead of days.

Note: Direct entity edits (add/edit/remove via UI buttons) are not generation operations — they are deterministic, create new versions, and preserve existing control mappings. See Working with Models for details.

Assurance

Assurance is the evidence-bound, deterministic layer. It appears on the Assurance page.

Orphaned controls

A control becomes orphaned when every control objective it was mapped to has been tombstoned — that is, its (asset, attacker) pair was removed in a later version of the model. The control still exists (its evidence, assertions, Jira ticket, and version history are preserved), but it no longer defends any live pair in the current cross-product, so it contributes nothing to current posture: not coverage, not the stakeholder report's "N of M implemented" counts, not CI verification work allocation, not cross-model attestation satisfaction, not Jira's active-work queue.

When orphaned controls exist on a model, Assurance shows an amber Orphaned controls panel at the top of the controls view. Each row lists the control, its status, its description, and which COs were tombstoned. You have two decisions per control:

The orphaned state is derived, not stored: every read recomputes it from the current version's live-CO set. That means remapping or restoring the underlying asset automatically un-orphans the control — no separate rehabilitation step. Jira integration mirrors the lifecycle: the control's prior ticket was auto-transitioned to Done when it became orphaned, and gets reopened when the control becomes live again (preserving ticket history instead of spawning a new key).
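
Because the orphaned state is derived on every read rather than stored, it can be sketched as a pure function over the current version's live-CO set. A minimal illustration; the dict shapes are assumptions, not Mipiti's real schema:

```python
def orphaned_controls(controls, live_co_ids):
    """A control is orphaned when none of its mapped COs are live.
    Derived on every read, never stored. Illustrative shapes:
    controls maps control_id -> set of mapped CO ids."""
    return {
        cid for cid, co_ids in controls.items()
        if not (co_ids & live_co_ids)
    }

controls = {"CTRL-01": {"CO-1", "CO-2"}, "CTRL-02": {"CO-3"}}
orphaned_controls(controls, {"CO-1", "CO-2"})          # CTRL-02 is orphaned
orphaned_controls(controls, {"CO-1", "CO-2", "CO-3"})  # CO-3 restored: none
```

Restoring the underlying asset simply puts its COs back into the live set, so the next read derives an empty orphan set with no rehabilitation step.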

Control Alternatives and Defense-in-Depth

Controls generated for a control objective are organized into mitigation groups — alternative paths to satisfy the same CO.

How mitigation groups work

For example, a CO protecting data confidentiality might have:

| Group | Controls | Meaning |
|---|---|---|
| 1 | CTRL-01, CTRL-02 | Encrypt at rest + key rotation (both required) |
| 2 | CTRL-03 | Use hardware security module (alternative path) |
| (none) | CTRL-04 | Regular penetration testing (defense-in-depth) |

Implementing either Group 1 (CTRL-01 AND CTRL-02) or Group 2 (CTRL-03) mitigates the CO. CTRL-04 is always defense-in-depth.

Defense-in-depth is computed, not stored

When a CO is mitigated via one group, controls in other incomplete groups automatically become defense-in-depth. This status is dynamically calculated based on current implementation state — it is not a fixed label.

Legacy behavior

Models generated before mitigation groups were introduced have no group assignments on their controls. In this case the system falls back to legacy behavior: every control must be implemented for the CO to be mitigated.
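
The semantics described above (OR across groups, AND within a group, legacy all-required, and derived defense-in-depth) can be sketched as pure functions. The data shapes are illustrative:

```python
def co_mitigated(control_ids, groups, active):
    """OR across groups, AND within a group. Legacy models (no groups)
    require every control. 'active' means implemented controls plus
    controls made active by a complete assumption group."""
    if not groups:                                    # legacy behavior
        return set(control_ids) <= active
    return any(set(members) <= active for members in groups.values())

def defense_in_depth(groups, active):
    """Derived, not stored: once the CO is mitigated via a complete
    group, controls in the remaining incomplete groups become DiD."""
    complete = {gid for gid, members in groups.items()
                if set(members) <= active}
    if not complete:
        return set()
    return {c for gid, members in groups.items()
            if gid not in complete for c in members}

groups = {1: ["CTRL-01", "CTRL-02"], 2: ["CTRL-03"]}
co_mitigated(["CTRL-01", "CTRL-02", "CTRL-03"], groups, {"CTRL-03"})  # True
defense_in_depth(groups, {"CTRL-03"})        # {'CTRL-01', 'CTRL-02'}
co_mitigated(["CTRL-01", "CTRL-02"], {}, {"CTRL-01"})  # legacy: False
```

Note that the defense-in-depth set changes as implementation state changes, which is why it is recomputed rather than stored as a label.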

Managing groups

The LLM assigns mitigation groups during control generation. You can manually adjust group assignments in the Assurance page using the group override API.

Control Refinement

AI-generated controls are intentionally prescriptive — they specify exact mechanisms (e.g., "use Argon2id with 64 MiB memory cost"). Sometimes the prescribed mechanism doesn't match the implementation (e.g., you use bcrypt, or SameSite=Lax instead of Strict because OAuth requires it).

Control refinement lets you propose a new description with a justification. The platform evaluates whether the mitigation group still collectively satisfies all mapped control objectives after the change:

  1. Propose a new description and justification via refine_control (MCP) or the REST API
  2. The AI evaluator checks each mapped CO — does the group of controls still provide coverage?
  3. If accepted, the control is updated. The original description, justification, and AI assessment are preserved as an audit trail
  4. If rejected, the response includes per-CO reasoning explaining which objectives would lose coverage — use this to adjust and retry
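
A sketch of the accept path and its audit trail. The field names are illustrative assumptions; the real refine_control payload and control schema may differ:

```python
def apply_refinement(control, new_description, justification, ai_assessment):
    """Accepted refinement: update the description while preserving the
    original text, justification, and AI assessment as an audit trail.
    Field names are illustrative, not Mipiti's real schema."""
    entry = {
        "previous_description": control["description"],
        "justification": justification,
        "ai_assessment": ai_assessment,
    }
    return {
        **control,
        "description": new_description,
        "refinement_history": control.get("refinement_history", []) + [entry],
    }

ctrl = {"id": "CTRL-07",
        "description": "Hash passwords with Argon2id (64 MiB memory cost)"}
refined = apply_refinement(
    ctrl,
    "Hash passwords with bcrypt (cost factor 12)",
    "Runtime has no Argon2id support; bcrypt is the codebase standard",
    "Mitigation group still covers the credential-storage CO",
)
```

The returned control keeps its identity; only the description moves forward, and every prior description remains recoverable from the history.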

Key properties

Why two modes?

Many tools use AI for everything, including risk evaluation, which means their risk assessments can hallucinate.

Mipiti uses AI where it adds value (creative threat identification) and determinism where correctness matters (coverage evaluation). Auditors can trust the assurance posture because it is not an LLM opinion — it is a mathematical cross-product.

Trust Boundaries and Assumptions

Trust Boundaries

A trust boundary is an architectural boundary where trust transitions occur — for example, "Public internet to API gateway" or "Application server to third-party payment processor." Trust boundaries define which security properties are your responsibility to implement vs. which you must assume about the environment.

Trust boundaries are first-class structural entities. You can add, edit, or remove them via the Overview page or MCP tools (add_trust_boundary, edit_trust_boundary, remove_trust_boundary). Each change creates a new model version with full carry-forward.

What trust boundaries do:

Computed Reachability

Whether each CO is reachable — whether the attacker can actually reach the asset for the property under consideration — is computed deterministically by the reachability composer: a pure function over the model's structural primitives (Asset.component_ids, Component.trust_boundary_ids, TrustBoundary.passes, Attacker.trust_boundary_ids + Attacker.attack_vector, Assumption.exclusion). Same inputs always produce the same verdict — no LLM judgment in the predicate. Two narrowing rules:

  1. Active exclusion-predicate match. An assumption with a structured exclusion predicate that's active and matches the CO (by attacker_id / attacker_vector / asset_id / asset_component_id / property_match, or via an explicit co_ids list) makes the CO unreachable.
  2. Boundary-vector path. When every shared boundary between the attacker's positioned boundaries and the asset's component-derived boundaries blocks the attacker's attack_vector, the CO is unreachable.

These are the only narrowing rules. The composer never narrows reach via inference over loose vocabularies — false-unreachable verdicts produce silent under-scoping and false security, which is much worse than false-reachable verdicts (those only cost extra control review).
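
The two narrowing rules reduce to a small pure function. This is a sketch under assumed data shapes, not the composer's real signature (real exclusion predicates also match on attacker_id, asset_id, property, and so on, not only an explicit co_ids list):

```python
def co_reachable(co, assumptions, attacker_boundaries, asset_boundaries,
                 blocks_vector):
    """Deterministic reachability sketch: only the two narrowing rules
    can return False; everything else, including indeterminate
    structural gaps, defaults to reachable."""
    # Rule 1: an active exclusion predicate matching the CO
    for a in assumptions:
        if a["active"] and co["id"] in a["exclusion_co_ids"]:
            return False
    # Rule 2: every shared boundary blocks the attacker's attack_vector
    shared = attacker_boundaries & asset_boundaries
    if shared and all(blocks_vector(b, co["attack_vector"]) for b in shared):
        return False
    return True  # never narrowed by inference over loose vocabularies

co = {"id": "CO-9", "attack_vector": "network"}
blocks = lambda boundary, vector: boundary == "tb-airgap"
co_reachable(co, [], {"tb-airgap"}, {"tb-airgap"}, blocks)  # False (rule 2)
co_reachable(co, [], set(), {"tb-1"}, blocks)  # True: no shared boundary
```

Because both rules are total functions over structural inputs, the same model version always yields the same verdict, which is what makes independent re-computation possible.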

The composer's verdict is the only reachability surface — exposed via GET /api/models/{id}/reachability, the get_reachability_verdicts MCP tool, and the per-CO chips on the Models / Assurance pages. The verdict can be recomputed independently; no trust in the Mipiti runtime is required. The deterministic-computation provenance class is what makes the verdict auditor-verifiable: the function and its structural inputs are the proof. Operator non-applicability claims are authored as Assumption records with structured exclusion predicates, which feed back into the composer.

When the composer can't decide structurally, indeterminate-cause findings (co_attacker_unpositioned, co_asset_unbounded, co_no_shared_boundary, co_missing_entity) name the structural-completeness gap with one-click resolution buttons that route the operator to the right edit (add components, position attackers, author exclusion assumptions).

Indeterminate verdicts default to reachable for CO-coverage — the CO stays in scope until the operator addresses the gap. The platform never auto-decides what the model is structurally missing; modeling gaps stay visible.

Finding kinds

Findings come in two provenance classes: operator-authored (manual gap discovery, AI-coding-agent submissions via MCP) and platform-emitted (structural-completeness checks that run deterministically on writes). Each finding carries a kind discriminator so the UI and the remediation API can dispatch on it.

| Kind | Provenance | Meaning |
|---|---|---|
| manual | Operator / agent | Default for user-submitted gap findings; see Negative Findings. |
| co_reach_* (co_attacker_unpositioned, co_asset_unbounded, co_no_shared_boundary, co_missing_entity) | Platform (reachability composer) | Indeterminate-verdict causes: names the structural gap with a one-click resolution button. |
| framework_binding_asymmetry | Platform (post-write on set_mitigation_groups) | A CO has multiple OR-alternative mitigation groups whose framework_refs disagree: the groups claim equivalent mitigation but satisfy different framework requirements, so a real coverage gap exists that the LLM sufficiency check cannot see. Anchored on the first control in the first missing group. |
| structural_duplicate_controls | Platform (post-write on every control mutation) | The control set has a normalized-description collision: the same prescribed mechanism appears under two or more control IDs. Anchored on the recommended survivor (lowest control ID), with the colliding IDs and the unioned co_ids / framework_refs carried in the finding's details payload. |
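
As an illustration, the structural_duplicate_controls check might reduce to a normalized-description grouping like the following. The exact normalization Mipiti applies is not documented here, so the lowercase/whitespace collapse is an assumption:

```python
import re

def duplicate_control_findings(controls):
    """Sketch of the normalized-description collision check.
    Survivor = lowest control ID; colliding IDs and unioned co_ids
    travel in the finding's details payload. Shapes illustrative."""
    by_norm = {}
    for cid in sorted(controls):  # sorted so ids[0] is the lowest ID
        norm = re.sub(r"\s+", " ", controls[cid]["description"].strip().lower())
        by_norm.setdefault(norm, []).append(cid)
    findings = []
    for ids in by_norm.values():
        if len(ids) > 1:
            co_ids = sorted(set().union(*(controls[c]["co_ids"] for c in ids)))
            findings.append({
                "kind": "structural_duplicate_controls",
                "anchor": ids[0],               # recommended survivor
                "details": {"colliding_ids": ids, "co_ids": co_ids},
            })
    return findings
```

Running this after every control mutation keeps the finding derived from current state, matching the post-write trigger described in the table.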

Platform-emitted structural findings expose a generic preview-then-apply remediation flow:

The same primitives will handle every future structural finding kind without expanding the API surface.

Assumptions

An assumption documents a security property that is tracked rather than directly implemented. Each assumption can optionally carry a structured exclusion predicate — a closed-vocabulary match on (attacker, asset, property, or explicit co_ids list) that the reachability composer uses to derive unreachable verdicts deterministically. The predicate is the structural cause; the assumption's attestation is the operator-signoff backing. Resolution buttons on indeterminate-cause findings open the assumption modal pre-filled with the originating CO's structural context (asset_id, attacker_id, co_ids).

There are two types:

| Type | Created by | Verification | Example |
|---|---|---|---|
| Non-applicability | Auto-created during generation when an asset or attacker is flagged as not applicable | CI verification required (submit assertions, run mipiti-verify); manual attestation rejected | "Asset A3 (payment DB) is not applicable: feature does not process payments" |
| External obligation | Manually created by agent or user (default type) | Manual attestation allowed for responsibilities handled by a third party that cannot be CI-verified against the codebase (e.g., vendor SLAs, infrastructure isolation, customer CI hardening) | "Payment processor maintains PCI DSS certification" |

Non-applicability verification: For greenfield projects (no codebase), assertions can target the feature description itself using target: "feature_description" instead of a file path. The platform injects the description content into the assertion for CI verification. When the feature description changes during refinement, all target: "feature_description" assertions are automatically superseded — new assertions must be submitted against the updated description.
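
The superseding rule can be sketched as follows. Only target: "feature_description" comes from the docs above; the other assertion fields are illustrative assumptions:

```python
def supersede_on_description_change(assertions):
    """When the feature description changes during refinement, every
    assertion targeting it is superseded; new assertions must be
    submitted against the updated description. Assertions against
    file paths are untouched."""
    return [
        {**a, "status": "superseded"}
        if a["target"] == "feature_description" else a
        for a in assertions
    ]

assertions = [
    {"target": "feature_description", "claim": "No payment processing",
     "status": "verified"},
    {"target": "src/auth.py", "claim": "Argon2id in use",
     "status": "verified"},
]
```

Only the description-targeted assertion flips to superseded; code-targeted assertions keep their verification state.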

Auto-attestation: When all assertions for an assumption pass both CI verification tiers (Tier 1 mechanical + Tier 2 semantic), the system automatically creates an attestation (system:ci-verification, 365-day expiry). If new assertions are added after auto-attestation, the attestation becomes stale until CI re-verifies. If all backing assertions are deleted, the CI attestation is invalidated.
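
A minimal sketch of the auto-attestation rule, assuming illustrative assertion fields (tier pass flags as booleans):

```python
from datetime import date, timedelta

def ci_attestation(assertions, today):
    """All assertions pass both CI tiers -> system attestation with a
    365-day expiry. Deleting every backing assertion invalidates the
    attestation; anything short of all-pass yields no attestation."""
    if not assertions:
        return None                                   # invalidated
    if all(a["tier1_pass"] and a["tier2_pass"] for a in assertions):
        return {"attested_by": "system:ci-verification",
                "expires_at": today + timedelta(days=365)}
    return None                                       # stale / pending
```

In the real platform a previously issued attestation goes stale when new assertions land after it; this sketch only models the issue/invalidate decision at a single point in time.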

Assumptions are managed on the Assumptions page (add, edit, soft-delete, restore).

CO-Level Mitigation

An assumption can be linked to one or more COs it covers. When linked and attested with a current (non-expired) attestation, the assumption mitigates those COs in the assessment — they show as mitigated_by: assumption rather than at_risk.

This is the correct model for COs where no positioned attacker can reach the asset across any trust boundary: instead of leaving them permanently at_risk (misleading) or fabricating unimplementable controls (wrong), you record the responsible party's obligation and track their attestation.

Control-Level Assumption Groups

Some COs span a trust boundary — part is your responsibility, part is external. For these, individual controls within the CO can be marked as externally handled by one or more assumptions. Mipiti uses the same group concept here as for mitigation groups on COs:

A control whose assumption_groups contains at least one complete group counts as active for mitigation group completeness — without being marked as implemented by you.

Compound case: a single control may need multiple simultaneous external claims to be satisfied. For example, "User data protected at rest in cloud KMS" might require both "AWS KMS is configured" and "KMS access policy reviewed quarterly" — neither alone is sufficient. Express this as one group with both assumption IDs: {1: ["AS-kms", "AS-review"]}.

Multi-path case: a control may have several alternative external-handling paths. For example, "User data protected at rest" could be satisfied either by "AWS KMS + quarterly review" or by "On-prem HSM with FIPS 140-2 certification". Express each path as its own group: {1: ["AS-kms", "AS-review"], 2: ["AS-hsm-fips"]}. Either complete group satisfies the control.

Shorthand: for the common single-assumption case, the assume_control MCP tool and the Assurance-page button both write to group 1 as the sole member. Use the set_control_assumption_groups tool (or the "Manage groups" modal in the UI) for compound or multi-path cases.
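
The completeness rule is the same AND-within, OR-across evaluation used for mitigation groups on COs. A sketch over the example groups above:

```python
def control_externally_active(assumption_groups, attested):
    """A control counts as active when at least one assumption group is
    complete: every assumption in that group is currently attested
    (AND within a group, OR across groups)."""
    return any(set(members) <= attested
               for members in assumption_groups.values())

groups = {1: ["AS-kms", "AS-review"], 2: ["AS-hsm-fips"]}
control_externally_active(groups, {"AS-kms"})               # False
control_externally_active(groups, {"AS-kms", "AS-review"})  # True (compound)
control_externally_active(groups, {"AS-hsm-fips"})          # True (alt path)
```

The single-assumption shorthand is just the degenerate case {1: ["AS-x"]} under the same rule.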

This correctly handles the case where a CO like "Protect session tokens from MITM" requires both TLS enforcement (yours — a control) and TLS termination configuration (vendor's — assumed).

AI relevance gate. Every assumption-to-control linkage is evaluated by the platform before being saved. The evaluator judges whether the proposed assumptions, treated conjunctively (every assumption within a group must hold), plausibly cover the control's responsibility. There is no override: if the evaluator rejects a group, the caller must either add an assumption that covers the control or sharpen an existing assumption's description so that coverage is explicit — clicking past the gate is not an option.

For multi-group submissions (compound or multi-path), each group is evaluated independently:

Attestation

An attestation is a timestamped record that the responsible party affirmed an assumption holds:

| Field | Description |
|---|---|
| Attested by | Identity of the attesting party (name, role, organization) |
| Statement | What was affirmed |
| Expires at | When the attestation must be renewed |
| Evidence URL | Optional link to supporting documentation (SOC 2 report, SLA, deployment config) |

Manual attestation is only accepted for external obligation assumptions. Non-applicability assumptions require CI verification — submit assertions and run mipiti-verify. When all assertions pass both tiers, the system auto-attests.

For external obligations, Mipiti records attestations, tracks expiry, and flags gaps. The auditor verifies, same as SOC 2 CUECs (Complementary User Entity Controls).

When an attestation expires, affected COs revert to at_risk until re-attested or covered by controls.

Assumption Lifecycle

| State | Meaning | Assessment impact |
|---|---|---|
| Active + attested (current) | Responsible party confirmed it holds | COs mitigated by assumption |
| Active + unattested | No attestation on record | COs at risk |
| Active + attestation expired | Attestation lapsed | COs at risk; action required |
| Soft-deleted | Removed but preserved for audit | Assumed controls become inert; COs at risk |
| Restored | Reinstated from soft-delete | Controls reconnect automatically; re-attestation required |
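
The lifecycle states above reduce to a small derivation from assumption state to assessment impact; the field names here are assumptions:

```python
from datetime import date

def co_impact(assumption, today):
    """Map assumption lifecycle state to the impact on its linked COs.
    A restored assumption is simply active + unattested until
    re-attested, so it needs no separate branch."""
    if assumption["soft_deleted"]:
        return "at_risk"                      # assumed controls inert
    att = assumption.get("attestation")
    if att is None or att["expires_at"] <= today:
        return "at_risk"                      # unattested or lapsed
    return "mitigated_by_assumption"
```

Because the impact is derived from current state, expiry flips the linked COs back to at_risk on the next assessment without any explicit transition.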

Violation Workflow

When an assumption is violated or its attestation expires, affected COs become at_risk. Four remediation paths:

  1. Re-attest — submit a new attestation with updated expiry (submit_attestation MCP tool or the Assumptions tab on the model detail page)
  2. Restore — if the assumption was soft-deleted and is still valid, restore it and re-attest
  3. Convert to controls — generate controls for the affected COs and retire the assumption linkage (convert_assumption_to_controls MCP tool). For control-level assumptions, the assumption is removed from every group on every control that referenced it (and any group left empty is dropped); affected controls revert to not_implemented if no other complete group covers them. For CO-level assumptions, the LLM generates replacement controls.
  4. Accept risk — formally document that the risk is accepted (see Risk Acceptance)

Risk Acceptance

Some control objectives may remain at risk by deliberate choice — the cost of mitigation outweighs the risk, or the risk is accepted temporarily while controls are being implemented.

Risk acceptance is distinct from assumption-based mitigation:

Risk acceptance is a formal, human-owned decision made through the Assurance page:

  1. Navigate to the Assurance page and run an assessment
  2. In the assessment report, find an at-risk CO and click Accept Risk
  3. Fill in:
    • Owner — the person or team accountable for this risk going forward
    • Justification — why the risk is being accepted (required)
    • Review by — a future date when the acceptance must be re-evaluated (required, must be in the future)
  4. The CO remains at_risk in the assessment but is annotated with the acceptance record
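
The required fields and the future-date constraint can be sketched as a validation function. This is illustrative, not Mipiti's actual API:

```python
from datetime import date

def validate_acceptance(owner, justification, review_by, today):
    """Risk-acceptance record validation: owner and justification are
    required, and the review date must lie in the future. The CO stays
    at_risk; the acceptance is an annotation, not a mitigation."""
    errors = []
    if not owner:
        errors.append("owner is required")
    if not justification:
        errors.append("justification is required")
    if review_by <= today:
        errors.append("review_by must be in the future")
    return errors
```

An empty error list means the acceptance record can be attached to the CO.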

Key properties