What is Mipiti?

Mipiti is a security posture platform that turns natural-language feature descriptions into structured security models — controls, compliance mapping, and verifiable evidence.

Its key differentiator is the assurance pipeline: claims that a control is implemented are verified in CI against the real codebase, and final posture is determined deterministically from verified evidence, assumptions, and objective logic.

The assurance pipeline

This is the assurance chain that runs continuously in CI:

  1. The coding agent submits assertions — structured evidence claims about code properties (e.g. function_exists, test_passes, config_value_matches).
  2. CI mechanically verifies each assertion against the codebase (Tier 1 — deterministic structural checks such as file inspection, pattern matching, configuration checks, and other machine-verifiable validations).
  3. CI semantically evaluates relevant verified evidence with an LLM (Tier 2 — does the code actually implement the control's intent, or is it a stub?). Tier 2 runs in your CI; code context goes directly to the LLM provider you configure, not through Mipiti.
  4. The platform then deterministically evaluates whether validated controls still satisfy each control objective — no LLM in the risk determination. The resulting per-objective status is Mitigated, At Risk, or Unassessed, computed purely from verified evidence, assumptions, and objective logic.
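The deterministic determination in step 4 can be pictured as a pure function over verification results. The rule below is an illustrative assumption, not Mipiti's actual logic: any failed control puts the objective At Risk, full verification makes it Mitigated, and missing evidence leaves it Unassessed.

```python
def objective_status(results: "dict[str, bool | None]") -> str:
    """Hypothetical posture rule: `results` maps each control required by an
    objective to True (verified), False (verification failed), or None
    (no evidence submitted yet). No LLM involved — pure, repeatable logic."""
    if any(v is False for v in results.values()):
        return "At Risk"      # any failed control breaks the objective
    if results and all(v is True for v in results.values()):
        return "Mitigated"    # every required control is verified
    return "Unassessed"       # some evidence is still missing

print(objective_status({"virus-scan": True, "size-limit": True}))   # Mitigated
print(objective_status({"virus-scan": False, "size-limit": True}))  # At Risk
print(objective_status({"virus-scan": None}))                       # Unassessed
```

Because the function is deterministic, the same evidence always yields the same posture — which is what makes the result auditable.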

AI helps generate the security framework and evaluate semantic evidence. Risk determination is purely deterministic.

Trust boundary

The inspectable, code-touching components run in your environment — your AI coding agent and your CI pipeline both have direct source-code access. The hosted platform coordinates threat models, controls, evidence metadata, and the deterministic assurance computation. Mipiti itself never requires source-code access.

End-to-end in one workflow

  1. Describe your feature — in the chat, from a Jira issue, or through your AI coding agent via MCP
  2. Generate — Mipiti's agentic pipeline produces a complete threat model: trust boundaries, assets, capability-defined attackers, a cross-product of control objectives, and concrete controls
  3. Refine — ask follow-up questions or request changes conversationally
  4. Implement — your AI coding agent implements the controls as part of writing code, and submits typed assertions as evidence
  5. Verify — your CI pipeline runs mipiti-verify to check each assertion against the actual codebase (Tier 1 mechanical + Tier 2 semantic) and submits signed results
  6. Assess — the platform deterministically evaluates whether verified controls satisfy each objective, producing a Mitigated / At Risk / Unassessed posture per objective
  7. Discover gaps — AI coding agents report missing implementations as negative findings, closing the loop between what the model requires and what the code actually does
  8. Comply and track — select a compliance framework, run gap analysis, remediate, export signed reports
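The typed assertions the coding agent submits in step 4 might be serialized roughly as follows. The field names and shape here are assumptions for illustration, not Mipiti's actual wire format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Assertion:
    """Hypothetical structured evidence claim submitted by a coding agent."""
    kind: str     # e.g. "function_exists", "function_calls"
    params: dict  # claim-specific parameters

batch = [
    Assertion("function_exists", {"name": "scan_upload", "file": "upload.py"}),
    Assertion("function_calls", {"caller": "handle_upload", "callee": "scan_upload"}),
]
payload = json.dumps([asdict(a) for a in batch], indent=2)
print(payload)
```

The point of typing the claims is that each kind maps to a mechanical Tier 1 check CI can run without interpretation.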

Every step works through the web UI, the REST API, or the MCP server (97 tools) — so AI coding agents like Claude Code or Cursor can drive the entire workflow from the developer's IDE.

A concrete walkthrough

Say you're adding file uploads to an API. Here's the full chain for a single control:

  1. You describe the feature. Mipiti generates a control objective: "Confidentiality of uploaded files shall be protected from an external attacker."
  2. A concrete control is derived: "Virus-scan uploads before accepting them."
  3. Your coding agent implements scan_upload() in upload.py and calls it from the upload handler. It submits two assertions:
    • function_exists(name="scan_upload", file="upload.py")
    • function_calls(caller="handle_upload", callee="scan_upload")
  4. On push, CI runs mipiti-verify. Tier 1 confirms both claims structurally. Tier 2 evaluates the relevant code context in CI using your configured LLM provider to determine whether the implementation actually satisfies the control's intent (e.g. that a virus scanner is really invoked, not a no-op stub).
  5. CI submits signed verification results with provenance. The platform checks that the required evidence passed verification and that the full evidence set is sufficient for the control. It then marks the control as verified.
  6. The platform recomputes the control objective's status deterministically: the relevant mitigation group is now satisfied → objective is Mitigated.
  7. A later refactor removes the scanner call. On next push, Tier 2 fails. The control is no longer verified, the objective flips to At Risk, and drift is visible in the UI — targeted to this exact control, not a vague "something changed."
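Tier 1's structural checks for the two assertions above can be approximated with Python's ast module. This is a minimal sketch of the idea, not mipiti-verify's implementation; the source string stands in for upload.py:

```python
import ast

UPLOAD_PY = '''
def scan_upload(data):
    ...  # invoke the virus scanner here

def handle_upload(request):
    scan_upload(request.body)
    return "accepted"
'''

def function_exists(tree: ast.AST, name: str) -> bool:
    # Tier 1: purely structural — does a def with this name exist?
    return any(isinstance(n, ast.FunctionDef) and n.name == name
               for n in ast.walk(tree))

def function_calls(tree: ast.AST, caller: str, callee: str) -> bool:
    # Tier 1: does the caller's body contain a direct call to callee?
    for n in ast.walk(tree):
        if isinstance(n, ast.FunctionDef) and n.name == caller:
            return any(isinstance(c, ast.Call) and
                       isinstance(c.func, ast.Name) and c.func.id == callee
                       for c in ast.walk(n))
    return False

tree = ast.parse(UPLOAD_PY)
print(function_exists(tree, "scan_upload"))                  # True
print(function_calls(tree, "handle_upload", "scan_upload"))  # True
```

Note that both checks would still pass if scan_upload were a no-op stub — which is exactly the gap Tier 2's semantic evaluation exists to close.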

What you get

Every threat model includes:

  • Trust boundaries and assets
  • Capability-defined attackers
  • Control objectives, computed as a cross-product of assets and attackers
  • Concrete controls, traceably derived from the objectives they satisfy

Key differentiator

Mipiti automates proven formal methods — capability-defined attackers and Security Problem Definitions from the Common Criteria (ISO/IEC 15408), systematic asset-attacker mapping (NIST SP 800-30), and traceable control derivation (NIST RMF) — that were previously impractical outside high-assurance environments.

AI handles threat-model generation and semantic evidence evaluation (Generation). Deterministic evaluation handles the final assurance posture (Assurance). Control Objectives are computed as a mathematical cross-product of assets and attackers — not generated by the LLM. Coverage is guaranteed and auditable.
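The cross-product construction can be shown directly. The asset, attacker, and property lists below are hypothetical examples, but the mechanism is the combinatorial coverage claimed above:

```python
from itertools import product

assets = ["uploaded files", "user credentials"]
attackers = ["an external attacker", "a malicious insider"]
properties = ["Confidentiality", "Integrity"]

# Every (property, asset, attacker) combination yields one control objective —
# coverage is exhaustive by construction, not dependent on LLM recall.
objectives = [
    f"{prop} of {asset} shall be protected from {attacker}."
    for prop, asset, attacker in product(properties, assets, attackers)
]

print(len(objectives))   # 8 = 2 properties x 2 assets x 2 attackers
print(objectives[0])
```

Because the objective set is enumerated rather than generated, auditing coverage reduces to checking the input lists.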