What is Mipiti?
Mipiti is a security posture platform that turns natural-language feature descriptions into structured security models — controls, compliance mapping, and verifiable evidence.
Its key differentiator is the assurance pipeline: claims that a control is implemented are verified in CI against the real codebase, and final posture is determined deterministically from verified evidence, assumptions, and objective logic.
The assurance pipeline
This is the assurance chain that runs continuously in CI:
- The coding agent submits assertions — structured evidence claims about code properties (e.g. `function_exists`, `test_passes`, `config_value_matches`).
- CI mechanically verifies each assertion against the codebase (Tier 1 — deterministic structural checks such as file inspection, pattern matching, configuration checks, and other machine-verifiable validations).
- CI semantically evaluates relevant verified evidence with an LLM (Tier 2 — does the code actually implement the control's intent, or is it a stub?). Tier 2 runs in your CI; code context goes directly to the LLM provider you configure, not through Mipiti.
- The platform then deterministically evaluates whether validated controls still satisfy each control objective — no LLM in the risk determination. Final status is Mitigated, At Risk, or Unassessed, computed deterministically from verified evidence, assumptions, and objective logic.
AI helps generate the security framework and evaluate semantic evidence. Risk determination is purely deterministic.
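The deterministic final step can be pictured as a pure function over verified evidence. The sketch below is illustrative only — the `Control` type, `objective_status` function, and the mitigation-group rule shown are hypothetical, not Mipiti's actual schema or API:

```python
from dataclasses import dataclass

# Hypothetical data model for illustration — not Mipiti's actual schema.
@dataclass
class Control:
    verified: bool   # all required evidence passed Tier 1 + Tier 2
    assessed: bool   # some evidence was submitted at all

def objective_status(mitigation_groups: list[list[Control]]) -> str:
    """Pure function — no LLM in the loop. Assumed rule: an objective is
    Mitigated when at least one mitigation group has every control verified."""
    if any(group and all(c.verified for c in group) for group in mitigation_groups):
        return "Mitigated"
    if any(c.assessed for group in mitigation_groups for c in group):
        return "At Risk"
    return "Unassessed"

print(objective_status([[Control(verified=True, assessed=True)]]))  # → Mitigated
```

Because the computation is a pure function of its inputs, the same evidence always yields the same posture, and anyone can re-derive it.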
Trust boundary
The inspectable, code-touching components run in your environment — your AI coding agent and your CI pipeline both have direct source-code access. The hosted platform coordinates threat models, controls, evidence metadata, and the deterministic assurance computation. Mipiti itself never requires source-code access.
End-to-end in one workflow
- Describe your feature — in the chat, from a Jira issue, or through your AI coding agent via MCP
- Generate — Mipiti's agentic pipeline produces a complete threat model: trust boundaries, assets, capability-defined attackers, a cross-product of control objectives, and concrete controls
- Refine — ask follow-up questions or request changes conversationally
- Implement — your AI coding agent implements the controls as part of writing code, and submits typed assertions as evidence
- Verify — your CI pipeline runs `mipiti-verify` to check each assertion against the actual codebase (Tier 1 mechanical + Tier 2 semantic) and submits signed results
- Assess — the platform deterministically evaluates whether verified controls satisfy each objective, producing a Mitigated / At Risk / Unassessed posture per objective
- Discover gaps — AI coding agents report missing implementations as negative findings, closing the loop between what the model requires and what the code actually does
- Comply and track — select a compliance framework, run gap analysis, remediate, export signed reports
Every step works through the web UI, the REST API, or the MCP server (97 tools) — so AI coding agents like Claude Code or Cursor can drive the entire workflow from the developer's IDE.
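To make the "typed assertions" in the Implement step concrete, here is a sketch of what such structured evidence claims might look like as data. The field names and shape are assumptions for illustration, not Mipiti's actual wire format:

```python
import json

# Hypothetical assertion payloads — field names are illustrative,
# not Mipiti's actual wire format.
assertions = [
    {"type": "function_exists",
     "params": {"name": "scan_upload", "file": "upload.py"}},
    {"type": "function_calls",
     "params": {"caller": "handle_upload", "callee": "scan_upload"}},
]

# The coding agent would submit these as evidence, e.g. via the REST API
# or an MCP tool call; only the serialization is shown here.
print(json.dumps(assertions, indent=2))
```

The point of the typed structure is that each claim is mechanically checkable in CI, rather than being free-text prose an auditor must interpret.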
A concrete walkthrough
Say you're adding file uploads to an API. Here's the full chain for a single control:
- You describe the feature. Mipiti generates a control objective: "Confidentiality of uploaded files shall be protected from an external attacker."
- A concrete control is derived: "Virus-scan uploads before accepting them."
- Your coding agent implements `scan_upload()` in `upload.py` and calls it from the upload handler. It submits two assertions: `function_exists(name="scan_upload", file="upload.py")` and `function_calls(caller="handle_upload", callee="scan_upload")`
- On push, CI runs `mipiti-verify`. Tier 1 confirms both claims structurally. Tier 2 evaluates the relevant code context in CI using your configured LLM provider to determine whether the implementation actually satisfies the control's intent (e.g. that a virus scanner is really invoked, not a no-op stub).
- CI submits signed verification results with provenance. The platform checks that the required evidence passed verification and that the full evidence set is sufficient for the control. It then marks the control as verified.
- The platform recomputes the control objective's status deterministically: the relevant mitigation group is now satisfied → objective is Mitigated.
- A later refactor removes the scanner call. On next push, Tier 2 fails. The control is no longer verified, the objective flips to At Risk, and drift is visible in the UI — targeted to this exact control, not a vague "something changed."
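A Tier 1 structural check of the kind this walkthrough describes could be implemented with an AST walk. This is a sketch under stated assumptions — it is not `mipiti-verify`'s actual implementation, and it only handles the simple case of direct, top-level calls in Python source:

```python
import ast

def function_exists(source: str, name: str) -> bool:
    """Tier 1-style deterministic check: does a function `name` exist?"""
    return any(isinstance(n, ast.FunctionDef) and n.name == name
               for n in ast.walk(ast.parse(source)))

def function_calls(source: str, caller: str, callee: str) -> bool:
    """Does `caller` contain a direct call to `callee` in its body?"""
    for n in ast.walk(ast.parse(source)):
        if isinstance(n, ast.FunctionDef) and n.name == caller:
            return any(isinstance(c, ast.Call)
                       and isinstance(c.func, ast.Name)
                       and c.func.id == callee
                       for c in ast.walk(n))
    return False

code = """
def scan_upload(blob): ...
def handle_upload(blob):
    scan_upload(blob)
"""
print(function_exists(code, "scan_upload"),
      function_calls(code, "handle_upload", "scan_upload"))  # → True True
```

Note why Tier 2 exists on top of this: the structural check above would still pass if `scan_upload` were a no-op stub, which is exactly the gap semantic evaluation closes.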
What you get
Every threat model includes:
- Trust Boundaries — where different security domains meet, each carrying a closed-vocabulary `passes` attribute that names which attack vectors structurally traverse it
- Components — deployable units (services, processes, frontends) that bridge security architecture to code organization. Assets are scoped to components; reachability flows through this graph
- Assets — data and components that need protection, each tagged with applicable security properties (Confidentiality, Integrity, Availability, and Usage for non-extractable assets where purpose-binding matters)
- Attackers — capability-defined threat actors described by their position and concrete abilities
- Control Objectives — testable "SHALL" statements covering every asset-attacker combination, with all applicable security properties bundled into each statement
- Computed reachability — the reachability composer is a pure function over the structural primitives that decides per-control-objective reachability without LLM judgment. Verdicts are exposed as a separate read surface that auditors can re-derive independently; divergences from operator attestations surface as actionable findings
- Implementation Controls — concrete, actionable security measures mapped to control objectives, organized into mitigation groups
- Evidence verification — prove controls are really implemented with machine-checked evidence, without requiring source-code access by the Mipiti platform
- Compliance coverage — gap analysis against frameworks like OWASP ASVS 5.0, with automated remediation for unmapped requirements
- Assumptions — explicit statements about what the model takes for granted, optionally backed by structured `exclusion` predicates that the composer matches deterministically
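The "pure function" character of the reachability composer can be illustrated with a toy model. Everything here is an assumption for illustration — the representation of boundaries as vector sets, the attacker model, and the exclusion rule are hypothetical, not Mipiti's actual composer logic:

```python
# Toy model: each trust boundary on the path to an asset "passes" a set
# of attack vectors; an attacker is described by the vectors it can use.
def reachable(path_boundaries: list[set[str]],
              attacker_vectors: set[str],
              exclusions: list[set[str]]) -> bool:
    """Deterministic by design: the attacker reaches the asset only if
    every boundary on the path passes at least one of its vectors, and no
    exclusion assumption fully covers the attacker's capabilities."""
    if any(attacker_vectors <= excl for excl in exclusions):
        return False  # an explicit assumption excludes this attacker
    return all(boundary & attacker_vectors for boundary in path_boundaries)

boundaries = [{"network"}, {"network", "physical"}]
print(reachable(boundaries, {"network"}, exclusions=[]))        # → True
print(reachable(boundaries, {"network"}, [{"network"}]))        # → False
print(reachable(boundaries, {"physical"}, exclusions=[]))       # → False
```

Because the verdict is a function of the structural primitives alone, an auditor can re-run the same computation and confirm or dispute it independently, which is what makes divergences from operator attestations detectable.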
Key differentiator
Mipiti automates proven formal methods — capability-defined attackers and Security Problem Definition (Common Criteria ISO 15408), systematic asset-attacker mapping (NIST SP 800-30), and traceable control derivation (NIST RMF) — that were previously impractical outside high-assurance environments.
AI handles threat-model generation and semantic evidence evaluation (Generation). Deterministic evaluation handles the final assurance posture (Assurance). Control Objectives are computed as a mathematical cross-product of assets and attackers — not generated by the LLM. Coverage is guaranteed and auditable.
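The cross-product construction can be sketched in a few lines. Asset, attacker, and property names below are illustrative, and the statement template merely mimics the "SHALL" form described above:

```python
from itertools import product

# Illustrative inputs — names and the statement template are hypothetical.
assets = [("uploaded files", {"Confidentiality", "Integrity"})]
attackers = ["an external attacker", "a malicious insider"]

# One objective per (asset, attacker) pair, with all applicable
# security properties bundled into the statement.
objectives = [
    f"{' and '.join(sorted(props))} of {asset} shall be protected from {attacker}."
    for (asset, props), attacker in product(assets, attackers)
]

for o in objectives:
    print(o)
```

Coverage is exhaustive by construction: |assets| × |attackers| statements exist before any LLM is involved, so a missing asset-attacker pairing is a structural impossibility rather than a generation oversight.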