Frequently Asked Questions

Getting Started

Does Mipiti replace security teams or security architects?

No — it's a force multiplier, not a replacement. Mipiti automates the mechanical work: enumerating assets, generating the cross-product matrix, tracking control status, and verifying implementation evidence via AI agents and CI pipelines.

What it won't automate are the judgment calls: whether a risk is acceptable, whether a finding should be dismissed, and whether the overall model is fit for purpose. Those require human context and stay human-owned.

How long does it take to generate a threat model?

The pipeline makes 5 sequential LLM calls (generate assets, refine assets, generate attackers, refine attackers, build control objective matrix) plus a batch control generation step. You'll see a live progress bar with step-by-step updates as each stage runs.

Refinement uses a single LLM call, so it's noticeably faster than initial generation. Queries and general questions are faster still — a single, shorter LLM call.

What makes a good feature description?

Be specific about what the feature does, what data it handles, and how users interact with it. Include technical details like authentication methods, storage mechanisms, and external integrations. See Getting Started for detailed tips and examples.

Can I edit the generated model?

Yes, three ways:

  1. Conversational refinement — describe what to change in natural language ("add a database attacker", "remove the admin API asset")
  2. Direct editing — click the edit/add/remove buttons on assets and attackers in the Assurance Dashboard
  3. MCP tools — use add_asset, edit_asset, remove_asset, add_attacker, edit_attacker, remove_attacker programmatically

All changes create a new version — you can always view previous versions or compare diffs. See Working with Models for details.

Methodology

Why Security Properties instead of STRIDE or PASTA?

Security Properties is asset-centric rather than attack-centric. Instead of asking "what attacks might happen?" (which produces unbounded threat catalogs), it asks "what needs protecting and from whom?" — producing a bounded set of control objectives tied directly to your assets.

The result is actionable: each control objective is a specific statement like "Confidentiality and Integrity of OAuth Tokens shall be protected from External Network Attacker", which maps directly to implementation controls. See Methodology for the formal foundations.

What is Usage (U)?

Usage is the fourth security property — alongside Confidentiality, Integrity, and Availability. It captures purpose-binding: the asset can only be used for its intended purpose through its operational interface.

Usage applies to non-extractable assets that expose an operational interface — like an HSM signing key (can sign but can't be exported), a biometric template (can verify but can't reconstruct the original), or a DRM license (can play but can't copy). See Methodology for detailed examples.

How are risk tiers calculated?

Two layers, both deterministic:

  1. Per-entity rating (Impact for assets, Likelihood for attackers) — composed from a factor decomposition. Asset impact has seven factors (per-property severity subscores plus blast radius, recoverability, and regulatory scope). Attacker likelihood has five (attack vector, privileges required, attack complexity, user interaction, capability prevalence). The LLM judges the individual factors; the overall H/M/L rating is composed from them by a fixed rule.
  2. Per-CO risk tier — Impact × Likelihood via a 3×3 matrix produces Critical / High / Medium / Low.

Neither step uses LLM opinion — both are mechanical. To change a rating, you edit the underlying factors with a documented reason; the rating recomposes automatically. See Methodology for the full factor list, composition rules, and worked examples.
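Because the composition is mechanical, it amounts to a lookup table. A minimal Python sketch — the specific matrix cells here are illustrative assumptions, not Mipiti's actual table:

```python
# Hypothetical 3x3 risk matrix: (asset impact, attacker likelihood) -> tier.
# The cell values are an illustrative assumption; Mipiti's real table may differ.
RISK_MATRIX = {
    ("H", "H"): "Critical",
    ("H", "M"): "High",
    ("M", "H"): "High",
    ("H", "L"): "Medium",
    ("M", "M"): "Medium",
    ("L", "H"): "Medium",
    ("M", "L"): "Low",
    ("L", "M"): "Low",
    ("L", "L"): "Low",
}

def risk_tier(asset_impact: str, attacker_likelihood: str) -> str:
    """Compose a per-CO risk tier mechanically — no LLM opinion involved."""
    return RISK_MATRIX[(asset_impact, attacker_likelihood)]
```

Editing a factor changes the H/M/L input, and the tier recomposes through the same table.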

Model Quality

What if the AI generates irrelevant assets or attackers?

This is expected — the AI errs on the side of inclusion. Remove or edit the irrelevant entries through any of the usual editing paths: conversational refinement ("remove the admin API asset"), the edit/remove buttons in the Assurance Dashboard, or the MCP remove_asset and remove_attacker tools.

Every change creates a new version, so nothing is permanently lost.

What happens to my implementation work if I remove an asset or attacker?

Nothing is lost. Mipiti's stable-ID design keeps implementation status, evidence, and notes bound to their controls across edits, and every previous version remains viewable.

Can I regenerate controls without losing the model?

Yes. The Regenerate Controls button (in the Assurance Dashboard) and the MCP regenerate_controls tool generate fresh controls based on the current assets and attackers. Your model structure (assets, attackers, trust boundaries) is preserved.

Controls whose descriptions survive regeneration unchanged carry forward their implementation status, evidence, notes, and assertions. New or significantly changed controls start as "not implemented".

Workspaces & Teams

What is a workspace?

A workspace is the data isolation boundary. All threat models, systems, sessions, and activity belong to a workspace. Every user gets a personal workspace automatically. You can create team workspaces to collaborate — see Workspaces for details.

Can team members edit my threat models?

Yes. All members of a workspace have full read/write access to its models, systems, and controls. Only workspace owners can manage membership and workspace settings.

What happens when I switch workspaces?

All views (Dashboard, Models, Systems, Assurance, etc.) update to show data from the selected workspace. Each workspace is fully isolated — models in one workspace are invisible from another.

What is a system?

A system is a lightweight grouping of related threat models within a workspace. Use it to organize feature-level models that belong to the same product. A model can belong to at most one system. See Systems for details.

Security & Privacy

Where is my data stored?

Threat models are stored in a SQLite database on the backend server (hosted on Fly.io). All data is scoped to your workspace — members of the same workspace share data, but there is no access across workspaces.

Can other users see my threat models?

Only if they are members of the same workspace. Every API request is authenticated and workspace-checked. Your personal workspace is single-user by design. Team workspaces share data among all members.

What happens when I delete my account?

All your data is cascade-deleted: threat models (all versions), controls, chat sessions, API keys, Jira connections, activity events, compliance data, cache entries, telemetry records, and billing records. For team workspaces you own, ownership is transferred to the next member (or the workspace is deleted if you're the sole member). This is irreversible.

What happens when a workspace is deleted?

All data scoped to that workspace is permanently deleted — models, controls, systems, sessions, activity events, and Jira/Confluence mappings. Personal workspaces cannot be deleted.

Evidence Verification

How does evidence verification work without accessing my code?

Verification is split across three parties, and only the components you run ever touch your code:

  1. Your AI agent (has code access) — analyzes the codebase and submits typed assertions (e.g., "function require_auth exists in auth/middleware.py")
  2. Mipiti (no code access) — stores assertions and coordinates verification
  3. Your CI pipeline (has code access) — independently verifies assertions against the actual codebase

The Mipiti platform itself never accesses your source code. For Tier 2 semantic verification, your CI pipeline sends code context directly to the LLM provider you configure (OpenAI, Anthropic, or a local Ollama instance) — this traffic goes from your CI to the provider, not through Mipiti. See Evidence Verification for the full guide.

What's the difference between Tier 1, Tier 2, and Sufficiency?

Tier 1 is mechanical and deterministic — does the function exist at that path? Does the test pass? Does the config key exist? It runs in seconds with zero AI cost. Tier 1 always runs for all pending assertions.

Tier 2 is semantic and AI-assisted — does the function actually implement the security control, or is it a no-op stub? The platform generates a targeted yes/no verification prompt per assertion (template-based, consumes no usage credits). Your CI runs this prompt against the actual code using the LLM provider you configure, then submits the result.

Collective Sufficiency evaluates whether all assertions together prove the full control. A single assertion may correctly evidence one facet (e.g., "authentication function exists") without covering others (e.g., session timeout). Sufficiency asks: taken together, is the evidence set complete? It runs automatically for every control that has assertions.

Tier 1 catches structural errors; Tier 2 catches semantic ones; Sufficiency catches evidence gaps. Together they defend against both accidents and gaming.

All assertion types undergo both Tier 1 and Tier 2 verification. A control becomes verified only when all its assertions pass both tiers, the coherence check passes, and collective sufficiency is "sufficient" — "verified" is computed, not manually settable.
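The computed "verified" predicate can be sketched as follows. The field names (tier1_passed, tier2_passed, coherent) are illustrative assumptions, not Mipiti's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    # Hypothetical field names; Mipiti's real schema may differ.
    tier1_passed: bool   # mechanical check (function/test/config exists)
    tier2_passed: bool   # semantic check (implementation is real, not a stub)
    coherent: bool       # assertion type actually proves this control

def control_verified(assertions: list[Assertion], sufficiency: str) -> bool:
    """'Verified' is computed, never manually set: every assertion must pass
    both tiers and the coherence check, and the set must be sufficient."""
    if not assertions:
        return False
    all_pass = all(a.tier1_passed and a.tier2_passed and a.coherent
                   for a in assertions)
    return all_pass and sufficiency == "sufficient"
```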

How do I set up CI verification?

The easiest way is the mipiti-verify CLI — a single command handles the entire Tier 1 + Tier 2 + Sufficiency pipeline:

  pip install mipiti-verify[all]
  mipiti-verify run <model_id> --api-key $MIPITI_API_KEY --tier2-provider openai

It pulls pending assertions, runs 21 built-in verifiers, submits Tier 1 results, then runs Tier 2 semantic checks, and finally evaluates collective sufficiency. See Evidence Verification for full details.

For attestation (cryptographic proof that results came from CI), there are two options: an OIDC token issued by your CI provider, or an ECDSA signature produced with a key held by your pipeline.

GitHub Actions and GitLab.com are trusted by default. For self-hosted GitLab or other OIDC providers, add custom issuers in Workspace Settings > Security > Trusted OIDC Issuers. See Evidence Verification for step-by-step instructions.

Do I need attestation?

It's optional but recommended. Without attestation, verification results are accepted based on API key auth alone. With the "Require CI attestation" toggle enabled in Workspace Settings, all submissions must include either an OIDC token or an ECDSA signature — proving results came from a real CI pipeline.

Can assertions be gamed?

The system has three defenses against gaming:

  1. Assertion coherence check — when assertions are submitted, the platform validates that the assertion type actually proves the control. For example, file_hash is flagged as incoherent for an input-validation control (a matching hash proves the file hasn't changed, not that it validates input). Incoherent assertions are blocked from reaching "verified" status.

  2. Two-tier verification — even if the assertion type is appropriate, Tier 2 semantic verification checks whether the implementation is real or a stub. A function_exists assertion passes Tier 1 if the function exists, but Tier 2 asks "does this function actually implement the security control?"

  3. Collective sufficiency — even if individual assertions pass both tiers, the full evidence set must be sufficient to prove the control. Submitting a handful of easy-to-pass assertions that only cover a narrow facet of a multi-faceted control will be flagged as "insufficient."

See Evidence Verification for details on how the coherence check works.

What is mipiti-verify?

An open-source CLI tool that runs the complete verification pipeline (Tier 1 + Tier 2 + Sufficiency) in a single command.

It includes 21 built-in Tier 1 verifiers and supports OpenAI, Anthropic, or Ollama for Tier 2. See Evidence Verification for installation and usage.

How does sufficiency evaluation work?

Sufficiency is evaluated server-side when assertions are submitted — the platform checks whether the assertion set covers all aspects of the control using its own LLM. You get immediate feedback on coverage gaps without waiting for a CI run. No manual trigger needed.

What happens if sufficiency fails?

The control remains "implemented" (not "verified") and the verification report shows the control as "insufficient" with a reason describing which aspects of the control are not covered. Submit additional assertions to address the gaps and re-run verification.

Is the sufficiency verdict signed?

Yes. Every sufficiency verdict is ECDSA-signed with the Mipiti instance's key. The signed payload binds the verdict plus a hash of every assertion that fed into it (Tier 1/Tier 2 status + attested flags included), so any post-hoc tampering in the database invalidates the signature. Click the padlock next to a control's status on the Assurance tab to see the signed bundle and run a one-click server-side verify. Signatures travel with audit archive exports so downstream auditors can re-verify against the origin instance's public key.
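The binding idea — hashing every assertion into the signed payload — can be sketched like this. The canonical encoding and field names are assumptions; the actual payload layout and the ECDSA signing step are Mipiti-internal:

```python
import hashlib
import json

def evidence_digest(assertions: list[dict]) -> str:
    """Hash a canonical encoding of every assertion (including its Tier 1/
    Tier 2 status and attested flag) into one digest. Any later edit to an
    assertion row changes the digest, which invalidates the signature over it.
    The 'id' key and overall layout are illustrative assumptions."""
    canonical = json.dumps(
        sorted(assertions, key=lambda a: a["id"]),  # order-independent
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()
```

The verdict plus this digest go into the signed payload, so post-hoc database tampering shows up as a signature mismatch.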

Can I move a threat model between Mipiti instances?

Yes. Export the model as an audit archive (JSON) and import it into any other instance. The archive is self-contained — every version, all assertions with their CI verdicts and attested flags, findings, attestations, and instance signatures travel together. The importing instance assigns a fresh model_id on each import, so the same archive can be restored any number of times. Signatures preserve their origin fingerprints; to trust-verify them, register the origin's public key via the importing instance's Admin Panel > Trusted Signers. See Audit archive (JSON).

Gap Discovery (Negative Findings)

What's the difference between assertions and findings?

Assertions are positive claims — "this control IS implemented, here's the proof." Findings are negative observations — "this control is NOT implemented, here's what I checked." Assertions prove presence; findings document absence. Together they give a complete picture: what's been verified, and what's still missing.

How does an AI agent discover gaps?

  1. The agent calls get_scan_prompt to get a structured scanning guide for not-implemented controls
  2. Following the prompt, the agent searches the codebase for expected implementations (functions, tests, configs, patterns)
  3. When something is expected but not found, the agent submits a finding with the locations it checked, the patterns it searched for, and what a correct implementation would look like

The scan prompt is a deterministic template — it consumes no usage credits. The agent's own AI capabilities drive the codebase analysis.

How do findings connect to assertions?

Through the remediation bridge. When a finding is remediated, you can link it to one or more assertion IDs. If all linked assertions pass both tiers of CI verification, the finding is auto-verified — creating a complete audit trail from gap discovery through fix to independent verification.

Can I submit findings without MCP?

Yes. Findings can be submitted via the REST API (POST /api/models/{id}/findings) using an API key, or added manually through the Findings panel on the Assurance page. The API accepts the same structured data as the MCP tool — checked locations, patterns, severity, etc.
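A minimal sketch of building such a request with the Python standard library. The payload fields shown are assumptions modelled on the description above — check the API for the real schema:

```python
import json
import urllib.request

def build_finding_request(base_url: str, api_key: str,
                          model_id: str, finding: dict) -> urllib.request.Request:
    """Build a POST to the findings endpoint; send it with urlopen()."""
    return urllib.request.Request(
        f"{base_url}/api/models/{model_id}/findings",
        data=json.dumps(finding).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # API key auth
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The finding dict would carry the checked locations, searched patterns, and severity described above; send the request with urllib.request.urlopen or any HTTP client.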

What is the finding lifecycle?

discovered → acknowledged → remediated → verified (with an optional dismissed path for false positives). Transitions are strictly forward — a finding cannot go backward. Dismissal requires a reason. Verification is auto-computed from linked assertions, not manually settable.
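The forward-only rule can be sketched as a small state check. The enforcement logic here is an illustrative assumption — and in Mipiti, "verified" is reached via auto-verification from linked assertions, not a manual call:

```python
# Lifecycle states from the text, in order; dismissal is a side exit.
ORDER = ["discovered", "acknowledged", "remediated", "verified"]

def transition(current, target, reason=None):
    """Allow only forward moves; dismissal requires a documented reason."""
    if target == "dismissed":
        if not reason:
            raise ValueError("dismissal requires a reason")
        return target
    if ORDER.index(target) <= ORDER.index(current):
        raise ValueError("transitions are strictly forward")
    return target
```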

Is OIDC attestation as secure as ECDSA?

With required claims and a branch-protected GitHub Environment, OIDC provides equivalent cross-repo and cross-branch protection. Without claim restrictions, OIDC proves CI identity but not which repo the pipeline ran in — any repo with id-token: write can produce a valid token. Configure repository and environment claims in Workspace Settings > Security > Required OIDC Claims to close this gap. See Securing CI Attestation for the full guide.

How do I prevent PR authors from faking attestation?

For OIDC: configure required claims (repository + environment) in Workspace Settings and create a branch-protected GitHub Environment restricted to main. PR branches can't access the environment, so they can't produce valid tokens.

For ECDSA: store the signing key as a repo secret (not exposed to fork PRs). Only workflows on the default/protected branch can access the key.

Both approaches prevent untrusted branches from producing valid attestations. See Securing CI Attestation for step-by-step setup.

Compliance

Pro Plan — Compliance features require a Pro subscription. Start a 30-day free trial →

How does compliance differ from assurance?

Assurance checks whether your risk-derived control objectives are mitigated — it is bottom-up. Compliance checks whether your model satisfies an external framework's requirements (like OWASP ASVS 5.0) — it is top-down. A model can be fully mitigated but not fully compliant if the framework mandates controls that risk analysis did not produce. See Compliance for the full guide.

Which frameworks are supported?

Mipiti ships with 11 built-in frameworks: OWASP ASVS 5.0 (345 requirements), ISO 27001:2022, SOC 2 Type II, NIST CSF 2.0, GDPR, FedRAMP Moderate, PCI DSS 4.0, EU Cyber Resilience Act, ISO/SAE 21434:2021 (automotive cybersecurity engineering), and IEC 62443 (industrial automation and control systems — Part 4-1 SDL plus Part 3-3 system security requirements). You can also import custom frameworks as CSV or JSON — see Importing custom frameworks.

What does "Remediate Gaps" do?

The remediation pipeline uses the LLM to analyze unmapped framework requirements and propose new assets and attackers that would close the gaps. You review each suggestion before applying. Approved entities are added in a single version bump, controls are generated, and they are auto-mapped to the framework. Requirements that do not apply to your system can be excluded.

Integrations

Do I need an API key for MCP?

For MCP, you can authenticate via OAuth (browser-based login) or an API key. Create an API key on the Settings page under API Keys — the same key works for both the hosted MCP endpoint (https://api.mipiti.io/mcp) and direct REST API calls. See Getting Started for setup instructions.

Why do some MCP tools return only summaries?

Large models can have hundreds of controls and over a thousand control objectives. Returning everything at once would exceed AI context limits. Retrieval tools return only counts and summaries by default — use offset/limit to page through individual items and status to filter. See Integrations for the full list of parameters.
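The offset/limit pattern amounts to a standard paging loop. A sketch, where fetch_page stands in for whichever retrieval tool or endpoint you call (its keyword signature is an assumption):

```python
def iter_all(fetch_page, limit=50):
    """Yield every item by paging with offset/limit until a short page."""
    offset = 0
    while True:
        page = fetch_page(offset=offset, limit=limit)
        if not page:
            return
        yield from page
        if len(page) < limit:   # short page means we've reached the end
            return
        offset += limit
```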

Billing

How are credits consumed?

Credits are consumed per LLM token, so cost tracks how much LLM work an operation triggers: full model generation (multiple large calls) costs the most, refinement (a single call) less, and queries (a single shorter call) the least.

Cached responses (identical prompts within the cache TTL) consume zero credits.

Every tier includes a monthly credit allowance that resets on the 1st of each month: Free gets 200/month, Pro gets 1,500/month, and Organization has unlimited usage. Credits are consumed from the monthly allowance first, then from purchased credits.

What are credit packs?

One-time credit purchases available on the Pricing page. Multiple pack sizes are offered with lower per-credit pricing on larger packs. Purchased credits are permanent (they never reset or expire) and stack on top of your monthly allowance. After payment, credits are added to your balance automatically within a few seconds.

How does the monthly allowance work?

Every user gets a monthly credit allowance based on their plan tier. The allowance resets to its full value on the 1st of each calendar month — unused allowance does not roll over. When you use credits, the system draws from your monthly allowance first, then from any purchased credit packs. Your billing page shows both pools separately with a progress bar for the monthly allowance.
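The draw order can be sketched as a small function; the shape is illustrative, not Mipiti's billing code:

```python
def consume_credits(monthly: int, purchased: int, cost: int):
    """Draw from the monthly allowance first, then from purchased packs.
    Returns (monthly_left, purchased_left)."""
    from_monthly = min(monthly, cost)
    remainder = cost - from_monthly
    if remainder > purchased:
        raise ValueError("insufficient credits")
    return monthly - from_monthly, purchased - remainder
```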

Is there a daily usage limit?

Yes. All plans are subject to a 2,000 AI requests per day fair use limit. An AI request is any operation that triggers LLM processing (generation, refinement, queries, control generation, gap analysis, compliance mapping). Cached responses do not count toward the limit. If you exceed 2,000 requests in a day, you'll receive an HTTP 429 response and can retry the next day (UTC reset).

What are referral codes?

Referral codes let you invite colleagues. When someone signs up using your code, you earn 25 bonus credits. Codes are earned through platform engagement — each month, the most active users receive a referral code. You can hold up to 3 unused codes at a time. Codes expire after 90 days.

Administration

How do I see per-user usage?

Administrators can view per-user token consumption in Settings > Platform > Usage. The dashboard shows total requests, tokens, estimated LLM cost, and active users over a selectable period (7, 30, or 90 days). A top users table shows per-user breakdown with plan tier, token counts, and estimated cost. The same data is available via GET /api/admin/usage/summary?days=30.

How do I back up my data?

There are three backup scopes: per-organization, platform-wide, and full-instance.

For a complete one-shot signed snapshot, use the full-instance backup; volume snapshots remain the operational alternative that bypasses the application layer entirely. The restore counterparts are POST /api/admin/restore (org), POST /api/admin/platform-restore (superadmin), and POST /api/admin/full-restore (superadmin, two-phase: every inner file is validated first, then the platform and each org are written). All backups are ECDSA P-256 signed — the restore endpoints verify signature and schema before applying. Cross-instance migration is supported via the trusted-signer registry on the per-org and platform restore flows; full-restore is same-instance only in v1. See Administration for the full procedure.