Frequently Asked Questions
Getting Started
Does Mipiti replace security teams or security architects?
No — it's a force multiplier, not a replacement. Mipiti automates the mechanical work: enumerating assets, generating the cross-product matrix, tracking control status, and verifying implementation evidence via AI agents and CI pipelines.
What it won't automate are the judgment calls: whether a risk is acceptable, whether a finding should be dismissed, and whether the overall model is fit for purpose. Those require human context and stay human-owned.
How long does it take to generate a threat model?
Initial generation takes 30-60 seconds. The pipeline makes 5 sequential LLM calls (generate assets, refine assets, generate attackers, refine attackers, build control objective matrix) plus a batch control generation step. You'll see real-time progress for each step.
Refinement is faster — typically a single LLM call (5-15 seconds). Queries and general questions are near-instant.
What makes a good feature description?
Be specific about what the feature does, what data it handles, and how users interact with it. Include technical details like authentication methods, storage mechanisms, and external integrations. See Getting Started for detailed tips and examples.
Can I edit the generated model?
Yes, three ways:
- Conversational refinement — describe what to change in natural language ("add a database attacker", "remove the admin API asset")
- Direct editing — click the edit/add/remove buttons on assets and attackers in the Assurance Dashboard
- MCP tools — use add_asset, edit_asset, remove_asset, add_attacker, edit_attacker, and remove_attacker programmatically
All changes create a new version — you can always view previous versions or compare diffs. See Working with Models for details.
Methodology
Why Security Properties instead of STRIDE or PASTA?
Security Properties is asset-centric rather than attack-centric. Instead of asking "what attacks might happen?" (which produces unbounded threat catalogs), it asks "what needs protecting and from whom?" — producing a bounded set of control objectives tied directly to your assets.
The result is actionable: each control objective is a specific statement like "Confidentiality and Integrity of OAuth Tokens shall be protected from External Network Attacker", which maps directly to implementation controls. See Methodology for the formal foundations.
What is Usage (U)?
Usage is the fourth security property — alongside Confidentiality, Integrity, and Availability. It captures purpose-binding: the asset can only be used for its intended purpose through its operational interface.
Usage applies to non-extractable assets that expose an operational interface — like an HSM signing key (can sign but can't be exported), a biometric template (can verify but can't reconstruct the original), or a DRM license (can play but can't copy). See Methodology for detailed examples.
How are risk tiers calculated?
Risk tiers are derived from two ratings:
- Impact (on assets): High, Medium, or Low — how severe would a breach be?
- Likelihood (on attackers): High, Medium, or Low — how probable is this attacker?
These combine via a 3x3 matrix into four tiers: Critical (high impact + high likelihood), High, Medium, and Low. No extra LLM call is needed — risk is deterministic from the entity ratings. See Methodology for the full matrix.
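As a sketch, the deterministic lookup can be pictured as a small table. Only the Critical cell (high impact plus high likelihood) is stated above; the remaining cell assignments below are illustrative assumptions, not the documented matrix.

```python
# Sketch of a deterministic 3x3 risk-tier lookup.
# Only the (High, High) -> Critical cell is documented;
# every other cell here is an illustrative assumption.
RISK_MATRIX = {
    ("High", "High"): "Critical",
    ("High", "Medium"): "High",
    ("Medium", "High"): "High",
    ("High", "Low"): "Medium",
    ("Medium", "Medium"): "Medium",
    ("Low", "High"): "Medium",
    ("Medium", "Low"): "Low",
    ("Low", "Medium"): "Low",
    ("Low", "Low"): "Low",
}

def risk_tier(impact: str, likelihood: str) -> str:
    """Derive the risk tier from asset impact and attacker likelihood."""
    return RISK_MATRIX[(impact, likelihood)]
```

Because the tier is a pure function of the two ratings, changing either rating updates risk with no LLM call.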
Model Quality
What if the AI generates irrelevant assets or attackers?
This is expected — the AI errs on the side of inclusion. You have several options:
- Refine via chat: "Remove the physical theft attacker — this is a cloud-only service"
- Remove directly: Click the delete button on the entity in the Assurance Dashboard
- Regenerate controls: If control objectives are stale after many edits, use the "Regenerate Controls" button to get a fresh set
Every change creates a new version, so nothing is permanently lost.
Can I regenerate controls without losing the model?
Yes. The Regenerate Controls button (in the Assurance Dashboard) and the MCP regenerate_controls tool delete existing controls and generate fresh ones based on the current assets and attackers. Your model structure (assets, attackers, trust boundaries) is preserved.
Note that implementation status is reset — all controls start as "not implemented" after regeneration.
Workspaces & Teams
What is a workspace?
A workspace is the data isolation boundary. All threat models, systems, sessions, and activity belong to a workspace. Every user gets a personal workspace automatically. You can create team workspaces to collaborate — see Workspaces for details.
Can team members edit my threat models?
Yes. All members of a workspace have full read/write access to its models, systems, and controls. Only workspace owners can manage membership and workspace settings.
What happens when I switch workspaces?
All views (Dashboard, Models, Systems, Assurance, etc.) update to show data from the selected workspace. Each workspace is fully isolated — models in one workspace are invisible from another.
What is a system?
A system is a lightweight grouping of related threat models within a workspace. Use it to organize feature-level models that belong to the same product. A model can belong to at most one system. See Systems for details.
Security & Privacy
Where is my data stored?
Threat models are stored in a SQLite database on the backend server (hosted on Fly.io). All data is scoped to your workspace — members of the same workspace share data, but there is no access across workspaces.
Can other users see my threat models?
Only if they are members of the same workspace. Every API request is authenticated and workspace-checked. Your personal workspace is single-user by design. Team workspaces share data among all members.
What happens when I delete my account?
All your data is cascade-deleted: threat models (all versions), controls, chat sessions, API keys, Jira connections, activity events, compliance data, cache entries, telemetry records, and billing records. For team workspaces you own, ownership is transferred to the next member (or the workspace is deleted if you're the sole member). This is irreversible.
What happens when a workspace is deleted?
All data scoped to that workspace is permanently deleted — models, controls, systems, sessions, activity events, and Jira/Confluence mappings. Personal workspaces cannot be deleted.
Evidence Verification
How does evidence verification work without accessing my code?
Three parties, none of which hold both code and assertions:
- Your AI agent (has code access) — analyzes the codebase and submits typed assertions (e.g., "function require_auth exists in auth/middleware.py")
- Mipiti (no code access) — stores assertions and coordinates verification
- Your CI pipeline (has code access) — independently verifies assertions against the actual codebase
No source code ever leaves your infrastructure. See Evidence Verification for the full guide.
What's the difference between Tier 1, Tier 2, and Sufficiency?
Tier 1 is mechanical and deterministic — does the function exist at that path? Does the test pass? Does the config key exist? It runs in seconds with zero AI cost. Tier 1 always runs for all pending assertions.
Tier 2 is semantic and AI-assisted — does the function actually implement the security control, or is it a no-op stub? The platform generates a targeted yes/no verification prompt per assertion (template-based, consumes no usage credits). Your CI runs this prompt against the actual code using a local AI tool, then submits the result. Tier 2 runs automatically for all controls with assertions.
Collective Sufficiency evaluates whether all assertions together prove the full control. A single assertion may correctly evidence one facet (e.g., "authentication function exists") without covering others (e.g., session timeout). Sufficiency asks: taken together, is the evidence set complete? Runs automatically in CI for all controls with assertions.
Tier 1 catches structural errors; Tier 2 catches semantic ones; Sufficiency catches evidence gaps. Together they defend against both accidents and gaming.
All assertion types undergo both Tier 1 and Tier 2 verification. A control becomes verified only when all its assertions pass both tiers, the coherence check passes, and collective sufficiency is "sufficient" — "verified" is computed, not manually settable.
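The computed status can be sketched as a pure function of the verification results. The field names below are assumptions for illustration, not Mipiti's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    tier1_passed: bool   # mechanical check: does the thing exist?
    tier2_passed: bool   # semantic check: does it really implement the control?
    coherent: bool       # does this assertion type even prove this control?

def control_verified(assertions: list[Assertion], sufficiency: str) -> bool:
    """'Verified' is computed, never set manually: every assertion must pass
    both tiers and the coherence check, and the evidence set as a whole
    must be judged 'sufficient'."""
    if not assertions:
        return False
    all_pass = all(a.tier1_passed and a.tier2_passed and a.coherent
                   for a in assertions)
    return all_pass and sufficiency == "sufficient"
```

A control with one failing or incoherent assertion, or an insufficient evidence set, stays at "implemented" rather than "verified".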
How do I set up CI verification?
The easiest way is the mipiti-verify CLI — a single command handles the entire Tier 1 + Tier 2 + Sufficiency pipeline:
pip install mipiti-verify[all]
mipiti-verify run <model_id> --api-key $MIPITI_API_KEY --tier2-provider openai
It pulls pending assertions, runs 21 built-in verifiers, submits Tier 1 results, then runs Tier 2 semantic checks for controls with evidence marked complete, and finally evaluates collective sufficiency. See Evidence Verification for full details.
For attestation (cryptographic proof that results came from CI), two options:
- OIDC (GitHub Actions / GitLab CI) — add permissions: id-token: write to your workflow. No keys to manage. Mipiti validates tokens against the provider's published JWKS.
- ECDSA (any CI system) — generate a P-256 key pair, upload the public key in Workspace Settings, store the private key as a CI secret.
GitHub Actions and GitLab.com are trusted by default. For self-hosted GitLab or other OIDC providers, add custom issuers in Workspace Settings > Security > Trusted OIDC Issuers. See Evidence Verification for step-by-step instructions.
Do I need attestation?
It's optional but recommended. Without attestation, verification results are accepted based on API key auth alone. With the "Require CI attestation" toggle enabled in Workspace Settings, all submissions must include either an OIDC token or an ECDSA signature — proving results came from a real CI pipeline.
Can assertions be gamed?
The system has three defenses against gaming:
- Assertion coherence check — when assertions are submitted, the platform validates that the assertion type actually proves the control. For example, file_hash is flagged as incoherent for an input-validation control (a matching hash proves the file hasn't changed, not that it validates input). Incoherent assertions are blocked from reaching "verified" status.
- Two-tier verification — even if the assertion type is appropriate, Tier 2 semantic verification checks whether the implementation is real or a stub. A function_exists assertion passes Tier 1 if the function exists, but Tier 2 asks "does this function actually implement the security control?"
- Collective sufficiency — even if individual assertions pass both tiers, the full evidence set must be sufficient to prove the control. Submitting a handful of easy-to-pass assertions that only cover a narrow facet of a multi-faceted control will be flagged as "insufficient."
See Evidence Verification for details on how the coherence check works.
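One way to picture the coherence check is a mapping from assertion types to the control categories they can meaningfully evidence. The mapping below is invented for illustration and is not Mipiti's actual rule set:

```python
# Hypothetical coherence rules: which control categories each assertion
# type can evidence. Invented for illustration only.
COHERENT_TYPES = {
    "function_exists": {"authentication", "input_validation", "encryption"},
    "test_passes": {"authentication", "input_validation", "encryption"},
    # A matching hash proves the file is unchanged, nothing more.
    "file_hash": {"static_artifact_integrity"},
}

def is_coherent(assertion_type: str, control_category: str) -> bool:
    """A file_hash assertion on an input-validation control is incoherent:
    it shows the file didn't change, not that the code validates input."""
    return control_category in COHERENT_TYPES.get(assertion_type, set())
```

An incoherent assertion can still be stored, but it is blocked from counting toward "verified".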
What is mipiti-verify?
An open-source CLI tool that runs the complete verification pipeline (Tier 1 + Tier 2 + Sufficiency) in a single command. It's designed for two use cases:
- CI integration — add to your GitHub Actions, GitLab CI, or any pipeline
- Independent audit — auditors can run it themselves against the codebase without trusting your CI
It includes 21 built-in Tier 1 verifiers and supports OpenAI, Anthropic, or Ollama for Tier 2. See Evidence Verification for installation and usage.
How does sufficiency evaluation work?
Sufficiency is evaluated server-side when assertions are submitted — the platform checks whether the assertion set covers all aspects of the control using its own LLM. You get immediate feedback on coverage gaps without waiting for a CI run. No manual trigger needed.
What happens if sufficiency fails?
The control remains "implemented" (not "verified") and the verification report shows the control as "insufficient" with a reason describing which aspects of the control are not covered. Submit additional assertions to address the gaps and re-run verification.
Gap Discovery (Negative Findings)
What's the difference between assertions and findings?
Assertions are positive claims — "this control IS implemented, here's the proof." Findings are negative observations — "this control is NOT implemented, here's what I checked." Assertions prove presence; findings document absence. Together they give a complete picture: what's been verified, and what's still missing.
How does an AI agent discover gaps?
- The agent calls get_scan_prompt to get a structured scanning guide for not-implemented controls
- Following the prompt, the agent searches the codebase for expected implementations (functions, tests, configs, patterns)
- When something is expected but not found, the agent submits a finding with the locations it checked, the patterns it searched for, and what a correct implementation would look like
The scan prompt is a deterministic template — it consumes no usage credits. The agent's own AI capabilities drive the codebase analysis.
How do findings connect to assertions?
Through the remediation bridge. When a finding is remediated, you can link it to one or more assertion IDs. If all linked assertions pass both tiers of CI verification, the finding is auto-verified — creating a complete audit trail from gap discovery through fix to independent verification.
Can I submit findings without MCP?
Yes. Findings can be submitted via the REST API (POST /api/models/{id}/findings) using an API key, or added manually through the Findings panel on the Assurance page. The API accepts the same structured data as the MCP tool — checked locations, patterns, severity, etc.
What is the finding lifecycle?
discovered → acknowledged → remediated → verified (with an optional dismissed path for false positives). Transitions are strictly forward — a finding cannot go backward. Dismissal requires a reason. Verification is auto-computed from linked assertions, not manually settable.
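A minimal sketch of this state machine, including the computed verification step, might look like the following. The class and method names are assumptions for illustration:

```python
# Forward-only finding lifecycle; "dismissed" is the one side path.
ORDER = ["discovered", "acknowledged", "remediated", "verified"]

class Finding:
    def __init__(self):
        self.state = "discovered"
        self.linked_assertions = []  # assertion IDs linked at remediation

    def advance(self, new_state: str) -> None:
        """Transitions are strictly forward; a finding never moves backward."""
        if ORDER.index(new_state) <= ORDER.index(self.state):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        if new_state == "verified":
            raise ValueError("'verified' is computed, not manually settable")
        self.state = new_state

    def dismiss(self, reason: str) -> None:
        """Dismissal is allowed for false positives, but requires a reason."""
        if not reason:
            raise ValueError("dismissal requires a reason")
        self.state = "dismissed"

    def auto_verify(self, assertion_passed: dict) -> None:
        """The remediation bridge: a remediated finding becomes verified
        only when all linked assertions passed CI verification."""
        if (self.state == "remediated" and self.linked_assertions
                and all(assertion_passed[a] for a in self.linked_assertions)):
            self.state = "verified"
```

This mirrors the documented flow: the fix is linked to assertions, and verification falls out of CI results rather than a manual toggle.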
Is OIDC attestation as secure as ECDSA?
With required claims and a branch-protected GitHub Environment, OIDC provides equivalent cross-repo and cross-branch protection. Without claim restrictions, OIDC proves CI identity but not which repo the pipeline ran in — any repo with id-token: write can produce a valid token. Configure repository and environment claims in Workspace Settings > Security > Required OIDC Claims to close this gap. See Securing CI Attestation for the full guide.
How do I prevent PR authors from faking attestation?
For OIDC: configure required claims (repository + environment) in Workspace Settings and create a branch-protected GitHub Environment restricted to main. PR branches can't access the environment, so they can't produce valid tokens.
For ECDSA: store the signing key as a repo secret (not exposed to fork PRs). Only workflows on the default/protected branch can access the key.
Both approaches prevent untrusted branches from producing valid attestations. See Securing CI Attestation for step-by-step setup.
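The OIDC claim restriction amounts to an exact-match filter over the token's claims. A minimal sketch, with the claim names taken from the setup described above:

```python
def claims_allowed(token_claims: dict, required: dict) -> bool:
    """Accept an OIDC token only if every configured required claim
    matches exactly, e.g. required = {"repository": "org/repo",
    "environment": "production"}. A token minted by a fork or a PR
    branch carries different claim values and is rejected."""
    return all(token_claims.get(k) == v for k, v in required.items())
```

With no required claims configured, `required` is empty and any validly signed token passes, which is exactly the gap the claim restrictions close.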
Compliance
Pro Plan — Compliance features require a Pro subscription; a 30-day free trial is available.
How does compliance differ from assurance?
Assurance checks whether your risk-derived control objectives are mitigated — it is bottom-up. Compliance checks whether your model satisfies an external framework's requirements (like OWASP ASVS 5.0) — it is top-down. A model can be fully mitigated but not fully compliant if the framework mandates controls that risk analysis did not produce. See Compliance for the full guide.
Which frameworks are supported?
Mipiti ships with OWASP ASVS 5.0 (345 requirements at three levels). You can also import custom frameworks as CSV or JSON — see Importing custom frameworks.
What does "Remediate Gaps" do?
The remediation pipeline uses the LLM to analyze unmapped framework requirements and propose new assets and attackers that would close the gaps. You review each suggestion before applying. Approved entities are added in a single version bump, controls are generated, and they are auto-mapped to the framework. Requirements that do not apply to your system can be excluded.
Integrations
Do I need an API key for MCP?
Yes. Create one on the Settings page under API Keys. The same key works for both the hosted MCP endpoint (https://api.mipiti.io/mcp) and direct API calls.
Why do some MCP tools return only summaries?
Large models can have hundreds of controls and over a thousand control objectives. Returning everything at once would exceed AI context limits. Retrieval tools return only counts and summaries by default — use offset/limit to page through individual items and status to filter. See Integrations for the full list of parameters.
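The paging loop on the client side is straightforward. In this sketch, `fetch(offset, limit)` stands in for whatever retrieval call your agent makes; only the offset/limit convention comes from the documentation:

```python
def page_all(fetch, limit=50):
    """Collect every item from an offset/limit-paged retrieval tool.

    `fetch(offset, limit)` is a placeholder for an MCP retrieval call
    that returns at most `limit` items per request."""
    items, offset = [], 0
    while True:
        batch = fetch(offset, limit)
        items.extend(batch)
        if len(batch) < limit:  # short page means we've reached the end
            return items
        offset += limit
```

Combining this with a status filter keeps each response small enough to fit in an AI agent's context window.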
Billing
How are credits consumed?
Credits are consumed per LLM token. Different operations have different costs:
- Generation (highest) — 5 LLM calls plus batch control generation
- Refinement — 1-2 LLM calls depending on scope
- Queries and general questions — 1 LLM call (cheapest)
- Control regeneration — 1+ batch LLM calls depending on model size
Cached responses (identical prompts within the cache TTL) consume zero credits.
Every tier includes a monthly credit allowance that resets on the 1st of each month: Free gets 200/month, Pro gets 1,500/month, Organization includes unlimited usage. Credits are consumed from the monthly allowance first, then from purchased credits.
What are credit packs?
One-time credit purchases available on the Pricing page. Multiple pack sizes are offered with lower per-credit pricing on larger packs. Purchased credits are permanent (they never reset or expire) and stack on top of your monthly allowance. After payment, credits are added to your balance automatically within a few seconds.
How does the monthly allowance work?
Every user gets a monthly credit allowance based on their plan tier. The allowance resets to its full value on the 1st of each calendar month — unused allowance does not roll over. When you use credits, the system draws from your monthly allowance first, then from any purchased credit packs. Your billing page shows both pools separately with a progress bar for the monthly allowance.
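The draw-down order can be sketched as a small function; the function name and error handling are assumptions for illustration:

```python
def spend_credits(amount: int, monthly: int, purchased: int) -> tuple[int, int]:
    """Deduct `amount` credits, drawing from the monthly allowance first
    and from purchased (never-expiring) packs second. Returns the two
    remaining balances."""
    from_monthly = min(amount, monthly)
    from_purchased = amount - from_monthly
    if from_purchased > purchased:
        raise ValueError("insufficient credits")
    return monthly - from_monthly, purchased - from_purchased
```

Spending 250 credits with a 200-credit monthly allowance and a 100-credit pack, for example, empties the allowance and leaves 50 purchased credits.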
Is there a daily usage limit?
Yes. All plans are subject to a 2,000 AI requests per day fair use limit. An AI request is any operation that triggers LLM processing (generation, refinement, queries, control generation, gap analysis, compliance mapping). Cached responses do not count toward the limit. If you exceed 2,000 requests in a day, you'll receive an HTTP 429 response and can retry the next day (UTC reset).
What are referral codes?
Referral codes let you invite colleagues. When someone signs up using your code, you earn 25 bonus credits. Codes are earned through platform engagement — each month, the most active users receive a referral code. You can hold up to 3 unused codes at a time. Codes expire after 90 days.
Administration
How do I see per-user usage?
Administrators can view per-user token consumption in Settings > Platform > Usage. The dashboard shows total requests, tokens, estimated LLM cost, and active users over a selectable period (7, 30, or 90 days). A top users table shows per-user breakdown with plan tier, token counts, and estimated cost. The same data is available via GET /api/admin/usage/summary?days=30.
How do I back up my data?
Administrators can download a complete database backup via GET /api/admin/backup. This returns a gzipped SQLite file that can be restored by replacing the database file on the data volume and restarting. There is no restore API — restoring requires direct access to the data volume. See Administration for the full procedure.