Systems
What is a system?
A system is a lightweight grouping of related threat models within a workspace. If you have 50 feature-level threat models for a single product, a system lets you organize them together.
A threat model can belong to at most one system. Models that aren't added to any system remain standalone.
Creating a system
- Navigate to Systems in the sidebar
- Click Create System
- Enter a name and optional description
- Optionally select models to add immediately
Adding models to a system
On the system detail page, click Add Model to see a list of unaffiliated models in the current workspace. Select a model to add it.
If a model already belongs to another system, you must remove it from that system first.
Removing models
Click the remove button on any model card in the system detail view. The model becomes unaffiliated — it is not deleted, just removed from the system.
Deleting a system
Deleting a system removes the grouping only. All member models become unaffiliated and are not deleted.
Cross-model dependencies
When models in a system have external assumptions about each other, those assumptions can be linked to the model that should satisfy them.
For example, if your API's threat model assumes "the database encrypts data at rest," and your database service has its own threat model with a control for encryption — you can link the assumption to the database model. The platform will:
- Create a compliance requirement on the target model
- Track whether mapped controls in the target model satisfy the requirement
- Auto-attest the assumption when satisfied
View cross-model dependencies in the Dependencies tab of the system detail page.
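Under the hood, linking can be pictured as creating a requirement record on the target model. Here is a minimal sketch in Python; the field and function names are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str           # the assumption text, e.g. "encrypts data at rest"
    source_model: str   # model whose external assumption created this
    target_model: str   # model expected to satisfy it via mapped controls

def link_assumption(text: str, source_model: str, target_model: str) -> Requirement:
    # Linking creates a compliance requirement on the target model; the
    # platform then tracks whether the target's mapped controls satisfy it
    # and auto-attests the source assumption when they do.
    return Requirement(text, source_model, target_model)

req = link_assumption("the database encrypts data at rest", "api", "database")
```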
Linking an assumption
In the Assumptions tab of a model, use the Link to model action on an external assumption to select the target model in the same system. The assumption becomes a cross-model dependency.
How satisfaction works
Cross-model assumptions have two independent satisfaction paths:
- Auto-attestation from controls — when assessing a model, the platform checks directly whether the target model's mapped controls are all implemented. When they are, the dependency is satisfied and the source model's control objectives are automatically mitigated. No manual attestation needed.
- Manual attestation — you can also manually attest the assumption (e.g., while controls in the target model are still being implemented).
Either path alone suffices. If the auto-attestation check fails but a valid manual attestation exists, the assumption is still satisfied. If the link is later removed, the manual attestation continues to apply.
Satisfaction via controls is always accurate: it is evaluated inline from the current state of the target model's controls, not from cached attestation records. This is consistent with how CI attestation staleness is checked inline from assertion results.
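The two-path rule above can be sketched as a single boolean check (a simplification under assumed data shapes, not the platform's actual code):

```python
def assumption_satisfied(mapped_controls_implemented: list[bool],
                         has_manual_attestation: bool) -> bool:
    # Path 1: auto-attestation -- every mapped control in the target
    # model is implemented (evaluated inline, not from cached records).
    auto = bool(mapped_controls_implemented) and all(mapped_controls_implemented)
    # Path 2: a valid manual attestation. Either path alone suffices.
    return auto or has_manual_attestation
```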
Components
A component is a deployable unit that bridges security architecture to code organization. It maps trust boundaries (where attackers can reach) to repos (where the code lives).
Why components?
A threat model may describe a feature that spans multiple codebases — a backend API, a frontend app, a worker service. Without components, all controls appear in one flat list. With components, each control is scoped to the codebase that implements it.
Adding components
In the model detail page, add components with:
- Name — e.g., "Backend API", "Auth Worker"
- Repo URL — e.g., github.com/org/backend
- Path (optional) — for monorepos, e.g., services/auth
- Trust boundaries — which trust boundaries this component operates within
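Putting the fields together, a component definition might look like the following (shown as a Python dict; the key names and the trust-boundary label are illustrative assumptions):

```python
component = {
    "name": "Auth Worker",
    "repo_url": "github.com/org/backend",      # matched against git remotes
    "path": "services/auth",                   # optional; for monorepos
    "trust_boundaries": ["internal-network"],  # illustrative boundary name
}
```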
How scoping works
Controls with a component_id are scoped to that component. When a coding agent works in a repo, it matches the git remote URL to a component's repo_url and sees only the controls relevant to its codebase. Unscoped controls (no component) are visible to all.
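A rough sketch of that matching in Python follows; the normalization rules here are assumptions (the platform's exact matching behavior isn't specified), but they illustrate reducing SSH and HTTPS remotes to a comparable form:

```python
def normalize_remote(url: str) -> str:
    # Reduce a git remote URL to host/org/repo so that SSH and HTTPS
    # forms of the same repository compare equal.
    url = url.removesuffix(".git")
    if url.startswith("git@"):  # e.g. git@github.com:org/backend
        url = url[len("git@"):].replace(":", "/", 1)
    for prefix in ("https://", "http://"):
        url = url.removeprefix(prefix)
    return url.lower()

def controls_for_repo(remote: str, controls: list[dict],
                      components: list[dict]) -> list[dict]:
    matched = {c["id"] for c in components
               if normalize_remote(c["repo_url"]) == normalize_remote(remote)}
    # Unscoped controls (no component_id) are visible to all repos.
    return [c for c in controls
            if c.get("component_id") is None or c["component_id"] in matched]
```

For example, a remote of `git@github.com:org/backend.git` matches a component whose repo URL is `github.com/org/backend`.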
MCP tools
- add_component / edit_component / remove_component — manage components
- get_controls with component_id — filter controls by component
Sharing threat models
To share a threat model publicly (e.g., for an open-source component), export it from the model detail page. The export includes model structure, controls with implementation status, and assumptions as conditions. Cross-model attestation data (assumptions linked to other models) is stripped since it's deployment-specific. Manual attestations are preserved.
For auditors who need the full picture including cross-model dependency satisfaction, use the system export from the system detail page. This produces a comprehensive report showing all models, the cross-model dependency graph, and attestation status.
MCP tools
Systems are available via MCP:
- list_systems — list all systems in the workspace
- get_system — get a system with model summaries
- create_system — create a new system
- add_model_to_system — add a model to a system
- get_system_dependencies — view cross-model dependency graph
- link_dependency — link an assumption to a target model
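As an example, a client might invoke these tools with payloads like the following (the tool names come from the list above; the argument names and ID formats are assumptions, not a documented schema):

```python
# Hypothetical MCP tool-call payloads.
create_call = {
    "name": "create_system",
    "arguments": {"name": "Payments Platform"},  # illustrative system name
}
link_call = {
    "name": "link_dependency",
    "arguments": {
        "assumption_id": "a-123",    # illustrative IDs; the target model
        "target_model_id": "m-456",  # must be in the same system
    },
}
```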