Systems

What is a system?

A system is a lightweight grouping of related threat models within a workspace. If you have 50 feature-level threat models for a single product, a system lets you organize them together.

A threat model can belong to at most one system. Models that aren't added to any system remain standalone.

Creating a system

  1. Navigate to Systems in the sidebar
  2. Click Create System
  3. Enter a name and optional description
  4. Optionally select models to add immediately

Adding models to a system

On the system detail page, click Add Model to see a list of unaffiliated models in the current workspace. Select a model to add it.

If a model already belongs to another system, you must remove it from that system first.
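The single-system invariant can be sketched as a small guard (illustrative names, not the platform's actual API):

```python
def add_model_to_system(model: dict, system: dict) -> None:
    """A model belongs to at most one system; moving it requires an
    explicit remove from the current system first."""
    if model.get("system_id") not in (None, system["id"]):
        raise ValueError("model already belongs to another system; remove it first")
    model["system_id"] = system["id"]
```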

Removing models

Click the remove button on any model card in the system detail view. The model becomes unaffiliated — it is not deleted, just removed from the system.

Deleting a system

Deleting a system removes the grouping only. All member models become unaffiliated and are not deleted.
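A minimal sketch of the deletion semantics, assuming a simple `system_id` field on each model (hypothetical, for illustration only):

```python
def delete_system(system: dict, models: list) -> None:
    """Deleting a system only removes the grouping; member models
    become unaffiliated but are never deleted."""
    for m in models:
        if m.get("system_id") == system["id"]:
            m["system_id"] = None  # unaffiliated, not deleted
```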

Cross-model dependencies

When models in a system have external assumptions about each other, those assumptions can be linked to the model that should satisfy them.

For example, if your API's threat model assumes "the database encrypts data at rest," and your database service has its own threat model with a control for encryption — you can link the assumption to the database model. The platform will:

  1. Create a compliance requirement on the target model
  2. Track whether mapped controls in the target model satisfy the requirement
  3. Auto-attest the assumption when satisfied
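The three steps above can be sketched roughly as follows. All function and field names here are illustrative assumptions, not the platform's real API:

```python
def link_assumption(assumption: str, source: dict, target: dict,
                    mapped_control_ids: list) -> dict:
    """Step 1: linking creates a compliance requirement on the target model."""
    req = {
        "assumption": assumption,
        "source": source["name"],
        "mapped_controls": mapped_control_ids,
    }
    target["requirements"].append(req)
    return req

def requirement_satisfied(target: dict, req: dict) -> bool:
    """Steps 2-3: the requirement is tracked against the target model's
    mapped controls; once all are implemented, the assumption auto-attests."""
    status = {c["id"]: c["implemented"] for c in target["controls"]}
    return all(status.get(cid, False) for cid in req["mapped_controls"])

# The encryption-at-rest example from above:
db = {"name": "database",
      "controls": [{"id": "enc-at-rest", "implemented": True}],
      "requirements": []}
api = {"name": "api", "controls": [], "requirements": []}
req = link_assumption("the database encrypts data at rest", api, db,
                      ["enc-at-rest"])
```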

View cross-model dependencies in the Dependencies tab of the system detail page.

Linking an assumption

In the Assumptions tab of a model, use the Link to model action on an external assumption to select the target model in the same system. The assumption becomes a cross-model dependency.

How satisfaction works

Cross-model assumptions have two independent satisfaction paths:

  1. Auto-attestation from controls — when assessing a model, the platform checks directly whether the target model's mapped controls are all implemented. When they are, the dependency is satisfied and the source model's control objectives are automatically mitigated. No manual attestation needed.
  2. Manual attestation — you can also manually attest the assumption (e.g., while controls in the target model are still being implemented).

Either path alone suffices. If the auto-attestation check fails but a valid manual attestation exists, the assumption is still satisfied. If the link is later removed, the manual attestation continues to apply.
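The either-path rule can be sketched as a single boolean check (a simplified illustration, not the platform's code):

```python
def assumption_satisfied(dependency: dict, target_controls: list,
                         manual_attestations: set) -> bool:
    """Either path alone suffices: the inline control check OR a
    valid manual attestation."""
    implemented = {c["id"]: c["implemented"] for c in target_controls}
    auto = all(implemented.get(cid, False)
               for cid in dependency["mapped_controls"])
    manual = dependency["id"] in manual_attestations
    return auto or manual
```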

Satisfaction via controls is always accurate: it is evaluated inline from the current state of the target model's controls, not from cached attestation records. This is consistent with how CI attestation staleness is checked inline from assertion results.

Components

A component is a deployable unit that bridges security architecture to code organization. It maps trust boundaries (where attackers can reach) to repos (where the code lives).

Why components?

A threat model may describe a feature that spans multiple codebases — a backend API, a frontend app, a worker service. Without components, all controls appear in one flat list. With components, each control is scoped to the codebase that implements it.

Adding components

On the model detail page, add components by giving each a name and the repo URL it maps to.
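A hypothetical component record might look like this. Only `repo_url` and `component_id` come from the scoping description; the other field names are illustrative assumptions:

```python
# Hypothetical component record (field names beyond repo_url are illustrative).
component = {
    "id": "comp-backend",                               # referenced by controls as component_id
    "name": "backend-api",                              # the deployable unit
    "repo_url": "git@github.com:acme/backend-api.git",  # matched against git remotes
}
```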

How scoping works

Controls with a component_id are scoped to that component. When a coding agent works in a repo, it matches the git remote URL to a component's repo_url and sees only the controls relevant to its codebase. Unscoped controls (no component) are visible to all.

Sharing threat models

To share a threat model publicly (e.g., for an open-source component), export it from the model detail page. The export includes model structure, controls with implementation status, and assumptions as conditions. Cross-model attestation data (assumptions linked to other models) is stripped since it's deployment-specific. Manual attestations are preserved.

For auditors who need the full picture including cross-model dependency satisfaction, use the system export from the system detail page. This produces a comprehensive report showing all models, the cross-model dependency graph, and attestation status.

MCP tools

Systems are available via MCP: