The Security Properties Methodology

Formal Foundations

Mipiti's methodology is grounded in established security science, not a proprietary invention:

| Concept | Formal Origin | How Mipiti applies it |
| --- | --- | --- |
| Capability-defined attackers | Common Criteria Security Problem Definition (ISO 15408) | Attackers defined by expertise, resources, and access level, reusable via Protection Profiles |
| Asset × Attacker → Control Objective | Common Criteria Security Problem Definition; NIST SP 800-30 | Deterministic cross-product — every combination covered, no gaps |
| Traceable control derivation | Common Criteria assurance (EAL); NIST Risk Management Framework | Every control traces back through a CO to a specific asset-attacker pairing |
| Systematic hazard analysis | HAZOP (ICI, 1960s) | Crossing parameters (assets × attackers) to identify risks — the same structural approach applied to security |

These methods have been used for decades in high-assurance environments (defense, critical infrastructure, hardware security) but were impractical for product teams due to the manual effort required. Mipiti automates them with AI while preserving their formal guarantees.

What Mipiti adds

The Security Property Set

Each asset is tagged with the security properties that matter for it. The default set ships with four properties:

| Property | Key Question | Examples |
| --- | --- | --- |
| C — Confidentiality | Who can read this? | Data leakage, unauthorized access, eavesdropping |
| I — Integrity | Who can modify this? | Tampering, injection, forgery |
| A — Availability | What keeps this running? | Denial of service, resource exhaustion |
| U — Usage | Can this be used without being compromised? | HSM key misuse, biometric oracle abuse, DRM bypass, sealed model extraction |

The first three (C, I, A) come from the well-known CIA triad. Usage extends the set to capture a threat that CIA cannot express: when an asset's confidentiality is architecturally assured (non-extractable), an attacker can still use the asset — invoking operations through its interface — without compromising the asset itself. The asset remains perfectly confidential, yet the attacker leverages its functionality for an unauthorized purpose.

This is formally independent of C, I, and A. Confidentiality is intact (the asset was never disclosed). Integrity is intact (the asset was never modified). Availability is intact (the service is running). Yet the security posture is degraded because the asset's functionality was exercised for the wrong purpose.

The property set is not closed — the methodology works with any set of security properties. The completeness guarantee comes from the cross-product math (Asset × Attacker), not from any specific property vocabulary.

When does Usage apply?

Usage applies when two conditions are met simultaneously:

  1. The asset's confidentiality is architecturally assured — the asset is non-extractable by design (hardware isolation, secure enclaves, etc.)
  2. The asset exposes an operational interface — operations can be invoked on or with the asset without extracting it

If the asset is extractable (e.g., a key stored in a config file, model weights on a regular GPU), then misuse reduces to a Confidentiality failure — someone obtained access to the asset or the credentials that protect it. Usage does not apply.

If the asset is non-extractable but has no operational interface (e.g., encrypted data at rest with no query API), then there is nothing to use. Usage does not apply.
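The two-condition test above can be sketched as a simple predicate. This is an illustrative sketch only — the class and field names (`non_extractable`, `has_operational_interface`) are assumptions, not Mipiti's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    non_extractable: bool             # confidentiality architecturally assured?
    has_operational_interface: bool   # operations invocable without extraction?

def usage_applies(asset: Asset) -> bool:
    # Usage is meaningful only when both conditions hold simultaneously.
    # Otherwise misuse reduces to a Confidentiality failure (extractable
    # asset) or there is simply nothing to use (no interface).
    return asset.non_extractable and asset.has_operational_interface

hsm_key = Asset("Signing key in an HSM", True, True)
config_key = Asset("Signing key in a config file", False, True)
```

Here `usage_applies(hsm_key)` holds, while `usage_applies(config_key)` does not — the config-file key's misuse is already a Confidentiality failure.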

Examples

| Asset | Non-extractable? | Operational interface? | Usage applies? |
| --- | --- | --- | --- |
| Signing key in an HSM | Yes — key never leaves hardware | Yes — signing operations via PKCS#11 | Yes |
| Signing key in a config file | No — readable by anyone with file access | Yes — but misuse = C breach of the key | No |
| Biometric template in a phone's secure enclave | Yes — template never leaves enclave | Yes — match/no-match decisions via API | Yes |
| Biometric template in a database | No — queryable/extractable | Yes — but misuse = C breach of the database | No |
| DRM content in a hardware decryption module (Widevine L1) | Yes — decrypted stream never leaves secure pipeline | Yes — playback can be invoked | Yes |
| ML model weights in a confidential AI environment (NVIDIA H100 CC, Azure Confidential AI) | Yes — weights sealed in TEE, even cloud operator cannot extract | Yes — inference can be invoked via API | Yes |
| ML model weights on a regular GPU | No — cloud operator or sysadmin can copy the weight files | Yes — but misuse = C breach of API credentials or server access | No |
| Data in a confidential computing enclave (AWS Nitro, Azure CC) | Yes — sealed from infrastructure operator | Yes — computations can be invoked over the data | Yes |

The pattern: hardware or architectural isolation creates the non-extractability that makes Usage independent from Confidentiality. Without that isolation, every "misuse" scenario traces back to a Confidentiality failure of some credential or access path.

Capability-Defined Attackers

Attackers are defined by position and capability — not by persona labels like "script kiddie" or attack scenarios like "SQL injection."

Each attacker is a concrete, verifiable capability statement:

"Attacker with ability to intercept network traffic between client and server"

This framing is reusable across models and directly maps to control objectives.

The Control Objective Matrix

Control Objectives (COs) are the cross-product of:

Asset × Attacker = Control Objective

Each CO bundles all of an asset's security properties into a single testable "SHALL" statement:

"Confidentiality, Integrity, and Availability of [User Session Tokens] shall be protected from an attacker with [ability to intercept network traffic]."

This condensed format produces one CO per asset-attacker pair. The asset's full set of security properties is included in the statement, ensuring every property is covered without generating separate COs per property.

Because COs are computed deterministically (not generated by the LLM), coverage is mathematically guaranteed — no gaps, no hallucinations.
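The cross-product can be sketched in a few lines. The classes and field names below are illustrative assumptions, not Mipiti's schema; the statement wording follows the condensed "SHALL" format shown above:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Asset:
    name: str
    properties: tuple  # e.g. ("Confidentiality", "Integrity", "Availability")

@dataclass(frozen=True)
class Attacker:
    capability: str

def _join(props: tuple) -> str:
    # ("Confidentiality", "Integrity", "Availability")
    #   -> "Confidentiality, Integrity, and Availability"
    return props[0] if len(props) == 1 else ", ".join(props[:-1]) + ", and " + props[-1]

def control_objectives(assets, attackers):
    # One CO per asset-attacker pair, computed without any LLM call,
    # so coverage is guaranteed by construction.
    return [
        f"{_join(a.properties)} of [{a.name}] shall be protected "
        f"from an attacker with [{atk.capability}]."
        for a, atk in product(assets, attackers)
    ]
```

With N assets and M attackers this always yields exactly N × M control objectives — the deterministic coverage guarantee is just the size of the cross-product.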

Implementation Controls

Control objectives define what must be protected. Implementation controls define how. After the CO matrix is computed, Mipiti uses LLM reasoning to generate concrete controls that satisfy each CO:

CO: "Integrity of [User Session Tokens] shall be protected from an attacker with [ability to intercept network traffic]."

Controls: HMAC-sign session tokens (CTRL-01), enforce TLS 1.3 on all endpoints (CTRL-02)

A single control can cover multiple COs, and a single CO can be covered by multiple controls organized into mitigation groups — alternative paths to satisfy the same objective.

Controls are the actionable output of the threat model: they are what your team implements, tracks in Jira, and marks as implemented on the Assurance page. Every control traces back through a CO to a specific asset-attacker pairing, maintaining full traceability from risk to mitigation.
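The many-to-many relationship between COs and controls, with mitigation groups as alternative paths, can be sketched as follows. The identifiers and the dictionary structure are hypothetical, chosen only to illustrate the semantics described above:

```python
# Each CO maps to a list of mitigation groups; each group is a list of
# controls that together satisfy the CO. Groups are alternatives: any
# one fully implemented group satisfies the objective. Note CTRL-02
# appears under both COs - a single control can cover multiple COs.
co_controls = {
    "CO-01": [["CTRL-01"], ["CTRL-02"]],  # two alternative mitigation groups
    "CO-02": [["CTRL-02"]],               # single mitigation group
}

def co_satisfied(co_id: str, implemented: set) -> bool:
    # A CO is satisfied when every control in at least one of its
    # mitigation groups is marked as implemented.
    return any(all(c in implemented for c in group)
               for group in co_controls[co_id])
```

Under this sketch, implementing only CTRL-02 would satisfy both CO-01 and CO-02, while implementing only CTRL-01 satisfies CO-01 alone.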

Risk Ratings: Impact, Likelihood, and Risk Tiers

Every control objective carries a risk tier (Critical / High / Medium / Low) that helps prioritize which COs to address first. Risk tiers are derived deterministically — no LLM call — from two ratings assigned during Discovery:

Asset Impact (H / M / L)

Impact is an intrinsic property of the asset — what happens if this asset is compromised. It reflects the asset's sensitivity, criticality, and the severity of consequences if the asset's security is breached. The LLM assigns an impact rating (High, Medium, Low) to each asset during generation based on the asset's role and the data it holds.

Example: A credential database is typically rated H (High impact — breach leads to regulatory penalties, mass credential exposure). A public landing page cache is typically rated L (Low — minimal consequence if compromised).

Attacker Likelihood (H / M / L)

Likelihood is an intrinsic property of the attacker — how probable it is that this attacker capability will be exercised against the system. The LLM assigns a likelihood rating (High, Medium, Low) to each attacker during generation based on the attacker's position and capability.

Example: An external attacker with credential stuffing capability is typically rated H (High likelihood). An attacker with physical access to a data center is typically rated L (Low).

Risk Tier Matrix

Each CO's risk tier is computed from Asset Impact × Attacker Likelihood using a 3×3 matrix:

| | Likelihood: H | Likelihood: M | Likelihood: L |
| --- | --- | --- | --- |
| Impact: H | Critical | High | Medium |
| Impact: M | High | Medium | Low |
| Impact: L | Medium | Low | Low |
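Because the tier derivation is deterministic, it amounts to a fixed lookup table. A minimal sketch, transcribing the 3×3 matrix above (the function name and dictionary are illustrative, not Mipiti's implementation):

```python
# Risk tier = f(Asset Impact, Attacker Likelihood), no LLM call involved.
RISK_TIER = {
    ("H", "H"): "Critical", ("H", "M"): "High",   ("H", "L"): "Medium",
    ("M", "H"): "High",     ("M", "M"): "Medium", ("M", "L"): "Low",
    ("L", "H"): "Medium",   ("L", "M"): "Low",    ("L", "L"): "Low",
}

def risk_tier(asset_impact: str, attacker_likelihood: str) -> str:
    return RISK_TIER[(asset_impact, attacker_likelihood)]
```

For example, a High-impact credential database paired with a High-likelihood credential-stuffing attacker yields a Critical tier, while the same database paired with a Low-likelihood physical-access attacker yields Medium.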

Risk tiers appear as color-coded badges on each CO throughout the platform — in the model view, CO lists, and assurance reports. Hovering over a CO's risk badge shows the underlying Impact and Likelihood values from the parent asset and attacker.

Editing Risk Ratings

Both impact and likelihood can be edited after generation using inline dropdowns in the model view (accessible from the Assurance tab or the Models tab). Changes cascade automatically — all related COs recompute their risk tier.

Trust Boundaries

Trust boundaries define where security domains meet. Data crossing a trust boundary requires validation and protection. Mipiti identifies boundaries automatically and maps which assets and data flows cross them.

Assumptions

Every threat model rests on assumptions. Mipiti makes these explicit and trackable. Assumptions can be: