Assertion Verification

Mipiti's assertion verification system lets you prove that security controls are actually implemented in your codebase — without the platform ever accessing your source code.

The Problem

Marking a control as "implemented" with a note like "see auth.py line 45" is self-attestation. Auditors and compliance frameworks (SOC 2, ISO 27001) need independent, machine-verified proof. But sending your source code to a third-party platform creates custody risk.

Assertion verification solves both problems: typed, machine-verifiable claims about your code that can be independently checked by your own CI pipeline.

How It Works

The system has three parties:

  1. Code analysis (your AI agent — Claude Code, Cursor, etc.) — has full code access, produces typed assertions
  2. Mipiti (the platform) — stores assertions, never sees your code
  3. Your CI pipeline (GitHub Actions, GitLab CI, etc.) — independently verifies assertions against the actual codebase

No single party needs both code access and assertion storage. Mipiti coordinates verification without touching your source code.

Assertions

An assertion is a typed, machine-verifiable claim about a property of your source code. Each assertion is linked to a specific implementation control.

Assertion Types

| Type | What it claims | Example |
|---|---|---|
| function_exists | A named function exists at a location | require_auth in auth/middleware.py |
| class_exists | A named class exists | RateLimiter in security/rate_limit.py |
| decorator_present | A function has a specific decorator | @require_auth on get_user_profile |
| function_calls | One function calls another | logout() calls invalidate_session() |
| parameter_validated | A function parameter has validation | user_id validated in update_profile() |
| test_passes | Matching tests exist and pass | test_require_auth_* in tests/test_auth.py |
| test_exists | Matching tests exist | test_logout_* in tests/test_auth.py |
| config_key_exists | A config key is present | SESSION_TIMEOUT in config.yaml |
| config_value_matches | A config value matches expected | HSTS_MAX_AGE >= 31536000 |
| dependency_exists | A package is in the dependency manifest | bcrypt in requirements.txt |
| dependency_version | A package version satisfies a constraint | cryptography >= 41.0 |
| file_exists | A file exists at a path | security/cors_config.py |
| file_hash | A file matches a cryptographic hash, with reference to the code that pins it | SHA-256 of nginx.conf verified in deploy/verify.py |
| pattern_matches | A regex pattern matches in a file | Strict-Transport-Security in responses |
| pattern_absent | A regex pattern does NOT match | No eval() in user input handlers |
| import_present | A module import exists | from cryptography.fernet import Fernet |
| env_var_referenced | An environment variable is used | DATABASE_ENCRYPTION_KEY referenced |
| error_handled | Error handling exists for specific cases | AuthenticationError caught in login() |
| no_plaintext_secret | No hardcoded secrets matching a pattern | No password = "..." in source files |
| middleware_registered | Named middleware is registered | cors_middleware in app setup |
| http_header_set | An HTTP header is set in responses | X-Content-Type-Options: nosniff |

Creating Assertions

Assertions can be created in two ways:

Via MCP (recommended) — your AI coding agent analyzes the codebase and submits assertions programmatically:

submit_assertions(
  model_id: "...",
  control_id: "CTRL-07",
  assertions_json: [
    {
      "type": "function_exists",
      "params": { "file": "auth/middleware.py", "name": "require_auth" },
      "description": "Authentication middleware function exists"
    },
    {
      "type": "test_passes",
      "params": { "file": "tests/test_auth.py", "test_name": "test_require_auth_rejects_expired" },
      "description": "Auth middleware correctly rejects expired tokens"
    }
  ]
)

Via the UI — on the Assurance page, expand any control and use the Assertions panel to add assertions manually. Select a type, fill in the parameters, and provide a description.

Assertion Lifecycle

Workflow: Submit, Verify, Then Mark Implemented

The recommended workflow for each control:

  1. Submit assertions — create assertions proving the control is implemented. You can submit multiple assertions incrementally.
  2. CI verifies immediately — assertions are verified on the next CI run regardless of the control's implementation status. This gives you early feedback: you can see which assertions pass or fail before committing to the "implemented" status.
  3. Fix failures — if any assertions fail, update your code or assertions and resubmit. CI re-verifies on the next run.
  4. Mark as implemented — once you're confident your assertions pass, mark the control as implemented. At least one assertion is required.
  5. Platform evaluates — the platform checks that every assertion passes both Tier 1 and Tier 2 and that the collective sufficiency evaluation returns "sufficient". If everything passes, the control is promoted to verified.

This order is important: CI verification runs continuously on all assertions, not just those on implemented controls. You always get feedback before making a status commitment.

Three-Gate Verification

Verification operates in three gates. A control is only verified when all three pass:

Tier 1 — Mechanical verification (deterministic, no AI required):

Tier 2 — Semantic verification (AI-assisted, uses your existing CI AI tools):

Collective Sufficiency — Per-control evaluation (AI-assisted):

Tier 1 catches structural falsehoods ("this function doesn't exist"). Tier 2 catches semantic falsehoods ("this function exists but doesn't prove the stated aspect of the control"). Sufficiency catches evidence gaps ("these assertions prove authentication exists but don't address the session timeout requirement"). Together, they provide defense against both accidental errors and gaming.
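As an illustration of what a Tier 1 check involves, here is a minimal function_exists verifier for Python sources. This is a sketch only, not mipiti-verify's actual implementation (the real verifiers are regex-based and multi-language):

```python
import re
from pathlib import Path

def verify_function_exists(project_root: str, file: str, name: str) -> bool:
    """Sketch of a Tier 1 mechanical check: deterministic, no AI involved.

    Passes only if the named function is defined in the named file.
    """
    path = Path(project_root) / file
    if not path.is_file():
        return False  # structural falsehood: the file itself is missing
    pattern = re.compile(rf"^\s*(?:async\s+)?def\s+{re.escape(name)}\s*\(", re.MULTILINE)
    return bool(pattern.search(path.read_text()))
```

Note that a check like this would happily accept a stub such as `def require_auth(request): pass` — that is exactly the gap Tier 2's semantic check closes.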

How Tier 2 Prompts Work

The platform generates targeted verification prompts automatically — one per assertion, tailored to the assertion type and linked control. These prompts are template-based: generation is deterministic and consumes no usage credits.

Each prompt presents the control as the primary evaluation target, with the assertion description scoping which aspect of the control this evidence covers. For example, a function_exists assertion for a control about authentication enforcement produces:

Read the function require_auth in auth/middleware.py.

Security Control

Enforce authentication on all API endpoints; reject unauthenticated requests with 401.

This Assertion's Scope

This assertion covers one specific aspect of the control above: Authentication middleware validates JWT tokens and rejects expired/invalid tokens with 401. Other assertions may cover other aspects — evaluate only whether this evidence proves the stated aspect.

Answer YES if the function contains meaningful implementation logic that proves the stated aspect of the control. Answer NO if it is a stub, no-op, pass-through, or does not prove it.
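Since the prompts are template-based, generating one is plain string assembly. A sketch for the function_exists case, mirroring the example above (illustrative, not the platform's actual template code):

```python
def tier2_prompt(assertion: dict, control_text: str) -> str:
    """Sketch: assemble the Tier 2 prompt shown above from its parts.

    Deterministic string templating: no LLM call, no usage credits.
    """
    p = assertion["params"]
    return (
        f"Read the function {p['name']} in {p['file']}.\n\n"
        "Security Control\n\n"
        f"{control_text}\n\n"
        "This Assertion's Scope\n\n"
        "This assertion covers one specific aspect of the control above: "
        f"{assertion['description']} "
        "Other assertions may cover other aspects -- evaluate only whether "
        "this evidence proves the stated aspect.\n\n"
        "Answer YES if the function contains meaningful implementation logic "
        "that proves the stated aspect of the control. Answer NO if it is a "
        "stub, no-op, pass-through, or does not prove it."
    )
```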

All assertion types undergo both Tier 1 and Tier 2 verification. For types like file_hash and no_plaintext_secret, Tier 2 evaluates whether the Tier 1 method is meaningful evidence for the control — for example, whether the regex patterns genuinely catch real secrets, or whether the code referencing a file hash actually verifies integrity as part of the control.

Assertion Coherence Check

When assertions are submitted, the platform runs a coherence check — a lightweight LLM call that validates whether the assertion type and parameters are a reasonable way to prove the linked control.

This prevents type-shopping — for example, a developer submitting file_hash for a control about input validation. The hash would match but prove nothing about the implementation.

How it works:

Hard gate enforcement:

Incoherent assertions are flagged in the verification report (coherence_warnings count) and returned as warnings in the submission response, giving immediate feedback to the submitter.
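The check itself is a small prompt-and-classify step. A hypothetical sketch of such a prompt (the platform's actual wording is not published, so treat this as an assumption about its shape):

```python
def coherence_prompt(control_text: str, assertion_type: str, params: dict) -> str:
    """Hypothetical coherence-check prompt: would this assertion type,
    with these params, plausibly prove this control?"""
    return (
        "You are screening evidence for a security control.\n"
        f"Control: {control_text}\n"
        f"Proposed evidence: a '{assertion_type}' assertion with params {params}.\n"
        "Could an assertion of this type reasonably prove this control? "
        "Answer COHERENT or INCOHERENT, with a one-line reason."
    )
```

For the type-shopping example above (a file_hash assertion submitted against an input-validation control), a screen like this is what lets the platform flag the mismatch before CI ever runs.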

Automatic Evaluation

Tier 2 runs automatically in CI for all controls with assertions. Collective sufficiency is evaluated server-side when assertions are submitted — the platform uses its own LLM to check whether the assertion set covers all aspects of the control, giving immediate feedback without waiting for a CI run.

The mipiti-verify CLI

mipiti-verify is an open-source CLI tool that handles the entire verification pipeline in a single command:

pip install "mipiti-verify[all]"

# Verify a single model
mipiti-verify run <model_id> \
  --api-key $MIPITI_API_KEY \
  --tier2-provider openai \
  --project-root .

# Verify all models in the workspace
mipiti-verify run --all \
  --api-key $MIPITI_API_KEY \
  --tier2-provider openai \
  --project-root .

The CLI:

  1. Pulls pending Tier 1 assertions from the Mipiti API
  2. Runs 21 built-in verifiers against your codebase (regex-based, multi-language)
  3. Submits Tier 1 results
  4. Pulls pending Tier 2 assertions for controls with evidence marked complete
  5. Feeds each prompt + relevant source code to your chosen AI provider
  6. Submits Tier 2 results
  7. Evaluates collective sufficiency for controls pending sufficiency review
  8. Submits sufficiency results

Tier 2 AI providers — pluggable, choose one:

| Provider | Flag | Notes |
|---|---|---|
| OpenAI | --tier2-provider openai | Uses OPENAI_API_KEY env var |
| Anthropic | --tier2-provider anthropic | Uses ANTHROPIC_API_KEY env var |
| Ollama | --tier2-provider ollama | Local, no API key needed |

API key scopes:

| Prefix | Scope | Use |
|---|---|---|
| mk_ | Developer | Local development. Runs assertions but does not submit results. |
| mv_ | Verifier | CI pipelines. Runs assertions and submits results to update verification status. |

Developer keys (mk_) skip result submission automatically — no --dry-run needed. Use a developer key for local testing and a verifier key (mv_) in CI.

Additional commands and flags:

When using --all, the CLI fetches every model in the workspace associated with the API key and verifies each one sequentially. The exit code is non-zero if any model has failures.

Trust model: The CLI is open-source so that auditors can inspect the verification logic, run it independently against the codebase, and trust the results without relying on the company's CI pipeline.

Tier 2 CI Workflow

The recommended approach is to use mipiti-verify, which handles both tiers automatically. If you prefer to integrate manually:

  1. Pull pending Tier 2 assertions: GET /api/models/{id}/verification/pending?tier=2
    • Each assertion includes a tier2_prompt field with the generated prompt
  2. Run the prompt locally: Feed the prompt + relevant source code to your CI's AI tool
  3. Submit results: POST /api/models/{id}/verification/results with tier: 2 per result
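The manual loop can be sketched with the standard library alone. The endpoints and Bearer-token auth come from the steps above; the exact response schema, and any payload fields beyond a results list with a tier per result, are assumptions:

```python
import json
import urllib.request

API = "https://api.mipiti.io"  # adjust for a self-hosted instance

def pending_tier2(model_id: str, api_key: str) -> list[dict]:
    """Step 1: pull pending Tier 2 assertions (each carries a tier2_prompt)."""
    req = urllib.request.Request(
        f"{API}/api/models/{model_id}/verification/pending?tier=2",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def submit_tier2_results(model_id: str, api_key: str, results: list[dict]) -> None:
    """Step 3: submit results, with tier: 2 set on each result."""
    req = urllib.request.Request(
        f"{API}/api/models/{model_id}/verification/results",
        data=json.dumps({"results": results}).encode(),
        method="POST",
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()
```

Step 2 sits between the two calls: feed each assertion's tier2_prompt plus the relevant source files to your CI's AI tool and record a pass or fail per assertion.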

Computed Verification Status

A control's verification status is computed, not settable. It reflects the strength of its evidence chain:

| Status | Meaning | Action needed |
|---|---|---|
| Verified | All assertions pass both tiers, coherence checks pass, and sufficiency is "sufficient" | None — control is proven |
| Partially verified | Verification attempted but incomplete — assertion failures, sufficiency gaps, or both | Investigate failing assertions or submit additional assertions for coverage gaps |
| Pending | Assertions exist but some still await CI evaluation | Wait for or trigger a CI run |
| Unverified | No assertions submitted | Submit assertions |

"Verified" is earned through independent CI verification — not self-attestation.
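The computation in the table can be sketched as a small function (simplified: the platform's real evaluation also factors in coherence warnings and per-tier detail):

```python
def verification_status(assertions: list[dict], sufficiency: str) -> str:
    """Compute a control's status from its assertion results.

    Each assertion dict carries a 'result': 'pass', 'fail', or 'pending'.
    Simplified sketch of the table above, not the platform's exact logic.
    """
    if not assertions:
        return "unverified"
    results = [a["result"] for a in assertions]
    if "pending" in results:
        return "pending"
    if all(r == "pass" for r in results) and sufficiency == "sufficient":
        return "verified"
    return "partially_verified"
```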

CI Attestation

When your CI pipeline submits verification results, it can include attestation — cryptographic proof that the results came from a real CI environment, not a manual submission.

Two attestation methods

OIDC (zero setup for GitHub Actions / GitLab CI):

ECDSA (universal — works with any CI system):

Enforcement

In Workspace Settings, the "Require CI attestation" toggle controls enforcement:

Audit trail

Attested verification runs include a permanent, independently verifiable audit bundle:

Key history is preserved — even after key rotation, audit bundles reference the specific key that signed them.

Platform notarization

When verification results are submitted with OIDC attestation (no CI-side ECDSA signature), the platform automatically signs the results with the instance's ECDSA P-256 key — the same key used for backup signing. This provides content integrity: the auditor can verify that the results have not been modified since the platform received them.

The verification report shows the key fingerprint and results hash. The instance's public key is available from the admin panel (Settings > Admin > Signing Key) for independent verification.
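Independent verification needs only standard tooling. A self-contained sketch of the mechanics, using a throwaway P-256 key pair to stand in for the instance key (in practice you download the instance's public key from the admin panel instead of generating one):

```shell
# Throwaway key pair standing in for the instance's ECDSA P-256 key.
openssl ecparam -genkey -name prime256v1 -noout -out instance.pem
openssl ec -in instance.pem -pubout -out instance.pub

# The platform signs the results hash; simulate that signing step here.
printf 'sha256:abc123...' > results.hash   # the hash shown in the report
openssl dgst -sha256 -sign instance.pem -out results.sig results.hash

# Auditor side: check the signature against the published public key.
openssl dgst -sha256 -verify instance.pub -signature results.sig results.hash
```

The final command prints "Verified OK" when the results hash has not been altered since signing.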

Control Drift Detection

When CI runs verification periodically (on every push or on a schedule), previously-passing assertions that now fail indicate control drift — a code change invalidated a previously-implemented security control.

The verification report flags drift items and correlates them to specific controls and control objectives, enabling targeted remediation rather than full re-assessment.

Multi-Repository Support

Features often span multiple repositories — frontend, backend, infrastructure. Each assertion can declare which repository it targets via the repo field (e.g., "org/backend"). When omitted, the assertion applies to the default (single-repo) codebase.

How it works

Setup

No setup needed for single-repo projects — the repo field defaults to empty and everything works as before. For multi-repo:

  1. When submitting assertions, include repo: "org/repo-name" to tag which repo the assertion targets
  2. Add mipiti-verify to each repo's CI pipeline with the same model ID
  3. Each repo verifies its own assertions; the platform shows the combined posture
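For example, a pair of assertions for one control split across two repos might look like this. The shapes follow the submit_assertions example earlier in this document; the http_header_set params here are illustrative assumptions:

```python
# Assertions for one control spanning two repositories.
# Only the "repo" field differs from the single-repo case.
assertions = [
    {
        "type": "function_exists",
        "repo": "org/backend",   # verified by the backend repo's CI
        "params": {"file": "auth/middleware.py", "name": "require_auth"},
        "description": "Backend enforces authentication on API endpoints",
    },
    {
        "type": "http_header_set",
        "repo": "org/frontend",  # verified by the frontend repo's CI
        "params": {"header": "X-Content-Type-Options", "value": "nosniff"},
        "description": "Frontend responses set the nosniff header",
    },
]
```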

Consistency

Each repo's CI independently pushes results on every commit to its protected branch. The platform stores the latest verified commit per repo. The audit package shows all repo commit SHAs — the auditor verifies each at the pinned commit.

Negative Findings (Gap Discovery)

Evidence verification proves controls are implemented. Negative findings discover controls that are not implemented — the inverse. An AI coding agent proactively scans your codebase against security controls and records structured evidence of what was checked and not found.

Example: "CTRL-07 requires input length validation on the API endpoint. Checked main.py:1680-1720 — no length check found on the messages field."

How Findings Work

Findings can be created three ways: by your AI coding agent (via MCP tools), by external tools calling the REST API directly with an API key, or manually in the UI. Each finding records:

Scan Prompts

The get_scan_prompt MCP tool generates a structured scanning guide for your AI agent — consuming no usage credits. The prompt is a deterministic template that provides:

The agent uses this prompt to guide its own codebase analysis, then submits any discovered gaps as findings.
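A submitted finding might then look like the payload below, based on the CTRL-07 example in this section. The field names beyond the control ID, checked locations, and expected evidence are hypothetical:

```python
# Hypothetical submit_findings payload; exact field names may differ.
finding = {
    "control_id": "CTRL-07",
    "title": "Missing input length validation on API endpoint",
    "checked_locations": ["main.py:1680-1720"],
    "expected_evidence": "Length check on the messages field",
}
```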

Finding Lifecycle

Findings follow a strict lifecycle with no backward transitions:

discovered → acknowledged → remediated → verified
     ↓             ↓
  dismissed     dismissed
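The diagram's rules can be captured in a small transition table (a sketch; verified and dismissed are terminal and allow no further moves):

```python
# Allowed finding-status transitions from the diagram above.
# No backward moves; "verified" and "dismissed" are terminal.
TRANSITIONS = {
    "discovered":   {"acknowledged", "dismissed"},
    "acknowledged": {"remediated", "dismissed"},
    "remediated":   {"verified"},
    "verified":     set(),
    "dismissed":    set(),
}

def can_transition(current: str, target: str) -> bool:
    return target in TRANSITIONS.get(current, set())
```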

Remediation Bridge

The key connection between negative findings and positive assertions: when remediating a finding, you can link it to one or more assertion IDs. This bridges "we found a gap" to "we proved we fixed it."

A finding is auto-verified when:

  1. Its status is "remediated"
  2. It has at least one linked assertion ID
  3. All linked assertions pass both Tier 1 and Tier 2 verification

This creates a complete audit trail: gap discovered → fix acknowledged → evidence submitted → CI verified.
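The three auto-verification conditions amount to a simple predicate (a sketch with illustrative field names):

```python
def finding_auto_verified(finding: dict, assertion_results: dict) -> bool:
    """True when a remediated finding's linked assertions all pass.

    assertion_results maps assertion_id -> {"tier1": "pass"/"fail",
    "tier2": "pass"/"fail"}. Sketch of the three conditions above.
    """
    linked = finding.get("assertion_ids", [])
    return (
        finding.get("status") == "remediated"
        and len(linked) >= 1
        and all(
            assertion_results[a]["tier1"] == "pass"
            and assertion_results[a]["tier2"] == "pass"
            for a in linked
        )
    )
```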

Findings in the UI

On the Assurance page, each control card shows a Findings panel (below the Assertions panel). From here you can:

MCP Workflow

The recommended workflow for an AI coding agent:

Positive evidence (prove controls are implemented):

  1. get_controls with status="implemented" — find controls that need assertions
  2. Analyze the codebase — read the relevant source files
  3. Before each assertion — verify locally with mipiti-verify verify <type> -p key=value --project-root . to catch bad file paths, missing functions, or wrong params. Then read the target file and ask whether a reviewer seeing only that code would agree with the assertion's claim; if the code doesn't actually implement the control, fix the implementation first.
  4. submit_assertions — create typed claims for each control (sufficiency evaluated immediately server-side; Tier 1 + Tier 2 verified on next CI run)
  5. get_verification_report — check overall verification status

Negative findings (discover gaps):

  1. get_scan_prompt — get a structured scanning guide for not-implemented controls
  2. Analyze the codebase — scan for missing implementations following the prompt
  3. submit_findings — record gaps with checked locations and expected evidence
  4. list_findings — review submitted findings by control or status
  5. update_finding — transition findings through their lifecycle

Control refinement (fix overly prescriptive controls):

  1. refine_control — propose a new description with justification. AI evaluates whether the mitigation group still satisfies all mapped COs. See Control Refinement for details.

Ongoing maintenance:

  1. get_review_queue — find stale controls needing re-verification
  2. list_assertions — see existing assertions for a control
  3. Update or replace assertions as the code evolves

Setting Up CI Attestation

Two options — choose based on your CI provider.

Option A: OIDC (GitHub Actions / GitLab CI)

Zero key management. Your CI provider issues a short-lived identity token (JWT) signed with the provider's private key. Mipiti validates it automatically by fetching the provider's public keys from their standard OIDC discovery endpoint ({issuer}/.well-known/jwks). You never need to provision or rotate any keys — the trust is established through the provider's published key infrastructure.

GitHub Actions:

# .github/workflows/mipiti-verify.yml
name: Security Control Verification
on:
  push:
    branches: [main]
  schedule:
    - cron: '0 6 * * 1'  # Weekly Monday 6am

permissions:
  id-token: write   # Required for OIDC attestation
  contents: read

jobs:
  verify:
    runs-on: ubuntu-latest
    environment: verify
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
      - uses: Mipiti/mipiti-verify@a4abc180daa1d3035a06f454ce1ccb02794c6dd0 # v0.14.0
        with:
          api-key: ${{ secrets.MIPITI_API_KEY }}
          all: true
          tier2-provider: openai
          tier2-model: gpt-4o-mini
          tier2-api-key: ${{ secrets.OPENAI_API_KEY }}

The action auto-detects OIDC tokens from the GitHub Actions environment. No manual token handling needed.

GitLab CI:

# .gitlab-ci.yml
# Required CI/CD variables (Settings → CI/CD → Variables, masked + protected):
#   MIPITI_API_KEY  — verifier-scoped API key (mv_...)
#   OPENAI_API_KEY  — OpenAI API key for Tier 2 verification
verify-controls:
  stage: test
  id_tokens:
    MIPITI_OIDC:
      aud: mipiti
  script:
    - pip install "mipiti-verify[openai]==0.13.0"
    # For supply chain safety, generate a lockfile with hashes:
    #   pip-compile --generate-hashes -o requirements-verify.txt
    # Then install with:
    #   pip install --require-hashes -r requirements-verify.txt
    - mipiti-verify run --all
        --tier2-provider openai
        --tier2-model gpt-4o-mini
        --output json

No keys to generate, rotate, or manage. The CI provider handles OIDC automatically.

Self-hosted GitLab or other OIDC providers:

If your CI system uses a custom OIDC issuer (e.g., https://gitlab.example.com), add it in Settings > Workspaces > Security > Trusted OIDC Issuers. Platform defaults (GitHub Actions, GitLab.com) are always trusted — custom issuers are merged with them.

Option B: ECDSA (Any CI System)

For Jenkins, CircleCI, Buildkite, or any CI system without OIDC support. You generate a key pair, upload the public key to Mipiti, and store the private key as a CI secret.

Step 1 — Generate key pair:

# Generate ECDSA P-256 private key
openssl ecparam -genkey -name prime256v1 -noout -out private.pem

# Extract public key
openssl ec -in private.pem -pubout -out public.pem

Step 2 — Upload public key to Mipiti:

  1. Go to Settings > Workspaces
  2. Select your team workspace (not personal)
  3. In the Security section, find ECDSA Signing Key
  4. Paste the contents of public.pem into the text area
  5. Click Upload Key
  6. You should see a green "Configured" badge with the key fingerprint

Optionally, enable Require CI attestation to reject unattested submissions.

Step 3 — Store private key in CI:

Add private.pem as a secret in your CI system. Never commit it to your repository.

| CI System | Where to store |
|---|---|
| GitHub Actions | Repository Settings > Secrets > MIPITI_SIGNING_KEY |
| GitLab CI | Settings > CI/CD > Variables > MIPITI_SIGNING_KEY (masked) |
| Jenkins | Credentials > Secret text > MIPITI_SIGNING_KEY |
| CircleCI | Project Settings > Environment Variables > MIPITI_SIGNING_KEY |

Step 4 — Sign verification results in CI:

Your CI script needs to:

  1. Create a short-lived JWT for authentication (signed with the private key)
  2. Compute a hash of the verification results
  3. Sign the hash with the private key
  4. Submit results with the JWT and signature
# 1. Create auth JWT (ES256, 5 min lifetime)
#    Header: {"alg":"ES256","typ":"JWT"}
#    Payload: {"sub":"<workspace_id>","iat":<now>,"exp":<now+300>}
AUTH_JWT=$(python3 -c "
import jwt, time
key = open('private.pem').read()
now = int(time.time())
print(jwt.encode({'sub': '$WORKSPACE_ID', 'iat': now, 'exp': now + 300}, key, algorithm='ES256'))
")

# 2. Compute results hash
RESULTS_JSON='[{"assertion_id":"asrt_001","tier":1,"result":"pass","details":"..."}]'
SIGNED_HASH="sha256:$(echo -n "$RESULTS_JSON" | sha256sum | cut -d' ' -f1)"

# 3. Sign the hash
echo -n "$SIGNED_HASH" | openssl dgst -sha256 -sign private.pem | base64 -w0 > signature.b64
SIGNATURE=$(cat signature.b64)

# 4. Submit with JWT auth + attestation signature
curl -X POST https://api.mipiti.io/api/models/$MODEL_ID/verification/results \
  -H "Authorization: Bearer $AUTH_JWT" \
  -H "Content-Type: application/json" \
  -d "{
    \"pipeline\": {\"provider\": \"jenkins\", \"run_id\": \"$BUILD_NUMBER\"},
    \"results\": $RESULTS_JSON,
    \"signature\": \"$SIGNATURE\",
    \"signed_hash\": \"$SIGNED_HASH\"
  }"

Key rotation:

  1. Generate a new key pair
  2. Upload the new public key in Workspace Settings (the old key is automatically revoked)
  3. Update the CI secret with the new private key
  4. Previous audit bundles remain verifiable — key history preserves the old public key

Which should I use?

| | OIDC | ECDSA |
|---|---|---|
| Setup effort | Minimal (one permission line) | Moderate (key generation + upload + CI secret) |
| Key management | None | You manage rotation |
| CI support | GitHub Actions, GitLab CI | Any CI system |
| Provenance | Rich (repo, branch, actor, run ID) | Cryptographic (signature + key fingerprint) |
| Security | Requires claim restrictions + a branch-protected Environment for full protection (see Securing CI Attestation) | Requires repo secret restrictions for full protection (see Securing CI Attestation) |
| Recommended for | GitHub/GitLab shops | Jenkins, CircleCI, Buildkite, or mixed CI |

Both methods produce attested verification runs. You can use different methods for different models or pipelines within the same workspace.

Securing CI Attestation

Attestation proves provenance — but provenance is only meaningful if the CI pipeline itself is trustworthy. This section explains how to configure both OIDC and ECDSA attestation for full protection against cross-repo and cross-branch forgery.

Why attestation alone isn't enough

Attestation proves "this came from CI" — but which CI? Without restrictions:

Both flows need CI-side protections to be trustworthy.

Securing OIDC attestation

  1. Configure required claims in Mipiti — Go to Workspace Settings > Security > Required OIDC Claims. Set:

    • repository = your-org/your-repo (ensures tokens only from your repo)
    • environment = attestation (ensures tokens only from a protected GitHub Environment)
  2. Create a GitHub Environment with branch protection:

    • Repository Settings > Environments > New environment: attestation
    • Under "Deployment branches": select "Selected branches" > add main
    • This means only workflows triggered on main can access the attestation environment, so PR authors from forks or feature branches cannot produce valid tokens
  3. Update your workflow to use the environment:

    jobs:
      verify:
        runs-on: ubuntu-latest
        environment: attestation    # requires branch protection
        permissions:
          id-token: write
          contents: read
        steps: ...
    
  4. How this protects you: A malicious contributor can open a PR that modifies the workflow, but GitHub won't let it run in the attestation environment because the PR branch isn't main. Without the environment, the OIDC token won't have the environment claim and Mipiti will reject it.

Securing ECDSA attestation

  1. Store the private key as a repository secret — never as a workflow file or artifact

    • GitHub: Repository Settings > Secrets and variables > Actions > New repository secret: MIPITI_SIGNING_KEY
    • GitLab: Settings > CI/CD > Variables > MIPITI_SIGNING_KEY (masked, protected)
  2. Restrict the secret to protected branches:

    • GitHub: Repository secrets are already restricted — they're not exposed to workflows triggered by PRs from forks
    • GitLab: Check "Protected" on the variable so it's only available on protected branches
  3. How this protects you: A malicious contributor who opens a PR cannot access the signing key. Their workflow runs without the secret, so they can't produce valid signatures. Only workflows on the default/protected branch can sign results.

Security comparison (with protections)

| | OIDC + claim restrictions | ECDSA + repo secret |
|---|---|---|
| Cross-repo protection | Yes (repository claim pinned) | Yes (key per-workspace) |
| Branch protection | Via GitHub Environment | Via repo secret restrictions |
| PR author can forge? | No (environment restricted to main) | No (secret not exposed to forks) |
| Result integrity | Provenance only (token not bound to payload) | Signature covers payload |
| Key management | None (provider-managed keys) | Manual (generate, upload, rotate) |

Secure CI Setup Guide

Attestation is only as strong as the pipeline that produces it. The five requirements below ensure that developers cannot influence verification results.

The five requirements for tamper-proof verification:

| # | Requirement | Why | How (GitHub) |
|---|---|---|---|
| 1 | Protected branch | Developers can't push directly; all changes require PR | Branch protection rule: require PR reviews, no force push |
| 2 | Required PR reviews | No unreviewed code reaches the verified branch | Branch protection: require 1+ approving review |
| 3 | CODEOWNERS on workflow files | Changes to the verification workflow require security team approval | .github/CODEOWNERS: /.github/workflows/mipiti-verify.yml @security-team |
| 4 | Pinned tool version | The verification tool can't be silently replaced | pip install mipiti-verify==X.Y.Z with --require-hashes (see below) |
| 5 | Environment with branch restriction | Attestation tokens only issued from the protected branch | GitHub Environment attestation restricted to main branch |

Step-by-step setup (GitHub Actions):

  1. Create branch protection rule on main: require PR reviews, disable force push, require status checks
  2. Add .github/CODEOWNERS with workflow file ownership (security team or repo owner)
  3. Create GitHub Environment attestation in repo settings, restrict deployment branches to main
  4. Configure the workflow to use environment: attestation and pin mipiti-verify with hash verification (see below)
  5. In Mipiti Workspace Settings, set required OIDC claims: repository + environment
  6. Enable "Require CI attestation" toggle

What this guarantees to auditors:

What would violate this guarantee:

Pinning the verification tool:

A version pin (==X.Y.Z) prevents silent upgrades but doesn't protect against index substitution (dependency confusion) or a compromised package index. For a tamper-proof pipeline, pin the exact artifact hash:

# Strongest — pins the exact wheel artifact (immutable).
# pip accepts --hash only inside a requirements file, so pin it there:
printf 'mipiti-verify==X.Y.Z --hash=sha256:abc123...\n' > requirements-verify.txt
pip install --require-hashes -r requirements-verify.txt

Get the hash from PyPI (pip hash or the "Download files" page). If the artifact changes for any reason, pip will refuse to install it. This is the same approach used by pip-compile --generate-hashes and recommended by the Python Packaging Authority for supply chain security.

Alternatively, pin to a commit SHA (requires git in CI, slower, no wheel cache):

pip install git+https://github.com/mipiti/mipiti-verify@abc123def456...

At minimum, always pin the version (==X.Y.Z). PyPI versions are immutable once published — they cannot be overwritten — but hash pinning provides defense in depth against index-level attacks.

For ECDSA attestation — the same five requirements apply, with one difference: the signing key is stored as a repo secret (automatically restricted to protected branches). The ECDSA signature additionally covers the results payload, providing tamper detection if results are modified in transit.

Self-hosted / private runners

Both approaches work on self-hosted and private runners:

The only exception: fully air-gapped runners that can't reach token.actions.githubusercontent.com can't use OIDC at all — use ECDSA in that case.