Assertion Verification
Mipiti's assertion verification system lets you prove that security controls are actually implemented in your codebase — without the platform ever accessing your source code.
The Problem
Marking a control as "implemented" with a note like "see auth.py line 45" is self-attestation. Auditors and compliance frameworks (SOC 2, ISO 27001) need independent, machine-verified proof. But sending your source code to a third-party platform creates custody risk.
Assertion verification solves both problems: typed, machine-verifiable claims about your code that can be independently checked by your own CI pipeline.
How It Works
The system has three parties:
- Code analysis (your AI agent — Claude Code, Cursor, etc.) — has full code access, produces typed assertions
- Mipiti (the platform) — stores assertions, never sees your code
- Your CI pipeline (GitHub Actions, GitLab CI, etc.) — independently verifies assertions against the actual codebase
No single party needs both code access and assertion storage. Mipiti coordinates verification without touching your source code.
Assertions
An assertion is a typed, machine-verifiable claim about a property of your source code. Each assertion is linked to a specific implementation control.
Assertion Types
| Type | What it claims | Example |
|---|---|---|
| `function_exists` | A named function exists at a location | `require_auth` in `auth/middleware.py` |
| `class_exists` | A named class exists | `RateLimiter` in `security/rate_limit.py` |
| `decorator_present` | A function has a specific decorator | `@require_auth` on `get_user_profile` |
| `function_calls` | One function calls another | `logout()` calls `invalidate_session()` |
| `parameter_validated` | A function parameter has validation | `user_id` validated in `update_profile()` |
| `test_passes` | Matching tests exist and pass | `test_require_auth_*` in `tests/test_auth.py` |
| `test_exists` | Matching tests exist | `test_logout_*` in `tests/test_auth.py` |
| `config_key_exists` | A config key is present | `SESSION_TIMEOUT` in `config.yaml` |
| `config_value_matches` | A config value matches an expected value | `HSTS_MAX_AGE >= 31536000` |
| `dependency_exists` | A package is in the dependency manifest | `bcrypt` in `requirements.txt` |
| `dependency_version` | A package version satisfies a constraint | `cryptography >= 41.0` |
| `file_exists` | A file exists at a path | `security/cors_config.py` |
| `file_hash` | A file matches a cryptographic hash, with reference to the code that pins it | SHA-256 of `nginx.conf` verified in `deploy/verify.py` |
| `pattern_matches` | A regex pattern matches in a file | `Strict-Transport-Security` in responses |
| `pattern_absent` | A regex pattern does NOT match | No `eval()` in user input handlers |
| `import_present` | A module import exists | `from cryptography.fernet import Fernet` |
| `env_var_referenced` | An environment variable is used | `DATABASE_ENCRYPTION_KEY` referenced |
| `error_handled` | Error handling exists for specific cases | `AuthenticationError` caught in `login()` |
| `no_plaintext_secret` | No hardcoded secrets matching a pattern | No `password = "..."` in source files |
| `middleware_registered` | Named middleware is registered | `cors_middleware` in app setup |
| `http_header_set` | An HTTP header is set in responses | `X-Content-Type-Options: nosniff` |
Creating Assertions
Assertions can be created in two ways:
Via MCP (recommended) — your AI coding agent analyzes the codebase and submits assertions programmatically:
```
submit_assertions(
  model_id: "...",
  control_id: "CTRL-07",
  assertions_json: [
    {
      "type": "function_exists",
      "params": { "file": "auth/middleware.py", "name": "require_auth" },
      "description": "Authentication middleware function exists"
    },
    {
      "type": "test_passes",
      "params": { "file": "tests/test_auth.py", "test_name": "test_require_auth_rejects_expired" },
      "description": "Auth middleware correctly rejects expired tokens"
    }
  ]
)
```
Via the UI — on the Assurance page, expand any control and use the Assertions panel to add assertions manually. Select a type, fill in the parameters, and provide a description.
Assertion Lifecycle
- Pending — created but not yet verified by CI
- Pass — CI verified the claim is true
- Fail — CI verified the claim is false (the function doesn't exist, the test fails, etc.)
- Skipped — CI couldn't evaluate this assertion type (e.g., no Python AST parser available)
Workflow: Submit, Verify, Then Mark Implemented
The recommended workflow for each control:
- Submit assertions — create assertions proving the control is implemented. You can submit multiple assertions incrementally.
- CI verifies immediately — assertions are verified on the next CI run regardless of the control's implementation status. This gives you early feedback: you can see which assertions pass or fail before committing to the "implemented" status.
- Fix failures — if any assertions fail, update your code or assertions and resubmit. CI re-verifies on the next run.
- Mark as implemented — once you're confident your assertions pass, mark the control as implemented. At least one assertion is required.
- Platform evaluates — the platform checks that all assertions pass both tiers (Tier 1 + Tier 2) and that collective sufficiency is "sufficient". If everything passes, the control is promoted to verified.
This order is important: CI verification runs continuously on all assertions, not just those on implemented controls. You always get feedback before making a status commitment.
Three-Gate Verification
Verification operates in three gates. A control is only verified when all three pass:
Tier 1 — Mechanical verification (deterministic, no AI required):
- Does the function actually exist at that location? (AST parsing)
- Does the test actually pass? (test runner execution)
- Does the config key exist? (file parsing)
- Cost: near-zero. Runs in seconds.
- Tier 1 always runs for all pending assertions.
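To illustrate how cheap a Tier 1 mechanical check is, a `function_exists` verifier can be sketched in a few lines of Python. This is an illustrative sketch only — the real CLI ships 21 multi-language verifiers; the `file`/`name` parameter names follow the assertion examples in this document:

```python
import ast
from pathlib import Path

def verify_function_exists(project_root: str, file: str, name: str) -> bool:
    """Tier 1 sketch: parse the file with the AST and look for a def of the given name."""
    path = Path(project_root) / file
    if not path.is_file():
        return False  # structural falsehood: the file itself is missing
    tree = ast.parse(path.read_text())
    return any(
        isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name == name
        for node in ast.walk(tree)
    )
```

Note that this is deterministic and near-instant — which is exactly why Tier 1 alone can't catch a no-op stub; that's Tier 2's job.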
Tier 2 — Semantic verification (AI-assisted, uses your existing CI AI tools):
- Does the `require_auth` function actually validate tokens, or is it a no-op?
- Does `test_require_auth` actually test the authentication logic, or is it a trivial stub?
- The platform generates a targeted verification prompt per assertion — the control is the primary evaluation target, and the assertion description scopes which aspect of the control this evidence covers.
- Each assertion only needs to prove its stated aspect. Other assertions may cover other aspects — like controls in a mitigation group, each assertion proves one facet.
- Tier 2 runs automatically in CI for all controls with assertions.
Collective Sufficiency — Per-control evaluation (AI-assisted):
- Do all assertions together constitute sufficient proof that the control is fully implemented?
- A single assertion may correctly prove one facet of a multi-faceted control but not the whole thing. Sufficiency evaluates the evidence set as a whole.
- Evaluated automatically server-side when assertions are submitted — no CI round-trip needed. Gives immediate feedback on coverage gaps.
Tier 1 catches structural falsehoods ("this function doesn't exist"). Tier 2 catches semantic falsehoods ("this function exists but doesn't prove the stated aspect of the control"). Sufficiency catches evidence gaps ("these assertions prove authentication exists but don't address the session timeout requirement"). Together, they provide defense against both accidental errors and gaming.
How Tier 2 Prompts Work
The platform generates targeted verification prompts automatically — one per assertion, tailored to the assertion type and linked control. These prompts are template-based (deterministic, consumes no usage credits).
Each prompt presents the control as the primary evaluation target, with the assertion description scoping which aspect of the control this evidence covers. For example, a function_exists assertion for a control about authentication enforcement produces:
```
Read the function `require_auth` in `auth/middleware.py`.

Security Control
Enforce authentication on all API endpoints; reject unauthenticated requests with 401.

This Assertion's Scope
This assertion covers one specific aspect of the control above: Authentication middleware validates JWT tokens and rejects expired/invalid tokens with 401. Other assertions may cover other aspects — evaluate only whether this evidence proves the stated aspect.

Answer YES if the function contains meaningful implementation logic that proves the stated aspect of the control. Answer NO if it is a stub, no-op, pass-through, or does not prove it.
```
All assertion types undergo both Tier 1 and Tier 2 verification. For types like file_hash and no_plaintext_secret, Tier 2 evaluates whether the Tier 1 method is meaningful evidence for the control — for example, whether the regex patterns genuinely catch real secrets, or whether the code referencing a file hash actually verifies integrity as part of the control.
Assertion Coherence Check
When assertions are submitted, the platform runs a coherence check — a lightweight LLM call that validates whether the assertion type and parameters are a reasonable way to prove the linked control.
This prevents type-shopping — for example, a developer submitting file_hash for a control about input validation. The hash would match but prove nothing about the implementation.
How it works:
- The platform evaluates the assertion type, parameters, and description against the control's description and control objective
- Uses only metadata — no source code is needed
- Each assertion type has a natural scope: `file_hash` proves integrity, `no_plaintext_secret` proves secret absence, `function_exists` proves code structure, etc.
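The scope-matching idea can be caricatured in code. This is purely illustrative — the platform's actual coherence check is an LLM call over the control's description, and this scope table is an assumption for the sketch, not the platform's:

```python
# Illustrative only: encode each assertion type's "natural scope" and flag
# mismatches (type-shopping). The real check is semantic, not a lookup table.
NATURAL_SCOPE = {
    "file_hash": "integrity",
    "no_plaintext_secret": "secret_absence",
    "function_exists": "code_structure",
    "test_passes": "behavior",
}

def is_coherent(assertion_type: str, control_scope: str) -> bool:
    """An assertion is coherent when its natural scope matches what the control needs."""
    return NATURAL_SCOPE.get(assertion_type) == control_scope
```

Under this caricature, `is_coherent("file_hash", "code_structure")` is false — the type-shopping case described above.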
Hard gate enforcement:
- Coherent — the assertion type matches the control's intent. Proceeds normally.
- Incoherent — the assertion type doesn't match (e.g., `file_hash` for an implementation control). The assertion is saved but `tier2_status` is reset to `PENDING`, preventing it from ever reaching "verified" status.
Incoherent assertions are flagged in the verification report (`coherence_warnings` count) and returned as warnings in the submission response, giving immediate feedback to the submitter.
Automatic Evaluation
Tier 2 runs automatically in CI for all controls with assertions. Collective sufficiency is evaluated server-side when assertions are submitted — the platform uses its own LLM to check whether the assertion set covers all aspects of the control, giving immediate feedback without waiting for a CI run.
The mipiti-verify CLI
mipiti-verify is an open-source CLI tool that handles the entire verification pipeline in a single command:
```shell
pip install mipiti-verify[all]

# Verify a single model
mipiti-verify run <model_id> \
  --api-key $MIPITI_API_KEY \
  --tier2-provider openai \
  --project-root .

# Verify all models in the workspace
mipiti-verify run --all \
  --api-key $MIPITI_API_KEY \
  --tier2-provider openai \
  --project-root .
```
The CLI:
- Pulls pending Tier 1 assertions from the Mipiti API
- Runs 21 built-in verifiers against your codebase (regex-based, multi-language)
- Submits Tier 1 results
- Pulls pending Tier 2 assertions for controls with evidence marked complete
- Feeds each prompt + relevant source code to your chosen AI provider
- Submits Tier 2 results
- Evaluates collective sufficiency for controls pending sufficiency review
- Submits sufficiency results
Tier 2 AI providers — pluggable, choose one:
| Provider | Flag | Notes |
|---|---|---|
| OpenAI | `--tier2-provider openai` | Uses `OPENAI_API_KEY` env var |
| Anthropic | `--tier2-provider anthropic` | Uses `ANTHROPIC_API_KEY` env var |
| Ollama | `--tier2-provider ollama` | Local, no API key needed |
API key scopes:
| Prefix | Scope | Use |
|---|---|---|
| `mk_` | Developer | Local development. Runs assertions but does not submit results. |
| `mv_` | Verifier | CI pipelines. Runs assertions and submits results to update verification status. |

Developer keys (`mk_`) skip result submission automatically — no `--dry-run` needed. Use a developer key for local testing and a verifier key (`mv_`) in CI.
Additional commands and flags:
- `mipiti-verify run --all` — verify all models in the API key's workspace
- `mipiti-verify list <model_id>` — show pending assertions summary
- `mipiti-verify report <model_id>` — show verification report
- `--output json` — machine-readable output for CI pipelines
- `--output github` — GitHub Actions annotations (errors on failures)
- `--dry-run` — run verifiers without submitting results (even with a verifier key)
When using --all, the CLI fetches every model in the workspace associated with the API key and verifies each one sequentially. The exit code is non-zero if any model has failures.
Trust model: The CLI is open-source so that auditors can inspect the verification logic, run it independently against the codebase, and trust the results without relying on the company's CI pipeline.
Tier 2 CI Workflow
The recommended approach is to use mipiti-verify, which handles both tiers automatically. If you prefer to integrate manually:
- Pull pending Tier 2 assertions: `GET /api/models/{id}/verification/pending?tier=2` — each assertion includes a `tier2_prompt` field with the generated prompt
- Run the prompt locally: feed the prompt + relevant source code to your CI's AI tool
- Submit results: `POST /api/models/{id}/verification/results` with `tier: 2` per result
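For a manual integration, the request URL and results payload might be assembled like this. This is a sketch: `API_BASE` and anything in the envelope beyond the documented `tier` and `results` fields are assumptions:

```python
import json

API_BASE = "https://api.mipiti.io"  # assumption: base URL as shown in later examples

def pending_tier2_url(model_id: str) -> str:
    """URL for pulling pending Tier 2 assertions, per the endpoint above."""
    return f"{API_BASE}/api/models/{model_id}/verification/pending?tier=2"

def build_tier2_results(answers: dict) -> str:
    """Map {assertion_id: YES/NO verdict from your AI tool} into the results
    body — tier: 2 per result, as the docs require."""
    results = [
        {"assertion_id": aid, "tier": 2, "result": "pass" if ok else "fail"}
        for aid, ok in answers.items()
    ]
    return json.dumps({"results": results})
```

You would then POST `build_tier2_results(...)` to the results endpoint with your verifier-scoped API key.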
Computed Verification Status
A control's verification status is computed, not settable. It reflects the strength of its evidence chain:
| Status | Meaning | Action needed |
|---|---|---|
| Verified | All assertions pass both tiers, coherence checks pass, and sufficiency is "sufficient" | None — control is proven |
| Partially verified | Verification attempted but incomplete — assertion failures, sufficiency gaps, or both | Investigate failing assertions or submit additional assertions for coverage gaps |
| Pending | Assertions exist but some still await CI evaluation | Wait for or trigger a CI run |
| Unverified | No assertions submitted | Submit assertions |
"Verified" is earned through independent CI verification — not self-attestation.
CI Attestation
When your CI pipeline submits verification results, it can include attestation — cryptographic proof that the results came from a real CI environment, not a manual submission.
Two attestation methods
OIDC (zero setup for GitHub Actions / GitLab CI):
- CI provides an OIDC identity token via the `X-CI-Attestation` header
- Mipiti validates the token against the CI provider's public keys (fetched automatically via OIDC discovery)
- GitHub Actions and GitLab.com are trusted by default
- For self-hosted GitLab or other OIDC-capable CI, add custom issuers in Workspace Settings > Security > Trusted OIDC Issuers
- Claim restrictions (recommended): Pin tokens to a specific repository, branch, or GitHub Environment to prevent cross-repo attestation — configure in Workspace Settings > Security > Required OIDC Claims
- Includes rich provenance: repository, branch, run ID, actor
ECDSA (universal — works with any CI system):
- Generate an ECDSA P-256 key pair
- Upload the public key in Workspace Settings > Security > Signing Key
- CI signs verification results with the private key
- Mipiti verifies the signature against the workspace's public key
Enforcement
In Workspace Settings, the "Require CI attestation" toggle controls enforcement:
- Off — any verification submission is accepted (API key auth sufficient)
- On — submissions must include either a valid OIDC token or a valid ECDSA signature
Audit trail
Attested verification runs include a permanent, independently verifiable audit bundle:
- The ECDSA signature over the verification results
- The public key that signed it (looked up from key history)
- An openssl command to independently verify the signature
Key history is preserved — even after key rotation, audit bundles reference the specific key that signed them.
Platform notarization
When verification results are submitted with OIDC attestation (no CI-side ECDSA signature), the platform automatically signs the results with the instance's ECDSA P-256 key — the same key used for backup signing. This provides content integrity: the auditor can verify that the results have not been modified since the platform received them.
The verification report shows the key fingerprint and results hash. The instance's public key is available from the admin panel (Settings > Admin > Signing Key) for independent verification.
Control Drift Detection
When CI runs verification periodically (on every push or on a schedule), previously-passing assertions that now fail indicate control drift — a code change invalidated a previously-implemented security control.
The verification report flags drift items and correlates them to specific controls and control objectives, enabling targeted remediation rather than full re-assessment.
Multi-Repository Support
Features often span multiple repositories — frontend, backend, infrastructure. Each assertion can declare which repository it targets via the repo field (e.g., "org/backend"). When omitted, the assertion applies to the default (single-repo) codebase.
How it works
- Assertions include an optional `repo` field when submitted via MCP or the API
- Each repo's CI runs `mipiti-verify`, which verifies assertions tagged with that repo (plus untagged assertions from single-repo setups)
- The platform aggregates results across repos for the same threat model
- The audit package includes commit SHAs per repo — the auditor verifies each independently
Setup
No setup needed for single-repo projects — the `repo` field defaults to empty and everything works as before. For multi-repo:
- When submitting assertions, include `repo: "org/repo-name"` to tag which repo the assertion targets
- Add `mipiti-verify` to each repo's CI pipeline with the same model ID
- Each repo verifies its own assertions; the platform shows the combined posture
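The repo-selection rule — each CI verifies assertions tagged with its repo plus untagged ones — is simple to express. A sketch with an assumed assertion shape:

```python
def assertions_for_repo(assertions: list, repo: str) -> list:
    """Select what this repo's CI should verify: assertions tagged with this
    repo, plus untagged assertions (the single-repo default)."""
    return [a for a in assertions if a.get("repo") in (repo, None, "")]
```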
Consistency
Each repo's CI independently pushes results on every commit to its protected branch. The platform stores the latest verified commit per repo. The audit package shows all repo commit SHAs — the auditor verifies each at the pinned commit.
Negative Findings (Gap Discovery)
Evidence verification proves controls are implemented. Negative findings discover controls that are not implemented — the inverse. An AI coding agent proactively scans your codebase against security controls and records structured evidence of what was checked and not found.
Example: "CTRL-07 requires input length validation on the API endpoint. Checked main.py:1680-1720 — no length check found on the messages field."
How Findings Work
Findings can be created three ways: by your AI coding agent (via MCP tools), by external tools calling the REST API directly with an API key, or manually in the UI. Each finding records:
- Title — short summary of the gap
- Description — what was expected but not found
- Severity — critical, high, medium, low, or info
- Checked locations — file paths with line ranges and code snippets where the agent looked
- Checked patterns — function names, decorators, or patterns the agent searched for
- Expected evidence — what a correct implementation would look like
Scan Prompts
The `get_scan_prompt` MCP tool generates a structured scanning guide for your AI agent — consuming no usage credits. The prompt is a deterministic template that provides:
- The control description and its linked control objective
- What to look for in the codebase
- How to record checked locations with file paths and line ranges
- Severity guidelines based on the control's context
The agent uses this prompt to guide its own codebase analysis, then submits any discovered gaps as findings.
Finding Lifecycle
Findings follow a strict lifecycle with no backward transitions:
```
discovered → acknowledged → remediated → verified
     ↓             ↓
 dismissed     dismissed
```
- Discovered — initial state when a gap is found
- Acknowledged — team has reviewed the finding and agrees it's a real gap
- Remediated — a fix has been applied (optionally linked to assertions)
- Verified — linked assertions pass CI verification (auto-computed)
- Dismissed — false positive or not applicable (requires a reason)
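The legal transitions can be encoded directly from the diagram. A sketch — state names match the doc; the transition table is just the diagram's edges:

```python
# Edges from the lifecycle diagram: strictly forward, no backward transitions.
ALLOWED = {
    "discovered":   {"acknowledged", "dismissed"},
    "acknowledged": {"remediated", "dismissed"},
    "remediated":   {"verified"},   # auto-computed from linked assertions
    "verified":     set(),          # terminal
    "dismissed":    set(),          # terminal
}

def can_transition(current: str, target: str) -> bool:
    """True only for the forward edges above."""
    return target in ALLOWED.get(current, set())
```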
Remediation Bridge
The key connection between negative findings and positive assertions: when remediating a finding, you can link it to one or more assertion IDs. This bridges "we found a gap" to "we proved we fixed it."
A finding is auto-verified when:
- Its status is "remediated"
- It has at least one linked assertion ID
- All linked assertions pass both Tier 1 and Tier 2 verification
This creates a complete audit trail: gap discovered → fix acknowledged → evidence submitted → CI verified.
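The three auto-verify conditions translate to a short predicate. A sketch with assumed field names (`status`, `assertion_ids`, per-tier results):

```python
def finding_auto_verified(finding: dict, assertions_by_id: dict) -> bool:
    """The remediation bridge: remediated + at least one linked assertion +
    all linked assertions pass both Tier 1 and Tier 2."""
    linked = finding.get("assertion_ids", [])
    return (
        finding.get("status") == "remediated"
        and len(linked) >= 1
        and all(
            assertions_by_id[aid]["tier1"] == "pass"
            and assertions_by_id[aid]["tier2"] == "pass"
            for aid in linked
        )
    )
```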
Findings in the UI
On the Assurance page, each control card shows a Findings panel (below the Assertions panel). From here you can:
- View findings with severity and status badges
- Expand a finding to see checked locations, patterns, and expected evidence
- Transition findings through their lifecycle (acknowledge, dismiss, remediate)
- Manually add findings for gaps discovered outside MCP
MCP Workflow
The recommended workflow for an AI coding agent:
Positive evidence (prove controls are implemented):
- `get_controls` with `status="implemented"` — find controls that need assertions
- Analyze the codebase — read the relevant source files
- Before each assertion — verify locally with `mipiti-verify verify <type> -p key=value --project-root .` to catch bad file paths, missing functions, or wrong params, then read the target file and evaluate whether a reviewer seeing only that code would agree with the assertion's claim. If the code doesn't actually implement the control, fix the implementation first.
- `submit_assertions` — create typed claims for each control (sufficiency evaluated immediately server-side; Tier 1 + Tier 2 verified on next CI run)
- `get_verification_report` — check overall verification status
Negative findings (discover gaps):
- `get_scan_prompt` — get a structured scanning guide for not-implemented controls
- Analyze the codebase — scan for missing implementations following the prompt
- `submit_findings` — record gaps with checked locations and expected evidence
- `list_findings` — review submitted findings by control or status
- `update_finding` — transition findings through their lifecycle
Control refinement (fix overly prescriptive controls):
- `refine_control` — propose a new description with justification. AI evaluates whether the mitigation group still satisfies all mapped COs. See Control Refinement for details.
Ongoing maintenance:
- `get_review_queue` — find stale controls needing re-verification
- `list_assertions` — see existing assertions for a control
- Update or replace assertions as the code evolves
Setting Up CI Attestation
Two options — choose based on your CI provider.
Option A: OIDC (GitHub Actions / GitLab CI)
Zero key management. Your CI provider issues a short-lived identity token (JWT) signed with the provider's private key. Mipiti validates it automatically by fetching the provider's public keys via standard OIDC discovery (`{issuer}/.well-known/openid-configuration`, which advertises the JWKS endpoint). You never need to provision or rotate any keys — the trust is established through the provider's published key infrastructure.
GitHub Actions:
```yaml
# .github/workflows/mipiti-verify.yml
name: Security Control Verification

on:
  push:
    branches: [main]
  schedule:
    - cron: '0 6 * * 1'  # Weekly Monday 6am

permissions:
  id-token: write   # Required for OIDC attestation
  contents: read

jobs:
  verify:
    runs-on: ubuntu-latest
    environment: verify
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
      - uses: Mipiti/mipiti-verify@a4abc180daa1d3035a06f454ce1ccb02794c6dd0 # v0.14.0
        with:
          api-key: ${{ secrets.MIPITI_API_KEY }}
          all: true
          tier2-provider: openai
          tier2-model: gpt-4o-mini
          tier2-api-key: ${{ secrets.OPENAI_API_KEY }}
```
The action auto-detects OIDC tokens from the GitHub Actions environment. No manual token handling needed.
GitLab CI:
```yaml
# .gitlab-ci.yml
# Required CI/CD variables (Settings → CI/CD → Variables, masked + protected):
#   MIPITI_API_KEY — verifier-scoped API key (mv_...)
#   OPENAI_API_KEY — OpenAI API key for Tier 2 verification
verify-controls:
  stage: test
  id_tokens:
    MIPITI_OIDC:
      aud: mipiti
  script:
    - pip install "mipiti-verify[openai]==0.13.0"
    # For supply chain safety, generate a lockfile with hashes:
    #   pip-compile --generate-hashes -o requirements-verify.txt
    # Then install with:
    #   pip install --require-hashes -r requirements-verify.txt
    - mipiti-verify run --all
        --tier2-provider openai
        --tier2-model gpt-4o-mini
        --output json
```
No keys to generate, rotate, or manage. The CI provider handles OIDC automatically.
Self-hosted GitLab or other OIDC providers:
If your CI system uses a custom OIDC issuer (e.g., https://gitlab.example.com), add it in Settings > Workspaces > Security > Trusted OIDC Issuers. Platform defaults (GitHub Actions, GitLab.com) are always trusted — custom issuers are merged with them.
Option B: ECDSA (Any CI System)
For Jenkins, CircleCI, Buildkite, or any CI system without OIDC support. You generate a key pair, upload the public key to Mipiti, and store the private key as a CI secret.
Step 1 — Generate key pair:
```shell
# Generate ECDSA P-256 private key
openssl ecparam -genkey -name prime256v1 -noout -out private.pem

# Extract public key
openssl ec -in private.pem -pubout -out public.pem
```
Step 2 — Upload public key to Mipiti:
- Go to Settings > Workspaces
- Select your team workspace (not personal)
- In the Security section, find ECDSA Signing Key
- Paste the contents of `public.pem` into the text area
- Click Upload Key
- You should see a green "Configured" badge with the key fingerprint
Optionally, enable Require CI attestation to reject unattested submissions.
Step 3 — Store private key in CI:
Add `private.pem` as a secret in your CI system. Never commit it to your repository.
| CI System | Where to store |
|---|---|
| GitHub Actions | Repository Settings > Secrets > MIPITI_SIGNING_KEY |
| GitLab CI | Settings > CI/CD > Variables > MIPITI_SIGNING_KEY (masked) |
| Jenkins | Credentials > Secret text > MIPITI_SIGNING_KEY |
| CircleCI | Project Settings > Environment Variables > MIPITI_SIGNING_KEY |
Step 4 — Sign verification results in CI:
Your CI script needs to:
- Create a short-lived JWT for authentication (signed with the private key)
- Compute a hash of the verification results
- Sign the hash with the private key
- Submit results with the JWT and signature
```shell
# 1. Create auth JWT (ES256, 5 min lifetime)
#    Header:  {"alg":"ES256","typ":"JWT"}
#    Payload: {"sub":"<workspace_id>","iat":<now>,"exp":<now+300>}
AUTH_JWT=$(python3 -c "
import jwt, time
key = open('private.pem').read()
now = int(time.time())
print(jwt.encode({'sub': '$WORKSPACE_ID', 'iat': now, 'exp': now + 300}, key, algorithm='ES256'))
")

# 2. Compute results hash
RESULTS_JSON='[{"assertion_id":"asrt_001","tier":1,"result":"pass","details":"..."}]'
SIGNED_HASH="sha256:$(echo -n "$RESULTS_JSON" | sha256sum | cut -d' ' -f1)"

# 3. Sign the hash
echo -n "$SIGNED_HASH" | openssl dgst -sha256 -sign private.pem | base64 -w0 > signature.b64
SIGNATURE=$(cat signature.b64)

# 4. Submit with JWT auth + attestation signature
curl -X POST https://api.mipiti.io/api/models/$MODEL_ID/verification/results \
  -H "Authorization: Bearer $AUTH_JWT" \
  -H "Content-Type: application/json" \
  -d "{
    \"pipeline\": {\"provider\": \"jenkins\", \"run_id\": \"$BUILD_NUMBER\"},
    \"results\": $RESULTS_JSON,
    \"signature\": \"$SIGNATURE\",
    \"signed_hash\": \"$SIGNED_HASH\"
  }"
```
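When debugging signature mismatches, it helps to reproduce the `sha256:` results hash from step 2 in Python. The hash must be computed over the exact bytes that were signed — any re-serialization of the JSON will change it:

```python
import hashlib

def results_hash(results_json: str) -> str:
    """Equivalent of: echo -n "$RESULTS_JSON" | sha256sum — hash the literal bytes."""
    return "sha256:" + hashlib.sha256(results_json.encode()).hexdigest()
```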
Key rotation:
- Generate a new key pair
- Upload the new public key in Workspace Settings (the old key is automatically revoked)
- Update the CI secret with the new private key
- Previous audit bundles remain verifiable — key history preserves the old public key
Which should I use?
| OIDC | ECDSA | |
|---|---|---|
| Setup effort | Minimal (one permission line) | Moderate (key generation + upload + CI secret) |
| Key management | None | You manage rotation |
| CI support | GitHub Actions, GitLab CI | Any CI system |
| Provenance | Rich (repo, branch, actor, run ID) | Cryptographic (signature + key fingerprint) |
| Security | Requires claim restrictions + branch-protected Environment for full protection — see Securing CI Attestation | Requires repo secret restrictions for full protection — see Securing CI Attestation |
| Recommended for | GitHub/GitLab shops | Jenkins, CircleCI, Buildkite, or mixed CI |
Both methods produce attested verification runs. You can use different methods for different models or pipelines within the same workspace.
Securing CI Attestation
Attestation proves provenance — but provenance is only meaningful if the CI pipeline itself is trustworthy. This section explains how to configure both OIDC and ECDSA attestation for full protection against cross-repo and cross-branch forgery.
Why attestation alone isn't enough
Attestation proves "this came from CI" — but which CI? Without restrictions:
- OIDC: A user who controls any GitHub repo can generate a valid OIDC token, because GitHub Actions issues tokens to any repo with `id-token: write` permission. Without claim restrictions, Mipiti can't distinguish tokens from your repo vs. an attacker's repo.
- ECDSA: A user who can edit workflow files on an unprotected branch can exfiltrate the signing key by adding a step that prints the secret.
Both flows need CI-side protections to be trustworthy.
Securing OIDC attestation
- Configure required claims in Mipiti — go to Workspace Settings > Security > Required OIDC Claims. Set:
  - `repository=your-org/your-repo` (ensures tokens only from your repo)
  - `environment=attestation` (ensures tokens only from a protected GitHub Environment)
- Create a GitHub Environment with branch protection:
  - Repository Settings > Environments > New environment: `attestation`
  - Under "Deployment branches": select "Selected branches" > add `main`
  - This means only workflows triggered on `main` can access the `attestation` environment, so PR authors from forks or feature branches cannot produce valid tokens
- Update your workflow to use the environment:

  ```yaml
  jobs:
    verify:
      runs-on: ubuntu-latest
      environment: attestation   # requires branch protection
      permissions:
        id-token: write
        contents: read
      steps: ...
  ```

- How this protects you: a malicious contributor can open a PR that modifies the workflow, but GitHub won't let it run in the `attestation` environment because the PR branch isn't `main`. Without the environment, the OIDC token won't have the `environment` claim and Mipiti will reject it.
Securing ECDSA attestation
- Store the private key as a repository secret — never as a workflow file or artifact:
  - GitHub: Repository Settings > Secrets and variables > Actions > New repository secret: `MIPITI_SIGNING_KEY`
  - GitLab: Settings > CI/CD > Variables > `MIPITI_SIGNING_KEY` (masked, protected)
- Restrict the secret to protected branches:
  - GitHub: Repository secrets are already restricted — they're not exposed to workflows triggered by PRs from forks
  - GitLab: Check "Protected" on the variable so it's only available on protected branches
- How this protects you: a malicious contributor who opens a PR cannot access the signing key. Their workflow runs without the secret, so they can't produce valid signatures. Only workflows on the default/protected branch can sign results.
Security comparison (with protections)
| OIDC + claim restrictions | ECDSA + repo secret | |
|---|---|---|
| Cross-repo protection | Yes (repository claim pinned) | Yes (key per-workspace) |
| Branch protection | Via GitHub Environment | Via repo secret restrictions |
| PR author can forge? | No (environment restricted to main) | No (secret not exposed to forks) |
| Result integrity | Provenance only (token not bound to payload) | Signature covers payload |
| Key management | None (provider-managed keys) | Manual (generate, upload, rotate) |
Secure CI Setup Guide
Attestation proves provenance — but provenance is only meaningful if the CI pipeline itself is trustworthy. The five requirements below ensure that developers cannot influence verification results.
The five requirements for tamper-proof verification:
| # | Requirement | Why | How (GitHub) |
|---|---|---|---|
| 1 | Protected branch | Developers can't push directly — all changes require PR | Branch protection rule: require PR reviews, no force push |
| 2 | Required PR reviews | No unreviewed code reaches the verified branch | Branch protection: require 1+ approving review |
| 3 | CODEOWNERS on workflow files | Changes to the verification workflow require security team approval | .github/CODEOWNERS: /.github/workflows/mipiti-verify.yml @security-team |
| 4 | Pinned tool version | The verification tool can't be silently replaced | `pip install --require-hashes -r requirements.txt` with `mipiti-verify==X.Y.Z --hash=sha256:...` pinned in the requirements file (see below) |
| 5 | Environment with branch restriction | Attestation tokens only issued from the protected branch | GitHub Environment attestation restricted to main branch |
Step-by-step setup (GitHub Actions):
- Create branch protection rule on `main`: require PR reviews, disable force push, require status checks
- Add `.github/CODEOWNERS` with workflow file ownership (security team or repo owner)
- Create GitHub Environment `attestation` in repo settings, restrict deployment branches to `main`
- Configure the workflow to use `environment: attestation` and pin `mipiti-verify` with hash verification (see below)
- In Mipiti Workspace Settings, set required OIDC claims: `repository` + `environment`
- Enable "Require CI attestation" toggle
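Put together, a minimal workflow matching these steps might look like the following sketch. The `mipiti-verify` invocation on the last line is an assumption based on this guide, not a verified CLI interface:

```yaml
# .github/workflows/mipiti-verify.yml — illustrative sketch; the final
# mipiti-verify command is a hypothetical invocation, not a documented CLI.
name: mipiti-verify
on:
  push:
    branches: [main]            # push to main, not pull_request
jobs:
  verify:
    runs-on: ubuntu-latest
    environment: attestation    # deployment branches restricted to main
    permissions:
      id-token: write           # allows the job to request an OIDC token
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: pip install --require-hashes -r requirements-verify.txt
      - run: mipiti-verify      # hypothetical command name
```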
What this guarantees to auditors:
- Verification results came from a CI pipeline running on the protected branch
- The workflow definition was reviewed and approved before execution
- The verification tool is a known version that hasn't been tampered with
- The developer whose code is being verified could not have influenced the verification
What would violate this guarantee:
- Granting a developer admin/bypass permissions on the repo (can skip branch protection)
- Removing CODEOWNERS on workflow files (developer can modify the pipeline)
- Using `pip install mipiti-verify` without version and hash pinning (tool could be replaced by dependency confusion or index substitution)
- Using `pull_request` trigger instead of `push` to `main` (PR author's code runs before review)
- Removing branch restriction from the GitHub Environment
- Not requiring PR reviews on the protected branch
- Running verification on a self-hosted runner controlled by the developer being verified
Pinning the verification tool:
A version pin (`==X.Y.Z`) prevents silent upgrades but doesn't protect against index substitution (dependency confusion) or a compromised package index. For a tamper-proof pipeline, pin the exact artifact hash:
```shell
# Strongest — pins the exact wheel artifact (immutable).
# pip only accepts --hash inside a requirements file, so pin it there:
#
# requirements.txt:
#   mipiti-verify==X.Y.Z --hash=sha256:abc123...
pip install --require-hashes -r requirements.txt
```
Get the hash from PyPI (`pip hash` on the downloaded wheel, or the "Download files" page). If the artifact changes for any reason, pip will refuse to install it. This is the same approach used by `pip-compile --generate-hashes` and recommended by the Python Packaging Authority for supply chain security.
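`pip hash` and `sha256sum` produce the same digest, so the pin can be computed from a wheel you've already downloaded. In this sketch a locally created demo file stands in for the real `mipiti_verify` wheel:

```shell
# Demo file standing in for a downloaded mipiti_verify-X.Y.Z wheel.
echo "demo wheel contents" > mipiti_verify-demo.whl

# pip hash emits a --hash=sha256:... line ready to paste into requirements.txt
pip hash mipiti_verify-demo.whl

# sha256sum yields the same hex digest if pip isn't available on the box
sha256sum mipiti_verify-demo.whl
```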
Alternatively, pin to a commit SHA (requires git in CI, slower, no wheel cache):
```shell
pip install git+https://github.com/mipiti/mipiti-verify@abc123def456...
```
At minimum, always pin the version (`==X.Y.Z`). PyPI versions are immutable once published — they cannot be overwritten — but hash pinning provides defense in depth against index-level attacks.
For ECDSA attestation — the same five requirements apply, with one difference: the signing key is stored as a repo secret (automatically restricted to protected branches). The ECDSA signature additionally covers the results payload, providing tamper detection if results are modified in transit.
Self-hosted / private runners
Both approaches work on self-hosted and private runners:
- OIDC tokens are issued by GitHub/GitLab's platform (via API call), not generated locally on the runner. Self-hosted runners request tokens the same way hosted runners do.
- Repo secrets are injected by the platform into the workflow regardless of runner type.
- Environment branch protection is a platform feature — applies to all runners.
The only exception: fully air-gapped runners that can't reach `token.actions.githubusercontent.com` can't use OIDC at all — use ECDSA in that case.