AI Governance Control-Pack
AI governance evidence engine for Internal Audit and GRC.
NIST AI 600-1 · EU AI Act · OWASP LLM Top 10 · SOC 2 CC · ISO 27001 · ISO 42001 · GDPR · HIPAA · Responsible AI

AI Governance Evidence Engine. Repo, Docs, Posture.

Review AI systems from code, supporting documents, and declared posture. Produce a reviewer-ready assessment and Evidence Pack.

Built for Internal Audit, GRC, and trust review teams. Same inputs, same pack, same result.

127
Implemented Controls
123 active static · 4 parked runtime
416
Gold Regression Cases
9
Compliance Frameworks
The Gap

AI Governance Review Is Fragmented

Security scanners, policy reviews, and documentation checks rarely tell one coherent story. Review teams still have to reconcile the evidence by hand.

Code Scanners Miss Governance Context

Traditional AppSec tooling can find technical patterns, but it rarely explains governance posture, stated controls, or review boundaries.

Docs Reviews Miss Implementation Reality

Policies and model docs state intent, but reviewers still need proof that the repo and supporting evidence match those claims.

Outputs Aren't Reviewer-Ready

Most teams still stitch together screenshots, notes, and raw findings. The result is slow, inconsistent, and hard to hand off.

How It Works

From Repo And Docs To Review-Ready Evidence

Three engines run in sequence. Each produces named, versioned artifacts so another reviewer, system, or platform can ingest the same conclusion without scraping the UI.
ICB
Input Contract Builder
Takes a GitHub repo, ZIP, or preset source and normalizes it into a structured declaration contract. System and inventory posture are captured in a manifest with pinned version metadata, explicit gaps, and reviewable evidence requirements.
repo / upload / preset -> manifest.json, gap_list, stubs[]
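A minimal sketch of what the normalized declaration contract might look like. The field names (`schema_version`, `declarations`, `gap_list`, `evidence_requirements`) are illustrative assumptions, not the actual manifest schema; the point is that undeclared posture is recorded as an explicit gap rather than defaulted.

```python
import json

def build_manifest(source_ref: str, declared: dict) -> dict:
    """Normalize a source reference plus declared posture into a manifest stub.
    Hypothetical field names; not the real ICB schema."""
    manifest = {
        "schema_version": "1.0.0",       # pinned so downstream engines can validate
        "source": {"ref": source_ref, "kind": "repo"},
        "declarations": declared,        # posture as stated by the system owner
        "gap_list": [],                  # declarations the owner did not provide
        "evidence_requirements": [],     # what a reviewer must still collect
    }
    # Record explicit gaps rather than guessing defaults.
    for field in ("network", "data_handling", "llm_usage"):
        if field not in declared:
            manifest["gap_list"].append(field)
    return manifest

m = build_manifest("github.com/acme/bot", {"network": {"mode": "none"}})
print(json.dumps(m["gap_list"]))  # data_handling and llm_usage are undeclared
```

The explicit gap list is what makes the contract reviewable: a downstream engine can distinguish "declared absent" from "never declared."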
SIG
Repo Signals Scan
Parser-first analysis scans the codebase for governance-relevant capability evidence: network access, execution primitives, LLM usage, data handling, vector infrastructure, and related runtime surfaces. Signals keep provenance, evidence lines, parser method, and coverage limits so reviewers can see what was actually observed.
manifest.json -> repo_signals.json, manifest.json (enriched)
This is the evidence layer: show what was observed, where it came from, and how strong the provenance is. Not exploit detection, but review guidance with evidence samples.
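The shape of a single signal record can be sketched as follows. The field names and values here are assumptions for illustration; the published artifact is `repo_signals.json`, but its exact schema is not documented in this text.

```python
# Illustrative signal record with provenance attached; field names assumed.
signal = {
    "signal_id": "SIG-NETWORK",
    "severity": "medium",
    "match_count": 3,
    "evidence": [
        {"path": "app/client.py", "lineno": 42, "snippet": "requests.get(url)"},
    ],
    "parser_method": "ast",                     # how the match was found
    "coverage_limits": ["generated code excluded"],
}

def provenance_strength(sig: dict) -> str:
    """Rough ranking: parser-backed evidence is stronger than plain text matching."""
    return "strong" if sig["parser_method"] == "ast" else "weak"

print(provenance_strength(signal))  # strong
```

Keeping `path:lineno` evidence and the parser method on every signal is what lets a reviewer judge provenance instead of trusting a bare match count.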
FDY
Foundry Evaluation (Policy Pack)
The enriched manifest is evaluated against the current 127-control pack. In the current static review mode, 123 controls are active while 4 runtime controls remain parked. Each control produces a deterministic outcome with rationale, evidence asks, blocker provenance, and reviewer framing. If declared posture and observed evidence disagree, the result stays reviewable instead of pretending certainty.
manifest.json (enriched) -> decision.json, enriched.sarif.json, citations.json
Credibility feature: when posture says "no network" but the repo shows network primitives, the outcome stays "requires review." The product is designed to surface uncertainty, not hide it.
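The mismatch behavior can be sketched as a consistency control. The control ID and manifest fields follow the gold case shown in this document; the function body is an assumed simplification of the real policy logic.

```python
def check_network_consistency(manifest: dict) -> dict:
    """If posture says no network but network signals were observed,
    return a REVIEW outcome instead of a pass or an assumed failure.
    Sketch only; the real control logic is richer."""
    declared_mode = manifest.get("network", {}).get("mode")
    observed = manifest.get("repo_signals_counts", {}).get("SIG-NETWORK", 0)
    if declared_mode == "none" and observed > 0:
        return {
            "control_id": "CP-CONSIST-NET-001",
            "status": "manual_review",   # never auto-pass on a mismatch
            "rationale": f"declared network=none but {observed} network signal(s) observed",
        }
    return {"control_id": "CP-CONSIST-NET-001", "status": "meets"}

outcome = check_network_consistency({
    "repo_signals_counts": {"SIG-NETWORK": 3},
    "network": {"mode": "none"},
})
print(outcome["status"])  # manual_review
```

Note the asymmetry: a mismatch downgrades to REVIEW, not FAIL, because the evidence shows inconsistency, not a confirmed violation.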
RPT
Evidence Pack And Handoff
The canonical output is the Evidence Pack: a reviewer-ready bundle with stable JSON artifacts, review workflow state, lineage, docs-alignment findings, and generated reports when needed. DOCX remains the narrative handoff, but the pack is the system-of-record output.
decision.json + signals + meta -> evidence_pack.zip, run_summary.json, Report v2 (DOCX)
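Bundling the pack can be sketched with the standard library. The artifact names match those listed in this document; the packing logic itself is an assumption.

```python
import io
import json
import zipfile

def build_evidence_pack(artifacts: dict) -> bytes:
    """Write each named artifact as a JSON member of the pack (sketch)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, payload in artifacts.items():
            zf.writestr(name, json.dumps(payload, indent=2))
    return buf.getvalue()

pack = build_evidence_pack({
    "decision.json": {"overall": "yellow"},
    "run_summary.json": {"run_id": "run-001"},
})
with zipfile.ZipFile(io.BytesIO(pack)) as zf:
    print(sorted(zf.namelist()))  # ['decision.json', 'run_summary.json']
```

Because the pack is a plain ZIP of stable JSON artifacts, a downstream GRC platform can ingest it without parsing the DOCX narrative.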
Evaluation Outcomes

Three Statuses - No Ambiguity

Every control produces one of three outcomes. REVIEW means human verification needed - never an automated pass.
MEETS

No inconsistency detected between declared posture and observed indicators.

REVIEW

Potential inconsistency detected. Requires human verification before posture can be asserted.

FAIL

Hard failure against required evidence or policy threshold. Blocks green posture.

Source of truth: decision.json - deterministic and reproducible on every run
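The rollup from per-control outcomes to an overall status can be sketched as a worst-case reduction. The FAIL > REVIEW > MEETS precedence and the red/yellow/green mapping are inferred from this document ("blocks green posture"; the gold case maps one `manual_review` to overall `yellow`); the exact engine logic is an assumption.

```python
# Assumed precedence: fail > manual_review > meets; colors inferred from the doc.
SEVERITY = {"fail": 2, "manual_review": 1, "meets": 0}
OVERALL = {2: "red", 1: "yellow", 0: "green"}

def overall_status(controls: list[dict]) -> str:
    """Overall status is the worst per-control outcome in the run."""
    worst = max((SEVERITY[c["status"]] for c in controls), default=0)
    return OVERALL[worst]

print(overall_status([
    {"control_id": "A", "status": "meets"},
    {"control_id": "B", "status": "manual_review"},
]))  # yellow
```

A pure function of the control outcomes is what makes decision.json deterministic and reproducible across runs.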
Run Artifacts

What Review Teams Actually Get

Every run produces machine-readable, run-scoped artifacts. The Evidence Pack is the primary handoff, and the underlying contracts remain exportable for downstream systems.
manifest.json
Normalized declaration contract with pinned schema version
repo_signals.json
Signal evidence: severity, match counts, path:lineno snippets
decision.json
Applicable control outcomes for the current run, with rationale and required evidence paths
run_summary.json
Stable assessment summary, reviewer state, and run lineage contract
evidence_pack.zip
Canonical reviewer handoff with decision-ready artifacts, lineage, and reports
Report v2 (DOCX)
Optional narrative handoff alongside the Evidence Pack
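A downstream consumer would typically validate the pinned schema version before ingesting any of these artifacts. This is a sketch under assumed names: the `schema_version` field and the expected value are illustrative.

```python
import json

EXPECTED_SCHEMA = "1.0.0"  # assumed pin; real artifacts carry their own version

def load_artifact(raw: str) -> dict:
    """Parse a run artifact and reject unknown schema versions (sketch)."""
    doc = json.loads(raw)
    version = doc.get("schema_version")
    if version != EXPECTED_SCHEMA:
        raise ValueError(f"unsupported schema_version: {version!r}")
    return doc

doc = load_artifact('{"schema_version": "1.0.0", "overall": "green"}')
print(doc["overall"])  # green
```

Failing fast on a version mismatch is what keeps "machine-readable" honest: a consumer never silently misreads an artifact written under a newer contract.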
Crosswalk Library

Traceability, Not Certification

127 implemented · 123 active in static mode · 4 runtime controls parked
Framework                     Targets   Controls Mapped
NIST AI 600-1                 13        123
OWASP LLM Top 10 (2025)       10        47
EU AI Act                     30        87
SOC 2 (CC subset)             21        97
ISO 27001 Annex A (2022)      93        80
ISO 42001                     16        98
GDPR                          9         20
HIPAA                         8         25
Responsible AI                6         21
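A crosswalk of this shape is naturally a mapping from control IDs to framework targets. The sketch below uses the control ID from this document's gold case, but the specific target mappings are invented placeholders, not the real crosswalk library.

```python
# Placeholder crosswalk entries; real mappings span 9 frameworks.
CROSSWALK = {
    "CP-CONSIST-NET-001": {
        "OWASP LLM Top 10 (2025)": ["LLM06"],          # hypothetical target
        "ISO 27001 Annex A (2022)": ["A.8.20"],        # hypothetical target
    },
}

def frameworks_for(control_id: str) -> list[str]:
    """List the frameworks a control is traceable to (sketch)."""
    return sorted(CROSSWALK.get(control_id, {}))

print(frameworks_for("CP-CONSIST-NET-001"))
```

The direction matters: the crosswalk claims traceability from a control outcome to framework targets, not certification against the frameworks themselves.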
The Gold Suite

Regression-Tested Policy Engine

127 implemented · 123 active in static mode · 4 runtime controls parked
This isn't a checklist. It's a tested policy engine. We can add controls fast without breaking prior decisions.
416
Gold Test Cases
123
Active Static Controls
9
Frameworks Mapped
Input manifest with structured evidence and declarations
Expected audit outcome - overall status + per-control statuses
PASS + FAIL/REVIEW scenarios for every control
Edge cases: prod vs non-prod, risk tiers, tools enabled/disabled
Run on every change to prevent policy drift
Every policy change is validated against all 416 cases before release. Four runtime controls remain parked while Live Observation is disabled.
// Gold Case - consistency check triggers REVIEW
{
  "case_id": "CASE-GOLD-COV-NET-MISMATCH",
  "input": {
    "manifest": {
      "repo_signals_counts": { "SIG-NETWORK": 3, "SIG-NETWORK-RESTRICTED": 1 },
      "network": { "mode": "none" }
    }
  },
  "expected": {
    "controls": [
      { "control_id": "CP-CONSIST-NET-001", "status": "manual_review" }
    ],
    "overall": "yellow"
  }
}
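A gold-suite runner for cases of this shape can be sketched as follows. `stub_evaluate` stands in for the real policy engine, which is not shown in this document; the case structure mirrors the example above.

```python
def run_gold_case(case: dict, evaluate) -> bool:
    """Evaluate a case's input manifest and compare against expected statuses."""
    result = evaluate(case["input"]["manifest"])
    expected = {c["control_id"]: c["status"] for c in case["expected"]["controls"]}
    got = {c["control_id"]: c["status"] for c in result["controls"]}
    # Compare only the controls the case pins, plus the overall rollup.
    return (all(got.get(cid) == status for cid, status in expected.items())
            and result["overall"] == case["expected"]["overall"])

def stub_evaluate(manifest):
    """Toy engine: flags the declared-vs-observed network mismatch."""
    mismatch = (manifest.get("network", {}).get("mode") == "none"
                and manifest.get("repo_signals_counts", {}).get("SIG-NETWORK", 0) > 0)
    status = "manual_review" if mismatch else "meets"
    return {"controls": [{"control_id": "CP-CONSIST-NET-001", "status": status}],
            "overall": "yellow" if mismatch else "green"}

case = {
    "case_id": "CASE-GOLD-COV-NET-MISMATCH",
    "input": {"manifest": {"repo_signals_counts": {"SIG-NETWORK": 3},
                           "network": {"mode": "none"}}},
    "expected": {"controls": [{"control_id": "CP-CONSIST-NET-001",
                               "status": "manual_review"}],
                 "overall": "yellow"},
}
print(run_gold_case(case, stub_evaluate))  # True
```

Running every case on every policy change is what catches drift: a new control that flips a pinned status fails the suite before release.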

Evidence-First AI Governance Review.
