CodeTrust — AI Governance Enforcement Platform
AI agents write code, run commands, and modify your codebase. CodeTrust intercepts destructive actions, catches hallucinated packages with 95% accuracy, and enforces governance across 9 layers — before anything reaches production.
Every scan, every block, every verification — streamed live from production infrastructure. These numbers update in real time.
Impact categories track high-severity threats prevented. Total findings includes all severity levels across every scan.
Your AI agents are generating the same patterns right now. The question is whether anyone is catching them.
9 enforcement layers. 2,928 rules. Real-time interception.
pip install codetrust && codetrust init — governance active in 30 seconds.
One command. Nine enforcement layers. Thirty seconds.
Intercepts every terminal command before execution. git push → BLOCKED. rm -rf / → BLOCKED. The agent cannot proceed.
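The interception pattern can be sketched as a shell gate that vets each command before it runs; `guardian_gate` and its blocklist are illustrative stand-ins, not CodeTrust's actual gateway interface:

```shell
# Minimal sketch of pre-execution command interception. `guardian_gate`
# and its blocklist are illustrative assumptions, not the real gateway;
# the pattern is: match the command first, return exit code 2 on a block.
guardian_gate() {
  case "$1" in
    "git push"*|"rm -rf /"*) echo "BLOCKED: $1" >&2; return 2 ;;
    *) echo "ALLOWED: $1"; return 0 ;;
  esac
}

guardian_gate "ls -la"             # allowed commands pass through
guardian_gate "git push" || true   # blocked before execution, exit code 2
```

Exit code 2 is the blocking convention this page describes; the wrapping agent sees the nonzero status and cannot proceed.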
Claude Code, Cursor, Claude Desktop — all configured. Gateway + Guardian registered in every IDE automatically.
43 MCP tools
Commit gate. Every staged file scanned. BLOCK findings = rejected commit. No exceptions.
commit gate
PR gate. Full scan on every pull request. SARIF upload. Fails the check on BLOCK findings.
PR gate
CLAUDE.md and .cursorrules injected. The agent is told to call CodeTrust before every action.
agent-aware
.codetrust.toml — enforce mode, audit logging, 82 gateway rules, protected paths.
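The commit gate could be wired as a pre-commit hook along these lines. `codetrust scan` as the scanner entry point is an assumption (overridable here via `SCANNER`); the enforcement pattern is the point: scan the staged files, reject the commit on failure.

```shell
#!/bin/sh
# Sketch of the commit gate as a .git/hooks/pre-commit hook. SCANNER
# defaults to `codetrust scan`, an assumed CLI entry point; the pattern
# is what matters: scan staged files, reject the commit on failure.
: "${SCANNER:=codetrust scan}"

run_gate() {
  staged="$1"   # e.g. $(git diff --cached --name-only --diff-filter=ACM)
  if [ -z "$staged" ]; then
    echo "nothing staged"
    return 0
  fi
  if $SCANNER $staged >/dev/null 2>&1; then
    echo "commit allowed"
  else
    echo "commit rejected: BLOCK findings present" >&2
    return 1
  fi
}
```

In the hook itself, the final line would be `run_gate "$(git diff --cached --name-only --diff-filter=ACM)" || exit 1`; a nonzero exit from pre-commit is what rejects the commit.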
Verification. Tests every layer. git push → BLOCKED (exit 2). Green = enforced.
Run it after init to confirm every layer is active.
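A minimal sketch of that verification loop, assuming the exit-code-2 convention above; `check` and the canary commands are illustrative, not the real verifier:

```shell
# Hypothetical layer-verification loop, mirroring "green = enforced":
# run a canary through each gate and expect exit code 2 (BLOCKED).
check() {
  desc="$1"; shift
  rc=0; "$@" >/dev/null 2>&1 || rc=$?
  if [ "$rc" -eq 2 ]; then echo "GREEN $desc"; else echo "RED $desc"; fi
}

check "terminal gate" sh -c 'exit 2'   # a gate that blocks reports GREEN
check "commit gate"   sh -c 'exit 0'   # a gate that lets through reports RED
```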
SonarQube checks quality. Snyk checks dependencies. Semgrep checks patterns. Nobody checks what the AI agent itself is doing. That requires governance, not just scanning.
My AI just tried to install a package that doesn't exist. CodeTrust caught it before it shipped.
Hallucination detection · One command setup · Free
2,928 rules that tell me exactly what's wrong and how to fix it. Not noise — signal.
Guided remediation · Taint analysis · Zero FP
Who's writing my code — my team or their AI? I need visibility and control.
AI Attribution · Model tracking · Policy engine
Which students used AI? Which model? Which lines? Proof, not suspicion.
Attribution trails · Shadow AI detection · Per-line proof
Your team uses Claude, Copilot, Cursor, and three tools you don't know about. CodeTrust tracks every model, every edit, every commit.
Not "unsafe code detected." Exact line, exact fix, root cause explanation. The agent reads the suggestion and self-corrects. Zero back-and-forth.
Students submit code. You need to know: did they write it, or did an AI? CodeTrust gives you per-line attribution with model identification.
You asked Claude to build a Stripe integration. It imported stripe_helpers — a package that doesn't exist. A typosquatter registered it 12 minutes ago. CodeTrust blocked it before pip install finished.
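A guard against that failure mode is an existence check before install. The allowlist and `guarded_install` below are illustrative stand-ins for a live lookup across the 8 registries:

```shell
# Offline sketch of a hallucinated-package guard. KNOWN_PKGS stands in
# for a live registry query (the real check would consult 8 registries);
# any package not found is blocked before `pip install` runs.
KNOWN_PKGS=" stripe requests numpy "
guarded_install() {
  case "$KNOWN_PKGS" in
    *" $1 "*) echo "ALLOWED: pip install $1" ;;
    *) echo "BLOCKED: $1 not found in any registry" >&2; return 2 ;;
  esac
}

guarded_install stripe                   # known package: allowed
guarded_install stripe_helpers || true   # unknown name: blocked, exit 2
```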
CodeTrust works across Cursor, VS Code, and any AI coding tool. Enforcement happens before execution — not inside the editor.
Claude Code, Cursor, AI agents — MCP Gateway + Guardian intercept every action. Commands are blocked before execution.
Extension scans on save with inline findings. BASH_ENV guard blocks commands at OS level. GitHub Action enforces rules in CI.
Terminal, CI, scripts, agents — pre-commit hook + GitHub Action + BASH_ENV guard. Same enforcement everywhere.
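The BASH_ENV guard can be approximated like this; the DEBUG-trap mechanism shown is an assumption about how such a guard works, not CodeTrust's shipped implementation:

```shell
# Sketch of an OS-level guard via BASH_ENV: every non-interactive bash
# sources the file named in $BASH_ENV, so a DEBUG trap set there can
# veto a command before it executes. The guard file is illustrative.
guard=$(mktemp)
cat > "$guard" <<'EOF'
deny() { echo "BLOCKED: $BASH_COMMAND" >&2; exit 2; }
trap 'case $BASH_COMMAND in "rm -rf /"*) deny ;; esac' DEBUG
EOF
export BASH_ENV="$guard"

bash -c 'echo safe command runs'   # unmatched commands run normally
```

With the variable exported, any script or agent that spawns `bash` inherits the guard: a matching command is denied with exit code 2 before it ever runs.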
Your editor doesn't matter. Execution does.
SAST tools were built for human-written code. AI agents don't just write code — they execute commands, install packages, and modify files. That requires governance, not just scanning.
| Capability | CodeTrust | SonarQube | Semgrep | Snyk |
|---|---|---|---|---|
| AI Governance Gateway | 82 rules | — | — | — |
| Auto-installed Enforcement Hooks | Yes | — | — | — |
| Hallucination Detection | 8 registries | — | — | — |
| Guided Remediation | 2,928 suggestions | — | — | — |
| AI Observability | 26 models | — | — | — |
| Taint Analysis | Cross-language | Yes | Yes | Yes |
| SAST Rules | 2,928 | 5,000+ | 3,000+ | SAST |
| Price / 10 devs | $0 | $32/mo | $400/mo | $250/mo |
One command. Nine layers. Fully verified.
ext install SaidBorna.codetrust
Scan on save. Inline findings. One-click fix.
pip install codetrust
PR gate. Fails the check on BLOCK findings.
app.codetrust.ai
Scan quota, enforcement metrics, compliance status.