MIMIC

AI Code Generation Poisoning

Your AI writes your code. MIMIC writes what it suggests. Suggestion manipulation, training data poisoning, vulnerability injection, completion hijacking, and supply chain dependency confusion — weaponised for authorised red team engagements.

7 Subsystems
36 Techniques
220 Tests

Poison the Code Before It's Written

MIMIC targets the AI code generation pipeline. Every developer using Copilot, CodeWhisperer, or any AI coding assistant trusts the suggestions they receive. MIMIC proves that trust is the vulnerability. Manipulate what the AI suggests and every codebase it touches inherits your payload.

01

SUGGEST

SUGGESTION MANIPULATION

Influence AI code completions. Context poisoning. Prompt prefix injection. Completion steering. Autocomplete exploitation. IDE integration attacks.

02

TRAIN

TRAINING DATA POISONING

Poison training repositories. Backdoor pattern injection. Star-bombing vulnerable code. Dataset contamination. Fine-tuning exploitation.

03

INJECT

VULNERABILITY INJECTION

Inject subtle vulnerabilities via AI suggestions. SQL injection patterns. Buffer overflows. Insecure deserialisation. Authentication bypass. Race conditions.

04

COMPLETE

COMPLETION HIJACKING

Hijack code completion context. Function signature manipulation. Import statement poisoning. Dependency suggestion manipulation. Type confusion injection.

05

REVIEW

REVIEW BYPASS

Craft code that passes AI-powered code review. Semantic obfuscation. Complexity hiding. Diff minimisation. Review fatigue exploitation.

06

SUPPLY

DEPENDENCY CONFUSION

AI-assisted dependency confusion. Package name squatting via suggestion. Internal package namespace poisoning. Lockfile manipulation.

07

ANTIDOTE

MANDATORY RESTORE

Baseline capture before any engagement. Code integrity verification. Suggestion audit trail. Signed restoration certificate.
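Baseline capture and integrity verification boil down to hashing every file before the engagement and diffing against that snapshot afterwards. A minimal stdlib sketch of the idea; the function names and return shapes here are illustrative assumptions, not MIMIC's actual API:

```python
import hashlib
from pathlib import Path

def capture_baseline(root: str) -> dict[str, str]:
    """Snapshot a codebase: map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*"))
        if p.is_file()
    }

def verify_baseline(root: str, baseline: dict[str, str]) -> list[str]:
    """Return paths whose contents changed, appeared, or vanished since capture."""
    current = capture_baseline(root)
    return sorted(p for p in baseline.keys() | current.keys()
                  if baseline.get(p) != current.get(p))
```

An empty list from `verify_baseline` means the tree matches the pre-engagement snapshot; anything else is a file the restore step still has to account for.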

UNLEASHED Gate

Standard mode detects. UNLEASHED exploits. Ed25519 crypto. Dual-gate safety. One operator.

Detection

Maps code generation attack surfaces. Identifies vulnerable AI coding pipelines. No exploitation. Reports only.
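Report-only detection of this kind can be as simple as auditing each AI completion against a ruleset of risky constructs before it reaches the editor. A toy sketch under stated assumptions: the patterns and labels below are illustrative, and a real detector would rely on AST analysis and taint tracking rather than regexes:

```python
import re

# Illustrative ruleset only; not MIMIC's detection engine.
RISKY_PATTERNS = {
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "unsafe deserialisation": re.compile(r"\bpickle\.loads?\s*\("),
    "shell injection risk": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "string-built SQL": re.compile(r"\bexecute\s*\(\s*f[\"']"),
}

def audit_suggestion(code: str) -> list[str]:
    """Flag risky constructs in an AI-generated completion. Reports only."""
    return [label for label, pattern in RISKY_PATTERNS.items()
            if pattern.search(code)]
```

The audit never rewrites or blocks the suggestion; it only emits findings, matching the reports-only contract of detection mode.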

Dry Run

Plans full poisoning campaigns. Shows exactly what would work. Ed25519 required. No execution.

Live Execution

Cryptographic override. Private key controlled. One operator. Founder's machine only.
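The dual gate described above amounts to requiring both an explicit UNLEASHED flag and a valid Ed25519 signature over the exact command before anything executes. A minimal sketch using the third-party `cryptography` package; the `authorise` function and its flow are assumptions for illustration, not MIMIC's actual implementation:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

def authorise(command: bytes, signature: bytes, public_key,
              unleashed: bool) -> bool:
    """Dual gate: live execution requires BOTH the UNLEASHED flag
    and a valid Ed25519 signature over the exact command bytes."""
    if not unleashed:
        return False  # gate 1: mode flag absent, never escalate
    try:
        public_key.verify(signature, command)  # gate 2: crypto check
        return True
    except InvalidSignature:
        return False
```

Because the signature covers the command bytes themselves, a valid signature cannot be replayed against a tampered command, and without the private key no one can mint new authorisations.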

THIS TOOL IS FOR AUTHORISED SECURITY TESTING ONLY. EVERY EXECUTION IS SIGNED AND LOGGED.

36 Techniques
220 Tests
7 Subsystems
49,301 Ecosystem Tests
Available On

Security Distros & Package Managers

Kali Linux: .deb package
Parrot OS: .deb package
BlackArch: PKGBUILD
REMnux: .deb package
Tsurugi: .deb package
PyPI: pip install

Your AI Writes Your Code. MIMIC Writes What It Suggests.

36 techniques. 7 subsystems. Suggestion manipulation. Training data poisoning. Completion hijacking. The tool that proves your AI coding pipeline isn't safe.