Your AI writes your code. MIMIC writes what your AI suggests. Suggestion manipulation, training data poisoning, vulnerability injection, completion hijacking, and supply chain dependency confusion — weaponised for authorised red team engagements.
MIMIC targets the AI code generation pipeline. Every developer using Copilot, CodeWhisperer, or any AI coding assistant trusts the suggestions they receive. MIMIC proves that trust is the vulnerability. Manipulate what the AI suggests, and every codebase it touches inherits your payload.
Influence AI code completions. Context poisoning. Prompt prefix injection. Completion steering. Autocomplete exploitation. IDE integration attacks.
Poison training repositories. Backdoor pattern injection. Star-bombing vulnerable code. Dataset contamination. Fine-tuning exploitation.
Inject subtle vulnerabilities via AI suggestions. SQL injection patterns. Buffer overflows. Insecure deserialisation. Authentication bypass. Race conditions.
Hijack code completion context. Function signature manipulation. Import statement poisoning. Dependency suggestion manipulation. Type confusion injection.
Craft code that passes AI-powered code review. Semantic obfuscation. Complexity hiding. Diff minimisation. Review fatigue exploitation.
AI-assisted dependency confusion. Package name squatting via suggestion. Internal package namespace poisoning. Lockfile manipulation.
Baseline capture before any engagement. Code integrity verification. Suggestion audit trail. Signed restoration certificate.
Standard mode detects. UNLEASHED exploits. Ed25519 crypto. Dual-gate safety. One operator.
Maps code generation attack surfaces. Identifies vulnerable AI coding pipelines. No exploitation. Reports only.
Plans full poisoning campaigns. Shows exactly what would work. Ed25519 required. No execution.
Cryptographic override. Private key controlled. One operator. Founder's machine only.
THIS TOOL IS FOR AUTHORISED SECURITY TESTING ONLY. EVERY EXECUTION IS SIGNED AND LOGGED.
36 techniques. 7 subsystems. Suggestion manipulation. Training data poisoning. Completion hijacking. The tool that proves your AI coding pipeline isn't safe.