```shell
pip install red-specter-mimic
```
Every developer using Copilot, CodeWhisperer, or any AI coding assistant trusts the suggestions they receive. Those suggestions come from training data. Training data comes from public repositories. Public repositories can be poisoned. MIMIC proves that the trust developers place in AI-generated code is the vulnerability.
Developers accept AI code suggestions without examining them for subtle vulnerabilities. Tab completion moves fast. Security review moves slow. MIMIC exploits the gap between the two.
AI coding models train on public repositories. If those repositories contain malicious code patterns, the model learns to suggest them. MIMIC demonstrates how a targeted repository poisoning campaign propagates through an AI model into production codebases.
A single subtle bug in a frequently-suggested function template reaches every developer who accepts that suggestion. SQL injection patterns. Race conditions. Insecure deserialization. Invisible to the reviewer who trusts the AI.
AI completions are context-sensitive. Manipulate the context — inject specific comment patterns, function signatures, import statements — and you steer what the AI suggests next. The developer never sees the manipulation.
AI-powered code review tools share the same training biases as AI coding assistants. Code crafted to evade AI review exploits these shared blind spots. The reviewer approves what the suggester inserted.
AI coding assistants suggest package names. If an attacker controls a package with the same name as an internal one, the AI suggests the public malicious version. Dependency confusion weaponised through the suggestion layer.
Seven subsystems. Each one attacks a different layer of the AI code generation pipeline. From suggestion manipulation to training data poisoning to mandatory restoration — MIMIC covers the full attack lifecycle. Every engagement requires an ANTIDOTE baseline before any execution begins.
| # | Subsystem | Command | What It Does |
|---|---|---|---|
| 01 | SUGGEST | mimic suggest run | Suggestion manipulation. Context poisoning via crafted prefix injection. Completion steering. Autocomplete exploitation. IDE integration attacks. Maps AI assistant susceptibility to context manipulation. |
| 02 | TRAIN | mimic train run | Training data poisoning campaigns. Backdoor pattern injection into public repositories. Star-bombing vulnerable code to raise training signal. Dataset contamination analysis. Fine-tuning exploitation vectors. |
| 03 | INJECT | mimic inject run | Vulnerability injection via AI suggestions. SQL injection patterns. Buffer overflows. Insecure deserialization. Authentication bypass. Race conditions. Each vulnerability crafted to pass standard code review. |
| 04 | COMPLETE | mimic complete run | Completion hijacking through context control. Function signature manipulation. Import statement poisoning. Dependency suggestion steering. Type confusion injection. Tests every context vector that influences completions. |
| 05 | REVIEW | mimic review run | AI code review bypass. Semantic obfuscation of malicious patterns. Complexity hiding. Diff minimisation. Review fatigue exploitation. Crafts code that passes AI-powered review while containing active vulnerabilities. |
| 06 | SUPPLY | mimic supply run | AI-assisted dependency confusion. Package name squatting via AI suggestion steering. Internal package namespace poisoning. Lockfile manipulation. Maps which package names an AI assistant will suggest in target project contexts. |
| 07 | ANTIDOTE | mimic antidote run | Mandatory restoration subsystem. Captures full baseline before any engagement begins. Code integrity verification. Suggestion audit trail. Signed restoration certificate. UNLEASHED gate — ANTIDOTE must complete before INJECT or TRAIN can run. |
MIMIC analyses the target's coding context before crafting payloads. Language, framework, coding style — all factored in to maximise suggestion acceptance rate.
Injected vulnerabilities are crafted to blend with the surrounding code. Detection rates under standard automated and human code review are measured and reported for every injection.
Every MIMIC engagement generates a cryptographically signed report. Ed25519 signatures. SHA-256 evidence chains. Tamper-evident by design.
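A SHA-256 evidence chain of the kind described can be sketched in a few lines: each report entry's hash covers its own content plus the previous entry's hash, so altering or deleting any record breaks every hash after it. This is an illustrative sketch of the general technique, not MIMIC's actual report format; the record fields and helper names are hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64  # fixed starting value for the first link

def chain_records(records):
    """Link records into a tamper-evident SHA-256 hash chain."""
    prev = GENESIS
    chained = []
    for rec in records:
        payload = json.dumps(rec, sort_keys=True)  # canonical serialisation
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        chained.append({"record": rec, "hash": digest})
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute every link; any edit anywhere returns False."""
    prev = GENESIS
    for entry in chained:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Signing the final link with an Ed25519 key (as the report claims to do) then commits to the entire chain with a single signature.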
No exploitation without a signed baseline. ANTIDOTE captures the pre-engagement state and gates UNLEASHED execution. Restoration is always possible.
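A pre-engagement baseline of this kind amounts to hashing every file in the target tree and diffing against a later snapshot. The sketch below shows the general idea under that assumption; the function names are illustrative, not MIMIC's API.

```python
import hashlib
import os

def snapshot(root):
    """Capture a baseline: SHA-256 digest of every file under root,
    keyed by path relative to root."""
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            baseline[os.path.relpath(path, root)] = digest
    return baseline

def changed_paths(before, after):
    """Return every path that was modified, added, or removed
    between two snapshots."""
    keys = set(before) | set(after)
    return {k for k in keys if before.get(k) != after.get(k)}
```

Restoration then reduces to reverting exactly the paths reported by `changed_paths` to their baseline content.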
Standard mode maps and detects. UNLEASHED exploits. Ed25519 cryptographic dual-gate. One operator. Founder's machine only. ANTIDOTE must complete before any live execution is permitted.
Maps AI code generation attack surfaces. Identifies vulnerable suggestion pipelines, context injection points, and training data exposure. No exploitation. Reports only.
Plans full poisoning campaigns. Shows exactly what would be injected and where. Ed25519 key required. No execution. Full plan output with expected impact assessment.
Cryptographic override. Private key controlled. One operator. Founder's machine only. ANTIDOTE baseline required before any live run is authorised.
THIS TOOL IS FOR AUTHORISED SECURITY TESTING ONLY. EVERY EXECUTION IS SIGNED AND LOGGED.
Red Specter MIMIC is intended for authorised security testing only. Unauthorised use against AI systems, code generation pipelines, or training data repositories you do not own or have explicit written permission to test may violate the Computer Misuse Act 1990 (UK), Computer Fraud and Abuse Act (US), and equivalent legislation in other jurisdictions. Always obtain written authorisation before conducting any security assessments. Apache License 2.0.