MIMIC

Your AI writes your code. MIMIC writes what it suggests. Poison the pipeline. Own every codebase it touches.
7 Subsystems · 36 Techniques · 220 Tests Passing · 0 Failures · NIGHTFALL Tool 32
pip install red-specter-mimic
Your AI coding assistant is trusted implicitly / Every suggestion it makes enters your codebase / Training data determines what gets suggested / Poisoned repos poison the model / Subtle vulnerabilities pass code review / AI-assisted review misses AI-crafted bugs / Dependency confusion via suggestion / Completion hijacking is invisible / You trusted the code your AI wrote

The AI Writes the Code. Nobody Tests the Suggestions.

Every developer using Copilot, CodeWhisperer, or any AI coding assistant trusts the suggestions they receive. Those suggestions come from training data. Training data comes from public repositories. Public repositories can be poisoned. MIMIC proves that the trust developers place in AI-generated code is the vulnerability.

Implicit Suggestion Trust

Developers accept AI code suggestions without examining them for subtle vulnerabilities. Tab completion moves fast. Security review moves slow. MIMIC exploits the gap between the two.

Poisoned Training Corpus

AI coding models train on public repositories. If those repositories contain malicious code patterns, the model learns to suggest them. MIMIC demonstrates how a targeted repository poisoning campaign propagates through an AI model into production codebases.

Vulnerability Injection at Scale

A single subtle bug in a frequently suggested function template reaches every developer who accepts that suggestion. SQL injection patterns. Race conditions. Insecure deserialization. Invisible to the reviewer who trusts the AI.
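As a hypothetical illustration (not MIMIC's actual payloads), the sketch below shows the kind of one-line flaw the text describes: SQL built from an f-string reads cleanly in review but is injectable, while the parameterised form is safe. A naive regex detector for the pattern is included to show how thin the line between the two is.

```python
import re

# Hypothetical examples: the injectable variant builds the query from
# an f-string; the safe variant passes the value as a bound parameter.
VULNERABLE = "cur.execute(f\"SELECT * FROM users WHERE name = '{name}'\")"
SAFE = 'cur.execute("SELECT * FROM users WHERE name = ?", (name,))'

# Naive detector: flag execute() calls whose query literal is an f-string.
FSTRING_SQL = re.compile(r'execute\(\s*f["\']')

def flags(line: str) -> bool:
    """True if the line matches the f-string-SQL pattern."""
    return bool(FSTRING_SQL.search(line))

print(flags(VULNERABLE))  # True
print(flags(SAFE))        # False
```

A real scanner would parse the AST rather than grep source lines; the regex is only there to make the vulnerable/safe contrast concrete.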

Completion Context Hijacking

AI completions are context-sensitive. Manipulate the context — inject specific comment patterns, function signatures, import statements — and you steer what the AI suggests next. The developer never sees the manipulation.

AI Review Blind Spots

AI-powered code review tools share the same training biases as AI coding assistants. Code crafted to evade AI review exploits these shared blind spots. The reviewer approves what the suggester inserted.

Dependency Confusion via Suggestion

AI coding assistants suggest package names. If an attacker controls a package with the same name as an internal one, the AI suggests the public malicious version. Dependency confusion weaponised through the suggestion layer.
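The collision check underlying this attack can be sketched defensively. The snippet below is a minimal, assumed example: the internal namespace and public index are hard-coded sets (a real check would query the actual registry, e.g. PyPI's JSON API), and it flags any requirement whose name is both internal and claimed publicly.

```python
# Hypothetical data: assumed internal package names and a stubbed
# public registry. In practice the public side comes from a live
# registry query, not a hard-coded set.
INTERNAL_PACKAGES = {"acme-auth", "acme-billing", "acme-utils"}
PUBLIC_INDEX = {"requests", "flask", "acme-utils"}

def confusion_risks(requirements: list[str]) -> list[str]:
    """Return requirement names that shadow an internal package
    on the public index -- candidates for dependency confusion."""
    return sorted(
        name for name in requirements
        if name in INTERNAL_PACKAGES and name in PUBLIC_INDEX
    )

print(confusion_risks(["requests", "acme-utils", "acme-auth"]))  # ['acme-utils']
```

Here only acme-utils is flagged: it is internal *and* resolvable publicly, which is exactly the condition an AI suggestion can silently exploit.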

The MIMIC Attack Surface

Seven subsystems. Each one attacks a different layer of the AI code generation pipeline. From suggestion manipulation to training data poisoning to mandatory restoration — MIMIC covers the full attack lifecycle. Every engagement requires an ANTIDOTE baseline before any execution begins.

Subsystem · Command · What It Does

01 SUGGEST · mimic suggest run
Suggestion manipulation. Context poisoning via crafted prefix injection. Completion steering. Autocomplete exploitation. IDE integration attacks. Maps AI assistant susceptibility to context manipulation.

02 TRAIN · mimic train run
Training data poisoning campaigns. Backdoor pattern injection into public repositories. Star-bombing vulnerable code to raise training signal. Dataset contamination analysis. Fine-tuning exploitation vectors.

03 INJECT · mimic inject run
Vulnerability injection via AI suggestions. SQL injection patterns. Buffer overflows. Insecure deserialization. Authentication bypass. Race conditions. Each vulnerability crafted to pass standard code review.

04 COMPLETE · mimic complete run
Completion hijacking through context control. Function signature manipulation. Import statement poisoning. Dependency suggestion steering. Type confusion injection. Tests every context vector that influences completions.

05 REVIEW · mimic review run
AI code review bypass. Semantic obfuscation of malicious patterns. Complexity hiding. Diff minimisation. Review fatigue exploitation. Crafts code that passes AI-powered review while containing active vulnerabilities.

06 SUPPLY · mimic supply run
AI-assisted dependency confusion. Package name squatting via AI suggestion steering. Internal package namespace poisoning. Lockfile manipulation. Maps which package names an AI assistant will suggest in target project contexts.

07 ANTIDOTE · mimic antidote run
Mandatory restoration subsystem. Captures full baseline before any engagement begins. Code integrity verification. Suggestion audit trail. Signed restoration certificate. UNLEASHED gate — ANTIDOTE must complete before INJECT or TRAIN can run.

Context-Aware Attacks

MIMIC analyses the target's coding context before crafting payloads. Language, framework, coding style — all factored in to maximise suggestion acceptance rate.

Subtle by Design

Injected vulnerabilities are crafted to blend with surrounding code. Detection rates for standard automated and human code review are measured and reported for every injection.

Ed25519 Signed Reports

Every MIMIC engagement generates a cryptographically signed report. Ed25519 signatures. SHA-256 evidence chains. Tamper-evident by design.
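The evidence-chain half of this design can be sketched with the standard library alone. In the assumed example below, each SHA-256 digest covers the event plus the previous digest, so editing or reordering any event changes every later link; the Ed25519 signature over the final digest would be added with a third-party library such as PyNaCl and is omitted here to stay stdlib-only.

```python
import hashlib
import json

def chain(events: list[dict]) -> list[str]:
    """SHA-256 evidence chain: digest N covers event N plus digest N-1,
    making the sequence tamper-evident end to end."""
    digests, prev = [], "0" * 64  # genesis link: all-zero digest
    for event in events:
        payload = prev + json.dumps(event, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        digests.append(prev)
    return digests

a = chain([{"step": "baseline"}, {"step": "inject"}])
b = chain([{"step": "inject"}, {"step": "baseline"}])  # same events, reordered
print(a[-1] != b[-1])  # True: reordering changes the head digest
```

Signing only the head digest is enough: any change anywhere in the chain invalidates it, which is what makes the report tamper-evident rather than merely logged.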

ANTIDOTE Gate

No exploitation without a signed baseline. ANTIDOTE captures the pre-engagement state and gates UNLEASHED execution. Restoration is always possible.
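The gate pattern described above can be sketched as follows. This is a hypothetical simplification of ANTIDOTE, not its implementation: a baseline of file hashes is snapshotted, any exploitation step refuses to run without one, and restoration is verified by re-hashing against the snapshot.

```python
import hashlib
from pathlib import Path

def snapshot(paths: list[Path]) -> dict[str, str]:
    """Baseline: SHA-256 of every file in scope, keyed by path."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest() for p in paths}

def run_gated(step, baseline):
    """Refuse to execute any step unless a baseline was captured first."""
    if baseline is None:
        raise PermissionError("ANTIDOTE baseline missing: execution blocked")
    return step()

def verify_restored(paths: list[Path], baseline: dict[str, str]) -> bool:
    """Restoration check: current state must hash identically to baseline."""
    return snapshot(paths) == baseline

# The gate in action: no baseline, no execution.
try:
    run_gated(lambda: "exploit", baseline=None)
except PermissionError:
    print("blocked")
```

The real subsystem adds a signed restoration certificate on top; the point of the sketch is only the ordering constraint: snapshot first, execute second, verify third.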

UNLEASHED Clearance

Standard mode maps and detects. UNLEASHED exploits. Ed25519 cryptographic dual-gate. One operator. Founder's machine only. ANTIDOTE must complete before any live execution is permitted.

Detection

Maps AI code generation attack surfaces. Identifies vulnerable suggestion pipelines, context injection points, and training data exposure. No exploitation. Reports only.

Dry Run

Plans full poisoning campaigns. Shows exactly what would be injected and where. Ed25519 key required. No execution. Full plan output with expected impact assessment.

Live Execution

Cryptographic override. Private key controlled. One operator. Founder's machine only. ANTIDOTE baseline required before any live run is authorised.

THIS TOOL IS FOR AUTHORISED SECURITY TESTING ONLY. EVERY EXECUTION IS SIGNED AND LOGGED.

Every Finding Mapped

OWASP LLM

OWASP LLM Top 10 — 2025

  • LLM03 Supply Chain Vulnerabilities
  • LLM04 Data and Model Poisoning
  • LLM05 Improper Output Handling
  • LLM01 Prompt Injection (context steering)
  • LLM06 Excessive Agency
  • LLM09 Misinformation (vulnerability injection)
MITRE ATLAS

MITRE ATLAS Mappings

  • AML.T0018 Backdoor ML Model
  • AML.T0020 Poison Training Data
  • AML.T0015 Evade ML Model
  • AML.T0010 ML Supply Chain Compromise
  • AML.T0043 Craft Adversarial Data
  • AML.T0049 Exploit Public-Facing Application
Cryptographic

Report Integrity

  • Ed25519 digital signatures
  • SHA-256 evidence chains
  • RFC 3161 timestamps
  • Tamper-evident by design
  • ANTIDOTE restoration certificates
  • Machine-ingestible JSON output

Security Distros & Package Managers

Kali Linux · .deb package
Parrot OS · .deb package
BlackArch · PKGBUILD
REMnux · .deb package
Tsurugi · .deb package
PyPI · pip install
macOS · pip install
Windows · pip install
Docker · docker pull

Authorised Use Only

Red Specter MIMIC is intended for authorised security testing only. Unauthorised use against AI systems, code generation pipelines, or training data repositories you do not own or have explicit written permission to test may violate the Computer Misuse Act 1990 (UK), Computer Fraud and Abuse Act (US), and equivalent legislation in other jurisdictions. Always obtain written authorisation before conducting any security assessments. Apache License 2.0.

Ed25519 Cryptographic Override
MIMIC UNLEASHED

Cryptographic override. Private key controlled. One operator. Founder's machine only.