pip install red-specter-golem
AI agents are no longer confined to screens. They control robotic arms, drive vehicles, fly drones, and operate surgical systems. These embodied AI systems interact with the physical world, and the physical world can hurt people. Almost nobody is red-teaming the physical layer.
Industrial robots, surgical systems, and collaborative arms run AI decision layers that have rarely, if ever, been pen-tested against adversarial input.
Self-driving systems trust their sensor data. LiDAR injection, GPS spoofing, and adversarial camera inputs are demonstrated in research but rarely tested against production systems.
Emergency stops, collision avoidance, and force limiters are tested for mechanical failure. Far fewer are tested against an attacker who deliberately bypasses them.
Delivery drones, inspection UAVs, and autonomous swarms use MAVLink and MQTT, protocols designed for functionality first, where authentication and message integrity are optional rather than the default.
Eight vectors. Each one targets a different surface of the embodied AI system. Each one produces structured JSON consumed by the report builder. Each finding maps to MITRE ATT&CK for ICS and MITRE ATLAS. Each finding generates an AI Shield blocking rule.
| # | Vector | Command | Technique count | Techniques covered |
|---|---|---|---|---|
| 01 | Sensor Spoofing | `golem attack --vector sensor_spoof` | 6 | LiDAR injection, camera adversarial, GPS spoofing, temp/pressure falsification, ultrasonic jamming, IMU drift |
| 02 | Actuator Hijacking | `golem attack --vector actuator_hijack` | 5 | Motor command injection, servo override, hydraulic manipulation, brake interception, gripper falsification |
| 03 | Safety Boundary | `golem attack --vector safety_boundary` | 6 | Speed limit override, force/torque bypass, exclusion zone violation, E-stop suppression, collision bypass |
| 04 | Physics Model | `golem attack --vector physics_model` | 5 | Map poisoning, object confusion, localisation attack, weight falsification, environmental drift |
| 05 | Human Proximity | `golem attack --vector human_proximity` | 5 | Detection bypass, safety zone violation, E-stop interception, HITL auth bypass, supervision override |
| 06 | C2 Hijacking | `golem attack --vector c2_hijack` | 6 | Prompt injection via sensor, mission falsification, waypoint injection, task queue poisoning, fleet compromise |
| 07 | Inter-Agent Trust | `golem attack --vector inter_agent_trust` | 4 | Swarm coordination poisoning, identity spoofing, consensus manipulation, rogue agent injection |
| 08 | Emergency Bypass | `golem attack --vector emergency_bypass` | 5 | E-stop suppression, failsafe bypass, watchdog manipulation, safety PLC injection, alarm suppression |
Target a physical AI system, specify the protocol and platform type:
GOLEM speaks native industrial protocols: CAN bus, Modbus, OPC-UA, ROS2 and more. Not wrappers, not proxies, but raw protocol-level interaction.
Every attack technique is bounded by configured safety limits, and GOLEM does not exceed them during testing. Assessment mode validates exposure without actuating anything.
Every report is signed with Ed25519, timestamped via RFC 3161, and backed by a SHA-256 evidence chain, so a finding altered after the fact is detectable. Tamper-evident by design.
Every finding generates an AI Shield blocking rule, turning GOLEM findings into runtime protection for physical AI systems.
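To make the "SHA-256 evidence chain" and "Ed25519 signed" claims concrete, here is an added example written for this page, not GOLEM's own report code. It builds an append-only hash chain over a few constructed findings, shows which edits the chain alone catches and which it does not, and signs the chain head with Ed25519 using the third-party `cryptography` package (an assumption; the page names no library). The RFC 3161 timestamp request to a TSA is a network call and is not shown.

```python
import copy
import hashlib
import json

# Constructed sample findings in the vector/technique shape the page describes.
# They are illustrative only, not output of any real assessment.
SAMPLE_FINDINGS = [
    {"id": "F-001", "vector": "sensor_spoof", "technique": "gps_spoofing", "severity": "high"},
    {"id": "F-002", "vector": "safety_boundary", "technique": "estop_suppression", "severity": "critical"},
    {"id": "F-003", "vector": "actuator_hijack", "technique": "servo_override", "severity": "high"},
]

GENESIS = "0" * 64  # the first entry chains from this fixed value


def canonical(obj) -> bytes:
    """Deterministic JSON encoding so an identical finding always hashes identically."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()


def build_chain(findings, prev_hash=GENESIS):
    """Append-only SHA-256 chain: each entry commits to its finding AND the previous entry."""
    chain = []
    for finding in findings:
        finding_hash = hashlib.sha256(canonical(finding)).hexdigest()
        entry_hash = hashlib.sha256((prev_hash + finding_hash).encode()).hexdigest()
        chain.append({
            "finding": finding,
            "finding_hash": finding_hash,
            "prev_hash": prev_hash,
            "entry_hash": entry_hash,
        })
        prev_hash = entry_hash
    return chain


def verify_chain(chain, genesis=GENESIS):
    """Recompute every link. Returns (True, None) or (False, index of first broken entry)."""
    prev_hash = genesis
    for i, entry in enumerate(chain):
        finding_hash = hashlib.sha256(canonical(entry["finding"])).hexdigest()
        entry_hash = hashlib.sha256((prev_hash + finding_hash).encode()).hexdigest()
        if (entry["prev_hash"] != prev_hash
                or entry["finding_hash"] != finding_hash
                or entry["entry_hash"] != entry_hash):
            return False, i
        prev_hash = entry_hash
    return True, None


def tamper(chain, index, field, value, rehash=False):
    """Return a modified copy with one finding edited. With rehash=True the edited entry's
    own hashes are recomputed, which is what an attacker who can rewrite the file would do."""
    copied = copy.deepcopy(chain)
    entry = copied[index]
    entry["finding"][field] = value
    if rehash:
        entry["finding_hash"] = hashlib.sha256(canonical(entry["finding"])).hexdigest()
        entry["entry_hash"] = hashlib.sha256(
            (entry["prev_hash"] + entry["finding_hash"]).encode()).hexdigest()
    return copied


def sign_head(chain, private_key):
    """Ed25519-sign the chain head; one signature then covers every entry (needs `cryptography`)."""
    return private_key.sign(chain[-1]["entry_hash"].encode())


def head_signature_valid(chain, public_key, signature):
    from cryptography.exceptions import InvalidSignature
    try:
        public_key.verify(signature, chain[-1]["entry_hash"].encode())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    chain = build_chain(SAMPLE_FINDINGS)
    print("intact:", verify_chain(chain))
    print("edited entry 0:", verify_chain(tamper(chain, 0, "severity", "low")))
    print("edited+rehashed entry 0:", verify_chain(tamper(chain, 0, "severity", "low", rehash=True)))
    # Rewriting the LAST entry and rehashing it is invisible to the chain alone...
    forged = tamper(chain, 2, "severity", "low", rehash=True)
    print("edited+rehashed last entry:", verify_chain(forged))
    try:
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        key = Ed25519PrivateKey.generate()
        sig = sign_head(chain, key)
        # ...which is why the head is signed: the forged chain fails signature verification.
        print("signature on original:", head_signature_valid(chain, key.public_key(), sig))
        print("signature on forged:", head_signature_valid(forged, key.public_key(), sig))
    except ImportError:
        print("install `cryptography` to run the Ed25519 part")
```

The point a practitioner should take from the last two checks: a hash chain only detects edits that have later entries chained after them. Rewriting the final entry and recomputing its hashes passes `verify_chain`, so the chain head must be anchored by something outside the file, here an Ed25519 signature, and ideally also an RFC 3161 timestamp token over that head.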
GOLEM speaks ten industrial protocols natively. Every protocol parser written from scratch. No wrappers. No shims. Raw protocol-level interaction with the physical control plane.
- CAN bus (automotive & robotics): frame injection, arbitration ID spoofing, bus-off attacks.
- Modbus (SCADA & industrial): register read/write, function code abuse, coil manipulation.
- OPC-UA (automation): node traversal, subscription hijacking, certificate manipulation.
- ROS2 (robotics): topic injection, service spoofing, action server hijacking, DDS exploitation.
- MQTT (IoT & smart building): topic subscription, message injection, broker manipulation.
- MAVLink (drones & UAV): command injection, GPS spoofing, mission upload, geofence bypass.
- EtherCAT (motion control): slave injection, distributed clock manipulation, process data attack.
- DNP3 (power grid): outstation spoofing, unsolicited response injection, cold restart abuse.
- BACnet (smart building): object discovery, property write abuse, COV subscription manipulation.
- DICOM (medical imaging): C-STORE injection, patient record manipulation, modality spoofing.
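As an added illustration of what a from-scratch Modbus/TCP layer looks like, and of how the assessment-mode and safety-limit guards described above can be enforced at the point where a frame is built, here is an example written for this page (not GOLEM's own parser). The `SAFE_LIMITS` table, the exception class names and the sample frames are constructed for the example; the MBAP header layout and function codes are from the public Modbus specification.

```python
import struct

# Public Modbus function codes (Modbus Application Protocol Specification).
READ_HOLDING_REGISTERS = 0x03
WRITE_SINGLE_COIL = 0x05
WRITE_SINGLE_REGISTER = 0x06
WRITE_MULTIPLE_COILS = 0x0F
WRITE_MULTIPLE_REGISTERS = 0x10
ACTUATING_FUNCTIONS = {WRITE_SINGLE_COIL, WRITE_SINGLE_REGISTER,
                       WRITE_MULTIPLE_COILS, WRITE_MULTIPLE_REGISTERS}

# Hypothetical rules-of-engagement limits: holding register -> highest value we may
# write (think of a speed set-point). Constructed for this example.
SAFE_LIMITS = {0x0010: 0x00FF}


class AssessmentModeViolation(Exception):
    """A frame that would change process state was requested in assessment mode."""


class SafetyLimitViolation(Exception):
    """A write would exceed an agreed engagement limit."""


def mbap_wrap(transaction_id, unit_id, pdu):
    """Prepend the 7-byte MBAP header: tid, protocol id 0, length (unit id + PDU), unit id."""
    return struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id) + pdu


def build_read_holding_registers(transaction_id, unit_id, start, count):
    """FC 0x03 request: reads never actuate, so they are allowed in assessment mode."""
    if not 1 <= count <= 125:
        raise ValueError("Modbus allows 1..125 registers per read")
    return mbap_wrap(transaction_id, unit_id,
                     struct.pack(">BHH", READ_HOLDING_REGISTERS, start, count))


def build_write_single_register(transaction_id, unit_id, address, value, assessment_mode=True):
    """FC 0x06 request, refused in assessment mode and when the value exceeds SAFE_LIMITS."""
    if assessment_mode:
        raise AssessmentModeViolation(f"FC 0x06 write to register {address:#06x} blocked")
    limit = SAFE_LIMITS.get(address)
    if limit is not None and value > limit:
        raise SafetyLimitViolation(
            f"value {value:#06x} exceeds limit {limit:#06x} for register {address:#06x}")
    return mbap_wrap(transaction_id, unit_id,
                     struct.pack(">BHH", WRITE_SINGLE_REGISTER, address, value))


def split_frame(frame):
    """Validate the MBAP header and return (tid, unit_id, pdu)."""
    if len(frame) < 8:
        raise ValueError("frame shorter than MBAP header plus function code")
    tid, proto, length, unit = struct.unpack(">HHHB", frame[:7])
    if proto != 0:
        raise ValueError(f"protocol id {proto} is not Modbus")
    pdu = frame[7:]
    if len(pdu) != length - 1:
        raise ValueError("MBAP length field does not match PDU length")
    return tid, unit, pdu


def is_actuating(frame):
    """Passive check a bus monitor or blocking rule can apply: does this request write state?"""
    _, _, pdu = split_frame(frame)
    return pdu[0] in ACTUATING_FUNCTIONS


def parse_response(frame):
    """Decode FC 0x03 / 0x06 responses and Modbus exception responses (FC | 0x80)."""
    tid, unit, pdu = split_frame(frame)
    fc = pdu[0]
    if fc & 0x80:
        return {"tid": tid, "unit": unit, "function": fc & 0x7F, "exception": pdu[1]}
    if fc == READ_HOLDING_REGISTERS:
        byte_count = pdu[1]
        if byte_count != len(pdu) - 2 or byte_count % 2:
            raise ValueError("byte count does not match register payload")
        regs = list(struct.unpack(f">{byte_count // 2}H", pdu[2:]))
        return {"tid": tid, "unit": unit, "function": fc, "registers": regs}
    if fc == WRITE_SINGLE_REGISTER:
        address, value = struct.unpack(">HH", pdu[1:5])
        return {"tid": tid, "unit": unit, "function": fc, "address": address, "value": value}
    raise ValueError(f"unsupported function code {fc:#04x}")


if __name__ == "__main__":
    print(build_read_holding_registers(1, 0x11, 0, 2).hex())
    # Constructed device response: two registers, 0x1234 and 0x0042.
    print(parse_response(bytes.fromhex("00010000000711030412340042")))
    # Constructed exception response: FC 0x03 failed with code 2 (illegal data address).
    print(parse_response(bytes.fromhex("000200000003118302")))
    try:
        build_write_single_register(3, 0x11, 0x0010, 0x0080)
    except AssessmentModeViolation as e:
        print("assessment mode:", e)
```

Keeping the actuation guard inside the frame builder, rather than in the caller, means there is no code path that can emit a write frame without passing the engagement checks; the same `is_actuating` classifier is what a runtime blocking rule would use on the wire.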
- MITRE ATT&CK for ICS: full coverage across T0830–T0890, with every technique mapped to ICS-specific tactics (manipulate, inhibit, impair, impact) and tested across all ten protocols.
- MITRE ATLAS: all AI-specific findings mapped to ATLAS techniques; adversarial ML in the physical domain, covering sensor perturbation, model evasion, and physical-world attack chains.
- IEC 62443: findings mapped to security levels, zone and conduit model validation, and defence-in-depth assessment for industrial automation and control systems.
- ASIL (ISO 26262): ASIL-rated findings for automotive systems, with systematic fault injection against safety-critical functions. Functional safety meets adversarial security testing.
GOLEM is Stage 9 of the full Red Specter offensive pipeline. It tests the physical layer — the place where AI meets the real world. Findings feed directly into AI Shield as runtime blocking rules.
Red Specter GOLEM is intended for authorised security testing of physical AI systems and safety-critical environments only. Unauthorised use against systems you do not own or have explicit permission to test may violate the Computer Misuse Act 1990 (UK), Computer Fraud and Abuse Act (US), and equivalent legislation in other jurisdictions. Testing physical systems carries inherent safety risks. Always obtain written authorisation, ensure appropriate safety measures are in place, and never test in environments where human safety could be compromised without proper safeguards. Apache License 2.0.
Many ICS security tools are wrappers around Nmap scripts and Metasploit modules. GOLEM is actual engineering: every protocol parser written from scratch in pure Python, every attack technique implemented natively, zero subprocess calls, zero external tool dependencies.
GOLEM finds out what happens when it shouldn't. Assess your physical AI attack surface before a real adversary does.