AI Security Incidents 2026
Comprehensive breach log and vulnerability timeline for AI/LLM security in 2026
Last updated: April 2026
CVE-2026-33579: OpenClaw /pair approve Vulnerability
Critical • April 6, 2026
A critical vulnerability in OpenClaw's /pair approve endpoint allowed unauthorized pairing of AI agents, enabling attackers to gain administrative control over OpenClaw instances without proper authentication. The flaw was in the pairing token validation logic, which failed to enforce time-bound constraints, allowing tokens to be replayed by malicious actors.
Impact Summary
- 30,000+ OpenClaw instances exposed to unauthorized pairing
- Remote administrative access achievable without credentials
- AI agent hijacking and data exfiltration risks
- Patch released: OpenClaw v2.4.1
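The missing controls are a textbook token-validation gap. A minimal sketch of the two checks described above, time-bound expiry and replay rejection, assuming an in-memory token store and illustrative names (this is not OpenClaw's actual code):

```python
import hmac
import secrets
import time

# Hypothetical sketch, not OpenClaw's actual fix. It illustrates the two
# controls the advisory says were missing: a time-bound expiry and
# single-use (replay-proof) pairing tokens.

TOKEN_TTL_SECONDS = 300          # pairing tokens expire after 5 minutes
_used_tokens: set[str] = set()   # in-memory replay cache (illustrative)

def issue_pairing_token() -> tuple[str, float]:
    """Issue a random pairing token along with its issue timestamp."""
    return secrets.token_urlsafe(32), time.time()

def validate_pairing_token(token: str, issued_at: float, expected: str) -> bool:
    """Reject expired, replayed, or mismatched pairing tokens."""
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return False                       # time-bound constraint
    if token in _used_tokens:
        return False                       # replay protection
    if not hmac.compare_digest(token, expected):
        return False                       # constant-time comparison
    _used_tokens.add(token)                # consume: one use only
    return True
```

The replay cache here is process-local for brevity; a real deployment would back it with shared storage so replays are rejected across instances.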
CVE-2026-39861: Claude Code Sandbox Escape
Critical • April 2026
A sandbox escape vulnerability in Claude Code enabled arbitrary file writes outside the designated workspace. Attackers could craft malicious prompts that bypassed path validation, allowing them to write to sensitive system files, overwrite configuration, or plant persistent backdoors on the host system.
Impact Summary
- Arbitrary file write outside sandbox boundaries
- System-level compromise on affected developer machines
- Potential for credential theft and lateral movement
- Exploitable via crafted prompts in Claude Code sessions
CVE-2026-21445: Langflow Authentication Bypass
High • March–April 2026
An authentication bypass vulnerability in Langflow allowed unauthenticated attackers to access and execute AI workflows, potentially exposing connected data sources, APIs, and internal LLM configurations. The vulnerability was under active exploitation in the wild, with multiple threat actors targeting exposed Langflow instances.
Impact Summary
- Active exploitation confirmed in the wild
- Unauthenticated workflow execution
- Exposure of connected APIs and data stores
- Recommended immediate patching for all Langflow deployments
Anthropic MCP Vulnerability: 200,000 AI Servers Exposed to RCE
Critical • April 2026
A critical vulnerability in the Model Context Protocol (MCP) ecosystem exposed approximately 200,000 AI servers to Remote Code Execution (RCE). The flaw allowed attackers to compromise MCP-connected AI agents, traverse networks, and execute arbitrary commands on both the agent and connected server infrastructure. This was one of the largest AI infrastructure exposures in 2026.
Impact Summary
- ~200,000 AI servers vulnerable to RCE
- Full agent compromise and network traversal
- Mass exploitation of MCP-connected infrastructure
- Widespread patch campaigns required across enterprise deployments
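One common hardening pattern for MCP-style tool servers is to never hand agent-supplied strings to a shell and to restrict executable commands to an explicit allowlist. A hypothetical sketch (the allowlist and tool shape are illustrative, not part of any MCP patch):

```python
import shlex
import subprocess

# Hypothetical hardening sketch, not the actual MCP-ecosystem patch:
# an agent-facing "run command" tool that only accepts allowlisted
# binaries and never passes agent input through a shell.

ALLOWED_COMMANDS = {"echo", "ls", "date"}   # illustrative allowlist

def run_tool_command(command_line: str) -> str:
    argv = shlex.split(command_line)        # tokenize; no shell involved
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {command_line!r}")
    # shell=False (the default) keeps metacharacters like ; and | inert
    result = subprocess.run(argv, capture_output=True, text=True,
                            timeout=5, check=True)
    return result.stdout
```

Because the argv list is executed directly, an injected `"echo hi; rm -rf /"` becomes literal arguments to `echo` rather than a second command.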
LiteLLM Supply Chain Attack: 4TB of AI Training Data Exposed
High • April 2026
Known as the Mercor breach, the incident exposed 4TB of AI training data through a supply chain attack targeting the LiteLLM proxy. Attackers gained access to model configurations, API keys, and proprietary training datasets used by multiple organizations. The incident highlighted the risks of centralized AI middleware and the cascading impact of supply chain compromises in LLM infrastructure.
Impact Summary
- 4TB of proprietary AI training data leaked
- Exposed API keys and model configurations
- Multiple downstream organizations affected via shared middleware
- Highlighted supply chain risks in LLM proxy infrastructure
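Incidents like this are often caught earlier by scanning middleware configuration for inline credentials before deployment. A hedged sketch, assuming a LiteLLM-style `os.environ/` reference convention for externalized secrets (key names and the pattern are illustrative):

```python
import re

# Hypothetical pre-deployment check, not LiteLLM tooling: flag config
# values that look like inline credentials. The "os.environ/" prefix
# mirrors LiteLLM's convention for referencing environment variables;
# key names and the regex are illustrative assumptions.

SECRET_KEY_RE = re.compile(r"(api[_-]?key|secret|token|password)", re.I)

def find_plaintext_secrets(config: dict, prefix: str = "") -> list[str]:
    """Return dotted paths of config entries holding inline secrets."""
    hits: list[str] = []
    for key, value in config.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            hits.extend(find_plaintext_secrets(value, path + "."))
        elif (isinstance(value, str) and SECRET_KEY_RE.search(key)
              and not value.startswith("os.environ/")):
            hits.append(path)     # secret stored inline, not referenced
    return hits
```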
LLM-Driven Attack Compromises AWS Administrator Privileges in 8 Minutes
Critical • April 2026
Security researchers demonstrated an LLM-driven attack that escalated from initial access to full AWS administrator privileges in just 8 minutes. The attack leveraged an AI agent with excessive permissions, chaining prompt injection with automated cloud API calls to discover credentials, enumerate IAM policies, and assume a privileged role. The incident underscored the dangers of over-permissioned AI agents in cloud environments.
Impact Summary
- Full AWS admin compromise in 8 minutes
- Demonstrated real-world AI-to-cloud attack chain
- Excessive agency and over-permissioning as root causes
- Underscored the need for least-privilege policies in cloud AI deployments
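The over-permissioning root cause can be checked mechanically: before attaching a role to an AI agent, flag Allow statements with wildcard actions or resources. A minimal local sketch (the policy shape follows the standard AWS IAM JSON format; the audit logic is an illustrative assumption, not the researchers' tooling):

```python
# Hypothetical audit sketch, not the researchers' tooling. The policy
# shape follows the standard AWS IAM JSON format; the rule simply
# flags Allow statements with wildcard actions or resources.

def flag_overbroad_statements(policy: dict) -> list[dict]:
    """Return Allow statements granting wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if (any(a == "*" or a.endswith(":*") for a in actions)
                or "*" in resources):
            flagged.append(stmt)  # candidate for least-privilege review
    return flagged
```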
hermes-px: Malicious PyPI Package Steals Prompts and Code
Medium • April 2026
A malicious PyPI package named hermes-px was discovered masquerading as a legitimate Hermes agent utility. When installed, it exfiltrated developer prompts, source code snippets, and environment variables to remote command-and-control servers. The package targeted AI developers using Hermes-based frameworks, exploiting the trust developers place in agent ecosystem packages.
Impact Summary
- Prompt and source code exfiltration from developer environments
- Environment variable theft including API keys
- Supply chain targeting of AI developer ecosystems
- Package removed from PyPI; users advised to rotate credentials
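Lookalike names such as `hermes-px` can be flagged before installation by fuzzy-matching candidates against a trusted list. A hypothetical sketch using the stdlib `difflib` module (the trusted list, including the `hermes-agent` name, is an illustrative assumption):

```python
import difflib

# Hypothetical pre-install check: flag package names that closely
# resemble trusted ones (the lookalike pattern hermes-px exploited).
# The trusted list, including "hermes-agent", is illustrative only.

TRUSTED_PACKAGES = {"requests", "hermes-agent", "litellm", "numpy"}

def lookalike_warnings(candidate: str) -> list[str]:
    """Return trusted names the candidate resembles but does not match."""
    if candidate in TRUSTED_PACKAGES:
        return []                 # exact match: nothing suspicious
    return difflib.get_close_matches(candidate, TRUSTED_PACKAGES,
                                     n=3, cutoff=0.6)
```

A non-empty result is a prompt for human review before installing, not proof of malice; real typosquat scanners also weigh download counts, publish dates, and maintainer history.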
Related Resources
MCP Security Guide
30+ CVEs discovered in 2026. Learn about MCP vulnerabilities and hardening.