AI Hacking
AI Security Resources

About AI Hacking

A comprehensive AI/LLM security resource hub built for security professionals.

Our Mission

AI Hacking exists to close the gap in AI security education. As AI systems become deeply embedded in critical infrastructure, the attack surface grows — but the resources to secure them remain fragmented and outdated. We aim to be the definitive, up-to-date resource for anyone working at the intersection of artificial intelligence and information security.

We cover offensive and defensive techniques side by side: you cannot build effective defenses without understanding how attackers think. Every guide is grounded in real CVEs, reproducible techniques, and current tooling.

Who This Site Is For

Security Researchers

Investigate novel attack vectors in LLMs, agents, and AI infrastructure. Access CVE databases, reproducible proof-of-concepts, and emerging threat intelligence.

Developers

Build secure AI applications with defensive patterns for input validation, output filtering, prompt hardening, and API security.

Pentesters

Add AI systems to your assessment scope. Use our methodologies, checklists, and tooling guides to evaluate LLM deployments effectively.

Compliance Teams

Understand AI-specific regulatory requirements, from the EU AI Act to NIST AI RMF, and map them to actionable security controls.

What We Cover

Prompt Injection

Direct, indirect, and multi-turn prompt injection. Jailbreaks, encoding attacks, system prompt extraction, and defensive countermeasures.

RAG Attacks

Document poisoning, retrieval manipulation, embedding attacks, and vector database security in Retrieval-Augmented Generation systems.

MCP Security

Model Context Protocol vulnerabilities, server hardening, tool poisoning, and the 30+ CVEs discovered in 2026.

Red Teaming

Structured methodologies for adversarial AI testing, from reconnaissance to reporting. Tooling and case studies included.

Agentic AI Threats

Goal hijacking, tool misuse, supply chain attacks, and securing autonomous agents with planning and execution capabilities.

LLM API Security

Authentication, rate limiting, key management, cost controls, and output validation for LLM-backed APIs.
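Of the controls listed above, rate limiting is the most mechanical, so here is a minimal sliding-window limiter keyed per API key. The class name, window size, and request limit are illustrative assumptions, not recommended values.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter, one window of timestamps per API key.
    A hypothetical sketch of the per-key limiting described above."""

    def __init__(self, max_requests: int = 60, window_s: float = 60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.calls = defaultdict(deque)  # api_key -> recent timestamps

    def allow(self, api_key, now=None):
        """Record and permit the call, or refuse it if the key has
        exhausted its quota within the current window."""
        now = time.monotonic() if now is None else now
        q = self.calls[api_key]
        # Evict timestamps that have aged out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_requests:
            return False
        q.append(now)
        return True
```

In production this state usually lives in a shared store (e.g. Redis) rather than process memory, so limits hold across API replicas; the eviction-then-count logic stays the same.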

Content Freshness Policy

We update content weekly with the latest CVEs, newly released tools, and evolving attack techniques. Our incident timeline is updated in real time as new breaches and disclosures emerge. Guides are reviewed quarterly for accuracy and relevance.

  • Weekly: CVE roundups, new tool additions, incident reports
  • Monthly: Guide updates, methodology revisions, threat landscape summaries
  • Quarterly: Major guide overhauls, new feature pages, structural improvements

Team

Jan Hazenberg

Maintainer & Founder

Senior Designer Voice at Odido Netherlands. Jan built AI Hacking to address the lack of centralized, practitioner-focused AI security resources. He maintains the site, curates content, and ensures accuracy across all guides.

Why This Site Exists

AI security knowledge is scattered across research papers, conference talks, and vendor blogs. AI Hacking brings it together in one place — structured, searchable, and kept current — so you spend less time searching and more time securing.
