AI Pentesting Portal
Ethical, lawful AI security

AI Security Glossary

Key terms and concepts in AI security testing

Prompt Injection

User input that manipulates model behavior.

Model Hallucination

Model invents non-factual output.

Data Poisoning

Training data manipulated to bias outcomes.

SBOM/MBOM

Software/Model Bill of Materials, for supply chain transparency.

Adversarial Example

Input designed to cause model misclassification.

Model Stealing

Extracting model parameters through queries.

Membership Inference

Determining if data was in training set.

Differential Privacy

Technique to protect individual data points.

Red Teaming

Simulating adversary attacks to test defenses.

Jailbreaking

Bypassing model safety restrictions.