Prompt Injection

high

prompt-injection

Attacker manipulates AI model inputs to bypass safety controls, extract sensitive information, or cause unintended behavior

TamperingInformation DisclosureElevation of Privilege

MITRE ATT&CK techniques

IDNameTactic
T1059 Command and Scripting Interpreter Execution

Common Weakness Enumeration

Mitigating controls

ctrl-prompt-1
Implement input validation and sanitization for all prompts
ctrl-prompt-2
Use system prompts that establish clear behavioral boundaries
ctrl-prompt-3
Implement output filtering to prevent sensitive data leakage
ctrl-prompt-4
Monitor and log AI interactions for anomalous patterns
ctrl-prompt-5
Apply rate limiting per user and session
ctrl-prompt-6
Implement platform-specific content safety guardrails (e.g., Bedrock Guardrails, Azure Content Safety, Vertex AI Safety Settings)
ctrl-prompt-7
Isolate user-provided input from system instructions using structured API formats

References