Training Data Poisoning

Severity: medium
ID: ai-training-data-poisoning

An attacker manipulates training or fine-tuning datasets to introduce backdoors or biases into AI models.

Tampering, Elevation of Privilege

MITRE ATT&CK techniques

ID     Name               Tactic
T1565  Data Manipulation  Impact

Mitigating controls

ctrl-poison-1: Validate and sanitize all training data sources
ctrl-poison-2: Implement data provenance tracking
ctrl-poison-3: Use anomaly detection on training datasets
ctrl-poison-4: Conduct regular model behavior audits
ctrl-poison-5: Implement access controls and audit logging on training data storage
ctrl-poison-6: Verify training data integrity using checksums or cryptographic hashes before training runs
