# Risk landscape for LLM applications
Security teams and builders need a shared vocabulary for AI-specific failures. The OWASP Top 10 for Large Language Model Applications (2025) is the most widely used community catalog; it was updated to reflect how LLMs are deployed in products (including tool-using and retrieval-augmented systems). The official PDF is published by the OWASP project (Creative Commons BY-SA).
This page summarizes the 2025 list so you can map controls, tests, and incidents to named risks. Use it alongside Threat modeling for LLM apps and Data, prompts, and logs.
## OWASP LLM Top 10 (2025) at a glance
| ID | Risk | What goes wrong (in one line) |
|---|---|---|
| LLM01 | Prompt injection | Untrusted input steers the model to ignore policy or abuse tools. |
| LLM02 | Sensitive information disclosure | Secrets, PII, or system text leak via outputs, logs, or errors. |
| LLM03 | Supply chain | Compromised models, datasets, plugins, or dependencies upstream. |
| LLM04 | Data and model poisoning | Bad training or retrieval data teaches wrong or malicious behavior. |
| LLM05 | Improper output handling | Downstream code treats model output as safe when it is not. |
| LLM06 | Excessive agency | Tools or integrations can take high-impact actions without enough checks. |
| LLM07 | System prompt leakage | Instructions or hidden policy text are exposed to users or attackers. |
| LLM08 | Vector and embedding weaknesses | Retrieval can be manipulated or leak data via embeddings/RAG design. |
| LLM09 | Misinformation | Confident wrong answers harm decisions, safety, or compliance. |
| LLM10 | Unbounded consumption | Abuse of tokens/APIs causes cost, denial of service, or resource drain. |
Deep-dive pages for each risk live on the OWASP GenAI site (e.g. consumption, agency, injection).
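To make one row concrete: LLM05 (improper output handling) means downstream code must treat model output like any other untrusted input. A minimal sketch in Python, assuming a hypothetical summarizer feature whose expected structured output has only `title` and `summary` keys:

```python
import html
import json

# Hypothetical schema for a summarizer feature's structured output.
ALLOWED_KEYS = {"title", "summary"}

def render_as_text(model_output: str) -> str:
    """Escape model output before embedding it in an HTML page,
    so a response containing markup cannot become stored XSS."""
    return html.escape(model_output)

def parse_structured(model_output: str) -> dict:
    """Validate structured output against an allow-list instead of
    trusting whatever JSON the model emits."""
    data = json.loads(model_output)  # raises ValueError on malformed output
    unexpected = set(data) - ALLOWED_KEYS
    if unexpected:
        raise ValueError(f"unexpected keys from model: {sorted(unexpected)}")
    return data
```

The same principle applies to SQL, shell commands, and tool arguments: the boundary between model output and an interpreter is where LLM05 bites.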
## How to use this in practice
- Map — For each feature (chat, RAG, tools, plugins), note which rows in the table apply.
- Measure — Tie tests to rows: red-team prompts, tool-abuse cases, retrieval poisoning fixtures, cost limits.
- Manage — Assign owners for controls (schema design, egress rules, logging, human approval). Cross-link to NIST AI RMF alignment for governance cadence.
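The Map/Measure/Manage loop above can be kept honest with a simple risk register that ties each feature to the table's row IDs, the tests that exercise them, and a control owner. A minimal sketch, with hypothetical feature names, test IDs, and owners:

```python
# Hypothetical risk register: each feature maps to the OWASP LLM Top 10
# rows that apply (Map), the tests that exercise them (Measure), and an
# owner for the controls (Manage).
RISK_REGISTER = {
    "chat": {
        "risks": ["LLM01", "LLM02", "LLM09"],
        "tests": ["red_team_prompts"],
        "owner": "app-team",
    },
    "rag": {
        "risks": ["LLM04", "LLM08"],
        "tests": ["retrieval_poisoning_fixtures"],
        "owner": "data-team",
    },
    "tools": {
        "risks": ["LLM05", "LLM06", "LLM10"],
        "tests": ["tool_abuse_cases", "cost_limit_checks"],
        "owner": "platform-team",
    },
}

def untested_features(register: dict) -> dict:
    """Flag features whose mapped risks have no tests attached,
    i.e. Map without Measure."""
    return {name: entry["risks"]
            for name, entry in register.items()
            if not entry["tests"]}
```

A check like `untested_features` can run in CI so a newly mapped risk without a test fails the build rather than sitting unmeasured.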
## Relationship to classical AppSec
Traditional OWASP categories (injection, broken access control, SSRF) still matter in services around the model (API gateways, orchestrators, n8n, MCP servers). The LLM Top 10 names risks that appear at the model and prompt boundary—they are not a replacement for secure SDLC, but an add-on lens.
## Further reading
- OWASP Top 10 for LLM Applications (2025) — canonical list and articles
- AI Security — hub
- AI Automation — operational patterns for agents and workflows