# AI Security
This section covers governance, risk methodology, and platform-level security for AI systems—how to name risks, align with recognized frameworks, and model threats before you tune prompts or wire tools.
It complements AI Automation, which focuses on building and operating agent workflows and n8n safely.
## Start here
| Topic | Article |
|---|---|
| LLM risk vocabulary (OWASP) | Risk landscape |
| Organizational governance (NIST) | Governance with the NIST AI RMF |
| Design reviews & trust boundaries | Threat modeling for LLM apps |
| Prompts, logs, retention | Data, prompts, and logs |
## How this fits CSN Docs
- Labs — Hands-on web security scenarios (DVWA, Juice Shop).
- AI Automation — Practical controls for tools, MCP, sandboxing, and n8n.
- AI Security — The “why and how we govern” layer: it maps to the OWASP LLM Top 10 (2025) and the NIST AI RMF, without replacing your own policies or legal advice.
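In practice, mapping to both frameworks often means tagging each finding in a risk register with an OWASP LLM Top 10 category and a NIST AI RMF function. The sketch below illustrates one minimal way to do that; the field names, entry schema, and `summarize` helper are illustrative assumptions, not a CSN Docs or framework-mandated format.

```python
# A minimal, illustrative risk-register entry. Field names are assumptions,
# not a CSN Docs schema; the category/function strings come from the
# OWASP LLM Top 10 (2025) and the NIST AI RMF core functions.
risk_entry = {
    "title": "Indirect prompt injection via retrieved documents",
    "owasp_llm_top10": "LLM01:2025 Prompt Injection",  # OWASP LLM Top 10 category
    "nist_ai_rmf_function": "MANAGE",                  # one of GOVERN / MAP / MEASURE / MANAGE
    "trust_boundary": "retriever -> model context",
    "mitigations": ["input provenance tagging", "tool allow-list"],
}

def summarize(entry: dict) -> str:
    """One-line summary suitable for a design-review log."""
    return (
        f"{entry['owasp_llm_top10']} | "
        f"{entry['nist_ai_rmf_function']} | "
        f"{entry['title']}"
    )

print(summarize(risk_entry))
```

Keeping both labels on every entry makes it easy to answer two different questions from the same register: “which Top 10 risks have we covered?” (OWASP view) and “where does this sit in our governance lifecycle?” (RMF view).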