AI Security

This section covers governance, risk methodology, and platform-level security for AI systems—how to name risks, align with recognized frameworks, and model threats before you tune prompts or wire tools.

It complements AI Automation, which focuses on building and operating agent workflows and n8n safely.

Start here

| Topic | Article |
| --- | --- |
| LLM risk vocabulary (OWASP) | Risk landscape |
| Organizational governance (NIST) | Governance with the NIST AI RMF |
| Design reviews & trust boundaries | Threat modeling for LLM apps |
| Prompts, logs, retention | Data, prompts, and logs |

How this fits CSN Docs

  • Labs — Hands-on web security scenarios (DVWA, Juice Shop).
  • AI Automation — Practical controls for tools, MCP, sandboxing, and n8n.
  • AI Security — The “why and how we govern” layer: maps to OWASP LLM Top 10 (2025) and NIST AI RMF without replacing your own policies or legal advice.
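As a minimal sketch of what "mapping to OWASP LLM Top 10 (2025)" can look like in practice: tagging each review finding with a shared category ID so teams use one risk vocabulary. The category IDs and names below are from the OWASP LLM Top 10 (2025); the register structure, finding titles, and mitigation names are hypothetical examples, not part of any framework.

```python
# Hypothetical risk-register sketch: label each finding with an
# OWASP LLM Top 10 (2025) category so reviews share one vocabulary.
from dataclasses import dataclass

# A subset of real category IDs from the OWASP LLM Top 10 (2025).
OWASP_LLM_2025 = {
    "LLM01": "Prompt Injection",
    "LLM02": "Sensitive Information Disclosure",
    "LLM05": "Improper Output Handling",
    "LLM06": "Excessive Agency",
}

@dataclass
class Finding:
    title: str       # short description of the observed risk
    owasp_id: str    # key into OWASP_LLM_2025
    mitigation: str  # hypothetical control name, for illustration

    def label(self) -> str:
        # e.g. "LLM01 (Prompt Injection): <title>"
        return f"{self.owasp_id} ({OWASP_LLM_2025[self.owasp_id]}): {self.title}"

# Example findings from a hypothetical design review.
findings = [
    Finding("RAG answers echo untrusted web content", "LLM01",
            "Separate retrieved text from system instructions"),
    Finding("Agent can invoke shell tool without limits", "LLM06",
            "Require human approval for destructive tools"),
]

for f in findings:
    print(f.label())
```

A register like this is deliberately lightweight: the point is a consistent naming layer that your own policies and controls can hang off, not a replacement for them.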

External references (authoritative)