Governance with the NIST AI RMF
The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), published January 2023, is voluntary U.S. government guidance for organizations that design, develop, deploy, or use AI systems. The full text is NIST.AI.100-1. NIST also publishes an AI RMF Playbook with suggested actions.
This page does not replace the source documents; it shows how CSN readers can align LLM product work (see Risk landscape) with a governance structure auditors recognize.
The four core functions
NIST organizes AI risk activities into four functions:
| Function | Purpose | Examples for LLM products |
|---|---|---|
| Govern | Culture, policies, accountability | AI security policy; who approves model changes; vendor rules for foundation models. |
| Map | Context and known risks | Data flows for prompts; tool/MCP inventory; user populations; abuse scenarios. |
| Measure | Assess and track risk | Evaluations for jailbreaks; leakage tests; cost/DoS monitoring; bias/safety metrics where required. |
| Manage | Prioritize and mitigate | Human review for high-risk tools; rate limits; incident playbooks; patching supply-chain deps. |
Govern is cross-cutting: it should inform how Map, Measure, and Manage are run, not be a one-time checklist.
Trustworthiness characteristics
NIST highlights characteristics to consider across the lifecycle, including security and resilience, privacy, accountability and transparency, safety, and fairness with managed bias. For security-focused readers, the point is simple: security is one dimension; product and legal stakeholders may prioritize others in parallel.
Mapping OWASP LLM risks to RMF activities
Rough alignment (not official NIST text):
- Map — Identify which OWASP LLM items apply per feature; document trust boundaries (Threat modeling).
- Measure — Run structured tests (prompt injection suites, tool-abuse cases, retrieval attacks, token abuse).
- Manage — Implement controls from AI Automation (tool narrowing, sandboxing, n8n hardening) and organizational policy.
Practical takeaway
Use the RMF as a conversation framework with leadership and GRC: “We mapped these LLM risks, measure them with these tests, and manage them with these controls and owners.” That pairs engineering work with defensible documentation.