n8n AI use cases

These patterns assume you have already configured public URLs and encryption.

Call an LLM API safely

  1. Create Credentials for the provider (the API key is stored encrypted, not in the workflow JSON).
  2. Use the vendor's official node, or an HTTP Request node with a credential reference.
  3. Never pass the raw key through chat-style prompts; keep keys in credential records only.
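The steps above can be sketched as follows. This is an illustration, not n8n's internal API: `getCredential` stands in for the encrypted credential lookup, and the credential name is an example.

```javascript
// Sketch (assumed names): how an HTTP Request node's call might be assembled,
// with the API key resolved at runtime from the credential store rather than
// embedded in the workflow JSON.
function buildLlmRequest(prompt, getCredential) {
  const apiKey = getCredential('openAiApi'); // credential name is an example
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    headers: {
      Authorization: `Bearer ${apiKey}`, // key appears only at send time
      'Content-Type': 'application/json',
    },
    // The serialized workflow contains only this template, never the key.
    body: { model: 'gpt-4o-mini', messages: [{ role: 'user', content: prompt }] },
  };
}
```

Because the key enters only through the credential lookup, exporting or sharing the workflow JSON cannot leak it.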

Add retry with exponential backoff on HTTP nodes so transient errors don't burn quota, and cap the attempt count so the retries themselves can't run up costs.
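A minimal sketch of that policy, assuming a transient error carries an HTTP status on `err.status`; the function names are illustrative, not n8n options:

```javascript
// Exponential backoff: 500 ms, 1 s, 2 s, 4 s, ... capped so a long outage
// doesn't stall the execution queue indefinitely.
function backoffDelayMs(attempt, baseMs = 500, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

async function withRetry(fn, retries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Retry only transient statuses (429 / 5xx); permanent errors fail
      // fast, and the bounded attempt count keeps retry cost predictable.
      const transient =
        err.status === 429 || (err.status >= 500 && err.status < 600);
      if (!transient || attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
}
```

Failing fast on 4xx errors other than 429 matters: retrying a bad request just repeats the same charge for the same failure.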

Webhook-triggered automation

Webhooks are convenient but risky:

  • Use hard-to-guess paths or signed payloads where the upstream supports it.
  • Validate body schema before calling downstream systems.
  • Apply rate limiting at the reverse proxy to reduce abuse.

Human approval before irreversible actions

Keep a human in the loop:

  • Branch workflows so high-impact actions (payments, bulk email) require a manual execution or an approval-ticket step, rather than running fully unattended on first use.
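The branch can be an IF-node-style guard like the sketch below. The action categories and field names are made up for illustration; the point is that high-impact branches execute only with an explicit, human-set approval flag.

```javascript
// Hypothetical routing guard: high-impact action types are parked for
// approval unless a human has explicitly set `approved: true`.
const HIGH_IMPACT = new Set(['payment', 'bulk_email']); // example categories

function routeAction(action) {
  if (HIGH_IMPACT.has(action.type) && action.approved !== true) {
    return 'await_approval'; // e.g. open a ticket, wait for manual execution
  }
  return 'execute';
}
```

Checking `approved !== true` (rather than a loose truthiness test) means a missing or malformed flag defaults to the safe branch.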

Observability

  • Tag executions with correlation IDs from the triggering system.
  • Monitor execution time and error rates per workflow; sudden spikes may indicate abuse or misconfiguration.