# Customer Support AI Agent in n8n — Build Guide
## 1. Prerequisites
**Self-hosted n8n (Docker):**
- n8n ≥ 1.19.0 (LangChain nodes required). Update your `docker-compose.yml`:
```yaml
environment:
  - N8N_ENCRYPTION_KEY=<your-key>
  - DB_POSTGRESDB_HOST=postgres
  - N8N_AI_ENABLED=true
```
- Persistent volume mapped to `/home/node/.n8n`.
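For orientation, the environment snippet above can sit in a minimal compose file like the following sketch; the service names, image tags, and volume name are illustrative assumptions to adapt to your setup:

```yaml
# Minimal illustrative docker-compose sketch (names and tags are assumptions)
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - N8N_ENCRYPTION_KEY=<your-key>
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres   # the service name below, not localhost
      - N8N_AI_ENABLED=true
    volumes:
      - n8n_data:/home/node/.n8n      # persistent volume
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=<pg-password>
volumes:
  n8n_data:
```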
**Credentials to create in n8n (Credentials → New):**
- **OpenAI API** – paste your key.
- **Postgres** – host, port, database, user, password for your knowledge base.
- (Optional) **Postgres Chat Memory** – same or separate DB for storing chat history.
**Database prep:**
- Ensure your KB table exists, e.g. `docs(id, title, content, tags)`.
- For memory, n8n auto-creates `n8n_chat_histories` on first run.
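Assuming the `docs(id, title, content, tags)` shape above, a minimal DDL sketch (column types and index name are our own choices, not a required schema) could be:

```sql
-- Illustrative DDL; adapt names and types to your schema
CREATE TABLE IF NOT EXISTS docs (
  id      SERIAL PRIMARY KEY,
  title   TEXT NOT NULL,
  content TEXT NOT NULL,
  tags    TEXT[]
);

-- Optional: speeds up the full-text variant suggested later for production
CREATE INDEX IF NOT EXISTS docs_content_fts
  ON docs USING GIN (to_tsvector('english', content));
```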
---
## 2. Workflow Architecture
Nodes required:
1. **Chat Trigger** (`@n8n/n8n-nodes-langchain.chatTrigger`) – entry point with `sessionId`.
2. **AI Agent** (Tools Agent type) – orchestrator.
3. **OpenAI Chat Model** – sub-node of Agent (LLM).
4. **Postgres Chat Memory** – sub-node of Agent (history per user).
5. **Tool: Postgres** (or **Workflow Tool** wrapping a Postgres query) – KB search.
6. *(Optional)* **Vector Store Tool** if you later add embeddings.
Flow: `Chat Trigger → AI Agent` with Chat Model, Memory, and Tool(s) attached to the agent.
---
## 3. Step-by-Step Build
**Step 1 – Create workflow & add Chat Trigger.**
- Mode: `Hosted Chat` (public URL) or `Embedded`.
- Enable **"Make chat publicly available"** if exposing externally.
**Step 2 – Add AI Agent node.**
- Connect from Chat Trigger.
- Agent type: **Tools Agent**.
- System Message:
```
You are a helpful customer support assistant for <Product>.
Answer strictly using product documentation. If unsure, call the
"searchKnowledgeBase" tool. Cite doc titles. If no match, say so.
```
**Step 3 – Attach Chat Model.**
- Sub-node: **OpenAI Chat Model**.
- Credential: OpenAI API.
- Model: `gpt-4o-mini` (cost-effective) or `gpt-4o`.
- Temperature: `0.2` for factual support.
**Step 4 – Attach Memory.**
- Sub-node: **Postgres Chat Memory**.
- Credential: Postgres.
- Session Key expression:
```
={{ $json.sessionId }}
```
- Context Window Length: `10` messages. The session key is what partitions history per user automatically.
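Memory partitioning only works if the frontend sends a stable `sessionId` with every message. A sketch of the client-side pattern (the storage key and ID format are assumptions; in a browser, `store` would be `localStorage`):

```javascript
// Return a stable per-user session ID, creating one on first call.
// `store` is any Map-like storage with get/set (e.g. localStorage wrapper).
function getSessionId(store) {
  let id = store.get('n8n-chat-session');
  if (!id) {
    id = 'sess_' + Math.random().toString(36).slice(2, 10); // illustrative ID format
    store.set('n8n-chat-session', id);
  }
  return id;
}

// Usage: every request in the same session reuses the same ID,
// so Postgres Chat Memory keeps one history per user.
const store = new Map();
const sessionId = getSessionId(store);
```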
**Step 5 – Add the Knowledge Base Tool.**
- Sub-node: **Postgres Tool** (under Tools).
- Tool Name: `searchKnowledgeBase`
- Description (critical – the agent uses this to decide when to call):
```
Searches product documentation. Input: a concise search query string.
Returns matching doc titles and content snippets.
```
- Operation: **Execute Query**.
- Query (use `$fromAI` so the LLM fills the argument):
```sql
SELECT title, LEFT(content, 800) AS snippet
FROM docs
WHERE content ILIKE '%' || {{ $fromAI('query','search term','string') }} || '%'
OR title ILIKE '%' || {{ $fromAI('query','search term','string') }} || '%'
LIMIT 5;
```
- For production, replace `ILIKE` with `tsvector` full-text search or pgvector similarity.
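One caveat with the `ILIKE` pattern: `%` and `_` inside the model-supplied query act as wildcards. A small escaping helper you could run in a Code node before the query (the function name is our own; Postgres uses backslash as the default `LIKE` escape character):

```javascript
// Escape LIKE/ILIKE wildcards so user-supplied text matches literally.
function escapeLike(text) {
  return text.replace(/[\\%_]/g, (ch) => '\\' + ch);
}

// '50%_off' becomes '50\%\_off', so it no longer matches everything.
```

Note this only neutralizes wildcards; for injection safety, prefer passing the value as a query parameter rather than interpolating it into the SQL string.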
**Step 6 – (Optional) Response formatting.**
Add an **Edit Fields** node after the agent if exposing via webhook/API to shape the output:
```json
{ "reply": "={{ $json.output }}", "session": "={{ $('Chat Trigger').item.json.sessionId }}" }
```
---
## 4. Testing & Activation
1. Click **Open Chat** on the Chat Trigger.
2. Test conversation memory:
- "My name is Sam." → then "What's my name?" → should recall.
3. Test tool invocation:
- "How do I reset my API token?" → check execution log; the Agent should show a call to `searchKnowledgeBase` with a `query` argument.
4. Inspect **Executions** tab → expand AI Agent node → verify `intermediateSteps` contains tool calls and observations.
5. Toggle **Active** (top right) to enable the public chat URL: `https://<your-n8n>/webhook/<chatId>`.
---
## 5. Troubleshooting Tips
- **Agent ignores the tool:** Sharpen the tool description; state *when* to use it. Vague descriptions are the #1 cause.
- **Memory not persisting:** Confirm `sessionId` is stable per user (pass it from your frontend, not regenerated per request). Check `n8n_chat_histories` table rows.
- **`$fromAI` undefined:** You're on an older n8n — upgrade to ≥ 1.62, or use a Workflow Tool with explicit input schema instead.
- **Postgres permission errors:** Grant `SELECT` on `docs` to the n8n DB user; memory user needs `CREATE, INSERT, SELECT`.
- **Token overruns / slow replies:** Lower context window to 6, trim `LEFT(content, 500)`, or switch to `gpt-4o-mini`.
- **Docker can't reach Postgres:** Use the container service name as host (e.g., `postgres` from the compose file above), not `localhost` or `127.0.0.1`.
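The token-overrun tip above can also be approximated in a Code node: keep only the most recent turns before they reach the model. A sketch (window size and message shape are assumptions):

```javascript
// Keep any system message plus the last `window` conversation turns.
function trimHistory(messages, window = 6) {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-window)];
}
```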
# How to Build an AI Agent Using n8n Self-Hosted

*Tested prompts for building an AI agent with n8n, compared across 5 leading AI models.*
If you're searching for how to build an AI agent with n8n, you're probably trying to avoid paying per-seat for Zapier AI features or getting locked into a closed platform like Make.com's built-in agents. n8n self-hosted lets you run a full agent loop (LLM reasoning, tool calls, memory, error handling) on your own infrastructure, with no per-execution fees and full control over which model provider handles each step.
This page walks through the actual workflow: an AI Agent node wired to a chat trigger, a memory buffer, and a set of tools (HTTP requests, database queries, sub-workflows). You'll see the exact prompt we tested, four model outputs side by side (Claude Opus 4.7, Claude Haiku 4.5, Gemini 2.5 Pro, and Grok 4.1), and a comparison table showing latency, cost per run, and reasoning quality.
Use this if you want a working pattern you can clone into your own n8n instance today, not a conceptual overview. By the end you'll know which model to pick for your use case, how to structure tools so the agent actually calls them correctly, and where self-hosted agents tend to break in production.
## When to use this
Self-hosted n8n agents fit when you need data to stay on your infrastructure, when you're running high-volume workflows where per-task SaaS pricing gets expensive, or when you need to mix multiple LLM providers in one agent loop. They're also the right choice when your agent needs to hit internal APIs, databases, or legacy systems that external services cannot reach.
- Internal support agent that queries your Postgres customer database and Zendesk
- Lead enrichment agent processing 10,000+ records per day where SaaS pricing is prohibitive
- Compliance-sensitive workflows in healthcare, legal, or finance where data cannot leave your VPC
- Multi-model routing where cheap local Llama handles classification and GPT-4o handles reasoning
- Agents that need to trigger other n8n workflows as tools, creating nested automation
## When this format breaks down
- You need sub-second response time for a user-facing chatbot; n8n adds orchestration overhead that makes it feel sluggish
- Your team has no one comfortable with Docker, reverse proxies, or Node.js debugging when the agent node hangs
- You only run a few dozen tasks per month; the hosting and maintenance cost outweighs just using OpenAI's Assistants API directly
- You need fine-grained streaming token output to a frontend; n8n's execution model is batch-oriented and streaming support is limited
## The prompt we tested

```
You are an expert n8n automation architect specializing in self-hosted AI agent workflows. Based on the user's context below, produce a complete, step-by-step guide for building an AI agent in n8n. Follow these instructions strictly: Structure the response with these sections: (1) Prerequisites (self-hosted n8n setup, API keys, credentials), (2) Workflow Architecture (list the exact n8n nodes to use, e.g., Chat Trigger, AI Agent node, Chat Model, Memory, Tools), (3) Step-by-Step Build (numbered steps with node configuration details and key parameters), (4) Testing & Activation, and (5) Troubleshooting Tips. Use concise technical language, include example JSON snippets or expressions where relevant, and keep the total response between 500-800 words. User context: I'm running n8n self-hosted via Docker and want to build a customer support AI agent that can answer questions about our product documentation. It should remember conversation history per user and be able to search our Postgres knowledge base when needed. I have an OpenAI API key ready. Produce the full build guide now.
```
## Model comparison
| Model | Quality | Cost | Latency | Tokens |
|---|---|---|---|---|
| Claude Opus 4.7 | 9 | $0.1593 | 30.6s | 378 / 2048 |
| Claude Haiku 4.5 | 8 | $0.0069 | 14.6s | 263 / 1333 |
| GPT-5.4 | Not run yet | — | — | — |
| Gemini 2.5 Pro | 7 | $0.0024 | 19.2s | 247 / 410 |
| Grok 4.1 Fast Reasoning | 9 | $0.0006 | 11.1s | 383 / 1075 |
## Outputs
# n8n Self-Hosted Customer Support AI Agent Build Guide
## 1. Prerequisites
**Self-Hosted n8n Setup:**
- n8n running via Docker with persistent volume (`docker run -d -p 5678:5678 -v n8n_data:/home/node/.n8n n8nio/n8n`)
- Ensure webhook URLs are publicly accessible or use ngrok for local testing
**API Keys & Credentials:**
- OpenAI API key (gpt-4 or gpt-3.5-turbo recommended)
- Postgres connection details (host, port, database, user, password)
- Store credentials in n8n's credential manager, never hardcode
**Database Setup:**
- Postgres table with schema: `id`, `content`, `metadata`, `embedding` (for semantic search), `created_at`
- Populate with product documentation chunks
---
## 2. Workflow Architecture
**Core Nodes Required:**
1. **Chat Trigger** – Webhook entry point for user messages
2. **Set Variable (User Context)** – Extract userId, sessionId from payload
3. **Memory Node** – Store/retrieve conversation history per user
4. **AI Agent (ReAct)** – Orchestrates tool selection and reasoning
5. **Chat Model (OpenAI)** – LLM backbone
6. **Postgres Node (Query Tool)** – Search documentation knowledge base
7. **HTTP Node (Optional)** – Call external APIs if needed
8. **Chat Send Response** – Return response to user
---
## 3. Step-by-Step Build
**Step 1: Create Chat Trigger**
- Node type: `Chat Trigger`
- Configure webhook path: `/customer-support`
- Authentication: API key or Bearer token
- Extract: `userId`, `message`, `sessionId` from request body
**Step 2: Set Variable for Session Management**
- Node type: `Set`
- Create expressions:
```json
{
"userId": "{{ $input.first().json.userId }}",
"sessionId": "{{ $input.first().json.sessionId || $input.first().json.userId }}_{{ Date.now() }}",
"userMessage": "{{ $input.first().json.message }}"
}
```
**Step 3: Configure Memory Node**
- Node type: `Memory`
- Storage type: `Window Buffer` (keep last 10 messages)
- Key pattern: `conversation_{{ $vars.sessionId }}`
- Add user message to memory before querying AI
**Step 4: Build Postgres Query Tool (Preparation)**
- Node type: `Postgres`
- Name this sub-workflow as "Search Documentation"
- Query template:
```sql
SELECT content, metadata FROM documentation
WHERE content ILIKE '%' || $1 || '%'
LIMIT 5
```
- Parameter: `{{ $input.first().json.query }}`
**Step 5: Configure AI Agent**
- Node type: `AI Agent` (ReAct model)
- LLM: OpenAI Chat Model
- System prompt:
```
You are a helpful customer support agent. You have access to our product documentation.
When a user asks about features, pricing, or usage, search the documentation first.
Be concise and friendly. Always cite documentation sources.
```
- Tools:
- **Search Documentation** (Postgres node) – Input: user query, Output: relevant docs
- **Provide Answer** – Direct response without external search
**Step 6: Add Chat Model Node**
- Model: `gpt-3.5-turbo` or `gpt-4`
- Temperature: `0.7`
- Max tokens: `500`
- Include conversation history: `{{ $vars.memory.messages }}`
**Step 7: Response & Memory Update**
- Node type: `Chat Send Response`
- Map agent output to response field
- Add response to memory with role `assistant`
- Persist memory state to Postgres session table (optional)
**Step 8: Error Handling**
- Add error catch node before response
- Return: `"I encountered an issue. Please try again or contact support directly."`
---
## 4. Testing & Activation
**Local Testing:**
- Use Postman or curl to send test messages:
```bash
curl -X POST http://localhost:5678/webhook/customer-support \
-H "Content-Type: application/json" \
-d '{"userId":"user123","message":"How do I reset my password?","sessionId":"sess_abc"}'
```
**Activation:**
- Deploy workflow to production
- Configure Chat Trigger webhook for production domain
- Monitor execution logs in n8n UI for failures
- Set up alerts for failed executions (optional)
---
## 5. Troubleshooting Tips
| Issue | Solution |
|-------|----------|
| Memory not persisting | Verify storage backend configured; check Redis/DB connectivity |
| Postgres timeouts | Increase query timeout; optimize SQL with indexes on `content` |
| Agent loops | Reduce tool count; add max iteration limit (4-5 steps) |
| Token overflow | Implement message summarization for old history; use `sliding_window` memory |
| OpenAI rate limits | Implement backoff retry; use gpt-3.5-turbo; batch requests |
**Debug Mode:**
- Enable workflow execution logging in n8n settings
- Add `debug` parameter to Chat Trigger for verbose output
- Test memory retrieval: Add Debug node after Memory node to inspect conversation state
---
**Estimated build time:** 2-3 hours. Start with basic Q&A, then layer memory and tools incrementally for stability.
Of course. Here is a complete, step-by-step guide for building your customer support AI agent in a self-hosted n8n instance.

### 1. Prerequisites

Before building, ensure the following are configured in your self-hosted n8n environment:

* **n8n Instance:** A running self-hosted n8n instance (Docker is ideal).
* **OpenAI Credentials:** Go to **Credentials** > **Add credential**, search for "OpenAI API," and add your API key.
* **Postgres Credentials:** Go to **Credentials** > **Add credential**, search for "Postgres," and add your database connection details (host, user, password, database).
* **Postgres `pgvector` Extension:** For effective knowledge base searching, your Postgres database should have the `pgvector` extension installed and your documentation table must contain vector embeddings for semantic search.

### 2. Workflow Architecture

This workflow uses a modular, agentic design. You will use the following n8n nodes:

* **Trigger:** `Chat Trigger` - Provides a webhook to receive user messages.
* **Agent Core:** `AI Agent` - The central orchestrator that manages the conversation flow, state, and tool usage.
* **Language Model:** `LangChain Chat Model (OpenAI)` - Connects the agent to the OpenAI API for reasoning and response generation.
* **Memory:** `LangChain Conversation Window Buffer Memory` - Stores conversation history, segmented by user ID.
* **Tool:** `Postgres` - The action node the agent can call to query your knowledge base.

### 3. Step-by-Step Build

Follow these steps to construct the workflow on your n8n canvas.

**Step 1: Configure the Chat Trigger**

1. Add a `Chat Trigger` node to the canvas.
2. This node exposes a webhook URL. You will
### 1. Prerequisites
- **n8n Setup**: Self-hosted n8n via Docker (e.g., `docker run -it --rm --name n8n -p 5678:5678 -v ~/.n8n:/home/node/.n8n n8nio/n8n`). Access at `http://localhost:5678`. Ensure version ≥1.0 for AI nodes.
- **API Keys/Credentials**:
- OpenAI: Create credential in n8n (Connections > Add Credential > OpenAI). Use your API key.
- Postgres: Create credential (Connections > Postgres). Provide host, database, user, password, port (default 5432). Assume KB table `docs` with columns `id`, `content`, `title` for RAG-style search.
- Enable AI nodes in n8n settings if not active.
### 2. Workflow Architecture
Core nodes:
- **Chat Trigger**: Entry point for user chats; auto-manages session IDs for per-user history.
- **AI Agent** (connected to Chat Trigger output): Orchestrates reasoning.
- **Chat Model**: OpenAI (GPT-4o-mini for efficiency).
- **Memory**: Window Buffer Memory (k=5 for recent history per session).
- **Tools**: Postgres Query Tool (custom: searches KB via SQL with user query embedding/keyword).
- Optional: Error Trigger for handling failures.
High-level flow: User msg → Chat Trigger → AI Agent (uses Memory + Tool if needed) → Response.
### 3. Step-by-Step Build
1. **Create New Workflow**: In n8n, click "New" > name it "Customer Support AI Agent".
2. **Add Chat Trigger**:
- Drag "Chat Trigger" node.
- Config: Allowed Hosts: `*` (or your domain). Response Mode: "Respond to Webhook".
- Save. This generates a chat URL (e.g., `/webhook/chat-trigger/...`).
3. **Add AI Agent Node**:
- Connect Chat Trigger → AI Agent.
- Config: Agent Type: "Tools Agent". Prompt: `You are a helpful customer support agent for our product. Use the Postgres tool to search documentation for accurate answers. Always be polite and concise.`.
- Max Iterations: 5. Attach Chat Trigger input as `{{ $json.message }}`.
4. **Configure Chat Model in AI Agent**:
- Add Resource: OpenAI Chat Model.
- Credential: Your OpenAI cred.
- Model: `gpt-4o-mini`. Temperature: 0.1. Max Tokens: 1000.
5. **Add Memory in AI Agent**:
- Add Resource: Window Buffer Memory.
- Config: Max Messages: 10 (stores per Chat Trigger session ID).
6. **Create Postgres Tool**:
- In AI Agent, Add Tool > "Postgres" (or HTTP if custom).
- Better: Use "Code" Tool for SQL execution.
- Tool Name: "search_docs".
- Tool Description: "Searches product docs in Postgres KB. Input: query string. Returns relevant snippets.".
- Code (JS snippet):
```javascript
const { Client } = require('pg'); // connection details come from the Postgres credential / PG* env vars
const client = new Client();
await client.connect();
const res = await client.query(`
  SELECT title, content FROM docs
  WHERE to_tsvector('english', content) @@ plainto_tsquery('english', $1)
  ORDER BY ts_rank(to_tsvector('english', content), plainto_tsquery('english', $1)) DESC
  LIMIT 3
`, [query]); // `query` is the tool's input string
await client.end();
return { results: res.rows.map(r => `${r.title}: ${r.content.slice(0, 500)}`).join('\n') };
```
- Credential: Postgres cred. (Adapt SQL for your schema; use pgvector for embeddings if advanced.)
7. **Connect & Save**: AI Agent output auto-handles response via Chat Trigger. Activate workflow.
### 4. Testing & Activation
- **Test**: Open generated chat URL in browser/incognito (new session). Ask: "How do I reset my password?" → Agent queries Postgres, responds with history.
- **Multi-Session**: Open multiple tabs; history persists per session ID.
- **Activate**: Toggle "Active" in workflow editor. Embed chat URL in your site via iframe: `<iframe src="your-n8n-chat-url"></iframe>`.
- Monitor executions in n8n UI.
### 5. Troubleshooting Tips
- **No History**: Ensure Chat Trigger session ID flows to Memory (`{{ $json.chatId }}`).
- **Postgres Errors**: Test cred separately (add temp Postgres node). Check Docker network if DB external.
- **Rate Limits**: OpenAI: Monitor usage; fallback to `gpt-3.5-turbo`. n8n queue mode for scale.
- **Tool Not Called**: Refine prompt/tool desc; check AI Agent logs for reasoning.
- **Docker Issues**: Expose ports, volumes persist data. Logs: `docker logs n8n`.
- Debug: Add "Respond to Webhook" node post-Agent for raw outputs.
(Word count: 612)
## What makes these work

1. **Use sub-workflows as tools.** Instead of cramming logic into the agent's tool definitions, build each tool as its own n8n workflow and expose it via the Execute Workflow tool. This keeps the agent node clean, makes tools testable in isolation, and lets you version them independently.
2. **Start with window buffer memory.** The Window Buffer Memory node (last N messages) is cheaper and more predictable than vector-backed memory for most agents. Only upgrade to Postgres or Redis-backed memory when you genuinely need cross-session recall or multi-user isolation.
3. **Pin model versions explicitly.** Never use aliases like 'gpt-4o' in production; pin to 'gpt-4o-2024-08-06' or similar. Provider alias updates silently change behavior and break tool-calling reliability. Set the version in an environment variable so one change updates all workflows.
4. **Log every tool call to a database.** Add a Postgres or Airtable node after each tool execution that writes inputs, outputs, latency, and the agent's reasoning message. When the agent behaves oddly in week three, this log is the only way to debug what it actually saw versus what you assumed.
## More example scenarios

**Input:** Incoming email from customer: 'Hi, I was charged twice for order #48210 on October 3rd. First charge was $89.99, second was $89.99 again. Can you refund the duplicate? My account email is sara.m@domain.com.'

**Result:** Agent calls lookup_order tool with 48210, confirms two Stripe charges on 2024-10-03, calls issue_refund for the later charge ID, then drafts reply: 'Hi Sara, I confirmed the duplicate charge on order #48210 and issued a $89.99 refund to your original payment method. It should appear in 5-7 business days. Ticket #T-9921 created for tracking.'

**Input:** New HubSpot contact: {name: 'Derek Liu', email: 'derek@northwindlogistics.io', company: 'Northwind Logistics', title: null, employees: null}

**Result:** Agent calls Clearbit enrichment tool, then LinkedIn scraper sub-workflow, returns: {title: 'VP Operations', employees: 240, industry: 'Freight & Logistics', recent_news: 'Raised Series B $18M August 2024', fit_score: 87, recommended_sequence: 'enterprise-logistics-v3'}. Updates HubSpot contact and assigns to AE Jordan based on territory rules.

**Input:** PagerDuty alert payload: service 'checkout-api' triggered high-error-rate alert, 47 5xx responses in last 5 min, started 14:22 UTC. Linked Datadog dashboard shows DB connection pool exhausted.

**Result:** Agent queries Datadog metrics tool, pulls recent deploys from GitHub tool, posts to Slack #incidents: 'checkout-api degraded since 14:22 UTC. Root cause likely PR #3421 deployed 14:18 which added unbatched DB reads in /cart endpoint. Connection pool at 100%. Recommend revert. Runbook: wiki/checkout-db-exhaustion. On-call: @marcus'

**Input:** Google Doc titled 'Q4 Product Launch Blog' marked ready-for-publish. Contains 1,400 word draft, 3 embedded images, author 'Priya Sharma'.

**Result:** Agent extracts content, generates meta description (155 chars) and 3 social variants, uploads images to Cloudinary, creates WebFlow CMS entry with correct schema, sets canonical URL, schedules publish for Tuesday 10am. Posts to Slack: 'Q4 launch post queued for 10/22 10:00 ET, preview: staging.site.com/p/q4-launch, Priya tagged for final approval.'
## Common mistakes to avoid

- **Vague tool descriptions.** Writing 'gets customer data' as a tool description means the agent will guess when to call it and often won't. Describe inputs, outputs, and trigger conditions precisely: 'Use when user mentions an order ID or email. Returns order history, payment status, shipping events.'
- **Skipping error branches.** Tools fail. APIs rate limit, databases time out. If you don't wire the error output of each tool node back to the agent with a structured error message, the agent hangs or returns hallucinated success. Always return {error: 'rate_limited', retry_after: 30} format.
- **Running self-hosted without queue mode.** The default n8n execution mode runs everything in the main process. Under load, a single long agent run blocks the whole instance. Switch to queue mode with Redis and separate worker containers before you hit real traffic, not after.
- **Putting secrets in prompts.** Developers paste API keys, customer PII, or internal URLs into system prompts for convenience. These get logged by the LLM provider and stored in n8n execution history. Use credential nodes and pass IDs, not raw data, to the agent's context.
- **Ignoring token cost per loop.** An agent with 8 tools and 10-turn memory can send 15k+ tokens on every reasoning step. At scale this is the largest hidden cost. Monitor token usage per execution and trim tool descriptions and memory window aggressively.
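The error-branch point can be sketched as a wrapper around each tool: failures come back as structured JSON the agent can reason about instead of an unhandled exception. The function name and error shape are assumptions, modeled on the format in the list above:

```javascript
// Wrap a tool function so failures surface as structured results
// the agent can read, rather than crashing or hanging the run.
async function safeTool(fn, input) {
  try {
    return { ok: true, result: await fn(input) };
  } catch (err) {
    if (err && err.status === 429) {
      // Rate limit: tell the agent when it is worth retrying.
      return { ok: false, error: 'rate_limited', retry_after: 30 };
    }
    return { ok: false, error: 'tool_failed', message: String((err && err.message) || err) };
  }
}
```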
## Frequently asked questions
**Do I need the paid n8n cloud to build AI agents?**
No. The AI Agent node, LangChain integrations, and all LLM provider nodes work on the free self-hosted Community edition. You only need paid plans for SSO, advanced RBAC, or if you want n8n to host it for you. Self-hosted on a $20 VPS handles thousands of agent runs per day.
**Which LLM should I use for an n8n agent?**
For reliable tool calling, GPT-4o and Claude 3.5 Sonnet are the most consistent. Gemini 1.5 Pro is cheaper at long context. For fully local, Llama 3.1 70B or Qwen 2.5 via Ollama work but expect more prompt engineering to get tool calls right. Start with GPT-4o-mini, benchmark, then optimize.
**How do I give my agent access to my own database?**
Create a sub-workflow that accepts a query parameter, runs a parameterized Postgres or MySQL node, and returns structured JSON. Expose that sub-workflow to the agent as a tool with a clear description. Never let the agent write raw SQL directly against production; always go through a parameterized wrapper.
**Can n8n agents handle multi-step reasoning?**
Yes. The AI Agent node implements the ReAct loop, so the model can call a tool, observe the result, reason, and call another tool. You can set max iterations (default 10) to prevent runaway loops. For complex plans consider chaining two agent nodes, one for planning and one for execution.
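The ReAct loop described here reduces to a few lines. In this sketch the "model" is a stub function standing in for the LLM so the loop is runnable; a real agent would call a chat model there:

```javascript
// Minimal ReAct-style loop: each turn the model either requests a
// tool call or returns a final answer; tool observations are appended.
function runAgent(model, tools, userInput, maxIterations = 10) {
  const transcript = [{ role: 'user', content: userInput }];
  for (let i = 0; i < maxIterations; i++) {
    const step = model(transcript);                  // decide: tool call or final answer
    if (step.final !== undefined) return step.final;
    const observation = tools[step.tool](step.args); // execute the chosen tool
    transcript.push({ role: 'tool', name: step.tool, content: observation });
  }
  return null; // hit the iteration cap without a final answer
}
```

Here `maxIterations` plays the same role as the max-iterations setting on the AI Agent node: a guard against runaway loops.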
**How do I deploy a self-hosted n8n agent to production?**
Run n8n in Docker with queue mode enabled, Redis for the queue, Postgres for persistence, and at least two worker containers. Put it behind an HTTPS reverse proxy (Caddy or Traefik). Set N8N_ENCRYPTION_KEY, back up the Postgres database nightly, and monitor the /healthz endpoint.
**What's the difference between an n8n AI agent and a regular n8n workflow with an OpenAI node?**
A regular workflow is deterministic: you define every step in advance. An AI agent is given tools and a goal, and decides at runtime which tools to call and in what order. Use a regular workflow when the steps are fixed; use an agent when input is open-ended and the path varies.