
Build an AI Agent with n8n: Step-by-Step Tutorial

Posted: March 11, 2026 in Technology.

An AI agent built with n8n is an automated workflow that combines large language model reasoning with real-world tool execution, allowing the agent to read emails, query databases, call APIs, and take actions based on natural language instructions. Unlike simple chatbots, n8n AI agents can chain multiple tools together, maintain conversation memory, and execute complex multi-step tasks without human intervention.


Key Takeaways

  • n8n is a self-hosted, open-source workflow automation platform with native AI agent capabilities built in since version 1.30
  • AI agents in n8n use a ReAct (Reasoning + Acting) loop that lets the LLM decide which tools to call and in what order
  • You can connect AI agents to 400+ integrations including databases, email, CRMs, and custom APIs without writing code
  • Self-hosting n8n on your own infrastructure keeps all data, prompts, and outputs private, which is critical for compliance-regulated businesses
  • A functional AI agent can be built and deployed in under two hours using this guide

Why n8n for AI Agents

The AI agent landscape in 2026 is crowded. LangChain, CrewAI, AutoGen, and dozens of other frameworks compete for developer attention. Most of them require Python expertise, complex dependency management, and significant custom code.

n8n takes a fundamentally different approach. It provides a visual workflow builder where AI agent logic is constructed by connecting nodes on a canvas. The underlying execution engine handles retries, error handling, and credential management. You get the power of a code-based agent framework with the maintainability of a visual tool.

For businesses that need AI automation but cannot afford a full-time AI engineering team, n8n bridges the gap. At Petronella Technology Group, we deploy n8n as the backbone of our AI automation services because it lets us build production-grade agents in hours instead of weeks.

Key Advantages Over Code-Based Frameworks

| Feature | n8n | LangChain/CrewAI | Custom Python |
| --- | --- | --- | --- |
| Setup time | 15 minutes (Docker) | 1-2 hours (pip, dependencies) | 4+ hours |
| Visual debugging | Yes (execution history) | No (log parsing) | No |
| Credential management | Built-in vault | Manual env vars | Manual |
| 400+ integrations | Native nodes | Custom code per integration | Custom code |
| Self-hosted option | Yes | Yes | Yes |
| Maintenance overhead | Low (auto-updates) | High (dependency conflicts) | High |
| Non-developer friendly | Yes | No | No |

Prerequisites

Before starting, you need:

  • A Linux server (Ubuntu 22.04+ or NixOS) with at least 4 GB RAM and 2 CPU cores
  • Docker and Docker Compose installed
  • An API key from at least one LLM provider (OpenAI, Anthropic, or a local Ollama instance)
  • Basic familiarity with REST APIs (helpful but not required)

Step 1: Deploy n8n with Docker Compose

Create a directory for your n8n deployment and add the following Docker Compose configuration:

mkdir -p ~/n8n-compose && cd ~/n8n-compose

Then create docker-compose.yml with the following content. Note the DB_* variables: they point n8n at the Postgres service. Without them, n8n ignores the Postgres container and silently falls back to its bundled SQLite database.

# docker-compose.yml
version: "3.8"
services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
      - GENERIC_TIMEZONE=America/New_York
      - N8N_AI_ENABLED=true
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=your_secure_password_here
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

  postgres:
    image: postgres:16
    container_name: n8n-postgres
    restart: unless-stopped
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=your_secure_password_here
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  n8n_data:
  postgres_data:

Start the stack:

docker compose up -d

Navigate to http://your-server-ip:5678 in your browser. Create your admin account on the first visit. This account controls all workflows and credentials.

Step 2: Configure LLM Credentials

Before building an agent, you need to register your LLM provider credentials in n8n.

  1. Go to Settings > Credentials > Add Credential
  2. Search for your provider:
    • OpenAI: Enter your API key
    • Anthropic: Enter your API key
    • Ollama (local): Set the base URL to http://host.docker.internal:11434 (no API key needed)

For HIPAA-regulated environments, use Ollama pointed at your private LLM deployment. This keeps all inference local with zero data leaving your network.

Step 3: Build Your First AI Agent Workflow

The Agent Node

n8n's AI Agent node is the core of every agent workflow. It implements a ReAct loop:

  1. The LLM receives your prompt and available tools
  2. It reasons about which tool to call first
  3. It executes the tool and receives the result
  4. It reasons again, potentially calling another tool
  5. It repeats until it has enough information to respond
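
The loop above can be sketched in plain JavaScript with a mock planner standing in for the LLM. This is illustrative only: the tools, data, and planner logic are all made up, and n8n's AI Agent node handles this loop internally.

```javascript
// Illustrative ReAct loop with a mock planner standing in for the LLM.
const tools = {
  searchDb: (query) => ({ name: "Acme Corp", pipeline: 42000 }), // mock CRM lookup
  calcPricing: (employees) => employees * 50,                    // mock pricing rule
};

// Mock "reasoning" step: decide the next tool call, or answer if done.
function plan(state) {
  if (!state.record) return { tool: "searchDb", args: "Acme" };
  if (state.price === undefined) return { tool: "calcPricing", args: 120 };
  return { answer: `Pipeline $${state.record.pipeline}, price $${state.price}` };
}

function runAgent() {
  const state = {};
  for (let step = 0; step < 10; step++) {        // hard cap prevents runaway loops
    const decision = plan(state);
    if (decision.answer) return decision.answer; // loop ends once the LLM answers
    const result = tools[decision.tool](decision.args);
    if (decision.tool === "searchDb") state.record = result;
    else state.price = result;
  }
  throw new Error("Step limit reached without an answer");
}

console.log(runAgent()); // Pipeline $42000, price $6000
```

The hard step cap is the important design detail: production agents always need an upper bound on loop iterations, because an LLM that never decides it is done will otherwise burn tokens indefinitely.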

Create a new workflow and add these nodes:

Node 1: Trigger (Webhook)

  • Add a Webhook node as the trigger
  • Set method to POST
  • This gives you a URL to send requests to your agent

Node 2: AI Agent

  • Add an AI Agent node
  • Connect it to the Webhook trigger
  • Configure:
    • Agent Type: Tools Agent
    • System Message: Define the agent's role, constraints, and personality
    • LLM: Select your configured credential (OpenAI GPT-4, Claude, or Ollama)

Node 3: Tools (Connect to the Agent)

Tools are what make an agent useful. Without tools, it is just a chatbot. Add tools as sub-nodes connected to the AI Agent:

HTTP Request Tool: Lets the agent call any REST API

Name: "Search Company Database"
Description: "Search the internal company database for customer information by name or account number"
URL: https://your-api.internal/search
Method: GET
Query Parameters: q={{ $json.query }}
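
The `{{ $json.query }}` syntax above is an n8n expression filled in from the incoming item. A simplified resolver shows the substitution; this is illustrative only, since n8n's real expression engine supports full JavaScript expressions, not just this one pattern.

```javascript
// Simplified resolver for n8n-style {{ $json.field }} expressions,
// showing how q={{ $json.query }} is filled from the incoming item.
function resolveExpression(template, item) {
  // Replace each {{ $json.key }} with the matching field, or "" if missing.
  return template.replace(/\{\{\s*\$json\.(\w+)\s*\}\}/g, (_, key) => item.json[key] ?? "");
}

console.log(resolveExpression("q={{ $json.query }}", { json: { query: "acme corp" } }));
// q=acme corp
```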

Postgres Tool: Direct database queries

Name: "Query Sales Data"
Description: "Run SQL queries against the sales database to retrieve revenue, deals, and pipeline information"
Operation: Execute Query

Gmail Tool: Read and send emails

Name: "Send Follow-Up Email"
Description: "Send a follow-up email to a prospect with personalized content"

Code Tool: Execute custom JavaScript

Name: "Calculate Pricing"
Description: "Calculate service pricing based on company size and requirements"

Node 4: Memory (Optional but Recommended)

Add a Window Buffer Memory node connected to the AI Agent to maintain conversation context across multiple interactions. Configure the window size (typically 10-20 messages) to balance context quality with token costs.
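
Conceptually, a window buffer is just a list that drops its oldest message once it exceeds the window size, keeping the context passed to the LLM (and its token cost) bounded. The class below is a simplified model of that behavior; the name and API are illustrative, not n8n's internal implementation.

```javascript
// Simplified model of window buffer memory: keep only the last N messages.
class WindowBufferMemory {
  constructor(windowSize = 10) {
    this.windowSize = windowSize;
    this.messages = [];
  }
  add(role, content) {
    this.messages.push({ role, content });
    if (this.messages.length > this.windowSize) this.messages.shift(); // drop oldest
  }
  context() {
    return this.messages; // what gets prepended to the next LLM call
  }
}
```

With a window of 10, the eleventh message evicts the first, so long conversations never grow the prompt without bound.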

Step 4: Write an Effective System Prompt

The system prompt drives most of your agent's behavior. A weak prompt produces unreliable results regardless of model quality. Here is an example system prompt:

You are a sales assistant for Petronella Technology Group.
Your job is to help the sales team research prospects and
prepare for meetings.

## Rules
- Always verify information by checking our CRM before responding
- Never fabricate company details or contact information
- When asked about pricing, use the Calculate Pricing tool
  with the prospect's employee count and industry
- Format all responses in clear, professional language
- If you cannot find information, say so explicitly

## Available Context
- Company: Petronella Technology Group, cybersecurity and AI consulting
- Services: Managed IT, CMMC compliance, private AI deployment
- Location: Raleigh, NC | Phone: 919-348-4912

Prompt Engineering Tips for n8n Agents

  • Be explicit about when to use each tool. LLMs perform better with clear decision criteria
  • Include negative instructions ("never fabricate") alongside positive ones
  • Define output format expectations in the prompt
  • Test with edge cases: what happens when the database returns no results?

Step 5: Add Error Handling

Production agents need graceful failure handling. n8n provides several mechanisms:

Retry on fail: Enable on HTTP Request and database nodes. Set 3 retries with exponential backoff.
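
Exponential backoff doubles the wait between attempts (1s, 2s, 4s, ...). The helper below sketches the pattern; the function names are illustrative, since in n8n you configure this per node rather than writing the code yourself.

```javascript
// Sketch of retry-with-exponential-backoff, mirroring n8n's "retry on fail".
function backoffDelayMs(attempt, baseMs = 1000) {
  return baseMs * 2 ** attempt; // 1s, 2s, 4s, ...
}

async function withRetry(fn, maxRetries = 3, sleep = (ms) => new Promise((r) => setTimeout(r, ms))) {
  let lastErr;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt < maxRetries) await sleep(backoffDelayMs(attempt));
    }
  }
  throw lastErr; // all retries exhausted; error-trigger workflow takes over
}
```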

Error trigger workflow: Create a separate workflow that fires when the main agent fails. Use it to send alerts via Slack, email, or SMS.

Output validation: Add a Code node after the AI Agent that validates the output format before passing it downstream:

const response = $input.first().json;

if (!response.output || response.output.length < 10) {
  throw new Error('Agent produced empty or insufficient response');
}

return [{ json: { validated: true, output: response.output } }];

Step 6: Deploy and Test

Testing Workflow

  1. Activate the workflow in n8n
  2. Send a test request to the webhook URL:
curl -X POST https://your-n8n-instance/webhook/your-agent \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the current pipeline value for Acme Corp?"}'
  3. Check the execution log in n8n. Every node execution is visible, including the LLM's reasoning chain, tool calls, and final output.

Production Checklist

  • Webhook secured with authentication header
  • All credentials stored in n8n's built-in credential vault
  • Error handling workflow connected
  • Rate limiting configured (prevent runaway tool calls)
  • Execution history retention set (default: 30 days)
  • Backup configured for n8n data volume
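
For the rate-limiting item, a token bucket is one common approach. This is a hypothetical helper you might run inside a Code node to cap tool calls per run; n8n has no built-in rate-limiter node.

```javascript
// Hypothetical token-bucket rate limiter for capping agent tool calls.
class TokenBucket {
  constructor(capacity, refillPerSec, now = () => Date.now()) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSec = refillPerSec;
    this.now = now; // injectable clock, handy for testing
    this.last = now();
  }
  tryRemove() {
    const t = this.now();
    // Refill proportionally to elapsed time, never above capacity.
    this.tokens = Math.min(this.capacity, this.tokens + ((t - this.last) / 1000) * this.refillPerSec);
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // caller should stop or defer the tool call
  }
}
```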

Practical Agent Examples

Example 1: Lead Research Agent

Trigger: New CRM lead arrives. Agent researches the company website, LinkedIn, and news. Outputs a pre-meeting brief with company size, industry, potential pain points, and recommended talking points.

Example 2: IT Support Triage Agent

Trigger: Support ticket email received. Agent classifies severity, checks knowledge base for existing solutions, and either responds with a solution or escalates to the correct technician with context.

Example 3: Compliance Document Reviewer

Trigger: Document uploaded to shared folder. Agent reads the document, checks it against CMMC/HIPAA/SOC 2 requirements stored in a vector database, and flags gaps or missing controls.

Example 4: Daily Operations Digest

Trigger: Scheduled (every morning at 7 AM). Agent queries multiple systems (CRM, ticketing, monitoring), summarizes overnight events, and sends a digest email to the team lead.

For production implementations of agents like these, our AI agent development team can build custom solutions tailored to your specific workflows.

Security Considerations for Self-Hosted n8n

Self-hosting n8n gives you complete control, but that control comes with responsibility:

  • Never expose n8n directly to the internet. Place it behind a reverse proxy (NGINX, Caddy) with TLS and authentication
  • Rotate API keys quarterly. n8n stores credentials encrypted, but key rotation limits blast radius
  • Restrict network access. The n8n container should only reach the services it needs, nothing more
  • Audit execution logs. Review what your agents are doing weekly. LLMs can behave unpredictably with novel inputs
  • Keep n8n updated. Security patches ship regularly. Subscribe to the n8n changelog

For businesses in regulated industries, these security measures are not optional. They are the baseline for maintaining compliance while leveraging AI automation. Learn more about our approach to secure IT services.

What Comes Next

Once your first agent is running, the natural progression is:

  1. Add more tools: Connect additional APIs, databases, and services
  2. Build multi-agent workflows: Chain agents together where one agent's output triggers another
  3. Implement vector memory: Add a Pinecone or Qdrant node for long-term knowledge retrieval
  4. Monitor and optimize: Track token usage, response times, and error rates
  5. Scale horizontally: Deploy multiple n8n workers behind a load balancer for high-throughput workloads

Get Started with AI Automation

Petronella Technology Group builds and deploys AI agent systems for businesses across healthcare, defense, and professional services. Whether you need a single workflow automated or an entire operations platform built on n8n, our team has the expertise to deliver.

We are Raleigh's only consultancy combining AI development with CMMC-certified cybersecurity (RP-1372). That dual expertise means your AI agents are built secure from day one.

Call 919-348-4912 or visit petronellatech.com/contact/ to discuss your automation needs.


About the Author: Craig Petronella is the CEO of Petronella Technology Group, Inc., with over 30 years of experience in IT infrastructure and cybersecurity. As a CMMC Registered Practitioner (RP-1372), Craig specializes in building secure automation systems for compliance-regulated organizations. He hosts the Petronella Technology Group podcast and writes extensively on the intersection of AI and cybersecurity.


Frequently Asked Questions

Is n8n free to use?

n8n offers a Community Edition that is completely free and open source under the Sustainable Use License. You can self-host it on your own infrastructure with no user limits or workflow restrictions. n8n also offers a paid Cloud edition starting at $24 per month for teams that prefer managed hosting.

Can n8n AI agents use local LLMs instead of OpenAI?

Yes. n8n has native Ollama integration. Point the Ollama credential at your local instance (typically http://localhost:11434), and the AI Agent node will use your self-hosted model for all reasoning and tool-calling. This keeps all data on your infrastructure with zero external API calls.

How many concurrent agents can n8n handle?

A single n8n instance on a 4-core server with 8 GB RAM comfortably handles 50 to 100 concurrent workflow executions. For higher throughput, n8n supports a queue mode with multiple worker processes. Enterprise deployments typically run 3 to 5 workers behind a load balancer.

What LLMs work best with n8n's AI Agent node?

For tool-calling reliability, Anthropic Claude 3.5 Sonnet and OpenAI GPT-4o produce the most consistent results. For local deployment, Llama 3.1 70B Instruct via Ollama handles most agent tasks well. Smaller models (7B-13B) work for simple single-tool agents but struggle with complex multi-tool reasoning chains.

Is n8n secure enough for healthcare or government data?

Self-hosted n8n gives you full control over data residency, encryption, and access controls. When deployed on your own infrastructure behind proper network security, n8n meets the technical requirements for HIPAA, CMMC, and SOC 2 compliance. The key is the deployment configuration, not the software itself.

How does n8n compare to Zapier or Make for AI workflows?

n8n is self-hosted, giving you data sovereignty that Zapier and Make cannot provide. n8n's AI Agent node supports true autonomous tool-calling, while Zapier and Make treat AI as simple text-in/text-out nodes. For compliance-regulated businesses that need AI agents with access to sensitive data, n8n is the clear choice.

Can I version-control my n8n workflows?

Yes. n8n workflows export as JSON files that can be committed to Git. We recommend exporting workflows after every significant change and storing them in a private repository. This gives you rollback capability and change tracking. The n8n API also supports programmatic workflow export for automated backups.
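
An automated backup can hit the public API's workflows endpoint, authenticated with the X-N8N-API-KEY header. The sketch below only builds the request; the base URL and key are placeholders for your own instance.

```javascript
// Sketch: build the request for exporting workflows via n8n's public REST API
// (GET /api/v1/workflows). Base URL and key are placeholders.
function buildExportRequest(baseUrl, apiKey) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/api/v1/workflows`,
    options: {
      method: "GET",
      headers: { "X-N8N-API-KEY": apiKey, Accept: "application/json" },
    },
  };
}

// Usage (requires a live instance, not run here):
// const { url, options } = buildExportRequest("https://n8n.example.com", process.env.N8N_API_KEY);
// const workflows = await (await fetch(url, options)).json();
```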

