
How to Build Self-Hosted AI Workflows with n8n + Ollama

A practical guide to running AI automation on your own hardware — no API keys, no monthly bills, no data leaving your machine. Includes a free, complete email classifier workflow.

Why Self-Hosted AI Matters

Every time you send data to OpenAI, Anthropic, or Google's APIs, three things happen:

  1. Your data leaves your control. Customer emails, internal documents, support tickets — all sent to third-party servers.
  2. You pay per token. A single workflow processing 100 emails/day can cost $50–200/month on GPT-4.
  3. You depend on uptime you don't control. Rate limits, outages, and API deprecations can break your automation without warning.

Self-hosted AI with Ollama eliminates all three problems: your data never leaves your machine, inference costs nothing beyond the hardware you already own, and no third-party rate limit, outage, or deprecation can break your workflows.

The tradeoff: Local models are smaller than GPT-4. But for structured tasks like classification, summarization, data extraction, and drafting — the ones you actually want to automate — a well-prompted 8B parameter model running locally is more than sufficient.

How n8n + Ollama Work Together

n8n is an open-source workflow automation tool (self-hosted alternative to Zapier). Ollama is a local LLM runner that exposes models via a REST API on localhost:11434.

The integration is straightforward: n8n's HTTP Request node calls Ollama's /api/generate endpoint. No special plugins needed.

[Trigger] → [n8n Workflow] → [HTTP Request to Ollama] → [Process Response] → [Output]
                                          ↓
                               localhost:11434 (Llama 3, etc.)
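
To see the moving parts outside n8n, here's a minimal Python sketch of the same call the HTTP Request node makes. The payload fields match Ollama's /api/generate API; the helper names (build_classify_payload, extract_label, classify) are mine, and nothing here runs a live request unless you call classify() yourself:

```python
import json
import urllib.request

# From inside the n8n Docker container, this would be
# http://host.docker.internal:11434 instead of localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_classify_payload(subject: str, body: str) -> dict:
    """Build the same JSON body the n8n HTTP Request node sends."""
    prompt = (
        "Classify this email into one of these categories: "
        "URGENT, QUESTION, FEEDBACK, SPAM, OTHER.\n\n"
        f"Subject: {subject}\nBody: {body[:1000]}\n\n"
        "Respond with ONLY the category name, nothing else."
    )
    return {
        "model": "llama3:8b",
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
        "options": {"temperature": 0.1, "num_predict": 20},
    }

def extract_label(raw_body: str) -> str:
    """Ollama's non-streaming reply is JSON with a 'response' field."""
    return json.loads(raw_body)["response"].strip()

def classify(subject: str, body: str) -> str:
    """Send the request to a locally running Ollama and return the label."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_classify_payload(subject, body)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return extract_label(resp.read().decode())
```

The n8n expression in the workflow does exactly this with JSON.stringify inside the HTTP Request node's JSON body field.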

Setup (5 minutes)

1. Install Ollama

curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3:8b

2. Install n8n

docker run -d --name n8n -p 5678:5678 \
  --add-host=host.docker.internal:host-gateway \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n

Docker note: The --add-host flag is critical if n8n runs in Docker — it lets n8n reach Ollama on the host machine. Inside your workflows, use http://host.docker.internal:11434 instead of localhost.

3. Verify Ollama is reachable

curl http://localhost:11434/api/generate -d '{
  "model": "llama3:8b",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'

You should get a JSON response with a response field. If that works, you're ready.

Free Example: Email Classifier Workflow

Here's a complete, working n8n workflow that classifies incoming emails using Ollama. Import the JSON below directly into n8n.

What it does:

  1. Checks for unread emails every 5 minutes (IMAP)
  2. Sends each email to Ollama for classification (URGENT, QUESTION, FEEDBACK, SPAM, OTHER)
  3. Filters out spam
  4. Drafts a context-aware reply using Ollama
  5. Outputs the draft for your review

The Workflow JSON

Copy this and import via n8n > Workflows > Import from JSON:

{
  "name": "AI Email Classifier + Auto-Responder (Ollama)",
  "nodes": [
    {
      "parameters": {
        "rule": {
          "interval": [{ "field": "minutes", "minutesInterval": 5 }]
        }
      },
      "id": "schedule",
      "name": "Check Every 5 Minutes",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1.2,
      "position": [240, 300]
    },
    {
      "parameters": {
        "operation": "getAll",
        "returnAll": false,
        "limit": 10,
        "filters": {
          "readStatus": "unread"
        }
      },
      "id": "get-emails",
      "name": "Get Unread Emails (IMAP)",
      "type": "n8n-nodes-base.emailReadImap",
      "typeVersion": 2,
      "position": [460, 300],
      "credentials": {
        "imap": {
          "id": "REPLACE_WITH_YOUR_IMAP_CREDENTIAL_ID",
          "name": "Your IMAP Account"
        }
      }
    },
    {
      "parameters": {
        "url": "http://localhost:11434/api/generate",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={{ JSON.stringify({ model: 'llama3:8b', prompt: 'Classify this email into one of these categories: URGENT, QUESTION, FEEDBACK, SPAM, OTHER.\\n\\nFrom: ' + $json.from + '\\nSubject: ' + $json.subject + '\\nBody: ' + ($json.text || '').substring(0, 1000) + '\\n\\nRespond with ONLY the category name, nothing else.', stream: false, options: { temperature: 0.1, num_predict: 20 } }) }}",
        "options": { "timeout": 60000, "response": { "response": { "responseFormat": "text" } } }
      },
      "id": "classify",
      "name": "Classify Email (Ollama)",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [680, 300]
    },
    {
      "parameters": {
        "conditions": {
          "options": { "caseSensitive": false },
          "combinator": "or",
          "conditions": [
            {
              "leftValue": "={{ JSON.parse($json.data).response.trim() }}",
              "rightValue": "SPAM",
              "operator": { "type": "string", "operation": "notEquals" }
            }
          ]
        }
      },
      "id": "filter-spam",
      "name": "Filter Spam",
      "type": "n8n-nodes-base.filter",
      "typeVersion": 2,
      "position": [900, 300]
    },
    {
      "parameters": {
        "url": "http://localhost:11434/api/generate",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={{ JSON.stringify({ model: 'llama3:8b', prompt: 'Write a professional, helpful email reply to this message. Be concise and friendly.\\n\\nFrom: ' + $('Get Unread Emails (IMAP)').item.json.from + '\\nSubject: ' + $('Get Unread Emails (IMAP)').item.json.subject + '\\nBody: ' + ($('Get Unread Emails (IMAP)').item.json.text || '').substring(0, 2000) + '\\n\\nWrite ONLY the reply body. Sign off as the team.', stream: false, options: { temperature: 0.5, num_predict: 1000 } }) }}",
        "options": { "timeout": 120000, "response": { "response": { "responseFormat": "text" } } }
      },
      "id": "draft-reply",
      "name": "Draft Reply (Ollama)",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [1120, 300]
    },
    {
      "parameters": {
        "assignments": {
          "assignments": [
            {
              "id": "draft",
              "name": "draft_reply",
              "value": "={{ JSON.parse($json.data).response }}",
              "type": "string"
            },
            {
              "id": "original_from",
              "name": "original_from",
              "value": "={{ $('Get Unread Emails (IMAP)').item.json.from }}",
              "type": "string"
            },
            {
              "id": "original_subject",
              "name": "original_subject",
              "value": "={{ 'Re: ' + $('Get Unread Emails (IMAP)').item.json.subject }}",
              "type": "string"
            },
            {
              "id": "category",
              "name": "category",
              "value": "={{ JSON.parse($('Classify Email (Ollama)').item.json.data).response.trim() }}",
              "type": "string"
            }
          ]
        }
      },
      "id": "prepare-output",
      "name": "Prepare Draft for Review",
      "type": "n8n-nodes-base.set",
      "typeVersion": 3.4,
      "position": [1340, 300]
    }
  ],
  "connections": {
    "Check Every 5 Minutes": {
      "main": [[{ "node": "Get Unread Emails (IMAP)", "type": "main", "index": 0 }]]
    },
    "Get Unread Emails (IMAP)": {
      "main": [[{ "node": "Classify Email (Ollama)", "type": "main", "index": 0 }]]
    },
    "Classify Email (Ollama)": {
      "main": [[{ "node": "Filter Spam", "type": "main", "index": 0 }]]
    },
    "Filter Spam": {
      "main": [[{ "node": "Draft Reply (Ollama)", "type": "main", "index": 0 }]]
    },
    "Draft Reply (Ollama)": {
      "main": [[{ "node": "Prepare Draft for Review", "type": "main", "index": 0 }]]
    }
  },
  "settings": { "executionOrder": "v1" },
  "tags": [{ "name": "AI" }, { "name": "Ollama" }, { "name": "Email" }]
}

How to Use It

  1. Import the JSON into n8n
  2. Set up your IMAP credentials (Settings > Credentials > Add IMAP)
  3. If n8n runs in Docker, replace localhost with host.docker.internal in the Ollama URLs
  4. Activate the workflow

Key Implementation Details

Why temperature: 0.1 for classification? Low temperature makes the model's output nearly deterministic — you want consistent category labels, not creative ones. For drafting replies, temperature: 0.5 adds enough variation to sound natural.

Why num_predict: 20 for classification? The model only needs to output one word (the category). Limiting output tokens prevents rambling and speeds up response time.

Why substring the email body? Ollama models have context windows (typically 4096–8192 tokens). Truncating to 1000–2000 characters keeps you safely within limits while capturing the important content.
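
A quick sanity check, using the common rough heuristic of ~4 characters per token for English prose (real counts depend on the model's tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English prose."""
    return len(text) // 4

# The workflow truncates to 1000 chars (classification) and 2000 chars (drafting).
clipped = ("some long email body " * 500)[:2000]
print(estimate_tokens(clipped))  # → 500, comfortably inside a 4096-token window
```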

Parsing the response: Ollama returns {"response": "URGENT", ...}. With the HTTP Request node's Response Format set to Text, the raw body arrives in $json.data, and JSON.parse($json.data).response.trim() extracts the clean category label.
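
Even at low temperature, a small model can occasionally ramble, so it's worth guarding the parse. Here's a defensive sketch — the fall-back-to-OTHER policy is my suggestion, not part of the workflow above:

```python
import json

VALID = {"URGENT", "QUESTION", "FEEDBACK", "SPAM", "OTHER"}

def parse_category(raw_body: str) -> str:
    """Extract the label and sanity-check it; fall back to OTHER if the model rambles."""
    label = json.loads(raw_body)["response"].strip().upper()
    return label if label in VALID else "OTHER"

print(parse_category('{"response": " urgent\\n", "done": true}'))     # → URGENT
print(parse_category('{"response": "Category: spam", "done": true}'))  # → OTHER
```

In n8n, the same guard fits in a small Code node between the classify and filter steps.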

What's in the Full Pack

The Self-Hosted AI Workflow Pack includes 11 production-ready n8n workflows, all powered by Ollama. Every workflow includes complete JSON, in-workflow documentation via sticky notes, and prompts tuned for reliable structured output.

The workflows span three categories: Content & Marketing, Sales & Business, and Productivity.

Get All 11 Workflows for $39

One-time purchase. No subscriptions. No API costs. Unlimited runs forever.

Get the Full Pack — $39

30-day money-back guarantee. Instant download after purchase.

Wrapping Up

Self-hosted AI with n8n + Ollama is the practical path for anyone who wants AI automation without vendor lock-in, recurring costs, or data privacy headaches. The email classifier above is a working starting point — import it, tweak it, and see what local AI can do.

If you want to skip the hours of prompt engineering and workflow building, the full pack of 11 templates is ready to import and run.