How to Build an AI Email Auto-Responder with n8n and Ollama (No API Keys Needed)

Published March 24, 2026 · 18 min read · Beginner-friendly

Your inbox is a time sink. The average professional spends over 2 hours per day reading and responding to emails, and most of those replies follow predictable patterns. What if you could build an AI system that reads every incoming email, classifies it by type and urgency, and drafts a context-aware reply — all running on your own machine with zero API costs?

In this n8n Ollama tutorial, you will build exactly that: a self-hosted AI email automation pipeline that connects your inbox to a locally-running AI model. No OpenAI key. No cloud dependency. No per-request charges. Just your hardware, doing the work.

By the end, you will have a working n8n AI workflow that polls your inbox every 5 minutes, classifies each message (support, sales, spam, personal), rates the urgency, and drafts a reply that you can review and send with one click.

Want to skip ahead? We have published the complete workflow template as a free open-source repo: n8n-ollama-email-responder on GitHub. Clone it, import the JSON into n8n, and you are running in under 5 minutes.

Table of Contents

Prerequisites
How the Workflow Works
Step 1: Install Ollama and Pull a Model
Step 2: Set Up n8n and Verify the Connection
Step 3: Connect Your Inbox with IMAP
Step 4: Build the AI Classification Node
Step 5: Parse and Route by Category
Step 6: Draft Smart Replies with AI
Step 7: Send or Queue the Drafted Reply
Going to Production
Troubleshooting
Next Steps

Prerequisites

You do not need to be an AI expert. If you have installed software from the terminal before, you can follow this tutorial. Here is what you need:

Hardware: 8 GB RAM minimum; 16 GB recommended. A GPU speeds things up but is not required.
n8n: Self-hosted, version 1.0 or later. Docker install recommended.
Ollama: Free, open-source local AI runner. We will install it in Step 1.
Email account: Any IMAP-compatible provider (Gmail, Outlook, Fastmail, self-hosted).

Time required: approximately 20 minutes for the first setup. After that, the workflow runs unattended.

How the Workflow Works

Before we start building, here is the high-level flow of the local AI email automation system:

  1. IMAP Trigger — n8n polls your inbox on a schedule (every 5 minutes by default)
  2. AI Classification — Ollama reads the email and returns a structured JSON classification: category, urgency, sentiment
  3. Smart Routing — A Switch node routes emails to different handling paths based on the classification
  4. AI Reply Drafting — Ollama generates a context-aware reply, tailored to the email category
  5. Output — The drafted reply is saved to a Google Sheet, sent via webhook, or placed into your drafts folder

Each step is a node in n8n. The entire pipeline runs locally. Your email content never touches a third-party AI provider.
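The five stages above can be sketched as plain functions. This is a simplified illustration only: the stub logic and path names are ours, and in the real workflow the classification and drafting are done by Ollama, not keyword rules.

```javascript
// Illustrative sketch of the pipeline stages as plain functions.
// Each function stands in for one n8n node.

function classify(email) {
  // Stage 2: in the real workflow, Ollama does this.
  // Stubbed here with a trivial keyword rule for illustration.
  const text = (email.subject + ' ' + email.text).toLowerCase();
  const category = text.includes('refund') || text.includes('broken')
    ? 'support'
    : text.includes('pricing') ? 'sales' : 'personal';
  return { ...email, category };
}

function route(item) {
  // Stage 3: the Switch node picks a handling path per category.
  return item.category === 'support' || item.category === 'sales'
    ? item.category
    : 'default';
}

function draftReply(item, path) {
  // Stage 4: Ollama drafts a reply using a path-specific prompt.
  return `[${path} draft] Re: ${item.subject}`;
}

// Stage 1 (IMAP trigger) feeds emails in; Stage 5 writes the draft out.
const email = { from: 'a@example.com', subject: 'Pricing question', text: 'What does it cost?' };
const classified = classify(email);
const draft = draftReply(classified, route(classified));
console.log(draft); // → "[sales draft] Re: Pricing question"
```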

Step 1: Install Ollama and Pull a Model

1 Install Ollama

Ollama is a lightweight runtime that lets you run large language models locally. Installation takes about 60 seconds.

# Linux or macOS (one-line install)
curl -fsSL https://ollama.ai/install.sh | sh

# Windows: download the installer from https://ollama.ai/download

2 Pull the llama3:8b model

This is a strong general-purpose model that handles classification and text generation well. The download is approximately 4.7 GB.

ollama pull llama3:8b

If you have limited RAM (8 GB), use mistral:7b instead — it is slightly smaller and still performs well for email tasks.

3 Verify Ollama is running

curl http://localhost:11434/api/tags

You should see a JSON response listing your installed models. If you get a "connection refused" error, start Ollama with ollama serve.

Step 2: Set Up n8n and Verify the Connection

4 Start n8n with Docker

Docker is the easiest way to run n8n. The --add-host flag is critical — it lets n8n inside Docker reach Ollama on your host machine.

docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  --add-host=host.docker.internal:host-gateway \
  n8nio/n8n

Open http://localhost:5678 in your browser and create your admin account.

Alternative: if you prefer npm, run npm install n8n -g && n8n start.

5 Test the n8n-to-Ollama connection

Create a new workflow in n8n. Add a Manual Trigger node, then add an HTTP Request node with these settings:

Method: GET
URL: http://host.docker.internal:11434/api/tags

Click "Execute Workflow." If you see your model list in the output, the connection works. If you installed n8n with npm (not Docker), use http://localhost:11434/api/tags instead.

Step 3: Connect Your Inbox with IMAP

6 Add an IMAP Email trigger

Delete the Manual Trigger and replace it with an IMAP Email node. Configure it with your email credentials:

Host: your provider's IMAP server (imap.gmail.com for Gmail)
Port: 993
SSL/TLS: enabled
User: your full email address
Password: your password, or an app password where required

Gmail users: You need to generate an App Password. Go to myaccount.google.com/apppasswords, create a password for "Mail", and use that in n8n. Two-factor authentication must be enabled on your Google account first.

The IMAP node will poll for new emails on whatever schedule you set in the workflow settings (we will set it to every 5 minutes later). Each new email arrives as a JSON object with from, subject, text, and date fields.
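As an illustration, here is the rough shape of one incoming item, along with a small helper you might add in a Code node to isolate the sender address. The sample values and the extractAddress helper are ours, not part of n8n.

```javascript
// Example of the shape each new email takes when it enters the
// workflow (sample values; real items come from the IMAP trigger).
const item = {
  from: 'Jane Doe <jane@example.com>',
  subject: 'Login not working',
  text: 'Hi, I cannot log in since the update...',
  date: '2026-03-24T09:15:00.000Z'
};

// Pull just the address out of a "Name <address>" header.
function extractAddress(from) {
  const match = from.match(/<([^>]+)>/);
  return match ? match[1] : from.trim();
}

console.log(extractAddress(item.from)); // → "jane@example.com"
```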

Step 4: Build the AI Classification Node

This is where the self-hosted AI automation starts. We send the email content to Ollama and ask it to classify the message into structured categories.

7 Add an HTTP Request node after the IMAP trigger

Configure it to POST to Ollama's generate endpoint:

Method: POST
URL: http://host.docker.internal:11434/api/generate
Body Content Type: JSON

Set the JSON body to:

{
  "model": "llama3:8b",
  "prompt": "You are an email classification assistant. Analyze this email and return a JSON object.\n\nClassify into exactly ONE category: support, sales, partnership, personal, newsletter, spam\nRate urgency: low, medium, high\nDetect sentiment: positive, neutral, negative\nProvide a 1-sentence summary.\n\nEmail from: {{ $json.from }}\nSubject: {{ $json.subject }}\nBody: {{ $json.text }}\n\nReturn ONLY valid JSON, no other text:\n{\"category\": \"...\", \"urgency\": \"...\", \"sentiment\": \"...\", \"summary\": \"...\"}",
  "stream": false,
  "options": {
    "temperature": 0.1
  }
}

A few things to note about this prompt:

The {{ $json.from }}, {{ $json.subject }}, and {{ $json.text }} placeholders are n8n expressions; they are filled in with the fields from the IMAP trigger at runtime.
The low temperature (0.1) keeps the classification nearly deterministic; creative variation is a liability here.
"stream": false returns the full response in a single JSON object, which is what the HTTP Request node expects.
Demanding "ONLY valid JSON" and showing the exact shape greatly improves the odds of parseable output, but we still defend against stray text in the next step.
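Because the prompt pins each field to a fixed vocabulary, it is worth validating the values the model returns before routing on them. A minimal sketch; the normalizeClassification function and its fallback defaults are our own convention, not part of n8n:

```javascript
// Validate the classification against the allowed vocabularies from
// the prompt; fall back to safe defaults on anything unexpected.
const CATEGORIES = ['support', 'sales', 'partnership', 'personal', 'newsletter', 'spam'];
const URGENCIES = ['low', 'medium', 'high'];
const SENTIMENTS = ['positive', 'neutral', 'negative'];

function normalizeClassification(c) {
  return {
    category: CATEGORIES.includes(c.category) ? c.category : 'personal',
    urgency: URGENCIES.includes(c.urgency) ? c.urgency : 'medium',
    sentiment: SENTIMENTS.includes(c.sentiment) ? c.sentiment : 'neutral',
    summary: typeof c.summary === 'string' ? c.summary : ''
  };
}

console.log(normalizeClassification({ category: 'Support!', urgency: 'high' }));
// → { category: 'personal', urgency: 'high', sentiment: 'neutral', summary: '' }
```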

Step 5: Parse and Route by Category

8 Add a Code node to parse the AI response

Ollama returns the generated text in a response field. We need to extract the JSON from it:

// Extract JSON from the AI response
const aiResponse = $input.first().json.response;
const jsonMatch = aiResponse.match(/\{[\s\S]*\}/);

if (!jsonMatch) {
  throw new Error('AI did not return valid JSON. Raw response: ' + aiResponse);
}

const classification = JSON.parse(jsonMatch[0]);

return [{
  json: {
    ...classification,
    original_from: $('IMAP Email').first().json.from,
    original_subject: $('IMAP Email').first().json.subject,
    original_body: $('IMAP Email').first().json.text,
    processed_at: new Date().toISOString()
  }
}];

The regex /\{[\s\S]*\}/ captures everything between the first { and last }, which handles cases where the model includes extra text around the JSON.
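To see why this matters, here is the same extraction run against a response where the model added commentary around the JSON (the sample strings are ours):

```javascript
// Models sometimes wrap the JSON in explanatory text; the regex
// still recovers the object between the first { and the last }.
const raw = 'Sure! Here is the classification:\n' +
  '{"category": "support", "urgency": "high", "sentiment": "negative", "summary": "User cannot log in."}\n' +
  'Let me know if you need anything else.';

const match = raw.match(/\{[\s\S]*\}/);
const parsed = JSON.parse(match[0]);
console.log(parsed.category, parsed.urgency); // → "support high"
```

Note that the greedy match assumes the response contains a single JSON object; if the model ever emitted two, the match would also capture the text between them and JSON.parse would fail, which the thrown-error path in the Code node then surfaces.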

9 Add a Switch node to route by category

Create a Switch node with the routing field set to {{ $json.category }}. Add these outputs:

Output 1: support
Output 2: sales
Fallback: all other categories (partnership, personal, newsletter, spam) go to a generic/default path

This routing means each email category gets a specialized prompt for reply generation, which produces much better output than a one-size-fits-all approach.
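The Switch node's behavior is equivalent to this small sketch. The path names are illustrative, and we assume here that spam is discarded rather than answered:

```javascript
// Equivalent of the Switch node: map each category to a handling path.
function routeByCategory(category) {
  switch (category) {
    case 'support': return 'support-reply';
    case 'sales': return 'sales-reply';
    case 'spam': return 'discard';
    default: return 'generic-reply'; // partnership, personal, newsletter
  }
}

console.log(routeByCategory('sales')); // → "sales-reply"
```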

Step 6: Draft Smart Replies with AI

Now we build the reply-drafting nodes. Each category gets its own HTTP Request node calling Ollama, but with a different prompt tailored to that type of email.

10 Support reply node

{
  "model": "llama3:8b",
  "prompt": "Draft a professional support reply to this email.\n\nFrom: {{ $json.original_from }}\nSubject: {{ $json.original_subject }}\nBody: {{ $json.original_body }}\n\nGuidelines:\n- Be empathetic and acknowledge the issue\n- Provide specific, actionable troubleshooting steps\n- Offer to escalate if the steps don't resolve it\n- Keep it under 150 words\n- Professional but warm tone\n\nWrite ONLY the reply body, no subject line or signature.",
  "stream": false,
  "options": { "temperature": 0.5 }
}

11 Sales reply node

{
  "model": "llama3:8b",
  "prompt": "Draft a reply to this sales inquiry.\n\nFrom: {{ $json.original_from }}\nSubject: {{ $json.original_subject }}\nBody: {{ $json.original_body }}\n\nGuidelines:\n- Thank them for their interest\n- Answer their specific question if one was asked\n- Briefly highlight key benefits relevant to their inquiry\n- Include a clear call to action (book a demo, start a trial, etc.)\n- Confident but not pushy\n- Keep it under 150 words\n\nWrite ONLY the reply body.",
  "stream": false,
  "options": { "temperature": 0.5 }
}

For the generic/default reply, use a neutral prompt that acknowledges the email and says you will follow up shortly. The higher temperature (0.5) gives the replies more natural variation compared to the classification step.
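If you find yourself maintaining several near-identical reply nodes, the request bodies can also be generated from one template in a Code node. A sketch; the buildReplyBody helper and the abbreviated guideline text are ours:

```javascript
// Build an Ollama /api/generate request body for a given category.
const GUIDELINES = {
  support: '- Be empathetic and acknowledge the issue\n- Keep it under 150 words',
  sales: '- Thank them for their interest\n- Include a clear call to action',
  default: '- Acknowledge the email politely\n- Say you will follow up shortly'
};

function buildReplyBody(category, email) {
  const guidelines = GUIDELINES[category] || GUIDELINES.default;
  return {
    model: 'llama3:8b',
    prompt: `Draft a reply to this email.\n\nFrom: ${email.original_from}\n` +
      `Subject: ${email.original_subject}\nBody: ${email.original_body}\n\n` +
      `Guidelines:\n${guidelines}\n\nWrite ONLY the reply body.`,
    stream: false,
    options: { temperature: 0.5 }
  };
}

const body = buildReplyBody('sales', {
  original_from: 'buyer@example.com',
  original_subject: 'Pricing?',
  original_body: 'How much for 10 seats?'
});
console.log(body.options.temperature); // → 0.5
```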

Step 7: Send or Queue the Drafted Reply

12 Choose your output method

You have several options for what to do with the drafted reply:

Google Sheets: append each draft to a review sheet and send approved replies manually
Webhook: POST the draft to another system for review or delivery
Drafts folder: save the reply as a draft in your mailbox so you can review and send it with one click

Start with drafts, not auto-send. Even the best AI makes mistakes. Run the workflow in draft mode for at least a week and review every response before upgrading to auto-send on any category.

13 Set the schedule

Go to your workflow settings and set it to run on a schedule: every 5 minutes is a reasonable starting point. Activate the workflow, and your n8n AI workflow is live.

Going to Production

Once you have verified the workflow handles your emails correctly, here are some improvements for production use:

Add Error Handling

Wrap the AI classification and reply nodes in n8n's error workflow feature. If Ollama times out or returns garbage, the error handler can log the failure and forward the email to you unprocessed rather than losing it.
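Inside the parsing Code node itself you can also fail soft instead of throwing: return a sentinel classification so the email still flows to a path you monitor. A sketch; the "unknown" fallback category is our own convention:

```javascript
// Fail-soft variant of the parsing step: instead of throwing on bad
// model output, return a sentinel classification that routes the
// email to a human-review path.
function parseOrFallback(aiResponse) {
  try {
    const match = aiResponse.match(/\{[\s\S]*\}/);
    if (!match) throw new Error('no JSON found');
    return JSON.parse(match[0]);
  } catch (err) {
    return {
      category: 'unknown',
      urgency: 'medium',
      sentiment: 'neutral',
      summary: 'Classification failed: ' + err.message
    };
  }
}

console.log(parseOrFallback('garbage output').category); // → "unknown"
console.log(parseOrFallback('{"category":"sales"}').category); // → "sales"
```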

Filter Before Classification

Add a filter node before the AI step to skip emails you never want to process: automated notifications from services, calendar invites, delivery receipts. This saves processing time and keeps your logs clean.
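A Code node version of such a filter might look like this; the pattern lists are an example starting point, not exhaustive:

```javascript
// Skip emails that never need an AI reply: automated senders,
// calendar invites, delivery receipts.
const SKIP_SENDERS = [/no-?reply@/i, /notifications?@/i, /mailer-daemon@/i];
const SKIP_SUBJECTS = [/^invitation:/i, /^delivered:/i, /^undeliverable/i];

function shouldProcess(email) {
  if (SKIP_SENDERS.some(re => re.test(email.from))) return false;
  if (SKIP_SUBJECTS.some(re => re.test(email.subject))) return false;
  return true;
}

console.log(shouldProcess({ from: 'noreply@github.com', subject: 'CI passed' })); // → false
console.log(shouldProcess({ from: 'jane@example.com', subject: 'Question about pricing' })); // → true
```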

Log Everything

Add a Google Sheets or database node at the end of every path to log the email address, classification result, and draft reply. This gives you an audit trail and lets you measure classification accuracy over time.
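A small Code node can flatten each processed item into a row before the Sheets or database node. The column order and truncation length here are our own choices:

```javascript
// Flatten one processed item into a row for Google Sheets or a DB.
function toLogRow(item) {
  return [
    item.processed_at,
    item.original_from,
    item.original_subject,
    item.category,
    item.urgency,
    item.sentiment,
    (item.draft_reply || '').slice(0, 500) // keep the log readable
  ];
}

const row = toLogRow({
  processed_at: '2026-03-24T10:00:00.000Z',
  original_from: 'jane@example.com',
  original_subject: 'Login not working',
  category: 'support',
  urgency: 'high',
  sentiment: 'negative',
  draft_reply: 'Hi Jane, sorry to hear that...'
});
console.log(row.length); // → 7
```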

Model Selection for Production

For email classification specifically, llama3:8b hits the sweet spot of speed and accuracy. If you need faster processing on CPU-only hardware, phi3:mini (3.8B parameters) handles classification well at roughly 3x the speed. For reply drafting where quality matters more, stick with the 8B model or go larger if your hardware allows.

Troubleshooting

Connection Issues

A "connection refused" error from n8n to Ollama usually means the wrong hostname: use http://host.docker.internal:11434 when n8n runs in Docker and http://localhost:11434 when it runs via npm. Also confirm Ollama is running (ollama serve) and that the container was started with --add-host=host.docker.internal:host-gateway.

AI Output Issues

If the Code node throws "AI did not return valid JSON," the model wrapped its answer in extra text or drifted from the schema. Keep the classification temperature at 0.1, keep the "Return ONLY valid JSON" instruction in the prompt, and let the regex extraction handle the remaining stray text.

Performance Issues

If a classification takes more than a minute, you are likely memory-constrained. Switch to a smaller model such as mistral:7b or phi3:mini, close other memory-heavy applications, or run Ollama on a machine with a GPU.

IMAP Issues

Gmail rejects regular passwords over IMAP: generate an App Password (two-factor authentication must be enabled first) and use that in n8n. For other providers, confirm IMAP access is enabled in the account settings.

Next Steps

You now have a working local AI email automation pipeline. Here are a few ways to extend it:

Get the template: The complete workflow JSON is available for free on GitHub: n8n-ollama-email-responder. Clone the repo, import the JSON file into n8n, update your IMAP credentials, and you are running in under 5 minutes.

Want All 11 Workflows?

The email auto-responder is one of 11 production-ready n8n + Ollama workflows in our complete pack. You also get an AI blog writer, social media content generator, lead scoring system, document summarizer, competitor intelligence monitor, and more.

$39 one-time payment — no subscriptions, no API costs

Get the Full Pack →

30-day money-back guarantee. Instant download.

Related Tutorials

Free Samples

Try these open-source workflows to see the quality before buying the pack: