If you've been wanting to add AI to your automation workflows but don't want to pay for OpenAI API keys or send your data to the cloud, this tutorial is for you. We'll connect n8n (the open-source workflow automation tool) with Ollama (local AI model runner) to build powerful automations that run entirely on your own hardware.
1 Install Ollama
# One-line install (Linux/macOS)
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama3:8b
# Test it works
curl http://localhost:11434/api/generate -d '{
"model": "llama3:8b",
"prompt": "Say hello in JSON format",
"stream": false
}'
You should see a JSON response with the model's reply. If you get a connection error, run ollama serve first.
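The same check can be scripted from Node. With "stream": false, Ollama returns a single JSON object whose response field holds the full model output. A minimal sketch (assuming Node 18+ with a global fetch; parseGenerateReply is our own illustrative helper, not part of Ollama):

```javascript
// Pull the generated text out of a non-streaming /api/generate reply.
function parseGenerateReply(reply) {
  if (!reply || typeof reply.response !== 'string') {
    throw new Error('Unexpected reply shape from Ollama');
  }
  return reply.response;
}

// Usage against a live server (requires Ollama running locally):
// const res = await fetch('http://localhost:11434/api/generate', {
//   method: 'POST',
//   body: JSON.stringify({ model: 'llama3:8b', prompt: 'Say hello', stream: false })
// });
// console.log(parseGenerateReply(await res.json()));

// Canned reply for offline testing:
const sample = { model: 'llama3:8b', response: '{"hello": "world"}', done: true };
console.log(parseGenerateReply(sample));
```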
2 Install n8n
# Option A: Docker (recommended)
docker run -d --name n8n \
-p 5678:5678 \
-v n8n_data:/home/node/.n8n \
--add-host=host.docker.internal:host-gateway \
n8nio/n8n
# Option B: npm
npm install n8n -g && n8n start
Open http://localhost:5678 and create your admin account.
3 Verify the connection
In n8n, create a new workflow with an HTTP Request node:
URL: http://localhost:11434/api/tags (or http://host.docker.internal:11434/api/tags for Docker)

Execute it. You should see a list of your installed models. If this works, n8n can talk to Ollama.
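If you'd rather check the model list in code than eyeball the raw JSON, /api/tags returns an object with a models array of installed models. A sketch (listModelNames is our own helper; field names per the Ollama API):

```javascript
// Extract model names from an /api/tags reply.
// The endpoint returns { "models": [ { "name": "llama3:8b", ... }, ... ] }.
function listModelNames(tagsReply) {
  return (tagsReply.models || []).map(m => m.name);
}

// Canned reply standing in for a live /api/tags response:
const tagsSample = { models: [{ name: 'llama3:8b' }, { name: 'mistral:7b' }] };
console.log(listModelNames(tagsSample));
```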
4 Build the email classifier
The core workflow has four nodes: a Manual Trigger, a Set node that supplies the email, an HTTP Request node that calls Ollama, and a Code node that parses the result. A Switch node then routes each email to the right branch.
Start with a Manual Trigger for testing. Later, you can swap it for an IMAP Email trigger to process real emails automatically.
Add a Set node after the trigger to simulate an email:
{
"from": "john@example.com",
"subject": "Can't log in to my account",
"body": "Hi, I've been trying to log in for the past hour but keep getting an 'invalid credentials' error. I've tried resetting my password twice. This is really frustrating. Can someone help?"
}
Add an HTTP Request node to call Ollama:
URL: http://localhost:11434/api/generate
Method: POST
Body (JSON):
{
"model": "llama3:8b",
"prompt": "Classify this email into exactly ONE category: support, sales, partnership, feedback, spam.\n\nAlso rate the urgency: low, medium, high.\nAnd detect the sentiment: positive, neutral, negative.\n\nEmail from: {{ $json.from }}\nSubject: {{ $json.subject }}\nBody: {{ $json.body }}\n\nRespond with ONLY JSON:\n{\"category\": \"...\", \"urgency\": \"...\", \"sentiment\": \"...\", \"reason\": \"...\"}",
"stream": false,
"options": { "temperature": 0.1 }
}
A low temperature (0.1) keeps the classification consistent from run to run; higher values make the output more varied, which you want for drafting replies but not for labeling.
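Outside n8n's expression syntax, the same request body can be assembled in plain JavaScript, which is handy for testing prompts before wiring them into the node. A sketch (buildClassificationPayload is our own name, not an n8n or Ollama API):

```javascript
// Build the non-streaming classification request for /api/generate.
function buildClassificationPayload(email) {
  const prompt = [
    'Classify this email into exactly ONE category: support, sales, partnership, feedback, spam.',
    '',
    'Also rate the urgency: low, medium, high.',
    'And detect the sentiment: positive, neutral, negative.',
    '',
    `Email from: ${email.from}`,
    `Subject: ${email.subject}`,
    `Body: ${email.body}`,
    '',
    'Respond with ONLY JSON:',
    '{"category": "...", "urgency": "...", "sentiment": "...", "reason": "..."}'
  ].join('\n');

  return {
    model: 'llama3:8b',
    prompt,
    stream: false,
    options: { temperature: 0.1 } // low temperature for consistent labels
  };
}

const payload = buildClassificationPayload({
  from: 'john@example.com',
  subject: "Can't log in to my account",
  body: "Keep getting an 'invalid credentials' error."
});
console.log(payload.prompt);
```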
Add a Code node to parse the AI response:
const response = $input.first().json.response;

// The model was asked for JSON only, but extract the first {...} block
// in case it wrapped the JSON in extra text.
const match = response.match(/\{[\s\S]*\}/);
if (match) {
  const classification = JSON.parse(match[0]);
  return [{ json: {
    ...classification,
    // carry the original email along for the routing and reply nodes
    original_email: $('Set').first().json
  }}];
}
throw new Error('Could not parse AI classification');
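Local models sometimes wrap their JSON in code fences or add a sentence of commentary despite the "ONLY JSON" instruction. A more defensive variant of the extraction step, which you could drop into the Code node (a sketch; returning null on failure is our own choice — you may prefer to throw):

```javascript
// Tolerant extraction of the classification JSON from raw model output.
function extractClassification(text) {
  // Strip markdown code fences if the model added them.
  const cleaned = text.replace(/`{3}(?:json)?/g, '');
  const match = cleaned.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]);
  } catch (e) {
    return null; // braces found, but malformed JSON inside
  }
}

console.log(extractClassification('Sure! Here you go:\n{"category": "support"}'));
console.log(extractClassification('no json here')); // null
```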
Then add a Switch node that routes based on {{ $json.category }}:
support → Draft support response
sales → Draft sales response
spam → Archive/delete

For each route, add another HTTP Request to Ollama:
{
"model": "llama3:8b",
"prompt": "Draft a helpful reply to this support email.\n\nFrom: {{ $json.original_email.from }}\nSubject: {{ $json.original_email.subject }}\nBody: {{ $json.original_email.body }}\n\nBe empathetic, professional, and solution-oriented. Include specific troubleshooting steps.\n\nWrite the reply only, no subject line needed.",
"stream": false,
"options": { "temperature": 0.5 }
}
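The Switch logic itself is simple, but seeing it as plain code can help when debugging why an email took the wrong branch. A sketch mirroring the three routes above (routeForCategory and the branch names are our own; the fallback branch is an assumption for categories the Switch doesn't handle):

```javascript
// Map a classification to a workflow branch, mirroring the Switch node.
function routeForCategory(classification) {
  switch (classification.category) {
    case 'support': return 'draft-support-reply';
    case 'sales':   return 'draft-sales-reply';
    case 'spam':    return 'archive';
    // partnership, feedback, or anything unexpected goes to a human
    default:        return 'manual-review';
  }
}

console.log(routeForCategory({ category: 'support' }));
```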
This basic pattern — send text to Ollama, parse the JSON response, route based on the result — powers virtually every AI automation, from content generation to lead scoring to document processing.
Troubleshooting:
- Connection refused? Make sure Ollama is running: ollama serve
- Running n8n in Docker? Use http://host.docker.internal:11434 instead of localhost
- Responses too slow? Try a smaller model (mistral:7b) or get a GPU. Even a modest GPU (RTX 3060) makes a 5-10x difference.

Skip the setup. We've built 11 production-ready n8n + Ollama workflows covering content generation, email automation, lead scoring, document processing, and more.
$39 one-time — import into n8n in 5 minutes
Get 11 Ready-Made Workflows →
Published by WorkflowForge · Self-Hosted AI Workflow Pack