How to Automate Customer Support with n8n + Ollama (Free AI Ticket Router)
Most support helpdesks charge $25–150/seat/month and still dump every ticket into the same inbox for a human to sort. With n8n and Ollama you can build an AI ticket router that runs entirely on your own hardware — zero API costs, full data privacy, and unlimited tickets.
In this tutorial you'll build a webhook-triggered workflow that:
- Receives incoming support tickets via HTTP webhook
- Classifies each ticket into billing, technical, feature request, or spam using local AI
- Routes each category to the right team channel or queue
- Generates a first-response template automatically
- Returns a confirmation with routing details to the sender
Why Use Local AI for Support Automation?
Customer support tickets contain some of the most sensitive data your business handles: account details, billing complaints, product bugs that reveal your architecture, and occasionally personal health or financial information from users explaining their situation.
Routing those tickets through a third-party AI API means every message leaves your servers. That's a GDPR concern, a SOC 2 concern, and just generally a bad idea if you care about customer trust.
Local AI with Ollama addresses all of these concerns:
| Concern | Cloud AI (OpenAI, Claude) | Local AI (Ollama) |
|---|---|---|
| Data privacy | Tickets sent to third-party servers | Data never leaves your machine |
| API cost | $0.01–0.06 per ticket at scale | $0.00 — runs on your hardware |
| Rate limits | Throttled during traffic spikes | Only limited by your hardware |
| Compliance | Requires DPA, data processing agreements | No third-party data sharing |
| Customization | Prompt engineering only | Full model fine-tuning possible |
Ticket classification is also exactly the kind of structured task that small local models handle well. You don't need GPT-4 to tell you that "my invoice is wrong" is a billing ticket. A well-prompted 8B-parameter model running on a $400 server gets this right >95% of the time.
Real numbers: A business processing 200 support tickets per day at $0.03/ticket via OpenAI API spends ~$180/month, or $2,160/year. Running Ollama locally on a machine you already own costs $0. The workflow below pays for itself in the first week.
Architecture Overview
The ticket routing pipeline follows a simple, linear flow:
[Webhook: Ticket arrives]
|
v
[Ollama: Classify ticket]
(billing / technical / feature-request / spam)
|
v
[IF Node: Route by category]
/ | | \
v v v v
[Billing] [Tech] [Feature] [Spam]
| | | |
v v v v
[Generate first-response template]
|
v
[Respond to webhook with routing result]
The workflow makes one Ollama call per ticket for classification, which takes 2–4 seconds on modest hardware. The classification result drives all downstream routing. You can fan this out to as many channels as you need: email, Slack, Jira, Zendesk, or a simple database log.
The Four Ticket Categories
The classifier routes every ticket into one of four categories. Here's what each covers and how you should handle them:
| Category | Examples | Recommended Routing |
|---|---|---|
| billing | Invoice errors, subscription cancellations, refund requests, payment failures, pricing questions | Finance team email / billing queue |
| technical | Bug reports, integration errors, performance issues, API questions, setup help | Engineering Slack channel / Jira backlog |
| feature-request | New feature ideas, UI suggestions, integration requests, missing functionality | Product backlog / Notion database |
| spam | Automated messages, solicitations, gibberish, test submissions, off-topic messages | Discard / low-priority archive |
These four categories cover >98% of real-world support volume. You can extend the workflow with additional categories (e.g., "enterprise-sales", "compliance") by updating the classification prompt.
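The category-to-destination mapping in the table above is essentially a lookup, which is what the IF-node chain in the workflow implements. As a sketch in plain Python — the destination strings are placeholders from the table, not live endpoints:

```python
# Hypothetical routing map mirroring the table above.
# Destinations are placeholders, not real addresses or channels.
ROUTES = {
    "billing": "billing-team@yourcompany.com",
    "technical": "#engineering-support (Slack)",
    "feature-request": "product-backlog (Notion/Linear)",
    "spam": "spam-archive",
}

def route(category: str) -> str:
    # Unknown or malformed categories fall back to the technical queue
    # so no real ticket is ever silently lost.
    return ROUTES.get(category, ROUTES["technical"])
```

Extending the workflow with a new category means adding one entry here (and one IF branch in n8n), plus a matching definition in the classification prompt.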
The Classification Prompt
The prompt is the core of the system. Here's the exact prompt used in this workflow:
You are a customer support ticket classifier. Your job is to read a support
ticket and categorize it into exactly one of these four categories:
- billing: anything related to payments, invoices, subscriptions, refunds,
pricing, or account charges
- technical: bug reports, errors, integration issues, API problems, performance
issues, or requests for setup help
- feature-request: suggestions for new features, UI improvements, integration
requests, or missing functionality the user wants added
- spam: automated messages, solicitations, gibberish, test submissions, or
messages completely unrelated to product support
Ticket subject: {{subject}}
Ticket body: {{body}}
Sender email: {{email}}
Respond in this EXACT JSON format, nothing else:
{
"category": "<billing|technical|feature-request|spam>",
"confidence": <0.0-1.0>,
"summary": "<one sentence summary of the ticket>",
"priority": "<low|medium|high>"
}
Priority rules:
- high: account locked, payment failed, data loss, service down
- medium: bug affecting workflow, billing discrepancy
- low: general questions, feature requests, minor issues
Key design decisions in this prompt:
- Explicit category definitions: Listing what falls into each bucket prevents ambiguous edge cases from being misrouted
- Forced JSON output: Structured output makes downstream parsing trivial and reliable
- Confidence score: Tickets below 0.7 confidence can be flagged for human review instead of auto-routed
- Priority field: Lets you route high-priority tickets to an urgent queue without a separate AI call
- Low temperature: Set temperature: 0.1 in Ollama for classification tasks — you want deterministic, consistent output
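For reference, this is roughly the request body the workflow posts to Ollama's /api/generate endpoint. The build_payload helper is illustrative (the actual workflow builds this inline in an n8n expression), and the prompt is abbreviated here:

```python
def build_payload(subject: str, body: str, email: str) -> dict:
    """Build the JSON body sent to Ollama's /api/generate endpoint.

    Sketch only: the prompt below is a shortened version of the full
    classification prompt shown in the article.
    """
    prompt = (
        "You are a customer support ticket classifier. Categorize the ticket "
        "into exactly one of: billing, technical, feature-request, spam.\n\n"
        f"Ticket subject: {subject}\n"
        f"Ticket body: {body}\n"
        f"Sender email: {email}\n\n"
        "Respond in this EXACT JSON format, nothing else:\n"
        '{"category": "", "confidence": 0.0, "summary": "", "priority": ""}'
    )
    return {
        "model": "llama3:8b",
        "prompt": prompt,
        "stream": False,  # return one complete response, not a token stream
        "options": {
            "temperature": 0.1,  # near-deterministic output for classification
            "num_predict": 200,  # the JSON answer is short; cap generation
        },
    }
```

The stream: false flag matters: without it, Ollama streams tokens line by line and the HTTP Request node receives a response that won't parse as a single JSON object.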
Prerequisites
You need n8n and Ollama installed. If you're starting from scratch:
# Install Ollama and pull a model
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3:8b
# Run n8n in Docker
docker run -d --name n8n -p 5678:5678 \
--add-host=host.docker.internal:host-gateway \
-v n8n_data:/home/node/.n8n \
n8nio/n8n
Docker networking: If n8n runs in Docker, use http://host.docker.internal:11434 for all Ollama URLs. If n8n runs natively, use http://localhost:11434. The workflow JSON below uses localhost — swap it if needed.
Free Workflow: AI Support Ticket Router
Here's a complete, working n8n workflow. Import the JSON directly into your n8n instance via Settings → Import Workflow.
Step-by-Step Breakdown
1. Receive Ticket (Webhook): Exposes a POST /webhook/support-ticket endpoint. Your contact form, help widget, or email-to-webhook service sends ticket data here as JSON with fields subject, body, email, and optionally name.
2. Classify Ticket (Ollama): Sends the ticket content to Ollama using the prompt above. Returns a JSON object with category, confidence, summary, and priority. Uses temperature: 0.1 for consistent, deterministic output.
3. Parse Classification (Set node): Extracts the classification result from the Ollama response. Handles edge cases where the model wraps JSON in markdown code fences (strips them with a regex). Falls back gracefully if parsing fails, so a malformed response never breaks the workflow.
4. Route by Category (IF nodes): Three IF nodes check the category field; tickets that match none of billing, technical, or feature-request route to the spam handler. Each branch can connect to whatever notification or ticketing system you use — Slack, email, Jira, Zendesk, a database, or all of the above.
5. Generate First Response: Each routing branch generates a category-appropriate first-response message. Billing tickets get a response acknowledging the payment issue and setting an SLA. Technical tickets get a bug report confirmation. Feature requests get an "added to our roadmap" message. Spam gets silently dropped.
6. Return Routing Result: Returns a JSON response to the original requester with the ticket ID, assigned category, priority, and the auto-generated first-response text. Your frontend can display this immediately to the user as confirmation.
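The fence-stripping and fallback logic in the Parse Classification node can be sketched in plain Python (the n8n expression does the same thing in JavaScript):

```python
import json
import re

# Fallback used when the model's output isn't valid JSON: route to the
# technical queue with low confidence so a human still sees the ticket.
FALLBACK = {
    "category": "technical",
    "confidence": 0.5,
    "summary": "Parse error - needs manual review",
    "priority": "medium",
}

def parse_classification(raw: str) -> dict:
    # Models sometimes wrap their JSON answer in markdown code fences;
    # strip them before parsing.
    cleaned = re.sub(r"```json\n?|```", "", raw).strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return dict(FALLBACK)
```

Defaulting to "technical" on a parse failure is a deliberate choice: a misrouted ticket is recoverable, a dropped one is not.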
The Workflow JSON
{
"name": "AI Support Ticket Router (Ollama)",
"nodes": [
{
"parameters": {
"httpMethod": "POST",
"path": "support-ticket",
"responseMode": "responseNode",
"options": {}
},
"id": "webhook",
"name": "Receive Ticket",
"type": "n8n-nodes-base.webhook",
"typeVersion": 2,
"position": [240, 300],
"webhookId": "support-ticket"
},
{
"parameters": {
"url": "http://localhost:11434/api/generate",
"sendBody": true,
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ model: 'llama3:8b', prompt: 'You are a customer support ticket classifier. Categorize the ticket into exactly one of: billing, technical, feature-request, spam.\\n\\n- billing: payments, invoices, subscriptions, refunds, pricing, account charges\\n- technical: bug reports, errors, integration issues, API problems, setup help\\n- feature-request: new feature suggestions, UI improvements, missing functionality\\n- spam: automated messages, solicitations, gibberish, off-topic messages\\n\\nTicket subject: ' + ($json.body.subject || 'No subject') + '\\nTicket body: ' + ($json.body.body || 'No body') + '\\nSender email: ' + ($json.body.email || 'Unknown') + '\\n\\nRespond in this EXACT JSON format, nothing else:\\n{\"category\": \"\", \"confidence\": <0.0-1.0>, \"summary\": \"\", \"priority\": \"\"}\\n\\nPriority rules: high = account locked/payment failed/data loss/service down, medium = bug affecting workflow/billing discrepancy, low = general questions/feature requests/minor issues', stream: false, options: { temperature: 0.1, num_predict: 200 } }) }}",
"options": { "timeout": 60000 }
},
"id": "classify-ticket",
"name": "Classify Ticket (Ollama)",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [460, 300]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "raw_response",
"name": "raw_response",
"value": "={{ $json.response }}",
"type": "string"
},
{
"id": "classification",
"name": "classification",
"value": "={{ (() => { try { const r = $json.response.replace(/```json\\n?|```/g, '').trim(); return JSON.parse(r); } catch(e) { return { category: 'technical', confidence: 0.5, summary: 'Parse error - needs manual review', priority: 'medium' }; } })() }}",
"type": "object"
},
{
"id": "ticket_id",
"name": "ticket_id",
"value": "={{ 'TKT-' + Date.now() }}",
"type": "string"
},
{
"id": "original_subject",
"name": "original_subject",
"value": "={{ $('Receive Ticket').item.json.body.subject || 'No subject' }}",
"type": "string"
},
{
"id": "original_email",
"name": "original_email",
"value": "={{ $('Receive Ticket').item.json.body.email || 'Unknown' }}",
"type": "string"
}
]
}
},
"id": "parse-result",
"name": "Parse Classification",
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [680, 300]
},
{
"parameters": {
"conditions": {
"options": { "caseSensitive": false },
"conditions": [
{
"leftValue": "={{ $json.classification.category }}",
"rightValue": "billing",
"operator": { "type": "string", "operation": "equals" }
}
]
}
},
"id": "if-billing",
"name": "Is Billing?",
"type": "n8n-nodes-base.if",
"typeVersion": 2,
"position": [900, 200]
},
{
"parameters": {
"conditions": {
"options": { "caseSensitive": false },
"conditions": [
{
"leftValue": "={{ $json.classification.category }}",
"rightValue": "technical",
"operator": { "type": "string", "operation": "equals" }
}
]
}
},
"id": "if-technical",
"name": "Is Technical?",
"type": "n8n-nodes-base.if",
"typeVersion": 2,
"position": [900, 340]
},
{
"parameters": {
"conditions": {
"options": { "caseSensitive": false },
"conditions": [
{
"leftValue": "={{ $json.classification.category }}",
"rightValue": "feature-request",
"operator": { "type": "string", "operation": "equals" }
}
]
}
},
"id": "if-feature",
"name": "Is Feature Request?",
"type": "n8n-nodes-base.if",
"typeVersion": 2,
"position": [900, 480]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "response_text",
"name": "response_text",
"value": "=Hi there,\n\nThank you for reaching out about your billing concern. We've received your message and our billing team will review your account within 1 business day.\n\nTicket ID: {{ $json.ticket_id }}\nPriority: {{ $json.classification.priority }}\n\nIf your issue is urgent (e.g. payment failure or account lockout), please reply to this message and we'll escalate immediately.\n\nBest,\nSupport Team",
"type": "string"
},
{
"id": "routing_destination",
"name": "routing_destination",
"value": "billing-team@yourcompany.com",
"type": "string"
}
]
}
},
"id": "billing-response",
"name": "Billing Response",
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [1120, 160]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "response_text",
"name": "response_text",
"value": "=Hi there,\n\nThank you for reporting this issue. We've logged your technical support request and our engineering team will investigate.\n\nTicket ID: {{ $json.ticket_id }}\nPriority: {{ $json.classification.priority }}\n\nTo help us resolve this faster, please include any error messages, screenshots, or steps to reproduce the issue if you haven't already.\n\nBest,\nSupport Team",
"type": "string"
},
{
"id": "routing_destination",
"name": "routing_destination",
"value": "#engineering-support (Slack)",
"type": "string"
}
]
}
},
"id": "technical-response",
"name": "Technical Response",
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [1120, 300]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "response_text",
"name": "response_text",
"value": "=Hi there,\n\nThank you for your feedback! We've added your feature request to our product backlog for the team to review.\n\nTicket ID: {{ $json.ticket_id }}\n\nWe review all feature requests during our monthly planning sessions. While we can't guarantee every request will be implemented, your input directly shapes our roadmap.\n\nBest,\nSupport Team",
"type": "string"
},
{
"id": "routing_destination",
"name": "routing_destination",
"value": "product-backlog (Notion/Linear)",
"type": "string"
}
]
}
},
"id": "feature-response",
"name": "Feature Request Response",
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [1120, 440]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "response_text",
"name": "response_text",
"value": "=Message classified as spam. No response sent.",
"type": "string"
},
{
"id": "routing_destination",
"name": "routing_destination",
"value": "spam-archive",
"type": "string"
}
]
}
},
"id": "spam-response",
"name": "Spam Handler",
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [1120, 580]
},
{
"parameters": {
"respondWith": "json",
"responseBody": "={{ JSON.stringify({ ticket_id: $json.ticket_id, category: $json.classification.category, priority: $json.classification.priority, confidence: $json.classification.confidence, summary: $json.classification.summary, routed_to: $json.routing_destination, auto_response: $json.response_text }) }}",
"options": {}
},
"id": "respond",
"name": "Return Routing Result",
"type": "n8n-nodes-base.respondToWebhook",
"typeVersion": 1.1,
"position": [1340, 380]
}
],
"connections": {
"Receive Ticket": {
"main": [[{ "node": "Classify Ticket (Ollama)", "type": "main", "index": 0 }]]
},
"Classify Ticket (Ollama)": {
"main": [[{ "node": "Parse Classification", "type": "main", "index": 0 }]]
},
"Parse Classification": {
"main": [[
{ "node": "Is Billing?", "type": "main", "index": 0 },
{ "node": "Is Technical?", "type": "main", "index": 0 },
{ "node": "Is Feature Request?", "type": "main", "index": 0 }
]]
},
"Is Billing?": {
"main": [
[{ "node": "Billing Response", "type": "main", "index": 0 }],
[]
]
},
"Is Technical?": {
"main": [
[{ "node": "Technical Response", "type": "main", "index": 0 }],
[]
]
},
"Is Feature Request?": {
"main": [
[{ "node": "Feature Request Response", "type": "main", "index": 0 }],
[{ "node": "Spam Handler", "type": "main", "index": 0 }]
]
},
"Billing Response": {
"main": [[{ "node": "Return Routing Result", "type": "main", "index": 0 }]]
},
"Technical Response": {
"main": [[{ "node": "Return Routing Result", "type": "main", "index": 0 }]]
},
"Feature Request Response": {
"main": [[{ "node": "Return Routing Result", "type": "main", "index": 0 }]]
},
"Spam Handler": {
"main": [[{ "node": "Return Routing Result", "type": "main", "index": 0 }]]
}
},
"settings": { "executionOrder": "v1" },
"tags": [
{ "name": "AI" },
{ "name": "Ollama" },
{ "name": "Customer Support" },
{ "name": "Ticket Routing" }
]
}
Testing the Workflow
Once imported and activated, test each category with curl:
# Test a billing ticket
curl -X POST http://localhost:5678/webhook/support-ticket \
-H "Content-Type: application/json" \
-d '{
"subject": "Charged twice for my subscription",
"body": "Hi, I noticed two charges of $49 on my credit card this month for my Pro subscription. Can you refund one of them? My account email is john@example.com.",
"email": "john@example.com",
"name": "John Smith"
}'
Expected response:
{
"ticket_id": "TKT-1711234567890",
"category": "billing",
"priority": "medium",
"confidence": 0.97,
"summary": "Customer was charged twice for their Pro subscription and requests a refund",
"routed_to": "billing-team@yourcompany.com",
"auto_response": "Hi there,\n\nThank you for reaching out about your billing concern..."
}
# Test a technical ticket
curl -X POST http://localhost:5678/webhook/support-ticket \
-H "Content-Type: application/json" \
-d '{
"subject": "API returning 500 errors on /v2/export endpoint",
"body": "Getting intermittent 500 errors when calling the export API. Happens about 1 in 10 requests. Error: Internal Server Error. Started yesterday around 3pm UTC.",
"email": "dev@startup.io",
"name": "Alex Developer"
}'
# Test a feature request
curl -X POST http://localhost:5678/webhook/support-ticket \
-H "Content-Type: application/json" \
-d '{
"subject": "Dark mode please!",
"body": "Would love a dark mode option in the dashboard. My eyes hurt after long sessions. Other tools like Figma and Linear have this. Would be a great addition!",
"email": "user@gmail.com",
"name": "Casey"
}'
Connecting to Real Channels
The workflow above has placeholder routing destinations. Here's how to wire up real integrations after each branch:
Slack Notifications
Add a Slack node after the Technical Response step. Map $json.ticket_id, $json.classification.summary, and $json.classification.priority to a message block. High-priority tickets can @mention the on-call engineer.
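If you prefer posting via a Slack incoming webhook instead of the Slack node, the message payload might look like this — a sketch where the @here mention for high-priority tickets is an assumption, not part of the workflow JSON:

```python
def build_slack_message(ticket: dict) -> dict:
    """Build a Slack incoming-webhook payload from a routed ticket.

    Sketch only: field names match the workflow's output
    (ticket_id, classification.priority/confidence/summary).
    """
    c = ticket["classification"]
    # Assumption: ping the channel only for high-priority tickets.
    mention = "<!here> " if c["priority"] == "high" else ""
    return {
        "text": (
            f"{mention}*New technical ticket* {ticket['ticket_id']}\n"
            f"Priority: {c['priority']} | Confidence: {c['confidence']:.2f}\n"
            f"Summary: {c['summary']}"
        )
    }
```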
Email to Team Inbox
Add an Email Send node (Gmail, SMTP, or Outlook) after the Billing Response step. Use $json.original_email as the reply-to and route to your billing team's inbox. Attach the AI summary and ticket ID in the subject line.
Jira / Linear / GitHub Issues
After the Technical Response branch, add a Jira or Linear node to automatically create an issue. Map the ticket summary to the issue title, set priority from $json.classification.priority, and tag it with the right label.
Notion / Airtable for Feature Requests
After the Feature Request Response branch, add a Notion Database Item or Airtable Record node. Log the ticket ID, summary, sender email, and date so your product team has a structured backlog.
Low-Confidence Escalation
Add a final IF node before routing that checks $json.classification.confidence < 0.7. Route low-confidence tickets to a "needs human review" Slack channel instead of auto-routing them. This catches genuinely ambiguous tickets that span categories.
Spam filtering tip: If you're getting many false positives (legitimate tickets classified as spam), add a rule: only classify as spam if confidence is above 0.9. Below that threshold, fall back to "technical" as the default category. This prevents losing real customer messages.
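Both safeguards — the 0.7 human-review threshold and the 0.9 spam cutoff — fit in one small decision function. A sketch of the combined logic, with the queue name "needs-human-review" as a placeholder:

```python
def final_route(category: str, confidence: float) -> str:
    """Apply confidence thresholds before routing a classified ticket.

    Sketch of the two safeguards described above; thresholds and the
    review-queue name are illustrative defaults, tune them to your volume.
    """
    # Only trust a spam verdict when the model is very sure; otherwise
    # fall back to the technical queue so a human still sees the ticket.
    if category == "spam" and confidence <= 0.9:
        category = "technical"
    # Genuinely ambiguous tickets of any category go to human review.
    if confidence < 0.7:
        return "needs-human-review"
    return category
```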
Tuning the Prompt for Your Business
The default four-category prompt works for most SaaS products. Here's how to adapt it:
Add Custom Categories
Insert additional category definitions into the prompt. For example, if you have an enterprise tier, add a "sales-inquiry" category for upsell and expansion conversations. Update the IF node chain to handle the new category.
Industry-Specific Language
If your customers use specialized terminology (medical, legal, financial), add a context line to the prompt: "This is a support system for a healthcare software company. Clinical terminology is expected and normal." This prevents the model from misclassifying domain-specific language as spam.
Language and Tone Detection
Add a "language": "<ISO 639-1 code>" field to the JSON output format. Route non-English tickets to a localization queue or flag them for human review. Ollama's multilingual models handle this without extra configuration.
Model Selection
llama3:8b is the right default for this task — fast, accurate on structured classification, and small enough to run on a single consumer GPU. If you process tickets in multiple languages or have complex edge cases, try mistral:7b-instruct, which has stronger multilingual support.
Comparison: Local AI vs. Support Helpdesk Tools
| Feature | n8n + Ollama (This Workflow) | Zendesk / Freshdesk AI |
|---|---|---|
| Monthly cost | $0 (plus n8n hosting) | $25–115/agent/month |
| Data privacy | Tickets never leave your server | Processed on vendor servers |
| Customization | Full control over categories and routing logic | Limited to vendor's AI model |
| Integration | Any tool with an API or n8n node | Native integrations only |
| Ticket volume | Unlimited | Tiered by plan |
| Setup time | ~45 minutes | ~2 hours (plus onboarding) |
This workflow is not a replacement for a full helpdesk with a human agent UI, SLA tracking, and customer reply threads. It's a routing layer that sits in front of your existing tools and eliminates the manual triage step. You save time and ensure every ticket lands in the right queue immediately.
Want the Production-Ready Version?
The Self-Hosted AI Workflow Pack includes an advanced support router with CRM sync, Slack alerts, SLA escalation, confidence-based human review queuing, and 10 more AI workflows — all running locally with Ollama.
Get All 11 Workflows — $39
One-time purchase. No subscriptions. 30-day money-back guarantee.
Next Steps
- Import the workflow — Copy the JSON above into n8n via Settings → Import Workflow
- Test each category — Use the curl examples to verify classification is working
- Wire up your channels — Add Slack, email, or Jira nodes after each routing branch
- Point your contact form at the webhook — Replace your current form action with the n8n webhook URL
- Monitor for 1 week — Check classification accuracy and tune the prompt for any misrouted tickets
If you want to explore more n8n + Ollama workflows, check out our other tutorials.