Automate Sentiment Analysis with n8n + Ollama (Brand Monitoring Pipeline)

Published March 24, 2026 · 11 min read

Brand monitoring tools like Brandwatch, Mention, or Sprout Social charge $99–999/month for sentiment analysis. They're powerful, but they send every customer review, social mention, and support ticket to external servers for classification. For companies that handle customer feedback in bulk — e-commerce, SaaS, agencies — that's both expensive and a privacy concern.

With n8n and Ollama, you can build a sentiment analysis pipeline that classifies text as positive, negative, or neutral, extracts key topics, scores urgency, and routes alerts — all running on your own hardware with zero per-request costs.

This tutorial builds a workflow that:

  1. Collects feedback from multiple sources (email, forms, RSS feeds, webhooks)
  2. Classifies sentiment (positive / neutral / negative) with confidence scores
  3. Extracts topics and key phrases from each piece of feedback
  4. Scores urgency based on sentiment + keywords (complaints, bugs, cancellation)
  5. Routes alerts: negative + urgent goes to Slack immediately, everything gets logged

Why LLMs Beat Traditional Sentiment Analysis

Traditional sentiment analysis uses keyword matching or pre-trained classifiers (VADER, TextBlob). These fail on sarcasm, context, and domain-specific language. LLMs understand nuance:

| Text | Traditional Tools | LLM (Ollama) |
|------|-------------------|--------------|
| "Great, another update that breaks everything" | Positive (detected "great") | Negative (sarcasm detected) |
| "The product is fine I guess" | Positive/Neutral | Negative (lukewarm = dissatisfaction) |
| "I literally can't stop using this" | Negative (detected "can't") | Positive (enthusiastic) |
| "Took 3 days but support finally fixed it" | Positive (detected "fixed") | Mixed (resolution + frustration with delay) |

Why local AI works here: Sentiment classification is a structured task — categorize input into a fixed set of labels with a confidence score. An 8B parameter model handles this with high accuracy. You don't need GPT-4 for classification; you need it for open-ended generation. A local model processes hundreds of texts per hour at zero marginal cost.

The Architecture

Feedback Sources (Email / Forms / RSS / Webhooks)
    ↓
[Trigger / Schedule] → [Collect Feedback Items]
                                   ↓
                          [Batch Processing Loop]
                                   ↓
                         [Ollama: Classify Sentiment]
                                   ↓
                          [Parse JSON Response]
                                   ↓
                         [Route by Urgency Score]
                         /          |          \
                    [Urgent]    [Normal]    [Positive]
                       ↓          ↓           ↓
                 [Slack Alert]  [Log to DB]  [Log + Thank]

Step 1: Collect Feedback from Multiple Sources

Set up triggers for each feedback source. The workflow normalizes all inputs to a common format:

// Code node: Normalize Feedback
// Input from any source, output standardized format

const items = $input.all();
return items.map(item => {
  const data = item.json;
  return {
    json: {
      id: data.id || data.messageId || crypto.randomUUID(),
      text: data.text || data.body || data.content || data.review,
      source: data.source || 'webhook',
      author: data.author || data.email || data.username || 'anonymous',
      timestamp: data.timestamp || data.date || new Date().toISOString(),
      metadata: {
        rating: data.rating || null,
        product: data.product || null,
        channel: data.channel || null
      }
    }
  };
});
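
Outside n8n, the same mapping is easy to sanity-check as plain JavaScript. This sketch inlines a sample payload in place of the n8n-specific `$input` helper (the sample fields are illustrative):

```javascript
// Standalone sketch of the normalization logic above, runnable outside n8n.
// A sample form-submission payload stands in for $input.all().
const sample = {
  messageId: 'msg-42',
  body: 'The checkout page crashes on mobile',
  email: 'jane@example.com',
  rating: 2
};

function normalize(data) {
  return {
    id: data.id || data.messageId || 'generated-id', // the n8n version uses crypto.randomUUID()
    text: data.text || data.body || data.content || data.review,
    source: data.source || 'webhook',
    author: data.author || data.email || data.username || 'anonymous',
    timestamp: data.timestamp || data.date || new Date().toISOString(),
    metadata: {
      rating: data.rating || null,
      product: data.product || null,
      channel: data.channel || null
    }
  };
}

console.log(normalize(sample));
// id falls back to messageId, text to body, author to email
```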

Step 2: Sentiment Classification with Ollama

The core classification prompt extracts multiple dimensions from each piece of feedback:

// HTTP Request to Ollama
URL: http://localhost:11434/api/generate
Method: POST
Body:
{
  "model": "llama3.1:8b",
  "prompt": "Analyze the following customer feedback and return a JSON object with these fields:\n\n1. sentiment: 'positive', 'negative', 'neutral', or 'mixed'\n2. confidence: 0.0 to 1.0\n3. urgency: 'critical', 'high', 'medium', 'low'\n4. topics: array of 1-3 key topics mentioned\n5. summary: one-sentence summary of the feedback\n6. actionable: boolean - does this require a response or action?\n\nRules:\n- 'critical' urgency: mentions of data loss, security issues, billing errors, cancellation intent\n- 'high' urgency: bugs, broken features, strong complaints\n- 'medium' urgency: feature requests, moderate complaints\n- 'low' urgency: general feedback, praise, minor suggestions\n\nRespond ONLY with the JSON object, no other text.\n\nFeedback:\n\"\"\"{{ $json.text }}\"\"\"\n\nJSON:",
  "stream": false,
  "format": "json",
  "options": {
    "temperature": 0.1,
    "num_predict": 300
  }
}

Key technique: Setting "format": "json" in the Ollama API constrains the model to emit syntactically valid JSON. Combined with temperature: 0.1, you get consistent, parseable results almost every time. It doesn't guarantee the right fields or value ranges, which is why Step 3 still validates defensively, but it removes the need for regex extraction from free-form text.
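
The workflow JSON at the end of this post inlines this prompt as an n8n expression that truncates the feedback text and escapes its double quotes first. That escaping can be sanity-checked outside n8n (`buildPrompt` is an illustrative helper, and the prompt text is abbreviated here):

```javascript
// Sketch of the truncation + quote-escaping the HTTP Request node's
// expression performs before embedding feedback text in the prompt.
function buildPrompt(text) {
  const safe = (text || '').substring(0, 2000).replace(/"/g, '\\"');
  return 'Analyze this customer feedback. Return JSON with: sentiment, confidence, ' +
    'urgency, topics, summary, actionable.\n\nFeedback: "' + safe + '"\n\nJSON:';
}

const p = buildPrompt('She said "never again" and left');
// Quotes inside the feedback are escaped, so the prompt stays one coherent string
```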

Step 3: Parse and Validate the Response

// Code node: Parse Sentiment Result
const raw = $json.data || $json.response;
let result;

try {
  result = typeof raw === 'string' ? JSON.parse(raw) : raw;
  // Ollama wraps its output as { response: "..." }; unwrap if present
  if (typeof result.response === 'string') {
    result = JSON.parse(result.response);
  }
} catch (e) {
  // Fallback for unparseable responses
  result = {
    sentiment: 'unknown',
    confidence: 0,
    urgency: 'medium',
    topics: ['parse_error'],
    summary: 'Could not parse AI response',
    actionable: true
  };
}

// Validate and normalize
const validSentiments = ['positive', 'negative', 'neutral', 'mixed'];
if (!validSentiments.includes(result.sentiment)) result.sentiment = 'unknown';
const validUrgencies = ['critical', 'high', 'medium', 'low'];
if (!validUrgencies.includes(result.urgency)) result.urgency = 'medium';
result.confidence = Math.max(0, Math.min(1, Number(result.confidence) || 0));

const original = $('Normalize Feedback').item.json;
return [{
  json: {
    ...result,
    originalText: original.text,
    source: original.source,
    author: original.author,
    feedbackId: original.id,
    analyzedAt: new Date().toISOString()
  }
}];
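
The validation rules are easy to unit-test outside n8n. A minimal standalone sketch (the `validate` wrapper is illustrative):

```javascript
// Standalone check of the validation rules above against malformed model output.
function validate(result) {
  const validSentiments = ['positive', 'negative', 'neutral', 'mixed'];
  if (!validSentiments.includes(result.sentiment)) result.sentiment = 'unknown';
  result.confidence = Math.max(0, Math.min(1, Number(result.confidence) || 0));
  return result;
}

const messy = { sentiment: 'POSITIVE', confidence: '1.7' };
console.log(validate(messy));
// → { sentiment: 'unknown', confidence: 1 }
```

Wrong-case labels fall through to 'unknown' rather than silently passing, and out-of-range confidence values get clamped instead of breaking downstream math.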

Step 4: Route by Urgency

Use a Switch node to route feedback based on urgency + sentiment:

// Switch Node: Route by Priority
//
// Route 1 (Urgent Alert):
//   urgency = 'critical' OR (urgency = 'high' AND sentiment = 'negative')
//   → Send to Slack #alerts + create support ticket
//
// Route 2 (Needs Response):
//   actionable = true AND sentiment != 'positive'
//   → Add to response queue
//
// Route 3 (Positive Feedback):
//   sentiment = 'positive' AND confidence > 0.7
//   → Log + optionally send thank-you reply
//
// Default:
//   → Log to database for weekly analysis
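
These conditions can be mirrored as a plain function for testing outside n8n (the function and route names are illustrative, not n8n API):

```javascript
// Illustrative routing function mirroring the Switch node rules above.
// Rules are checked in priority order, exactly as the Switch node evaluates them.
function route(item) {
  const { urgency, sentiment, actionable, confidence } = item;
  if (urgency === 'critical' || (urgency === 'high' && sentiment === 'negative'))
    return 'urgent_alert';
  if (actionable === true && sentiment !== 'positive')
    return 'needs_response';
  if (sentiment === 'positive' && confidence > 0.7)
    return 'positive';
  return 'log_only';
}

console.log(route({ urgency: 'critical', sentiment: 'neutral', actionable: false }));
// → 'urgent_alert'
```

Order matters: a critical-urgency item should hit the alert route even when it is also actionable, so the urgent check comes first.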

Slack Alert for Urgent Feedback

// Slack Message (for critical/negative feedback)
Channel: #customer-alerts
Message:
"*Urgent Customer Feedback*
*Sentiment:* {{ $json.sentiment }} ({{ Math.round($json.confidence * 100) }}% confidence)
*Urgency:* {{ $json.urgency }}
*Topics:* {{ $json.topics.join(', ') }}
*Summary:* {{ $json.summary }}
*Source:* {{ $json.source }} | *Author:* {{ $json.author }}

> {{ $json.originalText.substring(0, 500) }}"

Step 5: Store Results for Analytics

// PostgreSQL Insert (or any database)
INSERT INTO sentiment_analysis (
  feedback_id, sentiment, confidence, urgency,
  topics, summary, actionable, source, author,
  original_text, analyzed_at
) VALUES (
  '{{ $json.feedbackId }}',
  '{{ $json.sentiment }}',
  {{ $json.confidence }},
  '{{ $json.urgency }}',
  '{{ JSON.stringify($json.topics) }}',
  '{{ $json.summary.replace(/'/g, "''") }}',
  {{ $json.actionable }},
  '{{ $json.source }}',
  '{{ $json.author }}',
  '{{ $json.originalText.replace(/'/g, "''") }}',
  '{{ $json.analyzedAt }}'
);
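
String interpolation into SQL is fragile; the quote-doubling on the text fields above is the minimum protection, and recent versions of n8n's Postgres node also support query parameters, which is the safer option. The escaping itself can be checked in isolation:

```javascript
// Quick check of the single-quote doubling used in the INSERT above.
// For production, prefer the Postgres node's query-parameter support
// over template interpolation.
function escapeSqlString(s) {
  return s.replace(/'/g, "''");
}

console.log(escapeSqlString("it's broken"));
// → it''s broken
```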

Batch Processing for High Volume

If you're processing hundreds of feedback items daily, batch them to avoid overwhelming Ollama:

// Use n8n's Split In Batches node (now "Loop Over Items"):
//   Batch size: 5
//   Follow it with a Wait node set to 2 seconds on the loop branch;
//   Split In Batches has no built-in pause option.
//
// This processes 5 items, waits 2s, then takes the next 5.
// On a decent GPU, inference takes 1-3 seconds per item,
// so a batch of 5 completes in roughly 5-15 seconds.
//
// Sustained throughput, including parsing, routing, and database writes:
//   ~200 items/hour on RTX 3060
//   ~500 items/hour on RTX 4090
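
For a sense of scale, the inference-only ceiling implied by these settings can be computed directly; sustained throughput lands well below it once parsing, routing, and storage run per item (the 3 s/item latency here is an assumption, so measure on your own hardware):

```javascript
// Upper-bound throughput from inference time alone,
// ignoring the rest of the pipeline.
function inferenceCeilingPerHour(batchSize, perItemSeconds, pauseSeconds) {
  const batchSeconds = batchSize * perItemSeconds + pauseSeconds;
  return Math.floor((3600 / batchSeconds) * batchSize);
}

console.log(inferenceCeilingPerHour(5, 3, 2)); // ~1000 items/hour ceiling at 3 s/item
```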

Complete Workflow JSON

{
  "name": "Sentiment Analysis Pipeline (Ollama + Multi-Source)",
  "nodes": [
    {
      "parameters": {
        "rule": { "interval": [{ "field": "hours", "hoursInterval": 1 }] }
      },
      "id": "schedule",
      "name": "Hourly Check",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1.2,
      "position": [240, 300]
    },
    {
      "parameters": {
        "url": "http://localhost:11434/api/generate",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={{ JSON.stringify({ model: 'llama3.1:8b', prompt: 'Analyze this customer feedback. Return JSON with: sentiment (positive/negative/neutral/mixed), confidence (0-1), urgency (critical/high/medium/low), topics (array of 1-3), summary (one sentence), actionable (boolean).\\n\\nFeedback: \"' + ($json.text || '').substring(0, 2000).replace(/\"/g, '\\\\\"') + '\"\\n\\nJSON:', stream: false, format: 'json', options: { temperature: 0.1, num_predict: 300 } }) }}",
        "options": { "timeout": 60000 }
      },
      "id": "sentiment",
      "name": "Analyze Sentiment (Ollama)",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [680, 300]
    },
    {
      "parameters": {
        "conditions": {
          "string": [{
            "value1": "={{ $json.urgency }}",
            "operation": "oneOf",
            "value2": "critical,high"
          }]
        }
      },
      "id": "urgency-check",
      "name": "Is Urgent?",
      "type": "n8n-nodes-base.if",
      "typeVersion": 2,
      "position": [900, 300]
    }
  ],
  "connections": {
    "Hourly Check": {
      "main": [[{ "node": "Analyze Sentiment (Ollama)", "type": "main", "index": 0 }]]
    },
    "Analyze Sentiment (Ollama)": {
      "main": [[{ "node": "Is Urgent?", "type": "main", "index": 0 }]]
    }
  },
  "settings": { "executionOrder": "v1" },
  "tags": [{ "name": "AI" }, { "name": "Ollama" }, { "name": "Sentiment Analysis" }, { "name": "Brand Monitoring" }]
}

Wrapping Up

Sentiment analysis with n8n + Ollama gives you an always-on feedback classification system without per-request API costs or data leaving your servers. The LLM approach outperforms traditional keyword-based tools on sarcasm, context, and nuance — the exact cases where automated analysis matters most.

Start with the template above, connect your feedback sources, and customize the urgency rules for your business. The classification prompt is the key lever — adjust the urgency criteria and topic categories to match what matters for your product.

Want 11 Production-Ready AI Workflows?

The Self-Hosted AI Workflow Pack includes sentiment analysis, email automation, document processing, chatbots, and 7 more n8n + Ollama templates. One payment, unlimited runs, zero API costs.

Get the Full Pack — $39