Your application logs contain critical signals buried in noise. A spike in 500 errors, an unusual authentication pattern, a slow database query cascade — these patterns are hard to catch with static regex rules but obvious to an AI model that understands context.
This guide shows you how to build an automated log analysis and anomaly detection pipeline using n8n and Ollama. It ingests logs from multiple sources, classifies errors, detects anomalous patterns, and sends alerts to Slack or PagerDuty — all running on your own infrastructure with zero API costs.
Traditional log monitoring relies on pattern matching: grep for "ERROR", count 500 status codes, alert on threshold breaches. This works for known failure modes but misses novel error types, gradual degradation, and failures that cascade across services.
An LLM can read log entries like a senior SRE would — understanding context, recognizing patterns across services, and explaining what's actually happening in plain language.
The pipeline has four stages: log ingestion, AI classification, anomaly detection, and alerting.

First, install Ollama and pull a model such as mistral:7b or llama3:8b. Then start the workflow with a Cron trigger that runs every 5 minutes, connected to a node that reads your logs. For file-based logs, use the Execute Command node:
# Read last 5 minutes of logs from multiple sources
# Application logs
tail -n 500 /var/log/myapp/application.log
# Nginx access logs (last 5 min; note: this is a lexicographic string
# compare on the timestamp field, so it's only reliable within one day)
awk -v d="$(date -d '5 min ago' '+%d/%b/%Y:%H:%M')" '$4 >= "["d' /var/log/nginx/access.log
# System journal
journalctl --since "5 min ago" --no-pager -q
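When you combine several sources into one batch, it helps to tag each line with its origin so the model can attribute issues to the right service. A minimal sketch (the `collect` helper and the file path are illustrative, not part of the workflow above):

```shell
# Tag each source's lines with an origin prefix before concatenating.
collect() {
  # $1 = label, $2 = log file
  sed "s/^/[$1] /" "$2"
}

# Example with a throwaway file standing in for a real log:
printf 'ERROR db timeout\n' > /tmp/app.log
collect app /tmp/app.log
# → [app] ERROR db timeout
```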
If your logs are in a cloud service, use the HTTP Request node to query them via API:
// CloudWatch Logs Insights query via n8n HTTP Request
{
"logGroupName": "/aws/lambda/my-function",
"queryString": "fields @timestamp, @message | filter @message like /ERROR|WARN|Exception/ | sort @timestamp desc | limit 200",
"startTime": {{ Math.floor($now.minus({ minutes: 5 }).toSeconds()) }},
"endTime": {{ Math.floor($now.toSeconds()) }}
}
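Note that Logs Insights queries are asynchronous: StartQuery returns a queryId, which you poll with GetQueryResults (a second HTTP Request node behind a short Wait works). Results arrive as an array of field/value pairs per row, so flatten them into plain text before prompting the model. A sketch, with `flattenInsightsResults` as an illustrative helper:

```javascript
// Flatten a GetQueryResults-style payload into newline-separated log text.
// Each row is an array of { field, value } objects.
function flattenInsightsResults(results) {
  return results.map(row => {
    const fields = Object.fromEntries(row.map(f => [f.field, f.value]));
    return `${fields['@timestamp']} ${fields['@message']}`;
  }).join('\n');
}

// Example row shaped like the CloudWatch response:
const sample = [[
  { field: '@timestamp', value: '2024-01-01 12:00:00.000' },
  { field: '@message', value: 'ERROR: connection refused' }
]];
console.log(flattenInsightsResults(sample));
// → 2024-01-01 12:00:00.000 ERROR: connection refused
```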
Send the log batch to Ollama for classification. Use the HTTP Request node pointed at your Ollama API:
POST http://localhost:11434/api/generate
{
"model": "mistral:7b",
"prompt": "You are a senior SRE analyzing application logs. Classify each issue found and assess overall system health.\n\nLOGS:\n{{ $json.logs }}\n\nRespond in JSON format:\n{\n \"overall_health\": \"healthy|degraded|critical\",\n \"error_count\": number,\n \"warning_count\": number,\n \"issues\": [\n {\n \"severity\": \"critical|high|medium|low|info\",\n \"category\": \"authentication|database|network|memory|disk|application|security\",\n \"summary\": \"Brief description\",\n \"affected_service\": \"service name\",\n \"log_lines\": [\"relevant lines\"],\n \"possible_cause\": \"What likely caused this\",\n \"suggested_action\": \"What to do about it\"\n }\n ],\n \"patterns\": [\"any recurring patterns noticed\"],\n \"anomalies\": [\"anything unusual compared to normal operation\"]\n}",
"stream": false,
"options": {
"temperature": 0.1,
"num_predict": 2048
}
}
Use temperature: 0.1 for log analysis. You want consistent, deterministic classifications, not creative interpretations; higher temperatures make the model more likely to hallucinate issues that don't exist in the logs.
| Model | Speed | Best For |
|---|---|---|
| phi3:mini | ~200ms first token | Simple error counting, basic classification |
| mistral:7b | ~350ms first token | Good balance of speed and accuracy; handles stack traces well |
| llama3:8b | ~400ms first token | Best at understanding complex multi-service correlations |
| codellama:13b | ~800ms first token | Stack trace analysis, code-level root cause identification |
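Whichever model you pick, the JSON you asked for comes back as text in Ollama's `response` field, sometimes wrapped in markdown fences. Parse it defensively in a Function node before the anomaly step; `parseModelJson` here is an illustrative helper, not an n8n built-in:

```javascript
// Strip any markdown code fences the model added, then parse the JSON.
function parseModelJson(responseText) {
  const cleaned = responseText.replace(/```(json)?/g, '').trim();
  return JSON.parse(cleaned);
}

const sample = '```json\n{"overall_health": "degraded", "error_count": 3}\n```';
console.log(parseModelJson(sample).overall_health); // → degraded
```

In production you'd also want a try/catch around the parse with a retry or a fallback alert, since even at low temperature the model occasionally emits malformed JSON.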
After classification, use a Function node to compare current results against a rolling baseline. This catches gradual degradation that individual checks miss:
// Anomaly detection logic
const current = $input.first().json;
const baseline = $('Read Baseline').first().json;
const anomalies = [];
// Error rate spike detection
const errorRate = current.error_count / (current.error_count + current.warning_count + 1);
const baselineErrorRate = baseline.avg_error_rate || 0.05;
if (errorRate > baselineErrorRate * 3) {
anomalies.push({
type: 'error_rate_spike',
severity: 'high',
current: errorRate,
baseline: baselineErrorRate,
message: `Error rate ${(errorRate * 100).toFixed(1)}% is ${(errorRate / baselineErrorRate).toFixed(1)}x above baseline`
});
}
// New error category detection
const knownCategories = baseline.known_categories || [];
const newCategories = current.issues
.map(i => i.category)
.filter(c => !knownCategories.includes(c));
if (newCategories.length > 0) {
anomalies.push({
type: 'new_error_category',
severity: 'medium',
categories: newCategories,
message: `New error categories detected: ${newCategories.join(', ')}`
});
}
// Critical issue detection
const criticalIssues = current.issues.filter(i => i.severity === 'critical');
if (criticalIssues.length > 0) {
anomalies.push({
type: 'critical_issues',
severity: 'critical',
count: criticalIssues.length,
issues: criticalIssues,
message: `${criticalIssues.length} critical issue(s) detected`
});
}
// Service health degradation
if (current.overall_health === 'critical' && baseline.last_health !== 'critical') {
anomalies.push({
type: 'health_degradation',
severity: 'critical',
from: baseline.last_health,
to: current.overall_health,
message: `System health degraded from ${baseline.last_health} to critical`
});
}
// Update baseline (rolling average)
const updatedBaseline = {
avg_error_rate: (baselineErrorRate * 0.9) + (errorRate * 0.1),
known_categories: [...new Set([...knownCategories, ...current.issues.map(i => i.category)])],
last_health: current.overall_health,
last_updated: new Date().toISOString()
};
return {
json: {
anomalies,
should_alert: anomalies.some(a => a.severity === 'critical' || a.severity === 'high'),
analysis: current,
updated_baseline: updatedBaseline
}
};
Use an IF node to route based on severity. Critical and high alerts go to Slack/PagerDuty immediately. Medium alerts batch into a daily digest.
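The IF node can check the `should_alert` flag computed earlier, but if you prefer an explicit three-way split (page vs. digest vs. drop), the routing logic looks like this sketch (`route` is an illustrative helper):

```javascript
// Map a batch of anomalies to a routing decision.
function route(anomalies) {
  const severities = new Set(anomalies.map(a => a.severity));
  if (severities.has('critical') || severities.has('high')) return 'page';   // Slack + PagerDuty now
  if (severities.has('medium')) return 'digest';                             // daily summary
  return 'drop';                                                             // log only
}

console.log(route([{ severity: 'high' }]));   // → page
console.log(route([{ severity: 'medium' }])); // → digest
console.log(route([]));                       // → drop
```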
// Slack message builder (Function node)
const data = $input.first().json;
const blocks = [
{
type: "header",
text: {
type: "plain_text",
text: data.analysis.overall_health === 'critical'
? "CRITICAL: System Health Alert"
: "Warning: Anomaly Detected"
}
},
{
type: "section",
fields: [
{ type: "mrkdwn", text: `*Health:* ${data.analysis.overall_health}` },
{ type: "mrkdwn", text: `*Errors:* ${data.analysis.error_count}` },
{ type: "mrkdwn", text: `*Warnings:* ${data.analysis.warning_count}` },
{ type: "mrkdwn", text: `*Anomalies:* ${data.anomalies.length}` }
]
}
];
// Add each anomaly as a section
for (const anomaly of data.anomalies) {
blocks.push({
type: "section",
text: {
type: "mrkdwn",
text: `*${anomaly.severity.toUpperCase()}* — ${anomaly.message}`
}
});
}
// Add suggested actions from AI analysis
const actions = data.analysis.issues
.filter(i => i.severity === 'critical' || i.severity === 'high')
.map(i => `• *${i.summary}*: ${i.suggested_action}`)
.join('\n');
if (actions) {
blocks.push({
type: "section",
text: { type: "mrkdwn", text: `*Suggested Actions:*\n${actions}` }
});
}
return { json: { blocks } };
Store the rolling baseline in a JSON file or database so the pipeline learns what's "normal" for your system over time:
# Write baseline to file (Execute Command node)
echo '{{ JSON.stringify($json.updated_baseline) }}' > /data/log-analysis-baseline.json

# Or use n8n's built-in SQLite:
# INSERT OR REPLACE INTO baselines (key, value, updated_at)
# VALUES ('log_analysis', '{{ JSON.stringify($json.updated_baseline) }}', datetime('now'))
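On the very first run there is no baseline to read, so the anomaly step would fail. Guard the read with safe defaults; the field names mirror the `updated_baseline` object above, and `withDefaults` is an illustrative helper:

```javascript
// Parse the stored baseline, falling back to defaults when the file is
// missing or corrupt (e.g. on the pipeline's first run).
function withDefaults(raw) {
  let baseline = {};
  try { baseline = JSON.parse(raw); } catch (e) { /* no baseline yet */ }
  return {
    avg_error_rate: baseline.avg_error_rate ?? 0.05,
    known_categories: baseline.known_categories ?? [],
    last_health: baseline.last_health ?? 'healthy'
  };
}

console.log(withDefaults('').avg_error_rate);                        // → 0.05
console.log(withDefaults('{"avg_error_rate":0.12}').avg_error_rate); // → 0.12
```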
For microservice architectures, feed logs from multiple services into a single Ollama analysis pass. The model can identify cross-service failure cascades that per-service monitoring misses:
{
"prompt": "Analyze logs from multiple services and identify any correlated failures or cascade patterns.\n\nAPI GATEWAY LOGS:\n{{ $json.gateway_logs }}\n\nAUTH SERVICE LOGS:\n{{ $json.auth_logs }}\n\nDATABASE LOGS:\n{{ $json.db_logs }}\n\nFocus on:\n1. Timeline correlation (did failures happen in sequence?)\n2. Error propagation (did one service cause failures in others?)\n3. Resource contention (are services competing for the same resources?)\n4. Common request IDs across services\n\nRespond in JSON with a 'cascade_analysis' field."
}
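Upstream of that prompt, a Function node can assemble the per-service fields from whatever items your ingestion nodes emit. The field names below mirror the template above, but the `{ service, logs }` input shape is an assumption about how you've structured the upstream nodes:

```javascript
// Build the per-service prompt fields from a list of { service, logs } items.
function mergeServiceLogs(items) {
  const merged = {};
  for (const item of items) {
    merged[`${item.service}_logs`] = item.logs;
  }
  return merged;
}

const payload = mergeServiceLogs([
  { service: 'gateway', logs: '502 upstream timed out' },
  { service: 'auth', logs: 'token validation failed' }
]);
console.log(payload.gateway_logs); // → 502 upstream timed out
```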
The same pipeline can detect security-relevant patterns in auth logs:
{
"prompt": "Analyze these authentication logs for security anomalies.\n\nLOGS:\n{{ $json.auth_logs }}\n\nLook for:\n1. Brute force attempts (multiple failed logins from same IP)\n2. Credential stuffing patterns (failures across many accounts from few IPs)\n3. Unusual login times or locations\n4. Privilege escalation attempts\n5. Session anomalies (token reuse, impossible travel)\n\nRespond in JSON:\n{\n \"threat_level\": \"none|low|medium|high|critical\",\n \"findings\": [...],\n \"recommended_blocks\": [\"IPs to block\"],\n \"recommended_actions\": [\"immediate steps\"]\n}"
}
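Treat the model's `recommended_blocks` as suggestions, not commands: validate that each entry is a well-formed IP address before any firewall automation acts on it. A minimal sketch (`validIps` is an illustrative, IPv4-only helper):

```javascript
// Filter model-suggested blocks down to syntactically valid IPv4 addresses.
function validIps(candidates) {
  const ipv4 = /^(\d{1,3})(\.\d{1,3}){3}$/;
  return candidates.filter(ip => ipv4.test(ip) &&
    ip.split('.').every(octet => Number(octet) <= 255));
}

console.log(validIps(['203.0.113.7', 'not-an-ip', '999.1.1.1']));
// → [ '203.0.113.7' ]
```

Even then, prefer a human-approval step (e.g. a Slack interactive message) before blocking anything, since a hallucinated IP could lock out a legitimate user.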
Don't send all log lines to Ollama. Pre-filter aggressively:
// Pre-filter function (before Ollama)
const lines = $input.first().json.raw_logs.split('\n');
const filtered = lines.filter(line => {
// Skip health checks, static assets, metrics scrapes
if (line.includes('/health') || line.includes('/metrics')) return false;
if (line.includes('.css') || line.includes('.js') || line.includes('.png')) return false;
if (line.includes('kube-probe')) return false;
// Keep errors, warnings, slow requests, auth events
if (/ERROR|WARN|Exception|FATAL/i.test(line)) return true;
if (/5\d{2}\s/.test(line)) return true; // 5xx status codes
if (/timeout|refused|unreachable/i.test(line)) return true;
if (/login|auth|token|session/i.test(line)) return true;
// Keep requests slower than 2 seconds
const duration = line.match(/(\d+)ms/);
if (duration && parseInt(duration[1]) > 2000) return true;
return false;
});
return { json: { logs: filtered.join('\n'), total_lines: lines.length, filtered_lines: filtered.length } };
For high-volume systems (more than 10,000 lines per minute), batch the filtered lines into fixed windows and sample within each window rather than sending everything to the model.
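One workable sampling approach: keep every error line and systematically thin the rest down to a fixed cap (`sampleLogs` is an illustrative sketch, not part of the workflow JSON):

```javascript
// Keep all error lines; thin the remainder to stay under a per-batch cap.
function sampleLogs(lines, cap = 200) {
  const isError = l => /ERROR|FATAL|Exception/i.test(l);
  const errors = lines.filter(isError);
  const rest = lines.filter(l => !isError(l));
  const budget = Math.max(1, cap - errors.length);
  const step = Math.max(1, Math.ceil(rest.length / budget));
  return errors.concat(rest.filter((_, i) => i % step === 0)).slice(0, cap);
}

const batch = Array.from({ length: 1000 }, (_, i) =>
  i % 100 === 0 ? `ERROR spike ${i}` : `INFO request ${i}`);
const sampled = sampleLogs(batch);
console.log(sampled.length <= 200);                             // → true
console.log(sampled.filter(l => l.startsWith('ERROR')).length); // → 10
```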
Teams running this pipeline typically report catching incidents earlier and cutting alert noise, since only AI-confirmed anomalies page anyone.
Download the ready-to-import n8n workflow JSON with all nodes pre-configured, including the anomaly detection baseline system and Slack alert templates.
Download Free Templates