Automate Code Reviews with n8n + Ollama (AI Pull Request Reviewer)
Code reviews are essential but time-consuming. Senior developers spend 4–8 hours per week reviewing pull requests, and backlogs pile up fast. Cloud AI code review tools like CodeRabbit or Sourcery charge $15–30/developer/month and require sending your proprietary code to external servers.
With n8n and Ollama, you can build an AI code reviewer that runs entirely on your own machine — zero API costs, your code never leaves your infrastructure, and it works on any Git repository.
In this tutorial, you'll build a webhook-triggered workflow that:
- Receives a GitHub webhook when a pull request is opened or updated
- Fetches the diff from the GitHub API
- Sends the diff to Ollama for multi-dimensional analysis
- Posts the AI review as a comment on the pull request
- Flags critical issues with severity levels
Why Automate Code Reviews with Local AI?
Manual code reviews have four problems that AI can address:
| Problem | Impact | How AI Helps |
|---|---|---|
| Review bottlenecks | PRs wait 1–3 days for review | Instant first-pass review within seconds |
| Inconsistent standards | Different reviewers catch different things | Consistent checklist applied every time |
| Missed security issues | Vulnerabilities slip through tired eyes | AI checks for OWASP Top 10, SQL injection, XSS |
| Style debates | Bikeshedding wastes meeting time | AI enforces team conventions automatically |
Why local AI works for code review: Code review is a structured analysis task — check for patterns, compare against rules, identify anomalies. An 8B–14B parameter model like Llama 3 or CodeLlama handles this well. You don't need GPT-4 to spot a missing null check or an unhandled error.
The Architecture
The workflow uses a GitHub webhook to trigger reviews automatically:
GitHub PR Event
↓
[Webhook Trigger] → [Fetch PR Diff] → [Chunk Diff] → [Ollama Review]
↓
[Format Review]
↓
[Post PR Comment]
Step 1: Set Up the Webhook Trigger
The workflow starts when GitHub sends a webhook event for pull request activity.
In n8n, add a Webhook node:
- HTTP Method: POST
- Path: /github/pr-review
Then configure your GitHub repository:
- Go to Settings → Webhooks → Add webhook
- Payload URL: https://your-n8n-domain.com/webhook/github/pr-review
- Content type: application/json
- Events: Select "Pull requests" only
Step 2: Fetch the Pull Request Diff
Use an HTTP Request node to get the diff from GitHub's API:
// HTTP Request Node Configuration
URL: https://api.github.com/repos/{{ $json.body.repository.full_name }}/pulls/{{ $json.body.number }}
Method: GET
Headers:
Accept: application/vnd.github.v3.diff
Authorization: Bearer YOUR_GITHUB_TOKEN
This returns the raw diff of all changed files. The Accept: application/vnd.github.v3.diff header tells GitHub to return the unified diff format instead of JSON.
Step 3: Chunk Large Diffs
Large PRs can exceed Ollama's context window. Split the diff into per-file chunks:
// Function Node: Split Diff by File
const diff = $input.first().json.data;
const files = diff.split(/^diff --git /m).filter(f => f.trim());
return files.map(file => {
const lines = file.split('\n');
const filename = lines[0].match(/a\/(.+?) b\//)?.[1] || 'unknown';
return {
json: {
filename,
diff: 'diff --git ' + file,
pr_number: $('Webhook').item.json.body.number,
repo: $('Webhook').item.json.body.repository.full_name
}
};
});
Step 4: AI Review with Ollama
This is the core of the workflow. Send each file's diff to Ollama with a structured review prompt:
// HTTP Request to Ollama
URL: http://localhost:11434/api/generate
Method: POST
Body (JSON):
{
"model": "llama3:8b",
"prompt": "You are an expert code reviewer. Review this git diff and provide feedback.\n\nAnalyze for:\n1. **Bugs**: Logic errors, off-by-one, null pointer risks, race conditions\n2. **Security**: SQL injection, XSS, command injection, hardcoded secrets\n3. **Performance**: N+1 queries, unnecessary allocations, missing indexes\n4. **Readability**: Unclear naming, missing error handling, complex logic\n5. **Best Practices**: Framework conventions, design patterns, test coverage\n\nFor each issue found, respond in this format:\n- [SEVERITY: critical/warning/info] FILE:LINE - Description of issue and suggested fix\n\nIf the code looks good, say so briefly.\n\nDiff:\n```\n${diff}\n```",
"stream": false,
"options": {
"temperature": 0.2,
"num_predict": 2000
}
}
Prompt engineering tip: Use temperature: 0.2 for code review. You want consistent, analytical output — not creative responses. The structured format ([SEVERITY: level] FILE:LINE) makes it easy to parse results programmatically if needed.
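Because the findings follow a fixed shape, a small parser can turn them into structured data for dashboards or Slack alerts. A sketch, assuming the model actually followed the requested format (real output can drift, so unmatched lines are simply ignored); `parseFindings` is a hypothetical helper:

```javascript
// Sketch: parse "[SEVERITY: level] FILE:LINE - description" findings from a review.
// Assumes the model followed the format requested in the prompt above.
function parseFindings(reviewText) {
  const pattern = /\[SEVERITY:\s*(critical|warning|info)\]\s*([^\s:]+):(\d+)\s*-\s*(.+)/gi;
  const findings = [];
  let match;
  while ((match = pattern.exec(reviewText)) !== null) {
    findings.push({
      severity: match[1].toLowerCase(),
      file: match[2],
      line: Number(match[3]),
      message: match[4].trim(),
    });
  }
  return findings;
}
```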
Step 5: Format and Post the Review
Aggregate all file reviews and post as a single PR comment:
// Function Node: Format Review Comment
const reviews = $input.all();
let comment = '## AI Code Review\n\n';
comment += '> Automated review by local AI (Ollama). ';
comment += 'This is a first-pass review — human review is still recommended.\n\n';
let criticalCount = 0;
let warningCount = 0;
for (const review of reviews) {
const response = JSON.parse(review.json.data).response;
comment += `### \`${review.json.filename}\`\n\n`;
comment += response + '\n\n---\n\n';
criticalCount += (response.match(/\[SEVERITY: critical\]/gi) || []).length;
warningCount += (response.match(/\[SEVERITY: warning\]/gi) || []).length;
}
comment += `**Summary:** ${criticalCount} critical, ${warningCount} warnings\n`;
return [{ json: { comment, pr_number: reviews[0].json.pr_number, repo: reviews[0].json.repo } }];
Then use another HTTP Request node to post the comment:
// Post Comment to PR
URL: https://api.github.com/repos/{{ $json.repo }}/issues/{{ $json.pr_number }}/comments
Method: POST
Headers:
Authorization: Bearer YOUR_GITHUB_TOKEN
Body (JSON, built with an expression so newlines in the comment are properly escaped):
{{ JSON.stringify({ body: $json.comment }) }}
Advanced: Model Selection for Code Review
Different models have different strengths for code review:
| Model | Size | Best For | Speed |
|---|---|---|---|
| codellama:7b | 3.8 GB | Code-specific analysis, security checks | Fast |
| llama3:8b | 4.7 GB | General review, readability, best practices | Fast |
| codellama:13b | 7.4 GB | Complex architecture review | Medium |
| deepseek-coder:6.7b | 3.8 GB | Multi-language review, type checking | Fast |
| qwen2.5-coder:7b | 4.7 GB | Latest training data, modern framework knowledge | Fast |
Recommendation: Start with llama3:8b for general reviews. Switch to codellama:7b if you need more code-specific analysis (it's trained specifically on code and understands more languages). For teams with 16GB+ RAM, codellama:13b gives noticeably better architecture-level feedback.
Production Considerations
Rate Limiting
If your team opens 20+ PRs/day, Ollama on a single machine can become a bottleneck. Solutions:
- Add a queue (n8n's built-in "Execute Workflow" with concurrency limits)
- Skip review for PRs under 5 lines changed (likely typo fixes)
- Only trigger on opened and synchronize events, not labeled or assigned
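GitHub's pull_request webhook payload includes additions and deletions counts, so a Function or IF node can short-circuit before the diff is even fetched. A sketch; `shouldReview` and the 5-line threshold are illustrative:

```javascript
// Sketch: gate the workflow on event type and PR size, using the
// additions/deletions counts from GitHub's pull_request webhook payload.
function shouldReview(payload) {
  const pr = payload.pull_request || {};
  const linesChanged = (pr.additions || 0) + (pr.deletions || 0);
  const relevantAction = payload.action === 'opened' || payload.action === 'synchronize';
  return relevantAction && linesChanged >= 5; // skip likely typo fixes
}
```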
Context Window Limits
Most Ollama models have 4096–8192 token context windows. For PRs with large files:
- Truncate diffs to the first 3000 characters per file
- Focus on added lines (+ lines in the diff) rather than removed lines
- Skip binary files and lock files automatically
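These rules can live in a Function node between the chunking step and the Ollama call. A sketch; `prepareChunk`, the skip patterns, and the 3000-character cap are illustrative choices, not fixed requirements:

```javascript
// Sketch: drop lock files and binary diffs, and truncate what's left,
// before sending per-file chunks to Ollama. Patterns and cap are illustrative.
const SKIP_PATTERNS = [
  /package-lock\.json$/, /yarn\.lock$/, /pnpm-lock\.yaml$/, /Cargo\.lock$/,
  /\.(png|jpg|jpeg|gif|pdf|woff2?|min\.js)$/,
];

function prepareChunk(filename, diff) {
  if (SKIP_PATTERNS.some((p) => p.test(filename))) return null; // skip entirely
  if (/Binary files .* differ/.test(diff)) return null;         // git marks binary diffs
  return { filename, diff: diff.slice(0, 3000) };               // cap per-file size
}
```

Returning null lets the caller filter the chunk out with a simple `.filter(Boolean)`.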
Reducing False Positives
AI code reviewers sometimes flag things that aren't actual issues. Reduce noise by:
- Adding project-specific context to the prompt ("We use TypeScript strict mode", "All database access goes through the ORM")
- Excluding test files from security reviews (test fixtures often contain intentionally "bad" data)
- Using severity levels so developers can focus on critical issues first
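Project-specific context is easiest to maintain as a rules list that gets prepended to every prompt. A sketch; `buildPrompt` and the example rules are illustrative, so swap in your own conventions:

```javascript
// Sketch: prepend team conventions to the review prompt so the model judges
// against your rules, not generic ones. Rules shown here are examples.
const TEAM_RULES = [
  'We use TypeScript strict mode; do not flag missing runtime type checks.',
  'All database access goes through the ORM; raw SQL is itself a finding.',
  'Test fixtures may contain intentionally insecure sample data; skip them.',
];

function buildPrompt(diff) {
  return [
    'You are an expert code reviewer. Project conventions:',
    ...TEAM_RULES.map((rule) => `- ${rule}`),
    '',
    'Review this diff. For each issue: [SEVERITY: critical/warning/info] FILE:LINE - description and fix.',
    'Diff:',
    diff,
  ].join('\n');
}
```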
Cost Comparison
| | Cloud AI Review Tools | This Workflow (Local) |
|---|---|---|
| Monthly cost (10 devs) | $150–300/month | $0 |
| Code privacy | Sent to cloud servers | Never leaves your machine |
| Customization | Limited to tool's options | Full control over prompts and rules |
| Setup time | 5 minutes | 30 minutes |
| Works offline | No | Yes (except GitHub API calls) |
| GDPR/SOC2 | Depends on vendor | Compliant by default |
Complete Workflow JSON
Import this directly into n8n to get started:
{
"name": "AI Code Review Bot (Ollama + GitHub)",
"nodes": [
{
"parameters": {
"httpMethod": "POST",
"path": "github/pr-review",
"responseMode": "responseNode"
},
"id": "webhook",
"name": "GitHub PR Webhook",
"type": "n8n-nodes-base.webhook",
"typeVersion": 2,
"position": [240, 300]
},
{
"parameters": {
"conditions": {
"options": { "caseSensitive": true },
"combinator": "and",
"conditions": [
{
"leftValue": "={{ $json.body.action }}",
"rightValue": "opened",
"operator": { "type": "string", "operation": "oneOf", "rightValue": ["opened", "synchronize"] }
}
]
}
},
"id": "filter-action",
"name": "Filter PR Events",
"type": "n8n-nodes-base.filter",
"typeVersion": 2,
"position": [460, 300]
},
{
"parameters": {
"url": "=https://api.github.com/repos/{{ $json.body.repository.full_name }}/pulls/{{ $json.body.number }}",
"sendHeaders": true,
"headerParameters": {
"parameters": [
{ "name": "Accept", "value": "application/vnd.github.v3.diff" },
{ "name": "Authorization", "value": "Bearer YOUR_GITHUB_TOKEN" }
]
},
"options": { "timeout": 30000 }
},
"id": "fetch-diff",
"name": "Fetch PR Diff",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [680, 300]
},
{
"parameters": {
"url": "http://localhost:11434/api/generate",
"sendBody": true,
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ model: 'llama3:8b', prompt: 'You are an expert code reviewer. Review this pull request diff.\\n\\nAnalyze for:\\n1. Bugs: logic errors, null pointer risks, race conditions\\n2. Security: SQL injection, XSS, hardcoded secrets, command injection\\n3. Performance: N+1 queries, unnecessary allocations\\n4. Readability: unclear naming, missing error handling\\n\\nFor each issue: [SEVERITY: critical/warning/info] Description and fix.\\nIf the code looks good, say so briefly.\\n\\nDiff:\\n' + ($json.data || '').substring(0, 6000), stream: false, options: { temperature: 0.2, num_predict: 2000 } }) }}",
"options": { "timeout": 120000 }
},
"id": "review",
"name": "AI Review (Ollama)",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [900, 300]
},
{
"parameters": {
"url": "=https://api.github.com/repos/{{ $('GitHub PR Webhook').item.json.body.repository.full_name }}/issues/{{ $('GitHub PR Webhook').item.json.body.number }}/comments",
"sendHeaders": true,
"headerParameters": {
"parameters": [
{ "name": "Authorization", "value": "Bearer YOUR_GITHUB_TOKEN" }
]
},
"sendBody": true,
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ body: '## AI Code Review\\n\\n> Automated review by local AI (Ollama). Human review still recommended.\\n\\n' + JSON.parse($json.data).response }) }}"
},
"id": "post-comment",
"name": "Post Review Comment",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [1120, 300]
}
],
"connections": {
"GitHub PR Webhook": {
"main": [[{ "node": "Filter PR Events", "type": "main", "index": 0 }]]
},
"Filter PR Events": {
"main": [[{ "node": "Fetch PR Diff", "type": "main", "index": 0 }]]
},
"Fetch PR Diff": {
"main": [[{ "node": "AI Review (Ollama)", "type": "main", "index": 0 }]]
},
"AI Review (Ollama)": {
"main": [[{ "node": "Post Review Comment", "type": "main", "index": 0 }]]
}
},
"settings": { "executionOrder": "v1" },
"tags": [{ "name": "AI" }, { "name": "Ollama" }, { "name": "DevOps" }, { "name": "Code Review" }]
}
Wrapping Up
Automated code review with n8n + Ollama gives your team instant first-pass PR reviews without sending code to external servers. It won't replace human reviewers, but it catches the obvious issues — security vulnerabilities, missing error handling, style violations — so your senior devs can focus on architecture and design decisions.
The workflow above is a starting point. Customize the prompt with your team's coding standards, add file-type filters, or integrate with Slack notifications for critical findings.
Want 11 Production-Ready AI Workflows?
The Self-Hosted AI Workflow Pack includes code review, email automation, lead scoring, document processing, and 7 more n8n + Ollama templates. One payment, unlimited runs, zero API costs.
Get the Full Pack — $39