Build an AI Meeting Notes Summarizer with n8n + Ollama
After every meeting, someone has to turn a messy transcript into structured notes with action items. At 15–30 minutes per meeting, a team with 5 meetings per day spends roughly 6–12 hours per week on meeting notes.
In this tutorial, you'll build an n8n workflow that does this automatically using Ollama — local AI, zero API costs, your data stays on your machine.
What the Workflow Does
- Receives a meeting transcript via webhook (paste from Zoom, Teams, Google Meet, or Otter.ai)
- Generates a structured summary with key decisions and discussion points
- Extracts action items with owner and deadline for each
- Drafts a follow-up email ready to send to attendees
- Returns everything as a clean JSON response
Works with any transcript source: Zoom's auto-transcription, Google Meet captions, Otter.ai exports, Microsoft Teams transcripts, or even manually typed notes. If it's text, this workflow can summarize it.
Why Local AI for Meeting Notes?
Meeting transcripts contain some of the most sensitive business data: strategy discussions, financial plans, HR decisions, client negotiations. Sending this to OpenAI or Google's APIs means your internal conversations leave your infrastructure.
| Concern | Cloud AI (GPT-4, Claude API) | Local AI (Ollama) |
|---|---|---|
| Data privacy | Transcripts sent to third-party | Never leaves your server |
| Cost per meeting | $0.05–0.50 (depends on length) | $0 |
| GDPR compliance | Requires a DPA with the provider | Simpler — data never leaves your infrastructure |
| Rate limits | Yes — problematic at scale | None |
| Works offline | No | Yes |
For meeting summarization, a local 8B-parameter model is more than capable. The task is well-defined: extract key information from text and organize it. No creative reasoning needed.
Prerequisites
# Install Ollama + model
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3:8b
# Run n8n
docker run -d --name n8n -p 5678:5678 \
--add-host=host.docker.internal:host-gateway \
-v n8n_data:/home/node/.n8n \
n8nio/n8n
Free Workflow: Meeting Notes Summarizer
This workflow processes a meeting transcript through three AI stages, all running locally via Ollama:
1. Summarize — the transcript is sent to Ollama with a prompt that extracts the meeting purpose, key decisions, discussion highlights, and open questions. Output is structured markdown.
2. Extract action items — a second Ollama call identifies what needs to be done, who is responsible, and a suggested deadline for each item. Output is JSON for easy integration.
3. Draft email — using the summary and action items, a third call drafts a professional follow-up email ready to send to meeting attendees.
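To make the first stage concrete, here is the request body the Summarize node sends to Ollama's /api/generate endpoint, sketched in Python. (`build_summary_payload` is a name chosen here for illustration — in the workflow the same JSON is built inline with `JSON.stringify`.)

```python
import json

def build_summary_payload(title, attendees, transcript, max_chars=4000):
    """Build the request body the Summarize stage posts to /api/generate."""
    prompt = (
        "You are an expert meeting summarizer. Analyze this meeting transcript "
        "and produce a structured summary.\n\n"
        f"Meeting: {title or 'Untitled Meeting'}\n"
        f"Attendees: {attendees or 'Not specified'}\n\n"
        f"Transcript:\n{transcript[:max_chars]}"  # truncate to fit the context window
    )
    return {
        "model": "llama3:8b",
        "prompt": prompt,
        "stream": False,  # one complete JSON response instead of a token stream
        "options": {"temperature": 0.3, "num_predict": 1000},
    }

payload = build_summary_payload("Q2 Planning", "Sarah, Mike", "Sarah: let's start...")
body = json.dumps(payload)  # this string is what the HTTP Request node sends
```

Setting `stream: false` matters: with streaming on, Ollama returns one JSON object per token, which the HTTP Request node cannot parse as a single response.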
The Workflow JSON
{
"name": "AI Meeting Notes Summarizer (Ollama)",
"nodes": [
{
"parameters": {
"httpMethod": "POST",
"path": "summarize-meeting",
"responseMode": "responseNode",
"options": {}
},
"id": "webhook",
"name": "Receive Transcript",
"type": "n8n-nodes-base.webhook",
"typeVersion": 2,
"position": [240, 300],
"webhookId": "summarize-meeting"
},
{
"parameters": {
"url": "http://localhost:11434/api/generate",
"sendBody": true,
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ model: 'llama3:8b', prompt: 'You are an expert meeting summarizer. Analyze this meeting transcript and produce a structured summary.\\n\\nMeeting: ' + ($json.body.title || 'Untitled Meeting') + '\\nAttendees: ' + ($json.body.attendees || 'Not specified') + '\\n\\nTranscript:\\n' + ($json.body.transcript || '').substring(0, 4000) + '\\n\\nProduce a summary in this EXACT format:\\n\\n## Summary\\n[2-3 sentence overview of the meeting]\\n\\n## Key Decisions\\n- [Decision 1]\\n- [Decision 2]\\n\\n## Discussion Highlights\\n- [Point 1]\\n- [Point 2]\\n\\n## Open Questions\\n- [Question 1]\\n\\nBe concise. Focus on what matters.', stream: false, options: { temperature: 0.3, num_predict: 1000 } }) }}",
"options": { "timeout": 120000 }
},
"id": "summarize",
"name": "Summarize Meeting (Ollama)",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [460, 200]
},
{
"parameters": {
"url": "http://localhost:11434/api/generate",
"sendBody": true,
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ model: 'llama3:8b', prompt: 'Extract ALL action items from this meeting transcript. For each action item, identify who is responsible and suggest a deadline.\\n\\nTranscript:\\n' + ($json.body.transcript || '').substring(0, 4000) + '\\n\\nRespond in this EXACT JSON format, nothing else:\\n{\"action_items\": [{\"task\": \"description\", \"owner\": \"person name or Unknown\", \"deadline\": \"suggested date or ASAP\", \"priority\": \"high|medium|low\"}]}', stream: false, options: { temperature: 0.2, num_predict: 800 } }) }}",
"options": { "timeout": 120000 }
},
"id": "extract-actions",
"name": "Extract Action Items (Ollama)",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [460, 420]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "summary",
"name": "summary",
"value": "={{ JSON.parse($('Summarize Meeting (Ollama)').item.json.data).response }}",
"type": "string"
},
{
"id": "action_items",
"name": "action_items",
"value": "={{ JSON.parse($('Extract Action Items (Ollama)').item.json.data).response }}",
"type": "string"
},
{
"id": "title",
"name": "title",
"value": "={{ $('Receive Transcript').item.json.body.title || 'Meeting Summary' }}",
"type": "string"
}
]
}
},
"id": "combine",
"name": "Combine Results",
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [680, 300]
},
{
"parameters": {
"url": "http://localhost:11434/api/generate",
"sendBody": true,
"specifyBody": "json",
"jsonBody": "={{ JSON.stringify({ model: 'llama3:8b', prompt: 'Write a professional follow-up email for this meeting.\\n\\nMeeting: ' + $json.title + '\\nAttendees: ' + ($('Receive Transcript').item.json.body.attendees || 'team') + '\\n\\nSummary:\\n' + $json.summary + '\\n\\nAction Items:\\n' + $json.action_items + '\\n\\nWrite a concise, professional email that:\\n1. Thanks attendees\\n2. Lists key decisions\\n3. Lists action items with owners\\n4. Mentions next steps\\n\\nWrite ONLY the email body. Start with \"Hi everyone,\"', stream: false, options: { temperature: 0.4, num_predict: 800 } }) }}",
"options": { "timeout": 120000 }
},
"id": "draft-email",
"name": "Draft Follow-up Email (Ollama)",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [900, 300]
},
{
"parameters": {
"respondWith": "json",
"responseBody": "={{ JSON.stringify({ title: $('Combine Results').item.json.title, summary: $('Combine Results').item.json.summary, action_items: (() => { try { return JSON.parse($('Combine Results').item.json.action_items) } catch(e) { return { raw: $('Combine Results').item.json.action_items } } })(), follow_up_email: JSON.parse($json.data).response }) }}",
"options": {}
},
"id": "respond",
"name": "Return Results",
"type": "n8n-nodes-base.respondToWebhook",
"typeVersion": 1.1,
"position": [1120, 300]
}
],
"connections": {
"Receive Transcript": {
"main": [[
{ "node": "Summarize Meeting (Ollama)", "type": "main", "index": 0 },
{ "node": "Extract Action Items (Ollama)", "type": "main", "index": 0 }
]]
},
"Summarize Meeting (Ollama)": {
"main": [[{ "node": "Combine Results", "type": "main", "index": 0 }]]
},
"Extract Action Items (Ollama)": {
"main": [[{ "node": "Combine Results", "type": "main", "index": 0 }]]
},
"Combine Results": {
"main": [[{ "node": "Draft Follow-up Email (Ollama)", "type": "main", "index": 0 }]]
},
"Draft Follow-up Email (Ollama)": {
"main": [[{ "node": "Return Results", "type": "main", "index": 0 }]]
}
},
"settings": { "executionOrder": "v1" },
"tags": [{ "name": "AI" }, { "name": "Ollama" }, { "name": "Productivity" }, { "name": "Meeting Notes" }]
}
Testing the Workflow
curl -X POST http://localhost:5678/webhook/summarize-meeting \
-H "Content-Type: application/json" \
-d '{
"title": "Q2 Product Planning",
"attendees": "Sarah (PM), Mike (Engineering), Lisa (Design)",
"transcript": "Sarah: Okay, lets kick off Q2 planning. Mike, wheres the API rewrite at? Mike: Were about 70% done. Should be ready for internal testing by April 10th. The main blocker is the auth migration - we need to decide if we go OAuth2 or stick with API keys. Sarah: Lets go OAuth2. Its what enterprise customers keep asking for. Lisa, can you have the new dashboard mockups ready by April 5th? Lisa: Yes, I can do that. But I need the API spec from Mike first to know what data endpoints are available. Mike: Ill send that over by end of this week. Sarah: Perfect. Also, we need to discuss the pricing change. Were moving from $29 to $39 for the pro plan starting May 1st. Existing customers keep their current price for 6 months. Lisa: Should I update the pricing page? Sarah: Yes, but not until April 25th. We want to announce it in the April newsletter first. Mike: What about the mobile app? Sarah: Pushed to Q3. We dont have the bandwidth with the API rewrite. Lets focus and ship what we have."
}'
Expected response (abbreviated):
{
"title": "Q2 Product Planning",
"summary": "## Summary\nThe team discussed Q2 priorities...\n\n## Key Decisions\n- Moving to OAuth2 for authentication\n- Price increase from $29 to $39 (May 1st)\n- Mobile app pushed to Q3\n...",
"action_items": {
"action_items": [
{
"task": "Send API spec to Lisa",
"owner": "Mike",
"deadline": "End of this week",
"priority": "high"
},
{
"task": "Complete dashboard mockups",
"owner": "Lisa",
"deadline": "April 5th",
"priority": "high"
},
{
"task": "Update pricing page",
"owner": "Lisa",
"deadline": "April 25th",
"priority": "medium"
}
]
},
"follow_up_email": "Hi everyone,\n\nThanks for a productive Q2 planning session..."
}
How the Prompts Work
Summarization Prompt Strategy
The summary prompt uses a fixed output template (## Summary, ## Key Decisions, etc.). This forces the model to organize information into consistent sections rather than free-form text. A temperature of 0.3 keeps the output focused and consistent across runs.
Action Item Extraction
Action items require the most precise output. The prompt asks for JSON with specific fields (task, owner, deadline, priority). At temperature: 0.2, the output is near-deterministic — critical when downstream systems need to parse it.
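Even at low temperature, local models occasionally wrap the JSON in commentary ("Sure! Here are the action items: {...}"). The Return Results node guards against this with a try/catch; the same defensive parsing can be sketched in Python (`extract_action_items` is an illustrative name, not part of the workflow):

```python
import json

def extract_action_items(model_output: str):
    """Parse the model's action-item JSON, tolerating extra prose around it.

    Mirrors the try/catch fallback in the Return Results node: if parsing
    fails, return the raw text instead of failing the whole workflow.
    """
    start = model_output.find("{")
    end = model_output.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(model_output[start:end + 1])
        except json.JSONDecodeError:
            pass  # fall through to the raw-text fallback
    return {"raw": model_output}

# The outermost {...} span is extracted before parsing, so leading chatter is ignored.
out = extract_action_items('Sure! {"action_items": [{"task": "Send spec", "owner": "Mike"}]}')
```

Returning `{"raw": ...}` on failure means a malformed response still reaches the caller, who can retry or read it manually.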
Why Three Separate Calls?
Splitting into three focused prompts (summarize, extract actions, draft email) produces better results than one massive "do everything" prompt. Each call has a single responsibility, making outputs more reliable and easier to debug.
Performance tip: The summarize and action-item calls run in parallel (both connect directly from the webhook trigger). Only the email draft runs sequentially because it needs both outputs. This cuts total processing time by 30–40%.
Integrating with Your Meeting Tools
Zoom
Enable Zoom's cloud recording transcription, then subscribe to Zoom's recording-completed webhook event (via a second n8n webhook) to fetch the transcript automatically and pipe it to this workflow.
Google Meet
Google Meet saves transcripts to Google Drive (as Google Docs). Set up a Google Drive trigger in n8n that fires when a new file lands in your Meet recordings folder, export it as plain text, and POST it to the webhook.
Otter.ai
Otter.ai can forward finished transcripts through its integrations (e.g., via Zapier or its API). Point the integration at this workflow's webhook URL and meeting notes are processed minutes after the meeting ends.
Manual Input
No transcription tool? Copy-paste notes from any source into a simple form that POSTs to the webhook. Hand-typed notes work too — the model handles informal notes as well as verbatim transcripts.
Production Tips
- Long transcripts: The workflow truncates to 4000 characters. For hour-long meetings, add a pre-processing step that splits the transcript into chunks and summarizes each, then combines.
- Multiple languages: Llama 3 is strongest in English but handles Spanish, French, German, and other major European languages reasonably well. The same workflow works for multilingual teams, though summary quality may vary by language.
- Storing results: Add a Google Sheets or PostgreSQL node after the response to build a searchable archive of all meeting summaries.
- Slack integration: Add a Slack node that posts the summary + action items to your team channel immediately after processing.
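The long-transcript pre-processing step can be sketched as a chunking function (a sketch only — `chunk_size` and `overlap` are illustrative defaults; each chunk would be summarized separately and the partial summaries combined in a final call):

```python
def chunk_transcript(transcript: str, chunk_size: int = 4000, overlap: int = 200):
    """Split a long transcript into overlapping chunks that fit the 4000-char
    window, preferring to break at sentence boundaries."""
    chunks = []
    start = 0
    while start < len(transcript):
        end = min(start + chunk_size, len(transcript))
        if end < len(transcript):
            # back up to the last sentence break inside the chunk, if any
            cut = transcript.rfind(". ", start, end)
            if cut > start:
                end = cut + 1
        chunks.append(transcript[start:end])
        if end >= len(transcript):
            break
        start = max(end - overlap, start + 1)  # overlap preserves context across chunks
    return chunks
```

The overlap matters: without it, an action item assigned right at a chunk boundary ("Mike will send the spec" split from "by Friday") can be lost or misattributed.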
Want the Advanced Version?
The Self-Hosted AI Workflow Pack includes a production meeting summarizer with Slack integration, Google Calendar linking, action item tracking, and 10 more AI workflows — all running locally with Ollama.
Get All 11 Workflows — $39. One-time purchase. No subscriptions. 30-day money-back guarantee.