How to Automate Social Media Content with n8n + Ollama (Free Workflow Included)
Creating social media content for multiple platforms is time-consuming. Each platform has its own tone, character limits, and audience expectations. What works on LinkedIn doesn't work on Twitter, and neither works on Reddit.
In this tutorial, you'll build an n8n workflow that takes a single topic and generates optimized content for 4 platforms — Twitter/X, LinkedIn, Reddit, and Instagram — using Ollama for local AI processing. No API keys, no monthly costs, no data leaving your machine.
Why Local AI for Content Generation?
Using ChatGPT or Claude APIs for social media content works, but has drawbacks:
| Factor | Cloud AI APIs | Ollama (Local) |
|---|---|---|
| Cost per post | $0.01–0.05 | $0 (free forever) |
| Monthly cost (daily use) | $10–50+ | $0 |
| Data privacy | Sent to cloud | Stays on your machine |
| Rate limits | Yes | No |
| Works offline | No | Yes |
| Setup complexity | API key + billing | One install command |
For content generation, local AI models like llama3.2 or mistral produce quality that's comparable to cloud APIs — especially for short-form social media content.
Prerequisites
- n8n — Self-hosted (docker run -it --rm -p 5678:5678 n8nio/n8n) or n8n.io cloud
- Ollama — Install from ollama.ai, then ollama pull llama3.2
That's it. No API keys, no accounts, no billing setup.
How the Workflow Works
You enter a topic (e.g., "benefits of self-hosted AI")
│
▼
n8n sends topic to Ollama with platform-specific prompts
│
├── Twitter/X: Concise, punchy, hashtags, 280 chars
├── LinkedIn: Professional, insights, 1200 chars
├── Reddit: Conversational, detailed, community-friendly
└── Instagram: Visual-focused, emoji-rich, 30 hashtags
│
▼
AI Quality Review pass (checks tone, engagement, accuracy)
│
▼
4 ready-to-post pieces of content
The workflow uses two AI passes: one for generation, one for quality review. The review pass catches issues like:
- Content that's too generic or salesy
- Platform tone mismatches (e.g., LinkedIn post that reads like a tweet)
- Missing or excessive hashtags
- Grammar and clarity issues
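The two-pass structure is easy to sketch in code. The prompt texts below are illustrative examples, not the workflow's actual system prompts (those live in the JSON file); a minimal Python sketch:

```python
# Illustrative prompts only — the real workflow ships its own system prompts.
PLATFORM_PROMPTS = {
    "twitter": "Write a punchy post under 280 characters with hashtags about: {topic}",
    "linkedin": "Write a professional post (max 1200 characters) with insights about: {topic}",
    "reddit": "Write a conversational, detailed, community-friendly post about: {topic}",
    "instagram": "Write a visual-focused, emoji-rich caption with up to 30 hashtags about: {topic}",
}

# Second pass: the draft is fed back to the model for review.
REVIEW_PROMPT = (
    "Review this {platform} post for tone, engagement, and accuracy. "
    "Rewrite it if it is too generic, salesy, or off-tone:\n\n{draft}"
)

def build_prompts(topic: str) -> dict:
    """Return one generation prompt per platform for the given topic."""
    return {p: tmpl.format(topic=topic) for p, tmpl in PLATFORM_PROMPTS.items()}
```

Each generation prompt goes to Ollama first; its output is then wrapped in `REVIEW_PROMPT` for the quality pass.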
Step-by-Step Setup
Download the JSON workflow file from GitHub:
# Direct download
curl -LO https://github.com/bonskari/n8n-ollama-social-content/raw/main/workflow.json
# Or clone the repo
git clone https://github.com/bonskari/n8n-ollama-social-content.git
- Open n8n (usually at http://localhost:5678)
- Click Add workflow → Import from file
- Select the downloaded workflow.json
In the workflow, each HTTP Request node points to Ollama's API. By default, it's configured for:
URL: http://localhost:11434/api/generate
Model: llama3.2
If your Ollama is on a different host or you want a different model, update the URL and model name in the "Set Parameters" node at the top of the workflow.
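For reference, each HTTP Request node sends a JSON body to Ollama's `/api/generate` endpoint with the model name, the prompt, and `stream: false` (so the full completion comes back in a single JSON object instead of newline-delimited chunks). A minimal Python sketch of that payload:

```python
import json

def build_payload(prompt: str, model: str = "llama3.2") -> bytes:
    """JSON body for Ollama's /api/generate endpoint.

    stream=False requests the full completion in one JSON object;
    the generated text comes back in the "response" field."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

# POST this body to http://localhost:11434/api/generate
# (or your custom Ollama URL) with Content-Type: application/json.
```

Knowing this shape makes it easy to debug the nodes with curl, or to point them at a different model without touching anything else.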
Click Execute Workflow. Enter your topic in the manual trigger dialog. In about 30–60 seconds, you'll have 4 platform-optimized posts ready to copy and paste.
Example Output
Input topic: "Why developers should try self-hosted AI tools"
Twitter/X Output
🔓 Stop paying for AI APIs when you can run the same models locally.
Self-hosted AI tools give you:
✅ Zero API costs
✅ Full data privacy
✅ No rate limits
✅ Works offline
Try Ollama + n8n — free, open-source, and yours to keep.
#SelfHosted #AI #OpenSource #n8n #Ollama
LinkedIn Output
I've been experimenting with running AI models locally instead of using
cloud APIs, and the results have surprised me.
Here's what changed:
→ Monthly AI costs dropped from $47 to $0
→ Sensitive data never leaves my network
→ No more rate limit errors at 2am
→ Everything works even without internet
The tools that made this possible: Ollama for running models locally,
and n8n for building automation workflows around them.
If you're building AI-powered automation and cost or privacy matters
to you, self-hosted is worth exploring.
What's your experience with local AI models?
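Before posting, it's worth sanity-checking each draft against its platform limits (280 characters for Twitter/X, 1200 for LinkedIn, per the workflow's prompts). This hypothetical helper isn't part of the workflow, just a quick check you could run on the output:

```python
import re

def check_post(text: str, limit: int = 280, need_hashtags: bool = True) -> list:
    """Return a list of problems found in a draft; empty means it passes."""
    problems = []
    if len(text) > limit:
        problems.append(f"too long: {len(text)} chars (limit {limit})")
    if need_hashtags and not re.findall(r"#\w+", text):
        problems.append("no hashtags")
    return problems
```

For example, `check_post(linkedin_draft, limit=1200, need_hashtags=False)` applies the LinkedIn rules instead of the Twitter defaults.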
Customization Tips
- Change the model: Swap llama3.2 for mistral, gemma2, or any model Ollama supports
- Add brand voice: Edit the system prompts to include your brand guidelines, tone, and key messaging
- Add more platforms: Duplicate a generation node and customize the prompt for YouTube Shorts, TikTok, newsletters, etc.
- Schedule it: Replace the manual trigger with a Schedule Trigger (formerly Cron) node to generate content on a schedule
- Connect to posting tools: Add nodes to auto-post via Buffer, Hootsuite, or platform APIs
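If you do switch to a scheduled trigger, the workflow still needs a topic for each run. One simple approach (a hypothetical helper, not part of the workflow) is to rotate through a fixed topic list by date, so every scheduled run gets a deterministic input:

```python
import datetime

# Example topic pool — replace with your own content calendar.
TOPICS = [
    "benefits of self-hosted AI",
    "n8n automation tips",
    "open-source LLM roundup",
]

def topic_for_day(day=None):
    """Pick a topic by calendar date, cycling through TOPICS."""
    day = day or datetime.date.today()
    return TOPICS[day.toordinal() % len(TOPICS)]
```

The same logic fits in an n8n Code node placed between the trigger and the generation nodes.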
Recommended Ollama Models for Content
| Model | Size | Best For | Speed |
|---|---|---|---|
| llama3.2 | 3B | Fast, good quality general content | Fast |
| llama3.1:8b | 8B | Higher quality, still fast enough | Medium |
| mistral | 7B | Strong writing quality, European language support | Medium |
| gemma2:9b | 9B | Google's model, good for factual content | Medium |
For social media content, the 3B–8B range gives the best speed-to-quality ratio. Larger models are better for long-form blog posts.
Want All 11 AI Workflows?
This social media generator is one of 11 production-ready workflows in the full pack. Also includes: blog writer, email auto-responder, lead scoring, document summarizer, meeting notes, competitor monitor, and more.
Get the Full Pack — $39. One-time purchase. 30-day money-back guarantee. Instant delivery.
More Free Workflows
Try these other free n8n + Ollama workflows:
- AI Blog Writer Pipeline — Research, outline, draft, and edit blog posts with local AI
- AI Email Auto-Responder — Classify emails, filter spam, draft replies automatically
- All Free Samples — 3 free workflows + documentation
Troubleshooting
Ollama connection refused
Make sure Ollama is running (ollama serve) and accessible at http://localhost:11434. If n8n runs in Docker, use http://host.docker.internal:11434 instead of localhost.
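The Docker case trips people up because inside a container, `localhost` refers to the container itself, not the host machine running Ollama. A small sketch of the rule (the function name is illustrative):

```python
def ollama_base_url(n8n_in_docker: bool) -> str:
    """Base URL n8n should use to reach Ollama on the host machine.

    Inside a container, 'localhost' is the container itself, so
    Docker's host alias is needed to reach the host's port 11434."""
    host = "host.docker.internal" if n8n_in_docker else "localhost"
    return f"http://{host}:11434"
```

On Linux, `host.docker.internal` may additionally require starting the n8n container with `--add-host=host.docker.internal:host-gateway`.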
Slow generation
Try a smaller model (llama3.2 instead of llama3.1:8b). If you have a GPU, make sure Ollama is using it — check with ollama ps.
Output quality issues
Upgrade to a larger model, or edit the system prompts in the workflow to be more specific about your desired output format and tone.