How to Automate Social Media Content with n8n + Ollama (Free Workflow Included)

Published March 24, 2026 · 8 min read · Get the free workflow →

Creating social media content for multiple platforms is time-consuming. Each platform has its own tone, character limits, and audience expectations. What works on LinkedIn doesn't work on Twitter, and neither works on Reddit.

In this tutorial, you'll build an n8n workflow that takes a single topic and generates optimized content for 4 platforms — Twitter/X, LinkedIn, Reddit, and Instagram — using Ollama for local AI processing. No API keys, no monthly costs, no data leaving your machine.

What you'll build: A one-click workflow that generates platform-specific social media posts with AI quality review. Total setup time: ~10 minutes.

Why Local AI for Content Generation?

Using ChatGPT or Claude APIs for social media content works, but has drawbacks:

Factor                     Cloud AI APIs        Ollama (Local)
Cost per post              $0.01–0.05           $0 (free forever)
Monthly cost (daily use)   $10–50+              $0
Data privacy               Sent to cloud        Stays on your machine
Rate limits                Yes                  No
Works offline              No                   Yes
Setup complexity           API key + billing    One install command

For content generation, local AI models like llama3.2 or mistral produce quality that's comparable to cloud APIs — especially for short-form social media content.

Prerequisites

You only need two things installed:

  1. n8n running locally (usually at http://localhost:5678)
  2. Ollama installed, with a model pulled (e.g. ollama pull llama3.2)

That's it. No API keys, no accounts, no billing setup.

How the Workflow Works

You enter a topic (e.g., "benefits of self-hosted AI")
    │
    ▼
n8n sends topic to Ollama with platform-specific prompts
    │
    ├── Twitter/X: Concise, punchy, hashtags, 280 chars
    ├── LinkedIn: Professional, insights, 1200 chars
    ├── Reddit: Conversational, detailed, community-friendly
    └── Instagram: Visual-focused, emoji-rich, 30 hashtags
    │
    ▼
AI Quality Review pass (checks tone, engagement, accuracy)
    │
    ▼
4 ready-to-post pieces of content

The workflow uses two AI passes: one for generation, one for quality review. The review pass catches issues like off-platform tone, posts that blow past character limits, and weak engagement hooks.
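The two-pass structure can be sketched in a few lines of Python. This is an illustration of the pattern only — the prompt wording and the two_pass helper below are my own, not the workflow's actual n8n node prompts:

```python
# Pass 1: one generation prompt per platform. Pass 2: a review prompt over
# each draft. Prompt text here is illustrative, not the workflow's.

PLATFORM_PROMPTS = {
    "twitter":   "Write a punchy post under 280 characters, with hashtags, about: {topic}",
    "linkedin":  "Write a professional post under 1200 characters, with insights, about: {topic}",
    "reddit":    "Write a conversational, detailed, community-friendly post about: {topic}",
    "instagram": "Write a visual-focused, emoji-rich caption with up to 30 hashtags about: {topic}",
}

REVIEW_PROMPT = ("Review this {platform} post for tone, engagement, and accuracy, "
                 "and return an improved version:\n\n{draft}")

def two_pass(topic: str, llm) -> dict:
    """Run generate-then-review for every platform.
    llm is any callable prompt -> text (e.g. a wrapper around Ollama)."""
    results = {}
    for platform, template in PLATFORM_PROMPTS.items():
        draft = llm(template.format(topic=topic))          # pass 1: generate
        results[platform] = llm(                           # pass 2: review
            REVIEW_PROMPT.format(platform=platform, draft=draft))
    return results
```

Keeping the model behind a plain callable means you can swap llama3.2 for any other Ollama model without touching the loop.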

Step-by-Step Setup

STEP 1: Download the Workflow

Download the JSON workflow file from GitHub:

# Direct download (-L follows GitHub's redirect to the raw file)
curl -LO https://github.com/bonskari/n8n-ollama-social-content/raw/main/workflow.json

# Or clone the repo
git clone https://github.com/bonskari/n8n-ollama-social-content.git
STEP 2: Import into n8n
  1. Open n8n (usually at http://localhost:5678)
  2. Click Add workflow → Import from file
  3. Select the downloaded workflow.json
STEP 3: Configure Ollama Connection

In the workflow, each HTTP Request node points to Ollama's API. By default, it's configured for:

URL: http://localhost:11434/api/generate
Model: llama3.2

If your Ollama is on a different host or you want a different model, update the URL and model name in the "Set Parameters" node at the top of the workflow.
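For reference, here is a small Python helper that builds the same URL and JSON body the workflow's HTTP Request nodes send to Ollama. The build_payload name is mine; the defaults mirror the values above:

```python
import json

def build_payload(prompt: str,
                  model: str = "llama3.2",
                  host: str = "http://localhost:11434"):
    """Return (url, body) matching what the workflow POSTs to Ollama:
    {host}/api/generate with a JSON body naming the model.
    stream=False asks Ollama for one complete JSON response
    instead of a stream of chunks."""
    url = f"{host.rstrip('/')}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return url, body
```

If you change the host or model in the "Set Parameters" node, the same two overrides apply here.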

STEP 4: Run It

Click Execute Workflow. Enter your topic in the manual trigger dialog. In about 30–60 seconds, you'll have 4 platform-optimized posts ready to copy and paste.

Example Output

Input topic: "Why developers should try self-hosted AI tools"

Twitter/X Output

🔓 Stop paying for AI APIs when you can run the same models locally.

Self-hosted AI tools give you:
✅ Zero API costs
✅ Full data privacy
✅ No rate limits
✅ Works offline

Try Ollama + n8n — free, open-source, and yours to keep.

#SelfHosted #AI #OpenSource #n8n #Ollama

LinkedIn Output

I've been experimenting with running AI models locally instead of using
cloud APIs, and the results have surprised me.

Here's what changed:

→ Monthly AI costs dropped from $47 to $0
→ Sensitive data never leaves my network
→ No more rate limit errors at 2am
→ Everything works even without internet

The tools that made this possible: Ollama for running models locally,
and n8n for building automation workflows around them.

If you're building AI-powered automation and cost or privacy matters
to you, self-hosted is worth exploring.

What's your experience with local AI models?

Customization Tips

Recommended Ollama Models for Content

Model         Size   Best For                                            Speed
llama3.2      3B     Fast, good-quality general content                  Fast
llama3.1:8b   8B     Higher quality, still fast enough                   Medium
mistral       7B     Strong writing quality, European language support   Medium
gemma2:9b     9B     Google's model, good for factual content            Medium
For social media content, the 3B–8B range gives the best speed-to-quality ratio. Larger models are better for long-form blog posts.

Want All 11 AI Workflows?

This social media generator is one of 11 production-ready workflows in the full pack. Also includes: blog writer, email auto-responder, lead scoring, document summarizer, meeting notes, competitor monitor, and more.

Get the Full Pack — $39

One-time purchase. 30-day money-back guarantee. Instant delivery.

Troubleshooting

Ollama connection refused

Make sure Ollama is running (ollama serve) and accessible at http://localhost:11434. If n8n runs in Docker, use http://host.docker.internal:11434 instead of localhost.
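That Docker rule can be captured in a tiny helper (the function name and auto-detect are my own sketch, not part of the workflow). Note that on Linux, host.docker.internal only resolves if you start the container with --add-host=host.docker.internal:host-gateway:

```python
import os

def ollama_base_url(in_docker: bool, port: int = 11434) -> str:
    """From inside a container, 'localhost' is the container itself,
    so Ollama on the host must be reached via host.docker.internal."""
    host = "host.docker.internal" if in_docker else "localhost"
    return f"http://{host}:{port}"

# Auto-detect: /.dockerenv exists inside most Docker containers.
OLLAMA_URL = ollama_base_url(in_docker=os.path.exists("/.dockerenv"))
```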

Slow generation

Try a smaller model (llama3.2 instead of llama3.1:8b). If you have a GPU, make sure Ollama is using it — check with ollama ps.

Output quality issues

Upgrade to a larger model, or edit the system prompts in the workflow to be more specific about your desired output format and tone.