How Do You Automate Ollama Fact Checks for Content QA?

Quickly check long articles for factual errors before they go live. Ideal for content, PR, and marketing teams that need clear feedback on what is true, what is false, and what needs review. Results are grouped and summarized so editors can act fast.

The flow accepts raw text and a reference facts list. It splits the text into sentences, turns each sentence into a claim, and uses a small local AI model from Ollama to judge each claim against the facts. Only flagged claims move forward, then they are grouped and turned into a short report by a second model. You can run it by hand for testing or call it from other n8n workflows.
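The sentence-splitting step can be sketched as a small JavaScript function, similar in spirit to what the workflow's Code node does (the regex and example text here are illustrative assumptions, not the template's exact code):

```javascript
// Split raw article text into trimmed, non-empty sentences.
// A minimal sketch of the Code node's job; robust boundary detection
// (abbreviations, decimals, quoted speech) may need an NLP library.
function splitIntoSentences(text) {
  return text
    .split(/(?<=[.!?])\s+/) // break after ., !, or ? followed by whitespace
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// Example: three sentences become three independent claims.
const claims = splitIntoSentences(
  "Our app launched in 2019. It has 2 million users! Is it free?"
);
```

Each element of `claims` then becomes one item for the Split Out step, so every statement is judged on its own.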

Setup is simple and private because models run on your own machine. Expect faster reviews, fewer errors, and consistent checks across many writers. Install the small fact check model and a compact summarizer model in Ollama, map the facts and text inputs, and you are ready. Use it for blog posts, landing pages, product updates, and social copy where accuracy matters.

What are the key features?

  • Manual test and callable entry point so teams can run checks by hand or from other workflows
  • Code step splits input text into clean sentences for precise claim checks
  • Split Out creates one claim per item to evaluate each statement independently
  • Ollama chat model reviews each claim against the provided facts and returns a clear yes or no
  • Filter removes correct claims and keeps only those that need attention
  • Aggregate compiles all flagged claims into one structured list
  • Second LLM summarizes issues into a short, editor-friendly report
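The per-claim check boils down to a prompt that pairs one claim with the reference facts and asks for a yes/no verdict. A hedged sketch follows; the prompt wording, model name, and request shape are assumptions, not the template's exact configuration (`http://localhost:11434` is Ollama's default local address):

```javascript
// Build a fact-check prompt for one claim (illustrative wording).
function buildFactCheckPrompt(claim, facts) {
  return [
    "You are a fact checker. Reference facts:",
    ...facts.map((f) => `- ${f}`),
    `Claim: "${claim}"`,
    "Is the claim supported by the facts? Answer only YES, NO, or UNSURE.",
  ].join("\n");
}

// The n8n Ollama node sends a request shaped roughly like this:
const body = {
  model: "llama3.2:1b", // assumed small model; use whichever you installed
  messages: [
    {
      role: "user",
      content: buildFactCheckPrompt("The product launched in 2020.", [
        "The product launched in 2019.",
        "It is free for personal use.",
      ]),
    },
  ],
  stream: false,
};
// fetch("http://localhost:11434/api/chat", { method: "POST", body: JSON.stringify(body) })
```

Keeping the answer format constrained to YES/NO/UNSURE makes the downstream Filter step trivial to configure.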

What are the benefits?

  • Reduce manual review time from 30 minutes to 5 minutes per article
  • Improve fact accuracy by flagging risky claims before publish
  • Scale review to handle 5 times more drafts with the same team
  • Keep data private by running models on your own machine
  • Plug into other n8n flows to enforce content quality checks

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. Ollama is free and runs locally, so no account is required. See the Tools Required section below for details.
  3. Install and run Ollama on your machine or server. Make sure the Ollama API is reachable from n8n, usually at a local URL.
  4. In n8n, open the Ollama nodes, click 'Create new credential' in the 'Credential to connect with' dropdown, and follow the on-screen instructions to connect Ollama.
  5. Download the small fact check model and the compact summarizer model in Ollama so both are available before testing.
  6. Open the When Executed by Another Workflow node and confirm it accepts two inputs named facts and text.
  7. Open the Set node and replace any sample facts with your real facts or remove it if you pass facts from another workflow.
  8. Check the Code node to ensure it reads the text field and splits it into sentences. Keep the run once per item setting as provided.
  9. Open the Filter node and confirm it is set to keep only claims marked as incorrect or uncertain by the model.
  10. Click Test workflow to run with sample content. Review the final summary to confirm it lists only problems and is easy to read.
  11. If the model returns empty results, verify that your facts field is populated and that the Ollama service is running and reachable.
  12. For production, trigger this workflow from another n8n flow using the Execute Workflow node and pass the facts and text fields.
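The Filter logic in step 9 can be reasoned about as a small verdict parser: map the model's free-text answer to a status, then keep only claims that are not clearly correct. The parsing rules below are an assumed sketch, not the template's exact expressions:

```javascript
// Map a model answer to a verdict the Filter node can act on.
function parseVerdict(answer) {
  const a = answer.trim().toUpperCase();
  if (a.startsWith("YES")) return "correct";
  if (a.startsWith("NO")) return "incorrect";
  return "uncertain"; // anything else needs human review
}

// Keep only claims that should move on to the summary step.
function flaggedClaims(results) {
  return results.filter((r) => parseVerdict(r.answer) !== "correct");
}

const results = [
  { claim: "Launched in 2019.", answer: "YES" },
  { claim: "Has 5M users.", answer: "No, the facts say 2 million." },
  { claim: "Free forever.", answer: "It depends." },
];
// flaggedClaims(results) keeps only the last two entries
```

Treating any ambiguous answer as "uncertain" errs on the side of flagging, which is usually the right default for a pre-publish check.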

Tools Required

n8n

$24 / mo or $20 / mo billed annually to use n8n in the cloud. However, the local or self-hosted n8n Community Edition is free.

Ollama

Free tier: $0 (self-hosted local API)
