How to Automate Ollama Chat Request Parsing?

Turn live chat messages into clean structured data. The setup listens to each message and extracts key details like name, surname, and communication type. It works well for support intake and chat lead capture where clear fields speed up response and routing.

A chat event starts the flow when a message arrives. The message goes into a basic LLM chain that asks the model to fill a JSON schema and uses the current date for context. The same Ollama model powers both the chain and an auto fixer. A structured output parser checks the JSON against a manual schema. If the check fails, the auto fixer asks the model to correct the format and try again. A final step pulls the output JSON so it is ready for the next system. An error path keeps failures from breaking the run and makes testing easier.
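The chain, parser, and auto-fixer described above reduce to a validate-and-retry loop. This is a minimal sketch of that logic with hypothetical helper names; the real n8n nodes wrap the same behavior and call your Ollama model for both the first pass and the fix attempts:

```python
import json

# Fields the manual schema requires (names taken from this template's description).
SCHEMA_FIELDS = {"name", "surname", "communication_type"}

def parse_structured(raw):
    """Structured output parser: reject anything missing a required field."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    missing = SCHEMA_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data

def extract_with_autofix(message, call_model, max_retries=2):
    """Ask the model for schema-shaped JSON; on failure, ask it to fix itself."""
    raw = call_model(
        "Extract name, surname and communication_type from the chat "
        f"message as a JSON object.\nMessage: {message}"
    )
    for _ in range(max_retries):
        try:
            return parse_structured(raw)
        except ValueError as err:  # JSONDecodeError is a ValueError subclass
            # Auto-fixing parser: feed the validation error back and retry.
            raw = call_model(
                f"Fix this output so it matches the schema. Error: {err}\n"
                f"Output: {raw}"
            )
    return parse_structured(raw)  # final attempt; raises if still invalid
```

In production `call_model` would hit your Ollama server; passing it in as a function makes the retry loop easy to test with canned responses.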

Set up a running Ollama server with the mistral-nemo model and keep the temperature low for steady, repeatable JSON. Expect less manual editing and faster triage because the data arrives as a tidy object. Use it for support request intake, chat qualification, and routing by communication preference. Teams can process more chats with the same staff and keep data consistent across tools.
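For reference, this is roughly the request body the Ollama chat endpoint accepts (the n8n Ollama Chat Model node builds it for you; the exact values here are assumptions). Setting `"format": "json"` plus a low temperature pushes the model toward stable, parseable output, and `keep_alive` holds the model in memory between chats:

```python
import json

# Sketch of an Ollama /api/chat request body (values are assumptions).
request_body = {
    "model": "mistral-nemo:latest",
    "messages": [
        {"role": "user",
         "content": "Extract name, surname and communication type as JSON."}
    ],
    "format": "json",   # ask Ollama to constrain output to valid JSON
    "stream": False,
    "keep_alive": "5m",  # keep the model loaded between messages
    "options": {"temperature": 0.1},
}
payload = json.dumps(request_body)
```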

What are the key features?

  • Chat event trigger starts the flow whenever a new message is received.
  • Basic LLM chain prompts the model to extract fields into a JSON schema and includes the current date.
  • Ollama chat model configured with low temperature, memory lock, and keep alive for steady output.
  • Structured output parser validates the response against a manual schema for name, surname, and communication type.
  • Auto fixing output parser retries with a correction prompt when the JSON does not match the schema.
  • Extract JSON step outputs the cleaned JSON field for easy connection to downstream systems.
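The manual schema in the Structured Output Parser might look like the following (field names are taken from this template's description; descriptions and types are illustrative):

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "surname": { "type": "string" },
    "communication_type": {
      "type": "string",
      "description": "How the person prefers to be contacted, e.g. email or phone"
    }
  },
  "required": ["name", "surname", "communication_type"]
}
```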

What are the benefits?

  • Reduce manual review from 5 minutes to 30 seconds per chat
  • Improve data accuracy by 80 percent by enforcing a JSON schema
  • Handle 3 times more chat volume with the same team
  • Eliminate 90 percent of formatting errors with the auto fix loop
  • Unify chat data into one consistent JSON format for easy handoff

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need a working Ollama setup. See the Tools Required section below for links and pricing details.
  3. Prepare Ollama: Install Ollama on your server or local machine, run the service, and pull the model mistral-nemo:latest. Confirm the API is reachable at the base URL you plan to use.
  4. Create the Ollama credential in n8n Cloud: Open the Ollama Chat Model node, choose Credential to connect with, click Create new credential, enter your Ollama base URL, add any token if your server requires one, and save. Give the credential a clear name.
  5. Open the When chat message received trigger and make sure it is enabled so new chat messages start the flow.
  6. In the Basic LLM Chain node, confirm the Prompt Source uses the user message. Keep the provided instruction that requests a JSON object and uses the current date for context.
  7. Review the Structured Output Parser node and verify the manual schema fields match what you need, such as name, surname, and communication type.
  8. Check the Auto fixing Output Parser node. Leave the default correction prompt in place so invalid JSON is retried automatically using the same Ollama model.
  9. Open the Extract JSON Output node and confirm it reads the output path {{ $json.output }} so downstream steps receive clean JSON.
  10. Enable the workflow, send a test chat message with a name and request, and review the execution data. If parsing fails, lower the temperature further or simplify the schema descriptions.
  11. If you switch data sources later, update the Prompt Source setting in the Basic LLM Chain so the right message is analyzed.

Tools Required

n8n

$24 / mo, or $20 / mo billed annually, to use n8n Cloud. However, the local or self-hosted n8n Community Edition is free.

Ollama

Sign up

Free tier: $0 (self-hosted local API)
