n8n

How Do You Automate Structured Chat Responses with Ollama?

Turn incoming chat messages into clean, structured data that your apps can use. The flow captures each message, asks a local AI model for a reply, and formats both the question and the answer as a simple JSON object. It suits teams that need consistent outputs for support chats, intake notes, or internal tools.

When a chat message arrives, the trigger sends it into a basic language model chain powered by Ollama using the llama3.2 model. A strict prompt instructs the model to return only two fields: Prompt and Response. A set node converts the model's text into a real object and shapes what gets returned to the user. If anything fails, an error branch sends a safe fallback so the user still gets feedback.
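The chain's contract can be sketched in a few lines of Python. This is illustrative only: the prompt text, function names, and fallback message below are assumptions for the sketch, not the workflow's actual node internals.

```python
import json

# Illustrative strict prompt: demand exactly two JSON fields.
STRICT_PROMPT = (
    "Answer the user's message. Return ONLY a JSON object with exactly "
    'two fields: "Prompt" (the original message) and "Response" (your answer).'
)

# Hypothetical safe fallback, mirroring the error branch.
FALLBACK = {"Prompt": "", "Response": "Sorry, something went wrong. Please try again."}

def to_structured(model_text: str, user_message: str) -> dict:
    """Mimic the set node: turn raw model text into a real object,
    falling back safely if the model did not return valid JSON."""
    try:
        obj = json.loads(model_text)
        return {"Prompt": obj["Prompt"], "Response": obj["Response"]}
    except (json.JSONDecodeError, KeyError, TypeError):
        return {**FALLBACK, "Prompt": user_message}

# Well-formed model output passes straight through:
good = to_structured('{"Prompt": "Hi", "Response": "Hello!"}', "Hi")
# Malformed output takes the error branch instead of crashing:
bad = to_structured("not json at all", "Hi")
```

The key design point is that parsing failures never reach the user as raw errors; they become a predictable fallback object with the same two-field shape.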

Setup is simple if you already run Ollama. Make sure the llama3.2 model is available and connect the Ollama credential in n8n. Expect faster handoffs to your systems, fewer formatting mistakes, and cleaner logs. Use it to standardize chat summaries, store Q&A pairs in databases, or feed downstream automations that require JSON.

What are the key features?

  • On message chat trigger captures each user input instantly
  • Basic LLM Chain enforces a strict prompt for two JSON fields
  • Ollama Model uses the llama3.2 version for local inference
  • JSON to Object mapping turns model text into a usable object
  • Structured Response node controls which fields are returned to the user
  • Error Response branch returns a safe fallback when the model fails
  • Manual mapping allows precise control over field names and values
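To make "precise control over field names" concrete, a strict two-field check might look like the sketch below. The function and constant names are hypothetical; the point is that the mapping accepts exactly the schema the prompt enforces, nothing more and nothing less.

```python
# The only fields the workflow's schema allows (per the prompt above).
EXPECTED_FIELDS = {"Prompt", "Response"}

def is_valid_shape(obj) -> bool:
    """True only if obj is a dict with exactly the two expected
    string fields; extra or missing keys fail the check."""
    return (
        isinstance(obj, dict)
        and set(obj) == EXPECTED_FIELDS
        and all(isinstance(obj[k], str) for k in EXPECTED_FIELDS)
    )

ok = is_valid_shape({"Prompt": "Q?", "Response": "A."})          # passes
extra = is_valid_shape({"Prompt": "Q?", "Response": "A.", "x": 1})  # rejected
```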

What are the benefits?

  • Reduce manual formatting from 30 minutes to 1 minute per chat batch
  • Automate 100 percent of text-to-JSON conversion for chat messages
  • Cut formatting errors by 90 percent with a fixed two-field schema
  • Handle up to 10 times more chat requests without extra staff
  • Keep responses uniform across tools by enforcing the same structure

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need an account with Ollama. See the Tools Required section below for links to create accounts with these services.
  3. Install and run Ollama on your server or local machine, then make sure the llama3.2 model is available.
  4. In n8n, double-click the Ollama Model node, open the 'Credential to connect with' dropdown, click 'Create new credential', then follow the on-screen steps to connect Ollama.
  5. Set the Ollama base URL to where Ollama is running, for example http://localhost:11434 if local, or your server URL if remote.
  6. Open the Basic LLM Chain node and review the prompt. Keep the two fields Prompt and Response if you want consistent JSON output.
  7. Open the JSON to Object node and confirm the manual mapping matches the two fields returned by the model. Adjust field names if you changed the prompt.
  8. Check the Structured Response node to choose which fields are included in the final chat reply.
  9. Start the workflow and send a test message through the chat interface. Confirm the output is valid JSON with Prompt and Response.
  10. Test the error path by temporarily stopping Ollama and sending a message. You should see the Error Response. Restart Ollama after testing.
  11. Fine-tune model behavior by adjusting model settings in the Ollama node, such as temperature, to balance creativity with format control.
  12. Enable the workflow so it stays active. Monitor executions in n8n for any errors and update mappings if your schema changes.
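To sanity-check your base URL and model outside n8n before step 9, the sketch below builds the request that Ollama's /api/generate endpoint expects. The helper name is hypothetical; `"format": "json"` is Ollama's real option for constraining output to valid JSON, which pairs well with the strict prompt.

```python
import json

def build_ollama_request(base_url: str, user_message: str):
    """Build the URL and JSON body for a POST to Ollama's /api/generate.
    'format': 'json' asks Ollama to constrain output to valid JSON."""
    url = f"{base_url.rstrip('/')}/api/generate"
    body = json.dumps({
        "model": "llama3.2",
        "prompt": user_message,
        "format": "json",
        "stream": False,
    }).encode()
    return url, body

url, body = build_ollama_request("http://localhost:11434", "Hello")
# To actually send it (requires Ollama running):
#   import urllib.request
#   req = urllib.request.Request(url, body,
#                                headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```

If the POST succeeds here but the workflow still fails, the problem is in the n8n credential or mapping, not in Ollama itself.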

Tools Required

n8n

$24 / mo or $20 / mo billed annually to use n8n in the cloud. However, the local or self-hosted n8n Community Edition is free.

Ollama

Free tier: $0 (self-hosted local API)

Similar Templates

Join Futurise to access 1,200+ automation templates

Get instant access to ready-made automation workflows for n8n, Make.com, AI agents, and more. Download, customise, and deploy in minutes.