Turn incoming chat messages into clean, structured data that your apps can use. The flow captures each message, asks a local AI model for a reply, and formats both the question and the answer as a simple JSON object. It suits teams that need consistent outputs for support chats, intake notes, or internal tools.
When a chat message arrives, the trigger sends it into a basic language model chain powered by Ollama using the llama3.2 model. A strict prompt instructs the model to return only two fields: Prompt and Response. A Set node converts the model's text into a real JSON object and shapes what gets returned to the user. If anything fails, an error branch returns a safe fallback so the user still gets feedback.
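The parse-and-fallback step above can be sketched roughly like this. This is an illustrative approximation of what the Set node and error branch do together, not the workflow's actual node configuration; `shapeReply` and its field names are hypothetical:

```javascript
// Hypothetical sketch: turn the model's raw text into the two-field
// JSON object, falling back safely if the text is not valid JSON.
function shapeReply(userMessage, modelText) {
  try {
    const parsed = JSON.parse(modelText);
    // Keep only the two fields the strict prompt asked for.
    return { Prompt: parsed.Prompt, Response: parsed.Response };
  } catch (err) {
    // Error branch: the user still gets feedback instead of a raw failure.
    return {
      Prompt: userMessage,
      Response: "Sorry, something went wrong. Please try again.",
    };
  }
}

// Usage: a well-formed reply passes through; garbage triggers the fallback.
shapeReply("What are your hours?", '{"Prompt":"What are your hours?","Response":"We are open 9 to 5."}');
shapeReply("What are your hours?", "not valid json");
```

The key design point is that parsing failures never reach the user as errors; every path produces the same two-field shape, which keeps downstream consumers simple.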
Setup is simple if you already run Ollama. Make sure the llama3.2 model is available locally and connect the Ollama credential in n8n. Expect faster handoffs to your systems, fewer formatting mistakes, and cleaner logs. Use it to standardize chat summaries, store Q&A pairs in databases, or feed downstream automations that require JSON.
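Assuming the Ollama CLI is installed, fetching and verifying the model looks like this (a setup sketch, not part of the workflow itself):

```shell
# Download the llama3.2 model so Ollama can serve it locally.
ollama pull llama3.2

# Confirm it appears before pointing the n8n Ollama credential at it.
ollama list
```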