Turn your chat inbox into a smart help desk. Incoming messages are captured and answered by AI, so customers get fast, clear replies. Ideal for teams that want quick support without complex tooling.
A chat message starts the flow. The input can go to a simple LLM Chain that uses a local Ollama DeepSeek model with a large context window. An AI Agent is also available with a fixed system message and window memory, powered by the DeepSeek OpenAI-compatible API. Two HTTP Request nodes show direct calls to the DeepSeek endpoint using JSON and raw bodies. You can switch between local and cloud models to balance speed, privacy, and cost.
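To make the HTTP node setup concrete, here is a minimal sketch of the request those nodes send to DeepSeek's OpenAI-compatible chat completions endpoint. The endpoint URL, model name, and system prompt below are illustrative assumptions; check your DeepSeek account documentation and mirror these values in the node's URL, headers, and JSON body fields.

```python
import json

# Assumed OpenAI-compatible endpoint; verify against DeepSeek's docs.
DEEPSEEK_URL = "https://api.deepseek.com/chat/completions"

def build_request(user_message: str, api_key: str) -> tuple[dict, dict]:
    """Return (headers, JSON body) for one chat completion call,
    matching what the workflow's HTTP Request node would send."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "deepseek-chat",  # illustrative model name
        "messages": [
            # Hypothetical system prompt, standing in for the agent's fixed one.
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }
    return headers, body

headers, body = build_request("Where is my order?", "sk-example")
print(json.dumps(body, indent=2))
```

The same body works for the node's "raw" mode: paste the serialized JSON string instead of building the fields individually.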
You will need a DeepSeek API key or a running Ollama server with the deepseek-r1 model. Set credentials in n8n, choose the model in each node, and test a message in the chat UI. Expect faster replies, lower costs for common questions, and more consistent answers because the memory keeps context. Use it for FAQs, triage before handoff, or after-hours self-service.
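The consistency mentioned above comes from the window memory: only the most recent exchanges are replayed as context on each call, so the prompt carries relevant history without growing unbounded. This is a minimal sketch of that idea, not n8n's implementation; the window size and message shape are assumptions.

```python
from collections import deque

class WindowMemory:
    """Keep only the last k user/assistant exchanges as chat context."""

    def __init__(self, k: int = 5):
        # Each exchange contributes two messages (user + assistant).
        self.turns = deque(maxlen=2 * k)

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def context(self, system_message: str) -> list[dict]:
        """Messages for the next request: fixed system prompt plus the window."""
        return [{"role": "system", "content": system_message}, *self.turns]

mem = WindowMemory(k=2)
for i in range(4):
    mem.add("user", f"question {i}")
    mem.add("assistant", f"answer {i}")

# Only the last 2 exchanges (4 messages) survive, plus the system message.
ctx = mem.context("You are a helpful support agent.")
print(len(ctx))  # -> 5
```

Keeping the system message outside the window is the design choice that makes answers consistent: the instructions never scroll out of context, while stale chat history does.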