
How to Connect DeepSeek Chat for Customer Support?

Turn your chat inbox into a smart help desk. Messages are captured and answered by AI so customers get fast, clear replies. Ideal for teams that want fast support without complex tools.

A chat message starts the flow. The input can go to a basic LLM Chain that uses a local Ollama DeepSeek model with a large context window, or to an AI Agent with a fixed system message and a window memory, powered by the DeepSeek OpenAI-compatible API. Two HTTP Request nodes show direct calls to the DeepSeek endpoint, one with a JSON body and one with a raw body. You can switch between local and cloud models to balance speed, privacy, and cost.

You will need a DeepSeek API key or a running Ollama server with the deepseek-r1 model. Set credentials in n8n, choose the model in each node, and test a message in the chat UI. Expect faster replies, lower costs for common questions, and more consistent answers because the window memory keeps recent context. Use it for FAQs, triage before handoff, or after-hours self-service.
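As a quick sanity check outside n8n, the cloud path can be exercised with a short script against DeepSeek's OpenAI-compatible chat-completions endpoint. This is a minimal sketch: the payload follows the standard chat-completions format, and the `DEEPSEEK_API_KEY` environment variable is an assumption for where your key lives.

```python
import json
import os
import urllib.request

# Build a chat-completions request against DeepSeek's
# OpenAI-compatible endpoint (the same shape the n8n nodes send).
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "system", "content": "You are a helpful support agent."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
    "temperature": 0.6,
}

req = urllib.request.Request(
    "https://api.deepseek.com/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Same header-based auth as the workflow's HTTP Request nodes.
        "Authorization": f"Bearer {os.environ.get('DEEPSEEK_API_KEY', '')}",
    },
    method="POST",
)

# Uncomment to send the request once your key is set:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

If this script returns a reply but the workflow does not, the problem is in the n8n credential configuration rather than the API key itself.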

What are the key features?

  • Chat trigger listens for new messages and starts the flow.
  • Basic LLM Chain with a clear system message for stable tone.
  • Ollama DeepSeek model with a 16,384-token context window and 0.6 temperature for local replies.
  • Conversational Agent with a window memory to keep recent context.
  • OpenAI-compatible DeepSeek node for cloud reasoning responses.
  • HTTP Request node with JSON body to call DeepSeek chat completions.
  • HTTP Request node with raw body to test custom payloads and headers.
  • Header-based auth for API calls and easy model switching per node.
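The local path can be sketched the same way. The request below mirrors the workflow's Ollama settings, the deepseek-r1 model with a 16,384-token context window and temperature 0.6, against Ollama's standard `/api/chat` endpoint on the default port; the example user message is illustrative.

```python
import json
import urllib.request

# Chat request to a local Ollama server (default port 11434),
# mirroring the workflow's local-model settings.
payload = {
    "model": "deepseek-r1:14b",
    "messages": [{"role": "user", "content": "Where is my order?"}],
    "stream": False,  # return one complete reply instead of a token stream
    "options": {"num_ctx": 16384, "temperature": 0.6},
}

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once Ollama is running and the model is pulled:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```

Because this call never leaves your machine, it costs nothing per token, which is what makes routing simple chats to the local model cheaper than the cloud endpoint.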

What are the benefits?

  • Reduce first reply time from 5 minutes to under 30 seconds
  • Automate up to 70% of common questions with context memory
  • Lower API spend by routing simple chats to a local model
  • Support more chats at once by offloading routine answers
  • Connect cloud and local AI in one place for flexible control

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need accounts with DeepSeek and Ollama. See the Tools Required section below for links to create accounts with these services.
  3. Create a DeepSeek API key in your DeepSeek account and keep it safe.
  4. In the n8n credentials manager, create a new OpenAI API credential named DeepSeek. Set the base URL to https://api.deepseek.com or https://api.deepseek.com/v1 and paste your API key. Save and test.
  5. In the credentials manager, create an HTTP Header Auth credential for DeepSeek. Add a header named Authorization with the value Bearer YOUR_API_KEY. Save and test.
  6. Install and run Ollama on your machine or server. Download the deepseek-r1 model so it is available locally.
  7. In the n8n credentials manager, create an Ollama credential pointing to http://127.0.0.1:11434. Save and test.
  8. Open the Ollama DeepSeek node in the workflow. Select your Ollama credential, choose model deepseek-r1:14b, keep format default, and confirm temperature 0.6.
  9. Open the DeepSeek node (OpenAI compatible). Select the DeepSeek OpenAI credential you created and adjust the system message as needed.
  10. Open the DeepSeek JSON Body and DeepSeek Raw Body nodes. Select the HTTP Header Auth credential. Confirm the URL is https://api.deepseek.com/chat/completions and the model fields match your target model.
  11. Start the workflow in n8n and open the chat interface. Send a short message and confirm you receive a reply from the local Ollama chain.
  12. If you get errors, check that Ollama is running, verify the API key for DeepSeek, and confirm the Authorization header format. For long chats, adjust context settings or switch to the cloud model for deeper reasoning.
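The checks in step 12 can be scripted. The helpers below are a sketch: `ollama_models` lists what a local Ollama server has pulled via its standard `/api/tags` endpoint, and `auth_header_ok` validates the `Bearer <key>` header form the DeepSeek nodes expect (both function names are illustrative, not part of the workflow).

```python
import json
import urllib.error
import urllib.request

def ollama_models(base_url="http://127.0.0.1:11434"):
    """Return the model names a local Ollama server has pulled,
    or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            tags = json.loads(resp.read())
        return [m["name"] for m in tags.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

def auth_header_ok(value):
    """Check that an Authorization header value uses the
    'Bearer <key>' form expected by the DeepSeek endpoint."""
    parts = value.split(" ", 1)
    return len(parts) == 2 and parts[0] == "Bearer" and bool(parts[1].strip())
```

A `None` result from `ollama_models` means Ollama is not running; a list without `deepseek-r1:14b` means the model still needs to be pulled.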

Tools Required

n8n

$24/mo (or $20/mo billed annually) to use n8n in the cloud. However, the local or self-hosted n8n Community Edition is free.

DeepSeek


$0.035/1M input tokens (cache hit), $0.135/1M input tokens (cache miss), $0.550/1M output tokens

Ollama


Free tier: $0 (self-hosted local API)
