n8n

How to Automate OpenRouter Chat Support?

Turn incoming chat messages into fast, helpful replies. A configurable AI assistant answers common questions and keeps short-term memory for better context. Best for support teams that want quick responses without complex setup.

A chat event starts the flow when a message arrives. A settings step adds a model name to the data so the model can be changed without editing the rest of the workflow. The OpenRouter model node reads that value and calls the selected language model. A chat memory block stores recent turns by session id so the assistant remembers what was said. The agent reads the prompt from the message, combines it with memory, and returns a clear answer. You can swap models across providers like OpenAI, Google, DeepSeek, Mistral, and Qwen in seconds.
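The data-driven model choice can be sketched as follows. The node and field names below are illustrative assumptions, not the template's exact JSON: the settings (Set) node writes a `model` field, and the OpenRouter model node reads it with an expression.

```json
{
  "settings_node": {
    "type": "n8n-nodes-base.set",
    "parameters": {
      "assignments": [
        { "name": "model", "type": "string", "value": "openai/gpt-4o-mini" }
      ]
    }
  },
  "openrouter_model_node": {
    "parameters": {
      "model": "={{ $json.model }}"
    }
  }
}
```

Changing the value in the settings node is enough to switch providers; the expression in the model node picks up the new model id on the next run.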

You only need an OpenRouter account and an API key. Set a default model, pass a session id from your site or app, and test a few questions to tune tone and length. Expect faster first replies, fewer simple tickets, and better handoffs to humans for complex issues. Great for FAQs, order status checks, and basic troubleshooting where quick and consistent answers matter.
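To pass a session id from your site or app, the chat trigger expects a payload roughly like this. The field names follow n8n's chat trigger defaults (`sessionId` and `chatInput`); confirm them against your trigger node.

```json
{
  "sessionId": "user-1234",
  "chatInput": "Where is my order?"
}
```

Reuse the same `sessionId` for every message from the same user; otherwise the memory block treats each message as a brand-new conversation.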

What are the key features?

  • Chat message trigger starts the flow the moment a user sends a message
  • Settings step adds a model field so the assistant can use any supported model id
  • LLM node connects to OpenRouter using your API key to run the selected model
  • Chat memory stores recent turns per session id so replies stay in context
  • AI agent reads the prompt from the message and returns a clear response
  • Model choice is data driven so you can test providers without code changes

What are the benefits?

  • Cut first reply time by up to 60 percent for common questions
  • Automate up to 70 percent of simple support requests
  • Improve answer consistency by 40 percent with short term chat memory
  • Switch models in minutes without rebuilding the flow
  • Serve many users at once without adding extra staff

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need an OpenRouter account. See the Tools Required section below for a sign-up link.
  3. In your OpenRouter account, create an API key from the API page and copy it to a safe place.
  4. In the n8n credentials manager, create a new OpenAI credential and choose OpenRouter as the provider. Paste your API key and follow the on screen steps to save. If asked for a base URL, use https://openrouter.ai/api/v1.
  5. Open the LLM Model node and select the OpenRouter credential you created. Confirm the model field uses the expression that reads the model from the incoming data.
  6. Open the Settings node and set a default model id from the OpenRouter models page. This value controls which model runs by default.
  7. Open the Chat Memory node and confirm the session key uses the session id from the incoming message. Make sure your chat source passes a stable session id for each user.
  8. Start a test chat in n8n. Send a greeting, then ask a follow up that refers to the first message. Verify the reply uses context from earlier messages.
  9. Change the model id in the Settings node to another supported model and run a new test to compare speed and style.
  10. If you see an unauthorized error, check the OpenRouter API key. If replies are empty, confirm the prompt field maps to the incoming message. If memory seems lost, ensure the session id is present and the same across messages.
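As a quick reference for steps 4 and 10: the credential boils down to two values, sketched below as plain JSON. This is an illustrative shape, not a file you import; n8n's credential form uses its own field labels.

```json
{
  "credential": "OpenAI-compatible",
  "apiKey": "YOUR_OPENROUTER_API_KEY",
  "baseURL": "https://openrouter.ai/api/v1"
}
```

An unauthorized error almost always means the apiKey value is wrong or was revoked, and the baseURL must point at OpenRouter rather than api.openai.com.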

Tools Required

n8n

$24/mo, or $20/mo billed annually, to use n8n in the cloud. The local or self-hosted n8n Community Edition is free.

OpenRouter

Sign up

Free models: $0 via the API, limited to 20 requests per minute and 50 requests per day (raised to 1,000 per day once your account holds at least 10 credits)

Similar Templates

Join Futurise to access 1,200+ automation templates

Get instant access to ready-made automation workflows for n8n, Make.com, AI agents, and more. Download, customise, and deploy in minutes.