How Do You Launch a Gemini Website Chat for Customer Support?

Add an AI chat assistant to your website that replies in seconds and remembers context. It helps visitors get instant answers and guides them through simple tasks. Ideal for support teams that want fewer basic tickets and faster first replies.

Messages enter through the When Chat Message Received node, which also hosts a simple chat page. A memory buffer stores recent conversation so the bot stays on topic. The Google Gemini Chat Model generates replies, while a custom prompt in the Construct and Execute LLM Prompt node defines tone and rules and preserves {chat_history} and {input}. Output returns from the last node so the user sees a clean reply in the chat.
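The flow above can be sketched in plain Python. This is a minimal, hypothetical stand-in for the n8n nodes, not their actual implementation: `call_gemini` is a stub for the Google Gemini Chat Model node, and all names are illustrative.

```python
# Sketch of the chat pipeline: trigger -> memory -> prompt -> model -> reply.
# In n8n these stages are separate nodes wired together; names here are illustrative.

PROMPT_TEMPLATE = (
    "You are a friendly support assistant for ExampleCo.\n"  # hypothetical tone/rules
    "Answer briefly and stay on topic.\n\n"
    "Conversation so far:\n{chat_history}\n\n"
    "User: {input}\nAssistant:"
)

def call_gemini(prompt: str) -> str:
    # Stand-in for the API call the Gemini Chat Model node performs.
    return "Thanks for asking! Here is a short answer."

def handle_message(history: list[str], user_input: str, window: int = 10) -> str:
    # Memory buffer: only the most recent `window` lines reach the prompt.
    recent = history[-window:]
    prompt = PROMPT_TEMPLATE.format(
        chat_history="\n".join(recent), input=user_input
    )
    reply = call_gemini(prompt)
    # Store both turns so later messages keep context.
    history.append(f"User: {user_input}")
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
print(handle_message(history, "What are your opening hours?"))
```

Note how the `{chat_history}` and `{input}` placeholders are filled at runtime; that is why the setup steps below insist on leaving them untouched in the prompt node.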

You only need a Google Gemini API key and basic n8n access. Expect shorter queues, lower costs for simple issues, and higher satisfaction from quick answers. Common uses include FAQ support, pre-sales guidance, and internal help during off-hours.

What are the key features?

  • Public chat endpoint that hosts a simple chat UI and supports allowed origins.
  • Conversation memory window that keeps recent messages so replies stay on topic.
  • Google Gemini chat model with adjustable temperature and safety settings.
  • Prompt template that preserves {chat_history} and {input} to guide the agent.
  • Last node response mode returns a clean message to the chat interface.
  • Easy model swapping through the language model input field.
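The Allowed Origins setting behaves like a standard CORS allow-list. A rough Python sketch of the check (simplified, not n8n's actual implementation):

```python
def origin_allowed(origin: str, allowed: list[str]) -> bool:
    # "*" opens the endpoint to any site -- fine for testing,
    # but lock it down to your own domain(s) in production.
    return "*" in allowed or origin in allowed

print(origin_allowed("https://example.com", ["https://example.com"]))  # True
print(origin_allowed("https://evil.test", ["https://example.com"]))    # False
```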

What are the benefits?

  • Reduce first response time from minutes to seconds
  • Eliminate up to 70 percent of repetitive FAQ tickets
  • Handle up to 10 times more concurrent chats with the same team
  • Keep tone and answers consistent with a single prompt template
  • Launch a working chat assistant in under one hour

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three-dot menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need an account with Google Gemini. See the Tools Required section below for links to create accounts with these services.
  3. In the n8n credentials manager, create a new credential for Google Gemini. Choose the Google Gemini (PaLM) API type, name the credential clearly, and paste your API key from the official API page at ai.google.dev.
  4. Open the Google Gemini Chat Model node. Select your new credential, set the model to models/gemini-2.0-flash-exp, and review temperature and safety settings.
  5. Open the When Chat Message Received node. Make sure Public is enabled, set the chat title, set Allowed Origins to your domain for production or * for testing, and enable loading previous session from memory if needed.
  6. Open the Store Conversation History node and set the memory window size to control how many recent turns the bot remembers.
  7. Open the Construct and Execute LLM Prompt node. Edit the agent personality and instructions, but keep the {chat_history} and {input} placeholders unchanged.
  8. Click Chat in the editor to run a quick test. Send several messages and confirm the bot keeps context across turns.
  9. Activate the workflow and copy the public chat URL from the trigger node. Open it in a browser and confirm you receive replies without errors.
  10. Troubleshooting: If replies fail, check your Gemini API key and credential selection. If context resets, confirm the memory node connects to both the trigger and the prompt node. If you see a CORS error, update Allowed Origins in the trigger node.
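The window size in step 6 bounds how much history the bot carries between turns. Conceptually it works like a fixed-length queue; this is a sketch of the idea, not n8n's internals:

```python
from collections import deque

# A window size of 3 keeps only the 3 most recent turns;
# older turns fall out automatically as new ones arrive.
memory = deque(maxlen=3)
for turn in ["turn 1", "turn 2", "turn 3", "turn 4"]:
    memory.append(turn)

print(list(memory))  # the oldest turn has been dropped
```

A larger window keeps the bot on topic over longer conversations but increases the input tokens sent with every message.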

Tools Required

n8n

n8n Cloud costs $24/mo, or $20/mo billed annually. The local or self-hosted n8n Community Edition is free.

Google Gemini

Sign up

Free tier: $0 via the Gemini API; for example, Gemini 2.5 Flash-Lite's free tier allows 1,000 requests/day (15 RPM, 250k TPM). Paid usage starts at $0.10 per 1M input tokens and $0.40 per 1M output tokens.
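At the paid rates quoted above, per-chat costs stay tiny. A quick back-of-the-envelope calculation, with assumed average token counts for a support chat:

```python
# Quoted Gemini paid rates: $0.10 per 1M input tokens, $0.40 per 1M output tokens.
INPUT_RATE = 0.10 / 1_000_000
OUTPUT_RATE = 0.40 / 1_000_000

# Assumed averages: 800 input tokens (prompt + history) and
# 300 output tokens per reply, at 1,000 chats per day.
chats_per_day = 1_000
daily_cost = chats_per_day * (800 * INPUT_RATE + 300 * OUTPUT_RATE)
print(f"${daily_cost:.2f}/day, ~${daily_cost * 30:.2f}/month")
```

With these assumptions, 1,000 chats a day costs on the order of cents per day, so the free tier or the lowest paid tier covers most small support teams.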

Join Futurise to access 1,200+ automation templates

Get instant access to ready-made automation workflows for n8n, Make.com, AI agents, and more. Download, customise, and deploy in minutes.