How Do You Connect Google Drive to Supabase for Q&A Support?

Turn your stored documents into a helpful chat that answers questions using your own data. Content is pulled from Google Drive, indexed in Supabase, and used to answer live queries. Teams can deliver fast, consistent replies without digging through files.

The flow starts when a chat message arrives. The system downloads a file from Google Drive, loads it as text, splits it into small chunks, and creates embeddings with OpenAI. These vectors are stored in Supabase so they can be searched by meaning, not just keywords. For each question, the message is embedded, matched with the best chunks through the Supabase function, and passed to an OpenAI chat model to craft a clear answer. A final step formats the output for the chat.
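The "search by meaning" step above can be sketched in plain Python. This is a toy illustration, not the workflow itself: in the real flow, OpenAI produces 1536-dimension embeddings and Supabase's match_documents function does the ranking in SQL, but the idea is the same cosine-similarity ranking shown here with made-up 3-dimension vectors.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_documents(query_embedding, chunks, top_k=2):
    # Rank stored chunks by similarity to the query embedding,
    # mimicking what Supabase's match_documents function does in SQL.
    scored = sorted(
        chunks,
        key=lambda c: cosine_similarity(query_embedding, c["embedding"]),
        reverse=True,
    )
    return scored[:top_k]

# Toy 3-dimension embeddings (real OpenAI embeddings have 1536 dimensions).
chunks = [
    {"content": "Refunds are processed in 5 days.", "embedding": [0.9, 0.1, 0.0]},
    {"content": "Our office is in Berlin.", "embedding": [0.0, 0.2, 0.9]},
    {"content": "Refund requests need an order ID.", "embedding": [0.8, 0.3, 0.1]},
]
query = [0.85, 0.2, 0.05]  # pretend embedding of "How do refunds work?"
best = match_documents(query, chunks, top_k=2)
print([c["content"] for c in best])
```

Because ranking happens on vectors rather than words, the question "How do refunds work?" retrieves both refund chunks even though neither contains the word "work".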

Setup needs pgvector enabled, a table with an embedding column that matches the OpenAI model size, and the match_documents function. The same embedding model is used for insert, search, and upsert to keep results accurate. You can also update records using the upsert path to refresh content as it changes. Expect quicker replies, fewer escalations, and lower handling time for FAQs, product manuals, and policy guides.
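The Supabase setup described above can be sketched as SQL. The statements below follow the common Supabase + LangChain convention (a documents table and a match_documents function); your table name, column names, and function signature may differ, so treat this as a starting point rather than the exact schema the template ships with.

```python
# Hedged sketch of the Supabase SQL setup: pgvector extension, a documents
# table, and a match_documents function. Names follow the common
# Supabase/LangChain convention -- adjust them to your own schema.
EMBEDDING_DIM = 1536  # must match the embedding model's output size

SETUP_SQL = f"""
create extension if not exists vector;

create table if not exists documents (
  id bigserial primary key,
  content text,
  metadata jsonb,
  embedding vector({EMBEDDING_DIM})
);

create or replace function match_documents (
  query_embedding vector({EMBEDDING_DIM}),
  match_count int default 4,
  filter jsonb default '{{}}'
) returns table (id bigint, content text, metadata jsonb, similarity float)
language sql stable as $$
  select id, content, metadata,
         1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
$$;
"""

print(SETUP_SQL)
```

Note that the vector size appears twice, once on the table and once on the function parameter; both must equal the embedding model's dimension or inserts and searches will fail.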

What are the key features?

  • Public chat trigger captures questions in real time and returns only the final answer.
  • Google Drive node downloads the source file by URL for ingestion.
  • Default Data Loader reads EPUB content and converts it into clean text.
  • Recursive text splitter creates small chunks to boost retrieval quality.
  • OpenAI Embeddings generate vectors for both insertion and query to keep dimensions aligned.
  • Supabase vector store inserts, updates, and retrieves documents using pgvector.
  • Vector Retriever and a Q&A chain select the best chunks and draft a clear answer.
  • Set node shapes the output field so the chat shows only the answer text.
  • Supabase table row retrieval helps with audits and ID lookups for maintenance.
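The recursive splitter listed above tries coarse separators (paragraphs) first and only falls back to finer ones (lines, then words) for pieces that are still too large. Here is a simplified sketch of that idea; the real n8n node has more options (overlap, custom separators), so this is illustrative only.

```python
def recursive_split(text, chunk_size=200, separators=("\n\n", "\n", " ")):
    # Simplified recursive character splitter: try the coarsest separator
    # first, recursing with finer separators only for oversized pieces.
    if len(text) <= chunk_size:
        return [text] if text else []
    if not separators:
        # No separators left: hard-cut the text.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    sep, rest = separators[0], separators[1:]
    chunks, current = [], ""
    for piece in text.split(sep):
        candidate = (current + sep + piece) if current else piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            if len(piece) <= chunk_size:
                current = piece
            else:
                chunks.extend(recursive_split(piece, chunk_size, rest))
                current = ""
    if current:
        chunks.append(current)
    return chunks

doc = ("Refund policy.\n\nRefunds are processed within five business days. "
       "Customers must provide an order ID.\n\nContact support for help.")
chunks = recursive_split(doc, chunk_size=80)
print(chunks)
```

Small chunks matter because each one is embedded as a single vector; a chunk that mixes several topics produces a blurry embedding and weaker retrieval.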

What are the benefits?

  • Reduce manual lookups from 30 minutes to under 60 seconds
  • Automate up to 70 percent of common support questions
  • Improve answer consistency by using one verified source
  • Handle thousands of pages with vector search at scale
  • Connect Google Drive and Supabase without custom code
  • Keep data in your own database for better control

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need accounts with Supabase, Google Drive, and OpenAI. See the Tools Required section below for links to create accounts with these services.
  3. In your Supabase project, enable the pgvector extension. Create a table with columns for embedding VECTOR(1536), metadata JSONB, and content TEXT. Add the match_documents function with the same vector size as your embedding model.
  4. In Supabase, check Row Level Security and policies so your API key can read and write to the vector table.
  5. In n8n, open the OpenAI Embeddings and Chat nodes, open the 'Credential to connect with' dropdown, choose 'Create new credential', and paste your OpenAI API key from the OpenAI dashboard. Give the credential a clear name.
  6. In n8n, open the Supabase nodes, create new credentials, and enter your Supabase URL and service role or anon API key. Name the credential so your team can recognize it later.
  7. For Google Drive, if the file is private, create a Google Drive credential in n8n and select it on the Google Drive node. If the file is public, the download by URL will work without auth.
  8. On the Google Drive node, set Operation to download and paste the file URL. Confirm the file ID is correct.
  9. Open the Default Data Loader and choose the EPUB loader. Confirm data type is binary and that the Recursive Text Splitter is connected for chunking.
  10. Set text-embedding-3-small on all Embeddings nodes used for insertion, retrieval, and upsert so vector dimensions stay consistent with your table.
  11. On the Supabase vector store nodes, set your table name and queryName to match_documents. Set topK in the retriever to control how many chunks are returned.
  12. Publish the chat. Use the public chat URL in n8n to ask a question contained in your document. Check the Executions view to verify the query embedding, match results, and final answer.
  13. If you see no matches, confirm that the match_documents function exists, that the table's vector size is 1536, and that your policies allow access. For deletions, use an HTTP Request node to send a DELETE request to the Supabase REST API, filtered on the record ID.
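The deletion call in step 13 uses PostgREST filter syntax: a DELETE request against the table's REST endpoint with an `id=eq.<value>` query parameter removes the matching row. The sketch below only builds the request (URL, headers, method) so the shape is visible; the project URL and key are placeholders you would replace with your own, and in n8n the HTTP Request node sends the actual call.

```python
from urllib.parse import urlencode

SUPABASE_URL = "https://your-project.supabase.co"  # placeholder
SERVICE_ROLE_KEY = "YOUR_SERVICE_ROLE_KEY"         # placeholder

def build_delete_request(table, record_id):
    # PostgREST filter syntax: ?id=eq.<value> targets the matching row.
    query = urlencode({"id": f"eq.{record_id}"})
    url = f"{SUPABASE_URL}/rest/v1/{table}?{query}"
    headers = {
        "apikey": SERVICE_ROLE_KEY,
        "Authorization": f"Bearer {SERVICE_ROLE_KEY}",
    }
    return "DELETE", url, headers

method, url, headers = build_delete_request("documents", 42)
print(method, url)
```

In the n8n HTTP Request node, the same pieces map to Method, URL, and two header fields (apikey and Authorization).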

Tools Required

n8n

$24 / mo or $20 / mo billed annually to use n8n in the cloud. However, the local or self-hosted n8n Community Edition is free.

Google Drive

Sign up

Drive API: $0 (no additional cost; quota-limited)

OpenAI

Sign up

Pay-as-you-go: GPT-5 at $1.25 per 1M input tokens and $10 per 1M output tokens

Supabase

Sign up

Free: $0 / mo — unlimited API requests; 500 MB database; 5 GB bandwidth; 1 GB storage; 50,000 MAUs.

Join Futurise to access 1,200+ automation templates

Get instant access to ready-made automation workflows for n8n, Make.com, AI agents, and more. Download, customise, and deploy in minutes.