
How to Build a Qdrant AI Assistant for Regulations?

Turn long tax rule PDFs into a simple chat assistant your team can use. The build loads public documents, organizes them by chapter and section, and answers questions with clear citations. Finance and compliance teams can find the right clause in minutes.

The flow starts with a manual run that downloads a zip of PDFs, unzips it, and extracts the text. It maps chapter and section labels with pattern matching, then splits the text into 2,000-character chunks, attaching chapter and section metadata to each. Embeddings are created with Mistral AI and saved to a Qdrant collection; a batching loop (batch size 5, with a short wait) helps avoid rate limits. A chat webhook drives an OpenAI agent with two tools: Ask, for semantic search via the Qdrant Search API, and Search, which fetches exact sections via the Qdrant Scroll API. Chat memory keeps context for follow-up questions.
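The chapter and section mapping plus 2,000-character chunking described above can be sketched in Python. The regular expressions here are hypothetical placeholders; the template's actual patterns depend on how the source PDFs label their chapters and sections:

```python
import re

# Hypothetical label patterns -- adjust to your documents' formatting.
CHAPTER_RE = re.compile(r"^Chapter\s+(\d+)", re.MULTILINE)
SECTION_RE = re.compile(r"^Section\s+(\d+(?:\.\d+)*)", re.MULTILINE)

def chunk_with_metadata(text, size=2000):
    """Split text into fixed-size chunks, tagging each chunk with the
    most recently seen chapter and section label."""
    chunks = []
    chapter, section = None, None
    for start in range(0, len(text), size):
        part = text[start:start + size]
        # Carry the running chapter/section forward across chunks.
        found_chapters = CHAPTER_RE.findall(part)
        found_sections = SECTION_RE.findall(part)
        if found_chapters:
            chapter = found_chapters[-1]
        if found_sections:
            section = found_sections[-1]
        chunks.append({"text": part,
                       "metadata": {"chapter": chapter, "section": section}})
    return chunks
```

Each chunk's metadata later lands in the Qdrant payload, which is what lets the Scroll tool fetch an exact section on demand.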

You need a Qdrant endpoint, a Mistral AI key, and an OpenAI key. Set your collection name and base URL, then run the indexing flow once to load the data. After that, send chat questions to the webhook to get relevant snippets, or request full section text. Teams cut research time, improve answer consistency, and can reuse the same design for any policy or rulebook library.
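Questions can be sent to the chat webhook with any HTTP client. The sketch below builds the JSON body n8n's chat trigger typically expects (a `chatInput` field plus a `sessionId` that ties follow-ups to the same chat memory); confirm the exact field names in your workflow's When chat message received node:

```python
import json
import uuid

def build_chat_request(question, session_id=None):
    """Build the JSON body for the n8n chat webhook.
    Field names assume the default chat trigger schema."""
    return {
        "chatInput": question,
        # Reuse the same sessionId across requests to keep chat memory.
        "sessionId": session_id or str(uuid.uuid4()),
    }

payload = build_chat_request("Which section covers late filing penalties?")
body = json.dumps(payload)
# POST `body` to your webhook URL, e.g. requests.post(url, json=payload)
```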

What are the key features?

  • Downloads and unzips a PDF archive, then extracts text from each file
  • Parses chapters and section labels with pattern matching for clean structure
  • Splits content into 2,000-character chunks and adds chapter and section metadata
  • Creates embeddings with Mistral AI and stores vectors in a Qdrant collection
  • Uses a batch loop and a short wait to respect API rate limits
  • Provides an Ask tool that runs semantic search via the Qdrant Search API
  • Provides a Search tool that fetches exact sections via the Qdrant Scroll API
  • Runs an OpenAI agent with chat memory to handle natural questions and follow-ups
  • Routes tool calls and responses with clear logic for reliable outputs
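The Ask and Search tools map to two Qdrant REST endpoints: `POST /collections/<name>/points/search` for vector similarity, and `POST /collections/<name>/points/scroll` for exact payload-filtered fetches. A minimal sketch of the two request bodies follows; the `metadata.chapter` and `metadata.section` payload key paths are assumptions, so check how your vector store node actually nests metadata:

```python
def search_body(query_vector, limit=5):
    """Semantic search body for POST /collections/<name>/points/search."""
    return {"vector": query_vector, "limit": limit, "with_payload": True}

def scroll_body(chapter, section, limit=20):
    """Exact-fetch body for POST /collections/<name>/points/scroll,
    filtering on the chunk metadata written at index time."""
    return {
        "filter": {"must": [
            {"key": "metadata.chapter", "match": {"value": chapter}},
            {"key": "metadata.section", "match": {"value": section}},
        ]},
        "limit": limit,
        "with_payload": True,
    }
```

Search returns the nearest chunks for a question's embedding; Scroll returns every point whose payload matches the filter, which is how the agent retrieves a full section verbatim.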

What are the benefits?

  • Reduce policy research time from hours to minutes
  • Automate most repetitive document lookups with structured sections
  • Improve answer consistency by retrieving exact chapter and section text
  • Handle large document sets with batching and chunking
  • Connect AI models to a vector database for accurate search
  • Support ongoing chats with memory for better follow-up answers

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need accounts with Qdrant, Mistral AI, and OpenAI. See the Tools Required section below for links to create accounts with these services.
  3. In the n8n credentials manager, create a Qdrant credential: double-click a Qdrant node, choose 'Credential to connect with' > 'Create new credential', enter your Qdrant base URL and API key, then save.
  4. Create a Mistral AI credential: open the Embeddings or HTTP Request node for embeddings, choose 'Create new credential', paste your Mistral AI API key from the Mistral dashboard, then save.
  5. Create an OpenAI credential: open the OpenAI Chat Model node, select 'Create new credential', paste your OpenAI API key from the OpenAI account page, and save.
  6. Open the Qdrant Vector Store node and set the collection name you want to use. Keep the same name in the HTTP Request nodes that call the Qdrant Search and Scroll APIs.
  7. Review the SplitInBatches node and the 1 second Wait node. If you see rate limit errors from Mistral AI, lower the batch size or increase the wait time.
  8. Run a test with the manual trigger to build the index: click Test workflow, then watch the PDF extraction, section mapping, chunking, and vector uploads complete.
  9. Validate the index: open the Use Qdrant Scroll API node and execute it to confirm points exist and that the payload metadata includes chapter and section.
  10. Open the When chat message received node and copy the webhook URL. Send a question with a tool like curl or Postman to verify the agent returns results.
  11. If answers are empty, check the Get Mistral Embeddings node output and confirm the Qdrant Search API node receives a vector array. Verify credentials and collection names match.
  12. Optional: adjust the Text Splitter chunk size or the section parsing expressions if your documents have different formatting.
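The batching behavior tuned in step 7 — groups of 5 with a short pause between them — looks roughly like this outside n8n. The `embed` and `upsert` callables here are stand-ins for the Mistral embeddings request and the Qdrant upload, not real client calls:

```python
import time

BATCH_SIZE = 5    # matches the SplitInBatches setting
WAIT_SECONDS = 1  # matches the Wait node

def index_in_batches(chunks, embed, upsert, wait=WAIT_SECONDS):
    """Embed and upload chunks in small batches, pausing between
    batches to stay under the embeddings API rate limit."""
    for i in range(0, len(chunks), BATCH_SIZE):
        batch = chunks[i:i + BATCH_SIZE]
        vectors = embed([c["text"] for c in batch])
        upsert(batch, vectors)
        if i + BATCH_SIZE < len(chunks):  # no pause after the last batch
            time.sleep(wait)
```

If Mistral AI still returns rate-limit errors, the two knobs are the same as in the workflow: shrink `BATCH_SIZE` or raise the wait.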

Tools Required

n8n

$24 / mo or $20 / mo billed annually to use n8n in the cloud. However, the local or self-hosted n8n Community Edition is free.

Mistral AI

Sign up

Free API tier: $0 (usage-limited). Lowest paid usage: Mistral Embed at $0.10 per 1M tokens.

OpenAI

Sign up

Pay-as-you-go: GPT-5 at $1.25 per 1M input tokens and $10 per 1M output tokens

Qdrant

Sign up

Free tier: $0, 1 GB free cluster (no credit card), accessible via REST/gRPC API

Similar Templates

Join Futurise to access 1,200+ automation templates

Get instant access to ready-made automation workflows for n8n, Make.com, AI agents, and more. Download, customise, and deploy in minutes.