How to Automate Tax Code Answers with Qdrant?

Turn long tax documents into a helpful chat assistant. Ask plain questions or request exact sections and get clear answers fast. Great for finance and legal teams that need quick and accurate tax guidance.

The flow downloads a zip of public tax PDFs, unzips it, and reads each file. It separates the text into chapters and sections and attaches metadata such as chapter and section id. Long content is split into safe chunks, Mistral AI creates an embedding for each piece, and the vectors with their metadata are saved in a Qdrant collection. A chat entry point listens for messages and passes them to an AI agent backed by an OpenAI model. The agent has two tools: Ask turns a question into an embedding and runs a Qdrant vector search, while Search looks up full sections by id or by chapter using Qdrant Scroll. A small delay and a batch size setting help avoid rate limits.

You need API keys for Mistral AI, Qdrant, and OpenAI. Expect big time savings as manual research drops from hours to minutes. This is useful for internal tax support, compliance checks, and training new staff. After you set credentials and confirm the collection name, run the build once to index, then chat with your data any time.
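The parse-and-chunk stage described above can be sketched in plain Python. This is a hypothetical illustration, not the workflow's actual node code: the section-header pattern (`Sec. 151.009.`), the chunk size, and the overlap are all assumptions you would adjust to your documents.

```python
import re

# Assumed section-header style for Texas tax code text, e.g. "Sec. 151.009."
SECTION_RE = re.compile(r"Sec\.\s*(\d+\.\d+)\.")

def split_into_sections(chapter_text, chapter_id):
    """Split a chapter's text into (metadata, text) pairs, one per section."""
    parts = SECTION_RE.split(chapter_text)
    sections = []
    # parts = [preamble, id1, body1, id2, body2, ...]
    for i in range(1, len(parts) - 1, 2):
        meta = {"chapter": chapter_id, "section": parts[i]}
        sections.append((meta, parts[i + 1].strip()))
    return sections

def chunk(text, max_chars=2000, overlap=200):
    """Cut long section text into overlapping chunks that fit API limits."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

Each chunk would then be embedded and upserted to Qdrant together with its `chapter` and `section` metadata, which is what makes the exact-lookup tool possible later.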

What are the key features?

  • Downloads and unzips a full set of tax PDFs automatically.
  • Extracts PDF text and splits content into chapters and sections.
  • Adds clear metadata like chapter and section id for precise filtering.
  • Chunks long text to safe sizes to respect API limits.
  • Generates embeddings with Mistral AI and stores vectors in Qdrant.
  • Uses batching and a one second wait to control request rates.
  • Exposes a chat webhook entry that sends messages to an AI agent.
  • OpenAI model answers with context retrieved from Qdrant.
  • Two tools: Ask uses Qdrant Search, Search returns full sections via Qdrant Scroll.
  • Switch logic routes each chat request to the right tool.
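The Switch routing in the last feature can be sketched as a small classifier: messages that name a section id or a chapter go to the Search tool (exact lookup via Scroll), everything else goes to Ask (semantic vector search). The patterns below are assumptions about how requests are phrased, not the template's actual expressions.

```python
import re

# Assumed request shapes: "Return section 151.009" or "Show me chapter 151".
SECTION_ID = re.compile(r"\b(\d+\.\d+)\b")
CHAPTER = re.compile(r"\bchapter\s+(\d+)\b", re.IGNORECASE)

def route(message):
    """Pick the agent tool for a chat message: exact lookup or semantic search."""
    m = SECTION_ID.search(message)
    if m:
        return ("Search", {"section": m.group(1)})
    m = CHAPTER.search(message)
    if m:
        return ("Search", {"chapter": m.group(1)})
    return ("Ask", {"query": message})
```

Routing exact-id requests away from vector search matters because a semantic match for "section 151.009" may return similar-sounding sections rather than that precise one.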

What are the benefits?

  • Reduce manual tax research from hours to minutes
  • Improve answer precision by storing section level metadata
  • Handle thousands of pages with batching and chunking
  • Connect Mistral AI, Qdrant, and OpenAI in one flow
  • Lower API errors with rate limit controls and delays
  • Support many queries with fast vector search
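The batching-and-delay pattern behind several of these benefits can be sketched as a simple throttled loop. The batch size and one-second pause mirror the workflow's defaults, but the helper below is an illustrative sketch, not the n8n node's implementation.

```python
import time

def process_in_batches(items, handle_batch, batch_size=10, delay_s=1.0):
    """Process items in small batches, pausing between batches to stay
    under embedding-API rate limits."""
    results = []
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        results.extend(handle_batch(batch))
        if i + batch_size < len(items):
            time.sleep(delay_s)  # throttle before the next batch
    return results
```

In the workflow the same effect comes from the For Each Section batching node plus the 1sec Wait node; raising the delay or shrinking the batch is the first knob to turn if you see rate-limit errors.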

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need accounts with Mistral AI, Qdrant and OpenAI. See the Tools Required section below for links to create accounts with these services.
  3. In the n8n credentials manager, create a Mistral AI API Key credential. You can also double-click the Embeddings Mistral Cloud or Get Mistral Embeddings node, click 'Create new credential', and follow the on-screen steps. Generate your API key in your Mistral AI dashboard and paste it into the credential.
  4. Set up Qdrant access. If you use Qdrant Cloud, get your cluster URL and API key. If you self host, use your base URL and API key. In the Qdrant Vector Store node and the two HTTP Request nodes named Use Qdrant Search API1 and Use Qdrant Scroll API, choose your Qdrant credential.
  5. Create an OpenAI API Key and add it in n8n. Double click the OpenAI Chat Model node, pick 'Create new credential', and paste your key. Select your preferred model in the node if needed.
  6. Check the collection name used across nodes. The HTTP requests target the collection named 'texas_tax_codes'. Make sure your Qdrant Vector Store node writes to the same collection name.
  7. Review the data source. Open the Get Tax Code Zip File node and confirm the URL points to your desired zip of PDFs. Replace it if you want to index a different document set.
  8. Validate the parsing logic. Open the Extract From Chapter and Map To Sections Set nodes. If your PDFs use different labels, adjust the regex and mapping to match your structure.
  9. Control rate limits. In the For Each Section... node you can change the batch size. The 1sec Wait node can be adjusted to reduce API errors if you see rate limit warnings.
  10. Index your data. Click Test workflow to run the manual build. Watch the Qdrant Vector Store node for success. In your Qdrant dashboard, confirm points are created and metadata fields like chapter and section exist.
  11. Test the chat. Open the When chat message received node and copy the webhook URL, or use the built-in chat UI. Ask a question like 'What is the rule for sales tax exemptions?' or request 'Return section 151.009'. Check that Ask uses Qdrant Search and Search uses Qdrant Scroll.
  12. Troubleshoot results. If nothing returns, verify the Qdrant credentials, collection name, and that embeddings were created. If answers look off, tune chunk size and the Recursive Character Text Splitter. If sections are missing, adjust the parsing rules.
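To help verify steps 4, 6, and 10, here is a sketch of the request bodies the two HTTP Request nodes send to Qdrant's REST API, assuming the 'texas_tax_codes' collection and the `chapter`/`section` payload fields from earlier steps. These helpers only build the JSON bodies; the endpoints they target are `POST /collections/texas_tax_codes/points/search` (Ask) and `POST /collections/texas_tax_codes/points/scroll` (Search).

```python
def search_body(query_vector, limit=5):
    """Semantic search: nearest vectors plus their stored payload metadata."""
    return {"vector": query_vector, "limit": limit, "with_payload": True}

def scroll_body(section_id=None, chapter=None, limit=100):
    """Exact lookup: filter stored points by section id or chapter metadata."""
    must = []
    if section_id:
        must.append({"key": "section", "match": {"value": section_id}})
    if chapter:
        must.append({"key": "chapter", "match": {"value": chapter}})
    return {"filter": {"must": must}, "limit": limit, "with_payload": True}
```

If the Scroll call returns nothing while Search works, the filter keys usually don't match the payload field names written during indexing, which is the metadata check described in step 10.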

Tools Required

n8n

$24/mo, or $20/mo billed annually, to use n8n in the cloud. The local or self-hosted n8n Community Edition is free.

Mistral AI

Sign up

Free API tier: $0 (usage-limited). Lowest paid usage: Mistral Embed at $0.10 per 1M tokens.

OpenAI

Sign up

Pay-as-you-go: GPT-5 at $1.25 per 1M input tokens and $10 per 1M output tokens.

Qdrant

Sign up

Free tier: $0, 1 GB free cluster (no credit card), accessible via REST/gRPC API.

Join Futurise to access 1,200+ automation templates

Get instant access to ready-made automation workflows for n8n, Make.com, AI agents, and more. Download, customise, and deploy in minutes.