n8n

How Do You Streamline Telegram Knowledge Search?

Turn a Telegram chat into a live knowledge base for your team. Send a PDF to the bot to store it, then ask questions in chat to get clear answers from your own documents. Great for teams that work on mobile and need quick facts from manuals, price lists, or playbooks.

When a Telegram message arrives, the flow checks if it is a document or a question. For documents, the bot downloads the file, tags it as a PDF, splits it into small parts, creates vector embeddings with OpenAI, and saves them in a Pinecone index. A reply confirms how many pages were stored. For questions, the flow searches Pinecone for the best chunks, sends them to a Groq large language model, and returns a focused answer back to the same chat. This keeps answers grounded in your files and reduces guesswork.
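The routing decision above can be sketched as a simple check. This is a minimal illustration, not the template's actual If node expression; the field names follow Telegram's Bot API message object, which is what the trigger delivers:

```javascript
// Decide which branch of the workflow a Telegram update should take.
// A message carrying a `document` field goes to the storage path;
// plain text goes to the question-answering path. Field names follow
// Telegram's Bot API message object.
function route(message) {
  if (message.document) return "store";
  if (message.text) return "answer";
  return "ignore"; // stickers, photos, etc. are not handled
}

console.log(route({ document: { file_id: "abc", mime_type: "application/pdf" } })); // store
console.log(route({ text: "What is our refund policy?" })); // answer
```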

You need a Telegram bot token, Pinecone and OpenAI API keys, and a Groq API key. The configuration uses a 3000-character chunk size with a 200-character overlap, which suits long PDFs well. Teams in operations, sales, or support can cut search time and answer staff or customer questions faster by chatting with the bot.
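The chunk size and overlap work as a sliding window over the document text. The real Recursive Character Text Splitter also prefers to break on paragraph and sentence boundaries; this simplified sketch shows only the size/overlap arithmetic:

```javascript
// Simplified illustration of the chunking parameters: 3000-character
// chunks with a 200-character overlap, so consecutive chunks share
// context and a sentence cut at a boundary still appears whole in
// one of them.
function chunk(text, size = 3000, overlap = 200) {
  const chunks = [];
  const step = size - overlap; // each chunk starts 2800 chars after the last
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}

const doc = "x".repeat(7000); // stand-in for extracted PDF text
const parts = chunk(doc);
console.log(parts.length);    // 3 chunks: 0-3000, 2800-5800, 5600-7000
console.log(parts[1].length); // 3000
```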

What are the key features?

  • Telegram Trigger listens for new messages and files in real time.
  • If node routes documents to storage and text messages to the chat answer path.
  • Telegram get File downloads the uploaded document from the chat.
  • Code node sets the file type to application/pdf to ensure consistent parsing.
  • Recursive Character Text Splitter breaks large PDFs into 3000 character chunks with 200 overlap.
  • Default Data Loader prepares the chunks for vector storage and search.
  • OpenAI Embeddings convert chunks into vectors for fast similarity search.
  • Pinecone Vector Store inserts new vectors and later retrieves the best matches.
  • Groq Chat Model with a Question and Answer Chain forms clear answers grounded in retrieved text.
  • Telegram Response confirms pages saved and sends chat answers, with a Limit node to avoid duplicate notices.
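The Code node's role can be sketched as follows. This is a guess at the node's logic, not its exact contents; the binary property name `data` is n8n's default for downloaded files and is an assumption here:

```javascript
// Sketch of what the Code node does: force the downloaded file's MIME
// type to application/pdf so the Default Data Loader parses it
// consistently, even when Telegram delivers a generic type.
// In n8n this logic runs over the node's `items` array; "data" as the
// binary property name is n8n's default and assumed here.
function tagAsPdf(items) {
  for (const item of items) {
    if (item.binary && item.binary.data) {
      item.binary.data.mimeType = "application/pdf";
    }
  }
  return items;
}

const items = [
  { binary: { data: { fileName: "manual.bin", mimeType: "application/octet-stream" } } },
];
console.log(tagAsPdf(items)[0].binary.data.mimeType); // application/pdf
```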

What are the benefits?

  • Reduce document lookup from 30 minutes to 1 minute
  • Automate up to 80% of routine questions from staff
  • Improve answer accuracy by using your own files as context
  • Handle thousands of pages without manual sorting or tagging
  • Connect Telegram, Pinecone, OpenAI and Groq in one flow

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need accounts with Telegram, Pinecone, OpenAI and Groq. See the Tools Required section below for links to create accounts with these services.
  3. Create a Telegram bot with BotFather and copy the bot token. Start a chat with the bot so it can receive messages.
  4. In the n8n credentials manager, create a Telegram credential using your bot token. Open the Telegram Trigger, Telegram get File and Telegram Response nodes and select this credential.
  5. Open the Telegram Trigger node and ensure it listens for message updates. Save and activate the workflow so Telegram events reach n8n.
  6. In your Pinecone dashboard, create an index named telegram and note the environment and API key. In n8n, create a Pinecone credential and select it in both Pinecone Vector Store nodes.
  7. In your OpenAI account, create an API key. In n8n, create an OpenAI credential and assign it to both Embeddings nodes.
  8. In your Groq account, create an API key. In n8n, create a Groq credential and select it in the Groq Chat Model node. Confirm the model is set to llama-3.1-70b-versatile.
  9. Send a PDF to your Telegram bot. In n8n, check the execution to see Telegram get File, Code, and Pinecone Vector Store run. You should receive a Telegram reply showing how many pages were saved.
  10. Ask a question in the same chat. Verify that the Vector Store Retriever and Question and Answer Chain run and that a clear answer is sent back to Telegram.
  11. If pages saved shows 0, confirm the file is a valid PDF and the Code node sets application/pdf. If no answer returns, check that your Pinecone index has vectors and that OpenAI and Groq keys are valid.
  12. Adjust chunk size and overlap in the Text Splitter if your files are very short or very long. Keep the Limit node set to 1 to prevent multiple upload confirmations.
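Behind step 10, the question path ranks stored chunks by vector similarity before handing the best matches to the Groq model. A minimal sketch of that ranking, using cosine similarity as a Pinecone index with the cosine metric does (toy 3-dimensional vectors stand in for real embeddings, which have over a thousand dimensions):

```javascript
// Cosine similarity between two vectors, as used by a cosine-metric
// Pinecone index to score matches against a query embedding.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k stored chunks most similar to the query vector.
function topK(query, chunks, k = 2) {
  return chunks
    .map(c => ({ ...c, score: cosine(query, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const stored = [
  { text: "Refunds are processed within 14 days.", vector: [0.9, 0.1, 0.0] },
  { text: "Office hours are 9 to 5.",              vector: [0.0, 0.2, 0.9] },
  { text: "Refund requests go to support.",        vector: [0.8, 0.3, 0.1] },
];
const query = [0.95, 0.2, 0.05]; // stand-in embedding of "How do refunds work?"
console.log(topK(query, stored).map(c => c.text));
```

Only the top-ranked chunks reach the Groq model, which is what keeps answers grounded in your own files rather than the model's general knowledge.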

Tools Required

n8n

Sign up

Cloud: $24 / mo, or $20 / mo billed annually. The local or self-hosted n8n Community Edition is free.

Groq

Sign up

Free tier: $0, API key usable via API (rate-limited)

OpenAI

Sign up

Pay-as-you-go: GPT-5 at $1.25 per 1M input tokens and $10 per 1M output tokens

Pinecone

Sign up

Starter (Free): $0 / mo; includes 2 GB storage, 2M write units / mo, 1M read units / mo, up to 5 indexes; API access.

Telegram

Sign up

Free: $0, Telegram Bot API usage is free for developers

Similar Templates

Join Futurise to access 1,200+ automation templates

Get instant access to ready-made automation workflows for n8n, Make.com, AI agents, and more. Download, customise, and deploy in minutes.