
How to Streamline Telegram Document Q&A Support?

Turn your Telegram bot into a smart helper that stores PDFs and answers questions from those files. Users send a document or a message in Telegram, and the bot either loads the file into a knowledge base or replies with a clear answer sourced from your data. This fits customer support, onboarding, and internal help desks that need fast, accurate responses.

Incoming messages follow two clear paths. If the message includes a document, the bot downloads the file, normalizes its MIME type to application/pdf, splits the text into chunks, creates embeddings with OpenAI, and saves them to a Pinecone index. After saving, it replies in Telegram with how many pages were stored. If the message is plain text, it searches the vector store with a retriever, passes the most relevant chunks to a Groq chat model, and returns a precise answer in the same chat. The flow uses a Telegram Trigger, an If node for routing, a Code node that sets the file type, a text splitter with a 3000-character chunk size and 200-character overlap, OpenAI embeddings, Pinecone for storage and retrieval, and a Groq LLM for answers.
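The two-way routing above can be sketched as a small check on the incoming update. This is an illustrative JavaScript sketch, assuming the Telegram Bot API update shape (`message.document` / `message.text`); the `routeUpdate` helper and branch labels are hypothetical and not part of the template itself.

```javascript
// Minimal sketch of the If-node routing, assuming the Telegram Bot API
// update shape that the Telegram Trigger node emits.
function routeUpdate(update) {
  const msg = update.message ?? {};
  if (msg.document) {
    // Document path: download, tag as PDF, chunk, embed, upsert to Pinecone
    return { branch: "ingest", fileId: msg.document.file_id };
  }
  if (msg.text) {
    // Text path: retrieve relevant chunks and answer with the Groq model
    return { branch: "qa", question: msg.text };
  }
  // Stickers, photos, etc. fall through untouched
  return { branch: "ignore" };
}

console.log(routeUpdate({ message: { document: { file_id: "abc123" } } }));
console.log(routeUpdate({ message: { text: "What is the refund policy?" } }));
```

In the actual workflow the If node performs this check declaratively on the trigger's JSON output; the sketch only makes the two branches explicit.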

Set up a Telegram bot, OpenAI, Pinecone, and Groq accounts. Expect shorter response times, less manual searching, and a chat channel that scales with your data. Use it for product FAQs, policy questions, or training guides where users want answers inside Telegram.

What are the key features?

  • Telegram Trigger listens for new messages and routes them in real time.
  • If node checks whether the message includes a document or plain text.
  • Telegram Get File downloads the attached file using its file ID.
  • Code node sets the file MIME type to application/pdf to standardize processing.
  • Recursive Character Text Splitter uses 3000 character chunks with 200 overlap for better recall.
  • OpenAI Embeddings converts text chunks into vectors ready for search.
  • Pinecone Vector Store saves and indexes document chunks for fast retrieval.
  • Vector Store Retriever finds the most relevant chunks for each question.
  • Groq Chat Model (llama-3.1-70b-versatile) generates clear answers from retrieved context.
  • Telegram Response sends the final answer or a confirmation of pages saved.
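The MIME-type step in the list above can be sketched as follows. This is a hypothetical stand-in for the template's Code node, assuming n8n's default binary property name `data`; inside a real Code node you would mutate the global `items` array and `return items;` rather than call a named function.

```javascript
// Sketch of the Code node that forces the binary MIME type to
// application/pdf so downstream nodes treat every upload uniformly.
// "data" is n8n's default binary key; rename it if your workflow differs.
function normalizeMime(items) {
  for (const item of items) {
    if (item.binary && item.binary.data) {
      item.binary.data.mimeType = "application/pdf";
      // Give the document loader a sensible file name if none is set
      if (!item.binary.data.fileName) {
        item.binary.data.fileName = "document.pdf";
      }
    }
  }
  return items;
}

// Example: one item carrying a freshly downloaded file
const example = [{ binary: { data: { mimeType: "application/octet-stream" } } }];
console.log(normalizeMime(example)[0].binary.data.mimeType);
```

Forcing the MIME type keeps the downstream PDF loader from rejecting files that Telegram delivers with a generic content type.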

What are the benefits?

  • Reduce manual document search from hours to minutes by answering inside Telegram
  • Handle many more chat questions without adding staff by using retrieval based answers
  • Improve answer consistency by sourcing replies from saved documents
  • Cut new agent ramp up time by centralizing knowledge in a vector database
  • Reduce repeat questions by giving clear answers sourced from your documents in the same chat

How do you set it up?

  1. Import the template into n8n: Create a new workflow in n8n > Click the three dots menu > Select 'Import from File' > Choose the downloaded JSON file.
  2. You'll need accounts with Telegram, OpenAI, Pinecone, and Groq. See the Tools Required section below for links to create accounts with these services.
  3. Create a Telegram bot: In Telegram, message BotFather, create a new bot, and copy the bot token. In n8n, open the Telegram Trigger node, click the credential dropdown, choose Create new credential, and paste the token.
  4. Set Telegram Trigger updates to message so the bot receives both texts and documents. Activate the workflow to register the webhook.
  5. Open the OpenAI Embeddings nodes. In the credential dropdown, click Create new credential. Add your OpenAI API key from the OpenAI account API page. Name the credential clearly, for example openai-embeddings-prod.
  6. Prepare Pinecone: In your Pinecone dashboard, create an index named telegram. Choose a dimension that matches your selected OpenAI embedding model. In n8n, open each Pinecone node, click Create new credential, and paste your Pinecone API key and environment.
  7. Set up Groq: Get your API key from the Groq console. In the Groq Chat Model node, create a new credential, paste the key, and confirm the model is set to llama-3.1-70b-versatile.
  8. Review the If node conditions so messages with a document route to the file path and plain text goes to the Q&A path. Keep the default JSON paths if you used the provided template.
  9. Check the Code node that sets application/pdf. Leave it as is unless your files use other formats.
  10. Validate ingestion: Send a PDF to your bot. You should receive a reply showing how many pages were saved to Pinecone. If it fails, confirm the Pinecone index name is telegram and that your OpenAI credentials are active.
  11. Validate Q&A: Send a question that is covered by the uploaded PDF. The bot should reply with a clear answer. If the answer is empty or generic, make sure documents were loaded and that embeddings are created without errors.
  12. Troubleshoot: If Telegram does not receive messages, recheck the bot token and ensure the workflow is active. If Pinecone insert fails, verify API key, environment, and index dimension. If answers look off, reduce chunk size or increase overlap to improve retrieval quality.
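Step 12 suggests tuning chunk size and overlap. To see how the 3000/200 splitter settings interact, here is a simplified character-only sketch. It is not the Recursive Character Text Splitter itself (which also prefers paragraph and sentence boundaries); `chunkText` is a hypothetical helper that only demonstrates the size/overlap arithmetic.

```javascript
// Simplified illustration of how a 3000-character chunk size with a
// 200-character overlap partitions a document. Consecutive chunks start
// (size - overlap) characters apart, so each shares its last 200
// characters with the start of the next.
function chunkText(text, size = 3000, overlap = 200) {
  const chunks = [];
  const step = size - overlap; // 2800 with the defaults
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // final chunk reached the end
  }
  return chunks;
}

const doc = "x".repeat(7000);
const parts = chunkText(doc);
// 7000 chars with step 2800 → chunks start at 0, 2800, 5600 → 3 chunks
console.log(parts.length); // 3
```

Smaller chunks make retrieval more precise but lose surrounding context; the overlap keeps sentences that straddle a boundary retrievable from either side.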

Tools Required

n8n

$24 / mo or $20 / mo billed annually to use n8n in the cloud. However, the local or self-hosted n8n Community Edition is free.

Groq

Sign up

Free tier: $0, API key usable via API (rate‑limited)

OpenAI

Sign up

Pay-as-you-go: GPT-5 at $1.25 per 1M input tokens and $10 per 1M output tokens

Pinecone

Sign up

Starter (Free): $0 / mo; includes 2 GB storage, 2M write units / mo, 1M read units / mo, up to 5 indexes; API access.

Telegram

Sign up

Free: $0, Telegram Bot API usage is free for developers

Similar Templates

Join Futurise to access 1,200+ automation templates

Get instant access to ready-made automation workflows for n8n, Make.com, AI agents, and more. Download, customise, and deploy in minutes.