Run fair split tests on your chat assistant prompts. Each chat session is assigned to either a baseline prompt or an alternative prompt and stays on that assignment for the whole conversation. This gives product and marketing teams clean data for comparing tone, instructions, and style, because no session mixes results from both prompts.
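The core idea of sticky assignment can be sketched in a few lines. This is a minimal illustration using an in-memory map; in the workflow itself the assignment is persisted in Supabase, and the function and variable names here are illustrative:

```javascript
// Sticky per-session variant assignment: decided randomly once,
// then returned unchanged for every later message in the session.
const assignments = new Map(); // stands in for the persistent store

function getVariant(sessionId) {
  if (!assignments.has(sessionId)) {
    // 50/50 split, decided exactly once per session
    const variant = Math.random() < 0.5 ? 'baseline' : 'alternative';
    assignments.set(sessionId, variant);
  }
  return assignments.get(sessionId); // stable for the whole conversation
}
```

Because the random draw happens only on first contact, every follow-up message in a session sees the same prompt, which is what keeps the comparison fair.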
A chat message triggers the flow. A Set node stores both prompt versions. Supabase then checks a table called split_test_sessions for the session ID; if the session is new, it is randomly assigned to one of the two prompts and the assignment is saved. Another Set node selects the prompt that matches the assignment. The AI Agent answers with the OpenAI Chat Model and saves conversation history in Postgres so the bot remembers context. Because each session stays on one prompt, results are easier to measure.
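The lookup-or-assign step described above can be sketched as follows. The Supabase table is mocked here as an in-memory array; the table name split_test_sessions comes from the workflow, while the column names (session_id, variant) and the prompt texts are placeholder assumptions:

```javascript
// Mock of the Supabase split_test_sessions table (column names illustrative).
const splitTestSessions = [];

// Stand-ins for the two prompt versions from the Define Path Values node.
const prompts = {
  baseline: 'You are a concise, formal assistant.',     // placeholder text
  alternative: 'You are a friendly, casual assistant.', // placeholder text
};

function resolvePrompt(sessionId) {
  // Check whether this session already has an assignment.
  let row = splitTestSessions.find(r => r.session_id === sessionId);
  if (!row) {
    // New session: assign a variant at random and persist it.
    row = {
      session_id: sessionId,
      variant: Math.random() < 0.5 ? 'baseline' : 'alternative',
    };
    splitTestSessions.push(row); // in n8n this would be a Supabase insert
  }
  // Return the prompt that the AI Agent should use for this session.
  return prompts[row.variant];
}
```

In the actual workflow the find/insert is done by Supabase nodes and the final selection by a Set node; this sketch just shows the decision logic end to end.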
To set it up, connect Supabase, OpenAI, and your Postgres database. Update the baseline and alternative prompts in the Define Path Values node, activate the workflow, and test in the n8n chat. Expect faster experiments, less manual work, and clearer insight into which prompt leads to better outcomes, such as higher engagement or answer quality.
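Once sessions accumulate, comparing the two prompts comes down to aggregating a metric per variant. A hypothetical analysis step is sketched below; the rows and the rating field are illustrative (the workflow itself only stores assignments and chat history, so you would need to collect a quality signal yourself):

```javascript
// Average a per-session score (e.g. a user rating) for each variant.
// Input rows are assumed to look like { variant: 'baseline', rating: 4 }.
function summarize(rows) {
  const totals = {};
  for (const { variant, rating } of rows) {
    totals[variant] ??= { sum: 0, n: 0 };
    totals[variant].sum += rating;
    totals[variant].n += 1;
  }
  // Reduce each variant's running totals to a mean score.
  const out = {};
  for (const [variant, { sum, n }] of Object.entries(totals)) {
    out[variant] = sum / n;
  }
  return out;
}
```

Because assignments are sticky, every session contributes to exactly one variant's average, so the two means are directly comparable.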