Use this n8n workflow to spot images that do not match your crop library. You send an image URL and get back a clear text result that names the crop the image most likely matches, or flags it as an outlier. Data teams and ops leads use it to keep training data clean and to screen uploads fast.
The flow starts with an Execute Workflow Trigger that receives the image URL. A Voyage AI call turns the image into a vector using the Voyage multimodal embedding model. The workflow then queries your Qdrant collection and checks similarity against the medoid for each crop class, using the thresholds stored with those centers. Set nodes hold your Qdrant URL, collection name, and center types, so you can switch between the medoid and the text-anchor medoid. A Code node compares the highest similarity score against the threshold for the selected center type and returns a simple text result. The number of crop classes is pulled from Qdrant, so the query always covers every label.
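The threshold comparison in the final code step can be sketched in plain Python. This is a minimal illustration, not the workflow's actual node code: the labels, scores, and the payload shape of the thresholds are hypothetical stand-ins for what your Qdrant collection would hold.

```python
def classify(scores, thresholds, center_type="medoid"):
    """Return a text result naming the best-matching crop class, or
    flagging the image as an outlier if the top similarity score falls
    below that class's stored threshold.

    scores:      {label: similarity of the query vector to that class's center}
    thresholds:  {label: {center_type: minimum accepted similarity}}
    center_type: which stored center to compare against, e.g. "medoid"
                 or "text_anchor_medoid" (hypothetical payload keys).
    """
    best_label = max(scores, key=scores.get)
    best_score = scores[best_label]
    if best_score >= thresholds[best_label][center_type]:
        return f"match: {best_label} (score {best_score:.2f})"
    return f"outlier: closest is {best_label} (score {best_score:.2f} below threshold)"

# Hypothetical similarity scores against three crop-class medoids
scores = {"maize": 0.91, "wheat": 0.84, "soy": 0.79}
thresholds = {
    "maize": {"medoid": 0.88, "text_anchor_medoid": 0.85},
    "wheat": {"medoid": 0.87, "text_anchor_medoid": 0.84},
    "soy":   {"medoid": 0.86, "text_anchor_medoid": 0.83},
}
print(classify(scores, thresholds))
```

Switching the `center_type` argument mirrors the Set-node toggle between medoid and text-anchor medoid: the same comparison runs, only the stored threshold it reads changes.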
You will need a Qdrant cluster with your crop data and thresholds already loaded, plus a Voyage AI API key. Expect faster reviews, fewer manual checks, and easy scaling as you add new crop classes. This setup works well for dataset curation, field upload screening, and quality checks for labeling teams.