Zero-shot classification is a technique that allows a model to classify text into categories it was never explicitly trained on. Unlike traditional classifiers that require labeled training data for each category, a zero-shot model can generalize to any set of labels at inference time. You simply provide the text and the categories you want, and the model determines which ones fit best.
This is possible because the model was trained on natural language inference (NLI), the task of determining whether a hypothesis follows from a premise. Zero-shot classification repurposes an NLI model: for each candidate label, the input text (the premise) is paired with a hypothesis like “This example is [label].” The model then predicts whether the hypothesis is entailed (true), contradicted (false), or neutral, and the entailment score for each label becomes its classification confidence.
This means the model runs one forward pass per candidate label: more labels mean more computation, but each label is evaluated independently. The NLI formulation is what makes the approach “zero-shot” — the model has never seen your specific labels during training, but it understands language well enough to judge whether each hypothesis is semantically compatible with the text.
In single-label mode (default), the scores are normalized with softmax so they sum to 100%. The model picks the single best-fitting category. In multi-label mode, each label is scored independently using sigmoid, so multiple labels can have high scores simultaneously. Use multi-label when text can belong to several categories at once — for example, a news article might be about both “technology” and “business”.
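The difference between the two modes comes down to how per-label entailment logits are normalized. A minimal sketch with made-up logit values (real scores come from the model):

```javascript
// Softmax: scores compete across labels and sum to 1 (single-label mode).
function softmax(logits) {
  const max = Math.max(...logits);               // subtract max for numerical stability
  const exps = logits.map((z) => Math.exp(z - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Sigmoid: each score is independent of the others (multi-label mode).
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z));
}

const entailmentLogits = [2.0, 1.5, -1.0];       // one logit per candidate label

const singleLabel = softmax(entailmentLogits);   // sums to 1; one clear winner
const multiLabel = entailmentLogits.map(sigmoid); // first two labels both score high
```

With the logits above, softmax forces the labels to share probability mass, while sigmoid lets both “technology”-like and “business”-like labels score above 0.8 at the same time.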
Zero-shot classification demonstrates one of the most powerful capabilities of modern language models: transfer learning. A model trained on one task (NLI) can be repurposed for a completely different task (classification) without any additional training. This eliminates the need to collect and label training data for every new classification problem, making NLP accessible to anyone who can describe their categories in natural language.
Enter any text in the input area and define your categories as a comma-separated list. Click “Classify” to run the model. Try the preset examples to see how the model handles sentiment analysis, topic classification, intent detection, and emotion recognition — all with the same underlying model and no task-specific training. The model runs entirely in your browser using Transformers.js — your text never leaves your device.
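The same classification can be run programmatically with the Transformers.js pipeline API. A minimal sketch — the model name is an assumption (any NLI model exported for Transformers.js should work), and running it downloads model weights on first use:

```javascript
// Sketch: zero-shot classification with Transformers.js.
// Assumes the @huggingface/transformers package is installed;
// the model identifier below is a placeholder you can swap.
import { pipeline } from "@huggingface/transformers";

const classifier = await pipeline(
  "zero-shot-classification",
  "Xenova/nli-deberta-v3-xsmall"
);

const text = "Quarterly earnings beat expectations after the chip launch.";
const labels = ["technology", "business", "sports"];

// multi_label: true scores each label independently,
// so "technology" and "business" can both rank high.
const result = await classifier(text, labels, { multi_label: true });
console.log(result.labels, result.scores);
```

Omitting `multi_label` (or setting it to `false`) gives the single-label behavior described above, with scores normalized to sum to 1.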