Zero-Shot / Few-Shot Learning
Fundamentals

Techniques where a model performs tasks with no examples (zero-shot) or only a handful of examples (few-shot) provided in the prompt, without any additional training.
Zero-shot learning refers to a model's ability to perform a task it was never explicitly trained on, using only a natural language instruction. For example, asking a language model to classify sentiment without providing any labeled examples. The model relies entirely on its pre-trained knowledge to interpret the instruction and generate a correct response.
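A zero-shot prompt can be as simple as an instruction wrapped around the input. The sketch below builds such a prompt for sentiment classification; the exact wording and the `zero_shot_prompt` helper are illustrative assumptions, not a prescribed format:

```python
def zero_shot_prompt(review: str) -> str:
    """Build a zero-shot sentiment-classification prompt.

    The task is described purely in natural language -- no labeled
    examples are included, so the model must rely on its pre-trained
    knowledge to interpret the instruction.
    """
    return (
        "Classify the sentiment of the following review "
        "as Positive or Negative.\n\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

prompt = zero_shot_prompt("The battery life is fantastic.")
print(prompt)
```

The resulting string would be sent to a language model as-is; the trailing "Sentiment:" cue nudges the model to complete the prompt with just the label.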
Few-shot learning extends this by including a small number of input-output examples (typically 1-5) in the prompt to guide the model's behavior. These examples act as implicit instructions, helping the model understand the expected format and reasoning pattern. For instance, showing three examples of translating English to French before asking for a new translation.
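A few-shot prompt prepends those demonstrations before the new input. This sketch assembles one for English-to-French translation; the `few_shot_prompt` helper and the "English:/French:" labels are assumed formatting choices, not a standard:

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt.

    Each (input, output) pair becomes a demonstration block; the new
    input is appended last with its output left blank for the model
    to complete.
    """
    blocks = ["Translate English to French."]
    for english, french in examples:
        blocks.append(f"English: {english}\nFrench: {french}")
    blocks.append(f"English: {query}\nFrench:")
    return "\n\n".join(blocks)

demos = [
    ("Good morning.", "Bonjour."),
    ("Thank you very much.", "Merci beaucoup."),
    ("Where is the station?", "Où est la gare ?"),
]
print(few_shot_prompt(demos, "I would like a coffee."))
```

Because the three demonstrations share one layout, the model can infer both the task and the expected output format from the prompt alone.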
Both techniques became prominent with the release of GPT-3, which demonstrated that large language models could perform competitively on many benchmarks without fine-tuning. Zero-shot and few-shot prompting are now foundational techniques in prompt engineering, allowing developers to adapt powerful models to specific tasks without collecting training data or retraining the model.
Last updated: March 1, 2026