>_TheQuery

Prompt Engineering

NLP

The practice of carefully crafting input text to elicit desired behavior from large language models, including techniques like few-shot examples, chain-of-thought reasoning, and system instructions.

Prompt engineering is the art of designing inputs that guide LLMs toward producing useful, accurate, and appropriately formatted outputs. Since LLMs are next-token predictors, the way a question or instruction is phrased dramatically affects the quality of the response. Small changes in wording can cause large changes in output, making prompt design both powerful and brittle.

Key techniques include zero-shot prompting (direct instructions), few-shot prompting (providing examples of desired input-output pairs), chain-of-thought (asking the model to show its reasoning step-by-step), and system prompts (setting context and behavioral guidelines). More advanced approaches include retrieval-augmented prompts (injecting relevant context), structured output instructions (requesting JSON or specific formats), and prompt chaining (breaking complex tasks into sequential simpler prompts).
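Two of these techniques can be sketched as plain string templates. This is a minimal illustration, not any particular library's API: the helper names `build_few_shot` and `build_cot` are hypothetical, and no model is actually called.

```python
# Hypothetical helpers showing the shape of few-shot and
# chain-of-thought prompts as plain strings. No LLM API is used.

def build_few_shot(instruction: str,
                   examples: list[tuple[str, str]],
                   query: str) -> str:
    """Few-shot prompt: an instruction, then input/output example
    pairs, then the new input the model should complete."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

def build_cot(question: str) -> str:
    """Chain-of-thought prompt: ask the model to reason step by
    step before committing to a final answer."""
    return (f"{question}\n\n"
            "Think through the problem step by step, "
            "then state the final answer.")

prompt = build_few_shot(
    "Classify the sentiment of each review as positive or negative.",
    [("Great value, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "Arrived late but the quality is excellent.",
)
print(prompt)
```

The few-shot examples implicitly define both the task and the output format, which is why the pattern often outperforms a bare instruction on classification and extraction tasks.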

While prompt engineering has become a critical skill for working with LLMs, it has limitations. Prompting is inherently brittle: the same prompt can produce different results across model versions, or even across runs of the same model. For production systems, prompts should be versioned like code, tested against regression suites, and monitored for output quality. When consistent, domain-specific behavior is needed, fine-tuning often provides more reliable results than prompt engineering alone.
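The "version prompts like code" advice can be made concrete with a small regression-check sketch. Everything here is illustrative: `PROMPTS`, `regression_check`, and the test-case shape are made-up conventions, and `fake_llm` is a stub standing in for a real model call.

```python
# Sketch of versioned prompts plus a tiny regression suite.
# `fake_llm` is a stand-in for a real model API call.

PROMPTS = {
    "summarize_v1": "Summarize the following text in one sentence:\n{text}",
    "summarize_v2": (
        "You are a concise editor. Summarize the text below in one "
        "sentence, preserving any numbers exactly.\n\nText:\n{text}"
    ),
}

def fake_llm(prompt: str) -> str:
    # Stub: echoes the last line of the prompt instead of calling a model.
    return prompt.splitlines()[-1][:80]

def regression_check(prompt_id: str, cases: list[dict]) -> list[str]:
    """Run every test case against a prompt version and collect
    failures, so a prompt edit can't silently degrade outputs."""
    failures = []
    template = PROMPTS[prompt_id]
    for case in cases:
        output = fake_llm(template.format(**case["inputs"]))
        for required in case["must_contain"]:
            if required not in output:
                failures.append(f"{prompt_id}: missing {required!r}")
    return failures

cases = [{"inputs": {"text": "Revenue grew 12% in Q3."},
          "must_contain": ["12%"]}]
print(regression_check("summarize_v2", cases))
```

In a real pipeline the stub would be a model call and the assertions would cover format, length, and factual anchors; the key point is that a prompt change runs through the suite before it ships, just like a code change.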

Last updated: February 22, 2026