Chain of Thought
Fundamentals

A prompting and reasoning technique where an AI model generates intermediate reasoning steps before arriving at a final answer, improving accuracy on complex tasks.
Chain of thought (CoT) is a technique in which a large language model produces explicit intermediate reasoning steps on the way to solving a problem, rather than jumping directly to an answer. First demonstrated in a 2022 paper by Wei et al. at Google using few-shot exemplars with worked solutions, the approach was soon extended by Kojima et al., who showed that simply appending "Let's think step by step" to a prompt could dramatically improve model performance on math, logic, and common-sense reasoning tasks.
Chain of thought works because it breaks complex problems into manageable sub-steps, allowing the model to use its own generated text as working memory. A math word problem that a model gets wrong when asked for a direct answer often becomes solvable when the model is prompted to show its work. The technique can be applied through few-shot prompting (providing examples with step-by-step solutions) or zero-shot prompting (simply asking the model to reason through the problem).
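The two prompting styles above can be sketched as plain prompt templates. This is an illustrative sketch only: the example questions, the worked solution, and the `build_prompt` helper are hypothetical, and the actual model call is omitted.

```python
# Two common ways to elicit chain-of-thought reasoning via prompting.
# Few-shot CoT: include an example question with a worked, step-by-step
# solution. Zero-shot CoT: simply ask the model to reason step by step.
# (The example problems here are made up for illustration.)

FEW_SHOT_COT = """\
Q: A farmer has 15 sheep and buys 8 more. How many sheep does he have?
A: The farmer starts with 15 sheep. He buys 8 more, so 15 + 8 = 23.
The answer is 23.

Q: {question}
A:"""

ZERO_SHOT_COT = """\
Q: {question}
A: Let's think step by step."""


def build_prompt(question: str, few_shot: bool = True) -> str:
    """Return a chain-of-thought prompt for `question` in either style."""
    template = FEW_SHOT_COT if few_shot else ZERO_SHOT_COT
    return template.format(question=question)


prompt = build_prompt(
    "A train travels 60 miles in 1.5 hours. What is its average speed?",
    few_shot=False,
)
print(prompt)
```

Either prompt is then sent to the model as-is; the few-shot variant tends to steer the model toward the exemplar's solution format, while the zero-shot variant relies on the trailing cue alone.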
The concept has evolved beyond a prompting trick into a core architectural feature of reasoning models. OpenAI's o-series models and similar reasoning-focused systems generate internal chains of thought automatically, using additional inference-time compute to think through problems before responding. This has led to a new scaling paradigm called inference-time scaling or test-time compute, where model capability improves not just by training larger models but by letting them think longer on harder problems. Chain of thought has become foundational to how modern AI systems handle tasks requiring multi-step logic, from mathematical proofs to complex code generation.
Last updated: February 27, 2026