Context Rot
NLP

The phenomenon where large language model performance degrades as the input context grows longer, with the model becoming less accurate at retrieving and reasoning over information in long prompts.
Context rot is a phenomenon where large language models become measurably less effective as the amount of text in their context window increases. Even when information is present in the prompt, models struggle to find and use it reliably as the surrounding context grows. The term was coined in 2025 following research that demonstrated consistent, predictable performance degradation with increasing input length across leading models.
The degradation follows specific patterns. Once context exceeds roughly 50% of the window capacity, models begin favoring recent tokens over earlier ones, and information in the middle of long prompts is the most likely to be lost - a pattern sometimes called the "lost in the middle" effect. Chroma researchers demonstrated this with extended needle-in-a-haystack tests, in which accuracy dropped sharply at larger context lengths even though the target information was explicitly present in the prompt. Lower semantic similarity between the target information and the query accelerates the degradation further.
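A needle-in-a-haystack test can be sketched in a few lines: plant a known fact at a controlled depth inside filler text, ask the model to retrieve it, and record accuracy per depth. The sketch below assumes a caller-supplied `ask_model(prompt) -> str` function standing in for a real LLM call; the function names and scoring heuristic are illustrative, not from any specific benchmark.

```python
# Minimal needle-in-a-haystack probe. `ask_model` is a hypothetical
# callable wrapping whatever LLM the reader wants to test.

def build_niah_prompt(filler_sentences, needle, depth):
    """Insert `needle` at a relative `depth` (0.0 = start, 1.0 = end)
    within the filler text and return the joined haystack."""
    idx = int(len(filler_sentences) * depth)
    body = filler_sentences[:idx] + [needle] + filler_sentences[idx:]
    return " ".join(body)

def run_probe(ask_model, filler, needle, question,
              depths=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Ask the model to retrieve the needle at each depth.
    Returns {depth: bool} marking whether the answer contained
    the needle's key token (a crude containment check)."""
    results = {}
    key = needle.split()[-1].strip(".")
    for d in depths:
        prompt = f"{build_niah_prompt(filler, needle, d)}\n\nQuestion: {question}"
        results[d] = key in ask_model(prompt)
    return results
```

Sweeping both the amount of filler and the depth is what exposes the pattern described above: accuracy tends to dip most for middle depths as the haystack grows.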
Context rot has significant practical implications for retrieval-augmented generation systems and AI agents that accumulate long conversation histories. Solutions being explored include recursive language models (RLMs) that compress and fold context hierarchically rather than processing it as a flat sequence, context engineering techniques that strategically organize information within the prompt, and retrieval strategies that minimize unnecessary context. The recursive language model approach, introduced by Alex Zhang in late 2025, has shown that a smaller model with recursive context folding can outperform larger models on long-context benchmarks while costing less per query.
Last updated: February 28, 2026