
Kimi K2

LLM Models

Moonshot AI's open-source 1 trillion parameter MoE model with 32B active parameters, whose Thinking variant outperforms GPT-5 and Claude Sonnet 4.5 on several reasoning benchmarks.

Kimi K2, released by Beijing-based Moonshot AI in July 2025, is a trillion-parameter open-source model built on a Mixture-of-Experts (MoE) architecture with 32 billion parameters activated per token. Designed specifically for agentic tasks, K2 combines massive scale with efficient inference through sparse activation.
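The sparse-activation idea behind this scale/efficiency trade-off can be sketched in a few lines: a learned router scores all experts for each token, but only the top-k expert networks actually run. The sketch below is a toy illustration of that mechanism, not K2's real configuration; the dimensions, expert count, and top-k value are made-up small numbers.

```python
import numpy as np

# Toy sketch of sparse Mixture-of-Experts (MoE) routing: the mechanism that
# lets a model hold a huge total parameter count while activating only a
# small fraction per token. All sizes below are illustrative, NOT K2's config.

rng = np.random.default_rng(0)

D_MODEL = 16    # hidden size (toy)
N_EXPERTS = 8   # total experts (real MoE models use far more)
TOP_K = 2       # experts activated per token (sparse activation)

# Each "expert" is stood in for by a single weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
           for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) / np.sqrt(D_MODEL)

def moe_forward(x):
    """Route one token vector x through only its top-k experts."""
    logits = x @ router_w              # router score per expert, shape (N_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]  # indices of the k highest-scoring experts
    # Softmax over just the selected scores -> mixing weights for those experts.
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()
    # Only TOP_K of the N_EXPERTS weight matrices are touched for this token,
    # so compute scales with active parameters, not total parameters.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Per token, only 2 of the 8 expert matrices are multiplied, which is the same ratio-of-active-to-total-parameters logic that lets K2 run 1T total parameters at roughly 32B activated.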

The Kimi K2 Thinking variant outperforms GPT-5 and Claude Sonnet 4.5 on several key benchmarks. On BrowseComp, K2 Thinking scores 60.2%, decisively beating GPT-5 (54.9%) and Claude Sonnet 4.5 (24.1%). It edges out GPT-5 on GPQA Diamond (85.7% vs. 84.5%) and matches it on AIME 2025 and HMMT 2025. On MATH-500, K2 achieves 97.4%, ahead of GPT-4o and Claude Sonnet 3.5, and in coding it scores 65.8% on SWE-bench.

Kimi K2 is notable for a reported training cost of just $4.6 million, a fraction of what Western labs typically spend on frontier models. That cost efficiency, combined with strong benchmark performance and open-source availability, has made it one of the most significant Chinese AI models and a demonstration that frontier-level capabilities can be achieved on comparatively modest budgets.

Last updated: February 22, 2026