>_TheQuery

OpenAI o4-mini

LLM Models

OpenAI's smaller reasoning model released in April 2025, achieving 92.7% on AIME 2025 and 99.5% with Python interpreter access.

OpenAI o4-mini is a compact reasoning model released in April 2025, optimized for speed and cost while maintaining exceptional reasoning capabilities. With a 200K token context window matching its larger sibling o3, o4-mini demonstrates that effective reasoning doesn't always require the largest models.

Remarkably, o4-mini achieves 92.7% on AIME 2025, surpassing the larger o3 model on this challenging mathematics benchmark. On SWE-bench, it scores 68.1%, indicating strong software engineering capability. When given access to a Python interpreter, its AIME 2025 score rises to 99.5%, illustrating how much tool-augmented reasoning can add: instead of doing long arithmetic in text, the model writes code and lets the interpreter compute exactly.
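The tool-augmented setup can be pictured as a simple loop: the model emits Python, a harness executes it, and the program's output becomes the answer. The sketch below is illustrative only, assuming a stubbed model in place of a real o4-mini API call, with `run_python`, `stub_model`, and `answer_with_tool` as hypothetical names; a real harness would also sandbox execution.

```python
import contextlib
import io

def run_python(code: str) -> str:
    """Execute model-generated Python and capture its stdout.
    (Hypothetical helper; a production harness would run this
    in an isolated container, not via bare exec.)"""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

def stub_model(question: str) -> str:
    """Stand-in for an o4-mini call. Rather than estimating the
    answer in text, the model emits code that computes it exactly."""
    # Hard-coded for one AIME-style question, purely for illustration.
    return "print(sum(k * k for k in range(1, 101)) % 1000)"

def answer_with_tool(question: str) -> str:
    code = stub_model(question)   # 1. model proposes Python code
    return run_python(code)       # 2. harness runs it; output is the answer

print(answer_with_tool(
    "Find the last three digits of 1^2 + 2^2 + ... + 100^2."
))  # → 350
```

The exact-computation step is the point: the interpreter turns a multi-step arithmetic problem, where a text-only model can slip, into a single deterministic evaluation.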

The model represents a significant step in efficient AI design, offering strong math and coding performance relative to its size and cost. This makes it an attractive option for applications that need reasoning capability but operate under budget or latency constraints, and it shows that smaller models with the right training can match or exceed larger counterparts on specific tasks.

Last updated: February 22, 2026