>_TheQuery

Artificial Intelligence

Fundamentals

The field of computer science focused on building systems that can perform tasks typically requiring human intelligence, including reasoning, learning, perception, and decision-making.

Artificial intelligence (AI) is a broad field of computer science concerned with building systems capable of performing tasks that traditionally require human cognition. These tasks include understanding language, recognizing images, making decisions, solving problems, and learning from experience. The field encompasses everything from rule-based expert systems to modern deep learning models that learn patterns from massive datasets.
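To make the "rule-based expert systems" end of that spectrum concrete, here is a minimal sketch of forward chaining, the inference style those systems used: hand-written if-then rules are applied repeatedly until no new facts can be derived. The rules and facts below are invented for illustration, not taken from any real system.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived.

    facts: iterable of known facts (strings)
    rules: list of (premises, conclusion) pairs, where premises is a set
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only if all its premises are known
            # and it would add something new.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base: (set of premises, conclusion)
RULES = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat"}, "is_mammal"),
]

derived = forward_chain({"has_fur", "says_meow"}, RULES)
# derived now also contains "is_cat" and "is_mammal"
```

Unlike the learning-based approaches that dominate today, every piece of "intelligence" here was authored by a human expert, which is exactly the limitation that pushed the field toward systems that learn patterns from data.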

AI is typically divided into narrow AI (systems designed for specific tasks like image classification or language translation) and general AI (hypothetical systems with human-level reasoning across all domains). All current AI systems, including the most capable large language models, are narrow AI: they excel at defined tasks but lack the flexible, generalizable intelligence that humans possess. The pursuit of artificial general intelligence (AGI) remains an active research goal, with significant debate about timelines and feasibility.

The modern AI landscape is dominated by machine learning approaches, particularly deep learning and transformer-based architectures that power large language models like GPT, Claude, and Gemini. These models have brought AI from a research discipline into mainstream daily use through products like ChatGPT, Claude Code, and AI-powered features embedded in search engines, email, and productivity tools. The rapid acceleration of AI capabilities since 2022 has triggered broad economic and societal discussions about automation, employment, safety, and the appropriate level of human oversight over increasingly autonomous systems.

Last updated: February 27, 2026