Loss Function
Fundamentals
A function that measures the difference between a model's predictions and the actual target values, guiding the optimization process during training.
A loss function (also called a cost function or objective function) quantifies how well or poorly a model's predictions match the expected outputs. During training, the goal is to minimize this function, which drives the model to learn the correct mapping from inputs to outputs.
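The training dynamic described above can be sketched with gradient descent on a one-parameter model. This is a minimal illustration in plain Python, assuming a linear model y = w * x, a hand-written analytic gradient, and hypothetical learning-rate and step-count values, not a production training loop:

```python
# Training data generated by the "true" mapping y = 3 * x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

def mse_loss(w):
    """Mean squared error of predictions w * x against targets y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def mse_grad(w):
    """Analytic gradient of the MSE with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

w = 0.0                  # start far from the true parameter
lr = 0.01                # learning rate (illustrative choice)
for _ in range(500):     # gradient descent: step against the gradient
    w -= lr * mse_grad(w)

print(round(w, 3))       # w converges near the true slope, 3.0
```

Because each step moves the parameter in the direction that reduces the loss, minimizing the loss is what recovers the correct input-to-output mapping.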
Common loss functions include mean squared error (MSE) for regression tasks, cross-entropy loss for classification tasks, and specialized losses like contrastive loss for embedding learning or adversarial loss for GANs. The choice of loss function encodes assumptions about the task and directly affects what the model learns to optimize.
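The first two losses mentioned above are simple enough to write out directly. A sketch in plain Python, assuming a regression case for MSE and the binary form of cross-entropy with predicted probabilities already in (0, 1):

```python
import math

def mse(preds, targets):
    """Mean squared error: average squared difference (regression)."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def binary_cross_entropy(probs, labels):
    """Cross-entropy for binary labels in {0, 1} and predicted probabilities."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / len(probs)

print(mse([2.5, 0.0], [3.0, -0.5]))              # -> 0.25
print(binary_cross_entropy([0.9, 0.2], [1, 0]))  # low loss: confident and correct
```

Note how the two encode different assumptions: MSE penalizes large numeric deviations quadratically, while cross-entropy penalizes confident wrong probability estimates especially harshly (the log term diverges as a wrong prediction approaches certainty).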
Designing appropriate loss functions is a critical aspect of machine learning engineering. A poorly chosen loss function can lead to models that optimize for the wrong objective, producing technically low-loss outputs that fail to meet real-world requirements. Researchers continue to develop novel loss functions tailored to specific domains, including perceptual losses for image generation and reward-based losses for reinforcement learning from human feedback.
Last updated: February 20, 2026