Underfitting
When a model is too simple to capture the underlying patterns in the data, resulting in poor performance on both training and test sets.
Underfitting occurs when a model has high bias -- it cannot represent the complexity of the true relationship between inputs and outputs. A classic example is fitting a straight line (degree-1 polynomial) to data that follows a sine wave: the model is fundamentally incapable of capturing the curved pattern, resulting in high error on both training and test data.
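The straight-line-vs-sine example can be sketched with NumPy's polynomial fitting. This is a minimal illustration; the sample sizes, noise level, and random seed are arbitrary choices, not values from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data that follows a sine wave, with a little observation noise
x_train = np.sort(rng.uniform(0, 2 * np.pi, 100))
y_train = np.sin(x_train) + rng.normal(0, 0.1, 100)
x_test = np.sort(rng.uniform(0, 2 * np.pi, 100))
y_test = np.sin(x_test) + rng.normal(0, 0.1, 100)

# Degree-1 polynomial (a straight line): fundamentally too simple
# to represent the curved sine pattern -- this is high bias.
line = np.poly1d(np.polyfit(x_train, y_train, deg=1))

mse_train = np.mean((line(x_train) - y_train) ** 2)
mse_test = np.mean((line(x_test) - y_test) ** 2)
# Both errors are high AND similar to each other -- the signature
# of underfitting, as opposed to the train/test gap of overfitting.
```

Because the model cannot represent the signal at all, making the training set larger would not help here; only a more expressive model would.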
Underfitting is the opposite end of the bias-variance spectrum from overfitting. While overfitting shows low training error but high test error (the model memorized noise), underfitting shows high error everywhere (the model cannot even learn the signal). Signs of underfitting include training loss that plateaus at a high value, similar training and validation errors (both poor), and predictions that miss systematic patterns visible in the data.
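These diagnostic signs can be folded into a simple rule of thumb. The helper below is a hypothetical sketch, and the thresholds are illustrative assumptions, not canonical values:

```python
def diagnose(train_err, val_err, baseline, gap_tol=0.1):
    """Classify a fit from training/validation error (illustrative heuristic).

    baseline: an acceptable error level for the task.
    gap_tol: how large a train/validation gap counts as overfitting.
    """
    if train_err > baseline:
        # High error everywhere: the model cannot even learn the signal
        return "underfitting"
    if val_err - train_err > gap_tol:
        # Low training error but a large validation gap: memorized noise
        return "overfitting"
    return "good fit"
```

For example, `diagnose(0.5, 0.55, baseline=0.2)` flags underfitting (both errors high and similar), while `diagnose(0.05, 0.4, baseline=0.2)` flags overfitting.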
To address underfitting, increase model complexity: add more features, use a more expressive model (e.g., switch from linear to polynomial, or increase neural network depth/width), reduce regularization strength, or train for more epochs. Feature engineering can also help by making patterns more accessible to the model. The key diagnostic is comparing training error to an acceptable baseline -- if training error is too high, the model is underfitting.
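The "use a more expressive model" fix can be demonstrated on the same sine data by raising the polynomial degree. Again a minimal sketch with arbitrary data choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(0, 0.1, 200)

def train_mse(degree):
    # Fit a polynomial of the given degree and report its training error
    model = np.poly1d(np.polyfit(x, y, deg=degree))
    return np.mean((model(x) - y) ** 2)

mse_linear = train_mse(1)  # underfit: training error stays high
mse_poly = train_mse(5)    # more expressive: error drops toward the noise floor
```

The degree-5 model has enough capacity to follow the curve, so its training error falls close to the irreducible noise level, while the linear model's cannot; comparing training error against such a baseline is exactly the diagnostic described above.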
Last updated: February 22, 2026