>_TheQuery

SVM

Fundamentals

Support Vector Machine: a supervised learning algorithm that finds the optimal boundary (hyperplane) separating data into classes, effective for classification and regression tasks.

A Support Vector Machine (SVM) is a supervised machine learning algorithm that finds the optimal hyperplane to separate data points into distinct classes. The "optimal" hyperplane is the one that maximizes the margin, the distance between the decision boundary and the nearest data points from each class. These closest points are called support vectors, and they alone determine the position of the boundary.
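The idea above can be seen in a few lines of scikit-learn. This is a minimal sketch, not a recipe: the toy dataset and the choice of C=1.0 are illustrative assumptions. After fitting, the model exposes exactly the points described above via `support_vectors_`.

```python
# Minimal linear SVM on a toy 2D dataset (illustrative data).
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [2, 0],   # class 0
     [3, 3], [4, 2], [4, 4]]   # class 1
y = [0, 0, 0, 1, 1, 1]

# kernel="linear" fits a maximum-margin hyperplane directly in input space.
clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The support vectors are the training points nearest the boundary;
# they alone determine where the hyperplane sits.
print(clf.support_vectors_)

pred = clf.predict([[0.5, 0.5], [3.5, 3.5]])
```

Points deep inside either class do not appear in `support_vectors_`; removing them and refitting would leave the boundary unchanged, which is what "support vectors alone determine the position of the boundary" means in practice.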

SVMs handle non-linearly separable data through the kernel trick, a mathematical technique that implicitly maps data into a higher-dimensional space where a linear boundary can separate the classes. Rather than computing that mapping, the algorithm replaces every dot product in its optimization with a kernel function evaluation, which measures similarity in the higher-dimensional space directly. Common kernels include polynomial, radial basis function (RBF), and sigmoid. This lets SVMs learn complex decision boundaries without ever materializing the transformation, making them powerful for tasks where classes overlap in the original feature space.

Before deep learning dominated, SVMs were among the most widely used algorithms for text classification, image recognition, and bioinformatics. They remain relevant for problems with small to medium datasets where deep learning would overfit, for high-dimensional data like genomics, and in situations where model interpretability matters. SVMs are also used as baselines in research to benchmark newer approaches. While neural networks have overtaken them on large-scale tasks, SVMs remain a fundamental algorithm in the machine learning toolkit.

Last updated: February 27, 2026