This page demonstrates basic classifiers: k-nearest neighbors, decision trees, and linear classifiers. The linear classifiers shown here are naive Bayes, logistic regression, and the support vector machine. Naive Bayes belongs to a larger family of Bayes classifiers that includes linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). Logistic regression belongs to a larger family called generalized linear models.
Linear classifiers base their decision on a linear combination of the features. Any linear model can be converted into a more flexible model, such as a quadratic one, by engineering new features from the old. For example, given x1 and x2, create x3 = x1^2, x4 = x2^2, and x5 = x1*x2. Fitting a linear model to these five features yields a quadratic model in the original two. Nonetheless, the decision is still a linear combination of features; a sketch of this trick follows.
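Here is a minimal sketch of this feature-engineering trick, assuming scikit-learn is available; the dataset and model choices are illustrative assumptions, not this page's demo code:

```python
# A linear classifier on engineered quadratic features (sketch; assumes
# scikit-learn and a toy dataset of two concentric circles).
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Two concentric circles: no straight line in (x1, x2) separates the classes.
X, y = make_circles(n_samples=300, noise=0.1, factor=0.4, random_state=0)

# Plain linear model vs. the same model on (x1, x2, x1^2, x1*x2, x2^2).
linear = LogisticRegression().fit(X, y)
quadratic = make_pipeline(PolynomialFeatures(degree=2),
                          LogisticRegression()).fit(X, y)

print("linear accuracy:   ", linear.score(X, y))     # near chance (~0.5)
print("quadratic accuracy:", quadratic.score(X, y))  # near 1.0
```

The quadratic model is still fit with the same linear machinery; only the feature set changed.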
It can be shown that naive Bayes, logistic regression, and support vector machines are all similar mathematically: they all make their decision based on a linear combination of features. For example, given x1 and x2, they predict y is blue if and only if w0 + w1*x1 + w2*x2 > 0. They vary only in how they calculate the weights w0, w1, w2, as the sketch below illustrates.
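A minimal sketch of this point, assuming scikit-learn; naive Bayes is omitted here only because scikit-learn does not expose its weights as coef_/intercept_, though its fitted log-probabilities play the same role:

```python
# Two linear classifiers fit to the same data produce the same form of
# decision rule, w0 + w1*x1 + w2*x2 > 0, with different weights (sketch;
# assumes scikit-learn and a toy two-blob dataset).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

for model in (LogisticRegression(), LinearSVC()):
    model.fit(X, y)
    w0 = model.intercept_[0]   # bias weight w0
    w1, w2 = model.coef_[0]    # feature weights w1, w2
    print(f"{type(model).__name__}: predict class 1 iff "
          f"{w0:+.2f} {w1:+.2f}*x1 {w2:+.2f}*x2 > 0")
```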
More advanced classifiers are created by combining multiple instances of these basic classifiers. Examples include neural networks, random forests, bagging, boosting, and stacking. These advanced classifiers create decision boundaries flexible enough to model virtually any pattern; a sketch of the ensemble idea follows.
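A minimal sketch of combining basic classifiers, assuming scikit-learn; the random forest combines many decision trees, and the voting ensemble combines different basic classifiers (the dataset is an illustrative assumption):

```python
# Ensembles built from basic classifiers (sketch; assumes scikit-learn
# and a toy two-moons dataset, not this page's demo).
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=500, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging of trees (random forest) and voting over different basic models.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
voter = VotingClassifier([("knn", KNeighborsClassifier()),
                          ("tree", DecisionTreeClassifier(random_state=0)),
                          ("logreg", LogisticRegression())])

for name, model in (("random forest", forest), ("voting ensemble", voter)):
    model.fit(X_train, y_train)
    print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
```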
Click here to play with a neural network. Click here to play with an application of neural networks.