Machine Learning Model

How Machine Learning Models Work

Machine Learning Model Representation

Support Vector Machine (SVM)

An SVM classifies data by finding the linear decision boundary (hyperplane) that separates data points of one class from those of the other class. The best hyperplane for an SVM is the one with the largest margin between the two classes when the data is linearly separable. If the data is not linearly separable, a loss function penalizes points on the wrong side of the hyperplane. SVMs sometimes use a kernel transformation to project nonlinearly separable data into a higher-dimensional space where a linear decision boundary can be found.

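As a concrete sketch of this idea, the example below fits a kernel SVM classifier to data that is not linearly separable. The text does not prescribe any tool; Python with scikit-learn and the synthetic two-ring dataset are assumptions chosen only for illustration.

```python
# Hypothetical sketch: kernel SVM classification (assuming scikit-learn).
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric rings: not separable by any straight line in 2-D.
X, y = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps points into a higher-dimensional space
# where a maximum-margin separating hyperplane can be found.
# C controls the penalty for points on the wrong side of the margin.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```
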
SVM regression algorithms work like SVM classification algorithms, but are modified to predict continuous responses. Instead of finding a hyperplane that separates data, they find a model that deviates from the measured data by no more than a small amount, with parameter values that are as small as possible to reduce sensitivity to errors.

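The regression variant can be sketched the same way. In the hypothetical example below (again assuming scikit-learn), the epsilon parameter sets how far predictions may deviate from the measured data without penalty, and C trades off model flatness against larger errors.

```python
# Hypothetical sketch: epsilon-insensitive SVM regression (assuming scikit-learn).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)  # noisy continuous response

# epsilon: deviations smaller than this are not penalized;
# C: trade-off between flatness of the model and tolerance of larger errors.
reg = SVR(kernel="rbf", C=10.0, epsilon=0.1)
reg.fit(X, y)

print("predictions for x = 1, 2, 3:", reg.predict([[1.0], [2.0], [3.0]]))
```
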
SVM model (classification) and SVM regression model (regression)

Decision Tree

A decision tree lets you predict responses to data by following the decisions in the tree from the root (beginning) down to a leaf node. A tree consists of branching conditions where the value of a predictor is compared to a trained weight. The number of branches and the values of weights are determined in the training process. Additional modification, or pruning, may be used to simplify the model.

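To make the root-to-leaf idea concrete, the short sketch below fits a regression tree and prints its learned branching conditions. scikit-learn and the synthetic step-shaped data are assumptions for illustration only; ccp_alpha applies cost-complexity pruning to simplify the tree.

```python
# Hypothetical sketch: a small regression tree (assuming scikit-learn).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = np.where(X.ravel() < 5, 1.0, 3.0) + rng.normal(scale=0.2, size=300)

# max_depth limits how many branching conditions lie between root and leaf;
# ccp_alpha > 0 prunes branches that add little to the fit.
tree = DecisionTreeRegressor(max_depth=3, ccp_alpha=0.01, random_state=0)
tree.fit(X, y)

# Each printed line is a branching condition (predictor compared to a trained
# threshold) or a leaf holding the predicted response.
print(export_text(tree, feature_names=["x"]))
```
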
Regression tree model

Ensemble Trees

In ensemble methods, several weaker decision trees are combined into a stronger ensemble. A bagged decision tree consists of trees that are trained independently on bootstrap samples of the input data.

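A bagged ensemble of trees can be sketched as follows. This is an illustrative scikit-learn example (not named in the text); BaggingRegressor's default base learner is a decision tree, so each member of the ensemble is a tree trained on its own bootstrap sample.

```python
# Hypothetical sketch: bagged decision trees (assuming scikit-learn).
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

# Each of the 100 trees is trained independently on a bootstrap sample
# of the input data; predictions are averaged across the trees.
bagged = BaggingRegressor(n_estimators=100, random_state=0)

print("cross-validated R^2:", cross_val_score(bagged, X, y, cv=5).mean())
```
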
Boosting involves iteratively adding weak learners and adjusting their weights, either emphasizing observations that earlier learners misclassified or fitting each new learner to minimize the mean squared error between the observed response and the aggregated prediction of all previously grown learners.

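The second, error-fitting flavor of boosting can be sketched with gradient-boosted regression trees (again an illustrative scikit-learn example rather than anything specified in the text): each new tree is fit to reduce the squared error left by the trees grown so far.

```python
# Hypothetical sketch: boosted regression trees (assuming scikit-learn).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trees are added one at a time; each new tree is fit to the residual error
# of the aggregated prediction of all previously grown trees.
boosted = GradientBoostingRegressor(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=0
)
boosted.fit(X_train, y_train)

print("test R^2:", boosted.score(X_test, y_test))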
Regression tree ensembles model

Generalized Additive Model (GAM)

GAM models explain class scores or response variables using a sum of univariate and bivariate shape functions of predictors. These models use a shape function, such as a boosted tree, for each predictor and, optionally, each pair of predictors. The shape function can capture a nonlinear relation between predictors and predictions.

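As one possible illustration of the additive structure, the sketch below fits a GAM whose prediction is a sum of univariate shape functions, one per predictor. The text names no tool; the third-party pygam package is assumed here, and its shape functions are splines rather than the boosted trees mentioned above.

```python
# Hypothetical sketch: a generalized additive model with pygam
# (an assumed third-party library; the text does not prescribe any tool).
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 2))
# Response is an additive combination of nonlinear effects of each predictor.
y = np.sin(2 * np.pi * X[:, 0]) + (X[:, 1] - 0.5) ** 2 + rng.normal(scale=0.1, size=400)

# s(0) and s(1) are univariate spline shape functions, one per predictor;
# the model's prediction is their sum plus an intercept.
gam = LinearGAM(s(0) + s(1)).fit(X, y)

print("first predictions:", gam.predict(X[:5]))
```
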
GAM model

Neural Network

Inspired by the human brain, a neural network consists of interconnected nodes, or neurons, arranged in a layered structure that relates the inputs to the desired outputs. The network is trained by iteratively modifying the strengths of the connections so that given inputs map to the correct response.

The layers of neurons between the input and output layers of a neural network are referred to as hidden layers. Shallow neural networks typically have only a few hidden layers, while deep neural networks have many more, sometimes hundreds.

Neural networks can be configured to solve classification or regression problems by placing a classification or regression output layer at the end of the network. For deep learning tasks, such as image recognition, you can use pretrained deep learning models. Common types of deep neural networks are convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

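A minimal sketch of a shallow network for classification follows; scikit-learn's multilayer perceptron and the synthetic dataset are assumptions made only to keep the example self-contained. Two hidden layers sit between the inputs and a classification output layer, and training iteratively adjusts the connection weights.

```python
# Hypothetical sketch: a shallow neural network classifier (assuming scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 50 and 25 neurons between the input and the
# classification output layer; training iteratively adjusts connection
# weights so that inputs map to the correct class.
net = MLPClassifier(hidden_layer_sizes=(50, 25), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("test accuracy:", net.score(X_test, y_test))
```
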
Deep Neural Network model