Validation in Machine Learning


The “machine learning black-box process” consists of a training phase and a testing phase. In the training phase we use an algorithm to train a model; in the testing phase we evaluate the model’s performance and compare it against other models.

The ML black-box process

The main challenge in machine learning is avoiding overfitting. Overfitting occurs when the trained model is so complicated that it is perfectly suited to the training data. Such a model classifies the training data perfectly but fails when new, never-seen instances arrive at test time.

The validation process manages the train/test procedure so that we can estimate the algorithm’s true performance. It decides how to split the data correctly and how to calculate the model’s performance as accurately as possible while taking overfitting into account.

Choosing the right validation method results in a more general model that can deal with new instances.

Simple Split

This method splits the data into two datasets: one for training and the other for testing. Usually the training dataset is the larger one (e.g., 70% train, 30% test). The training data is used to fit the model; the error is then calculated on the test data and passed to evaluation.

This is a straightforward and understandable methodology. Still, there are drawbacks: we lose data that could have been used for training because it is held out for the test, and we might get good results only because the test set happened to be “easy” to classify.

Validation using simple split
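The split described above can be sketched in a few lines of plain Python. This is a minimal illustration, not code from the post; the `test_ratio` and `seed` parameters are assumptions chosen for the example.

```python
import random

def train_test_split(data, test_ratio=0.3, seed=0):
    """Shuffle the data, then hold out a fraction of it as the test set."""
    rng = random.Random(seed)          # fixed seed keeps the split reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    # First n_test shuffled items become the test set, the rest train.
    return shuffled[n_test:], shuffled[:n_test]

train, test = train_test_split(list(range(10)))
print(len(train), len(test))  # 7 3
```

Shuffling before splitting matters: if the data is ordered (e.g., by class), a naive head/tail split would give the test set a very different distribution from the training set.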

Leave-One-Out (LOO)

Loop over the instances, using one instance as the test and the rest for training; finally, calculate the error over all the experiments and pass it to evaluation. LOO is a good validation process for very small datasets.

Validation using leave-one-out
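The loop above can be sketched as a generator that yields one train/test pair per instance. This is a minimal sketch in plain Python; the “model” used in the usage example (predicting the mean of the training instances) is a hypothetical stand-in, not anything from the original post.

```python
def leave_one_out(data):
    """Yield (train, test_instance) pairs: each instance is the test exactly once."""
    for i in range(len(data)):
        test = data[i]
        train = data[:i] + data[i + 1:]  # everything except instance i
        yield train, test

# Usage: accumulate the error over all len(data) experiments.
data = [1.0, 2.0, 3.0, 4.0]
errors = []
for train, test in leave_one_out(data):
    prediction = sum(train) / len(train)   # hypothetical model: predict the train mean
    errors.append(abs(prediction - test))
mean_error = sum(errors) / len(errors)     # this is what gets passed to evaluation
```

Note that LOO trains the model `len(data)` times, which is why it is practical mainly for very small datasets.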

K-Fold Cross-Validation

Randomly split the data into K partitions. We iterate over the partitions, using each partition in turn as the test set and the remaining partitions for training; for each fold we calculate the error and pass it to evaluation. K-fold is one of the most common validation techniques:

Validation using K-fold cross-validation
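The K-fold procedure can be sketched as follows. Again a minimal plain-Python illustration, not code from the post; `k=5` and the fixed `seed` are assumptions for the example.

```python
import random

def k_fold_splits(data, k=5, seed=0):
    """Randomly partition data into k folds; yield (train, test) for each fold."""
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)                      # random partition, as the method requires
    folds = [indices[i::k] for i in range(k)] # k roughly equal-sized folds
    for i in range(k):
        test = [data[j] for j in folds[i]]
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# Usage: every instance serves as test data exactly once across the k folds.
for train, test in k_fold_splits(list(range(10)), k=5):
    print(len(train), len(test))  # 8 2, five times
```

Compared with a single simple split, every instance contributes to both training and testing, and averaging the error over the k folds gives a more stable performance estimate. (LOO is the special case where k equals the number of instances.)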
