February 13, 2017

Experiment Utilities

Evaluators

In Machine Learning it is often necessary to evaluate the performance of a classification or regression function derived through a learning process. Generally, this means measuring performance indicators over a test dataset. These measurements can then be used to decide whether the learning algorithm, with its parameterization, is good enough for the task of interest. This is a pattern that is repeated every time a new experiment is needed.

Performance measures are the same for many different tasks, so their computation can easily be standardized to support many scenarios. In KeLP, we provide the Evaluator abstract class, which serves as a base class for the performance evaluation classes. The Evaluator class contains a public implemented method, getPerformanceMeasure(String, Object...), which accesses the internal class methods by means of Java reflection to return a performance measure. For example, if a specific evaluator implementation offers a method to compute the accuracy, named getAccuracy(), then it can be invoked as getPerformanceMeasure("Accuracy"). This method serves as a general interface to retrieve the performance measures computed by a specific evaluator. Notice that an evaluator must expose methods named get<MeasureName> to be compliant with the getPerformanceMeasure mechanism. This implementation pattern is necessary to support the generic instantiation of an evaluator, in case automatic classes for experiments are provided.
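A minimal sketch of how such a reflection-based lookup could work is shown below. It is an illustration of the mechanism, not the actual KeLP source: the class name ReflectiveEvaluatorSketch and the error handling are assumptions.

import java.lang.reflect.Method;

// Illustrative sketch of the reflection mechanism: a measure name such as
// "Accuracy" is mapped to the corresponding get<MeasureName> method of the
// concrete evaluator and invoked.
public abstract class ReflectiveEvaluatorSketch {

    public Object getPerformanceMeasure(String measureName, Object... args) throws Exception {
        // Build the expected method name, e.g. "Accuracy" -> "getAccuracy"
        String methodName = "get" + measureName;
        // Collect the runtime types of the optional arguments
        Class<?>[] argTypes = new Class<?>[args.length];
        for (int i = 0; i < args.length; i++) {
            argTypes[i] = args[i].getClass();
        }
        // Resolve the get<MeasureName> method on the concrete subclass and invoke it
        Method method = this.getClass().getMethod(methodName, argTypes);
        return method.invoke(this, args);
    }
}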

The Evaluator class contains four abstract methods that must be implemented by its sub-classes in order to respect the Evaluator contract:

  • addCount(Example, Prediction): this is the main entry point through which an external program passes the Prediction of an Example to the evaluator, updating the counts used to compute the final performance measures (see the sketch after this list). Notice that KeLP does not force any particular internal mechanism for computing the performance measures;
  • compute(): this method is called internally by getPerformanceMeasure to force the computation of all the performance measures;
  • clear(): it resets the evaluator, when needed;
  • duplicate(): it creates a copy of the evaluator.
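The following sketch shows how these methods typically come into play when evaluating a prediction function over a test set. The helper class EvaluationLoop is hypothetical, and the import paths are assumptions based on the KeLP codebase and may differ between versions:

import it.uniroma2.sag.kelp.data.dataset.Dataset;
import it.uniroma2.sag.kelp.data.example.Example;
import it.uniroma2.sag.kelp.predictionfunction.Prediction;
import it.uniroma2.sag.kelp.predictionfunction.PredictionFunction;
import it.uniroma2.sag.kelp.utils.evaluation.Evaluator;

// Hypothetical helper illustrating the Evaluator contract in use
public class EvaluationLoop {

    public static Object evaluate(PredictionFunction classifier, Evaluator evaluator,
                                  Dataset testSet, String measureName) throws Exception {
        for (Example e : testSet.getExamples()) {
            Prediction p = classifier.predict(e); // classify one test example
            evaluator.addCount(e, p);             // update the internal counters
        }
        // getPerformanceMeasure triggers compute() internally and resolves the
        // requested measure (e.g. "Accuracy" -> getAccuracy()) via reflection
        Object measure = evaluator.getPerformanceMeasure(measureName);
        evaluator.clear(); // reset the counters so the evaluator can be reused
        return measure;
    }
}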

In KeLP, some implementations of the Evaluator class are available: BinaryClassificationEvaluator, MulticlassClassificationEvaluator and RegressorEvaluator. They are intended to satisfy the major needs when dealing with binary classification, multi-class classification and regression tasks.

Experiment Utils

In Machine Learning it is often necessary to tune the classifier parameters or to obtain more reliable measures in an experiment via cross-validation. These activities are repetitive, and it is easy to extract code patterns that can be re-used. In KeLP we provide several methods for automating some of these activities, collected in the class ExperimentUtils.
First, this class contains a method test(PredictionFunction, Evaluator, Dataset) that produces as output a List<Prediction> and updates the "counters" of the Evaluator. It automates the repetitive operations executed when classifying a test set. For example, the following code uses the ExperimentUtils.test method to evaluate a binary classifier with a fixed train-test split:
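The snippet below is a minimal sketch of such a train/test evaluation: the chosen learning algorithm (BinaryCSvmClassification), the representation name "features", the file names and the constructor arguments are illustrative and should be adapted to the data at hand.

import java.util.List;

import it.uniroma2.sag.kelp.data.dataset.SimpleDataset;
import it.uniroma2.sag.kelp.data.label.StringLabel;
import it.uniroma2.sag.kelp.kernel.Kernel;
import it.uniroma2.sag.kelp.kernel.vector.LinearKernel;
import it.uniroma2.sag.kelp.learningalgorithm.classification.libsvm.BinaryCSvmClassification;
import it.uniroma2.sag.kelp.predictionfunction.Prediction;
import it.uniroma2.sag.kelp.utils.ExperimentUtils;
import it.uniroma2.sag.kelp.utils.evaluation.BinaryClassificationEvaluator;

public class TrainTestExample {

    public static void main(String[] args) throws Exception {
        // Load the training and test sets (file paths are placeholders)
        SimpleDataset trainSet = new SimpleDataset();
        trainSet.populate("train.klp");
        SimpleDataset testSet = new SimpleDataset();
        testSet.populate("test.klp");

        StringLabel positiveClass = new StringLabel("+1");

        // Learn a binary classifier (here a C-SVM with a linear kernel)
        Kernel kernel = new LinearKernel("features");
        BinaryCSvmClassification learner = new BinaryCSvmClassification(kernel, positiveClass, 1, 1);
        learner.learn(trainSet);

        // Classify the test set and update the evaluator counters in one call
        BinaryClassificationEvaluator evaluator = new BinaryClassificationEvaluator(positiveClass);
        List<Prediction> predictions = ExperimentUtils.test(learner.getPredictionFunction(), evaluator, testSet);

        System.out.println("Accuracy: " + evaluator.getAccuracy());
    }
}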

 

Instead, the following code uses the ExperimentUtils.nFoldCrossValidation method to perform a 5-fold cross-validation:
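Again, this is a sketch rather than a verbatim reproduction of the API: it assumes nFoldCrossValidation takes the number of folds, the learning algorithm, the complete dataset and an evaluator, and returns one evaluator per fold; refer to the KeLP Javadoc for the exact signature.

import java.util.List;

import it.uniroma2.sag.kelp.data.dataset.SimpleDataset;
import it.uniroma2.sag.kelp.data.label.StringLabel;
import it.uniroma2.sag.kelp.kernel.vector.LinearKernel;
import it.uniroma2.sag.kelp.learningalgorithm.classification.libsvm.BinaryCSvmClassification;
import it.uniroma2.sag.kelp.utils.ExperimentUtils;
import it.uniroma2.sag.kelp.utils.evaluation.BinaryClassificationEvaluator;

public class CrossValidationExample {

    public static void main(String[] args) throws Exception {
        // Load the whole dataset; it will be split into 5 folds
        SimpleDataset allData = new SimpleDataset();
        allData.populate("dataset.klp");

        StringLabel positiveClass = new StringLabel("+1");
        BinaryCSvmClassification learner =
                new BinaryCSvmClassification(new LinearKernel("features"), positiveClass, 1, 1);
        BinaryClassificationEvaluator evaluator = new BinaryClassificationEvaluator(positiveClass);

        // Run the 5-fold cross-validation: one evaluator per fold is returned
        List<BinaryClassificationEvaluator> foldEvaluators =
                ExperimentUtils.nFoldCrossValidation(5, learner, allData, evaluator);

        // Average the accuracy across the folds
        float meanAccuracy = 0;
        for (BinaryClassificationEvaluator foldEvaluator : foldEvaluators) {
            meanAccuracy += foldEvaluator.getAccuracy();
        }
        meanAccuracy /= foldEvaluators.size();
        System.out.println("Mean accuracy over 5 folds: " + meanAccuracy);
    }
}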