Training methods for the decision forest library fertilized forests

1University of Augsburg

Abstract

I worked with Christoph Lassner on his library fertilized forests and implemented several boosted training methods, such as AdaBoost and SAMME.

You can have a look at the project homepage or read the paper, which received an honorable mention at the ACM MM OSSC 2015.
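To illustrate the kind of boosted training involved, below is a minimal sketch of one SAMME round (the multi-class generalization of AdaBoost): the weighted error of a weak learner determines its vote weight, and misclassified samples are up-weighted for the next round. The function name and the toy inputs are illustrative only; this is not the fertilized forests API.

```python
import numpy as np

def samme_round(weights, y_true, y_pred, n_classes):
    """One SAMME boosting round.

    Returns the estimator weight alpha and the renormalized
    sample weights for the next round. Assumes the weak learner
    beats random guessing (weighted error < 1 - 1/n_classes).
    """
    miss = (y_true != y_pred).astype(float)
    # Weighted error of the current weak learner.
    err = np.dot(weights, miss) / weights.sum()
    # SAMME adds log(K - 1) to the classic AdaBoost weight, so
    # alpha stays positive as long as the learner is better than
    # random guessing among K classes.
    alpha = np.log((1.0 - err) / err) + np.log(n_classes - 1)
    # Up-weight misclassified samples and renormalize.
    new_w = weights * np.exp(alpha * miss)
    return alpha, new_w / new_w.sum()
```

For K = 2 the extra log(K - 1) term vanishes and the update reduces to standard AdaBoost.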


Library Description

The fertilized forests project aims to provide an easy-to-use, easy-to-extend, yet fast library for decision forests. It consolidates the research in this field and provides a solid platform for extending it.

The library is thoroughly tested and highly flexible. Its development started at the Multimedia Computing and Computer Vision Lab of the University of Augsburg. It is available under the permissive 2-clause BSD license. If you use this software for your research, please consider citing it.

Feature highlights are:

  • Object oriented model of the unified decision forest model of Antonio Criminisi and Jamie Shotton, as well as extensions (e.g., Hough forests).
  • Templated C++ classes for maximum memory and calculation efficiency.
  • Compatible with the Microsoft Visual C++, GNU, Intel, and Clang compilers.
  • Platform and interface independent save/load mechanics: train forests and trees on a Linux cluster using C++ and use them on a Windows PC with MATLAB.
  • Documented and consistent interfaces in C++, Python and MATLAB.

Minimal examples:

C++

#include <fertilized/fertilized.h>
using namespace fertilized;
auto soil = Soil<>();
auto forest = soil.StandardClassificationForest(2,  // # classes
                                                3); // # features
Array<float, 2, 2> data = allocate(10, 3);
// Fill the data array (10 samples, 3 features).
Array<uint, 2, 2> annotation = allocate(10, 1);
// Fill the annotation array (10 samples, 1 annotation).
forest->fit(data, annotation);
Array<float, 2, 2> new_data = allocate(100, 3);
// Fill the new data (100 samples, 3 features) and get predictions.
Array<double, 2, 2> predictions = forest->predict(new_data);

Python

import fertilized
import numpy as np
soil = fertilized.Soil()
forest = soil.StandardClassificationForest(2, # # classes
                                           3) # # features
data = np.empty((10, 3), dtype='float')
# Fill the data array (10 samples, 3 features).
annotation = np.empty((10, 1), dtype='uint')
# Fill the annotation array (10 samples, 1 annotation).
forest.fit(data, annotation)
new_data = np.empty((100, 3), dtype='float')
# Fill the new data (100 samples, 3 features) and get predictions.
predictions = forest.predict(new_data)

MATLAB

addpath(strcat('pathto', filesep, 'fertilized'));

soil = Soil();
forest = soil.StandardClassificationForest(2,  % # classes
                                           3); % # features
data = single(zeros(10, 3));
% Fill the data array (10 samples, 3 features).
annotation = uint32(zeros(10, 1));
% Fill the annotation array (10 samples, 1 annotation).
forest.fit(data, annotation);
new_data = single(zeros(100, 3));
% Fill the new data (100 samples, 3 features) and get predictions.
predictions = forest.predict(new_data);