Supplementary website for

AN ADAPTIVE OPTIMAL ENSEMBLE CLASSIFIER VIA

BAGGING AND RANK AGGREGATION WITH APPLICATIONS TO HIGH DIMENSIONAL DATA

Susmita Datta*, Vasyl Pihur, Somnath Datta

*e-mail: susmita.datta_AT_louisville.edu

 

Abstract

Background: Different classifiers tend to work well for different types of data, and it is usually not known a priori which algorithm will be optimal in any given classification application. Moreover, for most classification problems, selecting the best-performing algorithm from a number of competing algorithms is a difficult task for various reasons; for example, the order of performance may depend on the performance measure employed for the comparison. In this work, we present a novel adaptive ensemble classifier, constructed by combining bagging and rank aggregation, that is capable of adaptively changing its performance depending on the type of data being classified. An attractive feature of the proposed classifier is its multi-objective nature: the classification results can be simultaneously optimized with respect to several performance measures, for example accuracy, sensitivity and specificity. We also show that our somewhat complex strategy has better predictive performance, as judged on test samples, than a more naïve approach that attempts to identify the optimal classifier directly from the training-data performances of the individual classifiers.

 

Results: We illustrate the proposed method with two simulated and two real-data examples. In all cases, the ensemble classifier performs as well as or better than the best individual classifier in the ensemble.

 

Conclusions: For complex high-dimensional datasets resulting from present-day high-throughput experiments, it may be wise to consider a number of classification algorithms combined with dimension reduction techniques rather than a single standard algorithm fixed a priori.

 

R code for the ensemble classifier and a small example illustrating the main functions are available here.
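The abstract above describes the construction only at a high level. As a hedged illustration, and not the authors' released code, the following R sketch shows one way the bagging-and-rank-aggregation idea can be organized: fit the candidate classifiers on each bootstrap sample, score them on the out-of-bag cases, keep the locally best one, and let the bagged winners vote on the test cases. The function name ensemble_predict is our own, and the candidate set is cut down to SVM and LDA, scored by accuracy alone, to keep the sketch short.

    ## Minimal sketch of the bagging-plus-rank-aggregation idea (illustrative,
    ## not the authors' released code). Assumes packages e1071 (svm) and
    ## MASS (lda), x a numeric matrix and y a two-level factor.
    library(e1071)
    library(MASS)

    ensemble_predict <- function(x, y, newx, B = 25) {
      votes <- matrix(NA_character_, nrow(newx), B)
      for (b in seq_len(B)) {
        idx <- sample(nrow(x), replace = TRUE)      # bootstrap sample (bagging)
        oob <- setdiff(seq_len(nrow(x)), idx)       # out-of-bag cases
        f_svm <- svm(x[idx, ], y[idx])
        f_lda <- lda(x[idx, ], grouping = y[idx])
        ## Score each candidate on the out-of-bag cases. Accuracy alone is
        ## used here; the paper instead rank-aggregates several measures
        ## (accuracy, sensitivity, specificity, ...) to pick the local winner.
        acc_svm <- mean(predict(f_svm, x[oob, , drop = FALSE]) == y[oob])
        acc_lda <- mean(predict(f_lda, x[oob, , drop = FALSE])$class == y[oob])
        votes[, b] <- if (acc_svm >= acc_lda)
          as.character(predict(f_svm, newx))
        else
          as.character(predict(f_lda, newx)$class)
      }
      ## majority vote over the B bagged winners gives the ensemble prediction
      factor(apply(votes, 1, function(v) names(which.max(table(v)))))
    }

In the procedure evaluated in the tables below, the single accuracy comparison would be replaced by rank aggregation over several performance measures, and the candidate set extended to the six individual classifiers listed there.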

 

Algorithms  Accuracy            Sensitivity         Specificity         AUC
SVM         0.842100 (0.00380)  0.843600 (0.00648)  0.840600 (0.00637)  0.920356 (0.00281)
PLS+LDA     0.799400 (0.00437)  0.790600 (0.00710)  0.808200 (0.00607)  0.877788 (0.00383)
PCA+LDA     0.488500 (0.02341)  0.487200 (0.02265)  0.489800 (0.02466)  0.484688 (0.03004)
PLS+RF      0.820300 (0.00452)  0.818600 (0.00739)  0.822000 (0.00690)  0.894322 (0.00386)
PLS+QDA     0.833800 (0.00404)  0.837400 (0.00572)  0.830200 (0.00583)  0.909156 (0.00339)
PLR         0.758700 (0.00472)  0.744200 (0.00824)  0.773200 (0.00661)  0.838310 (0.00473)
Greedy      0.832500 (0.00446)  0.831200 (0.00733)  0.833800 (0.00642)  0.910976 (0.00355)
Ensemble    0.839400 (0.00390)  0.839200 (0.00650)  0.839600 (0.00624)  0.913498 (0.00301)

Supplementary Table 1. Average accuracy, sensitivity, specificity and AUC for 100 datasets from the threenorm data with N=100 and d=20. Standard errors are shown in parentheses.
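To make the simulation setting concrete, the following sketch (our illustration under stated assumptions, not the authors' code) generates one threenorm dataset with the mlbench package and computes the four reported measures for a single SVM fit; the entries in the table are averages of such measures over 100 replicate datasets.

    ## Hedged sketch of one replicate of the Table 1 setting (threenorm,
    ## N = 100, d = 20). Assumes packages mlbench and e1071, and treats
    ## level 2 of the class factor as the "positive" class.
    library(mlbench)
    library(e1071)

    set.seed(1)
    train <- mlbench.threenorm(100, d = 20)   # N = 100 training cases
    test  <- mlbench.threenorm(1000, d = 20)  # larger test set for stability

    fit  <- svm(train$x, train$classes, probability = TRUE)
    pred <- predict(fit, test$x, probability = TRUE)
    p2   <- attr(pred, "probabilities")[, levels(test$classes)[2]]

    pos <- test$classes == levels(test$classes)[2]
    accuracy    <- mean(pred == test$classes)
    sensitivity <- mean(pred[pos]  == test$classes[pos])   # true positive rate
    specificity <- mean(pred[!pos] == test$classes[!pos])  # true negative rate

    ## Mann-Whitney form of the AUC, avoiding an extra ROC package
    r   <- rank(p2)
    auc <- (sum(r[pos]) - sum(pos) * (sum(pos) + 1) / 2) / (sum(pos) * sum(!pos))

    c(accuracy = accuracy, sensitivity = sensitivity,
      specificity = specificity, auc = auc)

    ## For the Table 2 setting, swap mlbench.threenorm for mlbench.ringnorm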

 

Algorithms  Accuracy            Sensitivity         Specificity         AUC
SVM         0.546000 (0.00733)  0.496200 (0.01818)  0.595800 (0.02133)  0.550556 (0.00881)
PLS+LDA     0.599300 (0.00493)  0.485200 (0.00732)  0.713400 (0.00765)  0.592678 (0.00606)
PCA+LDA     0.500900 (0.00697)  0.503600 (0.00571)  0.498200 (0.01008)  0.502516 (0.00711)
PLS+RF      0.736000 (0.00517)  0.671200 (0.00806)  0.800800 (0.00774)  0.825376 (0.00515)
PLS+QDA     0.810000 (0.00404)  0.748800 (0.00661)  0.871200 (0.00571)  0.879586 (0.00386)
PLR         0.596700 (0.00500)  0.484000 (0.00746)  0.709400 (0.00778)  0.585926 (0.00597)
Greedy      0.804800 (0.00447)  0.745600 (0.00667)  0.864000 (0.00643)  0.875836 (0.00404)
Ensemble    0.841800 (0.00462)  0.764600 (0.00759)  0.919000 (0.00509)  0.941364 (0.00332)

Supplementary Table 2. Average accuracy, sensitivity, specificity and AUC for 100 datasets from the ringnorm data with N=100 and d=20. Standard errors are shown in parentheses.
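A final note on why rank aggregation is useful here: the four measures do not always order the individual classifiers the same way (in Supplementary Table 2, for instance, PLS+LDA beats PCA+LDA on accuracy but not on sensitivity). As a hedged illustration, a plain Borda mean-rank count, standing in here for the weighted rank aggregation of the paper, applied to the Table 2 point estimates of the six individual classifiers recovers PLS+QDA as the consensus best.

    ## Illustrative Borda (mean-rank) aggregation of the Supplementary Table 2
    ## point estimates; a simplified stand-in for weighted rank aggregation.
    perf <- rbind(
      accuracy    = c(SVM = 0.5460, `PLS+LDA` = 0.5993, `PCA+LDA` = 0.5009,
                      `PLS+RF` = 0.7360, `PLS+QDA` = 0.8100, PLR = 0.5967),
      sensitivity = c(0.4962, 0.4852, 0.5036, 0.6712, 0.7488, 0.4840),
      specificity = c(0.5958, 0.7134, 0.4982, 0.8008, 0.8712, 0.7094),
      auc         = c(0.5506, 0.5927, 0.5025, 0.8254, 0.8796, 0.5859)
    )
    ## rank within each measure (1 = best), then average the ranks
    ranks <- apply(-perf, 1, rank)   # rows = classifiers, columns = measures
    sort(rowMeans(ranks))            # PLS+QDA attains mean rank 1 here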