An Empirical Comparison of Supervised Learning Algorithms Using Different Performance Metrics
Caruana, Rich; Niculescu-Mizil, Alex
We present the results of a large-scale empirical comparison between seven learning methods: SVMs, neural nets, decision trees, memory-based learning, bagged trees, boosted trees, and boosted stumps. A novel aspect of our study is that we compare these methods on nine different performance criteria: accuracy, squared error, cross entropy, ROC Area, F-score, precision/recall break-even point, average precision, lift, and probability calibration. The models with the best performance overall are neural nets, SVMs, and bagged trees. However, if we apply Platt calibration to boosted trees, they become the best model overall. Detailed examination of the results shows that even the best models perform poorly on some problems or metrics, and that even the worst models sometimes yield the best performance.
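The Platt calibration mentioned above maps a classifier's raw scores to probabilities by fitting a sigmoid, sigmoid(a*s + b), to held-out (score, label) pairs. The sketch below is a minimal stand-in for Platt's procedure (which uses Newton's method and regularized targets), using plain gradient descent on the log loss; the toy scores and labels are invented for illustration.

```python
import math

def platt_fit(scores, labels, lr=0.01, iters=5000):
    """Fit sigmoid(a*s + b) to (score, label) pairs by gradient
    descent on the log loss -- a simplified stand-in for Platt's
    Newton-based fitting procedure."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(iters):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            grad_a += (p - y) * s   # d(log loss)/da for this example
            grad_b += (p - y)       # d(log loss)/db for this example
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return a, b

def platt_predict(a, b, s):
    """Calibrated probability for a raw score s."""
    return 1.0 / (1.0 + math.exp(-(a * s + b)))

# Toy margins from a hypothetical boosted-tree model (illustrative only).
scores = [-2.0, -1.5, -0.5, 0.2, 0.8, 1.5, 2.5]
labels = [0, 0, 0, 1, 1, 1, 1]
a, b = platt_fit(scores, labels)
probs = [platt_predict(a, b, s) for s in scores]
```

Because the fitted sigmoid is monotone, calibration leaves rank-based metrics such as ROC area unchanged while improving probability-based metrics like squared error and cross entropy, which is why it lifts boosted trees in the comparison above.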
computer science; technical report