Journal of University of Science and Technology of China ›› 2016, Vol. 46 ›› Issue (3): 222-230.DOI: 10.3969/j.issn.0253-2778.2016.03.007

• Original Paper •

Research on Boosting theory and its applications

ZHANG Wensheng, YU Tingzhao   

  1. Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  • Received:2015-09-12 Revised:2015-12-29 Accepted:2015-12-29 Online:2015-12-29 Published:2015-12-29

Abstract: Boosting is one of the most popular ensemble algorithms in machine learning and has been widely used in machine learning and pattern recognition. There are two main theoretical frameworks for Boosting: learnability theory and statistical theory. Boosting was first derived from the theory of weak learnability, which shows how a group of weak learners can be combined into a strong learner. After a finite number of iterations, the combination of weak learners can reach arbitrary accuracy on the training set; the only requirement on each weak learner is that its accuracy be slightly better than random guessing. From the statistical point of view, Boosting is an additive model, and the equivalence between these two views has already been proved. The theory of weak learnability is reviewed from the PAC perspective, and the challenges Boosting may face are presented, including its effectiveness on high-dimensional data and the margin theory. Various Boosting algorithms are then discussed from the above two viewpoints, together with new applications of the Boosting framework. Finally, the future of Boosting is discussed.
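To make the "weak learners boosted into a strong learner" idea concrete, the following is a minimal sketch of AdaBoost with decision stumps (the canonical Boosting instance mentioned in the keywords). All function names here (`adaboost`, `best_stump`, `stump_predict`, `predict`) are illustrative helpers, not from the paper; the sketch assumes labels in {-1, +1} and uses the standard exponential weight update, which is also the additive-model view from the statistical framework.

```python
import numpy as np

def stump_predict(stump, X):
    """Predict +1/-1 with a one-feature threshold rule."""
    feat, thresh, sign = stump
    return sign * np.where(X[:, feat] <= thresh, 1.0, -1.0)

def best_stump(X, y, w):
    """Exhaustively pick the stump with the lowest weighted error."""
    best, best_err = None, np.inf
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (1.0, -1.0):
                pred = sign * np.where(X[:, feat] <= thresh, 1.0, -1.0)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (feat, thresh, sign)
    return best, best_err

def adaboost(X, y, n_rounds=20):
    """Minimal AdaBoost; y must take values in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # uniform initial sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump, err = best_stump(X, y, w)
        if err >= 0.5:                   # weak learner must beat random guessing
            break
        err = max(err, 1e-10)            # avoid division by zero on perfect fit
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(stump, X)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    """Sign of the weighted vote, i.e. the additive model F(x) = sum_t alpha_t h_t(x)."""
    score = sum(a * stump_predict(s, X) for s, a in zip(stumps, alphas))
    return np.sign(score)
```

The weight update `w *= exp(-alpha * y * pred)` is exactly what drives the training error bound: each round, examples the current weak learner gets wrong gain weight, forcing the next weak learner to focus on them.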

Key words: Boosting, weak learnability, margin theory, ensemble learning, AdaBoost
