Abstract
Ensemble classification methods train several classifiers and combine their results through a voting process. Many ensemble classifiers have been proposed [1,2], including consensus theoretic classifiers [3] and committee machines [4]. Boosting and bagging are two widely used ensemble methods. Bagging (bootstrap aggregating) [5] trains many classifiers on bootstrapped samples drawn from the training set and has been shown to reduce the variance of the classification. Boosting, in contrast, uses iterative re-training, in which incorrectly classified samples are given more weight in successive training iterations. This re-training makes boosting considerably slower than bagging, but in most cases also considerably more accurate. Boosting generally reduces both the variance and the bias of the classification and has been shown to be a very accurate classification method. However, it has drawbacks: it is computationally demanding, it can overtrain, and it is sensitive to noise [6]. Therefore, there is much interest in investigating methods such as random forests.
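The bagging procedure described in the abstract (train many classifiers on bootstrap samples, then combine by majority vote) can be sketched in plain Python. The decision-stump weak learner, the toy one-dimensional dataset, and the ensemble size of 25 below are illustrative assumptions, not details from the chapter:

```python
import random

def train_stump(data):
    # data: list of (x, label) pairs with labels in {0, 1}.
    # A stump predicts 1 when x > threshold; pick the threshold
    # (among observed x values) that minimizes training error.
    best_thr, best_err = None, float("inf")
    for thr, _ in data:
        err = sum((x > thr) != bool(y) for x, y in data)
        if err < best_err:
            best_thr, best_err = thr, err
    return best_thr

def bagging_predict(stumps, x):
    # Combine the ensemble by majority vote.
    votes = sum(x > thr for thr in stumps)
    return int(votes * 2 > len(stumps))

random.seed(0)
# Toy 1-D data: class 1 above 5, class 0 at or below 5.
data = [(i, int(i > 5)) for i in range(11)]

stumps = []
for _ in range(25):
    # Bootstrap sample: draw n points with replacement from the training set.
    boot = [random.choice(data) for _ in data]
    stumps.append(train_stump(boot))

preds = [bagging_predict(stumps, x) for x, _ in data]
```

Because each stump sees a different bootstrap sample, the individual thresholds vary, and the majority vote averages that variability away; this is the variance reduction the abstract attributes to bagging. Boosting would instead train the stumps sequentially on the full set, increasing the weight of misclassified points at each iteration.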
Original language | English |
---|---|
Title of host publication | Signal and Image Processing for Remote Sensing |
Publisher | CRC Press |
Pages | 327-344 |
Number of pages | 18 |
ISBN (Electronic) | 9781420003130 |
ISBN (Print) | 0849350913, 9780849350917 |
Publication status | Published - 1 Jan 2006 |
Bibliographical note
Publisher Copyright: © 2007 by Taylor & Francis Group, LLC.