Random forest classification of remote sensing data

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

5 Citations (Scopus)


Ensemble classification methods train several classifiers and combine their results through a voting process. Many ensemble classifiers [1,2] have been proposed, including consensus theoretic classifiers [3] and committee machines [4]. Boosting and bagging are widely used ensemble methods. Bagging (bootstrap aggregating) [5] trains many classifiers on bootstrapped samples drawn from the training set and has been shown to reduce the variance of the classification. In contrast, boosting uses iterative re-training, where incorrectly classified samples are given more weight in successive training iterations. This makes the algorithm considerably slower than bagging, though in most cases it is also considerably more accurate. Boosting generally reduces both the variance and the bias of the classification and has been shown to be a very accurate classification method. However, it has several drawbacks: it is computationally demanding, it can overtrain, and it is sensitive to noise [6]. Therefore, there is much interest in investigating methods such as random forests.
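The bagging procedure described above can be sketched in a few lines. The following is a minimal illustration, not the chapter's implementation: it assumes a toy base learner (a one-feature decision stump, here called `fit_stump`) and combines stumps trained on bootstrap samples by majority vote.

```python
import numpy as np

def fit_stump(X, y):
    """Fit a decision stump: exhaustively pick the (feature, threshold,
    polarity) triple that minimises training error. Toy base learner."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) > 0, 1, 0)
                err = np.mean(pred != y)
                if err < best[3]:
                    best = (j, t, pol, err)
    return best[:3]

def stump_predict(stump, X):
    j, t, pol = stump
    return np.where(pol * (X[:, j] - t) > 0, 1, 0)

def bagging_fit(X, y, n_estimators=25, seed=0):
    """Bagging: train each base classifier on a bootstrap sample,
    i.e. n points drawn from the training set with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y)
    ensemble = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)  # bootstrap indices
        ensemble.append(fit_stump(X[idx], y[idx]))
    return ensemble

def bagging_predict(ensemble, X):
    """Combine the base classifiers by majority vote."""
    votes = np.stack([stump_predict(s, X) for s in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

Because each stump sees a different bootstrap sample, the vote averages out much of the individual stumps' variance; a random forest additionally randomises the features considered at each tree split.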

Original language: English
Title of host publication: Image Processing for Remote Sensing
Publisher: CRC Press
Number of pages: 18
ISBN (Electronic): 9781420066654
ISBN (Print): 1420066641, 9781420066647
Publication status: Published - 1 Jan 2007

Bibliographical note

Publisher Copyright:
© 2008 by Taylor & Francis Group, LLC.
