Random forest classification of remote sensing data

Sveinn R. Joelsson, Jon A. Benediktsson, Johannes R. Sveinsson

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

13 Citations (Scopus)

Abstract

Ensemble classification methods train several classifiers and combine their results through a voting process. Many ensemble classifiers have been proposed [1,2], including consensus theoretic classifiers [3] and committee machines [4]. Boosting and bagging are two widely used ensemble methods. Bagging (bootstrap aggregating) [5] trains many classifiers on bootstrapped samples drawn from the training set and has been shown to reduce the variance of the classification. Boosting, in contrast, uses iterative re-training, in which samples misclassified in one iteration are given more weight in the next. This makes boosting much slower than bagging, but in most cases considerably more accurate. Boosting generally reduces both the variance and the bias of the classification and has been shown to be a very accurate classification method. However, it has several drawbacks: it is computationally demanding, it can overtrain, and it is sensitive to noise [6]. For these reasons, there is much interest in investigating methods such as random forests.
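The ensemble methods the abstract contrasts can be sketched in a few lines. The following is a minimal illustration (not from the chapter) comparing bagging, boosting, and a random forest on synthetic data; the use of scikit-learn and its default tree-based base learners is an assumption for demonstration only.

```python
# Hypothetical sketch: bagging vs. boosting vs. random forest,
# using scikit-learn (not the authors' implementation).
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a remote sensing dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    # Bagging: many trees on bootstrap samples, combined by voting.
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    # Boosting: iterative re-training with re-weighted samples.
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Random forest: bagging plus random feature selection per split.
    "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {clf.score(X_te, y_te):.3f}")
```

Note that bagging parallelizes naturally (each tree is trained independently), whereas boosting is inherently sequential, which is one source of the speed difference the abstract mentions.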

Original language: English
Title of host publication: Signal and Image Processing for Remote Sensing
Publisher: CRC Press
Pages: 327-344
Number of pages: 18
ISBN (Electronic): 9781420003130
ISBN (Print): 0849350913, 9780849350917
Publication status: Published - 1 Jan 2006

Bibliographical note

Publisher Copyright:
© 2007 by Taylor & Francis Group, LLC.
