Abstract
Cloud contamination greatly limits the usefulness of optical images for geoscience applications. An effective alternative is to exploit synthetic aperture radar (SAR) images, which are largely unaffected by clouds owing to the strong penetration ability of microwaves, as a source of complementary information for cloud removal. In this article, we propose a novel unified spatial-spectral residual network that uses SAR images as auxiliary data to remove clouds from optical images. The method better establishes the relationship between SAR and optical images and consists of two modules: a feature extraction and fusion module and a reconstruction module. In the feature extraction and fusion module, a gated convolutional layer is introduced to discriminate cloud pixels from clean pixels, compensating for the inability of vanilla convolutional layers to make this distinction and preventing cloud-covered areas from corrupting the extracted features. In the reconstruction module, spatial and channel attention mechanisms are introduced to capture global spatial and spectral information. The network is tested on three datasets with different spatial resolutions and land-cover compositions to verify the effectiveness and applicability of the method. The results show that the method outperforms other mainstream algorithms that also use SAR images as auxiliary data, with a gain of about 2.3 dB in peak signal-to-noise ratio (PSNR) on the SEN12MS-CR dataset.
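The abstract describes the two key building blocks only at a high level. Purely as an illustration of the general ideas (a convolution modulated by a learned sigmoid gate, and a squeeze-and-excitation style channel attention over globally pooled statistics), the sketch below shows how such layers are commonly written in PyTorch. The names `GatedConv2d` and `ChannelAttention`, the band counts, and all hyperparameters are assumptions made for this sketch and are not taken from the paper's implementation.

```python
# Illustrative sketch only, not the authors' code.
import torch
import torch.nn as nn


class GatedConv2d(nn.Module):
    """Convolution whose output is modulated by a learned sigmoid gate.

    Gate values near 0 can suppress features from cloud-contaminated
    pixels; values near 1 let clean-pixel features pass through.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (generic form)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.weight = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global spatial pooling
            nn.Conv2d(channels, channels // reduction, 1),  # squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # excite
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.weight(x)


if __name__ == "__main__":
    # Hypothetical input: 13 optical (Sentinel-2) + 2 SAR (Sentinel-1) bands.
    x = torch.randn(1, 15, 256, 256)
    feats = GatedConv2d(15, 64)(x)
    out = ChannelAttention(64)(feats)
    print(out.shape)  # torch.Size([1, 64, 256, 256])
```

A spatial attention branch would analogously reweight spatial locations rather than channels; the exact formulations used in the paper may differ from this generic sketch.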
| Original language | English |
| --- | --- |
| Article number | 5600820 |
| Pages (from-to) | 1-20 |
| Number of pages | 20 |
| Journal | IEEE Transactions on Geoscience and Remote Sensing |
| Volume | 62 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 1980-2012 IEEE.
Other keywords
- Cloud removal
- data fusion
- deep learning
- residual network
- synthetic aperture radar (SAR)-optical