
Unmixing-based Spatiotemporal Image Fusion Based on the Self-trained Random Forest Regression and Residual Compensation

Li, Xiaodong; Wang, Yalan; Zhang, Yihang; Hou, Shuwei; Zhou, Pu; Wang, Xia; Du, Yun; Foody, Giles

Authors

Xiaodong Li

Yalan Wang

Yihang Zhang

Shuwei Hou

Pu Zhou

Xia Wang

Yun Du

Giles Foody giles.foody@nottingham.ac.uk
Professor of Geographical Information



Abstract

Spatiotemporal satellite image fusion (STIF) has been widely applied in land surface monitoring to generate high spatial and high temporal resolution reflectance images from satellite sensors. This paper proposes a new unmixing-based spatiotemporal fusion method composed of self-trained random forest regression (R), low-resolution (LR) endmember estimation (E), high-resolution (HR) surface reflectance image reconstruction (R), and residual compensation (C), abbreviated RERC. RERC uses a self-trained random forest to learn and predict the relationship between spectra and the corresponding class fractions. This process requires no ancillary training dataset and avoids the limitation of linear spectral unmixing, which requires the number of endmembers to be no greater than the number of spectral bands. The running time of the random forest regression is about 1% of that of the linear mixture model. In addition, RERC adopts a spectral reflectance residual compensation approach to refine the fused image, making full use of the information in the LR image. RERC was assessed in fusing a prediction-time MODIS image with a Landsat image using two benchmark datasets, and in fusing images with different numbers of spectral bands by fusing a known-time Landsat image (seven bands used) with a known-time very-high-resolution PlanetScope image (four spectral bands). RERC was also assessed in fusing MODIS-Landsat imagery over large areas at the national scale for the Republic of Ireland and France. The code is available at https://www.researchgate.net/profile/Xiao_Li52.
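The self-trained regression step described above can be pictured with a minimal sketch. The snippet below is an illustration under stated assumptions, not the authors' released code: it assumes scikit-learn, and the array names (lr_spectra_t0, fractions_t0, lr_spectra_tp) and the function interface are hypothetical. It simply shows a multi-output random forest mapping coarse-pixel spectra to class fractions, with training pairs generated from the known-time data themselves rather than from an ancillary training set.

```python
# Minimal illustrative sketch of a self-trained random forest regression
# between coarse-pixel spectra and class fractions (hypothetical interface,
# not the RERC implementation released by the authors).
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def self_trained_fraction_regression(lr_spectra_t0, fractions_t0, lr_spectra_tp,
                                     n_trees=100, random_state=0):
    """Train on the known-time pair and predict class fractions for the
    prediction-time coarse image.

    lr_spectra_t0 : (n_pixels, n_bands) coarse spectra at the known time
    fractions_t0  : (n_pixels, n_classes) class fractions derived from a
                    classified high-resolution image at the known time
    lr_spectra_tp : (n_pixels, n_bands) coarse spectra at the prediction time
    """
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=random_state)
    rf.fit(lr_spectra_t0, fractions_t0)        # spectra -> class fractions
    fractions_tp = rf.predict(lr_spectra_tp)   # apply to prediction-time spectra

    # Keep fractions physically plausible: non-negative and summing to one.
    fractions_tp = np.clip(fractions_tp, 0.0, None)
    fractions_tp /= fractions_tp.sum(axis=1, keepdims=True) + 1e-12
    return fractions_tp
```

Because the random forest places no algebraic constraint linking the number of classes to the number of spectral bands, this kind of regression sidesteps the endmembers-versus-bands restriction of linear spectral unmixing noted in the abstract.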

Citation

Li, X., Wang, Y., Zhang, Y., Hou, S., Zhou, P., Wang, X., …Foody, G. (2023). Unmixing-based Spatiotemporal Image Fusion Based on the Self-trained Random Forest Regression and Residual Compensation. IEEE Transactions on Geoscience and Remote Sensing, 61, Article 5406319. https://doi.org/10.1109/tgrs.2023.3308902

Journal Article Type Article
Acceptance Date Aug 19, 2023
Online Publication Date Aug 28, 2023
Publication Date 2023
Deposit Date Aug 31, 2023
Publicly Available Date Aug 31, 2023
Journal IEEE Transactions on Geoscience and Remote Sensing
Print ISSN 0196-2892
Electronic ISSN 1558-0644
Publisher Institute of Electrical and Electronics Engineers (IEEE)
Peer Reviewed Peer Reviewed
Volume 61
Article Number 5406319
DOI https://doi.org/10.1109/tgrs.2023.3308902
Keywords Landsat, self-trained regression, spatiotemporal image fusion, sub-pixel analysis, unmixing
Public URL https://nottingham-repository.worktribe.com/output/24802411
Publisher URL https://ieeexplore.ieee.org/document/10231141
