IMHE OpenIR
山地多源中高空间分辨率遥感影像时空融合方法研究 ——以Landsat 8 和Sentinel-2A 为例
Alternative Title: Spatio-temporal Satellite Image Fusion Using Multi-sensor Data with Medium/High Spatial Resolution in Mountainous Areas: Taking Landsat 8 and Sentinel-2A as Examples
钟函笑
Subtype: Master's thesis
Thesis Advisor: 李爱农
2018
Degree Grantor: 中国科学院大学
Place of Conferral: Beijing
Degree Discipline: Cartography and Geographic Information System
Keyword: Spatio-temporal fusion; Landsat 8; Sentinel-2A; Radiometric consistency; Overcomplete dictionary-pair learning
Abstract: Persistent cloud and fog cover, seasonal snow, and shadowing caused by rugged terrain limit the spatio-temporal continuity of optical remote sensing imagery over mountainous areas. Reconstructing information-complete, spatially continuous image data with high temporal resolution through spatio-temporal fusion of multi-source remote sensing images is an important way to address this problem. When traditional image fusion methods are applied to multi-source spatio-temporal fusion, the differences caused by different observation conditions must first be removed by a series of radiometric or spectral normalization steps in order to improve the radiometric consistency between the images. In contrast, fusion methods based on sparse representation over an overcomplete dictionary pair, owing to their non-linear representation of the spatial downscaling model, can perform spatio-temporal fusion while preserving the radiometric differences between the multi-source images. Taking 30 m Landsat 8 OLI and 10/20 m Sentinel-2A MSI multispectral images as examples, this thesis first studies their radiometric consistency before and after radiometric corrections including atmospheric correction, BRDF correction, and bandpass (spectral channel) adjustment. On this basis, spatio-temporal fusion experiments are carried out with a fusion method based on single-band overcomplete dictionary-pair learning, and the rule for adaptively setting the error parameter during fusion is explored. Three further methods are then developed: spatio-temporal fusion based on multi-band dictionary-pair learning, fusion based on the HIS transformation combined with dictionary-pair learning, and fusion based on the wavelet transform combined with dictionary-pair learning. Finally, the results of these four methods are compared with those of two traditional methods, HIS-transformation based fusion and wavelet-transform based fusion. The experiments and analysis lead to the following conclusions:

(1) Although the uncorrected Landsat 8 OLI and Sentinel-2A MSI TOA images already show high radiometric consistency (R² above 0.9 for every band), atmospheric correction and BRDF correction further improve that consistency (for the blue band the reflectance RMSD decreases by 22% and then by a further 10%), whereas the bandpass adjustment enlarges the radiometric difference between the images. Topographic correction with the Sandmeier physical model, applied after BRDF correction, markedly reduces the topographic radiometric effect, restores the spectral characteristics expected in terrain-shadowed areas, and slightly reduces the radiometric difference on sunlit slopes, but it also enlarges the difference between the Landsat 8 and Sentinel-2A images on shaded slopes.

(2) Fusion experiments on local images covering different topographic gradients and land cover types show that the single-band dictionary-pair learning method reconstructs spatial structure well and preserves spectral fidelity (structural similarity mostly above 0.8, spectral distortion below 8), and that it reproduces the spatial and spectral information of the prediction date fairly accurately even when the two input dates carry topographic radiometric effects of different magnitudes. In addition, the optimal error threshold is found to be correlated with the standard deviation of the high-resolution image (linear fit R² of 0.75), a rule that can guide the adaptive setting of the error threshold when larger areas are fused.

(3) Compared with the single-band method, spatio-temporal fusion based on multi-band dictionary-pair learning preserves spatial detail equally well (its structural similarity statistics are on average 4% higher) and shows better spectral fidelity while still preserving spectral differences (its spectral distortion is up to about 30% lower). The HIS-transformation based fusion performs well on detail fidelity but poorly on spectral preservation, whereas the fusion combining the HIS transformation with dictionary-pair learning behaves in the opposite way. The wavelet-transform based fusion results are prone to blocky artifacts and ringing effects and fail to predict the spectral characteristics expected on the prediction date, while the fusion combining the wavelet transform with dictionary-pair learning preserves spatial structure well in most cases, although its spectral preservation is less stable than that of the two purely dictionary-pair based methods when the image texture scale is small. Finally, when the mountain images of different dates carry topographic radiometric effects of different magnitudes, only the HIS-transformation based and wavelet-transform based fusion methods fail to recover the ground scene expected in the shadowed areas. Therefore, considering spatial structure preservation, spectral fidelity, and robustness across scenes of different topographic gradients and land cover types, the spatio-temporal fusion method based on multi-band dictionary-pair learning is the most effective of the methods examined for fusing Landsat 8 OLI and Sentinel-2A MSI images over mountainous areas.
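The dictionary-pair fusion summarized above rests on a simple mechanism: a coupled high/low-resolution dictionary is learned from the image pair of the known date, the low-resolution band of the prediction date is sparse-coded over the low-resolution sub-dictionary, and the same sparse coefficients are used with the high-resolution sub-dictionary to reconstruct the prediction. The following single-band sketch illustrates that mechanism with scikit-learn; the patch size, number of atoms, sparsity level, and the fuse_single_band helper are illustrative assumptions, not the exact configuration or code of the thesis.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def fuse_single_band(hr_known, lr_known, lr_pred, patch=7, n_atoms=256, sparsity=5):
    """Single-band sketch of dictionary-pair spatio-temporal fusion.

    hr_known, lr_known : co-registered high/low-resolution bands of the known date,
                         resampled to the same (high-resolution) grid.
    lr_pred            : low-resolution band of the prediction date on the same grid.
    Returns a predicted high-resolution band for the prediction date.
    """
    # 1. Corresponding patches from the known-date pair, flattened to vectors.
    hr_vec = extract_patches_2d(hr_known, (patch, patch)).reshape(-1, patch * patch)
    lr_vec = extract_patches_2d(lr_known, (patch, patch)).reshape(-1, patch * patch)

    # 2. Learn one joint dictionary on concatenated HR/LR vectors (subsampled to
    #    keep the learning tractable), then split it into the coupled pair.
    joint = np.hstack([hr_vec, lr_vec])[::10]
    learner = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                 transform_algorithm="omp").fit(joint)
    d_hr = learner.components_[:, :patch * patch]
    d_lr = learner.components_[:, patch * patch:]

    # 3. Sparse-code the prediction-date LR patches over the LR sub-dictionary
    #    and rebuild HR patches from the same coefficients.
    pred_vec = extract_patches_2d(lr_pred, (patch, patch)).reshape(-1, patch * patch)
    codes = sparse_encode(pred_vec, d_lr, algorithm="omp", n_nonzero_coefs=sparsity)
    hr_patches = (codes @ d_hr).reshape(-1, patch, patch)

    # 4. Average overlapping patches back into a full-size band.
    return reconstruct_from_patches_2d(hr_patches, hr_known.shape)
```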
Other Abstract: Persistent fog and clouds, seasonal snow cover, and topographic shadow restrict the acquisition of spatio-temporally continuous remote sensing data over mountainous areas, so multi-sensor image fusion has been proposed to produce space-continuous and information-intact remote sensing data with high spatial and temporal resolution. However, traditional image fusion methods usually require the multi-source data to be fused to have high consistency in geometry, radiance, and spectrum. Considering these facts, this research first discusses the radiometric difference between the corresponding bands acquired almost simultaneously by OLI and MSI, before and after atmospheric correction, BRDF correction, bandpass adjustment, and Sandmeier-model topographic correction. Overcomplete-dictionary-pair-learning based spatio-temporal image fusion was then performed on these two images. Based on this method, the study also proposes a multi-band dictionary-pair-learning based fusion method, an HIS-transformation and dictionary-pair-learning based fusion method, and a wavelet-transform (WT) and dictionary-pair-learning based fusion method. Fusion experiments were carried out on 7 different local areas, and the results and conclusions are as follows.

First, the study indicated that there was high radiometric consistency between the OLI L1T and MSI L1C images, with the R² of the linear regression greater than 0.9 for each band involved. Higher consistency was found after the 6S atmospheric correction and c-factor BRDF correction, with the RMSDs between the two images decreased for most bands (by 22% and a further 10% for the blue bands, for example), but no remarkable improvement was found after the fixed-parameter bandpass adjustment. The topographic correction improved the consistency of the non-shadowed areas but brought additional radiometric difference into the shadowed areas (R² decreased by 22% for the NIR band).

Second, fusion experiments over 7 local areas of different topographic gradients and land cover conditions indicated that the dictionary-pair-learning based fusion method performed well on both structure prediction and spectral preservation (SSIMs in most areas were greater than 0.8 and ERGASs were less than 7). Topographic shadow that appeared only in the image of the known date (not the prediction date) was mostly removed by the method. Moreover, the optimal error threshold for fusing different areas was found to be linearly related to the standard deviation of the input image, with an R² of 0.75.

Third, compared with the method above, the proposed multi-band dictionary-pair-learning based method performed better on spectral preservation, with the ERGASs decreased by as much as 30%. While the HIS-transformation based fusion method performed well on detail prediction but failed to preserve spectral properties, the HIS-transformation and dictionary-pair-learning based method behaved in the opposite way. Meanwhile, the WT based fusion method and the WT and dictionary-pair-learning based fusion method were less robust, and blocky artifacts or ringing effects appeared in some of the fusion results.
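For reference, the consistency and quality measures quoted in both abstracts (per-band linear-fit R², RMSD, SSIM, ERGAS/spectral distortion) are standard quantities that can be computed with a few lines of NumPy and scikit-image. The sketch below is a minimal illustration assuming co-registered reflectance arrays in a (bands, rows, cols) layout; the function names are hypothetical and do not come from the thesis.

```python
import numpy as np
from skimage.metrics import structural_similarity

def radiometric_consistency(band_a, band_b):
    """R² of the linear fit band_b ~ band_a plus the band-to-band RMSD,
    for two co-registered, identically gridded reflectance bands."""
    x, y = band_a.ravel(), band_b.ravel()
    slope, intercept = np.polyfit(x, y, 1)        # least-squares fit y = slope*x + intercept
    residuals = y - (slope * x + intercept)
    r2 = 1.0 - residuals.var() / y.var()
    rmsd = np.sqrt(np.mean((y - x) ** 2))
    return r2, rmsd

def ergas(reference, predicted, resolution_ratio):
    """ERGAS (relative dimensionless global error) between a reference and a fused
    image of shape (bands, rows, cols); resolution_ratio is the high/low pixel-size
    ratio, e.g. 10/30 when a 10 m band is predicted from a 30 m band."""
    terms = [(np.sqrt(np.mean((r - p) ** 2)) / r.mean()) ** 2
             for r, p in zip(reference, predicted)]
    return 100.0 * resolution_ratio * np.sqrt(np.mean(terms))

def band_ssim(reference, predicted):
    """Mean structural similarity of one band (scikit-image implementation)."""
    return structural_similarity(reference, predicted,
                                 data_range=float(reference.max() - reference.min()))
```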
Pages: 109
Language: Chinese
Document Type: Degree thesis
Identifier: http://ir.imde.ac.cn/handle/131551/24762
Collection: 中国科学院水利部成都山地灾害与环境研究所
Affiliation: 中国科学院成都山地灾害与环境研究所
First Author Affiliation: 中国科学院水利部成都山地灾害与环境研究所
Recommended Citation
GB/T 7714
钟函笑. 山地多源中高空间分辨率遥感影像时空融合方法研究——以Landsat 8和Sentinel-2A为例[D]. 北京: 中国科学院大学, 2018.
Files in This Item:
File Name: 山地多源中高空间分辨率遥感影像时空融合方法研究 ——以Landsat 8 和Sentinel-2A 为例.pdf (8347 KB, Adobe PDF)
DocType: Degree thesis | Access: Open access | License: CC BY-NC-SA