Received date: 2022-03-29
Online published: 2023-03-23
Funding: National Natural Science Foundation of China (62072183)
A landscape simulation modeling method based on remote sensing images
Traditional virtual terrain modeling typically relies on manually designed procedural generation, which cannot satisfy simulation modeling tasks, such as military simulation, that require reproducing real environments. For such tasks, this paper proposes a virtual terrain simulation modeling method based on remote sensing images, whose core is a landscape blended texture generation network (LBTG-Net). The method uses a blended texture generator (BTG), constrained by a style discriminator (SD) and a multi-stage classification loss, to generate landscape blended texture maps, and then procedurally constructs the terrain environment from the generated textures. The method has two key features: (1) accurate land-cover classification of the input remote sensing images, which guarantees faithful reproduction of the input environment; and (2) high-quality landscape blended texture maps, which improve the quality of virtual terrain modeling. LBTG-Net is trained and validated on a Sentinel-2 multispectral remote sensing image dataset. Experimental results show that the method performs well on all land-cover classification metrics and can accurately reproduce the environmental distribution of the input remote sensing images while producing high-quality virtual terrain models.
WANG Zehua, GAO Yan, CHEN Mingang. A landscape simulation modeling method based on remote sensing images [J]. Journal of East China Normal University (Natural Science), 2023, 2023(2): 82-94. DOI: 10.3969/j.issn.1000-5641.2023.02.010
Traditional virtual terrain modeling commonly relies on manually designed procedural generation, which cannot satisfy simulation modeling tasks that require reproducing real environments, such as military simulation. In this paper, we propose a landscape simulation modeling method based on remote sensing images. The core of the method is a landscape blended texture generation network (LBTG-Net), in which a blended texture generator (BTG) produces landscape blended textures under the supervision of a style discriminator (SD) and a multi-stage classification loss. The complete virtual environment is then built procedurally from the blended texture generated by LBTG-Net. The method has two main features: (1) accurate land-cover classification of the input remote sensing images; and (2) high-quality landscape blended texture outputs that guarantee the quality of virtual landscape modeling. We used multispectral imagery from the Sentinel-2 satellite as the experimental dataset. The experimental results show that the method performs well on mainstream land-cover classification metrics and can accurately reproduce the environmental distribution of input remote sensing images while delivering high-quality virtual terrain simulation modeling.
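The abstract describes LBTG-Net as a blended texture generator (BTG) trained under the joint supervision of a style discriminator (SD) and a multi-stage classification loss. The PyTorch-style sketch below illustrates how such a loss composition can be wired together; it is a minimal, simplified example and not the authors' implementation: the module architectures, the number of Sentinel-2 input bands, the number of land-cover classes, and the single cross-entropy term standing in for the multi-stage classification loss are all illustrative assumptions.

```python
# Minimal sketch of the generator/discriminator/classification loss composition
# described in the abstract. All shapes, channel counts, and class counts are
# illustrative assumptions, not the authors' actual LBTG-Net configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 8   # assumed number of land-cover classes
IN_CHANNELS = 4   # assumed Sentinel-2 bands used as input (e.g., B2, B3, B4, B8)

class BTG(nn.Module):
    """Toy stand-in for the blended texture generator (BTG)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(IN_CHANNELS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.texture_head = nn.Conv2d(64, 3, 3, padding=1)   # RGB blended texture
        self.class_head = nn.Conv2d(64, NUM_CLASSES, 1)      # per-pixel land-cover logits

    def forward(self, x):
        h = self.encoder(x)
        return torch.tanh(self.texture_head(h)), self.class_head(h)

class StyleDiscriminator(nn.Module):
    """Toy stand-in for the style discriminator (SD): real/fake score per texture patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, tex):
        return self.net(tex)

def generator_loss(sd_fake_logits, class_logits, class_labels, lambda_cls=1.0):
    """Adversarial style loss plus a per-pixel land-cover classification loss.
    The paper uses a multi-stage classification loss; a single cross-entropy
    term is used here as a simplification."""
    adv = F.binary_cross_entropy_with_logits(sd_fake_logits,
                                             torch.ones_like(sd_fake_logits))
    cls = F.cross_entropy(class_logits, class_labels)
    return adv + lambda_cls * cls

if __name__ == "__main__":
    g, d = BTG(), StyleDiscriminator()
    image = torch.randn(2, IN_CHANNELS, 64, 64)          # stand-in Sentinel-2 patch
    labels = torch.randint(0, NUM_CLASSES, (2, 64, 64))  # stand-in land-cover labels
    texture, logits = g(image)
    loss_g = generator_loss(d(texture), logits, labels)
    loss_g.backward()
    print(texture.shape, logits.shape, float(loss_g))
```

In the paper's pipeline, the generated blended texture would then drive procedural construction of the terrain environment; that stage is outside the scope of this sketch.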