Abstract: To address the poor robustness and low detection accuracy of saliency detection in multimodal remote sensing images, this paper proposes a novel and efficient Multi-modal Edge-aware Guidance Network (MEGNet), which mainly consists of a saliency detection backbone for multimodal remote sensing images, a Cross-modal Feature Sharing Module (CFSM) and an Edge-Aware Guidance Network (EAGN). First, the CFSM is applied during feature extraction from remote sensing image pairs; it encourages the different modalities to complement each other during feature extraction and suppresses the influence of defective feature data from either modality. Second, the EAGN verifies the effectiveness of the edge features through an edge-map supervision module, so that the final saliency map has clear boundaries. Finally, experiments are carried out on three remote sensing image datasets for salient object detection. The average Fβ, Mean Absolute Error (MAE) and Sm scores are 0.9176, 0.0095 and 0.9199, respectively. The experimental results show that the proposed MEGNet is suitable for saliency detection in multimodal scenes.
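For reference, two of the three reported metrics can be sketched in a few lines of NumPy. This is a generic illustration of how MAE and the F-measure (with the conventional β² = 0.3) are computed for saliency maps, not the authors' evaluation code; the structure-measure Sm is omitted for brevity, and all function and parameter names here are illustrative.

```python
import numpy as np

def mae(pred, gt):
    # Mean Absolute Error: average per-pixel deviation.
    # Both maps are assumed to be normalized to [0, 1].
    return float(np.mean(np.abs(pred - gt)))

def f_beta(pred, gt, beta2=0.3, thresh=0.5):
    # F-measure at a fixed binarization threshold; beta^2 = 0.3 is the
    # convention in saliency detection (it emphasizes precision over recall).
    pred_bin = pred >= thresh
    gt_bin = gt >= 0.5
    tp = np.logical_and(pred_bin, gt_bin).sum()
    precision = tp / (pred_bin.sum() + 1e-8)
    recall = tp / (gt_bin.sum() + 1e-8)
    return float((1 + beta2) * precision * recall
                 / (beta2 * precision + recall + 1e-8))
```

In practice the F-measure is often averaged over a sweep of thresholds (the "average Fβ" reported above); the single-threshold version here shows only the core computation.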