
Record details

Kds-radfnet: A distributed thermal infrared and visible image fusion framework based on knowledge distillation and semantic segmentation (indexed in SCI-EXPANDED and EI)

Document type: Journal article

English title: Kds-radfnet: A distributed thermal infrared and visible image fusion framework based on knowledge distillation and semantic segmentation

Authors: Feng, Siling[1]; Wang, QiaoYun[1]; Lin, Cong[1,2]; Huang, Mengxing[1]

Affiliations: [1] Hainan Univ, Sch Informat & Commun Engn, Haikou 570228, Peoples R China; [2] Guangdong Ocean Univ, Coll Elect & Informat Engn, Zhanjiang 524088, Peoples R China

Year: 2025

Volume: 81

Issue: 10

Journal: JOURNAL OF SUPERCOMPUTING

Indexed in: SCI-EXPANDED (Accession No. WOS:001528723600003), EI (Accession No. 20252918791708), Scopus (Accession No. 2-s2.0-105010567495), WOS

Funding: This research was funded by the National Natural Science Foundation of China (Grants 62466016, 82260362).

Language: English

Keywords: Image fusion; Knowledge distillation; Teacher network; Student network; Semantic segmentation

Abstract: Conventional deep neural network-based methods for fusing thermal infrared and visible images often incur knowledge loss, negatively impacting fused image quality. To address this limitation, this paper presents a novel method named KDS-RADFNet, employing knowledge distillation to fuse these modalities. The approach features a specifically designed knowledge distillation network architecture. A teacher network, utilizing a dual-branch structure, separately extracts thermal radiation features from thermal infrared images and gradient texture features from visible images. Complementary knowledge, formed after cross-modal feature alignment, serves as supervisory signals. A student network learns and integrates this information. A distillation loss function facilitates the transfer of the teacher's feature representations to the student network, establishing a balance between thermal radiation sensitivity and textural-structural fidelity. This process effectively suppresses feature interference during the fusion of multimodal source images. Furthermore, an unsupervised semantic segmentation network is cascaded with the fusion network. Joint supervision using semantic loss and fusion loss guides the training, enhancing the utility of the fused images for downstream high-level vision tasks. Extensive experiments on multiple public datasets demonstrate that the fusion network enhanced by the knowledge distillation method encodes knowledge more effectively, leading to improved fusion performance.
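The record contains no code, but the abstract outlines a teacher-student distillation scheme with a cascaded segmentation head trained under a joint objective. The sketch below (PyTorch) only illustrates how such a pipeline could be wired together; the module structures, layer widths, the pixel-wise max fusion target, and the loss weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a dual-branch "teacher" supplies
# feature supervision, a "student" fuses IR + visible images, and a cascaded
# segmentation head adds a semantic loss to the joint objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TeacherBranch(nn.Module):
    """One teacher branch extracting features from a single modality."""
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class StudentFusionNet(nn.Module):
    """Student: fuses IR + visible into one image and exposes its features."""
    def __init__(self, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(feat_ch, 1, 3, padding=1)

    def forward(self, ir, vis):
        feat = self.encoder(torch.cat([ir, vis], dim=1))
        fused = torch.sigmoid(self.decoder(feat))
        return fused, feat


class SegHead(nn.Module):
    """Cascaded segmentation head operating on the fused image."""
    def __init__(self, num_classes=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, fused):
        return self.net(fused)


def total_loss(student_feat, teacher_feat, fused, ir, vis, seg_logits,
               seg_target, w_distill=1.0, w_fuse=1.0, w_sem=0.5):
    # Distillation loss: pull student features toward the teacher's
    # combined cross-modal features (here simply summed as a stand-in).
    l_distill = F.mse_loss(student_feat, teacher_feat)
    # Fusion loss: keep thermal intensity and visible detail (pixel-wise
    # max target is an illustrative choice, not the paper's objective).
    l_fuse = F.l1_loss(fused, torch.maximum(ir, vis))
    # Semantic loss from the cascaded segmentation head.
    l_sem = F.cross_entropy(seg_logits, seg_target)
    return w_distill * l_distill + w_fuse * l_fuse + w_sem * l_sem


if __name__ == "__main__":
    ir = torch.rand(2, 1, 64, 64)
    vis = torch.rand(2, 1, 64, 64)
    seg_target = torch.randint(0, 9, (2, 64, 64))

    ir_branch, vis_branch = TeacherBranch(), TeacherBranch()
    student, seg_head = StudentFusionNet(), SegHead()

    with torch.no_grad():  # the teacher only provides supervisory signals
        teacher_feat = ir_branch(ir) + vis_branch(vis)

    fused, student_feat = student(ir, vis)
    seg_logits = seg_head(fused)
    loss = total_loss(student_feat, teacher_feat, fused, ir, vis,
                      seg_logits, seg_target)
    loss.backward()
    print(f"joint loss: {loss.item():.4f}")
```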

