
Record Details

SkinSwinViT: A Lightweight Transformer-Based Method for Multiclass Skin Lesion Classification with Enhanced Generalization Capabilities  (indexed in SCI-EXPANDED)   Cited by: 1

Document Type: Journal Article

English Title: SkinSwinViT: A Lightweight Transformer-Based Method for Multiclass Skin Lesion Classification with Enhanced Generalization Capabilities

Authors: Tang, Kun[1]; Su, Jing[1]; Chen, Ruihan[1,2]; Huang, Rui[1]; Dai, Ming[1]; Li, Yongjiang[1]

Affiliations: [1] Guangdong Ocean Univ, Sch Math & Comp, Zhanjiang 524008, Peoples R China; [2] Int Macau Inst Acad Res, Artificial Intelligence Res Inst, Taipa 999078, Peoples R China

Year: 2024

Volume: 14

Issue: 10

Journal: APPLIED SCIENCES-BASEL

Indexed in: SCI-EXPANDED (Accession No.: WOS:001232734800001), Scopus (Accession No.: 2-s2.0-85194488579), WOS

Funding: No Statement Available

Language: English

Keywords: skin lesions; ISIC2018; transformer; data enhancement; multiclassification

Abstract: In recent decades, skin cancer has become a significant global health concern, demanding timely detection and effective treatment. Automated image classification holds substantial promise for improving the accuracy and efficiency of clinical diagnosis. This study addresses the challenge of diagnostic accuracy in multiclass skin lesion classification, a task made difficult by the close resemblance among different lesion types and by the limitations of conventional convolutional neural networks in extracting precise global and local image features across scales. The study therefore introduces SkinSwinViT, a skin lesion classification model built on the Swin Transformer framework and equipped with a global attention mechanism. Leveraging the cross-window attention inherent in the Swin Transformer architecture, the model captures local features and their interdependencies within skin lesion images, while an additional global self-attention mechanism captures overarching features and contextual information. Performance was evaluated on the ISIC2018 challenge dataset, with data augmentation used to enlarge the training set and improve model performance. Experimental results show that SkinSwinViT achieves 97.88% accuracy, 97.55% recall, 97.83% precision, 99.36% specificity, and a 97.79% F1 score.
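The abstract describes a Swin-Transformer-style backbone whose windowed attention captures local lesion features, supplemented by a global self-attention mechanism for whole-image context and trained with data augmentation on ISIC2018. The sketch below is a rough illustration of that idea only, assuming PyTorch with torchvision >= 0.13: it pairs a torchvision Swin-T backbone with one extra multi-head self-attention layer over all spatial tokens. The class name SwinWithGlobalAttention, the augmentation choices, the 7-class output (ISIC2018 Task 3), and the placement of the global attention are all assumptions, not the authors' released implementation.

```python
# Illustrative sketch only, not the paper's code. Assumes torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision import transforms
from torchvision.models import swin_t, Swin_T_Weights

# Typical augmentation pipeline to enlarge an ISIC2018-style training set (assumed choices).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

class SwinWithGlobalAttention(nn.Module):
    """Swin stages provide local, window-based attention; an added multi-head
    self-attention layer over all spatial tokens supplies global context
    before classification (hypothetical stand-in for SkinSwinViT)."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        backbone = swin_t(weights=Swin_T_Weights.IMAGENET1K_V1)  # or weights=None to skip the download
        self.features = backbone.features            # hierarchical window-attention stages
        self.norm = backbone.norm                    # LayerNorm over the channel dimension
        dim = backbone.head.in_features              # 768 for swin_t
        self.global_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                         # (B, H, W, C), channels-last
        x = self.norm(x)
        b, h, w, c = x.shape
        tokens = x.reshape(b, h * w, c)              # flatten the spatial grid into tokens
        attended, _ = self.global_attn(tokens, tokens, tokens)
        tokens = tokens + attended                   # residual global-attention branch
        return self.head(tokens.mean(dim=1))         # global average pool, then classify

if __name__ == "__main__":
    model = SwinWithGlobalAttention(num_classes=7)
    logits = model(torch.randn(2, 3, 224, 224))      # dummy batch of dermoscopy-sized inputs
    print(logits.shape)                              # torch.Size([2, 7])
```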

