
Detailed Information

PTE: Prompt tuning with ensemble verbalizers (indexed in SCI-EXPANDED and EI)

Document type: Journal article

English title: PTE: Prompt tuning with ensemble verbalizers

Authors: Liang, Liheng[1]; Wang, Guancheng[2]; Lin, Cong[2]; Feng, Zhuowen[3]

Affiliations: [1] Guangdong Ocean Univ, Fac Math & Comp Sci, Zhanjiang 524088, Peoples R China; [2] Guangdong Ocean Univ, Coll Elect & Informat Engn, Zhanjiang 524088, Peoples R China; [3] Guangdong Ocean Univ, Coll Literature & News Commun, Zhanjiang 524088, Peoples R China

Year: 2025

Volume: 262

Journal: EXPERT SYSTEMS WITH APPLICATIONS

Indexed in: SCI-EXPANDED (Accession No. WOS:001350422300001); EI (Accession No. 20244517313790); Scopus (Accession No. 2-s2.0-85207932524)

Funding: This work was supported in part by the Undergraduate Innovation Team Project of Guangdong Ocean University under Grant CXTD2024011, in part by the Technology Achievement Transformation Project Team of Guangdong Ocean University under Grant JDTD2024003, and in part by the Program for Scientific Research Start-up Funds of Guangdong Ocean University under Grant 060302112401.

Language: English

Keywords: Prompt tuning; Few-shot learning; Text classification; Pre-trained language models

Abstract: Prompt tuning has proven remarkably effective at improving the performance of Pre-trained Language Models (PLMs) across various downstream NLP tasks, particularly in scenarios with limited downstream data. Reframing tasks as fill-in-the-blank questions is an effective prompt-tuning approach. However, it requires mapping labels through a verbalizer consisting of one or more label tokens, and it is constrained by manually crafted prompts. Furthermore, most existing automatic construction methods either introduce external resources or rely solely on discrete or continuous optimization strategies. To address this issue, we propose a gradient-descent-based method for optimizing discrete verbalizers, which we refer to as PTE. The method integrates discrete tokens into verbalizers that can be optimized continuously, combining the distinct advantages of discrete and continuous optimization strategies. In contrast to prior approaches, ours does not rely on prompts generated by other models or on prior knowledge; it merely augments the model with a single matrix. The approach is simple and flexible, enabling prompt optimization while preserving the interpretability of output label tokens, free of the constraints imposed by discrete vocabularies. Finally, applying this method to text classification tasks, we observe that PTE achieves results comparable to, if not surpassing, previous methods even in its most concise configuration. This furnishes a simple, intuitive, and efficient solution for automatically constructing verbalizers. Moreover, quantitative analysis of the optimized verbalizers reveals that language models likely rely not only on semantic information but also on other features for text classification, opening new avenues for future research and model enhancements.
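The abstract does not give PTE's exact formulation, but the core idea it describes (a verbalizer whose label-token weights form a single learnable matrix, seeded with discrete label tokens and then refined by gradient descent) can be sketched. Below is a minimal PyTorch sketch under that assumption; the class name EnsembleVerbalizer, the softmax parameterization, the one-hot initialization, and the token ids are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EnsembleVerbalizer(nn.Module):
    """Hypothetical sketch: map [MASK]-position vocabulary logits to class
    scores via a learnable class-by-vocabulary weight matrix."""

    def __init__(self, vocab_size: int, seed_token_ids: list[list[int]]):
        super().__init__()
        num_classes = len(seed_token_ids)
        # Initialize the matrix from discrete seed label tokens (one-hot-style
        # rows), then let gradient descent refine it continuously.
        weight = torch.zeros(num_classes, vocab_size)
        for cls, token_ids in enumerate(seed_token_ids):
            for tid in token_ids:
                weight[cls, tid] = 1.0 / len(token_ids)
        self.weight = nn.Parameter(weight)

    def forward(self, mask_logits: torch.Tensor) -> torch.Tensor:
        # mask_logits: (batch, vocab_size), the PLM's logits at the [MASK] slot.
        # A softmax over the vocabulary axis keeps each class's verbalizer a
        # distribution over tokens, so its top tokens stay human-readable.
        token_weights = torch.softmax(self.weight, dim=-1)
        return mask_logits @ token_weights.T  # (batch, num_classes)

# Usage: a 2-class sentiment task seeded with one label token per class
# (token ids below are placeholders, not real tokenizer output).
vocab_size = 30522  # e.g., a BERT-base-sized vocabulary
verbalizer = EnsembleVerbalizer(vocab_size, seed_token_ids=[[2307], [6659]])
mask_logits = torch.randn(4, vocab_size)  # stand-in for PLM [MASK] logits
class_scores = verbalizer(mask_logits)    # (4, 2), trainable end-to-end
print(class_scores.shape)
```

Keeping the weights as a distribution over the full vocabulary is one way to realize the interpretability claim in the abstract: after training, the highest-weighted tokens for each class can be read off directly, while the weights themselves remain continuously optimizable.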

