Details
An Efficient AdaBoost Algorithm with the Multiple Thresholds Classification (indexed in SCI-EXPANDED, EI) Cited by: 28
Document type: Journal article
English title: An Efficient AdaBoost Algorithm with the Multiple Thresholds Classification
Authors: Ding, Yi[1]; Zhu, Hongyang[2]; Chen, Ruyun[2]; Li, Ronghui[1]
Affiliations: [1] Guangdong Ocean Univ, Maritime Coll, Zhanjiang 524091, Peoples R China; [2] Guangdong Ocean Univ, Coll Math & Comp, Zhanjiang 524091, Peoples R China
Year: 2022
Volume: 12
Issue: 12
Journal: APPLIED SCIENCES-BASEL
Indexed in: SCI-EXPANDED (Accession No. WOS:000816508600001), EI (Accession No. 20220090583), Scopus (Accession No. 2-s2.0-85132124246), WOS
Funding: This research was funded by the Program for Scientific Research Start-up Funds of Guangdong Ocean University, grant number R17015.
Language: English
Keywords: AdaBoost; Multiple Thresholds Classification; accuracy; generalization
Abstract: Featured Application: A new Weak Learn algorithm that classifies examples based on multiple thresholds is proposed. The weight-assigning scheme of the Weak Learn algorithm is changed correspondingly for the AdaBoost algorithm in this paper. A theoretical justification is provided to show the superiority of the method, and experimental studies are presented to verify its effectiveness. Adaptive boosting (AdaBoost) is a prominent example of an ensemble learning algorithm that combines weak classifiers into a strong classifier through weighted majority voting rules. AdaBoost's weak classifier, with threshold classification, tries to find the best threshold in one of the data dimensions, dividing the data into two categories, -1 and 1. However, in some cases this weak learning algorithm is not accurate enough, showing poor generalization performance and a tendency to overfit. To address these challenges, we first propose a new Weak Learn algorithm that classifies examples based on multiple thresholds, rather than only one, to improve its accuracy. Second, we change the weight-allocation scheme of the Weak Learn algorithm within the AdaBoost algorithm so that potential values of other dimensions are used in the classification process, and a theoretical justification is provided to show its generality. Finally, comparative experiments between the two algorithms on 18 UCI datasets show that our improved AdaBoost algorithm achieves a better generalization effect on the test set during the training iterations.
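The baseline the abstract describes, a single-threshold weak learner (decision stump) combined by AdaBoost's weighted majority vote, can be sketched as follows. This is a minimal illustration of the standard scheme the paper improves upon, not the authors' multi-threshold method; all function names and the toy data are invented for the example.

```python
import numpy as np

def train_stump(X, y, w):
    """Find the best single-threshold stump (one feature, one threshold,
    one polarity) minimizing the weighted error; labels are -1/+1."""
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, t, pol)
    return best

def stump_predict(X, j, t, pol):
    return np.where(pol * (X[:, j] - t) >= 0, 1, -1)

def adaboost(X, y, n_rounds=10):
    """Standard AdaBoost: reweight examples each round, weight each
    stump by alpha = 0.5 * ln((1 - err) / err)."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        err, j, t, pol = train_stump(X, y, w)
        err = max(err, 1e-10)  # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, j, t, pol)
        w *= np.exp(-alpha * y * pred)  # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote over all stumps."""
    score = sum(a * stump_predict(X, j, t, p) for a, j, t, p in ensemble)
    return np.sign(score)
```

Each stump splits on exactly one feature at one threshold, which is the single-threshold limitation the paper's multi-threshold Weak Learn algorithm is designed to relax.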