Improving Chinese Fact Checking via Prompt-Based Learning and Low-Rank Adaptation
Author: Yu-Yen Ting, Chia-Hui Chang
Published: November 2023
Updated: March 25, 2025
Abstract
Verifying the accuracy of information is a constant task given the prevalence of misinformation on the Web. In this paper, we focus on Chinese fact-checking (the CHEF dataset) [1] and improve performance through prompt-based learning in both evidence retrieval and claim verification. We adopt the Automated Prompt Engineering (APE) technique to generate templates and compare various prompt-based training strategies, such as prompt tuning and low-rank adaptation (LoRA), for claim verification. The results show that prompt-based learning improves the macro-F1 of claim verification by 2%-3% (from 77.62 to 80.29) using gold evidence and a 110M-parameter BERT-based model. For evidence retrieval, we use both the supervised SentenceBERT [2] and the unsupervised PromptBERT [3] models. Experimental results show that the micro-F1 of evidence retrieval improves significantly from 11.86% to 30.61% with PromptBERT and to 88.15% with SentenceBERT. Finally, the overall fact-checking performance, i.e., the macro-F1 of claim verification, improves significantly from 61.94% to 80.16% when semantic-ranking-based evidence retrieval is replaced by SentenceBERT.
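To make the LoRA idea mentioned above concrete, the following is a minimal NumPy sketch of the low-rank update itself (not the paper's actual training code): a frozen weight matrix W is augmented with two small trainable matrices B and A, so the adapted layer computes Wx + (alpha/r)·BAx. All dimensions and the alpha value here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a 768-wide hidden layer (as in 110M BERT-base),
# rank r << hidden size keeps the update low-rank and cheap to train.
d_in, d_out, r, alpha = 768, 768, 8, 16

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight (not updated)
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection, small random init
B = np.zeros((d_out, r))                 # trainable up-projection, zero init

def lora_forward(x):
    """Adapted forward pass: frozen base output plus scaled low-rank update."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapted layer matches the frozen layer exactly,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters drop from d_out*d_in to r*(d_in + d_out).
print(r * (d_in + d_out), "trainable vs", d_out * d_in, "full")
```

The zero initialization of B is the standard LoRA choice: it guarantees the adapted model is identical to the pretrained one before any gradient steps are taken.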