Few-shot Sentiment Analysis Based on Adaptive Prompt Learning and Contrastive Learning
DOI:
https://doi.org/10.5755/j01.itc.52.4.34021
Keywords:
Few-shot Sentiment Analysis, Adaptive Prompt Learning, Contrastive Learning, Dot-Product Attention, Semantic Information of Texts
Abstract
Traditional deep learning-based strategies for sentiment analysis rely heavily on large-scale labeled datasets for model training, but these methods become less effective when dealing with small-scale datasets. Fine-tuning large pre-trained models on small datasets is currently the most commonly adopted approach to tackle this issue. Recently, prompt-based learning has gained significant attention as a promising research area. Although prompt-based learning has the potential to address data scarcity by reformulating downstream tasks with prompts, current prompt-based methods for few-shot sentiment analysis remain inefficient. To tackle this challenge, an adaptive prompt-based learning method is proposed, which includes two aspects. First, an adaptive prompt construction strategy is proposed that captures the semantic information of texts through a dot-product attention structure, improving the quality of the prompt templates. Second, contrastive learning is applied to the implicit word vectors obtained from two forward passes during the training stage to alleviate over-fitting in few-shot learning. This improves the model's generalization ability by achieving data augmentation while keeping the semantic information of the input sentences unchanged. Experimental results on the EPRSTMT datasets of FewCLUE demonstrate that the proposed method has a strong ability to construct suitable adaptive prompts and outperforms the state-of-the-art baselines.
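The abstract describes two mechanisms: scoring candidate prompt templates against the input's semantics with dot-product attention, and contrasting the hidden vectors produced by two forward passes over the same sentence. The sketch below illustrates both ideas in PyTorch; it is not the authors' implementation, and the class name AdaptivePromptScorer, the SimCSE-style InfoNCE formulation, and hyper-parameters such as the temperature are illustrative assumptions.

```python
# Minimal sketch of (1) dot-product attention over prompt templates and
# (2) contrastive learning on hidden vectors from two dropout passes.
# All names and hyper-parameters here are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptivePromptScorer(nn.Module):
    """Scores candidate prompt-template embeddings against a sentence
    representation with scaled dot-product attention, so the template that
    best matches the input's semantics receives the largest weight."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, sentence_repr: torch.Tensor, template_reprs: torch.Tensor):
        # sentence_repr: (batch, hidden); template_reprs: (num_templates, hidden)
        q = self.query(sentence_repr)                    # (batch, hidden)
        k = self.key(template_reprs)                     # (num_templates, hidden)
        scores = q @ k.t() / (k.size(-1) ** 0.5)         # scaled dot-product scores
        return F.softmax(scores, dim=-1)                 # per-template attention weights


def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.05):
    """SimCSE-style InfoNCE loss: z1 and z2 are hidden vectors from two forward
    passes of the same sentences (different dropout masks), so each sentence's
    second view is its positive and all other in-batch sentences are negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                   # (batch, batch) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    batch, hidden, num_templates = 4, 768, 5
    scorer = AdaptivePromptScorer(hidden)
    weights = scorer(torch.randn(batch, hidden), torch.randn(num_templates, hidden))
    loss = contrastive_loss(torch.randn(batch, hidden), torch.randn(batch, hidden))
    print(weights.shape, loss.item())
```

Because the two dropout passes only perturb internal activations, this form of augmentation leaves the semantic content of the input sentence unchanged, which is the property the abstract relies on.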
Published
2024-01-12
Section
Articles
License
Copyright terms are indicated in the Republic of Lithuania Law on Copyright and Related Rights, Articles 4-37.