Fine-Tuning Fine-Tuned Models: Towards a Practical Methodology for Sentiment Analysis with Small In-Domain Supervised Dataset
[1] Institute of Computing Science, Faculty of Computing and Telecommunications, Poznan University of Technology
2025
chapter in monograph / paper
English
Sentiment classifiers are typically built by annotating a relatively small data sample and fine-tuning a pre-trained language model. This approach overlooks the opportunity created by the emergence of open-source sentiment classifiers trained on large collections of supervised data from a variety of domains. These models often exhibit superior classification performance and can be used out of the box, but their performance may still be degraded by domain shift. This shift could potentially be eliminated by annotating in-domain data and further fine-tuning the model, yet fine-tuning already fine-tuned models has not been investigated in the context of sentiment analysis and has often been unsuccessful for other NLP tasks. This paper presents an experimental analysis of this issue, studying the performance of three off-the-shelf sentiment classifiers fine-tuned with 14 different methods on customer reviews in three languages. The results show that fine-tuning already fine-tuned models on in-domain data leads to significant performance improvements. In particular, unsupervised domain adaptation techniques in this new setup outperform standard supervised fine-tuning of general-purpose language models.
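To make the setup concrete, below is a minimal sketch (not the authors' code) of the baseline the abstract describes: taking an openly available sentiment classifier that was already fine-tuned on large multi-domain supervised data and continuing to fine-tune it on a small annotated in-domain sample. The model checkpoint, hyperparameters, and example reviews are illustrative assumptions, not the specific models or methods evaluated in the paper.

```python
# Minimal sketch: continued fine-tuning of an already fine-tuned sentiment
# classifier on a small in-domain supervised dataset. All concrete choices
# (checkpoint, labels, hyperparameters) are illustrative assumptions.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

# An off-the-shelf multilingual sentiment classifier as the starting point
# (an illustrative choice, not necessarily one of the three studied here).
model_name = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Small annotated in-domain sample (placeholder customer reviews);
# labels must follow the checkpoint's scheme (0=negative, 2=positive here).
reviews = Dataset.from_dict({
    "text": ["Great product, works as advertised.",
             "Stopped working after two days."],
    "label": [2, 0],
})
reviews = reviews.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

# Plain supervised fine-tuning of the already fine-tuned model; the paper
# compares 14 methods, of which this corresponds only to the simplest one.
args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=reviews).train()
```

The key design point is that `from_pretrained` loads a checkpoint whose classification head is already trained for sentiment, so training updates an existing classifier rather than initializing a new head on a general-purpose language model.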