Comparison of Paragram and GloVe Results for Similarity Benchmarks
[ 1 ] Instytut Automatyki, Robotyki i Inżynierii Informatycznej, Wydział Elektryczny, Politechnika Poznańska | [ P ] employee
2019
chapter in monograph / paper
english
- language models
- vector spaces
- word embeddings
Distributional Semantic Models (DSMs) derive a word space from linguistic items in context. In this paper we compare two methods for post-processing improvements to baseline DSM vectors. The counter-fitting method, which enforces antonymy and synonymy constraints in the Paragram vector space representation, recently showed an improvement in the vectors' capability for judging semantic similarity. The second method is our novel RESM method applied to GloVe baseline vectors. By applying a hubness reduction method, incorporating relational knowledge into the model by retrofitting synonyms, and introducing a new ranking similarity definition, RESM, which gives maximum weight to the top vector component values, we match the results on the ESL and TOEFL sets obtained in our calculations with the Paragram and Paragram + Counter-fitting methods. The Paragram and our cosine retrofitting methods achieve state-of-the-art results on the SimLex-999 gold standard. Apparently, relational knowledge and counter-fitting are more important for judging semantic similarity than word sense determination.
236-248
20