  • Open Access Article
Concatenated Vector Representation with the Asymmetric GloVe Model
  • Junfeng Shi *
  • Jihong Li

Received: 07 Nov 2023 | Accepted: 31 Dec 2023 | Published: 25 Mar 2025

Abstract

The GloVe model is widely used for learning word vector representations. The vectors it produces encode some semantic and syntactic information, but the conventional GloVe model collects context words within a symmetric window around each target word. Such a collection discards whether a context word appears to the left or to the right of the target word, which is linguistically critical information for learning syntactic representations. As a result, word vectors trained by the GloVe model perform poorly on syntax-based tasks such as part-of-speech tagging (the POS task) and chunking. To address this problem, a concatenated vector representation is proposed based on an asymmetric GloVe model, which distinguishes the left contexts of the target word from its right contexts and exhibits more syntactic similarity than the original GloVe representation when retrieving a target word's nearest neighbors. On the syntactic test set, the concatenated representation performs well on the word analogy task and on syntax-based tasks such as the POS task and the chunking task. Moreover, the concatenated representation has half the dimension of the original GloVe representation, which greatly reduces the running time.
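To make the idea concrete, the core change is in the co-occurrence statistics: instead of one symmetric count matrix, the asymmetric model keeps separate counts for left and right contexts, trains a directional vector from each, and concatenates the two. The following is a minimal Python sketch of that counting and concatenation step, assuming GloVe's standard 1/d distance weighting; the function names and the concatenation helper are illustrative, not the authors' actual implementation.

```python
import numpy as np
from collections import defaultdict

def asymmetric_cooccurrence(corpus, window=5):
    """Build separate left- and right-context co-occurrence counts.

    Unlike the symmetric GloVe window, each (target, context) pair is
    recorded in a direction-specific table, so the left/right position
    of the context word is preserved. Counts are weighted by 1/d,
    GloVe's usual distance weighting (an assumption of this sketch).
    """
    left = defaultdict(float)
    right = defaultdict(float)
    for sentence in corpus:          # sentence: list of token strings
        for i, target in enumerate(sentence):
            for d in range(1, window + 1):
                if i - d >= 0:       # context word to the left of target
                    left[(target, sentence[i - d])] += 1.0 / d
                if i + d < len(sentence):  # context word to the right
                    right[(target, sentence[i + d])] += 1.0 / d
    return left, right

def concatenate(vec_left, vec_right):
    """Concatenate the two directional vectors of one word.

    Each directional model is trained at a reduced dimension, so the
    concatenated vector can stay at half the size of the original
    GloVe vector, as reported in the abstract.
    """
    return np.concatenate([vec_left, vec_right])

# Example: two toy sentences; in practice each table would feed a
# separate GloVe training run on its own co-occurrence matrix.
corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
left, right = asymmetric_cooccurrence(corpus, window=2)
print(left[("dog", "the")], right[("the", "dog")])  # 1.0 1.0
```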

How to Cite
Shi, J.; Li, J. Concatenated Vector Representation with the Asymmetric GloVe Model. International Journal of Network Dynamics and Intelligence 2025, 4 (1), 100002. https://doi.org/10.53941/ijndi.2025.100002.
Copyright & License
Copyright (c) 2025 by the authors.