Modeling and Analysis of Information Systems

Word Embedding for Semantically Relative Words: an Experimental Study

https://doi.org/10.18255/1818-1015-2018-6-726-733

Abstract

The ability to identify semantic relations between words has made the word2vec model widely used in NLP tasks. The idea of word2vec rests on a simple assumption: two words are more similar if they occur in similar contexts. Each word is represented as a vector, so words whose vectors lie close together can be interpreted as similar, which makes it possible to extract semantic relations (synonymy, hypernymy, hyponymy, and others) automatically. Extracting semantic relations by hand is a time-consuming and biased task that requires the help of experts. Unfortunately, the word2vec model produces an associative list of words that does not consist of semantically related words only. In this paper, we present additional criteria that may be applicable to this problem. Observations and experiments with well-known characteristics, such as word frequency and position in the associative list, may help to improve the extraction of semantic relations for the Russian language using word embeddings. In the experiments, a word2vec model trained on the Flibusta collection is used, and word pairs from Wiktionary serve as examples of semantic relations. Semantically related words are useful for thesauri, ontologies, and intelligent natural language processing systems.
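To make the filtering idea concrete, below is a minimal sketch (not the authors' implementation) of how an associative list produced by a trained word2vec model can be filtered by the two characteristics discussed above: position (rank) in the list and corpus frequency. It assumes a gensim 4.x model; the file name flibusta_word2vec.model, the thresholds, and the query word are illustrative.

```python
# Minimal sketch: filter a word2vec associative list by rank and frequency.
# Assumes gensim 4.x; the model file and thresholds are illustrative.
from gensim.models import Word2Vec

model = Word2Vec.load("flibusta_word2vec.model")  # hypothetical model file

def related_candidates(word, topn=20, max_rank=10, min_count=50):
    """Return neighbours of `word` that pass simple rank/frequency filters."""
    kept = []
    for rank, (neighbour, similarity) in enumerate(
            model.wv.most_similar(word, topn=topn), start=1):
        if rank > max_rank:  # position criterion: keep top-ranked associations only
            break
        # Frequency criterion: vectors of rare words tend to be noisy.
        if model.wv.get_vecattr(neighbour, "count") < min_count:
            continue
        kept.append((neighbour, similarity, rank))
    return kept

# Example query; any word from the training corpus works.
print(related_candidates("книга"))
```

Candidates that survive both filters are then checked against relation pairs such as those drawn from Wiktionary; the thresholds would be tuned experimentally rather than fixed in advance.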

About the Authors

Maria S. Karyaeva
P.G. Demidov Yaroslavl State University
Russian Federation

Graduate student

14 Sovetskaya str., Yaroslavl 150003



Pavel I. Braslavski
Ural Federal University
Russian Federation

PhD, Docent

19 Mira str., Ekaterinburg 620002



Valery A. Sokolov
P.G. Demidov Yaroslavl State University
Russian Federation

Doctor, Professor

14 Sovetskaya str., Yaroslavl 150003




For citations:


Karyaeva M.S., Braslavski P.I., Sokolov V.A. Word Embedding for Semantically Relative Words: an Experimental Study. Modeling and Analysis of Information Systems. 2018;25(6):726-733. (In Russ.) https://doi.org/10.18255/1818-1015-2018-6-726-733

This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1818-1015 (Print)
ISSN 2313-5417 (Online)