Modeling and Analysis of Information Systems

Text classification by CEFR levels using machine learning methods and BERT language model

https://doi.org/10.18255/1818-1015-2023-3-202-213

Abstract

This paper presents a study of the automatic classification of short coherent texts (essays) in English according to the levels of the international CEFR scale. Determining the level of a natural-language text is an important component of assessing students' knowledge, including the checking of open-ended tasks in e-learning systems. To solve this problem, vector text models were built from numerical stylometric features at the character, word, and sentence-structure levels. The resulting vectors were classified by standard machine learning classifiers. The article reports the results of the three most successful ones: Support Vector Classifier, Stochastic Gradient Descent Classifier, and Logistic Regression. Precision, recall, and F-score served as quality measures. Two open text corpora, CEFR Levelled English Texts and BEA-2019, were chosen for the experiments. The best classification results for the six CEFR levels and sublevels from A1 to C2 were obtained by the Support Vector Classifier, with an F-score of 67 % on the CEFR Levelled English Texts. This approach was compared with the application of the BERT language model (six different variants). The best model, bert-base-cased, achieved an F-score of 69 %. An analysis of classification errors showed that most of them occur between neighboring levels, which is understandable from the point of view of the domain. In addition, classification quality depended strongly on the text corpus: the same text models produced significantly different F-scores on different corpora. Overall, the results demonstrate the effectiveness of automatic detection of text level and its potential for practical application.
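To make the classical pipeline concrete, below is a minimal sketch in Python of the approach the abstract describes: hand-crafted stylometric features at the character, word, and sentence-structure levels, classified with scikit-learn's Support Vector Classifier. The specific features, corpus wiring, and hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumed, not the authors' code): stylometric feature
# vectors classified with a Support Vector Classifier.
import re
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def stylometric_vector(text):
    """Numerical features at the character, word, and sentence levels
    (an illustrative subset; the paper uses a richer feature set)."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_chars = max(len(text), 1)
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    return [
        sum(c.isupper() for c in text) / n_chars,   # character level
        sum(c in ",;:" for c in text) / n_chars,    # punctuation density
        sum(len(w) for w in words) / n_words,       # mean word length
        len({w.lower() for w in words}) / n_words,  # type-token ratio
        n_words / n_sents,                          # mean sentence length
    ]

def train_and_report(texts, labels):
    """texts: list of essays; labels: CEFR levels 'A1'..'C2'."""
    X = np.array([stylometric_vector(t) for t in texts])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC())
    clf.fit(X_train, y_train)
    # precision, recall, and F-score per level, as reported in the paper
    print(classification_report(y_test, clf.predict(X_test)))
    return clf
```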
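For the language-model comparison, the following sketch fine-tunes bert-base-cased for six-way CEFR classification with the Hugging Face transformers library. The dataset wrapper and training hyperparameters are assumptions for illustration; the abstract does not specify the authors' training configuration.

```python
# Minimal sketch (assumed configuration): fine-tuning bert-base-cased for
# six-way CEFR classification with Hugging Face transformers.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

class EssayDataset(torch.utils.data.Dataset):
    """Tokenizes essays and maps CEFR labels to class indices."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, max_length=512,
                             padding="max_length")
        self.labels = [LEVELS.index(lvl) for lvl in labels]
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def fine_tune(train_texts, train_labels, eval_texts, eval_labels):
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-cased", num_labels=len(LEVELS))
    args = TrainingArguments(output_dir="cefr-bert",  # illustrative settings
                             num_train_epochs=3,
                             per_device_train_batch_size=8)
    trainer = Trainer(model=model, args=args,
                      train_dataset=EssayDataset(train_texts, train_labels),
                      eval_dataset=EssayDataset(eval_texts, eval_labels))
    trainer.train()
    return trainer
```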

About the Authors

Nadezhda S. Lagutina
P.G. Demidov Yaroslavl State University
Russian Federation


Ksenia V. Lagutina
P.G. Demidov Yaroslavl State University
Russian Federation


Anastasya M. Brederman
P.G. Demidov Yaroslavl State University
Russian Federation


Natalia N. Kasatkina
P.G. Demidov Yaroslavl State University
Russian Federation


References

1. E. del Gobbo, A. Guarino, B. Cafarelli, L. Grilli, and P. Limone, “Automatic evaluation of open-ended questions for online learning. A systematic mapping,” Studies in Educational Evaluation, vol. 77, p. 101258, 2023.

2. N. V. Galichev and P. S. Shirogorodskaya, “The problem of automatic measurement of complex constructs through open-ended tasks,” in XXI International Scientific and Practical Conference of Young Researchers of Education, 2022, pp. 695–697 (in Russian).

3. L. E. Adamova, O. V. Surikova, I. G. Bulatova, and O. O. Varlamov, “Application of the mivar expert system to evaluate the complexity of texts,” News of the Kabardin-Balkar Scientific Center of RAS, no. 2, pp. 11–29, 2021.

4. D. Ramesh and S. K. Sanampudi, “An automated essay scoring systems: a systematic literature review,” Artificial Intelligence Review, vol. 55, no. 3, pp. 2495–2527, 2022.

5. K. P. Yancey, G. Laflair, A. Verardi, and J. Burstein, “Rating Short L2 Essays on the CEFR Scale with GPT-4,” in Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), 2023, pp. 576–584.

6. A. Gasparetto, M. Marcuzzo, A. Zangari, and A. Albarelli, “A survey on text classification algorithms: From text to predictions,” Information, vol. 13, no. 2, p. 83, 2022.

7. V. Ramnarain-Seetohul, V. Bassoo, and Y. Rosunally, “Similarity measures in automated essay scoring systems: A ten-year review,” Education and Information Technologies, vol. 27, no. 4, pp. 5573–5604, 2022.

8. P. Yang, L. Li, F. Luo, T. Liu, and X. Sun, “Enhancing topic-to-essay generation with external commonsense knowledge,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019, pp. 2002–2012.

9. N. N. Mikheeva and E. V. Shulyndina, “Features of training written Internet communication in a non-linguistic university,” Tambov University Review. Series: Humanities, vol. 28, no. 2, pp. 405–414, 2023.

10. V. J. Schmalz and A. Brutti, “Automatic assessment of English CEFR levels using BERT embeddings,” 2021.

11. Y. Arase, S. Uchida, and T. Kajiwara, “CEFR-Based Sentence Difficulty Annotation and Assessment,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022, pp. 6206–6219.

12. R. Jalota, P. Bourgonje, J. Van Sas, and H. Huang, “Mitigating Learnerese Effects for CEFR classification,” in Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 2022, pp. 14–21.

13. T. Gaillat et al., “Predicting CEFR levels in learners of English: The use of microsystem criterial features in a machine learning approach,” ReCALL, vol. 34, no. 2, pp. 130–146, 2022.

14. E. Kerz, D. Wiechmann, Y. Qiao, E. Tseng, and M. Ströbel, “Automated classification of written proficiency levels on the CEFR-scale through complexity contours and RNNs,” in Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, 2021, pp. 199–209.

15. Y. Yang and J. Zhong, “Automated essay scoring via example-based learning,” in Web Engineering, 2021, pp. 201–208.

16. E. Mayfield and A. W. Black, “Should you fine-tune BERT for automated essay scoring?,” in Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, 2020, pp. 151–162.

17. J. M. Imperial, “BERT Embeddings for Automatic Readability Assessment,” in Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), 2021, pp. 611–618.

18. C. Bryant, M. Felice, O. E. Andersen, and T. Briscoe, “The BEA-2019 shared task on grammatical error correction,” in Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, 2019, pp. 52–75.

19. K. V. Lagutina and A. M. Manakhova, “Automated Search and Analysis of the Stylometric Features That Describe the Style of the Prose of 19th–21st Centuries,” Automatic Control and Computer Sciences, vol. 55, no. 7, pp. 866–876, 2021.

20. A. M. Manakhova and N. S. Lagutina, “Analysis of the impact of the stylometric characteristics of different levels for the verification of authors of the prose,” Modeling and Analysis of Information Systems, vol. 28, no. 3, pp. 260–279, 2021.

21. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019, vol. 1, pp. 4171–4186.

22. V. Sanh, L. Debut, J. Chaumond, and T. Wolf, “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter,” arXiv preprint arXiv:1910.01108, 2020.


For citations:


Lagutina N.S., Lagutina K.V., Brederman A.M., Kasatkina N.N. Text classification by CEFR levels using machine learning methods and BERT language model. Modeling and Analysis of Information Systems. 2023;30(3):202-213. (In Russ.) https://doi.org/10.18255/1818-1015-2023-3-202-213

This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1818-1015 (Print)
ISSN 2313-5417 (Online)