EMSMOTE: Ensemble multiclass synthetic minority oversampling technique to improve accuracy of multilingual sentiment analysis on imbalanced data
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2024.15.4.17
Keywords: Sentiment analysis, Natural language processing, Multilingual dataset, Imbalance classification, SMOTE
License
Copyright (c) 2024 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract
Natural language processing (NLP) tasks such as multilingual sentiment analysis are inherently challenging, especially when the data are imbalanced. A dataset is considered imbalanced when one class significantly outnumbers the others. In many domains the minority class carries the most important information, which makes such skewed distributions particularly problematic. This research addresses these challenges with an ensemble-based oversampling technique, EMSMOTE (ensemble multiclass synthetic minority oversampling technique). EMSMOTE applies SMOTE to generate multiple synthetic training sets, each of which is used to train a separate classifier. Combined with an ensemble random forest classifier, the proposed model attained an accuracy of 90.73%. The ensemble approach mitigates the effect of noisy synthetic samples introduced by SMOTE and significantly improves overall performance on imbalanced classes.
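As a rough illustration of the ensemble-oversampling idea described above, the sketch below resamples an imbalanced multiclass dataset several times with SMOTE, trains one random forest per resampled copy, and combines the members by majority vote. It assumes scikit-learn and imbalanced-learn; the toy dataset, the number of ensemble members, and the voting rule are illustrative assumptions, not the authors' exact EMSMOTE procedure.

    # Sketch of ensemble oversampling in the spirit of EMSMOTE:
    # several SMOTE-resampled copies of the training data, one random forest
    # per copy, predictions combined by majority vote.
    # Parameters are illustrative, not the authors' implementation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    from imblearn.over_sampling import SMOTE

    # Imbalanced three-class toy problem standing in for multilingual sentiment features.
    X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                               n_classes=3, weights=[0.8, 0.15, 0.05], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, test_size=0.3, random_state=0)

    n_members = 5  # number of synthetic datasets / ensemble members (assumed)
    members = []
    for seed in range(n_members):
        # Each member is trained on a differently seeded SMOTE resampling of the training set.
        X_res, y_res = SMOTE(random_state=seed).fit_resample(X_train, y_train)
        clf = RandomForestClassifier(n_estimators=100, random_state=seed)
        clf.fit(X_res, y_res)
        members.append(clf)

    # Combine the members' predictions by majority vote.
    votes = np.stack([m.predict(X_test) for m in members])
    y_pred = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
    print(f"Ensemble accuracy: {accuracy_score(y_test, y_pred):.4f}")

Because each member sees a different set of synthetic samples, noise introduced by any single SMOTE resampling tends to be averaged out in the vote, which is the intuition behind the ensemble design.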