Enhanced GANified-SMOTE with Latent Factor for Improved Classifier Performance on Imbalanced Datasets
Abstract
Introduction: Imbalanced datasets cause significant issues in classification tasks and can degrade model performance, most often by lowering predictive accuracy on minority classes. Beyond hurting overall performance, this risks missing crucial insights that inform decision-making.
Objectives: This study presents a novel methodology combining the Generative Adversarial Network-Based Synthetic Minority Oversampling Technique (GANified-SMOTE) with latent factor approaches to enhance classifier performance on imbalanced datasets.
Methods: We evaluate the effectiveness of this framework across various datasets, demonstrating its ability to generate high-quality synthetic samples that accurately reflect the underlying data distribution.
Results: Our experimental results show that the Enhanced GANified-SMOTE significantly improves accuracy when integrated with classifiers such as Random Forest (RF). Specifically, our method achieves an accuracy of 0.999971 on the Credit Card Fraud Detection task, along with near-perfect precision and recall.
Conclusions: These results underscore the potential of our approach to improve classification reliability and reduce false negatives in critical applications, addressing the limitations of traditional classification techniques in imbalanced contexts.
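To make the oversampling idea summarized in the Methods more concrete, the following is a minimal, hypothetical sketch, not the authors' GANified-SMOTE implementation: it replaces the GAN generator with plain SMOTE-style interpolation in which each synthetic minority sample is formed using a random latent interpolation factor, and then trains a Random Forest on the rebalanced data. The helper name `oversample_minority`, the toy dataset, and all hyperparameters are illustrative assumptions.

```python
# Sketch only: latent-factor SMOTE-style oversampling + Random Forest.
# This is NOT the paper's GANified-SMOTE; it illustrates the general pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

def oversample_minority(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic minority samples by interpolating each seed
    point toward one of its k nearest minority neighbours with a random
    latent factor z ~ U(0, 1)."""
    rng = rng or np.random.default_rng(0)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    seeds = rng.integers(0, len(X_min), size=n_new)
    synthetic = np.empty((n_new, X_min.shape[1]))
    for i, s in enumerate(seeds):
        neighbour = X_min[rng.choice(idx[s][1:])]  # skip the seed point itself
        z = rng.random()                           # latent interpolation factor
        synthetic[i] = X_min[s] + z * (neighbour - X_min[s])
    return synthetic

# Imbalanced toy data standing in for a fraud-detection-style task.
X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Generate enough synthetic minority samples to balance the training set.
X_min = X_tr[y_tr == 1]
n_needed = (y_tr == 0).sum() - len(X_min)
X_syn = oversample_minority(X_min, n_needed)

X_bal = np.vstack([X_tr, X_syn])
y_bal = np.concatenate([y_tr, np.ones(len(X_syn), dtype=int)])

clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=4))
```

In the paper's approach the synthetic samples come from a GAN generator guided by the latent factor rather than from nearest-neighbour interpolation; the sketch only shows where such samples enter the pipeline before classifier training and evaluation.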