Hybrid Deep Learning for Advanced Fake News Detection Using Explainable AI and FastText

Malatthi Sivasundaram, V. Prakasham, R. Rathika, P. Subhashini, P. Karthi, A. Priyadharshini

Abstract

The main goal of this project is to build a hybrid deep learning model for fake news detection that fuses XLNet, FastText, and a Convolutional Neural Network (CNN) with Explainable AI (XAI) techniques, in particular SHapley Additive exPlanations (SHAP), in order to enhance transparency and interpretability.


Introduction: The rapid spread of fake news in the digital age has become a significant issue that influences social stability, public opinion, and political outcomes. False information proliferates because social media platforms struggle to distinguish between authentic and fraudulent content. Despite their effectiveness, traditional fact-checking methods are time-consuming and unable to handle the volume of data generated daily. As a result, automated fake news detection systems built on advanced artificial intelligence have demonstrated impressive performance in text classification tasks such as identifying false news. However, it is challenging to understand how these models make decisions, because they operate as black-box systems. Explainable AI (XAI) techniques have been developed to improve interpretability; among them, the SHapley Additive exPlanations (SHAP) method offers insight into individual model predictions.


Objectives: The objective of this project is to develop a sophisticated fake news detection system that combines advanced natural language processing and machine learning techniques. By integrating XLNet for superior language understanding, FastText for efficient word representation, and Convolutional Neural Networks (CNNs) for robust feature extraction, the system aims to enhance detection accuracy. Additionally, incorporating Explainable AI techniques, particularly SHAP, will provide clear and interpretable explanations of the model's predictions. This dual focus on performance and transparency seeks to create a reliable tool for identifying misinformation, ultimately fostering greater public trust in digital information sources.
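
To illustrate how SHAP explanations could be attached to such a classifier, the sketch below uses the shap library's support for Hugging Face text-classification pipelines. The checkpoint path and example headline are placeholders introduced for illustration, not artifacts of this study.

```python
# Minimal sketch of generating SHAP explanations for a fine-tuned text
# classifier, assuming the shap library and a Hugging Face pipeline.
# "path/to/finetuned-model" is a placeholder for a trained checkpoint.
import shap
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="path/to/finetuned-model",
    top_k=None,  # return scores for every class, as SHAP expects
)

# shap.Explainer accepts transformers pipelines directly and attributes
# each class score to the individual input tokens.
explainer = shap.Explainer(clf)
shap_values = explainer(["Breaking: miracle cure discovered, doctors stunned"])

# Token-level highlight of how each word pushed the prediction
# toward the "fake" or "real" class.
shap.plots.text(shap_values[0])
```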


Methods: The study followed a hybrid deep learning methodology combining XLNet, FastText, and a Convolutional Neural Network (CNN), with SHapley Additive exPlanations (SHAP) serving as the Explainable AI (XAI) component. Group 1 consisted of baseline transformer models (RoBERTa and BERT); although these methods are effective, they are not transparent enough for users to understand and trust their predictions. Group 2 used the proposed hybrid model, which pairs Explainable AI with FastText.
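
For concreteness, a minimal sketch of such a hybrid architecture is given below, assuming PyTorch and the Hugging Face transformers XLNet implementation. The class and parameter names (HybridFakeNewsModel, fasttext_dim, and so on) are illustrative rather than taken from the study, and the per-token FastText vectors are assumed to be precomputed (e.g., with gensim) and passed in alongside the token IDs.

```python
# Illustrative sketch of an XLNet + FastText + CNN hybrid classifier.
import torch
import torch.nn as nn
from transformers import XLNetModel

class HybridFakeNewsModel(nn.Module):
    def __init__(self, fasttext_dim=300, num_filters=128, kernel_sizes=(3, 4, 5)):
        super().__init__()
        # Contextual encoder: pretrained XLNet (hidden size 768 for the base model).
        self.xlnet = XLNetModel.from_pretrained("xlnet-base-cased")
        in_channels = self.xlnet.config.hidden_size + fasttext_dim
        # Parallel 1-D convolutions over the concatenated token embeddings.
        self.convs = nn.ModuleList(
            nn.Conv1d(in_channels, num_filters, k) for k in kernel_sizes
        )
        # Binary head: real vs. fake.
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), 2)

    def forward(self, input_ids, attention_mask, fasttext_embeds):
        # XLNet token representations: (batch, seq_len, 768).
        ctx = self.xlnet(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Concatenate per-token FastText vectors: (batch, seq_len, 768 + 300),
        # then move channels first for Conv1d: (batch, channels, seq_len).
        x = torch.cat([ctx, fasttext_embeds], dim=-1).transpose(1, 2)
        # Max-pool each convolution's feature map over the sequence dimension.
        pooled = [conv(x).relu().max(dim=-1).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=-1))
```

The design choice sketched here, concatenating contextual XLNet states with static FastText vectors before the CNN, is one straightforward way to combine the three components the abstract names; the paper may fuse them differently.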


Results: The hybrid model's accuracy of 92.3% represents a 5.6% relative improvement over the baseline accuracy of 87.4%, showing that the hybrid approach distinguishes more reliably between real and fake news articles. The hybrid model is also more effective at reducing false positives, as evidenced by its precision of 90.5%, which is 6.2% higher than the baseline model's 85.2%. Similarly, recall rises from 86.1% in the baseline model to 91.8% in the hybrid model, a 6.6% improvement, indicating that the hybrid model misses fewer fake news articles. Finally, the F1-score, which balances precision and recall, increased from 85.6% to 91.1%, a 6.4% improvement.
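
Note that the reported improvements are relative gains rather than absolute percentage-point differences; for accuracy, (92.3 − 87.4) / 87.4 ≈ 5.6%. A small sketch of how these metrics would be computed with scikit-learn, using toy label arrays introduced purely for illustration:

```python
# Sketch of computing the reported evaluation metrics with scikit-learn,
# assuming y_true holds gold labels (1 = fake) and y_pred the model's
# predictions on a held-out test set. The arrays below are toy data.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))  # penalizes false positives
print("recall   :", recall_score(y_true, y_pred))     # penalizes false negatives
print("f1       :", f1_score(y_true, y_pred))

# The paper's improvements are relative, e.g. for accuracy:
print((92.3 - 87.4) / 87.4 * 100)  # ≈ 5.6 (% relative gain over baseline)
```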


Conclusions: By combining XLNet, FastText, a CNN, and Explainable AI techniques, the proposed hybrid deep learning model significantly increases the accuracy of fake news detection while maintaining interpretability, providing a robust and transparent framework for combating misinformation effectively.
