A Hybrid Model for Detecting Fake Profiles in Online Social Networks: Enhancing User Trust
Abstract
The proliferation of fake profiles in online social networks (OSNs) has emerged as a critical challenge, undermining user trust and security. This paper presents a novel hybrid model designed to detect fake profiles in OSNs by combining advanced machine learning techniques to improve detection accuracy and maintain user trust. Our approach integrates Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, both known for their strength in processing sequential data, into a hybrid model that leverages the advantages of each method for more effective detection. Using a dataset collected from Twitter, we conducted a comparative analysis of several machine learning models, including Naïve Bayes, Random Forest, AdaBoost, and Support Vector Machines (SVM), to assess their performance in identifying fake profiles. In addition, the recurrent neural networks LSTM and GRU were tested individually before being combined into the proposed hybrid model. Each model's performance was evaluated in terms of precision, recall, F1 score, and accuracy. Our findings demonstrate that the hybrid LSTM-GRU model outperforms both the traditional machine learning algorithms and the individual recurrent models, achieving higher detection accuracy and fewer false positives. By harnessing the complementary strengths of LSTM's ability to capture long-term dependencies and GRU's computational efficiency, the hybrid model offers an improved solution for fake profile detection. This research provides a comprehensive framework for identifying fake profiles in OSNs, aiming to preserve the integrity of user interactions and strengthen trust in online platforms. The proposed model has significant implications for OSN security, particularly in mitigating the rising threat of fraudulent accounts.
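
To make the hybrid architecture described above concrete, the following is a minimal sketch of one plausible way to stack LSTM and GRU layers for binary profile classification, assuming TensorFlow/Keras. The vocabulary size, sequence length, layer widths, and metrics shown here are illustrative assumptions for exposition and do not reflect the authors' actual configuration or preprocessing pipeline.

# Minimal sketch of a hybrid LSTM-GRU classifier for profile-derived text features.
# All sizes below are illustrative placeholders, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed tokenizer vocabulary size
EMBED_DIM = 128      # assumed embedding dimension

model = models.Sequential([
    # Map token IDs to dense vectors
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM),
    # LSTM layer captures long-term dependencies in the sequence
    layers.LSTM(64, return_sequences=True),
    # GRU layer adds a lighter-weight recurrent pass over the LSTM outputs
    layers.GRU(32),
    layers.Dropout(0.5),
    # Binary output: fake (1) vs. genuine (0) profile
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)
model.summary()

In this sketch the LSTM layer returns its full output sequence so the GRU layer can refine it, which is one common way to combine the two recurrent cell types; the paper may instead use parallel branches or a different fusion strategy.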