LifeGuardAI: A Privacy-Preserving Multimodal Framework for Suicide and Self-Harm Risk Detection on Social Media Using Blockchain and Hybrid Deep Learning
Abstract
Introduction: Expressions of suicide and self-harm risk are increasingly prevalent on social media, making early detection and intervention a critical societal need. However, analyzing sensitive user-generated content for such risks raises significant challenges, particularly around privacy protection, data heterogeneity, and the need for effective multimodal and multilingual understanding.
Objectives: This work aims to develop a collaborative, privacy-preserving system, LifeGuardAI, capable of accurately detecting and preventing suicide and self-harm risks across multiple social media platforms while strictly maintaining user confidentiality.
Methods: LifeGuardAI leverages personalized federated learning to handle heterogeneous, non-IID user data distributed across platforms. Multimodal and multilingual content is processed with advanced deep learning models, including BLIP-2 and BERT. To preserve privacy, model updates are encrypted with homomorphic encryption throughout training, and aggregation is performed through blockchain-based mechanisms for added security and data integrity. The system performs continuous, day-by-day temporal analysis of each user's posts across all connected social networks, yielding individualized and comprehensive risk assessments. Upon identifying a potential risk, LifeGuardAI automatically alerts trusted contacts to enable timely intervention.
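As a concrete illustration of the multimodal encoding step, the sketch below pairs BLIP-2 vision features with multilingual BERT text features. It assumes the Hugging Face `transformers` library and public checkpoints (`Salesforce/blip2-opt-2.7b`, `bert-base-multilingual-cased`); the pooling, concatenation, and linear risk head are illustrative choices, not the authors' exact architecture.

```python
# A minimal sketch of multimodal post encoding, assuming Hugging Face
# `transformers` and public checkpoints; pooling/fusion are illustrative.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, Blip2Model, Blip2Processor

blip_proc = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
blip = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b")
bert_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")

@torch.no_grad()
def encode_post(image: Image.Image, text: str) -> torch.Tensor:
    """Fuse BLIP-2 image features with multilingual BERT text features."""
    pixels = blip_proc(images=image, return_tensors="pt")
    img = blip.get_image_features(**pixels).last_hidden_state.mean(dim=1)  # (1, 1408)
    toks = bert_tok(text, return_tensors="pt", truncation=True, max_length=512)
    txt = bert(**toks).last_hidden_state[:, 0]  # (1, 768), [CLS] token
    return torch.cat([img, txt], dim=-1)        # (1, 2176) fused embedding

# Hypothetical lightweight risk head on the fused embedding (risk / no risk).
risk_head = torch.nn.Linear(1408 + 768, 2)
```

The privacy mechanism can be sketched similarly. The example below encrypts flattened client updates under the CKKS scheme using the TenSEAL library and averages them without decryption; the blockchain layer is reduced to hashing the encrypted aggregate, a hypothetical stand-in for the ledger-based aggregation the system describes.

```python
# A minimal sketch of homomorphically encrypted federated aggregation,
# assuming the TenSEAL library (CKKS scheme). Hashing the ciphertext is a
# hypothetical stand-in for recording the update on a blockchain.
import hashlib
import tenseal as ts

# Shared CKKS context; in a real deployment the secret key stays with clients.
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2 ** 40

def encrypt_update(weights: list[float]) -> ts.CKKSVector:
    """Client side: encrypt a flattened model update."""
    return ts.ckks_vector(ctx, weights)

def aggregate(updates: list[ts.CKKSVector]) -> ts.CKKSVector:
    """Aggregator side: average ciphertexts without ever decrypting them."""
    total = updates[0]
    for u in updates[1:]:
        total = total + u
    return total * (1.0 / len(updates))

client_updates = [[0.10, -0.20, 0.30], [0.05, 0.00, -0.10]]  # toy gradients
enc_avg = aggregate([encrypt_update(w) for w in client_updates])
ledger_entry = hashlib.sha256(enc_avg.serialize()).hexdigest()  # integrity digest
print(ledger_entry[:16], enc_avg.decrypt())  # decryption shown for demo only
```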
Results: In extensive evaluations, LifeGuardAI achieved a detection accuracy of 98.8%, outperforming conventional and state-of-the-art models such as CNN (80.3%), Decision Tree (90.78%), SVM (88.96%), Transformer (97.4%), BiLSTM-CNN (69.83%), ResNet50 2D (97.59%), and VGG16 2D (98.73%). These results highlight the system's superior performance and its robustness across heterogeneous, real-world data.
Conclusions: LifeGuardAI demonstrates that it is possible to deliver highly accurate, personalized, and privacy-preserving detection of suicide and self-harm risks across diverse social media platforms. Its collaborative architecture and advanced privacy mechanisms make it a promising solution for proactive mental health intervention while upholding the highest standards of user security and ethical AI deployment.