Federated Learning Driven LSTM Model for Privacy-Preserving AI Framework Over IoT-Enabled Cloud Architectures

Premkumar Ganesan

Abstract

The rapid proliferation of IoT devices in cloud-integrated environments has raised significant concerns about data privacy and security. Traditional AI models require centralized data aggregation, which poses risks of data breaches and regulatory non-compliance. To address these challenges, this study proposes Federated LSTM, a novel privacy-preserving deep learning framework that combines Federated Learning (FL) with Long Short-Term Memory (LSTM) networks for distributed IoT environments. Federated LSTM enables edge devices to collaboratively train AI models without sharing raw data, supporting compliance with privacy standards such as GDPR and HIPAA. The proposed approach optimizes communication efficiency and model convergence through adaptive weight aggregation, reducing network overhead while maintaining high predictive accuracy. Performance evaluations demonstrate that Federated LSTM outperforms traditional centralized deep learning models in anomaly detection, predictive maintenance, and real-time analytics. The experimental results show improved privacy preservation, reduced latency, and better scalability in cloud-based IoT networks. Furthermore, the proposed method enhances model robustness by mitigating adversarial attacks and improving generalization across heterogeneous IoT devices. This research contributes to the development of secure, intelligent, and privacy-aware AI frameworks for next-generation IoT-cloud ecosystems, making them more resilient and efficient.
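The abstract refers to collaborative training via weight aggregation without sharing raw data. The paper's adaptive aggregation scheme is not detailed here, but the baseline such schemes build on, sample-size-weighted federated averaging (FedAvg) of per-client model weights, can be sketched as follows; the function name, the list-of-tensors weight format, and the two-client example are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Sample-size-weighted average of per-client model weights (FedAvg-style).

    client_weights: one list of np.ndarray weight tensors per client,
                    e.g. an LSTM's kernel/recurrent/bias matrices.
    client_sizes:   number of local training samples per client; clients
                    with more data contribute more to the global model.
    Only these weight tensors leave the device -- raw data stays local.
    """
    total = float(sum(client_sizes))
    n_tensors = len(client_weights[0])
    aggregated = []
    for t in range(n_tensors):
        # Weighted sum of the t-th tensor across all clients.
        acc = sum(w[t] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        aggregated.append(acc)
    return aggregated

# Two hypothetical edge devices, each holding one 1x2 weight matrix.
c1 = [np.array([[1.0, 2.0]])]
c2 = [np.array([[3.0, 4.0]])]
global_w = fed_avg([c1, c2], client_sizes=[1, 3])  # c2 weighted 3x heavier
```

In a full round, the server would broadcast `global_w` back to the devices, each device would run a few local LSTM training steps, and the cycle would repeat; an adaptive variant as described in the abstract would additionally tune the per-client weighting to reduce communication overhead.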
