An Adaptive Deep Reinforcement Learning Framework for Optimizing Dynamic Resource Allocation in Federated Cloud Computing Environments

Shubhangi Kharche, Devika Rani Roy, Aarti Bakshi, Amarja Adgaonkar

Abstract

Federated cloud computing demands dynamic resource allocation, which is challenging because resources are heterogeneous, workloads vary in demand, and decisions must be made in real time. This research develops an adaptive deep reinforcement learning (DRL) framework for optimizing resource distribution in such complex infrastructure. The framework applies DRL to automate resource allocation across federated clouds, improving operational efficiency, reducing delays, and enhancing scalability. Adaptive learning methods within the proposed system allow it to respond to workload variations and resource changes, making it well suited to large distributed cloud environments. Simulation tests show that the framework outperforms traditional methods such as static scheduling and heuristic-based algorithms, achieving 92.4% resource utilization, an 85.0-second task completion time, and 89.3 kWh energy efficiency. The evaluation results demonstrate DRL's ability to tackle complex federated cloud resource administration challenges, laying a foundation for more intelligent cloud software systems.
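To make the allocation idea concrete, the following is a minimal sketch of a reinforcement-learning agent assigning tasks to federated clouds. It uses tabular Q-learning as a simplified stand-in for the paper's deep RL agent; the number of clouds, their capacities, the load-based reward, and all hyperparameters are illustrative assumptions, not values from the article.

```python
import random

random.seed(0)

N_CLOUDS = 3
CAPACITY = [4, 6, 8]          # hypothetical per-cloud task capacity
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def state(loads):
    # Discretize each cloud's utilization into low/medium/high buckets.
    return tuple(min(2, 3 * l // c) for l, c in zip(loads, CAPACITY))

Q = {}  # (state, action) -> estimated value

def choose(s):
    # Epsilon-greedy: explore occasionally, otherwise pick the best-valued cloud.
    if random.random() < EPS:
        return random.randrange(N_CLOUDS)
    return max(range(N_CLOUDS), key=lambda a: Q.get((s, a), 0.0))

loads = [0] * N_CLOUDS
for step in range(5000):
    s = state(loads)
    a = choose(s)
    # Reward penalizes placing work on heavily loaded clouds
    # (a crude proxy for task-completion latency).
    reward = -loads[a] / CAPACITY[a]
    loads[a] += 1
    if step % 5 == 4:          # tasks periodically finish, freeing capacity
        loads = [max(0, l - 2) for l in loads]
    s2 = state(loads)
    best_next = max(Q.get((s2, a2), 0.0) for a2 in range(N_CLOUDS))
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (
        reward + GAMMA * best_next - Q.get((s, a), 0.0)
    )

print(f"learned Q entries: {len(Q)}, final loads: {loads}")
```

In the article's framework a deep network would replace the Q table so the agent can generalize across continuous workload and resource states, but the interaction loop (observe state, allocate, receive reward, update) is the same.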
