Performance Evaluation and Optimization of Dynamic Resource Allocation and Task Offloading for Mobile Edge Computing using Deep Reinforcement Learning
Abstract
Mobile Edge Computing (MEC) is a transformative paradigm that places computational resources closer to end users, enabling resource-efficient, low-latency applications. Because MEC systems support a wide range of computationally intensive and time-sensitive applications, dynamic resource allocation and task offloading are essential for guaranteeing optimal performance. However, several obstacles, including network congestion, limited processing capacity, and fluctuating user demands, make it difficult to manage these resources efficiently in real time. In this regard, the intricate trade-offs among latency, energy efficiency, computational resources, and quality of service (QoS) can be effectively addressed by multi-objective optimization (MOO) and deep reinforcement learning (DRL). This research investigates the use of MOO and DRL to optimize resource allocation and task offloading in MEC systems. In particular, we propose a hybrid framework that uses multi-objective optimization to balance conflicting objectives and DRL for adaptive, real-time decision-making. The study presents a thorough model, simulations, and findings that demonstrate the effectiveness of our approach in enhancing system performance across a variety of scenarios. This work makes two contributions: first, it presents a new method for dynamic resource management and task offloading in MEC systems; second, it demonstrates the feasibility and potential advantages of combining MOO and DRL in practical applications. Anticipated outcomes include improved system performance, energy efficiency, and user satisfaction, representing a major step toward efficient, scalable MEC environments.
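The core idea in the abstract, a reinforcement-learning agent making offloading decisions against a scalarized multi-objective cost, can be illustrated with a minimal sketch. This is not the paper's framework: it is a toy tabular Q-learning agent with a hypothetical cost model, where the state is a discretized channel-quality level, the action is a binary offload decision, and the reward combines latency and energy via fixed scalarization weights (all constants are assumptions for illustration).

```python
import random

# Toy sketch (all parameters hypothetical): tabular Q-learning for a binary
# offloading decision with a scalarized multi-objective (latency + energy) reward.

ACTIONS = [0, 1]          # 0 = execute locally, 1 = offload to the MEC server
STATES = range(4)         # discretized channel-quality levels (assumption)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
W_LATENCY, W_ENERGY = 0.6, 0.4   # scalarization weights (assumption)

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Toy cost model: offloading is faster when the channel is good but costs
    # transmission energy; local execution is slow but cheap in energy.
    if action == 1:
        latency, energy = 1.0 / (state + 1), 0.5
    else:
        latency, energy = 1.2, 0.1
    return -(W_LATENCY * latency + W_ENERGY * energy)

def step(state):
    # Epsilon-greedy action selection followed by a standard Q-learning update.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(list(STATES))  # channel varies randomly
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
    return next_state

random.seed(0)
s = 0
for _ in range(5000):
    s = step(s)

# Greedy policy learned from the Q-table: offload when the channel is good
# enough that transmission energy is outweighed by the latency saving.
policy = {st: max(ACTIONS, key=lambda a: q[(st, a)]) for st in STATES}
print(policy)
```

In a real MEC system the state would include queue lengths, server load, and task sizes, and the tabular agent would be replaced by a deep network; the sketch only shows how a weighted multi-objective cost plugs into the DRL reward signal.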