Next-generation offloading using hybrid deep learning network for adaptive mobile edge computing

International Journal of Electrical and Computer Engineering

Abstract

Offloading computation-intensive, time-sensitive mobile application tasks to distant cloud-based data centers has become a popular way to work around the limitations of mobile devices (MDs). However, deep reinforcement learning (DRL) techniques for offloading in mobile edge computing (MEC) environments adapt poorly to new situations because of their low sample efficiency in each new context. To address these issues, a novel computational offloading in mobile edge computing (COOL-MEC) algorithm is proposed that combines the benefits of attention modules and bi-directional long short-term memory. The algorithm improves server resource utilization by lowering a combined cost of processing latency, processing energy consumption, and task throughput for latency-sensitive tasks. Experimental results show that the proposed COOL-MEC algorithm minimizes energy consumption: compared with the existing deep convolutional attention reinforcement learning with adaptive reward policy (DCARL-ARP) and DRL techniques, the energy consumption of COOL-MEC is reduced by 0.06% and 0.08%, respectively, and the average time per channel used for execution decreases by 0.051% and 0.054%, respectively.
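The abstract describes an offloading policy that trades off processing latency, energy consumption, and task throughput. A minimal sketch of such a combined cost, used to compare local execution against edge offloading, might look like the following. The weights and input values are illustrative assumptions, not taken from the paper.

```python
def offloading_cost(latency_s, energy_j, throughput_tps,
                    w_latency=0.4, w_energy=0.4, w_throughput=0.2):
    """Illustrative weighted cost combining processing latency, processing
    energy consumption, and task throughput (higher throughput lowers cost).
    The weights are hypothetical, not the paper's actual formulation."""
    return w_latency * latency_s + w_energy * energy_j - w_throughput * throughput_tps

# Compare a local execution profile against an edge-server profile and
# pick the placement with the lower combined cost.
local = offloading_cost(latency_s=0.8, energy_j=2.0, throughput_tps=5.0)
edge = offloading_cost(latency_s=0.3, energy_j=0.9, throughput_tps=8.0)
decision = "offload" if edge < local else "local"
print(decision)  # → offload
```

In COOL-MEC such a decision would be produced by the learned attention/Bi-LSTM network rather than a fixed linear cost, but the sketch shows the kind of trade-off the reward signal encodes.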
