Optimal Resource Management in Fog-Cloud Environments via A2C Reinforcement Learning: Dynamic Task Scheduling and Task Result Caching

Document Type: Research Article

Authors

Faculty of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran

Abstract

To manage tasks effectively in fog-cloud environments, this paper proposes a framework built on a two-agent architecture: a task scheduling agent selects the execution node and allocates resources, while a separate caching agent manages the storage of task results. In each decision cycle, the resource manager first checks whether a valid, fresh result already exists in the cache; if so, the cached result is returned immediately. Otherwise, the scheduling agent evaluates current conditions, such as network load, nodes' computational capacity, and user proximity, and assigns the task to the most appropriate node. Once execution completes, the caching agent independently selects a node on which to store the result, which may differ from the execution node. Both agents are trained with advantage actor-critic (A2C) reinforcement learning. Through extensive simulations and comparisons with state-of-the-art methods (e.g., A3C-R2N2, DDQN, LR-MMT, and LRR-MMT), we demonstrate significant improvements in response latency, computational efficiency, and inter-node communication management. By decoupling execution scheduling from result storage and employing history-based caching that tracks both task request frequencies and result recency, the framework adapts effectively to variable workloads and dynamic network conditions, optimizes resource utilization, and delivers scalable, low-latency performance for real-time service delivery in fog-cloud environments.
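To make the decision cycle concrete, the following is a minimal Python sketch, not the paper's implementation: it assumes a TTL-based freshness test and eviction by the lowest (frequency, recency) pair, and all identifiers (HistoryBasedCache, CacheEntry, handle_request, pick_exec_node, pick_store_node) are hypothetical placeholders for the trained A2C policies and system interfaces.

```python
import time
from dataclasses import dataclass

@dataclass
class CacheEntry:
    result: object
    stored_at: float   # when the result was cached (recency history)
    node: str = ""     # storage node chosen by the caching agent
    hits: int = 0      # request-frequency history

class HistoryBasedCache:
    """Tracks task request frequency and result recency; when full,
    evicts the entry with the lowest (hits, stored_at) pair."""
    def __init__(self, ttl_s: float = 60.0, capacity: int = 128):
        self.ttl_s, self.capacity = ttl_s, capacity
        self.entries: dict[str, CacheEntry] = {}

    def lookup(self, task_id: str):
        entry = self.entries.get(task_id)
        if entry is None:
            return None
        if time.time() - entry.stored_at > self.ttl_s:  # stale: drop it
            del self.entries[task_id]
            return None
        entry.hits += 1                                  # record the hit
        return entry.result

    def store(self, task_id: str, result, node_id: str) -> None:
        if len(self.entries) >= self.capacity:
            victim = min(self.entries,
                         key=lambda k: (self.entries[k].hits,
                                        self.entries[k].stored_at))
            del self.entries[victim]
        self.entries[task_id] = CacheEntry(result, time.time(), node_id)

def handle_request(task_id, run_task, cache, pick_exec_node, pick_store_node):
    """One decision cycle: serve a fresh cached result if one exists;
    otherwise schedule, execute, and let the caching agent place the result."""
    cached = cache.lookup(task_id)
    if cached is not None:
        return cached
    exec_node = pick_exec_node(task_id)     # scheduling agent's action
    result = run_task(task_id, exec_node)
    store_node = pick_store_node(task_id)   # caching agent's action (may differ)
    cache.store(task_id, result, store_node)
    return result
```

In use, pick_exec_node and pick_store_node would wrap the two trained A2C policies, each mapping the observed system state (network load, node capacity, user proximity) to a node choice; keeping them as separate callables mirrors the decoupling of scheduling from storage described above.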
