To manage tasks effectively in fog-cloud environments, this paper proposes a framework built on a two-agent architecture: a task-scheduling agent selects the execution node and allocates resources, while a separate agent manages the caching of results. In each decision cycle, the resource manager first checks whether a valid, fresh result already exists in the cache; if so, the cached result is returned immediately. Otherwise, the scheduling agent evaluates current conditions, such as network load, node computational capacity, and user proximity, and assigns the task to the most suitable node. After execution completes, an independent storage agent is selected to store the result, potentially on a node distinct from the execution node. Extensive simulations and comparisons with state-of-the-art methods (A3C-R2N2, DDQN, LR-MMT, and LRR-MMT) demonstrate significant improvements in response latency, computational efficiency, and inter-node communication management. By decoupling execution scheduling from result storage across two distinct agents and implementing history-based caching that tracks both task request frequencies and result recency, the framework adapts effectively to variable workloads and dynamic network conditions. These two innovations optimize resource utilization and enhance system responsiveness, delivering scalable, low-latency performance and a robust solution for real-time service delivery in fog-cloud environments.
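The decision cycle described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the cache scoring rule, the TTL freshness check, and the greedy least-loaded node selection (standing in for the learned A2C policy) are all assumptions introduced for clarity, and every name below is hypothetical.

```python
import time

class ResultCache:
    """History-based result cache tracking request frequency and recency.
    Illustrative sketch; the eviction score is an assumption, not the
    paper's exact formulation."""

    def __init__(self, capacity=4, ttl=60.0):
        self.capacity = capacity
        self.ttl = ttl           # freshness window in seconds (assumed)
        self.entries = {}        # task_id -> (result, stored_at)
        self.freq = {}           # task_id -> cumulative request count

    def get(self, task_id, now=None):
        now = time.time() if now is None else now
        self.freq[task_id] = self.freq.get(task_id, 0) + 1
        entry = self.entries.get(task_id)
        if entry and now - entry[1] <= self.ttl:  # valid, fresh result
            return entry[0]
        return None

    def put(self, task_id, result, now=None):
        now = time.time() if now is None else now
        if task_id not in self.entries and len(self.entries) >= self.capacity:
            # Evict the entry with the lowest (frequency, recency) score.
            victim = min(self.entries,
                         key=lambda t: (self.freq.get(t, 0), self.entries[t][1]))
            del self.entries[victim]
        self.entries[task_id] = (result, now)

def schedule(task_id, nodes, cache, execute):
    """One decision cycle: return a cached result if fresh; otherwise
    dispatch to the least-loaded node (a greedy surrogate for the
    learned scheduling policy) and cache the new result."""
    hit = cache.get(task_id)
    if hit is not None:
        return hit, None                        # served from cache, no node used
    node = min(nodes, key=lambda n: n["load"])  # policy surrogate
    result = execute(task_id, node)
    cache.put(task_id, result)                  # storage agent step (simplified)
    return result, node["name"]
```

In the sketch, the storage step is collapsed into `cache.put`; in the proposed framework the storage agent may choose a node different from the execution node, which this simplified version does not model.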
Hassan Nataj Solhdar, M. and Esnaashari, M. M. (2025). Optimal Resource Management in Fog-Cloud Environments via A2C Reinforcement Learning: Dynamic Task Scheduling and Task Result Caching. AUT Journal of Electrical Engineering, 57(3), 589-610. doi: 10.22060/eej.2025.24181.5657