Title:
Efficient Multi-Agent Optimized Reinforcement Learning Algorithm-Based Task Scheduling in Fog-Cloud Environment.
Authors:
Sarumathi, S.1 (AUTHOR), Vijayalakshmi, K.1 (AUTHOR)
Source:
IETE Journal of Research. Oct2025, p1-14. 14p. 14 Illustrations, 5 Charts.
Database:
Academic Search Index

Fog-cloud computing is crucial for latency-critical smart Internet of Things (IoT) applications, combining heterogeneous cloud and mobile edge resources to optimise task scheduling. However, task scheduling in fog-cloud environments poses major obstacles: stochastic behaviour, hierarchical networks, resource heterogeneity, device mobility and varying resource capabilities. These challenges also include optimising multiple conflicting objectives, such as minimising energy consumption, computational cost, makespan and latency, while ensuring dependability and scalability. Traditional task-scheduling approaches often fail to address the dynamic nature of fog-cloud environments, leading to inefficient resource utilisation and reduced system performance. Therefore, this paper proposes an efficient task-scheduling approach for fog-cloud environments. Initially, IoT sensors collect user tasks, which are submitted to the fog-cloud environment for scheduling. The task scheduler gathers task and resource information from the task manager and the resource manager. During the scheduling process, tasks are assigned to resources using the Optimized Reinforcement Learning (ORL) algorithm, which combines Q-learning with the Enhanced Pufferfish Optimisation (EPuO) algorithm. The EPuO algorithm improves the decision-making capability of Q-learning by selecting its best action sequences. To attain the best action state, the proposed EPuO algorithm uses an objective function with two parameters: delay and energy consumption. The proposed ORL algorithm retains only the Q-value associated with each state's selected action, thereby saving storage space. The proposed method is implemented in Python. Its effectiveness is assessed using various metrics and compared with other approaches. [ABSTRACT FROM AUTHOR]
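The abstract describes a Q-learning scheduler whose reward reflects the two stated objectives, delay and energy consumption. The sketch below is a minimal illustration of that idea only: the EPuO action-selection step is not specified in the abstract, so it is approximated here by a plain epsilon-greedy policy, and all node costs and parameter values are hypothetical.

```python
import random

class QLearningScheduler:
    """Minimal tabular Q-learning sketch for assigning tasks to fog/cloud
    nodes. The reward combines the paper's two objectives (delay and
    energy); EPuO-guided action selection is approximated here by a
    simple epsilon-greedy policy, since the abstract gives no update
    equations for it."""

    def __init__(self, n_nodes, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
        self.n_nodes = n_nodes
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}                     # sparse table: only visited
        self.rng = random.Random(seed)  # (state, action) pairs stored

    def q_value(self, state, action):
        return self.q.get((state, action), 0.0)

    def choose(self, state):
        # Explore occasionally; otherwise pick the node with the best Q.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_nodes)
        return max(range(self.n_nodes), key=lambda a: self.q_value(state, a))

    def update(self, state, action, delay, energy, next_state,
               w_delay=0.5, w_energy=0.5):
        # Negative weighted cost as reward: lower delay/energy is better.
        reward = -(w_delay * delay + w_energy * energy)
        best_next = max(self.q_value(next_state, a)
                        for a in range(self.n_nodes))
        old = self.q_value(state, action)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
        return reward


# Toy usage: node 0 is faster and cheaper than node 1, so after training
# the scheduler should prefer node 0 for this task type.
sched = QLearningScheduler(n_nodes=2, seed=1)
delays, energies = [1.0, 5.0], [1.0, 4.0]   # hypothetical per-node costs
state = 0                                    # a single task-type state
for _ in range(200):
    a = sched.choose(state)
    sched.update(state, a, delays[a], energies[a], state)
```

Note that storing the Q-table as a dictionary keyed only on visited state-action pairs mirrors, in spirit, the abstract's point about retaining just the Q-values actually needed rather than a full dense table.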