LSTMAE-PBWO: A Hybrid Learning and Optimization Approach for Efficient Cloud Load Balancing.
Cloud Computing (CC) is widely regarded as a successful paradigm in the Information Technology (IT) industry, offering advantages to both customers and service providers. Despite these advantages, CC suffers from several issues, one of which is inefficient resource provisioning for dynamic workloads, which makes optimal load balancing difficult. Accurate and optimal load balancing mechanisms can enable effective resource provisioning in CC. Nevertheless, owing to high-dimensional and highly variable cloud features such as CPU usage, memory usage, timestamps, and CPU cores, balancing load optimally and accurately is both laborious and cumbersome. Notably, combining CC with data analytics based on Deep Learning algorithms helps interpret and process incoming tasks so that client requirements are met optimally, substantially enhancing accuracy. To address issues such as local optima and premature convergence, an optimal cloud load balancing data analytics method using a hybrid Long Short-Term Memory Auto-Encoder and Polynomial Beluga Whale Optimization (LSTMAE-PBWO) is proposed. The hybrid LSTMAE-PBWO method combines the LSTMAE and PBWO techniques to predict and fine-tune the load via PBWO, reducing makespan time and energy consumption while maintaining high accuracy and throughput for continuous load assessment in a CC environment. First, the LSTMAE-PBWO method uses a Long Short-Term Memory with an Auto-Encoder (LSTMAE) model for load prediction. Then, to avoid local optima and premature convergence, the Polynomial Beluga Whale Optimization model is used for optimal hyperparameter selection, which boosts the overall data-analytic prediction results for load balancing.
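The two-phase idea described above (a population-based optimizer that balances exploration against exploitation to select hyperparameters for the load predictor) can be sketched as follows. This is a minimal, generic swarm-style search in the spirit of Beluga Whale Optimization, not the authors' PBWO: the update rules, decay schedule, objective function, and hyperparameter bounds (`learning_rate`, `hidden_units`) are all illustrative assumptions standing in for a real validation-loss evaluation of the LSTMAE model.

```python
import random

def pbwo_like_search(objective, bounds, pop_size=10, iters=50, seed=0):
    """Simplified population-based search inspired by Beluga Whale
    Optimization (illustrative sketch, not the paper's PBWO): early
    iterations explore by mixing with random peers; later iterations
    exploit by stepping toward the best solution found so far."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for t in range(iters):
        balance = 1.0 - t / iters  # decays: exploration -> exploitation
        new_pop = []
        for x in pop:
            if rng.random() < balance:
                # exploration: move toward a randomly chosen peer
                peer = pop[rng.randrange(pop_size)]
                cand = [xi + rng.uniform(-1, 1) * (pi - xi)
                        for xi, pi in zip(x, peer)]
            else:
                # exploitation: move toward the best-so-far solution
                cand = [xi + rng.random() * (bi - xi)
                        for xi, bi in zip(x, best)]
            # clamp each coordinate back into its bound
            cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
            # greedy selection: keep the better of candidate and current
            new_pop.append(cand if objective(cand) < objective(x) else x)
        pop = new_pop
        best = min(pop + [best], key=objective)
    return best

# Hypothetical surrogate for the LSTMAE validation loss over
# (learning_rate, hidden_units); a real run would train and score the model.
obj = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 64) ** 2 / 1e4
bounds = [(1e-4, 0.1), (8, 256)]
best = pbwo_like_search(obj, bounds)
```

In the paper's setting, `objective` would train the LSTM autoencoder with the candidate hyperparameters and return a loss that reflects prediction error, so the search minimizes makespan and energy indirectly through better load forecasts.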
Simulations are performed to validate the proposed LSTMAE-PBWO method using a cloud simulator and the Java programming language, in terms of makespan time, energy consumption, accuracy, associated overhead, and throughput. On the GWA-Bitbrains dataset, the LSTMAE-PBWO method achieves an average makespan of 68 ms, energy consumption of 69 J, throughput of 989 bits/s, and overhead of 0.84. The result analysis shows that the proposed LSTMAE-PBWO method outperforms existing methods such as Variational Mode Decomposition with Modified Particle Swarm Optimization (VMD-MPSO), Incremental Pattern Characterization Learning (IPCL), Simulated Annealing with Particle Swarm Optimization and Double Q-learning (SAPSOQ), LSTM, and the DForest framework. [ABSTRACT FROM AUTHOR]
Copyright of International Journal of Intelligent Engineering & Systems is the property of Intelligent Networks & Systems Society and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)