Result: Indoor navigation for mobile robots based on deep reinforcement learning with convolutional neural network.

Title:
Indoor navigation for mobile robots based on deep reinforcement learning with convolutional neural network.
Source:
International Journal of Electrical & Computer Engineering (2088-8708); Jun2025, Vol. 15 Issue 3, p2748-2757, 10p
Database:
Complementary Index

Further Information

The mobile robot is an intelligent device that can accomplish many everyday tasks. For autonomous operation, navigation along a line on the ground is often used because it lets the robot follow a predefined path, simplifies path planning, and reduces the computational load. This paper presents a method for navigating a four-wheel mobile robot to track a line, using a deep Q-network as the control algorithm to determine the robot's action and a camera as a feedback sensor to detect the line. The control algorithm uses a convolutional neural network (CNN), acting as the deep Q-network agent, to generate the mobile robot's actions. The CNN uses images from the camera to define the state of the deep Q-network. Simulations are performed in Gazebo, which provides a 3D environment, the mobile robot model, and the line, with the controller implemented in Python. The results demonstrate high-performance tracking of complex line trajectories, with errors below 100 px; compared with a traditional vision navigation system (VNS), the proposed method's mean squared error (MSE) of 0.0264 is lower than the VNS's 0.0406. These results convincingly demonstrate the effectiveness of the suggested control approach. [ABSTRACT FROM AUTHOR]
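The abstract describes the core loop only at a high level: camera frames define the state, a Q-network scores a discrete set of robot actions, and the agent is trained with one-step Q-learning targets. The sketch below illustrates that loop under stated assumptions; it is not the paper's implementation. The action set, frame size, learning rate, and the use of a single linear layer over a flattened grayscale frame (standing in for the paper's CNN feature extractor) are all hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical discrete action set for a four-wheel line-following robot
# (the paper does not specify its action space in the abstract).
ACTIONS = ["forward", "turn_left", "turn_right"]

class TinyQAgent:
    """Minimal illustrative Q-function: one linear layer over a flattened,
    downsampled grayscale camera frame. A real deep Q-network would replace
    this layer with the CNN described in the paper."""

    def __init__(self, state_dim, n_actions, lr=0.01, gamma=0.9, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per action: Q(s, a) = w[a] . s
        self.w = rng.normal(0.0, 0.01, (n_actions, state_dim))
        self.lr, self.gamma = lr, gamma
        self.rng = rng

    def q_values(self, state):
        return self.w @ state  # vector of Q-values, one per action

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy exploration, as is standard in deep Q-learning.
        if self.rng.random() < epsilon:
            return int(self.rng.integers(len(ACTIONS)))
        return int(np.argmax(self.q_values(state)))

    def update(self, s, a, r, s_next, done):
        # One-step Q-learning target: r + gamma * max_a' Q(s', a').
        target = r if done else r + self.gamma * np.max(self.q_values(s_next))
        td_error = target - self.q_values(s)[a]
        # Gradient step on the squared TD error for the taken action only.
        self.w[a] += self.lr * td_error * s
        return td_error

# Usage: a fake 8x8 camera frame, flattened to a 64-dim state vector.
agent = TinyQAgent(state_dim=64, n_actions=len(ACTIONS))
frame = np.ones(64)                 # stand-in for a preprocessed camera image
action = agent.act(frame)           # index into ACTIONS
agent.update(frame, action, r=1.0, s_next=frame, done=True)
```

Repeated calls to `update` with a fixed reward drive the TD error toward zero, which is the mechanism (scaled up to a CNN and a replay-trained network) by which the paper's agent learns to keep the line centered in the camera frame.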

Copyright of International Journal of Electrical & Computer Engineering (2088-8708) is the property of Institute of Advanced Engineering & Science and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)