Result: Deep learning-based landmark detection and localization for autonomous robots in outdoor settings

Title:
Deep learning-based landmark detection and localization for autonomous robots in outdoor settings
Source:
Robotica, volume 43, issue 4, pages 1314-1330; ISSN 0263-5747, 1469-8668
Publisher Information:
Cambridge University Press (CUP)
Publication Year:
2025
Document Type:
Journal article (article in journal/newspaper)
Language:
English
DOI:
10.1017/s0263574725000219
Accession Number:
edsbas.61B4DA29
Database:
BASE

Further Information

Navigation is an important skill for an autonomous robot, as information about the robot's location is necessary for making decisions about upcoming events. The objective of the localization technique is "to know the location of the collected data." In previous works, several deep learning methods were used for localization, but none achieved sufficient accuracy. To address this issue, an Enhanced Capsule Generation Adversarial Network and an optimized Dual Interactive Wasserstein Generative Adversarial Network for landmark detection and localization of autonomous robots in outdoor environments (ECGAN-DIWGAN-RSO-LAR) is proposed in this manuscript. Here, the outdoor robot localization data are taken from the Virtual KITTI dataset. The approach contains two phases: landmark detection and localization. In the landmark detection phase, an Enhanced Capsule Generation Adversarial Network (ECGAN) detects landmarks in the captured image. In the robot localization phase, a Dual Interactive Wasserstein Generative Adversarial Network (DIWGAN) determines the robot's location coordinates and compass orientation from the identified landmarks. The weight parameters of the DIWGAN are then optimized by the Rat Swarm Optimization (RSO) algorithm. The proposed ECGAN-DIWGAN-RSO-LAR is implemented in Python. The proposed ECGAN-DIWGAN-RSO-LAR technique attains 22.67%, 12.45%, and 8.89% higher accuracy compared to existing methods.
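Since the record states only that the method is implemented in Python, the RSO weight-optimization step can be illustrated with a minimal sketch. This is a simplified version of the published Rat Swarm Optimization update (candidate positions are pulled toward the current best solution, with a linearly decaying coefficient), applied here to a toy objective rather than the DIWGAN weight vectors; the function name, parameter choices, and the per-dimension exploration coefficient are illustrative assumptions, not the paper's implementation.

```python
import random

def rso_minimize(objective, dim, n_rats=20, max_iter=100,
                 bounds=(-5.0, 5.0), seed=1):
    """Simplified Rat Swarm Optimization (RSO) sketch (hypothetical helper).

    Update (per the published RSO scheme, simplified):
        P        = A * P_i + C * (P_best - P_i)
        P_i_new  = |P_best - P|
    where A = R - t * (R / max_iter) with R ~ U[1, 5], and C ~ U[0, 2].
    Here C is drawn per dimension (an illustrative variant); the paper
    would optimize DIWGAN weight parameters instead of a toy objective.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize the rat swarm uniformly at random within the search bounds.
    rats = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_rats)]
    best = min(rats, key=objective)[:]
    for t in range(max_iter):
        R = rng.uniform(1.0, 5.0)
        A = R - t * (R / max_iter)  # decays linearly toward 0
        for rat in rats:
            for d in range(dim):
                C = rng.uniform(0.0, 2.0)
                P = A * rat[d] + C * (best[d] - rat[d])
                # Move relative to the best rat, clamped to the bounds.
                rat[d] = min(max(abs(best[d] - P), lo), hi)
        candidate = min(rats, key=objective)
        if objective(candidate) < objective(best):
            best = candidate[:]  # keep the best-so-far solution
    return best

# Toy usage: minimize the sphere function; the optimum is at the origin.
best = rso_minimize(lambda x: sum(v * v for v in x), dim=3)
fitness = sum(v * v for v in best)
```

In the paper's setting, `objective` would evaluate DIWGAN training loss for a candidate weight configuration, so each rat encodes one weight vector; the sphere function above merely keeps the sketch self-contained.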