Title:
PARALLEL AND DISTRIBUTED COMPUTING TECHNOLOGIES FOR AUTONOMOUS VEHICLE NAVIGATION
Source:
Radio Electronics, Computer Science, Control; No. 4 (2023); p. 111; ISSN: 2313-688X; 1607-3274
Publisher Information:
National University "Zaporizhzhia Polytechnic" 2024-01-03
Document Type:
Electronic Resource
Availability:
Open access content.
https://creativecommons.org/licenses/by-sa/4.0
Note:
application/pdf
Radio Electronics, Computer Science, Control
English
Other Numbers:
UANTU oai:ojs.journals.uran.ua:article/296211
https://ric.zp.edu.ua/article/view/296211
1478949593
Contributing Source:
NATIONAL TECH UNIV OF UKRAINE
From OAIster®, provided by the OCLC Cooperative.
Accession Number:
edsoai.on1478949593
Database:
OAIster

Further Information

Context. Autonomous vehicles are becoming increasingly popular, and one of the key challenges in their development is ensuring effective navigation in space and movement within designated lanes. This paper examines a method of spatial orientation for vehicles based on computer vision and artificial neural networks. The research focused on the navigation system of an autonomous vehicle built on modern distributed and parallel computing technologies.

Objective. The aim of this work is to enhance modern autonomous vehicle navigation algorithms through parallel training of artificial neural networks and to determine the combination of technologies and device nodes that maximizes speed and enables real-time decision-making in spatial navigation for autonomous vehicles.

Method. The research establishes that using computer vision and neural networks for road-lane segmentation is an effective method for the spatial orientation of autonomous vehicles. On multi-core computing systems, applying the OpenMP parallel programming technology to neural network training with varying numbers of parallel threads increases the algorithm's execution speed. However, using CUDA to train the neural network on a graphics processing unit improves prediction speed significantly compared to OpenMP. Additionally, the feasibility of PyTorch Distributed Data Parallel (DDP) for training the neural network across multiple graphics processing units (nodes) simultaneously was explored; this approach further improved prediction execution times compared to using a single graphics processing unit.

Results. An algorithm for training and prediction of an artificial neural network was developed using two independent nodes, each equipped with separate graphics processing units, synchronized to exchange training results after each epoch, employing PyTorch
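The abstract's OpenMP experiment varies the number of CPU threads used during training and measures the speedup. The paper's own code is not reproduced in this record, so the following is a minimal Python sketch of the same thread-scaling measurement; `timed_train`, the layer sizes, and the step count are illustrative placeholders, and `torch.set_num_threads` stands in for OpenMP's thread-count control on the CPU:

```python
import time
import torch

def timed_train(num_threads, steps=20):
    """Time a short CPU training run with a given intra-op thread count."""
    torch.set_num_threads(num_threads)  # analogous to setting OMP_NUM_THREADS
    torch.manual_seed(0)
    model = torch.nn.Linear(256, 256)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(512, 256)
    y = torch.randn(512, 256)
    start = time.perf_counter()
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return time.perf_counter() - start

# Compare wall-clock time as the thread count grows, as in the
# paper's OpenMP experiment (actual speedups are hardware-dependent).
for n in (1, 2, 4):
    print(f"{n} thread(s): {timed_train(n):.4f} s")
```

As with OpenMP, the speedup saturates once the thread count exceeds the number of physical cores or the per-thread work becomes too small.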
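The CUDA comparison in the abstract amounts to running the same training loop on the CPU and on a graphics processing unit and comparing wall-clock times. A minimal sketch, assuming PyTorch is installed; the small convolutional model (loosely shaped like a lane-segmentation network) and the synthetic data are illustrative placeholders, not the authors' actual setup:

```python
import time
import torch

def train_epoch(device, steps=50):
    """Train a tiny segmentation-style network for a fixed number of steps."""
    torch.manual_seed(0)
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.Conv2d(8, 1, 3, padding=1),  # per-pixel lane/non-lane logits
    ).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(4, 3, 64, 64, device=device)   # synthetic camera frames
    y = torch.rand(4, 1, 64, 64, device=device)    # synthetic lane masks
    start = time.perf_counter()
    for _ in range(steps):
        loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU kernels before timing
    return time.perf_counter() - start, loss.item()

cpu_time, cpu_loss = train_epoch(torch.device("cpu"))
print(f"CPU: {cpu_time:.4f} s, final loss {cpu_loss:.4f}")
if torch.cuda.is_available():
    gpu_time, gpu_loss = train_epoch(torch.device("cuda"))
    print(f"GPU: {gpu_time:.4f} s, final loss {gpu_loss:.4f}")
```

For realistic model and batch sizes the GPU run is typically far faster, which is the effect the abstract reports for CUDA over OpenMP.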
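The multi-node scheme in the Results, where independent nodes exchange training results after each epoch, is what PyTorch DDP provides out of the box: gradients are averaged across processes during the backward pass. A minimal single-machine sketch, assuming PyTorch is installed; the `gloo` backend, the localhost address and port, the two-process world size, and the toy model are illustrative placeholders for the authors' two-GPU-node setup:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # Rendezvous settings; in a real multi-node run MASTER_ADDR would be
    # the address of node 0, and the backend would typically be "nccl".
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29531"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = torch.nn.Linear(10, 1)
    ddp_model = DDP(model)  # wraps the model; syncs gradients across ranks
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for epoch in range(3):
        x = torch.randn(8, 10)
        y = torch.randn(8, 1)
        loss = torch.nn.functional.mse_loss(ddp_model(x), y)
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced (averaged) here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2  # two processes standing in for two GPU nodes
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)
```

Because every rank applies the same averaged gradients, all replicas stay synchronized after each step, which subsumes the per-epoch exchange described in the abstract.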