Title:
Mobility Aid for the Visually Impaired Using Machine Learning and Spatial Audio.
Source:
Journal of Robotics & Control (JRC); 2025, Vol. 6 Issue 2, p779-797, 19p
Database:
Complementary Index

Abstract:

Assistive technology is crucial in enhancing the quality of life for individuals with disabilities, including the visually impaired, yet many mobility aids lack advanced features such as real-time machine-learning-based object detection and spatial audio for environmental awareness. This research contributes to developing more intelligent and adaptable assistive technology for visually impaired individuals, promoting improved navigation and environmental awareness. It presents a head-mounted mobility aid that integrates a time-of-flight camera, a web camera, and a touch sensor with K-Means clustering, Convolutional Neural Networks (CNNs), and concurrent programming on a Raspberry Pi 4B to detect and classify surrounding obstacles and objects. The system converts obstacle data into spatial audio, allowing users to perceive their surroundings through sound direction and intensity. Object recognition is activated via the touch sensor, which provides distance and directional information relative to the user through an audio description. The concurrent programming implementation improves execution time by 50.22% compared to an Infinite Loop Design (ILD), enhancing real-time responsiveness. The system nonetheless has limitations, including object recognition restricted to 80 predefined categories, a 4-meter detection range, reduced accuracy under high-intensity sunlight, and potential interference with spatial audio perception from external noise. Machine-learning-based assistive technology for the mobility of blind people has thus been developed in a form that can be used flexibly during the user's movement. [ABSTRACT FROM AUTHOR]

Keywords--Assistive Technology; Blind People; Time-of-Flight Camera; K-Means; Image Recognition; Concurrent Programming
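
The abstract gives no implementation details, but as a rough illustration of the pipeline it describes, the Python sketch below clusters time-of-flight depth pixels into obstacles with K-Means and maps each cluster's direction and distance to stereo gain, matching the summarized spatial-audio encoding (direction as left/right balance, proximity as intensity). All names (`obstacles_from_depth`, `stereo_gains`) and parameters (3 clusters; the 4 m limit taken from the stated detection range) are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def obstacles_from_depth(depth_m, k=3, max_range_m=4.0):
    """Group valid depth pixels into k obstacle clusters.

    depth_m: 2D array of per-pixel distances in meters from the ToF camera.
    Returns a list of (azimuth, distance) pairs, with azimuth in [-1, 1]
    (far left to far right across the frame) and distance in meters.
    """
    h, w = depth_m.shape
    ys, xs = np.nonzero((depth_m > 0) & (depth_m <= max_range_m))
    if len(xs) < k:
        return []                                  # too few valid pixels to cluster
    d = depth_m[ys, xs]
    # Cluster in normalized (column, row, distance) space so nearby pixels
    # at a similar depth collapse into a single obstacle.
    feats = np.column_stack([xs / w, ys / h, d / max_range_m])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    obstacles = []
    for c in range(k):
        m = labels == c
        azimuth = 2.0 * (xs[m].mean() / w) - 1.0   # -1 = left edge, +1 = right edge
        obstacles.append((azimuth, float(d[m].mean())))
    return obstacles

def stereo_gains(azimuth, distance, max_range_m=4.0):
    """Constant-power pan: direction sets left/right balance,
    proximity sets loudness (closer obstacles sound louder)."""
    pan = (azimuth + 1.0) / 2.0                    # 0 = fully left, 1 = fully right
    loudness = 1.0 - min(distance, max_range_m) / max_range_m
    left = loudness * np.cos(pan * np.pi / 2)
    right = loudness * np.sin(pan * np.pi / 2)
    return left, right
```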
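Likewise, a minimal sketch of the concurrent design the abstract contrasts with an Infinite Loop Design: rather than capturing, clustering, and playing audio sequentially in one loop, three threads pass data through queues so a slow stage cannot stall the others. `DummyCamera` and `DummyPlayer` are placeholders for hardware drivers the abstract does not describe, and the helpers reuse the previous sketch.

```python
import queue
import threading
import time

import numpy as np

class DummyCamera:
    """Stand-in for the ToF camera driver (the paper's actual API is unknown)."""
    def read(self):
        time.sleep(0.05)                            # ~20 fps frame rate
        return np.random.uniform(0.0, 5.0, (240, 320))

class DummyPlayer:
    """Stand-in for the stereo audio backend."""
    def play(self, left, right):
        print(f"cue  L={left:.2f}  R={right:.2f}")

frames = queue.Queue(maxsize=1)                     # keep only the newest frame
cues = queue.Queue()

def capture_loop(camera):
    """Producer thread: grab frames; drop stale ones instead of blocking."""
    while True:
        frame = camera.read()
        if frames.full():
            try:
                frames.get_nowait()
            except queue.Empty:
                pass
        frames.put(frame)

def detect_loop():
    """Worker thread: cluster each frame into obstacles and queue audio cues."""
    while True:
        for azimuth, dist in obstacles_from_depth(frames.get()):
            cues.put((azimuth, dist))

def audio_loop(player):
    """Worker thread: render cues so audio output never stalls detection."""
    while True:
        player.play(*stereo_gains(*cues.get()))

if __name__ == "__main__":
    cam, spk = DummyCamera(), DummyPlayer()
    for fn, args in [(capture_loop, (cam,)), (detect_loop, ()), (audio_loop, (spk,))]:
        threading.Thread(target=fn, args=args, daemon=True).start()
    time.sleep(1.0)                                 # let the pipeline run briefly
```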

Copyright of Journal of Robotics & Control (JRC) is the property of Journal of Robotics & Control (JRC) and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)