Title:
Real-time vision-based hand gesture to text interpreter by using artificial intelligence with augmented reality element.
Source:
AIP Conference Proceedings; 2024, Vol. 2934 Issue 1, p1-12, 12p
Database:
Complementary Index

More Information

Real-time Vision-based Hand Gesture to Text Interpreter by Using Artificial Intelligence with Augmented Reality Element is a device that interprets sign language into text in real time. The communicator uses a machine learning approach with elements of deep learning, built on OpenCV, MediaPipe, and TensorFlow. These libraries distinguish the hand from other objects, detect hand movement and landmark coordinates, and analyze the image data to produce output instantly in real time. A camera captures the user's hand movements, and the recognized text is displayed on an LCD monitor. The project was developed in the Python programming language. Imagery datasets of 13,000 ASL alphabet signs and 5,000 ASL number signs were collected and trained on cloud platforms, namely Google Teachable Machine and Google Colab. The training process produced 99.85% accuracy for the alphabet and 100% accuracy for the numbers. The resulting machine learning model displays alphabets and numbers on an LCD monitor when the corresponding ASL alphabet and number hand gestures are performed in real time. The prototype's performance was evaluated with two users against plain and noisy backgrounds at several predetermined distances. [ABSTRACT FROM AUTHOR]
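The pipeline the abstract describes (hand landmarks extracted per frame, then a trained classifier mapping them to a letter or digit) can be sketched in Python. The paper's actual model is a TensorFlow network trained via Google Teachable Machine on MediaPipe output; the nearest-centroid classifier, the function names, and the synthetic landmark layout below are illustrative assumptions for a minimal sketch, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: MediaPipe Hands yields 21 (x, y) landmarks per frame.
# A trained model (TensorFlow in the paper) maps the landmark vector to an
# ASL letter or number. A nearest-centroid stand-in illustrates the idea.

def normalize_landmarks(landmarks):
    """Translate to the wrist and rescale so hand position/size don't matter."""
    pts = np.asarray(landmarks, dtype=float).reshape(21, 2)
    pts -= pts[0]                              # wrist (landmark 0) as origin
    scale = np.linalg.norm(pts, axis=1).max()  # farthest landmark from wrist
    return (pts / scale).ravel() if scale > 0 else pts.ravel()

class NearestCentroidSigns:
    """Toy stand-in for the paper's trained gesture classifier."""
    def __init__(self):
        self.centroids = {}  # label -> mean normalized feature vector

    def fit(self, samples):
        # samples: {label: [landmark arrays of shape (21, 2)]}
        for label, lms in samples.items():
            feats = np.stack([normalize_landmarks(lm) for lm in lms])
            self.centroids[label] = feats.mean(axis=0)

    def predict(self, landmarks):
        feat = normalize_landmarks(landmarks)
        return min(self.centroids,
                   key=lambda k: np.linalg.norm(self.centroids[k] - feat))

# Synthetic demo gestures (illustrative, not real ASL landmark data):
# "A" = landmarks spread along x, "B" = landmarks spread along y.
gesture_a = np.stack([np.linspace(0, 1, 21), np.zeros(21)], axis=1)
gesture_b = np.stack([np.zeros(21), np.linspace(0, 1, 21)], axis=1)

clf = NearestCentroidSigns()
clf.fit({"A": [gesture_a], "B": [gesture_b]})
print(clf.predict(gesture_a))  # prints "A"
```

In the real system the `landmarks` array would come from MediaPipe's hand-tracking output for each camera frame, and the classifier would be the trained TensorFlow model rather than this centroid heuristic.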

Copyright of AIP Conference Proceedings is the property of American Institute of Physics and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)