Multi Hand Gesture Recognition Using a Multilayer Perceptron (MLP) Model with mmWave Radar Sensing Data
This study presents research on radar-based hand gesture recognition as a step toward Human-Computer Interaction (HCI) in settings where minimal physical contact is preferable, such as healthcare and sterile environments. Vision-based systems have been studied extensively, but their sensitivity to illumination variation and occlusion, together with privacy concerns, limits their practicality in deployment. Radar sensing is a promising alternative because it is robust to environmental changes and offers stronger privacy protection; however, radar data are complex and high-dimensional, which can reduce recognition rates, weaken generalization, and increase computational cost. To address these problems, a multilayer perceptron (MLP) model was built and trained on a publicly available dataset captured with the Texas Instruments AWR1642BOOST millimetre-wave radar evaluation board. Experiments were run in Python 3.10 on Google Colab, and the model's performance was measured on training and validation splits in terms of accuracy and cross-entropy loss. The MLP reached 94.86% accuracy and a loss of 0.16 on the training set, and 87.66% accuracy and a loss of 0.41 on the validation set, indicating good convergence, effective feature extraction, and reasonable generalization. These results demonstrate the suitability of compact MLP architectures for radar-based gesture recognition and provide a reproducible benchmark for efficient, data-driven, privacy-preserving, real-time HCI systems.
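The evaluation pipeline described above (a compact MLP classifier scored on train/validation splits with accuracy and cross-entropy loss) can be sketched as follows. This is an illustrative sketch, not the authors' code: the dataset here is synthetic, and the feature dimension, gesture count, and hidden-layer sizes are assumptions standing in for the flattened radar features of the real dataset.

```python
# Minimal sketch of the described setup: a compact MLP classifier
# evaluated with accuracy and cross-entropy loss on a held-out split.
# Synthetic data stands in for the radar gesture dataset; all shapes
# and hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, log_loss

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 800, 64, 4   # e.g. 4 gesture classes

# Synthetic stand-in: class-dependent mean vectors plus noise.
y = rng.integers(0, n_classes, size=n_samples)
class_means = rng.normal(size=(n_classes, n_features))
X = class_means[y] + rng.normal(size=(n_samples, n_features))

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Compact MLP; cross-entropy is MLPClassifier's training objective.
mlp = MLPClassifier(hidden_layer_sizes=(128, 64), activation="relu",
                    max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)

val_acc = accuracy_score(y_val, mlp.predict(X_val))
val_loss = log_loss(y_val, mlp.predict_proba(X_val))
print(f"validation accuracy={val_acc:.3f}, cross-entropy loss={val_loss:.3f}")
```

On the real radar dataset, each row of `X` would be a feature vector derived from the AWR1642BOOST frames rather than random noise; the metric calls are the same ones used to report the figures quoted above.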