Result: A deep learning based framework for face de-morphing to strengthen security and improve accuracy in facial recognition systems.
Further Information
Face morphing attacks have emerged as a significant security threat, compromising the reliability of facial recognition systems. Despite extensive research on morphing detection, limited attention has been given to restoring the accomplice's face image, which is critical for forensic applications. In this manuscript, a deep learning based framework for face de-morphing to strengthen security and improve accuracy in facial recognition systems (DLF-FDM-SS-IA-FRS) is proposed. First, input images are taken from the Face Research Lab London Set and the Unconstrained College Students dataset. The gathered images are then pre-processed with the inverse unscented Kalman filter (I-UKF), which resizes and normalizes them. The pre-processed images are passed to feature extraction using the refined linear chirplet transform (RLCT), which extracts facial features such as the eyes, nose, mouth, ears, and eyebrows. The extracted features are then fed into a temporal dynamic graph neural network (TDGNN) to predict the de-morphed faces. Finally, the TDGNN is optimized with the Dollmaker optimization algorithm, which tunes its weight parameters. The proposed method was implemented in Python, and performance metrics such as precision, recall, accuracy, F1-score, specificity, and the receiver operating characteristic were analyzed. The DLF-FDM-SS-IA-FRS method achieves 95% precision, 95% recall, 95% F1-score, 97.5% efficiency, and a de-morphing time of 1.178 s, compared with existing methods such as face de-morphing based on identity feature transfer (FDM-IFT), the adaptive de-morphing factor framework for restoring the accomplice's facial image (ADMFF-RAFI), and improving accomplice detection in the morphing attack (IAD-MA). [ABSTRACT FROM AUTHOR]
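The abstract states that the method was implemented in Python and evaluated with precision, recall, accuracy, F1-score, specificity, and the receiver operating characteristic. The paper's own evaluation code is not available here, so the following is only a minimal sketch of how such metrics could be computed for a binary de-morphing verification task, assuming ground-truth labels y_true and predicted scores y_score (these names, the binary framing, and the use of scikit-learn are illustrative assumptions, not the authors' implementation).

```python
# Illustrative sketch only: not the authors' code. Assumes a binary view of the
# task (e.g., accomplice identity verified vs. not) with ground-truth labels
# `y_true` and predicted scores `y_score` in [0, 1].
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

def evaluate(y_true, y_score, threshold=0.5):
    """Compute the metrics named in the abstract for binary predictions."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)

    # Specificity is derived from the confusion matrix (true-negative rate).
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "precision":   precision_score(y_true, y_pred),
        "recall":      recall_score(y_true, y_pred),
        "accuracy":    accuracy_score(y_true, y_pred),
        "f1_score":    f1_score(y_true, y_pred),
        "specificity": tn / (tn + fp),
        "roc_auc":     roc_auc_score(y_true, y_score),  # area under the ROC curve
    }

if __name__ == "__main__":
    # Dummy data for demonstration only; not the paper's results.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=200)
    y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)
    print(evaluate(y_true, y_score))
```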
Copyright of Signal, Image & Video Processing is the property of Springer Nature and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)