Automated detection of mouth opening in newborn infants.
Original Publication: Austin, Tex.: Psychonomic Society, c2005–
Further information
Automated behavioral measurement using machine learning is gaining ground in psychological research. Automated approaches have the potential to reduce the labor and time associated with manual behavioral coding and to enhance measurement objectivity, yet their application in young infants remains limited. We asked whether automated measurement can accurately identify newborn mouth opening, a facial gesture involved in infants' communication and expression, in videos of 29 newborns (age range 9–29 days; 55.2% female; 58.6% White; 51.7% Hispanic/Latino) during neonatal imitation testing. We employed a three-dimensional cascade regression computer vision algorithm to automatically track and register newborn faces. The facial landmark coordinates of each frame were input to a support vector machine (SVM) classifier trained to recognize the presence and absence of mouth opening at the frame level, as identified by expert human coders. The SVM classifier was trained using leave-one-infant-out cross-validation (training: N = 22 newborns, 95 videos, 354,468 frames), and the best classifier showed an average validation accuracy of 75%. The final SVM classifier was then tested on newborns held out from the training set (testing: N = 7 newborns, 29 videos, 118,615 frames) and achieved 76% overall accuracy in identifying mouth opening. An intraclass correlation coefficient of .81 between the SVM classifier and human experts indicated that the classifier was, on a practical level, in agreement with experts in quantifying newborns' total rates of mouth opening across videos. These results highlight the potential of automated measurement approaches for objectively identifying the presence and absence of mouth opening in newborn infants.
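The pipeline described above (frame-level landmark features, an SVM classifier, and leave-one-infant-out cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic, the feature count and infant count are arbitrary toy values, and scikit-learn's `LeaveOneGroupOut` is assumed as the grouping mechanism, with infant identity as the group variable so that no infant contributes frames to both the training and validation folds.

```python
# Minimal sketch (not the authors' implementation): frame-level
# mouth-opening classification from facial-landmark features with an
# SVM, validated by leave-one-infant-out cross-validation.
# All data below are synthetic.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_infants, frames_per_infant, n_features = 5, 200, 10  # toy sizes
X = rng.normal(size=(n_infants * frames_per_infant, n_features))
groups = np.repeat(np.arange(n_infants), frames_per_infant)  # infant IDs
# Synthetic binary labels: "mouth open" when one feature exceeds a cutoff
y = (X[:, 0] > 0.5).astype(int)

# Standardize features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# Each fold holds out every frame from one infant
accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean leave-one-infant-out accuracy: {np.mean(accuracies):.2f}")
```

Grouping the cross-validation folds by infant, rather than by frame, is the key design choice: adjacent frames from the same infant are highly correlated, so frame-level random splits would leak information and inflate validation accuracy.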
(© 2025. The Psychonomic Society, Inc.)
Declarations. Conflicts of interest: The authors have no relevant financial or non-financial interests to disclose. Ethics approval: The Institutional Review Board for Human Subject Research at the University of Miami approved this study for human participants. Consent to participate: Informed consent was obtained from the caregivers of all infants included in the study. Consent for publication: The authors affirm that both adults consented to publication of the images in Figs. 1 and 2, and parents consented to the publication of their infant’s image in Figs. 2 and 4. Open practices statement: The de-identified face tracking data and expert human coder data are available at the Open Science Framework repository: https://osf.io/7xqd8/?view_only=912c5127521c4426beca48f8de0c2f3c. The raw newborn video data used for the manual and automated mouth opening detection are stored in the data repository of the University of Miami and are not shared in open-access format to protect participants’ confidentiality. Video data sharing is available from the corresponding author upon reasonable request. The study was not preregistered. Python code for replicating the training and evaluation of the support vector machine classifier and a user-friendly toolkit with instructions are available at the Open Science Framework repository: https://osf.io/7xqd8/?view_only=912c5127521c4426beca48f8de0c2f3c.