Result: Deep Learning for Autonomous Surgical Guidance Using 3-Dimensional Images From Forward-Viewing Endoscopic Optical Coherence Tomography.

Title:
Deep Learning for Autonomous Surgical Guidance Using 3-Dimensional Images From Forward-Viewing Endoscopic Optical Coherence Tomography.
Authors:
Ly S; School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA.
Badré A; School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA.
Brandt P; School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA.
Wang C; Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, Oklahoma, USA.
Calle P; School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA.
Reynolds J; School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA.
Zhang Q; Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, Oklahoma, USA.
Fung KM; Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA.; Stephenson Cancer Center, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA.
Cui H; School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA.
Yu Z; Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA.; Children's Hospital, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA.
Patel SG; Department of Urology, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA.
Liu Y; School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA.
Bradley NA; Department of Urology, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA.
Tang Q; Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, Oklahoma, USA.; Stephenson Cancer Center, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA.
Pan C; School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA.; Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, Oklahoma, USA.
Source:
Journal of biophotonics [J Biophotonics] 2025 Nov; Vol. 18 (11), pp. e202500181. Date of Electronic Publication: 2025 Jul 25.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Wiley-VCH Country of Publication: Germany NLM ID: 101318567 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 1864-0648 (Electronic) Linking ISSN: 1864-063X NLM ISO Abbreviation: J Biophotonics Subsets: MEDLINE
Imprint Name(s):
Original Publication: Weinheim : Wiley-VCH
References:
Biomed Opt Express. 2022 Apr 11;13(5):2728-2738. (PMID: 35774323)
Biomed Opt Express. 2021 Mar 29;12(4):2404-2418. (PMID: 33996237)
Comput Biol Med. 2022 Feb;141:105089. (PMID: 34920160)
Comput Methods Programs Biomed. 2025 Dec;272:109063. (PMID: 40946520)
Invest Ophthalmol Vis Sci. 2016 Jul 1;57(9):OCT1-OCT13. (PMID: 27409459)
Science. 1991 Nov 22;254(5035):1178-81. (PMID: 1957169)
Med Image Anal. 2020 Aug;64:101730. (PMID: 32492583)
Turk J Urol. 2018 Nov;44(6):478-483. (PMID: 30395796)
Med Image Anal. 2017 Dec;42:60-88. (PMID: 28778026)
Nat Rev Nephrol. 2022 May;18(5):277-293. (PMID: 35173348)
PLoS One. 2014 Dec 12;9(12):e114818. (PMID: 25502759)
Sci Rep. 2024 May 23;14(1):11758. (PMID: 38783015)
Diagn Interv Imaging. 2017 Apr;98(4):315-319. (PMID: 27765515)
Sci Rep. 2023 Sep 5;13(1):14628. (PMID: 37670066)
Sci Rep. 2022 May 31;12(1):9057. (PMID: 35641505)
Heliyon. 2023 Sep 14;9(9):e20052. (PMID: 37809748)
Eur Urol. 2016 Aug;70(2):382-96. (PMID: 26876328)
Sci Rep. 2023 Apr 8;13(1):5760. (PMID: 37031338)
Int J Comput Assist Radiol Surg. 2021 Sep;16(9):1517-1526. (PMID: 34053010)
J Biomed Opt. 2020 Sep;25(9):. (PMID: 32914606)
BMC Bioinformatics. 2006 Feb 23;7:91. (PMID: 16504092)
J Biophotonics. 2022 May;15(5):e202100347. (PMID: 35103420)
J Biophotonics. 2024 Feb;17(2):e202300330. (PMID: 37833242)
Grant Information:
R01 DK133717 United States DK NIDDK NIH HHS; P30 CA225520 United States CA NCI NIH HHS; P20 GM135009 United States GM NIGMS NIH HHS; P20 GM103639 United States GM NIGMS NIH HHS; HR23-071 Oklahoma Center for the Advancement of Science and Technology; 2238648 National Science Foundation; OIA-2132161 National Science Foundation; The University of Oklahoma Libraries' Open Access Fund
Contributed Indexing:
Keywords: 3D-CNN; OCT imaging; data pre-processing; deep learning; nested cross-validation
Entry Date(s):
Date Created: 20250725 Date Completed: 20251116 Latest Revision: 20251223
Update Code:
20251223
PubMed Central ID:
PMC12718128
DOI:
10.1002/jbio.202500181
PMID:
40709742
Database:
MEDLINE

Further Information

A three-dimensional convolutional neural network (3D-CNN) was developed to analyze volumetric optical coherence tomography (OCT) images and enhance endoscopic guidance during percutaneous nephrostomy. Benchmarked with a 10-fold nested cross-validation procedure on a dataset of 10 porcine kidneys, the model achieved an average test accuracy of 90.57%. This significantly exceeded the performance of 2D-CNN models, which attained average test accuracies of 85.63% to 88.22% using 1, 10, or 100 radial sections extracted from the 3D OCT volumes. The 3D-CNN (~12 million parameters) was also benchmarked against three state-of-the-art volumetric architectures: the 3D Vision Transformer (3D-ViT, ~45 million parameters), 3D-DenseNet121 (~12 million parameters), and the Multi-plane and Multi-slice Transformer (M3T, ~29 million parameters). While these models achieved comparable inference accuracy, the 3D-CNN exhibited lower inference latency (33 ms) than 3D-ViT (86 ms), 3D-DenseNet121 (58 ms), and M3T (93 ms), a critical advantage for real-time surgical guidance. These results demonstrate the 3D-CNN's capability as a powerful and practical tool for computer-aided diagnosis in OCT-guided surgical interventions.
(© 2025 Wiley-VCH GmbH.)
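The 10-fold nested cross-validation scheme described in the abstract, with one fold per kidney, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the kidney identifiers, and the choice of a single validation kidney per outer fold are all assumptions made for the example.

```python
# Sketch of 10-fold nested cross-validation over 10 kidneys: each kidney
# serves once as the held-out test set, and the remaining 9 are split into
# training and validation folds. All names here are illustrative.

def nested_cv_splits(kidney_ids, n_outer=10):
    """Return (train, val, test) kidney-ID splits for nested cross-validation."""
    splits = []
    for i in range(n_outer):
        test = [kidney_ids[i]]                      # outer fold: held-out test kidney
        rest = kidney_ids[:i] + kidney_ids[i + 1:]  # remaining 9 kidneys
        val = [rest[i % len(rest)]]                 # inner fold: one validation kidney
        train = [k for k in rest if k not in val]   # 8 kidneys used for training
        splits.append((train, val, test))
    return splits

splits = nested_cv_splits([f"kidney_{n}" for n in range(10)])
```

Splitting at the kidney level (rather than the volume level) keeps all scans from one animal in the same fold, so the reported test accuracy reflects generalization to unseen kidneys.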