Title:
OsiriXGPT: An Innovative AI Co-pilot Plug-In for Seamless Deployment of Generative AI Models in Scan-to-Scan Reporting Workflows.
Authors:
Candito A; Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK.; MRI Unit, Royal Marsden NHS Foundation Trust, London, UK., Mun TS; Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK., Holbrey R; Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK., Doran S; Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK., Messiou C; Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK.; MRI Unit, Royal Marsden NHS Foundation Trust, London, UK., Koh DM; Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK.; MRI Unit, Royal Marsden NHS Foundation Trust, London, UK., Blackledge MD; Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK. matthew.blackledge@icr.ac.uk.
Source:
Journal of imaging informatics in medicine [J Imaging Inform Med] 2025 Oct 16. Date of Electronic Publication: 2025 Oct 16.
Publication Model:
Ahead of Print
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Springer Nature Country of Publication: Switzerland NLM ID: 9918663679206676 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 2948-2933 (Electronic) Linking ISSN: 29482925 NLM ISO Abbreviation: J Imaging Inform Med Subsets: MEDLINE
Imprint Name(s):
Original Publication: [Cham, Switzerland] : Springer Nature, [2024]-
References:
The Royal College of Radiologists, “Clinical Radiology Workforce Census 2023,” 2023.
K. B. Lysdahl and B. M. Hofmann, “What causes increasing and unnecessary use of radiological investigations? a survey of radiologists’ perceptions,” BMC Health Serv Res, vol. 9, 2009.
T. Davenport and R. Kalakota, “The potential for artificial intelligence in healthcare,” Future Healthc J, vol. 6, no. 2, pp. 94–102, 2019. (DOI: 10.7861/futurehosp.6-2-94; PMID: 31363513; PMCID: PMC6616181)
C. Mello-Thoms and C. A. B. Mello, “Clinical applications of artificial intelligence in radiology,” Br J Radiol, vol. 96, no. 1150, 2023.
M. Nair, P. Svedberg, I. Larsson, and J. M. Nygren, “A comprehensive overview of barriers and strategies for AI implementation in healthcare: Mixed-method design,” PLoS One, vol. 19, no. 8, 2024.
Q. Jin et al., “Hidden flaws behind expert-level accuracy of multimodal GPT-4 vision in medicine,” NPJ Digit Med, vol. 7, no. 1, 2024.
K. Saab et al., “Capabilities of Gemini Models in Medicine,” 2024 [Online]. Available: http://arxiv.org/abs/2404.18416.
C. Li et al., “LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day,” 2023 [Online]. Available: http://arxiv.org/abs/2306.00890.
P. S. Suh et al., “Comparing Diagnostic Accuracy of Radiologists versus GPT-4V and Gemini Pro Vision Using Image Inputs from Diagnosis Please Cases,” Radiology, vol. 312, no. 1, 2024.
F. Busch, T. Han, M. R. Makowski, D. Truhn, K. K. Bressem, and L. Adams, “Integrating Text and Image Analysis: Exploring GPT-4V’s Capabilities in Advanced Radiological Applications Across Subspecialties,” J Med Internet Res, vol. 26, no. 1, 2024.
V. Gupta et al., “Current State of Community-Driven Radiological AI Deployment in Medical Imaging,” JMIR AI, vol. 3, no. 1, 2024.
K. G. van Leeuwen et al., “Comparison of Commercial AI Software Performance for Radiograph Lung Nodule Detection and Bone Age Prediction,” Radiology, vol. 310, no. 1, 2024.
J. J. X. Quek, O. J. Nickalls, B. S. S. Wong, and M. O. Tan, “Deploying artificial intelligence in the detection of adult appendicular and pelvic fractures in the Singapore emergency department after hours: efficacy, cost savings and non-monetary benefits,” Singapore Med J, 2024.
P. S. Gidde et al., “Validation of expert system enhanced deep learning algorithm for automated screening for COVID-Pneumonia on chest X-rays,” Sci Rep, vol. 11, no. 1, 2021.
A. Jimenez-Pastor et al., “Automated prostate multi-regional segmentation in magnetic resonance using fully convolutional neural networks,” Eur Radiol, vol. 33, no. 7, pp. 5087–5096, 2023. (DOI: 10.1007/s00330-023-09410-9; PMID: 36690774)
F. Yu et al., “Evaluating progress in automatic chest X-ray radiology report generation,” Patterns, vol. 4, no. 9, 2023.
A. Rosset, L. Spadola, and O. Ratib, “OsiriX: An open-source software for navigating in multidimensional DICOM images,” J Digit Imaging, vol. 17, no. 3, pp. 205–216, 2004. (DOI: 10.1007/s10278-004-1014-6; PMID: 15534753; PMCID: PMC3046608)
T. Zhao et al., “A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities,” Nat Methods, 2024.
J. Ma, Y. He, F. Li, L. Han, C. You, and B. Wang, “Segment anything in medical images,” Nat Commun, vol. 15, no. 1, pp. 1–9, 2024.
K. Zhang et al., “A generalist vision–language foundation model for diverse biomedical tasks,” Nat Med, 2024.
H.-Y. Zhou, S. Adithan, J. Nicolás Acosta, E. J. Topol, and P. Rajpurkar, “A Generalist Learner for Multifaceted Medical Image Interpretation,” 2024. [Online]. Available: https://doi.org/10.48550/arXiv.2405.07988.
A. I. Pérez-Sanpablo, J. Quinzaños-Fresnedo, J. Gutiérrez-Martínez, I. G. Lozano-Rodríguez, and E. Roldan-Valadez, “Transforming Medical Imaging: The Role of Artificial Intelligence Integration in PACS for Enhanced Diagnostic Accuracy and Workflow Efficiency,” Curr Med Imaging, vol. 21, 2025.
P. Theriault-Lauzier et al., “A Responsible Framework for Applying Artificial Intelligence on Medical Images and Signals at the Point of Care: The PACS-AI Platform,” Canadian Journal of Cardiology, vol. 40, no. 10, pp. 1828–1840, 2024. (DOI: 10.1016/j.cjca.2024.05.025; PMID: 38885787)
A. Gill et al., “Artificial Intelligence user interface preferences in radiology: A scoping review,” J Med Imaging Radiat Sci, vol. 56, no. 3, 2025.
D. Lameira and F. Ferraz, “Transversal PACS Browser API: Addressing Interoperability Challenges in Medical Imaging Systems,” 2024 [Online]. Available: https://doi.org/10.48550/arXiv.2412.14229.
S. Purkayastha et al., “A general-purpose AI assistant embedded in an open-source radiology information system,” 2023 [Online]. Available: https://doi.org/10.48550/arXiv.2303.10338.
M. D. Blackledge, D. J. Collins, D. M. Koh, and M. O. Leach, “Rapid development of image analysis research tools: Bridging the gap between researcher and clinician with pyOsiriX,” Comput Biol Med, vol. 69, pp. 203–212, 2016. (DOI: 10.1016/j.compbiomed.2015.12.002; PMID: 26773941; PMCID: PMC4761020)
T. Sum et al., “OsiriXgrpc: Rapid development and deployment of state-of-the-art artificial intelligence for clinical practice,” 4th Annual AAAI Workshop on AI to Accelerate Science and Engineering (AI2ASE), 2022.
M. Abadi et al., “TensorFlow: A system for large-scale machine learning,” 2016 [Online]. Available: https://doi.org/10.48550/arXiv.1605.08695.
A. Paszke et al., “PyTorch: An Imperative Style, High-Performance Deep Learning Library,” 2019 [Online]. Available: https://doi.org/10.48550/arXiv.1912.01703.
M. Barker et al., “Introducing the FAIR Principles for research software,” Sci Data, vol. 9, no. 622, 2022.
J. F. Senge et al., “ChatGPT may free time needed by the interventional radiologist for administration / documentation,” Swiss Journal of Radiology and Nuclear Medicine, vol. 7, no. 2, p. 14, 2024. (DOI: 10.59667/sjoranm.v7i2.12)
H. Li et al., “Decoding radiology reports: Potential application of OpenAI ChatGPT to enhance patient understanding of diagnostic reports,” Clin Imaging, vol. 101, pp. 137–141, 2023. (DOI: 10.1016/j.clinimag.2023.06.008; PMID: 37336169)
P. Hager et al., “Evaluation and mitigation of the limitations of large language models in clinical decision-making,” Nat Med, 2024.
P. Keshavarz et al., “ChatGPT in radiology: A systematic review of performance, pitfalls, and future perspectives,” Diagnostic and Interventional Imaging, vol. 105, no. 7–8, pp. 251–265, 2024. (DOI: 10.1016/j.diii.2024.04.003; PMID: 38679540)
OpenAI et al., “GPT-4 Technical Report,” 2023 [Online]. Available: http://arxiv.org/abs/2303.08774.
A. Kirillov et al., “Segment Anything,” 2023 [Online]. Available: https://doi.org/10.48550/arXiv.2304.02643.
T. Orekondy, M. Fritz, and B. Schiele, “Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 8466–8475, Dec. 2017.
G. Petralia et al., “Whole-body magnetic resonance imaging (WB-MRI) for cancer screening: recommendations for use,” Radiologia Medica, vol. 126, no. 11, pp. 1434–1450, 2021. (DOI: 10.1007/s11547-021-01392-2; PMID: 34338948)
Y. S. Hu et al., “Applying ONCO-RADS to whole-body MRI cancer screening in a retrospective cohort of asymptomatic individuals,” Cancer Imaging, vol. 24, no. 1, 2024.
F. Zugni, A. R. Padhani, D. Koh, P. E. Summers, M. Bellomi, and G. Petralia, “Whole-body magnetic resonance imaging (WB-MRI) for cancer screening in asymptomatic subjects of the general population: review and recommendations,” Cancer Imaging, pp. 1–13, 2020.
D. C. Sullivan et al., “Metrology standards for quantitative imaging biomarkers,” Radiology, vol. 277, no. 3, pp. 813–825, 2015. (DOI: 10.1148/radiol.2015142202; PMID: 26267831)
G. Petralia et al., “Oncologically Relevant Findings Reporting and Data System (ONCO-RADS): Guidelines for the Acquisition, Interpretation, and Reporting of Whole-Body MRI for Cancer,” Radiology, 2021.
J. M. Winfield et al., “Extracranial Soft-Tissue Tumors: Repeatability of Apparent Diffusion Coefficient Estimates from Diffusion-weighted MR Imaging,” Radiology, vol. 284, no. 1, 2017.
F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA Cancer J Clin, vol. 68, pp. 394–424, 2018. (PMID: 30207593)
N. R. Brand, L. G. Qu, A. Chao, and A. M. Ilbawi, “Delays and Barriers to Cancer Care in Low- and Middle-Income Countries: A Systematic Review,” Oncologist, vol. 24, no. 12, pp. e1371–e1380, 2019. (DOI: 10.1634/theoncologist.2019-0057; PMID: 31387949; PMCID: PMC6975966)
V. Florou, A. G. Nascimento, A. Gulia, and G. de Lima Lopes, “Global Health Perspective in Sarcomas and Other Rare Cancers,” Am Soc Clin Oncol Educ Book, vol. 38, 2018.
C. S. Pramesh et al., “Priorities for cancer research in low- and middle-income countries: a global perspective,” Nat Med, vol. 28, no. 4, pp. 649–657, 2022. (DOI: 10.1038/s41591-022-01738-x; PMID: 35440716; PMCID: PMC9108683)
J. N. Acosta et al., “The Impact of AI Assistance on Radiology Reporting: A Pilot Study Using Simulated AI Draft Reports,” 2024 [Online]. Available: https://doi.org/10.48550/arXiv.2412.12042.
R. Siepmann et al., “The virtual reference radiologist: comprehensive AI assistance for clinical image reading and interpretation,” Eur Radiol, vol. 34, no. 10, pp. 6652–6666, 2024. (DOI: 10.1007/s00330-024-10727-2; PMID: 38627289; PMCID: PMC11399201)
Y. Huang et al., “Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models,” IEEE Transactions on Software Engineering, 2023.
W. Cai, “Feasibility and Prospect of Privacy-preserving Large Language Models in Radiology,” Radiology, vol. 309, no. 1, 2023.
Grant Information:
C56167/A29363 International Accelerator Award funded by Cancer Research UK
Contributed Indexing:
Keywords: Large Language Models (LLMs); One-click AI-driven segmentation tools; OsiriX; OsiriXgrpc; Radiology reporting; Radiology workflow; Vision-Language Models (VLMs)
Entry Date(s):
Date Created: 20251016 Latest Revision: 20251016
Update Code:
20251017
DOI:
10.1007/s10278-025-01712-2
PMID:
41102424
Database:
MEDLINE

Further Information

Generative Artificial Intelligence (GenAI) has the potential to transform radiology by reducing reporting burdens, enhancing diagnostic workflows and facilitating communication of complex radiological information. However, research and adoption remain limited due to the lack of seamless integration with medical imaging viewers. This study introduces OsiriXgrpc, an open-source API plug-in that bridges this gap, enabling real-time communication between OsiriX, a CE-marked and FDA-approved DICOM viewer, and AI-driven tools deployed in any supported programming language (e.g., Python). OsiriXgrpc's design provides users with a unified platform to query, interact with, and visualise AI-generated outputs directly within OsiriX. To demonstrate its potential, we developed an AI Co-pilot for radiology that leverages OsiriXgrpc for iterative "request-to-answer" interactions between users and GenAI models, allowing real-time data queries and AI-generated output visualisation within the same DICOM viewer. We have adapted OsiriXgrpc to allow users to: (i) interrogate Foundation Large-Language Models (LLMs) to generate text from text-based prompts, (ii) employ Foundation Vision-Language Models (VLMs) to generate text by combining text and image prompts, and (iii) employ a one-click Foundation AI-driven segmentation model to generate Regions of Interest (ROIs) by combining points/bounding boxes with text prompts. For this proof-of-concept report, we applied OpenAI's LLMs and VLMs for text generation and the Segment Anything Model (SAM) for generating ROIs. We provide evidence for successful implementation of the plug-in, including visualisation of the AI-generated outputs for each model tested. We hypothesise that OsiriXgrpc can lower adoption barriers, facilitating the integration of GenAI models into clinical trials and routine healthcare, even in resource-limited settings, including low- and middle-income countries (LMICs).
(© 2025. The Author(s).)
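
As an illustration of the "request-to-answer" workflow described in the abstract, the sketch below shows how a Python tool could combine a text-and-image prompt to an OpenAI vision-language model with a point-prompted Segment Anything Model (SAM) call. It is a minimal sketch under stated assumptions, not the authors' implementation: the DICOM slice is read from disk with pydicom for brevity, whereas in the plug-in the pixel data would be served by the OsiriX viewer through OsiriXgrpc, and the file path, model name, checkpoint file and click coordinates are placeholders.

import base64
import io

import numpy as np
import pydicom
from openai import OpenAI
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

# 1. Load one DICOM slice and rescale to 8-bit RGB (stand-in for pixel data
#    that OsiriXgrpc would serve directly from the active OsiriX viewer).
pixels = pydicom.dcmread("slice.dcm").pixel_array.astype(np.float32)  # placeholder path
pixels = (255 * (pixels - pixels.min()) / (np.ptp(pixels) + 1e-6)).astype(np.uint8)
rgb = np.stack([pixels] * 3, axis=-1)  # both the VLM and SAM expect RGB input

# 2. Text + image prompt to a vision-language model via the OpenAI SDK.
buffer = io.BytesIO()
Image.fromarray(rgb).save(buffer, format="PNG")
image_b64 = base64.b64encode(buffer.getvalue()).decode()

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Draft a short impression for this MRI slice."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(reply.choices[0].message.content)  # text the plug-in would display in the viewer

# 3. One-click segmentation with SAM from a single foreground point.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # placeholder checkpoint
predictor = SamPredictor(sam)
predictor.set_image(rgb)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[128, 128]]),  # placeholder for a user click in the viewer
    point_labels=np.array([1]),           # 1 marks a foreground point
)
best_mask = masks[np.argmax(scores)]      # boolean mask that could be returned as an ROI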

The manuscript refers to an anonymised GitHub repository (“[blinded for review]”) that is essential for understanding and evaluating the core contributions of this work. The actual repository is available at https://github.com/osirixgrpc/osirixgrpc. The related prior tool, pyOsiriX, mentioned in the manuscript as a previous solution, is hosted at https://github.com/osirixgrpc/pyosirix. These links are redacted in the manuscript text to comply with the double-blind review policy and can be reinstated upon acceptance.