Evaluating the Machine Learning Models in Predicting Intensive Care Unit Discharge for Neurosurgical Patients Undergoing Craniotomy: A Big Data Analysis.
Background: Predicting intensive care unit (ICU) discharge for neurosurgical patients is crucial for optimizing bed resources, reducing costs, and improving outcomes. Our study aims to develop and validate machine learning (ML) models to predict ICU discharge within 24 h for patients undergoing craniotomy.
Methods: A total of 2,742 patients undergoing craniotomy were identified from the Medical Information Mart for Intensive Care (MIMIC) dataset using diagnosis-related group and International Classification of Diseases codes. Demographic, clinical, laboratory, and radiological data were collected and preprocessed, and textual clinical examinations were converted into numerical scales. Data were split into training (70%), validation (15%), and test (15%) sets. Four ML models, logistic regression (LR), decision tree, random forest, and neural network (NN), were trained and evaluated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), average precision (AP), accuracy, and F1 score. Shapley Additive Explanations (SHAP) were used to analyze feature importance. Statistical analyses were performed in R (version 4.2.1) and ML analyses in Python (version 3.8) using the scikit-learn, TensorFlow, and SHAP packages.
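For readers who want to reproduce a comparable pipeline, the following is a minimal sketch of the split-train-evaluate workflow described above using scikit-learn. The feature matrix `X` (a pandas DataFrame of the preprocessed numeric variables), the binary label `y` (1 = ICU discharge within 24 h), the hyperparameters, and the random seed are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the 70/15/15 split and model evaluation (assumed setup, not the
# authors' exact code). `X` is a preprocessed pandas DataFrame, `y` a binary
# label (1 = ICU discharge within 24 h).
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             accuracy_score, f1_score)

# 70% training, then split the remaining 30% equally into validation and test
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=500, random_state=42),
}

for name, model in models.items():
    # The validation split would be used for hyperparameter tuning; omitted here.
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]
    pred = (prob >= 0.5).astype(int)
    print(name,
          "AUC", roc_auc_score(y_test, prob),
          "AP", average_precision_score(y_test, prob),
          "ACC", accuracy_score(y_test, pred),
          "F1", f1_score(y_test, pred))
```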
Results: The cohort included 2,742 patients (mean age 58.2 years; first and third quartiles 47-70 years), of whom 53.4% were male (n = 1,464). Total ICU stay was 15,645 bed days (mean length of stay 4.7 days), and total hospital stay was 32,008 bed days (mean length of stay 10.8 days). The random forest demonstrated the highest performance on the test set (AUC 0.831, AP 0.561, accuracy 0.827, F1 score 0.339). The NN achieved an AUC of 0.824, with an AP, accuracy, and F1 score of 0.558, 0.830, and 0.383, respectively. LR achieved an AUC of 0.821 and an accuracy of 0.829, while the decision tree showed the lowest performance (AUC 0.813, accuracy 0.822). Key predictors identified by SHAP analysis included the Glasgow Coma Scale, respiratory parameters (e.g., tidal volume, respiratory effort), intracranial pressure, arterial pH, and the Richmond Agitation-Sedation Scale.
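The feature ranking reported above can be approximated with a mean-absolute-SHAP summary. The snippet below is a sketch under the assumptions of the previous block (`models["RF"]` fitted, `X_test` a pandas DataFrame); it is not the authors' analysis code.

```python
# Illustrative global feature importance from SHAP values for the fitted
# random forest; variable names follow the earlier sketch and are assumptions.
import numpy as np
import shap

explainer = shap.TreeExplainer(models["RF"])
sv = explainer.shap_values(X_test)

# Older shap versions return a list [class 0, class 1]; newer versions return
# a single (samples, features, classes) array. Keep the positive class only.
sv = sv[1] if isinstance(sv, list) else sv
if sv.ndim == 3:
    sv = sv[:, :, 1]

# Mean absolute SHAP value per feature as a global importance score
importance = np.abs(sv).mean(axis=0)
for feature, score in sorted(zip(X_test.columns, importance),
                             key=lambda pair: pair[1], reverse=True)[:10]:
    print(feature, round(float(score), 4))
```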
Conclusions: Random forest and NN models predicted ICU discharge well, whereas LR was interpretable but less accurate. Numeric conversion of textual clinical data improved performance. This study offers a framework for ICU discharge prediction using clinical, radiological, and demographic features, with SHAP enhancing model transparency.
(© 2025. The Author(s).)
Declarations. Conflict of interest: All authors disclose no conflicts of interest related to this article. Ethical approval/informed consent: This study does not require institutional review board approval because it uses an open-source, anonymized dataset (Medical Information Mart for Intensive Care), and no patient-identifiable data were used.