Addressing Explainability, Transparency and Interpretability Requirements of AI Models for ECG Analysis
The increasing application of artificial intelligence (AI), particularly deep neural networks, to electrocardiogram (ECG) interpretation has raised critical concerns about the opacity of algorithmic decisions. The demand for explainability, interpretability, and transparency in medical AI has driven the development of explainable AI (XAI) techniques. This review provides a comprehensive evaluation of the most relevant and promising XAI methods for ECG analysis, highlighting their methodological diversity and identifying the main challenges that must be addressed for clinical translation. Despite rapid advances in XAI for ECG interpretation, significant gaps remain in clinical validation, personalization, and ethical governance. The findings underscore the need to integrate explainability with robust performance, transparency, and clinician-in-the-loop approaches. This review serves as a reference for future work aiming to bridge the gap between experimental research and clinically viable AI systems, ultimately promoting safer and more accountable deployment of AI in cardiology.