- 3-D Lung Visualization Using Electrical Impedance Tomography Combined with Body Plethysmography (2013)
- A concept for modelling linear lung compliances using a mechanical artificially ventilated simulator (2013)
- A deep learning spatial-temporal framework for detecting surgical tools in laparoscopic videos (2021)
Explainable Artificial Intelligence (XAI) seeks to make AI systems more transparent and trustworthy, but evaluating the quality of XAI explanation methods remains difficult because existing metrics have significant limitations. To address this, we propose a novel metric, Explanation Significance Assessment (ESA), and its extension, Weighted Explanation Significance Assessment (WESA). These metrics evaluate XAI explanations along three dimensions: spatial precision, focus overlap, and relevance accuracy. We demonstrate the applicability of ESA and WESA on medical data, where they quantify the understandability and reliability of explanations, helping practitioners interpret AI-based decisions and make informed choices in critical domains such as healthcare. Beyond evaluation, ESA and WESA can support AI certification: by assessing both the XAI methods and the underlying AI models, they contribute to systems that are accurate as well as explainable, bridging the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics for evaluating XAI explanations, benefiting research, critical application domains, and AI certification, and thereby enabling trustworthy, interpretable AI systems.
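The abstract does not give the ESA/WESA formulas, but two of the dimensions it names can be illustrated with a minimal sketch. Assuming a pixel-level saliency map and a ground-truth annotation mask (both hypothetical inputs, not the paper's actual definitions), "focus overlap" can be read as an intersection-over-union between the thresholded saliency region and the annotation, and "spatial precision" as the fraction of highlighted pixels that fall inside the annotated region:

```python
import numpy as np

def focus_overlap(saliency, ground_truth, threshold=0.5):
    """IoU between the thresholded saliency region and the annotated region.

    Illustrative only -- not the ESA definition from the paper.
    """
    focus = saliency >= threshold
    gt = ground_truth.astype(bool)
    inter = np.logical_and(focus, gt).sum()
    union = np.logical_or(focus, gt).sum()
    return inter / union if union else 0.0

def spatial_precision(saliency, ground_truth, threshold=0.5):
    """Fraction of highlighted pixels lying inside the annotated region."""
    focus = saliency >= threshold
    hits = np.logical_and(focus, ground_truth.astype(bool)).sum()
    total = focus.sum()
    return hits / total if total else 0.0

# Toy 4x4 saliency map; the annotation is the top-left 2x2 block.
sal = np.array([[0.9, 0.8, 0.6, 0.0],
                [0.7, 0.6, 0.2, 0.0],
                [0.1, 0.2, 0.1, 0.0],
                [0.0, 0.0, 0.0, 0.0]])
gt = np.zeros((4, 4), dtype=int)
gt[:2, :2] = 1

print(focus_overlap(sal, gt))      # one highlighted pixel spills outside -> 0.8
print(spatial_precision(sal, gt))  # 4 of 5 highlighted pixels are inside -> 0.8
```

A weighted variant in the spirit of WESA could combine such per-dimension scores with domain-specific weights; the actual weighting scheme is specified in the paper, not here.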