Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems, but evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. The metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification by ensuring both accuracy and explainability: by evaluating the performance of XAI methods and the underlying AI models, they contribute to trustworthy AI systems and help bridge the gap between accuracy and interpretability, benefiting research, critical domains, and AI certification alike.
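The abstract names the three components the metrics consider (spatial precision, focus overlap, relevance accuracy) but does not publish a formula. The sketch below is a purely illustrative assumption of how such component scores might be aggregated: the equal-weight ESA form, the WESA weight values, and the [0, 1] score range are all hypothetical, not taken from the paper.

```python
# Illustrative sketch only: component definitions, weights, and the
# aggregation rule are assumptions, not the published ESA/WESA formulas.

def esa(spatial_precision: float, focus_overlap: float,
        relevance_accuracy: float) -> float:
    """Aggregate the three component scores (assumed to lie in [0, 1])
    with equal weights -- a placeholder for the unweighted ESA metric."""
    return (spatial_precision + focus_overlap + relevance_accuracy) / 3.0

def wesa(spatial_precision: float, focus_overlap: float,
         relevance_accuracy: float,
         weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Weighted variant (placeholder for WESA); the weights here are
    arbitrary example values that a practitioner would tune per domain."""
    w_sp, w_fo, w_ra = weights
    return w_sp * spatial_precision + w_fo * focus_overlap + w_ra * relevance_accuracy

# Example: an explanation scoring perfectly on all three components
# yields 1.0 under either aggregation.
print(esa(1.0, 1.0, 1.0))   # -> 1.0
print(wesa(1.0, 1.0, 1.0))  # -> 1.0
```

The weighted form lets a certification workflow emphasize whichever component matters most in a given domain (e.g., spatial precision for medical imaging), which is the motivation the abstract gives for extending ESA to WESA.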
In several domains of product design, such as medical device design, risk-related use scenarios have to be analyzed at an early stage of the design process. Virtual reviews with users make it possible to gain early insights into usability problems. Moreover, situations that are difficult to imitate in reality can be modeled and simulated in virtual reality without risking the user's health. Virtual usability tests are therefore a promising approach that allows different scenarios to be tested under controlled conditions. We chose a sample scenario from medical device design and compare and evaluate different technical systems that can be used for virtual usability tests. The aim is to derive guidance for virtual usability tests, including which systems can be used under specific conditions and which qualitative and quantitative data can be collected with them. A formative test is performed to evaluate and compare different systems, and a summative test is performed to evaluate the selected systems. The results show that virtual usability tests make it possible to test scenarios with users at an early stage and thus to uncover possible interaction problems. However, there are also many additional and new aspects to consider compared to conventional usability tests, such as monitoring motion sickness, maintaining presence, and the extensive operation of the technology. Finally, a proposed method for virtual usability testing is described, which also comprises our equipment recommendations for virtual usability tests.
Qualitative Analysis Methods
(2023)
Design and fabrication of a novel 4D-printed customized hand orthosis to treat cerebral palsy
(2024)