
Explainability and Understandability of Artificial Neural Networks

  • Artificial Neural Networks (ANNs) have achieved significant success in fields like healthcare, but their "black box" nature challenges transparency and user trust. Existing Explainable AI (XAI) methods aim to interpret ANN decisions, yet many are not understandable to non-AI experts, emphasizing the need for approaches that prioritize both accuracy and usability, especially in high-stakes environments. This thesis investigates the reliability and usability of selected existing XAI methods, evaluating how effectively they convey meaningful explanations to users with varying levels of AI expertise. Assessments of methods like LIME, GradCAM, and Fast-CAM identify key limitations, such as inconsistent visual saliency maps and a lack of user-centred design. These findings underscore the need for more understandable XAI methods tailored to specific needs. Among its various contributions, the research outlines a domain-adapted approach to XAI within healthcare by automating the integration of domain knowledge. This customization reduces manual effort, ensuring that XAI methods provide technically accurate and contextually meaningful explanations in applications like surgical tool classification. To enhance XAI evaluation, the thesis introduces novel metrics: Explanation Significance Assessment (ESA), Weighted Explanation Significance Assessment (WESA), and the Unified Intersection over Union (UIoU). These metrics address gaps in existing techniques by emphasizing precision and clarity, improving transparency in AI systems for both AI experts and non-AI experts. Finally, the thesis introduces the Explainable Object Classification (EOC) framework, which integrates object parts, attributes, and domain knowledge to offer comprehensive, multimodal explanations accessible to users with varying expertise. By providing text, images, and decision paths, EOC enables users to understand AI decisions more effectively, aiding informed decision-making in critical sectors like healthcare. This thesis advances XAI by developing methods that bridge the gap between AI developers and users, ensuring AI outputs are interpretable and practically useful in real-world contexts. A minimal sketch of the overlap-based comparison such saliency evaluations build on follows the abstract.
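
As a rough illustration only: the sketch below thresholds a saliency map (of the kind LIME or GradCAM produce) and computes a plain binary Intersection over Union against an expert-annotated region. This is not the thesis's UIoU, ESA, or WESA definition, which this page does not spell out; all names and values here are hypothetical.

    import numpy as np

    def binary_iou(saliency: np.ndarray, mask: np.ndarray, threshold: float = 0.5) -> float:
        """IoU between a thresholded saliency map and a ground-truth region mask.
        Both arrays share the same HxW shape; saliency is normalized to [0, 1]."""
        pred = saliency >= threshold              # binarize the explanation
        gt = mask.astype(bool)                    # expert-annotated relevant region
        union = np.logical_or(pred, gt).sum()
        if union == 0:                            # neither map highlights anything
            return 1.0
        return float(np.logical_and(pred, gt).sum() / union)

    # Hypothetical 4x4 saliency map vs. an annotated top-left region.
    saliency = np.array([[0.9, 0.8, 0.1, 0.0],
                         [0.7, 0.6, 0.2, 0.1],
                         [0.1, 0.2, 0.0, 0.0],
                         [0.0, 0.1, 0.0, 0.0]])
    mask = np.zeros((4, 4))
    mask[:2, :3] = 1
    print(f"IoU = {binary_iou(saliency, mask):.2f}")  # 4 / 6 = 0.67

A score near 1 means the explanation concentrates on the region a domain expert would mark as relevant; low or unstable scores across runs are the kind of saliency-map inconsistency the abstract reports for existing methods.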

Metadata
Author: Jan Stodt
URL: https://researchportal.plymouth.ac.uk/en/studentTheses/explainability-and-understandability-of-artificial-neural-network
Advisor: Christoph Reich, Nathan Clarke, Martin Knahl
Document Type: Doctoral Thesis
Language: English
Year of Completion: 2024
University: University of Plymouth
City of university: Plymouth
Date of final exam: 2024/12/05
Release Date: 2025/01/07
Tag: Künstliche Intelligenz (Artificial Intelligence)
Licence (German): Urheberrechtlich geschützt (protected by copyright)