Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
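A minimal sketch of how such a composite score could be computed is given below. The component definitions, the binarisation threshold, and the weights are illustrative assumptions for this sketch, not the ESA/WESA definitions from the paper; it only shows how spatial precision, focus overlap, and relevance accuracy could be combined into a single weighted score.

```python
import numpy as np

def esa_components(expl_map, relevant_mask):
    """Illustrative components; the actual ESA definitions may differ.

    expl_map      : 2D array of explanation importances in [0, 1]
    relevant_mask : 2D boolean array marking the truly relevant region
    """
    highlighted = expl_map >= 0.5                      # binarise the explanation (assumed threshold)
    overlap = np.logical_and(highlighted, relevant_mask)

    # Spatial precision: share of the highlighted area that lies inside the relevant region
    spatial_precision = overlap.sum() / max(highlighted.sum(), 1)
    # Focus overlap: share of the relevant region that the explanation actually covers
    focus_overlap = overlap.sum() / max(relevant_mask.sum(), 1)
    # Relevance accuracy: how much more importance is assigned inside than outside the region
    inside = expl_map[relevant_mask].mean() if relevant_mask.any() else 0.0
    outside = expl_map[~relevant_mask].mean() if (~relevant_mask).any() else 0.0
    relevance_accuracy = max(inside - outside, 0.0)
    return spatial_precision, focus_overlap, relevance_accuracy

def wesa(expl_map, relevant_mask, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of the three components (weights are purely illustrative)."""
    return float(np.dot(weights, esa_components(expl_map, relevant_mask)))
```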
The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, have shown superior performance in various medical diagnostic tasks, surpassing human ability in some cases. However, their black-box nature has limited their adoption in medical applications that require trust and explainability of model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight regions in the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM, and non-gradient-based approaches, such as Eigen-CAM, are applicable to YOLO models and do not require new layer implementation. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDrCXR Chest X-ray Abnormalities Detection dataset and discusses the limitations of these methods for explaining model decisions to data scientists.
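Eigen-CAM is particularly easy to apply because it needs only the activations of a chosen convolutional layer and no gradients: the heatmap is the projection of the feature maps onto their first principal component. The sketch below illustrates that idea on a single feature map; the layer choice, the centring step, and the upsampling to the input resolution (omitted here) are implementation details that vary between libraries.

```python
import numpy as np

def eigen_cam(activations):
    """Eigen-CAM style heatmap from one convolutional feature map.

    activations : array of shape (C, H, W) taken from a chosen layer of the detector
    Returns an (H, W) heatmap scaled to [0, 1]; it would still need to be resized
    to the input image and overlaid to obtain the usual visualisation.
    """
    c, h, w = activations.shape
    flat = activations.reshape(c, h * w).T             # (H*W, C): one feature vector per location
    flat = flat - flat.mean(axis=0, keepdims=True)     # centre the channel features
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = (flat @ vt[0]).reshape(h, w)                 # project onto the first principal component
    cam = np.maximum(cam, 0)                           # keep positive evidence only (sign is ambiguous)
    return cam / cam.max() if cam.max() > 0 else cam
```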
The digital transformation of companies is expected to increase the digital interconnection between different companies to develop optimized, customized, hybrid business models. These cross-company business models require secure, reliable, and traceable logging and monitoring of contractually agreed information sharing between machine tools, operators, and service providers. This paper discusses how the major requirements for building hybrid business models can be addressed with a blockchain for building a chain of trust and with smart contracts for digitizing contracts. A machine maintenance use case is used to discuss the readiness of smart contracts for the automation of workflows defined in contracts. Furthermore, it is shown that the number of failures is significantly reduced by using these contracts and a blockchain.
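As a rough illustration of what a digitized maintenance contract can automate, the sketch below models the contractually agreed workflow as a small state machine in which each transition is restricted to the contractually entitled party and every step is logged. The states, roles, and checks are hypothetical and chosen only for illustration; they are not taken from the paper, and a real deployment would implement them as smart contract code on the blockchain.

```python
class MaintenanceContract:
    """Hypothetical digitized maintenance workflow (illustrative, not the paper's implementation)."""

    def __init__(self, machine_id, operator, service_provider):
        self.machine_id = machine_id
        self.operator = operator
        self.service_provider = service_provider
        self.state = "REPORTED"
        self.log = []                                   # in a blockchain setting this goes to the ledger

    def _transition(self, actor, allowed_actor, from_state, to_state):
        if actor != allowed_actor:
            raise PermissionError(f"{actor} is not entitled to trigger {to_state}")
        if self.state != from_state:
            raise ValueError(f"cannot move from {self.state} to {to_state}")
        self.state = to_state
        self.log.append((self.machine_id, actor, to_state))

    def assign(self, actor):
        self._transition(actor, self.operator, "REPORTED", "ASSIGNED")

    def complete(self, actor):
        self._transition(actor, self.service_provider, "ASSIGNED", "COMPLETED")

    def verify(self, actor):
        self._transition(actor, self.operator, "COMPLETED", "VERIFIED")
```

For example, the operator assigns a reported fault, the service provider completes the work, and the operator verifies it; any out-of-order or unauthorized call is rejected, which illustrates the kind of process failure that automated contracts are meant to prevent.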
Artificial intelligence (AI) holds enormous potential for numerous products and services, especially in healthcare and medical technology. Explainability is a central prerequisite for certification procedures around the world and for the fulfilment of transparency obligations. Explainability tools increase the comprehensibility of object recognition in images using Convolutional Neural Networks, but they lack precision.
This paper adapts FastCAM to the detection of medical instruments in endoscopy images. The results show that Domain-Adapted (DA)-FastCAM captures the focus of the model better than standard FastCAM weights.
Formal Description of Use Cases for Industry 4.0 Maintenance Processes Using Blockchain Technology
(2019)
Digital transformation strengthens the interconnection of companies in order to develop optimized and better customized, cross-company business models. These models require secure, reliable, and traceable evidence and monitoring of contractually agreed information to gain trust between stakeholders. Blockchain technology using smart contracts allows the industry to establish trust and automate cross-company business processes without the risk of losing data control. A typical cross-company industry use case is equipment maintenance. Machine manufacturers and service providers offer maintenance for their machines and tools in order to achieve high availability at low costs. The aim of this chapter is to demonstrate how maintenance use cases can be addressed by utilizing Hyperledger Fabric to build a chain of trust through hardened evidence logging of the maintenance process, thereby achieving legal certainty. Contracts are digitized into smart contracts that automate business processes, increasing security and mitigating the error-proneness of those processes.
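The core mechanism behind hardened evidence logging can be illustrated with a hash chain: every maintenance record commits to the hash of its predecessor, so tampering with any entry breaks the chain and is detectable. The sketch below is a conceptual Python illustration of that idea, not Hyperledger Fabric chaincode, and the record fields are made up for the example.

```python
import hashlib
import json
import time

def append_evidence(chain, record):
    """Append a maintenance record and link it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"record": record, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every hash and check the links; returns True if nothing was tampered with."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

# Example: two maintenance events appended by different parties
log = []
append_evidence(log, {"machine": "M-42", "action": "oil change", "by": "service_provider"})
append_evidence(log, {"machine": "M-42", "action": "acceptance", "by": "operator"})
assert verify_chain(log)
```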
The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method to determine whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of the training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.
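A minimal sketch of what automated checks inside such an audit could look like is given below. The concrete checks (missing values, class balance, held-out accuracy against a threshold), the chosen model, and all thresholds are illustrative assumptions and not the criteria of the proposed framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def audit_training_data(X, y, max_missing=0.05, min_class_share=0.05):
    """Simple data-quality checks: share of missing values and class balance."""
    findings = {}
    findings["missing_ok"] = float(np.isnan(X).mean()) <= max_missing
    class_shares = np.bincount(y) / len(y)
    findings["balance_ok"] = float(class_shares.min()) >= min_class_share
    return findings

def audit_model(X, y, min_accuracy=0.9):
    """Evaluate a candidate model on a held-out split and compare it with a required threshold."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    return {"accuracy": acc, "accuracy_ok": acc >= min_accuracy}
```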