In modern industrial production lines, the integration and interconnection of various manufacturing components, such as robots, laser cutting machines, milling machines, and CNC machines, allows for a higher degree of autonomous production on the shop floor. Manufacturers of these increasingly complex machines are beginning to equip their business models with bidirectional data flows to other factories. This creates a digital, cross-company shop floor infrastructure in which the transfer of information is controlled by digital contracts. To establish a trusted ecosystem, blockchain technology must be combined with a variety of technology stacks while ensuring security. Such blockchain-based frameworks enable bidirectional trust across all contract partners. Essential data flows are defined by specific technical representations of contract agreements and executed through smart contracts. This work describes a platform for rapid cross-company business model instantiation, based on blockchain, for establishing trust between enterprises. It focuses on selected security aspects of the deployment and configuration processes applied in the industrial ecosystem. A threat analysis of the platform identifies the critical security risks. Based on an industrial dynamic machine leasing use case, a risk assessment and security analysis of the key platform components is carried out.
Digital transformation strengthens the interconnection of companies in order to develop optimized, better customized, cross-company business models. These models require secure, reliable, and traceable evidence and monitoring of contractually agreed information to build trust between stakeholders. Blockchain technology using smart contracts allows industry to establish trust and automate cross-company business processes without the risk of losing data control. A typical cross-company industry use case is equipment maintenance. Machine manufacturers and service providers offer maintenance for their machines and tools in order to achieve high availability at low cost. The aim of this chapter is to demonstrate how maintenance use cases are addressed by utilizing Hyperledger Fabric to build a chain of trust through hardened evidence logging of the maintenance process, achieving legal certainty. Contracts are digitized into smart contracts that automate business processes, increasing security and mitigating their error-proneness.
Formal Description of Use Cases for Industry 4.0 Maintenance Processes Using Blockchain Technology
(2019)
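The idea of hardened evidence logging described above can be sketched as a hash-chained maintenance log, where each entry commits to its predecessor so that later tampering is detectable. This is a minimal illustrative sketch in plain Python, not actual Hyperledger Fabric chaincode; all class and field names are hypothetical.

```python
import hashlib
import json

class MaintenanceLog:
    """Illustrative hash-chained evidence log (hypothetical; not Fabric chaincode)."""

    def __init__(self):
        self.entries = []

    def append(self, machine_id, action, technician):
        # Each record commits to the hash of the previous record,
        # forming a chain of trust over the maintenance history.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "machine_id": machine_id,
            "action": action,
            "technician": technician,
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self):
        # Recompute every hash and check the chain linkage.
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

In a blockchain deployment, the chain linkage and verification would be enforced by the ledger and smart contracts rather than by application code, but the tamper-evidence principle is the same.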
The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, have shown superior performance in various medical diagnostic tasks, surpassing human ability in some cases. However, their black-box nature has limited their adoption in medical applications that require trust and explainability of model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight regions in the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM, and non-gradient-based approaches, such as Eigen-CAM, are applicable to YOLO models and do not require new layer implementation. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDrCXR Chest X-ray Abnormalities Detection dataset and discusses the limitations of these methods for explaining model decisions to data scientists.
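As a rough illustration of the gradient-based approach mentioned above, the standard Grad-CAM heatmap is a ReLU-filtered, gradient-weighted sum of a convolutional layer's feature maps. This is a minimal NumPy sketch of that computation (function and argument names are hypothetical, and this is not the paper's evaluation code):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps: (C, H, W) activations of the target conv layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. those activations.
    """
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of the feature maps across channels.
    cam = np.einsum("c,chw->hw", weights, feature_maps)
    # ReLU: keep only regions with a positive influence on the score.
    cam = np.maximum(cam, 0)
    # Normalize to [0, 1] so the result can be rendered as a heatmap.
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In practice the heatmap is then upsampled to the input resolution and overlaid on the image; for YOLO models, a detection's objectness or class score typically serves as the target for the backward pass.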
The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method to determine whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Data scientists, researchers, and engineers want to understand whether machine learning models for object detection work accurately and precisely. Networks like YOLO output bounding boxes to localize objects in the image.
The principal aim of this paper is to address the lack of an effective metric for evaluating the results of bounding box regression in object detection networks when boxes do not overlap or lie completely within each other.
Standard metrics such as IoU fail to differentiate between predictions that do not overlap with the label but differ in the distance between the predicted bounding box and the label.
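This limitation can be seen by comparing plain IoU with DIoU, which adds a center-distance penalty. The following minimal sketch (boxes as (x1, y1, x2, y2); it does not reproduce the paper's UIoU weighting) shows that IoU is zero for any non-overlapping pair regardless of distance, while DIoU still distinguishes near from far:

```python
def iou(a, b):
    """Intersection over Union for boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter)

def diou(a, b):
    """DIoU = IoU - (squared center distance) / (squared enclosing-box diagonal)."""
    center = lambda t: ((t[0] + t[2]) / 2, (t[1] + t[3]) / 2)
    (ax, ay), (bx, by) = center(a), center(b)
    d2 = (ax - bx) ** 2 + (ay - by) ** 2
    # Smallest box enclosing both a and b.
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou(a, b) - d2 / c2

near = ((0, 0, 2, 2), (3, 0, 5, 2))    # disjoint, close together
far = ((0, 0, 2, 2), (8, 0, 10, 2))    # disjoint, far apart
# iou(*near) == iou(*far) == 0.0, but diou(*near) > diou(*far)
```

For both pairs IoU is exactly 0, so a learner or evaluator receives no signal about proximity; DIoU returns a less negative value for the nearer pair.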
To solve this challenge, we propose a new metric called UIoU (Unified Intersection over Union) that combines the best properties of existing metrics (IoU, GIoU, and DIoU) and extends them with a similarity factor. By assigning a weight to each component of the metric, it allows for a clear differentiation between the three possible cases of box positions (not overlapping, overlapping, and boxes inside each other).
The result of this paper is a new metric that outperforms existing metrics such as IoU, GIoU, and DIoU by providing a more understandable measure of the performance of object detection models. This gives researchers and users in the field of explainable AI a metric that allows prediction and label bounding boxes to be evaluated and compared in an understandable way.