A Fog-Cloud Computing Infrastructure for Condition Monitoring and Distributing Industry 4.0 Services
(2019)
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these limitations, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. The metrics quantify the understandability and reliability of XAI explanations, helping practitioners interpret AI-based decisions and promoting informed choices in critical domains such as healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability: by evaluating the performance of XAI methods and the underlying AI models, they contribute to trustworthy AI systems and bridge the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics for evaluating XAI explanations, benefiting research, critical domains, and AI certification, and enabling trustworthy and interpretable AI systems.
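As a purely illustrative sketch (the paper defines the actual formulation; the function names, the assumption that each component score lies in [0, 1], and the aggregation by simple versus weighted averaging are all assumptions made here), the relationship between ESA and its weighted extension WESA might look like this:

```python
import numpy as np

def esa(spatial_precision: float, focus_overlap: float, relevance_accuracy: float) -> float:
    """Hypothetical ESA aggregation: unweighted mean of the three
    component scores, each assumed to lie in [0, 1]."""
    return float(np.mean([spatial_precision, focus_overlap, relevance_accuracy]))

def wesa(spatial_precision: float, focus_overlap: float, relevance_accuracy: float,
         weights=(1/3, 1/3, 1/3)) -> float:
    """Hypothetical WESA extension: convex combination of the same
    components with user-supplied weights that sum to 1."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "weights must sum to 1"
    return float(w @ np.array([spatial_precision, focus_overlap, relevance_accuracy]))

# Example: an explanation with tight localization but weak relevance.
print(esa(0.9, 0.8, 0.5))                     # -> 0.733...
print(wesa(0.9, 0.8, 0.5, (0.2, 0.2, 0.6)))   # relevance-heavy weighting -> 0.64
```

The point of the sketch is only that WESA lets practitioners emphasize one criterion, for instance relevance accuracy in diagnostic settings, via the weight vector.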
A Review on Digital Wallets and Federated Service for Future of Cloud Services Identity Management
(2023)
In today’s technology-driven era, managing digital identities has become a critical concern due to the widespread use of online services and digital devices. This has led to a fragmented landscape of digital identities, burdening individuals with multiple usernames, passwords, and authentication methods. To address this challenge, digital wallets have emerged as a promising solution. These wallets empower users to store, manage, and utilize their digital assets, including personal data, payment information, and credentials. Additionally, federated services have gained prominence, enabling users to access multiple services with a single digital identity. Gaia-X is one such initiative, aiming to establish a secure and trustworthy federated data infrastructure. This paper examines digital identity management, focusing on the application of digital wallets and federated services. It explores the categorization of identities needed for different cloud services, considering their unique requirements and characteristics. Furthermore, it discusses the future requirements for digital wallets and federated identity management in the cloud, along with the associated challenges and benefits. The paper also introduces a categorization scheme for cloud services based on security and privacy requirements, demonstrating how different identity types can be mapped to each category.
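To make the idea of such a categorization concrete, here is a minimal Python sketch; the category names, identity types, and the mapping itself are invented for illustration and are not the scheme proposed in the paper:

```python
from enum import Enum

class IdentityType(Enum):
    # Illustrative identity types; the paper's own taxonomy may differ.
    SELF_SOVEREIGN = "wallet-held verifiable credential"
    FEDERATED = "single identity reused across providers (Gaia-X style)"
    LOCAL = "per-service username and password"

# Hypothetical mapping from a cloud service's security/privacy
# demands to a suitable identity type.
SERVICE_CATEGORY_TO_IDENTITY = {
    "high security, high privacy (e.g. e-health records)": IdentityType.SELF_SOVEREIGN,
    "medium (e.g. collaborative SaaS)": IdentityType.FEDERATED,
    "low (e.g. public content services)": IdentityType.LOCAL,
}

for category, identity in SERVICE_CATEGORY_TO_IDENTITY.items():
    print(f"{category} -> {identity.name}: {identity.value}")
```

The design intuition is simply that the stricter a category's security and privacy requirements, the more user-controlled the identity type mapped to it should be.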
The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, has shown superior performance in various medical diagnostic tasks, surpassing human ability in some cases. However, their black-box nature has limited their adoption in medical applications that require trust in and explainability of model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight the regions of the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM, and non-gradient-based approaches, such as Eigen-CAM, are applicable to YOLO models and do not require implementing new layers. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDr-CXR Chest X-ray Abnormalities Detection dataset and discusses the limitations of these methods for explaining model decisions to data scientists.
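For context, below is a minimal sketch of producing an Eigen-CAM heatmap for a YOLOv5 detector with the community pytorch-grad-cam package (jacobgil/pytorch-grad-cam); the hub model, the choice of target layer, and the exact API (which has changed across package versions) are assumptions here, not the paper's setup:

```python
import numpy as np
import torch
from pytorch_grad_cam import EigenCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

# Load a pretrained YOLOv5 model via torch.hub (assumes network access
# and the ultralytics/yolov5 hub entry).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.eval()

# Pick an internal layer to explain; which layer is most informative is a
# judgment call, here the second-to-last block of the detection network.
target_layers = [model.model.model.model[-2]]

# A dummy 640x640 RGB image in [0, 1]; replace with a real chest X-ray.
rgb = np.random.rand(640, 640, 3).astype(np.float32)
input_tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)

# Eigen-CAM needs no gradients or class targets: it projects the layer's
# activations onto their first principal component.
cam = EigenCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=input_tensor)[0]  # (H, W) heatmap in [0, 1]

overlay = show_cam_on_image(rgb, grayscale_cam, use_rgb=True)  # uint8 overlay
```

Because Eigen-CAM is class-agnostic, one heatmap covers the whole forward pass; a gradient-based method like Grad-CAM would instead be tied to a particular detection score.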
Quality assurance (QA) plays a crucial role in manufacturing by ensuring that products meet their specifications. However, manual QA processes are costly and time-consuming, making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have attracted considerable interest for visual inspection. Alongside AI methods, explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are promising for quality inspection in manufacturing processes. In this study, we conducted a systematic literature review (SLR) of AI and XAI approaches for visual QA (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings reveal that AI-based systems predominantly focus on visual quality control (VQC) for defect detection; research addressing broader VQA practices, such as process optimization, predictive maintenance, or root cause analysis, is rarer, and papers that apply XAI methods are cited least often. In conclusion, this survey emphasizes the importance and potential of AI and XAI for VQA across various industries: by integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems, improving VQA practices and decision-making.
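To illustrate the dominant setting the review identifies, CNN-based defect detection reduces to image classification; the following minimal PyTorch sketch (architecture, input size, and class labels are illustrative assumptions, not drawn from any surveyed paper) shows such a classifier:

```python
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    """Tiny illustrative CNN for OK/defect classification of
    grayscale inspection images (e.g. 1x128x128 input)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16x64x64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32x32
            nn.AdaptiveAvgPool2d(1),                                      # 32x1x1
        )
        self.classifier = nn.Linear(32, 2)  # logits: [ok, defect]

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DefectCNN()
batch = torch.randn(4, 1, 128, 128)  # four fake inspection images
logits = model(batch)
print(logits.shape)                  # torch.Size([4, 2])
```

A gradient-based XAI method such as Grad-CAM could then be attached to the last convolutional layer of `features` to highlight the image regions driving a "defect" prediction, which is exactly the kind of transparency the surveyed XAI work targets.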