Modelability of processes is a recognized and important characteristic of any modeling language. Nevertheless, it is not always purposeful or easy to create process models for every kind of workflow. This article discusses the opportunities and limitations of modeling agile development projects, using SCRUM as an example. For this purpose, a BPMN and an S-BPM model of SCRUM are presented. A discussion guided by recognized rules for good process models shows that, on the one hand, both notations provide valid and accurate insights into the SCRUM process. On the other hand, the models raise questions about their necessity, added value, and relevance in practice. Practitioners can use the developed models to technically implement agile project management, while researchers benefit from a discourse on the opportunities and limitations of modeling agility.
On Consistency Viability and Admissibility in Constrained Ensemble and Hierarchical Control Systems
(2023)
Several control architectures, such as decentralized, distributed, and hierarchical control, have been elaborated over the past decades for controlling systems composed of a set of subsystems. However, computational complexity and constraint satisfaction are still challenging tasks. We present an approach to control an ensemble of similar heterogeneous systems with input and state constraints via an identical control input. This control input is globally admissible and computed based on an aggregated system that reflects the overall behavior of the ensemble. To limit the computational complexity of the control task, the aggregated system is designed such that its dimension is independent of the number of subsystems. To guarantee viability, i.e., state constraint satisfaction for all times, appropriate consistency conditions are derived based on invariant set theory. The presented approach is illustrated with a numerical example.
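The abstract above describes steering an ensemble of similar subsystems with a single, identical input computed from a low-dimensional aggregated system, while checking that state constraints hold at all times (viability). A minimal sketch of that idea, with illustrative scalar dynamics, a mean-state aggregate, and a simple constraint check (all names, dynamics, and the feedback law are assumptions, not the paper's construction):

```python
# Hypothetical sketch: N scalar subsystems x_i+ = a_i * x_i + u share one
# input u computed from an aggregated (mean) state, so the controller's
# dimension is independent of the ensemble size.

def simulate_ensemble(a_list, x0_list, x_max, steps, gain=0.5):
    """Apply the identical input u = -gain * x_agg to every subsystem.

    x_agg is the mean state, a 1-D aggregate regardless of how many
    subsystems there are. Returns the trajectories and a flag telling
    whether |x_i| <= x_max held at every step (a crude viability check;
    the paper derives such guarantees via invariant set theory instead).
    """
    xs = list(x0_list)
    viable = all(abs(x) <= x_max for x in xs)
    traj = [list(xs)]
    for _ in range(steps):
        x_agg = sum(xs) / len(xs)            # aggregated state
        u = -gain * x_agg                    # identical control input
        xs = [a * x + u for a, x in zip(a_list, xs)]
        viable = viable and all(abs(x) <= x_max for x in xs)
        traj.append(list(xs))
    return traj, viable
```

For a stable ensemble (all |a_i| < 1) the shared input drives every subsystem toward the origin without ever consulting the individual states, which is the computational appeal of the aggregated design.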
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems, yet evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. The metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification: by evaluating the performance of both XAI methods and the underlying AI models, they help ensure accuracy as well as explainability. Incorporating ESA and WESA in certification efforts thus advances the field of XAI, bridges the gap between accuracy and interpretability, and contributes to trustworthy, interpretable AI systems.
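The abstract names three evaluation components (spatial precision, focus overlap, relevance accuracy) and a weighted variant. Purely as an illustration of how such components might be combined into one score, here is a generic weighted mean; the component names, weights, and the formula are assumptions for this sketch, not the actual ESA/WESA definitions from the paper:

```python
# Hypothetical illustration only: ESA/WESA are defined precisely in the
# paper. This merely shows the general pattern of scoring an explanation
# on several components and combining them with weights.

def weighted_explanation_score(components, weights):
    """Combine per-component scores in [0, 1] into one weighted mean."""
    if set(components) != set(weights):
        raise ValueError("components and weights must cover the same keys")
    total_weight = sum(weights.values())
    return sum(components[k] * weights[k] for k in components) / total_weight

score = weighted_explanation_score(
    {"spatial_precision": 0.9, "focus_overlap": 0.7, "relevance_accuracy": 0.8},
    {"spatial_precision": 1.0, "focus_overlap": 1.0, "relevance_accuracy": 2.0},
)
```

Doubling the weight on relevance accuracy, as above, pulls the combined score toward that component, which is the kind of domain-specific emphasis a weighted variant makes possible.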
Qualitative Analyseverfahren (Qualitative Analysis Methods)
(2023)