A simulation-based approach for measuring the performance of human-AI collaborative decisions
(2022)
Despite the widespread adoption of artificial intelligence and machine learning for decision-making in organizations, a wealth of research shows that situations involving open-ended decisions (novel contexts without predefined rules) will continue to require humans in the loop. However, such collaboration, which blends formal machine and bounded human rationality, also amplifies the risk of what is known as local rationality: rational decisions in a local setting that lead to globally dysfunctional behavior. Therefore, it is crucial, especially in the data-abundant environments that characterize algorithmic decision-making, to devise means of assessing performance holistically rather than for decision fragments alone. In this paper, we propose a simulation-based model to address the current lack of quantitative research on assessing algorithmic interventions in a broader organizational context. Our approach combines causal modeling and data science algorithms to represent decision settings that mix machine and human rationality and to measure their performance. As a testbed, we consider the case of a fictitious company trying to improve its forecasting process with a machine learning approach. The example demonstrates that a myopic assessment obscures problems that only a broader framing reveals, and it highlights the value of a systems view, since the effects of the interplay between human and algorithmic decisions can be largely unintuitive. Such a simulation-based approach can be an effective tool in efforts to delineate roles for humans and algorithms in hybrid contexts.
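The local-rationality effect the abstract describes can be illustrated with a toy inventory simulation (a minimal sketch; the demand model, cost parameters, moving-average forecaster, and human "padding" adjustment are all hypothetical illustrations, not taken from the paper). The algorithm's local metric, forecast error, is identical across scenarios, yet the global outcome, total cost under asymmetric stockout and holding costs, differs substantially; a myopic assessment of forecast accuracy alone would miss this.

```python
import random

random.seed(42)

# Hypothetical toy setting: a planner orders stock for one product each period.
# An "algorithm" forecasts demand by a simple moving average (locally rational:
# it minimizes its own forecast error); a "human" then pads the order to hedge
# against stockouts. Global performance is total cost, not forecast accuracy.
HOLDING_COST = 1.0   # cost per unit of unsold stock
STOCKOUT_COST = 5.0  # cost per unit of unmet demand (asymmetric on purpose)

def demand():
    """Draw one period of demand, roughly Normal(100, 20), floored at zero."""
    return max(0, int(random.gauss(100, 20)))

def simulate(pad, n=10_000):
    """Return (mean forecast error, mean total cost per period) over n periods."""
    history = [demand() for _ in range(50)]
    err = cost = 0.0
    for _ in range(n):
        forecast = sum(history[-10:]) / 10   # algorithm: 10-period moving average
        order = forecast + pad               # human adjustment on top of the forecast
        d = demand()
        err += abs(forecast - d)             # local metric: forecast error only
        cost += (HOLDING_COST * max(0, order - d)      # overstock cost
                 + STOCKOUT_COST * max(0, d - order))  # stockout cost
        history.append(d)
    return err / n, cost / n

for pad in (0, 10, 20):
    e, c = simulate(pad)
    print(f"pad={pad:>2}  forecast MAE={e:5.1f}  avg cost/period={c:6.1f}")
```

Because the forecast itself never changes, the local metric (MAE) is essentially constant across the three runs, while the global cost drops as the human padding moves the order toward the cost-optimal quantile. This is the kind of interplay the paper argues can only be seen when the whole decision setting, not a fragment of it, is simulated and measured.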