A simulation-based approach for measuring the performance of human-AI collaborative decisions
(2022)
The growing number of algorithmic decision-making environments, which blend machine and bounded human rationality, strengthens the need for a holistic performance assessment of such systems. Indeed, this combination amplifies the risk of local rationality, necessitating a robust evaluation framework. We propose a novel simulation-based model that combines causal modelling and data science algorithms to quantify algorithmic interventions within organisational contexts. To test the framework's viability, we present a case study of a bike-share system, focusing on inventory balancing through crowdsourced user actions. Using data from New York's Citi Bike service, we highlight the frequent misalignment between incentives and the actual need for them. Our model examines the interaction dynamics between rule-driven user and service-provider responses and the algorithms that predict flow rates, demonstrating why understanding these dynamics is essential for devising effective incentive policies. The study shows how sophisticated machine learning models, able to forecast underlying market demand unconstrained by historical supply problems, can create imbalances that induce user behaviour and derail plans unless timely interventions are made. Our approach allows such problems to surface during the design phase, potentially avoiding costly deployment errors in the joint performance of human and AI decision-makers.
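As a rough illustration of the feedback loop the abstract describes, the following Python sketch simulates a single station whose operator issues crowdsourced rebalancing incentives whenever a demand forecast crosses a threshold; because the forecast ignores current inventory, incentives and actual shortages can drift apart. All names and parameters here (true_demand, ml_forecast, the thresholds) are hypothetical stand-ins for exposition, not the paper's actual model.

    import random

    random.seed(7)

    CAPACITY, TARGET = 30, 15  # docks at the station / desired inventory level

    def true_demand(hour):
        """Net bike outflow riders would like: morning out-rush, evening in-rush."""
        base = 4 if 7 <= hour <= 9 else -3 if 17 <= hour <= 19 else 0
        return base + random.gauss(0, 1.5)

    def ml_forecast(hour):
        """Hypothetical ML stand-in: tracks the latent demand pattern well but
        knows nothing about inventory or how supply caps realised trips."""
        return true_demand(hour) + random.gauss(0, 0.5)

    bikes, issued, hours_short = TARGET, 0, 0
    for hour in range(24):
        if ml_forecast(hour) > 2:        # rule-driven operator response:
            issued += 1                  # pay riders to return bikes here
            bikes = min(CAPACITY, bikes + 3)
        # realised flow is latent demand capped by available bikes and free docks
        flow = max(bikes - CAPACITY, min(bikes, round(true_demand(hour))))
        hours_short += (bikes - flow) <= 3  # was the station actually near-empty?
        bikes -= flow

    print(f"incentives issued: {issued}; hours actually near-empty: {hours_short}")

Under these toy parameters, incentives tend to fire on most rush-hour forecasts even though the station rarely approaches empty, mirroring the misalignment the case study quantifies at scale.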
Despite the widespread adoption of artificial intelligence and machine learning for decision-making in organizations, a wealth of research shows that situations involving open-ended decisions (novel contexts without predefined rules) will continue to require humans in the loop. However, collaboration that blends formal machine and bounded human rationality also amplifies the risk of what is known as local rationality, in which decisions that are rational in a local setting lead to globally dysfunctional behavior. It is therefore crucial, especially in the data-abundant environments that characterize algorithmic decision-making, to devise means of assessing performance holistically, not just for decision fragments. There is currently a lack of quantitative models that address this issue.
In this paper, we propose a simulation-based model to address this gap in quantifying algorithmic interventions in a broader organizational context. Our approach combines causal modeling and data science algorithms to represent decision settings that involve a mix of machine and human rationality and to measure their performance. As a testbed, we consider the case of a fictitious company trying to improve its forecasting process with the help of a machine learning approach. The example demonstrates that a myopic assessment obscures problems that only a broader framing reveals, and it highlights the value of a systems view, since the effects of the interplay between human and algorithmic decisions can be largely unintuitive. Such a simulation-based approach can be an effective tool in efforts to delineate roles for humans and algorithms in hybrid contexts.
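To make the myopia argument concrete, the toy Python sketch below (hypothetical behaviour throughout, not the paper's actual testbed) scores a forecasting algorithm on its local metric, forecast error, while a rule-driven planner who smooths the algorithm's suggestions quietly shapes the global metrics of lost sales and inventory; the two views can diverge sharply.

    import random
    import statistics

    random.seed(3)

    WEEKS = 200

    def true_demand():
        return max(0, round(random.gauss(100, 25)))

    def ml_forecast(demand):
        """Hypothetical ML stand-in: unbiased and fairly accurate for latent demand."""
        return demand + random.gauss(0, 10)

    stock, prev_order = 0.0, 100.0
    errors, lost_sales, stock_weeks = [], 0.0, 0.0
    for week in range(WEEKS):
        demand = true_demand()
        forecast = ml_forecast(demand)
        errors.append(abs(forecast - demand))  # local metric: forecast accuracy
        # bounded-rationality rule: the planner distrusts large revisions and
        # smooths each order toward last week's quantity (hypothetical behaviour)
        order = 0.5 * forecast + 0.5 * prev_order
        prev_order = order
        stock += order
        sold = min(stock, demand)
        lost_sales += demand - sold   # global symptom: unmet demand
        stock -= sold
        stock_weeks += stock          # global symptom: carrying inventory

    print(f"local view:  mean absolute forecast error = {statistics.mean(errors):.1f}")
    print(f"global view: lost sales = {lost_sales:.0f}, average stock = {stock_weeks / WEEKS:.1f}")

Judged on forecast error alone, the algorithm looks healthy; only the system-level figures expose how the human smoothing rule and the forecast interact to produce stockouts and inventory drift.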