To err is human, not algorithmic – Robust reactions to erring algorithms. Laetitia A. Renier, Marianne Schmid Mast, Anely Bekbergenova. Computers in Human Behavior, May 30, 2021, 106879. https://doi.org/10.1016/j.chb.2021.106879
Highlights
• Reactions toward erring algorithms go beyond algorithm aversion.
• Gut reactions were harsher and behavioral intentions linked to action were stronger when the error was made by an algorithm.
• Justice cognitions were weaker when the error was made by an algorithm.
• Observed effects were robust to the domain of use, the severity of the error, and information about algorithm maturity.
Abstract: When we see algorithms err, we trust them less and use them less than after seeing humans err; this phenomenon is called algorithm aversion. This paper builds on the algorithm aversion literature and the third-party reactions to mistreatment model to investigate a wider array of reactions to erring algorithms. Using a vignette-based online experiment, we investigate gut reactions, justice cognitions, and behavioral intentions toward erring algorithms (compared to erring humans). Our results show that when the error was committed by an algorithm (vs. a human), gut reactions were harsher (i.e., less acceptance and more negative feelings), justice cognitions were weaker (i.e., less blame, less forgiveness, and less accountability), and behavioral intentions were stronger. These results held independently of the maturity of the algorithm (performing better than or the same as humans), the severity of the error (high or low), and the domain of use (recruitment or finance). We discuss how these results complement the current literature by revealing a robust and more nuanced pattern of reactions to erring algorithms.
Keywords: Algorithm aversion; Artificial intelligence; Error; Reactions; Perception; Third-party