Heuristics of AI Decision Support Systems in the Military

In July and August 2024, I had the opportunity to take part in the 2024 International Security and Intelligence Summer Programme organized by the Cambridge Security Initiative. For my independent research project I delved into the behavioural sciences, ergonomics, and wargaming to explore some of the challenges behind the adoption of AI decision support systems in the military. My final paper, although graded at Distinction level, is not complete or refined enough to justify publication (here or elsewhere), so I have decided to share just the abstract to give an idea of the work. I would greatly appreciate any feedback, questions, or other requests related to it.

Abstract

This paper proposes a novel methodology for studying cognitive heuristics, particularly automation bias, in military decision-making involving AI decision support systems (AI-DSS). Automation bias, in which individuals over-rely on automated systems at the expense of vigilant decision-making, is well documented in fields like aviation and healthcare but remains underexplored in military contexts. The paper addresses this gap by proposing wargaming as a method to simulate realistic, high-pressure military scenarios and observe how automation bias, the machine heuristic, confirmation bias, and a newly introduced accountability avoidance heuristic influence decisions. Wargaming, combined with a mixed-methods approach that integrates quantitative decision analysis with qualitative insights from post-game interviews, allows researchers to assess the occurrence and interaction of these cognitive shortcuts in ecologically valid settings. The methodology also seeks to overcome the difficulty of obtaining real-world military decision-making data. While acknowledging limitations, such as the challenge of fully replicating the complexity of military operations and the ethical concerns raised by high-stress simulations, the paper lays a foundation for future empirical research. It suggests that understanding these biases could inform military training, the design of AI-DSS, and policies aimed at reducing the harmful impact of automation bias in military operations. By advancing a comprehensive research framework, the paper aims to spur further investigation into the cognitive challenges AI-DSS pose in high-stakes military environments.
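
To give a concrete sense of what the quantitative side of the mixed-methods analysis could look like, below is a minimal sketch in Python of how automation-bias indicators might be computed from logged wargame decisions. The `Decision` schema and both metrics are my own illustrative assumptions rather than instruments from the paper; the commission-error measure (following an incorrect recommendation) is a standard proxy in the wider automation-bias literature.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One logged wargame decision (hypothetical schema)."""
    ai_recommendation: str   # option suggested by the AI-DSS
    ai_correct: bool         # ground truth: was the suggestion sound?
    player_choice: str       # option the player actually selected

def automation_bias_indicators(log: list[Decision]) -> dict[str, float]:
    """Two simple quantitative proxies for automation bias:
    - agreement_rate: how often players follow the AI-DSS overall
    - commission_rate: how often players follow it when it is wrong
    A commission rate well above the players' baseline error rate
    suggests over-reliance rather than independent judgement."""
    followed = [d for d in log if d.player_choice == d.ai_recommendation]
    wrong_ai = [d for d in log if not d.ai_correct]
    commissions = [d for d in wrong_ai if d.player_choice == d.ai_recommendation]
    return {
        "agreement_rate": len(followed) / len(log),
        "commission_rate": len(commissions) / len(wrong_ai) if wrong_ai else 0.0,
    }

# Toy log: 3 of 4 decisions follow the AI, including 1 of its 2 bad calls.
log = [
    Decision("strike", True, "strike"),
    Decision("hold", True, "hold"),
    Decision("strike", False, "strike"),   # commission error
    Decision("withdraw", False, "hold"),   # AI wrong, player overrides
]
print(automation_bias_indicators(log))
# {'agreement_rate': 0.75, 'commission_rate': 0.5}
```

In an actual study these indicators would be compared across conditions (for instance, with and without AI-DSS assistance) and triangulated against the post-game interview data.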