The introduction of artificial intelligence (AI) into the healthcare sector is reshaping how clinicians approach patient care, and the way information is presented to clinicians plays a crucial role in the adoption of AI technologies. Despite the potential benefits, fewer than 4% of hospitals are considered "high adopters" of AI. While AI models can enhance efficiency, they can also miss significant findings and flag false positives, creating conditions for automation bias, in which clinicians place too much trust in AI predictions. Conversely, clinicians can also become skeptical of AI, which contributes to burnout. This complex trust dynamic is shaped by how AI models explain their predictions and by the contextual nature of trust.
Research on automation bias began roughly three decades ago, focused primarily on pilots. The bias has since become increasingly relevant in healthcare, where the human impulse to trust machines remains somewhat mysterious. Clinicians' trust in AI varies with how simple or thorough the AI's explanations are. Teaching clinicians to interpret AI explanations and recognize potential biases could mitigate these challenges, but that adaptation requires significant changes to how clinicians are trained and how they practice.
AI's role in healthcare presents both challenges and opportunities. Overworked radiologists, for instance, may find AI to be a "double-edged sword," increasing efficiency while also introducing potential errors. Clinicians must remain vigilant when using AI, particularly in high-stakes situations; navigating these new challenges and opportunities effectively requires more nuanced explanations and greater awareness of potential biases.
The presentation of information significantly influences clinicians' adoption of AI systems. Jennifer Goldsack emphasizes this by stating:
“We know, or at least have signals in the data, that the way that the information is presented matters.” – Jennifer Goldsack
AI explanations that are simple rather than thorough tend to gain faster agreement from clinicians, yet this simplicity can obscure critical details. Mistakes occur when AI systems miss significant findings or flag false positives, as noted by Belmustakov:
“Missed a significant number of findings and would also flag false positives.” – Belmustakov
In some cases, clinicians recognize that an AI prediction may be inaccurate, as in Belmustakov's experience:
“In the back of our mind, we all knew the computer was wrong.” – Belmustakov
This realization underscores the necessity for clinicians to exercise due diligence, as highlighted by Neel Guha:
“What the physician should have done is notice there was a reason not to trust the AI system, or done some additional due diligence.” – Neel Guha
Yi further points out the scarcity of research on how AI explanations affect radiologists and other doctors:
“There’s really limited research on how AI explanations are presented to radiologists and other doctors, and how that affects them.” – Yi
The research gap persists even though numerous FDA-cleared AI products are available for radiology:
“And this is in spite of hundreds of FDA-cleared products now available on the market for radiology using AI.” – Yi
Claims about AI's expert-level accuracy remain contentious:
“They all claim that their AI is expert level, even if the evidence is a little bit controversial.” – Yi
AI's integration into clinical settings also raises concerns about automation bias. Clinicians might overly trust machine predictions due to the inherent human inclination to rely on technology. Yet, as Goldsack notes:
“Technology is an inanimate object. The way that humans interact with it is a human problem, not a technology problem.” – Jennifer Goldsack
The challenge lies not in the technology itself but in human interaction with it. Goldsack warns against anchoring solely to human judgment when high-performing AI tools become available:
“What I don’t want to happen is for us to get these really high-performing tools that routinely outperform humans, and then anchor to a human.” – Jennifer Goldsack
Yi concurs that alignment between human performance and AI is crucial:
“It’s great if the AI is correct. It’s great if we’re at the top of our game.” – Yi
Because trust is contextual, clinicians must evaluate each situation individually; automation bias research has shown that trust varies with context. Clinicians may need additional training to understand AI systems properly and to recognize their biases, an adaptation that will require adjustments in clinical training and practice.