One of the roles we are assigning to artificial intelligence is advising the humans responsible for critical decisions, such as “What is the appropriate penalty to impose on this inmate?” or “What medication should be prescribed for this patient?” The idea is that, since AI can sometimes see things we cannot, its suggestions will help us choose more wisely.
But that is just the theory: in practice, the people responsible for these decisions, whether because of an excessive workload or excessive faith in machines, too often take the advice of algorithms at face value and act on it uncritically.
This is what is often called ‘automation bias’: a lack of skepticism toward the information that algorithms provide us. Paradoxically, it is we, not the machines, who are guilty of acting automatically. And like all biases, we tend to deny it.
If the AI says so, there must be a reason
The first studies of automation bias focused on aircraft navigation systems: a study published almost a decade ago found that pilots tended to say they would not trust an automated system warning them of an engine fire unless they had corroborating evidence; yet once immersed in the simulations, they chose to accept the fire warning uncritically.
And this is a problem. First, because we know that the information artificial intelligence offers can fail, whether because the system has been hacked or because it was trained on erroneous or biased data. And yet, users’ lack of understanding and the excessive marketing built around this technology give it an unjustified halo of precision.
Ryan Kennedy, an automation specialist at the University of Houston, explains that “when people have to make decisions in relatively short time frames, with little information… that’s when they tend to trust whatever advice an algorithm gives them.”
According to Axios, a forthcoming study by Matthew Lungren of Stanford University’s medical AI center has found that doctors affiliated with the university follow an AI’s advice “in some cases, even when it is clearly wrong.”
According to Lungren, the solution would lie in providing information about each algorithm’s level of reliability, so that human decision-makers could put the advice they receive into context.
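To make that idea concrete, here is a minimal, hypothetical sketch of what such contextualized advice could look like: the recommendation is never shown on its own, but always bundled with a confidence estimate, the model’s historical accuracy, and a plain-language caveat. All names and thresholds below are illustrative assumptions, not any real system’s API.

```python
from dataclasses import dataclass

@dataclass
class AdvisedDecision:
    """A recommendation bundled with the context a human needs to weigh it."""
    recommendation: str
    confidence: float           # model's estimated probability of being right (0-1)
    validation_accuracy: float  # how often this model was right on held-out data
    caveat: str

def advise(recommendation: str, confidence: float,
           validation_accuracy: float) -> AdvisedDecision:
    # Translate the raw numbers into a plain-language warning so the
    # human reviewer knows how much skepticism is warranted.
    if confidence < 0.6 or validation_accuracy < 0.8:
        caveat = "Low reliability: treat this as a hint and verify independently."
    else:
        caveat = "Higher reliability, but still review before acting."
    return AdvisedDecision(recommendation, confidence,
                           validation_accuracy, caveat)

# Hypothetical usage: the doctor sees the advice AND its reliability.
decision = advise("Prescribe drug A", confidence=0.55, validation_accuracy=0.74)
print(f"{decision.recommendation} "
      f"(confidence {decision.confidence:.0%}, "
      f"historical accuracy {decision.validation_accuracy:.0%}) "
      f"- {decision.caveat}")
```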