Veronika Alexander, Collin Blinder, and Paul J. Zak, Why Trust an Algorithm? Performance, Cognition, and Neurophysiology, Computers in Human Behavior 89 (2018) 279-288, https://doi.org/10.1016/j.chb.2018.07.026.
As a librarian, I am interested in information behavior. I am not a computer scientist or a programmer, so my knowledge about the intricacies of algorithms is minimal. But I try to inform myself where I can.
I found this study to be helpful because it addressed how trust in algorithms affects behavior. The authors conducted a study in which they measured participants’ neurophysiological responses to gain a better understanding of people’s perception of the trustworthiness of algorithms. Their hypothesis? “Higher algorithm accuracy and greater prior use by others would result in higher algorithm adoption … (and) social proof would influence adoption more than algorithm accuracy information.”
In providing background information about why they decided to undertake this study, the authors pointed out that “humans are prone to error, and errors can be useful when it comes to creativity and innovation.” Studies have also indicated that people think that judgments by human experts are more credible than judgments generated by algorithms. That said, it has also been shown that algorithms are more reliable than human judgment in certain predictive circumstances in the areas of healthcare, finance, the military, and education.
The methodology of the study was fairly straightforward. Participants were given mazes to solve. For each maze solved, they would receive some money. They were also told that they could purchase an “imperfect helper algorithm” to make solving the mazes easier. To test the social proof component of the hypothesis, some participants were told that a lot of other people had used the algorithm, and other participants were told that very few people had.
In the end, 61% of participants adopted the algorithm during the study. Those most likely to do so were participants who, according to pre-study survey answers, believed others to be trustworthy, as well as those who were told that many others had already adopted it. This second factor was especially notable, according to the authors, who stated that “social proof is the most effective way we tested to persuade people to adopt algorithms.” Interestingly, women adopted the algorithm twice as often as men.
As for the neurophysiological portion of the study, the authors found that, based on ECG measurements of the participants’ heart rates during the study, the cognitive load (“amount of mental effort needed to solve a problem”) was higher among those who declined to adopt the algorithm.
As a final conclusion, the authors wrote that “blind trust is foolish, but so is avoiding algorithms altogether.”
I am not sure what to make of all of this. If I personally had to solve a computer maze, I would want all the help I could get, even in the form of an “imperfect algorithm,” since I am not really that great at spatial problem solving.
That said, I am often highly skeptical of algorithm-based “relevance” rankings of search results in electronic library catalogs and research databases — an area I know very well — because I don’t know how they define “relevance.” When I am teaching research classes, I remind students that they should always question how the computer could possibly know what is most relevant for them, and that they shouldn’t just look at the first page of search results and assume they’ve done enough research.
I also know that algorithm-aided decision-making cannot help but reflect the biases of everyone involved in creating and using the algorithm. So I suppose in the end I would adopt the measured response that the authors of this study advocate: trust, but with skepticism.