Saturday, March 6

“Put me with a human”: the origin of our aversion to algorithms


Sylvester McCoy as Doctor Who and Sophie Aldred as Ace (1988). Mirror / Getty Images

Among humans, it is common to forgive the errors of others and admit our own. “Even the best scribe makes a blot,” says the proverb that knows us so well. When it is a machine that slips up, we become merciless. This is what studies such as the one just published by a team of researchers from the universities of Munich and Darmstadt, in Germany, reveal. When an algorithm-based decision system makes a mistake, our trust in it suffers more than the trust we place in a person who has given us bad advice. This crisis of faith, however, has a cure: the model must demonstrate its ability to learn.

The phenomenon, christened in academia as algorithm aversion, is not new. “In the 1960s there were already studies looking at healthcare and medical decisions, comparing clinical judgments with those based on statistics. Although these methods were really accurate in their predictions, humans always shied away from the purely statistical judgment,” explains one of the study’s authors, Benedikt Berger, who wrote it together with Martin Adam, Alexander Rühr and Alexander Benlian.

Researchers have transferred that model to the relationship between humans and machine learning systems, and have found a similar trend. “In general, we are more reluctant to rely on algorithms for subjective tasks,” explains Berger. This occurs with medical diagnoses, but also in other decision-making contexts, such as determining whether a joke is funny or how likely two people are to make a good couple. “The more subjective the task, the more we ignore the algorithms,” says the researcher.

In the case of objective tasks, we are willing from the outset to listen to the machine’s opinion. According to the study, in which almost 500 people interacted with human advisers and algorithm-based decision systems, there is no general aversion in these cases. The problem comes when the machine shows signs of clumsiness. “When we get to know the algorithm and its performance, and see that it can fail, that it is not perfect, aversion arises,” adds Berger.

One shot

How is the magic broken? The expert points to different hypotheses. On the one hand, our conception of algorithms as sets of fixed rules may give us higher expectations of them than we have of an imperfect human. On the other hand, this greater willingness to forgive the mistakes of our fellow human beings may rest on the recognition of our ability to learn from mistakes. At the machine’s first stumble, we take it for granted that it will inevitably trip over the same stone again.

But this is not true of every system. “There are examples of algorithms that can learn from previous results that were not optimal,” Berger clarifies. According to his research, this ability could hold the key to the machines’ redemption. “If the system demonstrates performance that is continually being optimized, and people recognize that it is improving, this can offset the initial shock.”
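As a rough illustration of what that kind of continuous improvement can look like, here is a minimal sketch (purely hypothetical, not taken from the study) of an online model that adjusts itself every time it receives feedback on a mistake, so that its advice measurably improves over successive rounds:

import random

def run_online_learner(n_rounds=500, learning_rate=0.05):
    # Hypothetical setup: a hidden linear relationship the model must discover.
    true_weight, true_bias = 2.0, -1.0
    weight, bias = 0.0, 0.0  # the model starts out knowing nothing

    for round_no in range(1, n_rounds + 1):
        x = random.uniform(-1.0, 1.0)
        target = true_weight * x + true_bias   # the "correct advice" for this case
        prediction = weight * x + bias         # the model's current advice
        error = prediction - target

        # Learn from the mistake: nudge the parameters against the error (one SGD step).
        weight -= learning_rate * error * x
        bias -= learning_rate * error

        if round_no % 100 == 0:
            print(f"round {round_no:4d}: |error| = {abs(error):.3f}")

run_online_learner()

Each printed line shows the size of the latest mistake shrinking, the kind of visible “I am getting better” signal that, according to the study, can offset the initial shock.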

It is a relationship similar to the one we have with Spotify’s recommendation system. When an account is first created, it is inevitable that some of its musical suggestions will strike us as nonsense. “These systems need to look at the type of music you listen to, or the items you buy, to know what to offer you. This is what is known as a cold start problem,” explains Berger.

One way around this pitfall is to try to gather some information up front. Amazon, for example, does this with its Kindle: it offers each new user the chance to enter books they have enjoyed or are interested in reading. “Another option is to communicate that the suggestions may not be very precise at first and that, over time, they will improve,” says the researcher. The option of strengthening communication, he says, is also valid for systems in which there is no machine learning, but there are improvements made by the engineers. “A huge list of updates in technical language is not worth it, though.”
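To make the idea concrete, the following minimal sketch (an illustration under assumed data, not Amazon’s or Spotify’s actual system) shows how the answers gathered during onboarding can be folded into the very first recommendations, before any usage history exists; every title, score and function name here is invented:

from collections import Counter

# Hypothetical catalogue: title -> genre.
CATALOG = {
    "Dune": "sci-fi",
    "Neuromancer": "sci-fi",
    "Foundation": "sci-fi",
    "The Hobbit": "fantasy",
    "Pride and Prejudice": "classic",
}

# With zero history, the only signal available is what everyone else reads.
GLOBAL_POPULARITY = Counter({"The Hobbit": 9, "Dune": 7, "Pride and Prejudice": 6,
                             "Neuromancer": 4, "Foundation": 3})

def recommend(onboarding_genres=None, top_n=3):
    """Rank books for a brand-new user: popularity alone (pure cold start),
    or popularity boosted by the genres declared during onboarding."""
    def score(book):
        base = GLOBAL_POPULARITY[book]
        if onboarding_genres and CATALOG[book] in onboarding_genres:
            base += 10  # boost titles matching the declared preferences
        return base
    return sorted(CATALOG, key=score, reverse=True)[:top_n]

print(recommend())             # pure cold start: popularity only
print(recommend({"sci-fi"}))   # onboarding answers already personalise the list

The point is not the scoring formula, which is deliberately crude, but that a few upfront answers give the system something to work with on day one, softening the cold start that Berger describes.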

A matter of balance

Being overly optimistic in communicating these improvements is also dangerous. What’s more, it brings us back to where we started. “Overpromising is risky. And it is something artificial intelligence in general has suffered from in the past. If you raise expectations and do not meet them, you end up with dissatisfied users,” says Berger. “In certain respects, it can even be dangerous. For example, in Germany, Tesla has been banned from calling its advanced driving system Autopilot, because it leads people to think they don’t have to do anything while driving. But it is an assistant.”

A balance is also needed in the trust we place in algorithms, no matter how adept they are at learning from their mistakes. “There is an opposite phenomenon, known as overconfidence in technological systems,” says the researcher. At that extreme, research shows that we can overdo it and blindly follow machines to detrimental outcomes. Berger gives as an example one in which fictitious scenarios are presented where a robot acts as a guide inside a burning building. “People followed the robot even when its directions were clearly wrong. We definitely have to be cautious and judge the decisions of algorithms with healthy skepticism.”
