Tuesday, June 15

Macho, sexist, deaf or blind: when tools serve only their designer


Carol Reiley, a tech entrepreneur and a pioneer in the development of applications for autonomous driving and robotic surgery, is clear about it: “Biased design, bad design, is very often a matter of life and death.” In an article for the technology outlet TechCrunch, Reiley, a world authority in her field, recounts that back in her university years, in the 2000s, she developed a prototype surgical robot that received orders through Microsoft’s speech recognition system, at that time the state of the art. “Incredible as it may seem, that system did not recognize my voice. It had been designed by a team of men in their twenties and thirties and was not able to process the generally higher pitch of female voices.” It was not suitable, in short, for 51% of the inhabitants of the planet.

To present her design in the university classroom, Reiley had to enlist the help of a male classmate, who relayed her orders for her. “It may seem like a simple anecdote,” Reiley continues in her article, “but now think for a moment of a precarious field hospital in a place like Afghanistan and a doctor who loses a patient who could have been saved, just because her advanced telesurgery system depends on an application so rudimentary and, yes, so sexist that it only recognizes male voices.”
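As a purely hypothetical sketch of the kind of failure Reiley describes, the snippet below imagines a speech front end whose accepted pitch range was fitted only to male training voices; the frequency bands are typical fundamental-frequency ranges, while the cut-off and function names are invented for illustration and have nothing to do with Microsoft’s actual software.

```python
# Hypothetical sketch: a recognizer front end tuned only on male speech can
# silently reject female voices. The pitch bands below are typical
# fundamental-frequency ranges; the accepted range is an invented example of
# a threshold learned from an all-male training set.

TYPICAL_F0_HZ = {
    "adult_male": (85, 180),
    "adult_female": (165, 255),
}

ACCEPTED_PITCH_RANGE_HZ = (60, 185)  # fitted, hypothetically, to male data only


def recognizer_accepts(fundamental_frequency_hz: float) -> bool:
    """Return True if this (imaginary) front end even attempts recognition."""
    low, high = ACCEPTED_PITCH_RANGE_HZ
    return low <= fundamental_frequency_hz <= high


for group, (low, high) in TYPICAL_F0_HZ.items():
    midpoint = (low + high) / 2
    print(group, recognizer_accepts(midpoint))
# adult_male (132.5 Hz)   -> True:  the command is processed
# adult_female (210.0 Hz) -> False: the command is silently dropped
```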

Reiley finds it “disappointing and scandalous” how little progress has been made in this area in the last twenty years. “We continue to release dolls like Hello Barbie, programmed to talk to girls, but not boys. Our voice or facial recognition algorithms continue to be racially and gender-biased almost without exception. And we are designing brand new autonomous driving systems thought almost exclusively by and for white men. “

Computer and robotics engineer Carol Reiley is a pioneer in teleoperated and autonomous robotic systems and in robotic surgery. Twitter: @robot_MD

Reiley has focused her recent efforts on this last point. She never loses sight of the fact, she explains, that when we get into a vehicle we are putting our lives in the hands of a team of human beings who have made a series of crucial technological and design decisions. “If these people have not taken into account our specific characteristics and needs, they are putting us in danger.”

It is nothing new. As early as the 1960s, when the automotive industry developed and consolidated its modern safety systems, from the seatbelt to the airbag, tests were carried out with crash dummies whose dimensions and morphology were always those of the average man. The result was that anyone who did not fit that predetermined mold, starting with women, received an insufficient level of protection, or at least one significantly lower than men’s. In 2011, when the use of female dummies in stress tests and crash trials finally began to be consolidated, by legal mandate, it was found that women had been exposed to a risk of death or serious injury up to 40% higher than that of men.

Even the simplest and most effective diagnostic tests are not immune to this perverse logic. An opinion piece published in The Economist on April 10 discussed the racial bias of pulse oximeters, those simple devices that measure oxygen saturation in the blood and that were key to deciding who was hospitalized and who was not during the worst moments of the health collapse caused by the pandemic. The measurement works by chromatic contrast, projecting a beam of light onto the fingertip. As The Economist explained, given that what is measured is the difference between the color of the blood and that of the skin, “the logical thing would have been to introduce a calibration system to adjust the measurement to different skin colors.”

However, nine out of ten certified oximeters are not calibrated this way. The test measures oxygen saturation very precisely in fair-skinned people, but tends to overestimate it by several points in darker-skinned people. As a consequence, a high percentage of Black and brown-skinned patients whose real saturation would require intensive care are never hospitalized. Some of them die, victims of design inertias that fail to take into account something as basic as racial diversity.
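As an illustrative sketch of the calibration The Economist calls for, the snippet below corrects a raw reading with a per-skin-tone offset before applying a hospitalization cut-off; the group names, offsets and threshold are assumptions invented for the example, not values from any real device or clinical guideline.

```python
# Illustrative sketch of per-skin-tone calibration for a pulse oximeter.
# All numbers are invented for the example, not taken from a real device.

CALIBRATION_OFFSET = {  # percentage points of overestimation to subtract
    "light": 0.0,
    "medium": 1.0,
    "dark": 2.5,
}

INTENSIVE_CARE_THRESHOLD = 92.0  # illustrative SpO2 cut-off, in percent


def corrected_spo2(raw_reading: float, skin_tone: str) -> float:
    """Apply a skin-tone-specific correction to a raw SpO2 reading."""
    return raw_reading - CALIBRATION_OFFSET.get(skin_tone, 0.0)


def needs_intensive_care(raw_reading: float, skin_tone: str) -> bool:
    """Decide using the corrected value rather than the raw one."""
    return corrected_spo2(raw_reading, skin_tone) < INTENSIVE_CARE_THRESHOLD


# An uncalibrated device that shows 93% on dark skin may be overestimating:
print(needs_intensive_care(93.0, "dark"))   # True  -> patient should be admitted
print(needs_intensive_care(93.0, "light"))  # False -> reading taken at face value
```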

Another example. According to sources including the Texas Heart Institute, automated cardiac alert protocols still do not sufficiently take into account that heart attack symptoms differ significantly between men and women. In men, the recurrent signs are tightness in the diaphragm area and intense pain in the left half of the chest that radiates to the arm, while in women cold sweats, nausea and pain in the back, jaw and neck are much more common. The Texas Heart Institute warned in 2019 that artificial-intelligence-based cardiac alert devices are still being designed that barely account for these differences, which makes early detection much more likely in men than in women.
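A hypothetical sketch of the gap the institute describes: an alert rule built only around the male-typical presentation misses the pattern more common in women, while one that also encodes those symptoms does not. The symptom names and the two-symptom trigger are invented for the example and do not describe any real device.

```python
# Hypothetical illustration: an alert rule coded around male-typical
# heart-attack symptoms misses presentations that are more common in women.
# Symptom names and the two-symptom trigger are invented for the example.

MALE_TYPICAL = {"chest_tightness", "left_chest_pain", "pain_radiating_to_arm"}
FEMALE_TYPICAL = {"cold_sweats", "nausea", "back_pain", "jaw_pain", "neck_pain"}


def biased_alert(symptoms: set) -> bool:
    """Fires only on the male-typical presentation."""
    return len(symptoms & MALE_TYPICAL) >= 2


def inclusive_alert(symptoms: set) -> bool:
    """Fires on either presentation pattern."""
    return len(symptoms & MALE_TYPICAL) >= 2 or len(symptoms & FEMALE_TYPICAL) >= 2


woman_presentation = {"cold_sweats", "nausea", "jaw_pain"}
print(biased_alert(woman_presentation))     # False -> the heart attack goes undetected
print(inclusive_alert(woman_presentation))  # True  -> early detection is possible
```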

The archetype of “human”, a middle-aged white man, remains the same as the one represented by Leonardo da Vinci’s ‘Vitruvian Man’.

Photo 12 / Getty

The journalist Anne Quito, an editor specializing in technology and design at the magazine Quartz, has a theory: “The bias that leads to designing with the specific needs of the young or middle-aged white man in mind is due to the fact that a very high percentage of designers fit that profile. To correct it, or at least lessen it, it would be enough to hire more women and more members of racial minorities.” More than to conscious prejudice, Quito attributes these dysfunctions to “mental inertias.” As she explains, “neurologists consider that at least 95% of our daily decisions are the product of subconscious processes that we often do not even question or rationalize after the fact.”

Among developers of artificial intelligence programs there is a very common maxim: “Don’t make me think.” The idea is to rely on cognitive routines and shortcuts that allow tasks to be automated and systematized, because if conscious reflection were applied to every single design decision, it would be very difficult to make significant progress. This flexible, pragmatic approach makes it possible to develop algorithms of extraordinary complexity, but it does not prevent blunders that are not always detected in time.

Some of those mistakes can be comical. In March 2016, Microsoft launched Tay, an artificial intelligence program designed to chat with other users on Twitter. It was an adaptation for the English-speaking market of Xiaoice, a bot launched a year and a half earlier on Chinese social networks that had held more than 40 million conversations with human interlocutors without any noteworthy problems. Tay (an acronym for Thinking About You) was endowed with the personality, worldview and linguistic resources of a 19-year-old American girl, and also incorporated a complex machine learning (deep learning) algorithm based on the kind used by chess programs.

This “warm and empathetic” artificial intelligence, as its creators described it, was withdrawn from circulation in just 16 hours. In that period, Tay “learned” to behave like a spoiled, lascivious and prejudiced teenager, stringing together phrases incompatible with any notion of online etiquette or political correctness, such as “Barack Obama is the ape who rules us”, “Hitler was right”, “I want you to have sex with my robotic pussy”, “George W. Bush was responsible for 9/11”, “I hate homosexuals” or “I wish all feminists would rot in hell.” Microsoft attributed the drift of its bot, programmed to behave like “a friendly, kind and slightly naive teenager”, to a concerted assault by users of 4chan, a sort of English-speaking ForoCoches, an environment with very active users, virulent in their interactions and prone to playful sabotage.

The fact is that Tay was baptized by the press as “the racist and neo-Nazi robot” and ended up in Microsoft’s dungeons while its programmers searched for a way to refine its self-learning algorithm and shield it against the onslaught of malicious trolls. On March 30, the bot reappeared for just a few hours and very soon began engaging in unforeseen scandalous behavior, such as glorifying drug use. For Anne Quito, “what is disconcerting about this case is that an algorithm as refined as Tay was so vulnerable to the onslaught of trolls after going through thousands of hours of simulated interaction that, at least in theory, should have prepared it to step out into the real world.”

Quito adds that “its programmers, almost all white men, did not think it could fall victim to the high degree of toxicity of social networks, because they, however much they may reject it, do not suffer it to the same extent as a woman or a member of a racial or sexual minority.” Microsoft decided to withdraw the wayward teenager again and replaced her months later, in December 2016, with Zo, a conversational bot initially so bland and so neutral that The Washington Post came to define her as “a prude without substance or intelligence whose main concern, apparently, is to scold interlocutors who express controversial opinions.” Despite everything, an editor at BuzzFeed News “interviewed” Zo and managed to get her out of her comfort zone on several occasions, dragging her into expressing opinions such as that the Koran is a “violent” book or that the death of fanatics like Osama Bin Laden should be celebrated.

Facial recognition systems deserve special mention in this gallery of technological horrors. After several unsuccessful attempts, Facebook, Google and Microsoft have decided to withdraw until further notice from this potentially profitable but slippery field. One of the latest to suffer the rigors of design by and for white men has been Robert Julian-Borchak Williams, an African American living in the Detroit suburbs. In January 2020, Williams spent nearly 20 hours in police custody after a facial recognition algorithm linked him to the perpetrator of a robbery at a luxury goods store. It was of little use that two of the officers who dealt with him insisted to their superiors that the man under arrest hardly resembled the image of the suspect, reconstructed from a blurry frame from the store’s security cameras. The algorithm identified him simply because it was not programmed to accurately distinguish one African American from another.

We are talking about programs so poorly optimized that they mistake Black people for gorillas, as happened to Jacky Alcine, a New York software designer whom Google labeled in its image bank as a primate instead of a human being. Both Alcine and Williams have embarked on their own crusades to ensure that such dubious systems are not used on social networks or, of course, in official institutions such as courts and police departments.

Hoo Keat Wong, a doctor of psychology based in Malaysia, wrote an article about the incomprehensible racial bias of these programs after finding himself in an absurd situation the day he used a digital service to take photos for his passport renewal: the program kept returning an error and urging him, over and over, to “open his eyes”, ignoring the fact that Wong, like most East Asians, has so-called “slanted eyes”. That is, he has a very pronounced epicanthic fold, the crease of the upper eyelid that covers the inner corner of the eye. More than a third of the inhabitants of planet Earth share this trait. But many facial recognition programs, whether out of subconscious inertia or the unrepentant, inconsiderate ethnocentrism of their creators, prefer not to take it into account.
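A minimal sketch of the failure Wong ran into, assuming a hypothetical “eyes open” check based on a single eye-opening ratio threshold tuned on one population; the ratios, thresholds and function names are invented for illustration and do not come from any real passport-photo service.

```python
# Minimal sketch of an "eyes open" check that fails for pronounced epicanthic
# folds because its threshold was tuned on a single population. All numbers
# are illustrative assumptions.

def eye_opening_ratio(eye_height_px: float, eye_width_px: float) -> float:
    """Visible eye-opening height divided by eye width, from facial landmarks."""
    return eye_height_px / eye_width_px


NAIVE_OPEN_THRESHOLD = 0.25       # tuned only on faces without a pronounced fold
CALIBRATED_OPEN_THRESHOLD = 0.15  # validated across different eyelid shapes


def eyes_open(ratio: float, threshold: float) -> bool:
    return ratio >= threshold


wong_like_ratio = eye_opening_ratio(4.0, 22.0)  # about 0.18, eyes fully open
print(eyes_open(wong_like_ratio, NAIVE_OPEN_THRESHOLD))       # False -> "open your eyes" error
print(eyes_open(wong_like_ratio, CALIBRATED_OPEN_THRESHOLD))  # True  -> photo accepted
```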




elpais.com
