Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’


Kate Crawford studies the social and political implications of artificial intelligence. She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. Her new book, Atlas of AI, examines what it takes to make AI and what is at stake as it reshapes our world.

You have written a critical book about AI, yet you work for a company that is among the leaders in its deployment. How do you square that circle?
I work in the research wing of Microsoft, which is a distinct organization, separate from product development. Unusually, over its 30-year history, it has hired social scientists to look critically at how technologies are being built. Being on the inside, we can often see the downsides before systems are widely deployed. My book did not go through any pre-publication review (Microsoft Research does not require that), and my lab leaders support asking hard questions, even if the answers involve a critical assessment of current technology practices.

What is the purpose of the book?
We are commonly presented with this vision of AI that is abstract and immaterial. I wanted to show how AI is created in a broader sense: its natural resource costs, its labor processes, and its classificatory logics. To see that in action, I went to places that included mines to see the necessary extraction of the earth’s crust and an Amazon fulfillment center to see the physical and psychological cost to workers of being under an algorithmic management system. My hope is that by showing how artificial intelligence systems work, by exposing production structures and material realities, we will have a more accurate description of the impacts and invite more people into the conversation. These systems are being implemented in a multitude of sectors without strong regulation, consent, or democratic debate.

What should people know about how AI products are made?
We are not used to thinking about these systems in terms of environmental costs. But saying, “Hey, Alexa, order me some toilet rolls,” invokes a chain of extraction that goes all around the planet … We have a long way to go before this is green technology. Also, the systems may seem automated, but when we pull back the curtain we see large amounts of underpaid labor, from crowdwork categorizing data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources, and it is people who perform the tasks to make the systems appear autonomous.

Bias problems have been well documented in AI technology. Can more data solve that?
Bias is too narrow a term for the sorts of problems we are talking about. Time and again, we see these systems producing errors – women offered less credit by creditworthiness algorithms, black faces mislabeled – and the response has been, “We just need more data.” But I have tried to look at these deeper logics of classification, and you start to see forms of discrimination not only when the systems are applied, but in how they are built and trained to see the world. Training data sets used for machine-learning software casually categorize people into just one of two genders; label people according to their skin color into one of five racial categories; and try, based on how people look, to assign them moral or ethical character. The idea that you can make these determinations based on appearance has a dark past, and unfortunately the politics of classification has been baked into the substrates of AI.

You single out ImageNet, a huge, publicly available training data set for object recognition …
With around 14 million images in more than 20,000 categories, ImageNet is one of the most significant training data sets in the history of machine learning. It is used to test the effectiveness of object-recognition algorithms. It was launched in 2009 by a group of Stanford researchers who scraped enormous amounts of images from the web and had crowdworkers label them according to nouns from WordNet, a lexical database created in the 1980s.

Beginning in 2017, I did a project with the artist Trevor Paglen to look at how people were being labeled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Images of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, prostitute, whore, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories – certainly an improvement – but the problem persists because these training sets still circulate on torrent sites [where files are shared between peers].

And we could only study ImageNet because it is public. There are huge training data sets held by tech companies that are completely secret. They have scraped images that we have uploaded to photo-sharing services and social media platforms and turned them into private systems.

You are critical of the use of AI for emotion recognition, yet you work for a company that sells emotion recognition technology. Should AI be used for emotion detection?
The idea that you can see from somebody’s face what they are feeling is deeply flawed. I don’t think that’s possible. I have argued that it is one of the most urgently needed domains for regulation. Most emotion recognition systems today are based on a line of thinking in psychology developed in the 1970s, most notably by Paul Ekman, who argued that there are six universal emotions that we all show on our faces and that can be read using the right techniques. But from the beginning there was pushback, and more recent work shows there is no reliable correlation between expressions on the face and what we are actually feeling. And yet we have tech companies saying that emotions can be extracted simply by looking at video of people’s faces. We’re even seeing it built into car software systems.

What do you mean when you say we should focus less on the ethics of AI and more on power?
Ethics are necessary, but not sufficient. More useful are questions such as: who benefits and who is harmed by this AI system? And does it put power in the hands of those who are already powerful? What we see time and again, from facial recognition to tracking and workplace surveillance, is that these systems are empowering already powerful institutions: corporations, militaries, and police.

What does it take to make things better?
Much stronger regulatory regimes and greater rigor and accountability around how training data sets are constructed. We also need different voices in these debates, including people who are seeing and living with the downsides of these systems. And we need a renewed politics of refusal that challenges the narrative that just because a technology can be built, it should be deployed.

Any optimism?
There are things happening that give me hope. This April, the EU produced the first draft of omnibus regulations for AI. Australia has also just released new guidelines for regulating AI. There are holes that need to be patched, but we are now starting to realize that these tools need much stronger guardrails. And giving me as much optimism as the progress on regulation is the work of activists agitating for change.

The AI ethics researcher Timnit Gebru was forced out of Google late last year after executives criticized her research. What is the future of industry-led criticism?
Google’s treatment of Timnit has caused a stir in academic and industry circles. The good news is that it has not produced silence; instead, Timnit and other powerful voices have continued to speak out and push for a fairer approach to designing and deploying technical systems. A key element is ensuring that researchers in industry can publish without corporate interference, and fostering the same academic freedom that universities seek to provide.

Atlas of AI by Kate Crawford is published by Yale University Press (£20). To support the Guardian, order your copy at guardianbookshop.com. Delivery charges may apply.
