Thursday, October 28

Google May Ask Questions About AI Ethics, But Doesn’t Want Answers

If I told you that an academic paper titled “On the Dangers of Stochastic Parrots” had sparked a historic dispute involving one of the most powerful companies in the world, you would have asked me what I’d been smoking. And well you might: but stay tuned.

The paper has four co-authors, two from the University of Washington and two from Google: Dr Timnit Gebru and Dr Margaret Mitchell. It provides a useful critical review of language models (LMs) such as GPT-3, machine-learning systems that are trained on huge amounts of text and are capable of producing plausible-looking prose. The amount of computation (and the associated carbon emissions) involved in building them has skyrocketed to insane levels, so at some point it makes sense to ask a question that is never asked in the tech industry: how much is enough?

That is one of the questions the authors of the paper asked. In answering it, they identified “a wide variety of costs and risks associated with the rush for ever larger LMs, including: environmental costs (borne typically by those not benefiting from the resulting technology); financial costs, which in turn erect barriers to entry, limiting who can contribute to this research area and which languages can benefit from the most advanced techniques; opportunity cost, as researchers pour effort away from directions requiring less resources; and the risk of substantial harms, including stereotyping, denigration, increases in extremist ideology, and wrongful arrest”.

These findings provide a useful counter-narrative to the tech industry’s current Gadarene rush into language modelling. There was, however, a slight difficulty. In 2018, Google built a language model called BERT, which was so powerful that the company incorporated it into its search engine, its core (and most lucrative) product. Google is consequently very touchy about criticism of such a pivotal technology. And two of the paper’s co-authors were Google employees.

What happened next was predictable and ugly, and there are competing narratives about it. Gebru says she was fired; Google says she resigned. Either way, the outcome was the same: in English employment law it would look like “constructive dismissal” – where an employee feels they have no choice but to quit because of something their employer has done. But whatever the explanation, Gebru is out. And so is her co-author and colleague Mitchell, who had been trying to establish the reasons for Google’s objections to the paper.

But now comes the truly absurd part of the story. Gebru and Mitchell were prominent members of Google’s Ethical AI team. In other words, in co-authoring the paper they were simply doing their job, which was to critically examine machine-learning technologies of the kind that are now central to their employer’s business. And while their treatment (and the subsequent online harassment of them by trolls) has been traumatic, it has at least highlighted the extent to which the tech industry’s recent obsession with “ethics” is a manipulative fraud.

As the industry’s frenzy for machine learning has accelerated, so has the proliferation of ethics oversight boards, panels, and advisory bodies established by the companies themselves. To staff them, the companies have enlisted enterprising academics eager to get a piece of the action. In that sense, lucrative consultancies advising on the ethical issues raised by machine learning have become a vast relief scheme for philosophers and other scholars who might otherwise be unemployed. The result is a kind of ethics theatre, akin to the security theatre performed at airports in the years when people were allowed to fly. And the reason this charade continues is that tech companies see it as a pre-emptive strike against what they really fear: regulation by law.

The point is that current machine-learning systems have ethical problems the way rats have fleas. Their intrinsic flaws include bias, unfairness, ethnic and gender discrimination, huge environmental footprints, thin theoretical foundations, and a crippled epistemology that equates the volume of data with better understanding. Yet these limitations haven’t stopped tech companies from adopting the technology wholesale and, indeed, in some cases (Google and Facebook, to name just two) from betting heavily on it.

Given that, you can imagine how top executives respond to pesky researchers who point out the ethical difficulties implicit in such Faustian bargains. It may not be much comfort to Gebru and Mitchell, but at least their traumatic experience has sparked some promising initiatives. One is a vigorous campaign by some of their colleagues at Google. Even more promising is a student campaign – #recruitmenot – intended to persuade fellow students not to join tech companies. After all, if they wouldn’t work for a tobacco company or an arms manufacturer, why would they work for Google or Facebook?

What I’ve been reading

Evolution or revolution?
There is a thoughtful post by Scott Alexander on his new Astral Codex Ten blog about whether radical change is better than organic development.

Extremism online
Issie Lapowsky wonders why social media companies were better at taking down Isis than at curbing domestic terrorists in an interesting Protocol essay.

The only certainty
Doc Searls has written an amazingly vivid meditation on Medium about the planetary impact of our species and why death is a feature, not a bug.
