The Digital Republic by Jamie Susskind review – how to tame big tech


There was a moment when Facebook was a democracy. Blink and you would have missed it, but in December 2012, as part of an initiative announced three years earlier by Mark Zuckerberg, the company unveiled new terms and conditions that it wanted to impose on users. Users were invited to vote, yes or no, on whether the terms should be enacted. The voters were pretty clear: 88% said no, the new terms weren’t acceptable. It was a triumph of people power.

Except that Zuckerberg had imposed a precondition: the decision would only be binding if at least 30% of all users took part. That would have required votes from about 300 million of the roughly 1 billion users the platform then had (it’s since roughly tripled). But just over 650,000 participated. King Zuckerberg declared that the time for democracy was over, and in the future, Facebook – which in reality means Zuckerberg, for he owns the majority of the voting shares – would decide what would happen, without reference to user opinion.
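A quick back-of-the-envelope check of those figures makes the gap vivid. A minimal sketch in Python, using only the approximate numbers reported above (the user base, threshold and turnout are all as stated; nothing else is assumed):

```python
# Quorum arithmetic for Facebook's December 2012 vote,
# using the approximate figures reported above.
users = 1_000_000_000       # roughly 1 billion users at the time
quorum = users * 30 // 100  # 30% participation needed for a binding result
turnout = 650_000           # just over 650,000 votes actually cast

print(f"votes needed for a binding result: {quorum:,}")  # 300,000,000
print(f"actual turnout: {turnout:,} ({turnout / users:.3%} of users)")  # 0.065%
print(f"shortfall: {quorum - turnout:,}")  # 299,350,000
```

Turnout, in other words, was roughly a five-hundredth of what the precondition demanded.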

Since then, the company has been accused of aiding the genocide of the Rohingya in Myanmar, of spreading misinformation during the 2016 elections in the Philippines and the US and during the Brexit referendum, of bringing together violent rightwing extremists who went on to kill in the US, of failing to douse the QAnon conspiracy theory, and most recently of helping to foment the January 2021 US insurrection.

Sure, the 2012 terms and conditions probably didn’t lead to those outcomes. Equally, leaving Facebook to its own devices didn’t “help” prevent them. In 2016 an internal memo by one of its executives, Andrew Bosworth, suggested that such collateral damage was tolerable: “We connect people. That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide… That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools … [but] anything that allows us to connect more people more often is de facto good.”

“Maybe” someone dies in a terrorist attack coordinated on your tools, but overall what we do is good? Even if Zuckerberg distanced himself and Facebook from the remarks, it’s not the sort of language you’d expect to hear from, say, an executive at a nuclear power plant. So why should we accept it from senior people at companies with proven adverse track records? No surprise, then, that the clamour is growing for more regulation of big tech companies such as Facebook, Google (particularly YouTube), Twitter, Instagram and the fast-rising TikTok, which already has more than 1 billion users worldwide.

Into this tumult comes Jamie Susskind, a British barrister who argues that we need a “digital republic” to protect society from the harms indifferently caused by these companies, and provide a framework – legal, ethical, moral – for how we should oversee them now and in the future.

Susskind argues that our present emphasis on “market individualism” – where individuals pick and choose the platforms they interact with, and thus shape which ones succeed or fail – has allowed these companies to create fiefdoms. What we need, he says, is more accountability, which means more oversight of what the companies do. This would be a proper citizens’ republic; rather than relying on the inchoate mass of individuals, a collective focus on responsibility would force accountability and strip away unearned powers.

Facebook’s ‘War Room’ monitoring Brazil’s elections in 2018. Photograph: Bloomberg/Getty Images

Big tech seems like a space where it should be easy to find solutions. Do the companies sell data without permission? (The big tech ones don’t, but there’s a thriving advertising ecosystem that does.) Do their algorithms unfairly discriminate on the basis of race, gender or locale? Do they throw people off their platforms without reason? Do they moderate content unfairly? Then we have a casus belli to litigate and correct.

OK, but how? The problem facing Susskind, and us, is that there are three choices for dealing with these companies. Leave them alone? That hasn’t worked. Pass laws to control them? But our political systems struggle to frame sensible laws in a timely fashion. Create technocratic regulators to oversee them and bring them into line when they stray? But those are liable to “regulatory capture”, where they get too cosy with their charges. None is completely satisfactory. And we are wrestling a hydra; as fast as policy in one area seems to get nailed down (say, vaccine misinformation), two more pop up (say, facial recognition and machine learning).

Susskind suggests we instead try “mini-publics” – most often seen in the form of “citizens’ assemblies”, where you bring a small but representative group of the population together and give them expert briefings about a difficult choice to be made, after which they create policy options. Taiwan and Austria use them, and in Ireland they helped frame the questions in the referendums on same-sex marriage and abortion.

What he doesn’t acknowledge is that this just delays the problem. After the mini-publics deliberate, you are back at the original choices: do nothing, legislate or regulate.

Deciding between those approaches would require a very detailed examination of how these companies work, and what effects the approaches could have. We don’t get that here. A big surprise about the book is the chapters’ length, or lack of it. There are 41 (including an introduction and conclusion) across 301 pages, and between each of the book’s 10 “parts” is a blank page. Each chapter is thus only a few pages, the literary equivalent of those mini Mars bars infuriatingly described as “fun size”.

But a lot of these topics deserve more than a couple of bites; they are far meatier and more complicated. How exactly do you define “bot” accounts, and are they always bad? Should an outside organisation be able to overrule a company’s decision to remove an account for what it sees as undesirable behaviour? If a company relies on an algorithm for its revenues, how far should the state (or republic) be able to interfere in its operation, if it doesn’t break discrimination laws? Bear in mind that Facebook’s algorithms in Myanmar, the Philippines and the US before the 2021 insurrection did nothing illegal. (The Facebook whistleblower Frances Haugen said recently that only about 200 people in the whole world understand how its News Feed algorithm chooses what to show you.) So what is it we want Facebook to stop, or start, doing? The correct answer, as it happens, is “start moderating content more aggressively”; in each case, too few humans were tasked with preventing inflammatory falsehoods running out of control. Defining the correct number of moderators is then a tricky problem in itself.

These are all far from fun-sized dilemmas, and even if we had clear answers there would still be structural barriers to implementation – which often means us, the users. “The truth is that individuals still click away too many of their protections,” writes Susskind, noting how easily we dismissively select “I agree”, yielding up our rights. Fine, but what’s the alternative? The EU’s data protection regime means we have to give “informed consent”, and while the ideal would be uninformed dissent (so nobody gets our data), there’s too much money ranged against us to make that the default. So we tick boxes. It would also have been good to hear from experts in the field such as Haugen, or anyone with direct experience who could point towards solutions for some of these problems. (They too tend to struggle to find them, which doesn’t make one hopeful.) Difficult questions are left open; nothing is actually solved. “This is a deliberately broad formulation,” Susskind says of his recommendation for how algorithms should be regulated.

One is left with the sneaking suspicion that these problems might just be insoluble. The one option that hasn’t really been tried is the one rejected back in 2012: let users decide. It wouldn’t be hard for sites to make voting compulsory, and allow our decisions to be public. Zuckerberg might not be happy about it. But he’d get a vote: just one, like everyone else. That really might create a digital republic for us all.

Charles Arthur is the author of Social Warming: How Social Media Polarises Us All. The Digital Republic: On Freedom and Democracy in the 21st Century by Jamie Susskind is published by Bloomsbury (£25). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply.


