Action on Sexual Abuse Images is Overdue, but Apple’s Proposals Bring Other Dangers | Ross Anderson


Last week, Apple announced two back doors, initially in the US, into the encryption that protects its devices. One will monitor iMessages: if any photos sent to or by users under the age of 13 appear to contain nudity, the user can be challenged and their parents can be informed. The second will see Apple scan all the images on a phone’s camera roll and, if they are similar to known sexual abuse images, mark them as suspicious. If enough suspicious images are backed up to an iCloud account, they will be decrypted and inspected. If Apple believes they are illegal, the user will be reported to the appropriate authorities.

Action on the circulation of images of child sexual abuse is long overdue. Effective mechanisms to prevent image sharing and vigorous prosecution of perpetrators must be given the political priority they deserve. But the measures proposed by Apple do little to address the problem, while providing the architecture for a massive expansion of state surveillance.

Historically, the idea of scanning customer devices for evidence of crime comes from China. It was introduced in 2008, when a system called Green Dam was installed on all PCs sold in the country. It was described as a porn filter, but its main purpose was to search for phrases like “Falun Gong” and “Dalai Lama.” It also made its users’ computers vulnerable to remote takeover. Thirteen years later, technology companies in China are completely subordinate to the state: including Apple, which keeps all the iCloud data of its Chinese customers in data centers run by a state-owned company.

Scanning photos is difficult at scale. First, if a program blocks only exact copies of a known illegal image, people can simply edit it slightly. Less skilled offenders may just go and make new images, which, in the case of sexual abuse images, means new crimes. So a censor wants software that flags images similar to those on the block list. But then there are false alarms, and a small system of the kind that can run on a phone could have an error rate of up to 5%. Applied to the 10 billion iPhone photos taken every day, a 5% false-alarm rate could mean some 500m images a day being sent for secondary screening.
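As a rough back-of-the-envelope illustration of why that error rate matters, the short Python sketch below simply multiplies the figures quoted above; both numbers are the approximate ones in the text, not measured values.

```python
# Back-of-the-envelope arithmetic for the scale of false alarms.
# Both figures are the rough ones quoted in the article, not measurements.
daily_photos = 10_000_000_000   # ~10bn iPhone photos taken per day
false_alarm_rate = 0.05         # ~5% error rate for a small on-device matcher

flagged_per_day = daily_photos * false_alarm_rate
print(f"{flagged_per_day:,.0f} images a day sent for secondary screening")
# -> 500,000,000 images a day
```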

To avoid this, Apple will only act if the primary screening on a phone flags more than a certain threshold of suspicious images, probably about 10 of them. Every photo added to the camera roll will be inspected and, when backed up to iCloud, will be accompanied by an encrypted “safety voucher” indicating whether it is suspicious or not. The cryptography is designed so that once 10 or more vouchers are marked as unsafe, Apple can decrypt the images. If they appear to be illegal, the user will be reported and their account blocked.
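A minimal sketch of that thresholding logic, in Python, is below. It is purely illustrative: Apple’s actual design uses cryptographic machinery (blinded hash matching and threshold secret sharing) so that the server learns nothing until the threshold is crossed, whereas this toy version just counts plaintext flags; the hash function, blocklist and threshold value are all placeholder assumptions.

```python
import hashlib
from dataclasses import dataclass, field

THRESHOLD = 10  # illustrative figure from the article, not a confirmed value

def toy_perceptual_hash(image_bytes: bytes) -> str:
    # Stand-in for a real perceptual hash such as NeuralHash: here just a
    # cryptographic hash, truncated for readability.
    return hashlib.sha256(image_bytes).hexdigest()[:16]

# Hypothetical blocklist of hashes of known illegal images.
KNOWN_BAD = {toy_perceptual_hash(b"known-bad-image-%d" % i) for i in range(3)}

@dataclass
class Account:
    vouchers: list = field(default_factory=list)  # one flag per uploaded photo

    def upload(self, image_bytes: bytes) -> None:
        # Client-side screening: each upload carries a "voucher" recording
        # whether the photo matched the blocklist. (In Apple's design the
        # match result is cryptographically hidden from the server until the
        # threshold is crossed; here it is a plain boolean for clarity.)
        self.vouchers.append(toy_perceptual_hash(image_bytes) in KNOWN_BAD)

    def needs_human_review(self) -> bool:
        # The server only acts once enough vouchers are marked unsafe.
        return sum(self.vouchers) >= THRESHOLD

# Usage: nine flagged uploads stay below the threshold; a tenth trips it.
acct = Account()
for _ in range(9):
    acct.upload(b"known-bad-image-0")
print(acct.needs_human_review())   # False
acct.upload(b"known-bad-image-1")
print(acct.needs_human_review())   # True
```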

A well-known weakness of machine learning systems such as the one Apple proposes is that it is easy to modify a photo so that the system misclassifies it. Pranksters can modify photos of cats so that phones flag them as abuse, for example, while gangs selling actual abuse images figure out how to sneak past the censor. But images are only part of the problem. Interestingly, Apple proposes to do nothing about live streaming, which has been the dominant medium for online abuse since at least 2018. And the company has said nothing about how it will track where illegal footage is coming from.
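The toy sketch below illustrates that weakness in the simplest possible setting, assuming nothing about Apple’s actual model: it builds a trivial linear classifier and shows how a small, targeted per-pixel change flips its decision.

```python
import numpy as np

# A toy linear "classifier" over a flattened 8x8 image: label = sign(w.x + b).
rng = np.random.default_rng(42)
w = rng.normal(size=64)
b = 0.0
x = rng.normal(size=64)                 # an arbitrary input "image"

score = w @ x + b
print("original label:", int(score > 0))

# Adversarial-style perturbation: nudge every pixel by just enough, in the
# direction of the gradient (which for a linear model is simply w), to push
# the score across the decision boundary.
eps = 1.1 * abs(score) / np.abs(w).sum()        # smallest step that flips it
x_adv = x - np.sign(score) * eps * np.sign(w)

print("perturbed label:", int((w @ x_adv + b) > 0))
print("largest per-pixel change:", round(float(eps), 4))
```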

But if the technical questions are difficult, the political questions are much more difficult. Until now, democracies have allowed government surveillance in two sets of circumstances: first, if it is limited to a specific purpose; second, if it is aimed at specific people. Examples of special-purpose surveillance include speed cameras and the software in copiers that stops you copying banknotes. Targeting specific people generally requires paperwork, such as a court order. Apple’s system looks like the first kind, but once it is built into phones, Macs and even watches, in a way that bypasses their security and privacy mechanisms, it could be made to search for anything else, or anyone else, that a government demands.

These concerns are not abstract. Nor are they limited to countries considered authoritarian. In Australia, the government threatened to prosecute a journalist over photos of Australian troops killing civilians in Afghanistan, arguing that the war-crime images were covered by national security laws. What’s more, Australian law empowers ministers to force companies to retrain an existing surveillance system on different images, vastly expanding the scope of Apple’s proposed spying.

Closer to home, the European Union has just updated the law that allows tech companies to scan communications for illegal images, and has announced a child protection initiative that will extend this to “grooming,” requiring companies to scan text as well. In Britain, the Investigatory Powers Act already enables ministers to order a company to adapt its systems, where practicable, to assist with interception. Your iPhone may start out silently searching for missing children, but it could end up searching for the people on a police “most wanted” list too.

Legally, the first big fight is likely to be in the United States, where the constitution prohibits general warrants. But, in a case involving a drug-sniffer dog, a court has already found that a search that can detect only contraband is lawful. Expect the supreme court to hear privacy advocates argue that your iPhone is now a bug in your pocket, while Apple and the FBI contend that it is just a sniffer dog.

Politically, the tech industry has often resisted pressure to expand surveillance. But now that Apple has broken ranks, it will be harder for other companies to resist government demands. Child protection online is an urgent issue, but this proposal will do little to prevent these gruesome crimes, while opening the floodgates to a significant expansion of the surveillance state.


www.theguardian.com
