
Instagram under fire over sexualized child images


Instagram is failing to remove accounts that attract hundreds of sexualized comments for posting pictures of children in swimwear or partial clothing, even after they are flagged to it through the in-app reporting tool.

Instagram’s parent company, Meta, claims it takes a zero-tolerance approach to child exploitation. But accounts that have been flagged as suspicious through the in-app reporting tool have been ruled acceptable by its automated moderation technology and remain live.

In one case, an account posting photos of children in sexualized poses was reported, using the in-app reporting tool, by a researcher. Instagram provided a same-day response saying that “due to high volume”, it had not been able to view the report, but that its “technology has found that this account probably doesn’t go against our community guidelines”. The user was advised to block or unfollow the account, or report it again. It remained live on Saturday, with more than 33,000 followers.

Similar accounts – known as “tribute pages” – were also found to be running on Twitter.

One account, which posted pictures of a man performing sexual acts over images of a 14-year-old TikTok influencer, was deemed not to break Twitter’s rules after being reported using the in-app tools – despite the user suggesting in posts that he was seeking to connect with people to share illegal material. “Looking to trade some younger stuff,” one of his tweets said. It was removed after the campaign group Collective Shout posted about it publicly.

The findings raise concerns about the platforms’ in-app reporting tools, with critics saying the content appeared to be allowed to remain live because it did not meet a criminal threshold – despite being linked to suspected illegal activity.

Instagram messages concerning a photo of a child. Photograph: Instagram

Often, the accounts are used for “breadcrumbing” – where offenders post technically legal images but then connect in private messaging groups to share other material.

Andy Burrows, head of online safety policy at the NSPCC, described the accounts as a “shop window” for paedophiles. “Companies should be proactively identifying this content and then removing it themselves,” he said. “But even when it is reported to them, they are judging that it’s not a threat to children and should remain on the site.”

He called for MPs to tackle “loopholes” in the proposed online safety bill – which is intended to regulate social media firms and will be debated in parliament on 19 April. They should, he said, force companies to tackle not only illegal content but also content that is clearly harmful yet may not meet the criminal threshold.

Lyn Swanson Kennedy of Collective Shout, an Australia-based charity that monitors exploitative content globally, said the platforms were relying on external organizations to do their content moderation for them. “We are calling on platforms to address some of these very concerning activities which put particularly underage girls at serious risk of harassment, exploitation and sexualisation,” she said.

Meta said it had strict rules against content that sexually exploits or endangers children, and that it removed such content when it became aware of it. “We’re also focused on preventing harm by banning suspicious profiles, restricting adults from messaging children they’re not connected with and defaulting under-18s to private accounts,” a spokesperson said.


Twitter said the accounts reported to it had now been permanently suspended for violating its rules. A spokesperson said: “Twitter has zero tolerance for any material that features or promotes child sexual exploitation. We aggressively fight online CSE and have heavily invested in technology… to enforce our policy.”

Imran Ahmed, chief executive of the Center for Countering Digital Hate, a non-profit thinktank, said: “Relying on automated detection, which we know cannot keep up with simple hate speech, let alone cunning, determined, child sex exploitation rings, is an abrogation of the fundamental duty to protect children.”


www.theguardian.com
