Thursday, April 18

Social networks are on alert for the anniversary of the assault on the Capitol


New York (CNN Business) – January 6 was a clear turning point for the major social media platforms, as they demonstrated that, under certain circumstances, they were willing to deplatform a sitting president of the United States. But some experts worry that they still haven’t done enough to address the underlying issues that allowed Trump supporters and others on the far right to be duped, radicalized and organized using their platforms.

Heading into the anniversary, Meta (the parent company of Facebook), Twitter and YouTube say they have been monitoring their platforms for harmful content related to the Capitol riots.

“We have strong policies that we continue to enforce, including banning hate organizations and removing content that praises or supports them,” a Meta spokesperson told CNN, adding that the company has been in contact with law enforcement agencies, including the FBI and the Capitol Police, around the anniversary. As part of its efforts, Facebook is proactively monitoring content praising the assault on the Capitol, as well as content calling on people to carry or use weapons in Washington, according to the company.

“We continue to actively monitor threats on our platform and will respond accordingly,” the Meta spokesperson said.

Twitter convened an internal task force with members from various parts of the company to make sure the platform could enforce its rules and protect users around the one-year mark of January 6, a Twitter spokesperson told CNN.

“Our focus, both before and after January 6 [2021], has been cracking down on accounts and tweets that incite violence or have the potential to lead to offline harm,” the spokesperson said, adding that Twitter also has open lines of communication with federal officials and law enforcement agencies.

YouTube’s Intelligence Desk, a group tasked with proactively finding and moderating problematic content, has been monitoring trends in content and behavior related to the Capitol riots and their anniversary. As of Wednesday, the company had not detected an increase in content promoting new conspiracy theories related to January 6 or the 2020 election that violates its policies, according to spokeswoman Ivy Choi.

“Our systems are actively surfacing high-authority channels and limiting the spread of harmful misinformation on election-related topics,” Choi said in a statement.

These efforts come after Facebook, Twitter, YouTube and other platforms faced intense criticism over the past year for social media’s role in the crisis. The companies, for their part, have largely argued that they had strong policies in place even before the Capitol riots and that they have only tightened protections and enforcement since.


When the rioters escalated their attack on the Capitol on January 6, breaking into the building, ransacking congressional offices and overpowering officers, social media platforms were quick to do what they could to stem the fallout, first by labeling President Trump’s posts, then deleting them, and then suspending his account entirely.

But some experts wonder if the approach to moderation has changed substantially in the last year.

“While I certainly hope they have learned from what happened, if they have, they haven’t really reported it publicly,” said Laura Edelson, a New York University researcher who studies political communication online.

This is especially concerning, Edelson says, because misinformation about the assault and the conspiracy theory that the election was stolen could resurge around the anniversary. “Much of the narrative within the far-right movement is that, first, [the assault] wasn’t that bad, and second, it was actually others who did it,” she said.

In interviews leading up to the January 6 anniversary, some Trump supporters in Washington told CNN they believe that Democrats or the FBI were responsible for the attack.


Facebook’s response to January 6

Facebook, now a division of Meta, was the social media platform that came under the most scrutiny over January 6, due in part to internal documents leaked by whistleblower Frances Haugen, which showed that the company had rolled back protections it had implemented for the 2020 election before January 6 of last year. Haugen told the SEC in a filing that the company only re-implemented some of those protections after the assault began.

Days after the Capitol riots, Facebook banned “stop the steal” content. And, internally, researchers analyzed why the company was unable to stop the movement from growing, according to documents since released by Haugen (and obtained by CNN from a congressional source). Meta has also taken steps to “disrupt militarized social movements” and prevent QAnon and militia groups from organizing on Facebook, Meta Vice President of Integrity Guy Rosen said in an October blog post about the company’s efforts around the 2020 election.


Meta disputed Haugen’s claims and tried to distance itself from the attack. Nick Clegg, the company’s Vice President of Global Affairs, told CNN in October that it is “ridiculous” to blame the riots on social media. “Responsibility for the violence on January 6 and the insurrection on that day rests squarely with the people who inflicted the violence and those who encouraged it,” Clegg said.

However, researchers say the company continues to struggle to combat disinformation and extremist content.

“We haven’t really seen any substantial change in moderation of Facebook content that they have talked about publicly or that has been externally detectable,” Edelson said. “It appears that externally they continue to use rather rudimentary keyword-matching tools to identify problematic content, be it hate speech or misinformation.”

Meta noted in a September blog post that its artificial intelligence systems have improved at proactively removing problematic content, such as hate speech. And in its November report on enforcement of its community standards, the company stated that the prevalence of views of hate speech, relative to other types of content, had decreased for the fourth consecutive quarter.

A new report released Tuesday by the technology advocacy and research group Tech Transparency Project (TTP) found that content related to the “Three Percenters,” an anti-government extremist group whose followers were charged in connection with the January 6 assault, remains widely available on Facebook; some of it uses “militia” in group names or includes well-known symbols associated with the group. According to the report, when TTP researchers examined this content, Facebook’s “suggested friends” and “related pages” features recommended accounts or pages with similar imagery. (TTP is funded in part by an organization founded by Pierre Omidyar.)

“As Americans approach the first anniversary of the assault, TTP has found many of the same troubling patterns on Facebook, as the company continues to overlook militant groups that pose a threat to democracy and the rule of law,” the report states, adding that “Facebook’s algorithms and advertising tools often promote this type of content to users.”

“We have removed several of these groups for violating our policies,” Meta spokesman Kevin McAlister said in a statement to CNN about the TTP report.

Facebook says it has removed thousands of groups, pages, profiles and other content related to militarized social movements and has banned such organizations, including the Three Percenters, noting that the pages and groups cited in the TTP report had a relatively small number of followers.


Other actors

It is clear that the disinformation landscape extends well beyond Facebook to more fringe platforms, such as Gab, which gained popularity after January 6 thanks to their promises not to moderate content, while the largest companies faced calls to take action against hate speech, disinformation and violent groups.

In August, the House Select Committee investigating the deadly January 6 riot at the Capitol sent letters to 15 social media companies, including Facebook, YouTube and Twitter, seeking to understand how misinformation spread on their platforms, as well as efforts to overturn the election by both foreign and domestic actors.

Six days after the attack, Twitter said it had removed 70,000 accounts spreading conspiracy theories and QAnon content. Since then, the company says it has removed thousands more accounts for violating its policy against “coordinated harmful activity,” and it also says it bans violent extremist groups.

“Engagement and focus across government, civil society and the private sector are also critical,” the Twitter spokesperson said. “We recognize that Twitter has an important role to play, and we are committed to doing our part.”

YouTube said that in the months leading up to the Capitol riots it had removed the channels of various groups later associated with the attack, such as those related to the Proud Boys and QAnon, for violating its existing policies on hate, harassment and election integrity. During the assault and in the days after, the company removed live streams of the riots and other related content that violated its policies, and YouTube says its systems are more likely to direct users to authoritative sources of information about the elections.

“Over the past year, we have removed tens of thousands of videos for violating our policies related to US elections, most before they reached 100 views,” said Choi of YouTube. “We remain vigilant ahead of the 2022 elections, and our teams continue to monitor closely and act quickly against election misinformation.”

– CNN’s Oliver Darcy contributed to this report.
