- Ana Pais (@_anapais)
- BBC World News
It was April 1994 when one of the most violent episodes in modern history took place in Rwanda: in just 100 days, more than 800,000 people were massacred, many of them with machetes.
Those who survived the genocide and those who have studied it agree that the hostile rhetoric used for years by the ruling ethnic group, the Hutus, against the Tutsi ethnic group was key in the violence that the former unleashed on the latter.
Perhaps the best explanation of the role of speech in the massacre was heard years later, when a witness stated that a local radio station had been responsible for "spreading gasoline throughout the country little by little, so that one day it could set the whole country on fire."
These types of messages capable of inspiring mass violence are the focus of study of the Dangerous Speech Project, created and directed by the American Susan Benesch.
"You cannot make a list of words considered dangerous, because it all depends on who says it, to whom, and under what circumstances," she explains to BBC Mundo.
That is why her team analyzes messages in their respective contexts and works on methods to avoid their devastating consequences.
Her line of research has gained special relevance at a time when the internet and, in particular, social networks have become amplifiers of these dangerous discourses.
In fact, Benesch, who is also a faculty associate of the Berkman Klein Center for Internet & Society at Harvard University, recently submitted a proposal to Facebook's Oversight Board on account and content moderation.
But before getting there, it is important to understand what dangerous speech is and the role of social media in its increase and lack of control.
The violent and their accomplices
According to the project's definition, dangerous speech is "any form of expression (for example, spoken, written, or in pictures) that may increase the risk that its audience will commit or condone violence against members of another group."
The definition has three interesting points to develop.
First of all, it speaks of "increasing the risk" of violence rather than provoking it, since it is generally difficult to establish a direct cause-and-effect relationship when it comes to words.
On the other hand, this definition equates those who execute violence with those who justify it.
"In every process of mass violence we see that, for every person who commits it, there are many more who agree," says Benesch.
The academic adds that "even in cases of violence on a very large scale, such as the well-known and terrible examples of the Holocaust or the genocide in Rwanda, those who directly commit the violence are a very small percentage of the population. But they would not do so if they did not feel that others agreed."
This support can range from family members and neighbors to political or religious leaders.
Third and last, dangerous speech involves violence by one human group against another, where a logic of "us versus them" operates.
According to the Dangerous Speech Project, the dividing line between the two groups is usually drawn along race, ethnicity, religion, class, or sexual orientation, which is why this concept can be confused with hate speech.
Although the two concepts overlap in some respects, dangerous speech falls into a "narrower and more specific category, defined not by a subjective emotion such as hatred, but by its capacity to inspire a harm that is easy to identify: mass violence," its website explains.
In addition, dangerous speech does not always incite hatred: "It often instills fear, which can be as powerful as hatred in inspiring violence," the site details.
The effect of social media
“In many ways the internet and social networks have not changed the rules of the game,” says Benesch, explaining that “since the beginning of history negative leaders have incited their followers to violence.”
Nevertheless, the digital age offers advantages for the dissemination of these messages.
"Today even the most powerful and influential leaders can communicate much more directly with many more people," she says.
Benesch then mentions the most notorious recent example: Donald Trump.
"When Trump wrote on Twitter, literally millions of people were reading it directly," she says.
And while she acknowledges that radio served a similar function in its time, social networks "give the impression of a more intimate and personal contact."
But Benesch didn’t just cite Trump as an example of an influential tweeter.
The word “fight”
Last month, the now former president of the United States was subjected to what would be his second impeachment (and second acquittal) for his role in the January 6 rally that ended with the storming of the Capitol and five deaths.
Specifically, Congress accused him of “incitement to insurrection”.
That January 6, Trump gave a speech to thousands of followers where he called on them to walk to the Capitol and “fight” because if not, “we will no longer have a country.”
During that rally he would end up using the verb "fight" 14 times. And although he never explicitly told his followers to enter the Congress building, that word would become key in the trial.
“Trump’s lawyers tried to defend him by saying that many, many Democratic politicians have also used the word ‘fight’ and have even tried to convince their followers to fight,” says Benesch.
But according to the academic, the situations are not comparable because of the context: in the Democrats' examples, "the word did not have the same ability to persuade the audience to commit violence."
The last drop
Although Congress exonerated Trump, the events led Twitter to permanently suspend his account “due to the risk of further incitement to violence,” the social network reported at the time.
The problem, according to Benesch, is that "for every Trump there are tens or hundreds of little Trumps (trumpcitos) who now manage to communicate directly with many followers, and they are the ones who, in the pre-internet past, would not have been able to reach audiences so large, so diverse, or so dispersed."
And she clarifies: "Being smaller does not make them less dangerous in their ability to convince people to commit violence. A hundred trumpcitos could do a lot of damage."
What the assault on the Capitol did was force social media companies to take action against dangerous speech, putting them in an uncomfortable position with respect to the right to freedom of expression.
"Returning to the metaphor of the witness in the Rwanda case," says Benesch, "the point is that if someone incites violence by spilling drops of gasoline little by little over months or years, how is it possible to identify the drop that will set everything on fire?"
In her experience, what Facebook, Twitter, and other large platforms do to decide which content and accounts to remove is ask themselves two questions.
The first is how the language in question could be interpreted.
Benesch explains: "We know that Trump always expresses himself in ambiguous language, but he is not the only one. That is classic of people who incite violence. In general, they do not say directly: 'Go and massacre such and such.'"
The second question is what the author of the message meant.
"We will never be able to know what is inside Trump's head, or anyone else's, so it is not worth wasting time on that," the academic affirms.
"What is very important is how his followers are interpreting Trump, especially those who, for different reasons, are more prone to violence," she continues.
This is where her proposal for Facebook's Oversight Board comes in, published earlier this month in the American magazine Noema.
Step by Step
Benesch's idea starts from a development that is "not very difficult from a technical point of view, which is to create so-called classifiers, that is, pieces of software that identify a certain type of language."
The objective, she continues, is "to look for significant changes in the language with which people are reacting to an influential account that may be inciting violence."
Once these significant changes are detected, humans must judge the content.
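The two-stage pipeline described above (software flags a significant shift in the language of an account's followers, then humans judge the content) can be illustrated with a toy sketch in Python. This is not Benesch's system or any platform's actual classifier: the word list, thresholds, and sample replies below are invented for illustration, and a production classifier would be a trained statistical model rather than a keyword match.

```python
# Toy sketch of a "significant language shift" detector among replies
# to an influential account. All terms and thresholds are hypothetical.

# Hypothetical word list; a real classifier would be a trained model.
VIOLENT_TERMS = {"fight", "attack", "destroy", "storm"}

def violence_rate(replies):
    """Fraction of replies containing at least one flagged term."""
    if not replies:
        return 0.0
    hits = sum(
        1 for text in replies
        if VIOLENT_TERMS & set(text.lower().split())
    )
    return hits / len(replies)

def significant_shift(baseline_replies, recent_replies, ratio=3.0, floor=0.05):
    """Flag when the recent rate is several times the baseline rate and
    above a minimum floor; flagged cases would go to human reviewers."""
    base = violence_rate(baseline_replies)
    recent = violence_rate(recent_replies)
    return recent >= floor and recent >= ratio * max(base, 0.01)

baseline = ["great speech", "love this", "so true", "well said"] * 5
recent = ["time to fight", "storm the building", "we must attack", "agreed"]
print(significant_shift(baseline, recent))  # prints True
```

The design mirrors the division of labor in the proposal: cheap automated screening over large volumes of replies, with the consequential judgment reserved for humans.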
"If necessary, according to my proposal, the platform would inform the owner of the account about how it is being interpreted by a significant number of its followers, and would request that, if that is not what the account seeks to convey, it say so publicly and clearly on the platform."
If the person responsible for the account refuses to do so, then the social network must act.
“My interest is what can be called consequentialist,” Benesch acknowledges.
And she adds: "My research is not designed to try to fix what is going on in the mind of someone like Trump, but to improve the consequences of dangerous speech in the real world."