Online conspiracy theories and false claims about Brazil’s election results sparked riots in the country’s capital last weekend as extremists copied the digital playbook used in the attack on the Capitol on Jan. 6, 2021.
For months, researchers and advocates have been warning tech companies to take action to mitigate the potential for real-world harm in Brazil. Now they say mainstream giants are doing too little, too late, to curb the spread of unrest-fueling content — a pattern they say is common in Silicon Valley.
With social media companies lacking what they consider to be adequate safeguards, experts warn that a rise in online political vitriol risks sparking similar incidents in the future.
“When we see this kind of content, this kind of attitude, repeated over and over again, it’s very worrying,” said Flora Rebello Arduini, campaign director for the advocacy group Sum of Us.
“History is repeating itself, not in a positive way,” Arduini said.
Mobs supporting former Brazilian President Jair Bolsonaro attacked Brazil’s Congress, Supreme Court and presidential palace on Sunday, falsely claiming that President Luiz Inácio Lula da Silva had stolen the country’s October election.
Like the attack on the U.S. Capitol two years ago, the riots in Brazil came after extremists promoted those narratives online, sowing the seeds for calls for violence.
Nina Santos, a postdoctoral researcher at Brazil’s National Institute of Science and Technology for Digital Democracy, said Bolsonaro’s network had been active online over the past year, reinforcing rhetoric against the legitimacy of electronic voting and “questioning the entire electoral process.” That, she said, set the stage for scenes such as the violence that erupted over the weekend, with rioters calling for “support for the strength of the country” that they believed had been “stolen.”
In some cases, the voices driving this discussion were virtually the same before the riots in Brazil and before the Jan. 6 riot. Meghan Conroy, a researcher at the Atlantic Council’s Digital Forensic Research Lab who served as an investigator on the House select committee investigating the Jan. 6 attack, said some of the key figures “really spearheading” the narratives surrounding Jan. 6 also “played a huge role in bringing about the same conditions in Brazil’s election.” Right-wing influencers and media personalities like Steve Bannon and Alex Jones used their online presence to promote false narratives about the Brazilian election.
The images of Brazil’s unrest on Sunday echoed those of Jan. 6, and “it’s no accident that they were inspired,” Conroy said.
Another similarity in the incidents, Conroy said, is the failure of social media companies to adequately address the rhetoric that fueled them. While platforms took action before and after the riots in Brazil, as they did in the weeks around Jan. 6, “these steps weren’t enough,” she said.
“[Former President Trump and Bolsonaro] have generated undying loyalty and enough support that their supporters will storm federal buildings, disrupt the democratic process, or protest democratic outcomes. But social media platforms, online forums and messaging apps hosted the disinformation and coordination, and I don’t think these riots would have happened without social media,” she added.
In September, a group of more than 100 civil society organizations released a policy assessment of Twitter, YouTube, Facebook and Instagram, as well as messaging platforms WhatsApp and Telegram, ahead of Brazil’s October general elections.
According to the assessment, no platform other than Twitter had a policy in place to prevent instigating uprisings against the democratic order or interfering with the peaceful transfer of power.
Since the report was published, Twitter has changed ownership, taken itself private and revoked many of its previous content moderation policies following Elon Musk’s $44 billion acquisition of the company. It was unclear what steps the Musk-controlled Twitter may have taken before or after the riots in Brazil, and a Twitter spokesperson did not respond to a request for comment.
The Sum of Us study, conducted between September and December, also found that platforms failed to mitigate the spread of false narratives online.
The content was also able to circulate widely online after the unrest began last weekend. By Monday, police appeared to have suppressed most of the violence. But before they could do so, extremist influencers posted livestreams of the attack on government buildings to major social media platforms, amassing hundreds of thousands of views, according to a Sum of Us report released Wednesday.
The researchers analyzed five live streams of far-right YouTube users who participated in the riots and mapped how that content was posted on other platforms, primarily Facebook.
One extremist influencer streamed the unrest live on YouTube non-stop for more than five hours. The account was reportedly shut down on Monday, but not before the riot footage racked up 670,000 views in less than 24 hours.
During the breach, a Sum of Us researcher reportedly opened their personal YouTube page and found the livestream actively recommended on their homepage, without having previously searched for the influencer’s channel.
As of Friday, the same influencer’s Facebook page still featured similar video content.
The report also identified rioting content on TikTok, with at least one video garnering 1.5 million views. All of the TikTok content found was removed this week. A TikTok spokesman declined to comment further on the report.
YouTube and Meta, the parent company of Facebook and Instagram, said in statements that they were removing content that supported or praised the rioters who broke into government buildings.
YouTube spokesperson Ivy Choi said the platform terminated more than 2,500 channels and removed more than 10,000 videos related to Brazil’s election, “the vast majority of which had fewer than 100 views.”
According to YouTube, the channels and videos mentioned in the Sum of Us study are being reviewed, but the platform had previously removed content from several of them.
A Meta spokesperson said the company has designated the situation in Brazil as a “violating event, which means we will remove content that supports or praises these actions.”
Damon McCoy, an associate professor of computer science at New York University and a member of the school’s Cybersecurity for Democracy research group, said that in addition to focusing on what content is removed online, platforms also need to pay attention to how quickly they allow certain content to spread unchecked.
“When they see something going viral on their platform, they can slow it down until a human moderator can review it. I think that approach might be better than just focusing on removing content, because the reality is that algorithmic feeds spread this content across the platform very quickly,” he said.
Experts who spoke to The Hill have broadly pushed platforms to hire content moderation teams that understand not only the language a post is written in, but also the cultural nuances that might allow a post to bypass keyword filters used in the context of a local audience.
Alex Krasodomski, a senior fellow at Chatham House’s Digital Society Programme, said platforms can make policy changes that render them “more indifferent” to certain groups, such as right-wing extremists, but there will always be alternative spaces for those groups to go.
Since Jan. 6, a growing number of alternative platforms have pitched themselves as having minimal content moderation rules to appeal to right-wing users — including Trump’s own Truth Social. They lack the broad user base of mainstream sites, but allow false narratives to breed in right-wing echo chambers.
Another hurdle facing efforts to stop the spread of these claims is the “hypermainstream” nature of the movement behind them, Krasodomski said.
In dozens of countries, he said, it was the “person in charge” or the “main opposition” — including Bolsonaro and Trump — who spearheaded the theories. The same trend is evident across Europe.
“There are a lot of people who question whether the democracy they live in is working for them, and a lot of them have enormous political power. Trying to sift through that and say, ‘Well, no, this is misinformation’ is a very difficult thing. And I’m not envious of the platforms that have to take care of it,” Krasodomski said.