Mainstreaming Murder: Stochastic Terrorism and Social Media

On Jan. 7, 2025, Mark Zuckerberg announced an end to fact-checking on all of Meta’s platforms, including Instagram, Facebook, and Threads, and introduced a “Community Notes” feature. The move comes four years after Twitter introduced “Birdwatch,” an alternative to fact-checking that relied on enrolled users to review and rate posts for accuracy. Elon Musk renamed the program “Community Notes” after buying the site in 2022; he rebranded Twitter itself as “X” in 2023. Zuckerberg cited both a shift in public opinion on free speech and the example set by Musk’s policies as influences on his company’s decision.
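
Community Notes-style systems are not simple upvoting: X has published its Community Notes algorithm, which uses matrix factorization to surface only notes rated helpful by users who normally disagree with one another. The Python sketch below is a deliberately minimal illustration of that “bridging” idea; the viewpoint scores, thresholds, and helper function are hypothetical, not the production system.

```python
# Minimal sketch of the "bridging" idea behind Community Notes.
# The real, open-source algorithm learns rater viewpoints via matrix
# factorization; the scores and thresholds here are hypothetical.

def note_is_shown(ratings, min_helpful=0.6, min_spread=1.0):
    """ratings: (viewpoint, is_helpful) pairs, where viewpoint is a
    learned coordinate (negative = one camp, positive = the other)."""
    if not ratings:
        return False
    helpful_share = sum(h for _, h in ratings) / len(ratings)
    helpful_views = [v for v, h in ratings if h]
    # Require agreement across a wide ideological spread, not just volume.
    spread = max(helpful_views, default=0) - min(helpful_views, default=0)
    return helpful_share >= min_helpful and spread >= min_spread

# A note rated helpful by only one camp stays hidden;
# one rated helpful across camps is shown.
print(note_is_shown([(-1.2, 1), (-0.9, 1), (-1.0, 1), (1.1, 0)]))  # False
print(note_is_shown([(-1.1, 1), (-0.2, 1), (1.0, 1), (0.8, 0)]))   # True
```

The design goal is that a note cannot be pushed live by one side's sheer numbers; whether the crowd-sourced version works at the scale of Meta's platforms is the open question this article returns to below.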

After Musk took over Twitter in 2022, hate speech on the platform spiked noticeably, and views and likes on hateful posts remained elevated until at least May 2023, when changes to X’s data access cut off the researchers tracking them. Although no specific policy change could be cited to account for these upticks, Musk’s ownership of the site, his history of offensive posts, and his self-proclaimed free speech absolutism undoubtedly contributed.

Current U.S. law protects hate speech provided it does not incite, attempt to incite, or threaten immediate violence or harassment. Yet protected hate speech carries real and often measurable consequences today, especially online.

“Stochastic terrorism” is a term coined in 2002 by mathematician and risk analyst Gordon Woo to describe the connection between the fear stoked by widespread media coverage of events like the September 11 attacks and seemingly random acts of terror. The term has since come to mean any hateful speech on mass media platforms that could support or motivate unpredictable, individual attacks against particular people or groups. On social media, these dangerous messages reverberate faster than ever before.

Using and promoting hateful rhetoric in public, regardless of intent, helps build a community that normalizes hate and bigotry. That community, in turn, gives rise to “lone wolf” attackers and smaller extremist groups who come to believe their views are widely supported.

One of the most infamous examples of online support for a hate crime came in 2019, when a lone gunman killed 51 people at two mosques in Christchurch, New Zealand. The shooter posted his 74-page manifesto on 8chan, an image board with a history of right-wing extremism, along with a pledge to “stop sh*tposting” and “make a real life effort post.” In the aftermath of the shooting, clips of his livestream were reposted thousands of times and garnered millions of views, with some distributors and viewers going so far as to suggest attacks of their own or leave criminally offensive comments.

In 2022, a lone gunman opened fire at a supermarket in a predominantly Black neighborhood of Buffalo, New York, killing ten people and injuring several others. He had been influenced by the Christchurch shooter and by online forums steeped in white supremacy and race-based conspiracy theories. Before the attack, he posted online, “It’s time to stop sh*tposting and time to make a real-life effort sh*tpost,” wording nearly identical to the Christchurch shooter’s post announcing his attack.

These patterns have reached the mainstream. President Trump has repeatedly used tactics that can be considered stochastic terrorism. Michigan Governor Gretchen Whitmer ran afoul of Trump in early 2020 by pushing for safety measures as the COVID-19 pandemic began spreading through the United States. On Apr. 17, 2020, Trump tweeted “LIBERATE MICHIGAN!” alongside other provocative but generalized criticism of Whitmer and calls for action in Michigan.

Emboldened by Trump’s support of an armed protest at the Michigan State Capitol, a group of extremists who had already spent months planning to kidnap and assassinate Whitmer read his tweets and comments as a call for more immediate action. By the end of 2023, five of the eight defendants had been found guilty on charges related to the plot to kill the governor.

With Donald Trump’s emergence as a leading political figure in the United States, the Overton window, the range of views the majority of the population and major media outlets consider acceptable or appropriate, has shifted substantially to the right. Members of far-right groups like the Proud Boys and the Oath Keepers now receive recognition and even support from Trump.

This endorsement has real-world effects. Hate crimes rose consistently during the President’s first campaign and first term, and minority groups bore the burden most heavily: hate crimes targeting Muslims rose 67% in 2015. These spikes in violence tracked Trump’s repeated, divisive, and hateful comments about ethnic and religious minorities.

Incidents like the Christchurch shooting were incubated in part on obscure, fringe message boards like 8chan. With the rise of MAGA politics and the rightward shift of public discourse in recent years, what was once a relatively fringe set of ideologies is becoming more and more mainstream.

Sites like X are driven by recommendation algorithms that can feed a user who liked a single post or commented under a single video an endless stream of related content. In the case of conservative media, that stream can eventually lead to extremist and “alt-right” sources.

New research shows that Community Notes is an inferior and ineffective form of moderation, especially on platforms the size of X, Facebook, and Instagram.

When Community Notes replaces independent fact-checkers, platforms like Facebook, X, and Instagram can become home to powerful “echo chambers.” These insular conversations make legitimate political discussion, never a strong suit of social media to begin with, even harder to come by. People become trapped in a “virtual reality” that eventually supersedes the real world, deprived of the reality checks that would moderate their belief in extreme and often dangerous conspiracies and ideologies.

In 2017, Kellyanne Conway infamously defended then-White House Press Secretary Sean Spicer’s false claim about the size of the crowd at Trump’s first inauguration as an “alternative fact.” In a social media echo chamber, the actual images of the inauguration never surface, leaving those caught inside to believe wholly in that virtual reality, the alternative fact.

More concerning still, these echo chambers can, and do, amplify hateful, extremist messaging to larger audiences through algorithms designed to maximize engagement, not accuracy. This stands in stark contrast to sites like 8chan, where users must seek out and deliberately engage with particular message boards and forums. Social media lets users slip passively into a feedback loop that leads progressively to more extremist content.
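
To see why passive engagement matters, consider a deliberately simplified simulation, not any platform’s actual ranker, in which a feed reinforces whatever earns a click, under the assumption that more extreme items are more likely to be clicked. Every name and number below is invented for illustration.

```python
# Toy model of an engagement-maximizing feed (hypothetical, not any
# platform's real ranker). Items carry an "extremity" score in [0, 1];
# by assumption, extremity raises click probability, and the ranker
# reinforces whatever gets clicked.
import random

random.seed(0)
catalog = [i / 99 for i in range(100)]     # hypothetical content items
weights = {item: 1.0 for item in catalog}  # the ranker's preferences

def clicked(extremity):
    # Assumed engagement model: click probability rises with extremity.
    return random.random() < 0.2 + 0.6 * extremity

for _ in range(5000):
    # Recommend in proportion to learned weights (exploit what "works").
    item = random.choices(catalog, weights=[weights[i] for i in catalog])[0]
    if clicked(item):
        weights[item] *= 1.05  # reinforce anything that earned a click

top = sorted(catalog, key=lambda i: weights[i], reverse=True)[:5]
print([round(i, 2) for i in top])  # the feed's favorites skew extreme
```

Even this crude loop concentrates recommendations on the most extreme items without anyone explicitly choosing them, which is the passive dynamic described above.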

Freedom of speech is one of the foundational rights of American citizens, and anything that impedes it sets a dangerous precedent. That said, this is a new digital age in which misinformation and hate speech can spread like never before.

In 2022, several weeks after buying Twitter, Musk issued a “general amnesty” covering over 62,000 suspended accounts, including those of neo-Nazis, white supremacists, and conspiracy theorists. Unlike a case-by-case review, which would have been more logistically complex and time-consuming, the blanket amnesty let Musk restore these accounts almost immediately.

More recent studies and reports have also shown that, contrary to Musk’s claim that he is simply removing censorship to facilitate free and fair exchange among viewpoints, X is in reality becoming a “pro-Trump echo chamber.” This, in turn, helps explain the uptick in hate speech and harmful misinformation, two things Trump has made hallmarks of his political persona.

In 2019, Mark Zuckerberg faced intense criticism during congressional hearings over Facebook’s apparent lack of fact-checking measures. Two years later, Zuckerberg went as far as suspending Trump’s Facebook account for two years over his role in the Jan. 6 insurrection. Now, in 2025, he has completely changed his tune, offering olive branches in the form of Community Notes and a $1 million donation to Trump’s inaugural fund. These concessions, coming from a corporation that suspended the President’s accounts just four years ago, highlight how mainstream these extreme and hateful ideologies have become.

Despite these changes, and despite legitimate concerns over the future of free speech in the United States, there is a clear path forward. In 2023, more than two dozen independent experts appointed by the UN released a report detailing how social media companies can moderate expression on their platforms. The recommendations focus specifically on hate speech and build on established international doctrines, a foundation that keeps the changes specific and targeted and prevents them from being generalized to attack political or personal opponents. The experts also asked governments to make long-term commitments to enforcing moderation laws built on these conventions, the only way to ensure the changes last more than a few months or years, and urged platforms to do more not just to fight hate speech but to promote tolerance and equality.

Former Supreme Court Justice Oliver Wendell Holmes Jr. is famously credited with the line, “The right to swing my fist ends where the other man’s nose begins.” Freedom of speech is a sacred part of American culture and citizenship and a hallmark of democratic nations. Yet, with outbreaks of violence across the globe, it is evident that something must be done to curb the legitimate threat that hate speech poses. This is especially true for private social media companies that can amplify messages and manipulate algorithms, exposing more people than ever before to extreme, hateful ideologies.
