By: River Terrell
Racism on Social Media
We have long heard that rumors, misinformation, and all sorts of bad information are widely spread on social media. It turns out that racist speech is also one of these sorts of bad information. Researchers at The University of Texas at Austin recently studied the spread of racist speech on the social media platform Parler, and found that it spreads a lot like misinformation. Scientists in the Computational Media Laboratory (CML), directed by Professor Dhiraj Murthy, studied over forty-four million comments made by four million Parler users. The lead author of this work, Akaash Kolluri, said, “Users on Parler who engaged with a lot of racist content tended to post more racist content.”
You can think about the spread of racism on social media as being similar to how infectious diseases are transmitted. For example, the study shows that interacting with an “infected” (racist) person on social media makes a person more likely to use racist speech themselves. The more a person interacts with racist posts, and leaves racist comments on those posts, the more likely they are to eventually create new posts with racist content, starting the cycle over again. We know from viruses that the less someone interacts with an infectious agent, the lower the chance that they become infected. The same holds for the “racism virus” idea: non-racist posts did not usually attract racist comments. The parent post sets the tone, and comments on racist posts were more likely to contain racist references. It’s important to note, though, that not all people who interacted with and commented on racist posts were spurred to racism themselves. Just as not everyone who is exposed to a virus will get sick, some people seem to have a natural resistance to racism, seeing racist posts but not creating any or encouraging racism.
So What?
Over the last several years, we have seen that racist hate speech spread on social media can lead to violence in the real world. The recent U.S. Capitol riot is one example. On January 6, 2021, a violent mob of Trump supporters overran and assaulted police, vandalized the U.S. Capitol, and attempted to disrupt the certification of the 2020 U.S. presidential election. The connection between racism and violence should be obvious, and it is backed up by the fact that many of the rioters were posting racist content on social media, specifically on Parler. In fact, past research has shown that racist hate speech on Twitter from a given location was connected to race-based hate crimes in the same location. It shouldn’t be a surprise, then, that many of the rioters who brought racist signs and symbols with them to the Capitol (such as Confederate flags) were also known members of white supremacist organizations that regularly encourage violence.
Racism is Hateful and Harmful
Hate speech is any kind of communication, written or spoken aloud, that attacks or uses derogatory or discriminatory language in reference to a person or a group based on who they are. This can be based on religion, ethnicity, nationality, race (or skin color), descent, gender, age, disability, or many other identity factors.
One example of this occurred near the start of the COVID-19 pandemic, when rumors that people with dark skin might be immune to COVID-19 spread on social media. We know that many factors contributed to differences in death rates among races during the COVID-19 pandemic. However, rumors like this one were extremely dangerous in places like Louisiana and Chicago, where about 30% of the population was Black, yet populations of color accounted for 70% of the COVID-19 deaths. In April of 2020, the APM Research Lab measured the COVID-19 death rate of Americans and found that Black, Indigenous, and Latino Americans had a death rate at least three times that of White Americans. In Chicago and Louisiana, the death rate for Black people was closer to five times the rate for non-Black people. Additionally, plenty of other studies showed that malicious COVID-19 narratives were linked to violence against people of Asian descent.
Equal Opportunity Racism? Not a Chance
This study found that some groups were targeted more than others. The research focused on racist comments toward four groups: 1) Jewish people, 2) people of Muslim faith, 3) people from any national origin who identify as Black, and 4) people of Asian/Pacific Islander origin. Anti-Black and anti-Muslim posts were over two hundred times more likely to also have comments that echoed the racist content of the post. In short, racists of a feather flock together. If the original post is anti-Black and the comments are racist, the comments will likely be anti-Black as well. Hate speech on social media tends to target one group, and as it spreads, it sticks to targeting that same group.
“Trying on” Racism in Comments First
Parler was first introduced as a Twitter alternative. Unlike Twitter, which enforced community guidelines for what content could be posted on its platform, Parler did not take down a significant amount of hateful and racist speech. Parler’s popularity grew substantially when Twitter began deleting posts with false or misleading information and suspending the accounts that posted it. With little to no moderation, Parler was a bit like the Wild West. However, rather than starting out as overtly racist right out of the gate, many people were hesitant to share their own original racist content without first seeing other people share racism. This makes sense because many people who harbor racist views tend not to express them publicly, so most would rather start by making racist remarks in the comment sections of other posts.
The unfortunate truth is that many racist posts from Parler did go viral, making some people famous. And the larger amount of racist content on Parler compared to other social platforms gave people the impression that it was okay. It could have been especially appealing to someone who had internally embraced racist views but who had never felt comfortable writing them on other social media platforms for fear of disapproval. It makes sense that these people would first post their own racist views in a less visible, safer comment section. As Parler was used more, it became clear that racist content on the platform often got more attention: likes, interactions, and comments. This likely signaled to people who were internally racist that it was okay to share racism publicly, especially since those who did seemed to get more attention and prestige. On a human level, social inclusion is a basic need. We have a deep desire to be around like-minded people, and this has powerful psychological effects on human behavior. So the combination of having racist thoughts and then finding a safe place to express them, in a community that would not cast you out for sharing them, could trigger the urge to create new racist content and post it.
It’s worth highlighting again: it is not just reading a racist post that inspires people to create new racist content. It is the act of joining in, experimenting first with posting a racist comment. If people do not engage with racism in the first place, they are unlikely to spontaneously begin posting racist ideas. And if they do encounter it but don’t like or comment on it, they are also unlikely to post racist content. It is only the gradual slope of engaging with racist posts and realizing that it’s safe and “acceptable” to do so that spurs people to create original racist posts. On Parler, people were much more likely to create racist content after first “trying out” racism in the comment sections of others’ posts. And it did not take much practice. Many people who posted original racist content did so after making just one or two comments (racist or not) on racist posts. The study showed that people who took the step of engaging with a racist post went on to post, on average, 160% more racist content afterwards.
Silver Linings
- Most people on Parler did not post racist content.
- Previous studies have shown that Parler is more racist than Twitter, but this study showed that most posts on Parler were not racist. Also, the act of engaging with racism did not guarantee that people on Parler went on to post racist content. Not everyone on Parler was or became racist. For example, over 57,000 people on Parler commented on racist posts, but only just over 14,000 of those users ever posted racist content.
- Posts without racism were “upvoted” more than posts with racism.
- People on Parler were more likely to “upvote” posts that were not racist. Most people on social media intuitively understand that an “upvote” helps determine what posts are more visible to other users. It appears that a large number of people on Parler attempted to use this “power” for good, refusing to encourage or share content that was racist.
Key Takeaways
- Racism on social media is contagious – both from posts to comments and from user to user.
- Users who frequently engaged with racism were more likely to go on and post racist content themselves.
- Given the real-world violence that racism often incites, there is an urgent need to create innovative strategies to address and fight online hate speech.
Credit to Akaash Kolluri for consulting on research and quotes
Contributor: Ubah-Kamillo Moallim