Fact or Fiction? Evaluating Covid-19 News with CoVerifi
Posted on Jul 29, 2022 in News
By Kami Vinton, Trisha Lobo, River Terrell, Zachary Daum
In the age of the Internet, it is increasingly difficult to tell whether the news is real or 'fake'. There are so many convincing imposters that even experts struggle to tell the difference anymore.
CoVerifi is tailored for separating fact from fiction in news about COVID-19! Dr. Murthy helped break down why this tool is so important and how it can have a real-world impact for all of us trying to figure out what information we can trust during uncertain times.
He said, “Due to the scale, speed and sheer diversity of COVID-19-related misinformation, fact-checkers cannot realistically keep up. That is why we created a tool that helps people navigate COVID-19-related news and social media. CoVerifi blends human judgment and knowledge with artificial intelligence to empower people so they can better evaluate the quality of COVID-19-related content.”
Common Rumors about COVID-19
Tell me if you’ve heard any of these at some point throughout the pandemic:
COVID-19 is a hoax.
If I gargle with bleach, it will kill the virus.
It’s just a cold, I’ll be fine.
The vaccine is a ploy to put micro trackers in our arms.
A doctor on Joe Rogan’s show said that hydroxychloroquine prevents COVID, but the government won’t approve it because it is inexpensive.
Only people who eat bats can get it.
Sound familiar? It would be difficult to find one person who had not heard at least one of those rumors during the pandemic.
Misinformation is Deadly—it is a Kind of Disease
Most of those rumors spread like an infectious disease on social media. The World Health Organization (WHO) monitors diseases and threats to public health across the globe, working with public health experts in almost every country to reduce and prevent disease and death. The misinformation and rumors about COVID-19 worried the WHO so much that they labeled it as its own disease of sorts—an infodemic. The WHO observed that COVID-19 was far deadlier in part because the uncontrolled spread of misinformation aided the spread of the virus itself.
Misinformation vs. Disinformation
These terms sound similar, but they are distinct. What's the difference? Disinformation is purposely deceptive and designed to influence. Misinformation is not deliberate; it is simply wrong information, not intended to trick anyone. Disinformation, by contrast, is designed to attract attention, provoke an emotional response, and ultimately trick people into doing something harmful. Unfortunately, the people who create disinformation are very good at it. Studies continue to show that almost all misinformation that has gone viral (pardon the pun) can be traced back to an intentional disinformation campaign. The result is that most disinformation actually does its harm through the spread of misinformation. To be clear, everyday people (like you and me) are responsible for spreading the bulk of misinformation, but we are not doing it on purpose; we share it because we believe it is true and are trying to be helpful. The challenge now is: what can we do about it?
How Disinformation Works
For disinformation to exist, there must be information in the first place. For example, before early warning systems for severe storms, many people died because they did not know that a tornado, tsunami, flood, landslide, avalanche, etc. was imminent.
Disinformation Strategy 1: Not a threat
In other words, people cannot respond to a threat if they do not know (or believe) that one exists. One disinformation strategy convinces people that there is no threat. No threat means no need to act. Successful disinformation campaigns convinced many to adopt positions against masking and vaccinating. They did not believe COVID-19 was a threat. Likewise, many feared that major institutions, governments, and powerful shadow figures were lying to the general population to take control of them. This brings us to another successful strategy of disinformation:
Disinformation Strategy 2: Move the goalpost and sow confusion
Move the goalpost. Make people fear powerful shadow forces that work in secret to control them. This strategy takes advantage of people's emotions, fears, and anger. Remember QAnon? Much of that campaign focused on vilifying political leaders, public health figures, and information institutions like the news media, and it worked largely by adding chaos to an already confusing situation. Essentially, the message was to trust no one.
Disinformation Strategy 3: The devil is in the details
Lastly, disinformation works to give people a sense of control. It tells people that things are simple. Black or white. Wrong or right. Good or bad. Unfortunately, science and the world are complex, and there are always elements of uncertainty. In short, it is much harder to tell the truth than it is to tell a lie. The truth is often stranger than fiction. Paying attention to the details, the facts, and who is reporting is important. CoVerifi helps us to do that.
“Disinformation is a global issue and a coordinated campaign can potentially affect many around the world.” -Dhiraj Murthy
What is CoVerifi and How Does it Work?
CoVerifi is a web-based tool that people can use to check their news. It works in a three-step process: 1) it compares information against a database of verified facts and known falsehoods; 2) it records whether people vote the story to be credible or false; 3) it compares the computer score with the human votes. As its database of news stories, human votes, and verified facts and falsehoods grows, it continues to learn and becomes more accurate. If this sounds incredibly complicated, you are correct. It is the product of years of innovation and combines the power and speed of computer science with human judgment. See some examples of using CoVerifi in practice at the bottom of this article.
The History and Development of CoVerifi
CoVerifi combines both human and computing power. Just as supercomputers link many computers together to become more powerful, crowdsourcing (letting many humans cast their votes) links people together into a kind of superhuman. Humans are smart. Why not take advantage of all that brain power, and then link it together with a super fast, really smart computer program? That's what CoVerifi does.
There aren't enough experts or fact-checkers to do the amount of work needed to catch all the misinformation, because these specialized experts cannot feasibly review every single claim reported in the news. There is just too much. Computer programs work much faster than fact-checkers, and a large group of people working together (like the crowdsourced element of CoVerifi) can work much more quickly as well. The combination of those forces makes up the backbone of CoVerifi.
We know that computers are only as good as their programs. They fail to catch every piece of misinformation because language is very complex. While humans can easily recognize almost any variation of words and phrases, a computer program will routinely miss a variation its programming does not tell it to "see." Before CoVerifi, other tools used only one or the other: humans or computer programs. CoVerifi is exactly the kind of innovation that The Good Systems project, "Designing Responsible AI Technologies to Curb Disinformation," was created to support.
Each provides checks and balances on the other. When CoVerifi users encounter news, they can vote on whether that piece of information is credible, and that vote is stored in a database for future use. The program then tallies all the human votes and compares them to the score from the computerized checker to check for agreement. In short, if the people and the computer checker agree that a story is true, it is most likely true; if they all agree it is untrue, it is most likely untrue; if there is disagreement, it is much harder to make a ruling. However, as the database grows, more and more news stories will reach the critical tipping point of being rated credible (high agreement that the story is true) or potentially misleading (disagreement in the votes, or agreement that the story is false).
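The tallying-and-agreement step can be sketched in a few lines of Python. Everything here is an illustrative assumption rather than CoVerifi's actual implementation: we assume the computerized checker returns a credibility score between 0 and 1, each human vote is recorded as 1 (credible) or 0 (not credible), and the thresholds are placeholders.

```python
def rate_story(model_score, votes, model_threshold=0.5, agreement=0.7):
    """Combine an automated credibility score with crowdsourced votes.

    model_score: checker output in [0, 1], higher = more credible (assumed).
    votes: list of human votes, 1 = credible, 0 = not credible (assumed).
    A story is rated "credible" only when the model and a strong majority
    of voters agree it is true; disagreement, or agreement that it is
    false, is flagged as "potentially misleading", mirroring the two
    labels described in the text.
    """
    if not votes:
        return "insufficient votes"
    credible_share = sum(votes) / len(votes)
    model_credible = model_score >= model_threshold
    if model_credible and credible_share >= agreement:
        return "credible"
    return "potentially misleading"


# High agreement that a story is true -> credible.
print(rate_story(0.9, [1, 1, 1, 1, 0]))   # credible
# Model and crowd disagree -> flagged for caution.
print(rate_story(0.9, [0, 0, 1, 1]))      # potentially misleading
```

Note the deliberate asymmetry: a story must clear both checks to be called credible, so the default outcome is caution, which matches the article's point that disagreement alone is enough to flag a story.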
CoVerifi: Computer Speed Powered by Human Judgment
CoVerifi works because it is modeled on how human beings learn. When people are confronted with a new piece of information, our brains must entertain the idea first. In other words, we have to imagine it to be true so that we can analyze it, compare it with ideas we have already internalized, and then accept or reject it as true or false. If we also have access to an expert who can double-check our individual judgment, our accuracy improves considerably. CoVerifi is like having access to your own personal expert.
Making intentional judgments (like choosing to believe or reject information) is a cognitively demanding process. Humans are not really wired to evaluate every single piece of information. Our brains learn over time, and much of what we learn is saved to our subconscious (like an internal hard drive), so we can save energy and do most of our thinking on autopilot. Many folks are unaware of this, but humans live most of their cognitive lives in autopilot mode! We tend to snap out of autopilot only when something seems out of place, exciting, emotional, or threatening; otherwise, we subconsciously rely on autopilot to accept or reject most information. Now that most people agree we are living in a world filled with rumors and false information, CoVerifi means we are no longer on our own when trying to figure it all out.
CoVerifi combines the unequaled intelligence and ability of human judgment with a computer program that can almost instantly sort, save, and search a database of known facts and falsehoods. CoVerifi gives people a reliable marker for quickly deciding whether to trust the information, distrust it, or verify it further.
“We don’t want to schedule them just in case there’s a delay”
Parents are searching high and low to get their kids vaccinated. For children under the age of five, parents wonder how to make appointments and where their children can get the shots.
CBS 2 spoke with La Rabida Children's Hospital Chief Medical Officer Sarah Hoehn, who said they don't expect there to be a shortage of vaccines for children under five.
She said younger kids will get a smaller dose, so one vial serves more children than adults. But larger hospitals, like La Rabida, are still waiting to hear when they're going to get a shipment from the department of public health.
She advises parents to call their pediatrician, whose office will let them know when the vaccine arrives. Hoehn said the reason parents are having a hard time finding the vaccine is that clinics won't open appointments until the vaccines have been delivered.
“I think it’s just a matter of people waiting for the shipment to come in and once the shipment will come in, then you’ll be able to do it,” Hoehn said. “I know for us here, we’re going to have visits available all day long everyday and the second it’s here, we’ll schedule them, but we don’t want to schedule them just in case there’s a delay.”
4:51 PM • JUN 21, 2022
You should try to rely on outlets that are known to be reputable when consuming news, particularly on social media. This doesn't mean you should ignore partisan outlets; they can still be reliable sources, but you should consider possible biases when consuming their news.
There are some helpful websites out there, like Media Bias/Fact Check, that can help you figure out if a news outlet is reliable by giving you information about its likely partisan leaning, funding sources, and past failed fact checks.
For this article, CBS is an established and reputable US-based media company and CBS Chicago is their local TV affiliate for the Chicago area.
While not all reputable articles will list author(s) in the byline, it’s important to look at authors and determine if they are credible. Ask yourself: Are they a real person? Do they have a history of publishing false or questionable information? Do they have a vested interest in swaying my opinion?
The reporter for this article has a Twitter account where we can confirm where she works.
A common tactic among those who spread disinformation is to write articles with broad, overly general claims which they portray as fact. Such broad claims are difficult, if not impossible, to fact-check. When reading the news, be aware of this tactic and check whether the claims in the articles you are reading are specific and verifiable.
In this article, CBS 2 provides concrete numbers that are not outside the realm of plausibility and which could be confirmed through the vaccination center Shardaa Gray visited in her report.
CoVerifi rated this article as potentially misleading, which is most likely incorrect. This is because CoVerifi has no actual fact-checking feature and instead relies on how an article is written. Since this article is written more like a TV news script than a typical news article, CoVerifi flagged it as potentially misleading.
It’s important to note that CoVerifi is not always right, which is why it’s important to supplement CoVerifi’s analysis with your own media-literacy skills.
Pfizer's shot causes "mortality hundreds of times greater in young people compared to mortality from coronavirus without the vaccine, and dozens of times more in the elderly." https://t.co/9AhVNBVudO
7:35 PM • JUN 2, 2022
Spoiler: this user shared misinformation. For that reason, we are not sharing their Twitter account.
When evaluating whether a Twitter user is credible you should start from the assumption that they are not. Using Twitter as a source is frowned upon, but there are some ways to help you estimate whether a Twitter user might be credible.
The most important thing to do is to see if the user exists elsewhere online. If they don't, there's a decent chance the account is a bot; bots have historically been used as vectors for spreading disinformation by hostile foreign governments.
Whether or not a claim is credible is the hardest thing to estimate without extensive knowledge of the topic of an article or tweet, which is why we built CoVerifi to help you make those determinations.
That being said, before looking at CoVerifi's results it is helpful to ask yourself if the claims are in line with scientific consensus at the time, or if the claims are reasonable in the first place. Once you have considered those questions, you should be ready to change your mind on the given claims, but you should also know that CoVerifi isn't always right. CoVerifi is never 100% confident on whether or not a claim is credible, and you shouldn't be either.
This links to an Israel National News article. On Media Bias/Fact Check, Israel National News has a factual reporting rating of "Mixed," which is a red flag. Media Bias/Fact Check also shows a history of failed fact checks based on misleading or outright false reports, including some misleading claims about COVID-19.
With those two things considered, this should NEVER be used as a source.
CoVerifi identified this tweet as highly likely to be misleading and somewhat likely to be human-generated. This demonstrates CoVerifi's ability to see past the fact that the Twitter user provided "sourcing" for their claims.
As always, it is best to combine good media-literacy skills, like the ones we walked through above, with CoVerifi's analysis; in this case, CoVerifi only serves to back up what a thorough audit of the Twitter user and their claims already reveals.
You can also view the CoVerifi demo video for further insight into how CoVerifi works and how it is used.
Dhiraj Murthy (Ph.D., University of Cambridge) is Professor of Journalism and Media Studies (in the Moody College of Communication) and of Sociology, both at the University of Texas at Austin, and is founder and director of the CML. His research targets many of the same areas analyzed in this lab, including tobacco control on social media, misinformation/disinformation on social media, digital research methods, race/ethnicity, and computational social science. He wrote the book on Twitter. He has authored over 70 peer-reviewed articles, papers, and proceedings.
Nikhil Kolluri is an Electrical & Computer Engineering student working on COVID-19 misinformation, disinformation, and fake news detection methods through the use of machine learning models. His work in CML involved developing CoVerifi: A COVID-19 News Verification System.
Yunong Liu is an undergraduate at the University of Edinburgh, majoring in Electronics and Computer Science. She studied at UT-Austin as an exchange student. Her interests are Natural Language Processing and Computer Vision. In CML, she worked on COVID-19 misinformation detection using various deep learning models.