By Kami Vinton, Trisha Lobo, River Terrell, Zachary Daum
July 29, 2022 at 11:00am CDT
In the age of the Internet, it is increasingly difficult for anyone to tell whether the news is real or ‘fake.’ There are so many convincing impostors that even experts struggle to tell the difference.
Researchers at The University of Texas at Austin who are part of The Good Systems project, “Designing Responsible AI Technologies to Curb Disinformation,” developed a way to help us all do just that: tell the difference between fact and fiction, with an online tool called CoVerifi.
Dr. Dhiraj Murthy, director of the Computational Media Laboratory (CML), along with his co-authors, Nikhil Kolluri and Yunong Liu, recently launched the online tool CoVerifi.
CoVerifi is tailored for separating fact from fiction in news about COVID-19. Dr. Murthy helped break down why this tool is so important and how it can have a real-world impact for all of us trying to figure out what information we can trust during uncertain times.
He said, “Due to the scale, speed and sheer diversity of COVID-19-related misinformation, fact-checkers cannot realistically keep up. That is why we created a tool that helps people navigate COVID-19-related news and social media. CoVerifi blends human judgment and knowledge with artificial intelligence to empower people so they can better evaluate the quality of COVID-19-related content.”
Common Rumors about COVID-19
Tell me if you’ve heard any of these at some point throughout the pandemic:
- COVID-19 is a hoax.
- If I gargle with bleach, it will kill the virus.
- It’s just a cold, I’ll be fine.
- The vaccine is a ploy to put micro trackers in our arms.
- A doctor on Joe Rogan’s show said that hydroxychloroquine prevents COVID, but the government won’t approve it because it is inexpensive.
- Only people who eat bats can get it.
Sound familiar? It would be difficult to find one person who had not heard at least one of those rumors during the pandemic.
Misinformation is Deadly—it is a Kind of Disease
Most of those rumors spread like an infectious disease on social media. The World Health Organization (WHO) monitors diseases and threats to public health across the globe, working with public health experts in almost every country to reduce and prevent disease and death. The misinformation and rumors about COVID-19 worried the WHO so much that it labeled them a disease of sorts: an infodemic. The WHO observed that diseases like COVID-19 became deadlier in part because the uncontrolled spread of misinformation aided the spread of the virus itself.
The Difference between Misinformation and Disinformation
These terms sound similar, but they are distinct. What’s the difference? Disinformation is purposely deceptive and designed to influence. Misinformation is not deliberate; it is simply wrong information, not intended to trick anyone. Disinformation, by contrast, is designed to attract attention, provoke an emotional response, and ultimately trick people into doing something harmful. Unfortunately, the people who create disinformation are very good at it. Studies continue to show that almost all misinformation that has gone viral (pardon the pun) can be traced back to an intentional disinformation campaign. The result is that most disinformation actually does its harm through the spread of misinformation. To be clear, everyday people (like you and me) spread the bulk of misinformation, but we do not do it on purpose. We share it because we believe it is true and are trying to be helpful. The challenge now is: what can we do about it?
How Disinformation Works
For disinformation to exist, there must be information in the first place. For example, before early warning systems for severe storms, many people died because they did not know that a tornado, tsunami, flood, landslide, avalanche, etc. was imminent.
Disinformation Strategy 1: Not a threat
In other words, people cannot respond to a threat if they do not know (or believe) that one exists. One disinformation strategy convinces people that there is no threat. No threat means no need to act. Successful disinformation campaigns convinced many to adopt positions against masking and vaccinating. They did not believe COVID-19 was a threat. Likewise, many feared that major institutions, governments, and powerful shadow figures were lying to the general population to take control of them. This brings us to another successful strategy of disinformation:
Disinformation Strategy 2: Move the goalpost and sow confusion
This strategy makes people fear powerful shadow forces that work in secret to control them, taking advantage of their emotions, fears, and anger. Remember Q-Anon? Much of that campaign focused on vilifying political leaders, public health figures, and information institutions like the news media. Much of the strategy worked by simply adding chaos to an already confusing situation. Essentially, the message was to trust no one.
Disinformation Strategy 3: The devil is in the details
Lastly, disinformation works to give people a sense of control. It tells people that things are simple. Black or white. Wrong or right. Good or bad. Unfortunately, science and the world are complex, and there are always elements of uncertainty. In short, it is much harder to tell the truth than it is to tell a lie. The truth is often stranger than fiction. Paying attention to the details, the facts, and who is reporting are important. CoVerifi helps us to do that.
“Disinformation is a global issue and a coordinated campaign can potentially affect many around the world.” -Dhiraj Murthy
What is CoVerifi and How Does it Work?
CoVerifi is a web-based tool that people can use to check their news. It works in a three-step process: 1) it compares information against a database of verified facts and known falsehoods; 2) it records whether people vote the story credible or false; and 3) it compares the computer score with the human votes. As its database of news stories, human votes, and verified facts and falsehoods grows, it continues to learn and become more accurate. If it sounds complicated, that is because it is: CoVerifi is the product of years of innovation, combining the power and speed of computer science with human judgment. See some examples of using CoVerifi in practice at the bottom of this article.
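As a rough illustration, the three steps above might look something like the following Python sketch. This is not CoVerifi’s actual code: the claims database, the simple substring matching, and the data shapes are all assumptions made for demonstration.

```python
# Illustrative sketch of a CoVerifi-style three-step check loop.
# The claims database and matching logic are assumptions, not the real system.

KNOWN_CLAIMS = {
    "gargling bleach kills the virus": "falsehood",
    "vaccines for children use smaller doses": "verified",
}

votes = {}  # article_id -> list of human votes ("credible" / "not credible")

def check_against_database(article_text):
    """Step 1: compare the article against known facts and falsehoods."""
    return [label for claim, label in KNOWN_CLAIMS.items()
            if claim in article_text.lower()]

def record_vote(article_id, vote):
    """Step 2: store a human reader's credibility vote."""
    votes.setdefault(article_id, []).append(vote)

def compare_scores(machine_score, article_id):
    """Step 3: compare the machine score with the human vote share."""
    vs = votes.get(article_id, [])
    human_share = vs.count("credible") / len(vs) if vs else None
    return {"machine": machine_score, "human": human_share}

# Example: three readers vote on one story, then we compare scores.
record_vote("cbs-vaccine-story", "credible")
record_vote("cbs-vaccine-story", "credible")
record_vote("cbs-vaccine-story", "not credible")
print(compare_scores(0.17, "cbs-vaccine-story"))
```

CoVerifi itself derives the machine score from trained machine-learning models rather than substring matching; this sketch only shows the shape of the loop.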
The History and Development of CoVerifi
CoVerifi combines both human and computing power. Just as supercomputers link many computers together to become more powerful, crowdsourcing (letting many humans cast their votes) links people together into something like a superhuman. Humans are smart. Why not take advantage of all that brain power and link it with a super-fast, really smart computer program? That’s what CoVerifi does.
There aren’t enough experts or fact-checkers to capture all the misinformation; these specialized experts cannot feasibly review every single claim reported in the news. There is just too much. Computer programs work much faster than fact-checkers, and a large group of people working together (like the crowdsourced element of CoVerifi) can work much more quickly as well. The combination of those forces makes up the backbone of CoVerifi.
We know that computers are only as good as their programs. They fail to catch every piece of misinformation because language is very complex. While humans can easily recognize almost any variation of words and phrases, a computer program routinely misses those variations unless it has been programmed to “see” them. Before CoVerifi, other tools used only one or the other: humans or computer programs. CoVerifi is exactly the kind of innovation that The Good Systems project, “Designing Responsible AI Technologies to Curb Disinformation,” was created to support.
Each provides checks and balances on the other. When CoVerifi users encounter news, they can vote on whether that piece of information is credible; each vote is stored in a database for future use. The computer program then tallies all the human votes and compares them with the score from the computerized checker to check for agreement. In short, if the people and the computer checker agree that a story is true, it is most likely true. If they agree it is untrue, it is most likely untrue. If they disagree, it is much harder to make a ruling. As the database grows, more and more news stories will reach a critical tipping point: credible (high agreement that a story is true) or potentially misleading (disagreement in the votes, or agreement that the story is false).
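The agreement logic described above can be sketched as a simple decision rule. The function name, thresholds, and labels here are illustrative assumptions, not CoVerifi’s published values.

```python
def label_story(human_credible_share, machine_credible_score,
                agree_margin=0.2, credible_cutoff=0.5):
    """Combine the crowd's vote share with a machine credibility score.

    Both inputs are fractions between 0 and 1. The thresholds are
    illustrative, not CoVerifi's actual values.
    """
    # "Agreement" = the two scores are close to each other.
    agree = abs(human_credible_share - machine_credible_score) <= agree_margin
    # Both sides independently lean toward "credible".
    both_credible = (human_credible_share >= credible_cutoff
                     and machine_credible_score >= credible_cutoff)
    if agree and both_credible:
        return "credible"
    # Disagreement, or agreement that the story is false.
    return "potentially misleading"

print(label_story(0.9, 0.85))  # both agree it is true -> "credible"
print(label_story(0.2, 0.15))  # both agree it is false -> "potentially misleading"
print(label_story(0.9, 0.20))  # they disagree -> "potentially misleading"
```

Note how disagreement and agreed-upon falsehood collapse into the same “potentially misleading” label, matching the tipping-point behavior described above.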
CoVerifi: Computer Speed Powered by Human Judgment
CoVerifi works because it is modeled on how human beings learn. When we are confronted with a new piece of information, our brains must first entertain the idea: we imagine it to be true so that we can analyze it, compare it with ideas we have already internalized, and then accept or reject it as true or false. If we also have access to an expert who can double-check our individual judgment, our accuracy improves considerably. CoVerifi is like having access to your own personal expert.
Making intentional judgments (such as deciding to believe or reject information) is cognitively demanding. Humans are not really wired to evaluate every single piece of information. Our big brains learn over time, and much of what we learn is saved to our subconscious (like an internal hard drive) so we can save energy and do most of our thinking on autopilot. Many folks are unaware of it, but humans live most of their cognitive lives in autopilot mode, snapping out of it only for things that seem out of place, exciting, emotional, or threatening. We subconsciously rely on autopilot to accept or reject most information. Now that most people agree we live in a world filled with rumors and false information, CoVerifi means we are no longer on our own when trying to sort it all out.
CoVerifi combines the unequaled intelligence of human judgment with a computer program that can almost instantly sort, save, and search a database of known facts and falsehoods. It gives people a reliable marker to quickly decide whether they can trust the information, distrust it, or need to verify it further.
Using CoVerifi In Practice
“We don’t want to schedule them just in case there’s a delay”
Parents are searching high and low to get their kids vaccinated. For children under the age of five, their parents wondered how to make appointments and where they can get the shots for their children.
CBS 2’s Shardaa Gray reports from Bloomingdale where families have lined up to get their kids vaccinated.
They got a shipment of 100 Pfizer vaccines and 400 Moderna. Even though they have vaccines available here, many parents are still on the hunt to find vaccines for their kids.
CBS 2 spoke with La Rabida Children’s Hospital Chief Medical Officer, Sarah Hoehn, who said they don’t expect there to be a shortage in vaccines for children under five.
She said younger kids will get a smaller dose, so one vaccine serves more children than adults. But larger hospitals, like La Rabida are still waiting to hear when they’re going to get shipment from the department of public health.
She advises parents to call their pediatrician and their office will let them know when they’ll get it. Hoehn said the reason parents are having a hard time finding the vaccine because clinics won’t open appointments until the vaccines have been delivered.
“I think it’s just a matter of people waiting for the shipment to come in and once the shipment will come in, then you’ll be able to do it,” Hoehn said. “I know for us here, we’re going to have visits available all day long everyday and the second it’s here, we’ll schedule them, but we don’t want to schedule them just in case there’s a delay.”
CoVerifi rated this article as potentially misleading, which is most likely incorrect. This is because CoVerifi has no actual fact-checking feature and instead relies on how an article is written. Since this article is written more like a TV news report than a typical news article, CoVerifi flagged it as potentially misleading.
It’s important to note that CoVerifi is not always right, which is why it’s important to supplement CoVerifi’s analysis with your own media-literacy skills.
CoVerifi – Credibility Classifier (Potentially Credible ↔ Potentially Misleading): 82.754%
GPT-2 – Output Detector (Human Generated ↔ Machine Generated): 99.114%
Read the CoVerifi Paper
Our CoVerifi paper was published in the journal Online Social Networks and Media. Read it on ScienceDirect or on PubMed. A description of the updated CoVerifi model can be found in the journal JMIR Infodemiology.
You can also view the CoVerifi demo video for further insight into how CoVerifi works and how it is used.