Welcome to another episode of Terms of Service. I’m Clare Duffy, and today we’re going deep on conspiracy theories, which have become more prevalent as we all spend more time online. Recently, I came across this study that took people who believe in various conspiracies and asked them to talk it out with an AI chatbot. And honestly, I was surprised by how willing these folks were to engage with the technology. It got me thinking about some of the conspiracy theories that, with the help of social media, have gone more mainstream in recent years. Could AI chatbots really be a helpful tool for pushing back on these false narratives? I’m going to speak with a coauthor of that study in a bit about what they found. But first, I wanted to chat with a colleague of mine who’s spent a lot of time interviewing conspiracy theory believers. I have CNN senior correspondent Donie O’Sullivan here with me in the studio. Hey, Donie.
Donie O’Sullivan
00:00:59
Hi, Clare. How are you?
I’m good. Thanks for doing this.
Donie O’Sullivan
00:01:01
Thank you for having me.
Okay. So, Donie, for nearly a decade now you’ve done a ton of reporting on how online misinformation and conspiracy theories manifest in the offline world. What have you learned about the people who believe in these stories that are just not true?
Donie O’Sullivan
00:01:17
It’s less about what people believe. It’s more about why they are believing it. What are they getting out of that set of beliefs? And a lot of times it comes down to this: people are looking for meaning, they’re looking for connection, they’re looking for community that they are not getting elsewhere.
Talk to me about some of the conspiracy theories that folks you’ve interviewed believe in.
Donie O’Sullivan
00:01:37
Oh, pretty much everything from anti-vaxxers to QAnon to people who believe that JFK wasn’t really assassinated, or that he’s still alive. You know, there’s a whole spectrum there of the kinds of things that define and consume people’s lives.
Donie O’Sullivan
00:01:56
What does JFK have to do with all of this?
He is Jesus reincarnated.
Donie O’Sullivan
00:02:01
You don’t believe Taylor Swift as a commentator?
I don’t know what to believe about Taylor Swift.
Donie O’Sullivan
00:02:05
What happens if Trump loses?
I don’t think he lost the last election, to be honest.
As some of these conspiracy theories gain larger followings, what have you seen in terms of what that means for people’s lives and for their families’ or loved ones’ lives?
Donie O’Sullivan
00:02:22
Yeah, I think it’s ruining a lot of lives and it’s ruining a lot of families. There’s a broad spectrum, of course, from having somebody in your life who’s bringing this up every now and then, and maybe the relationship is lessened some for it, all the way to, you know, somebody who has gone to join a cult or is showing up on January 6th to storm the Capitol. There’s a lot of space in between where people are causing real tension in families, real friction, because so many conspiracy theories are based in fear.
So, I’m curious what you think of this study. I honestly was quite skeptical when I came across this idea.
Donie O’Sullivan
00:02:59
So was I. Yeah.
That an AI chatbot could sort of talk people out of their conspiracy theory beliefs. Talk to me about your reaction when you saw this.
Donie O’Sullivan
00:03:08
Yeah, there are so many studies when it comes to misinformation. But I was surprised. I do this as a full-time job and I’m not equipped to push back on half of the conspiracy theories, because they are so involved and so difficult to follow. So, at some point you’re going to either get frustrated or not have the answers. Whereas I guess an AI chatbot doesn’t run into that issue, and AI can, I guess, also present this, in a way, as a neutral voice.
In the real world, how likely do you think folks are to want to engage with a chatbot that they know is going to challenge their beliefs?
Donie O’Sullivan
00:03:47
Nobody wants to engage, you know, to have their beliefs challenged, really. That is the other big hurdle, right? There has to be a willingness on the part of the person who believes this stuff to maybe admit to themselves that they might be wrong. If a person is showing some openness to seeing life beyond the conspiracy theory world they’re in, you know, you have to give them the space to say, maybe I’m wrong, but in a way that they don’t feel like a total idiot or a fool.
Donie O’Sullivan
00:04:20
Because at that moment it’s a very difficult, vulnerable place to be. So, if you had a tool like this, that could actually be very useful.
Well, Donie, thanks so much.
Donie O’Sullivan
00:04:30
Thank you.
So, Donie had a lot of the same questions about the study that I did. Conspiracy theories have been around for a long time, and technology that could help change people’s minds about them sounds useful. It could also be a more positive way to use a technology that some have feared could cause more confusion and distrust, because it’s so easy to create convincing fake content. But how realistic is this idea, really? The authors of this study put more than 2,000 conspiracy believers in conversation with an AI chatbot, and by the end, a lot of participants felt differently than they did at the start. How did this actually work? And are there ways we can use this technology even if we don’t believe in conspiracy theories? To find out, I spoke with Thomas Costello, an assistant professor of psychology at American University, who coauthored the study.
So, Thomas, talk to me about how you set up this experiment. What was your hypothesis going into it?
We were testing whether an AI model could talk to people and deliver facts and evidence and change their mind about a conspiracy belief. And to get a little more specific, what we were testing was not so much the AI part, but whether facts and evidence might work to change conspiracy believers’ minds, to make them more skeptical. And in order to test that, we brought people in and asked a very open-ended question about whether there were any conspiracy theories that they believed. We put it in quotes and we provided a definition, so as not to stigmatize, basically. So, people wrote in a very open-ended way about whether they believed anything, and if so, what that was. And then we followed up and had them provide all of the factual information they knew that supported the conspiracy. So if you wrote about the 9/11 conspiracy theory, you might talk about how jet fuel doesn’t burn hot enough to melt steel beams, and there was evidence of a demolition team coming in, or, you know, those kinds of things.
This study was conducted in waves. First, they gathered data from paid participants. Then they opened the study to anyone on the Internet. And there was a wide range of conspiracy beliefs: everything from the false claim that the 2020 election was stolen, to theories about the death of Princess Diana, and even one Tom had never heard of, about a Loch Ness Monster in a lake in Montana.
I think one of the advantages of this approach is that we were not curating or selecting on a particular conspiracy theory. So, what we asked was: there are some accounts of important world events that attribute them to powerful people scheming in secret, and the result is a phenomenon that’s different from what the media or the popularly accepted stories will tell you. And then we said: some people call these conspiracy theories. Are there any such beliefs that you have?
Researchers told participants that they’d be speaking with an AI chatbot. What they didn’t tell them was that they’d fed the information participants provided to the chatbot and programmed it to persuade against the conspiracy.
And so, it opened the conversation, typically in a very long, several-paragraph response, where it said: I understand why you might think this conspiracy is true. It makes a lot of sense. I hear what you’re saying. But at the same time, have you considered this alternative piece of evidence that you didn’t write about? And, well, you talked about how jet fuel doesn’t burn hot enough to melt steel beams, but it does burn hot enough to weaken them, and weakening them is enough for all of the weight on top of them in the Twin Towers to collapse, which is what happened. And so on and so forth. And it might ask, you know, does that make sense? Do you have any questions? And the person would respond: well, I never thought of it that way, perhaps. Or they’d say: screw you, what about all this other stuff? I don’t believe you. Or: you’re a biased AI, in some cases. But on balance, people did respond positively and did update their beliefs.
Was this a mass-market AI chatbot that you were using, or did you have to train it with information about the conspiracy theories that you expected to come up, so that it would know how to respond?
So, we used GPT-4 Turbo, which at the time was one of the models undergirding ChatGPT. It wasn’t prompted in exactly the same way as ChatGPT, so it had slightly different kinds of behaviors, but for the most part it was out-of-the-box, vanilla. The big caveat, though, is that OpenAI had already trained it to align with facts and evidence and to engage with misinformation in a constructive way. So in many respects, we were just building on top of what they had already done.
Got it. But you didn’t necessarily have to, like, feed it a bunch of information about the Loch Ness Monster in Montana or, you know, 9/11 in order to get it to respond.
Right. Well, so it’s been trained on the Internet, so it already knows all about conspiracy theories.
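For the technically curious: a setup like the one Tom describes can be sketched in a few lines against OpenAI’s chat-completions API. To be clear, this is a hypothetical illustration based on his description, not the researchers’ actual code or prompt, and the participant text here is invented.

```python
# Hypothetical sketch of the study's setup, based on Tom's description.
# Not the researchers' actual code or system prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative stand-ins for what a participant might have written.
conspiracy = "9/11 was a controlled demolition."
their_evidence = "Jet fuel doesn't burn hot enough to melt steel beams."

system_prompt = (
    "The user believes this conspiracy theory: " + conspiracy + " "
    "The evidence they cite: " + their_evidence + " "
    "Acknowledge their perspective respectfully, then persuade them away "
    "from the theory using accurate facts and evidence. Never fabricate."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # the model the study used
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Convince me I'm wrong, if you can."},
    ],
)
print(response.choices[0].message.content)
```

As Tom notes, most of the heavy lifting here is done by the model’s existing training; the study’s layer is essentially the participant-specific framing in the system prompt.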
The chatbot had thousands of conversations with people about their beliefs. They’re actually available to read online. You can browse them by topic. We’ll put a link in the show notes.
I was looking through them. They’re amazing. One of the ones I really loved is about aliens. The person says: I think the government knows about aliens. There’s no way we are the only living beings in the universe, just common sense, maybe. I don’t know of any hard evidence. And it was so interesting to see the model respond in this really sort of empathetic way. Like: it’s common to wonder about the vastness of the universe, but, you know, as you said, it’s important to consider the evidence. And then, it’s interesting that it doesn’t even necessarily, like, actively, directly try to change the person’s mind, but rather just sort of tries to get them to be more open-minded and consider other evidence. Was that intentional or is that something that just sort of happened?
The alien case is really interesting because it’s actually not quite a conspiracy.
I mean, I guess I believe that. I believe that aliens are out there somewhere in the vastness of the universe; they’re just very far away from us. And we did look at instances where people said things that were in fact explicitly true. So, things that have been verified and indeed occurred. And those people didn’t change their minds at the end of the conversation. And the model usually said something like: you’re right, that is something that happens. And the alien case might be kind of on the bubble of that, or right on the line.
Yeah, it sort of pushes back on this idea that the government knows about aliens. It offers evidence contradicting that. But then it does say like there’s a lot more research to do and maybe there’s aliens out there somewhere. And it’s this really kind of beautiful conversation that encourages curiosity.
Right. Broadly, if you go one level up from our particular experiment to what people interacting with these tools and models looks like at a societal level, encouraging curiosity and open mindedness seems like a really positive, good outcome. There are lots of very scary, bad things that we can imagine happening as people embrace AI. But curiosity and greater access to information is at least one positive thing.
So you found that at least in some cases, the AI did actually convince some people to change their minds. Do you have a sense of like, what percentage of people altered their beliefs?
Yeah. We can look at it in terms of the average effect, the average amount of change that was visible in the experiments, and that number is about 20%. So, if you rate your belief on a scale of zero, not at all believing, to 100, totally believing, and you’re at 100, you’d go down to 80. If you’re at 80, maybe you’d go down to 67 or something like that. On average, people went down by about 20%, and one in four people became skeptical toward the conspiracy.
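To put Tom’s numbers in one place: the drop he describes is roughly proportional to where your belief started, as his examples suggest. Here’s a tiny sketch of that arithmetic; the 20% figure is the study’s reported average, and individual results varied.

```python
def post_belief(pre: float, avg_drop: float = 0.20) -> float:
    """Apply the study's ~20% average drop, read as proportional,
    to a 0-100 belief rating. An interpretation, not the study's code."""
    return pre * (1 - avg_drop)

for pre in (100, 80, 50):
    print(f"{pre} -> {post_belief(pre):.0f}")
# Prints: 100 -> 80, 80 -> 64, 50 -> 40.
# (Tom's off-the-cuff "80 down to 67" is in the same ballpark.)
```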
Did you find that some types of conspiracy theories were easier for the AI to debunk than others?
Yeah. That’s a little tricky to interpret, because it’s not clear whether that’s more about the kinds of people who believe certain conspiracies than about the theories themselves.
So, we had conspiracies about election fraud in 2020, in the US presidential election, and we saw a significant effect on average, but there were a lot more people who didn’t change their minds or who had only very small belief change. And we can contrast that with the aliens example, actually, like the government is hiding aliens, or JFK, or 9/11, which was another one where people moved quite a lot. The effects were actually bigger than that 20% average. So, there was visible difference across the different conspiracy theories. But at the same time, I think further research is needed.
Yeah, it does strike me that, you know, some of those beliefs are more connected to people’s identities. The election fraud belief seems to be very much an identity marker that people take on, whereas maybe aliens is not so much like that. Why is AI so good at this, in a way that, say, somebody trying to convince their uncle at the Thanksgiving table that birds are actually real is not?
I think a lot of the active ingredient, or what makes this work so well, is that the AI is acting as a really effective search tool, essentially. There is so much information out there in the world, and a lot of it is even information related to conspiracy theories that’s true in a superficial way. So it is true that jet fuel does not burn hot enough to melt steel beams; as a factual claim out of context, that is true. What the AI model can do, if someone brings that in and says, hey, I think that I know that this is a fact, like, what’s going on with this, is provide the exact context that they need. So it can say: well, it doesn’t melt them, but it weakens them, and here’s why that’s important. And there are lots of other examples of that. Conspiracy beliefs are fundamentally descriptive claims. They are saying that something happened, and we’re awash in data and information in the Internet age. And so what the AI model can do is almost compress or condense it down to just the stuff that people need and provide it to them, without their having to search for it. And then the other angle here is that a lot of arguments, especially when it comes to socially and politically loaded topics like conspiracy theories, are almost proxy battles. You’re a stand-in for your group or your ideological cohort. And so giving ground is seen almost as a loss of ideological status, in a way. But when it’s an AI, you’re not playing with those stakes, because you’re not talking to another human being who’s going to judge you or in any way spread the news that you lost. Although, of course, if you’re in a research study, we are literally doing that. But with an AI, it’s a more personal interaction. It’s about you and it’s about the information.
So, AI helps us get to the facts of the matter without the human element of an emotional debate or the slog through countless Google results. But what if the bot gets it wrong? That’s after the break.
We know that AI can sometimes hallucinate, sometimes it just makes things up. Is there a risk in these kinds of interactions of replacing somebody’s conspiracy theory beliefs with some other false information?
Yes, that is a risk. We hired a fact-checker to go through a random sample of the conversations in the study and fact-check each claim that the AI made. And we found that, of the subset, virtually all of the claims were true. I think we looked at 120: 119 were rated as true, one was rated as misleading, and zero were rated as false. And this fact-checker also looked at political bias and did not find any evidence of political bias in the set of conspiracies. That is not to say that in the whole group of conversations there aren’t any hallucinations; there must be. And, more broadly, it’s not to say that these models don’t have problems related to hallucinations. And so, if you wanted to stand this up as a genuine tool for people to use, you would want to have a check against that.
One of the things I also found really heartening, reading through the conversations, and this sort of gets to your point that, you know, this is just a good search tool, is how many of the participants asked for the AI’s sources. You know, it would present this information and then, in the next exchange, they would say: okay, that’s really interesting. What are your sources? That seemed like a really good sign to me.
Yeah, I think so, yeah. If you’re going to do the fact-checking yourself when you’re talking to a model, it means you’re both open to what they’re saying as a possibility and you’re not, you know, super credulous and just taking it because they’re an AI and, you know, you trust them because they’re objective or something like that, ostensibly. And that’s what you would want people to do in society generally: if you get information, cross-reference it and see if it’s actually true, ask for the source.
So, this was an experiment. In the real world, how would you get people to engage with an AI chatbot about a conspiracy theory they believe in, in the first place? Like, do you see this becoming a real tool?
I see something like it becoming a real tool, whether it’s being integrated into web browsers or search engines and things like that, or a standalone tool that people seek out. I think for political conspiracies, people are probably not going to be super motivated to check their work, as it were. But for other kinds of conspiracy theories, things like, you know, 9/11 truthers or aliens or JFK, or new ones that crop up in the coming years: if this is a tool that gives you access to information, especially a tool that is kind of controllable, where you can tell it what you want to know about, maybe that’s something that’s appealing. I’m excited to see what people do with the findings now that we’ve put them out there.
We know that lots of people distrust artificial intelligence. Do you think it’s going to be tricky to get people to trust AI when it’s telling them they’re wrong about such a deeply held belief?
What we found in the data is that if you didn’t trust AI in the first place, coming into the conversation, you didn’t update your beliefs quite as much, but you still did a little bit. The displays of credibility are going to be important. So past experience with AI models, where they provide people with correct information over and over again, presumably will factor in when people are then exposed to a fact or an interpretation that they don’t like. So, you’ll do this calculus in your head of: yeah, every other time it’s been right, maybe, or it’s only been wrong in a small number of instances. And so, what are the chances that it’s also wrong in this instance?
Do you think that this technology eventually has the potential to meaningfully reverse the spread of conspiracy theories, which we know have created so much division in our country and in families who are dealing with this right now? Do you think that there are major implications for these findings?
I think it gives us insight into how to change conspiracy beliefs and maybe even where they come from. But you’ll need a larger effort or set of processes to really reverse the tide of, whatever you want to call it, misinformation, conspiracy beliefs. Some of that would need to happen at the source, as people are being schooled and educated, you know, in adolescence and childhood. We need to teach them how to think critically and evaluate information in an objective, rational manner. It’s also partially an issue of societal trust and institutional trust. And I think what we’ve found is that AI offers a tool to help solve it, but it’s almost like treating an acute symptom of something rather than the underlying cause. And so, I don’t think we’ll resolve it, but I do think it will democratize access to information, and especially to rational information. And that seems like a positive thing to me. But I guess we’ll see in the next couple of years.
What’s next for this research? What outstanding questions do you have about the ways that people can engage with this technology?
I want to see if it works for all kinds of beliefs, not just conspiracies. So, you know, everything from health information to people’s brand preferences or prejudices, all kinds of things. And I’d just like to mention, because there’s like an almost dystopian bent to this: I think one of the key features of this AI persuasion, at least as we’re using it, is that it is all based on facts and evidence and is not really in any way coercive. We’re basically providing people with information and more context, rather than pressuring them to change their beliefs in a meaningful way.
Okay, so it sounds like there is more work to be done to figure out how these findings will manifest in the real world, but are there practical takeaways for listeners now? Like, if you believe in a conspiracy theory, should you consider chatting with a chatbot?
Yeah, absolutely. One of the nice things about these chatbots is that they can expose you to information that you might not come across otherwise. As I’ve kind of said, it’s a really efficient way of searching the, like, sea of true and false things that are out there to find the very facts and precise pieces of information that you care about. So, you can try out our experiment for yourself at this website we’ve set up for people to test it, called debunkbot.com. Or you can just, you know, ask ChatGPT to try to change your mind about something. It might not do quite as good a job, because it is rewarded and trained to be agreeable, and so it may not want to argue with you unless you really try to get it to do that. But I think this is a good way to test your beliefs, which is good epistemic hygiene, right? It’s almost like brushing your teeth twice a day: when you come across a new fact that might or might not be true, you should go check it out. You can talk to a bot about it, you can Google search it, whatever. But functionally, what these do is lower the barrier to that sort of practice. It’s easier to talk to a bot about something than it is to dig through a bunch of Google search results. You’re basically saving yourself the cognitive labor by talking to an AI about it.
Got it. Yeah, like kind of a helpful search partner.
Awesome. Well, Tom, thank you so much. Really appreciate your time.
Yeah, thanks. Thanks for having me.
So to recap, here are three tips that can help you and your loved ones navigate the ocean of information and misinformation out there and find out what’s true. First, if you’re unsure about a story that feels like a conspiracy belief, test it out at debunkbot.com. You can ask whether the story has any credibility and get some context to understand the facts. Next, if you expect to play devil’s advocate to your conspiracy-believing family member this Thanksgiving, you can use the debunkbot as an argument coach: input the argument you anticipate and ask for the evidence that rebuts it. Finally, and this may be a lot to ask, but I’d especially encourage you to do this before sharing information on social media that feels shocking or otherwise questionable. Slowing the spread of false claims online can help us all have a better grip on reality.
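One practical footnote on Tom’s ChatGPT suggestion: because general-purpose chatbots are trained to be agreeable, it helps to ask directly for pushback. Something along these lines can work as a starting point; the wording here is just an illustration, not from the study: "Here’s something I believe: [your claim]. Don’t just agree with me. Challenge this belief with your strongest facts and evidence, and cite your sources so I can check them."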
Thanks again for listening to this episode of Terms of Service. If you try out the debunkbot for yourself, let me know how it goes. Send us an email at CNNTermsofService@Gmail.com. I’m Clare Duffy. Talk to you next week.
Terms of Service is a CNN Audio and Goat Rodeo production. This show is produced and hosted by me, Clare Duffy. At Goat Rodeo, the lead producer is Rebecca Seidel, and the executive producers are Megan Nadolski and Ian Enright. Production support on this episode from Jay Venables. At CNN, Haley Thomas is our senior producer and Dan Dzula is our technical director. Steve Lickteig is the executive producer of CNN Audio. With support from Emily Williams, Tayler Phillips, David Rind, Dan Bloom, Robert Mathers, Jamus Andrest, Nicole Pesaru, Alex Manasseri, Leni Steinhardt, Jon Dianora, and Lisa Namerow. Special thanks to Katie Hinman and Wendy Brundige. Thank you for listening.