How AI Will Turbocharge Misinformation

Transcript for Season 4, Episode 5: How AI Will Turbocharge Misinformation

[MUSIC IN]

Joe Biden:  The illegal Russian offensive has been swift, callous, and brutal. It's barbaric. Putin's illegal occupation of Kiev and the impending Chinese blockade of Taiwan has created a two-front national security crisis that requires more troops than the volunteer military can supply. 

Gary Marcus: That’s President Biden earlier this year making an important statement on the war in Ukraine.

Joe Biden: I have received guidance from General Milley, Chairman of the Joint Chiefs, that the recommended way forward will be to invoke the Selective Service Act, as is my authority as president.

Remember, you're not sending your sons and daughters to war. You're sending them to freedom. God bless our troops, and God bless Ukraine. 

Gary Marcus: Biden is announcing that the United States is reinstating the military draft for the first time in more than 50 years. You don’t remember that?  Well, it never happened. But it sure sounds like it did.

The speech is fake. Every word of it synthesized by AI. But that didn’t stop it from spreading through social media. Hyper-realistic imitations like this one — enabled by new AI technologies — are called deepfakes.  It’s scary how easy this kind of thing is to create. 

And it’s only the beginning. These technologies are allowing bad actors to imitate the way we write, the way we speak, and appear in photos and videos. 

Deepfakes are already being used to trick voters and to defraud people by imitating the voices of family members. For many who work in the field of AI, the technology’s ability to turbocharge misinformation is the most pressing worry.

Misinformation isn’t, of course, new. We’ve already seen its effects on our elections and even on our markets. But large language models like GPT can not only blur the line between what’s real and what’s not, they can also make and spread misinformation at a scale we’ve never seen before.

You used to need an army of online trolls to spread lies. Now, large language models can do the same thing, more cheaply and more convincingly than ever before. The cost of creating misinformation has basically gone to zero.

In May, someone posted a deepfake of the Pentagon on fire. It quickly went viral on Twitter.

[MUSIC OUT]

Reporter: Not long after the markets opened this morning, a disturbing picture and claim suddenly appeared on Twitter. 

Reporter: The fake photo was shared by a variety of verified Twitter accounts.  It was convincing enough to quickly spook the markets.

Reporter: It wasn’t just the fact that this shaved 500 million dollars off the S&P 500 for a few hours. This is just scary as anything right now.

[THEME MUSIC]

Gary Marcus: For a few minutes, the stock market nosedived. Our markets and our elections are extremely vulnerable.

In this episode, we are going to talk about what happens when AI says things that aren’t true and what that might mean for society. The question in the age of AI isn’t “what can we believe in?” It’s whether we’ll be able to believe in anything at all.

I’m your host, Gary Marcus – or at least I sound like Gary Marcus – and this is Humans versus Machines.

[THEME MUSIC DOWN]

Jonathan Turley is a well-known law professor, conservative writer and Fox News contributor. Earlier this year he received a disturbing phone call from Eugene Volokh, who teaches law at UCLA.

Jonathan Turley Tape: Well, a UCLA professor and friend contacted me to say that he had done research using ChatGPT, asking whether professors had been accused of sexual harassment.

[MUSIC IN]

Gary Marcus: According to ChatGPT, Turley had sexually harassed a student on a class trip to Alaska. And it supported this claim by citing a Washington Post article from 2018. 

Pranshu Verma: The only problem was that no such article existed, and we had our research department check to verify that. There had also never been a class trip to Alaska. And Turley himself had said he's never been accused of harassing a student.

Gary Marcus: That’s Pranshu Verma, who looked into the whole thing alongside Will Oremus. Both work for the Washington Post. Here’s Will: 

Will Oremus: And when Pranshu told me about the story, um, I went and did something very old-fashioned, which was to do a search on Google.com [chuckles], where you can get 10 blue links for Jonathan Turley and sexual harassment. And, you know, he and sexual harassment are correlated in online articles. Um, none of them say that he was accused of it.

Gary Marcus: This — making stuff up — is what we call an AI hallucination. ChatGPT has no idea how the world works. It can’t reason. As smart as ChatGPT can sometimes seem, it’s really just a pastiche machine, putting together sequences of words without knowing what they mean that only sometimes end up being true. Turley’s name and the words “sexual harassment” have occurred together in some sentences, but that’s because Turley has written about the topic, not because there are reports that he harassed anyone. 

A similar thing happened when another AI model claimed that Elon Musk died in a Tesla car crash in 2018. Elon Musk runs Tesla, but the large language model swirled together the words “Musk,” and “car crash,” and “2018” without having the slightest idea what they actually mean or how they connect.

[MUSIC DOWN]

Gary Marcus: So there's kind of two issues here. One is about reliability, and the other is about accountability. Um, but it illustrates how these systems hallucinate, right? That wasn't based on any source material that said these facts, but rather a kind of mangling of facts.

Will Oremus: Yeah, so we had the exact prompt that the law professor in California had used to get that false information about Jonathan Turley. We plugged that into ChatGPT. I did it using 3.5. I did it using GPT-4. Did it on Bing and did it in Google's Bard, and they all answered in somewhat different ways, as you might expect. ChatGPT using 3.5 declined to answer it. But Bing interestingly repeated the false claim about Turley, and it cited Turley's own op-ed in USA Today as evidence that Turley had, in fact, been accused of sexual harassment.

Gary Marcus: Turley’s op-ed was about how alarming it was that ChatGPT had falsely accused him. Bing got it exactly backwards. 

Will Oremus: And it was picking up on the initial falsehood amplified via Turley's op-ed in order to repeat and spread that falsehood further.

Gary Marcus: Alright, so Turley, in his op-ed, just to be clear, didn't amplify the falsehood, right? He said, “This is not true. It's made up, and I'm worried that the system might perpetuate these things.”

Will Oremus: I think so, and Gary, you might know this better than I do. I don't actually know for certain whether Bing is first generating its response and then looking for sources that seem to support it or whether it's doing a web search and then, you know, aggregating information from those sources. But either way, it's exactly as you said; it's mangling the facts. 

Gary Marcus: So you just raised an interesting question too, which is, neither of us actually knows exactly what's going on in Bing, or GPT-4, and so forth. So not only do we have a situation where somebody's been falsely accused, and then the evidence has been perpetuated with a reference that was misinterpreted, but we have no idea how it got there.

Will Oremus:  It's like, you know, one AI spits out something that's wrong, and then people write about it, and then the other AI sees the writing about it and draws the same wrong conclusion.

Gary Marcus: We call that an echo chamber effect. And Turley’s case may not be the only one. Brian Hood, a mayor in Australia, was falsely accused of bribery.

Will Oremus: And Hood had played a role in helping to expose a worldwide bribery scandal that was linked to Australia's National Reserve Bank. He had been praised for showing tremendous courage and helping to bring this to light. One of his constituents brought to his attention that when they asked about that episode in ChatGPT, ChatGPT said that Hood had been, uh, arrested or convicted in this scandal, in this bribery scandal. It turned Hood into the perpetrator of the scandal rather than the whistleblower. 

Gary Marcus: All this might not be so bad if mainstream media outlets weren't using AI to generate news. But they are. CNET, one of the oldest and best-known technology sites, used a large language model late last year to generate dozens of articles and then found errors in more than half of them. Bloomberg, a giant in financial news, has created its own AI program called BloombergGPT to produce news. The website Insider has instructed its reporters to use AI for their work. BuzzFeed fired most of its journalists and shifted to cheaper AI-produced content. So the question becomes: who’s responsible when AI hallucinates?

Gary Marcus: So you guys have thought a little bit about the kind of liability and accountability side of this. Like, where does this lead? You know, what can the legal process do? Do we need to make changes? How have you thought about that side?

Will Oremus: The short answer is we don't know. Um, the long answer is [laughs] that there are a lot of interesting issues. I talked to a couple people specifically about that question. They said, 'It's interesting, you know, for a defamation case, you need the information to have been published, which means it needs to have been said to someone other than just the person being accused.'

Gary Marcus: The legal situation is complicated. In the US, in some cases, like with public figures, there has to be something called actual malice. The plaintiff has to prove that the defendant meant to lie or showed reckless disregard for the truth. But chatbots don’t have intent, and they don’t know what’s true or not.  

The legal battles over who is responsible for chatbot errors may be epic. People who are defamed might well have no recourse at all.  And the problem only gets more complex on social media.  

Will Oremus: When we talk about misinformation or harmful misinformation on social media, it is really hard to sue the social media platforms for information that their users post because Congress specifically carved out, they wrote into Section 230 of the Communications Decency Act, that a website, an interactive computer service provider is not to be treated as the publisher or speaker of information that's posted by somebody who's using it.

Gary Marcus: So social media platforms like Twitter or Facebook are largely protected from being sued for lies and misinformation on their sites. But what about generative AI? If an AI program defames someone, who is responsible? 

Will Oremus: I've now talked to many lawyers. Does Section 230, could it apply to the makers of large language models and chatbots? Given that it's not a third party posting ChatGPT's response, that it's OpenAI's own system generating the response, it looks a lot more like first-party information that would not be protected under Section 230.

Gary Marcus: Accidental misinformation is a huge problem we’ll have to wrestle with. But there’s an even bigger threat. Governments, political parties, and bad actors can now create and spread deliberate misinformation, which some people call disinformation, on a scale we haven’t seen before.

Will Oremus: So when we talk about misinformation, we're usually talking about stuff that is demonstrably false. When we talk about disinformation, it adds an element. It's demonstrably false, and that's on purpose. It's designed to mislead people. You know, there's state actors, or there's corporate actors. There's somebody out there who wants to deceive people. And disinformation is the way they do that.

Gary Marcus: Generative AI can make this problem much worse. It makes the quality of misinformation much more realistic, and it also makes it cheaper.  

Pranshu Verma: I do think that the pace and the quality of some of the disinformation has increased. And some of it is generative AI, but I mean, combining, you know, the ability to create content that's believable and then spreading it through groups that I see my family in all the time, where misinformation can spread, um, and them asking me, 'is this real or is this not?'

[MUSIC IN]

Gary Marcus: Damaging reputations, even tipping elections, aren’t the only threats we face from AI-enabled misinformation. Deepfakes can be created to impersonate anyone’s voice, and that’s opening the door for a new kind of fraud that’s much closer to home.  

Pranshu Verma: We're starting to see actual scam phone calls happen where people are impersonating people in distress and asking for money. And the voice-generating quality has gotten to a point where it seems believable over a landline. I talked to two victims that I had found. One was a very elderly woman in Canada, and, you know, she got a call one day, and there was a lawyer who said her grandson was jailed because he'd been in a car accident and quickly then put the grandson on the phone.

And the grandson said, yes, this is real. I'm in distress, and I need x thousands of dollars. And they were very scared. I mean, this was, you know, 70, 80-year-old grandparents hearing their grandson, you know, talk in distress and being jailed. And they did, in the moment, actually think, is this a scam? But I remember asking them, um, well then, what did you say to that? And they said, 'Well, the voice sounded very believable, so what could we say? This was our grandson.' And so they actually went out to the bank, and thankfully in line, there was a bank manager who said some other elderly couple had had a similar scam happen a few days ago. So please don't take out your money.

You know, you just need maybe five bucks and a few minutes of somebody's audio sample to create a somewhat believable clone.

Gary Marcus: What can we do to slow the tide?  One person who has been thinking about this for a very long time is Dr. Rumman Chowdhury. She is a leader in Ethical AI and the former Director of the Ethics and Transparency team at Twitter. 

[MUSIC OUT]

Rumman Chowdhury: So much of misinformation is really about what you want to believe and not necessarily entirely about the quality of the information. Misinformation is very fascinating to me because it is as much about human psychology and how humans understand and relate to the world in relationship to their economic status, their social groups, etcetera, as much as it is about the technology that exists. 

So much of it capitalizes on the fact that things that make us emotionally riled up and usually angry are the things that do tend to perpetuate more. And I do think that social media is part of it, but also it has to do with human credulity as well. We want to believe certain things, so we seek out certain information, and this is why some of the tech solutions that try to address the problem don't really work, because they forget that people are people. We're not just like machines that are receiving every piece of information and, you know, balancing it. We're not easy to map that way.

[MUSIC IN]

Gary Marcus: Dr. Chowdhury began her career studying political science in college and then looked at ways to add quantitative analysis to politics. But the wider world kept intruding.

Rumman Chowdhury: I was in academia, and I was very disillusioned because when I entered grad school, we saw the world we live in today, which is rife with misinformation, political polarization, etcetera, really start to manifest itself. And I didn't see how all of the work that my colleagues and my professors were doing was actually translating into policy.

I didn't necessarily come in wanting to address misinformation, but the idea that I could take my understanding of how to curate public opinion or understanding of human beings and translate that to giant solutions that really addressed tangible problems for people was exactly what I'd wanted to do my whole life.

[MUSIC OUT]

Gary Marcus: Twitter and Facebook have been at the center of debate about the spread of harmful information on social media. For a while, up until the fall of 2022, Dr. Chowdhury was at ground zero, at Twitter. She had a lot of insight into what needs to be done and why content moderation is so hard.

Rumman Chowdhury: You know, the starting point is raw data and content moderation, right? So there are content moderators who are going to look at the pool of potential Tweets and say, well, these are things that, like, are clearly violative, right? Like, um, terrorist and violent extremist material. This is just, like, the quote-unquote obviously bad stuff that gets filtered out there.

Gary Marcus: By machine or by person?

Rumman Chowdhury: By person. Well, both: content moderation happens by people and then also later by machine.

[MUSIC IN]

Gary Marcus: One thing Twitter employed as a solution when Dr. Chowdhury was there is now called Community Notes. It's basically hive-mind content moderation, where users add context and corrections to information that may be misleading.

Rumman Chowdhury: Content moderation is a huge problem to tackle. Just the sheer volume of information is mind-boggling, right? So if you think about Community Notes, Community Notes happens after content moderation. Uh, hate speech filters, toxic speech filters, like, so there's a whole wave of other things that have happened to presumably weed out wrong things or bad things, and then it still makes it to Twitter. And even then, the problem is massive. So trying to scale this, you know, uh, based on like a number of bodies is really hard for a couple of reasons. One is finding, you know, actually, reputable people who will go in and vote on information, who will give you good feedback. Given that, you know, people aren't being paid to do this, right? So a product like this is nearly impossible to scale. So you have to make some decisions.

So you have to say, okay, well, what are we gonna prioritize? And one of the things to prioritize are things that people are just seeing a lot. There are always a lot of limitations to mostly human-based solutions.

[MUSIC OUT]

Gary Marcus: There was that great essay about a week after Elon bought and took over Twitter, which was like, ‘Haha, you just bought a content moderation business. Do you have any idea what you've done?’ It was an amazing essay.

Rumman Chowdhury: Yeah. It is a hundred percent correct. This was never a democratic discourse platform. This is a content moderation platform, and the biggest problem facing just about any very large online platform or search engine is content moderation. At the end of the day, someone is deciding what should and shouldn't be seen. And that's maybe, like, the conspiracy theory way of putting it. But at some point, even if we're gonna hide it behind machine learning models, there is some group or party, which is a privately owned, for-profit entity, that is deciding what we should and shouldn't see online.

Gary Marcus: And as Dr. Chowdhury points out, misinformation gets spread by different people for different reasons.

Rumman Chowdhury: There are two potential sets of actors, and what their motivation is actually changes your strategy. There's one set of mis and disinformation actors who are the people who believe some sort of information and will continue to spread it. Most of them are actually well-intentioned. We can say whatever we want to say about how smart we think they are, but they think they are doing the right thing.

And that is a very different approach from what I'm about to talk about now, which are individuals or actors that are trying to purposely spread fake information. 

Gary Marcus: So, just to clarify that in my own words. You've got one set of people who hear this stuff and push it out, contributing to the problem unwittingly. And then you have, uh, people who are deliberately spreading it. That's the distinction you're making, right?

Rumman Chowdhury: That's exactly right.

Gary Marcus: It’s hard to see how humans could ever keep up with the flow of misinformation. One tactic Dr. Chowdhury pursued at Twitter was combating bots: fake Twitter accounts often controlled by bad actors.

Rumman Chowdhury: We were starting a project towards the end of our Twitter days on doing bot detection. And bot detection is really, really fascinating. There are certain characteristics to bots, um, that made them easier to identify. And maybe generative AI will make it harder to identify. So the idea of a bot farm, um, is actually a very accurate depiction. So they are literally — 

Gary Marcus: So just explain what a bot and a bot farm is for people who may not know.

Rumman Chowdhury: Yeah, so a bot would be a fake account, often automated, that some malicious actor, some, you know, ne'er-do-well, um, is using to spread certain false information. So these people are sometimes paid by other entities and organizations to, you know, spread misinformation.

Gary Marcus: Running bots is a lot of work. To be effective, bots need to infiltrate online communities and convince people they are real.

Some of the bots spreading Russian disinformation in 2016 had been around for at least two years, posting stories and commenting on posts in order to earn credibility. That’s a lot of effort. You used to need a whole army to make a troll farm; now you can do it with a laptop.

[MUSIC IN]

And it’s not necessarily in the direct interest of the tech companies to do much about all this. After all, sharing — even stuff that isn’t true — is the business model of social media. 

Rumman Chowdhury: As somebody who has studied democracy, democratic processes, and politics, it is to me rather scary that a privately owned company which has profitability incentives has such a sway on literally the future of global democracy. I — that is an unresolved issue, and I frankly think it’s rather frightening. 

Gary Marcus: And with relatively little oversight, right? I guess there's some, but there's not an enormous amount of oversight.

Rumman Chowdhury: Exactly. I don't think there are people sitting at these companies who are just evil people who want to see democracy upended. I think that so many things are driven by incentives, the organizational, the institutional incentives, right? Your entire corporate structure is based on optimization of revenue. Your corporate structure is not built on ensuring democracy is upheld.

Gary Marcus: And just to take the heat off Twitter for one second, I mean, we see exactly the same issues with Facebook, right? So Facebook is driven by trying to get eyeballs on their page. Um, they weren't trying to destroy or undermine democracy. They did take a reputational hit for it. But their number one goal has always been to maximize eyeballs, and that has not been aligned with democracy.

Rumman Chowdhury: Of course, well and also frankly, there are aspects to how Facebook is designed, in particular Facebook groups that helped groups like QAnon organize. So a lot of January 6th, the January 6th uprising in the US was organized on Facebook groups because you can have closed private groups of people where people actually talk a little bit more freely than they do on Twitter. And I'll also add that that was one of the reasons why Twitter never made groups.

[MUSIC OUT]

Gary Marcus: Misinformation is a very real threat to our democracy, and the ways we’re combatting it now aren’t quite cutting it. I asked Dr. Chowdhury the best ways to address the problem.

Gary Marcus: So my view is that the best that we can hope for is something that's semi-automated, better than what we can build right now. It might be able to reason about things relative to facts, um, that are known and make some guesses. There are still gonna be, you know, nuanced cases that I just can't imagine any AI having enough, kind of, theory of human mind to be able to handle anytime soon.

Rumman Chowdhury: That is spot on. The best solution is some sort of a hybrid solution, right? So some way of utilizing the human ability to discern content in a way that machine learning models cannot, but then using machine learning models or artificial intelligence models to be able to scale that across multiple different geographies and situations in a way that a human being, or even the largest possible group of human beings, could not possibly address.

Gary Marcus: There is something else we need to consider. Schools nowadays teach media literacy so that students can understand what they see on TV or read online. We need AI literacy as well.

[MUSIC IN]

Here again are Pranshu and Will from The Washington Post.

Pranshu Verma: I think the realistic conversation we also need to have much more widely now is 'what do we do from a very early age to create the critical reasoning and the skeptical thinking as a part of our workflow for seeking information in our daily lives?' I think that that is, uh, the actual question that our teachers and our educators are going to have to grapple with. There will always be a new thing that tricks us, and I think that we need to be creating thinkers that are a bit less trickable, if that makes sense.

Will Oremus: For me, the biggest concern with generative AI right now is the rate of change. You know, humans are resilient. We can adapt, we can change how we educate people. But can we adapt at a pace commensurate with the pace that this technology is evolving? That part, I think, is a little scary.

Gary Marcus: On the next episode of Humans vs. Machines, we talk about jobs. What does AI mean for them and our economy? 

Brian Merchant: There are a lot of, uh, workers that are vulnerable right now.

Amy Winter: You're hearing a lot of anecdotal stories of, you know, a colleague of a colleague losing their job because of this, of studios not hiring as many artists as they would've because of AI, and that adds to the kind of fear, because there's already a lot of uncertainty.

Gary Marcus: I’m your host, Gary Marcus. And that’s next week on Humans vs. Machines.

[MUSIC OUT]
