
Disinformation, Part 2: the US

In this episode, SIS professor Samantha Bradshaw joins Big World to discuss the spread of disinformation online in the US.

Over the last decade in the United States, disinformation has dominated discussions surrounding elections, political campaigns, COVID-19, and more.

Bradshaw, who is a leading expert on new technologies and democracy, begins our discussion by explaining her definition of disinformation (1:41) and giving an overview of the factors that have contributed to the rise of disinformation in the US over the last decade (2:41). Bradshaw also identifies the kinds of groups who primarily spread disinformation in the United States (5:40).

Are people being targeted with disinformation roughly equally, or do disparities exist between racial and ethnic groups (8:41)? What does Bradshaw think about the methods social media companies are using to combat the spread of disinformation (11:55)? Bradshaw answers these questions and discusses her research into Russian trolling operations (15:16) and press freedom (24:03) before rounding out the episode with some thoughts on disinformation and AI (27:00).

In the "Take 5" segment (18:53), Bradshaw answers the question: What five policies would you want to see enacted in the US to address disinformation?

0:07 Kay Summers: From the School of International Service at American University in Washington, this is Big World, where we talk about something in the world that really matters. Are you a voter? I hope you are. Where do you get your information? When you see something on social media about a candidate, do you read it? Do you believe it? What about your causes, the things you care about in this world? Maybe you're an environmentalist, or a feminist, or an advocate for education. If you saw something on Instagram that you already kind of believed, maybe some politicians slurring their words or saying they shouldn't really be president, would you check it out, or would you just file it away in your head because it's just what you'd expect from them really? But maybe they never said it. Maybe they never posed for that picture. Maybe it was all fake. Today we're talking about disinformation. I'm Kay Summers and I'm joined by Samantha Bradshaw. Samantha is a professor here at the School of International Service and a member of the faculty of our Center for Security, Innovation, and New Technology.

1:12 KS: She's a leading expert on new technologies and democracy, and she researches the producers and drivers of disinformation and how technology, including artificial intelligence, automation, and big data analytics, enhances and constrains the spread of disinformation online. She's also at the forefront of understanding and explaining the complex relationship between social media and democracy. Samantha, thanks for joining Big World.

1:40 Samantha Bradshaw: Thanks so much for having me.

1:41 KS: Samantha, to get us started, what is your definition of disinformation? And how do you differentiate misinformation from disinformation in your work and analysis?

1:53 SB: So when I think about disinformation, I'm thinking about content that is purposefully designed to deceive someone. So there's this idea of intent to deceive behind the definition of disinformation. And this is really different from misinformation, which has more to do with the unintentional spread of false information. So someone might unknowingly spread a piece of content that is false or misleading, but they don't know or realize that it's false or misleading. And so there isn't that intention there to trick or fool or deceive somebody in the same kind of way.

2:41 KS: Disinformation is definitely a topic we've heard discussed more frequently in recent years, particularly during and after the 2016 and 2020 presidential elections, and related to the COVID-19 pandemic. Samantha, what do you think are the factors that have contributed to the rise of disinformation in the US over the last decade?

3:02 SB: Disinformation, propaganda, fake news: none of these things are necessarily new problems, but there's certainly been this growing feeling that our digital spaces are being inundated with this kind of harmful content, content that's really harmful for our democracies and our societies more broadly. Today, disinformation is algorithmically curated in a digital environment where rumor, sensation, fear, anger, and other kinds of emotionally appealing content are prioritized. Social media platforms want to keep users connected and on their platforms for as long as possible, and algorithms will tailor and deliver content that is going to keep us engaged. Sometimes this content isn't necessarily good for democracy and good for our wellbeing. Another thing that's really new is that disinformation today is networked, so it can be produced, shared, and distributed from the bottom up, by its users, in addition to being pushed onto audiences from the top down. And so what I mean by this is that everyday people really participate in propaganda.

4:24 SB: It's not just government ministries that might curate and craft a propaganda message. I think a third thing that's really new about disinformation today is that it can also be automated. So it might not even be real people behind the engagement we see with content online, but pieces of code that are designed to mimic human behavior by liking, sharing, or retweeting content to give this false sense of popularity, momentum, or relevance around a person or an idea. And finally, I think one of the most important things about disinformation today is that it can also be data-driven. So it can be tailored to a very specific user or community of users based on their preexisting values, beliefs, and interests. And this data-driven nature of disinformation also means that it can be measured. How many people actually looked at and engaged with a message? How long did they stay on that page or look at that meme? Disinformation can be tested, measured, and refined to then mobilize certain audiences or stifle and suppress others.
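
[Editor's note: to make "tested, measured, and refined" concrete, here is a minimal sketch of the A/B-testing logic Bradshaw describes. It is not from the episode; the variant names and numbers are invented for illustration.]

```python
# Hypothetical engagement log for three framings of the same claim:
# (variant_name, impressions, engagements). All numbers are invented.
variants = [
    ("fear_framing", 10_000, 480),
    ("anger_framing", 10_000, 730),
    ("neutral_framing", 10_000, 95),
]

# "Tested, measured, and refined": compute engagement per impression,
# keep the best-performing framing, and iterate on it.
for name, impressions, engaged in variants:
    print(f"{name}: {engaged / impressions:.1%} engagement")

best = max(variants, key=lambda v: v[2] / v[1])
print("winner:", best[0])  # anger_framing, in this invented data
```

The same loop, run with real engagement data and micro-targeted audience segments, is what lets a campaign refine a message until it reliably mobilizes one group or suppresses another.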

5:40 KS: And Samantha, you mentioned that it's both a top-down and a bottom-up phenomenon on social media. Thinking about that top-down aspect, and particularly when you think about being able to test and kind of weaponize the most effective and emotionally resonant types of content: in the US, what kinds of groups do we primarily see spreading disinformation, putting it out there on purpose? Who's doing this?

6:09 SB: Definitely. There are lots of different producers of disinformation, but one of the most salient types that we often think about is the role of influencers, and high-profile influencers especially. We could think about former President Donald Trump and his role as an influencer. What I mean by this is, as the former president, he had an enormous audience on social media, and he could then reach millions of people through these channels in ways that other people couldn't. And so not only influencers, but politicians often play a role in promoting disinformation to benefit them politically. And we don't just see this in the United States. We see this in many other countries around the world. Looking at former President Duterte in the Philippines, or Bolsonaro in Brazil, there are many strongman political leaders who have learned to use and abuse social media for political gain. Other groups that we see, particularly in the United States, would be non-state actors. Groups like the far right have also been really effective in creating and weaponizing the digital media ecosystem to promote their own political agenda. And so we see a lot of those groups using these affordances of social media to spread viral memes that might use humor and sensation to really drive engagement around some really divisive ideas. We also see a lot of conspiracy theorists using social media in a very malicious way, and we can draw very clearly from COVID-19 and the various conspiracy theories and disinformation that were spread about the origin of the virus and the efficacy of the vaccines, with real public health consequences, right? These individuals weren't grounded in the actual scientific literature and understanding of the effects of vaccines, and the COVID vaccine in particular, on public health.

8:41 KS: And marketing in general, digital marketing especially, can be extremely targeted. Anybody who spends time on the internet knows that you're receiving ads all the time that are based on your search behavior and things that you've looked at, and there's a pair of pants that keeps following you around because you looked at a similar pair of pants. So certainly, people can be targeted

9:00 KS: ... for their general profile: geographic, ethnographic, demographic. It can be sliced and diced so that things are super targeted. With that in mind, are people online being targeted by disinformation roughly equally, or are there disparities among racial and ethnic groups in terms of who is being targeted with this bad information?

9:27 SB: There are definitely disparities in the ways, and the audiences, that are targeted by certain kinds of messages, because at the end of the day, the people behind these coordinated campaigns have some kind of geopolitical goal. Sometimes that goal has to do with galvanizing a particular audience base. Maybe you want certain kinds of people to show up at the polls and you don't want others to.

9:58 SB: If we look back at the 2016 US presidential elections, there was a lot of galvanizing around Trump's main voter base, while there was simultaneously a suppression campaign launched against Black voters who might have been voting for Hillary Clinton. So there's often a broad strategy that does create these uneven effects and creates a very uneven digital media environment. What I see is going to be very different from what someone in another part of the United States is seeing, or what someone in Europe is seeing, because of the data-driven nature of how social media content is tailored and refined to particular audiences. And then of course, there are people who will weaponize that tailoring, that data-driven nature of these campaigns, for their own political goals.

10:59 KS: Social media has definitely become the place where disinformation campaigns spread like wildfire. Some social media companies have attempted, after enormous public and political pressure, it must be noted, to take on a fact-checking role to combat the spread of disinformation. But now, even these mild attempts to combat disinformation on social media seem to be withering a bit. I know this is all readily available information, but I'm going to attribute it to the Guardian because that's where I saw it, in an article in July: Twitter's head of content moderation left Twitter in June amid a general fall in standards under the new owner, Elon Musk; Instagram allowed the anti-vaccine conspiracy theorist and Democratic candidate for president, Robert F. Kennedy Jr., back on its platform; and YouTube reversed its election integrity policy.

11:55 KS: Samantha, what are your thoughts on the way social media impacts the spread of disinformation, and do you agree at all with how social media companies are, or increasingly are not, trying to address it?

12:08 SB: Companies have definitely taken several steps to address the challenges of disinformation. We already know that the moderation teams platforms were working with were, a lot of the time, understaffed and overworked. This didn't create a conducive environment for dealing with the problem of disinformation on a global scale, because a lot of countries that don't speak English and that have very different local cultural contexts were really lacking moderators to begin with. So stepping back on that can really reduce the integrity of these overall systems and their ability to continue to respond. So I think it's really problematic that we're seeing companies step backwards.

13:06 SB: I think, as well, a lot of the election policies around mis- and disinformation are things that actually need to be thought about and incorporated beyond election times, because when you look at the life cycle of disinformation campaigns, they don't just start around election time. So taking a step back from these election-related policies is really taking a step away completely, because not only are we not thinking about some of the most important times, but we're just not thinking about disinformation and its life cycle around elections more broadly, and then the effects that this can have on democracy.

13:50 SB: I do sympathize with the platforms a little bit, because this is a really big undertaking: being able to address disinformation in every single country and every context around every election is an incredibly massive undertaking. But I think this points to some of the deeper problems with our social media platforms and our digital environments, and that's this question of why disinformation really goes viral in the first place. I think that the approaches a lot of the platforms have taken, and have now stepped back from, have tried to address the issues of the content itself. To me, a lot of the platform responses have fallen short here, because they don't deal with the systemic issues of the attention economy, the ways that disinformation is incentivized to go viral to keep our attention on the platforms longer. The solutions don't deal with this problem. They don't deal with the problem of surveillance capitalism and the targeting of certain kinds of messages and narratives to people based on their identities and their beliefs and their values.

15:16 KS: I think that is a great pivot to my next question, because this touches on something that you've worked a little bit on: this idea of figuring out, not just how do we stop this one piece, but why was this effective? You've researched how disinformation may not be as simple as a group or a government putting out something untrue. You looked at a specific case of how Russian trolling operations sought to drive wedges among feminists in the aftermath of the very successful Women's March in 2017. In this case, the trolls' tactics included things like posting, from accounts purported to belong to Black women, disparaging things about white women, all of them feminists. So in this case, the Russian trolls were looking to take advantage of existing suspicions between racial groups of feminists to cause outright ruptures. How much more dangerous is this type of strategic trolling that takes advantage of existing mistrust and systemic issues? How much more damaging or dangerous is that than the older, more obvious type of just putting out an untrue message to a group of people who want to believe it anyway?

16:35 SB: Yeah. I think it's a different kind of threat, a different kind of problem, because we're talking about, first of all, platform responses and the systemic design features of these technologies that can amplify mis- and disinformation, while human biases and these longstanding ideas of how we see and frame one another can also be another systemic reason why disinformation or harmful narratives go viral.

17:14 SB: We know from research that disinformation narratives that draw on stereotypes really resonate with audiences because they're congruous with the inequalities that we see and experience in the real world. So I think it can be particularly dangerous, because we're not only dealing with a technical issue here that has a very easy engineering solution; we're also dealing with a very human issue, where longstanding cultural inequities, longstanding histories, longstanding identities are really coming into tension and being amplified in our

18:00 SB: ... our digital spaces. So you can't just label sexism the same way that you can label a piece of disinformation that is spreading a lie. You can't just fact-check racism. The questions around identity and identity-based disinformation, and the way that racism, sexism, and xenophobia are weaponized for political gain, the solutions to these problems are going to require different long-term strategies that have a lot more to do with building trust, building empathy, and reducing polarization, things that we can't just easily fix overnight.

18:53 KS: Samantha Bradshaw, it's time to Take Five, and this is when you, our guest, get to daydream out loud and reorder the world as you'd like it to be by single-handedly instituting five policies or practices that would change the world for the better. What five policies would you want to see enacted in the United States to address disinformation?

19:13 SB: So the first policy that I'd like to see enacted by the US would be better data transparency regimes. I think that a lot of the research on disinformation and other harms associated with digital technologies and the use of platforms isn't quite good enough to inform holistic, responsive policies, because we don't have access, as researchers, to the data held by proprietary platform companies. And so, I think governments can play a really important role here by creating better data transparency regimes that would allow scholars, activists, and journalists to work more with the data of platform companies.

20:04 SB: The second policy that I would like to see enacted would be better trust and safety mechanisms to protect vulnerable populations. Women who are targeted with harassment don't always have the appropriate reporting mechanisms or support from platforms to deal with this kind of harm. And so, I think the companies can do a lot more to build out better trust and safety mechanisms to protect users no matter who they are and where they are in the world.

20:36 SB: The third policy that I would like to see enacted would be expanding human content moderation efforts. There's been so much disinvestment in content moderators, and I think this is really problematic when we're looking at the future of our information ecosystems. We need to make investments in this labor to keep our information ecosystems healthy and secure, but this labor has often been invisible, and it has often been very low-wage. We need to expand not only how we do human content moderation, but also the safety mechanisms and labor protections and practices in place for people who are doing this really important work. Fourth, I think that social media has really shifted the journalism and media landscape. Because platforms make information so freely available, news organizations have really struggled to generate the advertising revenue that pays for journalists, the people who actually make news work, and we need more creative ways to sustain and fund the news media industry. And so, I would really encourage policymakers to think beyond social media and beyond our digital ecosystem, and instead look at the broader media ecosystem and what we need to make journalism an institution that can really reach everyday people without being held behind paywalls.

22:28 SB: And then finally, I don't think banning platforms, or certain kinds of platforms, is necessarily the answer to fixing a lot of these problems. Recently, we've seen some governments, and some state governments in the United States, introduce bans against certain kinds of platforms, TikTok in particular, because of some of the harmful effects that it might have on children and teens. But I think this very heavy-handed approach really misses a lot of the benefits that social media and digital media bring, not only to us, but to young people. There's a lot of really great research out there that shows all of the benefits and opportunities that young people have when they engage on social media. There are of course harms, and we need to be cognizant of those harms, but banning platforms outright isn't the solution. Instead, we can think of other policy responses around designing platforms to be less addictive and less attention-demanding. If we can think of creative ways to redesign these business models so that social media is something that is healthy for everyone, then I think we'll be moving in a very positive direction.

24:01 KS: Thank you. Over the last decade, we have seen the rise of laws aimed at combating the spread of misinformation and disinformation. You recently published a report discussing how these laws can impact freedom of the press across the globe, because that's important: we don't want, in trying to tamp down disinformation, to make it difficult for reporters to report the truth. So can you discuss some of the findings of that research, as well as the tension that exists between maintaining freedom of the press and cracking down on misinformation and disinformation?

24:39 SB: Definitely. So I think that freedom of the press is one of those rights that doesn't get talked about enough when we're thinking about mis- and disinformation. Often, the focus is really on freedom of expression and people's rights to express themselves on social or digital media in some kind of way.

24:59 SB: But when you take this lens of freedom of the press and think about some of the responses to these problems, a bunch of new challenges arise. Around the world, we've seen more than 70 countries pass new laws designed to limit the spread of false or misleading information on social media, including disinformation. While a lot of these laws focus on positive things, like improving transparency and accountability in digital advertising, a lot of them are actually focusing on the disinformation content itself and looking at criminalizing the creation and distribution of "fake news."

25:51 SB: But if you are a journalist who is working on corruption in a country that doesn't have strong protections for your rights, doesn't have very many good institutional safeguards, and lacks the rule of law, then what constitutes fake news is not necessarily very straightforward. The government can really determine what that definition actually is. And we're seeing a lot of these laws then being used to target journalists and activists for their work.

26:32 KS: And you see that so often, even in a country like the US that has freedom of the press, where people are so quick to say, "It's fake news. It's fake news," if they don't like it, even when it's coming from a reputable source. And you can see how, in an environment where that institution of the press is not free, that could quickly become, "It's fake if the government says it's fake, and you're a criminal." So yeah, that's definitely a consequence that has to be kept in mind.

27:00 KS: Samantha, last question. I hope you brought your crystal ball. Generative artificial intelligence makes it so much easier to create deepfakes and, in general, push out a lot of phony stuff. And as you said earlier, some of these tools make it easier for just people, not even organized groups, to create and distribute this type of content. So how will new technologies and AI change the frontiers of disinformation, especially as we're heading toward a US presidential election in 2024?

27:35 SB: This is a really great question, because I think that generative AI expands the scale of the digital harms that we've been grappling with around disinformation. We already know that disinformation isn't new to social media, but generative AI really enables this greater scale for creating content. We can now have content farms generating misleading articles and creating images in minutes. Something that would take days to build, put together, and write, and that would require a lot of manpower and a lot of resources, can now be funneled through large language models and image generation. So it really makes it a lot easier to produce this kind of content, and it can also make this content and disinformation more compelling in certain kinds of ways. You could ask the generative AI to write a compelling prompt for a particular kind of audience, and it would then be able to tailor its message and give you multiple iterations of that tailored message to really speak to certain kinds of audiences.

29:04 SB: One of the things that we used to look for in a lot of disinformation campaigns is what we call copypasta. And copypasta is essentially copying and pasting the same message over and over, and it would be a really easy way to identify networks of fake accounts, because they were all saying the exact same things with the exact same grammar and spelling mistakes. But now that you can have generative AI write many different iterations of the message you're trying to get across, with very good English grammar and spelling, it takes away a lot of those indicators that researchers and investigators use and makes it harder for us to detect those campaigns. Whether or not generative AI is going to expand the scale at which this content spreads is a different question. And here we are going to have to rely on the platforms to implement good policies and practices to detect harmful actors. You can think about platform strategies in many ways, but they tend to look at disinformation in the context of actors, behaviors, and content. If they're only looking at the content, it might be really hard to determine what is a disinformation campaign and what isn't, with generative AI being so good at creating very trustworthy-looking information.
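
[Editor's note: the copypasta signal Bradshaw describes can be sketched in a few lines. This is not from the episode; it assumes a toy list of (account, message) pairs and exact matching after light normalization, whereas real investigations use richer data and fuzzier similarity measures.]

```python
from collections import defaultdict
import re

def normalize(text: str) -> str:
    """Lowercase, strip URLs, and collapse whitespace so near-identical posts collide."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def find_copypasta(posts, min_accounts=5):
    """Group posts by normalized text and flag messages pushed by many distinct accounts.

    posts: iterable of (account_id, message) pairs.
    Returns {normalized_message: accounts} for clusters above the threshold.
    """
    clusters = defaultdict(set)
    for account, message in posts:
        clusters[normalize(message)].add(account)
    return {msg: accts for msg, accts in clusters.items() if len(accts) >= min_accounts}

# A verbatim talking point pushed by a network of accounts, plus one organic post.
posts = [(f"bot_{i}", "The election was RIGGED! http://example.com") for i in range(8)]
posts.append(("real_user", "Looking forward to voting tomorrow."))
print(find_copypasta(posts))  # flags only the verbatim cluster
```

As Bradshaw notes, generative AI erodes exactly this signal: paraphrased variants of a talking point no longer collide on normalized text, which pushes detection toward behavioral indicators instead.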

30:44 SB: But if you're looking at the behavior of accounts, then you might be able to actually identify this content before it even makes it to the feeds of social media platforms. Bad actors still have to register accounts, they still have to act like normal users, they still have to spend time on social media and embed themselves into communities like they're real people. This is no easy task, and these are some of the other indicators that platforms use to identify this kind of harmful activity. And so I'm not fully convinced that it's going to be apocalyptic for democracy, but it's definitely going to introduce many new challenges for detecting and limiting the spread.
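
[Editor's note: a minimal sketch of how the behavioral indicators Bradshaw lists, such as account age and activity patterns, might be combined into a simple score. It is not from the episode; every feature and threshold is an illustrative assumption, not any platform's actual policy.]

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int          # time since registration
    posts_per_day: float   # average posting rate
    follower_ratio: float  # followers divided by accounts followed
    reply_fraction: float  # share of activity that is conversation, not amplification

def suspicion_score(acct: Account) -> float:
    """Toy heuristic: each bot-like trait adds to the score.

    Real systems learn thresholds from labeled takedown data and use many
    more signals; these cutoffs are made up for illustration.
    """
    score = 0.0
    if acct.age_days < 30:
        score += 1.0   # freshly registered
    if acct.posts_per_day > 50:
        score += 1.0   # inhumanly prolific
    if acct.follower_ratio < 0.1:
        score += 0.5   # follows many, followed by few
    if acct.reply_fraction < 0.05:
        score += 0.5   # amplifies but never converses
    return score

likely_bot = Account(age_days=3, posts_per_day=120, follower_ratio=0.02, reply_fraction=0.0)
print(suspicion_score(likely_bot))  # 3.0, well above a typical organic account
```

On this invented profile the score is high, but production classifiers weigh such features probabilistically rather than with hard cutoffs.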

31:37 KS: Samantha Bradshaw, I feel like "maybe not going to be totally apocalyptic" is the best that we can do at this point in trying to frame this conversation. I thank you so much for joining Big World to talk about disinformation in the US. It's been a treat to talk to you.

31:54 SB: Awesome. Thank you so much for having me.

31:57 KS: Big World is a production of the School of International Service at American University. Our podcast is available on our website, on Apple Podcasts, Spotify, and wherever else you listen to podcasts. If you leave us a good rating or review, it'll be like a social media feed full of videos of real hedgehogs getting real tummy rubs. Our theme music is "It Was Just Cold" by Andrew Codeman. Until next time.

Episode Guests

Samantha Bradshaw,
professor, SIS
