Leave me Alone and Let me Scroll! The Shifting Landscape of Misinformation in Policy

GPPR Junior Editor Amelie D'hers (MS-DSPP '25) delves into the intricacies of navigating misinformation and its policy implications with Emily Horne, the founder and CEO of Allegro Public Affairs. Emily, formerly Special Assistant to the President, Spokesperson, and Senior Director for Press at the National Security Council (NSC) in the Biden Administration, and former Vice President of Communications at the Brookings Institution, engages in a comprehensive discussion spanning the misinformation landscape, from day-to-day media literacy to national security.

 

Check out more podcasts from the Georgetown Public Policy Review (GPPR) Podcast
Team: https://soundcloud.com/gppolicyreview
To follow GPPR podcasts, click the above link to GPPR’s Soundcloud Page, then click “FOLLOW” on the
right-hand side of the page to be sure to know when our podcasts drop! GPPR Podcasts are also
published to Apple Podcasts and to Spotify (see button at bottom of GPPR page).

[Intro]

Amelie D’Hers:

Hello and welcome back to the Georgetown Public Policy Review podcast. I'm your host, Amelie D'Hers. Today I am talking to Emily Horne, the founder and CEO of Allegro Public Affairs, who previously served as Special Assistant to the President, Spokesperson, and Senior Director for Press at the National Security Council in the Biden administration. I know you've held multiple other positions as well, which I am sure will come up in today's discussion. But I truly can think of no better person to talk to about policy communication and today's social media landscape. Emily, welcome.

 

Emily Horne:

Thanks so much for having me. Great to be with you all. Awesome.

 

D’Hers:

Since you're a comms expert, and we've had so many chats about media literacy, specifically on social media platforms, I thought it would be fun to ask some of our students questions about their own media literacy. So I'm gonna run a tape for you, just to kick off our interview.

[Tape Begin]

D’Hers:

What’s something you learned on social media that you’ll never forget?

 

Student 1:

The concept of girl dinner.

 

D’Hers:

Have you ever seen fake news before?

 

Student 1:

Yes.

 

D’Hers:

What was it?

 

Student 1:

Oh, that there were drugs on the Prime Minister of Canada’s plane on his way back from the summit.

 

D’Hers:

How do you know it was fake news?

 

Student 1:

Because it was on Twitter.

 

 

D’Hers:

What’s something you learned on social media that you’ll never forget?

 

Student 2:

That avocados were going extinct.

 

D’Hers:

Have you ever seen fake news before?

 

Student 2:

Yeah.

 

D’Hers:

What was it? 

 

Student 2:

That the Biden administration has been offering $2,200 to illegal immigrants.

 

D’Hers:

How did you know this was fake news?

 

Student 2:

Because it actually got called out a few weeks after.

 

D’Hers:

What’s something you learned on social media that you will never forget?

 

Student 3:

The whole Barbie marketing campaign.

 

D’Hers:

Have you ever seen fake news before?

 

Student 3:

Yes.

 

D’Hers:

What was it?

 

Student 3:

To be honest, I cannot even remember one off the top of my head. It goes so quickly that I don't see it. I forget.

 

D’Hers:

What’s something you learned on social media that you will never forget?

 

Student 4:

That cats like being brushed with a toothbrush, because it reminds them of being groomed by their mothers.

 

D’Hers:

Have you ever seen fake news before?

 

Student 4:

Yeah.

 

D’Hers:

What was it?

 

Student 4:

Um, there was this fake AI ad of Tom Hanks in this, like, dental advertisement, that he had to come out and just kind of clear up, saying that he wasn't in it.

 

D’Hers:

How do you know it was fake news?

 

Student 4:

Because he said so.

 

D’Hers:

Awesome. Thank you.

[Tape End]

Horne: That’s fascinating. Lots of interesting fodder there.

 

D’Hers:

Right?

 

Horne: Yeah.

 

D’Hers:

It was really fun to ask people questions about what they've seen on social media and how educated they feel about identifying stuff they're seeing that's not real.

So to kick us off, now that you've heard what our students have to say, I am personally dying to know how an expert like yourself would answer these questions. What's something you learned on social media that you'll never forget?

 

Horne:

Oh, gosh, um, let's see. I've been really obsessed with the Eras Tour this year, so I feel like I now have a parasocial relationship with several of Taylor Swift's backup dancers because of social media.

 

D’Hers:

Oh, I could not relate more. And then a slightly modified question, because I'm sure you've seen fake news: have you ever intervened to stop fake news before, either personally or professionally?

 

Horne:

Yeah, absolutely. One of your interviewees mentioned that when she saw something fake, it was debunked a couple of weeks later. I think that speaks to one of the challenges of trying to push back against disinformation or misinformation on social media. And I use those terms instead of fake news, because in my mind, fake news is something that's been so politicized, particularly in the US context. Disinformation is basically a fancy term for lies: the person saying it knows it to be false and is saying or expressing it anyway. With misinformation, the information is not accurate, but the person who is putting it out into the world may not know that. It's an important distinction. It goes to the intent of why something is out in the world, and it's also important for understanding how we push back against it, because the tactics you can use for fighting disinformation or misinformation may differ depending on the intent.

With someone who is misinformed, who's consuming information that they believe to be true, and the speaker believes it to be true, there's an opportunity to have a conversation, there's an opportunity to fact-check. And so yes, both in government and also when I was working at Twitter in 2017 and 2018, we did a lot of fact-checking. Something pops up on social media and we say: here's actually the truth, or here's context that was missing that you need to know, or here's the real version of how this happened, and what you saw was manipulated or edited footage, or perhaps even invented whole cloth.

But it's a different approach when you're dealing with disinformation, when the person who is pushing it knows it to be false and doesn't care, because having it out in the world serves their agenda. You've got to use some different tactics when you're pushing back against disinformation. Exposing who is doing the disinforming can be a big part of that: who is this person, why are they putting this out in the world, and to what end? Helping explain that to audiences can go a long way. Prebunking can also go a long way; sometimes you can anticipate what an adversary is going to do, and you can get it out into the world before they put it out there and take some of the sting out of that disinformation attempt.

But it's always an uphill battle, because if you care about the truth and your adversary doesn't, then you're inherently operating at a disadvantage tactically. They can move faster, they can be sloppier, and if something doesn't work, they can just move on to the next tactic. Whereas if you care about the truth, you're held accountable. You've got to work to make sure that what you're putting out into the world is accurate. You've got to be credible, and you've got to build that credibility with your audience. All of that takes time and resources. And if you're wrong, especially if you're knowingly wrong yourself, you can do a lot of damage to your credibility. So it's a tough fight. But the good news is that we're talking about this: the terms disinformation and misinformation are in people's vocabularies in 2023 in a way that they weren't even a few years ago. We talk about these as part of a healthy media diet and as part of being an informed citizenry.
So I think that there’s a lot of cause for optimism, because we’ve identified the problem. Now we’re working on what to do about it.

 

D’Hers:

Yeah, absolutely. And this leads really well into something else I wanted to chat about, specifically with your professional expertise. During the run-up to Russia's 2022 invasion of Ukraine, you helped execute the Biden administration's strategy to declassify, share, and publicize intelligence, which is something we've chatted a lot about in the past few months. Part of this effort, as I understand it, was to control the narrative around the invasion and rally support for Ukraine. With the rise of misinformation, which we all know spreads like wildfire on social media platforms, will we see this strategy carried out more by the US government in foreign affairs, do you think? Or was this really a one-time strategy?

 

Horne:

I think you'll see it more and more. And I think that's in part a reflection of how the information environment has evolved. Take a step back to 2014 and Russia's invasion of Crimea. I think in retrospect, the US government really saw the signs in the intelligence that this was a military operation being planned by Russia that was going to be launched within a certain window. And there was a lot of behind-the-scenes diplomacy to try to prevent it from happening, but very little informing of the public, because the information was classified. I think there was a great deal of frustration within the US government, and certainly within the Ukrainian government, that you saw something in the intelligence but your hands were tied in being able to get it out into the world because of the classification challenge. Also, again, we talk about these things differently now. I think we, the US populace, generally understand misinformation and disinformation much better today than we did in 2014, in part because we ourselves have been subjected to state-sponsored disinformation attacks, most notably Russian interference in the 2016 election, which really cleaved open existing social and cultural divisions within the United States by way of a disinformation campaign, largely on social media, perpetrated by Russia.

 

It's also a reflection of the fact that there are a number of actors in the citizen journalism or open-source space who talk about these things and report many of the same things that we can see in intelligence, but their information is not classified. They can publish their own satellite imagery, their own citizen reporting, their own analysis based on their own expertise, which is considerable for many of these outlets, of what they see happening in real time. So it's already out there. If we have it, the question is: how do we share it in a way that does not compromise our sources and methods?

With Russia, I think we had very good insight into what they were planning, and without going into those sources and methods, we found that there was a lot we could declassify about some of the things they were considering or actively planning in the run-up to their invasion of Ukraine. And again, without getting into sources and methods, we knew that publicizing some of those things, things like crisis actor plots to foment a pretext for invading Ukraine, things like plans to ethnically cleanse certain areas of Ukraine or hold Ukrainian citizens in camps, plans to depose or even execute democratically elected Ukrainian leadership and install puppet governance in parts of Ukraine, being able to talk about these plots and expose them before they could get off the ground did a couple of things. One, it denied Russia the ability to surprise the world by launching these plans. Two, it exposed them for what they were, so that even if we were not able to deter them, which was ultimately what we hoped to do, we would deny them the ability to claim legitimacy for their actions, to say, we felt ethnic Russians were under threat and therefore we had to take these actions, or to claim any other BS reason. By exposing them, you deny them the legitimacy, and you deny them the opportunity to make their case and use our own media against us.

So, again, I think Russia was a unique case, because there was some novelty to it. Candidly, it was a coordinated effort, and it was the first time the US government had done a coordinated effort like that in such a thorough way, with such a good understanding of, one, how it would play in the media and around the world, but also in service of clear foreign policy objectives. We didn't do it because it was fun to do, or because we could; we did it because we were trying to stop an invasion. And if we were not able to do that, then we wanted to hold the world together in holding Russia accountable.

The challenge now is: how do you take those capabilities of downgrading and sharing information in service of a foreign policy or national security objective and apply them to other objectives? Key for this is that you have to have the information. And I think, again, without getting into sources and methods, there are things happening in the world today where maybe we don't have the same level of insight into what bad actors may be planning. Something I hear from friends in the press now is: you guys could do this for Russia and Ukraine, why can't you do it for Israel and Gaza? Why couldn't you do it for Hamas, for the attack on October 7?
And the answer is: well, we did not know that that was coming. Israel did not know it was coming. So you have to have the knowledge; you have to have the intelligence in order to run plays like this. That's the foundation for it. And I think it's fair to say that being successful in that case has probably raised some expectations about how we do this in future settings. The challenge will be: do we have the intelligence? And can we declassify it and share it with the world in a way that doesn't compromise those sources and methods?

 

D’Hers:

Right, absolutely. And so when you were at the National Security Council under the Biden administration, and prior to that at the State Department under the Obama administration, how big a consideration was social media for your communications teams? How often did you strategize around it? And did that strategizing evolve over time?

 

Horne:

Sure. So at the White House, currently, under the Biden administration, a lot of the social media activity that the administration does proactively is managed through ODS, the Office of Digital Strategy. I would work very closely with my counterparts in ODS on launching proactive campaigns. Say we want to launch a messaging campaign to talk about X foreign policy issue. Let's get together and think about: what platform does that live on? Who's the audience that we're trying to reach? That will determine what platform we go to. How can we do some creative storytelling using social media in a way that's a good fit for that platform and for that audience? As we've talked about throughout the semester, all of communications really comes back to three fundamental questions: Who is my audience? Where is my audience? And what is the impact that I want to have on my audience? That's true of traditional communications, and it's true of social media. So those were always the foundations for any plan.

Now, reactive is something different. I left government for a time, in 2017, and then I came back in January of 2021 as a Biden political appointee. The pace of the job had changed completely in those four years, largely due to the advent of social media, Twitter in particular, as a really indispensable part of every reporter's life, and therefore every communications professional's life. So something that I and my team had to be much more active on was being on these platforms as government spokespeople. I think when I was in government before, we used them fairly passively, as media monitoring services, or if we did use them proactively, it would be in a very benign way: here's the press release that we just put out, and I'll post it up on Twitter. But it was much more of a conversation that was happening when I came back into government. And by conversation, I mean having that conversation on platform in real time, not just passively listening to what other people were talking about. That could be having an exchange with a reporter in real time about something they were working on, and, in addition to whatever comment I might give for a story, if they're putting out something that I think is inaccurate or missing context, correcting that in real time on Twitter, on platform. Because it's not going to be enough to wait for the eventual story to come out on, and I don't mean to pick on any particular outlet here, cnn.com or the Washington Post or Breitbart. You've got to engage on the platform where that conversation is happening and where opinion is being shaped in real time. Speed is essential, and you can't wait for the final story to pop and run and be accurate if the conversation that's happening on social media is not the one that needs to be happening.

 

 

D’Hers:

Yeah, absolutely. And so let's talk about a hot word: propaganda.

 

Horne:

Okay, let’s do it.

 

D’Hers:

You've talked about how the government treats social media and tries to combat misinformation and disinformation efforts. How do individuals go about deciphering bias and motive in the context of international affairs, especially when we're not necessarily educated enough on the topic?

 

Horne:

Sure. There are a couple of things. I'm a huge proponent of media literacy education, and I think it should start young, younger than it does right now. I have a seven-year-old and a three-year-old, and with my seven-year-old, I'm already asking: where did you hear this? Who said it? Are you hearing other people say it? How do you know it's true? These are questions you can start asking very early in life, without correcting people, but encouraging them to ask questions and interrogate what it is they're seeing. I think the first question is always: where did you hear this? Why do you trust this source? Who is the source? Why are they credible to you?

And then, when you're talking about breaking news, it's always important to remember that first reports are almost always inaccurate in some key ways. That's usually not through anything malicious; it's just because facts emerge as events unfold and as reporting unfolds. So being patient, waiting for something to be confirmed by official sources before letting rumors fly around, is really, really important. Again, look to who that source is. Social media can be great for elevating voices that are being actively suppressed by authorities. It's a wonderful way for people to speak truth to power, and to give a voice and a platform to people who otherwise may not have access to influence their policymakers or their lawmakers. Where it can be very dangerous is when rumors start to fly in the midst of crises or natural disasters or moments of great concern.

As I see more and more of a balkanization of social media, with people getting different information from different sources in real time, what I see there is a common quest to understand the world around us, but real uncertainty about how you can trust information. So it really does come down, to your point, to teaching individuals to have good information hygiene: teaching them to interrogate the sources of what they're seeing, and teaching them to take a beat. You don't need to share everything right away. Take a minute to read the full story, not just the headline. Ask yourself: why is this coming out now? Why am I seeing this now? Can I verify this from other sources? Those are all really, really important questions to ask. And when in doubt, focus on being right over being first, because it's basically impossible to be both.

 

D’Hers:

Yeah, that's a really excellent point. And I would love to chat about media literacy more. Before we do that, though, I want to focus specifically on social media platforms. We see so much misinformation on these platforms. We've seen social media companies, especially during the pandemic, start to take action, like adding warning banners to fact-check posts that have been identified as potentially false; you see these pop up on your Instagram story, etc. Which begs the question: where should the burden of truth lie? I'm curious if you have an opinion on this. Is the responsibility with users? Is it with the companies, or even the government, to provide accurate information and call out misinformation?

 

Horne:

I think it's all of the above, but to varying degrees and in varying roles. First of all, it's important to remember that the social media platforms where so many of us get information are not regulated. These are privately held companies in some cases, and even if they are public and report to a board of directors or to shareholders, that doesn't make them government utilities or government entities. It's also really important to remember that these are global platforms. A lot of them are headquartered in America, and as Americans, I think a lot of us view them through an American user lens. But if you're Twitter, or whatever we're calling it these days, or if you're Meta, or YouTube, your service, your platform, needs to be legally operable around the world. So we need to be careful about holding these platforms just to an American context of what we establish truth to be.

Because facts are not subjective; truth can be subjective. Differentiating between what are facts and what is truth is really important. Facts that can be verified and proven, that add context to a conversation, those are good things, I think, objectively, to lift up. They're good things for individual users to look for, and they're always good things for government to cite. I personally am inherently suspicious of anyone who tells me they have absolute knowledge of absolute truth. I don't think anybody has the ability to claim that, no matter how many facts they have at their disposal, but also because truth is emotional for people. That's just, I think, our lived reality. So focus on facts, and again, slow down: don't just read the headline, read the full story.

There are social media platforms that I think have made really good steps in this direction: adding interstitials to content they know to be factually inaccurate, or adding a user community that supplies necessary context to something that, taken out of context, may sound really inflammatory. You can make clear that something was selectively edited, or that imagery was manipulated in some way. That is information and context that I would like to see platforms add more and more. I do think it's also incumbent upon individuals to be on the lookout for it, and again, to interrogate what it is they're seeing.

Now, those are different things than government regulation. And I think, again, when you frame it in terms of truth, you make it very, very difficult to have a conversation about government regulation. I do not think it should be in the business of any government to determine what a truth is for their citizens to believe; that's a very dangerous thing. Again, what governments can do is lay out facts, and they can be active on platforms and present their own facts as well. And where there are clear legal violations happening on a social media platform, that is, I think, a really important point for intervention. When a social media platform is being used to spread terrorist content, or to share images of child pornography or child sexual exploitation, or non-consensual nudity among adults, those are really clear instances where there's a strong need for government regulation of these platforms. But speech is very different.
We also regulate speech in this country very differently than a lot of other countries do. So I think it's really important for these American companies, which have the advantages of the First Amendment and whose American users have a full expectation of exercising their First Amendment rights, to be mindful that not every country has its own First Amendment.

 

 

D’Hers:

Yes, I think that makes a lot of sense: a multifaceted approach, where social media platforms are clear about what's been edited or manipulated, individuals are diligent about being on the lookout, governments leverage social media to present their own facts, and regulation covers clear instances of things like human rights violations. And so, one of my favorite things to chat with you about: during the fall of 2017, you led global communications for X, previously known as Twitter,

Horne: Always Twitter for me, right? I still have a lot of Larry the Bird merch; that's what'll send my kids to college.

D’Hers:

So, Twitter, and its first-ever congressional hearing. This was shortly after accusations that foreign entities had interfered with the 2016 election via Meta, previously known as Facebook. Wow, these social media companies changing their names; it's giving Enron. I don't know, that's an old reference.

 

Horne:

As an elderly millennial, I'm right there with you, don't worry.

 

D’Hers:

As somebody who used to work at a company that was named Enron and is now something different, I understand. But this was really the first time social media companies had to testify at large about their role in politics. Can you speak about what you think came out of these testimonies?

 

Horne:

Look, a big takeaway for me personally was that there was not a tremendous amount of understanding by elected members of Congress or their staffs about how these platforms worked. So there was an opportunity to educate and have a conversation about them in real time. But something that disturbed me was a lack of curiosity from both sides of the aisle about what happened, about what we did to ourselves, essentially. There was a real assumption, and it was really partisan on both sides of the aisle, about how much of a role disinformation, and state-sponsored disinformation in particular around the 2016 election, actually played in our elections. I look back at that time, and my main takeaway is that state actors, Russia chiefly, were able to do damage to our democracy, but that we did most of the damage to ourselves.

There is no reason why a lot of this content, which was frankly very low quality, very spammy, and obviously not attributable to real people, ever should have gotten the attention that it did. What we found was that yes, this was on platform, and it shouldn't have been on platform; it violated the Twitter Terms of Service in all sorts of ways, but for the most part not because of the substance of the tweets that people would hold up. Most of the violations were things like multiple-account violations: you can't create multiple accounts that all say the same thing. That's just spammy and a bad user experience, and it's not allowed under the Twitter rules. So we should have caught it, but we didn't. There were also a whole bunch of bot violations, for instance, and some other accounts that violated the Twitter rules in a whole bunch of other ways; things like lying about polling information were violations of the Twitter rules, and some of those were allowed to go forward when they shouldn't have. But the vast majority of it was, frankly, pretty crappy quality that never should have been elevated.

What happened was that, because it was politically convenient for one party or another to have this content on platform, you would see legitimate users, real people, real accounts, real active users on Twitter, who would pluck this out of the muck of Twitter, launder it, push it out to their followers, and amplify it. And then when the social media hearings in the fall of 2017 were happening, this stuff that was, again, really low quality, that under normal conditions never would have penetrated the average Twitter user's consciousness, got even more attention. It got elevated more and more by reporters talking about it and by members of Congress talking about it, which I think inadvertently made it even more a part of our discourse. And I saw a real lack of curiosity around that question: by trying to expose these influence operations, what might we inadvertently be doing to, in effect, further amplify them?

When kinetic terrorists launch a terrorist operation, there's the initial attack and however many people it kills or injures, but the broader goal is to stoke fear. With information operations, there's the initial operation, which is going to have whatever impact it's going to have.
But the ripple effects, the distrust that they sow, the partisanship that they increase and intensify and solidify, the impact on our democracy from those, I think, is far greater than the initial round of crappy tweets or accounts that should have been caught, but weren't, during the 2016 election. That's the much more insidious thing that I think we really have yet to reckon with as a democracy.

 

D’Hers:

Yeah, I think that's a really good callout. I'm also specifically curious to hear more about congresspeople's understanding of technology. Are you saying that they didn't understand? Is the technological aspect the issue? Or was it the effect of the technology that seemed to be lost on them?

 

Horne:

I think both. There are the obvious technical things: this was not a hearing that I was participating in, but the now sort of infamous Mark Zuckerberg "Senator, we run ads" moment speaks to the lack of technical understanding of how these platforms work and what their business model is. But we also had members or staff who would ask us, well, why don't you just tell us what the algorithm is, and then we can help you figure out how to fix it, with the implicit, or in this case explicit, conversation being: we'll regulate your algorithms. Anybody who knows anything about how algorithms are built and run knows that that is nonsense. You can't pull out a line of code from an algorithm and have lawmakers look at it and regulate it; that's just not how the technology actually works.

But then there was also, I think, a lack of understanding that, again, these are global platforms that you can't just apply the American context to, and of how difficult it is to ask a platform to be the arbiter of truth. Again, the difference between truth and facts is, I think, a very important one to pull out, because truth can be subjective, and facts are not subjective; facts are facts. But it's a very difficult thing to ask a social media platform to adjudicate what is truth. And I think we saw a lot of interest from both sides of the aisle in having Twitter specifically, for my equities, and social media platforms more broadly, take that on, when, depending on what context you have and what set of facts you're looking at, two very reasonable people could look at the same situation and walk away with very different truths about it. That's a difficult thing to ask a social media company to weigh in on, and I definitely walked away from both private conversations and public testimony on that topic feeling that everyone was unsatisfied with the status quo.

 

D’Hers:

I ask this question because I am also wary of the idea of technicality being an inhibitor of regulation. I would love to see a world where people feel like they can demand more of technology regardless of how much they know, which is why talking to a comms professional is important: this perspective matters, and you don't have to be the best developer in the world to talk about this stuff. So I would love to pivot back to media literacy and the non-technical things we can do to combat misinformation and disinformation. California schools, I was just reading yesterday, are now required to teach students about misinformation and disinformation in English, science, math, and history classes at every grade level. I believe this has also been adopted by New Jersey, Illinois, and Delaware, and my home state, Washington, might adopt it soon. Do you think we will see more of this? Is media literacy something we should advocate for at the federal level?

 

Horne:

I would love to see it at the federal level. But because the issue is so politicized, and Congress is as dysfunctional as it is right now, it's very difficult to imagine legislation moving forward on this without it becoming intensely politicized. I'm glad there are states moving forward on it. In a perfect world, I think we would have federal legislation and federal support for better media literacy. There are countries, in Europe in particular, that do this quite effectively. Finland is probably the most shining example: perhaps understandably, given its 800-mile-long border with Russia, it is keenly aware of the impacts of disinformation, and state-sponsored disinformation, on a populace. Finland does this exceptionally well, actively teaching media literacy in its schools from a very young age, with national support for these efforts.

 

D’Hers:

Well, thank you so much for chatting about this. Just to finish us off, what do you think of podcasting as a mode of political communication? I know it's a new fad. Is this something you would have adopted when you worked under the Biden or Obama administrations? Is it even on the radar of communications teams?

 

Horne:

You know, the White House under Obama launched West Wing Week, as it was called: a video podcast of what was happening in the West Wing that week. Look, I'm a fan of podcasting. The market is fairly saturated right now, but anytime you have the ability to have a face-to-face conversation with someone, it's a good thing. Anytime you have the ability to have a longer conversation, with context and nuance and a back-and-forth exchange of ideas in addition to a sharing of thoughts, that's a good thing. So I'm in favor of it. More podcasts; bring them on.

 

D’Hers:

And do you have any feedback for a new, young, bright-eyed podcaster who's just trying to get political media right?

 

Horne:

Oh gosh, well, you're entering a very crowded space. So think about who your audience is, where your audience is, and what impact you want to have on them. Those are the key questions for any communicator. And know who you are: be authentic, be credible, be consistent. If you're going to launch a podcast, don't do it without at least eight episodes in the plan, so you have plenty of content right from the beginning. Have good marketing and branding, have interesting guests, and have fun with it. People don't like being talked at; they like being entertained. You can have a smart, informative conversation that makes people enjoy the experience, not one that makes them feel like they're being droned at.

 

D’Hers:

Well, I loved ending on an authentic, and hopefully entertaining, note. I hope we entertained y'all. Thanks again for joining us.

 

Horne:

Thank you!
