AI Literacy as a Public Health Imperative

What Psychological Inoculation Research Tells Us About Fighting Misinformation

How Misinformation Became a Public Health Crisis

Our collective ability to agree on what's true has been eroding for decades—and it's accelerating. I keep seeing the same pattern in the organizations I work with: public health departments, nonprofits, small teams doing important work. They're stuck in permanent triage. No time to get ahead. Barely time to keep up. They hear about AI and think it might finally get them out of reactive mode. Sometimes it does. But more commonly, the blocker isn't the tools available; it's a system so bogged down in crisis that there are no resources left over for prevention.

In 1958, 73% of Americans trusted the federal government to do what's right. Today it's around 22%. Since 2007, that number hasn't cracked 30%. Vietnam. Watergate. Iraq. The financial crisis. Covid. And now: an administration actively dismantling the agencies people once relied on for guidance, while calling any inconvenient reporting "fake news."

Then add social media—which didn't connect us as much as it sorted and labeled us. MIT researchers found that false news is 70% more likely to be shared than accurate news, and reaches people six times faster than the truth. Not because of bots. Because of us. We're wired to share things that are novel and emotionally charged, and unfortunately for anyone defending accurate health information, lies are often more interesting than reality.

Here's where it becomes a public health problem: health guidance only works if people believe it.

I've seen this up close. I worked on the Virginia Health Department's 2021 COVID Vaccinated Virginia campaign, watching in real time how misinformation shaped who showed up to get vaccinated and who didn't. I have a client now, The Meg Foundation, a pediatric pain organization, where vaccine hesitancy isn't abstract—it's parents making decisions about their children based on what they saw on TikTok. The gap between what the research says and what people believe comes down to trust. And we keep losing ground.

A study in Scientific Reports found a direct relationship between misinformation exposure and lower vaccination rates—even when controlling for political affiliation. Research in Science found that exposure to vaccine misinformation on Facebook reduced vaccination intentions by 1.5 percentage points. Millions of people, deciding not to get a shot because of something they scrolled past.

How AI Accelerates the Crisis

Now a small handful of companies have decided to make generative AI tools available, at scale, and largely free. AI makes the information environment noisier, and it accelerates the production of misinformation itself. Generating convincing text, images, and video is now cheap and fast. Manipulation tactics that once took real effort can be deployed at scale by anyone with an API key.

These tools come with significant drawbacks for AI-illiterate users:

  • Confident wrongness: AI doesn't hesitate or hedge when it's making things up. It delivers hallucinations with the same tone as facts.
  • Citation hallucination: AI will invent studies, quotes, and experts that don't exist—complete with plausible-sounding journal names and dates.
  • Consensus manufacturing: One bad idea, echoed a thousand ways. AI can generate variations on misinformation faster than humans can debunk them, and the ability to distribute them at scale gets better every day.

Meanwhile, people are increasingly turning to AI tools for health information. More than 230 million people ask ChatGPT health and wellness questions every week. They're asking about symptoms, medications, treatment options. And the platforms know it. In January 2026, OpenAI launched ChatGPT Health, allowing users to connect their medical records and wellness apps directly to the chatbot. Anthropic followed days later with Claude for Healthcare, offering similar integrations.

These aren't random moves; they're responses to what users are already doing and to the liability this behavior creates for AI companies.

Dr. Danielle Bitterman, a radiation oncologist and clinical lead for data science and AI at Mass General Brigham Digital, noted that "this speaks to an unmet need that people have regarding their health care. It's difficult to get in to see a doctor... and there is, unfortunately, some distrust in the medical system." Both platforms have added privacy protections, disclaimers, and siloed health conversations from model training. Both companies emphasize that their systems "can make mistakes and should not be substitutes for professional judgment."

But disclaimers don't change behavior. All the privacy protections in the world can't obscure what's actually happening: health information is being consolidated into platforms designed to maximize engagement, not accuracy. The same tools that hallucinate citations and confidently present outdated information are now the first place hundreds of millions of people go with their health questions. The information environment just got faster, more personalized, and harder to verify.

But here's the part that should keep public health officials up at night: that same Science study, which tracked how content found on social media affected vaccination rates, found that content that was technically true but framed to raise doubts (vaccine-skeptical material that never crossed the line into outright falsehood) was 46 times more consequential in driving hesitancy than flagged misinformation. The most influential content for persuading people away from vaccination was rooted in some truth and sounded true enough to the average person.

The stuff that can't be fact-checked is doing the most damage.

AI is architecturally optimized to produce exactly that kind of content. Large language models have gotten so good that they generate fewer obvious falsehoods every day. They’re built to use probability to generate plausible text — trained to predict what sounds true, not what is true. The output is fluent, confident, and rooted in patterns from real human writing. It’s designed to pass the sniff test, and it often does. These are powerful tools that, at this moment in history, are more right than they are wrong — placing them in the sweet spot for fabricating half-truths that do more damage than outright lies.

You can’t debunk faster than AI can generate. But researchers have found something that works: instead of chasing falsehoods after they spread, you inoculate people before they’re exposed.

Prebunking Works

Here's what makes this so frustrating: we actually know what works against misinformation. We have the technical ability to fight for a safer internet, even in the age of AI.

The intervention is called prebunking—and it borrows directly from nature.

Here’s how it works:

  • First: proactively warn people that they will encounter some type of manipulation
  • Second: expose them to a weakened version of the tactic, just enough that they'll recognize it when they see it in the wild

Sound familiar? It should.

The same way a vaccine teaches your immune system to recognize a pathogen, prebunking teaches your brain to recognize a con. The prebunking research focuses on five tactics that show up again and again in misinformation:

  • Emotional language: using fear, anger, or outrage to bypass critical thinking
  • False dichotomies: presenting only two options when more exist ("you're either with us or against us")
  • Scapegoating: blaming individuals or groups for complex problems they didn't cause
  • Ad hominem attacks: attacking the person making an argument rather than the argument itself
  • Incoherence: using multiple contradictory claims that can't all be true

Once people learn to spot these patterns, they can defend against them. Prebunking campaigns can improve the health of populations by inoculating them against these manipulation tactics.

The best part? It's not theoretical. Google and Jigsaw ran prebunking campaigns on YouTube that reached millions of viewers and improved their ability to spot manipulation techniques by 5-10%. The effects lasted weeks. Cambridge researchers built games like Bad News and Go Viral that teach people to recognize disinformation tactics—not by telling them what to believe, but by showing them how manipulation works.

However, prebunking isn't a silver bullet — early research on AI-generated misinformation shows modest effects, and lab results don't always hold when tested against the chaos of a real social media feed. But it's one of the best upstream tools we have, and it targets the manipulation mechanics that AI still relies on. The output may be more polished, but the playbook is the same: emotional appeals, false urgency, in-group/out-group framing. Inoculation teaches people to recognize the techniques, not verify the content.

There's an irony here worth noting: the same tools creating the problem can help scale the solution. Recent research found that LLM-generated prebunking messages were as effective as human-written ones at reducing belief in election misinformation, with effects persisting a week later. We may not be able to fact-check faster than AI can fabricate. But we might be able to inoculate faster than it can mutate.

It gets even more interesting: there is reason to believe that the inoculating effect of prebunking campaigns might create an immunity that spreads from one person to another.

How Immunity Could Spread

Here's what the research says about what happens next.

When people learn to recognize manipulation techniques, they don't just become more resistant themselves. They talk about those techniques, and that resistance spreads to others. Researchers call this "post-inoculation talk": the conversations that happen after someone learns to spot rhetorical manipulation. And those conversations matter.

Research from Cambridge found that people who've been inoculated against misinformation don't just resist misinformation themselves—they share what they learned. And when they do, the people they talk to become more resistant too. The researchers call it "vicarious inoculation." The psychological immunity passes from person to person, like a virus.

Sander van der Linden, one of the lead researchers at Cambridge, has started using the phrase "psychological herd immunity." If enough people in a community are inoculated against manipulation, the misinformation has fewer places to land. It doesn't spread as far or as fast.

And the more you talk about what you've learned, the stronger your own resistance becomes. But it also spreads. When someone discusses manipulation tactics with family or friends, that conversation passes along a form of immunity—even to people who never saw the original intervention.

Public health already understands this concept. Take the media guidelines around suicide reporting, for example. These guidelines exist because researchers documented how coverage of self-harm in the media affected populations. They named it the Werther effect—after Goethe's 18th-century novel whose protagonist's suicide reportedly triggered copycat deaths across Europe. Irresponsible framing increases risk. But researchers also found the inverse: responsible coverage that emphasizes coping, recovery, and available help has a protective effect on populations. They call this the Papageno effect, after the character in Mozart's The Magic Flute who is talked out of suicide by being shown another way forward. Self-harm can be a social contagion, spreading through human communication networks—but so can resilience.

It gets even more interesting when you consider that responsible media coverage about self-harm can work like an inoculation, even if it's not technically prebunking. The media guidelines encourage framing these deaths as a final symptom of a disease rather than a personal choice or character failing. By broadcasting this, communicators are giving at-risk populations a mental model that protects them: this is something that happens when illness goes untreated, not proof that you're at fault. The message inoculates against the idea that self-harm is a personal failing. You're not telling people what to believe about a specific claim. You're giving them a cognitive frame that makes them resistant to harm.

Epidemiologists increasingly use contagion models to study how misinformation moves through populations. But if misinformation spreads like a virus, then maybe immunity can too. Simulations suggest that at some threshold—researchers have modeled around 60% (though the real number likely varies by community and topic)—misinformation struggles to gain traction. That's "psychological herd immunity": enough resistant people that the contagion has fewer places to land.

Sixty percent is ambitious, and the theory is still being tested. But if the model holds, the implication is the same one vaccination campaigns rely on: not everyone needs to be immune for the whole community to benefit.
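
The threshold intuition is easy to see in a toy model. Here's a minimal sketch, assuming a simple SIR-style contagion in a well-mixed population; the population size, contact rate, and transmission probability below are illustrative assumptions, not parameters from any published study:

```python
import random

def outbreak_size(n=10_000, contacts=8, p_transmit=0.25,
                  inoculated=0.6, seeds=10):
    """Toy SIR-style contagion: what fraction of a well-mixed
    population ends up sharing a piece of misinformation?"""
    # A randomly chosen fraction of the population is "inoculated"
    # (resistant); a handful of seed accounts start sharing.
    immune = set(random.sample(range(n), int(n * inoculated)))
    frontier = set(random.sample(range(n), seeds)) - immune
    reached = set(frontier)

    while frontier:
        nxt = set()
        for _ in frontier:
            # Each active sharer exposes a handful of random contacts.
            for person in random.sample(range(n), contacts):
                if (person not in immune and person not in reached
                        and random.random() < p_transmit):
                    nxt.add(person)
        reached |= nxt
        frontier = nxt
    return len(reached) / n

# Sweep the inoculated fraction and watch the outbreak collapse
# once enough of the population is resistant.
for frac in (0.0, 0.2, 0.4, 0.6, 0.8):
    runs = [outbreak_size(inoculated=frac) for _ in range(5)]
    print(f"inoculated {frac:>4.0%} -> mean outbreak {sum(runs)/len(runs):6.1%}")
```

With these toy numbers, each sharer infects about two susceptible people on average, so outbreaks saturate the population at low inoculation levels and fizzle once roughly half of it is resistant. Real dynamics are messier, but the qualitative behavior is the point: immunity doesn't need to be universal to starve the contagion.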

But We're Defunding the Research

In April 2025, the National Science Foundation terminated more than 400 grants related to misinformation research—$233 million in funding, gone overnight. Researchers learned via email on a Friday afternoon that their work was "no longer aligned with NSF priorities." No appeal. No transition. Just done.

The official rationale, posted in an NSF FAQ: "NSF will not support research with the goal of combating 'misinformation,' 'disinformation,' and 'malinformation' that could be used to infringe on the constitutionally protected speech rights of American citizens."

To be fair: there are legitimate concerns here. Who decides what counts as misinformation? Content moderation has been applied inconsistently, and there are documented cases where platforms suppressed legitimate speech, sometimes under pressure from government officials. Giving any authority the power to determine what's "true" creates obvious risks. These aren't invented worries.

But there's a difference between "we should be careful about how we address this" and "we should not study this at all." The position that even researching how people process information is too dangerous, that the mere act of understanding manipulation threatens free speech, isn't caution. It's ignorance as policy. We shouldn’t cut research funding because findings might be inconvenient.

Here is what often gets lost: prebunking isn't content removal. It doesn't silence anyone or take down posts. It teaches people to recognize manipulation techniques—emotional appeals, false dichotomies, scapegoating—so they can evaluate information for themselves. It's the opposite of telling people what to think. It's teaching them how to think.

But these distinctions didn't matter. By November, the cuts had grown. More than 1,300 NSF grants terminated, worth $739 million. A Nature analysis found the administration "disproportionately cancelled or froze projects on topics it disfavors"—including misinformation, vaccine hesitancy, and infectious disease research.

And NSF is just one piece. Platforms are retreating from content moderation. Fact-checking organizations are losing funding. The infrastructure for finding trustworthy information is being dismantled.

Kate Starbird, co-founder of the University of Washington's Center for an Informed Public, put it plainly: "These cuts—along with diminished transparency of platforms, platforms retreating from content moderation, and the defunding of fact-checking organizations—are likely to make it even more difficult for researchers, journalists, and everyday people to find trustworthy information."

We have promising interventions. We have evidence they work. And they're being dismantled—not in isolation, but as part of a broader shift away from shared truth as a public good.

AI Literacy Is Health Literacy Now

To be clear about what would actually solve this: we need systemic intervention.

Media literacy in K-12 curriculum. Workforce training for adults who never learned to navigate an information environment designed to manipulate them. AI literacy that teaches people what these tools can and can't do—how they hallucinate, why they sound confident when they're wrong, what it means that they're trained on the same polluted information environment we're trying to clean up.

AI literacy isn't a tech issue anymore. It's a health literacy issue.

When people treat ChatGPT like a doctor, they need to understand they're talking to a pattern-matching system that doesn't know what's true; it only knows what sounds right. These concepts need to be taught at scale.

The idea of institutions investing in misinformation resistance isn't radical. Finland has been teaching media literacy in schools since the 1970s and consistently ranks among the most resistant populations to misinformation in Europe. It's investment, sustained over decades, in teaching people how to think about what they're seeing.

And people are fighting for this. The News Literacy Project just received a $1.2 million grant from Carnegie Corporation to continue building free K-12 curricula. Media Literacy Now has helped pass media literacy legislation in 19 states, with four states—California, Delaware, New Jersey, and Texas—now requiring K-12 instruction. NAMLE has been building educator networks since 1997. Researchers are still publishing, still running studies, still developing interventions that work.

But if you're waiting for the current administration to prioritize information literacy (or even to stop actively undermining it), I don't know what to tell you. The same government defunding misinformation research is not going to fund media literacy curricula. And given where public trust sits right now, I'm not alone in doubting whether federal involvement would even help.

This isn't a policy paper. I'm not going to pretend there's a bill we can call our senators about that will fix this. Not right now. So the question becomes: if systemic change isn't coming from the top, where does it come from?

What Public Health Professionals Can and Should Do

Knowing that manipulation techniques exist, and that they can be spotted, is itself a form of inoculation. But knowing isn't enough. Here's what actually helps.

Get AI Literate

The field that has invested most heavily in understanding AI and how content reaches people is marketing—the developers and growth teams, typically at for-profit companies. I would know—I've been on those teams. And they have no incentive to increase health literacy. They're optimizing for conversions.

So yes, this might mean learning skills that feel like they belong to a different world, maybe a world you don't want to work in. But public health communicators need to be just as skilled at modern information distribution as the people selling supplements and miracle cures.

And if you're AI hesitant, if something about all of this makes you deeply uncomfortable, I get it. But don't let that discomfort be the reason you stay uninformed. Being AI literate doesn't mean you support the technology or the companies that produce it. In fact, the more literate you become, the more precise your critique can be. Ignorance isn’t resistance. It’s just ignorance.

The case for treating this as a professional competency is building. A July 2025 perspective in Frontiers in Digital Health proposed a five-pillar framework for embedding AI literacy into public health education: technical foundations, ethical and regulatory literacy, experiential learning, governance and policy, and equity and access. The authors note that most public health programs offer no structured AI coursework, "leaving students to acquire these skills through informal or ad hoc methods." You don't have to wait for your institution to adopt a framework. You can start with the first two pillars yourself—and the resources to do it are free.

Start with the fundamentals. The University of Helsinki's Elements of AI is free and designed for non-technical learners. Anthropic has published courses specifically for educators. For the research perspective on what this means for health organizations, this BMC Public Health review is worth your time.

Use Prebunking As A Content Strategy

If you work in public health communications, you're already creating content. You're already fighting misinformation. And your resources may already be stretched thin.

Here's what I want you to know: prebunking doesn't have to be a new program. It can be a content type.

The 90-second videos that Cambridge and Google Jigsaw tested on YouTube aren't expensive productions. They're animated explainers—the kind of thing public health social media teams already make. The difference is the framing: instead of debunking a specific false claim after it spreads, you're teaching your audience to recognize the manipulation tactics that show up across all health misinformation.

Emotional language designed to trigger fear. False dichotomies that make nuance impossible. Scapegoating that turns complex problems into someone's fault. These tactics don't change whether the topic is vaccines, fluoride, or the latest wellness grift. Teach your audience to spot the pattern by repeatedly exposing them to the concepts, and you've given them tools that they can take with them wherever they go online.

The research suggests this approach may actually be more effective than traditional myth-busting. Debunking requires you to repeat the false claim, which can inadvertently reinforce it. It also risks feeling like an attack on people who believed the misinformation, which tends to backfire. Prebunking sidesteps both problems.

And your bosses might actually like this idea, because prebunking content can be created with the resources you already have. A social media manager who's already producing educational content can add "manipulation technique explainers" to their content calendar. If you're using AI tools for content drafts, this is exactly the kind of structured, research-backed messaging those tools handle well. The Practical Guide to Prebunking Misinformation from Cambridge, Google, and BBC Media Action was written specifically to help practitioners deploy this without needing a psychology degree.

Learn the prebunking techniques yourself. The prebunking videos take 90 seconds each. Watch all five. You'll start noticing the patterns everywhere, which is the point. If you want something more interactive, the Bad News game puts you in the role of a misinformation producer. It takes about 15 minutes, and studies show it improves your ability to spot manipulation regardless of your politics, education level, or age. There's also Harmony Square for political misinformation specifically.

Upskill in Answer Engine Optimization

More people are getting health information from AI tools—not just search engines, but conversational AI that synthesizes answers from across the web. If your organization isn't showing up in those answers, someone else is. And that someone else is more likely to be a supplement company, a wellness influencer, or a health system with a marketing budget bigger than most.

This means public health organizations need offensive Answer Engine Optimization strategies, not just accurate information buried on page three of your website. You need to be the reputable answer that AI tools surface.
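
In practice, much of this work is unglamorous: clear question-and-answer pages, consistent terminology, and structured data that makes your authority machine-readable. As one illustration, here's a sketch of schema.org FAQPage markup generated in Python; the question, answer, and organization are placeholders I've invented, not real guidance:

```python
import json

# Hypothetical example: schema.org FAQPage markup, one commonly
# recommended way to make authoritative health answers easier for
# search and answer engines to parse. All content below is placeholder.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is the flu vaccine safe during pregnancy?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "Yes. Flu vaccination is recommended during pregnancy "
                    "and protects both parent and baby. See our full "
                    "guidance page for details."
                ),
            },
        }
    ],
}

# Embed the output in a <script type="application/ld+json"> tag
# on the corresponding FAQ page.
print(json.dumps(faq_markup, indent=2))
```

Structured data like this is one widely recommended signal for getting content surfaced and attributed; the substance still has to be accurate, but the structure determines whether machines can find and quote it.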

If you're starting from scratch, Ahrefs' guide to SEO for nonprofits is one of the clearest introductions. The Wild Apricot 2025 guide specifically addresses how AI is reshaping search for mission-driven organizations. Rosica's guide on competing with AI overviews lays out the strategic case for why this matters now. And for a comprehensive overview of answer engine optimization—how to get your content surfaced by ChatGPT, Perplexity, and Google's AI Overviews—CXL's AEO guide is one of the best available.

Model and Advocate For Increased Literacy In Your Personal Life

It's easy to feel like the world is filled with problems too large to face, but we aren't helpless. There are things we can do in our own individual lives that help move the needle on these goals.

  • Talk about what you learn with your community. When you explain to someone why a headline is using emotional manipulation, or point out a false dichotomy in a political ad, you're not just correcting that one instance—you're teaching them to recognize the pattern. The research on post-inoculation talk suggests these conversations may be more valuable than the original intervention.
  • Model good information hygiene. Pause before you share. Ask yourself: Is this triggering outrage? Does it present only two options? Is it blaming a group for something complex? You don't have to be perfect. You just have to be slightly more deliberate than the algorithm wants you to be. The people in your life are watching how you navigate this, even when you don't realize it.
  • Start early with the kids in your life. Read to them. This sounds too simple to matter, but the research is unambiguous: early reading creates a culture of literacy that shapes how children engage with information for the rest of their lives. Talk to them about how ads work, why headlines are written the way they are, what makes a source worth trusting. And don't underestimate the foundation you're laying with how you handle their early medical experiences—building medical literacy that prevents fear from compounding into healthcare avoidance later.
  • Support the organizations doing this work. The News Literacy Project, NAMLE, Media Literacy Now, and others are fighting for systemic change even when the political environment makes it nearly impossible. They need funding, volunteers, and visibility. If policy isn't coming from the top, it has to be built from the bottom.

Make AI Literacy A Long-Term Public Health Priority

We need to inoculate our communities against AI-specific misinformation pitfalls: hallucinated citations, confident wrongness, the illusion that a chatbot "knows" something. AI literacy is becoming a public health competency. Your audiences need to understand what these tools can and can't do—and you're positioned to teach them, the same way you teach them about vaccine schedules or nutrition labels. Prebunking isn't just for misinformation anymore. AI needs its own inoculation campaign.

I’ve spent years in public health communications. I know how it feels to watch misinformation spread faster than you can respond, to see the same false claims recycled with new packaging, to wonder if anyone is actually listening. Prebunking won't solve that. But it shifts the game from whack-a-mole to something closer to vaccination: build enough baseline immunity, and the outbreaks become smaller.

Your audience trusts you. Use that trust to teach them how to protect themselves—not just from this week's misinformation, but from the tactics that will keep showing up again and again.

You don't have to inoculate everyone. You just have to start.