By Sukhy Bachada, SVP, Global Media, Inizio Evoke Comms
Earned media in healthcare hasn’t lost relevance in the AI era. It has become even more critical.
What was once primarily about shaping reputation and reach now carries an additional responsibility: influencing how AI systems understand, validate, and explain health information at scale.
People still read newspapers, journals, and trusted outlets. Editorial judgment still determines what gets published and what carries weight with human audiences. But something important has changed. Increasingly, AI systems now mediate what people encounter first, how information is framed, and which sources are surfaced as credible. This often happens before anyone pauses to evaluate a source, check where the information came from, or engage directly with a brand or expert. Because AI now decides which health answers people see first, the sources it trusts have disproportionate influence.
In healthcare, where trust, nuance, and accuracy matter more than speed or novelty, that shift has profound implications.
This change doesn’t sit in isolation. My colleague Paul Herbert, SVP, Digital Strategy, Inizio Evoke Comms, recently explored the wider implications of generative AI for healthcare communications strategy in an article on the ten strategic communication shifts AI is forcing in the sector. In this article, the focus is narrower: what this shift means specifically for earned media, and why its role has become more critical – not less – in an AI-mediated information environment.
From publishing to encounters
Publishers still determine what exists. But the encounter – how and where people first meet information – is increasingly shaped elsewhere.
When patients, caregivers, journalists, policymakers, and healthcare professionals ask AI tools questions about diseases, treatments, or health systems, they are not browsing a media landscape. They are receiving a synthesized answer. That answer is shaped by what the system has learned to trust, how it reconciles conflicting information, and which sources it treats as authoritative enough to repeat.
AI doesn’t replace earned media; it interprets it. And as Paul outlined in the earlier article, that interpretation increasingly happens before traditional engagement begins.
Why earned media carries disproportionate weight in healthcare
In healthcare, AI systems don’t optimise for popularity or volume alone. They lean heavily on credibility signals. Across models and platforms, three patterns consistently emerge.
1. Health-specific safety and training priorities matter.
Major AI systems are explicitly designed to behave cautiously in health contexts. They are trained to prioritise information that appears authoritative, professionally sourced, and corroborated across multiple trusted outlets. Speculative, promotional, or one-sided content is actively downweighted. This naturally elevates content that has passed editorial scrutiny, features expert voices, and aligns with established medical or institutional norms. In practice, that means reputable earned coverage carries more weight than many teams realise. It matters not because it is louder, but because it fits how AI systems are trained to behave in high-risk domains.
2. Observable citation behavior reinforces this dynamic.
On AI platforms that surface sources – such as Perplexity, Bing, or Google’s AI experiences – citations skew consistently toward medical and health news outlets, established publishers, and NGO or institutional sources. Brand-owned content appears far less frequently unless it is clearly factual, well-structured, and corroborated elsewhere. This is a critical distinction. Visibility alone is not the same as influence. Earned media functions as an authority layer: it legitimises narratives, provides corroboration, and signals that information is safe to reuse.
3. Newsroom dynamics amplify the effect.
As AI-generated content becomes more widespread, human editors are not relaxing standards – they are raising them. Generic or synthetic material is filtered out faster. Original reporting, expert insight, and human judgment are valued more highly, not less. The result is a reinforcing loop: content that survives editorial scrutiny becomes more authoritative in the ecosystem, and those are precisely the sources AI systems are most likely to ingest, learn from, and cite.
Visibility without credibility doesn’t last
For years, earned media strategies were often judged on reach, tiering, and share of voice. Those metrics still matter for human audiences. But in an AI-mediated environment, they are no longer sufficient.
Being mentioned in an AI-generated answer does not guarantee that information is accurate, complete, or responsibly framed. In healthcare especially, presence without context can be misleading. What matters is not just whether something appears, but how it is explained, what sits alongside it, and which sources anchor the story. This is where earned media’s role has expanded.
Earned media now helps determine:
Which perspectives AI treats as credible
Which narratives are corroborated rather than averaged away
Which nuances survive summarization rather than being flattened
In other words, earned media increasingly shapes answer quality, not just awareness.
The strategic implication many teams haven’t fully clocked
The result is a shift that many organizations have not yet fully internalised. Earned media is no longer just influencing people directly. It’s also influencing the machines that are influencing people directly. It is shaping the answers that now stand in for discovery, understanding, and trust – particularly at the moment when questions are first asked and decisions begin to form.
That doesn’t mean abandoning traditional earned strategies. It means recognising that the same content now operates in two systems at once: a human media ecosystem built on editorial judgment and trust; and an AI-mediated layer that learns from, synthesizes, and repeats those signals at scale.
One caution: there is a growing industry of advice on how to "optimize for AI answers," much of it built on broad assumptions. In practice, when we audit how AI actually answers health questions across different diseases and treatments, we see huge variation in which sources are cited and how information is framed. Generic playbooks are not just unhelpful, they can be actively misleading. The only reliable starting point is to look at what AI is actually doing with your specific topic.
For teams asking “what do we do on Monday?”, this means three things: make sure your most important health narratives can pass strict editorial scrutiny; structure key explainers so they are easy for both humans and machines to understand; and start tracking how your organization shows up in AI-generated answers, not just in traditional coverage reports.
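That last step can start simply. As a minimal sketch of what tracking might look like: collect the answers an AI tool gives to your key health questions (however your team gathers them), record which sources each answer cites, and measure how often your organization's domains appear. The questions, URLs, and domain names below are illustrative assumptions, not real data.

```python
from collections import Counter
from urllib.parse import urlparse

def citation_share(answers, own_domains):
    """Count how often each domain is cited across a set of AI answers,
    and return the share of citations held by your own domains."""
    counts = Counter()
    for answer in answers:
        for url in answer["citations"]:
            counts[urlparse(url).netloc] += 1
    total = sum(counts.values())
    own = sum(n for domain, n in counts.items() if domain in own_domains)
    return counts, (own / total if total else 0.0)

# Illustrative data only: questions, URLs, and domains are made up.
answers = [
    {"question": "What are the treatment options for condition X?",
     "citations": ["https://www.nih.gov/a", "https://www.statnews.com/b"]},
    {"question": "Is treatment Y considered safe?",
     "citations": ["https://www.statnews.com/c", "https://example-pharma.com/d"]},
]

counts, share = citation_share(answers, {"example-pharma.com"})
print(counts.most_common(1))  # prints [('www.statnews.com', 2)]
print(round(share, 2))        # prints 0.25
```

Run periodically across a fixed question set, a report like this shows whether your earned coverage – rather than competitors or unrelated sources – is anchoring the answers people actually receive.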
In healthcare, where credibility is non-negotiable, where misinterpretation carries real risk, and where one misread AI answer can shape patient decisions or policy, earned media’s dual role has never been more mission-critical.
Reach out to our experts to find out more.
