AI;DR: In Healthcare, It’s Not AI That Breaks Trust - It’s How You Use It

By Kristin Ryan, EVP, AI Transformation & Acceleration

AI;DR is more than a meme. It is an emerging shorthand people use to dismiss content that feels obviously AI-generated, generic, or low-effort. The broader signal behind it is what matters: we are moving into an “AI unless proven otherwise” internet, and audiences are rewarding content that feels authored, accountable, and real. For healthcare marketers, where credibility can influence patient decisions, that shift matters more than it does in almost any other category.  

This month’s abrupt shutdown of OpenAI’s Sora app adds to that signal. OpenAI has not publicly said that audience backlash alone caused the move; reporting ties the shutdown to mounting concerns over deepfakes, misleading content, nonconsensual media, copyright, and pressure from rightsholders. The lesson for marketers is simple: novelty and scale do not automatically earn trust.

As always, the market tells us a lot.  

  • Gartner says 50% of U.S. consumers would prefer to do business with brands that do not use GenAI in consumer-facing messages, advertising, and content. In the same survey, 68% said they frequently wonder whether the content they see is real, and 61% frequently question whether the information they use to make decisions is reliable.  

  • NIQ found that consumers intuitively identified most AI-generated ads and perceived them as less engaging and more annoying, boring, and confusing than traditional ads.  

  • Getty Images reports that nearly 90% of consumers want to know whether an image was created using AI, and 98% say authentic images and videos are pivotal to trust. Getty also specifically notes that healthcare/pharma is among the sectors where audiences increasingly expect transparency.  

  • Another industry study found that three in four U.S. consumers want to know if content was created by AI, 57% want visible labeling, 53% are uncomfortable with AI-assisted content, and two-thirds are uncomfortable with fully AI-generated content. In a separate data cut from Baringa, 22% of 16–24-year-olds and 46% of adults 55+ said they would refuse to consume purely AI-generated content under any circumstances.

The takeaway is not that audiences reject AI categorically. It is that they reject AI used as a shortcut for substance. They are far more open to AI when it is clearly assistive, transparent, and under human control than when it becomes the visible face of the content itself.  

In health, authenticity is not just a brand preference. It is a trust and safety signal. KFF found that about six in ten adults are not confident that health information from AI chatbots is accurate, and only 29% trust AI chatbots to provide reliable health information. Deloitte similarly found that 30% of consumers say they do not trust health and wellness information from GenAI-enabled tools, while 74% view doctors as their most trusted source for treatment information.

That matters because AI content is already shaping behavior. In Aha Media Group’s 2025 healthcare-search survey, 23% of consumers said they stop at the AI-generated answer without going further, and 38% said they have made a healthcare decision based on an AI answer. But the same research found that 76% could not recall any hospitals or health brands cited in the AI result. So visibility inside AI summaries is not the same thing as trust, memory, or preference.  

Healthcare marketers also have to contend with the darker side of synthetic authority. Full Fact found social accounts using AI deepfakes of real doctors and academics to promote bogus health-product endorsements, while CBS identified more than 100 videos across platforms featuring fictitious or impersonated doctors, with some viewed millions of times. In other words: in healthcare, “AI-generated expert” is not just a creative choice. It is now a recognized misinformation pattern.  

Social platforms are already moving toward disclosure, and they are signaling that invisible AI is not a durable strategy:

  • Meta says it adds “AI info” labels on Facebook, Instagram, and Threads when it detects industry-standard AI signals or when users self-disclose AI-generated content.  

  • YouTube requires creators to disclose realistic altered or synthetic content and says it may add labels even when creators do not disclose.  

  • TikTok requires labeling of realistic AI-generated content, automatically labels some content uploaded from other platforms via Content Credentials, says it has labeled more than 1.3 billion videos, and is testing controls that let people dial down AI-generated content in their feeds.  

  • Pinterest adds “AI modified” labels using metadata and classifiers, discloses when it believes an ad was modified with AI, and has introduced controls to reduce GenAI content visibility in certain categories.  

Platforms are building for provenance, labeling, and user control because they know audiences increasingly care about all three.  

In Part 2, we explore how to use AI in ways that enhance trust rather than put it at risk.


Interested in hearing more? Connect with us here.