
Building Trust in an AI-Powered Crisis

Our doctors are dedicated to the Hippocratic Oath, often summarized as “first, do no harm.” But technology has taken no such oath.

 


 

Major brands are now using and investing in generative and predictive artificial intelligence, publicly backing its potential to reimagine many aspects of the healthcare experience — from patient charting to diagnoses and imaging analysis. And while the technology holds great value and potential, it’s not hard to imagine how an error in its data set, design or use could lead to patient injury and a reputational crisis.

In fact, we’ve already seen wide-scale failures. One AI-powered algorithm, used by hospitals and insurance companies across the country to identify patients in need of “high-risk management programs,” frequently overlooked Black patients, according to a research study published in Science. The model conflated an individual’s healthcare needs with their healthcare spending.

Two large research studies, published in the British Medical Journal and Nature Machine Intelligence, similarly reviewed hundreds of AI tools developed to diagnose COVID and triage patients. Their conclusion: Out of more than 600 models, none were accurate enough for clinical use. (Many had already been used in hospitals and health systems throughout the pandemic.)

Even the World Health Organization warned this year about the lack of proper caution accompanying the “precipitous adoption of untested [AI] systems” in healthcare — and the errors and patient harm that could result. 

While AI offers powerful possibilities for healthcare organizations, it’s only a matter of time before the technology causes a crisis — whether that’s a privacy breach, patient injury or system-wide error. 

And when it does, leaders must be prepared with a communications plan that can bring humanity into a technological crisis and adapt to the situation’s unique challenges. After all, “AI made a mistake” is not an explanation that will foster trust or uphold a reputation. Leaders will need to consider:

 

The speed and scale with which an AI-powered crisis can unfold. Organizations will probably have little warning of a potential crisis, limiting their ability to draft statements and contingency plans in advance. And unlike a human medical mistake, which would likely affect a small number of patients, an error in one AI tool could immediately impact health systems all over the country. As with cybersecurity protocols, leaders should consult with technology vendors in advance and have a strategy in place to immediately address and contain technological damage.

You may never know what happened — and you need a communications plan that can handle that uncertainty. Generative AI is currently a “black box” that can analyze data and make predictions but can’t explain its reasoning. So if a radiology tool misread an image and made the wrong diagnosis, you probably won’t know how it arrived at that conclusion.

It’s also hard to promise that you have fixed something when you don’t know what went wrong.

While it’s critical to communicate with transparency and urgency in the face of a crisis, the black-box nature of AI will make it difficult to quickly share initial facts or provide a follow-up report. You may also be limited in what you can share by NDAs, which some healthcare organizations are beginning to sign with technology vendors.

Without complete information or the ability to analyze what went wrong, public and media attention will focus even more heavily on the actions you take to help those affected. An executive who can quickly and genuinely communicate a plan to address the wrongs will demonstrate humility, empathy and decisive leadership — and begin to earn back trust and brand reputation.

There’s no tolerance for technological error. People have a certain amount of grace for human error, but a patient injury or privacy breach caused by AI is likely to elicit a much different response from your patients, staff and the public.

Any crisis response must consider the fact that in today’s environment, patients already feel vulnerable to technology. According to a Pew Research Center study, 75% say their top concern about AI in healthcare is that providers “will move too fast” implementing new solutions “before fully understanding the risks for patients.”

In other words: No one wants the computer to be in charge. If a crisis occurs, you need to show that someone is still minding the store. 

And in the face of impersonal technology, it’s humanity that builds trust. Leaders must address the situation personally, with responsibility and compassion, to counterbalance the role of AI. Technology can’t apologize, make restitution or be sued, so you must be ready to step forward with care, concern and a plan to make it right.
 
While legal liability may not be clear, your organization is already on trial in the court of public opinion. Identifying the players at fault may be nuanced, as it likely depends on the information that can be gleaned from the AI tool and whether errors can be traced to a user or developer. But you should be prepared for the public to hold your organization responsible for choosing to use the technology and for being the site of the injury. While you don’t want to own something that may not be your fault, it is critical to acknowledge the harm to your patients and immediately take action on their behalf.

 

AI is already changing the game in healthcare. And the companies that will lead the way are the ones ready to both harness the innovations and protect their reputations and people if something goes wrong.

 

This article was originally published in Medical Economics.


Brian is the CEO of Brian Communications and Pulitzer Prize-winning former publisher of The Philadelphia Inquirer. A serial entrepreneur, respected communications strategist, and unmatched connector, he’s advised top companies including Comcast, Deloitte, IBM, and Uber. Brian is the chair of the Poynter Foundation Board and serves on the boards of the Inquirer and CVIM (Community Volunteers in Medicine).
