Crisis Communications in the Age of AI

Written by Tim Spreitzer | Feb 3, 2026

Takeaways on emerging risk and crisis readiness from the Sharp Healthcare lawsuit.

A lawsuit recently filed against Sharp Healthcare, one of the largest health systems in Southern California, offers organizations a clear reminder that artificial intelligence has introduced a slate of new reputational risks alongside its celebrated opportunities for innovation and efficiency.

The proposed class action alleges that Sharp Healthcare failed to inform patients it was using the Abridge AI scribe tool to record and transcribe their private conversations with clinicians, and therefore never gave them the opportunity to consent or opt out – which the suit claims is a clear violation of California’s privacy laws. The case also alleges that clinical notes falsely included standard text indicating that patients had been “advised” and had provided consent.

It’s worth noting that the story has maintained traction for weeks; crises sparked by artificial intelligence are still new and noteworthy (though not hard to imagine). Sharp Healthcare has declined to comment beyond telling Medscape and other outlets that “patient safety and privacy remain top priorities for the organization,” and the story has continued to pick up steam without further input from Sharp itself.

This situation illustrates several key principles:

The threats may be constantly evolving, but the fundamentals of a good crisis response are not.

  • Every strong crisis communications plan starts with your operational readiness. Your plan should be regularly updated to anticipate threats in the current landscape and pressure-tested on an ongoing, disciplined basis.
  • This lawsuit is also a reminder that companies should regularly evaluate their crisis teams and ensure the right people are in the right seats – including subject matter experts on the latest tools and threats. Is your Chief Information Officer a core member of the team? Their perspective on emerging threats related to artificial intelligence and cybersecurity is critical to your preparedness and future response.

Artificial intelligence can lead to a legal grey area, but it’s rarely grey in the court of public opinion. If a tool makes a mistake – say, it misdiagnoses a patient – who is at fault? In the public’s eyes, the answer is always you. Your reputation is on the line, and the public doesn’t differentiate between the tool and the person using it.

You need to be ready to respond with speed, accuracy, empathy and authenticity – principles that are often in tension with one another, and I have no doubt they were for Sharp Healthcare as well. How do you express empathy for the people affected while remaining authentic to your brand and the work you do? How do you swiftly acknowledge an incident while you’re still waiting for the facts to unfold? Those are some of the central questions of an initial crisis response.

There’s usually a trade-off between reputational risk and litigation risk. With a pending lawsuit, companies are rightfully worried about the litigation risk. It’s easy to take the path Sharp did and decline to comment, but your reputation will take a hit.

There is no silver-bullet solution, and sometimes “no comment” is the best choice. But everyone should have a seat at the table. At Brian Communications, we work best when we work hand-in-hand with legal counsel, each of us presenting a well-informed perspective on potential risks to help guide the organization forward.

I don’t know the conversations that happened inside Sharp Healthcare, but in a situation like this, it’s worth considering whether you can step back and provide a broader frame for the organization without discussing the particulars of the litigation. There may be an opportunity to speak to the ways you’re using AI to help patients and communities, or to lean on an industry partner or trade organization to provide that larger context – so that you’re present in the story rather than simply absorbing a reputational hit without any offsetting benefit. This can be especially valuable if the story continues to gain traction, as the Sharp Healthcare lawsuit has.

Now is the time to invest in your “credibility bank.” When a crisis hits, you need to draw on the credibility, reputation and goodwill you’ve built. Take the time now to build it.

And with AI-driven crises on the horizon, be strategic in your storytelling: talk with your key stakeholders about how you use AI and the specific benefits it can deliver to patients and communities.

There’s so much unknown with new technology like AI, and people are right to have some trepidation around it – especially within medical contexts. But I’ve seen great success when healthcare organizations specifically zero in on the patient benefit from those tools: “This AI note-taking tool frees up your physician to spend more focused time with you, personally discussing your concerns or challenges.”

That’s the key: How can you take these complex tools and translate them into direct benefits for your most important audiences?

A few weeks before the lawsuit hit, Sharp told Fierce Healthcare, a trade publication, that its use of Abridge had a direct, positive impact on physician productivity. Sharp noted a 3.5% to 6% increase in work relative value units (wRVU) per encounter and said, “Our clinicians are reporting that they felt more able to see patients in need of urgent attention.”

Oregon’s Samaritan Health Services similarly reported that the tool led to an 18% increase in patients seen by clinicians – even though the organization itself had not changed its policies or encouraged clinicians using the technology to see more patients.

Now that’s the AI story that patients want to hear.