How to tell patients about generative AI? Without regulation, U.S. hospitals look to each other for answers

Illustration by Mary Delaney

Health systems around the world are beginning to explore patient-facing applications of generative AI. The opportunities for bringing the technology into the clinical environment seem to be boundless—from billing to after-hours patient messaging. 

But what many hospitals haven’t quite figured out is how to tell the patients.

Conundrums like this one have led global powers like the EU and China to begin rolling out regulations specifically pertaining to generative AI, such as the new draft policy approved by the European Parliament earlier this month. 

In the U.S., the absence of formal regulation (at least so far) is leading hospitals and EHR vendors to look to each other for guidance on how to broach the subject with patients.

Why is discussing generative AI with patients so challenging for hospitals?

Many of these hospitals feel caught between a rock and a hard place.

They’re ethically pulled to be transparent with patients. But they’re also underprepared, in terms of staff capacity, to manage patient concerns and questions. One of the last things they want is for a technology meant to make clinical staff more efficient to instead create more work fielding endless confused patient questions.

“We’re not having the conversation with patients yet, because of the expectation that it will be more confusing than anything else,” said Brad Malin, director of Vanderbilt’s health data science center. Vanderbilt is considering establishing a team to field patient questions once it implements a generative AI-enabled medical note-taking solution it is currently exploring.

One of the hardest questions many of these hospitals are currently workshopping is one of workflow. In other words: at what point do patients need to be informed?

“We need to figure out under what conditions you need to tell somebody that generative AI is on the table,” Malin said. “If you change from a scribe to a computer, but the physician has the ability to look over the notes, do you need to tell them that’s ChatGPT?…Those types of questions I don’t think have been answered yet.”

Right now, these questions are still hypothetical for many health systems, which are testing generative AI tools in closed, non-patient-facing environments. Before bringing generative AI to the patient-facing level, many hospitals will opt to canvass patient focus groups, ethics panels, and community health workers on the question of communication.

EHRs lead the way

As with health information sharing consent forms, some responsibility is also falling to EHR giants like Epic and health information exchanges (HIEs) to lead the way.

“I think the collective wisdom of Epic, and all the other institutions working on this, taking a consensus and a collaborative view, and working with everybody, is where we want to land,” said Brent Lamm, Chief Information Officer at UNC Health.

Hospitals using Epic’s AI-based MyChart tool to triage non-emergency online queries fall back on the disclaimer Epic has built into the tool. Any responses sent to these questions—on topics ranging from elevated heart rates to prescription use—include an automatic disclaimer that the note was AI-generated but reviewed by human clinicians.

Our perspective: In a trust-based industry, words are everything

When it comes to explaining medical concepts and helping patients manage their care, words have a strong power to build or damage trust. This is true whether the words come from human clinicians, AI, or both. The stakes of these questions are high.

American hospitals and EHRs do have a history of adapting to massive changes, managing regulatory shifts to information exchange consent policies and interoperability standards. However, the industry has been historically slow with its uptake and trust of new technology.

Generative AI has proven an interesting exception to this trend, with excitement and big ideas around ChatGPT and generative AI’s massive advancements over the past few months spreading like wildfire. 

For many hospitals, their own trust in the technology is not the problem, per se. The question is more about patients’ trust in it. And for good reason: a majority of surveyed Americans report that they would not want medical AI involved in their care.

Of course, hospitals and health IT tools will not be the only entities needing to determine how generative AI use should be communicated to patients—at least, not for long. Eventually, regulation will come. That makes the attitude of the Office of the National Coordinator for Health Information Technology (ONC) toward generative AI worth watching.

ONC appears to view AI oversight as fitting into its Algorithm Transparency and Information Sharing (HTI-1) Proposed Rule, and it has published blogs assessing the risks of greater use of AI/ML in healthcare overall. These recent communications about AI may be a good sign for how hospitals and EHRs will need to address patient disclosure in the future: with great caution.


MedTech Pulse is a newsletter publication on innovation at the intersection of technology and medicine. Stay ahead with unique perspectives on industry news, the latest startup deals, infographics, and inspiring conversations.
