
Legal considerations for artificial intelligence in radiology and cardiology

In an effort to speed workflow, improve patient care and augment an increasingly shorthanded clinical workforce, artificial intelligence (AI) is starting to see wider adoption across healthcare. This is especially true in medical imaging, which accounts for the majority of U.S. Food and Drug Administration (FDA) AI clearances. But there are questions about who is liable if an AI application fails and the failure results in a malpractice lawsuit.

This was largely an academic discussion a few years ago, but the number of FDA-cleared algorithms has exploded since then. More than 300 of these AI applications have been cleared since 2019 alone, and that number is expected to keep growing rapidly.

As of January 2023, there are more than 520 FDA-cleared AI algorithms, and the vast majority of them, 396, are for radiology. The radiology algorithms range from automated detection of disease across many subspecialties to advanced visualization, PACS workflow tools and scanner-specific applications.

Cardiology, with 58 algorithms, is the second-largest group of FDA-cleared AI, many of them imaging specific. There are also 18 radiology algorithms that are specific to cardiac imaging, and more than 20 others cover CT and ultrasound systems or their reconstruction algorithms that are also used in cardiology.

From a healthcare IT perspective, this adds new layers of complexity, with more places where a glitch can directly impact patient care. So the big question is: who gets sued when AI fails?

“The answer is everyone will get sued, it’s a big hairy issue,” explained Brent Savoie, MD, JD, vice chair for radiology informatics and section chief of cardiovascular imaging at Vanderbilt University. He spoke on this topic at the Society of Cardiovascular Computed Tomography (SCCT) 2022 meeting.

The need for healthcare AI regulatory oversight 

He said this is mainly because there is not a good regulatory framework for AI in the U.S. at this time. This means there is no guidance on how to deploy the technology safely, and there are no clear protections from lawsuits that say “if you do this” you are not at risk, he explained.

“When you don’t have regulations, your default is tort law, and that is obviously a scary scenario for anyone looking to implement AI technologies, or who may have already implemented it,” Savoie said. “I think tort law is a pretty bad mechanism to ensure patient safety.”

He said regulatory agencies and quasi-regulatory groups like accreditation bodies offer a more predictable pathway that is also more flexible than lawsuits.

“Regulations can really help create standards that people can follow to create an environment that ensures patient safety and reduces error, much more efficiently than tort can do,” Savoie explained. 

He outlined how the physician could share responsibility when AI fails, since they are ultimately responsible for diagnosis and reporting. The AI vendor could be liable if the algorithm has a bug, a built-in bias or missing information. The IT department and hospital could be liable if the AI was not updated or a glitch was caused by interactions with other software on the system. Savoie said the most likely result is that everyone gets sued and it will be up to the courts to sort it out.

IT needs to understand AI is not just software; it has clinical impact

He said IT teams also need to understand that this software is classified as a medical device, so more care needs to be taken with AI from a liability standpoint; it is not just another PACS or advanced visualization program.

“One of the more frightening things is when you start to look at all the places where error can occur, and this is across all medical devices, there are a lot of points of potential failure. I think most people focus on what do we do when the algorithm breaks, but what they have not focused much on is the other processes. Did you install it correctly? Are you maintaining your servers? Do you have a process to even make sure the AI is still turned on?” he said.

Outside the algorithm, there is a software wrapper that can break just as easily as Outlook does for most people. He said alerts can go off and integrations can fail. So, Savoie said, there need to be ways for vendors and the implementing institutions to know what to monitor and test.
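
To make that concrete, the kind of check being described could be as simple as a scheduled script that confirms the AI service is reachable, enabled and still producing results. The Python sketch below is a minimal illustration using the requests library; the health endpoint, JSON fields and freshness threshold are hypothetical placeholders, not any vendor's actual interface.

```python
import datetime

import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint and freshness threshold for illustration only; a real
# deployment would use whatever health/status interface the AI vendor exposes.
HEALTH_URL = "http://ai-inference.local/health"
MAX_SILENCE = datetime.timedelta(hours=1)


def check_ai_service() -> list[str]:
    """Return a list of problems found; an empty list means the service looks healthy."""
    try:
        resp = requests.get(HEALTH_URL, timeout=10)
        resp.raise_for_status()
        status = resp.json()
    except requests.RequestException as exc:
        return [f"AI service unreachable: {exc}"]

    problems = []

    # Is the algorithm actually enabled, or has it been silently switched off?
    if not status.get("enabled", False):
        problems.append("AI algorithm is installed but not enabled")

    # Has it produced a result recently, or has an integration quietly failed?
    # Assumes the service reports a timezone-aware ISO 8601 timestamp.
    last_result = status.get("last_result_time")
    if last_result is None:
        problems.append("service reports no completed results")
    else:
        age = datetime.datetime.now(datetime.timezone.utc) - datetime.datetime.fromisoformat(last_result)
        if age > MAX_SILENCE:
            problems.append(f"no results produced in {age}; check integrations")

    return problems


if __name__ == "__main__":
    for issue in check_ai_service():
        print("ALERT:", issue)  # in practice this would page the imaging IT team
```

In practice, a check like this would be folded into whatever monitoring the institution already runs rather than living as a standalone script.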

At the technologist level, scanner settings might be altered in ways that impact the accuracy of the AI without anyone realizing it. The radiologist or cardiologist reading the study may not understand the outputs being created by the AI.
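
One practical safeguard at this step is to compare each study's acquisition parameters against the ranges the algorithm was validated on before images are routed to the AI. The Python sketch below assumes the pydicom library and uses made-up acceptance ranges purely for illustration; real thresholds would come from the vendor's labeling and local protocols.

```python
import pydicom  # third-party DICOM parser: pip install pydicom

# Made-up acceptance ranges for illustration only; real values would come from
# the AI vendor's validated acquisition protocol and local imaging guidelines.
EXPECTED_RANGES = {
    "SliceThickness": (0.5, 1.5),  # mm
    "KVP": (100.0, 120.0),         # kV
}


def acquisition_warnings(dicom_path: str) -> list[str]:
    """Return warnings for parameters outside the range the AI was validated on."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)  # header only, no pixel data
    warnings = []
    for keyword, (low, high) in EXPECTED_RANGES.items():
        value = getattr(ds, keyword, None)
        if value is None:
            warnings.append(f"{keyword} missing from DICOM header")
        elif not (low <= float(value) <= high):
            warnings.append(f"{keyword}={value} outside validated range {low}-{high}")
    return warnings


# Example: flag a study before routing it to the AI algorithm.
# for w in acquisition_warnings("series001/img0001.dcm"):
#     print("WARNING:", w)
```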

“At all these points there are sources of error, so if you are suing someone because something went wrong related to the AI algorithm, the easiest thing to do is just name everybody and then sort it out,” Savoie said. “It is sort of a worst-case scenario and hopefully will not happen, but if there is no structured legal environment, that’s what it devolves to.”

He said laws might even be similar between states, but the juries that hear these cases might be significantly different from place to place. So if there are no specific regulatory protections for an AI vendor or for healthcare groups using the AI, they may think twice about implementing the AI or doing business in that location.

“This is all kind of a downer message I feel like I am giving, but I really am optimistic about the future of AI, and I think creating these processes is essential for preserving that future,” Savoie explained. “If you are at an institution and install something and you face an adverse outcome and it comes to litigation, it’s going to be really difficult to get approval for a budget request for the next application.”
