Many healthcare leaders see enormous potential for artificial intelligence in healthcare, but the growing use of AI raises a host of legal questions.
Samuel Hodge, a professor of legal studies at Temple University, has been tackling these questions. He recently wrote an article about the legal implications of AI in healthcare in the Richmond Journal of Law & Technology.
In an interview with Chief Healthcare Executive, Hodge talked about the liability questions facing hospitals and doctors, and some of the questions health industry leaders should be considering.
“The law always lags behind medicine,” Hodge said. “This is an area that is a classic example.”
Hodge says he is a big supporter of the growing use of AI in medicine, calling it as potentially significant as the X-ray or CT scan. But he said the use of AI raises legal questions that have yet to be answered.
“It’s exciting, but AI has drawbacks and legal implications, because the law lags behind the development of the technology,” Hodge said.
“There are no recorded cases yet on AI in medicine, so the area of liability is open-ended, and hospital administrators and physicians are really going to have to watch the development of the field to stay abreast of the latest developments.”
Questions of responsibility
Recent studies suggest that artificial intelligence can help reshape healthcare, particularly by identifying at-risk patients before adverse events occur. Mayo Clinic researchers have found AI could help spot patients at risk of stroke or cognitive decline. Another Mayo Clinic study focused on using AI to identify complications in pregnant patients.
Hal Wolf, the president and chief executive officer of the Healthcare Information and Management Systems Society (HIMSS), told Chief Healthcare Executive in a recent interview that he sees health systems turning to AI to identify health risks earlier. “The applications for AI will help in predictive modeling of what to use, where to anticipate diagnoses, how do we maximize the resources in communities,” Wolf said.
Currently, fewer than 1 in 5 doctors use augmented intelligence regularly, but 2 in 5 plan to begin doing so in the next year, according to an American Medical Association survey. The AMA describes augmented intelligence as “a conceptualization of artificial intelligence that focuses on AI’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it.”
As doctors and health systems turn to AI more in treatment, Hodge said they will face new questions about liability. If AI contributes to an incorrect diagnosis that harms a patient, Hodge asks, who is responsible?
As a lawyer, he could see attorneys suing the physician, the health system, the software developer and the manufacturer of the AI.
“The question the court is going to have to resolve and deal with is, who’s responsible and to what extent? It’s issues that we’ve never had before,” Hodge said.
“This is going to come up with artificial intelligence, and nobody knows the answer at this point,” he said. “This is all going to have to play out with litigation.”
“There are several issues that hospital administrators should think about,” Hodge said. “Number one, most physicians don’t buy the computers that they use. Hospitals do. Therefore, they are going to end up being vicariously liable for the actions of the physicians, because they supplied the computer that’s being used.”
Changing standard of care
In addition, health systems and doctors could also see new definitions of the standards of care in malpractice cases.
Typically, a doctor in a suburban health system would be judged by the standard of care in that area. The suburban physician in a smaller facility wouldn’t necessarily be compared to a surgeon in a major urban teaching hospital, Hodge said.
As artificial intelligence is used more in treatment and becomes more widely available, the standard of care may change, he said.
“Previously, in a malpractice case, the standard of care is the average physician in the locale where the physician practices,” Hodge said. “With AI technology, the duty of care may be elevated to a higher standard, and it may be then a national standard, because everybody is going to have access to the same equipment. So that standard of care may be made higher.”
Plus, as AI is used more often, doctors could be held to higher standards in the future.
“The issue is, what may not be malpractice today, may be malpractice a year from now,” Hodge said.
Even if a doctor is utilizing artificial intelligence in a diagnosis, Hodge said, “it doesn’t let the physician off the hook.”
“Doctors will be able to render conclusions much quicker,” Hodge said. “Physicians have to realize it’s a double-edged sword to the extent they may be held to higher standards of care in the future, because they have access to this entire database that they didn’t have before.
“Bottom line: The physician is the one who is responsible for the patient’s care, regardless of the use of AI,” he said. “It’s only a tool. It’s not a substitute for the doctor.”
Doctors could also confront issues of informed consent with patients if they are using AI in developing a diagnosis.
Some patients may not welcome the use of AI, even if it could lead to a more accurate diagnosis, Hodge said.
“Any time medical treatment is provided, the physician has to inform the patient of the things that are relevant,” Hodge said. “AI in medicine creates added issues. For instance, do you have to tell the patient you used AI to inform the diagnosis? If the answer is yes, how much information do you have to tell the patient about the use of the AI? Do you have to tell them the success rate involving AI in making diagnoses?
“One of the things that the research suggests is that if you disclose the use of AI in medicine, it may encourage more arguments between physicians and patients,” Hodge said.
Liability for software manufacturers
Aside from questions for hospitals, health systems and doctors in liability, Hodge said it’s unclear what exposure software developers and manufacturers would have in lawsuits related to AI.
Software manufacturers could argue, for instance, that the software was fine until it was modified by the health system over the course of time.
“There are defenses that a manufacturer or software developer will use, and that is the technology is designed to evolve,” Hodge said. “So I give you the basic software, but it’s designed so that the physician or healthcare provider will supplement it with patient records, diagnostic imaging, so it’s designed to evolve. Therefore the argument is going to be, when the machine was provided, it was not defective. It became defective by the materials that were uploaded by the healthcare provider at a later date.”
Under product liability law, software manufacturers may not be liable, Hodge said. While consumers can sue an automobile company for a defective car if the brakes don’t work, it’s probably going to be more difficult to sue a software company over a botched diagnosis.
“Traditionally, the courts have said software is not a product,” Hodge said. “Therefore you won’t be able to sue under products liability theory. That’s an issue you’re going to have.”
Despite raising concerns about the legal implications of the growing use of AI, Hodge views artificial intelligence as a tool to improve healthcare.
“I’m very excited about the development of AI in medicine,” Hodge said. “I really believe it’s the wave of the future. It’s going to allow physicians no matter where they are located in the United States, in a remote small location or a metropolitan city, they’re all going to have equal access to a database that will assist them with diagnosing patients and rendering proper medical care. It’s going to be a real boon to the industry.”