Introduction to AI in Medicine
Artificial intelligence (AI) has begun to permeate many sectors, including healthcare. AI, broadly defined as a technical system capable of performing intricate tasks that historically required human intelligence, such as reasoning, decision-making, and problem-solving, has the potential to revolutionize patient care. One of the most prevalent forms of AI in healthcare is machine learning (ML), a subset of AI that automatically learns from data to identify patterns and make predictions. While the benefits AI offers to patient care are widely acknowledged, its legal implications are not yet fully understood. As AI becomes standard in medical practice, those legal consequences grow increasingly relevant.
Benefits to Healthcare
AI offers notable benefits to the healthcare field. It can analyze patient data and medical images, such as X-rays, CT scans, and MRIs, more quickly than humans and often with greater accuracy. It can generate comprehensive summaries of patients’ medical histories and their interactions with medical staff, including their symptoms, diagnoses, and treatments. It can also provide statistical insight into the conditions that could account for all or most of a patient’s symptoms, recognize complex patterns, and produce quantitative assessments. In essence, AI reduces the time, energy, and resources spent on individualized research by drawing on a variety of sources and returning information within seconds. These advantages can enhance patient care by facilitating earlier diagnoses, promoting preventative care, and enabling personalized treatment strategies, all of which contribute to greater efficiency and better decision-making in healthcare settings.
Disadvantages of AI
The allure of AI is also its most dangerous facet, because it invites over-reliance on the technology. The more helpful AI proves, the more one may be inclined to depend on it. Physicians may fall into the trap of assuming that results generated by a machine must be inherently accurate, and may come to rely solely on AI for diagnoses and treatment plans while neglecting their own clinical judgment. It is imperative for healthcare providers to recognize that AI is not flawless. It is a young technology that has not been perfected; like any other application, its algorithms are susceptible to errors and may produce inaccurate results that harm patients.
The use of AI in healthcare also raises questions about transparency. Should doctors inform patients that they are using AI to assist in their care? Should they ask for permission first? These questions carry significant weight, as their answers could impact liability and lead to medical malpractice lawsuits.
Current and Expected Use of AI
In June 2023, Tinglong Dai and Shubhranshu Singh, faculty members at Johns Hopkins University, developed a theoretical model exploring how physicians make decisions when using AI tools.1 Their research found that physicians tend to avoid using AI in high-uncertainty cases, fearing liability if they deviate from an AI recommendation and the patient suffers harm. The model also showed that physicians are more likely to consult AI when they expect it to agree with their own assessment. Under these conditions, AI may provide little to no benefit even when it is employed. As the technology matures, however, this dynamic may well change.
ChatGPT
ChatGPT is an interactive AI chatbot that has recently gained widespread prominence and is easily accessible to the public. Physicians, nurses, and other healthcare professionals can use the tool to ask questions, further their medical knowledge, and learn about updates and new developments in their fields. Notably, ChatGPT has passed the United States Medical Licensing Exam (USMLE) with roughly 60% accuracy, a feat medical students achieve only after years of study.2
Although the program is appealing at first glance, serious legal repercussions may follow from its use. First, for non-paying users, the program’s data is only current through 2021, so its answers may be outdated. Second, it has difficulty distinguishing reliable from unreliable sources and can present inaccurate, untrustworthy information. Third, the quality and nature of the data underlying ChatGPT’s responses may be compromised, particularly because the model is not specifically trained on healthcare data. Finally, it does not cite its sources, limiting users’ ability to verify the information. Healthcare providers must recognize these limitations to prevent misdiagnoses, unreliable medical education, and potential harm to patients.
Implications for Physicians
Although AI offers significant benefits in healthcare, it also raises liability concerns for physicians. Physicians are especially vulnerable because, as licensed healthcare providers, they bear ultimate responsibility for patient care. In particular, physicians may face legal challenges if they fail to exercise independent judgment in diagnosing patients and recommending treatment. The risk is greatest where AI errs in analyzing or interpreting data, or where an AI-generated recommendation conflicts with the physician’s own. In those situations, the physician faces a dilemma: follow the AI’s recommendation, and if it proves wrong, risk suit for failing to exercise independent judgment; reject the recommendation, and if it proves correct, risk suit for disregarding AI. It is a double-edged sword. However AI is used, physicians may face liability for using it, for failing to use it, or even for failing to use the best-equipped AI program.
How to Avoid Liability
Physicians owe their patients a duty to provide competent medical care, which includes the responsibility to apply independent judgment and critically assess any AI-generated output. They therefore bear accountability for medical malpractice resulting from AI’s incorrect diagnoses or treatment recommendations. Physicians must rely on their own clinical expertise when making decisions about patient care and should not allow AI to dictate those decisions. Until legislatures or courts resolve questions of transparency and consent, physicians should exercise caution, relying on their own judgment and treating AI as a supplementary tool appropriate to the patient’s medical issues.
Lawsuits
Although no lawsuits have yet been filed directly against healthcare providers over their use of AI, AI-related medical malpractice litigation is reasonably foreseeable. There are, for example, pending class actions against insurers Humana and UnitedHealthcare for allegedly using AI to override doctors’ recommendations and deny care to elderly patients under their Medicare Advantage plans.3 4
Precedent also appears to supply a legal basis for such claims: even where physician error is less apparent, courts have held that doctors may be liable when patients receive substandard care due to factors such as incomplete intake forms or reliance on erroneous medical literature.5
If you would like more information about AI’s impact on healthcare, please contact attorney Paul Cardinale with the Medical Defense Law Group at paul.cardinale@med-defenselaw.com or 916-244-9110.
____
1 Tinglong Dai & Shubhranshu Singh, Malpractice Concerns Impact Physician Decisions to Consult AI, JOHNS HOPKINS CAREY BUSINESS SCHOOL (June 15, 2023).
2 Jennifer Lubell, ChatGPT Passed the USMLE. What Does It Mean for Med Ed?, AMERICAN MEDICAL ASSOCIATION (March 3, 2023).
3 Joanne Barrows, et al. v. Humana, Inc. (Case No. 3:23-mc-99999; W.D. Ky. 2023).
4 The Estate of Gene B. Lokken, et al. v. UnitedHealth Group, Inc. (Case No. 0:23-cv-03514; D. Minn. 2023).
5 Jennifer Lubell, ChatGPT Passed the USMLE. What Does It Mean for Med Ed?, AMERICAN MEDICAL ASSOCIATION (March 3, 2023).