AI’s Increasing Role in the Health Care Delivery System: Key Legal Considerations
No personal services are more important than health care. The use of artificial intelligence (AI), which involves machines performing tasks that normally require human intelligence, is expanding the meaning of “personal.” Recent breakthroughs in generative AI, a type of AI capable of producing natural language, imagery, and audio, have made the technology increasingly accessible to health care providers.
As AI becomes progressively ingrained in the industry, providers have the opportunity to harness it to augment the existing care delivery system and, in some cases, potentially replace human processes altogether. This creates a pressing need to rapidly build regulatory frameworks across the industry to monitor and limit the use of AI.
In a recent Yale CEO Summit survey, 48% of CEOs indicated that AI will have its greatest effect on the health care industry, more than on any other industry. This alert analyzes how AI is already affecting health care, as well as some of the key legal considerations that may shape the future of generative AI tools.
1. The Emerging Regulatory Landscape
Government regulators and medical organizations are already setting guardrails to address the sometimes remarkably unreliable information provided by generative AI platforms. The American Medical Association recently addressed the issue of medical advice from generative AI chatbots such as ChatGPT and intends to collaborate with the Federal Trade Commission, the Food and Drug Administration, and others to mitigate medical misinformation generated by these tools. It also plans to propose state and federal regulations to address the subject.
Both the Department of Health and Human Services (HHS) and the Centers for Medicare and Medicaid Services (CMS) issued “AI Playbooks” to outline their positions on AI technology in accordance with the goals outlined in Executive Order 13960, titled “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” These playbooks are of increasing importance and essential reading for providers contemplating the use and effects of AI.
This government guidance comes as the health care industry grows more invested in AI technology. In 2019, the Mayo Clinic entered into a 10-year partnership with Google to bolster its use of cloud computing, data analytics, and machine learning. Four years later, the provider announced plans to use Google’s AI search technology to create network chat platforms offering tailored experiences for its physicians and patients. Other companies are in the early stages of building generative AI platforms targeted at the health care industry. For example, Glass Health is developing a platform built on a “large language model” (LLM), a deep learning model trained on voluminous data sets, to draft health care plans and suggest possible diagnoses based on short or incomplete medical record entries. Health care is one of the primary focus areas for such initiatives, and a minimal sketch of this pattern appears below. The HHS and CMS AI Playbooks should serve as key references during the development of these platforms and initiatives.
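To make the LLM pattern concrete, the sketch below shows, in simplified form, how a provider-side tool might wrap a language model to draft a care plan from a fragmentary record entry. The `complete` function is a placeholder for whatever vetted LLM service an organization actually uses, and the prompt and example entry are invented for illustration; this is not Glass Health’s or any vendor’s implementation.

```python
# Hypothetical sketch of the LLM pattern described above: drafting a
# preliminary care-plan outline from a short or incomplete record entry.
# `complete` is a stand-in for an organization's approved LLM API
# (assumption); any output is a draft for clinician review, not advice.

def complete(prompt: str) -> str:
    """Placeholder for a call to a vetted LLM service (not a real API)."""
    raise NotImplementedError("wire this to your organization's approved LLM")

def draft_care_plan(note: str) -> str:
    prompt = (
        "You are assisting a licensed physician. From the record entry below, "
        "draft a preliminary care-plan outline, list possible diagnoses to "
        "consider, and flag any missing information.\n\n"
        f"Record entry: {note}"
    )
    return complete(prompt)  # draft only; a clinician makes the final call

# Example of the kind of fragmentary entry such platforms are built for:
# draft_care_plan("54yo M, T2DM, c/o intermittent chest tightness x2wk")
```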
2. Offloading the Administrative Burden
One of AI’s attractions in the health care industry is its potential to streamline administrative processes, reduce operating expenses, and increase the amount of time a physician spends with each patient. Administrative expenses alone account for approximately 15% to 25% of total national health care expenditures in the United States. The American Academy of Family Physicians reports that the average primary care visit lasts approximately 18 minutes, of which 27% is dedicated to direct contact with the patient, while 49% is consumed by administrative tasks. Process automation of repetitive tasks, which does not involve AI, has long been part of the patient encounter, from appointment scheduling to revenue cycle management. Nevertheless, half of all medical errors in primary care are administrative errors. Deploying AI to initiate intelligent actions has the potential to reduce clerical errors and improve upon those currently automated processes.
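Working the cited figures through makes the imbalance concrete: administrative work consumes nearly twice the time of direct patient contact in a typical visit.

```python
# Worked arithmetic from the figures cited above: in an 18-minute visit,
# 27% is direct patient contact and 49% goes to administrative tasks.
visit_minutes = 18
direct_contact = visit_minutes * 0.27  # ~4.9 minutes with the patient
admin_time = visit_minutes * 0.49      # ~8.8 minutes on administration
print(f"{direct_contact:.1f} min of contact vs. {admin_time:.1f} min of admin")
```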
Health care entities are already taking advantage of this emerging technology to increase administrative efficiency. Transcription services can now be automated using natural language processing and speech recognition, reducing human error and physician burnout, a growing issue discussed in our prior alert. Health care systems are also applying algorithms to surgical scheduling, for example by analyzing individual surgeon data to optimize block scheduling of surgical suites, in some cases reducing physician overtime by 10% and increasing space utilization by 19%.
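The underlying idea behind data-driven block scheduling can be illustrated with a deliberately simplified sketch: allocate operating-room blocks in proportion to each surgeon’s historical utilized hours. The data and heuristic here are invented for illustration and do not reflect any vendor’s actual algorithm, which would account for case mix, turnover times, and many other constraints.

```python
# Simplified illustration (not any vendor's algorithm) of data-driven block
# scheduling: allocate OR blocks in proportion to each surgeon's historical
# utilized hours, reducing idle block time and overtime.
historical_hours = {"Surgeon A": 96.0, "Surgeon B": 64.0, "Surgeon C": 32.0}
blocks_per_week = 12  # 4-hour OR blocks available across suites

total_hours = sum(historical_hours.values())
allocation = {
    name: round(blocks_per_week * hours / total_hours)
    for name, hours in historical_hours.items()
}
print(allocation)  # {'Surgeon A': 6, 'Surgeon B': 4, 'Surgeon C': 2}
```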
3. Machine Empathy: Androids Dreaming of Electric Sheep
Can AI technology teach providers how to be more empathetic? While Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? imagined a dystopian future in which AI was viewed as devoid of empathy, today AI has the potential to guide physicians’ positive behavior toward their patients. Though currently unconventional, AI can prompt physicians to consider the impact their communications have on patients’ lives. With AI-generated guidance on how to broach difficult subjects, such as terminal illness or the death of a loved one, physicians may be able to interact more confidently and positively, building a deeper sense of trust with their patients. In the pre-AI world, positive communication behaviors were repeatedly shown to reduce the likelihood of litigation and lower health care costs.
A June 2023 study determined that ChatGPT was not only capable of formulating “thoughtful,” compassionate answers to patient questions and concerns, but that in some cases its answers were preferred over those of physicians. The University of California San Diego research study compared ChatGPT-generated responses to patient questions against responses from human physicians, covering everything from simple ailments to serious medical concerns. Feedback from participants indicated that the chatbot’s answers were, on average, rated seven times more empathetic than the human responses. While machine-manufactured empathy may be anxiety-inducing to many, AI need not replace physicians in conversations requiring clarity and compassion; rather, it can serve as a complement to those interactions.
5. “Dr. ChatGPT” – LLMs and a Call to Regulate
Generative AI chat tools may be useful to patients and physicians alike in locating and allocating resources, developing care plans, and diagnosing and treating medical conditions. However, as discussed above, the expanding use of these tools in the health care space raises a significant question: how can users be confident that these tools are providing reliable information? And is it appropriate, at this point, to use them for medical purposes?
Take, for example, the National Eating Disorders Association’s (NEDA) AI-powered LLM chatbot, “Tessa.” Tessa’s mission was to promote wellness and provide resources for people affected by eating disorders. Like other AI chatbots, however, Tessa was prone to “hallucinations,” techspeak for a chatbot’s inaccurate responses. NEDA is not alone in experiencing issues with generative AI-powered chat tools. False or misleading information, particularly medical information, leaves users vulnerable and potentially at risk. The extent of liability arising from chatbot medical advice, particularly when the chatbot is sponsored by a health care industry organization, remains to be seen, but the issue is undoubtedly within regulators’ sights.
5. Wearable Devices and Privacy Implications
From the invention of the mechanical pedometer in 1780 to current technology capable of detecting medical emergencies and chronic illnesses, wearable devices have become an integral part of today’s health care delivery system. The benefits of the data derived from these devices cannot be overstated: patient care decisions can now be made with greater speed and accuracy. The devices also deepen the physician-patient relationship through more frequent interactions with the provider or staff, driving patient engagement in the care process. These technologies, however, are rooted in algorithms driven by patient data ranging from demographic details to confidential medical information.
The federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) created national standards to protect patients’ protected health information (PHI) from use or disclosure without the patient’s consent or knowledge, absent certain exceptions. HIPAA and corresponding state laws are the first line of defense against threats related to the collection and transmission of sensitive PHI by wearable devices. The HHS Office of Information Security addressed these concerns in a September 2022 presentation, essential reading for health care data privacy and security experts, that calls for blanket multi-factor authentication, end-to-end encryption, and whole disk encryption to prevent the interception of PHI from wearable devices.
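As a minimal sketch of the encryption idea, the snippet below encrypts a wearable-device reading at the application layer before transmission, so intermediaries never see plaintext PHI. It assumes the open-source Python `cryptography` package and uses a symmetric key for simplicity; the field names are invented, and a production system would add key management, device authentication, and transport security on top.

```python
# Minimal sketch: encrypting a wearable-device reading before transmission
# so only a key holder can read the PHI. Assumes `pip install cryptography`;
# key provisioning and rotation are omitted for brevity.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, provisioned via secure key exchange
cipher = Fernet(key)

reading = {"patient_id": "example-123", "heart_rate_bpm": 112, "spo2_pct": 94}
token = cipher.encrypt(json.dumps(reading).encode("utf-8"))  # ciphertext sent

# Only a holder of the key (e.g., the provider's system) can decrypt.
plaintext = json.loads(cipher.decrypt(token).decode("utf-8"))
assert plaintext == reading
```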
Litigation over AI data collection and use has already begun. A recent class action lawsuit in the Northern District of California against OpenAI, the creator of ChatGPT, alleged, among other things, violation of users’ privacy rights based on data scraping of social media comments, chat logs, cookies, contact information, login credentials, and financial information. P.M. v. OpenAI LP, No. 3:23-cv-03199 (N.D. Cal. filed June 28, 2023). In this context, the ramifications of misusing PHI are significant.
6. Fraud, Waste, and Abuse Prevention
Companies are harnessing AI to detect and prevent fraud, waste, and abuse (FWA) in health care payment systems. MIT researchers report that insurers rank the return on investment in FWA detection systems among the highest of all AI investments, and one large health insurer reported saving $1 billion annually through AI-prevented FWA. However, at least one federal appellate court determined earlier this year that a company’s use of AI to provide prior authorization and utilization management services to Medicare Advantage and Medicaid managed care plans is subject to a level of qualitative review that may result in liability for the entity utilizing the AI. Doe 1 v. EviCore Healthcare MSI, LLC.
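A common technical approach behind such FWA systems is unsupervised anomaly detection over claims data, with flagged claims routed to human reviewers, which is precisely the kind of qualitative review the case law contemplates. The sketch below is illustrative only, not any insurer’s actual system: it uses scikit-learn’s IsolationForest on synthetic data, and the feature names are invented.

```python
# Illustrative sketch (not any insurer's actual system): flag anomalous
# claims for human review using an unsupervised model. Assumes scikit-learn;
# the synthetic data and feature names are invented for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: billed_amount, units_billed, distinct_procedures_per_visit
claims = rng.normal(loc=[200.0, 2.0, 1.5], scale=[50.0, 0.5, 0.5],
                    size=(1000, 3))
claims[:5] = [5000.0, 40.0, 12.0]  # seed a few implausible outliers

model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = model.predict(claims)  # -1 marks a claim worth manual review

print(f"{(flags == -1).sum()} of {len(claims)} claims flagged for review")
```

The key design point, consistent with the liability concern above, is that the model only prioritizes claims for review; the adverse determination itself remains a human decision.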
Conclusion
The effect of AI on health care will only continue to grow in scale and scope. New initiatives, and concomitant calls for regulation, are announced almost daily. Legislators and prominent health care industry voices have called for the creation of a new federal agency responsible for evaluating and licensing new AI technology. Others suggest creating a federal private right of action that would enable consumers to sue AI developers for harm resulting from the use of AI technology, as in the OpenAI case discussed above. A quickly enacted comprehensive framework seems unlikely, but the need for one is increasingly urgent.
Before utilizing generative AI tools, health care providers should consider whether the specific tools adhere to internal data security and confidentiality standards. As with any third-party software, security and data processing practices vary from tool to tool. Before implementing generative AI tools, organizations and their legal counsel should (a) carefully review the applicable terms of use, (b) determine whether the tool offers features that enhance data privacy and security, and (c) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements. These protections should be reinforced and augmented quickly, as threat proliferation remains a critical issue.
Additional research and writing by Meredith Gillespie, a 2023 summer associate in ArentFox Schiff LLP’s Washington, DC office and a law student at Wake Forest University School of Law.