The California Attorney General’s Office (AG) unsurprisingly takes an expansive view of how the development, sale, and use of artificial intelligence technology (AI) in healthcare could lead to potential violations of existing California laws. In a recent legal advisory, the AG highlights specific areas healthcare organizations should focus on as they develop, train, improve, and deploy AI in connection with patients, plan members, and their data.

In particular, the advisory identifies AI risk hot spots that may trigger certain state consumer protection, anti-discrimination, and privacy/autonomy laws, as described further below.

California’s Health Consumer Protection Laws

  • California’s Unfair Competition Law (UCL) prohibits unlawful, unfair, or fraudulent business acts or practices. Violations of other federal, state, and local laws can also be actionable under the UCL. For example, the advisory notes that the UCL incorporates numerous laws that may apply to AI in a variety of contexts, such as the protections against false advertising and anticompetitive practices described in other AG guidance. As we recently discussed, amendments to the Knox-Keene Act and the California Insurance Code may restrict health plans from using AI to deny or modify coverage based on medical necessity. The AG could attempt to bring UCL claims asserting violations of such laws and regulations.
  • The advisory also states that AI must not replace or override licensed healthcare providers’ decisions, must be fair and non-discriminatory, and must be open to inspection and audit.

California’s Anti-Discrimination Laws

  • According to the advisory, California’s anti-discrimination laws prohibit any entity receiving state support from operating healthcare programs or activities, or otherwise conducting business, in a manner that discriminates based on protected classifications such as race, gender, disability, and others. The advisory states that “a disparate impact is permissible only if the covered entity can show that the AI system’s use is necessary for achieving a compelling, legitimate, and nondiscriminatory purpose, and supported by evidence that is not hypothetical or speculative.” We will be watching closely to see how the AG treats AI that significantly helps certain groups of patients but not others. Must a provider decline to use AI that benefits some groups more than others, or does noncompliance arise only when an organization deploys AI across multiple groups knowing that it performs better for some than for others? Blanket non-discrimination obligations in the healthcare industry could result in actual harm to individuals who would otherwise have been helped by AI with uneven performance across groups.
  • The AG asserts that compliance requires organizations to proactively design, acquire, and implement AI in ways that avoid perpetuating past discrimination.
  • The advisory also notes that California’s insurance laws prohibit discrimination in health insurance ratemaking, claims handling, and application reviews.

California’s Patient Privacy and Autonomy Laws

  • The advisory reminds healthcare organizations that they may be required to ensure that California patients’ rights to privacy and autonomy are not compromised.
  • California residents have certain rights to medical privacy, including those found in the Confidentiality of Medical Information Act and the Information Practices Act. These laws may apply to providers, health plans, and offerors of consumer-facing digital health solutions designed to store medical information and facilitate care management. There are also enhanced protections for certain sensitive health information, including mental health and reproductive and sexual health information.
  • The advisory suggests that healthcare providers may need to obtain informed consent to the use of AI for treatment purposes.

Laws, regulations, and guidance applicable to AI are rapidly evolving, just like the technology itself. For example, the federal Department of Health and Human Services’ Office for Civil Rights (HHS OCR) also recently issued guidance on the use of AI, which we describe in further detail here. Organizations in healthcare should monitor new legal developments closely, particularly with anticipated leadership changes at HHS.

Reed Smith will continue to follow these developments at both a state and federal level. If you have any questions on whether and how California laws may apply to your organization’s development or implementation of AI, please do not hesitate to reach out to the authors of this post.