The U.S. Department of Health and Human Services (“HHS”), through its Office for Civil Rights (“OCR”), recently issued a “Dear Colleague” letter, Ensuring Nondiscrimination Through the Use of Artificial Intelligence (“AI”) and Other Emerging Technologies, which emphasizes the importance of fairness and equity when AI is used in patient care decision support tools (e.g., clinical algorithms and predictive analytics) in connection with certain health programs and activities. While the letter does not carry the force of law, it reflects HHS’s continuing views on the use of AI in health care. (See our prior post about another HHS publication that organizations can use as guidance.) Specifically, the letter emphasizes the importance of complying with the federal nondiscrimination requirements of Section 1557 of the Affordable Care Act (“Section 1557”).

OCR’s letter confirms that it will enforce Section 1557’s nondiscrimination protections as they apply to the use of AI (effective July 5, 2024) and that it will require organizations that participate in certain regulated programs and activities to identify and mitigate risks of unlawful discrimination when using AI (effective May 1, 2025). We highlight OCR’s guidance on these two Section 1557 enforcement objectives below.

Preventing Unlawful Discrimination

According to the letter, regulated organizations must not discriminate on the basis of race, color, national origin, sex, age, or disability when using patient care decision support tools.  OCR included the following example:

  • Example: A hospital’s advanced emergency triage system must not discriminate against individuals by failing to consider how a patient’s disability or age could affect the assessment factors.

Identifying and Mitigating Risks of Unlawful Discrimination

According to the letter, regulated organizations have an ongoing duty under Section 1557 to make reasonable efforts to identify and mitigate the risk of discrimination from AI tools that use race, color, national origin, sex, age, or disability as input variables. OCR included the following examples:

  • Example: Regulated organizations can adopt a policy to analyze whether they use any patient care decision support tools, obtain information from the vendors that provide those tools about the inputs and factors used for decision-making, and train staff to watch for potentially discriminatory output from the tools.
  • Example: If a clinical decision support tool is known to under-refer patients of a certain race to certain types of specialists, and the regulated organization continues to use the tool, the organization could implement measures for staff to examine and adjust the tool’s results to avoid discrimination (see the sketch after this list).
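
To make the second example concrete, below is a minimal, illustrative sketch of the kind of disparity check a compliance team might run over a tool’s logged output before routing flagged results to human review. The record fields (`race`, `referred`), the `flag_disparities` helper, and the four-fifths-style threshold are all hypothetical assumptions for illustration; OCR prescribes no particular method or cutoff.

```python
from collections import defaultdict

# Hypothetical record format (assumption): each record notes a patient's
# race category and whether the tool recommended a specialist referral.
records = [
    {"race": "Group A", "referred": True},
    {"race": "Group A", "referred": False},
    {"race": "Group B", "referred": False},
    {"race": "Group B", "referred": False},
]

def referral_rates(records):
    """Compute the specialist referral rate for each race category."""
    counts = defaultdict(lambda: [0, 0])  # race -> [referrals, total]
    for record in records:
        counts[record["race"]][0] += int(record["referred"])
        counts[record["race"]][1] += 1
    return {race: referred / total for race, (referred, total) in counts.items()}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest
    group's rate (a four-fifths-style screen; the cutoff is illustrative,
    not an OCR standard)."""
    top = max(rates.values())
    return [race for race, rate in rates.items() if top and rate < threshold * top]

rates = referral_rates(records)
for race in flag_disparities(rates):
    print(f"{race}: referral rate {rates[race]:.0%} - route results to human review")
```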

How Regulated Entities Can Identify, Prevent, and Mitigate Risks of Unlawful Discrimination

OCR provided some specific ideas on what a regulated organization’s reasonable efforts to identify and mitigate the risk of unlawful discrimination may include:

  • Review and Research: Examine OCR’s discussion of risks in the Section 1557 rulemaking, published research, and vendor-provided information about the input variables used in patient care decision support tools. Monitor registries that focus on AI safety.
  • Implement Policies and Procedures: Establish written policies and procedures governing AI tool use, monitor AI tools’ impacts, and develop ways to address discrimination complaints arising from AI tool use. Obtain necessary information from AI vendors, including the characteristics of the training data.
  • Training and Auditing: Train staff on the proper use of AI tools and audit the tools’ performance in real-world scenarios to ensure compliance.
  • Allow Human Override of AI Tool Decisions: Use tools that allow qualified human staff to override and report discriminatory decisions through mechanisms such as “human in the loop” AI review, paired with the real-world auditing and monitoring described above (a sketch of one such mechanism follows this list).
  • Patient Disclosure: Disclose to patients the use of AI in patient care decision support tools that pose a risk of discrimination.
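
As one illustration of the human-override idea, here is a minimal “human in the loop” sketch in which the tool’s recommendation is never final until a qualified reviewer accepts or overrides it, and every decision is logged for later auditing. The `run_tool` stub, the `decide` interface, and the JSONL audit log are hypothetical placeholders, not a real vendor API or an OCR-specified design.

```python
import datetime
import json

AUDIT_LOG = "decision_audit.jsonl"  # hypothetical audit-log location

def run_tool(patient):
    """Placeholder for the vendor tool's scoring call (assumption)."""
    return {"recommendation": "defer referral", "score": 0.42}

def decide(patient, reviewer_id, override=None, reason=None):
    """Return the final decision; a qualified human reviewer can override
    the tool, and every decision is appended to an audit log."""
    tool_output = run_tool(patient)
    final = override if override is not None else tool_output["recommendation"]
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reviewer": reviewer_id,
        "tool_recommendation": tool_output["recommendation"],
        "final_decision": final,
        "overridden": override is not None,
        "override_reason": reason,
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return final

# A qualified reviewer overriding a recommendation they judge discriminatory:
decide({"id": 1}, reviewer_id="rn-104", override="refer to specialist",
       reason="tool failed to account for a disability-related factor")
```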

OCR’s letter underscores the importance of balancing the benefits and potential harms of using AI in health care. If you have questions about effective compliance policies for your organization, or would like more information on how your organization can analyze and potentially mitigate discrimination resulting from emerging technologies, please reach out to the authors of this post.