*This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.

Health care entities maintain compliance programs to keep pace with the myriad changing laws and regulations that apply to the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) are limited and still in the early stages of development, current law and pending legislation offer a forecast of the standards that may become applicable to AI. Health care entities may want to begin monitoring the evolving guidance applicable to AI and integrating AI standards into their compliance programs to manage and minimize this emerging area of legal risk.

Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) amplifies the current key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force to regulate AI in accordance with the EO’s principles by 2025. Health care entities would be well-served to monitor federal priorities and begin to formally integrate AI standards into their corporate compliance plans.

  • Transparency: The principle of transparency refers to an AI user’s ability to understand the technology’s uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect and process data and generate predictions. The EO also envisions labeling requirements that will flag AI-generated content for consumers.
  • Governance: Governance applies to an organization’s control over deployed AI tools. Internal controls, such as evaluations, policies, and review bodies, can help ensure continuous oversight throughout the AI’s life cycle. The EO also emphasizes the importance of human oversight: responsibility for AI implementation, review, and maintenance should be clearly identified and assigned to appropriate employees and specialists.
  • Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will also be important to give internal and external stakeholders equitable participation in the development and use of AI.

National Institute of Standards and Technology: Risk Management Framework

The National Institute of Standards and Technology (NIST) published a Risk Management Framework for AI (RMF) in 2023. Similar to the EO, the RMF outlines broad goals (i.e., Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations that implement EO principles and assist organizations in proactively mitigating legal risk under future laws and regulations. For example, a health care organization may uphold AI governance and non-discrimination principles by deploying a diverse, AI-trained compliance team.

The Office of Inspector General (OIG) recently announced the creation of a cybersecurity team focused on combating threats within the Department of Health & Human Services (HHS) and the health care industry. The team includes auditors, evaluators, investigators, and attorneys with experience in cybersecurity matters, and its work is intended to build on the cybersecurity priorities the OIG has previously identified in its annual assessments and reports.

The U.S. Department of Transportation’s Office of Inspector General (OIG) released its recent audit of the Federal Aviation Administration (FAA), “FAA Lacks A Risk-Based Oversight Process For Civil Unmanned Aircraft Systems,” stating that while the FAA has taken continuous steps to advance the integration of unmanned aircraft systems (UAS or “drones”) into the national airspace, the FAA is nevertheless taking a “reactive approach to UAS oversight.” The audit reveals that while the FAA has taken steps to identify and detect UAS operations and to increase awareness and education of operators (for example, by requiring registration), “the agency has taken action primarily after incidents occur.”

Additionally, the OIG criticized the FAA’s processes for UAS operations because they do not verify that operators actually meet or understand the conditions and limitations in their exemptions. Further, according to the OIG, while the FAA has taken steps to advance UAS technology, it has yet to establish a risk-based safety oversight process or a robust data reporting and tracking system for UAS activity.

On November 30, 2016, the U.S. House of Representatives voted strongly in favor of the 21st Century Cures Act (the Act), an expansive health bill that addresses the discovery and development of new medical therapies as well as the delivery of health care treatment by providers.

In 2015, the House had previously approved an earlier version

The U.S. Department of Health & Human Services (HHS) Office of Inspector General (OIG) recently released a compendium (Compendium) of its top unimplemented recommendations. The Compendium comprises 25 unimplemented past OIG recommendations that the OIG believes could have a positive impact on HHS programs in terms of cost savings and/or quality improvements. The Compendium’s recommendations