*This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.

Health care entities maintain compliance programs to comply with the myriad changing laws and regulations that govern the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) remain limited and in the early stages of development, current law and pending legislation offer a forecast of standards that may become applicable to AI. Health care entities may want to begin monitoring the evolving guidance applicable to AI and integrating AI standards into their compliance programs to manage and minimize this emerging area of legal risk.

Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) amplifies the key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force to regulate AI in accordance with the EO’s principles by 2025. Health care entities would be well served to monitor federal priorities and begin formally integrating AI standards into their corporate compliance plans.

  • Transparency: The principle of transparency refers to an AI user’s ability to understand the technology’s uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect, process, and predict data. The EO also envisions labeling requirements that will flag AI-generated content for consumers.
  • Governance: Governance applies to an organization’s control over deployed AI tools. Internal control mechanisms, such as evaluations, policies, and institutional procedures, may ensure continuous oversight throughout the AI’s life cycle. The EO also emphasizes the importance of human oversight: responsibility for AI implementation, review, and maintenance should be clearly identified and assigned to appropriate employees and specialists.
  • Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will also be important to ensure that internal and external stakeholders have equitable opportunities to participate in the development and use of AI.

National Institute of Standards and Technology: Risk Management Framework

The National Institute of Standards and Technology (NIST) published a Risk Management Framework for AI (RMF) in 2023. Similar to the EO, the RMF outlines broad goals (i.e., Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations that implement EO principles to help organizations proactively mitigate legal risk under future laws and regulations. For example, a health care organization may uphold AI governance and non-discrimination by deploying a diverse, AI-trained compliance team.

This week we are pleased to have a guest post by Robinson+Cole Business Transaction Group lawyer Tiange (Tim) Chen.

On February 28, 2024, the Justice Department published an Advanced Notice of Proposed Rulemaking (ANPRM) to seek public comments on the establishment of a new regulatory regime to restrict U.S. persons from transferring bulk sensitive

This post was co-authored by Yelena Greenberg, a member of Robinson+Cole’s Health Law Group.

On February 8, 2024, the U.S. Department of Health and Human Services (HHS) issued a final rule (Final Rule) updating federal “Part 2” regulations to more closely align the requirements applicable to substance use disorder (SUD) treatment records with

On December 13, 2023, the Office of the National Coordinator for Health Information Technology (ONC) issued its final rule entitled “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” and known as “HTI-1” (Final Rule). Among other issues addressed in the Final Rule, ONC revised the information blocking rules to add

This week we are pleased to have a guest post by Robinson+Cole Artificial Intelligence Team patent agent Daniel J. Lass.

After previously finding that the Biden White House and the FBI likely violated First Amendment free speech protections for some users of online social media platforms, the Fifth Circuit expanded its ruling to find

This week we are pleased to have a guest post by Robinson+Cole Artificial Intelligence Team patent agent Daniel J. Lass.

As artificial intelligence (AI) becomes better and more prevalent, people will increasingly use its computing power to supplement or replace human creativity. Film director Gareth Edwards attempted to do just that  in his new

On April 12, 2023, the U.S. Department of Health & Human Services (HHS) released a Notice of Proposed Rulemaking (Proposed Rule) that seeks to enhance safeguards of reproductive health care information through changes to the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. The proposal is intended to align with President Biden’s Executive Order

On November 28, 2022, the Department of Health and Human Services (HHS) issued a proposed rule to modify the confidentiality protections of Substance Use Disorder (SUD) patient treatment records under 42 CFR Part 2 (Part 2) to implement statutory amendments passed under Section 3221 of the Coronavirus Aid, Relief, and Economic Security (CARES) Act (42

On June 16, and then on July 6, 2021, Connecticut Governor Ned Lamont signed into law a pair of bills that together address privacy and cybersecurity in the state. Cybersecurity risks continue to pose a significant threat to businesses and the integrity of private information. Connecticut joins other states in revisiting its data breach reporting laws to strengthen reporting requirements, and to protect businesses that implemented cybersecurity safeguards but nonetheless suffered a breach from certain damages in resulting litigation.

Public Act 21-59 “An Act Concerning Data Privacy Breaches” (PA 21-59) modifies Connecticut law addressing data privacy breaches to expand the types of information that are protected in the event of a breach, to shorten the timeframe for reporting a breach, to clarify applicability of the law to anyone who owns, licenses, or maintains computerized data that includes “personal information,” and to create an exception for entities that report breaches in accordance with HIPAA. Public Act 21-119 “An Act Incentivizing the Adoption of Cybersecurity Standards for Businesses” (PA 21-119) correspondingly establishes statutory protection from punitive damages in a tort action alleging that inadequate cybersecurity controls resulted in a data breach against an entity covered by the law if the entity maintained a written cybersecurity program conforming to industry standards (as set forth in PA 21-119).

Both laws take effect October 1, 2021.