Increasingly, companies use AI to evaluate job applications and make interviewing or hiring decisions. Government contractors that use artificial intelligence to evaluate job applications, however, should ensure that their use of AI not only complies with anti-discrimination laws but also fulfills their contractual obligations. Federal contractors with contracts of $10,000 or more are subject to Executive Order…
USPTO Issues Guidance on Use of AI Based Tools
This week we are pleased to have a guest post by Robinson+Cole Artificial Intelligence Team patent agent Daniel J. Lass and counsel Kyle G. Hepner.
The U.S. Patent and Trademark Office (USPTO) issued guidance on the use of AI-based tools to prepare and prosecute patent and trademark applications. This announcement supplements the previous guidance issued…
Joint Guidance Published by Five Eyes on Deploying AI Systems Securely
On April 15, 2024, the National Security Agency’s Artificial Intelligence Security Center published guidance on “Deploying AI Systems Securely,” together with CISA, the FBI, the Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre and the UK’s National Cyber Security Centre (a/k/a the Five Eyes).
The Cybersecurity…
The State of AI Governance and Diversity: Takeaways from the AI Index Report
The latest edition of the AI Index Report from Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) provides a comprehensive look at artificial intelligence (AI) policy, regulation, and diversity trends across the globe.
The number of AI-related regulations enacted by U.S. federal agencies like the FDA, EPA, and FCC has skyrocketed from just 1 in 2016 to…
HC3 Warns Health Sector About Social Engineering Attacks Against IT Help Desks
The Health Sector Cybersecurity Coordination Center (HC3) recently issued an Alert warning of a rise in “threat actors employing advanced social engineering tactics to target IT help desks in the health sector and gain initial access to target organizations.”
The social engineering scheme starts with a telephone call to the IT help desk…
Forecasting the Integration of AI into Health Care Compliance Programs
*This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.
Health care entities maintain compliance programs to keep pace with the myriad, evolving laws and regulations that apply to the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) are limited and still in the early stages of development, current law and pending legislation offer a forecast of the standards that may become applicable to AI. Health care entities may want to begin monitoring this evolving guidance and integrating AI standards into their compliance programs in order to manage and minimize this emerging area of legal risk.
Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) amplifies the current key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force to regulate AI in accordance with the EO’s principles by 2025. Health care entities would be well-served to monitor federal priorities and begin to formally integrate AI standards into their corporate compliance plans.
- Confidentiality and Security: Safeguarding the privacy and security of entrusted information is a core obligation, and federal scrutiny of that obligation extends to AI’s interactions with data. This general principle also manifests in more specific directives throughout the EO; for example, the EO orders the HHS AI Task Force to incorporate “measures to address AI-enhanced cybersecurity threats in the health and human services sector.”
- Transparency: The principle of transparency refers to an AI user’s ability to understand the technology’s uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect and process data and generate predictions. The EO also envisions labeling requirements that will flag AI-generated content for consumers.
- Governance: Governance applies to an organization’s control over deployed AI tools. Internal control mechanisms, such as evaluations, policies, and designated institutions, can help ensure continuous oversight throughout the AI’s life cycle. The EO also emphasizes the importance of human oversight: responsibility for AI implementation, review, and maintenance should be clearly identified and assigned to appropriate employees and specialists.
- Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will also be important to give internal and external stakeholders equitable opportunities to participate in the development and use of AI.
National Institute of Standards and Technology: Risk Management Framework
The National Institute of Standards and Technology (NIST) published its AI Risk Management Framework (RMF) in 2023. Similar to the EO, the RMF outlines broad goals (i.e., Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations that implement the EO’s principles and can help organizations proactively mitigate legal risk under future laws and regulations. For example, a health care organization might uphold AI governance and non-discrimination by deploying a diverse, AI-trained compliance team.
California Privacy Protection Agency’s Regulations on Automated Decision-Making Technology, Risk Assessments Delayed
The California Privacy Protection Agency’s (CPPA) highly anticipated regulations for automated decision-making technology and risk assessment requirements are likely far from final. The CPPA met at the beginning of the month but did not come to a consensus on what the final regulations should look like.
The CPPA’s vote was expected to be procedural but…
Privacy Tip #392 – Legitimate Platforms and AI Used to Bypass MFA
Darktrace researchers have outlined a particularly scary scenario of how threat actors are bypassing multifactor authentication (MFA) and using artificial intelligence to launch sophisticated phishing attacks against users.
The case study “leveraged legitimate Dropbox infrastructure and successfully bypassed multifactor authentication (MFA) protocols…which highlights the growing exploitation of legitimate popular services to trick targets into downloading malware and…
Memo for Use of AI During Practice Issued by USPTO
This week we are pleased to have a guest post by Robinson+Cole Artificial Intelligence Team patent agent Daniel J. Lass.
After several high-profile instances of artificial intelligence (AI) hallucination and Chief Justice John Roberts’s year-end report acknowledging the shortcomings of blindly relying on AI in legal writing, Kathi Vidal, the Director of the U.S. Patent and Trademark Office (USPTO)…
WHO Publishes Guidance for Ethics and Governance of AI for Healthcare Sector
The World Health Organization (WHO) recently published “Ethics and Governance of Artificial Intelligence for Health: Guidance on large multi-modal models” (LMMs), which is designed to provide “guidance to assist Member States in mapping the benefits and challenges associated with the use of LMMs for health and in developing policies and practices for appropriate development…