*This post was authored by Daniel Lass, law clerk at Robinson+Cole. Daniel is not admitted to practice law.

In the early to mid-2000s, Yahoo! worked to develop and refine its search engine capabilities. During this period, Yahoo! obtained U.S. Patent Nos. 8,341,157; 7,698,329; 8,209,317; 9,805,097; and 8,527,623, which are generally related to improving the

*This post was authored by Daniel Lass, law clerk at Robinson+Cole. Daniel is not admitted to practice law.

Swiss company Scandit AG created an application called ShelfView, which enables retailers to verify the prices of various products and ensure that associated promotions are correctly updated. The application utilizes barcode scanning, optical character recognition, and augmented

*This post was authored by Daniel Lass, law clerk at Robinson+Cole. Daniel is not admitted to practice law.

In 2023, visual artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action lawsuit against several artificial intelligence (AI) companies, alleging that the companies’ various AI models violated copyright law by using the artists’ work

Thank you to Jon Schaefer for this post. Jon focuses his practice on environmental compliance counseling and occupational health and safety.

On July 30, 2024, the U.S. EPA Office of Inspector General issued a fraud alert to bring attention to an increasing number of companies reporting that they have received fraudulent EPA Notice of Violation

This blog post was co-authored by Labor, Employment, Benefits + Immigration Group lawyer Abby M. Warren.

It doesn’t seem fair that human resources (HR) personnel have to manage both labor shortages and overwhelming employee management tasks, but here we are. Companies are facing a critical shortage of skilled workers that is outpacing educational institutions’ ability to train replacements, not to mention a mismatch of skills. Yet HR personnel are expected to sift through thousands of resumes, many of dubious potential, to find skilled workers to replace those who are leaving at an increasing rate. As workers retire without sufficient replacements, the problem will only get worse.

To meet these challenges and demands, many companies are investing in artificial intelligence (AI) to compensate for labor shortages, in the hope that it will alleviate these increasing burdens. AI generally refers to computers that can perform actions that typically require human intelligence. For example, whereas we used to write our texts and emails ourselves, our phones’ generative AI now offers to finish our texts and emails, or even suggests entire messages.

Most frequently, HR personnel use AI in their recruiting process, specifically to screen and review talent (e.g., scan resumes). Theoretically, AI can review more resumes more quickly than an entire HR department can. Trained properly, AI can select the best resumes and enable your team to interview higher-quality candidates. And at the interview stage, AI can transcribe and summarize live interviews.

AI can also help train new employees. AI chatbots can guide new hires through the onboarding process and provide answers to questions in real time. They can send welcome emails and schedule training sessions, which can make an employee’s onboarding experience smoother with less effort from the HR department.

After training, generative AI can answer employees’ questions about various company policies and functions in real time, including:

  • Vacation, parental, and other leaves
  • Insurance (life and health)
  • Expense reports
  • Retirement accounts
  • Health and wellness
  • Disability coverage
  • Family benefits

Having AI answer these questions frees up HR personnel’s time for more value-added tasks.

Theoretically, generative AI can also help manage employees. Just like your phone’s AI can help you write texts, generative AI like ChatGPT can write or revise entire emails. And AI can adjust the tone of an email, making it more professional, more friendly, more detailed, etc., as the situation requires.

However, every rose has its thorn, or multiple thorns. When evaluating resumes, AI can rely upon outdated stereotypes as easily as people can. A recent study by Rippl found that prompts for doctors, engineers, carpenters, electricians, manufacturing workers, and salespeople produced only male results. When asked to generate images of an HR manager, marketing assistant, receptionist, and nurse, AI provided only pictures of women. When asked to generate images of a CEO, AI offered only white, middle-aged men, whereas manufacturing workers were always young men of color and housekeepers were all young women. This can be especially dangerous because, according to one recent survey, 73 percent of HR professionals said they trust AI to recommend whom to hire.

As if that weren’t enough, AI can use its generative abilities to formulate a response that is linguistically correct but factually wrong. This phenomenon, called “hallucination,” has gained attention through media reports of AI guiding people to eat poisonous mushrooms or make other mistakes. In the workplace, the “answers” that your generative AI bot provides employees and AI’s email “corrections” may contain hallucinations that mislead your employees. Used incorrectly, AI can make mistakes that could take hours or days of HR time to correct.

Unfortunately for employers, their legal obligations under local, state, and federal employment laws remain in place regardless of whether they are recruiting, hiring, and managing applicants and employees directly, through a vendor, or through the use of AI. Further, if there are issues of discrimination or bias in recruiting, hiring, and managing, those issues are typically systemic, meaning they have impacted numerous applicants and employees and may result in costly enforcement actions, government investigations, or litigation.

Continue Reading AI Lands in the Workplace

Below is an excerpt of a legal update co-authored with Government Enforcement + White Collar Defense Partner David E. Carney.

On June 17, the Department of Justice (DOJ) announced settlements of alleged False Claims Act (FCA) violations associated with cybersecurity requirements in contracts to provide a secure environment for online applications for federal housing

*This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.

Health care entities maintain compliance programs to keep up with the myriad, changing laws and regulations that apply to the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) are limited at this time and in the early stages of development, current law and pending legislation offer a forecast of standards that may become applicable to AI. Health care entities may want to begin monitoring the evolving guidance applicable to AI and integrating AI standards into their compliance programs to manage and minimize this emerging area of legal risk.

Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) amplifies the current key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force to regulate AI in accordance with the EO’s principles by 2025. Health care entities would be well-served to monitor federal priorities and begin to formally integrate AI standards into their corporate compliance plans.

  • Transparency: The principle of transparency refers to an AI user’s ability to understand the technology’s uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect, process, and predict data. The EO also envisions labeling requirements that will flag AI-generated content for consumers.
  • Governance: Governance applies to an organization’s control over deployed AI tools. Internal controls, such as evaluations, policies, and institutions, may ensure continuous control throughout the AI’s life cycle. The EO also emphasizes the importance of human oversight. Responsibility for AI implementation, review, and maintenance can be clearly identified and assigned to appropriate employees and specialists.
  • Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will be important to give internal and external stakeholders equitable opportunities to participate in the development and use of AI.

National Institute of Standards and Technology: Risk Management Framework

The National Institute of Standards and Technology (NIST) published a Risk Management Framework for AI (RMF) in 2023. Similar to the EO, the RMF outlines broad goals (i.e., Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations that implement EO principles and assist organizations in proactively mitigating legal risk under future laws and regulations. For example, a health care organization may uphold AI governance and non-discrimination by deploying a diverse, AI-trained compliance team.

Continue Reading Forecasting the Integration of AI into Health Care Compliance Programs

This week we are pleased to have a guest post by Robinson+Cole Business Transaction Group lawyer Tiange (Tim) Chen.

On February 28, 2024, the Justice Department published an Advance Notice of Proposed Rulemaking (ANPRM) to seek public comments on the establishment of a new regulatory regime to restrict U.S. persons from transferring bulk sensitive