This blog post was co-authored by Labor, Employment, Benefits + Immigration Group lawyer Abby M. Warren.

It doesn’t seem fair that human resources (HR) personnel have to manage both labor shortages and overwhelming employee-management tasks, but here we are. Companies are facing a critical shortage of skilled workers that is outpacing educational institutions’ ability to train replacements, not to mention a mismatch of skills. Yet HR personnel are expected to sift through thousands of resumes of dubious quality to find skilled workers to replace the ones who are leaving at an increasing rate. As workers retire without sufficient replacements, the problem will only get worse.

To meet these challenges and demands, many companies are investing in artificial intelligence (AI) to compensate for labor shortages, in the hope of easing these growing burdens. AI generally refers to computer systems that can perform tasks that typically require human intelligence. For example, whereas we used to write our texts and emails ourselves, our phones’ generative AI now offers to finish them, or even suggests the entire message.

Most frequently, HR personnel use AI in their recruiting process, specifically to screen and review talent (e.g., scan resumes). Theoretically, AI can review more resumes more quickly than an entire HR department can. Trained properly, AI can select the best resumes and enable your team to interview higher-quality candidates. And at the interview stage, AI can transcribe and summarize live interviews.
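To make this concrete, here is a minimal sketch of what LLM-assisted resume screening can look like, assuming the OpenAI Python SDK (v1+) and an API key in the environment. The job description, resume text, model choice, and scoring rubric are all invented for illustration; this is not any vendor’s actual screening product.

```python
# A minimal sketch of LLM-assisted resume screening (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical inputs; a real system would pull these from an ATS.
JOB_DESCRIPTION = "CNC machinist: 5+ years of experience, blueprint reading, CAM software."
RESUME_TEXT = "Machinist with 7 years on Haas mills; Mastercam; reads GD&T drawings."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You screen resumes. Rate the candidate's fit for the job "
                "from 1 to 10 and list matching skills. Ignore name, age, "
                "gender, and any other protected characteristics."
            ),
        },
        {
            "role": "user",
            "content": f"Job posting:\n{JOB_DESCRIPTION}\n\nResume:\n{RESUME_TEXT}",
        },
    ],
)
print(response.choices[0].message.content)
```

Note that even an explicit instruction to ignore protected characteristics does not guarantee unbiased output, a point we return to below.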

AI can also help train new employees. AI chatbots can guide new hires through the onboarding process and answer questions in real time. They can send welcome emails and schedule training sessions, making an employee’s onboarding experience smoother with less effort from the HR department.

After training, generative AI can answer employees’ questions about various company policies and functions in real time, including:

  • Vacation, parental, and other leaves
  • Life and health insurance
  • Expense reports
  • Retirement accounts
  • Health and wellness
  • Disability coverage
  • Family benefits

Offloading these questions can free up HR personnel for more value-added tasks; a minimal sketch of such a policy bot appears below.
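This sketch again assumes the OpenAI Python SDK; the policy snippets are invented placeholders, and a real deployment would retrieve text from the employer’s actual handbook and benefits documentation.

```python
# A minimal sketch of a benefits-FAQ bot (illustrative only).
from openai import OpenAI

client = OpenAI()

# Invented placeholder policies; a real bot would load the employer's
# actual handbook text here.
POLICIES = """\
Vacation: 15 days per year, accrued monthly.
Parental leave: 12 weeks paid.
Expense reports: submit within 30 days via the finance portal.
"""

def answer(question: str) -> str:
    """Answer an employee question using only the supplied policy text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer strictly from the policy text below. If the answer "
                    "is not in the text, say so and refer the employee to HR.\n\n"
                    + POLICIES
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How many vacation days do I get?"))
```

Instructing the model to answer strictly from supplied text, and to escalate to a human otherwise, is one common guard against the hallucination problem discussed further down.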

Theoretically, generative AI can also help manage employees. Just as your phone’s AI can help you write texts, generative AI like ChatGPT can write or revise entire emails. AI can also adjust the tone of an email, making it more professional, more friendly, or more detailed, as the situation requires.
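A tone adjustment like that can be a single prompt. Here is a minimal sketch under the same SDK assumption, with an invented draft email:

```python
# A minimal sketch of tone adjustment for a draft email (illustrative only).
from openai import OpenAI

client = OpenAI()

draft = "Hey - the report is late again. Fix it."  # invented example draft

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's email in a professional, friendly tone. "
                "Keep the meaning and all facts unchanged."
            ),
        },
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```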

However, every rose has its thorn, or in this case, multiple thorns. When evaluating resumes, AI can rely on outdated stereotypes as easily as people can. In a recent study by Rippl, prompts for images of doctors, engineers, carpenters, electricians, manufacturing workers, and salespeople produced only men. When asked to generate images of an HR manager, marketing assistant, receptionist, and nurse, the AI provided only pictures of women. When asked to generate images of a CEO, the AI offered only white, middle-aged men, whereas manufacturing workers were always young men of color and housekeepers were all young women. This can be especially dangerous because, according to one recent survey, 73 percent of HR professionals said they trust AI to recommend whom to hire.

As if that weren’t enough, AI can use its generative abilities to formulate a response that is linguistically correct but factually wrong. This phenomenon, called “hallucination,” has gained attention through media reports of AI guiding people to eat poisonous mushrooms or make other dangerous mistakes. The “answers” a generative AI bot provides to employees, and AI’s email “corrections,” may contain hallucinations that mislead your employees. Used incorrectly, AI can make mistakes that take hours or days of HR time to correct.

Unfortunately for employers, their legal obligations under local, state, and federal employment laws remain the same whether they recruit, hire, and manage applicants and employees directly, through a vendor, or through the use of AI. Further, if there are issues of discrimination or bias in recruiting, hiring, and managing, those issues are typically systemic; that is, they have affected numerous applicants and employees and may result in costly enforcement actions, government investigations, or litigation.

On October 30, 2023, President Biden issued Executive Order 14110, aiming to ensure the safe, secure, and trustworthy development and use of artificial intelligence, including in federal hiring. To comply, the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) has released guidance for federal contractors on preventing discrimination in AI-driven hiring practices.

Because technology develops so rapidly, and “trends” are fast and furious, it is always hard to predict what the big issues will be for the next year. A year is a long time in the tech field. Just look at how fast ChatGPT became a sensation, with consumers and businesses falling quickly behind in analyzing

Chinese authorities have arrested alleged hackers in what appears to be the first-ever reported case of hackers using AI to develop ransomware. These alleged hackers reportedly used ChatGPT to refine the code for their home-grown ransomware encryption tool. ChatGPT has been banned in China in favor of Chinese tools such as Baidu’s Ernie Bot. However

This week, the CEO of OpenAI, the company behind ChatGPT, and the Chief Privacy Officer of IBM testified before the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law. During that hearing, both reportedly “called on U.S. senators…to more heavily regulate artificial intelligence technologies that are raising ethical, legal and national

Researchers at Meta, the owner of Facebook, released a report this week indicating that since March 2023, Meta “has blocked and shared with our industry peers more than 1,000 malicious links from being shared across our technologies.” These links were unique ChatGPT-themed web addresses designed to deliver malicious software to users’ devices.

According to Meta’s report