The Federal Trade Commission (FTC) has initiated a new law enforcement sweep called “Operation AI Comply.” The operation shows that the FTC is serious about protecting consumers from companies that use artificial intelligence (AI) tools and services to “trick, mislead, or defraud people.” Such conduct is “illegal,” and the FTC has already announced the first five enforcement actions under the sweep.

Recently, the National Institute of Standards and Technology (NIST) released the second public draft of its Digital Identity Guidelines (Draft Guidelines). The Draft Guidelines focus on online identity verification, but several provisions have implications for government contractors’ cybersecurity programs, as well as contractors’ use of artificial intelligence (AI) and machine learning (ML).

Government Contractor Cybersecurity Requirements

Many government contractors have become familiar with personal identity verification standards through NIST’s 2022 FIPS PUB 201-3, “Standard for Personal Identity Verification (PIV) of Federal Employees and Contractors,” which established standards for contractors’ PIV systems used to access federally controlled facilities and information systems. Among other things, FIPS PUB 201-3 incorporated biometrics, cryptography, and public key infrastructure (PKI) to authenticate users, and it outlined the protection of identity data, infrastructure, and credentials.

Whereas FIPS PUB 201-3 set the foundational standard for PIV credentialing of government contractors, the Draft Guidelines expand upon those requirements by introducing new provisions on identity proofing, authentication, and identity management. These additions include:

Expanded Identity Proofing Models. The Draft Guidelines offer a new taxonomy and structure for the requirements at each assurance level based on how proofing is performed: remote unattended, remote attended (e.g., videoconferencing), onsite unattended (e.g., kiosks), or onsite attended proofing.

Continuous Evaluation and Monitoring. NIST’s December 2022 Initial Public Draft (IPD) of the guidelines required “continuous improvement” of contractors’ security systems. Building on that requirement, the Draft Guidelines introduce continuous evaluation metrics for the identity management systems contractors use. The Draft Guidelines direct organizations to implement a continuous evaluation and improvement program that leverages input from end users interacting with the identity management system and performance metrics for the online service. Organizations must document this program, including the metrics collected, the data sources, and the processes in place for taking timely action based on the results.
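
To make the documentation requirement concrete, here is a minimal sketch of how an organization might record such a program in code. The metric names, thresholds, and responses below are invented for illustration; the Draft Guidelines prescribe documentation, not any particular format.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationMetric:
    """One continuously collected metric, with its documented source and response."""
    name: str          # e.g., "proofing_failure_rate" (hypothetical)
    data_source: str   # e.g., "CSP event logs"
    threshold: float   # value that triggers the documented action
    action: str        # timely action taken when the threshold is crossed

@dataclass
class ContinuousEvaluationProgram:
    """Documented program: metrics collected, data sources, and response processes."""
    system_name: str
    metrics: list[EvaluationMetric] = field(default_factory=list)

    def metrics_needing_action(self, observed: dict[str, float]) -> list[EvaluationMetric]:
        # Flag any metric whose observed value meets or exceeds its threshold.
        return [m for m in self.metrics if observed.get(m.name, 0.0) >= m.threshold]

program = ContinuousEvaluationProgram(
    system_name="contractor-idm",
    metrics=[
        EvaluationMetric("proofing_failure_rate", "CSP event logs", 0.05,
                         "escalate to fraud team within 24 hours"),
        EvaluationMetric("user_complaints_per_week", "end-user feedback portal", 10,
                         "review usability of the proofing flow"),
    ],
)
print(program.metrics_needing_action({"proofing_failure_rate": 0.08}))
```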

Fraud Detection and Mitigation Requirements. The Draft Guidelines add programmatic fraud detection and mitigation requirements for credential service providers (CSPs) and government agencies. Organizations must monitor the evolving threat landscape to stay informed of the latest threats and fraud tactics, and they must regularly assess the effectiveness of their security measures and fraud detection capabilities against those threats and tactics.

Syncable Authenticators and Digital Wallets. In April 2024, NIST published interim guidance for syncable authenticators. The Draft Guidelines integrate this guidance and thus allow syncable authenticators and digital wallets (previously described as attribute bundles) as valid mechanisms to store and manage digital credentials. Relatedly, the Draft Guidelines provide for user-controlled wallets and attribute bundles, allowing contractors to manage their identity attributes (e.g., digital certificates or credentials) and present them securely to different federal systems.

Risk-Based Authentication. The Draft Guidelines outline risk-based authentication mechanisms, whereby the required authentication level can vary with the risk of the transaction or system being accessed. This allows government agencies to assign contractors authentication methods commensurate with the sensitivity of the information or systems involved.
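
As a rough illustration of that idea, the sketch below maps a transaction’s sensitivity to a required authenticator assurance level (AAL) in the spirit of NIST SP 800-63. The sensitivity tiers, the step-up rule, and the function itself are assumptions for illustration, not values from the Draft Guidelines.

```python
from enum import IntEnum

class AAL(IntEnum):
    """NIST SP 800-63 authenticator assurance levels."""
    AAL1 = 1  # single-factor authentication
    AAL2 = 2  # multi-factor authentication
    AAL3 = 3  # multi-factor with a hardware-based authenticator

# Hypothetical sensitivity tiers; a real agency would define its own.
BASELINE = {"public": AAL.AAL1, "internal": AAL.AAL2, "high_value": AAL.AAL3}

def required_aal(sensitivity: str, fraud_signal: bool = False) -> AAL:
    """Pick the minimum AAL for a transaction based on its assessed risk."""
    level = BASELINE[sensitivity]
    # Step up one level (capped at AAL3) if fraud monitoring flagged the session.
    if fraud_signal and level < AAL.AAL3:
        level = AAL(level + 1)
    return level

print(required_aal("internal"))                     # AAL.AAL2
print(required_aal("internal", fraud_signal=True))  # AAL.AAL3
```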

Privacy, Equity, and Usability Considerations. The Draft Guidelines emphasize privacy, equity, and usability as core requirements for digital identity systems. Under the Guidelines, “[o]nline services must be designed with equity, usability, and flexibility to ensure broad and enduring participation and access to digital devices and services.” This includes ensuring that contractors with disabilities or other accessibility needs are provided with workable identity solutions. The Draft Guidelines’ emphasis on equity complements NIST’s previous statements on bias in AI.

Authentication via Biometrics and Multi-Factor Authentication (MFA). The Draft Guidelines emphasize the use of MFA, including biometrics, as an authentication mechanism for contractors. This complements FIPS PUB 201-3, which already requires biometrics for physical and logical access, while updating how those authentication mechanisms are implemented.

Continue Reading NIST Proposes New Cybersecurity and AI Guidelines for Federal Government Contractors

On July 29, 2024, the American Bar Association issued ABA Formal Opinion 512, “Generative Artificial Intelligence Tools.”

The opinion addresses the ethical obligations lawyers must weigh when using generative AI (GenAI) tools in the practice of law.

The opinion sets forth the ethical rules to consider, including the duties of competence, confidentiality, and communication, along with obligations concerning candor, supervision, and fees.

On May 17, 2024, Colorado Governor Jared Polis signed, “with reservations,” Senate Bill 24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act). The first of its kind in the United States, the Act takes effect on February 1, 2026, and requires artificial intelligence (AI) developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination.

On August 1, 2024, the Cybersecurity and Infrastructure Security Agency (CISA) announced the appointment of its first CISA Chief Artificial Intelligence Officer. The appointee, Lisa Einstein, previously served as CISA’s Senior Advisor for AI and as Executive Director of CISA’s Cybersecurity Advisory Committee, advising CISA on reducing risk to critical infrastructure.

This blog post was co-authored by Labor, Employment, Benefits + Immigration Group lawyer Abby M. Warren.

It doesn’t seem fair that human resources (HR) personnel have to manage both labor shortages and overwhelming employee management tasks, but here we are. Companies face a critical shortage of skilled workers that is outpacing educational institutions’ ability to train replacements, not to mention a mismatch of skills. Yet HR personnel are expected to sift through thousands of resumes of dubious potential to find skilled workers to replace the ones leaving at an increasing rate. As workers retire without enough trained successors to take their place, the problem will only get worse.

To meet these challenges, many companies are investing in artificial intelligence (AI) to compensate for labor shortages, in the hope of easing these growing burdens. AI generally refers to computers performing actions that typically require human intelligence. For example, whereas we used to write our texts and emails ourselves, our phones’ generative AI now offers to finish them, or even suggests the entire message.

Most frequently, HR personnel use AI in the recruiting process, specifically to screen and review talent (e.g., scan resumes). Theoretically, AI can review more resumes more quickly than an entire HR department can. Trained properly, AI can select the best resumes and enable your team to interview higher-quality candidates, as the sketch below illustrates. And at the interview stage, AI can transcribe and summarize live interviews.
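
For a feel of what the screening step looks like, here is a toy ranker that scores each resume by its overlap with the job description’s terms. The job description and candidates are made up, and real screening tools use trained models rather than word overlap; note that even a scorer this simple can encode bias (e.g., penalizing unfamiliar phrasing), which is exactly the risk discussed below.

```python
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Lowercased word counts for a block of text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def score(resume: str, job_description: str) -> float:
    """Fraction of the job description's terms that also appear in the resume."""
    jd, cv = tokens(job_description), tokens(resume)
    overlap = sum(min(jd[w], cv[w]) for w in jd)
    return overlap / max(sum(jd.values()), 1)

jd = "CNC machinist with blueprint reading and quality control experience"
resumes = {  # hypothetical candidates
    "Candidate A": "Machinist: CNC programming, blueprint reading, quality control",
    "Candidate B": "Retail associate with customer service experience",
}
# Rank candidates from best to worst match.
for name in sorted(resumes, key=lambda n: -score(resumes[n], jd)):
    print(name, round(score(resumes[name], jd), 2))
```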

AI can also help train new employees. AI chatbots can guide new hires through the onboarding process and answer questions in real time. They can also send welcome emails and schedule training sessions, making an employee’s onboarding experience smoother with less effort from the HR department.

After training, generative AI can answer employees’ questions about various company policies and functions in real time, including:

  • Vacation, parental, and other leaves
  • Insurance (life and health)
  • Expense reports
  • Retirement accounts
  • Health and wellness
  • Disability coverage
  • Family benefits

Fielding these questions automatically can free up HR personnel’s time for more value-added tasks, as in the sketch below.
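
As a minimal sketch of the idea, the answerer below can only quote stored policy text and hands anything it doesn’t recognize to a human, one simple way to limit the “hallucination” risk discussed later. The policies, matching threshold, and wording are all invented for illustration.

```python
import difflib

# Hypothetical policy snippets an employer might load from its handbook.
POLICIES = {
    "parental leave": "Employees receive 12 weeks of paid parental leave.",
    "expense reports": "Submit expense reports within 30 days of purchase.",
    "retirement accounts": "The company matches 401(k) contributions up to 4%.",
}

def answer(question: str, cutoff: float = 0.4) -> str:
    """Return the closest stored policy, or route to HR when nothing matches well."""
    match = difflib.get_close_matches(question.lower(), list(POLICIES), n=1, cutoff=cutoff)
    if match:
        return POLICIES[match[0]]
    return "I'm not sure about that one; routing your question to HR."

print(answer("Parental leave?"))              # quotes the stored policy verbatim
print(answer("Can I bring my dog to work?"))  # no close match, so falls back to a human
```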

Theoretically, generative AI can also help manage employees. Just like your phone’s AI can help you write texts, generative AI like ChatGPT can write or revise entire emails. And AI can adjust the tone of an email, making it more professional, more friendly, more detailed, etc., as the situation requires.

However, every rose has its thorn, or in this case several thorns. When evaluating resumes, AI can rely on outdated stereotypes as easily as people can. A recent study by Rippl found that prompts for doctors, engineers, carpenters, electricians, manufacturing workers, and salespeople produced only male results. When asked to generate images of an HR manager, marketing assistant, receptionist, and nurse, the AI provided only pictures of women. When asked to generate images of a CEO, the AI offered only white, middle-aged men, whereas manufacturing workers were always young men of color and housekeepers were all young women. This is especially dangerous because, according to one recent survey, 73 percent of HR professionals said they trust AI to recommend whom to hire.

As if that weren’t enough, AI can use its generative abilities to formulate a response that is linguistically correct but factually wrong. This phenomenon, called “hallucination,” has gained attention through media reports of AI guiding people to eat poisonous mushrooms or make other mistakes. In other words, the “answers” your generative AI bot provides and its email “corrections” may contain hallucinations that mislead your employees. Used incorrectly, AI can make mistakes that take hours or days of HR time to correct.

Unfortunately for employers, their legal obligations under local, state, and federal employment laws remain regardless of whether they recruit, hire, and manage applicants and employees directly, through a vendor, or through the use of AI. Further, when discrimination or bias arises in recruiting, hiring, and managing, the issues are typically systemic; that is, they have affected numerous applicants and employees and may result in costly enforcement actions, government investigations, or litigation.

Continue Reading AI Lands in the Workplace

Artificial Intelligence (AI) can offer manufacturers and other companies necessary assistance during the current workforce shortage. It can help workers answer questions from customers and other workers, fill skill gaps, and even help get your new employees up to speed faster. However, using AI comes with challenges and risks that companies must recognize and address.