The National Institute of Standards and Technology (NIST) has issued helpful recommendations for consumers to consider when securing home routers.

The publication, issued on September 10, 2024, emphasizes the importance of securing your home router, particularly given the expansion of smart home and Internet of Things (IoT) devices and remote work.

Recently, the National Institute of Standards and Technology (NIST) released its second public draft of Digital Identity Guidelines (Draft Guidelines). The Draft Guidelines focus on online identity verification, but several provisions have implications for government contractors’ cybersecurity programs, as well as contractors’ use of artificial intelligence (AI) and machine learning (ML). 

Government Contractor Cybersecurity Requirements

Many government contractors have become familiar with personal identity verification standards through NIST’s 2022 FIPS PUB 201-3, “Standard for Personal Identity Verification (PIV) of Federal Employees and Contractors,” which established standards for contractors’ PIV systems used to access federally controlled facilities and information systems. Among other things, FIPS PUB 201-3 incorporated biometrics, cryptography, and public key infrastructure (PKI) to authenticate users, and it outlined the protection of identity data, infrastructure, and credentials.

Whereas FIPS PUB 201-3 set the foundational standard for PIV credentialing of government contractors, the Draft Guidelines expand upon these requirements by introducing provisions regarding identity proofing, authentication, and management. These additional requirements include:

Expanded Identity Proofing Models. The Draft Guidelines offer a new taxonomy and structure for the requirements at each assurance level based on how the proofing is performed: remote unattended proofing, remote attended proofing (e.g., videoconferencing), onsite unattended proofing (e.g., kiosks), or onsite attended proofing.

Continuous Evaluation and Monitoring. NIST’s December 2022 Initial Public Draft (IPD) of the guidelines required “continuous improvement” of contractors’ security systems. Building upon this requirement, the Draft Guidelines introduce requirements for continuous evaluation metrics for the identity management systems contractors use. The Draft Guidelines direct organizations to implement a continuous evaluation and improvement program that leverages input from end users interacting with the identity management system and performance metrics for the online service. Under the Draft Guidelines, organizations must document this program, including the metrics collected, the data sources, and the processes in place for taking timely action based on the continuous improvement process first required in the IPD.

Fraud Detection and Mitigation Requirements. The Draft Guidelines add programmatic fraud detection and mitigation requirements for credential service providers (CSPs) and government agencies. Organizations must also monitor the evolving threat landscape to stay informed of the latest threats and fraud tactics, and must regularly assess the effectiveness of their current security measures and fraud detection capabilities against those threats and tactics.

Syncable Authenticators and Digital Wallets. In April 2024, NIST published interim guidance for syncable authenticators. The Draft Guidelines integrate this guidance and thus allow the use of syncable authenticators and digital wallets (previously described as attribute bundles) as valid mechanisms to store and manage digital credentials. Relatedly, the Draft Guidelines provide for user-controlled wallets and attribute bundles, allowing contractors to manage their identity attributes (e.g., digital certificates or credentials) and present them securely to different federal systems.

Risk-Based Authentication. The Draft Guidelines outline risk-based authentication mechanisms, whereby the required authentication level can vary based on the risk of the transaction or system being accessed. This allows government agencies to assign appropriate authentication methods for contractors based on the sensitivity of the information or systems they are accessing.
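
As a purely hypothetical illustration (not drawn from the Draft Guidelines themselves), the sketch below shows one way such a risk-based policy might be expressed in code, mapping a transaction’s assessed risk tier to a required NIST Authentication Assurance Level (AAL) and a set of acceptable authenticators. The risk tiers, AAL assignments, and authenticator lists are illustrative assumptions only.

```python
# Hypothetical sketch of risk-based authentication: map a transaction's assessed
# risk tier to a required NIST Authentication Assurance Level (AAL) and a set of
# acceptable authenticators. The tiers, AAL assignments, and authenticator lists
# below are illustrative assumptions, not requirements from the Draft Guidelines.
from enum import Enum


class TransactionRisk(Enum):
    LOW = "low"            # e.g., viewing publicly releasable information
    MODERATE = "moderate"  # e.g., accessing controlled unclassified information
    HIGH = "high"          # e.g., administering a federal system


# Assumed policy table: higher-risk transactions demand a higher AAL and a
# narrower set of acceptable authenticators.
AUTHENTICATION_POLICY = {
    TransactionRisk.LOW: {
        "required_aal": "AAL1",
        "acceptable_authenticators": ["memorized secret", "OTP app"],
    },
    TransactionRisk.MODERATE: {
        "required_aal": "AAL2",
        "acceptable_authenticators": ["memorized secret + OTP app", "syncable passkey"],
    },
    TransactionRisk.HIGH: {
        "required_aal": "AAL3",
        "acceptable_authenticators": ["PIV smart card", "hardware security key"],
    },
}


def required_authentication(risk: TransactionRisk) -> dict:
    """Return the assumed authentication requirements for a given risk tier."""
    return AUTHENTICATION_POLICY[risk]


if __name__ == "__main__":
    for risk in TransactionRisk:
        policy = required_authentication(risk)
        print(f"{risk.value}: {policy['required_aal']} via {policy['acceptable_authenticators']}")
```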

Privacy, Equity, and Usability Considerations. The Draft Guidelines emphasize privacy, equity, and usability as core requirements for digital identity systems. Under the Draft Guidelines, “[O]nline services must be designed with equity, usability, and flexibility to ensure broad and enduring participation and access to digital devices and services.” This includes ensuring that contractors with disabilities or special needs are provided with accessible identity solutions. The Draft Guidelines’ emphasis on equity complements NIST’s previous statements on bias in AI.

Authentication via Biometrics and Multi-Factor Authentication (MFA). The Draft Guidelines emphasize the use of MFA, including biometrics, as an authentication mechanism for contractors. This complements FIPS PUB 201-3, which already requires biometrics for physical and logical access, and enhances its implementation with updated authentication guidance.
Continue Reading NIST Proposes New Cybersecurity and AI Guidelines for Federal Government Contractors

Generative artificial intelligence (AI) has opened a new front in the battle to keep confidential information secure. The National Institute of Standards and Technology (NIST) recently released a draft report highlighting the risk generative AI poses to data security. The report, entitled “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” details generative AI’s potential

The Federal Trade Commission (FTC) has declined to approve a new method for obtaining parental consent under the Children’s Online Privacy Protection Act (COPPA) that would involve analyzing facial geometry to verify an adult’s identity.

In a letter to the Entertainment Software Rating Board (ESRB), Yoti (a digital identity company), and SuperAwesome (a company that

*This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.

Health care entities maintain compliance programs in order to comply with the myriad, changing laws and regulations that apply to the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) are limited and still in the early stages of development, current law and pending legislation offer a forecast of the standards that may become applicable to AI. Health care entities may want to begin monitoring the evolving guidance applicable to AI and integrating AI standards into their compliance programs in order to manage and minimize this emerging area of legal risk.

Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) amplifies the current key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force to regulate AI in accordance with the EO’s principles by 2025. Health care entities would be well-served to monitor federal priorities and begin to formally integrate AI standards into their corporate compliance plans.

  • Transparency: The principle of transparency refers to an AI user’s ability to understand the technology’s uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect and process data and generate predictions. The EO also envisions labeling requirements that will flag AI-generated content for consumers.
  • Governance: Governance applies to an organization’s control over deployed AI tools. Internal controls, such as evaluations, policies, and institutional oversight, can help ensure continuous control throughout the AI’s life cycle. The EO also emphasizes the importance of human oversight: responsibility for AI implementation, review, and maintenance should be clearly identified and assigned to appropriate employees and specialists.
  • Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will also be important to give internal and external stakeholders equitable opportunities to participate in the development and use of AI.

National Institute of Standards and Technology: Risk Management Framework

The National Institute of Standards and Technology (NIST) published a Risk Management Framework for AI (RMF) in 2023. Similar to the EO, the RMF outlines four broad functions (Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations that implement EO principles and help organizations proactively mitigate legal risk under future laws and regulations. For example, a health care organization may uphold AI governance and non-discrimination by deploying a diverse, AI-trained compliance team.
Continue Reading Forecasting the Integration of AI into Health Care Compliance Programs

As modern companies are increasingly adopting AI systems to automate and augment their businesses, many legal and compliance departments have cautioned against fully embracing this new and untested technology. Successful companies will need to develop an approach that allows them to benefit from AI’s competitive advantage while mitigating their risk of litigation.

In response to

The National Institute of Standards and Technology (NIST) Information Technology Laboratory recently released guidance entitled “Software Supply Chain Security Guidance,” in response to directives set forth in President Biden’s Executive Order 14028—Improving the Nation’s Cybersecurity.

The guidance refers to existing industry standards, tools, and recommended practices that were previously published by NIST in SP800-161 “Cybersecurity

The National Institute of Standards and Technology (NIST) recently released a Request for Information (RFI) seeking input to help it evaluate and improve cybersecurity resources for the Cybersecurity Framework and cybersecurity supply chain risk management.

NIST indicated in its FAQs about the RFI that it is seeking feedback on the following objectives:

  • Evaluate

The National Institute of Standards and Technology (NIST) continues to offer timely and relevant information for companies to consider when addressing cyber-risks in an ever-changing landscape.

On February 2, 2021, NIST published an alert outlining tools it has developed “to help defend against state-sponsored hackers.” According to its press release, nation-state

There is a new federal IoT law, H.R. 1668, the IoT Cybersecurity Improvement Act of 2020, that recently passed the House and Senate and was signed by the President on December 4. The bill had 26 co-sponsors, representing Democrats and Republicans almost equally, and enjoyed bipartisan support in an era that has not seen