Last week, Illinois Governor JB Pritzker signed S.B. 2979, which amends the Biometric Information Privacy Act (BIPA), effective immediately, to define the repeated collection of the same biometric data without consent as a single, collective violation of the Act. This is a significant change. The amendment alters the precedent set by the Illinois Supreme Court in February 2023 in Cothron v. White Castle Sys. Inc., which permitted plaintiffs to seek damages for “every scan or transmission” of biometric information collected without consent. It will reduce the damages sought by plaintiffs in BIPA class actions and may even reduce the volume of BIPA litigation. With this change, companies will likely see lower sums sought in BIPA suits and a greater likelihood that their insurers will cover these claims, although insurers may still be hesitant to pay BIPA claims after years of disagreement with businesses over the Illinois law.

What does BIPA require? The Act permits businesses to collect and store biometric data from employees and consumers only with prior written consent. The big difference between BIPA and other state privacy laws is that BIPA provides a private right of action, allowing consumers to seek $1,000 for each negligent violation and $5,000 for each intentional or reckless violation. The defendant in the first BIPA case to reach a jury paid $75 million to settle after the jury determined that it had violated the privacy rights of thousands of its employees. The amendment addresses how violations are counted for damages calculations but doesn’t change the fact that consumers can still seek up to $5,000 per violation. Further, the amendment doesn’t state whether the change applies retroactively, leaving that question for the courts to decide.
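To see why the counting rule matters so much, consider a hypothetical damages calculation. The sketch below uses assumed figures (a 1,000-person class, two scans per workday over 250 days) purely for illustration; it is not drawn from any actual case.

```python
# Hypothetical illustration of how the counting rule changes BIPA exposure.
# All figures below are assumptions for illustration only.

NEGLIGENT_PER_VIOLATION = 1_000   # statutory damages per negligent violation
RECKLESS_PER_VIOLATION = 5_000    # statutory damages per intentional/reckless violation

def exposure(class_size: int, scans_per_person: int, per_scan: bool, rate: int) -> int:
    """Return total statutory damages exposure.

    per_scan=True  -> Cothron-style counting (every scan is a separate violation)
    per_scan=False -> post-amendment counting (one collective violation per person)
    """
    violations_per_person = scans_per_person if per_scan else 1
    return class_size * violations_per_person * rate

# 1,000 employees, each scanned twice a day for 250 working days
scans = 2 * 250
print(f"{exposure(1_000, scans, per_scan=True, rate=NEGLIGENT_PER_VIOLATION):,}")   # 500,000,000
print(f"{exposure(1_000, scans, per_scan=False, rate=NEGLIGENT_PER_VIOLATION):,}")  # 1,000,000
```

Under the same assumed facts, per-scan counting yields exposure 500 times larger than per-person counting, which is why the amendment is expected to shrink settlement demands.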

As far as the insurers go, there will still be questions about whether insurance policies cover BIPA claims. Many policies exclude coverage for federal or state law violations, which some insurers argue bars coverage of BIPA claims. On the other hand, some cyber and employment liability policies are clearer on coverage for BIPA claims. So, while this amendment may not have the answers for insurers, it could at least give them more clarity around expected damages in BIPA litigation, which will, in turn, make these risks easier to underwrite. Of course, as in the cyber insurance arena, the underwriting and application process will likely include more specific questions about compliance with BIPA and how the business obtains consent from employees and consumers. We’ll see how this amendment changes the trends.

On May 17, 2024, Colorado Governor Jared Polis signed, “with reservations,” Senate Bill 24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act). The first of its kind in the United States, the Act takes effect on February 1, 2026, and requires artificial intelligence (AI) developers, and businesses that use high-risk AI systems, to adhere to certain transparency and AI governance requirements.

The Governor sent a letter to the Colorado General Assembly explaining his reservations about signing the Act. He noted that the bill “targets ‘high risk’ AI systems involved in making consequential decisions, and imposes a duty on developers and deployers to avoid ‘algorithmic discrimination’ in the use of such systems.” He encouraged the legislature to “reexamine” the concept of algorithmic discrimination as applied to the results of AI system use before the effective date in 2026.

If your company does business in Colorado and either develops or deploys AI systems, your company may first need to determine whether the systems it uses qualify as high-risk AI systems. A “High-Risk AI System” means any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. A “Consequential Decision” is one that has a material legal or similarly significant effect on the provision or denial of education enrollment or opportunity, employment opportunity, financial or lending services, essential government services, health care services, housing, insurance, or legal services.

Unlike other state consumer privacy laws, this Act does not have a threshold number of consumers to trigger applicability. Further, both the Act and the Colorado Privacy Act (CPA) (similar to the California Consumer Privacy Act (CCPA)) use the term “consumers,” but the term refers to all Colorado residents under this Act. In contrast, the CPA defines consumers as Colorado residents “acting only in an individual or household context,” excluding anyone in a commercial or employment context. Therefore, businesses that are not subject to the CPA may still have obligations under the Act.

The Act aims to prevent algorithmic discrimination in the development and use of AI systems. “Algorithmic discrimination” means any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals based on their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other protected classification under state or federal law.

What are the requirements of the Act?

For Developers:

  • To avoid algorithmic discrimination in the development of high-risk artificial intelligence systems, developers must prepare a statement describing the “reasonably foreseeable uses and known harmful or inappropriate uses of the system,” the type of data used to train the system, the risks of algorithmic discrimination, the purpose of the system, and the intended benefits and uses of the system.
  • Additionally, the developer must provide documentation with the AI product stating how the system was evaluated to mitigate algorithmic discrimination before it was made available for use, the data governance measures utilized in development, how the system should be used (and not be used), and how the system should be monitored when used for consequential decision-making. Developers are also required to update the statement no later than 90 days after modifying the system.
  • Developers must also disclose to the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination arising from the system’s intended uses without unreasonable delay, but no later than 90 days after discovery (through ongoing testing and analysis or a credible report from a business).

For Businesses:

  • Businesses that use high-risk AI systems must implement a risk management policy and program to govern the system’s deployment. The Act sets out specific requirements for that policy and program and instructs businesses to consider the size and complexity of the company itself, the nature and scope of the systems, and the sensitivity and volume of data processed by the system. Businesses must also conduct an impact assessment for the system at least annually in accordance with the Act. However, there are some exemptions from this impact assessment requirement (e.g., fewer than 50 employees, does not use its own data to train the high-risk AI system, etc.).
  • Additionally, businesses must notify consumers that they are using an AI system to make a consequential decision before the decision is made. The Act sets forth the specific content requirements of the notice, such as how the business manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the system’s deployment. If the CPA also applies to the business (in addition to the Act), the company must provide consumers the right to opt out of the processing of personal data by such AI systems for profiling purposes.
  • Businesses must also disclose to the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination arising from the use of the system no later than 90 days after discovery.

The Act requires developers and businesses who deploy, offer, sell, lease, license, give, or otherwise make available an AI system that is intended to interact with consumers to disclose to each consumer who interacts with the system that the consumer is interacting with an AI system.

Although noting that the Act is “among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale,” Colorado’s Governor stated in his letter to the legislature that “stakeholders, including industry leaders, must take the intervening two years before this measure takes effect to fine-tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado that can improve the lives of individuals across our state.” He further noted:

“I want to be clear in my goal of ensuring Colorado remains home to innovative technologies and our consumers are able to fully access important AI-based products. Should the federal government not preempt this with a needed cohesive federal approach, I encourage the General Assembly to work closely with stakeholders to craft future legislation for my signature that will amend this bill to conform with evidence based findings and recommendations for the regulation of this industry.”

As we have seen with state consumer privacy rights laws, this new AI law may be a model that other states will follow but, based upon the Governor’s letter to the Colorado legislature, we anticipate that there will be additional iterations of the law before it becomes effective. Stay tuned.

On August 1, 2024, the Cybersecurity and Infrastructure Security Agency (CISA) announced the appointment of its first CISA Chief Artificial Intelligence Officer. The appointee, Lisa Einstein, served as CISA’s Senior Advisor for AI and as Executive Director of CISA’s Cybersecurity Advisory Committee, advising CISA on the reduction of risk to critical infrastructure. She earned a dual master’s degree in computer science and international cyber policy from Stanford.

According to CISA, the appointment of a Chief Artificial Intelligence Officer “reflects CISA’s commitment to responsibly use AI to advance its cyber defense mission and to support critical infrastructure owners and operators across the United States in the safe and secure development and adoption of AI.”

HealthEquity, an administrator of workplace benefits for more than 15 million people, is notifying 4.3 million individuals, starting on August 9, 2024, that their personal information was compromised. The compromised data includes names, addresses, phone numbers, employee IDs, employers, Social Security numbers, health card numbers, health plan member numbers, benefit types, dependent information, diagnosis information, prescription information, and payment card information.

The incident occurred when a third-party vendor’s user account was compromised and the user’s password was stolen. The vendor’s credentials were then used to access a data repository that included customers’ personal information. HealthEquity has posted a notice of the data breach on its website and will offer affected individuals two years of credit monitoring. If you have an account with HealthEquity, visit its website, which includes a toll-free number for questions.

It is heartwarming that 16 prisoners, including innocent ex-Marine Paul Whelan and Wall Street Journal reporter Evan Gershkovich, have been freed from their wrongful imprisonment in Russia in exchange for 24 convicted Russian prisoners. What is disturbing is that innocent individuals wrongfully convicted are being used to bargain for convicted individuals, including cybercriminals.

Krebs on Security has reported that several convicted Russian cybercriminals are part of the swap. They include Roman Seleznev, “who was sentenced in 2017 to 27 years in prison for racketeering convictions tied to a lengthy career in stealing and selling payment card data.” Seleznev is the son of a member of Russia’s parliament, an ally of Vladimir Putin. He was captured in 2014 by the Secret Service in the Maldives. His cybercrimes netted him $50 million, which he was ordered to repay.

Another Russian convicted cybercriminal, Vladislav Klyushin, is included in the swap. He was sentenced in September of 2023 to nine years in prison for a “$93 million hack-to-trade conspiracy.” Information stolen by Klyushin was used to make illegal stock trades. He reportedly owns M-13, a technology company located in Russia with ties to the Russian government.

Although these convicted cybercriminals may not be considered violent, they have still wreaked havoc on U.S.-based companies with their cybercrimes and have stolen millions of dollars from the U.S. economy. Cybercrime continues to be a significant risk for organizations and municipalities (like Columbus, Ohio), and having these cybercriminals back in action is disappointing.

The city of Columbus, Ohio, announced on July 29, 2024, that it was forced to take its systems offline due to a ransomware attack. According to its notice, the attack was perpetrated by “an established, sophisticated threat actor operating overseas,” and the city is working with law enforcement to investigate the incident.

According to Security Week, the Rhysida ransomware group has claimed responsibility. In November 2023, CISA, the FBI, and MS-ISAC released an advisory on Rhysida. Although the advisory does not attribute the cybercriminals behind Rhysida to a particular country, most Ransomware-as-a-Service gangs operate out of Russia, North Korea, or China.

The incident occurred when a city employee became a victim of a phishing email and downloaded a file from a malicious website. The city is determining what data was included in the incident and will provide notice to those affected.

This week, two class actions were filed in the U.S. District Court for the Eastern District of Pennsylvania against David’s Bridal based on two data breaches. The actions allege that David’s Bridal failed to protect the personal information of employees and customers.

In January 2024, David’s Bridal suffered a ransomware attack instigated by the ransomware group LockBit. The complaint states that “[i]nstead of remedying its deficient cybersecurity practices following LockBit’s theft of [personal information, David’s Bridal] did nothing” and then suffered a second attack by a different ransomware group, WereWolves, in February 2024. The affected information included names, addresses, identification documents, dates of birth, Social Security numbers, and financial account information.

The plaintiffs state that by providing their personal information to David’s Bridal, the company “promised to safeguard the sensitive, confidential data and only to use it for authorized and legitimate purposes.” Additionally, the complaint alleges that David’s Bridal failed to adequately notify the affected individuals, which did not give them the “opportunity to mitigate harm” related to the breaches. The class actions were filed on behalf of current and former employees and customers. The causes of action are negligence, breach of implied contract, breach of fiduciary duty, and unjust enrichment. One of the plaintiffs also brought a cause of action under the California Consumer Privacy Act, which allows for a private right of action for a data breach. The plaintiffs are seeking compensatory, actual, and punitive damages, restitution, pre- and post-judgment interest, as well as attorneys’ fees and costs. Additionally, the plaintiffs ask that David’s Bridal be required to implement technical and administrative security controls.

Thank you to Jon Schaefer for this post. Jon focuses his practice on environmental compliance counseling and occupational health and safety.

On July 30, 2024, the U.S. EPA Office of Inspector General issued a fraud alert to bring attention to an increasing number of companies reporting that they have received fraudulent EPA Notice of Violation letters demanding payment. Businesses have received these fraudulent letters through email and U.S. Postal Service mail. The letters allege that the target business violated an environmental regulation, such as the Clean Air Act or Clean Water Act. The contact information provided – invoice@epa.services – is not associated with the EPA. Official U.S. government organizations only use the “.gov” domain name.
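One quick screen against letters like these is to check whether the sender’s email domain is actually a “.gov” domain, as only official U.S. government organizations use that top-level domain. A minimal sketch of that check, using the fraudulent address cited in the alert:

```python
# Minimal sketch: flag sender addresses that do not use an official ".gov" domain.
# This is a first-pass filter only -- a ".gov" address can still be spoofed, so
# it should supplement, not replace, verifying the notice with the agency.

def is_official_gov_address(email: str) -> bool:
    """Return True only if the address's domain is a .gov domain (or subdomain)."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain.endswith(".gov")

print(is_official_gov_address("OECA_Communications@epa.gov"))  # True
print(is_official_gov_address("invoice@epa.services"))         # False
```

The same test applies to links in the letter: a host like `epa.services` fails because it does not end in `.gov`, while a subdomain such as `oig.epa.gov` passes.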

If you have received a Notice of Violation and are concerned about its validity or have other questions or concerns, consider consulting experienced legal counsel. You can also contact the U.S. EPA’s enforcement office at OECA_Communications@epa.gov with any concerns regarding potentially fraudulent letters.

The U.S. EPA’s Office of Inspector General’s Hotline ((888) 546-8740 or OIG.Hotline@epa.gov) is always available if you believe you have been the victim of fraud or have knowledge of potential waste, fraud, or abuse involving EPA operations and programs.

This post is also being shared on our Environmental Law + blog. If you’re interested in getting updates on timely and thoughtful developments in the environmental, health and safety (EH+S) and energy landscapes, we invite you to subscribe to the blog.

Anecdotally, we know that cybercriminals hailing from Russia pose a significant risk to companies and governmental entities in the U.S. and around the world. With two convicted Russian cybercriminals being released this week in the prisoner swap, I was curious just how significant a role Russian cybercriminals play in cybercrime chaos.

According to Bleeping Computer, “Russian-speaking threat actors accounted for at least 69% of all crypto proceeds linked to ransomware throughout the previous year, exceeding $500,000,000.” This staggering number was provided by TRM Labs, an analytics firm “specializing in crypto-assisted money laundering and financial crime.”

TRM Labs reports that North Korea leads Russia in stealing cryptocurrency through exploits and breaches, having stolen over a billion dollars in 2023, while actors in Asia, including those tied to the Chinese Communist Party, lead in scams and investment fraud.

Nonetheless, Russian-based cybercriminals “consistently drive most types of crypto-enabled cybercrime, from ransomware to illicit crypto exchanges and darknet markets.”

If you are a customer of CrowdStrike, you are working on recovering from the outage that occurred on July 19, 2024. As if that isn’t enough disruption, CrowdStrike is warning customers that threat actors are taking advantage of the situation by using fake websites and domains, sending phishing emails impersonating CrowdStrike, and offering malicious products and services to “assist” customers with recovery from the outage.

CrowdStrike has been monitoring malicious activity and is reporting that threat actors are conducting the following activity:

  • Sending phishing emails posing as CrowdStrike support to customers.
  • Impersonating CrowdStrike staff in phone calls.
  • Posing as independent researchers, claiming to have evidence the technical issue is linked to a cyber-attack and offering remediation insights.
  • Selling scripts purporting to automate recovery from the content update issue.

CrowdStrike Intelligence “recommends that organizations ensure they are communicating with CrowdStrike representatives through official channels and they adhere to technical guidance the CrowdStrike support teams have provided.” CrowdStrike has listed multiple fake domains that may contain malicious content on its website. The domains can also be used to “support future social-engineering operations.”
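A simple way to operationalize that guidance is to screen links in inbound “support” messages against an allow-list of official vendor domains. The sketch below is illustrative only: the allow-list entry and the lookalike domain are assumptions, and any real filter should be built from the vendor’s own published list of official domains.

```python
# Minimal sketch: screen links against an allow-list of official vendor domains.
# The allow-list entry and example URLs are assumptions for illustration.

from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"crowdstrike.com"}  # assumed allow-list entry

def is_official_link(url: str) -> bool:
    """Return True if the URL's host is an official domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official_link("https://www.crowdstrike.com/blog"))      # True
print(is_official_link("https://crowdstrike-helpdesk.example"))  # False
```

Note that the check requires an exact domain match or a true subdomain, so a lookalike host that merely contains the vendor’s name (a common social-engineering tactic) is rejected.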