Tennessee Governor Bill Lee signed legislation on May 22, 2024, that will shield private entities from class action lawsuits stemming from a cybersecurity event unless the event was caused by willful, wanton, or gross negligence.

The bill, as introduced, “declares a private entity to be not civilly liable in a class action resulting from a cybersecurity event unless the cybersecurity event was caused by willful, wanton, or gross negligence on the part of the private entity. The bill amends TCA Title 29 and Title 47.”

This law will be a blow to class action plaintiffs’ law firms that have routinely filed suit against companies victimized by criminal cybersecurity attacks, alleging that the companies were negligent in protecting consumer data. It sets a high bar for plaintiffs to clear in order to pursue class action litigation in Tennessee.

It will be interesting to see whether other states follow suit. We will be following this development closely.

This week, Marriott Hotel Services was hit with a class action lawsuit for alleged violations of the Illinois Biometric Information Privacy Act (BIPA). The lawsuit alleges that the hotel violated BIPA by requiring workers to scan their fingerprints to clock in at work without proper notice or consent.

BIPA prohibits businesses from:

  • Collecting biometric data without written consent;
  • Collecting biometric data without informing the person in writing of the purpose and length of time the data will be used; and
  • Selling or profiting from consumers’ biometric information.

The complaint states that the fingerprint scanner is connected to the timekeeping and payroll system and that the scanned data is then stored on a third-party platform (Kronos, Inc.). The plaintiff alleges that Marriott did not inform employees about the system or how long the data would be retained. The proposed class includes all employees who worked for Marriott in Illinois since 2019.

BIPA permits plaintiffs to seek statutory damages between $1,000 and $5,000 per violation.

Illinois is not the only state with this type of biometric privacy law: Texas and Washington also have laws that address the collection and use of biometric data. Other states have narrower biometric regulations, such as industry-specific laws and certain provisions under state consumer privacy statutes (e.g., California, Colorado, Connecticut, Utah, and Virginia). Additionally, several other states, including Massachusetts and Missouri, have introduced biometric privacy bills. Companies should be on the lookout for new laws and regulations in this space and confirm that their biometric data collection and use practices comply with applicable laws.

Intercontinental Exchange, Inc. (ICE), the owner of the New York Stock Exchange, has agreed to settle with the Securities and Exchange Commission (SEC) for $10 million over allegations that it failed to timely notify the SEC of the cybersecurity incident it experienced in 2021 involving its virtual private network.

The SEC alleged that ICE should have notified it immediately of the incident, but ICE contends that “[t]his settlement involves an unsuccessful attempt to access our network more than three years ago…The failed incursion had zero impact on market operations. At issue was the time frame for reporting this type of event under Regulation SCI.”

In short, the SEC maintains that it should have been notified immediately, while ICE contends that the incident was not material and did not rise to the level of significance that, in ICE’s view, obligated it to notify the SEC “immediately.”

A settlement does not indicate fault. The lesson here is that the SEC takes a conservative approach to reporting obligations and will use its muscle if reporting is not provided in what it deems a timely manner.

On May 9, 2024, Governor Wes Moore signed the Maryland Online Data Privacy Act (MODPA) into law. MODPA applies to any person who conducts business in Maryland or provides products or services targeted to Maryland residents and, during the preceding calendar year:

  1. Controlled or processed the personal data of at least 35,000 consumers (excluding personal data solely for the purpose of completing a payment transaction); or
  2. Controlled or processed the personal data of at least 10,000 consumers and derived more than 20 percent of its gross revenue from the sale of personal data.

MODPA does not apply to financial institutions subject to the Gramm-Leach-Bliley Act or to registered national securities associations. It also contains exemptions for entities governed by HIPAA.

Under MODPA, consumers have the right to access their data, correct inaccuracies, request deletion, obtain a list of those who have received their personal data, and opt out of processing for targeted advertising, the sale of personal data, and profiling in furtherance of solely automated decisions. The controller must provide this information to the consumer free of charge once during any 12-month period unless the requests are excessive, in which case the controller may charge a reasonable fee or decline to honor the request.

MODPA prohibits a controller – defined as “a person who determines the purpose and means of processing personal data” – from selling “sensitive data.” MODPA defines “sensitive data” to include genetic or biometric data, children’s personal data, and precise geolocation data.  “Sensitive data” also means personal data that includes data revealing a consumer’s:

  • Racial or ethnic origin
  • Religious beliefs
  • Consumer health data
  • Sex life
  • Sexual orientation
  • Status as transgender or nonbinary
  • National origin, or
  • Citizenship or immigration status

A MODPA controller also may not process “personal data” in ways that violate antidiscrimination laws. Under MODPA, “personal data” is any information that is linked or can reasonably be linked to an identified or identifiable consumer, but it does not include de-identified data or “publicly available information.” However, MODPA contains an exception if the processing is for (1) self-testing to prevent or mitigate unlawful discrimination; (2) diversifying an applicant, participant, or customer pool; or (3) a private club or group not open to the public.

MODPA has a data minimization requirement as well. Controllers must limit the collection of personal data to that which is reasonably necessary and proportionate to provide or maintain the specific product or service the consumer requested.

A violation of MODPA constitutes an unfair, abusive, or deceptive (UDAP) trade practice, which the Maryland Attorney General can prosecute. Violations carry a civil penalty of up to $10,000 each and up to $25,000 for each repeated violation. Additionally, a person committing a UDAP violation is guilty of a misdemeanor, punishable by a fine of up to $1,000, imprisonment of up to one year, or both. MODPA does not allow a consumer to pursue a UDAP claim for a MODPA violation, although it also does not prevent a consumer from pursuing other available legal remedies. MODPA will take effect October 1, 2025, with enforcement beginning April 1, 2026.

Anthropic has achieved a major milestone by identifying how millions of concepts are represented within its large language model Claude Sonnet, using a process somewhat akin to a CAT scan. This is the first time researchers have gained such a detailed look inside a modern, production-grade AI system.

Previous attempts to understand model representations were limited to finding patterns of neuron activations corresponding to basic concepts like text formats or programming syntax. However, Anthropic has now uncovered high-level abstract features in Claude spanning a vast range of concepts – from cities and people to scientific fields, programming elements, and even abstract ideas like gender bias, secrets, and inner ethical conflicts.

Remarkably, the researchers can even manipulate these features to change how the model behaves and to force certain types of hallucinations. Amplifying the “Golden Gate Bridge” feature caused Claude to believe it was the Golden Gate Bridge when asked about its physical form (Claude normally responds with a variation on, “I have no form; I am an AI model.”). Intensifying the “scam email” feature overcame Claude’s training to avoid harmful outputs, making it suggest formats for scam emails.
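For readers curious about the mechanics, the sketch below illustrates the general idea behind this kind of intervention, often called activation steering: adding a scaled “feature” direction to a model’s internal activations at inference time. This is a simplified, hypothetical illustration, not Anthropic’s code; the model, layer index, feature vector, and scale are placeholders, and in Anthropic’s work the feature directions come from a sparse autoencoder trained on the model’s activations.

```python
import torch

def make_steering_hook(feature_direction: torch.Tensor, scale: float):
    """Return a PyTorch forward hook that adds `scale * feature_direction`
    to a layer's output activations (the residual stream)."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + scale * feature_direction.to(
            device=hidden.device, dtype=hidden.dtype)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered
    return hook

# Hypothetical usage with a HuggingFace-style causal LM (names are placeholders):
# layer = model.transformer.h[20]                      # layer to steer
# handle = layer.register_forward_hook(
#     make_steering_hook(golden_gate_feature, scale=10.0))
# output = model.generate(**tokenizer("Describe yourself.", return_tensors="pt"))
# handle.remove()                                      # restore normal behavior
```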

Other features correspond to malicious behavior or content with the potential for misuse, including code backdoors and bioweapons, as well as problematic behaviors like bias, manipulation, and deception. Normally, these features activate when the user asks Claude to “think” about one of these concepts, and Claude’s ethical guardrails keep it from drawing on them when generating content. This validates that the features do not merely map to parsing user input but directly shape the model’s responses. It also points to exactly the kind of malicious capability that hackers and other unauthorized users will undoubtedly exploit in pirated models.

While much work remains to fully map these large models, Anthropic’s breakthrough seems like an extremely promising step forward in the burgeoning field of AI auditing. And, given that researchers were able to directly tweak the features to influence Claude’s output, this research may also open the door to the sort of under-the-hood tinkering that has eluded generative AI developers for years. Of course, it may also open the door to direct, feature-level regulation as well as creative plaintiffs’ arguments as the standard of care for AI developers takes shape.

Read the full blog post from Anthropic here.

To add to TikTok’s legal woes in the U.S., Nebraska Attorney General Mike Hilgers (AG) filed suit against TikTok on May 22, 2024, alleging that TikTok violated Nebraska’s consumer protection laws and engaged in deceptive trade practices by “designing and operating a platform that is addictive and harmful to teens and children.” In addition, the AG alleges that “TikTok’s features fail to protect kids and regularly expose underage users to age-inappropriate and otherwise harmful content.”

The AG alleges that the use of TikTok by children in Nebraska “has fueled a youth mental health crisis in Nebraska.” The AG further alleges that TikTok is “addictive, that compulsive use is rampant, and that its purported safety features, such as age verification and parental controls, are grossly ineffective.”

The AG filed suit after his office conducted an investigation into TikTok’s practices, which included creating TikTok accounts of fictitious minors. According to the AG, TikTok’s algorithm directed the fictitious minors to inappropriate content “within minutes” of opening an account.

TikTok denies the allegations. We will continue to follow and update our readers on developments in the TikTok debate.

On May 10, 2024, CISA, along with the FBI, HHS, and MS-ISAC, issued a joint Cybersecurity Advisory relating to Black Basta ransomware affiliates “that have targeted over 500 private industry and critical infrastructure entities, including healthcare organizations, in North America, Europe, and Australia.”

The Black Basta Advisory provides information on how the threat actors gain initial access to victims’ systems, primarily through spearphishing tactics. In addition, “starting in February 2024, Black Basta affiliates began exploiting ConnectWise vulnerability (CVE-2024-1709). In some instances, affiliates have been observed abusing valid credentials.”

The affiliates use various tools for lateral movement, including Remote Desktop Protocol, Splashtop, ScreenConnect, and Cobalt Strike. In addition, they use credential-scraping tools like Mimikatz to escalate privileges and have exploited prior zero-day vulnerabilities for local and Windows Active Directory privilege escalation.

The Advisory lists indicators of compromise, file indicators, and suspected domains used by Black Basta, which are helpful for IT professionals to compare against company systems. Mitigations listed by the Advisory include current patching, MFA, training, securing remote access software, backups, and other mitigation techniques. This Advisory is an important read for IT professionals in all industries.
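As a practical illustration (not part of the Advisory itself), the sketch below shows one simple way an IT team might compare files on a host against a published list of SHA-256 hash indicators of compromise. The IOC file name and scan path are placeholders, and most enterprise environments would rely on EDR or SIEM tooling rather than an ad hoc script like this one.

```python
import hashlib
from pathlib import Path

def load_iocs(ioc_file: str) -> set[str]:
    """Load one lowercase SHA-256 hash per line, ignoring blanks and comments."""
    lines = Path(ioc_file).read_text().splitlines()
    return {ln.strip().lower() for ln in lines if ln.strip() and not ln.startswith("#")}

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks to avoid loading it all into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str, iocs: set[str]) -> list[Path]:
    """Return files under `root` whose SHA-256 matches a known indicator."""
    hits = []
    for p in Path(root).rglob("*"):
        if p.is_file():
            try:
                if sha256_of(p) in iocs:
                    hits.append(p)
            except OSError:
                continue  # unreadable file; skip and keep scanning
    return hits

# Example (placeholder paths): scan("/srv/app", load_iocs("black_basta_sha256.txt"))
```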

In a significant setback for Elon Musk’s X Corp (formerly Twitter), a U.S. District Judge has dismissed the company’s lawsuit against an Israeli data-scraping firm, Bright Data Ltd. We previously reported on X’s recent spree of lawsuits against data-scraping companies.

The court held that X Corp failed to demonstrate that Bright Data violated its user agreement by enabling the scraping and sale of content from the platform. The judge reasoned that using scraping tools is not inherently fraudulent and that social media companies should not have free rein to dictate how public data is used, as this could lead to “information monopolies that would disserve the public interest.”

This case is the latest win for free information advocates. In January, another San Francisco judge ruled that Bright Data did not violate Meta Platforms’ (Facebook and Instagram) terms of service by scraping data from those platforms. In March, a judge also dismissed X Corp’s lawsuit against the nonprofit Center for Countering Digital Hate, which had published articles based on scraped data critical of the platform. This latest ruling underscores the growing recognition among the courts that social media platforms cannot unilaterally control access to public data, even if it may be inconvenient or uncomfortable for the platform owners.

Last week, the Vermont legislature passed H. 121, the Vermont Data Privacy Act. This law will make Vermont the 18th state to grant consumers privacy rights similar to those under the California Consumer Privacy Act (CCPA). It is scheduled to go into effect on July 1, 2025.

While the Vermont Data Privacy Act includes provisions similar to those granted under the CCPA (e.g., consumer rights to delete, access, correct, and opt-out), the Act also includes some provisions that are more protective than the CCPA:

  • The Act includes data minimization requirements that prohibit businesses from collecting personal information for ANY purpose outside of providing the product or service.
  • The Act grants consumers a private right of action against businesses not only when the entity causes a breach of personal information (as is the case under the CCPA) but also if the business misuses data about their race, religion, sexual orientation, health, or other categories of sensitive information. 

Note, however, that the law’s private right of action must be reauthorized after two years and applies only to large data brokers. The Vermont legislature advanced this law amid the federal government’s push to pass a comprehensive privacy law, an effort that has yet to come to fruition over the last decade. We will continue to monitor new consumer privacy rights laws and how they may affect your business and its data collection and use practices.

Generative artificial intelligence (AI) has opened a new front in the battle to keep confidential information secure. The National Institute of Standards and Technology (NIST) recently released a draft report highlighting the risk generative AI poses to data security. The report, entitled “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” details generative AI’s potential data security pitfalls and suggests actions for generative AI management.

NIST identifies generative AI’s data security risk as “[l]eakage and unauthorized disclosure or de-anonymization of biometric, health, location, personally identifiable [information], or other sensitive data.” Training generative AI requires an enormous amount of data culled from the internet and other publicly available sources. For example, ChatGPT4 was trained with 570 gigabytes from books, web texts, articles, and other writing on the internet, which amounts to about 300 billion words residing in a generative AI database. Much of generative AI’s training data is personal, confidential, or sensitive information. 

Generative AI systems have been known to disclose, upon request, information contained in their training data, including confidential information. During adversarial attacks, large language models have revealed private or sensitive information from their training data, including phone numbers, code, and conversations. The New York Times has sued ChatGPT’s creator, OpenAI, alleging in part that ChatGPT will furnish articles that sit behind the Times’ paywall. This disclosure risk poses obvious data security issues.

Less obvious are the data security issues posed by generative AI’s capacity for predictive inference. With the vast quantity of data available to it, generative AI can correctly infer personal or sensitive information, including a person’s race, location, gender, or political leanings, even if that information is not within the AI’s training data. NIST warns that these AI models, or individuals using them, might disclose this inferred information, use it to undermine privacy, or apply it in a discriminatory manner. We have already seen a company settle an EEOC lawsuit alleging that it used AI to make discriminatory employment decisions. Generative AI threatens to increase this legal exposure.

From an AI governance perspective, NIST suggests several broad principles to mitigate the data privacy risk. Among other things, NIST recommends:

  • Aligning generative AI use with applicable laws, including those related to data privacy and the use, publication, or distribution of intellectual property;
  • Categorizing different types of generative AI content with associated data privacy risks;
  • Developing an incident response plan specifically tailored to address breaches, and regularly testing and updating that plan with feedback from external and third-party stakeholders;
  • Establishing incident response plans for third-party generative AI technologies deemed high-risk. As with all incident response plans, these should include:
    • Communicating third-party generative AI incident response plans to all relevant AI actors;
    • Defining ownership of the incident response functions;
    • Rehearsing (or “table topping”) the incident response plans regularly;
    • Regularly reviewing incident response plans for alignment with relevant breach reporting, data protection, data privacy, or other laws;
  • Updating and integrating due diligence processes for generative AI acquisition and procurement vendor assessments to include data privacy, security, and other risks; and
  • Conducting periodic audits and monitoring AI-generated content for privacy risks.

These actions will involve more than simply adding a reference to artificial intelligence to existing cybersecurity plans. They will require carefully analyzing a company’s legal obligations, its contractual obligations, and its culture to design an AI governance plan that keeps confidential information out of the public domain and away from bad actors.