The California Privacy Protection Agency (CPPA), the agency responsible for implementing and enforcing the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) (collectively, the CCPA), protecting consumer privacy, and ensuring compliance with data privacy regulations, has announced an investigative sweep into companies’ collection of sensitive location data. The CPPA has already sent inquiries to “advertising networks, mobile app providers, and data brokers that appear to be in violation” of the CCPA.

California Attorney General Rob Bonta said, “Every day, we give off a steady stream of data that broadcasts not only who we are, but where we go. This location data is deeply personal, can let anyone know if you visit a health clinic or hospital, and can identify your everyday habits and movements.” The CPPA is concerned that this sensitive location data will be used to target vulnerable populations, and it urges businesses to take their responsibility as stewards of this sensitive data seriously and to affirmatively protect location data.

The CPPA’s investigation will focus on how companies are informing consumers about their right to opt out of the sale and sharing of their data (as required under the CCPA), including geolocation data and other types of personal information collected by businesses. Additionally, the CPPA will investigate how companies actually apply this opt-out requirement when a consumer asserts that right.

If your company hasn’t assessed its opt-out processes and procedures lately, now is the time to confirm that consumers are clearly notified of this right and that they can readily opt out of such tracking and collection and the subsequent sale and/or sharing of that data with third parties.
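
One concrete piece of that review (our illustration; the CPPA’s inquiries do not prescribe any particular implementation) is confirming that opt-out preference signals such as the Global Privacy Control (GPC), which the CCPA regulations require businesses to honor, actually flow through to downstream systems. A minimal server-side sketch, assuming a Flask-style request object and hypothetical profile fields:

```python
# Illustrative sketch only: honoring a sale/sharing opt-out server-side.
# The request object is assumed to be Flask-style (headers behave like a
# dict), and the profile field names are hypothetical.

def apply_opt_out(request, user_profile: dict) -> dict:
    """Suppress sale/sharing when the browser sends the Sec-GPC header or the
    user has opted out via the site's "Do Not Sell or Share" link."""
    gpc_signal = request.headers.get("Sec-GPC") == "1"
    explicit_opt_out = user_profile.get("opted_out_of_sale_sharing", False)

    if gpc_signal or explicit_opt_out:
        # Hypothetical flag read by downstream ad-tech / data-sharing code,
        # covering geolocation and other personal information.
        user_profile["suppress_sale_and_sharing"] = True

    return user_profile
```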

With the proliferation of artificial intelligence (AI) usage over the last two years, companies are developing AI tools at an astonishing rate. When pitching those tools, companies make claims about what their products can do, and some overpromise and exaggerate their capabilities. This practice, known as AI washing, “is a marketing tactic companies employ to exaggerate the amount of AI technology they use in their products. The goal of AI washing is to make a company’s offerings seem more advanced than they are and capitalize on the growing interest in AI technology.”

Isn’t this mere puffery? No, according to the Federal Trade Commission (FTC), Securities and Exchange Commission (SEC), and investors.

The FTC released guidance in 2023 outlining questions companies can ask themselves to determine whether they are engaging in AI washing. It urges companies to consider whether they are overpromising what an algorithm or AI tool can deliver. According to the FTC, “You don’t need a machine to predict what the FTC might do when those claims are unsupported.”

In March 2024, the SEC charged two investment advisors with AI washing by making “false and misleading statements about their use of artificial intelligence.” The two firms agreed to pay a combined $400,000 in civil penalties to settle the charges. The SEC found the companies had “marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not.”

Investors are joining the hunt as well. In February and March 2025, investors filed securities suits against two companies alleging AI washing. In the first case, the company allegedly made statements to investors about its AI capabilities and reported “impressive financial results, outlooks and guidance.” It subsequently became the subject of short-seller reports alleging that it used “manipulative practices” to inflate its numbers and profitability. The litigation alleges that, as a result, the company’s shares declined.

In the second case, the named plaintiff in the class action alleged that the company overstated “its position and ability to capitalize on AI in the smartphone upgrade cycle,” which caused investors to buy shares at an artificially inflated price.

Lessons learned from these examples? Look at the FTC’s guidance and assess whether your sales and marketing plan takes AI washing into consideration.

British Prime Minister Keir Starmer wants to turn the U.K. into an artificial intelligence (AI) superpower to help grow the British economy by using policies that he describes as “pro-innovation.” One of these policies proposed relaxing copyright protections. Under the proposal, initially unveiled in December 2024, AI companies could freely use copyrighted material to train their models unless the owner of the copyrighted material opted out.

Although some Parliament members called the proposal an effective compromise between copyright holders and AI companies, over a thousand musicians released a “silent album” to protest the proposed changes to U.K. copyright laws. The album, currently streaming on Spotify, includes 12 tracks of only ambient sound. According to the musicians, the silent tracks evoke empty recording studios and represent the impact they “expect the government’s proposals would have on musicians’ livelihoods.” To further convey their unhappiness with the proposed changes, the 12 track titles, when combined, read, “The British government must not legalize music theft to benefit AI companies.”

High-profile artists like Elton John, Paul McCartney, Dua Lipa, and Ed Sheeran have also signed a letter urging the British government to avoid implementing these proposed changes. According to the artists, implementing the new rule would effectively give artists’ rights away to big tech companies. 

The British government launched a consultation that sought comments on the potential changes to the copyright laws. The U.K. Intellectual Property Office received over 13,000 responses before the consultation closed at the end of February 2025, which the government will now review as it seeks to implement a final policy.

Artificial Intelligence (AI) is rapidly transforming the legal landscape, offering unprecedented opportunities for efficiency and innovation. However, this powerful technology also introduces new challenges to established information governance (IG) processes. Ignoring these challenges can lead to significant risks, including data breaches, compliance violations, and reputational damage.

“AI Considerations for Information Governance Processes,” a recent paper published by Iron Mountain, delves into these critical considerations, providing a framework for law firms and legal departments to adapt their IG strategies for the age of AI.

Key Takeaways:

  • AI Amplifies Existing IG Risks: AI tools, especially machine learning algorithms, often require access to and process vast amounts of sensitive data to function effectively. This makes robust data security, privacy measures, and strong information governance (IG) frameworks absolutely paramount. Any existing vulnerabilities or weaknesses in your current IG framework can be significantly amplified by the introduction and use of AI, potentially leading to data breaches, privacy violations, and regulatory non-compliance.
  • Data Lifecycle Management is Crucial: From the initial data ingestion and collection stage, through data processing, storage, and analysis, all the way to data archival or disposal, a comprehensive understanding and careful management of the AI’s entire data lifecycle is essential for maintaining data integrity and ensuring compliance. This includes knowing exactly how data is used for training AI models, for analysis and generating insights, and for any other purposes within the AI system.
  • Vendor Due Diligence is Non-Negotiable: If you’re considering using third-party AI vendors or cloud-based AI services, conducting rigorous due diligence on these vendors is non-negotiable. This due diligence should focus heavily on evaluating their data security practices, their compliance with relevant industry standards and certifications, and their contractual obligations and guarantees regarding data protection and privacy.
  • Transparency and Explainability are Key: “Black box” AI systems that make decisions without any transparency or explainability can pose significant risks. It’s crucial to understand how AI algorithms make decisions, especially those that impact individuals, to ensure fairness, accuracy, non-discrimination, and compliance with ethical guidelines and legal requirements. This often requires techniques like model interpretability and explainable AI (see the brief sketch after this list).
  • Proactive Policy Development is Essential: Organizations need to proactively develop clear policies, procedures, and guidelines for AI usage within their specific context. These should address critical issues such as data access and authorization controls, data retention and storage policies, data disposal and deletion protocols, as well as model training, validation, and monitoring practices.
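
On the transparency and explainability point, the Iron Mountain paper does not prescribe specific tooling, but a minimal sketch of one widely used interpretability technique, permutation importance, might look like the following (scikit-learn, with synthetic data standing in for real records):

```python
# Illustrative sketch of a basic model-interpretability check using
# permutation importance (scikit-learn). Synthetic data stands in for the
# sensitive records a real AI tool would process.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance measures how much each input feature drives the
# model's decisions -- one simple way to document an otherwise "black box."
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```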

The Time to Act is Now:

AI is not a future concern; it’s a present reality. Law firms and legal departments must proactively adapt their information governance processes to mitigate the risks associated with AI and unlock its full potential.

We have educated our readers about phishing, smishing, QRishing, and vishing scams, and now we’re warning you about what we have dubbed “snailing.” Yes, believe it or not, threat actors have gone retro and are using snail mail to try to extort victims. TechRadar is reporting that, according to GuidePoint Security, an organization received several letters in the mail, allegedly from the BianLian cybercriminal gang, stating:

“I regret to inform you that we have gained access to [REDACTED] systems and over the past several weeks have exported thousands of data files, including customer order and contact information, employee information with IDs, SSNs, payroll reports, and other sensitive HR documents, company financial documents, legal documents, investor and shareholder information, invoices, and tax documents.”

The letter alleges that the recipient’s network “is insecure and we were able to gain access and intercept your network traffic, leverage your personal email address, passwords, online accounts and other information to social engineer our way into [REDACTED] systems via your home network with the help of another employee.” The threat actors then demand $250,000-$350,000 in Bitcoin within ten days. They even offer a QR code in the letter that directs the recipient to the Bitcoin wallet.

It’s comical that the letters have a return address of an actual Boston office building.

GuidePoint Security says the letters and attacks mentioned in them are fake and are inconsistent with BianLian’s ransom notes. Apparently, these days, even threat actors get impersonated. Now you know—don’t get scammed by a snailing incident.

CrowdStrike recently published its 2025 Global Threat Report, which, among other conclusions, emphasized that social engineering tactics aimed at stealing credentials grew an astounding 442% in the second half of 2024. Correspondingly, the use of stolen credentials to attack systems increased.

Other observations in the report include:

  • Adversaries are operating with unprecedented speed and adaptability;
  • China expanded its cyber espionage enterprise;
  • Stolen credential use is increasing;
  • Social engineering tactics aim to steal credentials;
  • Generative AI drives new adversary risks;
  • Cloud-conscious actors continue to innovate; and
  • Adversaries are exploiting vulnerabilities to gain access.

Among the details behind these conclusions: breakout time, the time it takes an adversary to begin moving laterally through a network after gaining access, “reached an all-time low in the past year. The average fell to 48 minutes, and the fastest breakout time we observed dropped to a mere 51 seconds.” In other words, threat actors are breaking in and moving swiftly within systems, making them difficult to detect, block, and contain.

Vishing “saw explosive growth—up 442% between the first and second half of 2024.”

CrowdStrike’s observations are instructive for planning and hardening defenses against these risks. Crucial pieces of that defense include:

  • Continued education and training of employees, including how social engineering schemes work;
  • The importance of protecting credentials; and
  • How stolen credentials are used to enter a system.

Although we have been educating employees on these themes for years, the statistics and real-life experience show that the message is not getting through. Addressing these specific risks in your training program may help stem the tide of successful social engineering campaigns.

Last week, two separate class actions were filed in the federal district court for the Southern District of Texas against DISA Global Solutions (DISA), a third-party employment screening services provider, related to an April 2024 cyber-attack.

DISA provides drug and alcohol testing and background checks for employers. DISA reportedly faced a cyber-attack from February to April 2024, which resulted in unauthorized third-party access to over 3.3 million individuals’ personal information. According to DISA, the information may have contained individuals’ names, Social Security numbers, driver’s license numbers, and financial account information.

DISA sent notification letters to individuals around February 24, 2025. The lead plaintiffs in both actions claim that they were required to provide their personal information to DISA as part of a job application or to obtain certain employment-related benefits.

Data breach class actions can help inform entities’ risk management strategies. Below, we highlight some key considerations from the class action complaints against DISA.

Reasonable Safeguards

One plaintiff alleges that DISA had a duty to exercise reasonable care in securing data, but that DISA breached that duty by “neglect[ing] to adequately invest in security measures.” The complaint lists numerous commonly accepted security standards, including:

  • Maintaining a secure firewall configuration;
  • Monitoring for suspicious credentials used to access servers; and
  • Monitoring for suspicious or irregular server requests.

The other plaintiff similarly alleges that DISA failed to implement adequate security measures. That complaint also enumerates common measures, including:

  • Scanning all incoming and outgoing emails;
  • Configuring access controls; and
  • Applying the principle of least privilege (illustrated in the sketch below).
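
Neither complaint spells out implementation details, but taking the least-privilege item as an example, a minimal, hypothetical sketch of deny-by-default, role-based access control might look like this (role and permission names are invented for illustration):

```python
# Illustrative sketch of least-privilege access control: each role is granted
# only the permissions it needs, and everything else is denied by default.
# Role names and permissions are hypothetical.
ROLE_PERMISSIONS = {
    "screening_analyst": {"read:background_report"},
    "hr_admin": {"read:background_report", "read:ssn"},
    "billing": {"read:invoice"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Deny by default; allow only permissions explicitly granted to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Example: a billing user cannot read Social Security numbers.
assert not is_authorized("billing", "read:ssn")
assert is_authorized("hr_admin", "read:ssn")
```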

Such claims of inadequate security and privacy measures are common in data breach class action litigation. Organizations should evaluate their security standards and ensure they are aligned with current best practices.

Notification Timeframe

DISA’s notification letter to affected individuals states that the unauthorized access occurred between February and April 2024. DISA sent notification letters in February 2025. One plaintiff alleges that the “unreasonable delay in notification” heightened the foreseeability that affected individuals’ personal information has been or will be used maliciously by cybercriminals.

It can take months to investigate a cyber incident and determine the nature and extent of information involved. Still, organizations that experience such incidents should be mindful of the ways in which plaintiffs can use the notification timeframe in their litigation.

Heightened Sensitivity of Social Security Numbers

One plaintiff’s complaint notes that Social Security numbers are “invaluable commodities and a frequent target of hackers” and alleges that, given the type of information DISA maintains and the frequency of other “high profile” data breaches, DISA should have foreseen and been aware of the risk of a cyber-attack.

The other plaintiff notes that various courts have referred to Social Security numbers as the “gold standard” for identity theft and alleges that their compromise is “significantly more valuable than the loss of” other types of personal information.

When it comes to information, not all data elements present the same level of risk if subject to unauthorized access. Organizations should track the types of information they maintain and understand that certain information may present higher risk if exposed, potentially requiring heightened security standards to protect it. The suits against DISA highlight that organizations should implement robust measures to not only minimize risk of cyber-attacks but also to minimize litigation risk in the often-inevitable class actions that follow.

This post was authored by Class Action Defense team chair Wystan Ackerman and is also being shared on our Class Actions Insider blog.

Some data breach class actions settle quickly, with one of two settlement structures:

(1) a “claims made” structure, in which the total amount paid to class members who submit valid claims is not capped, and attorneys’ fees are awarded by the court and paid separately by the defendant; or

(2) a “common fund” structure, in which the defendant pays a lump sum that is used to pay class member claims, administration costs and attorneys’ fees awarded by the court.

A recent Ninth Circuit decision affirmed the district court’s approval of a “claims made” settlement but reversed and remanded the attorneys’ fee award. The decision highlights how approval of the settlement terms should be evaluated independently of the attorneys’ fees, although some courts seem to merge the two.

In re California Pizza Kitchen Data Breach Litigation, – F.4th –, 2025 WL 583419 (9th Cir. Feb. 24, 2025) involved a ransomware attack that compromised data, including Social Security numbers, of the defendant’s current and former employees. After notification of the breach, five class action lawsuits were filed, four of which were consolidated and proceeded directly to mediation. A settlement was reached providing for reimbursement for expenses and lost time, actual identity theft, credit monitoring, and $100 statutory damages for a California subclass. The defendant agreed not to object to attorneys’ fees and costs for class counsel of up to $800,000. The plaintiffs estimated the total settlement value at $3.7 million.

The plaintiffs who brought the fifth (non-consolidated) case objected to the settlement. The district court held an unusually extensive preliminary approval hearing, at which the mediator testified. The court preliminarily approved the settlement, deferring its decision on attorneys’ fees until the information regarding claims submitted by class members was available. At that point, the district court, after estimating the total value of the class claims at $1.16 million (the claim rate was 1.8%), awarded the full $800,000 of attorneys’ fees and costs requested, which was 36% of the total class benefit of $2.1 million (including the $1.16 million plus settlement administration costs and attorneys’ fees and costs).

On appeal, the Ninth Circuit majority concluded that the district court did not abuse its discretion in approving the settlement. Based on the mediator’s testimony, the district court reasonably concluded that the settlement was not collusive. The Ninth Circuit explained that “the settlement offers real benefits to class members,” “the class’s standing rested on questionable footing—there is no evidence that any CPK employee’s compromised data was misused,” and “courts do not have a duty to maximize settlement value for class members.”

The attorneys’ fee award, however, was reversed and remanded. The Ninth Circuit explained that the class claims were properly valued at $950,000 (due to a miscalculation by the district court), and the fee award was 45% of the settlement value, “a significant departure from our 25% benchmark.” In remanding, the Ninth Circuit noted that a “downward adjustment” would likely be warranted on remand.
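
The mechanics are straightforward: with the $800,000 fee held constant, the fee percentage turns entirely on how the class benefit is valued. The sketch below is our illustration of that sensitivity; the administration-cost figure is a placeholder, not a number from the opinion.

```python
# Illustrative arithmetic only (our sketch, not the court's calculation).
# With the $800,000 fee held constant, the percentage depends on how the
# class benefit is valued; the admin figure below is a placeholder.
fee = 800_000
admin = 150_000  # hypothetical settlement administration costs

def fee_share(class_value: int) -> float:
    """Fee as a share of total settlement value (claims + admin + fee)."""
    return fee / (class_value + admin + fee)

print(round(fee_share(1_160_000), 2))  # district court's valuation -> lower share
print(round(fee_share(950_000), 2))    # Ninth Circuit's valuation -> higher share
# The reported 36% and 45% figures turn on the actual administration costs and
# how the opinion computes the total; the point here is only the sensitivity.
```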

Judge Collins concurred in part and dissented in part. He would have reversed the approval of the settlement, concluding that the district court failed to adequately address the objections and the low claims rate, and citing “the disparity between the size of the settlement and the attorney’s fees.” From a defendant’s perspective, this decision demonstrates how it can be important to convey to the court that the approval of the proposed settlement should be evaluated independently of the attorney’s fees application. If the court finds the proposed fee award too high, that should not warrant settlement disapproval if the proposed relief for the class members is fair and reasonable. This is true of both “claims made” and “common fund” settlement structures.

Eyeglass manufacturer and retailer Warby Parker recently settled a 2018 data breach investigation by the Office for Civil Rights (OCR) for $1.5 million. According to OCR’s press release, Warby Parker self-reported that between September and November of 2018, unauthorized third parties had access to customer accounts following a credential stuffing attack. The names, mailing and email addresses, payment card information, and prescription information of 197,986 patients were compromised.
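
In a credential stuffing attack, username/password pairs leaked from other breaches are replayed in bulk against a login endpoint. The sketch below is a generic illustration of one basic mitigation, throttling repeated failures per account, and is not a description of Warby Parker’s systems; in practice it would be paired with multi-factor authentication and monitoring of login activity.

```python
# Illustrative sketch of one basic credential-stuffing mitigation: temporarily
# lock an account after repeated failed logins in a short window. Thresholds
# and names are hypothetical.
import time
from collections import defaultdict, deque

MAX_FAILURES = 5        # failed attempts allowed per account
WINDOW_SECONDS = 300    # within a rolling five-minute window

_failed_attempts = defaultdict(deque)

def should_lock(username: str) -> bool:
    """Record a failed login and return True if the account should be locked."""
    now = time.time()
    attempts = _failed_attempts[username]
    attempts.append(now)
    # Discard failures that fall outside the rolling window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) >= MAX_FAILURES
```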

Following its investigation, OCR alleged three violations of the HIPAA Security Rule, “including a failure to conduct an accurate and thorough risk analysis to identify the potential risks and vulnerabilities to ePHI in Warby Parker’s systems, a failure to implement security measures sufficient to reduce the risks and vulnerabilities to ePHI to a reasonable and appropriate level, and a failure to implement procedures to regularly review records of information system activity.” The settlement reiterates the importance of conducting an annual security risk assessment and implementing a risk management program.

The California Privacy Protection Agency (CPPA) and Background Alert, Inc. (a California-based data broker) settled allegations that Background Alert failed to register and pay the annual fee required by the California Delete Act. This settlement is part of the CPPA’s investigative initiative announced back in October 2024.

The Delete Act requires data brokers to register with the CPPA and pay an annual fee to fund the California Data Broker Registry. Data brokers can face fines of $200 per day for failing to register by the deadline. The CPPA alleged that Background Alert failed to register between February 1 and October 8, 2024.
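
For a sense of scale (our arithmetic, not a figure taken from the settlement itself), the alleged registration gap of February 1 through October 8, 2024 spans 250 days, which at $200 per day works out to $50,000:

```python
# Illustrative arithmetic only: potential exposure from the $200-per-day fine
# over the alleged registration gap (the settlement terms are separate).
from datetime import date

daily_fine = 200
gap_days = (date(2024, 10, 8) - date(2024, 2, 1)).days  # 250 days
print(gap_days * daily_fine)  # 50,000
```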

The California Consumer Privacy Act (CCPA) defines a “data broker” as a business that collects and sells personal information about consumers with whom it does not have a direct relationship or any direct interaction. Data brokers act as intermediaries in the data market, acquiring information from other sources and then selling it to third parties.

As a result of this failure, Background Alert is required to shut down its business through 2028 or face a $50,000 fine.

The CPPA alleged that Background Alert created consumer profiles and sold those profiles through its website, backgroundalert.com. According to the settlement agreement, Background Alert analyzed and summarized billions of public records and then drew inferences from those records to identify consumers who “may somehow be associated with” a searched-for individual and identified patterns to generate profiles about consumers. Background Alert actually promoted its business by claiming: “It’s scary how much information you can dig up on someone.”

CPPA’s Head of Enforcement, Michael Macko, warned other businesses that California residents have rights and protections over their personal information, including with respect to profiles built from inferences about them: “[i]f that’s your business, then you have responsibilities under California’s comprehensive privacy law, the CCPA and you might also qualify as a data broker under the Delete Act. [This] action shows that [the CPPA] won’t hesitate to pursue violations based on inferences and profiling.” If your company hasn’t assessed whether it is a data broker under the CCPA or registered its business in accordance with the Delete Act, the time to do so is now. If the Delete Act applies, your business must register with the CPPA, pay the annual fee, disclose detailed information about its data collection practices, provide a mechanism for consumers to request deletion of their personal data, process deletion requests within a specified timeframe, and undergo periodic independent audits.