A recent federal court decision highlights the power of online terms and conditions, and how “choice-of-law” clauses can dramatically influence privacy litigation. In Crowell v. Audible, a Seattle judge dismissed a proposed class action alleging that Audible unlawfully shared its California customers’ browsing and listening data with Meta, finding that the case must proceed (if at all) under Washington, not California, privacy law.

Two Audible customers from California, Gloria Crowell and Kevin Smith, filed a lawsuit claiming that Audible installed tracking pixels on its website. These pixels allegedly enabled the audiobook platform to gather and share users’ browsing, listening, and purchasing data with Meta for targeted advertising—violations, the plaintiffs argued, of the California Invasion of Privacy Act and the state constitution’s privacy protections.

Audible responded that when customers create an account, they agree to terms specifying that any dispute must be governed by Washington law, not California’s, regardless of users’ home state.

U.S. District Judge Kymberly K. Evanson agreed with Audible and dismissed the suit, at least for now. The court found:

  • Notice of Terms: Customers were given “reasonable notice” of Audible’s conditions of use, including the critical choice-of-law provision pointing to Washington;
  • Consent by Use: Every website sign-in reiterated agreement to these terms; and
  • No Unfair Surprise: Audible’s sign-in process, unlike in some recent Ninth Circuit cases, clearly indicated that clicking “continue” constituted agreement, so the customers were properly bound by the terms each time they logged in.

Why does this matter? The heart of the plaintiffs’ remaining argument was that applying Washington law undermines California’s strong privacy protections, in violation of public policy, especially because California’s wiretap statute is broader than Washington’s. California law generally forbids interception of communications without consent from “all parties,” potentially including businesses and automated systems. Washington law, by contrast, prohibits the interception of communications between two or more individuals but is less protective of communications between a person and a website or automated system.

Judge Evanson was not persuaded that enforcing Washington’s law, even though it might provide fewer remedies for the plaintiffs, would violate a fundamental California public policy. She observed that Washington’s statute is still recognized as one of the strictest anti-wiretapping laws in the country, and that difference alone was not enough to override the contract’s choice-of-law provision.

The case isn’t necessarily over. The judge left the door open for the plaintiffs to rework their lawsuit and assert claims under Washington’s own wiretap law, the Washington Privacy Act. Crowell and Smith have until November 17 to file an amended complaint. But under Washington law, their prospects may be much narrower, particularly because the law focuses on communications between individuals, not individuals and businesses.

This ruling is a reminder to consumers, and a message to businesses, about just how powerful those often-overlooked checkboxes and hyperlinks to “terms of use” can be. When you sign up for an online service, you’re almost always agreeing to more than you think, including which state’s laws will determine your rights if something goes wrong.

For companies, the decision affirms that robust, clearly communicated online terms can withstand legal scrutiny and play a decisive role in defending against state-specific consumer lawsuits.

The use of AI tools is revolutionizing our society. The efficiency it presents is like nothing we have ever experienced. That said, there are risks worth considering.

“AI poses risks including job loss, deepfakes, biased algorithms, privacy violations, weapons automation and social manipulation. Some experts and leaders are calling for stronger regulation and ethical oversight as AI grows more powerful and integrated into daily life.”

The risks are not theoretical—they are real. Individuals who have devoted their lives to developing AI tools are warning society about its dangers.

The linked article above provides an excellent summary of 15 risks posed by AI and is well worth the read.

OpenAI recently published research summarizing how criminal and nation-state adversaries are using large language models (LLMs) to attack companies and create malware and phishing campaigns. In addition, the use of deepfakes has increased, including audio and video spoofs used for fraud campaigns.

Although “most organizations are aware of the danger,” they “lag behind in [implementing] technical solutions for defending against deepfakes.” Security firm Ironscales reports that deepfakes are increasing and working well for threat actors. It found that the “vast majority of midsized firms (85%) have seen attempts at deepfake and AI-voice fraud, and more than half (55%) suffered financial losses from such attacks.”

CrowdStrike’s 2025 Threat Hunting Report projects that audio deepfakes will double in 2025, and Ironscales reports that 40% of companies surveyed had experienced deepfake audio and video impersonations. Although companies are training their employees on deepfake schemes, they have “been unsuccessful in fending off such attacks and have suffered financial losses.”

Ways to mitigate the effect of deepfakes include:

  • Training employees on how to detect, respond to, and report deepfakes;
  • Creating policies that limit the damage a single deceived or compromised employee can cause;
  • Embedding multiple levels of authorization for wire transfers, invoice payments, payroll, and other financial transactions (a minimal sketch follows this list); and
  • Employing tools to detect threats that may be missed by employees.
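
To make the multi-level authorization point concrete, here is a minimal sketch of a dual-approval rule for outbound wire transfers. The threshold, roles, and function names are illustrative assumptions, not a reference implementation.

```typescript
// Minimal sketch: dual approval for outbound wire transfers.
// The $10,000 threshold and the types are illustrative assumptions.
interface TransferRequest {
  id: string;
  amountUsd: number;
  requestedBy: string; // employee who initiated the request
  approvals: string[]; // employees who signed off
}

const DUAL_APPROVAL_THRESHOLD_USD = 10_000; // assumed policy threshold

function canExecute(req: TransferRequest): boolean {
  // Separation of duties: the requester never counts as an approver.
  const approvers = new Set(req.approvals.filter((a) => a !== req.requestedBy));
  // Large transfers require two independent approvers, so a single
  // deepfaked voice call or spoofed email cannot move money on its own.
  const required = req.amountUsd >= DUAL_APPROVAL_THRESHOLD_USD ? 2 : 1;
  return approvers.size >= required;
}

// Example: a $50,000 request "approved" only by its own requester is blocked.
console.log(canExecute({
  id: "wt-001",
  amountUsd: 50_000,
  requestedBy: "alice",
  approvals: ["alice"],
})); // false
```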

Threat actors will continue to use AI to develop and hone new strategies to evade detection and compromise systems and data. Understanding the risk, responding to it, educating employees, and monitoring can help mitigate the risks and consequences.

On October 9, 2025, the Northern District of California denied Mashable, Inc.’s motion to dismiss a class action alleging violations of the California Invasion of Privacy Act (CIPA). Mashable operates a digital news and entertainment website that publishes articles and multimedia content online. The plaintiff alleged that Mashable disclosed the IP addresses and device identifiers of its website visitors to Microsoft and other third parties in violation of CIPA.

Case History

The plaintiff alleged that Mashable’s website embedded third-party trackers from Microsoft and others that collected users’ IP addresses and device information and transmitted that data to third parties for advertising and profiling. The plaintiff argued these trackers function as “pen registers” under CIPA. The Pen Register Act (the chapter of CIPA at issue) prohibits installing or using any device or process that records addressing information from electronic communications without a court order.

Mashable’s Arguments

Mashable moved to dismiss, contending that the Pen Register Act applies only to person-to-person communications such as phone calls or emails, not to general website activity. It further argued that the complaint did not plausibly allege a violation and that the CIPA statute is ambiguous and should be interpreted narrowly under the rule of lenity, a legal principle that requires courts to interpret ambiguous criminal statutes in favor of defendants. Mashable also asserted that pending Senate Bill 690 indicates the California Legislature’s disagreement with the plaintiff’s interpretation of the Pen Register Act.

Court’s Reasoning and Decision

The court rejected Mashable’s arguments and found that the Pen Register Act’s language is intentionally broad, covering any “device or process” that records addressing information. A pen register is defined as a “device or process that records or decodes dialing, routing, addressing, or signaling information transmitted by an instrument or facility from which a wire or electronic communication is transmitted, but not the contents of a communication.” Relying on the inclusion of “process” in the definition, the court held that the statute was intended to encompass software trackers embedded in websites, not just traditional telephone hardware. According to the court, “[w]hat matters under the statute is not the form of the tool, but rather the function.”

The court further held that a user’s visit to a website, which involves transmitting an HTTP request containing IP and device data, constitutes an “electronic communication” under the statute. The court likened an IP address to a telephone number, as “quintessential ‘addressing’ information…[that] identifies the device sending the communication and determines where the packet is to be routed.”
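
For readers less familiar with the mechanics, the sketch below shows roughly what an embedded third-party tracker transmits when a page loads. The endpoint and parameter names are hypothetical; the point is that routing data, including the visitor’s IP address at the network layer, travels with every request, separate from any message “contents.”

```typescript
// Sketch: what a third-party tracking beacon transmits when a page loads.
// The endpoint and parameter names are hypothetical.
const beacon = new URL("https://tracker.example.com/collect");
beacon.searchParams.set("event", "page_view");
beacon.searchParams.set("page", location.href);          // where the user is
beacon.searchParams.set("screen", `${screen.width}x${screen.height}`);
beacon.searchParams.set("ua", navigator.userAgent);      // device details

// The browser sends this request automatically; the visitor's IP address
// rides along at the network layer as routing ("addressing") information,
// even though it never appears in the URL itself.
navigator.sendBeacon(beacon.toString());
```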

In addition, the court found that Mashable’s role in embedding trackers and using the resulting data was sufficient to plead “installation” and “use” under CIPA, even if third-party vendors operated the trackers. The court held that, even if the trackers were operated by a third party, Mashable embedded the trackers onto its website and used the data captured from them to facilitate targeted advertising.

Moreover, the court found the statute clear enough in its “language, structure, history and purpose” to apply to the conduct alleged and declined to apply the rule of lenity. It also held that S.B. 690 is not persuasive evidence because it has not passed and “[the] Court must apply the statute as it currently exists.”

Implications

This decision significantly broadens the scope of CIPA compliance risk for website operators in California. By holding that the Pen Register Act applies to modern web tracking technologies, such as embedded third-party trackers that collect and transmit IP addresses and device identifiers, the court clarified that CIPA is not limited to traditional telephone or direct person-to-person communications. Any website embedding third-party advertising or analytics tools that record user “addressing” information could face similar exposure, even if the trackers are managed and operated by external vendors. Importantly, the court’s dismissal of the rule of lenity and disregard for pending legislative amendments further signal that California courts are leaning toward expansive interpretations of digital privacy statutes.

Takeaways

Website operators, especially those with significant California user bases, should reassess their use of tracking technologies and the sharing of user data with third parties. Compliance strategies should include conducting thorough audits of all embedded trackers, understanding what information is collected, ensuring appropriate disclosures, and evaluating the legal basis for any data-sharing practices. The court’s emphasis on function over form warns that simply relying on vendor installation or technical outsourcing does not insulate companies from liability under CIPA. In this environment, proactive reviews of privacy practices and ongoing legal monitoring are critical, not only to limit exposure in potential CIPA litigation, but also to align with evolving regulatory and judicial expectations.

Mergers and acquisitions (M&A) can be transformative, but hidden compliance risks often lurk beneath the surface, especially regarding privacy and data protection. In California, strict laws like the California Consumer Privacy Act (CCPA) and the California Invasion of Privacy Act (CIPA) are being aggressively enforced through litigation. Plaintiffs’ firms are increasingly targeting companies whose websites use certain technologies (e.g., chatbots, session replay, cookies) that may run afoul of CIPA and CCPA, potentially resulting in significant liability for acquirers post-close.

Whether you are buying or selling a company, it’s crucial to address these privacy issues early in your M&A process.

For Buyers: Ask the Right Questions—Don’t Buy Liability

Due diligence is the buyer’s opportunity to identify and mitigate risks before finalizing a deal. To avoid inheriting a ticking privacy time bomb, buyers should:

  • Incorporate Specific Privacy Diligence Questions
    • Is the target’s website CIPA and CCPA compliant?
    • Are visitors notified about the collection and sharing of personal information (including IP addresses, chat transcripts, session replays, cookies, etc.)?
    • Has the target ever received any demand letters, lawsuits, or regulatory notices relating to CCPA or CIPA compliance?
    • What third-party technologies (e.g., session replay, analytics, advertising plugins) are used on the website? Are vendor agreements in place, and do they address privacy?
  • Review Web and App Technology
    • Inventory all tracking, chat, and recording technologies on the website (a first-pass audit sketch follows this list).
    • Ensure required consents/disclosures are in place (pop-ups, banners, disclosures in privacy policy).
  • Assess the Cost of Remediation
    • If gaps are found, estimate the financial, operational, and reputational impact of bringing the website into compliance.
    • Negotiate indemnity, escrow, or purchase price adjustments as appropriate.
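
As a first pass at the tracker inventory mentioned above, the snippet below can be run in a browser’s developer console on the target’s site; it lists the third-party hostnames loading scripts and iframes on the current page. It is only a rough starting point: it assumes anything loaded from an outside domain merits review, and it will miss trackers injected after page load or data sent via pixels and beacons.

```typescript
// Rough first-pass tracker inventory: list third-party origins that load
// scripts or iframes on the current page. Run in the browser console.
const firstParty = location.hostname;

const origins = new Set<string>();
document.querySelectorAll<HTMLScriptElement>("script[src]").forEach((s) => {
  origins.add(new URL(s.src, location.href).hostname);
});
document.querySelectorAll<HTMLIFrameElement>("iframe[src]").forEach((f) => {
  origins.add(new URL(f.src, location.href).hostname);
});

const thirdParty = [...origins].filter((h) => h !== firstParty);
console.table(thirdParty); // candidates for the diligence tracker inventory
```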

For Sellers: Shore Up Compliance Before Negotiations

Buyers will discover privacy gaps unless you address them first, and those discoveries can delay the deal, reduce the sale price, or create hard questions post-close. Sellers should:

  • Audit the Website Now
    • Identify all data collection, tracking, chat, or recording technologies.
    • Engage privacy counsel or consultants to flag CCPA/CIPA compliance issues.
  • Update Documentation and Policies
    • Ensure your privacy policy, cookie disclosures, and consent mechanisms are current and legally sufficient for California and other relevant jurisdictions.
  • Remediate High-Risk Practices
    • Disable or properly disclose any session replay or “trap-and-trace” technologies.
    • Review agreements with vendors that process web visitor data.
  • Document Your Compliance Efforts
    • Maintain records of your investigation and remediation steps.
    • Be transparent with buyers; proactive efforts can build trust and defend your valuation.

Website privacy litigation isn’t going away, and regulatory scrutiny will only increase. For buyers, robust due diligence can prevent expensive surprises shortly after closing. For sellers, fixing compliance weaknesses before sale preserves deal value and speeds up negotiations. In every M&A involving a consumer-facing website or app, CIPA and CCPA compliance must be an explicit part of diligence. Ask the right questions, address vulnerabilities, and avoid inheriting (or passing along) privacy liabilities that could haunt both parties for years to come.

California continues to lead the way in digital privacy. Its latest step is AB 566, the California Opt Me Out Act. This new law amends the already robust California Consumer Privacy Act (CCPA) and specifically targets how internet browsers empower users to control their personal information.

AB 566 requires that all consumer web browsers (such as Chrome, Firefox, and Safari) make it simple and obvious for users to send a technical “opt-out preference signal” to websites. This signal tells websites not to sell or share the user’s personal information (as required by the CCPA).

The key provisions of AB 566 include:

  • Easy-To-Use Opt-Out Function: By January 1, 2027, every business that develops or maintains a web browser for consumers must include an accessible, easy-to-find control that lets users send an opt-out signal to websites.
  • Transparency Required: Browser providers must clearly disclose:
    • How their opt-out function works; and
    • What effect using the signal has.
  • Liability Shield for Browsers: If a website fails to honor a user’s opt-out signal, the browser is not liable, as long as it provides the signal in accordance with the law.
  • Further Rulemaking: The California Privacy Protection Agency is authorized to adopt additional regulations if necessary to clarify or enforce these new requirements.

Note that “browser” is defined under AB 566 as “any software consumers use to access internet websites;” “opt-out preference signal” is “a technical signal, made easy to send, that communicates a user’s decision to block the sale or sharing of their personal data.”
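
In practice, the most widely deployed opt-out preference signal today is Global Privacy Control (GPC), which participating browsers send as the request header Sec-GPC: 1 and expose to page scripts as navigator.globalPrivacyControl. Below is a minimal sketch of what honoring such a signal could look like on the website side; the disableSaleAndSharing helper is a hypothetical stand-in for whatever mechanism suppresses a site’s ad-tech data flows.

```typescript
// Sketch: honoring an opt-out preference signal (e.g., Global Privacy
// Control) in page scripts. Server-side, the same signal arrives as the
// request header "Sec-GPC: 1".
function disableSaleAndSharing(): void {
  // Hypothetical stand-in: suppress the tags and pixels that would
  // otherwise sell or share this visitor's personal information.
  console.log("Opt-out signal honored: sale/sharing disabled for this visit.");
}

// The GPC property is not yet in TypeScript's built-in Navigator type.
const nav = navigator as Navigator & { globalPrivacyControl?: boolean };

if (nav.globalPrivacyControl === true) {
  // Treat the signal as a CCPA "do not sell or share" request.
  disableSaleAndSharing();
}
```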

AB 566 is designed to further empower California consumers, making it easier to exercise their privacy rights under the CCPA and the California Privacy Rights Act. Instead of navigating complicated website settings or privacy forms, consumers can use their browser’s built-in privacy controls to express their preferences with a simple click.

Beginning in January 2027, if you use a browser in California, you’ll have a clear and convenient way to tell websites not to sell or share your data with just one simple click. If you develop or maintain a browser, now’s the time to start thinking ahead about compliance, design, and user communication.

Continuing the weekly blog posts about lawyers using AI and getting in trouble, the Massachusetts Office of Bar Counsel recently issued an article entitled “Two Years of Fake Cases and the Courts are Ratcheting Up the Sanctions,” summarizing the problems encountered by courts when confronted with lawyers citing fake cases, and the subsequent referral to disciplinary counsel.

The article outlines multiple cases of lawyers being sanctioned for filing pleadings containing fake cases after using generative AI tools to draft the pleading. The cases range from lawyers not checking the cites themselves, to supervising lawyers not checking the cites of lawyers they are supervising before filing the pleading.

The article reiterates our professional ethical obligations as officers of the court to always file pleadings that “to the best of the attorney’s knowledge, information and belief, there is a good ground to support it,” that “any lawyer who signs, files, submits, or later advocates for any pleading, motion or other papers is responsible for its content,” and that lawyers are to provide proper supervision to subordinate lawyers and nonlawyers.

The article outlines two recent sanctions imposed upon lawyers in Massachusetts in 2025. The author states, “Massachusetts practitioners would be well-served to read the sanction orders in these matters.” I would suggest that non-Massachusetts practitioners should read the article and the sanctions imposed as they are similar to what other courts are imposing on lawyers who are not checking the content and cites of the pleadings before filing them.

Courts are no longer giving lawyers free passes for being unaware of the risks of using generative AI tools to draft pleadings. According to the article, sanctions will continue to be issued, and practitioners and firms need to address the issue head on.

The article points out several mitigations that lawyers and firms can take to avoid sanctions. My suggestion is that lawyers use caution when using AI to draft pleadings, communicate with any other lawyers involved in drafting the pleadings to determine whether AI is being used (including if you are serving as local counsel), and check and re-check every cite before you file a pleading with a court.

Dating sure has changed since I was in the market decades ago. Some of us can’t imagine online dating, let alone dating a bot. Get over it—it’s now reality.

Vantage Point, a counseling company located in Texas, surveyed 1,012 adults, and a whopping 28% of them admitted to having “at least one intimate or romantic relationship with an AI system.” Vantage Point recently published the results as “Artificial Romance: A Study of AI and Human Relationships,” which found:

  • 28.16% of adults claim to have at least one intimate or romantic relationship with an AI.
  • Adults 60 years and older are more likely to consider intimate relationships with AI as not cheating.
  • More than half of Americans claim to have some kind of relationship with an AI system.
  • ChatGPT is the #1 AI platform adults feel they have a relationship with, Amazon’s Alexa is #3, Apple’s Siri is #4, and Google’s Gemini is #5.
  • Adults currently in successful relationships are more likely to pursue an intimate or romantic relationship with an AI.

The article explores whether having an intimate or romantic relationship with a bot is cheating on your partner or not, which we will not delve into here. The point is that it appears that a lot of adults are involved in relationships with bots.

According to Gizmo, younger generations, including 23% of Millennials and 33% of Gen Z report having romantic interactions with AI.

For adults, the pitfalls and “dangers” associated with dating a bot are thoroughly outlined in an informative article in Psychology Today. Among the experts’ concerns:

  • Dating a bot threatens our ability to connect and collaborate in all areas of life.
  • In most cases, users actually create the characteristics, both physical and “emotional,” that they want in their bot. Some users lose interest in real-world dating because of intimidation, inadequacy, or disappointment.
  • AI relationships will potentially displace some human relationships and lead young men to have unrealistic expectations about real-world partners.
  • Sometimes the bots are manipulative and can be destructive. This can lead to feelings of depression, which can lead to suicidal behavior.

What is more alarming is the “astonishing proportion of high schoolers [who] have had a ‘romantic’ relationship with an AI” bot. According to the article by the same name, “this should worry you.”

Presently, one in five high school students say that “they or a friend have used AI to have a romantic relationship” according to a recent report from the Center for Democracy and Technology. This is consistent with other studies noting the high percentage of teens that are forming relationships with AI bots. The concerns for youngsters forming relationships with bots include the fact that they can “give dangerous advice to teens…encourage suicide, explaining how to self-harm, or hide eating disorders. Numerous teens have died by suicide after developing a close and sometimes romantic relationship with a chatbot.”

The report found that 42% of high schoolers use AI “as a friend, or to get mental health support, or to escape from real life.” Additionally, 16% say they converse with an AI bot every day. AI is also being used to fabricate revenge porn and deepfakes and to facilitate sexual harassment and bullying.

As with social media, parents need to be aware of how commonly kids interact with AI bots for romantic relationships or mental health advice, and should discuss the risks with them.

Oracle has confirmed that the threat actor group Cl0p is actively exploiting a zero-day vulnerability in the Oracle E-Business Suite product, versions 12.2.3-12.2.14. On October 4, 2025, Oracle advised its customers in a security advisory that the supplied patch should be applied “as soon as possible.” According to Oracle, “this vulnerability is remotely exploitable without authentication, i.e., it may be exploited over a network without the need for a username and password. If successfully exploited, this vulnerability may result in remote code execution.”

The vulnerability, CVE-2025-61882, was added to the Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities catalog earlier this week, and CISA issued an urgent alert advising that threat actors are exploiting the vulnerability in the wild to launch ransomware attacks against organizations worldwide.

The Federal Bureau of Investigation (FBI) has commented that the zero-day is “an emergency putting Oracle E-Business Suite environments at risk of full compromise.”

The zero-day is an additional vulnerability that Cl0p exploited “to steal large amounts of data from several victims in August.” According to CrowdStrike researchers, the exploitation of the zero-day occurred on August 9, 2025. Cl0p has used the zero-day to initiate a data theft and extortion campaign, including sending extortion emails to Oracle customers. The zero-day has a CVSS rating of 9.8, which means that it is critical and should be patched immediately.

On October 6, 2025, Bloomberg reported that the Securities and Exchange Commission (SEC) has launched an investigation into AppLovin Corporation’s data-collection practices, following an alleged whistleblower complaint and a series of short-seller reports. We previously covered the shareholder class action against AppLovin in another blog post. The company is a mobile advertising technology business that operates a software-based platform connecting mobile game developers to new users. AppLovin, which recently rebranded its consumer ads division as “AXON,” now faces heightened scrutiny over how its technology interacts with user data. Forbes estimates that news of the probe caused AppLovin’s stock to drop 14% on October 6, wiping out approximately $8.65 billion in executive and major investor wealth.

The investigation is being conducted by the SEC’s Cyber and Emerging Technologies Unit, which the agency established in February 2025 to address cyber-related misconduct and safeguard retail investors in the emerging technologies sector. The probe allegedly centers on allegations that AppLovin violated platform partner agreements to deliver more targeted advertising to consumers, potentially using unauthorized tracking methods such as device fingerprinting. These methods were reportedly outside the parameters permitted by the service agreements of its platform partners.

Fingerprinting is a tracking technique that collects a combination of device and browser characteristics, such as screen resolution, installed fonts, and hardware settings, to create a unique identifier for a user. Unlike cookies, which can be deleted or blocked, fingerprints are harder to erase and often invisible to users. When used for ad targeting without clear consent, fingerprinting could raise privacy compliance concerns under platform policies and data protection laws. In fact, fingerprinting has been banned by Apple and was previously restricted by Google.
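
To make the mechanism concrete, below is a minimal browser-side sketch that combines a handful of device traits into a stable identifier. Real fingerprinting systems draw on far more signals (canvas rendering, audio stack, installed fonts); the FNV-1a hash here is chosen only to keep the example self-contained.

```typescript
// Minimal fingerprinting sketch: combine device/browser traits into one
// stable identifier. Real systems use many more signals; FNV-1a is used
// here only to keep the example self-contained.
function fnv1a(input: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

const traits = [
  navigator.userAgent,                       // browser + OS details
  navigator.language,
  `${screen.width}x${screen.height}x${screen.colorDepth}`,
  String(new Date().getTimezoneOffset()),    // coarse location hint
  String(navigator.hardwareConcurrency),     // CPU core count
].join("|");

// Unlike a cookie, this identifier is recomputed on every visit and
// cannot be "deleted" by the user.
const fingerprint = fnv1a(traits);
console.log(fingerprint);
```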

AppLovin has not yet commented on the investigation beyond statements that it “regularly engages with regulators” and will disclose any material developments through appropriate channels. However, the SEC’s alleged involvement, especially from its cyber-focused unit, signals that data privacy and platform integrity are no longer peripheral concerns. These issues are central to how companies will be evaluated by regulators, investors, and consumers alike.

For companies operating in the ad tech space, this SEC investigation is a reminder that violations of partner agreements, or opaque data practices, can trigger not only regulatory action but reputational damage as well. Businesses should review their platform agreements and assess whether data practices align with the terms of service of major platforms. Companies should also audit their tracking technologies, as advanced tracking methods such as fingerprinting may expose a company to regulatory risk if not properly disclosed or authorized. As AI continues to reshape digital advertising, regulators and consumers are watching closely. Businesses should be, too.