Recently, the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), and Canadian Centre for Cyber Security (Cyber Centre) issued guidance outlining security best practices for administrators hardening on-premises Exchange servers.

The guidance emphasizes that “the threat to Exchange servers remains persistent…and should be considered under imminent threat.” Accordingly, “the authoring agencies strongly encourage organizations to take proactive steps to mitigate risks and prevent malicious activity. The authoring agencies recommend the following prevention and hardening defenses as critical for Exchange servers to mitigate various compromise techniques and protect the sensitive information and communications they manage.”

The recommendations include:

  • Maintain security updates and patching cadence
  • Migrate end-of-life Exchange servers
  • Ensure emergency mitigation service remains enabled
  • Apply security baselines
  • Enable built-in protections
  • Restrict administrative access
  • Harden authentication and encryption
  • Configure transport layer security
  • Configure extended protection
  • Configure Kerberos and SMB instead of NTLM
  • Configure modern authentication and multifactor authentication
  • Configure certificate-based signing of PowerShell serialization
  • Configure strict transport security
  • Configure download domains
  • Use role management and split permissions
  • Use P2 FROM header manipulation detection

The guidance is specific and actionable, underscoring the importance of updating, hardening, and monitoring Exchange servers to reduce the ongoing risk of cyber-attacks.
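Several of these recommendations can be spot-checked from outside the server itself. As one illustration of the “Configure strict transport security” item, the minimal Python sketch below (the mail hostname is a hypothetical placeholder, not from the guidance) fetches an Outlook on the web endpoint and reports whether an HSTS header is returned:

```python
import urllib.request

def check_hsts(url: str) -> None:
    """Fetch a URL over HTTPS and report whether HSTS is advertised."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        hsts = resp.headers.get("Strict-Transport-Security")
    if hsts:
        print(f"HSTS enabled: {hsts}")
    else:
        print("No Strict-Transport-Security header; HSTS does not appear to be configured")

# Hypothetical hostname -- substitute your own Outlook on the web endpoint.
check_hsts("https://mail.example.com/owa")
```

A check like this confirms only the externally visible result; the actual configuration steps should follow the agencies’ guidance.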

A recent ruling from the U.S. District Court for the Northern District of California underscores the limits of state privacy statutes, particularly when plaintiffs reside outside the state and the alleged misconduct lacks a clear connection to California. The decision by Judge Jacqueline Scott Corley dismissed a proposed class action against California-based analytics company Samba TV Inc., clarifying the reach of both state and federal privacy protections. Steve Dellasala, et al., v. Samba TV, Inc., No. 3:25-CV-03470-JSC, 2025 WL 3034069 (N.D. Cal. Oct. 30, 2025).

Plaintiffs from North Carolina and Oklahoma claimed that Samba TV, whose technology is installed on certain Sony televisions, intercepted their private video-viewing data in real time and without consent. The data allegedly included unique device identifiers such as IP addresses.

These individuals brought their suit in federal court in California, asserting claims under:

  • The Comprehensive Computer Data Access and Fraud Act (CCDAFA)
  • The California Invasion of Privacy Act (CIPA)
  • The federal Video Privacy Protection Act (VPPA)
  • Intrusion upon seclusion under common law

Judge Corley found that both the CCDAFA and CIPA specifically indicate the California legislature’s intent for the statutes not to apply extraterritorially; that is, they do not reach conduct occurring wholly outside California. Since the alleged collection and interception of data took place from televisions in North Carolina and Oklahoma and the complaint did not sufficiently allege that the conduct occurred within California, the court held that the California statutes are inapplicable.

The plaintiffs also invoked the federal VPPA, a law designed to protect video rental records from unauthorized disclosure. Judge Corley ruled these claims failed as well, holding that Samba TV does not qualify as a “video tape service provider” under the statute. Instead, Samba was deemed an analytics provider using information about video product usage, rather than distributing, renting, or selling video materials.

Lastly, the court evaluated whether the invasion of privacy was sufficiently “highly offensive” to sustain a claim for intrusion upon seclusion. Collecting an IP address alone, without more, the court held, does not meet the legal threshold for such a claim, especially without allegations showing exactly how the data was used or disclosed.

This decision provides a few key takeaways:

  • Geographic Limitations of State Laws: California’s privacy statutes are meant to protect California residents and activities occurring within the state’s borders. Out-of-state plaintiffs cannot easily reach for these statutes in California federal court if the alleged misconduct occurred elsewhere.
  • VPPA’s Narrow Scope: Plaintiffs should be cautious in applying the VPPA to technology or analytics firms unless those entities directly provide video rental, sale, or similar services.
  • Heightened Requirements for Privacy Claims: Simply collecting identifiers like IP addresses, without more “highly offensive” conduct or misuse, may not clear the hurdle for intrusion upon seclusion or comparable claims under tort law.

This decision highlights the continued challenges in holding technology and analytics companies accountable under a patchwork of state and federal privacy laws, especially for consumers outside states with robust data privacy protections. Plaintiffs seeking redress for alleged privacy violations must pay close attention to jurisdictional limits and the scope of relevant statutes, as courts may remain vigilant in enforcing these boundaries. As data privacy concerns continue to grow, both legislatures and courts will likely face ongoing pressure to clarify and expand the reach of these protections.

A recent federal class action lawsuit challenging Home Depot Inc.’s use of facial scanning technology at self-checkout kiosks has come to a sudden halt. The plaintiff, Benjamin Jankowski, voluntarily dropped the case, with the U.S. District Court for the Northern District of Illinois granting dismissal without prejudice. Jankowski v. The Home Depot, No. 1:25-cv-09144 (N.D. Ill. Oct. 31, 2025). The underlying reasons for this move remain unclear, as neither Jankowski nor Home Depot has commented publicly.

Jankowski’s lawsuit focused on Home Depot’s alleged failure to comply with the Illinois Biometric Information Privacy Act (BIPA), a statute recognized as the strictest biometric privacy law in the country. Key points from the allegations include:

  • Facial Scan Collection and Retention: The complaint accused Home Depot of collecting facial scans from customers at self-checkout kiosks using “computer vision” technology, a program reportedly rolled out in 2023 and expanded in 2024 to help deter theft.
  • No Notice or Consent: Jankowski claimed he was never informed his facial data would be collected, nor was he asked for consent, both core requirements under BIPA.
  • Retention Policy Concerns: The lawsuit highlighted that Home Depot’s policy retained biometric data “as long as reasonably necessary,” which, according to Jankowski, failed to comply with BIPA’s mandate that such data be deleted within three years of the individual’s last interaction.

A voluntary dismissal “without prejudice” means Jankowski can potentially refile his lawsuit in the future. It does not constitute a resolution on the merits, nor does it prevent similar or related claims from arising. For now, the case and the specific issues it raised, for example, whether Home Depot’s practices violate BIPA, will remain unresolved by the courts.

At this stage, there’s no public explanation for Jankowski’s decision to withdraw. Possible reasons range from settlement discussions to discovery-related challenges, alternative legal strategies, or simply a reconsideration of the litigation’s viability. The lack of comment from either side leaves room for speculation.

While the sudden dismissal of the Home Depot lawsuit leaves important legal questions unanswered, it’s a clear reminder that biometric data collection remains under close scrutiny from regulators and private litigants alike. Companies operating in Illinois (and beyond) should treat BIPA compliance as a critical business priority, proactively reviewing their policies and ensuring all required permissions and notices are in place before deploying technologies that use facial scans or other biometric data.

In today’s regulatory environment, the risks of “collect now, ask later” are simply too high.

Section 230 of the Communications Decency Act, 47 U.S. Code § 230, has played a critical role in shaping the modern internet, but the boundaries of its protections remain a persistent point of discussion in law and policy. Ongoing litigation is testing just how far Section 230 stretches when it comes to the design and operation of today’s social media platforms.

Section 230 Overview

Section 230 is a key part of U.S. internet law, and many consider it to have created the internet as we know it today. It provides that online service providers are not considered the publisher or speaker of content posted by users, and it generally insulates platforms from legal responsibility for third-party content. For example, if someone uploads a misleading or offensive post to Instagram, Instagram itself is generally not held legally responsible for what that user shares because Section 230 protects the platform from being treated as the content publisher. This immunity has encouraged the growth of the internet economy, allowing platforms to innovate without facing publisher-level liability for every user post or comment.

However, Section 230’s protections are not all-encompassing. They do not extend to liability arising from a company’s own actions, such as product design decisions, business practices, or forms of direct engagement with users. The statute also contains explicit carve-outs for areas like federal criminal law and intellectual property disputes.

Meta Litigation – User Content or Product Design?

Although Section 230 is widely perceived to provide broad protections to online platforms, recent proceedings in federal court in California offer a reminder that the scope of Section 230 immunity is not without bounds. In multidistrict litigation brought against Meta Platforms, Inc., a coalition of state attorneys general is challenging Meta’s reliance on Section 230 in response to allegations that Meta’s platform features, such as algorithms, infinite scroll, and notifications, contribute to adolescent compulsive use and other harmful mental health impacts. The action is now pending in the U.S. Court of Appeals for the Ninth Circuit. California v. Meta, Inc., No. 24-7032.

The states’ reply brief (filed October 14, 2025) draws a distinction between user-generated content and product design. The attorneys general assert that their claims against the technology company focus solely on Meta’s design choices, not the substance of third-party content. From this perspective, the claims address Meta’s actions as a product designer, not as a publisher, and thus the attorneys general maintain that these assertions fall outside Section 230’s protections. In turn, Meta argues that these platform features are intertwined with content moderation decisions and therefore are covered by Section 230.

This dispute highlights a trend: regulators and plaintiffs are increasingly scrutinizing the architecture and user experience of online platforms, moving beyond content moderation as the primary legal battleground. The Court of Appeals’ eventual decision may clarify whether Section 230 shields platforms from liability related to choices in core design, particularly when those choices are allegedly harming young or otherwise vulnerable users.

Takeaways for Online Platforms

Businesses should not assume that Section 230 offers a blanket defense against all forms of internet liability. Where cases turn on design choices or other conduct by the platform, rather than on the role of publishing third-party content, Section 230 may not apply. Organizations should also periodically review their features with attention to potential risks, especially those aimed at high engagement among minors or other vulnerable user populations. As the contours of Section 230 continue to evolve, staying attuned to these developments is increasingly important not only for legal compliance, but also for ethical and responsible product design.

In today’s increasingly digital world, connected devices are an integral part of daily life. From smart speakers and thermostats to fitness trackers and home security cameras, these devices offer convenience and automation—but they also present new privacy and security challenges. Recognizing the growing concern among consumers, Consumer Reports (CR) has undertaken comprehensive testing to evaluate how well these devices protect user data and defend against potential cyber threats.

CR engineers conducted tests in the organization’s cybersecurity and privacy testing lab, focusing on key criteria such as data encryption, vulnerability to unauthorized access, and the transparency of privacy policies. Their examination of a wide range of popular connected devices helps consumers make informed decisions about which products prioritize security and user privacy. This ongoing effort underscores the importance of not only embracing smart technology, but also advocating for robust protections as our homes and lives become increasingly connected.

The CR engineers test popular connected devices to determine “which ones follow good cybersecurity and privacy practices.” When they encounter a problem, they contact the company and, sometimes, the conversation leads to product improvements.

Some products recently tested by CR include home security cameras and routers. In one case, CR engineers found that the maker of home hubs and devices was “sending unencrypted thumbnail images from its camera over the public network.” In another, CR engineers found that some routers were sending “Wi-Fi SSIDs and passwords in plain text across the local network and the public internet” so “a hacker could have relatively easy access to a person’s Wi-Fi password.”
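Findings like these can often be reproduced with simple checks. The minimal Python sketch below, written against a hypothetical device address on a lab network, tests whether a device’s service completes a TLS handshake at all; a service that does not is necessarily sending its traffic unencrypted:

```python
import socket
import ssl

def speaks_tls(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if the service at host:port completes a TLS handshake."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False   # lab check only; we only care whether TLS is offered
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

# Hypothetical camera endpoint on a test network.
host, port = "192.168.1.50", 80
if not speaks_tls(host, port):
    print(f"{host}:{port} did not negotiate TLS -- traffic may travel in the clear")
```

This is a rough first-pass probe, not a substitute for the kind of full traffic analysis CR’s lab performs.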

I have always been a fan of Consumer Reports, and this gives me one more reason to admire its work. For consumers, following the CR engineers’ tests in the cybersecurity and privacy testing lab is a no-brainer.

The New York Department of Financial Services (NYDFS) recently issued new cybersecurity guidance to assist covered entities in understanding and responding to the heightened risks posed by third-party service providers (TPSPs). NYDFS emphasized that covered entities must acknowledge and account for these risks, and the guidance offers assistance in addressing them.

Based upon its enforcement activities, NYDFS has:

“Identified the need for more robust due diligence, contractual provisions, monitoring and oversight, and TPSP risk management policies and procedures. Moreover, DFS has observed a trend in which some Covered Entities outsource critical cybersecurity compliance obligations to TPSPs without ensuring appropriate oversight and verification by Senior Governing Bodies or Senior Officers. As noted in previous guidance, Covered Entities may not delegate responsibility for compliance with the Cybersecurity Regulation to an affiliate or a TPSP.”

“Additionally, Covered Entities should develop a tailored, risk-based plan to mitigate risks posed by each TPSP. The following is a non-exhaustive list of considerations that Covered Entities should assess when performing due diligence on TPSPs:

  • The type and extent of access to Information Systems and [Nonpublic Information (NPI)].
  • The TPSP’s reputation within the industry, including its cybersecurity history and financial stability.
  • Whether the TPSP has developed and implemented a strong cybersecurity program that addresses, at a minimum, the cybersecurity practices and controls required by the Covered Entity and Part 500.
  • The access controls implemented by the TPSP for its own systems and data, as well as to access the Covered Entity’s Information Systems, and the proposed handling and storage of Covered Entity data, including whether appropriate controls, such as data segmentation and encryption, are applied based on the sensitivity of the data.
  • The criticality of the service(s) provided and the availability of alternative TPSPs.
  • Whether the TPSP uses unique, traceable accounts for personnel accessing the Covered Entity’s systems and data and whether it maintains audit trails meeting the requirements of Section 500.6.
  • Whether the TPSP, its affiliates, or vendors are located in, or operate from, jurisdictions that are considered high-risk based on geopolitical, legal, socio-economic, operational, or other regulatory risks.
  • Whether the TPSP maintains and regularly tests its incident response and business continuity plans.
  • The TPSP’s practices for selecting, monitoring, and contracting with downstream service providers (fourth parties).
  • Whether the TPSP undergoes external audits or independent assessments (e.g., ISO/IEC 27000 series, HITRUST) or can otherwise demonstrate, in writing, compliance with Part 500 or industry frameworks such as the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework.”

Companies subject to NYDFS regulations may wish to consider reviewing and adhering to the guidelines.
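For covered entities building out that oversight, one practical approach is to track each due-diligence consideration as an explicit checklist per vendor. The Python sketch below is purely illustrative; the criterion names are paraphrases of the list above and are assumptions, not NYDFS-mandated fields:

```python
from dataclasses import dataclass, field

@dataclass
class TPSPAssessment:
    """Illustrative due-diligence record for one third-party service provider."""
    name: str
    # Each criterion maps to True (satisfied) or False (open gap).
    criteria: dict = field(default_factory=lambda: {
        "limited_access_to_npi": False,
        "strong_cybersecurity_program": False,
        "unique_traceable_accounts": False,
        "encryption_and_segmentation": False,
        "tested_incident_response_plan": False,
        "independent_audit_or_assessment": False,
    })

    def gaps(self) -> list[str]:
        """Return the criteria that remain unsatisfied for this vendor."""
        return [c for c, ok in self.criteria.items() if not ok]

vendor = TPSPAssessment(name="ExampleVendor")
vendor.criteria["unique_traceable_accounts"] = True
print(f"{vendor.name}: open due-diligence gaps -> {vendor.gaps()}")
```

Keeping the assessment in a structured record like this makes it easier for Senior Governing Bodies to verify, rather than delegate, compliance oversight.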

A class action complaint filed in the Northern District of California on October 17, 2025, alleges that entertainment and arcade franchise Dave & Buster’s Entertainment Inc. misled website visitors about users’ ability to reject cookies and tracking technologies. The lawsuit, brought by two California residents, claims that the Dave & Buster’s website continued to place third-party cookies and transmit user data to advertising partners even after users selected a “Reject All” option on the site’s cookie banner.

How The Alleged Tracking Works

The complaint explains the website’s alleged use of GET and POST requests, which are two common ways browsers communicate with web servers. A GET request is typically used to retrieve information from a website, such as loading a page, while a POST request is used to send information to the website, such as submitting a form. Here, the plaintiffs allege that, despite users opting out of cookies, the website continued to send data about their browsing activity to third parties, including Meta, Google, TikTok, Microsoft, and X (formerly Twitter), through these GET and POST requests, allowing those parties to receive information about users’ interactions, location, and other website communications.
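For readers who want to see the mechanics, the short Python sketch below uses the popular requests library against httpbin.org, a public request-echo service, to show the difference: a GET request carries its parameters in the URL itself, while a POST request carries data in the request body. The parameter names are invented for illustration only:

```python
import requests

# GET: parameters travel in the URL's query string, visible to any server
# (or tracker endpoint) that receives the request.
resp = requests.get("https://httpbin.org/get",
                    params={"page": "/checkout", "visitor": "abc123"})
print(resp.url)          # the data is embedded in the URL itself
# -> https://httpbin.org/get?page=%2Fcheckout&visitor=abc123

# POST: the data travels in the request body instead of the URL.
resp = requests.post("https://httpbin.org/post",
                     data={"event": "add_to_cart", "visitor": "abc123"})
print(resp.status_code)  # 200
```

Either mechanism can carry identifiers to a third party, which is why the complaint treats both as channels for the alleged data sharing.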

Cookie Consent and Third-Party Data Sharing

A key allegation in the complaint is that Dave & Buster’s website presented users with a pop-up banner offering the ability to “Reject All” cookies used for analytics and advertising. However, the complaint asserts that third-party cookies, including those from major platforms, were still stored on users’ devices even after they rejected cookies. These cookies reportedly transmitted a range of information, including browsing history, site interactions, and location data, to external advertising and analytics partners.
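Auditing this kind of behavior properly requires a real browser session, since third-party cookies are typically set by embedded scripts as the page executes. Still, as a rough first pass, a standard-library sketch like the following lists the cookies a page’s own server response attempts to set (the URL here is a generic placeholder):

```python
import urllib.request

def first_party_cookies(url: str) -> list[str]:
    """List Set-Cookie headers returned directly by a page response.

    Note: this sees only cookies set in the server's own response headers.
    Cookies set by embedded third-party scripts require a real browser
    (e.g., a browser-automation tool) to observe.
    """
    req = urllib.request.Request(url, headers={"User-Agent": "cookie-audit/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.headers.get_all("Set-Cookie") or []

for cookie in first_party_cookies("https://www.example.com"):
    print(cookie.split(";", 1)[0])   # print just the name=value pair
```

Comparing results like these before and after a “Reject All” click is one simple way to verify that a consent banner does what it says.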

Wiretapping and Pen Register Claims

In addition to multiple privacy and fraud claims, the complaint includes causes of action under California’s Invasion of Privacy Act (CIPA), with the plaintiffs claiming that the ongoing transmission of user data violates both its wiretapping and pen register provisions. The CIPA wiretapping provision prohibits anyone from intentionally intercepting, tapping, or making an unauthorized connection to a telephone or telegraph wire, as well as willfully reading or attempting to read the contents of a communication in transit without the consent of all involved parties. Although this law originally addressed analog phone lines, plaintiffs are increasingly seeking to apply its protections to modern website tracking technologies. In the Dave & Buster’s lawsuit, the plaintiffs claim that information passed from users’ browsers to the website was captured and shared with third-party companies without proper consent, thus constituting wiretapping under CIPA.

The complaint further alleges that Dave & Buster’s violated CIPA’s pen register provisions. A pen register under CIPA is any device or process that records dialing, routing, addressing, or signaling information from electronic communications, and CIPA generally prohibits the use of such devices without a court order. Plaintiffs are increasingly asserting that the definition of “pen register” includes internet tracking technologies like software that logs user data and IP addresses. Here, the plaintiffs allege that Dave & Buster’s allowed third parties to use tracking technologies that recorded users’ interactions with the site, again, without proper consent and in contradiction to the “Reject All” consent management option.

Key Takeaways for Organizations

This latest CIPA complaint provides several considerations for companies concerning website compliance:

  • Cookie consent must match practice: If your website allows users to reject cookies or tracking, their choice must be respected in actual practice. Banner options should work as described, not simply appear to offer the user control.
  • Transparency is critical: Companies should clearly disclose what data is collected, how it is used, and with whom it is shared. Businesses should also regularly audit their website’s tracking technologies to confirm compliance.
  • Understand technical flows: It is important for both technical and non-technical teams to have a basic grasp of how tracking technologies function. Internal stakeholders should understand how GET and POST requests, cookies, and third-party scripts work on their sites. Without this foundational knowledge, implementing tracking technologies can inadvertently create compliance issues, even when everyone is acting in good faith.

As plaintiffs and courts look more closely at consent and tracking, companies should be mindful that their website privacy controls aren’t just for show. CIPA litigation continues to evolve, and finding yourself in the middle of such litigation is no one’s idea of fun and games.

A recent federal court decision highlights the power of online terms and conditions, and how “choice-of-law” clauses can dramatically influence privacy litigation. In Crowell v. Audible, a Seattle judge dismissed a proposed class action alleging that Audible unlawfully shared its California customers’ browsing and listening data with Meta, finding that the case must proceed (if at all) under Washington, not California, privacy law.

Two Audible customers from California, Gloria Crowell and Kevin Smith, filed a lawsuit claiming that Audible installed tracking pixels on its website. These pixels allegedly enabled the audiobook platform to gather and share users’ browsing, listening, and purchasing data with Meta for targeted advertising—violations, the plaintiffs argued, of the California Invasion of Privacy Act and the state constitution’s privacy protections.

Audible responded that when customers create an account, they agree to terms specifying that any dispute must be governed by Washington law, not California’s, regardless of users’ home state.

U.S. District Judge Kymberly K. Evanson agreed with Audible and dismissed the suit, at least for now. The court found:

  • Notice of Terms: Customers were given “reasonable notice” of Audible’s conditions of use, including the critical choice-of-law provision pointing to Washington;
  • Consent by Use: Every website sign-in reiterated agreement to these terms; and
  • No Unfair Surprise: Audible’s sign-in process, unlike in some recent Ninth Circuit cases, clearly indicated that clicking “continue” constituted agreement, so the customers were properly bound by the terms each time they logged in.

Why does this matter? The heart of the plaintiffs’ remaining argument was that applying Washington law undermines California’s strong privacy protections, in violation of public policy, especially since California’s wiretap statute is broader than Washington’s. While California law generally forbids interception of communications without consent from “all parties,” potentially including businesses and automated systems, Washington law prohibits the interception of communications between two or more individuals, but isn’t as protective regarding communications between a person and a website or automated system.

Judge Evanson was not persuaded that enforcing Washington’s law, even though it might provide fewer remedies for the plaintiffs, would violate a fundamental California public policy. She observed that Washington’s statute is still recognized as one of the strictest anti-wiretapping laws in the country, and that difference alone was not enough to override the contract’s choice-of-law provision.

The case isn’t necessarily over. The judge left the door open for the plaintiffs to rework their lawsuit and assert claims under Washington’s own wiretap law, the Washington Privacy Act. Crowell and Smith have until November 17 to file an amended complaint. But under Washington law, their prospects may be much narrower, particularly because the law focuses on communications between individuals, not individuals and businesses.

This ruling is a reminder to consumers, and a message to businesses, about just how powerful those often-overlooked checkboxes and hyperlinks to “terms of use” can be. When you sign up for an online service, you’re almost always agreeing to more than you think, including which state’s laws will determine your rights if something goes wrong.

For companies, the decision affirms that robust, clearly communicated online terms can withstand legal scrutiny and play a decisive role in defending against state-specific consumer lawsuits.

The use of AI tools is revolutionizing our society, and the efficiency they offer is like nothing we have ever experienced. That said, there are risks worth considering.

“AI poses risks including job loss, deepfakes, biased algorithms, privacy violations, weapons automation and social manipulation. Some experts and leaders are calling for stronger regulation and ethical oversight as AI grows more powerful and integrated into daily life.”

The risks are not theoretical—they are real. Individuals who have devoted their lives to developing AI tools are warning society about its dangers.

The article quoted above provides an excellent summary of 15 risks posed by AI and is well worth the read.

OpenAI recently published research summarizing how criminal and nation-state adversaries are using large language models (LLMs) to attack companies and create malware and phishing campaigns. In addition, the use of deepfakes has increased, including audio and video spoofs used for fraud campaigns.

Although “most organizations are aware of the danger,” they “lag behind in [implementing] technical solutions for defending against deepfakes.” Security firm Ironscales reports that deepfakes are increasing and working well for threat actors. It found that the “vast majority of midsized firms (85%) have seen attempts at deepfake and AI-voice fraud, and more than half (55%) suffered financial losses from such attacks.”

CrowdStrike’s 2025 Threat Hunting Report projects that audio deepfakes will double in 2025, and Ironscales reports that 40% of companies surveyed had experienced deepfake audio and video impersonations. Although companies are training their employees on deepfake schemes, they have “been unsuccessful in fending off such attacks and have suffered financial losses.”

Ways to mitigate the effect of deepfakes include:

  • Training employees on how to detect, respond to, and report deepfakes;
  • Creating policies that limit the ability of any one person, acting alone, to cause a compromise;
  • Embedding multiple levels of authorization for wire transfers, invoice payments, payroll, and other financial transactions (see the sketch following this list); and
  • Employing tools to detect threats that may be missed by employees.
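On the multiple-authorization point, the idea is that no single deepfaked call or spoofed email should be able to move money on its own. The Python sketch below is a minimal illustration, with assumed dollar thresholds and roles, not a production control:

```python
# Minimal sketch of multi-person authorization for financial transactions.
# Thresholds and role names are assumptions for illustration only.
APPROVALS_REQUIRED = {10_000: 2, 100_000: 3}  # amount floor -> distinct approvers needed

def approvals_needed(amount: float) -> int:
    """Return how many distinct approvers a transfer of this size requires."""
    needed = 1
    for floor, count in sorted(APPROVALS_REQUIRED.items()):
        if amount >= floor:
            needed = count
    return needed

def authorize_transfer(amount: float, approvers: set[str]) -> bool:
    """Allow the transfer only if enough *distinct* people have approved it.

    Requiring several approvers means a single deepfaked phone call or
    spoofed email cannot, by itself, move money.
    """
    return len(approvers) >= approvals_needed(amount)

print(authorize_transfer(250_000, {"cfo"}))                      # False
print(authorize_transfer(250_000, {"cfo", "controller", "ap"}))  # True
```

The separation of duties, not the code, is the control; the same logic can live in a payment platform’s approval workflow settings.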

Threat actors will continue to use AI to develop and hone new strategies to evade detection and compromise systems and data. Understanding these risks, responding to them, educating employees, and monitoring for new schemes can help mitigate the consequences.