As we have warned before, attacks in which threat actors use QR codes against victims continue to rise. To illustrate the risk, on January 8, 2026, the FBI issued a FLASH alert titled “North Korean Kimsuky Actors Leverage Malicious QR Codes in Spearphishing Campaigns Targeting U.S. Entities.”

The alert warns that North Korean state-sponsored actors (Kimsuky) are conducting spearphishing campaigns leveraging QR codes (Quishing) to compromise U.S. entities. These attacks target organizations including think tanks, academic institutions, NGOs, and government contractors.

The threat actors embed malicious QR codes in email attachments or inline graphics, delivered through spearphishing emails impersonating trusted contacts (e.g., advisors, embassy staff). When victims receive a QR code, they scan it with a mobile device, which allows the threat actor to bypass corporate email security and endpoint monitoring. After scanning, the victim is routed through “attacker-controlled redirectors that collect device and identity attributes” and served phishing pages mimicking Microsoft 365, Okta, VPN portals, or Google login screens.
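
For security teams, one practical countermeasure is to extract and inspect QR payloads from inbound image attachments before users scan them on unmanaged phones. Below is a minimal, illustrative Python sketch of that idea; it assumes the third-party pyzbar and Pillow libraries, and the file name and keyword heuristics are hypothetical placeholders, not a production email-security control.

# Hedged sketch: decode QR codes embedded in an image attachment and flag
# payload URLs that resemble credential-harvesting pages. Assumes the
# third-party "pyzbar" and "Pillow" libraries are installed.
from pyzbar.pyzbar import decode
from PIL import Image

# Hypothetical heuristics; real gateways use reputation feeds and sandboxing.
SUSPICIOUS_HINTS = ("login", "verify", "sso", "okta", "microsoft")

def extract_qr_urls(image_path: str) -> list[str]:
    """Return the decoded payload of every QR code found in the image."""
    return [r.data.decode("utf-8", errors="replace")
            for r in decode(Image.open(image_path))]

def flag_suspicious(urls: list[str]) -> list[str]:
    """Crude keyword check for phishing-style destinations."""
    return [u for u in urls if any(h in u.lower() for h in SUSPICIOUS_HINTS)]

if __name__ == "__main__":
    urls = extract_qr_urls("attachment.png")  # hypothetical file name
    print("QR payloads:", urls)
    print("Flagged:", flag_suspicious(urls))

Because the lure is an image rather than a clickable link, ordinary URL rewriting and link scanning never see the destination; decoding the QR code server-side restores that visibility.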

The threat actor is then able to steal credentials and gain unauthorized access to cloud services. Because the attacks originate from unmanaged mobile devices, threat detection is difficult.

The FBI recommends:

  • Employee Awareness: Train staff to avoid scanning unsolicited QR codes.
  • Verify Sources: Confirm legitimacy before interacting with QR codes.
  • Mobile Device Management (MDM): Enforce security controls on all mobile endpoints.
  • Phishing-Resistant MFA: Implement for sensitive systems and remote access.
  • Access Reviews: Apply least privilege and conduct regular audits.
  • Incident Reporting: Notify the FBI Cyber Division or IC3 immediately if suspicious activity is detected.

As we have previously noted, employees are particularly vulnerable to Quishing campaigns: many do not understand the technology, and QR codes are now ubiquitous. When we conduct employee training, we see this lack of understanding firsthand. We strongly recommend that you educate your employees about Quishing. If you are interested in learning more about our cybersecurity training, please contact us.

A new California trial court decision offers website operators some long-awaited relief in the ongoing wave of website privacy suits under the California Invasion of Privacy Act (CIPA). In early December, the Los Angeles County Superior Court rejected an increasingly common theory that routine website analytics and tracking tools function as illegal “pen registers” or “trap and trace” devices under CIPA. Rodriguez v. Ink America Int’l Group LLC, LASC 25STCV153 (Dec. 10, 2025).

Although Rodriguez is not binding precedent (it is only a state trial court ruling), its reasoning may help businesses push back on expansive CIPA theories, particularly where plaintiffs attempt to repackage ordinary website operations as criminal wiretapping-style conduct.

Enacted in 1967, CIPA was aimed at traditional telephone wiretapping and eavesdropping. In recent years, however, plaintiffs have invoked CIPA to challenge common web technologies, such as cookies, pixels, chat tools, beacons, and session replay, arguing that these tools “intercept” communications or capture routing information in a manner supposedly comparable to pen registers and trap-and-trace devices. This theory has fueled a surge of class actions seeking statutory damages based on everyday website functionality.

In Rodriguez, the plaintiff alleged that Ink America operated a website that:

  • collected users’ IP addresses;
  • deployed analytics and beacon software; and
  • did so without user consent.

The complaint asserted that these tools enabled third parties to collect and link identifiers such as IP addresses, browser and operating system information, geolocation data, and email addresses. Based on those allegations, the plaintiff claimed Ink America’s analytics practices amounted to the unlawful use of a pen register or trap and trace device under CIPA.

The court dismissed the CIPA claims without leave to amend, concluding that the complaint failed to state a viable claim. The ruling is notable for several reasons:

  • The Court Treated CIPA as Ambiguous in the Website Context and Looked to Legislative Intent: Rather than assuming CIPA clearly applies to internet analytics, the court found the statute’s language ambiguous when mapped onto modern web technologies. To resolve that ambiguity, the court looked at how California’s privacy framework functions today, particularly the California Consumer Privacy Act (CCPA).
  • The Court Found a Conflict with the CCPA if Plaintiffs’ Theory Were Adopted: The CCPA expressly contemplates that businesses will collect and use certain data through website operations, provided they give appropriate notice and honor consumer rights (such as opt-out and deletion rights). The court reasoned that if ordinary analytics tools were treated as criminal “pen registers” under CIPA (i.e., illegal absent a court order), then the CCPA’s compliance structure would be undermined. Put differently, the plaintiff’s interpretation would effectively turn routine, CCPA-regulated website conduct into a criminal act, creating a regime that “punish[es] compliance” rather than improving consumer protection. The court ultimately concluded that the pen register statute “did not, and does not, criminalize the process by which websites communicate with users who choose to access them.”
  • The “Service Provider” Concept Helped the Website Operator: The court also held that website operators can qualify as “electronic communication service providers” under CIPA § 638.50(b). Even assuming IP address collection could be framed as pen-register-like, § 638.51(b) contains an exception allowing recording or use of such information when necessary to operate, maintain, test, or protect the service. That analysis provides another pathway for defendants to challenge CIPA pen register/trap-and-trace claims at the pleading stage.
  • The Court Reaffirmed CIPA’s Pen-Register Focus on Telephonic-Style Surveillance: Lastly, the court emphasized that CIPA’s pen register provisions were designed to address telephonic-style surveillance, not the normal operation of commercial websites and analytics tools. Given the ambiguity, the court declined to stretch CIPA into a broad regulator of standard website analytics practices.

Many prior website privacy cases have treated CIPA as straightforwardly applicable to modern web tracking. Rodriguez stands out because the court:

  • acknowledged the ambiguity in applying decades-old statutory language to modern web technologies;
  • examined legislative intent and California’s broader privacy framework; and
  • dismissed the claims outright (and without leave to amend).

If other courts adopt this reasoning, it could significantly narrow—and possibly eliminate—pen register and trap-and-trace theories based on routine website analytics. That said, the litigation risk is not over; until a binding appellate decision issues, outcomes can still vary across California trial courts.

Even with Rodriguez as helpful authority, businesses should continue to manage risk proactively by prioritizing CCPA compliance, inventorying and auditing website technologies, tightening vendor and “service provider” contractual terms, monitoring CIPA case law, and coordinating compliance and litigation strategies.

Rodriguez signals meaningful judicial skepticism toward efforts to transform ordinary website analytics into criminal wiretapping conduct under CIPA. While not the final word, it is a useful, defense-friendly decision that may help recalibrate how courts evaluate CIPA claims in the online context, especially where plaintiffs’ theories would collide with the CCPA’s established compliance framework.

Enforcement of California’s Delete Act is accelerating. The California Privacy Protection Agency (CPPA) recently sent a clear message to data brokers: register, pay the required fee, and be prepared to defend your data practices, especially when they involve sensitive populations.

CPPA announced recent settlements with two data brokers totaling more than $100,000 for failing to register as required under the Delete Act:

  • Datamasters (Texas-based reseller): $45,000 settlement; and
  • S&P Global (New York-based market intelligence company): $62,600 settlement.

Datamasters was also ordered to stop selling all personal information about Californians, effectively preventing it from operating as a data broker in the state.

The Datamasters case was not only about a registration failure, but also about the nature of the data involved. According to the decision, in 2024, Datamasters:

  • Bought and resold names, addresses, phone numbers, and email addresses of millions of people with certain health conditions, including Alzheimer’s disease, drug addiction, and bladder incontinence;
  • Marketed audience segments for targeted advertising based on sensitive or potentially discriminatory categorizations, including “Senior Lists” and “Hispanic Lists”; and
  • Maintained additional lists based on political views, banking activity, and grocery- and health-related purchases.

Enforcement head Michael Macko framed the risk in terms of downstream misuse, not just advertising compliance: “Reselling lists of people battling Alzheimer’s disease is a recipe for trouble… History teaches us that certain types of lists can be dangerous.” The takeaway is that regulators are treating sensitive list-based targeting as high-risk because it can enable profiling, discrimination, manipulation, or the targeting of vulnerable individuals.

S&P Global similarly failed to register and lacked certain compliance controls. As a result, S&P Global must adopt registration and compliance auditing procedures.

The Delete Act’s core requirement is straightforward: it requires companies to register annually and pay a fee if they were data brokers in the previous year. These enforcement actions show that a failure to register can escalate quickly, particularly where the business model involves sensitive personal data or audience lists tied to health, demographics, or beliefs.

Data brokers should take note that:

  • Registration is not optional. Unintentional failures can still trigger penalties and mandated process changes;
  • Sensitive-data monetization invites scrutiny. Health, age, perceived race, and political views are treated as inherently higher risk;
  • Controls matter. Expect pressure for durable compliance systems such as internal audits and documented procedures; and
  • Enforcement can restrict operations. Consequences can extend beyond fines (like what happened to Datamasters).

California’s 2025 legislative session ended with a familiar message to businesses: privacy compliance is expanding in scope, and artificial intelligence (AI) governance is moving quickly from voluntary best practices to enforceable transparency and safety obligations. By the session’s final day, lawmakers had introduced 33 privacy and AI bills and passed 16 for Governor Gavin Newsom to sign or veto. Ultimately, the governor signed four privacy bills and seven AI bills into law, while vetoing five others. Below is a summary of the most consequential enacted measures and some practical compliance takeaways for organizations operating in, or selling or providing services into, California. The through-line is consistent: California is regulating data practices and AI system behavior through disclosure, documentation, accountability, and enforcement hooks.

New Privacy Laws

  • The California Opt Me Out Act requires companies that develop or maintain a web browser to provide consumers with a universal opt-out preference signal that applies to all websites they visit. Even though the mandate is aimed at browser developers, businesses should prepare for more machine-readable opt-out signals and ensure that adtech, analytics, personalization, consent tools, and vendor flows can recognize and honor universal signals consistently across websites. (A minimal sketch of honoring such a signal appears after this list.)
  • AB 1043 (online age verification signals) embraces an alternate model for children’s online safety by shifting obligations: under this law, operating system providers must offer an interface for users to input age-verification information during account setup. The model diverges from other states’ approaches by assigning liability to app developers and excluding operating system providers and app stores from liability. App developers should treat OS-provided age signals as compliance-critical attributes.
  • SB 361 (data broker data collection and deletion) expands data broker oversight through broader disclosures and tougher enforcement. Brokers must disclose whether they collect personal information in numerous categories, including sexual orientation, citizenship status, biometric information, and government identification numbers. Brokers must also disclose to the California Privacy Protection Agency whether they sold or shared data with foreign actors, federal or state agencies (including law enforcement), or generative AI system developers. As a result, data brokers, and companies whose practices resemble data brokering, should revisit registration, reporting, and deletion workflows. Data purchasers should strengthen vendor diligence because these disclosures can surface regulatory and reputational exposure tied to data sources and onward transfers, including to AI developers.
  • AB 45 (limits on collection of health and location data) extends prohibitions to the collection, use, disclosure, sale, sharing, or retention of personal information of an individual located at or within the precise geolocation of clinics or reproductive health care service centers. Because this law provides a private right of action for violations, organizations should inventory location data collection and retention, review geofencing and advertising logic near sensitive locations, audit SDK behavior (including third-party trackers), and update research data governance and law enforcement response playbooks.
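
As referenced in the first bullet above, honoring a universal opt-out preference signal is ultimately a technical task. The best-known such signal today is Global Privacy Control (GPC), which participating browsers send as the Sec-GPC: 1 request header. The following is a minimal sketch assuming the Flask framework; suppress_tracking() is a hypothetical stand-in for whatever adtech and analytics hooks a site actually uses.

# Minimal sketch of honoring a browser-sent universal opt-out signal using
# the Global Privacy Control (GPC) convention ("Sec-GPC: 1" request header).
# Assumes Flask; suppress_tracking() is a hypothetical placeholder.
from flask import Flask, request

app = Flask(__name__)

def suppress_tracking() -> None:
    # Hypothetical: disable sale/share pixels, analytics beacons, etc.
    pass

@app.route("/")
def index():
    if request.headers.get("Sec-GPC") == "1":
        suppress_tracking()  # treat the signal as a valid opt-out request
        return "Universal opt-out signal honored; tracking disabled."
    return "No universal opt-out signal detected."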

AI Transparency and Safety

The Transparency in Frontier Artificial Intelligence Act requires certain frontier AI developers with revenues of at least $500 million to disclose safety efforts, including:

  • Disclosures to the Office of Emergency Services, plus mandatory third-party audits;
  • Disclosures identifying which accepted standards were integrated into their frameworks; and
  • A clear safety framework adopted and published on the developer’s website.

In-scope developers should make governance audit-ready through written safety frameworks, mapped standards, and disclosure processes. Enterprises buying frontier model services should expect procurement and due diligence changes as vendors align to audit and disclosure expectations.

Additionally, SB 243 (companion chatbots with disclosure, safety protocols, and suicide prevention reporting) regulates companion chatbots, described as systems designed to use human-like responses, satisfy social needs, and sustain relationships. The regulation requires:

  • Clear and conspicuous disclosure that the user is engaging with AI, not a human, if a reasonable person could be misled; and
  • Safeguards that prevent engagement unless there is a protocol to prevent harmful content, including outputs related to self-harm and suicidal ideation, and sexual content if the user is a known minor.
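
To illustrate what those two requirements might look like in practice, here is a minimal, hypothetical Python sketch: it prepends an AI disclosure to every reply and withholds responses touching on self-harm in favor of a crisis resource. The keyword list is a placeholder; real systems rely on trained classifiers and clinically reviewed escalation protocols.

# Hypothetical sketch of SB 243-style safeguards: a clear AI disclosure on
# every response and a gate that redirects self-harm-related messages to a
# crisis resource. Keyword matching is a placeholder for a real classifier.
SELF_HARM_TERMS = ("suicide", "kill myself", "self-harm")  # placeholder list

CRISIS_MESSAGE = ("I'm an AI and can't help with this, but you can reach the "
                  "988 Suicide & Crisis Lifeline by calling or texting 988.")

def respond(user_message: str, model_reply: str) -> str:
    if any(term in user_message.lower() for term in SELF_HARM_TERMS):
        return CRISIS_MESSAGE  # escalate instead of engaging
    return f"[You are chatting with an AI, not a human.] {model_reply}"

print(respond("Tell me about your day", "It was great!"))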

What to Expect in 2026

The second half of the California legislative session has begun. More than 22 bills will be carried over, with a February 20, 2026, deadline for new bills. Privacy bills in committee include tweaks to the California Consumer Privacy Act (CCPA) and the California Invasion of Privacy Act (CIPA), plus workplace surveillance bills. Potential new data broker obligations and new reporting for businesses collecting precise geolocation are also on the list. Ten AI bills are pending further action, including multiple bills on algorithmic pricing, plus bills on AI bots, high-risk automated decision-making, copyright, and discrimination.

Together, these enactments confirm that California is steadily converting privacy and AI risk management into operational requirements that regulators and plaintiffs can test through documentation, disclosures, and demonstrable controls. Organizations should align engineering, product, procurement, and legal around a single compliance roadmap that covers universal opt-out signal recognition, age and minor related safeguards, data broker style reporting and deletion workflows, sensitive geolocation and health data guardrails, and AI transparency and safety governance that can withstand audit and vendor diligence. With dozens of bills carrying over and new proposals imminent, the most resilient posture is to institutionalize repeatable processes now, including data mapping, vendor oversight, policy and notice updates, testing and monitoring, and incident response playbooks, so that new California requirements can be absorbed as incremental changes rather than disruptive rework.

On January 5, 2026, the U.S. District Court for the Southern District of New York upheld two discovery orders requiring OpenAI to produce a sample of 20 million de-identified user logs from ChatGPT as part of wide-ranging copyright litigation brought by news organizations and class plaintiffs. This decision offers important insights into how federal courts are currently approaching the intersection of discovery, user privacy, and the relevance of data from large language models.

Factual and Procedural Background

The plaintiffs sought discovery of logs reflecting users’ conversations with ChatGPT, including both prompts and model outputs. OpenAI, which retains tens of billions of such logs in the ordinary course of business, initially resisted a July 2025 motion by the plaintiffs to compel the production of a 120-million-log sample. OpenAI instead proposed a smaller sample of 20 million de-identified conversations and indicated it would remove personally identifiable and other private information from the sample using a custom de-identification tool. Plaintiffs agreed to this smaller log sample, but preserved their request for a larger production if warranted.

In October 2025, OpenAI changed its position, offering to run search terms across the 20-million-log sample and produce only those conversations that implicated the plaintiffs’ works. OpenAI argued that this approach would better protect the privacy of ChatGPT users. The following day, plaintiffs responded with a renewed motion to compel production of the entire de-identified 20-million-log sample rather than just a filtered subset.

On November 7, 2025, Magistrate Judge Wang granted the plaintiffs’ motion and ordered production of the full 20-million-log de-identified sample. OpenAI’s motion for reconsideration was denied. The court concluded that the full sample, comprising logs both relevant and seemingly irrelevant to the plaintiffs’ claims, was necessary for a complete analysis, noting that even logs not directly implicating plaintiffs’ content could be relevant to OpenAI’s asserted fair use defenses. Judge Wang also weighed privacy considerations but determined that these concerns were sufficiently addressed by three main safeguards: (i) reducing the volume from tens of billions of logs to 20 million; (ii) de-identification; and (iii) a standing protective order governing the use of discovery in the case.
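
OpenAI’s de-identification tool is custom and has not been made public. Purely to illustrate the class of techniques such pipelines typically include, here is a minimal Python sketch of pattern-based redaction; real de-identification at this scale would also involve named-entity recognition, human review, and auditing.

# Illustrative only: pattern-based redaction of common identifiers. This is
# not OpenAI's tool; it shows the general class of de-identification step.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each identifier match with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or (555) 123-4567."))
# -> Reach me at [EMAIL] or [PHONE].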

District Court’s Analysis

OpenAI objected to Magistrate Judge Wang’s orders, arguing that they inadequately balanced privacy interests against the requested discovery and that the court should have adopted its proposed, less burdensome production method. District Court Judge Stein, reviewing the objections, affirmed both discovery orders.

Key findings from the January 5, 2026, order include:

  • Balancing relevance and privacy: The District Court found that Magistrate Judge Wang adequately balanced user privacy with discovery needs. In particular, the reduction in sample size, the use of de-identification, and a protective order were sufficient to address privacy concerns in the context of this litigation;
  • No requirement for least burdensome discovery: The District Court rejected OpenAI’s argument that Magistrate Judge Wang was obligated to order the “least burdensome” means of production, such as filtered search-term results. Notably, the District Court emphasized that no applicable authority required such a standard in these circumstances;
  • Distinguishing Rajaratnam: OpenAI relied primarily on Securities and Exchange Commission v. Rajaratnam, 622 F.3d 159 (2d Cir. 2010), to argue that stronger privacy protections were necessary. The District Court distinguished Rajaratnam, noting that it involved surreptitiously recorded, potentially illegal wiretaps and far greater privacy interests. By contrast, ChatGPT users voluntarily provided their data to OpenAI as part of ordinary platform usage, and there is no question regarding the legality of OpenAI’s retention of logs here; and
  • Relevance beyond direct infringement: Echoing Magistrate Judge Wang’s findings, the District Court observed that even logs which do not reproduce plaintiffs’ works may help OpenAI assert defenses such as fair use and are thus “relevant for this case” under the governing discovery standard.

Implications for Organizations: Legal and Governance Considerations

The court’s order carries several practical takeaways for organizations:

  • Discovery of de-identified user data: Even where data is produced in de-identified form and subject to a protective order, courts may still require production at scale if the dataset is relevant and proportional. Privacy risk management for AI interactions should assume that de-identification is a risk-reduction step and not a guarantee. Further, protective order terms and access controls can become as important as the underlying redaction method;
  • Adequacy of safeguards: Here, the analysis credited a bundle of controls: reduced scope (tens of billions down to 20 million), OpenAI’s de-identification, and an existing protective order. Organizations should expect the “safeguards” inquiry to be fact-specific and cumulative. For example, if a vendor cannot describe its de-identification workflow or cannot operationalize access restrictions and auditing for large text datasets, a court might be less receptive to privacy objections than if those controls are mature and demonstrable;
  • Relevance is not limited to “copies”: The court accepted that logs “disconnected from” plaintiffs’ works could still be relevant, including to OpenAI’s fair use defense. This holding implies that, once a party’s defenses turn on how a system behaves across many interactions, the discoverability of “non-infringing” examples can increase. Organizations should anticipate that litigation positions taken on defenses (not only claims) can influence how much data becomes discoverable and what sampling methodologies a court will accept;
  • No absolute right to “minimally intrusive” discovery: The court held that OpenAI identified no caselaw requiring a court to order the least burdensome discovery possible or to specifically explain why it rejected a party’s alternative discovery proposal, and it upheld Magistrate Judge Wang’s decision to require production of the full de-identified sample. For organizations, a “we will run the search and give you results” approach can be characterized not only as burden-reducing, but also as control-shifting. When one party controls the tooling, index, and match logic, the other party may argue it cannot validate completeness or test alternative hypotheses. This order suggests courts may favor approaches that preserve the requesting party’s ability to analyze the dataset when proportionality and privacy safeguards are otherwise satisfied;
  • Vendor and platform data practices (contracting should account for discovery posture, not just steady-state privacy): The order reflects that OpenAI retained “tens of billions” of chat logs in the ordinary course of business, and that those logs became a central discovery target. Organizations should focus on vendor diligence. Contracts should address discovery realities, including what is retained, for how long, under what governance controls, and what mechanisms exist to support de-identification and controlled production if litigation compels it. A contract that is silent on these points can leave customers with limited leverage once a vendor’s logs are in scope;
  • Highlighting litigation risks in AI use: Magistrate Judge Wang treated users’ “sincere” privacy interests as one factor in proportionality but still found production appropriate given the safeguards. For organizations, this is a reminder that user expectations about privacy are relevant, but they do not necessarily prevent disclosure in civil discovery. That is especially important for employees using consumer-facing tools for work-related tasks, where the organization may not control retention settings and may not even know what was submitted; and
  • Importance of comprehensive AI governance: Because this dispute is about “conversations” defined as prompts and outputs, organizations should treat conversational AI data as a discoverable record category and build governance accordingly. This could mean mapping what AI interaction data exists, clarifying approved and prohibited data types, implementing technical controls to reduce sensitive inputs, preparing litigation holds, and coordinating with vendors.

The January 5, 2026, order is a reminder that, in AI litigation, interaction logs can play a key role. Courts may be willing to compel production of very large datasets when relevance is framed broadly enough to include defenses like fair use, and when sampling, de-identification, and protective orders are presented as workable privacy safeguards. For organizations, the most durable lesson is not simply “be careful what you type into AI,” but that AI governance now includes discovery posture: what is retained, what can be produced, who controls the tooling, and what protections actually function at scale.

After a decade of cloud migration and incremental modernization, the technology sector is approaching an inflection point. This year, 2026, is shaping up to be the year AI must move from pilots to production. The focus is shifting from more tools and bigger platforms toward autonomy, context, and embedded intelligence across the stack, from software to devices to semiconductors to hyperscalers. The biggest risk is no longer betting too aggressively on AI; it is hesitating too long.

Many enterprises have spent years re-platforming legacy applications and adopting cloud-first operating models. Now cloud investment is beginning to plateau as budgets and leadership attention shift toward agentic and autonomous systems that can act in real time.

The opportunity is large, but the blockers are familiar:

  • Legacy systems that are difficult to integrate or refactor;
  • Fragmented data that limits context and governance;
  • Regulatory and compliance demands that require stronger control frameworks;
  • Labor constraints and skills gaps;
  • Geopolitical shifts affecting supply chains, infrastructure planning, and security priorities;
  • AI systems increase the surface area for sensitive data exposure, retention risk, and secondary use; this raises the bar for consent, minimization, and auditability across training, inference, and logging; and
  • Agentic systems introduce new failure modes, including prompt injection, tool misuse, data exfiltration, and privilege escalation; traditional app security controls often do not map cleanly to AI workflows.

The old playbook of slow modernization, endless pilots, and delayed scaling will not hold. Organizations that remain in pilot mode will fall behind.

Shifts that will define 2026:

  1. Edge computing becomes a growth engine. Intelligence moves closer to devices, vehicles, factories, and chip-level inference engines, enabling real-time decisions and driving demand for inference-optimized semiconductors;
  2. Fiber and satellite enable the next wave of services. As AI becomes heavier and more distributed, the ceiling is set by connectivity. Fiber buildouts and satellite networks expand reliable, low-latency access and unlock new markets;
  3. Policy and domestic production reshape strategy. U.S. investments in broadband, data infrastructure, and domestic chip capacity increase resilience, while raising expectations for data sovereignty, AI safety, and labor compliance;
  4. Ecosystems replace do-it-yourself transformation. As architectures grow more complex, success depends on partnerships across hyperscalers, SaaS, semiconductors, startups, and industry collaborators. Build versus buy becomes compose, partner, and integrate;
  5. Workforce reskilling becomes the differentiator. The limiting factor is capability. As autonomy scales, the most valuable employees combine domain expertise with the ability to work across data, platforms, and integrated AI systems. The biggest differentiator will be operationalizing models through people and process; and
  6. Privacy and security become the gating layer for AI at scale. As AI moves from copilots to autonomous execution, organizations will treat privacy and security as product requirements, not after-the-fact controls. That means least-privilege access for agents, strong identity and authorization around tools and data, encrypted and governed pipelines, and clear boundaries on what models can retain, log, and learn from. Teams that operationalize “secure by design” and “privacy by design” will ship faster because they will spend less time reworking incidents, approvals, and compliance surprises.
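
To make the least-privilege point in item 6 concrete, below is a minimal Python sketch of tool gating for an AI agent: the agent can only invoke functions on an explicit allow-list, and every call is logged for audit. The tool names are hypothetical; a real deployment would add authentication, argument validation, and rate limiting.

# Minimal sketch of least-privilege tool gating for an AI agent. Tool names
# are hypothetical; real systems add authn/z, input validation, rate limits.
import logging

logging.basicConfig(level=logging.INFO)

def search_docs(query: str) -> str:
    return f"results for {query!r}"

def file_ticket(summary: str) -> str:
    return f"ticket filed: {summary}"

ALLOWED_TOOLS = {"search_docs": search_docs, "file_ticket": file_ticket}

def invoke_tool(agent_id: str, tool_name: str, *args: str) -> str:
    """Gate every agent tool call through the allow-list and an audit log."""
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        logging.warning("agent %s denied tool %s", agent_id, tool_name)
        raise PermissionError(f"tool {tool_name!r} is not on the allow-list")
    logging.info("agent %s invoked %s with %r", agent_id, tool_name, args)
    return tool(*args)

print(invoke_tool("agent-7", "search_docs", "retention policy"))
# invoke_tool("agent-7", "drop_tables")  # would raise PermissionError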

Pilot paralysis is becoming a competitive liability. In production, AI is not just a model problem—it is a data handling and security problem. The organizations that scale safely will be the ones that can prove where data flows, who or what can act, what decisions were made, and how systems fail safely when context is incomplete or adversarial. This coming year will reward companies that treat AI as production infrastructure and invest in the foundations, governance, ecosystems, and workforce capability required to scale.

Gmail users are being urged to review and disable two key “Smart Features” settings following privacy concerns stemming from reports that these tools may allow Google to access email content to support AI-driven services and may use users’ data for training. The two settings cover smart features in Gmail, Chat, and Meet, and smart features across other Google Workspace products.

A viral alert by YouTuber Davey Jones claims that users have been automatically opted in to allow Gmail, Chat, and Meet to use message content and attachments, prompting calls to turn off smart features in both the primary Gmail settings and the separate Google Workspace smart-feature controls so that content cannot be used for AI training.

Users have noted that disabling these settings makes Gmail harder to use. Google disputes the alert, saying user content is not being used to train Gemini and that no settings have been changed or auto-enabled. Despite the response, there is confusion about how large platforms are using content to train AI models, which highlights ongoing concerns about transparency and data use. Users who opt out of the features may lose conveniences such as inbox categories, smart compose, grammar tools, and enhanced filtering, but doing so protects their privacy while the debate over default AI-related settings continues. Users who wish to turn off the features can do so through the Settings menu in their Google applications.

Happy New Year! 2025 was a busy year for the Insider authors—we published 271 posts throughout 2025. To kick off 2026, in case you missed them last year, we are providing the articles from 2025 that were most interesting to our readers across various categories.

We hope you enjoy them and look forward to another productive year of keeping our readers informed on the rapidly changing and dynamic areas of data privacy, cybersecurity, information governance, artificial intelligence, and of course—the weekly Privacy Tip!

CYBERSECURITY

FBI Warns of Account Takeover Fraud

Insider Threats Climb + Are Costly

ENFORCEMENT + LITIGATION

EdTech and Privacy of Student Information: A Case Study

Breaches Within Breaches: Contractual Obligations After a Security Incident

DATA PRIVACY

Privacy Under Pressure: What the NYT v. OpenAI Teaches Us About Data Governance

New State Privacy Laws Expand Consumer Data Control in 2026

INFORMATION GOVERNANCE

Why Dumping Sensitive Data on Network Shares is a Liability

ARTIFICIAL INTELLIGENCE

When AI Notetakers Take the Stand: The Legal Risks Lurking in Your Virtual Meetings

PRIVACY TIPS

Privacy Tip #431 – DOGE Has Access to Our Personal Information: What You Need to Know

Privacy Tip #7 – Who is listening to your conversations through your smartphone microphone?

Threat actors had another banner year in 2025. As we head into 2026, looking back on the five top security threats of 2025 may inform our strategy and budgeting for 2026 to prepare for the continued onslaught of attacks.

According to Dark Reading, the top five security threats from 2025 include:

  1. Salt Typhoon

Salt Typhoon, also known as Operator Panda, is a Chinese state-sponsored threat actor best known for targeting telecom giants and the systems used by police for court-authorized wiretapping. The group uses sophisticated techniques to conduct espionage against targets and to pre-position itself for longer-term attacks.

  2. CISA Layoffs and Budget Cuts

Early in the year, the Trump administration dismissed all advisory committee members of the Cyber Safety Review Board (CSRB), a group of public and private sector experts that researches and makes judgments about cybersecurity issues affecting all industries. At the very time the CSRB was dismantled, it was working on a report about Salt Typhoon. (Recall that Salt Typhoon is listed as the #1 threat from 2025.)

In addition to the dismantling of the CSRB, the Cybersecurity and Infrastructure Security Agency (CISA) faced layoffs and budget cuts throughout the year, in part due to the Department of Government Efficiency’s slashing of government spending.

CISA has provided a wide range of services for organizations, including vulnerability guidance, physical and cyber security assessments, election security, and incident response support, including for state and municipal governments and smaller organizations. The cuts have hampered entities’ efforts to protect themselves even as threat actors continue to target them, a dynamic that will persist into 2026.

  3. React2Shell / Log4Shell

React2Shell (CVE-2025-55182) is a vulnerability disclosed in early December that affects the React Server Components (RSC) open-source protocol. “Caused by unsafe deserialization, [the] vulnerability was considered easily exploitable and highly dangerous, earning it a maximum CVSS score of 10. Even worse, React is fairly ubiquitous, and at the time of disclosure it was thought that a third of cloud providers were vulnerable. The vulnerability was named React2Shell in apparent reference to Log4Shell, a similarly dangerous bug from late 2021 that impacted environments with Log4j.” Nation-state actors were among the first to exploit the vulnerability, but within days it was being exploited by run-of-the-mill threat actors.
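
For readers unfamiliar with the vulnerability class, unsafe deserialization means a program reconstructs objects from attacker-supplied bytes, which in many formats can trigger code execution. React2Shell’s specifics involve React Server Components payload handling and differ from the example below; Python’s pickle module merely illustrates the general class.

# Illustration of the unsafe-deserialization class (not React2Shell itself):
# pickle calls back into attacker-chosen code while reconstructing objects.
import pickle

class Exploit:
    def __reduce__(self):
        # On unpickling, asks the deserializer to call print(...); a real
        # attacker would substitute os.system or similar.
        return (print, ("code executed during deserialization!",))

malicious_bytes = pickle.dumps(Exploit())

# A service that blindly deserializes untrusted input "runs" the payload:
pickle.loads(malicious_bytes)
# Mitigation: never deserialize untrusted data with formats that can build
# arbitrary objects; prefer JSON or schema-validated formats.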

  4. Self-Replicating Malware Shai-Hulud

In September 2025, a self-replicating malware known as Shai-Hulud appeared on the scene. Shai-Hulud is an infostealer that infects open-source software components. “When a user downloads a package infected by the worm, Shai-Hulud infects other packages maintained by the user and publishes poisoned versions, automatically and without much direct attacker input. The cycle continues.” The infostealer “uses defenders’ own automation to … corrupt the open source ‘well’ that thousands of companies draw from daily. This creates a significant danger because the threat isn’t just common vulnerabilities; it’s deeply nested, multilayer dependencies,” according to Unit 42’s Justin Moore. “This creates a massive, multilayered attack surface where a single compromise deep in the stack can cascade across thousands of companies simultaneously.”
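
One practical, if partial, mitigation against this style of worm is to keep dependencies from drifting silently: floating version ranges let a freshly published, poisoned release flow into the next install without review. Below is a hedged Python sketch that flags such ranges in an npm package.json; pinning exact versions and using lockfiles narrows the window but is not a complete defense.

# Sketch: flag npm dependencies declared with floating version ranges, which
# allow a newly published poisoned version to be pulled in without review.
import json

RANGE_MARKERS = ("^", "~", ">", "<", "*", "x")  # simplistic semver heuristics

def unpinned_dependencies(package_json_path: str) -> dict[str, str]:
    with open(package_json_path) as f:
        manifest = json.load(f)
    deps: dict[str, str] = {}
    for section in ("dependencies", "devDependencies"):
        deps.update(manifest.get(section, {}))
    return {name: spec for name, spec in deps.items()
            if any(marker in spec for marker in RANGE_MARKERS)}

if __name__ == "__main__":
    for name, spec in unpinned_dependencies("package.json").items():
        print(f"unpinned: {name} {spec}")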

  5. Threat Campaigns Targeting Salesforce Customers

Earlier in 2025, a threat actor compromised Salesloft’s GitHub account and leveraged that access to steal OAuth tokens associated with Salesloft Drift’s Salesforce integration. This led to downstream attacks against hundreds of Salesforce customers’ instances. The incident underscores threat actors’ continued targeting of prominent supply chain companies, where a single successful attack provides access to hundreds or thousands of downstream customers.

These significant security events of 2025 are worthy of consideration when determining a cybersecurity strategy, shoring up vendor management, and budgeting for 2026.

California’s strict privacy laws, particularly the California Invasion of Privacy Act (CIPA), are fueling a surge in class action lawsuits against major companies over their use of online tracking technologies. In recent weeks, prominent brands including Estée Lauder, Nike, and Luxottica have been hit with proposed class actions in the Northern District of California, all alleging unlawful surveillance of website visitors’ personal data. Here’s a breakdown of these cases.

Estée Lauder

Estée Lauder Inc. has been sued by Taajudin Elmarouk, a California resident, who claims the beauty company “secretly deployed” Google and Facebook tracking software on its website without obtaining user consent. The complaint, filed in federal court, alleges Estée Lauder violated CIPA by using tracking technologies that allegedly function as illegal “pen registers” or “trap and trace devices.” Under California law, these tools are likened to surveillance devices and require explicit user permission or a court order.

The lawsuit seeks:

  • Class certification for all California residents who visited the website;
  • Statutory damages;
  • Injunctive relief; and
  • Attorney fees.

Significantly, the complaint notes this case is one of many targeting businesses over so-called “pixel trackers,” a trend rising amid concerns over opaque and challenging CIPA requirements, which even federal judges have called difficult to interpret.

Nike

Nike Inc. is the latest global brand facing a CIPA suit over online tracking. Plaintiff Saleha Abdullah filed a proposed class action claiming Nike’s website deploys tracking technologies from Google, Meta, and The Trade Desk without user consent, collecting:

  • IP addresses;
  • Browsing data; and
  • Device information.

The complaint alleges these trackers serve as unlawful “pen registers” and “trap and trace devices,” just as in the Estée Lauder suit. It further claims Nike uses the acquired data for targeted advertising and real-time bidding, where user profiles are sold to advertisers behind the scenes.

The lawsuit seeks:

  • Class certification on behalf of thousands of California users;
  • Injunctive relief; and
  • Statutory damages.

Additionally, the complaint describes pending state legislative efforts that could curb online tracker suits and echoes the critique from the bench that CIPA is difficult to interpret.

Luxottica

The eyewear giant Luxottica of America Inc. (which operates sites such as Oakley.com, LensCrafters.com, and Ray-Ban.com) is facing a class action alleging it continued tracking users via third-party cookies even after users opted out. Plaintiffs Brandon Moore, Daniel Aldana, and Hope Kambick allege Luxottica violated CIPA by allowing Google, Meta, and Adobe to collect personal browsing data in defiance of users’ explicit choices.

Lawsuit highlights include allegations of:

  • Invasion of privacy;
  • Unjust enrichment;
  • Fraud and deceit; and
  • Violations of CIPA’s wiretapping and pen-register provisions.

The suit aims to represent all California residents who rejected cookies but whose data was still collected. Plaintiffs are seeking:

  • Statutory damages of at least $5,000 per violation;
  • Compensatory and punitive damages;
  • Restitution;
  • Injunctive relief;
  • Attorneys’ fees and costs; and
  • Pre- and post-judgment interest.

These lawsuits reflect a growing trend of privacy litigation in California focused on the use of online tracking and data analytics tools. With federal judges expressing concerns about the complexity of CIPA and state legislators proposing changes, the legal landscape remains unsettled.

For businesses, it is critical to audit and disclose all data collection and third-party integrations; obtaining explicit, informed user consent is more important than ever.
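
As a starting point for such an audit, a first pass can simply enumerate the third-party hosts from which a page loads scripts. The sketch below assumes the requests and beautifulsoup4 libraries; a real audit would also render JavaScript (many trackers are injected dynamically) and cover pixels, iframes, and cookies, not just script tags.

# First-pass tracker inventory: list third-party hosts serving <script src>
# tags on a page. Assumes "requests" and "beautifulsoup4"; dynamically
# injected trackers require a rendering browser and are not captured here.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def third_party_script_hosts(page_url: str) -> set[str]:
    first_party = urlparse(page_url).netloc
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hosts = set()
    for tag in soup.find_all("script", src=True):
        host = urlparse(tag["src"]).netloc
        if host and host != first_party:
            hosts.add(host)
    return hosts

if __name__ == "__main__":
    for host in sorted(third_party_script_hosts("https://www.example.com")):
        print(host)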

For consumers, expect more transparency and notices, but also more complex privacy landscapes until the law evolves or is clarified by the courts or legislature. As the outcomes of these cases unfold, any clarifications or amendments to CIPA will be closely watched by privacy advocates, technologists, and business leaders alike.