Until California’s legislature provides clearer guardrails, companies should expect continued class action activity under the California Invasion of Privacy Act (CIPA), targeting common website tracking technologies. Plaintiffs’ firms are actively testing how far this decades-old statute extends in the modern web environment, and courts have not reached a consensus. That uncertainty creates real litigation risk for organizations that rely on tools like chat widgets, session replay, and analytics.

Many companies use website tools that help improve customer experience, measure performance, prevent fraud, and support marketing efforts. These tools often capture data about how visitors interact with webpages, including clicks, cursor movements, page navigation, chat messages, and form entries. Plaintiffs are increasingly arguing that certain implementations of these tools amount to unlawful interception or recording of communications under CIPA.

The result is a rising wave of proposed class actions that can be expensive to defend, difficult to predict, and costly to resolve. The practical takeaway is straightforward—even if you believe your organization’s practices are reasonable, it is worth reviewing disclosures, consent flows, and vendor configurations now, rather than after a demand letter or complaint arrives.

CIPA was enacted in 1967 to prevent secret wiretapping by both law enforcement and private individuals. The plaintiffs’ bar has since repurposed the statute to challenge modern website technologies, including:

  • Chat features that allow visitors to communicate with a company in real time;
  • Session-replay tools that record user interactions with webpages for troubleshooting and UX improvements; and
  • Analytics code that tracks usage patterns and behavior across the site.

The core allegation is that these tools record or “listen in” on communications without proper consent. Plaintiffs often frame routine website telemetry as covert monitoring, particularly when data flows to third-party vendors.

Some courts have concluded that visitors could reasonably expect chats, form entries, or even certain click activity to remain private. In these decisions, disclosures may not be treated as sufficiently clear or sufficiently tied to meaningful consent for the specific tracking at issue. Other courts have held that website interactions are not confidential where users are clearly told their data and usage may be collected or tracked. In these decisions, prominent disclosures and clear notice can undermine the claim that a “secret” interception occurred.

This lack of uniformity is a major driver of continued filings. Plaintiffs can point to decisions that let claims survive early motions, while defendants can cite dismissals, but neither side has a guaranteed playbook.

While the courts remain split, companies can reduce risk by focusing on a few concrete areas:

  • Revisit Privacy Policy and Terms of Use disclosures;
  • Evaluate consent banners and how consent is captured;
  • Reassess whether you need each tracking tool and its configuration; and
  • Consider arbitration provisions and class action waivers.
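The consent-gating idea behind the second and third bullets can be sketched in a few lines of Python. This is a minimal illustration under assumed facts, not a compliance tool: the tool names and consent categories are hypothetical, and real implementations depend on your consent-management platform. The core principle is that no tracking script fires unless the visitor has affirmatively opted into its category.

```python
# Minimal sketch: gate each tracking tool behind an explicit consent category.
# The tool names and categories below are hypothetical examples, not a real
# vendor list.

TRACKING_TOOLS = {
    "chat_widget": "functional",
    "session_replay": "analytics",
    "analytics_pixel": "analytics",
    "ad_retargeting": "marketing",
}

def tools_allowed(consent):
    """Return only the tools whose consent category the visitor opted into.

    Tools default to OFF when a category is absent from the consent record,
    i.e., no recorded consent means the script never loads.
    """
    return [tool for tool, category in TRACKING_TOOLS.items()
            if consent.get(category, False)]

# A visitor who accepted analytics but declined marketing:
print(tools_allowed({"functional": True, "analytics": True, "marketing": False}))
# -> ['chat_widget', 'session_replay', 'analytics_pixel']
```

The design choice worth noting is the default-off behavior: an empty consent record loads nothing, which aligns the site's actual behavior with the disclosures described above.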

CIPA was not written with session replay, chat widgets, or modern analytics in mind, but is being used to challenge them now. With courts split on whether website interactions are “confidential” and what level of disclosure and consent is sufficient, the best risk-management approach is proactive: confirm what your site is doing, align disclosures with reality, strengthen notice and consent flows, and evaluate contractual tools like arbitration clauses and class waivers.

Novelty is a core requirement for any invention to be patentable. Put simply, your invention generally cannot have been publicly disclosed before the patent application’s effective filing date. In the United States, 35 U.S.C. § 102 includes a one-year grace period for certain public disclosures made before you file—many other jurisdictions do not have this grace period. Europe, for example, generally applies an absolute novelty standard, under which your own public disclosure before filing can bar patentability.

This is where the EU AI Act can create an unexpected patentability issue. The Act sets out a comprehensive framework for regulating AI and includes a mandatory registration requirement for AI systems considered “high-risk.” An AI system is considered high-risk when it relates to areas such as safety components, critical infrastructure, education, border control, and law enforcement.

Before a high-risk AI system can be placed on the market, the provider must register the system with the EU Commission and submit information about the system in a searchable and publicly accessible EU database. If the information submitted includes enabling technical details, that registration can function as a public disclosure and can block patentability in absolute novelty jurisdictions, like Europe and China.

Bottom line: if EU AI Act registration is on your roadmap, build IP planning into the timeline. Companies should consider preparing and filing patent applications at or before submitting information to the EU Commission, so their own disclosures do not become prior art against later-filed patent applications.

On February 5, 2026, a Massachusetts federal judge issued an order staying information-sharing between the IRS and ICE, as well as a preliminary injunction prohibiting Kristi Noem, Secretary of the Department of Homeland Security, ICE, acting Director Todd Lyons, and any DHS and ICE agent from “inspecting, viewing, using, copying, distributing, relying on, or otherwise acting upon any return information that had been obtained from or disclosed by the IRS” through a Memorandum of Understanding (MOU) signed between the two agencies on April 7, 2025. The MOU was designed to enable the “sharing of tax information across agencies to implement President Trump’s direction that DHS ‘take immediate steps to identify, exclude, or remove aliens illegally present in the United States.’”

In addition, the court found that the storage of the taxpayer data on an unnamed ICE employee’s computer “constitutes impermissible storage” of the data “in contravention” of the Internal Revenue Code. The court ordered the defendants to provide a copy of the order to the employee “whose government-issued computer holds the information received from the IRS on August 7, 2025,” and required ICE to confirm that the order was delivered to that individual by February 10, 2026. The court held that “Plaintiffs have established a high likelihood that ICE’s handling, use, and storage of taxpayer addresses violated and continues to violate” the Internal Revenue Code.

The facts behind the lawsuit are straightforward: following the execution of the MOU between the IRS and ICE, ICE requested data on almost 1.3 million taxpayers on August 6, 2025, and on August 7, 2025, the IRS provided taxpayer information for 47,000 individuals that matched the ICE request. ICE has stored and used that data since August 7, 2025.

The court stated that the disclosure of taxpayer data from the IRS to ICE was contrary to Section 6103 of the Internal Revenue Code, which “strictly prohibits the disclosure” of taxpayer data to government agencies, private entities, or citizens, a prohibition taxpayers rely upon when filing their taxes. The court further noted that “the Internal Revenue Code provides strong privacy protections for information submitted by taxpayers and/or obtained by the IRS.”

The court found that the plaintiffs sufficiently demonstrated irreparable harm and that they have “been and will continue to be harmed by the data-sharing due to the erosion of trust between the organizations and their members.”

We have outlined numerous privacy concerns about how taxpayer and other data have been shared between federal agencies in this administration, including the actions of the Department of Government Efficiency, a/k/a DOGE. These issues will continue to be litigated, and we will update you on court rulings as they progress.

Security professionals rely on the implementation of multifactor authentication (MFA) to defend against phishing attacks and intrusions. Unfortunately, we can’t completely rely on MFA to protect us as threat actors (more specifically, ShinyHunters) are now targeting companies in technology, financial services, real estate, energy, healthcare, logistics, and retail with synchronized vishing-phishing attacks.

In the newest attacks, threat actors pretend to be IT staff, calling employees to tell them that the company is updating its MFA settings. While on the phone with the employee, the threat actor directs them to a malicious credential-harvesting site that spoofs the company’s login page to capture the employee’s single sign-on credentials and MFA codes, then registers the attacker’s own device for the MFA push.

The threat actors cover their tracks and bypass security notices. Once they gain access to the company’s systems, they exfiltrate sensitive data, extort ransoms from the companies, and harass employees.

It is crucial that companies continue to educate employees on the newest cybersecurity threats and schemes so they can identify them and avoid becoming victims. Sophisticated combined vishing-and-phishing schemes like the one described above are still unfamiliar to many users, who do not appreciate how powerful and successful the combination can be. Incorporate these recent threats into your next cybersecurity training or company-wide cyber tip.

States are weighing in on whether grocery stores, hotel chains, and retailers should be using personal consumer information such as “browsing history” and “location data” to decide what price you see when someone else might see a different one. California is pioneering this inquiry, approaching individualized pricing as a potential privacy problem. At the end of last month, California Attorney General Rob Bonta announced an “investigative sweep” into businesses’ use of personal data to set individualized prices, warning that “surveillance pricing” may violate the California Consumer Privacy Act (CCPA). The inquiry is aimed at companies in the retail, grocery, and hotel sectors, focusing on how they use data like “shopping and internet browsing history, location, demographics, and other data” to price goods and services.

Attorney General Bonta is also asking about the surrounding governance: what businesses disclose, what “pricing experiments” they run, and how they ensure compliance with “algorithmic pricing, competition, and civil rights laws.” The core consumer-facing concern is “whether businesses are charging people different prices for the same good or service.”

Not everyone agrees that states should police this through disclosure requirements. The National Retail Federation sued New York Attorney General Letitia James over the state’s algorithmic pricing disclosure law, arguing it violates the First Amendment. The trade group’s concern is that even when consumers “know” pricing is personalized through loyalty programs, companies may still be compelled to display a disclosure that personal data was used to set the price “based on an algorithm.” California may see similar complaints and arguments.

States seem to be moving from theory to enforcement and mandates. Companies will need to respond by reassessing loyalty programs, discount targeting, and other data-driven pricing strategies for regulatory risk.

Florida website tracking litigation is gaining momentum this year, with plaintiffs increasingly invoking the Florida Security of Communications Act (FSCA) to challenge common website analytics and advertising tools, especially where those tools allegedly capture and share sensitive user communications. A decades-old state wiretap statute now aimed at modern website tracking, the FSCA provides for liquidated damages of up to $1,000 per violation.

Specifically, a 2025 decision changed Florida’s legal landscape and opened the door for a possible flood of FSCA claims. W.W. v. Orlando Health, Inc., No. 6-24-cv-1068-JSS-RMU, 2025 WL 722892 (M.D. Fla. Mar. 6, 2025). In Orlando Health, the plaintiff alleged that the defendant’s website intercepted communications about her healthcare treatment through third-party pixels that captured and transmitted the plaintiff’s communications with the website, including “the plaintiff’s health conditions,” “desired treatment,” and “preferred doctors.” Id. at 2. The plaintiff further alleged the information was used to serve targeted advertisements.

The court cited the legislative intent of the FSCA, emphasizing that the legislature specifically intended to protect private medical information. On that basis, the court concluded the plaintiff adequately alleged interception of contents under the FSCA. In other words, the theory that the website tools captured substantive healthcare communications was sufficient, at least at the motion to dismiss stage. Since this decision, plaintiffs have filed hundreds of similar wiretap claims in small claims court under the FSCA arising out of website tracking technology. This may signal that Florida federal courts will allow similar privacy and wiretapping pleadings to survive early challenges.

The combination of alleged interception of content, the use of third-party pixels, and the statute’s liquidated damages framework is perhaps the driving force behind the emerging FSCA litigation trend.

In a strongly worded order, Judge Julie A. Robinson of the U.S. District Court for the District of Kansas publicly admonished and sanctioned four lawyers representing a plaintiff company in a patent infringement case for using ChatGPT to find caselaw to support a response to a motion to exclude an expert witness, and a response to the defendant’s motion for summary judgment.

In the 36-page order, the court made it clear that not only the lawyer who used AI to generate the hallucinated citations, but also his partners and local counsel bore responsibility for the filing of the motion. This is a clear reminder of the non-delegable duty of lawyers under Rule 11 of the Federal Rules of Civil Procedure. The Court held that “[b]ecause there is no dispute that all five . . . attorneys signed both documents that included these errors, and they admit that not one of them verified that the case law in those briefs actually exist and stand for the propositions for which they were cited, their conduct violates Rule 11(b)(2).”

The brief facts are these: a seasoned lawyer admitted pro hac vice before the court prepared the motion using ChatGPT, and he admitted it was his first time doing so. He was under personal stress and admitted he was not thinking straight; although he meant to check the citations before filing the motion, he never did. His partners, although also admitted pro hac vice, were not responsible for the motion, never read it, and did not participate in its preparation. The associate assigned to the case read the motion and made a few changes but was not assigned to check the citations. Local counsel relied on the pro hac vice counsel and reviewed the motion briefly before filing but never checked the citations. The court’s order points out that the response to the motion to exclude “contains a litany of problems: (1) nonexistent quotations; (2) nonexistent and incorrect citations; and (3) misrepresentations about cited authority.” Some of the same issues were included in the response to the motion for summary judgment.

Here’s what the court had to say about each of the attorneys’ responsibilities and the sanctions it assessed:

  • The most culpable lawyer used ChatGPT and failed to check the citations. Although he was experiencing difficulties in his personal life, he never asked for an extension or help from the other five lawyers representing the plaintiff in the case. Instead, he filed the motion on time, cut corners by using ChatGPT, and filed a response that included the deficiencies above. Neither his co-counsel nor his client were aware of the generative AI use. He was a “novice” at using it and is only now aware of the risks. Although the court was sympathetic to his personal plight (and he graciously emphasized that he was the only one culpable), the court stated that “citing to a nonexistent case, attributing a nonexistent quotation to an existing case, and misstating the law violates Rule 11(b).” The violation is the failure to verify the cases, not the intent behind the failure. The court noted that the attorney’s unawareness of the “very real risk of case hallucinations,” after several instances of Rule 11 sanctions being levied against lawyers for this same violation, was an aggravating factor. The court directed the attorney to implement a robust policy to “deter any future instance of submitting unverified authority in a filing…[by requiring] him to submit to the Clerk for filing a certificate outlining specific internal procedures at his firm that he intends to impose. . . and imposes a monetary fine of $5,000, . . .and revokes his pro hac vice admission to this Court.” The court further directed the attorney to “self-report to the state disciplinary authorities where he is licensed by providing them with a copy of this Order.”
  • The attorney’s partners, who did not participate in brief preparations, signed the filing despite failing to determine the accuracy of the contents. One of the partners assigned an associate to help and was on a family vacation when it was filed. The court pointed out that merely affixing their names to the brief without reviewing it, “violated [their] duty to conduct a reasonably inquiry into the facts and the law before filing.” The court reiterated that Rule 11 is non-delegable and imposed a fine of $3,000 for each of the co-counsel who signed the pleadings.
  • The court did not sanction the associate assigned to the case, as he had no supervisory authority and “was placed in a difficult position by his supervising attorneys.”
  • As for local counsel, he also signed the defective pleadings, and “by doing so, he vouched for the Texas attorneys in this matter.” He failed to cite-check them. The firm set forth its efforts to ensure this doesn’t happen again and provided the court with a formal policy around generative AI use, and the attorney “voluntarily sanctioned himself in the form of refraining from serving as sponsoring or local counsel for pro hac vice attorneys for a period of 12 months.” With the above considerations, the court sanctioned the local counsel $1,000.

The clear takeaway is that firms need to address the reality that lawyers may be tempted to use GenAI even when they have no experience with it, whether or not they understand the consequences, and especially when they are dealing with personal issues. Judges have no sympathy when it comes to hallucinations and misrepresentations in briefs; they waste time and resources and are a clear violation of Rule 11. Ignorance is not a defense, and relying on your partners, co-counsel, or local counsel will not get you off the hook, as Rule 11 is non-delegable. Firms may wish to adopt policies and guidance on the use of GenAI tools, requiring every lawyer who signs a pleading to check and verify the citations before affixing their signature. There can be no reliance on others before a pleading is filed.

On January 27, 2026, the Federal Trade Commission (FTC) signaled the agency’s reduced appetite for regulating artificial intelligence. At the Privacy State of the Union Conference in Washington, DC, FTC Bureau of Consumer Protection Director Chris Mufarrige stated there is “no appetite for anything AI-related” in the FTC’s rulemaking pipeline, while adding that the agency has other rule ideas in development. Mufarrige’s statement follows the FTC’s December 2025 decision to reopen and set aside a 2024 consent order involving AI writing assistant Rytr that had barred the company from providing AI-enabled services that allegedly helped users write false or misleading product reviews.

This shift aligns with the current federal administration’s broader deregulatory stance on AI, which emphasizes removing barriers to innovation rather than expanding agency-made rules. The FTC specifically cited President Trump’s AI Action Plan as part of its rationale for revisiting the Rytr matter, pointing to a policy preference for rolling back rules and decisions viewed as standing in the way of AI development. Mufarrige also indicated the Commission will pursue more “sparing” rulemaking than the Biden-era FTC, suggesting the agency may lean more heavily on selective enforcement priorities and existing legal authorities instead of launching new AI-specific regulations.

Importantly, the FTC is not stepping back from privacy enforcement altogether. Mufarrige emphasized that protecting children’s privacy online will “play a big role” in the coming year’s enforcement docket, with particular focus on how age verification interacts with the Children’s Online Privacy Protection Act (COPPA), including any “tension between the two” and how it might be resolved. The agency’s recent COPPA track record, including a recent $10 million settlement with Walt Disney Co., reflects what Mufarrige described as a consistent theme: ensuring “that parents have control over their kids’ data.”

It’s that time of the year when W2s and 1099s pile up in preparation for that dreaded tax return filing deadline. Now that AI tools seem to make any task, even the most dreaded, more efficient, it is tempting to use one to assist with your tax return. Sounds good, but experts are warning against it.

In particular, “experts have raised concerns about potential inaccuracies and privacy issues associated with using AI for tax returns.” Tax returns are complicated, and AI tools have not developed sufficiently to navigate those complexities. Because tax returns are complex, users may not be able to tell when a tool’s output is inaccurate, and the inaccuracy could have a significant impact on the filer, potentially leading to fines and penalties. Experts warn that you cannot rely on the tool’s calculation of what is owed or refunded.

In addition, tax returns include highly sensitive and personal information, including full Social Security numbers, health information, and financial information. Most generative AI tools specifically caution users against inputting this highly sensitive information because it will be used to train the tool’s algorithms and could appear as output to another person’s query. This means that your highly sensitive, identifiable information could be disclosed to someone else without your knowledge.

Bottom line: think again before you use an AI tool for your tax return. Once you provide that highly sensitive information to an AI tool, you lose control over it, and it could be disclosed to others.

For resources to assist with your tax filing, you can visit the IRS website, IRS.gov, and for resources related to privacy concerns with tax returns, visit the FTC website, FTC.gov.

We continue to alert our readers to the uptick in, and successful use of, vishing attacks against companies. Threat actors continue to be creative in developing strategies to use vishing to gain access to systems.

According to Cyberscoop, (a publication that I read religiously), Mandiant has confirmed that “multiple cybercrime groups,” including ShinyHunters, are “combining voice calls and advanced phishing kits to trick victims into handing over access” to company systems. The scary thing about this new wave of vishing attacks is that threat actors are using sophisticated vishing campaigns to compromise single sign on (SSO) credentials, then “enroll threat actor controlled devices into victim multifactor authentication solutions.” This effectively bypasses well-known security tools used by companies to prevent unauthorized access into their systems.

Once threat actors gain access, they move into the company’s SaaS environment to exfiltrate data and then launch extortion campaigns. In addition:

Cybercriminals are registering custom domains that mimic legitimate single sign-on portals used by targeted companies, then deploying tailored voice-phishing kits to call victims while remotely controlling which pages appear in the victim’s browser. This lets the attackers sync their spoken prompts with multifactor-authentication requests in real time, increasing the likelihood the victim approves or enters the needed codes on cue.

In response to these attacks, Okta released threat intelligence confirming that it has seen “multiple phishing kits developed” for use with other SSO and cryptocurrency providers. To be clear, this is not a vulnerability in the SSO products, but a scary way for threat actors to dupe users into providing credentials.
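Because the attacks above hinge on lookalike domains that mimic a company’s real SSO portal, one simple technical control is an exact-match hostname allowlist. The sketch below is illustrative only, under the assumption that a company knows the exact hostnames of its legitimate portals; the `example.com` hostnames are hypothetical placeholders.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of a company's real SSO hostnames. The attacks
# described above rely on custom lookalike domains, which an exact-match
# check rejects.
LEGITIMATE_SSO_HOSTS = {"login.example.com", "sso.example.com"}

def is_legitimate_sso(url):
    """Return True only when the URL's hostname exactly matches an approved portal.

    Substring or 'contains' checks are not enough: a spoofed domain like
    'login.example.com.attacker.net' contains the real hostname but is
    controlled by the threat actor.
    """
    host = urlparse(url).hostname or ""
    return host.lower() in LEGITIMATE_SSO_HOSTS

print(is_legitimate_sso("https://login.example.com/start"))         # True
print(is_legitimate_sso("https://login.example.com.attacker.net"))  # False
```

In practice this kind of check lives in browser policy, email filtering, or user training (“type the SSO address yourself; never follow a link given over the phone”), but the exact-match principle is the same.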

Due to the success of these new vishing campaigns using SSO, now is the time to remind your users about vishing, how it works, the newest ways threat actors are trying to get users to provide their credentials, and how SSO can give the threat actors the keys to the kingdom.