Security professionals rely on multifactor authentication (MFA) to defend against phishing attacks and intrusions. Unfortunately, we can’t rely on MFA alone to protect us, as threat actors (most notably ShinyHunters) are now targeting companies in technology, financial services, real estate, energy, healthcare, logistics, and retail with synchronized vishing-phishing attacks.

In the newest attacks, threat actors pose as IT staff and call employees, claiming that the company is updating its MFA settings. While on the phone with the employee, the threat actor directs them to a malicious credential-harvesting site that spoofs the company’s login page, captures the employee’s single sign-on credentials and MFA codes, and then registers a device the attacker controls for MFA push approvals.

The threat actors cover their tracks and bypass security notices. Once they gain access to the company’s systems, they download sensitive data, extort the company for a ransom, and harass employees.

It is crucial that companies continue to educate employees on the newest cybersecurity threats and schemes so they can spot them and avoid becoming victims. Sophisticated combined vishing-phishing schemes like the one described above are still unusual, and many users don’t appreciate how powerful and successful the combination can be. Incorporate these recent threats into your next cybersecurity training or company-wide cyber tip.

States are weighing in on whether grocery stores, hotel chains, and retailers should be using personal consumer information such as “browsing history” and “location data” to decide what price you see when someone else might see something different. California is pioneering this inquiry, approaching individualized pricing as a potential privacy problem. At the end of last month, California Attorney General Rob Bonta announced an “investigative sweep” into businesses’ use of personal data to set individualized prices, warning that “surveillance pricing” may violate the California Consumer Privacy Act (CCPA). The inquiry is aimed at companies in the retail, grocery, and hotel sectors, focusing on how they use data like “shopping and internet browsing history, location, demographics, and other data” to price goods and services.

Attorney General Bonta is also asking about the surrounding governance: what businesses disclose, what “pricing experiments” they run, and how they ensure compliance with “algorithmic pricing, competition, and civil rights laws.” The core consumer-facing concern is “whether businesses are charging people different prices for the same good or service.”

Not everyone agrees that states should police this through disclosure requirements. The National Retail Federation sued New York Attorney General Letitia James over the state’s algorithmic pricing disclosure law, arguing it violates the First Amendment. The trade group’s concern is that even when consumers “know” pricing is personalized through loyalty programs, companies may still be compelled to display a disclosure that personal data was used to set the price “based on an algorithm.” California may see similar complaints and arguments.

States seem to be moving from theory to enforcement and mandates. Companies will need to respond by reassessing loyalty programs, discount targeting, and other data-driven pricing strategies for regulatory risk.

Florida website tracking litigation is gaining momentum this year, with plaintiffs increasingly invoking the Florida Security of Communications Act (FSCA) to challenge common website analytics and advertising tools, especially where those tools allegedly capture and share sensitive user communications. The FSCA is a decades-old state wiretap statute now being aimed at modern website tracking, and it provides for liquidated damages of up to $1,000 per violation.

A 2025 ruling changed Florida’s legal landscape and opened the door to a possible flood of FSCA claims. W.W. v. Orlando Health, Inc., No. 6:24-cv-1068-JSS-RMU, 2025 WL 722892 (M.D. Fla. Mar. 6, 2025). In Orlando Health, the plaintiff alleged that the defendant’s website intercepted communications about her healthcare treatment and used that information for advertising purposes. The alleged interception occurred through third-party pixels that captured and transmitted the plaintiff’s communications with the website, including “the plaintiff’s health conditions,” “desired treatment,” and “preferred doctors.” Id. at *2. The plaintiff further alleged the information was used to serve targeted advertisements.

The court cited the legislative intent of the FSCA, emphasizing that the legislature specifically intended to protect private medical information. On that basis, the court concluded that the plaintiff adequately alleged interception of contents under the FSCA. In other words, the theory that the website tools captured substantive healthcare communications was sufficient, at least at the motion to dismiss stage. Since this decision, plaintiffs have filed hundreds of similar wiretap claims under the FSCA in small claims court arising out of website tracking technology. The decision may also signal that Florida federal courts will allow similar privacy and wiretapping pleadings to survive early challenges.

The combination of alleged interception of content, the use of third-party pixels, and the statute’s liquidated damages framework is perhaps the driving force behind the emerging FSCA litigation trend.

In a strongly worded order, Judge Julie A. Robinson of the U.S. District Court for the District of Kansas publicly admonished and sanctioned four lawyers representing a plaintiff company in a patent infringement case for using ChatGPT to find caselaw to support a response to a motion to exclude an expert witness and a response to the defendant’s motion for summary judgment.

In the 36-page order, the court made clear that not only the lawyer who used AI to generate the hallucinated citations but also his partners and local counsel bore responsibility for the filing of the motion. This is a clear reminder of lawyers’ non-delegable duty under Rule 11 of the Federal Rules of Civil Procedure. The court held that “[b]ecause there is no dispute that all five . . . attorneys signed both documents that included these errors, and they admit that not one of them verified that the case law in those briefs actually exist and stand for the propositions for which they were cited, their conduct violates Rule 11(b)(2).”

The brief facts: a seasoned lawyer admitted pro hac vice before the court acknowledged that the motion was prepared using ChatGPT and that it was his first time doing so. He was under personal stress, admitted he was not thinking straight, and, although he meant to check the citations before filing the motion, never did. His partners, although also admitted pro hac vice, were not responsible for the motion, never read it, and did not participate in its preparation. The associate assigned to the case read the motion and made a few changes but was not assigned to check the citations. Local counsel relied on the pro hac vice counsel, reviewed the motion briefly before filing it, but never checked the citations. The court’s order points out that the response to the motion to exclude “contains a litany of problems: (1) nonexistent quotations; (2) nonexistent and incorrect citations; and (3) misrepresentations about cited authority.” Some of the same issues appeared in the response to the motion for summary judgment.

Here’s what the court had to say about each of the attorneys’ responsibilities and the sanctions it assessed:

  • The most culpable lawyer used ChatGPT and failed to check the citations. Although he was experiencing difficulties in his personal life, he never asked for an extension or for help from the other five lawyers representing the plaintiff in the case. Instead, he filed the motion on time, cut corners by using ChatGPT, and filed a response that included the deficiencies above. Neither his co-counsel nor his client was aware of the generative AI use. He was a “novice” at using it and is only now aware of the risks. Although the court was sympathetic to his personal plight (and he graciously emphasized that he was the only one culpable), the court stated that “citing to a nonexistent case, attributing a nonexistent quotation to an existing case, and misstating the law violates Rule 11(b).” The violation is the failure to verify the cases, not the intent behind the failure. The court noted that the attorney’s unawareness of the “very real risk of case hallucinations,” after several instances of Rule 11 sanctions being levied against lawyers for this same violation, was an aggravating factor. The court directed the attorney to implement a robust policy to “deter any future instance of submitting unverified authority in a filing…[by requiring] him to submit to the Clerk for filing a certificate outlining specific internal procedures at his firm that he intends to impose. . . and imposes a monetary fine of $5,000, . . .and revokes his pro hac vice admission to this Court.” The court further directed the attorney to “self-report to the state disciplinary authorities where he is licensed by providing them with a copy of this Order.”
  • The attorney’s partners, who did not participate in preparing the briefs, signed the filings without determining the accuracy of their contents. One of the partners assigned an associate to help and was on a family vacation when the response was filed. The court pointed out that merely affixing their names to the brief without reviewing it “violated [their] duty to conduct a reasonable inquiry into the facts and the law before filing.” The court reiterated that Rule 11 is non-delegable and imposed a fine of $3,000 on each of the co-counsel who signed the pleadings.
  • The court did not sanction the associate assigned to the case, as he had no supervisory authority and “was placed in a difficult position by his supervising attorneys.”
  • As for local counsel, he also signed the defective pleadings, and “by doing so, he vouched for the Texas attorneys in this matter.” He failed to cite-check the filings. His firm set forth its efforts to ensure this does not happen again and provided the court with a formal policy on generative AI use, and the attorney “voluntarily sanctioned himself in the form of refraining from serving as sponsoring or local counsel for pro hac vice attorneys for a period of 12 months.” Weighing these considerations, the court sanctioned local counsel $1,000.

The clear takeaway is that firms need to address the fact that lawyers may be tempted to use GenAI even when they have no experience with it, may not appreciate the consequences, and are coping with personal issues. Judges have no sympathy when it comes to hallucinations and misrepresentations in briefs: they waste time and resources and clearly violate Rule 11. Ignorance is not a defense, and relying on your partners, co-counsel, or local counsel will not get you off the hook, as Rule 11 is non-delegable. Firms may wish to consider adopting policies and guidance for attorneys on the use of GenAI tools, requiring every lawyer who signs a pleading to check and verify the citations before affixing a signature. There can be no reliance on others before a pleading is filed.

On January 27, 2026, the Federal Trade Commission (FTC) signaled its reduced appetite for regulating artificial intelligence. At the Privacy State of the Union Conference in Washington, DC, FTC Bureau of Consumer Protection Director Chris Mufarrige stated there is “no appetite for anything AI-related” in the FTC’s rulemaking pipeline, while adding that the agency has other rule ideas in development. Mufarrige’s statement follows the FTC’s December 2025 decision to reopen and set aside a 2024 consent order involving AI writing assistant Rytr that had barred the company from providing AI-enabled services alleged to help users write false or misleading product reviews.

This shift aligns with the current federal administration’s broader deregulatory stance on AI, which emphasizes removing barriers to innovation rather than expanding agency-made rules. The FTC specifically cited President Trump’s AI Action Plan as part of its rationale for revisiting the Rytr matter, pointing to a policy preference for rolling back rules and decisions viewed as standing in the way of AI development. Mufarrige also indicated the Commission will pursue more “sparing” rulemaking than the Biden-era FTC, suggesting the agency may lean more heavily on selective enforcement priorities and existing legal authorities instead of launching new AI-specific regulations.

Importantly, the FTC is not stepping back from privacy enforcement altogether. Mufarrige emphasized that protecting children’s privacy online will “play a big role” in the coming year’s enforcement docket, with particular focus on how age verification interacts with the Children’s Online Privacy Protection Act (COPPA), including any “tension between the two” and how it might be resolved. The agency’s recent COPPA track record, including a recent $10 million settlement with Walt Disney Co., reflects what Mufarrige described as a consistent theme: ensuring “that parents have control over their kids’ data.”

It’s that time of year when W-2s and 1099s pile up in preparation for the dreaded tax return filing deadline. Now that AI tools seem to make any task, even the most dreaded, more efficient, it is tempting to use one to assist with your tax return. Sounds good, but experts are warning against it.

In particular, “experts have raised concerns about potential inaccuracies and privacy issues associated with using AI for tax returns.” Tax returns are complicated, and AI tools have not developed sufficiently to navigate those complexities. Worse, that same complexity means users may not be able to tell when a tool’s output is inaccurate. An inaccuracy could have a significant impact on the filer, potentially leading to fines and penalties. Experts warn that you cannot rely on a tool’s calculation of what is owed or refunded.

In addition, tax returns contain highly sensitive personal information, including full Social Security numbers, health information, and financial information. Most generative AI tools specifically caution users against inputting this highly sensitive information because it may be used to train the tool’s models and could appear in the output of another user’s query. This means that your highly sensitive, identifiable information could be disclosed to someone else without your knowledge.

Bottom line: think again before you use an AI tool for your tax return. Once you provide that highly sensitive information to an AI tool, you lose control over it for good, and it could be disclosed to others.

For resources to assist with your tax filing, you can visit the IRS website, IRS.gov, and for resources related to privacy concerns with tax returns, visit the FTC website, FTC.gov.

We continue to alert our readers to the uptick in successful vishing attacks against companies. Threat actors continue to devise creative strategies for using vishing to gain access to systems.

According to Cyberscoop (a publication that I read religiously), Mandiant has confirmed that “multiple cybercrime groups,” including ShinyHunters, are “combining voice calls and advanced phishing kits to trick victims into handing over access” to company systems. The scary thing about this new wave of vishing attacks is that threat actors are using sophisticated vishing campaigns to compromise single sign-on (SSO) credentials, then “enroll threat actor controlled devices into victim multifactor authentication solutions.” This effectively bypasses well-known security tools that companies use to prevent unauthorized access to their systems.

Once threat actors gain access, they move into the company’s SaaS environment to exfiltrate data and then launch extortion campaigns. In addition:

Cybercriminals are registering custom domains that mimic legitimate single sign-on portals used by targeted companies, then deploying tailored voice-phishing kits to call victims while remotely controlling which pages appear in the victim’s browser. This lets the attackers sync their spoken prompts with multifactor-authentication requests in real time, increasing the likelihood the victim approves or enters the needed codes on cue.

In response to these attacks, Okta released threat intelligence confirming that it has seen “multiple phishing kits developed” for use with other SSO and cryptocurrency providers. To be clear, this is not a vulnerability in the SSO products but a scary way for threat actors to dupe users into providing credentials.

Given the success of these new vishing campaigns targeting SSO, now is the time to remind your users about vishing, how it works, the newest ways threat actors are trying to get users to hand over their credentials, and how SSO can give threat actors the keys to the kingdom.

Businesses that run consumer-facing websites have spent the past several years contending with a steady stream of California Invasion of Privacy Act (CIPA) demands and class actions aimed at everyday digital tools such as cookies, pixels, and analytics scripts. A recent decision from the Southern District of California, Camplisson v. Adidas Am., Inc., 2025 WL 3228949 (S.D. Cal. Nov. 18, 2025), suggests that this wave is not fading. If anything, it may pick up further in 2026.

CIPA is a California privacy statute that, among other things, limits the interception of communications and the deployment of certain surveillance-style technologies without proper authorization. In the current round of cases, plaintiffs have increasingly trained their focus on CIPA’s prohibition on using “pen registers” and “trap and trace” devices absent a court order or user consent. They argue that common website tracking technologies function like modern equivalents of these wiretap-adjacent tools. The stakes are high because CIPA allows statutory damages of up to $5,000 per violation, even without proof of actual harm.

In Camplisson, website users brought a putative class action alleging that Adidas violated CIPA by using two tracking pixels on its website: the TikTok pixel and the Microsoft Bing pixel. According to the complaint, the trackers were placed on visitors’ browsers without consent and collected data including IP addresses, browser information, unique identifiers, and other personal information. Adidas moved to dismiss on two primary grounds: first, that the alleged tracking tools do not qualify as a “pen register” as a matter of law; and second, that users had consented.

The court declined to accept either argument at the pleading stage. Emphasizing what it characterized as CIPA’s deliberately broad language, the court reasoned that a narrow reading of “pen register,” limited to tools that capture all outgoing information, could undermine the statute’s privacy-protective purpose. The court also rejected the consent defense based on how the website presented its terms and privacy disclosures. In particular, visitors allegedly had to scroll to the footer to locate links to the online terms and privacy policy, and the website did not present a pop-up or similar mechanism requiring users to affirmatively consent before the pixels fired.
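For readers who maintain websites, the distinction the court focused on is easy to picture in code. Below is a minimal, hypothetical TypeScript sketch of a consent-gated approach to loading tracking pixels, in which nothing fires until the visitor affirmatively opts in. The storage key, pixel URL, and banner elements are all illustrative assumptions, not any vendor’s actual tag or API.

```typescript
// Hypothetical sketch: load third-party pixels only after an
// affirmative opt-in, instead of firing them on page load.
// All names below (storage key, pixel URL) are illustrative.

type ConsentChoice = "accepted" | "rejected";

const CONSENT_KEY = "tracking-consent"; // assumed storage key

function loadPixel(src: string): void {
  // Injects a third-party script tag; a real deployment would use
  // the vendor's documented snippet instead of this placeholder.
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

function loadTrackingPixels(): void {
  loadPixel("https://analytics.example/pixel.js"); // placeholder URL
}

function initConsentGate(): void {
  const saved = localStorage.getItem(CONSENT_KEY) as ConsentChoice | null;
  if (saved === "accepted") {
    loadTrackingPixels(); // visitor previously opted in
    return;
  }
  if (saved === "rejected") {
    return; // respect the earlier refusal; no pixels fire
  }

  // No stored choice yet: show an opt-in prompt and fire nothing
  // unless and until the visitor affirmatively accepts.
  const banner = document.createElement("div");
  banner.textContent = "This site uses tracking pixels. ";

  const accept = document.createElement("button");
  accept.textContent = "Accept";
  accept.onclick = () => {
    localStorage.setItem(CONSENT_KEY, "accepted");
    loadTrackingPixels(); // pixels fire only after this click
    banner.remove();
  };

  const decline = document.createElement("button");
  decline.textContent = "Decline";
  decline.onclick = () => {
    localStorage.setItem(CONSENT_KEY, "rejected");
    banner.remove();
  };

  banner.append(accept, decline);
  document.body.appendChild(banner);
}

document.addEventListener("DOMContentLoaded", initConsentGate);
```

By contrast, the pattern the Camplisson plaintiffs described, footer links plus pixels that fire on page load, amounts to removing the gate entirely and calling loadTrackingPixels() unconditionally.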

From a forward-looking perspective, Camplisson hands plaintiffs a new citation for the proposition that standard website pixels can plausibly qualify as pen registers when they capture identifiers such as IP address information and other alleged personal information. It also offers a template for pleading around consent by highlighting the user’s practical path to notice on the website, and whether any meaningful opt-in occurred before tracking began. Together, those concepts are likely to drive additional pre-suit demand letters and new filings, particularly against companies that primarily rely on footer-based links for notice or that allow pixels to run before any affirmative consent. Longer term, unless appellate courts bring greater clarity or the legislature modernizes this decades-old statutory framework, businesses should plan for continued uncertainty and inconsistent results from courts.

This week, the U.S. Supreme Court granted certiorari in Salazar v. Paramount Global, No. 25-459 (cert. granted Jan. 26, 2026), to resolve a circuit split over the scope of the federal Video Privacy Protection Act (VPPA). Enacted in 1988, the VPPA has helped fuel a wave of class actions in recent years, especially suits aimed at digital media and entertainment companies with embedded video content.

This case stems from allegations that the plaintiff subscribed to a newsletter from a sports entertainment website owned by Paramount Global, watched videos on the site, and that Facebook’s Meta pixel caused the browser to transmit the plaintiff’s Facebook ID and the webpage URL to Facebook—allegedly disclosing viewing habits in violation of the VPPA. The district court dismissed the complaint, and the Sixth Circuit affirmed.

The dispute here centers on who qualifies as a “consumer” entitled to sue under the VPPA, which prohibits a “video tape service provider” from knowingly disclosing personally identifiable information concerning any “consumer.” The statute defines “consumer” as “any renter, purchaser, or subscriber of goods or services from a video tape service provider,” and it defines “video tape service provider” in terms of being in the business of rental, sale, or delivery of “prerecorded video cassette tapes or similar audio visual materials.”

The Sixth Circuit applied a narrower approach, holding that a newsletter subscriber was not a “consumer” absent an alleged subscription to video content or other qualifying “audio visual materials,” but other circuits have interpreted the term “consumer” more broadly.

For companies that use embedded video and third-party tracking tools, the Supreme Court’s eventual ruling may help sharpen the VPPA risk picture in the age of digital entertainment.

On January 23, 2026, a bipartisan group of 35 state Attorneys General issued a letter to xAI stating their concern “about artificial-intelligence produced deepfake nonconsensual intimate images (NCII) of real people, including children, wherever it is made or found,” including on xAI’s chatbot, Grok. This is in addition to the letter sent on January 13, 2026, to X and other AI companies by eight United States senators requesting information on non-consensual “bikini” and “non-nude” images produced by their products.

The letter “strongly urges [xAI] to be a leader in this space by further addressing the harms resulting from this technology.” It further calls for xAI to “immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of NCII.”

The letter outlines the ways Grok can easily be used as a “nudify” tool that can “embarrass, intimidate, and exploit people by taking away their control over how their bodies and likenesses are portrayed.” It alleges that Grok not only enables the creation of these images with a mere click but is also “encouraging this behavior by design.”

Grok is not only being used to alter images of adults; the letter outlines how the chatbot has “altered images of children to depict them in minimal clothing and sexual situations…including photorealistic images of ‘very young’ people engaged in sexual activity.”

The letter emphasizes the importance of this issue to the Attorneys General and requests that xAI explain what measures it will take to prohibit Grok from producing NCII, how it will eliminate existing content, how it will suspend and report to authorities users who produce such content, and how it will “grant X users control over whether their content can be edited by Grok.”

We will continue to update you on the information provided by the companies in response to these inquiries.