Security researchers at Huntress Labs have identified a vulnerability in SolarWinds’s Web Help Desk that threat actors are actively exploiting to execute code remotely.

The vulnerability was added to the Cybersecurity and Infrastructure Security Agency’s Known Exploited Vulnerabilities (KEV) catalog last week, and SolarWinds issued a warning classifying it as “critical severity” and urging users to patch. According to SolarWinds, the vulnerability “could lead to remote code execution, which would allow an attacker to run commands on the host machine. This could be exploited without authentication.”

SolarWinds has provided indicators of compromise, suspicious IP addresses, and a patched software release; security professionals should review the indicators and apply the update.
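For teams working through those indicators, the review can be scripted. The sketch below is a minimal illustration, assuming a plain-text access log in common log format; the log path and indicator IPs are placeholders, not SolarWinds’s actual IOCs.

```typescript
// Minimal IOC sweep: flag access-log lines whose client IP appears on a
// published indicator list. The path and IPs below are placeholders; use the
// values from the vendor advisory.
import { readFileSync } from "fs";

const indicatorIps = new Set(["203.0.113.10", "198.51.100.77"]); // placeholders

const logPath = process.argv[2] ?? "access.log";
const lines = readFileSync(logPath, "utf8").split("\n");

for (const [i, line] of lines.entries()) {
  // Assumes common log format, where the client IP is the first field.
  const ip = line.split(" ")[0];
  if (indicatorIps.has(ip)) {
    console.log(`line ${i + 1}: hit on indicator ${ip}`);
    console.log(`  ${line}`);
  }
}
```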

Huntress Labs has identified three compromised customers, and Cybersecurity Dive reports that Shadowserver has found 150 compromised instances. Huntress Labs researchers “believe a threat group tracked as Storm-2603 is behind the attacks.”

If your organization uses SolarWinds’s Web Help Desk, patching the vulnerability should be a priority.

California resident Nathaniel Bee filed a lawsuit this week alleging that the ATP Tour’s website used third-party tracking technology that captured details on how visitors interacted with the site, including what content they viewed, how they navigated the website, and what type of device they used, all allegedly without user consent and in violation of the California Invasion of Privacy Act. According to the complaint, that information was transmitted to third parties, including Google and Comscore Inc., and was used for targeted advertising and analytics.

The lawsuit centers on what users were told and what the website allegedly did anyway. The plaintiff alleges that users first visiting the ATP Tour website are presented with two options: accept “essential cookies only” or accept “cookies.” The plaintiff argues that the “essential cookies only” option gives visitors the impression that they can opt out of tracking that shares information with “social media, advertising and analytics partners.” However, even after a user selects “essential cookies only,” the ATP Tour allegedly continued transmitting non-essential information that could be used for targeted advertising. The complaint states that “even when users attempted to limit tracking by rejecting nonessential cookies, ATP Tour failed to prevent third parties from receiving information generated by users’ website communications.”

Even at the allegation stage, the case highlights a pressure point for many consumer-facing websites: consent interfaces are only as reliable as the technical controls behind them. If a site presents an “essential cookies only” option, users reasonably expect that third-party tags, pixels, and scripts tied to advertising and analytics are actually disabled and prevented from transmitting data when they opt out.

Regardless of how the claims ultimately shake out, the complaint underscores a simple but increasingly litigated reality: privacy disclosures and consent banners are only as defensible as the engineering behind them. For consumer-facing organizations, this case is a reminder to align what the interface promises with what the site does in practice, and to validate that opt-out choices are enforced consistently across all third-party tools and integrations.
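To make the engineering point concrete, here is a minimal browser-side sketch of consent gating, assuming a hypothetical stored consent value and a placeholder analytics URL; it illustrates the general technique, not the ATP Tour’s implementation or any vendor’s actual API.

```typescript
// Gate non-essential tags on the user's recorded choice: no advertising or
// analytics script is injected unless consent allows it. The storage key
// "consentChoice" and the script URL are hypothetical placeholders.
type ConsentChoice = "essential-only" | "all-cookies";

function getStoredConsent(): ConsentChoice {
  // Assumes the consent banner wrote the user's choice to localStorage.
  const stored = localStorage.getItem("consentChoice") as ConsentChoice | null;
  return stored ?? "essential-only"; // default to the most restrictive option
}

function loadTag(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

if (getStoredConsent() === "all-cookies") {
  loadTag("https://analytics.example.com/tag.js"); // placeholder third-party tag
}
// With "essential-only", the tag is never injected, so it cannot transmit data.
```

The design point is that the opt-out is enforced by never loading the script, rather than loading it and asking it to behave.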

In January, the General Services Administration’s (GSA) Office of the Chief Information Security Officer issued a new procedural guide, CIO-IT Security-21-112 Rev. 1, that sets expectations for protecting Controlled Unclassified Information (CUI) when it resides in nonfederal contractor systems. Although the document is internal guidance, it creates an approval framework that may soon determine whether a contractor is eligible for GSA contracts involving CUI.

The security baseline is built on NIST SP 800-171 Rev. 3, and it applies when CUI resides in a contractor system that is not operated on behalf of the federal government and therefore is not subject to FISMA or FedRAMP. Covered systems could include internal file shares where CUI is stored or a commercial cloud tenant where it is processed.

The GSA describes a five-phase lifecycle—Prepare, Document, Assess, Authorize, and Monitor—derived from the National Institute of Standards and Technology’s Risk Management Framework. Contractors must document their CUI-handling system, complete an independent security assessment, obtain GSA approval, and then meet ongoing monitoring and periodic reassessment requirements. Perhaps the most notable cybersecurity requirement is the incident reporting timeline: contractors must report suspected and confirmed CUI incidents within one hour of discovery. By comparison, many state breach notification laws are measured in days, and the New York Department of Financial Services cybersecurity rule generally uses a 72-hour notice window for certain reportable events. This one-hour requirement is unusually compressed and may be difficult to operationalize.

GSA’s CUI-focused compliance track will look familiar to contractors following DoD’s CMMC, but there are differences. GSA aligns to NIST SP 800-171 Rev. 3, while DoD currently relies on Rev. 2 under DFARS 252.204-7012/CMMC. The GSA also appears willing to approve systems with gaps if certain key requirements are met.

Among other uncertainties, the guide does not specify when the requirements will take effect. Still, the document signals that the GSA is moving toward a model where contractors may need to demonstrate the security of the specific system handling CUI, not just accept contract language. Contractors that handle CUI under GSA contracts may want to begin mapping where their CUI resides, testing incident reporting procedures, and planning for a more robust GSA contract approval process.

Until California’s legislature provides clearer guardrails, companies should expect continued class action activity under the California Invasion of Privacy Act (CIPA), targeting common website tracking technologies. Plaintiffs’ firms are actively testing how far this decades-old statute extends in the modern web environment, and courts have not reached a consensus. That uncertainty creates real litigation risk for organizations that rely on tools like chat widgets, session replay, and analytics.

Many companies use website tools that help improve customer experience, measure performance, prevent fraud, and support marketing efforts. These tools often capture data about how visitors interact with webpages, including clicks, cursor movements, page navigation, chat messages, and form entries. Plaintiffs are increasingly arguing that certain implementations of these tools amount to unlawful interception or recording of communications under CIPA.
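For context on what these tools collect, the sketch below shows, in simplified form, how session-replay-style capture works: event listeners record interactions and periodically forward them to a collector. The endpoint URL and payload shape are hypothetical, not any vendor’s actual implementation.

```typescript
// Simplified session-replay-style capture: record clicks and form input and
// ship them to a third-party collector. The endpoint and payload shape are
// illustrative placeholders.
interface CapturedEvent {
  type: string;      // "click" or "input"
  target: string;    // tag name or field name
  value?: string;    // field contents, for input events
  ts: number;        // timestamp in milliseconds
}

const buffer: CapturedEvent[] = [];

document.addEventListener("click", (e) => {
  buffer.push({ type: "click", target: (e.target as Element).tagName, ts: Date.now() });
});

document.addEventListener("input", (e) => {
  const el = e.target as HTMLInputElement;
  // The step plaintiffs focus on: form entries are captured keystroke by
  // keystroke, before the user ever clicks "submit".
  buffer.push({ type: "input", target: el.name, value: el.value, ts: Date.now() });
});

// Every five seconds, send buffered events to the third-party endpoint.
setInterval(() => {
  if (buffer.length > 0) {
    navigator.sendBeacon("https://collector.example.com/events", JSON.stringify(buffer));
    buffer.length = 0; // clear the buffer after sending
  }
}, 5000);
```

Whether this kind of capture amounts to an “interception” under CIPA is exactly what courts are now dividing over.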

The result is a rising wave of proposed class actions that can be expensive to defend, difficult to predict, and costly to resolve. The practical takeaway is straightforward—even if you believe your organization’s practices are reasonable, it is worth reviewing disclosures, consent flows, and vendor configurations now, rather than after a demand letter or complaint arrives.

CIPA was enacted in 1967 to prevent secret wiretapping by both law enforcement and private individuals. The plaintiffs’ bar has since repurposed the statute to challenge modern website technologies, including:

  • Chat features that allow visitors to communicate with a company in real time;
  • Session-replay tools that record user interactions with webpages for troubleshooting and UX improvements; and
  • Analytics code that tracks usage patterns and behavior across the site.

The core allegation is that these tools record or “listen in” on communications without proper consent. Plaintiffs often frame routine website telemetry as covert monitoring, particularly when data flows to third-party vendors.

Some courts have concluded that visitors could reasonably expect chats, form entries, or even certain click activity to remain private. In these decisions, disclosures may not be treated as sufficiently clear or sufficiently tied to meaningful consent for the specific tracking at issue. Other courts have held that website interactions are not confidential where users are clearly told their data and usage may be collected or tracked. In these decisions, prominent disclosures and clear notice can undermine the claim that a “secret” interception occurred.

This lack of uniformity is a major driver of continued filings. Plaintiffs can point to decisions that let claims survive early motions, while defendants can cite dismissals, but neither side has a guaranteed playbook.

While the courts remain split, companies can reduce risk by focusing on a few concrete areas:

  • Revisit Privacy Policy and Terms of Use disclosures;
  • Evaluate consent banners and how consent is captured;
  • Reassess whether you need each tracking tool and how it is configured (see the spot-check sketch after this list); and
  • Consider arbitration provisions and class action waivers.
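On the tracking-tool point, one way to spot-check enforcement is to enumerate which third-party hosts a page actually contacts after an opt-out. The browser-console sketch below uses the standard Resource Timing API; the first-party domain is a placeholder.

```typescript
// Run in the browser console after selecting the most restrictive consent
// option: lists every third-party host the page has contacted. Replace
// "example.com" with your own first-party domain.
const firstParty = "example.com";

const thirdPartyHosts = new Set<string>();
for (const entry of performance.getEntriesByType("resource")) {
  const host = new URL(entry.name).hostname; // entry.name is the resource URL
  if (host !== firstParty && !host.endsWith("." + firstParty)) {
    thirdPartyHosts.add(host);
  }
}

// Any advertising or analytics host appearing here after an opt-out signals a
// mismatch between the consent banner and the site's actual behavior.
console.log([...thirdPartyHosts].sort());
```

This is a spot check, not a substitute for a full tag audit, but it surfaces obvious mismatches quickly.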

CIPA was not written with session replay, chat widgets, or modern analytics in mind, but is being used to challenge them now. With courts split on whether website interactions are “confidential” and what level of disclosure and consent is sufficient, the best risk-management approach is proactive: confirm what your site is doing, align disclosures with reality, strengthen notice and consent flows, and evaluate contractual tools like arbitration clauses and class waivers.

Novelty is a core requirement for any invention to be patentable. Put simply, your invention generally cannot have been publicly disclosed before the patent application’s effective filing date. In the United States, 35 U.S.C. § 102 includes a one-year grace period for certain public disclosures made before you file; many other jurisdictions do not have this grace period. Europe, for example, generally applies an absolute novelty standard: if you publicly disclose first and file later, your own disclosure can bar patentability.

This is where the EU AI Act can create an unexpected patentability issue. The Act sets out a comprehensive framework for regulating AI and includes a mandatory registration requirement for AI systems considered “high-risk.” An AI system is considered high-risk when it relates to areas such as safety components, critical infrastructure, education, border control, and law enforcement.

Before a high-risk AI system can be placed on the market, the provider must register the system with the EU Commission and submit information about the system in a searchable and publicly accessible EU database. If the information submitted includes enabling technical details, that registration can function as a public disclosure and can block patentability in absolute novelty jurisdictions, like Europe and China.

Bottom line: if EU AI Act registration is on your roadmap, build IP planning into the timeline. Companies should consider preparing and filing patent applications at or before submitting information to the EU Commission, so their own disclosures do not become prior art against later-filed patent applications.

On February 5, 2026, a Massachusetts federal judge issued an order staying information-sharing between the IRS and ICE and entered a preliminary injunction prohibiting Kristi Noem, Secretary of the Department of Homeland Security, ICE, acting Director Todd Lyons, and any DHS and ICE agent from “inspecting, viewing, using, copying, distributing, relying on, or otherwise acting upon any return information that had been obtained from or disclosed by the IRS” through a Memorandum of Understanding (MOU) signed between the two agencies on April 7, 2025. The stated purpose of the MOU was the “sharing of tax information across agencies to implement President Trump’s direction that DHS ‘take immediate steps to identify, exclude, or remove aliens illegally present in the United States.’”

In addition, the court found that the storage of the taxpayer data on an unnamed ICE employee’s computer “constitutes impermissible storage” of the data “in contravention” of the Internal Revenue Code. The court ordered that the defendants provide a copy of the order to the employee “whose government-issued computer holds the information received from the IRS on August 7, 2025,” and required that ICE confirm that the order was delivered to that individual by February 10, 2026. The court held that “Plaintiffs have established a high likelihood that ICE’s handling, use, and storage of taxpayer addresses violated and continues to violate” the Internal Revenue Code.

The facts behind the lawsuit are these: following execution of the MOU between the IRS and ICE, ICE requested data on almost 1.3 million taxpayers on August 6, 2025, and on August 7, 2025, the IRS provided taxpayer information for the 47,000 individuals that matched the request. ICE has stored and used that data ever since.

The court stated that the disclosure of taxpayer data from the IRS to ICE was contrary to Section 6103 of the Internal Revenue Code, which “strictly prohibits the disclosure” of taxpayer data to government agencies, private entities, or citizens, a protection taxpayers rely upon when filing their taxes. The court further noted that “the Internal Revenue Code provides strong privacy protections for information submitted by taxpayers and/or obtained by the IRS.”

The court found that the plaintiffs sufficiently demonstrated irreparable harm and that they have “been and will continue to be harmed by the data-sharing due to the erosion of trust between the organizations and their members.”

We have outlined numerous privacy concerns about how taxpayer and other data have been shared between federal agencies in this administration, including the actions of the Department of Government Efficiency, a/k/a DOGE. These issues will continue to be litigated, and we will update you on court rulings as they progress.

Security professionals rely on the implementation of multifactor authentication (MFA) to defend against phishing attacks and intrusions. Unfortunately, we can’t completely rely on MFA to protect us, as threat actors (more specifically, ShinyHunters) are now targeting companies in technology, financial services, real estate, energy, healthcare, logistics, and retail with synchronized vishing-phishing attacks.

In the newest attacks, the threat actors pretend to be IT staff and call employees to tell them that the company is updating its MFA settings. While on the phone with the employee, the threat actor directs them to a malicious credential-harvesting site that spoofs the company’s sign-in page, captures the employee’s single sign-on credentials and MFA codes, and then registers the attacker’s own device for MFA push notifications.

The threat actors then cover their tracks and bypass security notifications. Once they gain access to the company’s systems, they download sensitive data, extort ransoms from the company, and harass employees.

It is crucial that companies continue to educate employees on the newest cybersecurity threats and schemes so they can identify them and avoid becoming victims. Sophisticated combined vishing-phishing schemes like the one described above are still unusual, and many users don’t appreciate how powerful and successful the combination can be. Incorporate these recent threats into your next cybersecurity training or company-wide cyber tip.

States are weighing in on whether grocery stores, hotel chains, and retailers should be using personal consumer information such as “browsing history” and “location data” to decide what price you see when someone else might see something different. California is pioneering this inquiry, approaching individualized pricing as a potential privacy problem. At the end of last month, California Attorney General Rob Bonta announced an “investigative sweep” into businesses’ use of personal data to set individualized prices, warning that “surveillance pricing” may violate the California Consumer Privacy Act (CCPA). The inquiry is aimed at companies in the retail, grocery, and hotel sectors, focusing on how they use data like “shopping and internet browsing history, location, demographics, and other data” to price goods and services.

Attorney General Bonta is also asking about the surrounding governance: what businesses disclose, what “pricing experiments” they run, and how they ensure compliance with “algorithmic pricing, competition, and civil rights laws.” The core consumer-facing concern is “whether businesses are charging people different prices for the same good or service.”

Not everyone agrees that states should police this through disclosure requirements. The National Retail Federation sued New York Attorney General Letitia James over the state’s algorithmic pricing disclosure law, arguing it violates the First Amendment. The trade group’s concern is that even when consumers “know” pricing is personalized through loyalty programs, companies may still be compelled to display a disclosure that personal data was used to set the price “based on an algorithm.” California may see similar complaints and arguments.

States seem to be moving from theory to enforcement and mandates. Companies will need to respond by reassessing loyalty programs, discount targeting, and other data-driven pricing strategies for regulatory risk.

Florida website tracking litigation is gaining momentum this year, with plaintiffs increasingly invoking the Florida Security of Communications Act (FSCA) to challenge common website analytics and advertising tools, especially where those tools allegedly capture and share sensitive user communications. The FSCA is an old state wiretap statute now aimed at modern website tracking. The FSCA provides for liquidated damages of up to $1,000 per violation.

Specifically, a 2025 decision changed Florida’s legal landscape and opened the door to a possible flood of FSCA claims. W.W. v. Orlando Health, Inc., No. 6:24-cv-1068-JSS-RMU, 2025 WL 722892 (M.D. Fla. Mar. 6, 2025). In Orlando Health, the plaintiff alleged that third-party pixels on the defendant’s website intercepted and transmitted her communications with the site about her healthcare treatment, including “the plaintiff’s health conditions,” “desired treatment,” and “preferred doctors.” Id. at *2. The plaintiff further alleged the information was used to serve targeted advertisements.

The court cited the legislative intent of the FSCA, emphasizing that the legislature specifically intended to protect private medical information. On that basis, the court concluded the plaintiff adequately alleged interception of contents under the FSCA. In other words, the theory that the website tools captured substantive healthcare communications was sufficient, at least at the motion to dismiss stage. Since this decision, plaintiffs have filed hundreds of similar wiretap claims under the FSCA in small claims court arising out of website tracking technology. The ruling may also signal that Florida federal courts will allow similar privacy and wiretapping pleadings to survive early challenges.

The combination of alleged interception of content, the use of third-party pixels, and the statute’s liquidated damages framework is perhaps the driving force behind the emerging FSCA litigation trend.

In a strongly worded order, Judge Julie A. Robinson of the U.S. District Court for the District of Kansas publicly admonished and sanctioned four lawyers representing a plaintiff company in a patent infringement case for using ChatGPT to find caselaw to support a response to a motion to exclude an expert witness, and a response to the defendant’s motion for summary judgment.

In the 36-page order, the court made it clear that not only the lawyer who used AI to generate the hallucinated citations, but also his partners and local counsel bore responsibility for the filing of the motion. This is a clear reminder of the non-delegable duty of lawyers under Rule 11 of the Federal Rules of Civil Procedure. The Court held that “[b]ecause there is no dispute that all five . . . attorneys signed both documents that included these errors, and they admit that not one of them verified that the case law in those briefs actually exist and stand for the propositions for which they were cited, their conduct violates Rule 11(b)(2).”

The brief facts are these: a seasoned lawyer admitted pro hac vice before the court acknowledged that the motion was prepared using ChatGPT and that it was his first time doing so. He was under personal stress and admitted he was not thinking straight, and although he meant to check the citations before filing the motion, he never did. His partners, although also admitted pro hac vice, were not responsible for the motion, never read it, and did not participate in its preparation. The associate assigned to the case read the motion and made a few changes but was not assigned to check the citations. Local counsel relied on the pro hac vice counsel and reviewed the motion briefly before filing it but never checked the citations. The court’s order points out that the response to the motion to exclude “contains a litany of problems: (1) nonexistent quotations; (2) nonexistent and incorrect citations; and (3) misrepresentations about cited authority.” Some of the same issues were included in the response to the motion for summary judgment.

Here’s what the court had to say about each of the attorneys’ responsibilities and the sanctions it assessed:

  • The most culpable lawyer used ChatGPT and failed to check the citations. Although he was experiencing difficulties in his personal life, he never asked for an extension or help from the other five lawyers representing the plaintiff in the case. Instead, he filed the motion on time, cut corners by using ChatGPT, and filed a response that included the deficiencies above. Neither his co-counsel nor his client was aware of the generative AI use. He was a “novice” at using it and is only now aware of the risks. Although the court was sympathetic to his personal plight (and he graciously emphasized that he was the only one culpable), the court stated that “citing to a nonexistent case, attributing a nonexistent quotation to an existing case, and misstating the law violates Rule 11(b).” The violation is the failure to verify the cases, not the intent behind the failure. The court noted that the attorney’s unawareness of the “very real risk of case hallucinations,” after several instances of Rule 11 sanctions being levied against lawyers for this same violation, was an aggravating factor. The court directed the attorney to implement a robust policy to “deter any future instance of submitting unverified authority in a filing…[by requiring] him to submit to the Clerk for filing a certificate outlining specific internal procedures at his firm that he intends to impose. . . and imposes a monetary fine of $5,000, . . .and revokes his pro hac vice admission to this Court.” The court further directed the attorney to “self-report to the state disciplinary authorities where he is licensed by providing them with a copy of this Order.”
  • The attorney’s partners, who did not participate in preparing the briefs, signed the filing without determining the accuracy of its contents. One of the partners assigned an associate to help and was on a family vacation when the filing was made. The court pointed out that merely affixing their names to the brief without reviewing it “violated [their] duty to conduct a reasonable inquiry into the facts and the law before filing.” The court reiterated that Rule 11 is non-delegable and imposed a fine of $3,000 on each of the co-counsel who signed the pleadings.
  • The court did not sanction the associate assigned to the case, as he had no supervisory authority and “was placed in a difficult position by his supervising attorneys.”
  • As for local counsel, he also signed the defective pleadings, and “by doing so, he vouched for the Texas attorneys in this matter.” He failed to cite-check them. The firm set forth its efforts to ensure this doesn’t happen again and provided the court with a formal policy around generative AI use, and the attorney “voluntarily sanctioned himself in the form of refraining from serving as sponsoring or local counsel for pro hac vice attorneys for a period of 12 months.” With the above considerations, the court sanctioned the local counsel $1,000.

The clear takeaway is that firms need to address the fact that lawyers may be tempted to use GenAI even when they have no experience with it, do not fully understand the consequences, and are grappling with personal issues. Judges have no sympathy for hallucinations and misrepresentations in briefs, which waste time and resources and clearly violate Rule 11. Ignorance is not a defense, and relying on your partners, co-counsel, or local counsel will not get you off the hook, as Rule 11 is non-delegable. Firms may wish to adopt policies and guidance on the use of GenAI tools, making every lawyer who signs a pleading responsible for checking and verifying the citations before affixing their signature. There can be no reliance on others before a pleading is filed.