A recent white paper issued by SocRadar, entitled “Operation DoppelBrand: Weaponizing Fortune 500 Brands for Credential Theft and Remote Access,” provides a stark outline of how a threat actor known as GS7 has been “targeting banking institutions, technology companies, payment platforms, and other entities” by creating fake, “highly similar” web portals to harvest customer credentials. The campaign has been dubbed “Operation DoppelBrand.” The threat actor rotates its infrastructure, using registrars such as NameCheap and OwnRegistrar to stand up the fake but realistic web portals.

GS7 uses “sophisticated custom phishing kits to download remote management and monitoring tools on victim systems, enabling remote access or the deployment of additional tools such as malware.” It then uses bots and Telegram to exfiltrate data for financial fraud.

Between December 2025 and January 2026, “more than 150 domains related to the modus operandi and characteristics of the latest campaign are estimated to have been used.” GS7 is targeting U.S.-based companies, including banks, financial institutions, and technology companies.

To combat website or portal impersonation, companies may wish to consider several practical steps that the Forbes Technology Council has outlined here, including monitoring domains that could be created to impersonate your brand.  
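One practical building block for that kind of domain monitoring is generating likely lookalike permutations of your brand name and checking whether any have been registered. A minimal sketch of the permutation step (the substitution table and function name are illustrative, not drawn from the white paper):

```python
# Common homoglyph substitutions used in typosquatting (illustrative subset).
HOMOGLYPHS = {"o": ["0"], "l": ["1", "i"], "i": ["1", "l"], "e": ["3"], "a": ["4"]}

def lookalike_domains(brand: str, tld: str = "com") -> set[str]:
    """Generate simple typosquat candidates for a brand name:
    single-character homoglyph swaps, character omissions, and
    adjacent-character transpositions."""
    candidates = set()
    # Homoglyph substitutions (e.g., "o" -> "0")
    for i, ch in enumerate(brand):
        for sub in HOMOGLYPHS.get(ch, []):
            candidates.add(brand[:i] + sub + brand[i + 1:])
    # Single-character omissions
    for i in range(len(brand)):
        candidates.add(brand[:i] + brand[i + 1:])
    # Adjacent-character transpositions
    for i in range(len(brand) - 1):
        candidates.add(brand[:i] + brand[i + 1] + brand[i] + brand[i + 2:])
    candidates.discard(brand)  # never flag the legitimate name itself
    return {f"{c}.{tld}" for c in candidates}
```

Candidates like these are typically fed into DNS or WHOIS lookups and compared against a company’s known domain inventory; dedicated open-source tools such as dnstwist cover far more permutation classes than this sketch.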

Website tracking litigation continues to generate high stakes compliance risk, but not all privacy statutes are moving through the courts at the same pace. A notable divergence is emerging between the Video Privacy Protection Act (VPPA) and the California Invasion of Privacy Act (CIPA). Where the first is rapidly heading toward definitive interpretation by the United States Supreme Court, the other remains stalled in uncertainty, with litigants still waiting for meaningful appellate guidance.

The U.S. Supreme Court will hear Salazar v. Paramount Global, No. 25-459, to decide who qualifies as a “consumer” under the VPPA after a two-to-two circuit split. The core question is whether the VPPA applies (a) narrowly, to subscribers of “audiovisual materials,” or (b) broadly, to all of a company’s subscribers. For companies that stream video, embed video players, run video-heavy marketing pages, or monetize audiences through targeted advertising, that “consumer” definition can be outcome determinative. If “consumer” is interpreted broadly, VPPA exposure can extend beyond classic video subscription relationships and into ordinary customer or account relationships that happen to interact with video content.

On the other hand, in Fregosa v. Mashable, Inc., No. 3:2025cv01094 (N.D. Cal. 23, 2026), a dispute involving interpretation of CIPA’s “pen register” provision and whether it applies to website tracking technologies, the District Court denied the plaintiff’s request for an immediate interlocutory appeal. The court concluded there were no “substantial grounds for difference of opinion,” citing “a handful of federal district courts that had adopted similar interpretations,” even though “a growing number of state courts” had taken a conflicting view. The practical result is a familiar one for CIPA defendants and plaintiffs alike: litigation continues to multiply; interpretations continue to diverge; and appellate guidance continues to lag.

When the Supreme Court resolves an issue like who is a “consumer” under the VPPA, companies will get a clearer national baseline; even if the decision expands liability, it reduces the cost of uncertainty. With that decision in hand, legal teams can align disclosures, consent flows, vendor contracts, and tracking architecture to a clear definition. However, given the divergent ways in which courts are deciding CIPA claims, a company can face materially different outcomes depending on which court hears the matter, how that court interprets “pen register,” whether it decides that website tracking technologies fall within the law’s scope, and whether it is persuaded by the reasoning of the federal district courts or that of the state courts.

For VPPA defendants, the Supreme Court’s decision will provide clear guidance and strategy for handling these claims, but until appellate courts deliver clearer answers on CIPA, companies should assume that plaintiffs will continue to test aggressive theories and that outcomes will remain uneven.

The Office of California Attorney General Rob Bonta announced the largest settlement for violations of the California Consumer Privacy Act (CCPA) to date, imposing a $2.75 million civil penalty and injunctive relief focused on how Disney implements consumer opt-outs across its streaming ecosystem. Disney must pay the penalty within 30 days of the judgment’s effective date. Beyond the headlining number, the settlement highlights an enforcement theme that has become increasingly explicit. Opt-out rights must be effective in practice across the interfaces where consumers actually interact, not merely available as a formal policy or isolated control.

According to the allegations, Disney did not fully effectuate consumers’ requests to opt out of the sale or sharing of personal information across all devices and streaming services connected to a consumer’s Disney account. The court entered a stipulated “Final Judgment and Permanent Injunction” in Los Angeles County Superior Court pursuant to the CCPA and California’s Unfair Competition Law. The judgment defines the covered footprint broadly, stating that “Disney streaming services” include, without limitation, Disney+, Hulu, and ESPN+. That framing matters for any company operating multiple “distinctly branded” services that are nonetheless tied together through shared identity, ad tech, or data infrastructure.

For most regulated organizations, the larger financial exposure is often operational rather than punitive. Investigation response, engineering remediation, vendor reconfiguration, and validation across multiple apps and device types can quickly outpace the civil penalty, particularly when opt-outs must propagate through identity graphs and pseudonymous profiles used for selling, sharing, or cross context behavioral advertising.

The injunctive provisions, however, are the real compliance signal. Disney must implement a “consumer friendly, easy to execute opt out process” with minimal steps and support for opt-out preference signals, then apply opt-outs account-wide for logged-in users across all associated Disney streaming services.

The order also addresses common failure points for non-logged-in users, requiring clear instructions about logging in or providing only the minimal personal information needed to fully effectuate the opt-out, while otherwise treating the opt-out as applying to the browser, app, or device and associated profiles. It further requires clear and conspicuous opt-out links, notices scaled appropriately to the device, controls that do not rely on hard-to-find or friction-heavy interface patterns, a way for consumers to confirm the opt-out was processed, and guardrails against confusing “choice architecture” that could imply cookie settings or marketing preferences substitute for a full opt-out of sale or sharing.

Finally, Disney must provide ongoing progress updates and maintain a three-year assessment and monitoring program with annual reporting, reinforcing that California’s focus is shifting from one-time user interface fixes to durable operational controls. The full order and settlement can be found here.

The Fair Credit Reporting Act (FCRA) is decades old, but a recent artificial intelligence (AI)-related complaint suggests that plaintiffs are testing whether legacy consumer-reporting rules can apply to AI-driven hiring assessments.

In January, a class action complaint was filed in California, Kistler v. Eightfold AI Inc., No. C26-00214 (Cal. Super. Ct. Jan. 20, 2026). Eightfold is an AI recruiting platform that gives employers tools to streamline the hiring process. The class action complaint raises a familiar consumer protection issue in a contemporary HR context: when an AI tool scores job applicants in the background, what legal regime governs that activity? The plaintiffs, both job applicants, allege that Eightfold uses hidden AI during online job applications to collect sensitive and sometimes inaccurate information about applicants and generate a “likelihood of success” score that employers use to rank candidates. They further allege that applicants often do not even know Eightfold is involved and have no meaningful chance to review or dispute the AI-generated output before it influences whether they advance in the hiring process.

The pleading asserts that Eightfold’s outputs are “consumer reports” used for employment purposes, and that the company operates as a consumer reporting agency under the FCRA. If the court is persuaded by that reasoning, it may find that Eightfold was responsible for FCRA compliance, including clear disclosures and authorization; certifications from employer-clients; and practical mechanisms that allow applicants to access, dispute, and correct information before adverse action is taken against them.

The case offers several takeaways for organizations exploring AI for hiring purposes. First, be clear on what data your AI tool is using. The complaint alleges that Eightfold’s system does not rely only on what the applicant submits but also pulls in information from the employer and third-party online sources, even allegedly generating additional inferences about the applicant to build a profile. The more an AI model relies on external and inferred data, the more you should think about accuracy, transparency, and whether applicants can see and correct information about them.

In addition, there may be regulatory support for the plaintiffs’ position here. The complaint points to Consumer Financial Protection Bureau (CFPB) guidance indicating that FCRA concepts may extend to algorithmic scores used for hiring, particularly where a third party assembles or evaluates consumer information to generate scores for employers. Whether the court agrees to apply the FCRA to this context remains to be determined, but it may be the case that AI does not displace existing consumer-reporting frameworks. If you use an AI tool to materially influence high-stakes decisions such as hiring, traditional consumer protection measures such as the FCRA could potentially apply.

Given its extraterritorial reach, companies outside Europe should start preparing for the EU AI Act now. In general, the Act will apply to companies that develop high-risk AI systems used in the EU and that provide outputs from those systems, even if the companies have no physical presence in Europe.

Ahead of August 2026, companies, especially those with no direct dealings in the EU, should begin auditing their systems and practices to determine whether they fall under the Act, to avoid the surprise of later enforcement actions. That starts with mapping how the organization uses AI to generate outputs and identifying where AI is being used by third-party vendors and partners. Once those AI use cases are documented, the next step is assessing whether any of the documented cases could be considered high-risk, which may trigger compliance requirements under the Act.

Companies using or deploying high-risk AI outside Europe should work with suppliers and customers to update contracts so they are notified when AI is used and can restrict AI systems and outputs from being shared in the EU, where appropriate. For companies that intend to do business in Europe, now is also the time to begin building a risk management program in preparation for the Act’s enforcement.

If you are among the one billion individuals who own an Android device running on Android 12, or a previous iteration of the operating system, now is the time to consider upgrading your device. According to Forbes, this represents approximately 40% of all Android devices in the market.

Google has issued a warning that any Android device running the Android 12 operating system or older is no longer supported with patches or updates for vulnerabilities. This means that any device running Android 12 or older is at risk of compromise through spyware or malware attacks. The newest software version is 16. If your Android phone is not running version 16, it is out of date.

Whatever device you own, it is critical to update to the newest software to protect against known vulnerabilities. As we have pointed out before, once your device manufacturer issues a patch, apply it as soon as possible; patches are designed to mitigate critical vulnerabilities. To read our previous posts about iOS software updates, click here.

Patch, patch, patch. If your device doesn’t support the newest patch, it’s time to invest in a new one.

Security researchers at Huntress Labs have identified a vulnerability in SolarWinds’s Web Help Desk that threat actors are exploiting to allow them to execute code remotely.

The vulnerability was added to the Cybersecurity and Infrastructure Security Agency’s Known Exploited Vulnerabilities catalog last week, and SolarWinds issued a warning classifying it as “critical severity” and urging users to patch. According to SolarWinds, the vulnerability “could lead to remote code execution, which would allow an attacker to run commands on the host machine. This could be exploited without authentication.”

SolarWinds has provided indicators of compromise, suspicious IP addresses, and a patched software release, which security professionals should review and apply.

Huntress Labs has identified three exploited customers, and Cybersecurity Dive reports that Shadowserver has found 150 instances of compromise. Huntress Labs researchers “believe a threat group tracked as Storm-2603 is behind the attacks.”

If your organization uses SolarWinds’s Web Help Desk, patching the vulnerability should be a priority.

California resident Nathaniel Bee filed a lawsuit this week alleging that the ATP Tour’s website used third-party tracking technology that captured details on how visitors interacted with the site, including what content they viewed; how they navigated the website; and what type of device they used, without user consent in violation of the California Invasion of Privacy Act. According to the complaint, that information was transmitted to third parties, including Google and Comscore Inc., and was used for targeted advertising and analytics.

The lawsuit centers on what users were told and what the website allegedly did anyway. The plaintiff alleges that users first visiting the ATP Tour website are presented with two options: accept “essential cookies only” or accept “cookies.” The plaintiff argues that the “essential cookies only” option gives visitors the impression that they can opt out of tracking that shares information with “social media, advertising and analytics partners.” However, even after a user selects “essential cookies only,” the ATP Tour allegedly continued transmitting non-essential information that could be used for targeted advertising. The complaint states that “even when users attempted to limit tracking by rejecting nonessential cookies, ATP Tour failed to prevent third parties from receiving information generated by users’ website communications.”

Even at the allegation stage, the case highlights a pressure point for many consumer-facing websites: consent interfaces are only as reliable as the technical controls behind them. If a site presents an “essential cookies only” option, users reasonably expect that third-party advertising and analytics tags, pixels, and scripts are actually blocked from firing and from transmitting data to third parties when a user opts out.

Regardless of how the claims ultimately shake out, the complaint underscores a simple but increasingly litigated reality: privacy disclosures and consent banners are only as defensible as the engineering behind them. For consumer-facing organizations, this case is a reminder to align what the interface promises with what the site does in practice, and to validate that opt-out choices are enforced consistently across all third-party tools and integrations.
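Conceptually, enforcing that kind of promise means treating the consent choice as an allowlist that gates which tags are permitted to load at all, rather than loading everything and hoping tags honor the setting. A minimal sketch of that gating logic (the tag names and category labels are hypothetical, not drawn from the complaint):

```python
# Hypothetical tag registry: each tag on the site is classified up front.
# A tag not in the registry should never be loaded at all.
TAG_CATEGORIES = {
    "session-cookie": "essential",
    "fraud-detection": "essential",
    "analytics-pixel": "advertising_analytics",
    "ad-retargeting": "advertising_analytics",
}

def tags_allowed(consent: str) -> set[str]:
    """Return the tags permitted to fire for a given consent choice.
    'essential_only' blocks everything not classified as essential;
    'all' permits every registered tag."""
    if consent == "all":
        return set(TAG_CATEGORIES)
    # Default-deny: anything not explicitly essential stays off.
    return {tag for tag, cat in TAG_CATEGORIES.items() if cat == "essential"}
```

The design point is default-deny: an unclassified or newly added tag is blocked until someone deliberately categorizes it, which is the failure mode the ATP Tour complaint alleges was missing.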

In January, the General Services Administration’s (GSA) Office of the Chief Information Security Officer issued a new procedural guide, CIO-IT Security-21-112 Rev. 1, that sets expectations for protecting Controlled Unclassified Information (CUI) when it resides in nonfederal contractor systems. Although the document is internal guidance, it creates an approval framework that may soon determine whether a contractor is eligible for GSA contracts involving CUI.

The security baseline is built on NIST SP 800-171 Rev. 3, and it applies when CUI resides in a contractor system that is not operated on behalf of the federal government and therefore is not subject to FISMA or FedRAMP. Covered CUI could include CUI stored in internal file shares or processed in a commercial cloud tenant.

The GSA describes a five-phase lifecycle—Prepare, Document, Assess, Authorize, and Monitor—derived from the National Institute of Standards and Technology’s Risk Management Framework. Contractors must document their CUI-handling system, complete an independent security assessment, obtain GSA approval, and then meet ongoing monitoring and periodic reassessment requirements. Perhaps the most notable cybersecurity requirement is the incident reporting timeline: contractors must report suspected and confirmed CUI incidents within one hour of discovery. By comparison, many state breach notification laws are measured in days, and the New York Department of Financial Services cybersecurity rule generally uses a 72-hour notice window for certain reportable events. This one-hour requirement is unusually compressed and may be difficult to operationalize.

GSA’s CUI-focused compliance track will look familiar to contractors following DoD’s CMMC, but there are differences. GSA aligns to NIST SP 800-171 Rev. 3, while DoD currently relies on Rev. 2 under DFARS 252.204-7012/CMMC. The GSA also appears willing to approve systems with gaps if certain key requirements are met.

Among other uncertainties, the guide does not specify when the requirements will take effect. Still, the document signals that the GSA is moving toward a model where contractors may need to demonstrate the security of the specific system handling CUI, not just accept contract language. Contractors that handle CUI under GSA contracts may want to begin mapping where their CUI resides, test incident reporting procedures, and plan for a more robust GSA contract approval process.

Until California’s legislature provides clearer guardrails, companies should expect continued class action activity under the California Invasion of Privacy Act (CIPA), targeting common website tracking technologies. Plaintiffs’ firms are actively testing how far this decades-old statute extends in the modern web environment, and courts have not reached a consensus. That uncertainty creates real litigation risk for organizations that rely on tools like chat widgets, session replay, and analytics.

Many companies use website tools that help improve customer experience, measure performance, prevent fraud, and support marketing efforts. These tools often capture data about how visitors interact with webpages, including clicks, cursor movements, page navigation, chat messages, and form entries. Plaintiffs are increasingly arguing that certain implementations of these tools amount to unlawful interception or recording of communications under CIPA.

The result is a rising wave of proposed class actions that can be expensive to defend, difficult to predict, and costly to resolve. The practical takeaway is straightforward—even if you believe your organization’s practices are reasonable, it is worth reviewing disclosures, consent flows, and vendor configurations now, rather than after a demand letter or complaint arrives.

CIPA was enacted in 1967 to prevent secret wiretapping by both law enforcement and private individuals. The plaintiffs’ bar has since repurposed the statute to challenge modern website technologies, including:

  • Chat features that allow visitors to communicate with a company in real time;
  • Session-replay tools that record user interactions with webpages for troubleshooting and UX improvements; and
  • Analytics code that tracks usage patterns and behavior across the site.

The core allegation is that these tools record or “listen in” on communications without proper consent. Plaintiffs often frame routine website telemetry as covert monitoring, particularly when data flows to third-party vendors.

Some courts have concluded that visitors could reasonably expect chats, form entries, or even certain click activity to remain private. In these decisions, disclosures may not be treated as sufficiently clear or sufficiently tied to meaningful consent for the specific tracking at issue. Other courts have held that website interactions are not confidential where users are clearly told their data and usage may be collected or tracked. In these decisions, prominent disclosures and clear notice can undermine the claim that a “secret” interception occurred.

This lack of uniformity is a major driver of continued filings. Plaintiffs can point to decisions that let claims survive early motions, while defendants can cite dismissals, but neither side has a guaranteed playbook.

While the courts remain split, companies can reduce risk by focusing on a few concrete areas:

  • Revisit Privacy Policy and Terms of Use disclosures;
  • Evaluate consent banners and how consent is captured;
  • Reassess whether you need each tracking tool and its configuration; and
  • Consider arbitration provisions and class action waivers.
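As a starting point for reassessing which tracking tools a site actually loads, a short script can enumerate the third-party script hosts present in a page’s HTML and compare them against the vendor list in your privacy policy. A minimal sketch using only the Python standard library (the class and function names are illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptTagCollector(HTMLParser):
    """Collect the hosts of all external <script src=...> tags on a page."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc
                if host:  # relative paths like "/app.js" have no host
                    self.hosts.add(host)

def third_party_script_hosts(html: str, first_party: str) -> set[str]:
    """Return external script hosts that are not the first-party domain —
    a starting point for inventorying tracking tools on a page."""
    parser = ScriptTagCollector()
    parser.feed(html)
    return {h for h in parser.hosts if not h.endswith(first_party)}
```

An inventory like this only captures statically declared scripts; tags injected dynamically by tag managers require inspecting live network traffic (for example, with browser developer tools), so treat the script as a first pass, not a complete audit.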

CIPA was not written with session replay, chat widgets, or modern analytics in mind, but is being used to challenge them now. With courts split on whether website interactions are “confidential” and what level of disclosure and consent is sufficient, the best risk-management approach is proactive: confirm what your site is doing, align disclosures with reality, strengthen notice and consent flows, and evaluate contractual tools like arbitration clauses and class waivers.