Sophisticated vishing (voice phishing) attacks continue to target and victimize company call centers and help desks. Recently, a large ad tech company reported that customer information had been compromised as a result of a vishing attack. The company warns that the information obtained in the incident can be used by threat actors to conduct phishing and vishing attacks against customers via email, text, or telephone.

The attackers, believed to be ShinyHunters (again), use similar tactics in their attacks against companies across all industries. The threat actor, impersonating a company’s information technology employee, calls company employees (often a help desk or call center) and tricks them into entering credentials and multifactor authentication (MFA) codes on phishing sites that mimic the company’s portal, or asks them to assist the “employee” with changing his or her credentials to access the company network. They also use device code vishing to bypass MFA defenses. Once they have access to the company network, and to the data the impersonated employee could reach, they often escalate privileges and exfiltrate data to use against the company in an extortion campaign.

These attacks continue to escalate, and call centers and help desks are central to thwarting them. Companies may wish to consider immediate additional training and education for in-house call center and help desk personnel, updating processes for employees to change credentials through voice requests, implementing more robust identification requirements (including using internal company information that only employees would have access to), and conducting tabletop exercises on how to respond to these attacks.

A newly filed putative class action in the Western District of Texas targets Bumble, Inc., over an alleged “massive and preventable” cyberattack in or around January 2026, in which attackers allegedly accessed highly sensitive user data stored in Bumble’s systems. The complaint alleges the compromised information included names, dates of birth, addresses, telephone numbers, Social Security numbers, and account numbers, as well as highly sensitive, context-rich dating data such as chat history and dating history, the kind of data combination that can heighten identity-theft risk and privacy harms. The named plaintiff alleges time loss, anxiety, and increased risk of fraud and identity theft, and seeks damages and injunctive relief on behalf of the individuals whose information was stored and/or exposed in the breach. 

For companies watching this case, the “what went wrong” allegations read like a checklist of avoidable security and communications failures. The complaint claims Bumble promised “appropriate and reasonable security measures” (including secured servers and firewalls) in its public-facing privacy policy but allegedly did not adhere to those claims. The complaint further alleges the breach occurred through a phishing attack attributed to the “ShinyHunters” threat actor group, and argues that the fact of a successful phishing compromise suggests inadequate security controls, pointing to measures such as organization-wide two-factor authentication and adequate employee cybersecurity training as known safeguards. The complaint also alleges that Bumble failed to properly secure and encrypt data, failed to implement timely breach detection, and failed to provide prompt and accurate notice.

The takeaway is that privacy policy statements, phishing training failures, encryption decisions, breach detection, and notification practices can quickly become central allegations in a class action when a security incident occurs. Even at this stage, this lawsuit is a reminder that aligning written privacy and security commitments with day-to-day implementation, and documenting those efforts, can be just as important as the technical controls themselves when an incident triggers litigation.

Figure Lending, LLC, which markets itself as America’s #1 non-bank Home Equity Line of Credit lender, has been named in a proposed federal class action following a reported cyber incident that allegedly exposed customer personal information. Mardikian v. Figure Lending, LLC, 3:26-cv-00135 (W.D.N.C. Feb. 19, 2026). The complaint alleges that the company’s systems were improperly accessed and customers’ personally identifiable information was compromised.

The complaint highlights the growing litigation risk created when a company’s public-facing privacy representations are juxtaposed against breach allegations. It quotes Figure Lending’s privacy policy, stating it uses “reasonable precautions, including technical and administrative measures” to protect personal data. The complaint also quotes policy language stating the company does not sell personal data and is “committed to respecting your privacy choices.”

For fintech companies and mortgage providers, this case is a reminder that protecting sensitive financial and identity data must be treated as a core business control, not just an IT function, especially where plaintiffs may frame claims through financial-privacy statutes. The complaint alleges Figure Lending is a financial institution under the Gramm-Leach-Bliley Act (GLBA) and is subject to GLBA-related obligations, including the Safeguards Rule’s requirement for a written information security program with reasonable administrative, technical, and physical safeguards. It also alleges GLBA violations tied to sharing personally identifiable information with a non-affiliated third party without an opt-out notice and a reasonable opportunity to opt out.

The Figure Lending complaint is a reminder that cybersecurity and privacy commitments rise and fall together. When an incident is alleged to stem from a human-layer attack like social engineering, attention often shifts beyond technical controls to governance, consumer communications, and whether an organization’s public privacy statements align with its security posture. For lenders and fintechs handling sensitive financial and identity data, that alignment (and the ability to provide timely, legally compliant notice) can be a consequential component of incident response.

DJI, the world’s leading manufacturer of civilian drones, has escalated its dispute with the Federal Communications Commission (FCC) by filing an appeal in the Ninth Circuit after the FCC placed many DJI products on its “covered list,” which the FCC uses for telecommunications equipment it deems an unacceptable national security risk. DJI says the decision effectively prevents it from “marketing, selling, and importing new products into the United States,” and that the order covers DJI communications and video surveillance equipment. Rather than waiting for the FCC to decide whether it will reconsider, DJI says it is appealing the decision now to protect its business and all of the consumers and businesses that rely on its products.

For drone users and manufacturers, the practical takeaway is that a regulatory designation can become an immediate operational and commercial disruption, even before the courts resolve the underlying legal issues. DJI alleges the FCC “exceeded its statutory authority, failed to observe statutorily required procedures, and violated the Fifth Amendment,” and also claims that the FCC has used the ruling “as a justification” to restrict DJI’s ability to import not only covered products but even other existing and new products “outside the scope of the ruling.” DJI’s earlier reconsideration petition also described the FCC’s approach as unprecedented, asserting that “for the first time,” the bureau added an “entire category of products” rather than particular products produced by particular entities. Right now, drone operators can map their fleets and supply chain exposure by identifying which platforms, payloads, and support items are affected; review contracts, customer commitments, and lead times in case availability tightens (which could include replacement parts); consider contingency procurement and platform diversification where feasible; and prepare clear customer and internal communications so operational teams know what can be bought, serviced, and deployed while the appeal proceeds.

The fight between creators and big tech has mostly been focused on the alleged copyright infringement of using creative works in AI training data. However, trademark law might be the next battleground as creators look for additional ways to protect their work from AI-related misuse. Actor Matthew McConaughey recently received U.S. Registration No. 8,070,191 for his famous line, “Alright, Alright, Alright” delivered in the 1993 movie Dazed and Confused. The registration is for a sensory mark that protects the distinct intonation he made famous in the film. 

Trademarks protect a brand’s identity. Unlike copyright infringement, trademark infringement generally turns on whether a mark is used in commerce in a way that is likely to cause consumer confusion. That means using a trademarked phrase as an AI input, or having it appear in an AI output, is not automatically trademark infringement. Even so, trademarks can be a useful tool for artists looking to protect their identity from deepfakes and other AI-generated content that improperly imitates them. In that sense, trademark law may offer creators another path to push back against AI developers or users who try to profit from an artist’s identity without permission.

While conducting routine internet scanning, researchers at UpGuard discovered a misconfigured cloud database online containing billions of records, including 2.7 billion Social Security numbers (SSNs) and 3 billion plaintext email address and password combinations. The fairly easy-to-find data was accessible without authentication.

After the researchers reported the exposure to the FBI’s Internet Crime Complaint Center (IC3) and the German hosting provider Hetzner, the database was taken down.

According to Cyberinsider, “the sheer volume of records suggests the dataset may have been constructed by aggregating and refining data from prior large-scale breaches.” The researchers estimate the full dataset could contain over 1 billion unique Social Security numbers and more than 2.2 billion unique passwords.

The researchers even attempted to verify the authenticity of the data by cross-checking it with people they knew. They found that the Social Security numbers in the dataset were valid, and one individual had already been a past victim of identity theft. They concluded that most of the data was harvested before 2016.

The findings show that compromised data from past breaches is aggregated and used today for identity theft and fraud. Because Social Security numbers serve as an authentication tool for opening financial accounts and credit cards, it is important that consumers protect themselves from identity theft and fraud. Checking your credit report, enrolling in credit monitoring, and placing a credit freeze on your accounts are still effective ways to protect yourself. If you haven’t done so yet, now is the time to make it a priority.
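One practical way to check whether a password appears in aggregated breach corpora like this one is the k-anonymity scheme used by the Have I Been Pwned “Pwned Passwords” range API: only the first five characters of the password’s SHA-1 hash are ever sent, and the match against the returned candidate suffixes happens locally. A minimal sketch of the client-side half (the response text in the usage note is illustrative, not real breach data):

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for a k-anonymity range lookup.

    Only the 5-character prefix would be sent to the range API;
    the 35-character suffix stays local and is compared against
    the candidate list the API returns.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def appears_in_response(suffix: str, response_text: str) -> int:
    """Match a local hash suffix against 'SUFFIX:COUNT' response lines.

    Returns the breach count for the suffix, or 0 if it is absent.
    """
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

For example, `hibp_range_query("password")` yields the prefix `5BAA6`; a client would then request `GET https://api.pwnedpasswords.com/range/5BAA6` and pass the response body to `appears_in_response`, so the full hash never leaves the machine.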

A recent white paper issued by SocRadar, entitled “Operation DoppelBrand: Weaponizing Fortune 500 Brands for Credential Theft and Remote Access,” provides a stark outline of how a threat actor known as GS7 has been “targeting banking institutions, technology companies, payment platforms, and other entities” by creating fake, “highly similar” web portals to harvest customer credentials. The campaign has been dubbed “Operation DoppelBrand.” The threat actor uses rotating infrastructure, registering domains through providers such as NameCheap and OwnRegistrar, to stand up the fake but realistic web portals.

GS7 uses “sophisticated custom phishing kits to download remote management and monitoring tools on victim systems, enabling remote access or the deployment of additional tools such as malware.” It then uses bots and Telegram to exfiltrate data for financial fraud.

Between December 2025 and January 2026, “more than 150 domains related to the modus operandi and characteristics of the latest campaign are estimated to have been used.” GS7 is targeting U.S.-based companies, including banks, financial institutions, and technology companies.

To combat website or portal impersonation, companies may wish to consider several practical steps that the Forbes Technology Council has outlined here, including monitoring domains that could be created to impersonate your brand.  
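Domain monitoring of the kind suggested above often starts with a simple lookalike check against newly registered domains. A minimal sketch (the homoglyph table and distance threshold are illustrative assumptions, not taken from the white paper) that folds common character substitutions and measures edit distance against a protected brand label:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalize(label: str) -> str:
    """Fold common look-alike substitutions before comparing labels."""
    label = label.lower().replace("rn", "m").replace("vv", "w")
    return label.translate(str.maketrans("0135", "oles"))

def is_lookalike(candidate: str, brand: str, max_distance: int = 1) -> bool:
    """Flag a newly observed domain label as a possible brand impersonation."""
    cand, target = normalize(candidate), normalize(brand)
    if cand == target:
        return True  # identical after homoglyph folding, e.g. examp1e vs example
    return levenshtein(cand, target) <= max_distance
```

In practice, a check like this would run against daily feeds of newly registered domains, with anything flagged routed to a takedown or blocklisting workflow; production brand-monitoring services add many more signals (TLD swaps, certificate transparency logs, content similarity).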

Website tracking litigation continues to generate high stakes compliance risk, but not all privacy statutes are moving through the courts at the same pace. A notable divergence is emerging between the Video Privacy Protection Act (VPPA) and the California Invasion of Privacy Act (CIPA): while the former is rapidly heading toward definitive interpretation by the United States Supreme Court, the latter remains stalled in uncertainty, with litigants still waiting for meaningful appellate guidance.

The U.S. Supreme Court will hear Salazar v. Paramount Global, No. 25-459, to decide who qualifies as a “consumer” under the VPPA after a two-to-two circuit split. The core question is whether the VPPA applies a) narrowly, to subscribers of “audiovisual materials,” or b) broadly, to all company subscribers. For companies that stream video, embed video players, run video-heavy marketing pages, or monetize audiences through targeted advertising, that “consumer” definition can be outcome determinative. If “consumer” is interpreted broadly, VPPA exposure can extend beyond classic video subscription relationships and into ordinary customer or account relationships that happen to interact with video content.

On the other hand, in Fregosa v. Mashable, Inc., No. 3:2025cv01094 (N.D. Cal. 23, 2026), a dispute involving interpretation of CIPA’s “pen register” provision and whether it applies to website tracking technologies, the District Court denied the plaintiff’s request for an immediate interlocutory appeal. The court concluded there were no “substantial grounds for difference of opinion,” citing “a handful of federal district courts that had adopted similar interpretations,” even though “a growing number of state courts” had taken a conflicting view. The practical result is a familiar one for CIPA defendants and plaintiffs alike: litigation continues to multiply; interpretations continue to diverge; and appellate guidance continues to lag.

When the Supreme Court resolves an issue like who is a “consumer” under VPPA, companies will get a clearer national baseline; even if the decision expands liability, it reduces the cost of uncertainty. With that decision under VPPA, legal teams can align disclosures, consent flows, vendor contracts, and tracking architecture to a clear definition. However, with the ways in which courts are making CIPA decisions, a company can face materially different outcomes depending on which court hears the matter, how the court interprets “pen register,” whether the court decides that website tracking technologies fall under the law’s scope, and whether the court is persuaded by the reasoning of the federal district courts or that of the state courts.

For VPPA defendants, the Supreme Court’s decision will provide clear guidance and strategy for handling these claims, but until appellate courts deliver clearer answers on CIPA, companies should assume that plaintiffs will continue to test aggressive theories and that outcomes will remain uneven.

The Office of California Attorney General Rob Bonta announced the largest settlement for violations of the California Consumer Privacy Act (CCPA) to date, imposing a $2.75 million civil penalty and injunctive relief focused on how Disney implements consumer opt-outs across its streaming ecosystem. Disney must pay the penalty within 30 days of the judgment’s effective date. Beyond the headlining number, the settlement highlights an enforcement theme that has become increasingly explicit. Opt-out rights must be effective in practice across the interfaces where consumers actually interact, not merely available as a formal policy or isolated control.

According to the allegations, Disney did not fully effectuate consumers’ requests to opt-out of the sale or sharing of personal information across all devices and streaming services connected to a consumer’s Disney account. The court entered a stipulated “Final Judgment and Permanent Injunction” in Los Angeles County Superior Court pursuant to the CCPA and California’s Unfair Competition Law. The judgment defines the covered footprint broadly, stating that “Disney streaming services” include, without limitation, Disney+, Hulu, and ESPN+. That framing matters for any company operating multiple “distinctly branded” services that are nonetheless tied together through shared identity, ad tech, or data infrastructure.

For most regulated organizations, the larger financial exposure is often operational rather than punitive. Investigation response, engineering remediation, vendor reconfiguration, and validation across multiple apps and device types can quickly outpace the civil penalty, particularly when opt-outs must propagate through identity graphs and pseudonymous profiles used for selling, sharing, or cross context behavioral advertising.

The injunctive provisions, however, are the real compliance signal. Disney must implement a “consumer friendly, easy to execute opt out process” with minimal steps and support for opt-out preference signals, then apply opt-outs account wide for logged-in users across all associated Disney streaming services.

The order also addresses common failure points for non-logged-in users, requiring clear instructions about logging in or providing only minimal personal information needed to fully effectuate the opt-out, while otherwise treating the opt-out as applying to the browser, app, or device and associated profiles. It further requires clear and conspicuous opt-out links, device scaled notices, and controls that do not rely on hard to find or friction heavy interface patterns, a way for consumers to confirm the opt-out was processed, and guardrails against confusing “choice architecture” that could imply cookie settings or marketing preferences substitute for a full opt-out of sale or sharing.
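The support for opt-out preference signals required by the order typically means honoring the Global Privacy Control, which participating browsers transmit on every request as a `Sec-GPC: 1` header. A minimal sketch of detecting the signal and applying an opt-out account-wide (the registry structure and service names are hypothetical stand-ins for a real identity graph):

```python
def gpc_opt_out_requested(headers: dict) -> bool:
    """Return True if the request carries the Global Privacy Control
    signal, which the GPC proposal defines as the header 'Sec-GPC: 1'.
    Header-name lookup is case-insensitive, per HTTP semantics.
    """
    value = next((v for k, v in headers.items() if k.lower() == "sec-gpc"), "")
    return value.strip() == "1"

def propagate_opt_out(account_id: str, linked_services: list, registry: dict) -> dict:
    """Record the opt-out against every service tied to the account,
    mirroring the order's account-wide requirement for logged-in users.
    The flat (account, service) registry here is purely illustrative.
    """
    for service in linked_services:
        registry[(account_id, service)] = "opted_out"
    return registry
```

A request handler would call `gpc_opt_out_requested` on incoming headers and, for a logged-in user, fan the opt-out out to every linked service rather than only the one the consumer happened to be using.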

Finally, Disney must provide ongoing progress updates and maintain a 3-year assessment and monitoring program with annual reporting, reinforcing that California’s focus is shifting from one time user interface fixes to durable operational controls. The full order and settlement can be found here.

The Fair Credit Reporting Act (FCRA) is decades old, but a recent artificial intelligence (AI)-related complaint suggests that plaintiffs are testing whether legacy consumer-reporting rules can apply to AI-driven hiring assessments.

In January, a class action complaint was filed in California, Kistler v. Eightfold AI Inc., No. C26-00214 (Cal. Super. Ct. Jan. 20, 2026). Eightfold is an AI recruiting platform that provides employers with tools to streamline the hiring process. The class action complaint raises a familiar consumer protection issue in a contemporary HR context: when an AI tool scores job applicants in the background, what legal regime governs that activity? The plaintiffs, both job applicants, allege that Eightfold uses hidden AI during online job applications to collect sensitive and sometimes inaccurate information about applicants and generate a “likelihood of success” score that employers use to rank candidates. They further allege that applicants often do not even know Eightfold is involved and have no meaningful chance to review or dispute the AI-generated output before it influences whether they advance in the hiring process.

The pleading asserts that Eightfold’s outputs are “consumer reports” used for employment purposes, and that the company operates as a consumer reporting agency under the FCRA. If the court is persuaded by that reasoning, it may find that Eightfold was responsible for FCRA compliance, including clear disclosures and authorization; certifications from employer-clients; and practical mechanisms that allow applicants to access, dispute, and correct information before adverse action is taken against them.

The case offers several takeaways for organizations exploring AI for hiring purposes. First, be clear on what data your AI tool is using. The complaint alleges that Eightfold’s system does not rely only on what the applicant submits but also pulls in information from the employer and third-party online sources, even allegedly generating additional inferences about the applicant to build a profile. The more an AI model relies on external and inferred data, the more you should think about accuracy, transparency, and whether applicants can see and correct information about them.

In addition, there may be regulatory support for the plaintiffs’ position here. The complaint points to Consumer Financial Protection Bureau (CFPB) guidance indicating that FCRA concepts may extend to algorithmic scores used for hiring, particularly where a third party assembles or evaluates consumer information to generate scores for employers. Whether the court agrees to apply the FCRA in this context remains to be determined, but AI does not necessarily displace existing consumer-reporting frameworks. If you use an AI tool to materially influence high-stakes decisions such as hiring, traditional consumer protection laws like the FCRA could potentially apply.