The American Hospital Association (AHA) is advising hospitals and healthcare entities to “take precautionary measures in case Iran, its proxies or self-radicalized individuals attempt attacks in the U.S.” during the conflict between Israel, the United States and Iran. The precautionary measures include strengthening cybersecurity and physical security measures.

Although the AHA is unaware of any specific credible threats against U.S. based healthcare organizations, adversaries of the United States are known to attack critical infrastructure, including healthcare organizations, during geopolitical conflicts. In the past, nation-state adversaries have used cyber proxies or hacktivist groups to disrupt critical infrastructure during conflicts, and concern is heightened that such disruption could occur during the current conflict with Iran.

The Federal Bureau of Investigation, the Cybersecurity and Infrastructure Security Agency, the Department of Defense and the National Security Agency have also recently issued a joint Fact Sheet warning critical infrastructure organizations to review cybersecurity protections in light of geopolitical tensions involving Iran. The fact sheet “details the need for increased vigilance for potential cyber activity against U.S. critical infrastructure by Iranian state-sponsored or affiliated threat actors.”

The fact sheet “urges owners and operators of critical infrastructure organizations and other potentially targeted entities to review this fact sheet to learn more about the Iranian state-backed cyber threat and actionable mitigations to harden cyber defenses.” Reviewing the fact sheet and implementing the mitigations should be a high priority for all critical infrastructure organizations.

Cybersecurity firm Darktrace recently issued its Annual Threat Report, which offered some startling statistics and findings. The Threat Report provides a “comprehensive assessment of the global cyber threat landscape and the trends shaping cyber risk in 2026.”

Findings are summarized below, but we strongly encourage reading the whole report.

  • Email attacks are getting more sophisticated (which we know). Darktrace analyzed 32 million phishing emails and determined that threat actors are using AI to create content and evade detection, and noted a marked increase in “identity-targeting techniques.”
  • QR-code phishing attacks increased 28% between 2024 and 2025. New techniques, dubbed “splishing” (“in which a QR code is split into two distinct images”) and QR code “nesting” (“where a legitimate QR code is embedded with a malicious one”), are designed to bypass link-scanning tools and re-route victims to malicious sites.
  • Newly created domains are on the rise. 1.6 million phishing emails “relied on newly created domains spun up specifically for malicious activity.”
  • “70% of phishing emails passed DMARC authentication, helping them appear legitimate to both users and automated controls.”
  • Critical national infrastructure is being targeted.
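The DMARC finding above is worth unpacking: passing DMARC only means a message aligned with the sending domain's published policy, which attackers can satisfy by registering fresh lookalike domains that publish their own valid records. As a minimal, illustrative sketch (the record value is hypothetical, not from the Darktrace report), here is how a DMARC TXT record can be parsed into its policy tags:

```python
# Minimal sketch: parsing a DMARC TXT record into tag/value pairs.
# The record string below is hypothetical, for illustration only.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record like 'v=DMARC1; p=reject' into a
    tag/value dictionary."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split at first '='
            tags[key.strip()] = value.strip()
    return tags

# A lookalike domain an attacker registers can publish its own valid
# DMARC record, so "passed DMARC" does not mean "trusted sender".
record = "v=DMARC1; p=none; rua=mailto:reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # → none  (a 'none' policy only monitors, never blocks)
```

Note that even a domain with a strict `p=reject` policy blocks only spoofing of that exact domain; a newly created lookalike domain is unaffected, which is why the newly-created-domain and DMARC statistics reinforce each other.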

The report is consistent with what we see on a day-to-day basis. It provides valuable insight into the threats facing companies and individuals and what the trends will be in 2026, all of which can be used to build a cybersecurity strategy and education for your organization.

The U.S. Court of Appeals for the Fifth Circuit recently issued a significant Telephone Consumer Protection Act (TCPA) decision in Bradford v. Sovereign Pest Control of TX, Inc., No. 24-20379, Doc. 85-1 (5th Cir. Feb. 25, 2026). The court affirmed summary judgment for the company and held that the TCPA’s “prior express consent” standard does not require “prior express written consent” for prerecorded or autodialed calls to wireless numbers, even if the calls are alleged to be telemarketing.

The defendant, Sovereign Pest, provides pest-control service plans. The plaintiff provided his cell-phone number on the service-plan agreement and later stated that he did so in case the company needed to contact him. During the course of the agreement, Sovereign Pest placed prerecorded calls seeking to schedule renewal inspections. The plaintiff scheduled inspections after receiving the calls and renewed his service plan four times. He later filed a putative class action alleging the calls were unsolicited prerecorded calls made without his prior express written consent.

On appeal, the Fifth Circuit concluded that the statutory text of the TCPA requires only “prior express consent” and does not distinguish between telemarketing and informational calls for purposes of the consent standard. In reaching its conclusion, the court declined to follow the Federal Communications Commission’s (FCC) regulation that imposes a written consent requirement for prerecorded telemarketing calls. Citing Supreme Court authority that courts must interpret Congress’s enacted text using ordinary tools of statutory interpretation without deferring to an agency’s reading, the court interpreted the TCPA without deference to the FCC’s added “written” requirement. Applying that standard to the record, the court held the plaintiff provided prior express consent based on his provision of his number, his statements, and later confirmations that the company could call him, his lack of objection, and his repeated renewals of the service.

Bradford could reshape TCPA litigation strategy (at least in the Fifth Circuit) in several ways. It may be harder for plaintiffs to establish liability where “no written consent” is pleaded as the core theory. Courts may also be more willing to consider defendants’ claim that consent was given based on relationship-based and operational evidence, including intake flows, account notes, call logs, recordings, and vendor records. At least in the Fifth Circuit, TCPA cases may turn less on what’s “in writing” and more on what’s in the record.

Data brokers are lining up to comply with California’s one-stop deletion tool requirement under the Delete Act, and the numbers signal a major shift in how privacy rights may be exercised and enforced in California starting this summer.

At its most recent meeting, the California Privacy Protection Agency (CPPA) reported that more than 575 data brokers have registered with its Delete Request and Opt-out Platform (DROP). DROP is the first tool of its kind in the United States. It allows California residents to submit a single request to delete personal information held by brokers registered in California. The platform went live on January 1, 2026, and early usage was immediate and substantial. The CPPA reported that over 242,000 California residents have signed up with DROP, and more than 18,000 deletion requests were submitted within 48 hours of launch. However, a big operational turning point arrives soon; data brokers must begin complying with those deletion requests on August 1, 2026.

Historically, deleting personal information held by data brokers has often required consumers to identify brokers one by one, locate opt-out or deletion pages, and repeat the process across dozens or even hundreds of companies. DROP is designed to reduce that burden by centralizing the request process into one form for brokers registered in the state. If the platform performs as intended at scale, it could meaningfully reduce “privacy friction” by consolidating deletion requests into a single workflow for California residents; raising the compliance baseline for brokers by standardizing intake and response expectations; and increasing accountability because registration and compliance timelines are visible to regulators.
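The "one request, many brokers" workflow described above can be sketched conceptually. The function, field names, and structures below are hypothetical, for illustration only; they are not the CPPA's actual DROP implementation or API.

```python
# Hypothetical sketch of the fan-out a centralized deletion tool like
# DROP performs: one resident request becomes one tracked entry per
# registered broker. Names and structures are illustrative only.

def fan_out_deletion_request(resident_request: dict,
                             registered_brokers: list[str]) -> dict:
    """Route a single deletion request to every registered broker,
    producing one pending entry per broker that a regulator could
    later check for compliance."""
    return {broker: {"request": resident_request, "status": "pending"}
            for broker in registered_brokers}

brokers = ["broker-a", "broker-b", "broker-c"]
queue = fan_out_deletion_request({"resident_id": "CA-0001"}, brokers)
print(len(queue))  # → 3  (one tracked entry per registered broker)
```

The design point this sketch illustrates is the accountability mechanism: because every request generates a per-broker status entry, brokers that never move a request out of "pending" become visible to regulators without any consumer complaint.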

For brokers, the compliance impact is direct. The volume of registered brokers, along with a large resident signup count, suggests that this August could bring sustained, high-volume deletion activity. Plus, once a platform makes consumer requests easier to submit at scale, non-compliance becomes easier to detect and potentially easier to prioritize for enforcement. If early participation is any indicator, DROP could quickly become the default way Californians exercise deletion rights, and the easiest way for regulators to spot which brokers are keeping up and which are not.

Next up for the CPPA, beyond the data broker ecosystem, are compliance checklists and guidance for cybersecurity audits and risk assessments, as well as guidelines for automated decision-making technology, aimed at helping companies comply with regulations adopted in 2025.

ShinyHunters continues to wreak havoc against well-known brands; most recently, Wynn Resorts. Wynn Resorts has confirmed that “an unauthorized third party acquired certain employee data.” It is believed that the threat actor was ShinyHunters. Fortunately for Wynn, the incident is not affecting its operations, and its resorts remain fully functional.

ShinyHunters announced it was the culprit on its leak site on February 20, 2026. It alleges that it stole more than 800,000 records, including Social Security numbers. Wynn was removed from the site four days later, and reported that “the unauthorized third party has stated that the stolen data has been deleted.”

Wynn has confirmed that it will be offering credit monitoring and identity protection services to affected employees.

Wynn is not alone in being a target of ShinyHunters. ShinyHunters has reportedly attacked over 100 organizations successfully through vishing attacks and compromised single sign-on credentials.

The techniques used by ShinyHunters and other threat actors in vishing campaigns provide timely, realistic scenarios for warning employees through education and training, and for cybersecurity tabletop exercises.

Sophisticated vishing (voice phishing) attacks continue to target and victimize company call centers and help desks. Recently, a large ad tech company reported that customer information had been compromised as a result of a vishing attack. The company warns that the information obtained in the incident can be used by threat actors to conduct phishing and vishing attacks against customers through the use of emails, texts or telephone numbers.

The attackers, believed to be ShinyHunters (again), use similar tactics in their attacks against companies in all industries. The threat actor, impersonating a company’s information technology employee, calls company employees (often a help desk or call center) and tricks them into entering credentials and multifactor authentication (MFA) codes on phishing sites that mimic the company’s portal, or asks them to assist the “employee” with changing his or her credentials to access the company network. They also use device code vishing to bypass MFA defenses. Once they have access to the company network, and access to the data the impersonated employee had access to, they often escalate privileges and exfiltrate data to use against the company in an extortion campaign.

These attacks continue to escalate, and call centers and help desks are central to thwarting them. Companies may wish to consider immediate additional training and education for in-house call center and help desk personnel, updating processes for employees to change credentials through voice requests, implementing more robust identification requirements (including using internal company information that only employees would have access to), and conducting tabletop exercises on how to respond to them.
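One of the process updates above, more robust identification before honoring a voice credential-reset request, can be sketched as a simple gate: the help desk calls back only a number already on file, requires an answer based on internal-only information, and confirms with the employee's manager out of band. The function, class, and field names below are hypothetical, for illustration only.

```python
# Hypothetical sketch of a help-desk verification gate for voice
# credential-reset requests; record fields and checks are illustrative.

from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    employee_id: str
    phone_on_file: str       # number the help desk calls back, never the caller's
    manager_confirmed: bool  # out-of-band confirmation from the manager

def may_reset_credentials(record: EmployeeRecord,
                          callback_number: str,
                          answered_internal_check: bool) -> bool:
    """Allow a voice-initiated reset only if every check passes:
    the callback goes to the number already on file, the caller
    answered a question based on internal-only information, and the
    employee's manager confirmed the request out of band."""
    return (callback_number == record.phone_on_file
            and answered_internal_check
            and record.manager_confirmed)

rec = EmployeeRecord("E-1001", "+1-555-0100", manager_confirmed=True)
print(may_reset_credentials(rec, "+1-555-0100", answered_internal_check=True))  # → True
print(may_reset_credentials(rec, "+1-555-0199", answered_internal_check=True))  # → False
```

The design choice worth emphasizing is that the gate is conjunctive: an attacker who passes one check (for example, by spoofing caller ID) still fails the reset unless every independent check also passes.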

A newly filed putative class action in the Western District of Texas targets Bumble, Inc., over an alleged “massive and preventable” cyberattack in or around January 2026, in which attackers allegedly accessed highly sensitive user data stored in Bumble’s systems. The complaint alleges the compromised information included names, dates of birth, addresses, telephone numbers, Social Security numbers, and account numbers, as well as highly sensitive, context-rich dating data such as chat history and dating history, the kind of data combination that can heighten identity-theft risk and privacy harms. The named plaintiff alleges time loss, anxiety, and increased risk of fraud and identity theft, and seeks damages and injunctive relief on behalf of the individuals whose information was stored and/or exposed in the breach. 

For companies watching this case, the “what went wrong” allegations read like a checklist of avoidable security and communications failures. The complaint claims Bumble promised “appropriate and reasonable security measures” (including secured servers and firewalls) in its public-facing privacy policy but allegedly did not adhere to those claims. The complaint further alleges the breach occurred through a phishing attack attributed to the “ShinyHunters” threat actor group, and argues that the fact of a successful phishing compromise suggests inadequate security controls, pointing to measures such as organization-wide two-factor authentication and adequate employee cybersecurity training as known safeguards. The complaint also alleges that Bumble failed to properly secure and encrypt data, failed to implement timely breach detection, and failed to provide prompt and accurate notice.

The takeaway is that privacy policy statements, phishing training failures, encryption decisions, breach detection, and notification practices can quickly become central allegations in a class action when a security incident occurs. Even at this stage, this lawsuit is a reminder that aligning written privacy and security commitments with day-to-day implementation, and documenting those efforts, can be just as important as the technical controls themselves when an incident triggers litigation.

Figure Lending, LLC, which markets itself as America’s #1 non-bank Home Equity Line of Credit lender, has been named in a proposed federal class action following a reported cyber incident that allegedly exposed customer personal information. Mardikian v. Figure Lending, LLC, 3:26-cv-00135 (W.D.N.C. Feb. 19, 2026). The complaint alleges that the company’s systems were improperly accessed and customers’ personally identifiable information was compromised.

The complaint highlights the growing litigation risk created when a company’s public-facing privacy representations are juxtaposed against breach allegations. It quotes Figure Lending’s privacy policy, stating it uses “reasonable precautions, including technical and administrative measures” to protect personal data. The complaint also quotes policy language stating the company does not sell personal data and is “committed to respecting your privacy choices.”

For fintech companies and mortgage providers, this case is a reminder that protecting sensitive financial and identity data must be treated as a core business control, not just an IT function, especially where plaintiffs may frame claims through financial-privacy statutes. The complaint alleges Figure Lending is a financial institution under the Gramm-Leach-Bliley Act (GLBA) and is subject to GLBA-related obligations, including the Safeguards Rule’s requirement for a written information security program with reasonable administrative, technical, and physical safeguards. It also alleges GLBA violations tied to sharing personally identifiable information with a non-affiliated third party without an opt-out notice and a reasonable opportunity to opt out.

The Figure Lending complaint is a reminder that cybersecurity and privacy commitments rise and fall together. When an incident is alleged to stem from a human-layer attack like social engineering, attention often shifts beyond technical controls to governance, consumer communications, and whether an organization’s public privacy statements align with its security posture. For lenders and fintechs handling sensitive financial and identity data, that alignment (and the ability to provide timely, legally compliant notice) can be a consequential component of incident response.

DJI, the world’s leading manufacturer of civilian drones, has escalated its dispute with the Federal Communications Commission (FCC) by filing an appeal in the Ninth Circuit after the FCC placed many DJI products on its “covered list,” which the FCC uses for telecommunications equipment it deems an unacceptable national security risk. DJI says the decision effectively prevents it from “marketing, selling, and importing new products into the United States,” and that the order covers DJI communications and video surveillance equipment. Rather than waiting for the FCC to decide whether it will reconsider, DJI says it is appealing the decision now to protect its business and all of the consumers and businesses that rely on its products.

For drone users and manufacturers, the practical takeaway is that a regulatory designation can become an immediate operational and commercial disruption, even before the courts resolve the underlying legal issues. DJI alleges the FCC “exceeded its statutory authority, failed to observe statutorily required procedures, and violated the Fifth Amendment,” and also claims that the FCC has used the ruling “as a justification” to restrict DJI’s ability to import not only covered products but even other existing and new products “outside the scope of the ruling.” DJI’s earlier reconsideration petition also described the FCC’s approach as unprecedented, asserting that “for the first time,” the bureau added an “entire category of products” rather than particular products produced by particular entities. Right now, drone operators can map their fleet and supply-chain exposure by identifying which platforms, payloads, and support items are affected; review contracts, customer commitments, and lead times in case availability tightens (which could include replacement parts); consider contingency procurement and platform diversification where feasible; and prepare clear customer and internal communications so operational teams know what can be bought, serviced, and deployed while the appeal proceeds.

The fight between creators and big tech has mostly been focused on the alleged copyright infringement of using creative works in AI training data. However, trademark law might be the next battleground as creators look for additional ways to protect their work from AI-related misuse. Actor Matthew McConaughey recently received U.S. Registration No. 8,070,191 for his famous line, “Alright, Alright, Alright,” delivered in the 1993 movie Dazed and Confused. The registration is for a sensory mark that protects the distinct intonation he made famous in the film.

Trademarks protect a brand’s identity. Unlike copyright infringement, trademark infringement generally turns on whether a mark is used in commerce in a way that is likely to cause consumer confusion. That means using a trademarked phrase as an AI input, or having it appear in an AI output, is not automatically trademark infringement. Even so, trademarks can be a useful tool for artists looking to protect their identity from deepfakes and other AI-generated content that improperly imitates them. In that sense, trademark law may offer creators another path to push back against AI developers or users who try to profit from an artist’s identity without permission.