The SafePay ransomware group has been active since fall 2024 and has increased its activity this spring and summer. According to NCC Group, SafePay hit the most victims of any threat actor in May 2025—it is linked to 248 victims to date, according to Ransomware.live and RansomFeed.

The group uses common tactics, including social engineering via telephone calls and spam. One SafePay technique worth flagging for employees involves sending “a ton of spam, and at the same time, when they are panicking and raising concerns, a call comes from ‘the company’s IT department’ via Microsoft Teams.” Posing as a third-party IT department, the threat actors request remote access, then “drop a PowerShell script and often live on the network for up to a week to investigate and another week to slowly move towards exploitation.”

SafePay employs a double extortion model—exfiltrating files that it threatens to leak, then deploying ransomware to disrupt operations and pressure victims to pay. The group targets private companies in the financial, legal, insurance, health care, and critical services sectors, and is pivoting to the public sector as well.

On June 30, 2025, Block, Inc.—an electronic financial services company that operates Cash App—entered into a proposed settlement with customers regarding unsolicited text messages from the company. The dispute stemmed from a marketing campaign that allowed Cash App users to refer their contacts to use the application.

Cash App allowed users to click an “Invite Friends” button to select phone contacts to invite. Once a contact was selected, the user could then send a pre-generated text message from Cash App with the referral information and an individualized hyperlink for the recipient to create a Cash App account and also receive their referral credit.

Plaintiffs alleged that they received such unsolicited invitation text messages via Cash App. The plaintiffs’ lawsuit was filed under Washington’s Consumer Electronic Mail Act (CEMA). Under the relevant provision of CEMA, it is unlawful for a person (including a business entity) to “initiate or assist in the transmission of an electronic commercial text message to a telephone number assigned to a Washington resident for cellular telephone or pager service.” Wash. Rev. Code § 19.190.060. The plaintiffs alleged that Block substantially assists in users sending impermissible text messages by financially incentivizing referral text messages and by providing pre-composed messages with unique user-specific referral links. According to one plaintiff’s declaration, Block sent “Cash App Invite Friends” text messages to approximately 1,975,187 unique phone numbers with Washington area codes.

In a May 2024 motion to dismiss, Block argued that the court should use the Telephone Consumer Protection Act (TCPA) to determine whether Block’s involvement in sending the messages was “substantial.” The court ruled, however, that TCPA was inapplicable here because it does not impose liability for assisting with a text message. In contrast, CEMA does impose liability for assisting in the transmission of a message, as plaintiffs alleged in this case.

Under the $12.5 million proposed settlement, each class action claimant will receive between $88 and $147. Although companies may avoid TCPA liability for aiding users in sending text messages, CEMA casts a broader shadow: even a minor role in the transmission of text messages can lead to major liability. A company need not physically press the send button to be on the hook under CEMA.

If you’ve ever browsed Etsy looking for a handmade candle or a quirky T-shirt, you might have unknowingly shared more than just your shopping preferences. A new lawsuit filed last week in California claims that Etsy has been quietly allowing third-party companies like Google, Meta, and Microsoft to collect personal data from users through website tools known as pixel trackers, without clear consent.

Pixel trackers are tiny, often invisible bits of code embedded in websites. They kick into gear when a page loads or when you click something, and then they quietly send information to outside companies: your IP address, browser details, how long you stayed on a page, and what you looked at.
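To make the mechanics concrete, here is a minimal, hypothetical sketch of the server side of a tracking pixel in Python. Nothing here reflects Etsy's or any ad vendor's actual implementation; the field names and values are illustrative. The point is how much identifying metadata arrives with an ordinary request for a 1x1 image:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PixelHit:
    ip: str
    user_agent: str
    page: str
    ts: str

def record_pixel_hit(headers: dict, client_ip: str) -> PixelHit:
    # A request for an invisible 1x1 GIF carries identifying metadata
    # "for free": the client's IP address, a browser fingerprint
    # (User-Agent), and the page that embedded the pixel (Referer).
    return PixelHit(
        ip=client_ip,
        user_agent=headers.get("User-Agent", ""),
        page=headers.get("Referer", ""),
        ts=datetime.now(timezone.utc).isoformat(),
    )

hit = record_pixel_hit(
    {"User-Agent": "Mozilla/5.0", "Referer": "https://example.com/candles"},
    "203.0.113.7",
)
print(hit.page)  # https://example.com/candles
```

Note that the user never submits a form or clicks "accept": simply loading the page triggers the image request and hands this metadata to whichever third party hosts the pixel.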

Plaintiff Austin White filed the suit, claiming that Etsy’s use of these trackers violates the California Invasion of Privacy Act (CIPA) and other state laws. According to the lawsuit, users never agreed to this kind of tracking on the website.

White is asking the court to stop Etsy from using these trackers and to award damages to affected users. If approved as a class action, the case could potentially include all Etsy users in California whose browsing was tracked in this way.

This lawsuit is part of the growing wave of legal actions filed under CIPA aimed at cracking down on digital surveillance.

This post was authored by William Ollayos, Summer Associate. William is not admitted to practice law.

On June 27, 2025, the U.S. Supreme Court upheld a Texas law requiring pornography websites to verify users’ ages through government-issued ID. The 6–3 decision in Free Speech Coalition v. Paxton marks a significant shift in First Amendment jurisprudence and opens the door for expanded digital age-verification laws nationwide.

While framed as a child-protection measure, the Court’s ruling raises serious questions about adult privacy, free speech, and the disproportionate impact such laws may have on LGBTQ+ individuals – particularly minors and young adults who rely on the internet for community, health information, and identity development.

The Law and the Court’s Holding

Texas statute H.B. 1181 requires commercial websites hosting content that is at least one-third “sexual material harmful to minors” to implement age-verification systems before granting access. Users must upload a form of government ID or go through a third-party credentialing service. The law also mandates that sites display government-authored “health warnings” about pornography.

The Supreme Court upheld the law under intermediate scrutiny, with the majority opinion (authored by Justice Thomas) emphasizing the state’s “traditional power” to protect minors from sexually explicit content. The Court concluded that the law imposes only an “incidental burden” on adults’ speech and likened online ID checks to showing proof of age when entering a movie theater or liquor store.

In dissent, Justice Kagan criticized the majority for creating an exception to the First Amendment. She argued that the law is content-based and should have triggered strict scrutiny, especially given its chilling effect on lawful adult speech.

Privacy Risks of Mandatory Age Verification

Unlike in-person age checks, online verification often requires uploading sensitive personal data—such as driver’s license scans or facial recognition inputs—to websites or third-party services. This poses several risks, including:

  • Loss of anonymity: Users engaging in constitutionally protected speech (including viewing legal erotic or educational material) must now disclose their identity, or at least identifying data, to proceed.
  • Data exposure: Even when statutes prohibit long-term retention, there is no practical way for users to verify how their data is stored, shared, or secured. A breach or subpoena could expose sensitive browsing habits.
  • Chilling effect: The fear of being identified or tracked may deter people – particularly those in stigmatized communities – from accessing lawful content.

These risks are not hypothetical. Cybersecurity experts warn that databases of adult-content visitors are prime targets for hacking, blackmail, or misuse by data brokers. For LGBTQ+ users, who may not be “out” to family or employers, these risks are magnified.

Disproportionate Impact on LGBTQ+ Users

Age-verification laws like Texas’s may have particularly harsh effects on LGBTQ+ individuals, especially minors in unsupportive households. For many, the internet provides a rare lifeline—to peer communities, affirming content, sexual health education, or information about identity.

LGBTQ+ teens are statistically more likely to face bullying, isolation, and mental health challenges. Online platforms and forums can be critical sources of support. However, under an age-verification mandate:

  • Minors cannot legally access content deemed “harmful,” even if that content is affirming or educational.
  • Uploading an ID may be impossible or unsafe, especially for closeted teens.
  • Some laws may define LGBTQ+ content as “sexual” or “obscene,” sweeping non-explicit material into the vague standards of an age-verification mandate.

Some states (Florida, through its 2023 “Don’t Say Gay” bill, for example) have attempted to classify LGBTQ+ health or relationship content as inappropriate for youth, raising the concern that age-verification regimes will become a backdoor to censorship.

Practical Takeaways

For platforms and content hosts:

  • Assess whether your content may trigger age-verification requirements in any of the growing number of states enacting similar laws.
  • Consider implementing privacy-preserving verification solutions (such as anonymized age tokens or minimal-data processes).
  • Review vendor agreements and user disclosures for transparency and data minimization.

For compliance teams:

  • Monitor evolving legal developments – state-by-state requirements could vary widely and may conflict.
  • Treat age-verification data as high-risk PII. Establish and enforce strict retention, access, and deletion protocols.
  • Coordinate with product and legal teams to evaluate whether geoblocking, content filtering, or policy adjustments are appropriate.

For advocates and digital rights professionals:

  • Watch for overbroad applications that disproportionately affect marginalized users.
  • Consider litigation or legislative efforts to build in privacy safeguards, transparency, and carveouts for educational or identity-based content.
  • Continue educating the public – especially youth and LGBTQ+ users – on digital privacy tools and rights.

Looking Ahead

The Free Speech Coalition decision is likely to accelerate legislative momentum behind age-verification laws, with nearly two dozen states already proposing or passing similar statutes. However, as these laws spread, so do the stakes for adult privacy, youth access to critical information, and the future of anonymity in digital spaces.

Robust protections for children need not come at the expense of privacy, identity, or speech. As compliance professionals and policymakers adapt to this new terrain, the challenge will be clear: safeguard minors without silencing or exposing the very communities most in need of digital safety and expression.

The Office for Civil Rights (OCR) recently entered into two settlements resolving allegations that HIPAA-regulated entities failed to conduct security risk assessments. The settlements indicate that OCR will continue to aggressively pursue potential violations of the Health Insurance Portability and Accountability Act (HIPAA), particularly failures to conduct risk assessments.

Deer Oaks

On July 7, 2025, OCR announced a settlement with Deer Oaks, a behavioral health provider, for alleged violations of HIPAA. The settlement resolves OCR’s allegations that Deer Oaks “failed to conduct an accurate and thorough risk analysis to determine the potential risks and vulnerabilities to the ePHI that it held.”

OCR commenced an investigation into Deer Oaks following a complaint that it had made patient names, dates of birth, patient identification numbers, facilities, and diagnoses publicly accessible online by posting patient discharge summaries. OCR confirmed that the discharge summaries of 35 individuals were publicly available on the internet from at least December 2021 until May 19, 2023.

OCR expanded its investigation following a second incident in which Deer Oaks suffered a breach through a compromised account. That incident resulted in exfiltration of data and an extortion threat that electronic protected health information (ePHI) would be posted on the dark web. Deer Oaks subsequently notified the Department of Health and Human Services and 171,871 affected individuals.

Based on its investigation into both incidents, OCR found that Deer Oaks failed to conduct an accurate and thorough risk assessment. The settlement includes a $225,000 payment and a corrective action plan, which OCR will monitor for two years and which requires Deer Oaks to:

  • Review and update its risk analysis;
  • Develop and implement a risk management plan to address and mitigate security risks and vulnerabilities identified in its risk analysis;
  • Develop, maintain, and revise as necessary, certain written policies and procedures to comply with the HIPAA Rules; and
  • Provide annual training for each workforce member who has access to PHI.

Comstar, LLC

On May 30, 2025, OCR announced its settlement with Comstar, LLC, a business associate providing billing and collection services to ambulance companies, for allegations that it had failed to conduct a security risk assessment.

The investigation was initiated after Comstar notified OCR that it was the victim of a ransomware attack that encrypted its network servers and affected the ePHI of approximately 585,621 individuals. The data affected by the ransomware attack included medical assessments and medication administration information. OCR’s investigation “determined that Comstar failed to conduct an accurate and thorough risk analysis to determine the potential risks and vulnerabilities to the ePHI that it holds.”

Comstar agreed to pay OCR $75,000 and implement a corrective action plan, including its agreement to:

  • Conduct a comprehensive and thorough analysis of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of ePHI that Comstar holds;
  • Develop a risk management plan to address and mitigate security risks and vulnerabilities identified in the risk analysis;
  • Review and revise, as necessary, its written policies and procedures to comply with the HIPAA Privacy, Security, and Breach Notification Rules; and
  • Train its workforce members who have access to PHI on HIPAA policies and procedures.

The requirements OCR imposed on these two entities provide guidance to covered entities and business associates alike. HIPAA requires an annual risk assessment, which has been, and will continue to be, an enforcement priority for OCR. Entities must also develop a risk management plan that addresses the gaps found in the risk assessment, including remediating security gaps, updating policies and procedures to manage the risks, and training employees annually. These HIPAA requirements are not new. Following them, and heeding OCR’s clear guidance, will reduce the risk of an OCR enforcement action and a potential monetary settlement.

Last week, we outlined the building blocks for a strong IG program. Now that you’ve laid the groundwork, it’s time to bring your IG program to life. The ARMA IGIM framework emphasizes operational execution in three key areas:

  1. Procedural Framework
  2. Capabilities
  3. Information Lifecycle

These domains are where your framework tangibly interacts with AI systems, ensuring tools like machine learning models work with clean, structured data.

1. Procedural Framework

Your Procedural Framework establishes consistent policies, roles, and accountability measures. For AI, having standardized processes ensures that models produce reliable outputs.

Key for AI Adoption:
Without uniform procedures, AI systems can misinterpret data. For example, inconsistent naming conventions in datasets can skew analytics or predictions.

Actionable Tip: Create a policy requiring metadata tagging for all incoming data to improve accessibility for AI models.
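As a hedged illustration of that tip, the sketch below tags an incoming record with minimal metadata at ingestion. The field names (`source`, `owner`, `ingested_at`, `sha256`) are illustrative assumptions, not a prescribed schema; the idea is simply that every record arrives with enough context for downstream AI pipelines to filter, deduplicate, and audit it:

```python
import hashlib
from datetime import datetime, timezone

def tag_record(payload: bytes, source: str, data_owner: str) -> dict:
    """Wrap an incoming record with metadata at ingestion time.

    Field names are illustrative, not a prescribed schema.
    """
    return {
        "payload": payload,
        "meta": {
            "source": source,           # originating system
            "owner": data_owner,        # accountable business unit
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            # A content hash supports deduplication and integrity checks.
            "sha256": hashlib.sha256(payload).hexdigest(),
        },
    }

rec = tag_record(b"order #123", source="ecommerce", data_owner="sales")
print(rec["meta"]["source"])  # ecommerce
```

Enforcing a tagging step like this at the point of ingestion is far cheaper than retrofitting metadata onto an existing data lake.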

2. Capabilities

Capabilities refer to the tools and technologies that power your IG program. AI tools are only as good as the systems they connect with.

Key for AI Adoption:
Role-based access controls prevent sensitive data from being used irresponsibly in AI training, while metadata management enhances the searchability of training datasets.

Example: A retail company equipped its e-commerce platform with AI product recommendations. By integrating IGIM-driven policies on access control and metadata, they ensured only accurate, permissible data informed the algorithms.
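The role-based gating idea above can be sketched in a few lines. This is a minimal illustration under assumed labels and roles (the `ml-training` role and the sensitivity tiers are hypothetical, not a standard), showing how only permissibly labeled records reach a training set:

```python
# Hypothetical role-based gate for AI training data: only records whose
# sensitivity label is permitted for the "ml-training" role are included.
ROLE_PERMISSIONS = {
    "ml-training": {"public", "internal"},
    "analyst": {"public", "internal", "confidential"},
}

def allowed_for_training(record: dict, role: str = "ml-training") -> bool:
    # Unknown roles get an empty permission set, so they see nothing.
    return record.get("sensitivity") in ROLE_PERMISSIONS.get(role, set())

records = [
    {"id": 1, "sensitivity": "public"},
    {"id": 2, "sensitivity": "restricted"},  # e.g., customer PII
    {"id": 3, "sensitivity": "internal"},
]
training_set = [r for r in records if allowed_for_training(r)]
print([r["id"] for r in training_set])  # [1, 3]
```

The design choice worth noting: the gate defaults to deny (an unlabeled record or unknown role yields no access), which is the safer posture for training pipelines.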

3. Information Lifecycle

AI relies on data that evolves through its lifecycle—from creation to disposition. The Information Lifecycle ensures that outdated or incorrect data doesn’t compromise AI tools.

Key for AI Adoption:
By defining retention schedules, organizations ensure AI models are trained on relevant data, reducing errors and increasing trust in outputs.
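A retention check of this kind can sit directly in front of a training pipeline. The sketch below assumes hypothetical retention periods per record type (the seven-year and 90-day figures are illustrative, not legal advice) and drops anything past its window before it can influence a model:

```python
from datetime import date, timedelta

# Illustrative retention periods, keyed by record type.
RETENTION_DAYS = {"transaction": 365 * 7, "web_log": 90}

def within_retention(record: dict, today: date) -> bool:
    """Return True if the record is still inside its retention window."""
    limit = timedelta(days=RETENTION_DAYS[record["type"]])
    return today - record["created"] <= limit

today = date(2025, 7, 1)
records = [
    {"type": "web_log", "created": date(2025, 6, 1)},  # 30 days old: keep
    {"type": "web_log", "created": date(2024, 1, 1)},  # expired: drop
]
fresh = [r for r in records if within_retention(r, today)]
print(len(fresh))  # 1
```

Running this filter at training time, rather than relying on periodic purges alone, ensures a model never silently learns from data the organization has committed to dispose of.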

Next week, we’ll discuss how to sustain your IG program, enabling continuous innovation with AI.

What can you do now? Make sure that your data policies, tools, and lifecycle management strategies are aligned to support your AI-driven initiatives.

The Federal Bureau of Investigation (FBI) recently issued a public service announcement “to inform individuals and businesses about proxy services taking advantage of end of life routers susceptible to vulnerabilities.” When technology reaches its end of life, the manufacturer no longer supports patching the technology, which opens it to vulnerabilities. This has been a long-standing problem—you may remember that Microsoft no longer supports Windows 7, 8, or 8.1, and support for Windows 10 will be terminated on October 14, 2025.

When technology is not updated, it is left open to hacking by threat actors—this includes routers. According to the FBI, “Routers dated 2010 or earlier likely no longer receive software updates issued by the manufacturer and could be compromised by cyber actors exploiting known vulnerabilities.” These routers are being hit with TheMoon malware. “This malware allows cyber actors to install proxies on unsuspecting victim routers and conduct cyber crimes anonymously.”

The FBI recommends individuals and companies take the following precautions:

  • If the router is at its end of life, replace the device with an updated model if possible.
  • Immediately apply any available security patches and/or firmware updates for your devices.
  • Log in to the router’s settings, disable remote management/remote administration, save the change, and reboot the router.
  • Use strong passwords that are unique and random and contain at least 16 but no more than 64 characters. Avoid reusing passwords and disable password hints.
  • If you believe there is suspicious activity on any device, apply any necessary security and firmware updates, change your password, and reboot the router.
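The password guidance in the list above is easy to satisfy programmatically. This is a small sketch (not an FBI-supplied tool) that generates a random password meeting the stated 16-to-64-character criteria using Python's cryptographically secure `secrets` module:

```python
import secrets
import string

def router_password(length: int = 24) -> str:
    """Generate a random password consistent with the FBI's guidance:
    unique, random, and between 16 and 64 characters."""
    if not 16 <= length <= 64:
        raise ValueError("length must be between 16 and 64 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from the OS's CSPRNG, unlike random.choice.
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = router_password()
print(len(pw))  # 24
```

Because each call produces a fresh random string, generating a distinct password per device also satisfies the "avoid reusing passwords" recommendation.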

If you have an old router, now is the time to upgrade it.

On June 30, 2025, the National Security Agency, the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Department of Defense Cyber Crime Center issued a Joint Cybersecurity Information Sheet (CIS) titled “Iranian Cyber Actors May Target Vulnerable U.S. Networks and Entities of Interest,” warning that:

Despite a declared ceasefire and ongoing negotiations towards a permanent solution, Iranian Islamic Revolutionary Guard Corps (IRGC)-affiliated cyber actors—including hacktivists and Iranian government-affiliated actors—may target U.S. devices and networks for near-term cyber operations. These actors have historically targeted poorly secured U.S. networks and internet-connected devices for disruptive cyberattacks, often exploiting targets of opportunity, outdated software, and the use of default or common passwords on internet-connected accounts and devices.

The CIS further notes that “Iranian state-sponsored or affiliated threat actors are likely to significantly increase their Distributed Denial of Service (DDoS) campaigns, and potentially also conduct ransomware attacks.” The CIS notes that Defense Industrial Base companies, “particularly those possessing holdings or relationships with Israeli research and defense firms,” are at particular increased risk.

Therefore, the CIS recommends that “organizations, especially those within U.S. critical infrastructure, [should] remain vigilant for the outlined potential targeted malicious cyber activity.”

The CIS outlines the tools and techniques used to target vulnerable networks and devices, including exploiting unpatched or outdated software and using compromised, default, or common passwords on internet-connected accounts and devices. The threat actors also:

Use techniques such as automated password guessing, cracking password hashes using online resources, and inputting default manufacturer passwords. When specifically targeting operational technology (OT), these malicious cyber actors also use system engineering and diagnostic tools to target entities such as engineering and operator devices, performance and security systems, and vendor and third-party maintenance and monitoring systems.

The CIS provides strategies that entities can deploy to prevent or mitigate the attacks, which are helpful tools to prepare during this time of increased risk.

On June 27, 2025, the Federal Bureau of Investigation (FBI) issued a warning on X to the airline and transportation sectors that the notorious cyber criminal ring Scattered Spider is attacking those sectors.

The warning states:

These actors rely on social engineering techniques, often impersonating employees or contractors to deceive IT help desks into granting access. These techniques frequently involve methods to bypass multi-factor authentication (MFA), such as convincing help desk services to add unauthorized MFA devices to compromised accounts. They target large corporations and their third-party IT providers, which means anyone in the airline ecosystem, including trusted vendors and contractors, could be at risk.

Palo Alto’s Unit 42 and Mandiant have confirmed seeing activity by Scattered Spider in these sectors. Mandiant has said “This means that organizations can take proactive steps like training their help desk staff to enforce robust identity verification processes and deploying phishing-resistant MFA [multi-factor authentication] to defend against these intrusions.”

Mandiant has issued a Hardening Guide specifically for Scattered Spider, which provides helpful information to plan for, prepare for, mitigate, and recover from a Scattered Spider attack. Consider the FBI’s warning, corroborated by Palo Alto Networks and Mandiant, and proactively implement strategies to defend against social engineering attacks.

On July 1, 2025, California Attorney General Rob Bonta announced a settlement with Healthline Media LLC stemming from alleged violations of the state’s consumer privacy law, the California Consumer Privacy Act (CCPA). According to the complaint, Healthline’s privacy practices failed to comply with several core CCPA requirements.

Opt-Out Mechanisms

Under the CCPA, California residents have the right to opt out of the sale or sharing of their personal information for targeted advertising. However, according to the complaint, when the California Attorney General’s office tested the Healthline website’s opt-out mechanisms in 2023, the mechanisms failed to prevent data from being transmitted to third parties. Even after visitors opted out through the site’s cookie preference center, the site allegedly still placed over 100 cookies tied to third-party advertisers.

The Purpose Limitation Principle

The complaint further raises concerns under the CCPA’s purpose limitation principle, which prohibits businesses from using personal data for purposes beyond those for which the consumer provided it and beyond what the consumer would reasonably expect. In this case, the Attorney General argues that users visiting Healthline for medical information did not reasonably expect their health-related activity, such as reading about Crohn’s disease, to be shared with advertisers.

The complaint asserts that Healthline transmitted information including article titles and cookie identifiers to third parties. Healthline’s privacy policy reportedly did not disclose this type of sharing. One investigator for the Attorney General reportedly began receiving ads for Crohn’s disease and IBS-related medications after viewing a Crohn’s disease page. When that same individual later requested his consumer data from a data broker, his profile allegedly included references to Crohn’s disease. The Attorney General suggests that article titles shared with third-party advertisers, particularly those about specific diagnoses, could effectively reveal sensitive health information about an individual, especially when paired with cookie identifiers.

The stipulated judgment in the settlement defines a “diagnosed medical condition article” as “an article with a title or URL that indicates the consumer visiting the article has already been diagnosed with a medical condition.” This language echoes reasoning from the now-invalidated December 2022 Department of Health and Human Services (HHS) Guidance Bulletin, which asserted that IP addresses collected on health-related websites could constitute individually identifiable health information (IIHI). Although a Texas federal district court ruled in June 2024, in American Hospital Association v. Becerra, that HHS exceeded its authority in adopting that definition of IIHI, the California Attorney General’s complaint appears to embrace a similar rationale in its own definition of covered articles under this settlement.

Contractual Provisions

The CCPA also requires that businesses entering into contracts involving the sale or sharing of personal information for targeted advertising must have a written contract in place with the third party. These contracts must list the limited and specific purposes for which the data may be used. According to the complaint, Healthline’s agreements with third-party recipients of advertising data used broad terms like “any business purpose” and “internal use,” which the Attorney General alleges fall short of statutory requirements.

This settlement sends a clear message: digital tracking and ad tech practices must align with CCPA’s evolving interpretation, especially where sensitive information like health data is involved.

Key Takeaways

1. Validate and Monitor Opt-Out Mechanisms: Conduct regular testing of your website’s cookie banners and opt-out tools to confirm that no personal data, including cookie IDs, is transmitted to third parties after users opt out.

2. Re-Evaluate How You Handle Sensitive Web Activity: Treat content consumption related to health, finances, or personal conditions as potentially sensitive, even if the data doesn’t squarely fit under a particular law or regulation. 

3. Review Purpose Limitation Compliance: Align all data collection and sharing practices with what consumers would reasonably expect based on their interaction with your platform.

4. Tighten Contractual Language with Ad Tech Vendors: Make sure that contracts with third parties specify limited, explicit purposes for data use and contain CCPA-required terms, including obligations around deletion, access, and use restrictions.
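For takeaway 1, a real audit would capture traffic from a headless-browser session after exercising the opt-out; the triage step afterward can be as simple as the sketch below. The first-party domain and the cookie names are hypothetical examples, not findings from the Healthline matter:

```python
# Given cookies observed AFTER a user opts out (e.g., exported from a
# headless-browser session), flag any that belong to a domain other than
# the site's own. Domains and cookie names below are illustrative.
FIRST_PARTY = "example-shop.com"

def third_party_cookies(observed: list) -> list:
    """Return the names of cookies not scoped to the first-party domain."""
    return [
        c["name"] for c in observed
        if not c["domain"].lstrip(".").endswith(FIRST_PARTY)
    ]

observed_after_opt_out = [
    {"name": "session_id", "domain": "example-shop.com"},
    {"name": "_fbp", "domain": ".facebook.com"},          # should be absent
    {"name": "_ga", "domain": ".google-analytics.com"},   # should be absent
]
leaks = third_party_cookies(observed_after_opt_out)
print(leaks)  # ['_fbp', '_ga']
```

A non-empty result after opt-out is exactly the pattern the Attorney General's office allegedly found when it tested Healthline's site: the preference center was honored in form but not in effect.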

Conclusion

This enforcement action underscores CCPA’s reach into digital health privacy and ad tech practices. It also signals a continued regulatory interest in treating certain combinations of web activity and metadata as health data, even in the absence of traditional medical records. For businesses—particularly those handling sensitive categories of data—it’s a timely reminder to conduct their “preventative care” for CCPA compliance, including auditing opt-out functionality, drafting accurate privacy policies, and reviewing contracts with vendors for privacy measures.