Google recently issued its June Android Security Bulletin, which patches 34 vulnerabilities, all of which Google designates as high-severity defects. The most serious flaw, found in the Android System, could allow threat actors “to achieve local escalation of privilege with no additional privileges required.” The bulletin contains two security patch levels so that “Android partners have the flexibility to fix a subset of vulnerabilities that are similar across all Android devices more quickly.”

The bulletin provides common questions and answers for Android users, including how to determine if your device is updated, why it has two security patch levels, how to read its entries, and how to update Google Play.

Google states, “Android partners are encouraged to fix all issues in this bulletin and use the latest security patch level.”

If you own an Android device, confirm that you have patched these vulnerabilities as soon as possible.

What do a global sportswear giant and a prestigious medical center have in common? Apparently, a shared struggle: defending data breach lawsuits over sensitive personal information compromised through third-party vendors.

This week, Adidas America and the University of Chicago Medical Center found themselves on the receiving end of data breach lawsuits. The plaintiffs say both organizations failed to keep their personal info safe, and now want the courts to step in. According to the complaints, Adidas customer Karim Khowaja and UChicago patients Alta Young and Judy Rintala are calling out the companies for what they claim were lax data protection practices that led to their sensitive personal information falling into the wrong hands. Their key argument? The organizations should have known—and done—better.

Khowaja’s lawsuit alleges that Adidas provided a notification of the data breach that left customers with more questions than answers. Khowaja claims that Adidas did not identify the third-party vendor involved, what data was accessed, or when the breach occurred. Further, Khowaja claims this is not Adidas’ first data security blunder—he points back to a 2018 breach as proof the company should have been more vigilant.

“The more accurate pieces of data an identity thief obtains about a person, the easier it is… to take on the victim’s identity,” Khowaja warns in his complaint.

The same allegations are being directed at the University of Chicago Medical Center. According to Young and Rintala, the hospital didn’t discover the breach until ten months after suspicious activity was first detected—by its financial services vendor, National Recovery Services LLC (NRS). Young’s lawsuit claims the breach affected 38,000 patients, and Rintala’s goes further, alleging that the hospital didn’t encrypt or redact any of the compromised data—leaving names, birth dates, and other sensitive information widely available to cybercriminals. “That ‘utter failure’ will present risks to patients for their respective lifetimes,” Rintala claims.

All three plaintiffs are looking to represent classes of similarly affected individuals and are asking for damages and injunctive relief. Each plaintiff also emphasizes the “real-world” costs of these breaches: time, money, and the emotional stress of trying to prevent identity theft or fraud.

These lawsuits highlight a growing trend: courts being asked to hold companies accountable for third-party vendor breaches. It raises an important question: How far does the responsibility go when it comes to data security? It may be as simple as this: if you use a third-party vendor that has access to or maintains sensitive personal information, there is a known risk. Here, a “known risk” refers to a security vulnerability or threat that a reasonable organization should have been aware of—either through industry standards, past incidents, or internal warnings—and failed to adequately address.

In the UChicago case, Young argues that the medical center knew about the risks of working with external vendors like NRS, especially since the kind of breach that occurred is a common method of attack in healthcare data security:

  • Healthcare is a top target for hackers due to the volume of sensitive personal and financial data. This isn’t new—HIPAA guidance and cybersecurity advisories have warned about it for years.
  • NRS discovered “suspicious activity” ten months prior to informing UChicago.
  • The plaintiffs say this delay, paired with the lack of encryption or redaction, shows UChicago failed to properly vet or monitor its vendor—even though outsourcing does not relieve the hospital of its responsibilities under HIPAA and other regulations.

In Khowaja’s complaint, he makes a similar argument: Adidas previously experienced a breach. So, when it happened again—this time via a third-party customer service provider—he says the company can’t plead ignorance:

  • Adidas “knew or should have known” that outsourcing customer service introduced a risk of exposure.
  • Despite that, Adidas allegedly did not put the necessary safeguards in place to protect customer data or notify affected users with enough information to respond.

Again, the argument isn’t just about the breach itself—it’s about Adidas’ failure to anticipate a risk it had already seen firsthand.

If the courts agree that failure to safeguard against a “known risk” is enough to trigger liability, we could see more plaintiffs lining up in similar cases across industries for incidents caused by third-party vendors.

New York Attorney General Letitia James and 13 other Attorneys General filed suit in October 2024 against TikTok “for misleading the public about the safety of its platform and harming young people’s mental health.” TikTok moved to dismiss the case and, on May 28, 2025, New York Supreme Court Judge Anar Rathod Patel denied the motion.

The denial of TikTok’s motion to dismiss allows the New York case to move forward and may serve as persuasive authority in the more than a dozen other cases nationwide in which TikTok is seeking dismissal of similar claims by state Attorneys General.

Video Privacy Protection Act (VPPA) class action lawsuits have been on the rise, and the owner of The Onion, a popular satire site, is the subject of a recent one. On May 16, 2025, a plaintiff filed suit against Global Tetrahedron, LLC, the owner of The Onion, alleging that the defendant installed the Meta Pixel on its website, which hosts videos for streaming, without users’ knowledge.

The plaintiff alleges that, unbeknownst to consumers, the Meta Pixel tracks users’ video consumption habits “to build profiles on consumers and deliver targeted advertisements to them.” According to the complaint, the Meta Pixel is configured to collect HTTP headers, which contain IP addresses, information about the user’s web browser, page location, and document referrer (the URL of the previous document or page that loaded the current one). Since the Meta Pixel is reportedly attached to a user’s browser, “if the user accesses Facebook.com through their Safari browser, then moves to theonion.com after leaving Facebook, the Meta Pixel will continue to track that user’s activity on that browser.” The complaint also alleges that the Meta Pixel collects a Meta-specific value called the c_user cookie, which is a unique user ID for users logged into Facebook. By combining these data points, the complaint asserts, The Onion transmits personally identifiable information to Meta.
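To make the alleged data flow concrete, the sketch below models the kind of request a Meta Pixel typically fires when a page loads. The endpoint and parameter names (id, ev, dl, rl) reflect commonly observed Pixel query strings, and every value shown is hypothetical; none of this is quoted from the complaint.

```typescript
// Illustrative model of a Meta Pixel "PageView" request (all values hypothetical).
const pixelRequest = new URL("https://www.facebook.com/tr");
pixelRequest.searchParams.set("id", "1234567890");  // the site's Pixel ID (placeholder)
pixelRequest.searchParams.set("ev", "PageView");    // event type being reported
pixelRequest.searchParams.set("dl", "https://theonion.com/example-video/"); // page location
pixelRequest.searchParams.set("rl", "https://www.facebook.com/");           // document referrer

// Because the request goes to facebook.com, the browser attaches any Facebook
// cookies it already holds, including the c_user ID of a logged-in user,
// along with HTTP headers such as the IP address and user agent.
console.log(pixelRequest.toString());
```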

In a novel approach, the complaint uses screenshots of the plaintiff’s ChatGPT conversation to demonstrate how ChatGPT can help an ordinary user decipher what information is allegedly being disclosed to Meta through The Onion’s website. According to the screenshots, when the plaintiff asked ChatGPT how to check whether a website was disclosing their browsing activity to Meta, the plaintiff was directed to use developer tools to inspect the page’s network traffic. Every major internet browser includes integrated developer tools, which allow developers to analyze network traffic, measure performance, and make temporary changes to a page. Any website user can open these developer tools, as ChatGPT directed the plaintiff to do.

Following ChatGPT’s instructions, the plaintiff reportedly opened the developer tools on The Onion’s website. Then, the plaintiff uploaded a screenshot of the developer tools output to ChatGPT. ChatGPT analyzed the request in the screenshot and broke down the parameters contained within it, including Pixel ID, Page Views, URL, and Facebook cookie ID. Many VPPA complaints in recent months have described the technical processes behind tracking technologies, but by using ChatGPT in this complaint, the plaintiff underscores how such large language model tools can help an average website user decipher seemingly complex technical concepts and better understand the data flows from tracking technologies.
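For readers who want to try a simpler version of this check themselves, a similar inspection can be run from the developer tools console rather than the Network tab. This is a minimal sketch under the assumption that requests to facebook.com or facebook.net domains indicate Pixel activity; it is not the method described in the complaint.

```typescript
// Minimal sketch: list the URLs this page has requested from Meta-owned domains.
// Paste into the browser's developer tools console on any page you want to check.
const metaRequests = performance
  .getEntriesByType("resource")
  .map((entry) => entry.name) // entry.name is the full URL of the requested resource
  .filter((url) => url.includes("facebook.com") || url.includes("facebook.net"));

console.log(`Requests to Meta-owned domains: ${metaRequests.length}`);
metaRequests.forEach((url) => console.log(url));
```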

The case reflects a broader trend in VPPA litigation, in which plaintiffs are challenging the use of third-party tracking technologies on sites that offer any form of video content. As VPPA litigation evolves, this case could peel back another layer of risk for publishers across industries providing video streaming content.

It’s 2025, and somehow, we’re still dealing with lawsuits over a law born in the era of pen registers and rotary phones. That law, the California Invasion of Privacy Act (CIPA), a decades-old statute that has suddenly found new life in the digital age, could put your company in the legal crosshairs based on its website and tracking technology.

Over the past year, we’ve seen a sharp uptick in demand letters and litigation targeting businesses over alleged privacy violations tied to digital website tools like:

  • Chatbots and live chat features
  • Website analytics tools
  • Ad campaign tracking (Meta Pixel)
  • Social media plugins and integrations

In many of these cases, plaintiffs allege that businesses are “eavesdropping” on users, all under the theory that using these technologies without their consent violates CIPA.

Enacted in 1967, CIPA outlawed wiretapping and pen registers, tools used to monitor telephone calls and communication metadata.

Fast forward to today: plaintiffs are arguing that third-party tracking cookies, IP address collection, session replays, and chatbots serve as modern-day equivalents of those old-school surveillance devices. And, surprisingly, some courts are letting these arguments move forward.

What can you do to avoid these types of claims? First, ask yourself some basic questions:

  • Do you operate a website or mobile app?
    • If yes, you’re already in the conversation. These are the primary platforms where privacy issues pop up.
  • Do you use a chatbot or live chat feature?
    • If you’ve installed any customer support chat tool, even through a third-party vendor, you could be logging and transmitting data that CIPA litigants say violates user privacy.
  • Are you using web analytics, ad tracking, or social media plugins?
    • These tools often track user behavior via cookies, beacons, or IP logs, which are now being challenged as CIPA violations.
  • Does your website have a privacy policy?
    • If so, is it up-to-date and accurate? A vague or outdated policy can hurt you more than it helps.
  • Do you have a cookie notice and consent mechanism?
    • Simply saying “we use cookies” isn’t enough anymore. Laws increasingly require clear disclosures and opt-in mechanisms, especially in California and Europe.
  • Does your chatbot have a disclaimer?
    • Users should know what data is collected via chat and how it’s used. The absence of a disclaimer could be a significant risk.

What actions can you take?

  1. Update your privacy policy: make sure it reflects all current data practices, including chat features, tracking tools, and any third-party sharing, and that it is compliant with applicable consumer privacy rights laws.
  2. Give notice and get consent: for tools like analytics and targeted advertising, disclosure is key. In some jurisdictions, prior consent is required before deploying any tracking technology (a minimal consent-gating sketch follows this list).
  3. Review your chat tools: add a disclaimer or notification to users when they engage with chat features, explaining how their data is handled.
  4. Rethink your tech stack: not all third-party vendors are created equal. Vet your service providers, understand their data practices, and ensure contracts include privacy and indemnification clauses.
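As a practical illustration of item 2, below is a minimal consent-gating sketch in TypeScript. The hasUserConsented and loadTrackingScript helpers, the consent cookie format, and the script URL are hypothetical placeholders rather than any particular consent management platform’s API.

```typescript
// Hypothetical consent check; in practice this would read your CMP's stored preference.
function hasUserConsented(category: "analytics" | "advertising"): boolean {
  return document.cookie.includes(`consent_${category}=true`); // placeholder cookie format
}

// Injects a third-party tag only after consent has been confirmed.
function loadTrackingScript(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

// Deploy analytics or ad-tracking tags only after an affirmative opt-in.
if (hasUserConsented("analytics")) {
  loadTrackingScript("https://example.com/analytics.js"); // placeholder URL
}
```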

These CIPA (or trap and trace) lawsuits are not fringe cases anymore. They’re part of a broader wave of privacy litigation targeting the ad tech ecosystem. The claims may sound like a stretch, but courts are entertaining them. Businesses that don’t stay ahead of these developments may find themselves paying to settle lawsuits they didn’t even see coming.

If your business touches user data online, you can’t afford to ignore these issues. A proactive approach to privacy and transparency is no longer optional.

On June 3, 2025, a bipartisan group of 260 state lawmakers sent a letter to the U.S. House of Representatives and the U.S. Senate expressing “strong opposition to the provision in Subtitle C, Part 2 of the tax and budget reconciliation bill, which would undermine ongoing work in the states to address the impact of artificial intelligence (AI).”

The letter was in response to a provision in the proposed tax and budget reconciliation bill that seeks to legislate a ten-year freeze on any state or local regulation of AI, in effect preempting states from enacting any laws that would regulate AI.

According to the letter, the preemption “would cut short democratic discussion of AI policy in the states with a sweeping moratorium that threatens to halt a broad array of laws and restrict policymakers from responding to emerging issues.” Moreover,

[t]he sweeping federal preemption provision in Congress’s reconciliation bill would also overreach to halt a broad array of laws elected officials have already passed to address pressing digital issues. Over the past several years, states across the country have enacted AI-related laws increasing consumer transparency, setting rules for the government acquisition of new technology, protecting patients in our healthcare system, and defending artists and creators. State legislators have done thoughtful work to protect constituents against some of the most obvious and egregious harms of AI that the public is facing in real time. A federal moratorium on AI policy threatens to wipe out these laws and a range of legislation, impacting more than just AI development and leaving constituents across the country vulnerable to harm.

States have always been at the forefront of protecting consumers’ rights and interests. A proposed ten-year ban on states’ ability to determine what their citizens deem appropriate for their protection when it comes to AI is paternalistic and contradicts the notion of shrinking the federal government so that states can legislate for their own citizens. It is impossible to predict how rapidly AI will develop in the next ten years. Hamstringing state legislatures from addressing AI in any way for a decade is not prudent. If you agree, call your members of Congress and urge them to “reject any provisions that preempt state and local AI legislation in this year’s reconciliation package.”

In a surprising move, China-based DJI, the world’s largest drone manufacturer, is not flinching at the prospect of tighter U.S. restrictions on Chinese drone companies. In fact, they’re embracing it.

The Trump administration is currently finalizing executive orders that could seriously shake up the commercial drone landscape in the U.S. These potential measures would require companies like DJI, and its competitor Autel, to undergo national security reviews before selling new drone models in the U.S.

You might think DJI would be sounding the alarm—but instead, they’re rolling out the welcome mat. “DJI welcomes and embraces any opportunities to demonstrate our privacy controls and security features,” explained a company spokesperson.

The company has been submitting its systems for independent security audits since 2017. Evaluations from heavyweights like Booz Allen Hamilton, FTI Consulting, and even U.S. government bodies like the Department of the Interior and Idaho National Laboratory have come to a consistent conclusion: DJI’s drones are secure, and there’s no evidence of data being transmitted to unauthorized entities—including the Chinese government.

The legal spotlight is now on Section 1709 of the FY2025 National Defense Authorization Act. This provision requires a designated national security agency to determine—within a year—whether DJI’s equipment presents an “unacceptable risk” to U.S. national security.

If that assessment isn’t completed within the deadline, DJI could end up on the FCC’s Covered List by default, effectively barring them from launching new products in the U.S.

So, yes, the stakes are high. But DJI seems ready to bet on its track record.

In response to longstanding concerns over data privacy and national security, DJI has introduced several robust features aimed at giving control back to users:

  • Local Data Mode: Operates like an air-gapped device—no internet, no data leakage.
  • Default Data Settings: No automatic syncing of photos, flight logs, or videos.
  • Third-party software compatibility: Users can fly DJI drones and analyze data using U.S.-based software, without touching DJI’s ecosystem.
  • DJI no longer allows U.S. users to sync flight records to its servers.

“Unlike our competitors, we do not force people to use our software,” a DJI spokesperson pointed out.

While the upcoming executive orders are designed to boost domestic drone production and address national security risks, DJI is using the moment to double down on its commitment to transparency. Their message is clear: judge us by the tech, not the passport.

Whether that’s enough to maintain access to the U.S. market will depend on how these reviews play out—and how political winds blow in the coming months.

But one thing’s for sure: DJI isn’t backing down. It’s gearing up for inspection—and maybe even looking forward to it.

Stay tuned as we track legal developments on this issue and how it could reshape the drone industry in the U.S.

Smishing schemes involving Departments of Motor Vehicles nationwide have increased. Scammers are sending SMS text messages falsely claiming to be from the DMV that “are designed to deceive recipients into clicking malicious links and submitting personal and/or financial information under false threats of license suspension, fines and credit score or legal penalties.”

The Rhode Island Division of Motor Vehicles (RIDMV) issued an alert to the public indicating that one of the smishing messages sent to drivers was a “final notice” from the DMV stating that if the driver does not pay an outstanding traffic violation, enforcement penalties, including license suspension, will begin imminently. The DMV warned drivers that the text message cites “fictitious legal code and link to fraudulent websites.”

The DMV warned drivers that the messages are not from the DMV and that it does “NOT send payment demands or threats via text message, and we strongly urge the public to avoid clicking on any suspicious links or engaging with these messages. Clicking any links may expose individuals to identity theft, malware, or financial fraud.”

The RIDMV provides these tips to avoid smishing scams:

  • Do NOT click on any links or reply to suspicious text messages.
  • Do NOT provide personal or financial information.
  • Be aware that DMV-related information is sent via mail, not text messages.
  • Report fraudulent messages to the FBI’s Internet Crime Complaint Center (www.ic3.gov) or forward them to 7726 (SPAM) to notify your mobile provider.
  • Report the message to the FTC.

These tips apply to all drivers. State DMVs are not sending these text messages, so if you receive one, it is almost certainly a scam. Do not be lured into clicking links in text messages out of fear of license suspension or other DMV action—it’s a smishing scam.

Google sent out a warning that the cybercriminal group Scattered Spider is targeting U.S.-based retailers. Scattered Spider is believed to have been responsible for the recent attack on Marks & Spencer in the U.K. A security researcher at Google has posited that Scattered Spider concentrates attacks on one industry at a time, predicts that it will continue to target the retail sector, and warns that “US retailers should take note. These actors are aggressive, creative, and particularly effective at circumventing mature security programs.”

Mandiant issued a threat intelligence report on May 6, 2025, highlighting Scattered Spider’s social engineering methods and “brazen communication with victims.” It has seen Scattered Spider target specific sectors, such as financial services and food services. Recently, Scattered Spider has been seen deploying DragonForce ransomware. The operators of DragonForce have claimed control of RansomHub.

Mandiant has published recommendations on proactive hardening against the tactics used by Scattered Spider, including prioritizing:

  • Identity
  • Endpoints
  • Applications and Resources
  • Network Infrastructure
  • Monitoring / Detections

Although retailers should be on high alert with these warnings, all industries would do well to review Mandiant’s recommendations, as they are timely and effective.

This post was co-authored by Summer Legal Intern Mark Abou Naoum. Mark is not admitted to practice law.

This week, the U.S. District Court for the Northern District of California ruled in favor of children’s clothing retailer Janie & Jack, which sought to enjoin over 2,400 individual arbitration claims resulting from alleged violations of the California Invasion of Privacy Act (CIPA). Now, Janie & Jack will confront a single privacy class action suit as opposed to the more than 2,400 individual arbitration claims by its website visitors.

The parties notified the court of their agreement not to pursue arbitration but rather to proceed through a consolidated class action. Janie & Jack then voluntarily dismissed its lawsuit, which it had filed in an attempt to avert the numerous individual claims by consumers.

Website visitors accused Janie & Jack of violating CIPA and the federal Wiretap Act through its website’s information gathering and tracking practices (also known as trap and trace claims). Janie & Jack argues that such claims are inadequate because they lack allegations that the consumers created any accounts or conducted any transactions on the website, or that Janie & Jack breached any of its online terms.

Further, although Janie & Jack’s website terms include an arbitration clause, it claimed that the claimants never assented to the contract.

In its response, the retailer emphasized its intent to prevent the growing use of arbitration agreements as “weapons” by plaintiffs’ attorneys, a practice it says thwarts their intended purpose of providing an efficient, effective, and timely resolution of claims.

This case highlights a common practice: thousands of individuals, all represented by the same counsel, simultaneously file, or threaten to file, arbitration demands with nearly identical claims.

These allegations mark yet another instance of the plaintiffs’ bar’s growing push for “trap and trace” claims. Plaintiffs leverage existing wiretap laws (particularly CIPA in California) to argue that common online tracking technologies, such as cookies, pixels, and website analytics tools, essentially function as trap and trace devices, allowing them to file complaints against companies that collect user data without proper consent, even though these statutes were originally designed for traditional phone lines, not the internet. The theory opens up a large pool of potential plaintiffs and potentially significant damages.

If you haven’t heard it enough, here it is again: NOW is the time to assess your website’s online trackers and update your cookie consent management platform, website privacy policy, and consumer data collection processes.