Global medical device company Medtronic recently confirmed that it was attacked by the threat actor group ShinyHunters. According to Bleeping Computer, Medtronic is “the largest medical device maker in the world by revenue ($33.5 billion) and also develops healthcare technologies and therapies.”

ShinyHunters alleges that it has stolen over nine million Medtronic records containing personal information and “terabytes of internal corporate data.”

Medtronic acknowledged the incident but confirmed that its customers, products, and operations have not been affected, and that “hospital customer networks remain separate from Medtronic IT networks and are secured and managed by customers’ IT teams.”

Medtronic is investigating the incident.

The Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) have confirmed that threat actors are using FIRESTARTER malware to maintain persistence on Cisco network devices, retaining access even after patching and reboots.

FIRESTARTER targets Cisco Firepower and Secure Firewall devices running Adaptive Security Appliance (ASA) or Firepower Threat Defense (FTD) software that were compromised prior to September 2025.

FIRESTARTER establishes a persistent backdoor by hooking into the device’s core engine, allowing it to survive firmware updates, software upgrades, and regular reboots. It detects shutdown signals and automatically reinstalls itself, so typical remediation methods fail.

The attacker is believed to be a state-sponsored group known as UAT-4356. The attackers exploited CVE-2025-20333 (remote code execution) and CVE-2025-20362 (authentication bypass) to install the malware. Because FIRESTARTER survives standard patches, CISA warns that patching alone is insufficient if the device was compromised before a patch was installed. It recommends several measures, including physically unplugging the device from all power sources (including redundant power) for at least one minute. In addition, CISA and Cisco recommend completely wiping and reimaging affected Cisco devices to ensure the malware is removed.

The Driver’s Privacy Protection Act (DPPA) may not draw as much regular attention as statutes like the VPPA, CCPA, or TCPA, but it remains a source of privacy litigation risk where motor vehicle record information is involved. The DPPA is a federal law that limits how personal information from state motor vehicle records may be obtained, disclosed, or used, and it allows individuals to sue over alleged misuse of that information.

In Cicale v. Professional Parking Management Corporation, No. 24-61146-CIV-SINGHAL (S.D. Fla. May 1, 2026), the plaintiff alleged in 2024 that a parking management company used license plate reader technology in private lots, matched plate numbers to Department of Motor Vehicles (DMV) records, and then mailed parking charge notices to vehicle owners without first obtaining written consent. He claimed the notices were designed to resemble official citations, demanded $90 plus a $4.99 surcharge, and warned that nonpayment could lead to collections, booting, or towing. The complaint sought to represent a nationwide DPPA class and also asserted Florida consumer protection claims.

In an order entered on May 1, 2026, the District Court in the Southern District of Florida did not decide whether the company’s alleged access to DMV records violated the DPPA. Instead, the court held that the plaintiff had not alleged the kind of concrete injury needed to proceed in federal court. The order found that broad assertions of distress, annoyance, and privacy harm were too conclusory, and it rejected the plaintiff’s attempt to compare access to DMV records with a traditional invasion-of-privacy claim under Florida law. The court also rejected the claimed financial injury, reasoning that the plaintiff parked, left without paying, and then paid a bill he owed. The court dismissed the complaint and closed the case.

The decision underscores that, in DPPA litigation, a plaintiff must show real harm, not just alleged misuse of motor vehicle data. For businesses that use vehicle or location-related data in billing, enforcement, or operations, that means the fight may center as much on injury as on the underlying data practice. For now, this claim is parked.

California companies may have less time than they think to prepare for privacy audits. The California Privacy Protection Agency’s (CPPA) new Audits Division, created in February 2026, is expected to begin assessing companies’ compliance with the California Consumer Privacy Act (CCPA) this year, according to Executive Director Tom Kemp. This is notable because, while the formal deadline to submit cybersecurity audit certifications does not arrive until 2028 for some businesses, the CPPA expects companies to already be building and maintaining audit-ready compliance programs.

So, what will these audits likely look at? The CPPA has not laid out a full roadmap, but recent comments suggest it may focus on practical problem areas that have already drawn enforcement attention. That includes whether consumers can actually exercise their rights to access, correct, delete, and opt out; whether privacy policies are accurate and complete; and how businesses handle newer risk areas like chatbots, large language models, surveillance pricing, and sensitive data. Auditors may also review a company’s cybersecurity program, internal governance, systems, and vendor relationships. If they find serious gaps, those issues could be referred for enforcement, where penalties have already reached six and seven figures.

The messaging is clear: if your organization does business in California or operates nationally, it’s time to stop treating audit obligations as a future paperwork exercise and start treating them as a present compliance priority. Companies should assess whether the rules apply to them, test whether their cybersecurity program is properly documented and owned by qualified personnel, and align their audit readiness work with California’s separate risk assessment requirements. These audits may be new, but the expectation to be prepared is already here.

Fashion, beauty, and wearable technology brands are heading into 2026 with a lot more to think about concerning data privacy. What used to feel like a back-end legal issue is now shaping how companies design products, personalize experiences, and build trust with customers. With new state privacy laws taking effect in Indiana, Kentucky, and Rhode Island, updates to California’s rules, and more changes expected across the country, brands can no longer afford to treat privacy as a simple compliance exercise. For companies, being open and thoughtful about data practices can actually become a real point of differentiation.

The biggest pressure points are clear: biometric data, consumer health and wellness data, children’s privacy, and AI are all facing increased scrutiny this year. For brands using virtual try-on tools, skin analysis, body scanning, wearables, or AI-powered personalization, the compliance stakes are especially high because many of these tools rely on sensitive personal information. At the same time, regulators are paying closer attention to targeted advertising, cookies, and tracking technologies, while class-action lawsuits tied to tools like pixels and similar technologies continue to rise. That means companies need to think carefully not just about what data they collect, but why they collect it, how they disclose it, and whether users are given real, meaningful choices.

The good news is that strong privacy practices can do more than reduce legal risk. They can strengthen brand reputation and deepen consumer loyalty. Companies that invest in privacy by design, clear consent flows, transparent notices, thoughtful AI governance, and stronger controls around children’s and health-related data will be better positioned to keep up with fast-moving laws and consumer expectations. Privacy is not just about compliance; it’s about earning trust in a way customers can see and value. For brands operating in California, that also means ensuring their privacy programs align with the California Consumer Privacy Act’s requirements around notice, consumer rights, and meaningful choices about how personal information is collected, used, and shared.

In the category of how technology can be fun, yet dangerous, a 19-year-old college student alleges that the dating app Meete took a video of her high school graduation that she innocently posted on TikTok, then “overlayed it with graphics advertising the app, and added a voiceover to make it appear she was saying ‘Are you looking for a friend with benefits? This app shows you women around you who are looking for some fun. You can video chat with them.’”

Unfortunately, the student had no idea this was happening until another student who had met her showed her the video. The other student took screenshots of the content and provided them to her. She hired a lawyer, who hired an investigator to obtain additional information.

According to her lawyer, “they wanted viewers of these advertisements—and candidly this is pretty clearly targeted at male viewers—to have their eye caught by someone they may know or recognize…and that’s part of what makes it so disturbing.” He believes other women’s content has been “misappropriated” and they have no idea that this is happening.

The student alleges that the company used geotargeting to serve the ads on social media platforms to users near the student, “including men in her own dormitory.” According to CyberScoop, “the allegations, if proven, offer another example of how modern technology has made it easier than ever today for bad actors to imitate, objectify, profit off and harass individuals, often women.”

It is another example of how innocent content posted publicly can be exploited by deceitful individuals to cause harm, and how deepfakes and altered content are becoming more prevalent.

According to Cisco Talos researchers, phishing is the primary method threat actors use to gain unauthorized access to networks, accounting for more than one-third of all incidents in the first quarter of 2026. This increase is attributed to threat actors using legitimate AI tools to enhance phishing campaigns, particularly against the healthcare and government sectors.

According to the blog, “State-sponsored and criminal actors have been observed abusing large language models to aid in the development of phishing lures, malicious scripts, and other tasks.” They have also adopted AI algorithms to evade detection and orchestrate attacks.

The use of AI tools makes it easier for threat actors to gain entry, accelerate phishing campaigns, and harvest credentials faster, all without having to write code.

To prevent being victimized, Cisco recommends that organizations:

  • Implement properly configured multi-factor authentication (MFA) and other access control solutions;
  • Conduct robust patch management; and
  • Configure centralized logging capabilities across the environment.
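
To make the logging recommendation concrete, below is a minimal Python sketch of forwarding security-relevant application events to a central syslog collector. The hostname, port, and event details are hypothetical illustrations of the pattern, not Cisco’s guidance; production environments typically centralize logs with a dedicated forwarder or SIEM agent rather than per-application handlers.

    # Minimal sketch: ship security events off-host to a central syslog
    # collector ("loghost.example.com" is a hypothetical hostname).
    import logging
    import logging.handlers

    logger = logging.getLogger("app.auth")
    logger.setLevel(logging.INFO)

    # Events forwarded off-host remain available even if an intruder
    # tampers with local logs on a compromised machine.
    handler = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("login succeeded user=%s src=%s", "jdoe", "203.0.113.7")
    logger.warning("MFA challenge failed user=%s src=%s", "jdoe", "203.0.113.7")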

The IR Trends Q1 2026 post describes the ways AI tools are used to initiate attacks, and how phishing is again the most frequent method of entry for threat actors. It reinforces the need to keep users vigilant and educated on the importance of detecting and reporting phishing attempts.

Earlier this year, the Pennsylvania Supreme Court held that users generally lack a reasonable expectation of privacy in unprotected Google search records, underscoring how aggressively some courts are still applying third-party doctrine principles to digital data. Commonwealth v. Kurtz, 348 A.3d 133 (Pa. 2025). Our previous blog post on Kurtz is available here. The question of how much constitutional protection survives once a technology provider holds sensitive digital information is now before the United States Supreme Court in Chatrie v. United States, No. 25-112, in which the Court heard argument on April 27, 2026, on whether a geofence warrant violates the Fourth Amendment.

In Chatrie, police investigating a credit union robbery used a three-step geofence warrant process that led Google to produce anonymized device-location data for a defined place and time, then expanded location data for selected accounts, and finally subscriber information for a smaller subset, one of which identified the suspect, Chatrie. Chatrie argued in the lower courts that this was a digital version of the general warrants the Fourth Amendment was designed to forbid, whereas the government argued that users who enable location-history features voluntarily expose that data to a third party.

The most interesting part of the argument was not whether the Supreme Court will approve or reject geofence warrants across the board, but whether the Court may instead focus on how police move from a broad set of anonymous location data to a smaller set of identified users. In Chatrie, police first obtained anonymized device data for everyone in the area, then asked for more detailed location information for selected accounts and finally obtained subscriber information. Several justices appeared concerned that all of those steps were authorized in advance under a single warrant, without requiring police to return to a judge once they knew which devices they wanted to examine more closely.

A ruling for the government would push further in the direction suggested by cases like Kurtz, where sensitive data held by a provider is treated as something a user has voluntarily exposed. A narrower ruling, by contrast, could leave geofence warrants available in some form, but require clearer limits and renewed judicial approval before anonymous location data can be expanded or tied to a particular person. Either way, the case could shape how much privacy people retain when third parties store their movements and other revealing digital records.

The California Consumer Privacy Act (CCPA) continues to stand apart as the only comprehensive state privacy law in the U.S. that applies to personal information relating to employees, job applicants, and independent contractors. Since that coverage expanded in January 2023, many employers have had to navigate the difficult task of applying a consumer privacy framework to workforce data. That has created practical challenges, particularly in areas such as privacy notices, internal data practices, and responses to requests from workers seeking to exercise their privacy rights.

On April 20, 2026, the California Privacy Protection Agency (CPPA) began preliminary rulemaking focused on employee data and related privacy notice and disclosure requirements under the CCPA. The CPPA is exploring whether separate or more tailored regulations are needed to clarify how the law should apply to personal information collected in the employment context. Its request for input suggests that regulators recognize the uncertainty businesses and workers have faced. Among other issues, the CPPA is asking what difficulties employers encounter when giving job applicants and employees the ability to exercise privacy rights, and how regulations could better address those concerns.

Although it is still too early to predict the substance of any final rules, this process could have significant consequences for employers subject to the CCPA. New regulations could better align compliance obligations with the realities of human resources and workforce data management. At the same time, such regulations may introduce additional notice, disclosure, or operational requirements that increase regulatory burden. For now, the rulemaking remains in a pre-proposal stage, with preliminary comments due by May 20, 2026. If the CPPA moves forward with formal rulemaking, proposed regulations and another round of public comment would follow, with any final requirements unlikely to take effect before 2027.

Multiple class action cases have been filed against Tempus AI alleging that the company improperly collected and disclosed genetic information without obtaining prior written consent from individuals during its acquisition of Ambry Genetics. Tempus acquired Ambry, a genetic testing firm, in February 2025 for $600 million. The acquisition included the transfer of Ambry’s database, which contained the genetic information of hundreds of thousands of its customers. The allegations are that the database was transferred without proper consent, in violation of the Illinois Genetic Information Privacy Act.

In addition, the lawsuits claim that Tempus used the genetic information it collected for training its AI models and is sharing it with pharmaceutical companies for profit. The lawsuits seek damages and an injunction to prevent Tempus from further sharing the genetic information.