A new wave of state consumer privacy laws focused on limiting data collection is creating anxiety among businesses—and Maryland is leading the charge. The Maryland Online Data Privacy Act (MODPA), set to take effect in October 2025, requires companies to collect only data that is “reasonably necessary and proportionate” to their stated purposes. However, with no official guidance for compliance from the Maryland Attorney General, businesses are left guessing.

Under MODPA’s data minimization requirement, businesses may not collect or process more data than is necessary to provide a specific product or service to a consumer. In addition to the limited data collection requirement, MODPA also requires:

  1. Stricter Data Collection Practices for Sensitive Data: The data minimization requirements are more stringent for sensitive data, such as health information, religious beliefs, and genetic data. 
  2. Ban on the Sale of Sensitive Data: The law prohibits the sale of sensitive data unless it is strictly necessary to provide or maintain a requested product or service. 
  3. Explicit Consent: A business may not process personal information for a purpose other than the purpose(s) disclosed to the consumer at the time of collection unless the consumer provides explicit consent. 
  4. Limited Retention: A business may not retain consumer data for longer than necessary to fulfill the purpose for which it was collected (in other words, now is the time to update or implement your retention program; a minimal sketch of a purpose-bound retention check follows this list).
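As a loose illustration of the retention point in item 4 (not a statement of MODPA’s requirements), the sketch below flags records held longer than the period tied to the purpose for which they were collected. The field names and retention periods are hypothetical; actual periods must come from your own retention schedule and legal review.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical purpose-to-retention mapping for illustration only.
RETENTION_PERIODS = {
    "order_fulfillment": timedelta(days=365),
    "marketing": timedelta(days=180),
}

def overdue_records(records, now=None):
    """Return records retained past the period tied to their stated purpose."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for record in records:
        limit = RETENTION_PERIODS.get(record["purpose"])
        if limit and now - record["collected_at"] > limit:
            flagged.append(record)
    return flagged

# Example: a marketing record collected in early 2023 would be flagged for
# deletion or anonymization under this hypothetical schedule.
example = [{"id": 1, "purpose": "marketing",
            "collected_at": datetime(2023, 1, 1, tzinfo=timezone.utc)}]
print(overdue_records(example))
```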

This shift towards data minimization marks a departure from the more familiar “notice and choice” model, pushing companies to operationalize data minimization in ways that may significantly alter their data practices. While some businesses, particularly those already operating under stricter global standards like the European Union’s General Data Protection Regulation (GDPR), may be better prepared, others are weighing whether to reduce data collection or even scale back operations in certain states.

Companies developing or utilizing generative artificial intelligence are especially concerned, as these laws may limit access to large, diverse datasets required to train their models. Still, some see this as an opportunity to innovate with privacy-first technologies, such as synthetic data.
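As a rough illustration of the synthetic-data idea (not any particular vendor’s approach, and far simpler than production pipelines that fit a generative model to real data), the sketch below produces fake records that mimic the shape of a customer dataset without containing real personal information. The field names and value ranges are hypothetical.

```python
import random
import string

def synthetic_customer(rng):
    """Generate one fake customer record containing no real personal data."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8)).title()
    return {
        "name": name,
        "age": rng.randint(18, 90),
        "zip3": f"{rng.randint(0, 999):03d}",   # coarse location only
        "monthly_spend": round(rng.uniform(10, 500), 2),
    }

rng = random.Random(42)  # seeded for reproducibility
dataset = [synthetic_customer(rng) for _ in range(5)]
for row in dataset:
    print(row)
```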

States like Maine, Massachusetts, Connecticut, and Minnesota are considering similar laws, signaling a growing trend. But as businesses await clearer definitions and enforcement standards, the central question remains: Can regulators strike the right balance between protecting privacy and supporting innovation?

A new survey from Intapp, the “2025 Tech Perceptions Survey Report,” reports a “surge in AI usage” among fee-earners in the accounting, consulting, finance, and legal sectors. Findings include that “AI usage among professionals has grown substantially, with 72% using AI at work versus 48% in 2024.” AI adoption among firms increased to 56%, with firms utilizing it for data summarization, document generation, research, error-checking, quality control, voice queries, data entry, consultation (decision-making support), and recommendations. Adoption is highest in finance, where 89% of professionals use AI at work, followed by accounting (73%), consulting (68%), and legal (55%).

A significant conclusion is that when firms do not provide AI tools, professionals often find their own: over 50% of professionals have used unauthorized AI tools in the workplace, which increases risk for companies. Professionals are reallocating the time saved with AI tools to improving work-life balance, higher-level client work, strategic initiatives and planning, cultivating client relationships, and increasing billable hours.

The survey found that professionals want and need technology to assist with tasks, yet only 32% of professionals believe they have the optimal technology to complete their jobs effectively. The report concludes that professionals who are given optimal technology are more satisfied and more likely to stay at their firms, that optimal tech “powers professional- and firm-success,” and that AI is becoming non-negotiable for future firm leaders.

AI tools are rapidly being developed and adopted across industries, including the professional sectors. As the Intapp survey notes, if firms do not provide AI tools for workers to use to enhance their jobs, workers will use them anyway. The survey reiterates how important it is to have an AI Governance Program in place that provides sanctioned tools and reduces the risks associated with unauthorized AI use. Developing and implementing an AI Governance Program and acceptable use policies should be high on the priority list for all industries, including professional services.

Never underestimate an operating system update from any mobile phone manufacturer. This week, Apple issued iOS 18.5, which provides enhancements to the user experience but also fixes bugs and security flaws.

This update fixes over 30 security bugs, so the sooner you update to the new version, the better from a security standpoint. The security flaws that the patch addresses include known and unknown vulnerabilities, as well as zero-days that may or may not have been exploited in the wild.

If you haven’t updated to iOS 18.5, plug your phone in now and install it as soon as possible, not only for the enhancements but, most importantly, for the security fixes. If you don’t have your phone set to install updates automatically, you may wish to enable that feature in your settings, as it is a good way to stay on top of new releases in a timely manner.

The U.S. Attorney’s Office for the District of Massachusetts has charged a student at Assumption University with hacking into two U.S.-based companies’ systems and demanding a ransom.

Matthew D. Lane, 19, has agreed to plead guilty to one count of cyber extortion conspiracy, one count of cyber extortion, one count of unauthorized access to protected computers, and one count of aggravated identity theft.

The U.S. Attorney’s Office’s press release states that Lane agreed with co-conspirators between April and May 2024 to extort a $200,000 ransom payment from a telecommunications company by threatening to publish private data. When the telecommunications company questioned the payment, Lane used stolen login credentials to access the computer network of a software and cloud storage company that served school systems. The company received threats that the “PII of more than 60 million students and 10 million teachers – including names, email addresses, phone numbers, Social Security numbers, dates of birth, medical information, residential addresses, parent and guardian information and passwords, among other data – would be ‘leak[ed] . . . worldwide’ if the company did not pay a ransom of approximately $2.85 million in Bitcoin.”

A plea hearing has not been scheduled. If convicted, “the charges of cyber extortion conspiracy, cyber extortion and unauthorized access to protected computers each provide for a sentence of up to five years in prison, three years of supervised release and a fine of up to $250,000, or twice the gross gain or loss, whichever is greater. The charge of aggravated identity theft provides for a mandatory sentence of two years in prison, consecutive to any sentence imposed on the computer fraud charges.”

U.S. companies are running out of time to comply with a sweeping new Department of Justice (DOJ) rule that limits sharing sensitive personal data with certain foreign countries—including China, Russia, and Iran. With a hard compliance deadline of July 8, 2025, businesses must act quickly to avoid steep civil or criminal penalties.

The rule, which is part of a broader DOJ national security initiative, took effect on April 8, 2025. However, the agency is offering a short “good faith” grace period for companies actively working to meet the new requirements. After July 8, enforcement actions can carry fines of up to $1 million and potential prison sentences of up to 20 years.

What the Rule Covers

The DOJ’s data security rule prohibits or restricts U.S. companies from sharing bulk sensitive personal data with individuals or entities from designated “foreign adversary” nations. Affected data types include:

  • Human genomic and biometric data
  • Precise geolocation
  • Health information
  • Financial data and identifiers like account names and passwords
  • Logs from fitness apps or wearables
  • Government-related location data or data linked to U.S. government employees

What Companies Need to Do Now

To comply, businesses can take the following actions:

  1. Audit Data
    Identify whether the company stores or transmits regulated data and whether the volumes meet “bulk” thresholds defined by the rule (a minimal audit sketch follows this list).
  2. Review Contracts and Data-Sharing Agreements
    Amend or terminate any transactions or contracts that give covered foreign persons access to sensitive data, including data licensing, brokerage, or research partnerships.
  3. Evaluate Foreign Partnerships
    Agreements with non-adversary foreign entities must now include language stating that data will not be passed on to restricted parties.
  4. Assess Vendor and Investment Exposure
    Transactions that grant foreign employees, investors, or vendors access to regulated data require strong security controls and may require renegotiation.
  5. Build a Compliance Program
    Companies should implement written policies, employee training, and auditing systems and report violations to the DOJ.
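As a loose sketch of the audit step in item 1, the code below tallies records by regulated data category and flags any category that exceeds a threshold. The threshold numbers here are placeholders for illustration only; the rule’s actual “bulk” thresholds vary by data type and should be taken from the DOJ’s text.

```python
from collections import Counter

# Placeholder thresholds for illustration only; consult the DOJ rule for the
# actual "bulk" thresholds, which differ by category of sensitive data.
PLACEHOLDER_BULK_THRESHOLDS = {
    "genomic": 100,
    "biometric": 1_000,
    "geolocation": 1_000,
    "health": 10_000,
    "financial": 10_000,
}

def flag_bulk_categories(records):
    """Return categories whose record counts exceed the placeholder thresholds."""
    counts = Counter(r["category"] for r in records)
    return {
        category: count
        for category, count in counts.items()
        if count > PLACEHOLDER_BULK_THRESHOLDS.get(category, float("inf"))
    }

# Example: 1,500 geolocation records exceed the placeholder threshold of 1,000.
sample = [{"category": "geolocation"}] * 1_500 + [{"category": "health"}] * 50
print(flag_bulk_categories(sample))
```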

With less than two months remaining, companies are urged to determine the next steps for compliance, conduct a comprehensive risk assessment, and review the DOJ’s newly released compliance guide. The DOJ encourages informal inquiries before the deadline but will not review requests for advisory opinions or licenses before July 8.

Companies that handle sensitive personal data must treat the new rule as a top compliance priority or risk serious consequences for the business.

On May 21, 2025, the Federal Trade Commission (FTC) finalized its order with GoDaddy over allegations that GoDaddy “failed to implement standard data security tools and practices to protect customers’ websites and data.” In a Complaint filed against GoDaddy in January 2025, the FTC alleged that the company had “failed to implement reasonable and appropriate security measures to protect and monitor its website-hosting environments for security threats, and misled customers about the extent of its data security protections on its website hosting services.”

The allegations against GoDaddy include not implementing multi-factor authentication, monitoring for security threats, and securing connections to consumer data. As a result, GoDaddy suffered several data breaches, which “allowed bad actors to gain unauthorized access to customers’ websites and data.” In addition, the FTC alleged that GoDaddy “deceived” users about its data security practices and compliance with the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks.

Pursuant to the order, GoDaddy is:

  • Prohibited from making misrepresentations about its security and the extent to which it complies with any privacy or security program sponsored by a government, self-regulatory, or standard-setting organization;
  • Required to establish and implement a comprehensive information-security program that protects the security, confidentiality, and integrity of its website-hosting services; and
  • Required to hire an independent third-party assessor to conduct reviews of its information-security program.

The FTC voted unanimously, 3-0, to finalize the order. The order emphasizes the FTC’s continued focus on data security and companies’ representations of data security measures to consumers. Therefore, companies may wish to reassess and update data security practices to confirm that they are commercially reasonable and consistent with their assertions to the public.

On May 15, 2025, a district court in Illinois denied a motion by defendant Hospital Sisters Health System and Saint Francis (HSHS) to dismiss a class action claim brought against the hospital system under the Illinois Genetic Information Privacy Act (GIPA).

GIPA regulates the use, disclosure, and acquisition of genetic information and has adopted the same definition of genetic information as provided in the federal Health Insurance Portability and Accountability Act (HIPAA):

(i) the individual’s genetic tests; (ii) the genetic tests of family members of the individual; (iii) the manifestation of a disease or disorder in family members of such individual; or (iv) any request for, or receipt of, genetic services, or participation in clinical research which includes genetic services, by the individual or any family member of the individual.

GIPA prohibits employers from soliciting or requesting genetic testing or genetic information of a person or their family members as a condition of employment. GIPA also prohibits employers from changing the terms, conditions, or privileges of employment or terminating the employment of any person due to a person or their family member’s genetic testing or information.

In this case, the plaintiff filed her complaint in December 2024, alleging that the hospital system requires job applicants to submit to a pre-employment medical examination conducted by an HSHS employee. The examination allegedly requires applicants to disclose information concerning their family medical histories. The plaintiff alleges that, as a job applicant with HSHS, she too was required to submit to a medical examination that asked about her family’s medical history, including whether there was a family history of heart disease, asthma, or psychological conditions.

In its motion to dismiss filed in February 2025, HSHS argued that the generic family medical history questions included in its medical examination are routine medical questions that do not constitute genetic information as protected by GIPA. The court was unconvinced, holding that “these questions involve[d] a clear report of the manifestation of a disease or disorder in a family which is clearly specified in GIPA through its adaptation of HIPAA’s definitions.” In addition, to support its holding, the court noted that the federal Genetic Information Nondiscrimination Act (GINA), which is also incorporated into GIPA, defines the term “family medical history” as “information about the manifestation of disease or disorder” in family members.

Though GIPA litigation has not yet risen to the level of litigation regarding Illinois’ Biometric Information Privacy Act (BIPA), several courts in 2024 noted that GIPA should apply broadly. In Taylor v. Union Pacific Railroad Co., No. 23-CV-16404, 2024 WL 3425751 (N.D. Ill. July 16, 2024), the court held that GIPA plaintiffs have lenient standing requirements, concluding that BIPA’s definition of “aggrieved persons” – which encompasses individuals who sustained no actual injury beyond a violation of their rights under the statute – applies to GIPA as well. In McKnight v. United Airlines, Inc., No. 23-CV-16118, 2024 WL 3426807, at *1 (N.D. Ill. July 16, 2024), the court found that individuals outside of Illinois may nonetheless initiate GIPA litigation if the underlying activity “occurred primarily and substantially in Illinois” and that GIPA has a five-year statute of limitations.

Employers with ties to Illinois should note that GIPA may apply to them. Any questions about a job applicant’s family medical history may be considered genetic information under the act—even if these questions are intended to be routine health inquiries—and could give rise to a GIPA claim. Pre-employment exams should be structured carefully to avoid running afoul of GIPA and potential class action risks.

Pennsylvania-based Chord Specialty Dental Partners is under fire after a September 2024 data breach compromised the personal information of over 173,000 individuals. At least seven proposed class action lawsuits have been filed in federal courts in Tennessee and Pennsylvania, alleging the company failed to secure and protect patient data properly.

The lawsuits claim Chord Dental violated its obligations under state and federal laws, including the Federal Trade Commission (FTC) Act and the Health Insurance Portability and Accountability Act (HIPAA). Plaintiffs argue that the company did not implement reasonable cybersecurity measures or provide timely and sufficient notice of the breach.

Exposed data included names, addresses, Social Security numbers, driver’s license numbers, bank and payment card information, dates of birth, and medical and insurance records.

The plaintiffs claim that they have suffered harm, including out-of-pocket costs, time spent mitigating the damage, emotional distress, and increased risk of identity theft. One plaintiff also seeks to represent a specific subclass of affected Pennsylvania residents.

The flurry of suits asserts various legal claims, from negligence and breach of contract to unjust enrichment. Plaintiffs are seeking damages, restitution, credit monitoring, and court orders requiring stronger data protections.

As legal proceedings unfold, the case highlights ongoing concerns over cybersecurity practices in the healthcare industry—and the steep costs of failing to safeguard protected health information.

In yet another reminder that California takes data privacy seriously, this month, the California Privacy Protection Agency (CPPA) fined Florida-based data broker Jerico Pictures, Inc. (d/b/a National Public Data) $46,000 for failing to register under the state’s Delete Act.

The fine is the maximum allowed by law and was imposed after the company failed to register with the state’s Data Broker Registry for over 230 days. Registration only occurred after the CPPA’s Enforcement Division contacted the company during an investigation. National Public Data did not contest the allegations, prompting the CPPA Board to issue a default order.

“This case arose under the Delete Act rather than under California’s comprehensive consumer privacy law, [but] the takeaway is the same,” said Michael Macko, head of enforcement at the CPPA. “We will litigate and bring enforcement actions when businesses violate California’s privacy laws.”

The Delete Act, which took effect in 2024, requires data brokers to register annually and pay a fee that supports the California Data Broker Registry. That registry will soon underpin a major consumer privacy tool: the Delete Request and Opt-Out Platform (DROP), launching in 2026. DROP will allow Californians to request that all registered data brokers delete their personal information with a single action.

This enforcement action sends a clear message to data brokers nationwide: comply or face consequences.

On Monday, May 19, 2025, President Donald Trump signed the “Take It Down Act” into law. The Act, which unanimously passed the Senate and cleared the House in a 409-2 vote, criminalizes the distribution of intimate images of someone without their consent. Lawmakers from both parties have commented that the law is long overdue to protect individuals from online abuse. It is disheartening that a law must be passed (almost unanimously) to require people and social media companies to do the right thing.

There has been growing concern about AI’s ability to create and distribute deepfake pictures and videos of individuals. Deepfake images are created by combining benign images (primarily of women and celebrities) with other fake content to produce explicit photos used for sextortion, revenge porn, and deepfake image abuse.

The Take It Down Act requires social media platforms to remove non-consensual intimate images within 48 hours of a victim’s request. The Act requires “websites and online or mobile applications” to “implement a ‘notice-and-removal’ process to remove such images at the depicted individual’s request.”  It provides for seven separate criminal offenses chargeable under the law. The criminal prohibitions take effect immediately, but social media platforms have until May 19, 2026, to establish the notice-and-removal process for compliance.
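To make the 48-hour window concrete, here is a minimal sketch (a hypothetical data model, not a statutory compliance tool) that computes the removal deadline for each takedown request and flags any that are overdue.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # removal window described in the Act

@dataclass
class TakedownRequest:
    request_id: str
    received_at: datetime
    removed_at: datetime | None = None

    def deadline(self) -> datetime:
        """Latest time the content may remain up after the victim's request."""
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        """True if the content has not been removed and the deadline has passed."""
        now = now or datetime.now(timezone.utc)
        return self.removed_at is None and now > self.deadline()

# Example: a request received 72 hours ago with no removal recorded is overdue.
req = TakedownRequest("r-1", datetime.now(timezone.utc) - timedelta(hours=72))
print(req.deadline(), req.is_overdue())
```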

The Take It Down Act is a late response to the growing problem of sexually explicit deepfakes used primarily against women. Victims must still proactively reach out to social media companies to take down non-consensual images, which in the past has been difficult. Requiring the companies to remove the offensive content within 48 hours is a big step forward in giving individuals the right to protect their privacy and self-determination.