The U.S. Attorney’s Office for the District of Massachusetts has charged a student at Assumption University with hacking into two U.S.-based companies’ systems and demanding a ransom.

Matthew D. Lane, 19, has agreed to plead guilty to one count of cyber extortion conspiracy, one count of cyber extortion, one count of unauthorized access to protected computers, and one count of aggravated identity theft.

The U.S. Attorney’s Office’s press release states that Lane agreed with co-conspirators between April and May 2024 to extort a $200,000 ransom payment from a telecommunications company by threatening to publish private data. When the telecommunications company questioned the payment, Lane used stolen login credentials to access the computer network of a software and cloud storage company that served school systems. The company received threats that the “PII of more than 60 million students and 10 million teachers – including names, email addresses, phone numbers, Social Security numbers, dates of birth, medical information, residential addresses, parent and guardian information and passwords, among other data – would be ‘leak[ed] . . . worldwide’ if the company did not pay a ransom of approximately $2.85 million in Bitcoin.”

A plea hearing has not been scheduled. If convicted, “the charges of cyber extortion conspiracy, cyber extortion and unauthorized access to protected computers each provide for a sentence of up to five years in prison, three years of supervised release and a fine of up to $250,000, or twice the gross gain or loss, whichever is greater. The charge of aggravated identity theft provides for a mandatory sentence of two years in prison, consecutive to any sentence imposed on the computer fraud charges.”

U.S. companies are running out of time to comply with a sweeping new Department of Justice (DOJ) rule that limits sharing sensitive personal data with certain foreign countries—including China, Russia, and Iran. With a hard compliance deadline of July 8, 2025, businesses must act quickly to avoid steep civil or criminal penalties.

The rule, which is part of a broader DOJ national security initiative, took effect on April 8, 2025. However, the agency is offering a short “good faith” grace period for companies actively working to meet the new requirements. After July 8, enforcement actions can carry fines of up to $1 million and potential prison sentences of up to 20 years.

What the Rule Covers

The DOJ’s data security rule prohibits or restricts U.S. companies from sharing bulk sensitive personal data with individuals or entities from designated “foreign adversary” nations. Affected data types include:

  • Human genomic and biometric data
  • Precise geolocation
  • Health information
  • Financial data and identifiers like account names and passwords
  • Logs from fitness apps or wearables
  • Government-related location data or data linked to U.S. government employees

What Companies Need to Do Now

To comply, businesses can take the following actions:

  1. Audit Data
    Identify whether the company stores or transmits regulated data and whether the volumes meet “bulk” thresholds defined by the rule (a first-pass audit sketch follows this list).
  2. Review Contracts and Data-Sharing Agreements
    Amend or terminate any transactions or contracts that give covered foreign persons access to sensitive data, including data licensing, brokerage, or research partnerships.
  3. Evaluate Foreign Partnerships
    Agreements with non-adversary foreign entities must now include language stating that data will not be passed on to restricted parties.
  4. Assess Vendor and Investment Exposure
    Transactions that grant foreign employees, investors, or vendors access to regulated data require strong security controls and may require renegotiation.
  5. Build a Compliance Program
    Companies should implement written policies, employee training, and auditing systems and report violations to the DOJ.
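
Step 1 is the most mechanical of the five, so a short sketch may help. Below is a minimal, hypothetical first-pass audit in Python: the category names mirror the rule’s covered data types, but the threshold numbers are placeholders, not the rule’s actual “bulk” thresholds, which should be taken from the final rule text before any reliance.

```python
# Hypothetical first-pass "bulk" data audit (step 1 above).
# Category names track the rule's covered data types; the threshold
# values are PLACEHOLDERS -- consult the final DOJ rule for the real
# bulk thresholds before relying on any output.

RECORD_COUNTS = {                 # e.g., pulled from a data inventory
    "human_genomic": 250,
    "biometric_identifiers": 12_000,
    "precise_geolocation": 3_400,
    "personal_health": 80_000,
    "personal_financial": 150_000,
}

ILLUSTRATIVE_BULK_THRESHOLDS = {  # placeholder values only
    "human_genomic": 100,
    "biometric_identifiers": 1_000,
    "precise_geolocation": 1_000,
    "personal_health": 10_000,
    "personal_financial": 10_000,
}

def flag_bulk_categories(counts: dict[str, int],
                         thresholds: dict[str, int]) -> list[str]:
    """Return categories whose record counts meet or exceed the
    (illustrative) bulk threshold and therefore warrant legal review."""
    return [cat for cat, n in counts.items()
            if n >= thresholds.get(cat, float("inf"))]

if __name__ == "__main__":
    for category in flag_bulk_categories(RECORD_COUNTS,
                                         ILLUSTRATIVE_BULK_THRESHOLDS):
        print(f"Review required: '{category}' appears to meet a bulk threshold")
```

An audit like this only surfaces candidates for review; whether a given transfer is actually prohibited or restricted turns on the counterparty and the transaction type, which is a legal question, not a counting one.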

With less than two months remaining, companies are urged to determine the next steps for compliance, conduct a comprehensive risk assessment, and review the DOJ’s newly released compliance guide. The DOJ encourages informal inquiries before the deadline but will not review requests for advisory opinions or licenses before July 8.

Companies that handle sensitive personal data must treat the new rule as a top compliance priority or risk serious consequences for the business.

On May 21, 2025, the Federal Trade Commission (FTC) finalized its order with GoDaddy over allegations that GoDaddy “failed to implement standard data security tools and practices to protect customers’ websites and data.” In a Complaint filed against GoDaddy in January 2025, the FTC alleged that the company had “failed to implement reasonable and appropriate security measures to protect and monitor its website-hosting environments for security threats, and misled customers about the extent of its data security protections on its website hosting services.”

The allegations against GoDaddy include failing to implement multi-factor authentication, failing to monitor for security threats, and failing to secure connections to consumer data. As a result, GoDaddy suffered several data breaches, which “allowed bad actors to gain unauthorized access to customers’ websites and data.” In addition, the FTC alleged that GoDaddy “deceived” users about its data security practices and compliance with the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks.
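
For readers unfamiliar with the first item, the sketch below shows what a minimal time-based one-time-password (TOTP) check, one common form of multi-factor authentication, looks like in Python using the pyotp library. It illustrates the general technique only; it is not a description of GoDaddy’s systems or of what the order requires.

```python
# Minimal TOTP-based MFA check using the pyotp library (pip install pyotp).
# Illustrative only -- a real deployment also needs secure per-user secret
# storage, rate limiting, and account-recovery flows.
import pyotp

# In practice the secret is generated once at enrollment, stored
# server-side per user, and loaded into the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def verify_second_factor(user_supplied_code: str) -> bool:
    """Return True only if the code matches the current 30-second window."""
    return totp.verify(user_supplied_code)

# Simulate a login: the "user" reads the current code from their app.
code_from_app = totp.now()
assert verify_second_factor(code_from_app)
```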

Pursuant to the order, GoDaddy is:

  • Prohibited from making misrepresentations about its security and the extent to which it complies with any privacy or security program sponsored by a government, self-regulatory, or standard-setting organization;
  • Required to establish and implement a comprehensive information-security program that protects the security, confidentiality, and integrity of its website-hosting services; and
  • Required to hire an independent third-party assessor to conduct reviews of its information-security program.

The FTC voted unanimously, 3-0, to finalize the order. The order emphasizes the FTC’s continued focus on data security and companies’ representations of data security measures to consumers. Therefore, companies may wish to reassess and update data security practices to confirm that they are commercially reasonable and consistent with their assertions to the public.

On May 15, 2025, a district court in Illinois denied a motion by defendant Hospital Sisters Health System and Saint Francis (HSHS) to dismiss a class action claim brought against the hospital system under the Illinois Genetic Information Privacy Act (GIPA).

GIPA regulates the use, disclosure, and acquisition of genetic information and has adopted the same definition of genetic information as provided in the federal Health Insurance Portability and Accountability Act (HIPAA):

(i) the individual’s genetic tests; (ii) the genetic tests of family members of the individual; (iii) the manifestation of a disease or disorder in family members of such individual; or (iv) any request for, or receipt of, genetic services, or participation in clinical research which includes genetic services, by the individual or any family member of the individual.

GIPA prohibits employers from soliciting or requesting genetic testing or genetic information of a person or their family members as a condition of employment. GIPA also prohibits employers from changing the terms, conditions, or privileges of employment or terminating the employment of any person due to a person or their family member’s genetic testing or information.

In this case, the plaintiff filed her complaint in December 2024, alleging that the hospital system requires potential employees to submit to a pre-employment medical examination conducted by an HSHS employee. The examination allegedly requires job applicants to disclose information about their family medical histories. The plaintiff alleges that, as a job applicant with HSHS, she too was required to submit to a medical examination that asked about her family’s medical history, reportedly including inquiries about family history of heart disease, asthma, and psychological conditions.

In its motion to dismiss filed in February 2025, HSHS argued that the generic family medical history questions included in its medical examination are routine medical questions that do not constitute genetic information protected by GIPA. The court was unconvinced, holding that “these questions involve[d] a clear report of the manifestation of a disease or disorder in a family which is clearly specified in GIPA through its adaptation of HIPAA’s definitions.” In addition, to support its holding, the court noted that the federal Genetic Information Nondiscrimination Act (GINA), which is also incorporated into GIPA, defines the term “family medical history” as “information about the manifestation of disease or disorder” in family members.

Though GIPA litigation has not yet reached the volume of litigation under Illinois’ Biometric Information Privacy Act (BIPA), several courts held in 2024 that GIPA should apply broadly. In Taylor v. Union Pacific Railroad Co., No. 23-CV-16404, 2024 WL 3425751 (N.D. Ill. July 16, 2024), the court held that GIPA plaintiffs have lenient standing requirements, concluding that BIPA’s definition of “aggrieved persons” – which encompasses individuals who sustained no actual injury beyond a violation of their rights under the statute – applies to GIPA as well. In McKnight v. United Airlines, Inc., No. 23-CV-16118, 2024 WL 3426807, at *1 (N.D. Ill. July 16, 2024), the court found that individuals outside of Illinois may nonetheless bring GIPA claims if the underlying activity “occurred primarily and substantially in Illinois” and that GIPA has a five-year statute of limitations.

Employers with ties to Illinois should note that GIPA may apply to them. Any questions about a job applicant’s family medical history may be considered genetic information under the act—even if these questions are intended to be routine health inquiries—and could give rise to a GIPA claim. Pre-employment exams should be structured carefully to avoid running afoul of GIPA and potential class action risks.

Pennsylvania-based Chord Specialty Dental Partners is under fire after a September 2024 data breach compromised the personal information of over 173,000 individuals. At least seven proposed class action lawsuits have been filed in federal courts in Tennessee and Pennsylvania, alleging the company failed to secure and protect patient data properly.

The lawsuits claim Chord Dental violated its obligations under state and federal laws, including the Federal Trade Commission (FTC) Act and the Health Insurance Portability and Accountability Act (HIPAA). Plaintiffs argue that the company did not implement reasonable cybersecurity measures or provide timely and sufficient notice of the breach.

Exposed data included names, addresses, Social Security numbers, driver’s license numbers, bank and payment card information, dates of birth, and medical and insurance records.

The plaintiffs claim that they have suffered harm, including out-of-pocket costs, time spent mitigating the damage, emotional distress, and increased risk of identity theft. One plaintiff also seeks to represent a specific subclass of affected Pennsylvania residents.

The flurry of suits asserts various legal claims, from negligence and breach of contract to unjust enrichment. Plaintiffs seek damages, restitution, credit monitoring, and court orders requiring stronger data protections.

As legal proceedings unfold, the case highlights ongoing concerns over cybersecurity practices in the healthcare industry and the steep costs of failing to safeguard protected health information.

In yet another reminder that California takes data privacy seriously, this month, the California Privacy Protection Agency (CPPA) fined Florida-based data broker Jerico Pictures, Inc. (d/b/a National Public Data) $46,000 for failing to register under the state’s Delete Act.

The fine is the maximum allowed by law and was imposed after the company failed to register with the state’s Data Broker Registry for over 230 days. Registration only occurred after the CPPA’s Enforcement Division contacted the company during an investigation. National Public Data did not contest the allegations, prompting the CPPA Board to issue a default order.

“This case arose under the Delete Act rather than under California’s comprehensive consumer privacy law, [but] the takeaway is the same,” said Michael Macko, head of enforcement at the CPPA. “We will litigate and bring enforcement actions when businesses violate California’s privacy laws.”

The Delete Act, which took effect in 2024, requires data brokers to register annually and pay a fee that supports the California Data Broker Registry. That registry will soon underpin a major consumer privacy tool: the Delete Request and Opt-Out Platform (DROP), launching in 2026. DROP will allow Californians to request that all registered data brokers delete their personal information with a single action.

This enforcement action sends a clear message to data brokers nationwide: comply or face consequences.

On Monday, May 19, 2025, President Donald Trump signed the “Take It Down Act” into law. The Act, which unanimously passed the Senate and cleared the House in a 409-2 vote, criminalizes the distribution of intimate images of someone without their consent. Lawmakers from both parties have commented that the law is long overdue to protect individuals from online abuse. It is disheartening that a law must be passed (almost unanimously) to require people and social media companies to do the right thing.

There has been growing concern about AI’s ability to create and distribute deepfake pictures and videos of individuals. Deepfake images are created by combining benign images (primarily of women and celebrities) with other fake content to produce explicit photos used for sextortion, revenge porn, and deepfake image abuse.

The Take It Down Act requires social media platforms to remove non-consensual intimate images within 48 hours of a victim’s request. The Act requires “websites and online or mobile applications” to “implement a ‘notice-and-removal’ process to remove such images at the depicted individual’s request.”  It provides for seven separate criminal offenses chargeable under the law. The criminal prohibitions take effect immediately, but social media platforms have until May 19, 2026, to establish the notice-and-removal process for compliance.
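
To make the 48-hour requirement concrete, here is a minimal, hypothetical sketch of how a platform might track removal requests against the statutory window. All names and structures here are invented for illustration; the Act specifies the deadline, not any particular implementation.

```python
# Hypothetical notice-and-removal queue enforcing a 48-hour takedown
# window. Class and field names are invented for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

TAKEDOWN_WINDOW = timedelta(hours=48)

@dataclass
class RemovalRequest:
    content_id: str
    received_at: datetime
    removed_at: Optional[datetime] = None  # None until content comes down

    def deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        """True if the content is still up past the 48-hour deadline."""
        return self.removed_at is None and now > self.deadline()

# Example: a request received 50 hours ago with no removal is overdue.
now = datetime.now(timezone.utc)
req = RemovalRequest("post-123", received_at=now - timedelta(hours=50))
print(req.is_overdue(now))  # True -> escalate immediately
```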

The Take It Down Act is a late response to a growing problem of sexually explicit deepfakes used primarily against women. Victims still must proactively reach out to social media companies to have non-consensual images taken down, which in the past has been difficult. Requiring companies to remove the offensive content within 48 hours is a big step forward in giving individuals the right to protect their privacy and self-determination.

AI service provider Serviceaide Inc. faces two proposed class action lawsuits from a data breach tied to Catholic Health System Inc., a nonprofit hospital network in Buffalo, New York. The breach reportedly exposed the personal information of over 480,000 individuals, including patients and employees.

Filed in the U.S. District Court for the Northern District of California, the lawsuits allege that Serviceaide acted negligently and failed to protect sensitive data in an Elasticsearch database that was allegedly left publicly accessible for months before the exposure was disclosed.
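
For context on the alleged exposure: an Elasticsearch cluster with security disabled will answer unauthenticated HTTP requests with cluster metadata. The hypothetical Python check below simply tests whether an endpoint responds without credentials; it should only be run against infrastructure you own or are authorized to test.

```python
# Hedged sketch: does an Elasticsearch endpoint answer without auth?
# An unsecured cluster typically returns cluster metadata from its root
# endpoint with no credentials. Authorized testing only.
import requests

def responds_without_auth(base_url: str) -> bool:
    """Return True if the root endpoint returns HTTP 200 with no credentials."""
    try:
        resp = requests.get(base_url, timeout=5)
    except requests.RequestException:
        return False  # unreachable is treated as "not openly exposed"
    return resp.status_code == 200

# Example against a locally run cluster (hypothetical URL):
print(responds_without_auth("http://localhost:9200"))
```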

Serviceaide, which provides AI-driven chatbots and IT support solutions, was contracted by Catholic Health and entrusted with managing protected health information and employment records. Plaintiffs allege that the company waited seven months after the incident to notify affected individuals. The affected data included patient records and personal information.

The lawsuits allege claims of negligence, breach of implied contract, unjust enrichment, invasion of privacy, and violations of California’s Unfair Competition Law.

Both plaintiffs seek to represent a nationwide class of individuals whose data was compromised and are seeking injunctive relief, damages, and attorneys’ fees.

These lawsuits highlight growing legal exposure for tech firms that handle protected health information, especially as more hospitals and healthcare systems outsource services to AI and cloud vendors. The healthcare sector remains one of the most targeted industries for cyber threats, and breaches involving third-party vendors are drawing increasing legal scrutiny.

Everyone thinks they can spot a phish. Whether it arrives by email, SMS text, or QR code (“quishing”), people have an overinflated view of their ability to detect phishing attempts.

A new summary by KnowBe4, “What Makes People Click?”, provides an insightful review and shows that people still click when curiosity gets the best of them.

According to the summary of top-clicked phishing tests between January and March 2025, phishes impersonating HR or IT are the most successful. People were most likely to interact with links related to internal team topics; to open PDF, HTML, and Word (.doc) attachments; and to fall for impersonations of trusted company brands. The brands most often impersonated in successful phishing campaigns are Microsoft, LinkedIn, the victim’s own employer, Google, and Okta.

And then there are QR codes. Everyone makes fun of me for constantly warning about QR codes, and I am grateful to KnowBe4 for having my back on this one. Its summary shows that users continue to be duped into scanning malicious QR codes. The top three successful QR scams posed as a company’s new drug and alcohol policy, a DocuSign document for review and signature, and a happy birthday message from Workday. Please take these statistics to heart and beware of these and similar scams. Think twice before clicking on that happy birthday message from Workday.

I frequently conduct employee education sessions and carefully follow KnowBe4’s insights. It always has its finger on the pulse and provides practical solutions in real time. Review its first-quarter summary, which is jam-packed with useful information for you and your users.

A new study by Ivanti shows that one in three workers secretly uses artificial intelligence (AI) tools in the workplace. They do so for varying reasons, including “I like a secret advantage,” “My job might be reduced/cut,” “My employer has no AI usage policy,” “My boss might give me more work,” “I don’t want people to question my ability,” and “I don’t want to deal with IT approval processes.”

In 2025, a staggering 42% of employees admit to using generative AI (GenAI) tools at work. Another whopping 48% of employees admit to feeling resenteeism (disliking one’s job but staying anyway), and 39% admit to presenteeism (showing up at the office to be seen while not being productive).

The secret use of GenAI tools in the workplace poses several risks for organizations, including unauthorized disclosure of company data and/or personal information, cybersecurity risks, bias and discrimination, and misappropriation of intellectual property.

The Ivanti study underscores the need for organizations to adopt an AI governance program so that employees feel comfortable using approved, sanctioned AI tools rather than hiding their use. Such a program also allows the organization to monitor employees’ use of AI tools and to implement guidelines and guardrails for safe use, reducing risk.