On April 15, 2026, the Department of Justice (DOJ) announced that two U.S. nationals, Kejia Wang and Zhenxing Wang, were sentenced for facilitating a North Korean IT worker scheme that compromised over 80 U.S. identities, with sentences of 108 and 92 months respectively, supervised release, and forfeiture orders.

The scheme involved the defendants operating “laptop farms” and using the stolen identities of more than 80 legitimate U.S. citizens, with co-conspirators posing as remote workers to obtain employment at more than 100 U.S. companies. Once a stolen identity was used to obtain employment, the unsuspecting company would ship a company laptop to the “new employee” at the laptop farm. The operators of the laptop farms would then enable remote access to the devices, allowing North Korean actors to infiltrate the companies’ systems and access sensitive data, including ITAR-controlled data. The scheme netted over $5 million for the North Korean government, which the DOJ describes as a “hostile foreign regime.”

The scheme took place between 2021 and 2024. One of the defendants served as “the U.S.-based manager for the scheme, supervising at least five facilitators in the United States who collectively hosted hundreds of computers of U.S. victim companies at their residences.”

Eight indicted co-conspirators remain at large, with a $5M reward announced for information leading to disruption of DPRK financial mechanisms; previous seizures of domains and accounts occurred in June and October 2025.

KnowBe4 was one of the first companies to alert others to the scheme in its July 23, 2025 blog post, stating:

First of all: No illegal access was gained, and no data was lost, compromised, or exfiltrated on any KnowBe4 systems. This is not a data breach notification, there was none. See it as an organizational learning moment I am sharing with you. If it can happen to us, it can happen to almost anyone. Don’t let it happen to you. 

The blog post is extremely helpful in understanding how the scheme worked and how more than 100 U.S. companies fell victim to it. It also illustrates how sophisticated and devious foreign adversaries are in obtaining money to use against the U.S.

Although these two defendants have been sentenced, the North Korean worker scheme continues to be operated by others and is still a threat. As recently as March 6, 2026, Microsoft Threat Intelligence sent a warning that the operatives are now using AI to shorten the time it takes them to create fake identities to start the scheme. Companies should continue to be on the alert for remote worker fraud schemes and implement policies and procedures to prevent becoming victimized.

California’s new Delete Request and Opt-Out Platform (DROP) goes live on August 1, 2026, and the compliance stakes are enormous. State officials have warned that a single missed deletion cycle could create theoretical penalty exposure of $1.5 billion for one data broker. That number reflects how aggressively the Delete Act is designed to work. One consumer request can now cascade across every registered data broker in the state, turning deletion compliance into a centralized, high-volume, enforcement-ready system.

The bigger surprise for many companies is not the platform itself—it is who may be covered. California is signaling that “data broker” should be read broadly, and the analysis turns on the data, not just the business as a whole. A company can have direct customer relationships and still be a data broker if it sells personal information obtained from third parties. If your business acquires consumer data indirectly and monetizes it, this is not a definition to skim past.

Operationally, DROP is not just a periodic deletion exercise. Registered brokers must access the system at least once every 45 days, pull hashed identifiers, match them against their records, process deletions, and report status before they can access the next batch. Even more important, unmatched identifiers still have to go on a permanent suppression list. That means if you buy relevant third-party data later, you may already be prohibited from selling or sharing it. Compliance is ongoing, and it reaches future data ingestion as much as current inventories.
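The match-and-suppress cycle described above can be sketched in code. This is a hypothetical illustration only: DROP's actual API, hashing specification, and normalization rules are not described in this article, so the SHA-256 hashing, lowercasing, and function names below are assumptions made for demonstration.

```python
import hashlib

def hash_identifier(value: str) -> str:
    # Assumed normalization (trim + lowercase) and SHA-256 hashing;
    # the real DROP specification may differ.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def process_drop_batch(drop_hashes, customer_emails, suppression_list):
    """Illustrative sketch of one DROP cycle.

    Hashed identifiers pulled from the platform are matched against our
    records; matches are queued for deletion, and unmatched identifiers
    go on a permanent suppression list so later-acquired third-party
    data can be screened against them.
    """
    our_hashes = {hash_identifier(e): e for e in customer_emails}
    to_delete = [our_hashes[h] for h in drop_hashes if h in our_hashes]
    for h in drop_hashes:
        if h not in our_hashes:
            suppression_list.add(h)  # unmatched: suppress future sale/sharing
    return to_delete, suppression_list
```

The key design point is the final step: because unmatched hashes persist on the suppression list, any future data ingestion pipeline would need to hash incoming identifiers and check them against that list before the data can be sold or shared.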

Companies should now assess whether they have California data broker obligations, especially where third-party sourced data is involved. They should also be preparing for API integration, workflow design, suppression screening, and internal ownership before the August deadline arrives. California has built the system, consumers are already in the queue, and the window for treating DROP as a future problem is closing fast.

On April 22, 2026, OpenAI released its new Privacy Filter tool, designed to identify and mask sensitive information in text before that text is stored, shared, or used in downstream processing. OpenAI says the tool can detect items such as names, addresses, account numbers, private dates, and other personal data in documents, logs, and datasets before that material moves further through a system.

From a privacy perspective, this is a notable release because many privacy concerns with AI systems arise before any final output is generated. The exposure often happens at the intake stage, when raw documents, customer communications, internal records, or troubleshooting logs are uploaded, indexed, retained, or sent to another service without enough scrutiny. In that sense, a tool aimed at screening text earlier in the process addresses a real problem.

The tool also appears to do more than simply look for obvious patterns like email addresses, phone numbers, or account numbers. Traditional redaction tools are often limited to spotting information that fits a known format, but personal information is not always that straightforward. Sometimes a sentence may not contain an obvious identifier on its own yet still reveals who a person is when read together with the surrounding text. OpenAI claims that this feature is intended to pick up more of that kind of context.
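To make the contrast concrete, here is a minimal sketch of the traditional, pattern-only approach the paragraph describes. This is not OpenAI's Privacy Filter or any real product's logic; the patterns and labels are illustrative assumptions, and the point is precisely what this approach misses: text that identifies a person only in context, with no formatted identifier to match.

```python
import re

# Illustrative patterns only; real redaction tools cover far more formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace anything matching a known format with a placeholder label.

    A sentence like "the only cardiologist on the night shift last Tuesday"
    passes through untouched, even though it may identify one person when
    read in context -- the gap that context-aware detection aims to close.
    """
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A sketch like this makes the limitation easy to see: the regexes catch anything shaped like an email, phone number, or SSN, but contribute nothing against contextual re-identification.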

However, the tool should be viewed with appropriate caution. OpenAI has acknowledged that Privacy Filter can miss uncommon identifiers or make mistakes. Heightened privacy risks remain, especially in legal, healthcare, financial, and other regulated settings, where the consequences of overcollection or disclosure can be significant. In addition, privacy risk is not limited to obvious identifiers, and even where direct personal data has been masked, context can still allow a person to be identified or sensitive facts to be inferred.

As a general guideline, sensitive, confidential, or regulated information should never be entered into free or consumer-facing AI tools. A filtering tool such as Privacy Filter may reduce some risk, but it does not solve the broader concerns that come with using free models for business, legal, or regulated data. Privacy-centered design is always a positive development, but tools like this one should be evaluated with care and should never be mistaken for a complete solution to the privacy risks that AI systems continue to create.

As corporate legal departments continue adopting AI, the conversation is shifting from experimentation to strategy. According to the Thomson Reuters Institute’s 2026 State of the Corporate Law Department Report, nearly half of legal departments now report department-wide AI adoption, and technology has become a top strategic priority for many general counsel.

That momentum matters, but adoption alone is not the goal. The bigger question is whether legal teams are using AI in ways that support the company’s broader business priorities.

So far, many legal departments have focused on AI’s most immediate benefits, such as faster research, quicker contract review, and more efficient document drafting. Those uses make sense, especially in the early stages of implementation. However, if success is measured only by time saved or internal usage, legal leaders risk missing AI’s larger value. The real opportunity is not just unlocking capacity inside the legal department but deploying that capacity in ways that improve outcomes across the business.

Contract reviews are a strong example. Faster turnaround is helpful, but business leaders care most about whether legal support helps close deals sooner, improve contract win rates, reduce revenue leakage, or avoid costly risk. These are the kinds of metrics that connect AI legal strategy to business performance. The report suggests that this is still an emerging discipline, with fewer than 20% of law departments measuring AI return on investment at all. That leaves plenty of room for legal teams to become more intentional about how they define and track success.

The most effective legal AI strategies will therefore go beyond efficiency alone. They will support better service delivery, stronger operations, smarter growth, and better protection of the business. For GCs, that means partnering more closely with other functions, aligning AI initiatives with company goals, and building metrics that show legal’s impact in terms the business already values. AI may start as a legal technology investment, but its long-term value will be determined by how well it helps the business perform. To view the full report, click here.

On March 11, 2026, the Federal Trade Commission (FTC) announced an Advance Notice of Proposed Rulemaking (ANPRM) highlighting its Rule Concerning the Use of Prenotification Negative Option Plans, seeking comment on whether the rule should be amended or supplemented to better address deceptive or unfair negative option practices.

The FTC describes negative options as marketing arrangements in which a consumer’s silence or failure to act is treated as consent to be charged for goods or services. Negative option marketing includes automatic renewals, continuity programs, free-to-pay conversions, and prenotification plans. Regulators generally focus on several considerations:

  • Are material terms clearly disclosed?
  • Did the seller obtain express informed consent?
  • Is cancellation simple and effective?

Consistent with that focus, the FTC’s March 11th notice seeks input on practices that prevent consumers from understanding key terms, lead to enrollment without express informed consent, or deter cancellation.

The FTC’s enforcement posture in this area has been active for years and is unlikely to soften. The agency cites ongoing concerns with difficult cancellation processes, unlawful retention tactics, and other barriers that keep consumers from switching or ending subscriptions. It also reports receiving thousands of complaints each year, including more than 100,000 complaints over the past five years, which signals that subscription marketing remains a regulatory priority.

As for timing, the FTC stated that once the ANPRM is published in the Federal Register, the public will have 30 days to submit comments. The agency may then proceed through review, a proposed rule, another round of comments, and potentially a final rule.

In the meantime, businesses should expect the FTC and state regulators to continue using existing authorities, including unfair and deceptive practices statutes, to challenge problematic subscription flows. The best approach is to make key terms conspicuous, obtain and retain clear evidence of affirmative consent, and offer cancellation that is straightforward, reliable, and at least as accessible as enrollment. In many cases, regulatory risk turns less on the fact of a subscription and more on whether the overall experience could be viewed as obscuring costs or limiting consumers’ ability to leave.

A federal judge has ruled that CNN must face a proposed class action alleging that its website shared consumers’ personal information with Microsoft and adtech firms without consent, in alleged violation of the California Invasion of Privacy Act (CIPA). The lawsuit challenges CNN’s alleged use of online tracking tools and the downstream sharing of data in the digital advertising ecosystem. 

According to the complaint, CNN allegedly embedded tracking tools from Microsoft, PubMatic, and OpenX that enabled those companies to collect users’ personal information and build detailed marketing profiles for targeted advertising purposes. The complaint further alleges that at least one advertiser bid on the plaintiff’s information, and that it was likely circulated far more broadly during automated real-time bidding for ad space. 

In denying CNN’s motion to dismiss, the judge said that the plaintiff adequately alleged a concrete injury sufficient for federal standing, pointing to allegations that the plaintiff’s information was collected and sold in the online advertising marketplace in a manner described as “highly offensive.” The court also found the pleadings sufficient at this stage to claim that the tracking code functioned as a “pen register” under CIPA, while noting it was premature to resolve CNN’s argument that it was exempt under a CIPA provision related to operating or maintaining its service. This decision signals that publishers using embedded adtech and analytics tools may face heightened litigation risk under CIPA when user data is collected or shared without clear, consent-based disclosures.

I have very fond memories of using a Eurail pass back in the day while backpacking through Europe as a student. I was saddened to see that Eurail was the victim of a data breach in December 2025, when attackers obtained access to travelers’ personal information, including full names, email addresses, passport details, ID numbers, bank account information, and health information, and published it for sale on the dark web.

The incident affected 308,777 travelers. In its notification to affected individuals, Eurail provides information on fraud alerts and credit or security freezes, and urges those affected to stay “alert to suspicious messages or activity” and to obtain a free copy of their credit report.

Whether you receive a notification letter or not, it is always a good idea to check your credit report frequently.

Iran has long been a formidable cyber threat to the United States, but since the war in Iran commenced, attacks have been coming frequently and in full force. According to the Joint Cybersecurity Advisory issued on April 7, 2026, by the FBI, CISA, NSA, EPA, DOE, and Cyber Command, Iranian-based hackers are targeting operational technology devices connected to the internet, including programmable logic controllers (PLCs). The Advisory notes that the PLC disruptions have been seen “across several U.S. critical infrastructure sectors through malicious interactions with the project file and manipulation of data…resulting in operational disruption and financial loss.”

The Advisory states that U.S. organizations “should urgently review the tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs) in this advisory for indications of current or historical activity on their networks, and apply the recommendations listed in the Mitigations section of this advisory to reduce the risk of compromise.”

If your organization is considered critical infrastructure, it is crucial to review the Advisory, including the indicators of compromise and mitigation techniques.