On December 17, 2025, a bipartisan group of 23 Attorneys General, from Arizona, California, Colorado, Connecticut, Delaware, Hawai’i, Illinois, Maine, Maryland, Massachusetts, Minnesota, Nevada, New Jersey, New Mexico, North Carolina, Oregon, Rhode Island, Tennessee, Utah, Vermont, Washington, Wisconsin, and the District of Columbia, sent a comment letter to the Federal Communications Commission (FCC) “opposing the preemption of state laws on artificial intelligence.” The letter responds to the FCC’s notice of inquiry, published in September, exploring whether the agency should use its regulatory powers to preempt state AI laws.

The letter argues that the FCC lacks authority to preempt state law and that such preemption would harm state interests. The letter comes on the heels of Executive Order 14365 (EO), signed by President Trump on December 11, 2025, which requires the Secretary of Commerce, within 90 days, to “publish an evaluation of existing State AI laws that identifies onerous laws that conflict” with the administration’s policy “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” The EO further requires the Secretary of Commerce to issue a Policy Notice providing “that States with onerous AI laws… are ineligible for non-deployment funds to the maximum extent allowed by Federal law.”

In opposing a proposed preemption of state AI laws, California Attorney General Rob Bonta said the individual states

are on the front lines of consumer protection, including when it comes to emerging technology…. Like any emerging technology, there are risks to adoption without responsible, appropriate, and thoughtful oversight. States have played a leading role in developing strong privacy and technology protections to address a wide range of harms associated with AI and automated decision-making. State authorities are often the first to receive consumer complaints and identify problematic practices and have the proximity and agility to identify emerging threats and implement innovative solutions.

The bipartisan letter to the FCC follows another effort by a bipartisan coalition of 36 state attorneys general, who sent a letter to Congress in November opposing a proposed provision in the National Defense Authorization Act that would have preempted state AI laws. That provision was ultimately not included in the final law. Additionally, in May 2025, some in Congress proposed a 10-year ban on states’ ability to enact laws related to the use of AI; that effort also failed.

Ultimately, a battle is brewing between the federal government and state legislatures over AI regulation. It is clear that the Trump administration seeks minimal regulation, despite the known risks, while state Attorneys General, charged with protecting consumers, feel very differently. I suspect we will see how it plays out in court.

Artificial intelligence has dramatically broadened the capabilities of anyone looking to reverse-engineer public-facing products. What once took specialized skill, deep pockets, and many hours now requires little more than a curious mind and a powerful AI model. For companies built around valuable and confidential know-how, this shift has profound implications, especially for in-house counsel tasked with safeguarding trade secrets.

How AI Is Changing Reverse Engineering

Reverse engineering is the process of using publicly available information, like software code or a publicly available user interface, to discover nonpublic information about a product or process. Traditionally, this was a slow, expert-driven endeavor that required a significant amount of information. But with advances in AI, including code analysis tools, language models, and automated data scrapers, reverse engineering can be carried out with significantly less information and at a scale and speed previously unimaginable.

Machine learning and predictive modeling allow AI to uncover hidden information: to piece together proprietary logic from software outputs, reconstruct algorithms from behavioral patterns, and even deduce “secret sauce” ingredients that were once thought irretrievable. No company operating in the digital world, whether SaaS, traditional tech, or even non-tech with proprietary digital processes, is immune.
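To make the risk concrete, here is a minimal, hypothetical sketch of one such technique, sometimes called “model extraction”: an outsider queries a public-facing scoring function at scale and fits a surrogate model to the observed input/output pairs. Everything here, including the `proprietary_score` stand-in and the choice of a decision tree, is illustrative only and describes no real product or incident.

```python
# Hypothetical sketch of "model extraction": approximating a proprietary
# scoring function purely from its public inputs and outputs.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def proprietary_score(x: np.ndarray) -> np.ndarray:
    # Stand-in for the black box (the "secret sauce" the outsider never sees).
    # In a real scenario the queries would go to a live API over the network.
    return 0.7 * x[:, 0] - 0.2 * x[:, 1] ** 2 + 5.0

rng = np.random.default_rng(0)
queries = rng.uniform(-10, 10, size=(5000, 2))   # automated probing at scale
responses = proprietary_score(queries)           # observed public outputs

# Fit a surrogate that mimics the proprietary logic without ever seeing it.
surrogate = DecisionTreeRegressor(max_depth=8).fit(queries, responses)

test = rng.uniform(-10, 10, size=(100, 2))
error = np.abs(surrogate.predict(test) - proprietary_score(test)).mean()
print(f"mean absolute error of the reconstructed model: {error:.3f}")
```

The point is not the specific model class; it is that enough query volume can turn outputs alone into a workable approximation of the underlying logic.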

Trade Secret Law: What Counts as “Improper Means”?

In the US, trade secrets are protected under the Uniform Trade Secrets Act (UTSA) and the Defend Trade Secrets Act (DTSA). Both statutes define a trade secret as information that derives economic value from not being generally known or “readily ascertainable” and that is subject to reasonable efforts to keep it confidential.

Crucially, these laws focus on misappropriation: wrongful acquisition or use, typically through “improper means.” However, both the UTSA and the DTSA have always made a major exception: information obtained by reverse engineering a publicly available product is not considered to have been acquired through “improper means,” and therefore is not misappropriated.

Yet the AI era is unsettling what counts as “proper” and “improper.” Is deploying bots to scrape massive amounts of data “proper”? Is coaxing unexpected outputs from a generative AI model by way of “prompt injection” fair play, or does it cross the line into cyberattack territory? Recent legal cases signal that courts are struggling with these questions.

Recent Cases: The Law Grapples with AI

In a 2024 case, a company claimed competitors used “prompt injection” (manipulating generative AI with crafted inputs) to elicit sensitive outputs, allegedly extracting valuable trade secrets. Adding to the intrigue, the attackers used false credentials and impersonation—raising the specter of “improper means.”

The Eleventh Circuit recently ruled that even when data is accessible to the public, how it’s accessed matters. Automated scraping of millions of insurance quotes, carried out with bots, was deemed “improper means,” casting doubt on companies relying purely on “technical public availability” as a shield.

Underlying both decisions is a key trend: As AI makes it easier for outsiders to reconstruct proprietary information, courts are increasingly interrogating what makes a method of acquisition truly “improper.”

Heightened Risk: When “Readily Ascertainable” Is Redefined

A second risk looms: As AI tools become more adept at deducing secrets from public clues, courts may decide that information is, in fact, “readily ascertainable.” That could mean what was once securely covered by trade secret law might lose protected status, not because of a security lapse, but because technology makes it easier for competitors to deduce nonpublic confidential information from the outside.

Protecting Trade Secrets in an AI-Powered World

For in-house counsel and business leaders, the message is clear: the old playbook is no longer enough. Here are practical steps for safeguarding confidential information in the current environment:

  • Reinforce Technical Barriers
    • Implement rate limiting, CAPTCHA challenges, and advanced bot-detection tools, especially on SaaS platforms (a minimal rate-limiting sketch follows this list).
    • Apply AI-powered monitoring for unusual patterns that might signal scraping or prompt injection attempts.
  • Update Legal Protections
    • Revise terms of service to expressly prohibit automated access, scraping, reverse engineering, and prompt injection. Make these clauses visible and actively enforce them.
    • Incorporate explicit provisions regarding AI-specific attack vectors in your contracts and NDAs.
    • Document all incidents and responses to show your “reasonable measures” in the event of future litigation.
  • Revisit Traditional Best Practices
    • Limit access to core confidential information for each trade secret to those on a genuine “need to know” basis, and monitor usage carefully.
    • Maintain a robust trade secret management program, including labeling sensitive documents and regularly auditing access controls.
    • Review employment and third-party agreements to confirm they’re up to date for the realities of AI-era risks.
  • Stay Vigilant and Evolve
    • Conduct periodic reviews of your confidentiality policies. If your framework predates the AI surge of 2023, it’s time for an update.
    • Educate internal teams on the new forms of reverse engineering and AI-enabled threats, including social engineering.
    • Consider documenting how your company specifically addresses the risk of AI-assisted reverse engineering.
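As one illustration of the rate-limiting bullet above, here is a minimal sketch, assuming a Python backend that can key requests by client IP, of a token-bucket limiter. The thresholds and the `handle_request` hook are hypothetical; production deployments more commonly rely on an API gateway, WAF, or bot-management service.

```python
# Minimal token-bucket rate limiter, keyed per client, as one building
# block against automated scraping. All thresholds are illustrative.
import time
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Bucket:
    capacity: float = 10.0    # burst allowance
    refill_rate: float = 1.0  # tokens per second (steady-state request rate)
    tokens: float = 10.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, Bucket] = defaultdict(Bucket)

def handle_request(client_ip: str) -> int:
    # Deny, and log, clients that exceed the per-IP budget; this log feed
    # is what AI-powered monitoring would watch for scraping patterns.
    if not buckets[client_ip].allow():
        print(f"rate limit exceeded for {client_ip}")
        return 429  # Too Many Requests
    return 200
```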

Trade secret protection has always demanded vigilance, but the rise of AI has upped the stakes. The law continues to evolve as judges confront novel scenarios, but companies cannot afford to wait for clear answers. By implementing a multi-layered approach, combining legal vigilance, technical defenses, and up-to-date policies, businesses can better safeguard their intellectual assets and navigate the uncertain legal terrain of the AI age.

Have your trade secret protocols kept pace with the march of AI? Now is the time to review, revise, and reinforce.

An analysis by the Federal Trade Commission (FTC) shows that, since 2020, consumers have been swindled out of $65 million by rental scams. This statistic is particularly relevant during the holiday season when many people are traveling and renting places to stay.

According to the FTC, most of the scams involve fake rental listings on Facebook or Craigslist. The listings look real and copy information from legitimate listings on sites like Airbnb and VRBO. Interestingly, “people ages 18 to 29 were three times more likely than other adults to report losing money to a rental scam.”

The scammers are able to swindle the victims by:

  • pressuring consumers to provide money upfront before seeing the rental property in person;
  • pushing consumers to prove they are creditworthy by sending screenshots of their credit scores. The scammers send consumers affiliate links to websites where they can sign up for a credit check at little cost, but doing so may enroll the consumer in a paid membership with recurring fees; and
  • collecting personal information from consumers such as their Social Security number, driver’s license or paystubs to steal their identity.

Tips to avoid being scammed include:

  • search for the rental address online to see if the same property is listed with different prices, contact information, or is listed as being for sale;
  • avoid sharing personal information, particularly Social Security number, passport number or driver’s license number;
  • avoid sharing banking information that allows direct access to your bank account;
  • avoid providing financial information until you have agreed to rent the property, and pay by credit card;
  • avoid paying the full amount for the rental up front; and
  • check out typical rents paid in the area. If the advertised rent of a listing is much cheaper than rents for similar rentals in the same area, that is a red flag for a scam.

Safe travels over the holidays and stay vigilant to avoid a rental scam.

700Credit, a Michigan-based company that runs credit checks and identity verification services for automobile dealerships nationwide, has announced that an “integrated partner” was compromised, allowing a bad actor to gain unauthorized access to information about individuals whose credit the company checked. The incident was discovered on October 25, 2025.

Michigan officials confirmed that the data breach affects the names, addresses, birth dates, and Social Security numbers of 5.6 million people whose credit was checked when they financed a vehicle purchase. 700Credit is notifying affected individuals by mail and offering credit monitoring services.

Overview of Commonwealth v. Kurtz

On December 16, 2025, the Pennsylvania Supreme Court held that individuals do not have a reasonable expectation of privacy in general, unprotected Google search records. Commonwealth v. Kurtz, No. 98 MAP 2023 (Pa. Dec. 16, 2025). In this criminal case, law enforcement obtained a so-called “reverse keyword search warrant” from Google for records of searches of a victim’s name and address made during the week prior to an alleged assault. A reverse keyword search warrant allows law enforcement to ask a technology provider like Google to identify all users who searched for specific terms or phrases during a defined time frame. In this case, the resulting data tied a particular search to the defendant’s IP address. The majority held that entering a query into Google “voluntarily turns over [the contents of the search] to third parties,” negating any constitutionally recognized privacy interest in those search records.

While law enforcement in this case did obtain a warrant, the bulk of the decision focuses on whether any privacy protection would apply even without one. This holding may have broad implications well beyond the criminal context.

What the Majority Decision Means

The Kurtz majority held that when users enter search terms into Google or similar search engines without using additional privacy protections, Pennsylvania law treats those search queries as information that users knowingly and voluntarily share with a third party (the search provider). Because of this voluntary exposure, the Court found that individuals do not have a reasonable expectation of privacy in that search data, meaning law enforcement can generally access it provided there is some form of legal process or enforceable request, though not necessarily a warrant.

The majority decision is grounded in the fact that Google’s privacy notice and online disclosures make it explicit that data from general searches will be collected and can be provided to law enforcement when “reasonably necessary to… meet any applicable law, regulation, legal process or enforceable request.” This “express warning” from Google to its users was sufficient for the majority to find that there is no recognized privacy right in such search term data.

Notably, the ruling does not mean that law enforcement can access search data without any process whatsoever. Rather, that process does not have to clear the same privacy bar or warrant standard as communications or data for which users do maintain a recognized privacy interest, including password-protected accounts or encrypted searches. In fact, the Court explicitly carved out potential expectations of privacy for users who take affirmative steps to shield searches, such as by using password-protected accounts, a VPN, or private browsing tools, stating that “this case is limited to general, unprotected internet use.”

The Dissent’s Perspective: Privacy in Search Is Fundamental

Not all of the justices agreed with the majority opinion. The dissenting opinion called its reasoning “divorced from reality and blind to the societal benefits flowing from ready access to infinite amounts of information available [through the internet],” emphasizing that search engines are now vital for daily life. It asserted that conglomerated internet search history “provides a virtual current biography of the user,” containing information about health, beliefs, and intent, and thus deserves robust privacy protection, paralleling banking and phone records under Pennsylvania’s constitution and statutes. The dissent also pointed to state law, including the Pennsylvania Wiretap Act, that generally recognizes a warrant requirement for access to stored electronic communications.

What Businesses Should Know

Although this decision may allow broader law enforcement access to ordinary internet search data in Pennsylvania, other jurisdictions—notably California—treat search data as subject to far stronger privacy rights. California’s Invasion of Privacy Act (CIPA) and California Consumer Privacy Act (CCPA) regulations place strict requirements on data sharing, including with government entities, and require businesses to provide clear transparency, opt-outs, and data minimization.

Businesses often operate across multiple states, so they should be aware of differing state approaches to what constitutes a reasonable expectation of privacy. In California, for example, both CIPA and the CCPA treat internet search data as sensitive. Though courts are split in the CIPA context on whether internet search history is subject to an expectation of privacy, at least some have found that “users [have a reasonable expectation of privacy] over URLs that disclose… unique search terms.” See, e.g., Brown v. Google LLC, No. 4:20-cv-3664-YGR, 2023 WL 5029899, at *20 (N.D. Cal. Aug. 7, 2023). Additionally, under the CCPA, personal information is any information that identifies or is reasonably capable of being associated with a consumer, and the term explicitly includes internet browsing and search history. As a result, businesses operating in California should publish clear privacy disclosures and restrict law enforcement access absent valid legal process. Given these differing approaches, companies that do business nationally must carefully review and follow all applicable state laws.

The Kurtz decision leans on the presumption that users are aware of, and agree to, broad data collection and sharing practices through privacy policies. Considering this underlying reasoning, businesses should give explicit, accurate notice about search data collection, disclosure, and potential law enforcement access in their privacy notices. Privacy notices should be kept current through annual reviews, and businesses should notify users of material changes to any privacy practices. Pennsylvania does not currently have in place a comprehensive state consumer privacy law, but 20 states do. In many of these states, companies may not collect, use, or share more personal information than is necessary for the disclosed business purpose. In turn, businesses should also maintain documentation of justifications for data collection and retention, including data from search queries.

If your business offers search or browsing tools, you might consider enabling or encouraging privacy-protective features, such as incognito modes, VPN integration, and options to delete search history. Even where courts might find no legal privacy interest, regulators and consumers may still expect companies to make privacy features prominent.

The Kurtz decision is yet another reminder that the patchwork of U.S. privacy law is pulling in different directions. Still, consumer and regulatory expectations continue to rise. Future challenges to this holding, both in court and at the ballot box, are likely. Monitoring state and federal trends, updating policies and training, and centering privacy in design and operations should remain top priorities for every company processing user search data.

A recent federal court decision, Adam v. CaringBridge, Inc., No. 25-cv-06042-WHO, 2025 WL 3493565 (N.D. Cal. Dec. 5, 2025), offers a cautionary tale for plaintiffs in privacy class actions, and a strategic playbook for defendants. Even where a case is properly filed in California (the home turf for many privacy statutes and plaintiffs), a well-drafted forum selection clause buried in a website’s terms of use can upend venue and send a lawsuit halfway across the country.

CaringBridge operates an online platform allowing caregivers and individuals to document and share health journeys. The site requires users to create an account and agree to its terms of use and privacy policy, a standard clickwrap approach. However, to sign up, users must disclose their health condition via a drop-down menu with highly sensitive options (e.g., “Brain Cancer,” “HIV/AIDS,” “Substance Use Disorder”).

The California plaintiff claimed that when she entered this data, CaringBridge used real-time tools like Google Analytics and Meta Pixel to intercept and transmit her health information to third parties for advertising. She filed a putative class action in the Northern District of California, alleging violations of the California Invasion of Privacy Act (CIPA), the California Constitution, and the federal Electronic Communications Privacy Act.

CaringBridge responded by seeking transfer or, in the alternative, dismissal, based on a clause in its terms of use requiring all litigation to be filed in Minnesota courts (specifically, Hennepin County). The company invoked the forum selection clause as grounds for transfer under 28 U.S.C. § 1404(a).

District Court Judge Orrick first confirmed a distinction critical for any practitioner:

  • Forum selection clauses are not relevant to determining if venue is proper under § 1391 or subject to dismissal under § 1406(a).
  • Such clauses matter only for transfer under § 1404(a).

The plaintiff pointed to Popa v. Harriet Carter Gifts, Inc., 52 F.4th 121 (3d Cir. 2022), which held that online interception of communications occurs “at the plaintiff’s browser,” not the defendant’s server. Judge Orrick agreed: for venue purposes, the alleged interception happened in California, making venue proper under 28 U.S.C. § 1391(a)(2). But victory on venue proved short-lived.

The moment the analysis shifted to 28 U.S.C. § 1404(a), the forum selection clause became dispositive:

  • “[A]ll factors relating to the private interests of the parties … [are] entirely in favor of the preselected forum.” (quoting Sun v. Advanced China Healthcare, Inc., 185 F. Supp. 3d 1155, 1169 (N.D. Cal. 2016))
  • The party seeking to defeat transfer has a heavy burden; only “unusual cases” will suffice.

The plaintiff creatively argued that enforcing the forum selection clause would thwart California’s fundamental privacy policy, as Minnesota law lacks a CIPA equivalent. She relied on In re Facebook Biometric Information Privacy Litigation, where a similar clash led the court to decline to enforce a choice-of-law provision. But as Judge Orrick explained, that case addressed what law would apply after transfer. Forum selection and choice-of-law provisions are separate: the proper place to fight about CIPA’s protections is Minnesota, not California.

Further, the Court rejected the argument that the terms of use didn’t govern because some less sensitive data was collected pre-agreement; the core conduct (the sensitive health disclosures) occurred only after consent.

The traditional deference given to a plaintiff’s forum choice was offset by the following:

  • Class actions: The named plaintiff’s preference gets less weight.
  • Forum selection clause: Plaintiff agreed to it.
  • Convenience: CaringBridge’s witnesses and most evidence are in Minnesota, making it the more efficient forum.

Key Takeaways

  • Forum selection clauses matter immensely: Even in privacy cases with strong venue arguments, courts will enforce well-drafted clauses, shifting litigation to the defendant’s chosen forum.
  • CIPA policy questions remain alive, but in the transferee court. The argument that California public policy requires application of its privacy statutes may yet be made but must wait for choice-of-law litigation in Minnesota.
  • The Popa ruling on browser-based interception is good law (for venue), but it doesn’t trump a forum selection clause at the transfer stage.

This case illustrates the enduring power of forum selection clauses in online terms of use, especially for defendants facing CIPA and digital privacy lawsuits. Plaintiffs’ counsel should scrutinize such clauses at the outset and weigh the uphill battle of nullifying them. Defendants should ensure such clauses are not only present but conspicuous and enforced. Stay tuned as litigation proceeds in Minnesota; a future fight over whether California’s privacy laws will apply on their merits is nearly guaranteed.

In an excellent blog post, “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” ISACA’s Mary Carmichael summarizes lessons learned from 2025’s top incidents using MIT’s AI Incident Database and its risk domains. According to Carmichael, an analysis of the incidents showed recurring patterns across risk domains, including privacy, security, reliability, and human impact, and most of the problems were predictable and avoidable.

Carmichael notes that her blog post “reviews where those patterns appeared and what needs to change in 2026 so organizations can use AI with greater confidence and control.”

Consider reading the article, but in a nutshell, her lessons are:

  1. Treat AI systems like core infrastructure—enforce MFA, unique administrative accounts, privileged access reviews, and security testing, particularly where personal information is included.
  2. To combat discrimination and toxicity, facial recognition technology can be used to support investigations but should not be “the deciding evidence.” Require corroborating evidence, publish error rates by race and other characteristics, and log every use.
  3. Deepfakes are on the rise: “Organizations should monitor for misuse of their brands and leaders. This includes playbooks for rapid takedowns with platforms and training employees and the public to ‘pause and verify’ through secondary channels before responding.”
  4. Attackers are using AI models for cyber-espionage. “Assume attackers have an AI copilot. Treat coding and agent-style models as high-risk identities, with least-privilege access, rate limits, logging, monitoring, and guardrails. Any AI that can run code should be governed like a powerful engineer account, not a harmless chatbot.” (A sketch of such guardrails follows this list.)
  5. Chatbots and AI companion apps have engaged in harmful conversations. Build AI products with safety-by-design: “clinical input, escalation paths, age-appropriate controls, strong limits and routes to human help. If it cannot support these safeguards, it should not be marketed as an emotional support tool for young people.”
  6. AI providers are alleged to be adding air pollution, noise, and industrial traffic to neighborhoods. Due diligence information, including “energy mix, emissions and water use,” should be collected “so AI procurement aligns with climate and sustainability goals.”
  7. AI tools are confident, but often incorrect. Hallucinations are frequent and pose safety risks. “Design every high-impact AI system with the assumption it will sometimes be confidently wrong. Build governance around that assumption with logging, version control, validation checks and clear escalation so an accountable human can catch and override outputs.”
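As a rough illustration of lesson 4, here is a hypothetical Python gatekeeper that treats an agent’s code-execution tool as a high-risk identity: a deny-by-default allowlist, a rate limit, and an audit log. The command names and limits are assumptions made for the sketch, not anything from Carmichael’s post.

```python
# Hypothetical guardrails around an AI agent's code-execution tool:
# least-privilege allowlist, rate limiting, and audit logging.
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

ALLOWED_COMMANDS = {"ls", "cat", "grep"}  # least privilege: deny by default
MAX_CALLS_PER_MINUTE = 5                  # illustrative rate limit

_call_times: list[float] = []

def run_agent_command(agent_id: str, command: str, args: list[str]) -> str:
    """Gatekeeper the agent must pass through instead of a raw shell."""
    now = time.monotonic()
    _call_times[:] = [t for t in _call_times if now - t < 60.0]
    if len(_call_times) >= MAX_CALLS_PER_MINUTE:
        audit.warning("rate limit hit: agent=%s cmd=%s", agent_id, command)
        raise PermissionError("rate limit exceeded")
    if command not in ALLOWED_COMMANDS:
        audit.warning("blocked: agent=%s cmd=%s", agent_id, command)
        raise PermissionError(f"command {command!r} is not allowlisted")
    _call_times.append(now)
    audit.info("allowed: agent=%s cmd=%s args=%s", agent_id, command, args)
    # Actual execution (e.g., a subprocess in a sandbox) would sit behind
    # further isolation; omitted to keep the sketch self-contained.
    return f"executed {command}"
```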

Carmichael outlines strategic goals to consider in 2026 to leverage the lessons learned in 2025. Her final thought, near and dear to my heart, is that having an AI governance program will give organizations a competitive advantage in 2026. “Organizations that maintain visibility, clear ownership and rapid intervention will reduce harm and earn trust. With the right oversight, AI can create value without compromising safety, trust or integrity.” I couldn’t have said it better. If you have not yet developed and established an AI governance program, Q1 2026 is a perfect time to get started.

On December 17, 2025, the Federal Trade Commission (FTC) issued a press release announcing that it is taking action against Illusory Systems, Inc. “for failing to implement adequate data security measures, leading to a major security breach in which hackers stole $186 million from consumers.”

In its complaint, the FTC alleged that Illusory, doing business as Nomad, “designed, operated, and advertised a service that allows users to transfer messages and assets, a type of platform commonly known as a ‘cross-chain bridge.’” A cross-chain bridge, also known as a crypto or blockchain bridge, enables owners of digital assets to transfer tokens from one blockchain network to another. Trusted bridges are operated by a centralized authority, while trustless bridges are decentralized and rely on smart contracts and validators.
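For readers unfamiliar with the mechanics, here is a toy, in-memory sketch of the “lock-and-mint” pattern that many trustless bridges follow: assets locked on one chain back wrapped tokens minted on another, gated by validator attestations. It is illustrative only and bears no relation to Nomad’s actual contracts.

```python
# Toy "lock-and-mint" bridge: tokens locked on chain A back wrapped tokens
# minted on chain B. Real bridges implement this in smart contracts with
# validator/relayer machinery; this sketch is purely illustrative.
class ToyBridge:
    def __init__(self, validators: set[str], quorum: int):
        self.locked_on_a: dict[str, int] = {}  # user -> locked balance
        self.minted_on_b: dict[str, int] = {}  # user -> wrapped balance
        self.validators = validators
        self.quorum = quorum

    def lock(self, user: str, amount: int) -> dict:
        # Step 1: user deposits tokens into the bridge contract on chain A.
        self.locked_on_a[user] = self.locked_on_a.get(user, 0) + amount
        return {"user": user, "amount": amount}  # event validators observe

    def mint(self, event: dict, approvals: set[str]) -> None:
        # Step 2 (security-critical): mint on chain B only if enough
        # independent validators attest the lock really happened. Flaws in
        # this verification step are what bridge exploits typically target.
        if len(approvals & self.validators) < self.quorum:
            raise PermissionError("insufficient validator approvals")
        user, amount = event["user"], event["amount"]
        self.minted_on_b[user] = self.minted_on_b.get(user, 0) + amount

bridge = ToyBridge(validators={"v1", "v2", "v3"}, quorum=2)
event = bridge.lock("alice", 100)
bridge.mint(event, approvals={"v1", "v2"})
print(bridge.minted_on_b)  # {'alice': 100}
```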

In this case, Nomad was a trustless bridge and relied on smart contracts. In June 2022, Nomad introduced new code for a smart contract that included a security vulnerability. Threat actors exploited the vulnerability, and “virtually all assets in the bridge—worth approximately $186 million—were transferred out. Nomad users lost more than $100 million.” The complaint alleges that Nomad was warned about inadequate testing of the code but deployed it nonetheless.

The FTC alleges that Nomad’s failure to implement adequate security measures led to the breach. It alleges that Nomad marketed itself as a “security-first” platform but failed to:

  • Use secure coding practices.
  • Implement vulnerability reporting and incident response processes.
  • Adopt widely known security technologies that could have mitigated losses.

The FTC further alleges that after the vulnerability was exploited, Nomad lacked adequate incident response measures, delaying mitigation and amplifying consumer harm.

The proposed order:

  • prohibits Nomad from making false or misleading statements about the security of its products or services;
  • requires Nomad to establish and maintain a documented security program;
  • requires Nomad to undergo biennial independent security audits; and
  • requires Nomad to return any recovered funds and repay approximately $37.5 million to users who remain uncompensated.

The proposed order is open for public comment for the next 30 days. If you are a Nomad user and have not been reimbursed for your cryptocurrency losses, you may be in luck if you are covered by the order’s proposed $37.5 million reimbursement requirement.

Deepfakes continue to be problematic for organizations and individuals. They are hard to detect and hard to respond to when used in an attack against a company.

To respond to this ongoing, and increasingly prevalent, problem, cyber insurer Coalition announced this week that it will expand coverage for “certain incidents where AI and deepfakes lead to reputational harm.” The coverage will expand “to any video, image, or audio content that is created or manipulated through the use of AI by a third party, and that falsely purports to be authentic content depicting any past or present executive or employee, or falsely frames the organization’s products or services.” It will be interesting to see whether other insurance companies offering cyber coverage follow suit. Consider asking your broker about these offerings.