U.S. District Judge Amit P. Mehta sanctioned an attorney who filed a brief containing erroneous citations in every case cited, after the attorney admitted to relying on generative AI to write the brief. The attorney had used the tools Grammarly, ProWritingAid, and Lexis’ cite-checking tool. The attorney was ordered to pay sanctions, including opposing counsel’s invoice for fees and costs. The court found that sanctions were necessary because the attorney had acted “recklessly” and shown “singularly egregious conduct”: the attorney did not verify the citations, and the citations to all nine cases cited were erroneous. The court further noted that the lack of verification raised “serious ethical concerns.”

The attorney’s co-counsel, who indicated they were unaware of the use of generative AI, was not sanctioned, but they admitted that they did not independently check and verify the citations, and they underwent questioning by the court.

The sanctioned attorney self-reported the incident to the Pennsylvania Disciplinary Board and filed a motion to withdraw from the case.

This is a hard lesson to learn, and it is not the first time an attorney has been sanctioned by a court for filing hallucinated citations. The message in all of these cases is that attorneys have an ethical obligation to check every cite before filing a pleading with the court, and that extreme caution should be taken when using generative AI tools in the brief-writing process.

Similarly, Senator Chuck Grassley, Chairman of the U.S. Senate Judiciary Committee, sent letters to two federal judges this week requesting information about their use of generative AI in drafting orders. According to Grassley, orders entered by the judges in July in separate cases were withdrawn after lawyers noted that the orders contained factual inaccuracies and other errors. Grassley observed that lawyers are facing scrutiny over the use of generative AI, and that judges should therefore be held to the same or a higher standard.

The judges have not responded to date.

Courts and their law clerks may wish to consider the same lessons learned by attorneys using generative AI tools. Proceed with caution.

Continuing the weekly blog posts about lawyers using AI and getting in trouble, the Massachusetts Office of Bar Counsel recently issued an article entitled “Two Years of Fake Cases and the Courts are Ratcheting Up the Sanctions,” summarizing the problems encountered by courts when confronted with lawyers citing fake cases, and the subsequent referral to disciplinary counsel.

The article outlines multiple cases of lawyers being sanctioned for filing pleadings containing fake cases after using generative AI tools to draft the pleading. The cases range from lawyers not checking the cites themselves, to supervising lawyers not checking the cites of lawyers they are supervising before filing the pleading.

The article reiterates our professional ethical obligations as officers of the court to always file pleadings that “to the best of the attorney’s knowledge, information and belief, there is a good ground to support it,” that “any lawyer who signs, files, submits, or later advocates for any pleading, motion or other papers is responsible for its content,” and that lawyers are to provide proper supervision to subordinate lawyers and nonlawyers.

The article outlines two recent sanctions imposed upon lawyers in Massachusetts in 2025. The author states, “Massachusetts practitioners would be well-served to read the sanction orders in these matters.” I would suggest that non-Massachusetts practitioners read the article and the sanction orders as well, as the sanctions are similar to those other courts are imposing on lawyers who do not check the content and cites of pleadings before filing them.

Courts are no longer giving lawyers free passes for being unaware of the risks of using generative AI tools to draft pleadings. According to the article, sanctions will continue to be issued, and practitioners and firms need to address the issue head-on.

The article points out several mitigations that lawyers and firms can take to avoid sanctions. My suggestion is that lawyers use caution when using AI to draft pleadings, communicate with any other lawyers involved in drafting the pleadings to determine whether AI is being used (including if you are serving as local counsel), and check and re-check every cite before you file a pleading with a court.

We have previously outlined several cases where lawyers have been sanctioned by courts for citing fake cases generated by artificial intelligence (AI), also known as “hallucinations.”

Now, we don’t even have to keep track of the cases to report on them because we found a nifty new database that keeps track of all of them. Did you know that as of this writing, there have been 156 cases where lawyers cited fake cases generated by AI in court documents?

It is hard to believe that with Rule 11 obligations, any lawyer would file a document with a court without checking the cite. Apparently, it happens more frequently than one would think. Many lawyers have already been sanctioned by courts to send the message that citing fake cases generated by AI is a waste of the court’s time, as well as a waste of the time and resources of opposing counsel and parties.

Kudos to Damien Charlotin, who has created a database to track the growing number of cases in which lawyers have cited AI-generated hallucinated cases. If you want to see how large a problem this is becoming, check it out.

The cases grow, and the sanctions continue to get larger and more punitive. Lawyers need to learn quickly that they must follow their ethical obligations and cite actual cases, with citations checked and Shepardized with human oversight, before filing a pleading. It is truly shocking that lawyers have failed to do so in 156 instances thus far.

In the ongoing saga of lawyers sanctioned for AI-generated hallucinated citations in pleadings, FIFA (and other defendants) in an antitrust lawsuit filed by the Puerto Rico Soccer League recently obtained an order from Chief U.S. District Judge Raul M. Arias-Marxuach requiring counsel for the plaintiff, the defunct league, to pay FIFA and the other defendants $24,000 in attorney’s fees and costs “for filing briefs that appeared to contain errors hallucinated by artificial intelligence.” Puerto Rico Soccer League NFP, Corp. v. Federacion Puertoriquena de Futbol, No. 23-1203 (D.P.R. Sept. 23, 2025).

The judge noted that the motions filed by the Puerto Rico Soccer League included at least 55 erroneous citations, “requiring hours of work on the court’s end to check the accuracy of each citation.” Plaintiffs’ counsel denied using generative AI, but the judge questioned this assertion in light of “the sheer number of inaccurate or nonexistent citations.” The judge found that the citations violated Rule 11 of the Federal Rules of Civil Procedure and applicable ethical rules.

The ordered sanctions are another reminder to lawyers to check and recheck all cases cited in any pleading filed to comply with Rule 11.

This week we are pleased to have a guest post by Robinson+Cole Business Transaction Group lawyer Tiange (Tim) Chen.

On February 28, 2024, the Justice Department published an Advance Notice of Proposed Rulemaking (ANPRM) to seek public comments on the establishment of a new regulatory regime that would restrict U.S. persons from transferring bulk sensitive personal data and select U.S. government data to covered foreign persons.

The ANPRM was published in response to a new White House Executive Order (EO), issued pursuant to the International Emergency Economic Powers Act (IEEPA), which requires the Justice Department to propose administrative regulations within six months to respond to potential national security threats arising from cross-border personal and government data transfers.

Covered Data Transactions

Under the ANPRM, the Justice Department may restrict U.S. persons from engaging in a “covered data transaction,” which is:

  • (a) a “transaction”: acquisition, holding, use, transfer, transportation, exportation of, or dealing in any property in which a foreign country or national thereof has an interest;
  • (b) that involves (1) bulk U.S. sensitive personal data; or (2) government-related data; and
  • (c) that involves (1) data brokerage, (2) a vendor agreement, (3) an employment agreement, or (4) an investment agreement.

Bulk Sensitive Personal Data. According to the ANPRM, the term “sensitive personal data” includes:

(1) specifically listed categories and combinations of covered personal identifiers (not all personally identifiable information), (2) precise geolocation data, (3) biometric identifiers, (4) human genomic data, (5) personal health data, and (6) personal financial data.

Only transactions exceeding certain “bulk” thresholds, measured by the number of U.S. persons or U.S. devices involved, will be subject to the relevant restrictions.

Government-related Data. According to the ANPRM, the term means (1) any precise geolocation data, regardless of volume, for any geofenced location within an enumerated list, and (2) any sensitive personal data, regardless of volume, that links to current or former U.S. government, military or Intelligence Community employees, contractors, or senior officials.

Prohibited, Restricted, and Exempted Transactions

The EO and ANPRM propose a three-tier approach to differentiate the types of restrictions subject to the proposed rules.

Prohibited Transactions. The ANPRM generally prohibits a U.S. person from knowingly engaging in a “covered data transaction” with a country of concern or covered person.

Restricted Transactions. The ANPRM provides that for U.S. persons involved in “covered data transactions” relating to a vendor, employment or investment agreement, such transactions may be permissible if adequate security measures are taken consistent with relevant rules to be promulgated by the Cybersecurity and Infrastructure Security Agency of the Department of Homeland Security.

Exempted Transactions. The ANPRM proposes to exempt certain types of transactions, including: (1) data transactions involving personal communication, information or information materials carved out by IEEPA, (2) transactions for official government business, (3) financial services, payment processing or regulatory compliance related transactions, (4) intra-entity transactions incident to business operations, and (5) transactions required or authorized by federal law or international agreements.

Licensing Regime. The EO authorizes the Justice Department to grant specific licenses (for entity- or person-specific transactions) and general licenses (covering broad classes of transactions) for U.S. persons to engage in prohibited and restricted transactions. The Justice Department is considering establishing a licensing regime modeled on the economic sanctions licensing regime managed by the Treasury Department’s Office of Foreign Assets Control.

Countries of Concern and Covered Persons

Countries of Concern. The ANPRM proposes to identify China (including Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela as the countries of concern.

Covered Persons. The ANPRM proposes to define the “covered persons” as (1) an entity owned by, controlled by, or subject to the jurisdiction or direction of a country of concern, (2) a foreign person who is an employee or contractor of such an entity, (3) a foreign person who is an employee or contractor of a country of concern, and (4) a foreign person who is primarily resident in the territorial jurisdiction of a country of concern. The Justice Department may also designate specific persons and entities as “covered persons.”

Implementation

The regime will become effective only upon the publication of final administrative rules, and the scope of the final rules may differ significantly from the proposals published in the ANPRM. In addition, the EO affords significant discretion to the Justice Department and other agencies to issue interpretive guidance and enforcement guidelines that further clarify and refine the process and mechanisms for complying with the final rules, including potential due diligence, recordkeeping, or voluntary reporting requirements.

It’s hard to keep up with all of the legal challenges related to artificial intelligence tools (AI), but here are a couple of noteworthy ones that have surfaced in the past few weeks, in case you haven’t seen them.

Two New York lawyers are facing possible sanctions for using ChatGPT to assist with a brief that included citations to non-existent cases. This is a perfect example of how the use of ChatGPT can go wrong and “hallucinate,” and why human review and confirmation are critical to its use. Nothing like citing non-existent cases to get a judge really mad.

Another interesting development: Georgia radio host Mark Walters has filed a defamation suit against OpenAI LLC, the developer of ChatGPT, alleging that a legal summary generated by ChatGPT connecting him to a lawsuit filed in the State of Washington over accusations of embezzlement from a gun rights group is false and a hallucination. Walters states that he has never been accused of embezzlement and never worked for the gun rights group.

It is being reported that this is “the first defamation suit against the creator of a generative AI tool.”

The legal challenges with AI are vast and varied, and we will try to keep our readers informed on the myriad relevant issues as they arise.

A recent report by the Association of Corporate Counsel, based on a survey of chief legal officers, confirms what we’ve been saying for a while: expectations of increased regulatory enforcement, along with privacy and cybersecurity concerns, are driving organizations to dedicate more effort to compliance. In fact, 64 percent of those surveyed responded that they expect regulatory enforcement to increase in the next year, and that expectation is driving compliance efforts for these corporate leaders. According to the report, the “focus on privacy regulations across multiple countries and jurisdictions (China, European Union, United States, including California) is forcing companies to step up its compliance efforts.”

How are these trends affecting companies and what are their lawyers doing to meet these compliance challenges, defend against litigation, prevent cyberattacks, and protect against fines and sanctions? According to the report, two thirds of those surveyed plan on “establishing new process[es] to increase defensibility, and over a half also intend to invest in new technology and consult with third parties to limit exposure to litigation and compliance threats.” As we’ve said many times, it is critical for companies to be informed, prepared, and actively manage privacy and cybersecurity issues as a key strategy to enhance regulatory compliance.