U.S. District Judge Amit P. Mehta sanctioned an attorney who filed a brief containing erroneous citations in every case cited, after the attorney admitted to relying on generative AI to write the brief. The attorney had used the tools Grammarly, ProWriting Aid, and Lexis’ cite-checking tool. The attorney was ordered to pay sanctions, including opposing counsel’s invoice for fees and costs. The court found that sanctions were necessary because the attorney had acted “recklessly” and shown “singularly egregious conduct”: he did not verify the citations, and all nine cases cited were erroneous. The court further noted that the lack of verification raised “serious ethical concerns.”

The attorney’s co-counsel was not sanctioned because they indicated they were unaware of the use of generative AI, though they admitted that they had not independently checked and verified the citations, and they were questioned by the court.

The sanctioned attorney self-reported the incident to the Pennsylvania Disciplinary Board and filed a motion to withdraw from the case.

This is a hard lesson to learn: it is not the first time an attorney has been sanctioned by a court for filing hallucinated citations. The message in all of the cases is that attorneys have an ethical obligation to check every cite before filing a pleading with the court, and extreme caution should be taken when using generative AI tools in the brief writing process.

Similarly, Senator Chuck Grassley, Chairman of the U.S. Senate Judiciary Committee, sent letters to two federal judges this week requesting information about their use of generative AI in drafting orders. According to Grassley, orders entered by the judges in July in separate cases were withdrawn after lawyers noted factual inaccuracies and other errors in them. Grassley observed that lawyers are facing scrutiny over the use of generative AI, and that judges should therefore be held to the same or a higher standard.

The judges have not responded to date.

Courts and their law clerks may wish to consider the same lessons learned by attorneys using generative AI tools. Proceed with caution.

In a strongly worded order, Judge Julie A. Robinson of the U.S. District Court for the District of Kansas publicly admonished and sanctioned four lawyers representing a plaintiff company in a patent infringement case for using ChatGPT to find caselaw to support a response to a motion to exclude an expert witness, and a response to the defendant’s motion for summary judgment.

In the 36-page order, the court made it clear that not only the lawyer who used AI to generate the hallucinated citations, but also his partners and local counsel bore responsibility for the filing of the motion. This is a clear reminder of the non-delegable duty of lawyers under Rule 11 of the Federal Rules of Civil Procedure. The Court held that “[b]ecause there is no dispute that all five . . . attorneys signed both documents that included these errors, and they admit that not one of them verified that the case law in those briefs actually exist and stand for the propositions for which they were cited, their conduct violates Rule 11(b)(2).”

The brief facts are these: a seasoned lawyer admitted pro hac vice before the court prepared the motion using ChatGPT, and he admitted it was his first time doing so. He was under personal stress, admitted he was not thinking straight, and although he meant to check the citations before filing the motion, he never did. His partners, although also admitted pro hac vice, were not responsible for the motion, never read it, and did not participate in its preparation. The associate assigned to the case read the motion and made a few changes but was not assigned to check the citations. Local counsel relied on the pro hac vice counsel and reviewed the motion briefly before filing it, but never checked the citations. The court’s order points out that the response to the motion to exclude “contains a litany of problems: (1) nonexistent quotations; (2) nonexistent and incorrect citations; and (3) misrepresentations about cited authority.” Some of the same issues appeared in the response to the motion for summary judgment.

Here’s what the court had to say about each of the attorneys’ responsibilities and the sanctions it assessed:

  • The most culpable lawyer used ChatGPT and failed to check the citations. Although he was experiencing difficulties in his personal life, he never asked for an extension or for help from the other five lawyers representing the plaintiff in the case. Instead, he filed the motion on time, cut corners by using ChatGPT, and filed a response that included the deficiencies above. Neither his co-counsel nor his client was aware of the generative AI use. He was a “novice” at using it and is only now aware of the risks. Although the court was sympathetic to his personal plight (and he graciously emphasized that he was the only one culpable), the court stated that “citing to a nonexistent case, attributing a nonexistent quotation to an existing case, and misstating the law violates Rule 11(b).” The violation is the failure to verify the cases, not the intent behind the failure. The court treated as an aggravating factor the attorney’s unawareness of the “very real risk of case hallucinations,” coming after several instances of Rule 11 sanctions being levied against lawyers for this same violation. The court directed the attorney to implement a robust policy to “deter any future instance of submitting unverified authority in a filing…[by requiring] him to submit to the Clerk for filing a certificate outlining specific internal procedures at his firm that he intends to impose. . . and imposes a monetary fine of $5,000, . . .and revokes his pro hac vice admission to this Court.” The court further directed the attorney to “self-report to the state disciplinary authorities where he is licensed by providing them with a copy of this Order.”
  • The attorney’s partners, who did not participate in preparing the brief, signed the filing without determining the accuracy of its contents. One of the partners assigned an associate to help and was on a family vacation when it was filed. The court pointed out that merely affixing their names to the brief without reviewing it “violated [their] duty to conduct a reasonable inquiry into the facts and the law before filing.” The court reiterated that Rule 11 is non-delegable and imposed a fine of $3,000 on each of the co-counsel who signed the pleadings.
  • The court did not sanction the associate assigned to the case, as he had no supervisory authority and “was placed in a difficult position by his supervising attorneys.”
  • As for local counsel, he also signed the defective pleadings, and “by doing so, he vouched for the Texas attorneys in this matter.” He failed to cite-check them. The firm set forth its efforts to ensure this doesn’t happen again and provided the court with a formal policy around generative AI use, and the attorney “voluntarily sanctioned himself in the form of refraining from serving as sponsoring or local counsel for pro hac vice attorneys for a period of 12 months.” With the above considerations, the court sanctioned the local counsel $1,000.

The clear takeaway is that firms need to address the fact that lawyers may be tempted to use GenAI even when they have no experience with it, do not understand the consequences, and are dealing with personal issues. Judges have no sympathy when it comes to hallucinations and misrepresentations in briefs: they waste time and resources and are a clear violation of Rule 11. Ignorance is not a defense, and relying on your partners, co-counsel, or local counsel will not get you off the hook, as Rule 11 is non-delegable. Firms may wish to adopt policies and guidance for attorneys on the use of GenAI tools, requiring every lawyer who signs a pleading to be responsible for checking and verifying its cites before affixing their signature. There can be no reliance on others before a pleading is filed.

Continuing the weekly blog posts about lawyers using AI and getting in trouble, the Massachusetts Office of Bar Counsel recently issued an article entitled “Two Years of Fake Cases and the Courts are Ratcheting Up the Sanctions,” summarizing the problems encountered by courts when confronted with lawyers citing fake cases, and the subsequent referral to disciplinary counsel.

The article outlines multiple cases of lawyers being sanctioned for filing pleadings containing fake cases after using generative AI tools to draft the pleading. The cases range from lawyers not checking the cites themselves, to supervising lawyers not checking the cites of lawyers they are supervising before filing the pleading.

The article reiterates our professional ethical obligations as officers of the court to always file pleadings that “to the best of the attorney’s knowledge, information and belief, there is a good ground to support it,” that “any lawyer who signs, files, submits, or later advocates for any pleading, motion or other papers is responsible for its content,” and that lawyers are to provide proper supervision to subordinate lawyers and nonlawyers.

The article outlines two recent sanctions imposed upon lawyers in Massachusetts in 2025. The author states, “Massachusetts practitioners would be well-served to read the sanction orders in these matters.” I would suggest that non-Massachusetts practitioners should read the article and the sanctions imposed as they are similar to what other courts are imposing on lawyers who are not checking the content and cites of the pleadings before filing them.

Courts are no longer giving lawyers free passes for being unaware of the risks of using generative AI tools for drafting pleadings. According to the article, sanctions will continue to be issued, and practitioners and firms need to address the issue head on.

The article points out several mitigations that lawyers and firms can take to avoid sanctions. My suggestion is that lawyers use caution when using AI to draft pleadings, communicate with any other lawyers involved in drafting the pleadings to determine whether AI is being used (including if you are serving as local counsel), and check and re-check every cite before you file a pleading with a court.

We have previously outlined several cases where lawyers have been sanctioned by courts for citing fake cases generated by artificial intelligence (AI), also known as “hallucinations.”

Now, we don’t even have to keep track of the cases to report on them because we found a nifty new database that keeps track of all of them. Did you know that as of this writing, there have been 156 cases where lawyers cited fake cases generated by AI in court documents?

It is hard to believe that, with Rule 11 obligations, any lawyer would file a document with a court without checking the cites. Apparently, it happens more frequently than one would think. Many lawyers have already been sanctioned by courts to send the message that citing fake cases generated by AI is a waste of the court’s time, as well as a waste of the time and resources of opposing counsel and parties.

Kudos to Damien Charlotin, who has created a database to track the growing number of cases in which lawyers have cited AI-generated hallucinated cases. If you want to see how the problem is growing, check it out.

The cases grow, and the sanctions continue to get larger and more punitive. Lawyers need to learn quickly that they must follow their ethical obligations and cite actual cases, with citations checked and Shepardized with human oversight, before filing a pleading. It is truly shocking that lawyers have failed to do so in 156 instances thus far.

In the ongoing saga of lawyers sanctioned for AI-generated hallucinated citations in pleadings, FIFA (and other defendants) in an antitrust lawsuit filed by the Puerto Rico Soccer League recently obtained an order from Chief U.S. District Judge Raul M. Arias-Marxuach requiring counsel for the plaintiff, a now-defunct league, to pay FIFA and the other defendants $24,000 in attorney’s fees and costs “for filing briefs that appeared to contain errors hallucinated by artificial intelligence.” Puerto Rico Soccer League NFP, Corp. v. Federacion Puertoriquena de Futbol, No. 23-1203 (D.P.R. Sept. 23, 2025).

The judge noted that the motions filed by the Puerto Rico Soccer League “included at least 55 erroneous citations ‘requiring hours of work on the court’s end to check the accuracy of each citation.’” Plaintiffs’ counsel denied using generative AI, but the judge questioned that assertion given “the sheer number of inaccurate or nonexistent citations.” The judge noted that the citations violated Rule 11 of the Federal Rules of Civil Procedure and applicable ethical rules.

The ordered sanctions are another reminder to lawyers to check and recheck all cases cited in any pleading filed to comply with Rule 11.

This week we are pleased to have a guest post by Robinson+Cole Business Transaction Group lawyer Tiange (Tim) Chen.

On February 28, 2024, the Justice Department published an Advance Notice of Proposed Rulemaking (ANPRM) to seek public comments on the establishment of a new regulatory regime to restrict U.S. persons from transferring bulk sensitive personal data and select U.S. government data to covered foreign persons.

The ANPRM was published in response to a new White House Executive Order (EO), issued pursuant to the International Emergency Economic Powers Act (IEEPA), which requires the Justice Department to propose administrative regulations within six months to respond to potential national security threats arising from cross-border personal and government data transfers.

Covered Data Transactions

Under the ANPRM, the Justice Department may restrict U.S. persons from engaging in a “covered data transaction,” defined as:

  • (a) a “transaction”: acquisition, holding, use, transfer, transportation, exportation of, or dealing in any property in which a foreign country or national thereof has an interest;
  • (b) that involves (1) bulk U.S. sensitive personal data; or (2) government-related data; and
  • (c) that involves (1) data brokerage, (2) a vendor agreement, (3) an employment agreement, or (4) an investment agreement.
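For readers who think in code, the three conjunctive prongs above can be sketched as a simple check. This is purely an illustration of the test's structure; the field names and data model are my own, not the ANPRM's, and the actual rule turns on far more nuanced definitions:

```python
# Hypothetical sketch of the ANPRM's three-prong "covered data transaction"
# test. All names and the data model are illustrative, not from the rule text.
from dataclasses import dataclass

# Prong (c): the four transaction types named in the ANPRM
COVERED_TRANSACTION_TYPES = {
    "data_brokerage",
    "vendor_agreement",
    "employment_agreement",
    "investment_agreement",
}

@dataclass
class Transaction:
    foreign_interest: bool         # (a) property interest of a foreign country/national
    bulk_sensitive_data: bool      # (b)(1) bulk U.S. sensitive personal data
    government_related_data: bool  # (b)(2) government-related data
    transaction_type: str          # (c) type of agreement or dealing

def is_covered_data_transaction(t: Transaction) -> bool:
    """All three prongs must be satisfied (the test is conjunctive)."""
    prong_a = t.foreign_interest
    prong_b = t.bulk_sensitive_data or t.government_related_data
    prong_c = t.transaction_type in COVERED_TRANSACTION_TYPES
    return prong_a and prong_b and prong_c
```

The key structural point the sketch captures is that the prongs are conjunctive: a transaction involving bulk sensitive data but no foreign interest, for example, is not covered.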

Bulk Sensitive Personal Data. According to the ANPRM, the term “sensitive personal data” includes:

(1) specifically listed categories and combinations of covered personal identifiers (not all personally identifiable information), (2) precise geolocation data, (3) biometric identifiers, (4) human genomic data, (5) personal health data, and (6) personal financial data.

Only transactions exceeding certain “bulk” thresholds, measured by the number of U.S. persons or U.S. devices involved, will be subject to the relevant restrictions.

Government-related Data. According to the ANPRM, the term means (1) any precise geolocation data, regardless of volume, for any geofenced location within an enumerated list, and (2) any sensitive personal data, regardless of volume, that links to current or former U.S. government, military or Intelligence Community employees, contractors, or senior officials.

Prohibited, Restricted, and Exempted Transactions

The EO and ANPRM propose a three-tier approach to differentiate the types of restrictions subject to the proposed rules.

Prohibited Transactions. The ANPRM generally prohibits a U.S. person from knowingly engaging in a “covered data transaction” with a country of concern or covered person.

Restricted Transactions. The ANPRM provides that for U.S. persons involved in “covered data transactions” relating to a vendor, employment or investment agreement, such transactions may be permissible if adequate security measures are taken consistent with relevant rules to be promulgated by the Cybersecurity and Infrastructure Security Agency of the Department of Homeland Security.

Exempted Transactions. The ANPRM proposes to exempt certain types of transactions, including: (1) data transactions involving personal communication, information or information materials carved out by IEEPA, (2) transactions for official government business, (3) financial services, payment processing or regulatory compliance related transactions, (4) intra-entity transactions incident to business operations, and (5) transactions required or authorized by federal law or international agreements.
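The three-tier approach can be summarized as a rough decision sketch. This is an illustration only, under the simplifying assumption that the transaction is already a covered data transaction with a country of concern or covered person; the category names and ordering are mine, and the real exemption and security-requirement analysis is considerably more involved:

```python
# Hypothetical sketch of the EO/ANPRM three-tier approach for a covered
# data transaction with a country of concern or covered person.
# Tier names and logic are illustrative only.

# Agreement types the ANPRM would treat as "restricted" rather than
# flatly prohibited, subject to CISA-consistent security measures
RESTRICTED_AGREEMENT_TYPES = {
    "vendor_agreement",
    "employment_agreement",
    "investment_agreement",
}

def classify_transaction(is_exempt: bool, transaction_type: str) -> str:
    """Place a covered data transaction into one of the three tiers."""
    if is_exempt:
        # e.g., IEEPA carve-outs, official government business,
        # financial services, intra-entity transactions
        return "exempted"
    if transaction_type in RESTRICTED_AGREEMENT_TYPES:
        # permissible only with adequate security measures under
        # rules to be promulgated by CISA
        return "restricted"
    # e.g., data brokerage with a country of concern
    return "prohibited"
```

The sketch reflects the ordering implied by the ANPRM: exemptions are checked first, then whether the transaction falls into a restricted (conditionally permissible) category, with prohibition as the default.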

Licensing Regime. The EO authorizes the Justice Department to grant specific (entity- or person-specific transaction) and general (covering broad classes of transactions) licenses for U.S. persons to engage in prohibited and restricted transactions. The Justice Department is considering establishing a licensing regime modeled on the economic sanctions licensing regime managed by the Treasury Department’s Office of Foreign Assets Control.

Countries of Concern and Covered Persons

Countries of Concern. The ANPRM proposes to identify China (including Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela as the countries of concern.

Covered Persons. The ANPRM proposes to define the “covered persons” as (1) an entity owned by, controlled by, or subject to the jurisdiction or direction of a country of concern, (2) a foreign person who is an employee or contractor of such an entity, (3) a foreign person who is an employee or contractor of a country of concern, and (4) a foreign person who is primarily resident in the territorial jurisdiction of a country of concern. The Justice Department may also designate specific persons and entities as “covered persons.”

Implementation

The regime will only become effective upon the publication of final administrative rules. The scope of the final rules may differ significantly from the proposals published in the ANPRM. In addition, the EO affords significant discretion to the Justice Department and other agencies to issue interpretive guidance and enforcement guidelines to further clarify and refine the process and mechanisms for complying with the final rules, including potential due diligence, recordkeeping, or voluntary reporting requirements.

It’s hard to keep up with all of the legal challenges related to artificial intelligence tools (AI), but here are a couple of noteworthy ones that have surfaced in the past few weeks, in case you haven’t seen them.

Two New York lawyers are facing possible sanctions for using ChatGPT to assist with a brief, which included citing non-existent cases against non-existent airlines. This is a perfect example of how the use of ChatGPT can go wrong and “hallucinate,” and why human review and confirmation are critical to its use. Nothing like citing non-existent cases to get a judge really mad.

Another interesting development: Georgia radio host Mark Walters has filed a defamation suit against OpenAI LLC, the developer of ChatGPT, alleging that a legal summary generated by ChatGPT falsely connected him to a lawsuit filed in the State of Washington involving accusations of embezzlement from a gun rights group, a hallucination. Walters states that he has never been accused of embezzlement and never worked for the gun rights group.

It is being reported that this is “the first defamation suit against the creator of a generative AI tool.”

The legal challenges with AI are vast and varied and we will try to keep our readers informed on the myriad of relevant issues as they arise.

In a recent report by the Association of Corporate Counsel, a survey of chief legal counsel confirmed what we’ve been saying for a while: expectations of increased regulatory enforcement, together with privacy and cybersecurity concerns, are driving organizations to dedicate more effort to compliance. In fact, 64 percent of those surveyed said they expect regulatory enforcement to increase in the next year, and that expectation is driving compliance efforts for these corporate leaders. According to the report, the “focus on privacy regulations across multiple countries and jurisdictions (China, European Union, United States, including California) is forcing companies to step up its compliance efforts.”

How are these trends affecting companies and what are their lawyers doing to meet these compliance challenges, defend against litigation, prevent cyberattacks, and protect against fines and sanctions? According to the report, two thirds of those surveyed plan on “establishing new process[es] to increase defensibility, and over a half also intend to invest in new technology and consult with third parties to limit exposure to litigation and compliance threats.” As we’ve said many times, it is critical for companies to be informed, prepared, and actively manage privacy and cybersecurity issues as a key strategy to enhance regulatory compliance.