In another “hard lesson learned” case, on Monday, February 24, 2025, a federal district court sanctioned three lawyers from the national law firm Morgan & Morgan for citing artificial intelligence (AI)-generated fake cases in motions in limine. Of the nine cases cited in the motions, eight were non-existent.

Although two of the lawyers were not involved in drafting the motions, all three e-signed them before they were filed. After defense counsel raised concerns with the court about the cited cases, the lawyer who drafted the motions admitted to using MX2.law, “an in-house database launched by” Morgan & Morgan, to add case law to the motions. The lawyer told the court that it was their first time using AI in this way. Unfortunately, they failed to verify the accuracy of the platform’s output before filing the motions.

To Morgan & Morgan’s credit, they withdrew the motions, were forthcoming to the court, reimbursed the defendant for attorney’s fees, and implemented “policies, safeguards, and training to prevent another [such] occurrence in the future.”

The court sanctioned all three lawyers: the attorney who drafted the motions and failed to verify the output was sanctioned $3,000, and the other two who e-signed the motions were sanctioned $1,000 each. It is a hard lesson, although by now all attorneys should be aware of the risks of using generative AI tools for assistance with writing pleadings, and this is not the first hard lesson learned by an attorney who cited fake cases in a court filing. Check the output of any AI-generated material, whether it is destined for a court filing or not. In the words of the sanctioning court: “As attorneys transition to the world of AI, the duty to check their sources and make a reasonable inquiry into existing law remains unchanged.”

Continuing our weekly blog posts about lawyers using AI and getting into trouble: the Massachusetts Office of Bar Counsel recently issued an article entitled “Two Years of Fake Cases and the Courts are Ratcheting Up the Sanctions,” summarizing the problems courts encounter when confronted with lawyers citing fake cases, and the subsequent referrals to disciplinary counsel.

The article outlines multiple cases of lawyers being sanctioned for filing pleadings containing fake cases after using generative AI tools to draft the pleading. The cases range from lawyers not checking the cites themselves, to supervising lawyers not checking the cites of lawyers they are supervising before filing the pleading.

The article reiterates our professional ethical obligations as officers of the court to always file pleadings that “to the best of the attorney’s knowledge, information and belief, there is a good ground to support it,” that “any lawyer who signs, files, submits, or later advocates for any pleading, motion or other papers is responsible for its content,” and that lawyers are to provide proper supervision to subordinate lawyers and nonlawyers.

The article outlines two recent sanctions imposed upon lawyers in Massachusetts in 2025. The author states, “Massachusetts practitioners would be well-served to read the sanction orders in these matters.” I would suggest that non-Massachusetts practitioners read the article and the sanction orders as well, as the sanctions are similar to what other courts are imposing on lawyers who fail to check the content and cites of pleadings before filing them.

Courts are no longer giving lawyers free passes for being unaware of the risks of using generative AI tools to draft pleadings. According to the article, sanctions will continue to be issued, and practitioners and firms need to address the issue head on.

The article points out several mitigations that lawyers and firms can take to avoid sanctions. My suggestion is that lawyers use caution when using AI to draft pleadings, communicate with any other lawyers involved in drafting the pleadings to determine whether AI is being used (including if you are serving as local counsel), and check and re-check every cite before you file a pleading with a court.

U.S. District Judge Amit P. Mehta sanctioned an attorney who filed a brief containing erroneous citations in every case cited, after the attorney admitted to relying on generative AI to write the brief. The attorney had used the tools Grammarly, ProWriting Aid, and Lexis’ cite-checking tool. The attorney was ordered to pay sanctions, including opposing counsel’s invoice for fees and costs. The court noted that sanctions were necessary because the attorney had acted “recklessly” and shown “singularly egregious conduct”: they did not verify the citations, and all nine cited cases were erroneous. The court further noted that the lack of verification raised “serious ethical concerns.”

The attorney’s co-counsel was not sanctioned because they indicated they were unaware of the use of generative AI, but they admitted that they did not independently check and verify the citations, and they underwent questioning by the court.

The sanctioned attorney self-reported the incident to the Pennsylvania Disciplinary Board and filed a motion to withdraw from the case.

This is a hard lesson to learn, and it is not the first time an attorney has been sanctioned by a court for filing hallucinated citations. The message in all of these cases is that attorneys have an ethical obligation to check every cite before filing a pleading with the court, and extreme caution should be taken when using generative AI tools in the brief-writing process.

Similarly, Senator Chuck Grassley, Chairman of the U.S. Senate Judiciary Committee, sent letters to two federal judges this week requesting information about their use of generative AI in drafting orders in cases. According to Grassley, original orders entered by the judges in July, in separate cases, were withdrawn after lawyers flagged factual inaccuracies and other errors in the orders. Grassley noted that lawyers are facing scrutiny over the use of generative AI, and therefore judges should be held to the same or a higher standard.

The judges have not responded to date.

Courts and their law clerks may wish to consider the same lessons learned by attorneys using generative AI tools. Proceed with caution.

In the ongoing saga of lawyers sanctioned for AI-generated hallucinated citations in pleadings, FIFA (and other defendants) in an antitrust lawsuit filed in Puerto Rico by the Puerto Rico Soccer League recently obtained an order from Chief U.S. District Judge Raul M. Arias-Marxuach requiring counsel for the plaintiff, a now-defunct league, to pay FIFA and the other defendants $24,000 in attorney’s fees and costs “for filing briefs that appeared to contain errors hallucinated by artificial intelligence.” Puerto Rico Soccer League NFP, Corp. v. Federacion Puertoriquena de Futbol, No. 23-1203 (D.P.R. Sept. 23, 2025).

The judge noted that the motions filed by the Puerto Rico Soccer League “included at least 55 erroneous citations ‘requiring hours of work on the court’s end to check the accuracy of each citation.’” Plaintiffs’ counsel denied using generative AI, but the judge questioned this assertion, given “the sheer number of inaccurate or nonexistent citations.” The judge noted that the citations violated Rule 11 of the Federal Rules of Civil Procedure and applicable ethical rules.

The ordered sanctions are another reminder to lawyers to check and recheck all cases cited in any pleading before filing, to comply with Rule 11.

Researchers at Arizona State University and Citizen Lab have discovered that three families of Android VPN applications, used by millions of people worldwide, are related and owned by companies or individuals located in mainland China or Hong Kong with ties to the People’s Republic of China.

The researchers analyzed numerous VPN apps, examining each app’s Java code, security flaws, and number of Google Play Store downloads. From their research, they identified three families of VPN providers. The apps in the first group contained identical security flaws, including that they:

  • Collect location-related data (even though their privacy policies say they don’t);
  • Use weak/deprecated encryption; and
  • Contain hard-coded Shadowsocks passwords, which, if extracted, may allow attackers to decrypt user traffic. These hard-coded credentials work across different apps and servers, proving that these providers use the same backend infrastructure.
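To see why a hard-coded password is so damaging here: legacy Shadowsocks stream ciphers derive the traffic-encryption key deterministically from the password alone, using an OpenSSL-style EVP_BytesToKey construction. Below is a minimal sketch of that derivation; the password shown is a hypothetical stand-in for a credential extracted from an app binary.

```python
import hashlib

def evp_bytes_to_key(password: bytes, key_len: int) -> bytes:
    """Legacy Shadowsocks key derivation (OpenSSL-style EVP_BytesToKey, MD5).

    The key is derived from the password alone, with no salt or per-user
    secret, so anyone who extracts a hard-coded password from an app
    binary can derive the exact key the client and server use.
    """
    derived, prev = b"", b""
    while len(derived) < key_len:
        prev = hashlib.md5(prev + password).digest()
        derived += prev
    return derived[:key_len]

# "example-password" is a hypothetical stand-in for an extracted credential.
key = evp_bytes_to_key(b"example-password", 32)  # e.g., a 256-bit stream-cipher key
```

Because the derivation involves no per-session or per-user secret, recovering the shipped password is enough to compute the same key and decrypt captured traffic protected by these legacy ciphers.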

They found that a single company hosts all of the VPN servers in the second group, and that the VPN apps in the third family “are susceptible to connection interference attacks using the client-side blind in/on-path attacks.”

Significantly, the researchers found that “the providers appear to be owned and operated by a Chinese company (i.e., Qihoo 360) and have gone to great lengths to hide this fact from their 700+ million combined user bases.”

The Tech Transparency Project (TTP) provided an in-depth analysis of Qihoo 360 as a national security threat in its article “Apple Offers Apps With Ties to Chinese Military,” which is well worth the read.

According to the article, “[m]illions of Americans have downloaded apps that secretly route their internet traffic through Chinese companies, according to an investigation by the Tech Transparency Project (TTP), including several that were recently owned by a sanctioned firm with links to China’s military.” They discovered that “one in five of the top 100 free virtual private networks in the U.S. App Store during 2024 were surreptitiously owned by Chinese companies, which are obliged to hand over their users’ browsing data to the Chinese government under the country’s national security laws. Several of the apps traced back to Qihoo 360, a firm declared by the Defense Department to be a ‘Chinese Military Company.’”

They further found that “one Chinese VPN has been advertised on Facebook and Instagram to teens as young as 13, and some have targeted ads at Americans looking to keep using TikTok, another Chinese app threatened with a U.S. ban.”

While the researchers from Arizona State University and Citizen Lab did an in-depth analysis of the apps owned by Qihoo 360 (which found that the apps were downloaded over 70 million times), TTP provides more information about Qihoo 360 and its national security risk. According to TTP, Qihoo 360 was placed on the Commerce Department’s Entity List in June 2020 because it “takes part in the procurement of commodities and technologies for military end-use in China.” It was also “designated by the U.S. Department of Defense as a ‘Chinese military company’ operating in the U.S.”

Similar to the concerns raised about TikTok and Temu, the free VPN services provided by Qihoo 360 carry risks that users should consider. Research your VPN provider to ensure that it does not have ties to the Chinese Communist government.

We have previously outlined several cases where lawyers have been sanctioned by courts for citing fake cases generated by artificial intelligence (AI), also known as “hallucinations.”

Now, we don’t even have to keep track of the cases to report on them because we found a nifty new database that keeps track of all of them. Did you know that as of this writing, there have been 156 cases where lawyers cited fake cases generated by AI in court documents?

It is hard to believe that, given Rule 11 obligations, any lawyer would file a document with a court without checking the cites. Apparently, it happens more frequently than one would think. Many lawyers have already been sanctioned by courts to send the message that citing fake cases generated by AI wastes the court’s time, as well as the time and resources of opposing counsel and parties.

Kudos to Damien Charlotin, who has created a database to track the growing number of cases in which lawyers have cited AI-generated hallucinated cases. If you want to see how much of a growing problem it is, check it out.

The cases grow, and the sanctions continue to get larger and more punitive. Lawyers need to quickly learn that they must follow their ethical obligations and provide actual cases, with citations checked and Shepardized with human oversight, before filing a pleading. It is truly shocking that lawyers have failed to do so in 156 instances thus far.
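One low-tech safeguard is to mechanically extract every citation from a draft into a checklist for human verification before filing. A minimal sketch using only the standard library follows; the regex is deliberately simplified and illustrative, will miss many real citation formats, and is no substitute for a proper cite-checking tool.

```python
import re

# Simplified, illustrative pattern for U.S. reporter citations such as
# "410 U.S. 113" or "556 F.3d 1021". Real citation formats are far more
# varied, so treat matches only as a checklist for human review.
CITE_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def extract_citations(text: str) -> list[str]:
    """Return unique citation-like strings, in order of first appearance."""
    seen, out = set(), []
    for match in CITE_RE.finditer(text):
        cite = match.group()
        if cite not in seen:
            seen.add(cite)
            out.append(cite)
    return out

brief = "Compare Roe v. Wade, 410 U.S. 113 (1973), with 556 F.3d 1021."
print(extract_citations(brief))
```

Dedicated open-source tools exist for this step (the Free Law Project’s eyecite library, for example), but even a crude checklist forces the human verification that these sanction orders keep demanding.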

A new survey from Intapp, titled “2025 Tech Perceptions Survey Report,” summarizes findings from a survey of fee-earners and reports a “surge in AI usage.” The professionals surveyed work in the accounting, consulting, finance, and legal sectors. Findings include that “AI usage among professionals has grown substantially, with 72% using AI at work versus 48% in 2024.” AI adoption among firms increased to 56%, with firms using it for data summarization, document generation, research, error-checking, quality control, voice queries, data entry, consultation (decision-making support), and recommendations. Adoption is highest in finance, where 89% of professionals use AI at work, compared with 73% of accounting professionals, 68% of consulting professionals, and 55% of legal professionals.

A significant conclusion is that when firms do not provide AI tools, professionals often find their own: over 50% of professionals have used unauthorized AI tools in the workplace, which increases risk for companies. Professionals are reallocating the time saved with AI tools to improving work-life balance, higher-level client work, strategic initiatives and planning, cultivating client relationships, and increasing billable hours.

The survey found that professionals want and need technology to assist with tasks, yet only 32% of professionals believe they have the optimal technology to do their jobs effectively. The conclusion is that professionals who are given optimal technology are more satisfied and more likely to stay at the firm, and that optimal tech “powers professional- and firm-success, and AI is becoming non-negotiable for future firm leaders.”

AI tools are rapidly being developed and adopted across industries, including the professional sectors. As the Intapp survey notes, if firms do not provide AI tools for workers to use, workers will use them anyway. The survey reiterates how important it is to have an AI Governance Program in place that provides sanctioned tools, reducing the risks associated with unauthorized AI tools. Developing and implementing an AI Governance Program and acceptable use policies should be high on the priority list for all industries, including professional services.

A new study by Ivanti illustrates that one out of three workers secretly use artificial intelligence (AI) tools in the workplace. They do so for varying reasons, including “I like a secret advantage,” “My job might be reduced/cut,” “My employer has no AI usage policy,” “My boss might give me more work,” “I don’t want people to question my ability,” and “I don’t want to deal with IT approval processes.”

In 2025, a staggering 42% of employees admit to using generative AI (GenAI) tools at work. Another 48% of employees admit to feeling resenteeism (disliking one’s job but staying anyway), and 39% admit to presenteeism (coming into the office to be seen while not being productive).

The secret use of GenAI tools in the workplace poses several risks for organizations, including unauthorized disclosure of company data and/or personal information, cybersecurity risks, bias and discrimination, and misappropriation of intellectual property.

The Ivanti study emphasizes the need for organizations to adopt an AI Governance Program so employees feel comfortable using approved and sanctioned AI tools and don’t keep their use a secret. It also allows the organization to monitor the use of AI tools by employees and implement guidelines and guardrails around their safe use in the organization to reduce risk.

If you hang out with CISOs like I do, you know that shadow IT has always been a difficult problem. Shadow IT refers to “information technology (IT) systems deployed by departments other than the central IT department, to bypass limitations and restrictions that have been imposed by central information systems. While it can promote innovation and productivity, shadow IT introduces security risks and compliance concerns, especially when such systems are not aligned with corporate governance.”

Shadow IT has been a longstanding problem because IT professionals cannot implement security measures and guidelines for systems they do not know are in use.

Now that artificial intelligence (AI) is widely used for purposes including work, it is imperative that organizations address its governance, as they previously addressed employees’ use of IT assets. Otherwise, employees will use AI tools without the organization’s knowledge and outside of its acceptable use policies, exacerbating the problem of shadow AI in the organization.

A recent TechRadar article concluded that “you almost certainly have a shadow AI problem.” The risks of having shadow AI in the organization include: “the leakage of sensitive or proprietary data, which is a common issue when employees upload documents to an AI service such as ChatGPT, for example, and its contents become available to users outside of the company. But it could also lead to serious data quality problems where incorrect information is retrieved from an unapproved AI source which may then lead to bad business decisions.” And don’t forget about the problem of hallucinations.

Implementing an AI Governance Program is one way to address the shadow AI problem. AI Governance Programs differ depending on business needs, but all of them address who owns the program, which AI tools are sanctioned, how those tools can be used, guardrails around the risks of data loss, data integrity and accuracy, and user training and education. Governing the use of AI tools in an organization is similar to governing the use of IT assets. The most important thing is to get started before shadow AI gets out of hand.

In a win for global law enforcement, Germany’s Bundeskriminalamt (BKA) announced on April 5, 2022, that it had officially taken down the infrastructure of Hydra, a Russian-based, illegal dark-web marketplace that had allegedly facilitated more than $5 billion in Bitcoin transactions since its inception in 2015. In the process of shutting it down, German authorities seized over $25 million in Bitcoin across 88 transactions. According to BKA, it “secured the server infrastructure in Germany of the world’s largest illegal Darknet marketplace ‘Hydra Market.’”

BKA attributed the takedown to a collaborative investigation, ongoing since August 2021, between its Central Office for Combating Cybercrime and U.S. law enforcement authorities.

According to BKA, Hydra had 17 million customers and over 19,000 seller accounts registered on its marketplace, and “was probably the illegal marketplace with the highest turnover worldwide.”

Following the takedown in Germany, the U.S. Department of the Treasury (Treasury) Office of Foreign Assets Control (OFAC) followed up with sanctions against Hydra, which, according to Secretary of the Treasury Janet Yellen, send “a message today to criminals that you cannot hide on the darknet or their forums, and you cannot hide in Russia or anywhere else in the world.”

Treasury’s release states, “Countering ransomware is a top priority of the Administration. Today’s action supports the Administration’s counter-ransomware lines of effort to disrupt ransomware infrastructure and actors in close coordination with international partners” and calls out Russia as “a haven for cybercriminals.”

Therefore, Hydra was designated by OFAC “for being responsible for or complicit in, or having engaged in, directly or indirectly, cyber-enabled activities originating from, or directed by persons located, in whole or in substantial part, outside the United States that are reasonably likely to result in, or have materially contributed to, a significant threat to the national security, foreign policy, or economic health or financial stability of the United States and that have the purpose or effect of causing a significant misappropriation of funds or economic resources, trade secrets, personal identifiers, or financial information for commercial or competitive advantage or private financial gain.”

Treasury further sanctioned the virtual currency exchange Garantex, which is registered in Estonia but operates out of Moscow and St. Petersburg, Russia. According to Treasury, more than $100 million in transactions on the exchange were associated with “illicit actors and darknet markets,” including Conti and Hydra.

Therefore, Treasury designated Garantex “for operating or having operated in the financial services sector of the Russian Federation economy” which “reinforces OFAC’s recent public guidance to further cut off avenues for potential sanctions evasion by Russia, in support of the G7 leaders’ commitment to maintain the effectiveness of economic measures.”

These actions by the Department of the Treasury send a strong message to cybercriminals that sanctions related to the war in Ukraine are rapidly spurring additional scrutiny and action by law enforcement against anyone associated with Putin or Russia.
