When ChatGPT becomes the judge - and the lawyer pays the bill

Published on: June 10, 2025
Categories: Working world, Legal
Reading time: 2 min.
Hakan Tok writes articles on technical topics in the blog Recht 24/7 Love & Law.

Fantasy case in court: lawyer cites "invented" judgment

Imagine this: a lawyer submits a pleading to the court, peppered with quotations from and references to an alleged court ruling. It all sounds correct - neatly formatted, legally well reasoned. The problem? The cited judgment does not exist. It was invented by ChatGPT, and the lawyer didn't check it.

This is exactly what has now happened in the US state of Utah. The Court of Appeals there ruled that the lawyer concerned must bear the costs of the damage he caused by using AI-generated misinformation. In total, he must compensate the other party, indemnify his own clients and pay USD 1,000 to a non-profit legal organization.

Responsibility shifted - still held liable

In his defense, the lawyer explained that an unlicensed legal assistant had drafted the pleading and that he himself had not reviewed the text sufficiently. The court did not accept this: a lawyer remains responsible for whatever is filed in his name - even if an employee or an AI tool helped produce it.

The opposing side had spotted the error because - as good lawyers do - they wanted to verify the cited judgments. When the supposedly cited ruling could not be found in any legal database, the deception came to light. A classic case of "copy, paste, embarrassment".

Not an isolated case: AI in the courtroom under scrutiny

Although this is the first documented case of its kind in Utah, it is by no means an isolated incident worldwide. Two years ago, a similar case arose in New York, in which a lawyer used ChatGPT, of all things, as a legal source - complete with fabricated judgments. Another would-be bargain hunter dispensed with human counsel altogether and had an AI avatar speak in court. The result: unconvincing.

Such incidents raise fundamental questions: How far may AI be used in the administration of justice? Where does efficiency end - and where does irresponsibility begin?

AI is not a free pass for laziness

A lawyer who blindly relies on ChatGPT is not only negligent but also on dangerous ground under professional conduct rules. The idea of saving time with a technical shortcut ends in the opposite: costs, embarrassment and a resounding shot across the bow.

AI can do a lot - but it cannot think, cannot be held liable and cannot verify. That still requires real lawyers with judgment and a sense of responsibility. Anyone who can't deliver that should ask whether they belong in a courtroom at all. ChatGPT is no substitute for common sense - and certainly not in the name of the law.

Want to know how AI affects your legal challenges? Book a consultation now and secure your future in the digital era!

At a fixed price of EUR 169 (gross)