
A Palo Alto, Calif., lawyer with nearly half a century of experience admitted this summer to a federal judge in Oakland that a case he cited in a court filing doesn’t actually exist and appears to be the product of an artificial intelligence “hallucination.”
In a court filing, Jack Russo said an apparent AI fabrication was a “new situation” for him, adding that it was “deeply embarrassing.”
Mr. Russo, an expert in computer law, has joined a rapidly growing group of lawyers publicly shamed for running afoul of strict court rules after relying on hugely popular but error-prone artificial intelligence technologies like ChatGPT.
“Hallucination,” in which AI produces inaccurate or nonsensical information, has been an ongoing problem in generative AI, which has fueled a Silicon Valley frenzy ever since San Francisco-based OpenAI released its ChatGPT bot in late 2022.
In the legal field, AI-generated errors have drawn increasing scrutiny as lawyers flock to the technology. Outraged judges have referred offenders to disciplinary authorities and imposed fines of up to $31,000 in dozens of U.S. cases since 2023, including a $10,000 fine, California’s largest ever, in a Southern California case last month.
Chatbots draw on vast amounts of data to respond to user prompts, using pattern analysis and statistical inference to generate results. Errors can arise for a variety of reasons, including insufficient or flawed training data and incorrect assumptions made by the model. The problem affects not only lawyers but also the general public seeking information, as when Google’s AI Overviews last year told users to eat rocks and to add glue to pizza sauce to keep the cheese from sliding off.
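To see how such errors arise, consider a minimal Python sketch, purely illustrative and not any vendor’s actual system, of the weighted word-by-word sampling these models perform. The word probabilities below are invented for the example; the point is that the model rewards whatever looks statistically plausible, with no built-in check that the output is true.

    # Minimal sketch of next-word sampling in a language model (illustrative
    # only; the probabilities are invented, not taken from any real model).
    import random

    # Hypothetical learned probabilities for the word following "Smith v." --
    # plausible-looking continuations score well whether or not they are real.
    next_word_probs = {
        "Jones": 0.35,    # a real-sounding case name
        "United": 0.30,   # e.g., "Smith v. United States"
        "Acme": 0.20,     # could be a fabricated party
        "Zyphron": 0.15,  # pure invention that still "looks right"
    }

    def sample_next_word(probs):
        """Draw one word in proportion to its learned probability."""
        words = list(probs)
        weights = list(probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    # The sampler happily emits a fictitious citation if it is statistically
    # plausible -- the mechanism behind a "hallucinated" case.
    print("Smith v.", sample_next_word(next_word_probs))

Real systems operate over far larger vocabularies and condition on the full prompt, but the core mechanism is the same: fluency is selected for directly, factual accuracy only indirectly.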
Mr. Russo told Judge Jeffrey White that he took full responsibility for failing to verify the citations, but said that because he is over 70 and had taken time off to recover from COVID-19, he had delegated the work to support staff without “appropriate supervisory procedures” in place.
“There’s no room for sympathy here,” said Eric Goldman, an internet law professor at Santa Clara University. “Every lawyer has a sob story to tell, but I don’t get into that. There are rules that require lawyers to double-check their submissions.”
The judge wrote in a court order last month that the AI hallucination was a first for Mr. Russo. Mr. White wrote that Mr. Russo violated federal court rules by failing to properly check a motion to dismiss in the contract dispute. The judge noted that the court “needs to divert attention from the merits of this and other cases to address this issue.”
Mr. White issued a preliminary order requiring Mr. Russo to pay a portion of the other party’s legal costs. Mr. Russo told Mr. White that his firm, Computer Law Group, “has taken steps to correct the problem and prevent it from happening again.” Mr. Russo declined to answer questions from this news organization.
Until mid-2023, it was a novelty for lawyers to face sanctions for filing court papers that cited non-existent cases invented by artificial intelligence. Now such episodes surface almost daily, and even judges have been caught out, according to a database compiled by Damien Charlotin, a senior researcher at the French business school HEC Paris who tracks court filings around the world involving AI hallucinations.
“I think the acceleration is still continuing,” Charlotin said.
Charlotin said his database contains a “staggering number” of lawyers who are sloppy, reckless or “clearly bad.”
In May, San Francisco lawyer Ivana Dukanovic admitted in U.S. District Court in San Jose that she and others at the law firm Latham & Watkins had made “embarrassing and unintentional mistakes.”
Ms. Dukanovic wrote that while representing San Francisco-based AI giant Anthropic in a music copyright case, she and her colleagues filed a document that included hallucinated material. Ms. Dukanovic, whose firm profile lists “artificial intelligence” as one of her practice areas, attributed the fabricated information to a particular chatbot: Claude.ai, the flagship product of her client Anthropic.
Judge Susan van Keulen ordered some of the filings removed from the court record. Ms. Dukanovic, who along with her firm appears to have escaped sanctions, did not respond to requests for comment.
Charlotin has found 113 U.S. cases since mid-2023 in which courts ruled on lawyers’ submissions containing hallucinated material, mostly fabricated case citations. He believes that many court filings containing AI fabrications are never caught and could affect the outcomes of cases.
Goldman, the law professor, said court decisions can have “life-altering consequences” in matters such as child custody and disability claims.
“The stakes are so high in some cases that if someone skews a judge’s decision-making, the system collapses,” Goldman said.
Still, he said, AI can be a useful tool for lawyers, finding information people might miss or helping with document preparation. “If people use AI wisely, they can do better jobs,” Goldman said. “That’s what’s pushing everyone to adopt it.”
A study released in April by the American Bar Association, the nation’s largest legal association, found that law firms’ use of AI has nearly tripled, from 11% in 2023 to 30% last year, and that ChatGPT is “a clear leader among law firms of all sizes.”
Goldman said fines may be the least of an offending lawyer’s worries. A judge may refer the attorney for disciplinary action, dismiss the case, strike key filings, or view everything the attorney does in the case with skepticism. A client may sue for legal malpractice. And an order to pay the other party’s legal costs can mean a six-figure bill.
Charlotin’s database shows that judges have hit many lawyers with warnings and referrals to disciplinary authorities, sometimes striking all or part of their submissions from the court record or ordering them to pay the opposing side’s legal costs. Last year, a federal appeals court in California rejected an appeal, citing “a litany of misrepresentations and fabrications of precedent,” including “two cases that appear not to exist.”
Charlotin expects the database to continue to grow.
“I don’t think it’s slowing down as much as you would expect, given that by now you’d assume everyone knows about this,” Charlotin said.
©2025 MediaNews Group, Inc. Distributed by Tribune Content Agency, LLC.
Citation: Chatbot hallucinations bring AI nightmares for Bay Area lawyers (October 8, 2025). Retrieved October 9, 2025 from https://techxplore.com/news/2025-10-chatbot-generate-ai-nightmares-bay.html
