
For more than a decade, computer scientist Randy Goebel and his colleagues in Japan have been advancing artificial intelligence in the legal world using a proven method from his field: an annual competition.
Working from legal cases drawn from the Japanese bar exam, contestants must build an AI system that can retrieve the laws relevant to a case and, more importantly, decide whether the defendant in the case broke the law.
Goebel says it’s this yes/no answer that AI struggles with the most, raising questions about whether AI systems can be ethically and effectively deployed by lawyers, judges, and other legal professionals who are faced with extensive paperwork and limited time frames to deliver justice.
The competition provided the basis for a new paper in which Goebel and his co-authors outline the types of reasoning that AI must use to “think” like lawyers and judges, and describe a framework for incorporating legal reasoning into large language models (LLMs).
The paper is published in Computer Law & Security Review.
“While our mission is to understand legal reasoning, our passion and value for society is to improve judicial decision-making,” Goebel said.
Goebel says the need for this type of tool has become especially pressing since the Supreme Court of Canada’s Jordan decision, which set strict limits on how long prosecutors have to bring cases to trial. Cases that exceed those limits, including serious charges such as sexual assault and fraud, have been thrown out of court.
“It’s a very good motivator to say, ‘Let’s make the justice system faster, more effective, and more efficient,'” Goebel said.
Making machines “think” like lawyers
The paper highlights three types of reasoning that AI tools need in order to think like legal experts: case-based reasoning, rule-based reasoning and abductive reasoning.
Some AI systems, such as LLMs, have proven adept at case-based reasoning, in which legal experts study past case law to determine how the law has been applied and draw parallels with the case at hand.
Rule-based reasoning, which applies written law to the facts of a particular case, can also be handled to some extent by AI tools.
But where AI tools struggle most is abductive reasoning, a type of logical inference that chains together a series of events to explain, for example, why a defendant might be innocent. (Did the man with the knife stab the victim? Or did a gust of wind strike the victim’s hand?)
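To make that gap concrete, here is a minimal sketch of abductive inference in Python, loosely based on the knife example above. The observations, the candidate hypotheses and the coverage-based scoring are all invented simplifications for illustration, not the method described in the paper.

    # Abduction: pick the hypothesis that best explains the observed evidence.
    # An LLM predicts plausible next words; it does not perform this search.
    OBSERVATIONS = {"victim_wounded", "knife_at_scene", "defendant_held_knife"}

    # Each candidate hypothesis lists the observations it would explain if true.
    HYPOTHESES = {
        "defendant_stabbed_victim": {"victim_wounded", "knife_at_scene",
                                     "defendant_held_knife"},
        "wind_caused_the_wound": {"victim_wounded", "knife_at_scene"},
    }

    def best_explanation(observations, hypotheses):
        """Return the hypothesis that explains the most observations."""
        return max(hypotheses, key=lambda h: len(hypotheses[h] & observations))

    print(best_explanation(OBSERVATIONS, HYPOTHESES))
    # -> defendant_stabbed_victim: it accounts for more of the evidence.

A real system would also have to weigh the prior plausibility of each hypothesis, not just how much of the evidence it covers.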
“Of course, modern large language models cannot perform abductive inference, because they don’t infer,” Goebel says. “They’re like a friend who has read every page of the Encyclopedia Britannica and has an opinion about everything, but knows nothing about how the logic fits together.”
Combined with a tendency to “hallucinate” and fabricate “facts” on a large scale, a generic LLM applied to the legal field can be unreliable at best and can end a lawyer’s career at worst.
A key challenge for AI scientists, Goebel says, is to develop reasoning frameworks that work in conjunction with general-purpose LLMs while preserving the precision and contextual relevance that legal reasoning demands.
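One plausible shape for such a framework, sketched here as an assumption rather than the paper’s actual design, is a pipeline in which an LLM drafts a verdict and a symbolic rule checker must agree before the answer is trusted. The llm_draft() stub and the single encoded statute below are hypothetical placeholders.

    def llm_draft(case_facts):
        """Stand-in for an LLM call that proposes a yes/no verdict."""
        return "yes"  # a real LLM would generate this from the case text

    def statute_rule(case_facts):
        """Hypothetical written rule: a valid contract needs offer,
        acceptance and consideration."""
        required = {"offer", "acceptance", "consideration"}
        return "yes" if required <= set(case_facts) else "no"

    def checked_verdict(case_facts):
        draft = llm_draft(case_facts)
        # Accept the fluent LLM draft only when the symbolic check agrees;
        # otherwise flag the case for a human lawyer.
        return draft if draft == statute_rule(case_facts) else "escalate_to_human"

    print(checked_verdict({"offer": True, "acceptance": True}))
    # -> escalate_to_human (the rule check fails, so the draft is not trusted)

Designs like this trade raw fluency for verifiability: the LLM supplies recall and language, while the rule layer supplies the precision the article says legal reasoning requires.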
There is no one-size-fits-all AI tool
When will we see an AI tool that can cut the work of lawyers and judges in half? Probably not anytime soon.
Goebel says the key takeaway from the competition, one also outlined in the paper, is that the use of computer programs to support legal decision-making is relatively new and there is still much work to be done.
Rather than a single “godlike” LLM, Goebel foresees a number of separate AI tools being used for different types of legal work.
Claims by some in the AI industry that humans are on the verge of developing AI tools that can make “perfect” judicial decisions and legal arguments are absurd, Goebel said.
“Every judge I’ve talked to agrees that there is no such thing as a perfect sentence,” he says. “The question really is, ‘How do we determine whether our current technology provides more value than harm?'”
Further information: Ha Thanh Nguyen et al., LLM for Legal Reasoning: A Unified Framework and Future Prospects, Computer Law & Security Review (2025). DOI: 10.1016/j.clsr.2025.106165
Provided by University of Alberta
Citation: Is AI ready for the courtroom? New framework tackles technology’s biggest weakness (October 28, 2025). Retrieved October 28, 2025 from https://techxplore.com/news/2025-10-ai-ready-courtroom-framework-tackles.html
