
When AI Enters the Courtroom: Grassley Demands Answers After Judges Cite Phantom Cases
By Kendall PC
The intersection of artificial intelligence and the judiciary has taken an
unexpected—and uncomfortable—turn. Senator Chuck Grassley (R-Iowa), the longtime
watchdog of government accountability and the current Chair of the Senate Judiciary
Committee, has raised pointed questions about whether two federal judges relied on
generative AI tools in drafting recent court orders that included fabricated citations and
fictionalized details.
Grassley’s letters, sent October 6 to U.S. District Judges Julien Neals of New Jersey
and Henry Wingate of Mississippi, request confirmation of whether either judge—or their
staff—used generative AI systems to draft the flawed opinions. He also asks whether
any confidential or sealed case materials were entered into such tools. The judges have
until October 13 to respond.
The Orders in Question
Judge Neals’ June 30 order in a shareholder dispute involving a biopharma company
contained quotations and case citations that simply did not exist. The inconsistencies
were flagged by defense counsel from Willkie Farr & Gallagher, who recognized that
several “authorities” cited in the ruling were fictitious. Neals rescinded the order shortly
after the discovery, issuing a corrected version.
Judge Wingate’s July 20 order in a case challenging a Mississippi law restricting the
teaching of diversity, equity, and inclusion (DEI) concepts was similarly problematic. His
decision referenced parties, allegations, and quotations unconnected to the actual
record. That order, too, was later withdrawn and replaced.
While neither judge has confirmed the use of AI, the mistakes bore the hallmarks of
“hallucinated” outputs—AI-generated text that confidently presents false or invented
information. Grassley’s inquiry seeks to determine whether such technology was
involved, and if so, under what safeguards.
A Constitutional and Ethical Crossroads
Grassley’s letter raises a larger question: how courts will navigate the risks of AI-
generated content within judicial decision-making. Federal judges are not explicitly
prohibited from using AI-assisted tools, but they are bound by strict duties of accuracy,
confidentiality, and impartiality under the Code of Conduct for U.S. Judges.
The senator’s investigation underscores growing anxiety across the legal profession:
if even members of the federal bench can be misled by AI, what does that mean for the
credibility of the justice system? As Grassley framed it, “The public has a right to know
whether judicial officers are introducing unverified or fabricated material into official
court records.”
The U.S. Constitution provides that judges may be removed only through
impeachment by the House and conviction by the Senate. But according to Stephen
Gillers, Elihu Root Professor of Law Emeritus at NYU School of Law, the likelihood of
such a consequence is slim. “Embarrassment, not impeachment, will be the result,”
Gillers noted—adding that the incident nonetheless serves as a cautionary tale for the
entire legal community.
AI in the Courtroom: Promise Meets Peril
The episode comes amid broader efforts within the federal judiciary to understand
and regulate AI use. In 2024, the Administrative Office of the U.S. Courts issued
preliminary guidance warning against entering confidential or identifying case data into
public AI platforms. Several state court systems have since followed suit.
Still, AI’s allure remains. Judges, clerks, and attorneys alike face mounting workloads
and shrinking resources. The temptation to use large language models for drafting
opinions or summarizing case law is real—and rising. Yet, as these cases demonstrate,
automation cannot replace human legal reasoning and verification.
For litigants and practitioners, the takeaway is clear: AI tools can accelerate work but
also amplify error. For judges, the stakes are higher still—because every misstep risks
eroding public trust in the rule of law.
The Takeaway
Regardless of what Neals and Wingate disclose to Senator Grassley, the message is
unmistakable: unchecked AI use in judicial writing is a reputational and ethical minefield.
Courts will need explicit policies defining permissible use, audit trails for citations, and
mandatory human verification before any ruling is issued. Until then, these two judges’
experiences will likely be remembered as the moment artificial intelligence met Article
III—and the law blinked first.