We Must Have Imagined It – Update for May 8, 2026

We post news and comment on federal criminal justice issues, focused primarily on trial and post-conviction matters, legislative initiatives, and sentencing issues.

HALLUCINATIONS

I have had four inmates in the past few months send me draft court filings that they had prepared using artificial intelligence. The drafts were uniformly terrible.

In the most recent, the draft cited four cases – one was outright fictitious. Two others were real cases but did not address the question that the AI chatbot said they did. The fourth was a real case, but instead of holding what the motion said it did, the case said the exact opposite and destroyed the most important argument the inmate was trying to make.

The problem, called “hallucinating,” is that the AI agent makes things up when it cannot find the right answer or the right case. The problem is so widespread in the legal world that a Paris-based legal tech researcher has launched an AI Hallucinations Cases database, with almost 1,400 cases listed so far.

One of the newer entries is a 5th Circuit decision rejecting a pro se inmate’s appeal from the denial of his compassionate release motion. Prisoner-appellant Jose Marquez cranked out his appellate brief through AI. The arguments were quickly shot down by the appeals panel. At the end of the decision, the Circuit delivered a blunt warning:

Before concluding, we note that Marquez’s deceptive briefing practices deserve special mention and admonition. After an exhaustive review of Marquez’s brief, we conclude that some of the cases Marquez cites do not exist and nearly every quotation from the caselaw that he cites from existing caselaw is either misquoted or fabricated. Further, most of the legal propositions that Marquez posits are supported by our caselaw are either inapposite to the cases he cites or, worse, contradicted by our caselaw. While we afford pro se plaintiffs some leeway, we will not ignore Marquez’s use of non-existent caselaw and fabricated quotations, which flouts the requirement in Federal Rule of Appellate Procedure 28(a)(8)(A) that all briefs contain arguments supported by cited authority. Marquez is WARNED that his use of deceptive briefing practices akin to those employed in this case may result in the imposition of appropriate sanctions.

Appropriate sanctions primarily include fines. A few weeks ago, a West Coast lawyer in a probate action was fined over $100,000 for his repeated use of AI-generated motions containing hallucinated cases and quotations. The 5th Circuit said that being pro se doesn’t mean that you can avoid that lawyer’s fate.

United States v. Marquez, Case No. 25-50866, 2026 U.S. App. LEXIS 11880 (5th Cir. April 24, 2026)

AI Hallucinations Cases database

~ Thomas L. Root

News From Here And There – Update for November 6, 2025

FEDERAL SHORTS

Bang, Bang: Remember the Bureau of Prisons corrections officer who pursued a suspicious BMW parked at MDC Brooklyn through city streets back in September 2023, finally opening fire on the fleeing car at the foot of the Brooklyn Bridge (and hitting one of the malefactors in the back)?

Last week, the officer, Leon Wilson, was convicted in U.S. District Court for the Eastern District of New York of depriving the man he shot of his civil rights, as well as an 18 USC § 924(c) offense for using a gun in a crime of violence.

Wilson, who had no arrest authority except on MDC property, faces a mandatory 10-year sentence for the § 924(c) violation.

The New York Times reported that, “Outside the courtroom after the verdict, Mr. Wilson was emotional. He said he had not reported the incident because he was ‘traumatized,’ and that he thought someone had escaped from the jail.”

The people in the car were trying to drop off cigarettes and cellphones to be smuggled into the facility.

New York Times, Guard is Convicted of Pursuing Jail Smugglers and Firing at Them (October 28, 2025)

Do As We Say, Not As We Do: Federal judges have excoriated and fined lawyers for filing AI-generated motions and briefs full of false quotations and case citations.

Now, the Senate Judiciary Committee is taking aim at judges who do the same.

Two federal judges in New Jersey and Mississippi admitted last month that their offices used artificial intelligence to draft factually inaccurate court documents that included fake quotes, mangled facts and even fictional litigants — drawing a rebuke from the head of the Senate Judiciary Committee.

“I’ve never seen or heard of anything like this from any federal court,” Sen. Charles Grassley (R-Iowa), chairman of the Judiciary Committee, said in a Senate floor speech last week.

The Committee revealed the week before that Judge Henry T. Wingate of the Southern District of Mississippi and Judge Julien X. Neals of the District of New Jersey admitted that their offices used AI in preparing the mistake-laden filings in the summer. True to form, the judges blamed someone else, attributing the mistakes to a law clerk and a law school intern, respectively.

Grassley demanded that courts establish rules on AI use in litigation. “I call on every judge in America to take this issue seriously and formalize measures to prevent the misuse of artificial intelligence in their chambers,” he said.

Washington Post, Federal Judges Using AI Filed Court Orders with False Quotes, Fake Names (October 29, 2025)

Beaten Inmate Gets Paid: A federal judge last week found that a self-represented Florence ADX prisoner should be compensated $10,000 by the government for a BOP prison guard’s unwarranted use of force.

After a five-day bench trial in which the inmate represented himself on his Federal Tort Claims Act complaint, Senior District Court Judge R. Brooke Jackson determined the prisoner had successfully proven one of his three battery claims, that he was slammed into a wall by the officer in a 2018 incident, suffering psychological damage from the encounter.

Being slammed into a wall “has had a profound and lasting negative impact on him. In 18 years prior to the incident in (prison) custody, Mr. Mohamed had no suicide risk assessments; since this incident, he has had 12,” Jackson found in his October 24 order.

The Court noted in a wry aside that the prisoner’s administrative remedies filed for loss of his property did “not settle the matter. Instead, they show [the inmate] and the BOP talking past one another,” a sensation that is all too common in the administrative remedy process.

Colorado Politics, Federal Judge Awards $10,000 to Supermax Prisoner For Guard’s Use of Force (October 29, 2025)

Mohamed v. United States, Case No. 1:20-cv-2516, 2025 U.S. Dist. LEXIS 210451 (D. Colo. October 24, 2025)

Homeland Security Behaving Badly: A couple of federal agents for Homeland Security wound up on the wrong side of the courtroom last week.

In Utah, DHS agent Nicholas Kindle, an expert on the synthetic drug known as bath salts, was sentenced to 60 months last week for selling the drug while on the job in Salt Lake City.

Before Kindle was sentenced on October 22, his defense attorney argued that the sentence should reflect his willingness to cooperate with the FBI, asking for a below-Guidelines term of 33 months.

Meanwhile, in Minneapolis, former DHS agent Timothy Gregg pled guilty last Wednesday to production of child pornography after producing videos of his sexual abuse of a 17-year-old.

Gregg testified he thought she was 19, but he later admitted that he had looked her up on a DHS law enforcement database and learned she was 17.

Gregg is the third Minnesota-based law enforcement officer charged with creating or possessing child sex abuse material this year.

Salt Lake City Tribune, A Utah federal agent and bath salts expert is headed to prison for selling the drug. Here’s how long he’ll serve. (October 29, 2025)

Minnesota Public Radio News, Ex federal agent admits guilt in child sex abuse case as attorney recounts harrowing surrender (October 30, 2025)

~ Thomas L. Root

Too Good To Be True? It’s Probably AI – Update for April 17, 2023

A CAUTIONARY TALE

You’ve probably heard of artificial intelligence programs – such as ChatGPT – doing all sorts of great things. While inmates can’t get it on their Bureau of Prisons-sold tablets, they might decide to have friends on the street use it for some high-powered legal research.

Last week, I was wrestling with a tough habeas corpus issue. Even with a LEXIS subscription, I wasn’t finding much on the topic. A friend interested in the issue sent me an email with two Federal Reporter 3d case citations that were exactly on point.

I was excited and at the same time embarrassed that I had not found those cases in my research. I looked up both cases to read the whole opinions, but the citations led nowhere. So I searched the respective circuits by case name but could find nothing.

I contacted my friend for help. He checked the citations himself, and then sheepishly reported to me that they indeed did not exist. He had used ChatGPT to research the issue but had not independently verified the results.

Computer scientists call it ‘hallucinating.’ Apparently, when an AI program cannot find the answer someone is seeking, it can make things up. That’s what happened here.

So, a caution: If you run some AI legal research, you may find some really good information. But check every case citation to be sure the case exists and says what the AI is telling you it says.

~ Thomas L. Root