Why Sanctions Keep Rising as AI Spreads Through Legal Work

April 3, 2026

The legal industry does not have an AI problem.

It has a workflow problem.

An NPR report published on April 3, 2026, notes that court sanctions for AI-generated errors in legal filings continue to rise. Damien Charlotin, who tracks these incidents globally, told NPR that he now counts more than 1,200 total cases, about 800 of them from U.S. courts. He also said 10 cases from 10 different courts surfaced on a single day, and NPR reported that an Oregon federal court may have set a recent high-water mark with $109,700 in sanctions and costs.

That trend matters. The easy takeaway is still the wrong one.

This is not a story about lawyers needing to avoid AI altogether.

It is a story about legal AI being deployed in the wrong shape: as drafting convenience instead of controlled workflow.

The duty has not changed

Carla Wale, associate dean and law library director at the University of Washington School of Law, put the baseline clearly in the NPR piece: lawyers remain responsible for the accuracy of what they file, regardless of how it was generated.

That is not a new rule. It is the old rule applied to a new workflow.

What has changed is the number of systems that make it easier to:

  • generate text quickly
  • obscure where it came from
  • blur the line between draft and final
  • move past verification too quickly

That is why sanctions keep rising even after years of public embarrassment. The underlying workflow is still weak.

This is bigger than fake citations

The fake-citation cases get attention because they are easy to mock.

But that is not the full problem.

The deeper issue is that too many legal AI systems still hide the middle of the work:

  • what information was used
  • what source material was retrieved
  • what state the output is in
  • what changed during review
  • what became final and why

Joe Patrice made the most important point in the NPR article when he warned about increasingly "agentic" products that obscure the middle steps.

Once a system hides the middle of the workflow, it becomes much easier for lawyers to rely on output without seeing enough of how it was assembled, checked, edited, or approved.

That is not a model-quality problem. It is a system-design problem.
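
To make that concrete, here is a minimal sketch of what keeping the middle visible can look like at the data level. Every name in it is hypothetical rather than any vendor's actual schema; the point is only that each AI-assisted step leaves an inspectable record behind.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class StepRecord:
    """One inspectable record per AI-assisted step in a filing's life."""
    step: str                     # e.g. "draft", "revise", "cite-check"
    inputs_used: list[str]        # identifiers of material fed to the model
    sources_retrieved: list[str]  # identifiers of retrieved source documents
    output_state: str             # "draft" | "edited" | "approved" | "rejected"
    changed_by: str               # who acted: a model id or a person
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A filing's history is the ordered list of these records; nothing
# becomes "final" without a trail explaining how it got there.
history: list[StepRecord] = [
    StepRecord(
        step="draft",
        inputs_used=["intake-memo-017"],                # hypothetical ids
        sources_retrieved=["case:smith-v-jones-2023"],
        output_state="draft",
        changed_by="model:drafting-assistant",
    )
]
```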

Better models do not fix weak workflow

A stronger model can improve drafting quality. It can improve fluency, summarization, and retrieval. But better output quality does not fix a weak legal workflow.

The real design questions are:

  • Is retrieval bounded to the task?
  • Is the output visibly draft or visibly final?
  • Is review enforced or merely suggested?
  • Is provenance visible?
  • Can the system keep draft, edited, approved, and rejected states separate?

Those questions tell you more than a vendor's benchmark chart ever will.

They are also the questions courts and ethics rules keep pointing back to, whether they use that language explicitly or not.
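
The last two questions are mechanical, not philosophical. Here is a minimal sketch, with invented names, of what "review enforced" and "states kept separate" can mean in practice: a small state machine in which a document cannot reach final without passing through an explicit approval step.

```python
from enum import Enum, auto

class DocState(Enum):
    DRAFT = auto()
    EDITED = auto()
    APPROVED = auto()
    REJECTED = auto()
    FINAL = auto()

# The only permitted transitions. Deliberately, there is no edge from
# DRAFT or EDITED straight to FINAL: review is enforced, not suggested.
ALLOWED = {
    DocState.DRAFT:    {DocState.EDITED, DocState.REJECTED},
    DocState.EDITED:   {DocState.APPROVED, DocState.REJECTED},
    DocState.APPROVED: {DocState.FINAL},
    DocState.REJECTED: {DocState.DRAFT},
    DocState.FINAL:    set(),
}

def transition(current: DocState, target: DocState,
               reviewer: str | None = None) -> DocState:
    """Move a document to a new state; approval requires a named reviewer."""
    if target not in ALLOWED[current]:
        raise ValueError(f"{current.name} -> {target.name} is not permitted")
    if target is DocState.APPROVED and not reviewer:
        raise ValueError("approval requires a supervising attorney on record")
    return target
```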

ABA 512 already pointed in this direction

This is one reason ABA Formal Opinion 512 still matters so much.

Opinion 512 was issued on July 29, 2024. It is still the clearest ABA statement that lawyers remain responsible for:

  • competence
  • confidentiality
  • supervision
  • candor
  • reasonable fees

Read at the systems level, 512 does not say "pick the best model." It says legal work still requires professional judgment, supervision, and control.

That should push buyers toward systems that make those duties easier to satisfy in practice, not systems that reduce legal work to a fast drafting surface.

Heppner pointed in the same direction

United States v. Heppner did the same thing from a different angle.

As I wrote in What United States v. Heppner Means for Legal AI Architecture, the case was not a general anti-AI ruling. It was a warning about public consumer tools, weak confidentiality boundaries, and uncontrolled use.

That is why the larger lesson from the current sanctions wave is not just "hallucinations are dangerous."

Legal AI should not behave like a consumer drafting tool wrapped in professional disclaimers.

It should behave like legal infrastructure.

The safer pattern is not mysterious anymore

Serious legal AI should be built around:

  • bounded retrieval instead of giant prompt stuffing
  • visible review states instead of soft "human in the loop" language
  • provenance instead of black-box generation
  • controlled workflow instead of hidden middle steps
  • hosted models used as inference layers rather than as firm memory

That is why pieces like Why Legal AI Needs Bounded Memory, Not Bigger Prompts, Why Review Boundaries Matter More Than Model Choice, and Why Legal AI Memory Is a Systems Problem, Not a Prompt Problem all point in the same direction.

What matters is not whether a product says it uses AI responsibly.

What matters is whether the product is actually designed to make legal review, supervision, and verification visible.
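
Bounded retrieval, the first item on that list, is less exotic than it sounds. Here is a minimal sketch with invented names and a toy keyword score; a production system would use embeddings and a real document store, but the boundary itself is just a filter.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    matter_id: str
    text: str

def score(query: str, doc: Doc) -> int:
    # Toy relevance: how many query words appear in the document.
    return sum(word in doc.text.lower() for word in query.lower().split())

def bounded_context(docs: list[Doc], matter_id: str,
                    query: str, k: int = 5) -> list[str]:
    # The boundary: only documents tagged to this matter are candidates.
    in_scope = [d for d in docs if d.matter_id == matter_id]
    # And only a small ranked slice of those ever reaches the model.
    ranked = sorted(in_scope, key=lambda d: score(query, d), reverse=True)
    return [d.text for d in ranked[:k]]
```

The model never sees the firm's whole document set, only the slice that the current matter and the current task justify.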

Labeling rules will not solve this by themselves

NPR notes that some courts have adopted broader disclosure or labeling rules for AI-generated filings.

Those rules may help at the margins. They are not enough.

Once AI becomes deeply embedded in drafting, search, intake, document systems, and workflow tools, blanket labels become less useful. A generic disclosure that "AI assisted" some part of the work does not tell anyone:

  • which part
  • what sources were used
  • what was checked
  • what remained draft
  • what a lawyer actually reviewed

Workflow architecture matters more than labels.
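
For contrast, here is what a disclosure structured around those questions might look like, sketched with invented field names. It is not a proposed court rule; the point is that this level of granularity is cheap once the workflow already keeps records.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Disclosure:
    """A disclosure that answers the questions above, not a blanket label."""
    section: str        # which part of the filing AI touched
    sources: list[str]  # what source material was used
    checks: list[str]   # what verification actually happened
    state: str          # "draft" or "final" when disclosed
    reviewed_by: str    # the lawyer who actually reviewed it

d = Disclosure(
    section="Statement of Facts",
    sources=["deposition-tr-2026-01-12", "exhibit-B"],  # hypothetical ids
    checks=["citations verified against the record", "quotations checked"],
    state="final",
    reviewed_by="supervising attorney of record",
)
```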

The standard serious buyers should use

A serious buyer should stop asking only:

  • Which model does it use?
  • How large is its context window?
  • Does it have an AI disclosure?

And start asking:

  • What records does the system keep of generation, editing, and approval?
  • Where does draft become final?
  • What is blocked until review occurs?
  • What context is loaded and why?
  • What can a supervising attorney actually see?

Those answers tell you more about whether a legal AI system belongs in real practice than a polished demo ever will.
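
Several of those questions collapse into one: does the system keep records a supervising attorney can actually read? Building on the step-record sketch from earlier (same invented names), the supervising view can be as simple as rendering the trail.

```python
def supervisor_view(history: list[StepRecord]) -> str:
    """Render a filing's step records as a timeline a reviewer can scan."""
    lines = []
    for r in history:
        sources = ", ".join(r.sources_retrieved) or "none"
        lines.append(
            f"{r.at:%Y-%m-%d %H:%M} | {r.step:<10} | {r.output_state:<8} "
            f"| by {r.changed_by} | sources: {sources}"
        )
    return "\n".join(lines)

print(supervisor_view(history))  # the sample history defined earlier
```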

The real takeaway

Sanctions are rising because legal AI is still too often deployed as drafting convenience.

Legal AI should not be judged only by how fast it writes. It should be judged by whether the surrounding system makes legal judgment visible, keeps sensitive work inside bounded workflows, and enforces a real difference between draft and final output.

That is the line the market needs to start drawing more clearly.
