Legal Tech

Use the Frontier. Then Understand Where It Breaks.

April 24, 2026

Lawyers should use frontier AI tools.

That sentence should not still feel controversial, but in large parts of the profession it does.

The lawyers who refuse to touch the frontier are not protecting themselves from change. They are delaying the moment when they discover how much the ground has already moved under them.

So yes: use the frontier.

Try the strongest models you can get your hands on. Use the coding tools. Experiment with model-driven workflows. Learn what the systems are good at. Watch how quickly the surface is improving.

That is the right first move.

It is not the whole move.

Frontier access is only half the literacy

There is a version of AI literacy that means keeping up with model names, features, demos, and impressive use cases.

That matters.

There is a more durable version that matters more.

It means understanding the shape of the failures.

What does the system do when the record is incomplete? When the source is thin? When the prompt is underdetermined? When a citation is requested that should not be inferred? When the output sounds more complete than the underlying evidence supports?

If a lawyer touches the frontier and comes away only with the impression that the tools are astonishing, they have learned something true but incomplete.

The professional skill is learning where the astonishment stops carrying the work.

The failures are not edge cases

Legal AI discussion still treats the main failures as if they were occasional bugs in systems that otherwise reason.

That is the wrong mental model.

As I argued in LLMs Do Not Reason. Legal AI Has to Account for That., the outputs can look like reasoning without the underlying mechanism being reasoning in the professional sense lawyers rely on.

That matters because some of the most consequential legal-AI failures are not strange accidents. They are natural results of how the systems work.

Hallucinated authority is one obvious example.

But it is not the only one.

Another is what the recent Stanford paper on factual presumptuousness makes clear: systems often decide anyway when the correct answer is that more facts are needed before any decision should be made. I wrote about that in The Missing State in Legal AI: Knowing When Not to Decide.

Those are not random defects.

They are exactly the kinds of failures you should expect from systems optimized to produce plausible output from incomplete context.

The magic moment is real, and it creates a new risk

The frontier is persuasive because it creates genuine magic moments.

A lawyer sees a model summarize a dense record in seconds, draft a credible first pass of a motion section, classify a pile of intake text, or generate a working tool from a workflow description. That moment matters. It changes what the lawyer believes is possible.

But the magic moment creates a second-order risk.

It can make the work look more complete than it is.

That is where overconfidence begins.

The danger in legal work is not only that the output may be wrong.

It is that the output may be right enough, fluent enough, and well-shaped enough to tempt the user into forgetting where the last mile still lives:

  • verification
  • source tracing
  • professional judgment
  • review
  • approval

The frontier should teach possibility. It should not trick the profession into treating plausibility as completion.

What the sanctions record already shows

The sanctions data is not a side note anymore.

Damien Charlotin's running database has already documented well over a thousand hallucination incidents worldwide. The specific number will keep changing, but the underlying pattern is already clear: lawyers keep using AI in ways that let plausible output cross into legal effect without sufficient verification.

The Sullivan & Cromwell incident made the same point from the opposite end of the market. This is not only a solo-lawyer story or a novice-lawyer story. It is a workflow story. As I argued in When Policies Are Not Enough, policy and training alone do not change what the system is underneath.

If the architecture still lets fluent output move through a weak review boundary, the sophistication of the user does not eliminate the risk.

Frontier users need a second instinct

The first instinct is:

What can this do?

That is the right instinct for learning the capability.

The second instinct has to arrive quickly after:

Where does this break?

That means asking:

  • What source material constrained this output?
  • What facts would have to be verified independently?
  • What should the system have deferred instead of deciding?
  • What does the workflow record about the run?
  • What human review stands between this draft and legal effect?

These are not anti-frontier questions.

They are how you keep frontier use from collapsing into frontier theater.

The useful posture is capability plus architecture

The profession does not need more writing that says "never use the frontier."

That is not serious.

Nor does it need more writing that says "the frontier is here, so ship."

That is also not serious.

The useful posture is:

  • use the frontier to update your mental model of what is possible
  • use architecture to decide what belongs in real legal workflow

That means review boundaries, provenance, bounded retrieval, explicit state transitions, and systems that can represent uncertainty and incompleteness instead of flattening everything into an answer.
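For technically minded readers, the combination of explicit state transitions, provenance, and a review boundary can be sketched in a few lines. This is an illustration only, not a product design; every name here (DraftState, review_boundary, and so on) is hypothetical, and a real system would need far more than this.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DraftState(Enum):
    # Explicit states let the system represent "not enough facts"
    # instead of flattening everything into an answer.
    DRAFTED = auto()
    NEEDS_MORE_FACTS = auto()
    APPROVED = auto()

@dataclass
class Draft:
    text: str
    sources: list[str]  # provenance: the record material that constrained this output
    state: DraftState = DraftState.DRAFTED

def review_boundary(draft: Draft, reviewer_approves: bool) -> Draft:
    """No draft crosses into legal effect without passing this gate."""
    if not draft.sources:
        # No provenance means no decision: defer instead of deciding.
        draft.state = DraftState.NEEDS_MORE_FACTS
    elif reviewer_approves:
        draft.state = DraftState.APPROVED
    return draft
```

The design point is the middle branch: an unsupported draft cannot be approved no matter how fluent it reads, because the gate checks provenance before it ever consults the reviewer.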

It also means understanding that not every useful frontier interaction belongs in the same tier.

Some uses belong in experimentation. Some belong in internal drafting acceleration. Some belong nowhere near client- or court-facing output unless the surrounding workflow is built to carry the risk.

What lawyers should actually take from the frontier

The best lawyers using these tools well are not the ones most dazzled by them.

They are the ones whose mental model got sharper.

They see:

  • how much scaffolding can compress
  • how quickly the interface layer is moving
  • how strong retrieval-backed drafting can be
  • how easy it is to overread fluent output
  • how much the surrounding system design matters

That last point is the one the market still understates.

The frontier is worth touching because it teaches the capability.

It is also worth touching because it teaches the limits, if you are paying attention.

The lawyers and firms that benefit most will be the ones that learn both at the same time.
