Legal Tech

F1 Requires a Track

April 24, 2026

The lawyer-developer era is real.

Lawyers are building tools for their own work now. Not hypothetically. Not someday. Right now.

A contract-review helper. A local-rules tracker. An intake workflow. A chronology builder. A narrow drafting surface for a specific practice problem. A one-weekend internal tool that then saves five hours every week.

That capability matters.

The legal market should stop pretending the only future available is vendor software sold top-down into firms. A second path is visible now: lawyers identifying their own pain points, building just-in-time tools, and sharing what works with each other.

LegalQuants has become the clearest public face of that shift.

Jamie Tso and Raymond Sun use an F1 metaphor for what they are building:

  • drivers
  • car
  • track

The metaphor is good. It explains more than most legal AI writing does.

It also points to the missing piece.

F1 requires a track.

The drivers are real

The first thing worth saying clearly is that the builders are real.

A lawyer who can identify a workflow problem, specify a solution, direct AI to produce a working tool, test it against real practice needs, and improve it iteratively is doing something meaningful. That is not fake software because a foundation model was involved. It is not fake innovation because the person building is a lawyer rather than a full-time engineer.

As I wrote in Build. And Draw the Lines., the capability is real. The professional floor is real too.

The mistake is not taking the capability seriously.

The mistake is stopping there.

The car is not the whole sport

The lawyer-builder story often gets told as if the breakthrough is mostly about direct access to the car.

The old enterprise legal AI model gave lawyers a safe abstraction: a branded surface, simplified controls, and whatever guardrails the vendor decided to ship. The newer builder culture says some lawyers want more than that. They want to configure the workflow themselves, decide what the tool does, and work closer to the underlying model capability.

That instinct is right.

But F1 is not just a driver and a car.

F1 includes:

  • track design
  • pit crews
  • telemetry
  • safety systems
  • rules
  • inspection
  • mechanical discipline

No one watching an F1 race concludes that the driver also built the car, designed the track, wrote the safety rules, and engineered the telemetry.

The driver is extraordinary.

The driver is not alone.

That is the category distinction the legal AI conversation still needs.

The division of labor is the point

Lawyer-builders are learning a powerful new capability. They are not, by virtue of that capability alone, becoming senior production engineers.

That is not a criticism. It is a normal division of labor.

I have spent fifteen years building software and working around the part of the stack most users never see: architecture, failure modes, secrets, incident response, dependency risk, tenant isolation, data handling, review boundaries, and the operational consequences of getting those things wrong.

AI-assisted coding did not erase that work. It compressed the visible surface.

The same thing is now happening in legal.

The visible scaffolding compresses first. The judgment underneath does not. That is why, as I argued in Legal Is Less Bespoke Than Lawyers Want It to Be, the profession is about to discover that a lot of what felt bespoke was actually preparation around the craft.

The same distinction applies on the engineering side.

A working tool is not yet a trustworthy system.

The difference lives in the 80% underneath:

  • credential handling
  • patch cadence
  • provider boundaries
  • retention posture
  • prompt-injection exposure
  • provenance
  • review states
  • tenant scoping
  • auditability

Those are not the parts of the work most lawyer-builders want to spend their time on.

They should not have to.
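To make the list above concrete, here is a minimal sketch of the kind of provenance record that layer implies. Every name in it is hypothetical, not from any real product: the point is only that answering "which model, which sources, which matter" later requires writing it down now.

```python
from dataclasses import dataclass, field
import datetime
import hashlib


def hash_text(text: str) -> str:
    """Content hash, so a record can be checked against the documents later."""
    return hashlib.sha256(text.encode()).hexdigest()


@dataclass(frozen=True)
class ProvenanceRecord:
    """One output, pinned to its inputs. All field names are illustrative."""
    tool: str                  # which internal tool produced the output
    model: str                 # which underlying model was involved
    matter_id: str             # tenant scoping: records are keyed to one matter
    source_doc_hashes: tuple   # bounded retrieval: what the model could see
    output_hash: str           # what it actually produced
    created_at: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )
```

A record like this is what turns "auditability" from a slide word into a query: given an output, you can recover exactly which sources were in scope when it was drafted.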

Safeties are not what neuter the capability

There is a version of the current legal AI rhetoric that gets dangerous fast.

It says the tools are finally powerful, that vendor abstractions have been too constraining, that the profession has been too slow, and that what matters now is shipping.

Taken individually, parts of that are right.

Taken as a worldview, it starts treating the safeties as if they are what get in the way.

That is backwards.

In legal work, the safeties are not what reduce the capability. They are the conditions under which the capability becomes legitimate.

Review states do not neuter the model. They preserve the difference between a draft and legal effect.

Bounded retrieval does not neuter the model. It makes the output traceable and confidential.

Provenance does not neuter the model. It makes supervision possible.

Approval gates do not neuter the workflow. They keep professional judgment in the place the profession still requires it.

This is why Why Review Boundaries Matter More Than Model Choice and What Legal AI Confidentiality Actually Requires are architecture pieces, not just governance pieces.

The track's discipline is what makes speed matter.

Without the track, you do not get F1.

You get a fast car pointed at ordinary roads.

Shared infrastructure, configurable surface

The strongest case for lawyer-builders is not that every firm should reinvent its own infrastructure.

It is that lawyers should be able to configure the parts closest to the work they understand best.

That means the surface should be configurable:

  • specialists
  • workflow order
  • matter-specific logic
  • required facts
  • review paths
  • practice-specific outputs

It does not follow that every lawyer should also build their own execution layer, storage boundaries, approval state machine, or retention model.

Those are shared problems.

The right abstraction for legal AI is not a black-box vendor wrapper that hides the middle of the process. It is also not raw model access plus hope.

The right abstraction is shared infrastructure with a configurable surface.

The lawyer controls the workflow logic, the context that should reach the system, and the points where human approval matters. The infrastructure handles the production floor underneath.
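That split can be sketched as two configuration layers. The field names are hypothetical; the shape is the argument: the surface is editable per practice, the floor is fixed by the platform, and where they conflict the floor wins.

```python
# What a lawyer-builder edits per tool: workflow logic, required context,
# and where human approval sits. (Illustrative fields only.)
LAWYER_CONFIGURABLE = {
    "specialists": ["contract-review", "local-rules"],
    "workflow_order": ["intake", "draft", "review"],
    "required_facts": ["governing_law", "counterparty"],
    "review_paths": {"draft": "supervising_partner"},
}

# What the platform fixes for every tool: the production floor.
PLATFORM_FIXED = {
    "tenant_isolation": True,        # one matter cannot read another's data
    "retention_days": 30,            # provider-side retention posture
    "approval_gate_required": True,  # nothing ships without human sign-off
    "audit_log": "append_only",
}


def effective_config(surface: dict) -> dict:
    """Merge the surface over the floor; the floor always wins on conflicts."""
    return {**surface, **PLATFORM_FIXED}
```

The design choice is the merge order: a tool author can add whatever surface keys they like, but no surface setting can switch off tenant isolation or the approval gate.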

That is what a real track does.

Why this matters beyond one community

LegalQuants matters because it makes the movement visible.

But the argument is larger than one community.

The legal market is going to split between firms that let this capability stay informal and firms that give it proper infrastructure.

The first group will produce demos, shadow tools, and occasional wins mixed in with preventable failures.

The second group will produce repeatable systems that let lawyers build faster without asking them to become engineers, security teams, and platform operators at the same time.

That is where the next layer of legal AI competition sits.

Not only who has the strongest model.

Who builds the best track.

The next phase

The lawyer-developer era does not need to be argued into existence anymore.

It is here.

The question now is whether the movement gets the infrastructure it needs to scale past clever prototypes and local wins.

That is the work in front of the market:

  • respect the drivers
  • do not hide the car
  • build the track

The winners in legal AI will not be the people who force lawyers back into black boxes.

They will be the people who give lawyers real control where control matters, while building the production floor that turns working tools into trustworthy systems.

F1 requires a track.

The infrastructure legal runs on.

Guided by attorney judgment.