Four smart people, four different statements, one category error.
A tech founder said that, with AI models, you mostly need to know what you want and how to describe it.
An institutional legal leader described the experience of suddenly being able to build things she could not build before.
A legal tech investor noted that lawyers are shipping tools that look like the products funded legal AI companies are building.
The same investor later wondered whether the venture capital model itself looks different now that vibe coding has compressed the cost of software production.
Each observation is partly right, which is why the error is persuasive.
AI has compressed output production. It has not compressed system building in the same way.
What compressed
The real part should be stated plainly.
The cost of producing a working surface has collapsed. A well-designed prompt or agentic coding loop can produce a functional interface, a rough workflow, a draft document tool, a data-entry screen, a contract-review surface, or a litigation timeline prototype dramatically faster than a small team could have done a few years ago.
The capability shift is real.
Specification-to-prototype cycles that used to take weeks can now take hours. People who could not previously build software can now produce software-like artifacts. Small teams can ship more surface area. Senior engineers can move faster through scaffolding and spend more time on the parts that actually require judgment.
None of that is fake. I use these tools heavily. The compression is real.
The visible surface compressed. The system underneath did not compress with it.
What did not compress
Production infrastructure did not compress in the same way.
Authentication, authorization, tenancy, data isolation, audit logging, rate limiting, error handling, backup and recovery, retention policy, incident response, observability, payment reconciliation, permission changes, access revocation, and safe deployment practices still have to be designed and operated.
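To make a few items on that list concrete, here is a minimal sketch of tenant isolation plus audit logging, with all names hypothetical and no particular product in mind. The point is that this layer lives in code paths a generated frontend never shows:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Matter:
    matter_id: str
    tenant_id: str   # which firm or client owns this record
    contents: str

class MatterStore:
    """Hypothetical store: every read is tenant-checked and audit-logged."""

    def __init__(self):
        self._matters: dict[str, Matter] = {}
        self.audit_log: list[tuple[str, str, str, str]] = []

    def put(self, matter: Matter) -> None:
        self._matters[matter.matter_id] = matter

    def get(self, matter_id: str, requesting_tenant: str, actor: str) -> Matter:
        stamp = datetime.now(timezone.utc).isoformat()
        matter = self._matters.get(matter_id)
        if matter is None or matter.tenant_id != requesting_tenant:
            # Deny cross-tenant reads, and log the attempt before failing.
            self.audit_log.append((stamp, actor, "DENIED", matter_id))
            raise PermissionError(f"{actor} may not read {matter_id}")
        self.audit_log.append((stamp, actor, "READ", matter_id))
        return matter
```

A cross-tenant read here fails loudly and leaves an audit trail. A demo that skips this layer looks identical in the browser.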
Regulatory and professional obligations did not compress either.
SOC 2 does not become unnecessary because the frontend was generated quickly. HIPAA does not care whether a healthcare-adjacent workflow came from a senior engineer, a non-technical founder, or an AI coding assistant. Bar advertising rules do not disappear because the intake funnel was scaffolded in a weekend. Professional responsibility does not move out of the way because the tool looks polished.
The same is true for content and knowledge infrastructure. Legal content across states, practice areas, courts, procedures, and legal-aid routes still requires source discipline, review, freshness tracking, and editorial judgment. A prompt can help produce text. It cannot make an unverified legal knowledge system trustworthy.
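One way to make that source-and-freshness discipline concrete is a serving rule that withholds any statement lacking a source or past its review window. This is a minimal sketch; the field names are assumptions, not a real schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class KnowledgeEntry:
    statement: str
    source: str        # e.g. a statute, court rule, or official-site citation
    jurisdiction: str
    last_reviewed: date

def is_servable(entry: KnowledgeEntry, today: date, max_age_days: int = 365) -> bool:
    """Unsourced or stale entries are withheld, not served with confidence."""
    if not entry.source:
        return False
    return (today - entry.last_reviewed) <= timedelta(days=max_age_days)
```

The check is trivial. Keeping thousands of entries sourced and within their review windows, across jurisdictions, is the uncompressed part.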
The visible surface compressed fastest because it was always the least defensible part of the product. The hard parts are still hard.
Code was already the smaller part
Even before AI-assisted coding, writing code was only part of senior engineering work.
A senior engineer also spends time on architecture, code review, debugging production issues, incident response, technical mentorship, system design, upgrade planning, capacity management, security review, documentation, and the other work that makes production systems operate after the first version ships.
AI compressed the part of the profession that was most visible from the outside: producing code.
It did not compress the whole profession.
The legal parallel is straightforward. Drafting is not the whole of lawyering. Senior legal work also includes client counseling, strategy, negotiation, supervision, risk assessment, business development, matter selection, professional responsibility calls, and judgment about when the standard path is not the right path.
AI can compress drafting. It does not compress the full professional system around the draft.
I heard the same pattern recently from an IT director at a college. A faculty member built a Chrome extension. People got excited because the artifact worked and because the faculty member could now build something that would previously have required engineering help.
Then the operational questions landed on IT.
Who maintains it? Who reviews it for security? What data does it touch? What happens when Chrome changes an extension policy? Who supports users when it breaks? How does it interact with campus systems? What happens when the faculty member leaves or stops maintaining it?
The faculty member got the initial reward of creating the surface. The operational cost landed elsewhere.
That pattern generalizes. When the compressed surface is the rewarding part of the work and the uncompressed system is the unrewarding part, the gap between who builds and who maintains is where institutional cost accumulates.
Why the error is understandable
The people making this mistake are not stupid.
They are reasoning from what they can see.
The demo looks like the product. The generated code looks like engineering. The chat interface looks like a company. The polished output looks like a completed workflow.
If you are adjacent to a field but not responsible for operating the system, that is the natural inference. You see the thing that used to take a team six months appear in a weekend. The conclusion that "the work compressed" is understandable.
It is just incomplete.
The output does not show you the auth model. It does not show you whether tenant data is isolated correctly. It does not show you how secrets are managed. It does not show you whether logs contain client information. It does not show you whether access is revoked when an employee leaves. It does not show you how the system behaves when two users update the same record at the same time. It does not show you whether the knowledge base is sourced, current, and reviewable.
The output shows you the compressed surface.
The system is underneath.
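The concurrent-update case alone illustrates the gap. A hedged sketch of optimistic concurrency control, with hypothetical names; real systems usually push this check into the database:

```python
class StaleWriteError(Exception):
    """Raised when a write is based on a version another user already replaced."""

class Record:
    def __init__(self, value: str):
        self.value = value
        self.version = 0

def update(record: Record, new_value: str, expected_version: int) -> None:
    # Optimistic concurrency: a write carries the version it read.
    # If another write landed first, this one fails instead of silently
    # overwriting the other user's change.
    if record.version != expected_version:
        raise StaleWriteError(f"read v{expected_version}, now v{record.version}")
    record.value = new_value
    record.version += 1
```

Two users who both read version 0 cannot both win; the second write fails loudly. Nothing about a generated interface tells you whether this check exists.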
Legal makes the distinction sharper
In many domains, confusing output with system produces operational pain.
In legal, it can produce professional liability, consumer harm, privilege exposure, and bar problems.
A lawyer who ships an AI-assisted intake tool into a live practice without thinking through data handling, retention, access control, and breach response is not just running a rough prototype. That lawyer may be creating a confidentiality problem.
A firm that adopts a compressed-surface AI tool without evaluating the production architecture underneath is making decisions about client data on the basis of a demo.
A public legal-help tool that gives polished answers without source-grounding, jurisdictional boundaries, and handoff rules can create exactly the kind of false confidence access-to-justice products are supposed to reduce.
Consumer legal-access workflows are not safer just because they are free or well-intentioned. The same system-building obligation applies: source-grounded content, deterministic routing where stakes require it, visible limitations, and handoff to legal aid or attorneys when self-help is not the right next step.
Compressed output does not satisfy those obligations.
Architecture does.
The professional floor is not gatekeeping
"Lawyers should stop building" and "non-engineers should not ship software" are the wrong lessons.
The capability being unlocked is real, and the legal profession benefits when lawyers, legal-aid leaders, and operators learn to build. The standard is not who is allowed to build. The standard is what the shipped system has to satisfy when it touches consequential work. The same floor is discussed in "Build. And Draw the Lines."
Non-engineers can meet that floor. Engineers can fail to meet it. Credentials are not the standard.
Adequacy to consequences is the standard.
For legal software, that means the system needs answers to questions like:
- where does user data go?
- what is stored, logged, retained, or deleted?
- who can access matter or client information?
- what happens when output is wrong?
- what requires human review before external effect?
- what source supports a legal statement?
- what happens when a user should be routed to legal aid or an attorney instead of a self-help path?
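Some of those answers can, and should, live in deterministic code rather than model output. A minimal sketch of the routing question, where the topic names and thresholds are invented for illustration:

```python
from typing import Optional

# Hypothetical high-stakes topics where self-help is not the right next step.
HANDOFF_TOPICS = {"eviction hearing", "criminal charge", "protective order"}

def route(topic: str, days_until_hearing: Optional[int]) -> str:
    """Deterministic triage: the escalation rules are explicit and reviewable."""
    if topic in HANDOFF_TOPICS:
        return "handoff"   # refer to legal aid or an attorney
    if days_until_hearing is not None and days_until_hearing <= 7:
        return "handoff"   # deadline too close for a self-help path
    return "self-help"
```

Because the rules are code, not generated text, they can be reviewed, tested, and versioned like any other part of the system.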
Those questions are not anti-builder. They are the floor under building.
The venture-capital mistake
The same category error shows up in the "VC is dead" argument.
AI has compressed some of the work venture-backed startups used to spend a lot of money on. It is now easier to produce frontend surface area, prototype workflows, write glue code, generate internal tools, and test product ideas with a very small team.
That changes startup economics. It does not eliminate the company-building problem.
Sales motion, trust, distribution, regulatory posture, enterprise security, content defensibility, implementation support, customer success, operations, and durable infrastructure still take time and capital. Some of those pieces may become more efficient. They do not disappear because code production got cheaper.
Commodity SaaS may be under real pressure. Infrastructure ambition is not.
That distinction changes how the claims should be evaluated.
The correction
Correct the category error once, and a lot of naive claims become easier to evaluate.
"Anyone can code" becomes: more people can produce software-like artifacts, but engineering still requires specification, verification, architecture, and operational judgment.
"Lawyers are replicating Harvey" becomes: lawyers can now prototype surfaces that resemble enterprise legal AI products, but enterprise-grade systems also require tenancy, data isolation, review boundaries, provenance, security, support, and governance.
"AI kills VC" becomes: AI changes the cost structure of software production, but it does not eliminate the need to build trust, distribution, infrastructure, and defensibility.
"Legal access tools are easy now" becomes: legal-help surfaces are easier to create, but high-stakes access-to-justice workflows still need source grounding, routing discipline, and human handoff when the situation requires it.
The pattern is the same each time.
Compressed output is not compressed system.
Where the work is
The next few years will produce a lot of impressive surfaces.
Some will become real products. Many will not. The difference will not be whether the first demo looked credible. A lot of first demos now look credible. The credibility of the demo is no longer enough.
The difference will be whether the builders did the work underneath the surface:
- production infrastructure
- data handling
- source grounding
- regulatory posture
- review boundaries
- incident response
- professional-floor architecture
That work does not always show up in demos.
It shows up when the demo becomes production, which is where the actual profession lives.