The previous post, Build. And Draw the Lines., made the case that lawyer-builders should build, and that the professional obligations attached to legal work travel with the tool.
It stayed at the architectural level: where the data goes, what happens when the tool is wrong, what obligations travel with the output, and where human review has to sit before legal effect.
This post goes one layer lower.
A working legal tool sits on invisible software infrastructure. Dependencies. Secrets. Network access. Package registries. Patch cadence. Logging. Supply chains. Threat models. An engineer trained on production software sees that stack as part of the product. A lawyer using an AI coding assistant to ship a tool may not see it at all.
The tool can still work, even when the system underneath is unsafe.
The foundation looks different to each profession
When a lawyer cites a fabricated case in a brief, the reaction from the legal profession is immediate. Not because the lawyer is unintelligent. Because the foundation of legal practice is that the authority has to be real. A professional who bypasses that foundation has missed something basic about the work.
Engineers have a similar reaction to software shipped without dependency review, secret management, patch cadence, or threat modeling.
From the engineering side, a lawyer-built tool running in production without those practices can look the way a brief with fabricated citations looks to the legal profession. The output may pass a surface read. The underlying work is not sound.
The point is not that one profession is better at foundations. Both are built on them. The point is that the foundations are different. A builder who crosses professional lines without learning the new foundation takes on risk they can see and risk they cannot.
One compressed example
On March 31, 2026, three unrelated software-security stories collided.
First, Anthropic shipped a Claude Code package with a source map that exposed a large amount of internal TypeScript. Reporting described the exposed code as roughly 512,000 lines across about 1,900 files. Anthropic described the event as a packaging mistake rather than a breach, and said no customer data or credentials were exposed. Even on that framing, the event is still an operational lesson: packaging, release configuration, and dependency distribution are part of the product.
Second, malicious versions of the popular axios npm package were published with a trojanized dependency named plain-crypto-js. Security researchers reported a short attack window, but short windows are still windows. Anyone installing or updating affected dependency trees during that period had to check whether the bad versions landed locally.
Third, fake repositories claiming to contain leaked Claude Code source began distributing infostealer and proxy malware to people curious enough to run them.
Three different incidents. One lesson.
Modern software is not only the code you wrote. It is the package registry, the dependency tree, the build process, the secrets on the machine, the update process, and the judgment of whoever is operating it.
An engineer with a normal production security workflow saw the axios advisory, checked lockfiles, removed the affected versions if present, rotated secrets where needed, and moved on.
A lawyer-builder running an AI-assisted contract-review tool from the same development environment might not have known there was anything to check.
This is not a criticism of the lawyer-builder. It describes the terrain.
Software is assembled
Software written in 2026 is assembled as much as it is authored.
A small Node application can import a handful of direct dependencies and pull in hundreds of transitive dependencies underneath them. Each dependency is maintained by a different person, organization, or nobody. Each updates on a different schedule. Each can be abandoned, compromised, misconfigured, or patched without the builder understanding the full downstream effect.
Engineers have spent years developing reflexes for this terrain:
- lockfiles and pinned versions
- dependency scanning
- vulnerability advisories
- patch processes
- secret scanning
- software bills of materials
- least-privilege access
- threat modeling
- blast-radius thinking
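The first reflex on that list is the cheapest to check mechanically. A sketch, assuming npm-style semver specs: flag any dependency whose version is not exactly pinned, since `^` and `~` ranges silently float to newer releases while a bare version does not.

```javascript
// Return the names of dependencies whose spec is not an exact
// x.y.z pin (so "^1.2.3", "~1.2.3", "*", "latest" are all flagged).
function unpinned(dependencies) {
  return Object.entries(dependencies)
    .filter(([, spec]) => !/^\d+\.\d+\.\d+$/.test(spec))
    .map(([name]) => name);
}
```

Exact pinning is not free; it shifts the work onto the patch process, which is the point of the reflexes further down the list.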
Those practices are not visible in the browser when the tool appears to work. The interface loads. The contract review runs. The intake form submits. The timeline generator produces a clean chronology.
The invisible stack is still there.
Where the cracks form
The operational risks are not exotic. They are ordinary software risks that become professional risks when the tool touches legal work.
Dependencies
A lawyer-builder scaffolds a contract-review tool. The generated project declares fifteen direct dependencies. Those dependencies pull in hundreds of packages underneath them.
The project now depends on maintainers the lawyer has never heard of.
An engineer knows to inspect the dependency tree before production use, pin versions, run dependency scanning, avoid abandoned packages, and subscribe to security advisories for important libraries.
A first-time builder may not know the transitive dependency problem exists. The tool works locally. The assumption is that the code is the code.
It is not. The code is also everything the code depends on.
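The gap between what the builder declared and what actually runs can be measured from the lockfile. A sketch, again assuming npm lockfile v2/v3: count the direct dependencies the project declares against the total number of packages installed.

```javascript
// Count direct vs. total installed packages from a package-lock.json.
// The root entry (key "") declares the direct dependencies; every
// other key is an installed package, direct or transitive.
function dependencyCounts(lockfile) {
  const root = lockfile.packages?.[""] ?? {};
  const direct = Object.keys(root.dependencies ?? {}).length;
  const total = Object.keys(lockfile.packages ?? {}).filter((p) => p !== "").length;
  return { direct, total };
}
```

On a typical small Node project, `total` lands one or two orders of magnitude above `direct`. The difference is the part of the system nobody reviewed.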
Supply chain
The axios incident was a supply-chain event. The attacker did not need to break into every affected application. The compromise moved through the normal package installation path.
Earlier incidents such as event-stream and colors.js taught engineers the same lesson: the dependency ecosystem is part of the system.
A production team usually has some combination of Dependabot, Renovate, security advisories, lockfile review, and patch cadence. A tool that was vibe-coded in February and ignored until July has whatever vulnerabilities accumulated during the gap.
The tool may still work.
Most compromised tools do.
Secrets
An AI-assisted legal tool usually needs credentials. A model-provider key. A storage key. A database URL. A payment key. A signing credential. These strings let software act as the tool owner.
Engineers handle secrets with rules. Do not commit them. Do not paste them into chat. Do not share them in email. Scope them narrowly. Rotate them. Audit their use. Keep production keys out of local development where possible.
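Two of those rules can be sketched in code: load secrets from the environment so they never live in the source tree, and scrub them before anything reaches a log. The variable name below is hypothetical.

```javascript
// Read a secret from the environment and fail fast if it is missing.
// Failing at startup beats a half-working tool with a blank key.
function requireSecret(name) {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required secret: ${name}`);
  return value;
}

// Replace every occurrence of a secret before text reaches a log,
// so a key cannot escape through error messages or debug output.
function redact(text, secret) {
  return text.split(secret).join("[REDACTED]");
}
```

Pair this with a `.gitignore` entry for `.env` and the accidental-commit failure mode described below becomes much harder to hit.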
A first-time builder working quickly with an AI assistant can commit a .env file by accident. Public code hosts scan for exposed secrets. So do attackers. A live API key committed publicly can be copied and used quickly.
The tool still works after the key is copied. The builder may not know the credential has escaped until the bill spikes, the provider disables the key, or sensitive data has already been touched.
Prompt injection
A contract-review tool takes contract text as input and asks a model to analyze it. The tool exists to process text supplied by someone else.
Prompt injection is what happens when the input contains instructions aimed at the model rather than the human. A hostile contract, email, intake submission, or uploaded document can try to shape the model's behavior from inside the content the tool was built to process.
Prompt injection is not a theoretical edge case. It is a normal property of systems that combine natural-language input, model reasoning, and tool access.
The defenses are architectural. Treat user-controlled input as hostile. Do not let model output drive privileged actions without human approval. Scope tool access narrowly. Log enough to reconstruct what happened. Keep external effect behind a review boundary.
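The review boundary in that list can be made concrete. A minimal sketch, with hypothetical action names: model output may only propose actions from a small allowlist, and anything with external effect is parked for a human instead of executed.

```javascript
// Actions the tool is allowed to propose at all.
const ALLOWED = new Set(["summarize", "flag_clause", "draft_email"]);
// Actions with external effect, which must wait for human approval.
const NEEDS_APPROVAL = new Set(["draft_email"]);

// Vet a model-proposed action before anything executes. Unknown
// actions are rejected outright, so injected instructions cannot
// invent new capabilities; privileged ones queue for review.
function vetProposedAction(action) {
  if (!ALLOWED.has(action.type)) {
    return { status: "rejected", reason: `unknown action: ${action.type}` };
  }
  if (NEEDS_APPROVAL.has(action.type)) {
    return { status: "pending_human_review", action };
  }
  return { status: "approved", action };
}
```

The important property is that the allowlist lives in code the model cannot rewrite. A hostile contract can say anything it likes; the vetting function does not read the contract.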
A lawyer-builder may not know to design for that failure mode. The model reads the contract. The model produces analysis. The lawyer sees polished output.
In a legal context, the attacker is not hypothetical. A contract supplied by opposing counsel, an email from an adverse party, or an intake submission from a bad-faith user can shape what the tool does and what the lawyer sees.
Patch cadence
Security is continuous. Vulnerabilities are disclosed constantly. Patches are published constantly. Attackers read the same advisories as defenders.
A production system needs a person or process that applies security patches on a schedule. The cadence does not need to be theatrical. It does need to exist.
A lawyer-builder who ships a tool and moves on to the next matter may have no patch process. Six months later, the dependency tree can contain disclosed critical vulnerabilities.
The tool still works, which is why the risk persists.
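A minimal cadence can be a gate rather than a calendar. The sketch below takes a severity summary of the kind dependency scanners report and decides whether a deploy should proceed; the thresholds are an assumption, not a standard.

```javascript
// Decide from a vulnerability severity summary whether to block.
// criticals block outright; highs warn; anything else passes.
function patchGate(severityCounts) {
  const { critical = 0, high = 0 } = severityCounts;
  if (critical > 0) return "block: patch critical vulnerabilities first";
  if (high > 0) return "warn: schedule high-severity patches";
  return "ok";
}
```

Wired into a deploy script, this turns "someone should check eventually" into "the tool does not ship with known criticals," which is the minimum a patch process needs to guarantee.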
Threat model
Every production system should be able to answer a simple question:
What happens if this tool is wrong, compromised, or used against us?
The answer shapes the architecture. Who can reach the tool? What can they put into it? What can the tool do in response? What data can it read? What output can it send? What is the blast radius if one piece fails?
If no one has asked those questions, the system has still answered them. The answer is just uncontrolled.
Competence changes when the lawyer becomes the builder
ABA Model Rule 1.1 Comment 8 requires lawyers to keep abreast of the benefits and risks of relevant technology. ABA Formal Opinion 512 extends that duty into generative AI.
For a lawyer using a vendor product, the competence question is mostly a vendor-diligence question. Where does the data go? What does the contract say? What security controls exist? What review obligations remain with the lawyer?
For a lawyer building the tool, the question expands.
The lawyer is no longer only evaluating someone else's system. The lawyer is operating a system. The lawyer does not have to become a security engineer, but the operational layer needs an owner.
Someone has to know what dependencies are running, where the secrets live, how patches happen, how hostile input is handled, what logs exist, and what the blast radius is if something fails.
If the tool touches live client work, "the AI wrote the code" will not answer those questions.
Three responsible paths
Lawyer-builders and other vibe coders have real options.
The first is to work with an engineer. The lawyer keeps the domain knowledge, workflow judgment, and professional responsibility. The engineer helps own the operational layer: dependency review, secrets, patch process, access control, logging, deployment, and incident response.
The second is to keep the tool inside a sandbox where the risk is limited. Internal use only. No client data unless the storage and access model are understood. No external effect. No filing, signing, sending, routing, or legal conclusion that outruns human review. Smaller blast radius, lower operational burden.
The third is to build on infrastructure that already owns more of the operational layer. Managed AI gateways, hosted workflow systems, compliance-aware orchestration layers, and legal-specific platforms can reduce the amount of security and operations work each individual builder has to carry alone.
Shared legal infrastructure matters for this reason. If every lawyer has to become a production engineer before building useful tools, the capability will concentrate at the firms that can afford full-time engineering teams. If the operational baseline is handled once, more firms and legal-aid organizations can build safely on top of it.
The responsibility does not disappear. The surface area becomes manageable.
Build. And check the foundation.
This is not an argument against lawyer-builders.
It is an argument against confusing a working surface with a safe system.
The lines from Build. And Draw the Lines. still apply: what you ship into a live matter, whose data is in it, what happens when it fails, what sources support its output, and what obligations travel with the result.
At the operational layer, the same lines become more technical.
What dependencies is the tool running on? Where are the secrets? What happens when a package ships a vulnerability? What hostile input can reach the model? What can the model do in response? How are patches applied? How large is the blast radius?
These questions do not make lawyer-builders stop building.
They separate tools whose surface works from tools whose system can be trusted.
Build.
And check the foundation.
Sources
- The Verge: Anthropic accidentally leaked a large part of its Claude Code source
- Tom's Hardware: axios npm package compromised in supply-chain attack
- StepSecurity: axios compromised on npm
- Snyk: axios npm package compromised
- Microsoft Security Blog: Mitigating the axios npm supply-chain compromise
- Bitdefender: fake Claude Code leak repositories distributing malware