ABA Formal Opinion 512 and United States v. Heppner should be read together.
Opinion 512, issued on July 29, 2024, is the clearest ABA statement of the professional duties lawyers still carry when generative AI is part of the work. Heppner, decided in the Southern District of New York on February 17, 2026 after a bench ruling on February 10, 2026, shows what happens when those duties meet a public AI workflow with weak legal boundaries.
One gives the framework. The other gives the warning.
Taken together, they do something more useful than either authority does alone. They turn a fuzzy conversation about "responsible AI" into a concrete standard for legal workflows. The question stops being whether a system feels helpful and becomes whether it creates a professional environment that a lawyer can defend.
What ABA 512 does
Formal Opinion 512 does not ban AI. It sets the baseline duties lawyers remain responsible for when AI is used in representation.
Those duties include:
- competence
- confidentiality
- communication with clients when appropriate
- candor toward tribunals
- supervisory responsibilities
- reasonable fees and expenses
That is why 512 is more useful now than when it first came out. The novelty is gone. What remains is the architecture question underneath it.
The official opinion is here:
What Heppner does
Heppner is not a general anti-AI decision. It is a privilege and work-product decision about a specific kind of workflow.
Judge Rakoff held that, on the facts before the court, written exchanges between the defendant and the consumer version of Anthropic's Claude were protected neither by the attorney-client privilege nor by the work-product doctrine.
The court focused on a few points:
- the exchanges were not communications between attorney and client
- they were not confidential in light of the third-party platform and its terms
- the defendant was not acting at counsel's direction in using the tool
- the resulting materials did not fit the claimed work-product theory
The opinion is here:
Read together, they send a clear message
512 says lawyers remain responsible.
Heppner shows that a public AI workflow does not somehow relieve them of that responsibility. If anything, it makes the need for boundaries more obvious.
Together, 512 and Heppner shift attention away from demo quality and back toward workflow quality.
The issues that matter are concrete:
- what information reaches the model
- what remains inside controlled application-layer storage
- what review boundary exists before legal effect
- what records exist of generation, editing, approval, and use
- what kind of workflow the system is actually enforcing
Those are the conditions under which legal AI becomes usable in practice rather than merely interesting in a demo.
Four practical requirements that follow
1. Review has to be a real state, not an expectation
Opinion 512 makes clear that lawyers cannot blindly rely on AI output. Heppner shows why weak workflow assumptions are dangerous.
That means draft and final should not be the same thing.
A legal AI system should make clear:
- what is draft
- what is pending review
- what was edited
- what was approved
- what has external effect
If the product treats review as a soft suggestion, it is built on the wrong assumption.
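A minimal sketch of what "review as a real state" can mean in code, assuming a simple state machine. The state names and transition rules here are illustrative, not drawn from any particular product or from the authorities discussed above:

```python
from enum import Enum, auto


class DocState(Enum):
    """Illustrative lifecycle states for AI-generated work product."""
    DRAFT = auto()           # raw model output, no legal effect
    PENDING_REVIEW = auto()  # queued for attorney review
    EDITED = auto()          # changed by a reviewer, still internal
    APPROVED = auto()        # signed off by a responsible lawyer
    EXTERNAL = auto()        # filed, sent, or otherwise operative


# The only permitted transitions; everything else is rejected.
ALLOWED = {
    DocState.DRAFT: {DocState.PENDING_REVIEW},
    DocState.PENDING_REVIEW: {DocState.EDITED, DocState.APPROVED},
    DocState.EDITED: {DocState.PENDING_REVIEW},
    DocState.APPROVED: {DocState.EXTERNAL},
    DocState.EXTERNAL: set(),
}


def transition(current: DocState, target: DocState) -> DocState:
    """Enforce the review boundary: there is no path from DRAFT to EXTERNAL."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target
```

The point of a hard transition table is that "skip review" is not a choice a user can make in the moment. It is a path the system simply does not have.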
2. Confidentiality is a system-boundary question
Heppner is a reminder that confidentiality is not preserved by wishful thinking. It depends on the actual workflow and the actual third-party relationship.
That pushes legal AI design toward:
- bounded retrieval
- scoped data access
- deliberate prompt assembly
- controlled storage boundaries
- visible provider roles and data paths
The practical issue is not just whether a model is hosted by a third party. It is whether the system controls what reaches that model and what remains outside it.
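One hedged illustration of that boundary, assuming a hypothetical `assemble_prompt` helper that decides field by field what may leave the application layer. Every name here is invented for the sketch:

```python
import re
from dataclasses import dataclass


@dataclass
class MatterRecord:
    matter_id: str      # stays in application-layer storage
    client_name: str    # never crosses the model boundary
    facts_summary: str  # redacted text that may be sent


SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def assemble_prompt(record: MatterRecord, task: str) -> str:
    """Build the model input deliberately instead of forwarding raw records.

    Only fields explicitly marked as sendable cross the boundary, and even
    those pass through a redaction step first. Note what is absent:
    matter_id and client_name never appear in the output.
    """
    redacted = SSN_RE.sub("[REDACTED-SSN]", record.facts_summary)
    return f"Task: {task}\n\nFacts (redacted):\n{redacted}"
```

What matters in a real system is not this particular redaction rule. It is that the boundary is an explicit, reviewable code path rather than a default that forwards everything.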
3. Provenance matters because supervision matters
Opinion 512 keeps supervisory responsibility with the lawyer. That duty is hard to satisfy if the system hides its own process.
A serious legal AI workflow should make provenance legible:
- what task ran
- what source material was used
- what output was produced
- what changed during review
- what became operative
Without that, supervision becomes performative instead of operational.
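A sketch of what a legible provenance record might contain, mapping one append-only log entry to the items in the list above. The schema is an assumption for illustration, not a standard:

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvenanceEntry:
    """One append-only record per generation-and-review cycle."""
    task: str                       # what task ran
    sources: list[str]              # what source material was used
    output_hash: str                # fingerprint of what was produced
    edits: list[str] = field(default_factory=list)  # what changed in review
    approved_by: str | None = None  # who made it operative, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        # JSON lines append cleanly and are easy to audit later.
        return json.dumps(asdict(self))
```

A record like this is what lets a supervising lawyer answer, after the fact, what the system did and who approved it.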
4. Consumer AI and legal infrastructure are not the same category
Heppner is a good example of what happens when people blur those categories.
A consumer chat tool can be useful. It is not, by itself, legal infrastructure.
Legal infrastructure requires more:
- role boundaries
- review states
- bounded retrieval
- confidentiality-aware workflows
- auditable output paths
That distinction is increasingly where legal AI products will stand or fall.
The standard buyers should use
Managing partners, legal ops teams, and in-house counsel should evaluate these systems against a stricter standard than headline productivity claims.
The useful test is whether the system makes legal duties easier or harder to satisfy in the actual workflow.
512 and Heppner point toward the same conclusion:
- a useful legal AI system must be reviewable
- a safe legal AI system must be bounded
- a professional legal AI system must be auditable
That is a much better buying framework than "Which model does it use?"