United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y.), is one of the
first federal decisions to directly address privilege and work product in the
context of generative AI. Judge Jed Rakoff ruled from the bench on February 10,
2026, and issued a written memorandum on February 17, 2026.
The decision matters, but it should be read carefully.
Heppner is not a holding that AI can never be used in legal work. It is not a holding that every AI-assisted draft loses privilege. It is a fact-specific warning about what happens when a person independently uses a public consumer-facing AI tool for case strategy and expects traditional privilege doctrines to do the rest.
That makes it highly relevant for legal AI architecture.
What the court actually held
The court held that certain written exchanges between the defendant and Anthropic's consumer version of Claude were protected by neither the attorney-client privilege nor the work product doctrine.
The court's reasoning turned on a few specific points:
- the communications were not between client and attorney
- the communications were not confidential in light of the third-party platform and its policy terms
- the defendant was not acting at counsel's direction when he used the tool
- the resulting materials were not prepared by or at the behest of counsel in a way that supported work-product protection
Those are not small details. They are the case.
What Heppner does not mean
The easy but sloppy takeaway is: AI destroys privilege.
That is not what the opinion says.
Heppner is better read as a warning against a specific pattern:
- a public consumer AI tool
- direct user input of sensitive litigation material
- no controlled system boundary around what is sent
- no attorney-directed workflow
- no protected internal review path before outside effect
That distinction matters. A legal AI platform is not evaluated only by whether a model can draft something useful. It is evaluated by the system wrapped around the model.
The architecture question the case forces
Once you read Heppner as a systems case rather than just an AI case, the real question becomes:
What boundaries exist between sensitive legal work, the model, and the final output?
That is where legal AI architecture starts to matter.
1. Public chat interfaces are not a legal workflow
Heppner is a reminder that a public AI conversation window is not the same thing as a controlled legal system.
If users can freely paste matter facts, strategy, and privileged material into a consumer tool, the legal boundary is already weak before the model says anything.
That is why serious legal AI cannot just be:
- a general chat box
- broad document uploads
- a promise that lawyers should be careful
The workflow has to do some of the work.
2. Confidentiality is a system design problem
The court focused heavily on confidentiality and the role of the third-party platform. That means legal AI products cannot treat confidentiality as a procurement footnote.
The architecture implications are straightforward:
- limit what information reaches a model run
- keep firm and matter context in controlled application-layer stores
- make retrieval bounded and task-specific rather than open-ended
- distinguish between internal draft state and externally effective output
This is one reason the useful question is not simply "hosted model or local model?" The more important question is what the surrounding system permits, loads, stores, and exposes.
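The bounded-context idea above can be sketched in a few lines. This is a minimal illustration, not a real product API: the task names, field names, and `ModelRunInput` type are all hypothetical, standing in for whatever application-layer gate a real system would use to decide what matter context may reach a model run.

```python
from dataclasses import dataclass

# Hypothetical per-task allowlists: a model run only receives the context
# fields the task actually needs, never the whole matter file.
TASK_CONTEXT_ALLOWLIST = {
    "summarize_filing": {"filing_text"},
    "draft_discovery_response": {"filing_text", "discovery_requests"},
}

@dataclass
class ModelRunInput:
    task: str
    context: dict  # only allowlisted fields, by construction

def build_model_input(task: str, matter_context: dict) -> ModelRunInput:
    """Load only the context fields this task is allowed to see."""
    allowed = TASK_CONTEXT_ALLOWLIST.get(task)
    if allowed is None:
        raise ValueError(f"unknown task: {task}")
    bounded = {k: v for k, v in matter_context.items() if k in allowed}
    return ModelRunInput(task=task, context=bounded)

matter = {
    "filing_text": "text of the filing",
    "strategy_memo": "privileged strategy notes",  # stays in the app layer
    "discovery_requests": "the requests",
}
run = build_model_input("summarize_filing", matter)
assert "strategy_memo" not in run.context
```

The point of the sketch is that the confidentiality boundary is enforced by construction: the strategy memo never becomes eligible to leave the application layer, regardless of what the user or the model asks for.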
3. Review cannot be a slogan
Heppner is not mainly a hallucination case. It is a control-boundary case.
That matters because many legal AI systems still treat review as an informal expectation:
- the model drafts
- the user is "supposed" to check it
- the system does not really enforce the difference between draft and final
That is too weak for legal work.
A legal AI system should make the review boundary legible:
- what was generated
- what context was used
- what remains draft
- what was edited
- what was approved
Without that, review language does not amount to much.
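A legible review boundary can be made concrete as an explicit state machine with an audit trail. The following is a hedged sketch, not any particular product's implementation; the state names and `ReviewedDocument` class are illustrative.

```python
from datetime import datetime, timezone

# Hypothetical review states and their allowed transitions. "approved" is
# terminal: nothing silently moves back to draft after approval.
ALLOWED_TRANSITIONS = {
    "generated": {"draft"},
    "draft": {"edited", "approved"},
    "edited": {"approved"},
    "approved": set(),
}

class ReviewedDocument:
    def __init__(self, text: str, model_context_ids: list[str]):
        self.text = text
        self.model_context_ids = model_context_ids  # what context was used
        self.state = "generated"                    # what was generated
        self.audit_log = [("generated", None, self._now())]

    @staticmethod
    def _now() -> str:
        return datetime.now(timezone.utc).isoformat()

    def transition(self, new_state: str, actor: str) -> None:
        """Record every state change: who moved the document, and when."""
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"cannot move {self.state} -> {new_state}")
        self.state = new_state
        self.audit_log.append((new_state, actor, self._now()))

doc = ReviewedDocument("draft answer text", model_context_ids=["filing_text"])
doc.transition("draft", actor="system")
doc.transition("edited", actor="associate@firm")
doc.transition("approved", actor="partner@firm")
```

Each of the five questions in the list above maps to a field or log entry here: what was generated, what context was used, what remains draft, what was edited, and what was approved are all answerable from the object itself rather than from informal expectation.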
4. Counsel-directed workflows matter
One important feature of the opinion is the court's attention to the fact that Heppner acted on his own rather than at counsel's direction.
That does not mean every attorney-directed use of AI is automatically protected. It does mean workflow design matters.
A legal AI system built for professional use should look less like a consumer conversation and more like a supervised process with:
- defined tasks
- known users and roles
- visible review states
- auditability
- constrained output paths
That is not just a usability preference. It affects how the law will view the system.
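The role and output-path constraints in that list can also be made enforceable rather than advisory. The sketch below assumes hypothetical roles and function names: only an attorney can approve, and the only path out of the system refuses anything not approved.

```python
# Hypothetical role map; in a real system this would come from the
# firm's identity provider, not a module-level dict.
USER_ROLES = {
    "associate@firm": "attorney",
    "paralegal@firm": "staff",
}

def approve(doc_state: str, user: str) -> str:
    """Only a reviewing attorney can move an edited draft to approved."""
    if USER_ROLES.get(user) != "attorney":
        raise PermissionError(f"{user} may not approve documents")
    if doc_state != "edited":
        raise ValueError("only reviewed, edited drafts can be approved")
    return "approved"

def export(doc_state: str, text: str) -> str:
    """The constrained output path: draft material cannot leave the system."""
    if doc_state != "approved":
        raise ValueError("draft material cannot leave the system")
    return text

state = approve("edited", "associate@firm")
final = export(state, "final answer text")
```

The design choice this illustrates is the one the opinion rewards: supervision is a property the system checks on every output path, not a norm the user is trusted to remember.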
Where Heppner fits with ABA Formal Opinion 512
ABA Formal Opinion 512 and Heppner point in the same direction.
Opinion 512 frames the lawyer's duties: competence, confidentiality, supervision, candor, and reasonable fees.
Heppner shows what happens when those duties meet a public AI workflow with weak boundaries.
Together they suggest that legal AI should be judged less by raw draft quality and more by whether the surrounding system makes competent use, confidentiality, supervision, and review easier to satisfy in practice.
The practical takeaway
The lesson from Heppner is not "never use AI."
The lesson is:
- do not confuse consumer AI access with legal infrastructure
- do not assume later review repairs a weak confidentiality boundary
- do not assume privilege doctrine will stretch to cover a workflow the system itself does not control
For firms and legal departments evaluating legal AI, drafting quality is only a small part of the analysis.
The harder issues are operational:
- where sensitive information lives
- what reaches the model
- what review boundary exists before legal effect
- what records exist of generation, editing, and approval
- what kind of workflow the system actually enforces
That is why Heppner is so relevant. It is not just a case about one defendant's use of Claude. It is a case about the difference between a public AI interface and a legal system.