Lawyers have always used tools to capture conversations. Dictation machines, tape recorders, court reporters, and video are not new.
What is new is what happens after the recording.
AI-powered meeting and note-taking tools do not just record. They transcribe, summarize, organize, and often store the contents of the conversation on third-party infrastructure. That creates a question many lawyers still are not asking carefully enough:
When a third-party AI system processes a privileged attorney-client conversation, what happens to confidentiality and privilege?
The point is not that there is already one clean answer across every jurisdiction. There is not. The point is that lawyers should not treat these tools like neutral recorders when the processing layer changes the risk.
The Easy Question and the Hard One
The easy question is consent.
Lawyers already know, or should know, that states differ on recording-consent rules. Some jurisdictions permit recording with only one party's consent. Others require all parties to consent, at least in some circumstances. That landscape has existed for years.
The harder question is what happens after the capture.
When an AI note-taking tool processes a client conversation, the communication may be:
- transmitted to a vendor-operated cloud service
- stored under the vendor's retention and deletion rules
- processed by systems the lawyer does not control
- summarized in ways that can change nuance or emphasis
That does not automatically answer the privilege question in every case. But it does mean the lawyer has introduced a third-party processing layer into a communication the client may reasonably understand as highly confidential.
That alone should trigger more scrutiny than many current legal-AI workflows receive.
Why the New York City Bar Opinion Matters
One of the clearest sources here is the New York City Bar Association's Formal Opinion 2025-6, which addresses ethical issues raised by AI tools used to record, transcribe, and summarize lawyer-client conversations.
The opinion is useful because it does not reduce the issue to a simple question of whether recording is legal in a one-party-consent jurisdiction. Instead, it focuses on broader professional-responsibility concerns, including deception, client expectations, confidentiality, competence, and the need to understand how the tool actually works.
That matters because a lawyer could be in a jurisdiction where secret recording is not categorically prohibited by criminal law and still face a separate ethics problem.
The legality of recording and the ethics of recording are not the same thing.
Confidentiality First, Privilege Next
The strongest immediate framing is not "privilege is definitely waived" or "privilege definitely survives." Both are broader legal conclusions than current public sources support as a universal rule.
The better framing is simpler:
- confidentiality is the foundation of privilege
- third-party processing can complicate confidentiality
- lawyers therefore need to understand the tool, the vendor, and the context before using it in privileged conversations
If a lawyer does not know:
- where the audio goes
- how long it is stored
- who can access it
- whether the provider uses it for system improvement
- whether the account tier changes the data-handling posture
then the lawyer is not making an informed judgment about one of the most sensitive categories of information in legal practice.
That is a competence and risk-management problem before it is anything else.
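For teams that want to make that due diligence explicit, the unanswered questions above can be written down as a simple checklist that treats any unknown as disqualifying. A minimal sketch in Python; the field names are illustrative, not drawn from any vendor's actual terms:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VendorDiligence:
    """Answers a lawyer should have before using an AI note-taker
    on privileged conversations. All fields are illustrative."""
    audio_destination: Optional[str] = None       # where the audio goes
    retention_period: Optional[str] = None        # how long it is stored
    access_controls: Optional[str] = None         # who can access it
    used_for_training: Optional[bool] = None      # used for system improvement?
    tier_changes_handling: Optional[bool] = None  # does the account tier change data handling?

    def informed(self) -> bool:
        """An informed judgment requires every question answered."""
        return all(v is not None for v in vars(self).values())

# One answered question is not an informed judgment.
partial = VendorDiligence(audio_destination="vendor cloud, US region")
print(partial.informed())  # → False
```

The design choice is deliberate: the default posture is "not informed," and nothing flips until every question has an answer, mirroring the point that silence from a vendor is itself an answer.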
The Relationship Risk Is Not Just Technical
There is also a client-relationship problem that technical teams tend to underestimate.
Clients who know a conversation is being recorded and processed by a third-party system may speak differently. They may become more guarded. They may leave out facts that are embarrassing, emotionally difficult, or legally ambiguous. Those are often the exact facts a lawyer most needs to hear.
So even where a recording workflow is lawful and ethically defensible, it may still be a poor fit for certain conversations.
That is why this should be treated as a risk gradient, not a binary tool policy.
Low-risk factual intake is one thing. Strategy discussions, settlement conversations, internal witness preparation, or criminal-defense facts are another.
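A risk-gradient policy of that kind can be made concrete as a simple tiering rule that defaults to the most restrictive treatment. A hypothetical sketch; the categories and tier names are invented for illustration, not a standard:

```python
# Hypothetical mapping from conversation type to recording policy.
# Anything unclassified falls through to the most restrictive tier.
RISK_TIERS = {
    "factual_intake": "ai_notes_ok",
    "strategy_discussion": "human_notes_only",
    "settlement_conversation": "human_notes_only",
    "witness_preparation": "human_notes_only",
    "criminal_defense_facts": "no_recording",
}

def recording_policy(conversation_type: str) -> str:
    # Default-deny: unknown conversation types get no recording at all.
    return RISK_TIERS.get(conversation_type, "no_recording")

print(recording_policy("factual_intake"))  # → ai_notes_ok
print(recording_policy("board_dispute"))   # → no_recording
```

The important property is the default: a binary tool policy asks "is the tool allowed?", while a gradient asks "is the tool allowed for this conversation?" and answers no whenever the conversation has not been classified.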
The Client-Side Version of the Same Problem
The issue also runs in the other direction.
Clients increasingly want to record or summarize conversations with lawyers so they can remember what was said. That is understandable. Legal conversations are dense, stressful, and easy to misremember.
But if the client uses an AI note-taking product to process the conversation, the same questions return:
- where did the content go
- who can access it
- how long is it retained
- what was done to it
The privilege belongs to the client, but lawyers still have obligations around protecting confidential communications and managing the relationship carefully. That means lawyers should not ignore the possibility that the client may be using recording or summarization tools on their side.
The Summary Problem
There is another risk here that is not strictly about privilege.
AI summaries are not transcripts. They are interpretations.
A careful, qualified statement about litigation risk can turn into a cleaner, stronger bullet point than the lawyer actually intended. A sentence like "we have arguments, but there are real risks" can come back as something closer to "strong case with good chance of success."
If the client later relies on the summary rather than the actual conversation, the summary can become the client's memory of the advice.
That is not a privilege doctrine problem. It is still a very real legal practice problem.
What Lawyers Should Actually Do
The practical takeaway is not "never use AI note-takers."
It is:
- Understand the tool. Lawyers should know how the provider handles storage, retention, access, deletion, and model-related processing before using it in sensitive conversations.
- Match the tool to the conversation. Routine factual intake is not the same as high-stakes privileged strategy.
- Set expectations early. Recording and AI-processing policies should not be left implicit.
- Do not treat vendor convenience as a substitute for judgment. A clean user interface is not the same thing as a safe professional workflow.
- Know the jurisdiction. Recording-consent laws and ethics treatment are not uniform.
The System-Boundaries Question Again
This is ultimately the same governance issue that keeps appearing across legal AI:
What can the system touch?
A recording tool that touches privileged client conversations is not a trivial productivity layer. It is a system with access to some of the most sensitive information in legal practice.
That means the right questions are architectural, not just procedural:
- what does the system capture
- where does it send the content
- what does it store
- what does it transform
- what can other people or systems access afterward
Lawyers who ask those questions early will be in a much stronger position than lawyers who adopt these tools as if they were only faster note-taking.
The tools may be useful. The scrutiny is still mandatory.
FlowCounsel is the AI-native operating system for legal teams. FlowLawyers is the consumer-facing legal help platform with attorney discovery, legal aid routing, state-specific legal information, and document tools. Neither provides legal advice. Attorney supervision of all AI output is required.