A pivotal ruling by a federal judge in New York has set off alarms within the legal sector, as it declared that private exchanges between a fraud suspect and Anthropic’s Claude could be accessed by prosecutors. This decision has prompted a rapid response from numerous major U.S. law firms.
Over a dozen prominent law firms have since issued advisories to their clients, cautioning them about using AI chatbots like Claude and ChatGPT for legal matters, as these interactions lack any form of legal protection. Some firms are now incorporating this warning directly into client contracts before representation starts.
Reuters reports that Sher Tremonte, a New York-based firm that frequently represents white-collar defendants, added a clause to its March engagement agreements stating that sharing privileged communications with third-party AI platforms could nullify attorney-client privilege. This is considered one of the first instances in which a judicial decision has been directly translated into a contractual obligation for clients.
Alexandria Gutiérrez Swette of Kobre & Kim told Reuters, ‘We are telling our clients: You should proceed with caution here.’ Other firms are swiftly establishing guidelines. Reuters notes that O’Melveny & Myers and others recommend using only ‘closed,’ enterprise-grade AI systems, while acknowledging that such systems remain untested in court.
Debevoise & Plimpton offered tactical advice, suggesting clients state within the chatbot prompt that they are conducting research under a lawyer’s direction for specific litigation. The aim is to potentially invoke the Kovel doctrine, which may extend attorney-client privilege to non-lawyers acting as agents of the attorney.
This urgency traces back to United States v. Heppner, decided by Judge Jed Rakoff in February. Bradley Heppner, former chair of bankrupt GWG Holdings, had used Anthropic’s Claude on his own to strategize his defense after receiving a grand jury subpoena. The FBI seized 31 documents generated from those interactions. Rakoff determined the documents were not protected for three reasons: Claude is not an attorney, Anthropic can share user data with third parties including government bodies, and Heppner acted of his own accord, without counsel’s direction.
Rakoff’s decision marked a first-of-its-kind opinion in the U.S. regarding AI and attorney-client privilege, serving as a wake-up call for lawyers who had seen clients turn to chatbots without considering courtroom implications. Rakoff suggested that if counsel directed the use of Claude, it might function akin to a professional acting under an attorney’s direction.
The legal landscape is still evolving. In Warner v. Gilbarco, a court ruled that a self-represented plaintiff’s ChatGPT dialogues were protected as work product since AI tools are ‘tools, not persons,’ and sharing information with software isn’t equivalent to disclosing it to an adversary. A Colorado court supported this in Morgan v. V2X on March 30, protecting a pro se litigant’s AI-generated work product but requiring disclosure of the AI tool used and prohibiting confidential materials from being inputted into data-training platforms.
The emerging pattern indicates that represented parties using consumer AI chatbots independently are at risk, while self-represented civil plaintiffs may have more protection. This distinction has become a critical issue in U.S. evidence law.
Justin Ellis of MoloLamken told Reuters that future rulings will further define when AI chats can serve as evidence. In the interim, the practical guidance is taking shape in engagement letters and client advisories, with lawyers now cautioning clients about what they type into chatbots, given the potential for third-party access.
Meanwhile, the Los Angeles Superior Court is piloting AI tools for judges to manage case summaries and draft rulings—technology that’s entering legal workflows from both the bench and the client side. Decryption has also explored privacy-focused AI alternatives designed to avoid centralizing conversation data, a category now facing significant real-world scrutiny.