In the US case of United States v Heppner, a Federal Judge ruled that AI-generated documents produced by the Defendant and subsequently sent to his attorneys were not protected by privilege.
As reported by Law360 earlier this month, Judge Jed Rakoff considered whether the Defendant could assert privilege over 31 documents he had generated using Anthropic’s LLM tool, Claude, which related to his legal case and which he had sent to his attorneys. The Defendant sought to assert both attorney-client privilege and the work-product doctrine to prevent the US Government from relying on the documents in proceedings.
On the issue of attorney-client privilege, the Judge emphatically rejected the Defendant’s arguments and accepted the US Government’s case on the following bases:
The documents were not produced by an attorney. Claude is not a qualified attorney, nor was it acting as an agent for legal advice.
The documents were not created for the purpose of obtaining legal advice. Anthropic’s “Constitution” for Claude states that the tool selects the response which “least gives the impression of legal advice” and “suggests asking a lawyer”. If asked to give legal advice, Claude will tell the user that it is not able to do so.
The documents were not confidential. Anthropic’s Privacy Policy states that prompts and outputs are used to train Claude, and that Anthropic reserves the right to share them with governmental regulatory authorities and third parties. Users therefore could not reasonably expect their conversations with Claude to be confidential.
The fact that the documents were later shared with the Defendant’s legal team does not retroactively make them privileged.
On the issue of the work-product doctrine, the Judge found that the documents were not protected by this principle either. To attract this protection, documents must have been created because of, or in anticipation of, litigation, by or for a party or its representative. In this case, the Defendant had created the documents on his own initiative.
The Judge relied on Anthropic’s Ts&Cs for Claude’s free service, which expressly state that the tool does not provide legal advice and do not guarantee confidentiality. Many other popular AI tools, including ChatGPT and Gemini, have similar Ts&Cs warning that outputs should not be treated as legal advice and that users should not expect confidentiality. The legal position is therefore likely to be the same for documents produced by these tools in similar circumstances.
Though this is a US decision, it offers insight into how an English court might approach AI-generated documents under the English law equivalents of attorney-client privilege and the work-product doctrine, namely legal advice privilege and litigation privilege:
For a document to attract legal advice privilege it must (i) be confidential, (ii) pass between a client and their lawyer, and (iii) have come into existence for the dominant purpose of giving or receiving legal advice.
For a document to be covered by litigation privilege it must (i) be confidential, (ii) be a communication between a lawyer and client or between either of them and a third party, (iii) relate to litigation which is live or reasonably contemplated, and (iv) be made for the dominant purpose of litigation.
Applying the reasoning in this case, it appears likely that documents a client generates using public AI tools in relation to its legal case will also fail to attract privilege under English law.
