Your AI Is Not Your Lawyer: How Conversations with AI Can Be Used Against You

Estimated Reading Time: 8 minutes

Most people treat AI chatbots the way they used to treat a private journal. They type their worries, their plans, and their questions into a text box and assume the conversation stays between them and the screen.

Two recent cases should change that assumption permanently.

In a federal courtroom in Manhattan, a judge ruled that a defendant's conversations with an AI platform were fair game for prosecutors. In a Tennessee murder case, ChatGPT messages were presented as key evidence at a preliminary hearing. Neither defendant had any idea those conversations would end up in a courtroom.

At Winters & Chidester, we represent people facing criminal charges throughout Central Texas. We see firsthand how quickly an investigation can turn on digital evidence, and AI chat logs are now firmly in that category. What you say to a chatbot can be used against you, and the law has made clear that it will be.

The Heppner Case: Asking an AI for Legal Help Can Break Attorney-Client Privilege

In February 2026, Judge Jed Rakoff of the United States District Court for the Southern District of New York issued a ruling that many describe as the first of its kind in the country. The question before the court was: could a criminal defendant claim that his conversations with an AI chatbot were protected by attorney-client privilege?

The answer was no.

Bradley Heppner was indicted on October 28, 2025, on charges of securities and wire fraud, conspiracy to commit securities and wire fraud, making false statements to auditors, and falsifying corporate records. When federal agents executed a search warrant at his home, they seized electronic devices containing documents generated using Anthropic's AI tool Claude.

Heppner argued those documents were protected. He said he had used AI to organize his thoughts before speaking with his lawyers, and that he later shared the outputs with his legal team. His counsel asserted that Heppner had entered information from his attorneys into the platform, used the AI to generate communications intended for counsel in order to obtain legal advice, and subsequently shared those outputs with counsel for that same purpose.

The court rejected every part of that argument.

The court held that the AI documents were not shielded by attorney-client privilege for three reasons:

  • First, the AI platform is not a licensed attorney and therefore cannot form a fiduciary attorney-client relationship.
  • Second, the communications were not confidential because the platform's privacy policy expressly notifies users that it collects data on users' inputs and the AI's outputs to train its tools and may disclose that data to third parties.
  • Third, the communications were not made for the purpose of obtaining legal advice, as Heppner communicated with Claude of his own volition and not at the direction of counsel.

The court also rejected the work product doctrine, which can shield materials prepared in anticipation of litigation. Because Heppner's counsel conceded they did not direct him to use Claude, the documents did not reflect counsel's legal strategy at the time they were created. The ruling noted that the outcome might have been different if Heppner had used the AI tool at his counsel's direction.

The lesson is clear: using AI to think through your legal situation, even if you plan to share the results with your lawyer, does not make those conversations privileged. The platform breaks confidentiality the moment you start typing.

The Darron Lee Case: AI Chats as Evidence for Prosecutors

The Heppner case was about privilege. The Darron Lee case shows what AI chat logs look like as evidence for the prosecution.

Darron Lee, 31, a former first-round NFL draft pick, is charged with first-degree murder in the death of Gabriella Perpetuo, whose body was discovered at the couple's Hamilton County, Tennessee home.

Prosecutors presented exchanges between Lee and ChatGPT sent in the hours before Perpetuo's body was found. In one message, Lee wrote that he had awakened to find his fiancée unresponsive, described injuries he claimed were self-inflicted, and asked the chatbot what to do. In another, he asked what a friend should tell someone handling a non-responsive person who did not want to call the police.

Hamilton County District Attorney Coty Wamp told the court that Lee was using ChatGPT “as a legal advisor, as a defense attorney,” and asking it for advice on “how you cover up a crime scene.”

Prosecutors say the exchanges helped them establish a timeline of events that contradicted Lee's account to deputies at the scene. Lee had told officers he found Perpetuo unresponsive and immediately called 911. The timestamps on the ChatGPT messages told a different story.

Lee's case is not an isolated incident. In October 2025, federal prosecutors in Los Angeles charged Jonathan Rinderknecht in connection with setting the brush fire that grew into the deadly Palisades Fire, citing records showing he had asked ChatGPT to generate images of cities and forests on fire. Shortly after igniting a fire on New Year's Day, he allegedly asked the chatbot whether a person could be held at fault if a fire spread from their cigarette. The chatbot conversations became part of the government's evidence.

Why People Keep Making This Mistake

There is a reasonable explanation for why people turn to AI in moments of crisis, fear, or legal uncertainty. These tools are available at any hour, they respond without judgment, and they have become embedded in daily life. People use them to plan meals, draft emails, and settle arguments. It is not a stretch to see why someone in a frightening situation might instinctively reach for the same tool.

But that instinct now carries legal risk. AI platforms are not lawyers. They do not have confidentiality obligations. Their privacy policies, almost universally, reserve the right to store, review, and in some cases share what users type. When law enforcement obtains a warrant, those records can be produced.

The intimacy of the interface creates a false sense of privacy. Typing into a chat window feels like thinking out loud. It is not. It is creating a timestamped written record that sits on a third-party server.

What The Rulings Mean If You Are Facing Criminal Charges

If you are under investigation, have been arrested, or believe you may be charged with a crime, stop communicating about your situation on any platform that is not your attorney's secure, protected channel.

That means:

  • Do not use ChatGPT, Claude, Gemini, or any other public AI tool to ask questions about your charges, your options, or what might happen at trial.
  • Do not ask an AI to help you think through what you will say to law enforcement, prosecutors, or even family members.
  • Do not use AI to draft any statement, message, or explanation related to the matter.
  • Understand that conversations you have already had with AI platforms may be obtainable by prosecutors through a search warrant or subpoena. If you have had such conversations, tell your attorney immediately so they can assess the situation.

The Right Place to Have These Conversations

Attorney-client privilege exists for a reason. It allows you to speak candidly with your lawyer about the facts of your situation, your concerns, and your options, without fear that those conversations will be turned over to the government. That protection is enshrined in law, but it depends on you having those conversations in the right place.

The right place is with a licensed attorney, in a protected communication, not with a chatbot.

At Winters & Chidester, we provide aggressive criminal defense counsel to individuals and families across Central Texas. If you are under investigation or facing charges, contact us at (512) 961-4555 or reach out online to schedule a consultation. What you tell us stays between us. What you tell an AI may not.