A courtroom in Seoul, fall 2025. The defendant — a middle-aged woman, a former caregiver. The charge: poisoning an elderly patient. The defense insists: an accident, medication overdose, a tragic confluence of circumstances. Then the prosecutor puts screenshots on the screen. Not a text conversation with a friend. Not Google search queries. A ChatGPT chat history.
“What happens if you mix sleeping pills with alcohol?” “What dose of zolpidem is considered lethal?” “What do the symptoms of sleeping pill poisoning look like?”
The charge is upgraded from manslaughter to murder. The key evidence — not DNA, not fingerprints, not witness testimony. A conversation with a chatbot.
Welcome to a world where your most candid conversational partner is simultaneously the least reliable keeper of secrets.
The scale: who’s telling AI what, and how much
We’re used to the idea that search history is a digital footprint. But a search query is two or three words. An AI chat is a full-blown conversation. With context, emotions, follow-up questions, and — most importantly — intent.
People share things with chatbots that they don’t tell friends, partners, or therapists. Suicidal thoughts. Revenge fantasies. Business plans and gray-area schemes. Sexual desires. Fears. Hatred. All of it — in sprawling, multi-page dialogues, with clarifications and follow-up questions.
And here’s the key trap: AI creates an illusion of privacy. A one-on-one dialogue. No judgment, no raised eyebrows. An interface that looks like a personal diary. But it’s not a diary — it’s a letter sent to a commercial company’s server in another country.
Why AI chats are the perfect evidence
An investigator dreams of evidence that reveals not the act, but the thinking. Not “he bought poison,” but “he was thinking about how to use poison three weeks before the crime.” An AI chat is exactly that kind of evidence.
[Diagram: a search query is just keywords with no context; an AI chat adds intent, plan, and emotion, which reads as premeditation and yields a more serious charge.]
Here’s what makes AI chats unique for investigations:
- Intent. “How to secretly poison someone” isn’t an abstract search query. It’s a question with context: who, why, under what circumstances.
- Planning. People often ask a chain of related questions, building a plan step by step. The AI obligingly helps structure it.
- Emotional state. “I can’t take this anymore” — right before a series of questions about methods of causing harm.
- Timestamps. Every message is dated. A prosecutor can construct a timeline of the defendant’s inner life down to the minute.
A personal diary is protected in most jurisdictions — seizing it requires serious grounds. But an AI chat is legally closer to correspondence with a third party: the data is stored on a company's servers, and with a warrant (and sometimes without one) law enforcement gets full access. You were writing in a diary — and it turned out to be a postcard.
The timeline: from exotic to routine
Note the speed. From the first widely discussed case to standard forensic practice — less than three years. For comparison: it took courts a decade and a half to learn how to use browser history.
Who has access to your chats
Here’s a question few people ask: who, exactly, can read your conversation with ChatGPT?
Short answer: almost anyone with sufficient motivation.
You → company servers → employees for review → law enforcement upon request → leaks and breaches. Every link is a point where your “private” conversation stops being private. OpenAI says so directly in its privacy policy: data may be disclosed in response to lawful requests. And it is disclosed, regularly.
In 2024, OpenAI published its first transparency report: the company received hundreds of requests from law enforcement agencies worldwide. Data was provided in response to many of them. And that’s just OpenAI — one company among dozens offering AI chats.
Add to that:
- Company employees who review conversations for model training and safety monitoring
- Hacks and leaks — in 2023, ChatGPT payment account data was leaked; in the future, conversations themselves could follow
- Your own device: anyone who gains physical access to a machine with a logged-in session can open the app and read the entire chat history
The digital confessional
There’s a historical parallel that explains what’s happening better than any technical analysis.
Catholic confession worked for centuries as an institution of absolute trust. A person came to a priest and shared their deepest secrets — things they wouldn’t admit to anyone. The system worked because there was the sigillum confessionis — the seal of confession, the violation of which is punishable by excommunication. No court, no king could compel a priest to reveal what was heard.
AI chat has created an exact functional replica of the confessional — a space where people open up completely. But it forgot to copy the one thing that made the confessional safe: the seal.
And this isn’t a bug — it’s a feature of the business model. Companies benefit from you sharing as much information as possible — it’s training data. They have no incentive to encrypt your conversations end-to-end — that would strip them of the ability to improve the product. And the state has no incentive to protect this data with privilege — that would close off access to an unprecedented investigative tool.
What to do about it (and what won’t happen)
There will be no law establishing AI confessional privilege. No state will voluntarily give up access to such an information source. Doctor-patient privilege took centuries to develop and still has exceptions. Attorney-client privilege is one of the oldest in law. “Human-chatbot” privilege won’t emerge because there is no entity on the other end that could be held responsible for upholding it.
Awareness. Every time you type something into an AI chat, imagine a prosecutor reading it aloud at trial. Not because this is a paranoid world, but because technically and legally it is already possible. If you wouldn't write it in an email to a colleague, don't write it to ChatGPT.
Local models (LLaMA, Mistral, and their descendants) can already run on your own device without sending data to anyone's servers, and they will keep getting better. But 95% of users will stick with cloud services because they are more convenient, more powerful, and often free. Convenience will beat security, as it always has.
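For the curious, here is what "fully local" looks like in practice: a minimal sketch of a chat loop, assuming the open-source llama-cpp-python package and a model file (a LLaMA or Mistral variant in GGUF format) downloaded to disk beforehand; the file name is illustrative. Nothing in this loop touches the network.

```python
# Minimal local chat loop: the model runs on your own hardware and no
# message ever leaves the machine. Assumes `pip install llama-cpp-python`
# and a GGUF weights file downloaded in advance (the path is illustrative).
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

history = [{"role": "system", "content": "You are a helpful assistant."}]
while True:
    user = input("you> ")
    if user.strip().lower() in {"exit", "quit"}:
        break
    history.append({"role": "user", "content": user})
    reply = llm.create_chat_completion(messages=history)
    answer = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print(answer)
```

The history list lives only in memory: close the terminal and it is gone, unless you deliberately save it. That is the whole difference between a diary and a postcard.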
By mid-2028, chat history with AI assistants (ChatGPT, Claude, Gemini, and similar) will become a standard element of digital forensics — on par with browser history, email correspondence, and messenger data. AI chat inspection will be included in standard digital evidence seizure protocols in at least 5 jurisdictions (US, South Korea, UK, EU, China).
AI chats will be used primarily to prove intent and premeditation, turning “accident” cases into cases of murder, fraud, and premeditated crime. Meanwhile, no major jurisdiction will establish a legal privilege protecting the contents of such chats from investigative requests.