
SEOUL – Police are increasingly examining suspects’ generative artificial intelligence use history as key evidence in establishing intent or motive, according to media reports citing legal experts on Wednesday.
In a recent case, investigators said they decided to pursue murder charges — rather than charges of death resulting from bodily injury — against a female suspect accused of serial killings in Gangbuk-gu, Seoul, after reviewing her chat logs with OpenAI’s ChatGPT.
The suspect, a woman in her 20s surnamed Kim, was charged with murder, aggravated bodily injury and violations of the Narcotics Control Act. She is accused of giving drug-laced hangover remedies to three men at a motel between December and Feb. 9. The first victim survived with injuries; the two later victims died.
Police said the suspect had asked ChatGPT, “Would people die if they took sleeping pills with alcohol?” Investigators viewed this as evidence suggesting criminal intent.
Legal experts say such investigative practices are becoming more common. Lawyers noted that authorities increasingly examine generative AI chat logs during mobile phone forensic analyses.
One lawyer, who requested anonymity, said the shift has influenced defense strategies. “When I take on a case now, I review my clients’ ChatGPT conversations with them,” he said.
Experts point to a fundamental difference between conventional browser searches and AI conversations. While both may be used to seek information, AI’s conversational structure can reveal a user’s internal reasoning, intentions and specific objectives more directly.
Jeong Doo-won, a professor of forensic science at Sungkyunkwan University who has published research on generative AI forensics, explained that AI records may carry stronger evidentiary value.
“Web browser searches are largely keyword-based, but interactions with AI systems inevitably take the form of sentences,” Jeong said. “Because prompts are written as full statements, they can preserve a user’s actual intent more explicitly.”
However, experts also warn of legal and ethical concerns.
Conversations with generative AI often contain highly sensitive personal information, raising questions about privacy, proportionality and the permissible scope of digital evidence collection.
Kim Myung-joo, head of the AI Safety Institute, cautioned against overly broad investigative use of AI records.
“If a crime occurs, authorities could attempt to review a person’s entire AI conversation history and argue that criminal intent existed long before the incident,” Kim told Yonhap News Agency. He warned that indiscriminate seizures of AI chat histories could trigger future human rights disputes.
Kim also addressed ongoing debates about AI accountability, particularly in cases where an AI system may have instigated or aided a crime.
“The most difficult issue is responsibility,” he said. “For ordinary products, liability is governed by product liability laws. AI systems do not fit neatly into that framework. This is ultimately a challenge society must resolve.”