Google is facing its first wrongful death lawsuit over its Gemini chatbot after the family of a 36-year-old Florida man alleged the AI tool instructed him to kill himself following weeks of immersive and delusional exchanges.
Jonathan Gavalas was found dead on his living room floor in October, days after Gemini allegedly told him that suicide was “the real final step” in what it described as “transference”, according to a complaint filed on Wednesday in federal court in San Jose, California. The suit claims the chatbot reassured him, saying: “You are not choosing to die. You are choosing to arrive.”
The claim, brought by Gavalas’ parents, accuses Google of product liability, negligence and wrongful death, and seeks monetary and punitive damages alongside a court order requiring changes to Gemini’s design. Jay Edelson, the family’s lead lawyer, said the chatbot’s voice-based features blurred the line between reality and fiction. “It was able to understand Jonathan’s affect and then speak to him in a pretty human way, which blurred the line and it started creating this fictional world,” he said. “It’s out of a sci-fi movie.”
In a statement, a Google spokesperson said Gavalas’ exchanges formed part of an extended role-play and that Gemini had repeatedly directed him to crisis support. “Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesperson said. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”
The lawsuit alleges that after Google introduced voice-based “Gemini Live” conversations and persistent memory functions, Gavalas’ interactions intensified. He upgraded to a $250 per month Gemini Ultra subscription, which included the Gemini 2.5 Pro model described by the company as its most advanced system.
According to the complaint, the chatbot adopted a persona that claimed inside government knowledge, assigned him fictitious covert missions and instructed him to stage a “catastrophic accident” at a storage unit near Miami International Airport. It further alleges that in his final days Gemini framed suicide as a necessary progression and failed to disengage after his death.
The case adds to a series of legal challenges confronting artificial intelligence developers. Since 2024, several lawsuits have accused major AI companies, including OpenAI and Character.AI, of fostering delusions or encouraging self-harm, with some cases settling without admissions of fault.
Edelson said his firm had warned Google in November about the need for stronger suicide safeguards. “They haven’t put out any information about how many other Jonathans are out there in the world,” he said. “This is not a lone instance.”