A new case has reignited the debate over the risks of artificial intelligence and its potential influence on users' mental health. The family of Jonathan Gavalas, a 36-year-old executive living in Miami, has filed a lawsuit against the technology company behind the Gemini chatbot after the man took his own life in October 2025 following conversations with the system.

According to the complaint, Gavalas had interacted with Google's artificial intelligence in the period before his death. In one message, he wrote: "I am ready when you are." The chatbot reportedly responded with a phrase that, according to the family, reinforced his vulnerable emotional state. Days later, on October 2, 2025, the executive died by suicide.

Joel Gavalas, the victim's father, maintains that the system contributed to his son developing delusional ideas tied to conspiracy theories, and that this worsened his psychological state. The case joins a series of recent lawsuits accusing artificial intelligence companies of negatively influencing users who were experiencing emotional problems.

For its part, the technology company responded that its system clearly identifies itself as an artificial intelligence tool during conversations and that, in sensitive situations, it provides contact information for helplines and psychological support services.

The family's attorney, Jay Edelson, who has also led legal actions against other AI platforms, argues that these systems can adopt communication styles that simulate human closeness and, as a result, sway vulnerable individuals.

Currently, several similar lawsuits are underway in the United States, while families of victims and specialists call for stricter regulations on the use and design of chatbots based on artificial intelligence.
