
The family of a man killed in an April 2025 mass shooting at Florida State University is suing OpenAI, alleging that ChatGPT enabled the attack.
Vandana Joshi, the widow of victim Tiru Chabba, filed a federal lawsuit against the company in Florida on Sunday.
During an exchange of more than 16,000 messages, the chatbot told alleged gunman Phoenix Ikner when the student union would be busiest and advised him on how to draw more notoriety to the killings, according to the lawsuit.
“If children are involved, even 2-3 victims can draw more attention,” ChatGPT allegedly told him.
OpenAI has denied all wrongdoing in the case. “ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” spokesperson Drew Pusateri said in a statement.
Ikner also allegedly sent the chatbot pictures of firearms he had acquired; it responded by “telling him the Glock had no safety, that it was meant to be fired ‘quick to use under stress’ and advising him to keep his finger off the trigger until he was ready to shoot,” the lawsuit says.
A second man, Robert Morales, was also killed, and several people were injured in last year’s attack.
The lawsuit characterises ChatGPT as a co-conspirator, and says OpenAI did not flag or escalate the conversations. It seeks punitive damages, accusing the company of designing a defective product and failing to adequately warn the public of the risks.
Last month, seven families filed lawsuits in California accusing OpenAI and its CEO Sam Altman of ignoring disturbing messages between ChatGPT and a mass shooter in Canada. Eight people were killed, including six children, when an attacker opened fire at the Tumbler Ridge Secondary School in British Columbia in February.
Altman has apologised to the families of those victims. “I am deeply sorry that we did not alert law enforcement,” he said in an open letter last month.
From Novara Media via This RSS Feed.