Florida Attorney General James Uthmeier announced Thursday that his office will investigate OpenAI over alleged harm to minors, potential national security threats, and a possible connection to a shooting at Florida State University last year.
“ChatGPT may have likely been used to assist the killer in the recent mass school shooting at Florida State University that tragically took two lives,” Attorney General Uthmeier said in a video posted on social media.
On the day of the FSU shooting last April, the suspect reportedly asked ChatGPT how the country would respond to a shooting at FSU and when the FSU student union would be busiest. Those messages could potentially be used as evidence against the suspect in an October trial over the shooting.
The attorney general cited additional concerns about cases in which ChatGPT allegedly encouraged suicide, which have been documented in several lawsuits filed by families against OpenAI. He also said he was concerned the Chinese Communist Party could use OpenAI’s technology against the United States.
“When big tech launches these technologies, they shouldn’t — they can’t — put our safety and security at risk,” he said. “We support innovation. But that does not give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”
He also urged Florida’s legislature to “work quickly” to protect children from the negative effects of artificial intelligence.
“Each week, more than 900 million people use ChatGPT to improve their daily lives through applications such as learning new skills or navigating complex healthcare systems,” an OpenAI spokesperson said in a statement to TechCrunch. “Our ongoing safety work continues to play an important role in delivering these benefits to ordinary people, as well as supporting scientific research and discovery.”
OpenAI added that it is building and continuing to improve ChatGPT to understand user intent and respond in appropriate, secure ways. The company said it will cooperate with the Florida attorney general’s investigation.
On Wednesday, OpenAI unveiled its Child Safety Blueprint, which includes policy recommendations designed to improve child safety in relation to AI.
The investigation comes as chatbot makers face pressure to confront their potential role in the creation of child sexual abuse material (CSAM). According to a recent report by the Internet Watch Foundation, there were over 8,000 reports of AI-generated CSAM in the first half of 2025, a 14% year-over-year increase.
OpenAI’s plan recommends updating legislation to protect against AI-generated abusive material, refining the reporting process to law enforcement, and introducing better preventative safeguards against misuse of AI tools.
