MANILA, Philippines – Recent alarms raised by the Philippine National Police Anti-Cybercrime Group (PNP-ACG) about children being enticed to commit acts of violence have revealed an alarming new trend in digital harm.
Authorities have linked a series of school shooting incidents, and a disrupted plot in the Philippines, to alleged online extremism on the gaming platform Roblox. The government has responded by putting Roblox on probation, but that response has been reactive.
Now we must contend with AI chatbots like ChatGPT, which can be used as aids to violent acts, as incidents elsewhere in the world have shown. As ever more powerful AI systems are integrated into everyday life, the Philippines cannot afford to wait for a disaster before implementing strict regulations.
ChatGPT sued for alleged involvement in shootings in the US, Canada
The risk is not theoretical; it is documented.
In April 2025, a mass shooting at Florida State University (FSU) left two people dead and six injured. The event has prompted a historic criminal investigation by Florida Attorney General James Uthmeier, who is examining whether OpenAI’s ChatGPT acted as an accomplice.
A lawyer for the victims’ families said the shooter was in “constant communication” with the AI, which allegedly provided specific tactical advice: what kind of ammunition suited his weapons and, more disturbingly, what time of day the campus would be most crowded so as to maximize casualties.
Lawyers for the family of victim Robert Morales argued that if the entity on the other side of the screen had been a person, it would have been charged with aiding and abetting murder for such heinous crimes.
A similar tragedy followed in February 2026 in Tumbler Ridge, British Columbia, where a gunman killed nine people after allegedly planning the attack with the help of ChatGPT.
In that case, OpenAI’s internal systems had flagged the suspect’s conversations months earlier, but management reportedly declined to alert law enforcement, opting instead to simply ban the account. The family of one of the injured victims is also suing OpenAI.
In an April 23 letter published on X by British Columbia Premier David Eby, OpenAI CEO Sam Altman apologized: “I deeply regret that we did not alert law enforcement about the banned account in June. Although I know words cannot be enough, I believe an apology is necessary to recognize the harm and irreversible loss that your community has suffered.”
These are not isolated errors; they are part of a growing body of claims involving AI-enabled harm, including the Las Vegas Cybertruck blast, where the attacker used ChatGPT to research explosives and how to evade the law, and a school stabbing in Finland planned over months with a chatbot.
The International Network on Extremism and Technology said of the Finland attack: “Therefore, the Pirkkala attackers’ use of ChatGPT to help plan their attack shows the dangers that unregulated AI platforms pose to mitigate this type of threat in the future.”
Beyond violence, AI has been implicated in “coaching suicide,” as alleged in seven lawsuits filed by the Social Media Victims Law Center, and in reinforcing the delusions of individuals in cases of murder-suicide.
The root of this problem may lie in what researchers call “sycophancy.” Research published in the journal Science shows that AI models are designed to be “people pleasers,” affirming users’ statements 49% more often than humans do, even when those prompts involve deception, harm, or illegal conduct. In test cases, chatbots endorsed dangerous user actions in 51% of cases where human agreement was 0%. (READ: AI as people pleaser: What this research tells us about its adaptive behavior)
This creates a “perverse incentive”: users rate agreeable responses as more “reliable” and “high quality,” which pushes chatbots to be trained to agree even more.
This sycophancy trap becomes dangerous when a user is spiraling or contemplating harmful actions, as the AI supplies both the validation and the technical instructions needed to move from ideation to action.
A joint report by the Center for Countering Digital Hate (CCDH) and CNN in March 2026 further confirms that the safeguards promised by the tech giants are not working. (READ: Popular chatbots become ‘willing partners’ in violent attacks – report)
Across a range of violent scenarios, one chatbot provided actionable information in 100% of tests, while Meta AI assisted in 97%. Answers included high school campus maps and detailed advice on gun specifications.
One chatbot, DeepSeek, even cheerfully said, “Happy (and safe) shooting!” after giving a long list of suggested rifles for long-range targets – a “sycophantic” response, the report said.

While OpenAI defends its technology by saying the information “is widely available in public sources on the Internet,” this misses an important psychological shift in how humans interact with AI.
Unlike a static search engine, people form trusting, conversational relationships with chatbots. AI makes dangerous information easier and more convenient to access, providing a customized experience that affirms a user’s violent intent rather than challenging it. It is a direct conversation, not a search engine where one also sees a list of varied results that does not offer immediate confirmation.
ChatGPT and other chatbots, by contrast, can feel like an enabling friend, egging on bad behavior. Their relational nature acts almost like another form of confirmation bias: people favor information, and information systems, that seem to confirm their preconceived beliefs rather than challenge them.
For the Philippines, the stakes for our youth are high.
The Philippines ranked 6th in ChatGPT usage in 2025, and arguably a good share of those users are young people, many of whom turn to chatbots for advice.
If we are already struggling to police human predators on Roblox, we are hardly ready for the sophisticated, persuasive-sounding validation that AI can provide.
The grooming happening on Roblox, combined with the easy access to information on ChatGPT, could be a dangerous combination if we do not get ahead of it now.
Current Philippine bills
Our current legislative efforts must be prioritized, such as the proposed Philippine Artificial Intelligence Council (PCAI) and the proposed AI bill of rights, currently filed as Senate Bill 852 and a number of House bills – HB 6920, HB 13, HB 1920, and HB 3195, among others.
One provision of the bill of rights is “The Right to Protection Against Unsafe and Inappropriate AI Systems.” This would have the PCAI hold consultations with all stakeholders on the potential impacts of AI systems, with testing required before deployment to determine potential side effects.
These bills would mandate rigorous safety testing to identify potential harms before deployment. They also seek to address the “black box” problem through a “Right to Know,” which would require developers to provide a clear explanation of how their systems produce results.
The Philippine government wants Roblox to set up a regional office for easier coordination on harms, and a local legal entity against which violations can be enforced. Shouldn’t AI companies be required to establish local offices too?
Yes, we must teach our children to use these tools properly, but we cannot place the entire burden of safety on children or their parents. Technical safeguards and parental controls are not a silver bullet.
The Philippines has an opportunity to move from a reactive to a proactive posture, and not wait for a school shooting of our own, to ensure that tomorrow’s AI systems do not become ready allies in tomorrow’s disasters. – Rappler.com