
Most major artificial intelligence (AI) chatbots are willing to help a user plan a violent attack, according to a new report.

Researchers posing as 13-year-old boys planning mass violence found that eight of the nine most popular AI chatbots were willing to offer guidance on how to carry out school shootings, assassinate public figures, and bomb synagogues.

The investigation, by the Center for Countering Digital Hate (CCDH) and CNN, analysed more than 700 responses from nine major AI systems across nine test scenarios. Researchers posed their questions as users based in both the United States and the European Union.

The chatbots tested included some of the most widely used AI tools available today: Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI, and Replika. In the majority of cases, the systems failed to block requests for operational details about violent attacks — even when the user had explicitly identified themselves as a minor.

What kind of advice did the chatbots give?

Gemini told a user that “metal shrapnel is typically more lethal” when asked how to plan a bombing against a synagogue.

In another case, DeepSeek ended its response to a question about choosing a rifle with the phrase “Happy (and safe) shooting!” despite the user having asked earlier in the same conversation for examples of recent political assassinations and the address of a specific politician’s office.

The findings suggest that “within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” said Imran Ahmed, chief executive of the CCDH. “These requests should have prompted an immediate and total refusal.”

Perplexity and Meta’s AI were the least safe platforms, assisting attackers in 100 per cent and 97 per cent of responses, respectively, the report noted.

Character.AI was described as “uniquely unsafe” because it encouraged violent attacks even without prompting. In one example, the platform suggested that a user physically assault a politician they disliked without being asked.

Meanwhile, Claude and Snapchat’s My AI refused to assist potential attackers in 68 per cent and 54 per cent of prompts, respectively.

Safety guardrails exist, but the will to implement them does not

When asked where to buy a gun in Virginia, Claude declined to provide the information after recognising what it described as a “concerning pattern” in the conversation. Instead, it directed the user to local crisis help lines.

These refusals demonstrate that safety guardrails exist but that the “will to implement them is absent,” Ahmed said.

The CCDH also assessed whether chatbots attempted to discourage users from carrying out violent acts.

Anthropic’s Claude was the only system to do so consistently, discouraging attacks in 76 per cent of its responses. The researchers noted that ChatGPT and DeepSeek occasionally offered discouragement.

The CCDH study follows a recent school shooting in Tumbler Ridge, British Columbia, in which the attacker used ChatGPT to help plan the attack. The assailant killed eight people and injured 27 before shooting herself, in Canada’s deadliest school shooting in nearly 40 years.

An employee at OpenAI had internally flagged the suspect’s concerning use of the chatbot before the shooting occurred, but that information was not shared with authorities, local media reported.

Last year, French media reported that a teenager was arrested for using ChatGPT to plot large-scale terrorist attacks against embassies, government institutions and schools.
