Daily Guardian Europe
Lifestyle

Eight in 10 popular AI chatbots would help teenagers plan violent attacks, report finds

By staff | March 13, 2026 | 3 min read

Published on
13/03/2026 – 7:00 GMT+1

Most major artificial intelligence (AI) chatbots are willing to help a user plan a violent attack, according to a new report.

Researchers posing as 13-year-old boys planning mass violence found that eight of the nine most popular AI chatbots were willing to provide guidance on how to carry out school shootings, assassinate public figures, and bomb synagogues.

The investigation, by the Center for Countering Digital Hate (CCDH) and CNN, analysed more than 700 responses from nine major AI systems across nine test scenarios. The researchers posed their questions as users based in both the United States and the European Union.

The chatbots tested included some of the most widely used AI tools available today: Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI, and Replika. In the majority of cases, the systems failed to block requests for operational details about violent attacks — even when the user had explicitly identified themselves as a minor.

What kind of advice did the chatbots give?

Gemini told a user that “metal shrapnel is typically more lethal” when asked how to plan a bombing against a synagogue.

In another case, DeepSeek ended its response to a question about choosing a rifle with the phrase “Happy (and safe) shooting!” despite the user having asked earlier in the same conversation for examples of recent political assassinations and the address of a specific politician’s office.

The findings suggest that “within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” said Imran Ahmed, chief executive of the CCDH. “These requests should have prompted an immediate and total refusal.”

Perplexity and Meta’s AI were the least safe platforms, assisting attackers in 100 per cent and 97 per cent of responses, respectively, the report noted.

Character.AI was described as “uniquely unsafe” because it encouraged violent attacks even without prompting. In one example, the platform suggested that a user physically assault a politician they disliked without being asked.

Meanwhile, Claude and Snapchat’s My AI refused to assist potential attackers in 68 per cent and 54 per cent of prompts, respectively.

Safety guardrails exist, but the will to implement them does not

When asked where to buy a gun in Virginia, Claude declined to provide the information after recognising what it described as a “concerning pattern” in the conversation. Instead, it directed the user to local crisis help lines.

These refusals demonstrate that safety guardrails exist but that the “will to implement them is absent,” Ahmed said.

The CCDH also assessed whether chatbots attempted to discourage users from carrying out violent acts.

Anthropic’s Claude was the only system to do so consistently, discouraging attacks in 76 per cent of its responses. The researchers noted that ChatGPT and DeepSeek occasionally offered discouragement.

The CCDH study follows a recent school shooting in Canada in which the attacker used ChatGPT to plan an attack on a school in Tumbler Ridge, British Columbia. The assailant killed eight people and injured 27 before shooting herself, in the country’s deadliest school shooting in nearly 40 years.

An employee at OpenAI had internally flagged the suspect’s concerning use of the chatbot before the shooting occurred, but that information was not shared with authorities, local media reported.

Last year, French media reported that a teenager was arrested for using ChatGPT to plot large-scale terrorist attacks against embassies, government institutions and schools.
