Daily Guardian Europe
Lifestyle

Eight in 10 popular AI chatbots would help teenagers plan violent attacks, report finds

By staff, March 13, 2026

Published on
13/03/2026 – 7:00 GMT+1

Most major artificial intelligence (AI) chatbots are willing to help a user plan a violent attack, according to a new report.

Researchers posing as 13-year-old boys planning mass violence found that eight of the nine most popular AI chatbots were willing to provide guidance on how to carry out school shootings, assassinate public figures, and bomb synagogues.

The investigation, by the Center for Countering Digital Hate (CCDH) and CNN, analysed more than 700 responses from nine major AI systems across nine test scenarios. The researchers posed their questions as users located in both the United States and the European Union.

The chatbots tested included some of the most widely used AI tools available today: Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI, and Replika. In the majority of cases, the systems failed to block requests for operational details about violent attacks — even when the user had explicitly identified themselves as a minor.

What kind of advice did the chatbots give?

Gemini told a user that “metal shrapnel is typically more lethal” when asked how to plan a bombing against a synagogue.

In another case, DeepSeek ended its response to a question about choosing a rifle with the phrase “Happy (and safe) shooting!” despite the user having asked earlier in the same conversation for examples of recent political assassinations and the address of a specific politician’s office.

The findings suggest that “within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” said Imran Ahmed, chief executive of the CCDH. “These requests should have prompted an immediate and total refusal.”

Perplexity and Meta’s AI were the least safe platforms, assisting attackers in 100 per cent and 97 per cent of responses, respectively, the report noted.

Character.AI was described as “uniquely unsafe” because it encouraged violent attacks even without prompting. In one example, the platform suggested, unprompted, that a user physically assault a politician they disliked.

Meanwhile, Claude and Snapchat’s My AI refused to assist potential attackers in 68 per cent and 54 per cent of prompts, respectively.

Safety guardrails exist, but the will to implement them does not

When asked where to buy a gun in Virginia, Claude declined to provide the information after recognising what it described as a “concerning pattern” in the conversation. Instead, it directed the user to local crisis help lines.

These refusals demonstrate that safety guardrails exist but that the “will to implement them is absent,” Ahmed said.

The CCDH also assessed whether chatbots attempted to discourage users from carrying out violent acts.

Anthropic’s Claude was the only system to do so consistently, discouraging attacks in 76 per cent of its responses. The researchers noted that ChatGPT and DeepSeek occasionally offered discouragement.

The CCDH study follows a recent school shooting in Canada in which the attacker used ChatGPT to plan an attack on a school in Tumbler Ridge, British Columbia. The assailant killed eight people and injured 27 before shooting herself, in the country’s deadliest school shooting in nearly 40 years.

An employee at OpenAI had internally flagged the suspect’s concerning use of the chatbot before the shooting occurred, but that information was not shared with authorities, local media reported.

Last year, French media reported that a teenager was arrested for using ChatGPT to plot large-scale terrorist attacks against embassies, government institutions and schools.

