Daily Guardian Europe
Lifestyle

Anthropic and OpenAI are hiring weapons specialists to prevent ‘catastrophic misuse’

By staff | March 18, 2026 | 2 min read

Published on
18/03/2026 – 13:32 GMT+1

Artificial intelligence (AI) companies Anthropic and OpenAI are looking to hire weapons and explosives experts to prevent misuse of their technology, according to job postings from both companies.

Anthropic announced in a LinkedIn post that it was searching for a policy expert on chemical weapons and explosives to prevent “catastrophic misuse” of its technology by shaping how its AI systems handle sensitive information in these fields.

The person hired at Anthropic will design and monitor the guardrails for how AI models react to prompts about chemical weapons and explosives. They will also conduct “rapid responses” to any escalations that Anthropic detects in weapons and explosives prompts.

Applicants should have a minimum of five years of experience in “chemical weapons and/or explosives defences,” as well as knowledge of “radiological dispersal devices,” or dirty bombs. The role involves designing new risk evaluations that the company’s leadership can “trust during high-stakes launches.”

OpenAI’s job posting earlier this month said it was looking for researchers to join its Preparedness team, which monitors for “catastrophic risks related to frontier AI models.”

It also advertised for a Threat Modeler, a role that would give one person primary ownership of “identifying, modelling, and forecasting frontier risks” and make them “a central node connecting technical, governance, and policy perspectives on prioritisation, focus and rationale on our approach to frontier risks from AI.”

Euronews Next reached out to Anthropic and OpenAI about the job postings but did not receive an immediate reply.

These hires come after Anthropic mounted a legal challenge against the US government over its designation of the company as a “supply chain risk,” a label that allows the government to block contracts or instruct departments not to work with it.

The conflict began on February 24, when the Department of War (DOW) demanded unfettered access to Anthropic’s Claude chatbot.

CEO Dario Amodei said that DOW contracts should not include instances where Claude is deployed for mass domestic surveillance and integrated into fully autonomous weapons.

Shortly after the fallout with Anthropic, OpenAI signed a deal with the DOW to deploy its AI in classified environments. The company said the deal included strict red lines, such as no use of its systems for mass surveillance or autonomous weapons.


© 2026 Daily Guardian Europe. All Rights Reserved.