Artificial intelligence (AI) companies Anthropic and OpenAI are looking to hire weapons and explosives experts to prevent misuse of their technology, according to job postings from both companies.
Anthropic announced in a LinkedIn post that it was searching for a policy expert on chemical weapons and explosives to prevent “catastrophic misuse” of its technology by shaping how its AI systems handle sensitive information in these fields.
The person hired at Anthropic will design and monitor the guardrails for how AI models react to prompts about chemical weapons and explosives. They will also conduct “rapid responses” to any escalations that Anthropic detects in weapons and explosives prompts.
Applicants should have a minimum of five years of experience in “chemical weapons and/or explosives defences,” as well as knowledge of “radiological dispersal devices,” or dirty bombs. The role involves designing new risk evaluations that the company’s leadership can “trust during high-stakes launches.”
OpenAI’s job posting earlier this month said it was looking for researchers to join its Preparedness team, which monitors for “catastrophic risks related to frontier AI models.”
It also advertised for a Threat Modeler, a role that would give one person primary ownership of “identifying, modelling, and forecasting frontier risks” and make them “a central node connecting technical, governance, and policy perspectives on prioritisation, focus and rationale on our approach to frontier risks from AI.”
Euronews Next reached out to Anthropic and OpenAI about the job postings but did not receive an immediate reply.
These hires come after Anthropic mounted a legal challenge against the US government over its designation of the company as a “supply chain risk,” a label that allows the government to block contracts or instruct departments not to work with it.
The conflict began on February 24, when the Department of War (DOW) demanded unfettered access to Anthropic’s Claude chatbot.
CEO Dario Amodei said that DOW contracts should not include instances where Claude is deployed for mass domestic surveillance and integrated into fully autonomous weapons.
Shortly after the fallout with Anthropic, OpenAI signed a deal with the DOW to deploy its AI in classified environments. The company said the deal included strict red lines, such as no use of its systems for mass surveillance or autonomous weapons.