The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent 'catastrophic misuse' of its software. In other words, it fears that its AI tools might tell someone how to make chemical weapons, explosives or radiological devices, and it wants an expert to ensure its guardrails are sufficiently robust.
In the LinkedIn recruitment post, the firm states that applicants should have a minimum of five years of experience in 'chemical weapons and/or explosives defence' as well as knowledge of 'radiological dispersal devices' – also known as dirty bombs. The company indicated that this role is akin to jobs in other sensitive areas that it has created previously.
Anthropic is not the only AI firm adopting this strategy. A similar position has been advertised by ChatGPT developer OpenAI. On its careers website, it lists a job vacancy for a researcher in 'biological and chemical risks,' with a salary of up to $455,000 (£335,000), almost double that offered by Anthropic.
However, some experts have expressed concern about the risks associated with this approach, emphasizing that providing AI tools with information about weapons—no matter how strict the instructions are against misuse—can be dangerous. Dr. Stephanie Hare, a tech researcher and co-presenter of BBC's AI Decoded TV programme, questioned the safety of using AI systems to manage sensitive chemicals and explosives information, citing the lack of international treaties or regulations governing such practices.
The discussion has taken on added urgency as the US government seeks the collaboration of AI firms amid military operations in countries such as Iran and Venezuela. Anthropic is currently taking legal action against the US Department of Defense over being labelled a supply chain risk, arguing that its systems should not be used for fully autonomous weapons or mass surveillance of American citizens. Despite the dispute, Anthropic's AI assistant, Claude, remains operational within systems used by the US military.




















