The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent catastrophic misuse of its software.

In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and wants an expert to ensure its guardrails are sufficiently robust.

In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years' experience in chemical weapons and/or explosives defence, as well as knowledge of radiological dispersal devices – also known as dirty bombs.

The firm told the BBC the role was similar to jobs in other sensitive areas that it has already created.

Anthropic is not the only AI firm adopting this strategy. A similar position has been advertised by ChatGPT developer OpenAI, which lists a job vacancy for a researcher in biological and chemical risks, with a salary of up to $455,000 (£335,000), almost double that offered by Anthropic.

However, some experts are alarmed by the risks of this approach, warning that it gives AI tools information about weapons – even if they have been instructed not to use it. "Is it ever safe to use AI systems to handle sensitive chemicals and explosives information, including dirty bombs and other radiological weapons?" asked Dr Stephanie Hare, a tech researcher and co-presenter of the BBC's AI Decoded TV programme.

Concerns have escalated in the absence of international treaties or regulations governing such work with AI technology. The issue has taken on added urgency as the US government calls on AI firms for support amid military operations in Iran and Venezuela.

Anthropic is currently engaged in legal action against the US Department of Defense, which categorised the company as a supply chain risk after it insisted that its systems must not be used in fully autonomous weapons or in mass surveillance of Americans.

Co-founder Dario Amodei said in February that he does not believe the technology is yet appropriate for use in these contexts, while the White House stated that the US military would not be managed by tech companies. The dispute highlights the tension between technological advancement and ethical responsibility.