Anthropic, OpenAI recruit experts to keep AI from enabling ‘dirty bombs’

Anthropic and OpenAI are recruiting experts to reduce the risk of AI instructing users on how to create chemical or radiological weapons, including “dirty bombs”.

Anthropic is currently looking for candidates with experience in defense, countering chemical weapons or explosives, along with knowledge of radioactive dispersal devices. The company said the role is intended to ensure that the AI model under development “cannot be manipulated to create harmful directives”.

Applicants need at least five years of experience in chemical defense and explosives, as well as knowledge of radioactive dispersal devices. The recruitment is part of a “red teaming” strategy, in which experts play the role of adversaries, probing the model’s ability to generate harmful biological information in order to find and patch vulnerabilities.

Anthropic’s earlier research warned that large language models (LLMs) could significantly shorten the time needed to synthesize biological weapons if not strictly controlled. According to analysts, bringing in these experts helps strengthen safety policies and technical protections, preventing users from exploiting AI to extract dangerous information.


Logos of OpenAI and Anthropic displayed on smartphones. Image: VCG

OpenAI, the company behind ChatGPT, is also hiring biological and chemical risk researchers. The position focuses on studying how advanced AI models can be abused and on developing systems to prevent such abuse. Sam Altman’s company offers a salary of $455,000, but the number of vacancies is limited.

According to the India Times, AI companies’ intensified search for “dirty bomb” stoppers reflects a growing awareness in the AI industry that powerful language models can unintentionally provide sensitive and dangerous technical knowledge if appropriate protections are not in place. By recruiting experts who understand chemical weapons and explosive threats, the companies hope to design safeguards that prevent AI from generating harmful content while keeping it useful for research, education, and solving legitimate problems.

However, the approach has drawn mixed opinions from experts. Some researchers argue that the broader implications of exposing AI to sensitive, weapons-related knowledge need to be carefully considered.

“Is it safe to let AI process sensitive information about chemicals, explosives, and radioactive weapons?” asked Dr. Stephanie Hare, a technology researcher and presenter of the BBC’s AI Decoded program. “There are currently no international treaties or specific regulations governing the use of AI in these fields, and most activity still takes place without oversight.”

Even so, experts agree that AI companies hiring specialists to control how their models handle weapons-related knowledge is a notable step. It also shows that the global technology industry is under pressure to prevent potential disasters stemming from artificial intelligence.

By Editor
