OpenAI is working with Los Alamos National Laboratory to study how artificial intelligence can be used to combat biological threats that could be created by non-experts using AI tools, according to an announcement Wednesday. The Los Alamos lab, originally established in New Mexico during World War II to develop the atomic bomb, said the work is a “first of its kind” study of artificial intelligence biosecurity and how artificial intelligence can be used in laboratory settings.
The differences between the two statements released Wednesday by OpenAI and Los Alamos are striking. OpenAI’s statement tries to frame the collaboration as a straightforward study of how “scientists can safely use artificial intelligence in a laboratory setting to advance biological science research.” The Los Alamos lab, however, emphasized the fact that previous research “found that ChatGPT-4 provided a mild uplift in providing information that could lead to the creation of biological threats.”
Much of the public discussion about the threats posed by artificial intelligence has focused on the creation of a self-aware entity that could conceivably develop a mind of its own and harm humans in some way. Some worry that achieving AGI (artificial general intelligence, in which an AI can perform advanced reasoning and logic rather than acting as a fancy autocomplete word generator) could lead to a Skynet-style situation. And while many AI boosters, including Elon Musk and OpenAI CEO Sam Altman, have leaned into that characterization, the more pressing threat appears to be making sure that tools like ChatGPT cannot be used to create bioweapons.
“Artificial intelligence-driven biothreats may pose significant risks, but existing work has not assessed how multimodal, cutting-edge models could lower the barrier to entry for non-experts to create a biological threat,” the Los Alamos laboratory said in a statement on its website.
The two groups’ differing approaches to the information may come down to the fact that OpenAI could be unwilling to acknowledge the national security implications of highlighting that its products could be used by terrorists. Notably, the Los Alamos statement uses the word “threat” five times, while the OpenAI statement uses it just once.
“The potential advantages of increasing artificial intelligence capabilities are endless,” Los Alamos research scientist Erick LeBrun said in a statement Wednesday. “However, measuring and understanding any potential dangers or misuse of advanced artificial intelligence related to biological threats remains largely unexplored. The partnership with OpenAI is an important step toward establishing a framework for evaluating current and future models, ensuring the responsible development and deployment of artificial intelligence technology.”
Correction: An earlier version of this article cited a statement from Los Alamos as coming from OpenAI. Gizmodo regrets the error.