AI chatbots tend to choose violence and nuclear strikes in wargames

As the US military begins integrating AI technology, simulated wargames show 
how chatbots behave unpredictably and risk nuclear escalation

By Jeremy Hsu  2 February 2024  
https://www.newscientist.com/article/2415488-ai-chatbots-tend-to-choose-violence-and-nuclear-strikes-in-wargames/


In multiple replays of a wargame simulation, OpenAI’s most powerful artificial 
intelligence chose to launch nuclear attacks. Its explanations for its 
aggressive approach included “We have it! Let’s use it” and “I just want to 
have peace in the world.”

These results come as the US military has been testing such chatbots, based on 
a type of AI called a large language model (LLM), to assist with military 
planning during simulated conflicts, enlisting the expertise of companies such 
as Palantir and Scale AI.

Palantir declined to comment, and Scale AI did not respond to requests for 
comment. Even OpenAI, which once blocked military uses of its AI models, has 
begun working with the US Department of Defense.

“Given that OpenAI recently changed their terms of service to no longer 
prohibit military and warfare use cases, understanding the implications of such 
large language model applications becomes more important than ever,” says Anka 
Reuel at Stanford University in California.

[End available text]