Center for Political Awareness

Does Artificial Intelligence Want to Nuke Us? We Don’t Know — and That Could Be More Dangerous

Image: Nuclear Blast Mushroom Cloud And Binary Code (Photo illustration by Salon/Getty Images)


If human military leaders put robots in charge of our weapons systems, maybe artificial intelligence would fire a nuclear missile. Maybe not. Maybe it would explain its attack to us using perfectly sound logic — or maybe it would treat the script of “Star Wars” like international relations policy, and accord unhinged social media comments the same credibility as case law. 

That’s the whole point of a new study on AI models and war games: AI is so uncertain right now that we risk catastrophic outcomes if globe-shakers like the United States Air Force cash in on the autonomous systems gold rush without understanding the limits of this tech.

The new paper, “Escalation Risks from Language Models in Military and Diplomatic Decision-Making,” is still in preprint and awaiting peer review. But its authors — from the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative — found that most AI models would choose to launch a nuclear strike when given the reins. These aren’t the AI models carefully muzzled by additional safety design and available to the public, like ChatGPT. They’re the base models beneath those commercial versions, unmuzzled for research use only.

Click here for full article.



Enjoying what you are reading? Please donate to help with our operating expenses. Thank you!


Find your US Representative: Click Here