Yesterday, we included a story in the newsletter about a Google researcher who leaked a critique of Google’s AI efforts, complaining that Google was losing ground to open source AI projects.
That story hit home today with the publication of a piece in The New York Times entitled, “The Next Fear on A.I.: Hollywood’s Killer Robots Become the Military’s Tools.”
The military is, of course, deathly afraid that AI-powered weapons could dramatically accelerate the pace of war, making decisions far faster than humans could control. The ability of artificial intelligence models to pump out disinformation, coupled with their susceptibility to hallucinations and misinformation, only adds to these fears.
Up until now, we have been hoping that depriving China of advanced chipsets might delay the use of artificial intelligence by our adversaries. Also, Google’s Bard and OpenAI’s ChatGPT have controls in place that limit public access to dangerous information, such as how to build an atom bomb.
But as the Google researcher points out, Google and OpenAI are no longer the only game in town.
“Open-source models,” this person writes, “are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”
They don’t need those advanced chipsets, in other words.
This may sound alarmist, but it’s not only possible but probable that rogue nations like North Korea are exploring how they can embed open source AI into their nuclear weapons systems.
And as the Times points out, “So far there are no treaties or international agreements that deal with such autonomous weapons.”
So enjoy your weekend, everyone. We’ll sort this out … right?!