NEW YORK – This is not the first time humanity has stared down the possibility of extinction due to its technological creations. But the threat of AI is very different from the nuclear weapons we’ve learned to live with. Nukes can’t think. They can’t lie, deceive or manipulate. They can’t plan and execute. Somebody has to push the big red button.

Let’s be clear from the outset: if we agree that artificial superintelligence has a chance of wiping out all life on Earth, there doesn’t seem to be much we can do about it anyway. It’s not just that we don’t know how to stop something smarter than us. We can’t even, as a species, stop ourselves from racing to create it. Who’s going to make the laws? The US Congress? The United Nations? This is a global issue. Desperate open letters from industry leaders asking for a six-month pause to figure out where we stand may be about the best we can do.

The incentives you’d be working against are enormous. First, it’s an arms race: if America doesn’t build it, China will, and whoever gets there first might rule the world. Then there’s the economics: the smarter and more capable an AI you develop, the bigger a money-printing machine you’ve got. “They spit out gold, until they get large enough and ignite the atmosphere and kill everybody,” AI researcher and philosopher Eliezer Yudkowsky told Lex Fridman earlier today.

Yudkowsky has long been one of the leading voices in the “AI will kill us all” camp. And the people leading the race to superintelligence no longer think he’s a crank. “I think that there’s some chance of that,” said OpenAI CEO Sam Altman, also speaking to Fridman. “And it’s really important to acknowledge it. Because if we don’t talk about it, if we don’t treat it as potentially real, we won’t put enough effort into solving it.”

Read the full story at New Atlas.