NEW YORK – More than 50 years before ChatGPT could tell you what to cook for dinner, a 1968 science fiction film was shaping how we think about machines that talk to us.

In 2001: A Space Odyssey, a Jupiter-bound spacecraft is controlled by HAL, a computer that thinks for itself and has its own agenda. As the film progresses, HAL stops cooperating with the human astronauts it was programmed to assist and eventually turns off their life support.

HAL planted an idea in the public’s imagination: that one day, our machines will become so intelligent—and so human-like—that we will no longer be able to control them. We have never stopped believing in this possibility. With the recent arrival of generative AI programs that can write conversationally, produce vivid imagery, and perform myriad tasks for us, some technologists believe the superintelligent machines of science fiction are right around the corner.

But while the abilities of chatbots like OpenAI’s ChatGPT and Google’s Gemini are impressive, are these technologies really a stepping stone toward Star Trek’s Data, C-3PO from Star Wars, or, more ominously, the Terminator?

“That is the big debate,” says Melanie Mitchell, a professor at the Santa Fe Institute who studies intelligence in both machines and humans.

Some experts say it’s just tech industry hype. But others believe machines that surpass human intelligence on many important metrics while pursuing their own goals—including self-preservation at the expense of human life—are closer than the public appreciates.

[Photo: A T-800 robot in Terminator Genisys.]

Science fictional AI versus ChatGPT

The term “artificial intelligence” evokes a few key tropes. In science fiction, we see AIs that are conscious and self-determined, like HAL from 2001 and Data from Star Trek. We see emotional machines, like Samantha, the AI assistant that falls in love with a human in the 2013 film Her, and C-3PO, the lovably anxious protocol droid from Star Wars. We see AIs that are indistinguishable from humans, like the replicants in Blade Runner. And we see machines that want to kill us.

It’s with these science fictional characters in mind that we are now trying to process what it means to live in an age of AI. Today, so-called AI tools can write catchy music, craft compelling essays, and empathetically discuss your relationship problems, to name just a few possibilities.

But Emily Bender, a computational linguist at the University of Washington and author of The AI Con, argues they share little more than a catchy name with the thinking, feeling AIs of science fiction.

“What companies mean when they say AI is ‘venture capitalists, please give us money,’” Bender says. “It does not refer to a coherent set of technologies.”

Part of the challenge with defining AI is that “our definition of intelligence is constantly evolving,” Mitchell says. She points out that in the early 1990s, many experts thought human-like intelligence would be required to play chess at the grandmaster level. Then, in 1997, IBM’s “Deep Blue” supercomputer defeated world chess champion Garry Kasparov. Yet far from thinking abstractly, Deep Blue was built to be a “brute force search machine,” according to Mitchell.

“We see certain things as requiring intelligence, and when we realize that it can be done without what we would consider to be intelligence, we change our definition,” Mitchell says.
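To see what a “brute force search machine” does, consider the textbook minimax algorithm, sketched below in Python on tic-tac-toe rather than chess. This is an illustrative toy, not Deep Blue’s method in detail: the real system paired search of this general kind with massive specialized hardware and a hand-tuned evaluation function. But the core idea is the same, systematically exploring possible futures rather than reasoning abstractly about the game.

```python
# A toy version of brute-force game-tree search (minimax), the family of
# techniques Deep Blue used -- shown here on tic-tac-toe, not chess.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively score a position: +1 if X can force a win,
    -1 if O can, 0 if best play on both sides leads to a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, square in enumerate(board) if square == ' ']
    if not moves:
        return 0  # board full, no winner: a draw
    opponent = 'O' if player == 'X' else 'X'
    scores = []
    for m in moves:
        board[m] = player            # try a move...
        scores.append(minimax(board, opponent))
        board[m] = ' '               # ...then undo it and try the next
    return max(scores) if player == 'X' else min(scores)

# Searching every possible game from the empty board confirms that
# perfectly played tic-tac-toe is a draw.
print(minimax(list(' ' * 9), 'X'))  # -> 0
```

The toy has no notion of strategy; it simply tries every continuation and picks the line that scores best. Chess has vastly too many positions for that to work unaided, which is why Deep Blue needed custom chips and pruning heuristics, but the principle of winning by exhaustive lookahead rather than insight is the same.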

Today, Bender says, tech companies lean into our perceptions of what it means to be intelligent to make their products seem more human-like. ChatGPT is trained on huge amounts of human-generated text and conversational dynamics to predict the most likely next word in a conversation. Its predictive abilities are so good that its responses often sound human—an impression enhanced by its use of first-person pronouns and emotional language.
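To make the predict-the-next-word idea concrete, here is a deliberately tiny sketch in Python. It is only an illustration under strong simplifying assumptions: it counts word pairs in a toy corpus, whereas ChatGPT learns statistical patterns over billions of subword tokens with a neural network. The underlying task, though, is the same: given the text so far, guess what comes next.

```python
# A toy next-word predictor: count which word follows which in a small
# corpus, then report the most likely continuation. This is a bigram
# model, far simpler than the transformer networks behind ChatGPT, but
# the objective -- predict the next token from prior text -- is the same.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# For every word, count which words follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most common next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("sat"))  # ('on', 1.0): 'on' always follows 'sat' here
print(predict_next("the"))  # ('cat', 0.375): 'cat' and 'dog' tie; first seen wins
```

Scale the corpus up to a large fraction of the internet and swap the word-pair counts for a neural network, and the output starts to sound uncannily fluent, without the mechanism ever being anything other than prediction.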

As a result of this mimicry, ChatGPT can now pass versions of computing pioneer Alan Turing’s famous “Turing Test,” which assess whether it responds like a human. Turing proposed that if a computer system can fool a human into thinking it is also human, we should consider it a thinking entity.

Yet experts overwhelmingly agree that generative AI tools are not sentient, raising questions about the validity of Turing tests. “There’s no ‘I’ inside of this,” Bender says. “We’re imagining a mind that isn’t there.”

Can we get to HAL 9000?

If today’s generative AI tools are more like a fancy version of auto-complete than a HAL 9000, could they eventually lead to the sort of AI we see in science fiction?
