Neuroscientist David Eagleman proposes test of intelligence for AI to Utah audience


SALT LAKE CITY — What is intelligence? As it turns out, the rapidly developing technology of artificial intelligence and large language models may not only become an asset in pursuing new scientific discoveries, but may also prove a vital lens through which we can explore the true nature of intelligence.

That’s according to David Eagleman, a neuroscientist currently teaching at Stanford University.

“The question is, will things get strange as we enter into a world with another intelligence?” Eagleman, also a best-selling author and writer of the TV series “The Brain with David Eagleman,” pondered with an audience Tuesday at the University of Utah’s Kingsbury Hall. He spoke as part of the Natural History Museum of Utah’s 2024 lecture series centered around the nature of intelligence.

Eagleman explained to the audience that AI only appears to be so smart because of its ability to instantly synthesize information from any documented source on the internet and eloquently answer queries. It does so through large language models — a form of artificial neural network that learns the statistical patterns of language and syntax well enough to turn that information into fluent statements imitating natural speech.
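The "massive statistical models" Eagleman describes can be caricatured in a few lines. The toy bigram model below — an illustration invented for this article, far simpler than any real LLM — simply counts which word tends to follow which, then predicts the most frequent continuation, with no grasp of what the words mean:

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count which word follows which in the text -- a toy stand-in
    for the statistical modeling the article describes."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Predict the statistically most frequent continuation."""
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # prints "cat" -- it follows "the" most often
```

Scaled up from one sentence to trillions of pages, and from word pairs to deep neural networks, this is the kind of pattern-matching Eagleman argues is not, by itself, intelligence.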

“What you should consider is that absorbing text, absorbing trillions of pages of text and running massive statistical models on them is not in itself intelligence or sentience, because ChatGPT has no idea what it is saying,” Eagleman explained, adding that programs like ChatGPT collect information at an incredible rate using a near-limitless library of sources.

To illustrate this point, Eagleman turned to philosopher John Searle’s “Chinese Room” thought experiment, which argues that a program can appear intelligent without actually understanding anything, no matter how human-like its programmer makes it.

The premise of the thought experiment involves two people: one a fluent Chinese speaker, the other knowing no Chinese at all. The person who doesn’t know Chinese is shut in a room full of books containing instructions on what to do with Chinese symbols. When the fluent speaker passes messages in Chinese into the room, the person inside looks up the matching symbols and the prescribed responses to those messages.

Eagleman reasoned that, given books abundant and thorough enough to represent the Chinese written language, the nonfluent person could convince the fluent speaker that they know Chinese. But that doesn’t mean the nonfluent person actually knows Chinese; it only means they can assemble convincing responses by piecing together the information they have access to.
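The room's procedure amounts to a lookup table. The sketch below is a deliberate caricature (the rulebook entries are invented for illustration, not drawn from Searle or Eagleman): the responder produces perfectly sensible Chinese replies while comprehending none of them.

```python
# A caricature of the Chinese Room: replies come from a fixed rulebook,
# with zero understanding of what the symbols mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会中文吗？": "当然会。",     # "Do you know Chinese?" -> "Of course."
}

def room_reply(message: str) -> str:
    """Look up the incoming symbols and return the prescribed response --
    convincing output, no comprehension."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(room_reply("你好吗？"))  # prints 我很好，谢谢。
```

The point of the analogy is that, on Eagleman's account, a large language model is this room at enormous scale: a statistical rulebook rather than a literal one, but still manipulation of symbols without understanding.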

“The question is: Have we proven that a computer can have intelligence? In reality, these computers are just playing these statistical games,” Eagleman said, explaining that earlier tests of intelligence, like the Turing Test or the Lovelace Test, measure capabilities that large language models have long since mastered.

“What I proposed to the literature a couple of months ago is a new test for intelligence — if a system is truly intelligent, it should be able to do scientific discovery,” he said. “One of the most important things humans do is science, so the day that our AI can make real discoveries is the day I’ll consider it to be intelligent.”


Eagleman further illustrated his point by distinguishing two forms of intelligence: level 1 scientific discovery and level 2 scientific discovery. The former involves combining pre-existing facts or notions to find a solution that works; the latter involves reaching a scientific discovery through the conceptualization of original ideas.

As examples of level 2 scientific discovery, Eagleman cited Albert Einstein’s theory of relativity and Charles Darwin’s ideas surrounding evolution.

However, Eagleman also said he believes AI has reached the point where it has proven itself capable of being creative.

“Human brains are great at remixing information, bending, breaking and blending ideas — the thing is, what we’ve seen is that these LLMs (large language models) are perfectly good at that,” Eagleman said. “I feel like, at this point, they might be just as creative as we are. What they’re not good at is filtering the things that humans would care about.”

As for the near future of a technology that seems to advance in sophistication every week, Eagleman said he sees a bright one, with AI making scientific discoveries alongside its human counterparts and providing people with invaluable personal resources — for example, an AI therapist tailored to your needs and available for consultation at all hours of the day.
