
Is ChatGPT About to Reach Artificial GENERAL Intelligence? The Debate Rages.

By: Fiz in 6TH POPE | Recommend this post (0)
Wed, 19 Apr 23 1:44 AM | 39 view(s)
6th Edition Pope Board
Msg. 41907 of 60008

I suggest you do your own search, read, and make up your own mind. Here is ONE (of many, many) articles discussing the question.

http://www.wired.com/story/chatgpt-agi-intelligence/

Excerpt:

Some Glimpse AGI in ChatGPT. Others Call It a Mirage
A new generation of AI algorithms can feel like they’re reaching artificial general intelligence—but it’s not clear how to measure that.

Sébastien Bubeck, a machine learning researcher at Microsoft, woke up one night last September thinking about artificial intelligence—and unicorns.

Bubeck had recently gotten early access to GPT-4, a powerful text generation algorithm from OpenAI and an upgrade to the machine learning model at the heart of the wildly popular chatbot ChatGPT. Bubeck was part of a team working to integrate the new AI system into Microsoft’s Bing search engine. But he and his colleagues kept marveling at how different GPT-4 seemed from anything they’d seen before.

GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input. But to Bubeck, the system’s output seemed to do so much more than just make statistically plausible guesses.
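
In rough terms, the training objective described above is next-word prediction. Purely as a toy sketch, with a made-up probability table and nothing like GPT-4's real vocabulary, architecture, or scale, the statistical idea looks like this:

    # Toy illustration of the statistical idea only: given a context string,
    # pick the next word from a made-up probability table. Nothing here
    # resembles GPT-4's actual training data or scale.
    import random

    next_word_probs = {
        "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.17},
    }

    def sample_next_word(context: str) -> str:
        probs = next_word_probs[context]
        words = list(probs)
        weights = list(probs.values())
        # The model's "choice" is just a weighted draw over what is
        # statistically plausible after the given context.
        return random.choices(words, weights=weights, k=1)[0]

    print(sample_next_word("the cat sat on the"))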

That night, Bubeck got up, went to his computer, and asked GPT-4 to draw a unicorn using TikZ, a relatively obscure programming language for generating scientific diagrams. Bubeck was using a version of GPT-4 that only worked with text, not images. But the code the model presented him with, when fed into TikZ rendering software, produced a crude yet distinctly unicorny image cobbled together from ovals, rectangles, and a triangle. To Bubeck, such a feat surely required some abstract grasp of the elements of such a creature. “Something new is happening here,” he says. “Maybe for the first time we have something that we could call intelligence.”

How intelligent AI is becoming—and how much to trust the increasingly common feeling that a piece of software is intelligent—has become a pressing, almost panic-inducing, question.

After OpenAI released ChatGPT, then powered by GPT-3, last November, it stunned the world with its ability to write poetry and prose on a vast array of subjects, solve coding problems, and synthesize knowledge from the web. But awe has been coupled with shock and concern about the potential for academic fraud, misinformation, and mass unemployment—and fears that companies like Microsoft are rushing to develop technology that could prove dangerous.

Understanding the potential or risks of AI’s new abilities means having a clear grasp of what those abilities are—and are not. But while there’s broad agreement that ChatGPT and similar systems give computers significant new skills, researchers are only just beginning to study these behaviors and determine what’s going on behind the prompt.

While OpenAI has promoted GPT-4 by touting its performance on bar and med school exams, scientists who study aspects of human intelligence say its remarkable capabilities differ from our own in crucial ways. The models’ tendency to make things up is well known, but the divergence goes deeper. And with millions of people using the technology every day and companies betting their future on it, this is a mystery of huge importance.
Sparks of Disagreement

Bubeck and other AI researchers at Microsoft were inspired to wade into the debate by their experiences with GPT-4. A few weeks after the system was plugged into Bing and its new chat feature was launched, the company released a paper claiming that in early experiments, GPT-4 showed “sparks of artificial general intelligence.”

The authors presented a scattering of examples in which the system performed tasks that appear to reflect more general intelligence, significantly beyond previous systems such as GPT-3. The examples show that unlike most previous AI programs, GPT-4 is not limited to a specific task but can turn its hand to all sorts of problems—a necessary quality of general intelligence.


The authors also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another, such as from text to imagery. “Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system,” the paper states.

Bubeck’s paper, written with 14 others, including Microsoft’s chief scientific officer, was met with pushback from AI researchers and experts on social media. Use of the term AGI, a vague descriptor sometimes used to allude to the idea of super-intelligent or godlike machines, irked some researchers, who saw it as a symptom of the current hype.

The fact that Microsoft has invested more than $10 billion in OpenAI suggested to some researchers that the company’s AI experts had an incentive to hype GPT-4’s potential while downplaying its limitations. Others griped that the experiments are impossible to replicate because GPT-4 rarely responds in the same way when a prompt is repeated, and because OpenAI has not shared details of its design. Of course, people also asked why GPT-4 still makes ridiculous mistakes if it is really so smart.

Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, says Microsoft’s paper “shows some interesting phenomena and then makes some really over-the-top claims.” Touting systems that are highly intelligent encourages users to trust them even when they’re deeply flawed, she says. Ringer also points out that while it may be tempting to borrow ideas from systems developed to measure human intelligence, many have proven unreliable and even rooted in racism.

Bubeck admits that his study has its limits, including the reproducibility issue, and that GPT-4 also has big blind spots. He says use of the term AGI was meant to provoke debate. “Intelligence is by definition general,” he says. “We wanted to get at the intelligence of the model and how broad it is—that it covers many, many domains.”

But for all of the examples cited in Bubeck’s paper, there are many that show GPT-4 getting things blatantly wrong—often on the very tasks Microsoft’s team used to tout its success. For example, GPT-4’s ability to suggest a stable way to stack a challenging collection of objects—a book, four tennis balls, a nail, a wine glass, a wad of gum, and some uncooked spaghetti—seems to point to a grasp of the physical properties of the world that is second nature to humans, including infants. However, changing the items and the request can result in bizarre failures that suggest GPT-4’s grasp of physics is not complete or consistent.

Bubeck notes that GPT-4 lacks a working memory and is hopeless at planning ahead. “GPT-4 is not good at this, and maybe large language models in general will never be good at it,” he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. “If you want to say that intelligence is planning, then GPT-4 is not intelligent.”

One thing beyond debate is that the workings of GPT-4 and other powerful AI language models do not resemble the biology of brains or the processes of the human mind. The algorithms must be fed an absurd amount of training data—a significant portion of all the text on the internet—far more than a human needs to learn language skills. The “experience” that imbues GPT-4, and things built with it, with smarts is shoveled in wholesale rather than gained through interaction with the world and didactic dialog. And with no working memory, ChatGPT can maintain the thread of a conversation only by feeding itself the history of the conversation over again at each turn. Yet despite these differences, GPT-4 is clearly a leap forward, and scientists who research intelligence say its abilities need further interrogation.
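
The history-refeeding pattern described above can be sketched in a few lines; the generate function below is a hypothetical stand-in for a language-model call, not OpenAI's actual API:

    # Sketch of the history-refeeding pattern: the full transcript is
    # concatenated and re-sent on every turn because the model itself
    # keeps no memory between calls. `generate` is a hypothetical
    # placeholder, not any real API.
    def generate(prompt: str) -> str:
        return "...model reply..."  # placeholder

    history = []  # (speaker, text) pairs accumulated across turns

    def chat_turn(user_message: str) -> str:
        history.append(("user", user_message))
        prompt = "\n".join(f"{who}: {text}" for who, text in history)
        reply = generate(prompt)
        history.append(("assistant", reply))
        return reply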
Mind of a Machine

A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas at Austin posted a research paper in January that explores how the abilities of large language models differ from those of humans.

The group concluded that while large language models demonstrate impressive linguistic skill—including the ability to coherently generate a complex essay on a given theme—that is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.

The way these models use language, by predicting the words most likely to come after a given string, is very different from how humans speak or write to convey concepts or intentions. The statistical approach can cause chatbots to follow and reflect back the language of users’ prompts to the point of absurdity.

When a chatbot tells someone to leave their spouse, for example, it only comes up with the answer that seems most plausible given the conversational thread.



