Key in my view:

"The capabilities of the model, which have been demonstrated above, both in
terms of depth and generality, suggest that the machine learning
community *needs
to move beyond classical benchmarking* via structured datasets and tasks,
and that the evaluation of the capabilities and cognitive abilities of
those new models have become much closer in essence to the task of
evaluating those of a human rather than those of a narrow AI model."

"The central claim of our work is that GPT-4 attains a form of general
intelligence, indeed showing sparks of artificial general intelligence.
This is demonstrated by its core mental capabilities (such as reasoning,
creativity, and deduction), its range of topics on which it has gained
expertise (such as literature, medicine, and coding), and the variety of
tasks it is able to perform (e.g., playing games, using tools, explaining
itself,
...). A lot remains to be done to create a system that could qualify as a
complete AGI."

I don't think the above statement is even controversial. Is anybody really
going to argue otherwise, and on what grounds? In my view it meets the
criteria for a starter AGI. Why not?

As for the paper, look for Gary Marcus to offer his usual critiques in the
next few days, and for Emily Bender to offer her usual ridicule.

But neither of those two can offer anything remotely comparable.



On Thu, Mar 23, 2023 at 6:42 PM Mike Archbold <[email protected]> wrote:

> Here they basically reveal their definition of "understanding"
>
> "A question that might be lingering on many readers’ mind is whether
> GPT-4 truly understands all these
> concepts, or whether it just became much better than previous models
> at improvising on the fly, without any
> real or deep understanding. We hope that after reading this paper the
> question should almost flip, and that
> one might be left wondering how much more there is to true
> understanding than on-the-fly improvisation.
> Can one reasonably say that a system that passes exams for software
> engineering candidates (Figure 1.5) is
> not really intelligent? Perhaps the only real test of understanding is
> whether one can produce new knowledge,
> such as proving new mathematical theorems, a feat that currently
> remains out of reach for LLMs."
>
> On Wed, Mar 22, 2023 at 9:57 PM <[email protected]> wrote:
> >
> > "Given the breadth and depth of GPT-4's capabilities, we believe that it
> could reasonably be viewed as an early (yet still incomplete) version of an
> artificial general intelligence (AGI) system."
> >
> > https://arxiv.org/abs/2303.12712
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T208c0ae6f6f8ee11-M9f29c2e87984c4826cfa833f
Delivery options: https://agi.topicbox.com/groups/agi/subscription
