Gary Marcus's article explains quite clearly why and how GPT-2 falls
short of human-like AGI:

https://thegradient.pub/gpt2-and-the-nature-of-intelligence/

He also explains the fallacy of simplistically claiming that
prediction = understanding.
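
To make the fallacy concrete, here is a minimal sketch of a pure
next-token predictor (a toy bigram model of my own, not anything from
Marcus's article): it continues text plausibly from co-occurrence
statistics alone, with nothing resembling a world model behind the
prediction.

from collections import Counter, defaultdict

def train_bigram(text):
    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Return the continuation seen most often in training.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the cat sat on the mat the cat ate the fish")
print(predict_next(model, "the"))  # -> 'cat': statistically apt,
                                   # yet the model has no idea what a cat is

GPT-2 is this idea scaled up enormously; the statistics get far
sharper, but the training objective is still just next-token
prediction.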

The merits or demerits of OpenCog are a different question.   If I had
none of my own ideas about AGI, or OpenCog did not exist, etc.,
Marcus's critique of GPT-2 as it relates to AGI would still be very
apt and I would still agree with it.

Whether GPT-2 is "closer to AGI" than other current AI systems is a red
herring IMO.   It's like discussing whether a blimp or a glider is
closer to a starship ... or whether a cat or a dog is closer to human
intelligence.   Who cares?   (And if we wanted to pursue this metaphor
-- OpenCog is then like a partly-built starship that is designed
according to a proper theory of starships, but doesn't yet fly as high
as the blimp or the glider.   But as I said, I don't want to make this
a comparison of OpenCog vs. GPT-2, which are completely different
sorts of animals.   The two should be discussed separately.)

Chollet has made similar points in a style designed to be palatable to
folks from the contemporary ML/DL school of AI:

https://arxiv.org/pdf/1911.01547.pdf


-- Ben G


On Tue, Jul 7, 2020 at 10:29 PM <immortal.discover...@gmail.com> wrote:
>
> Make sure to read my above post.
>
> Really? You don't see how Blender (or my improvement above) is closer to AGI 
> than GPT-2 is? Or that GPT-2 is close-ish to AGI? Do you have something 
> better? Does it predict text/images better? What does OpenCog AI do if it 
> can't compare to OpenAI's showcase!? Either you have better results showing 
> it can think more like AGI than my vision of Blender or you haven't coded it 
> yet and can explain it instead, but as far as I can see, OpenCog AI isn't as 
> "AGI" as Blender or my vision of Blender. Explain OpenCog. Why doesn't it 
> just recognize a sentence/image patch and predict the next item? What does it 
> do? I can't even find out. AGI is just recognition, prediction, attention, it 
> has to create/predict the future thoughts/discoveries it desires. Prediction 
> is 90% of AGI and all new AGI mechanisms simply improve it or allow it to do 
> interesting manipulation on data, e.g. logic AND/OR, or when you edit a paper or 
> delete things or clone paragraphs, or count how many letters in a sentence or 
> how many times 's' appears in a sentence, etc. These things are AGI-like 
> 'behavior', but still connected to Prediction and can be showcased.
>
> GPT-2 is obviously very close to the AGI we're looking for. And Blender is 
> even better because 1) it knows how to finish its replies (because of Byte 
> Pair Encoding), 2) is trained not just on wikipedia but actual human-like 
> chat log conversations, 3) is forced to talk about/stick to some 
> domain/question and not just some nutty prompt like unicorns (which changes 
> over time too) which is not about cancer or immortality (and my improvement 
> is to set it to force it to talk about Survival, AND evolve it as explained 
> how to let it find the right sub domains of sub domains as it thinks. This 
> lets it much better answer the installed question/goal). There's no better 
> than this. It's closer to natural AGI than all others.
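
For reference, since Byte Pair Encoding is credited above with letting
Blender finish its replies: BPE is just a subword tokenization scheme,
learned by repeatedly merging the most frequent adjacent symbol pair.
Below is a minimal sketch of the textbook merge procedure (illustrative
only, not Blender's or GPT-2's actual implementation).

from collections import Counter

def learn_bpe(words, num_merges):
    # Each word starts as a sequence of single characters.
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair across the corpus.
        pairs = Counter()
        for word, freq in vocab.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair
        merges.append(best)
        # Rewrite every word with the chosen pair fused into one symbol.
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

print(learn_bpe(["lower", "lowest", "newer", "newest"], 4))
# First merge is ('w', 'e'), the most frequent adjacent pair;
# repeated merges turn common fragments into single tokens.

Note that the merge procedure itself says nothing about when a reply
should end; it only determines how text is chopped into tokens.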



-- 
Ben Goertzel, PhD
http://goertzel.org

“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say a commonplace thing, but burn,
burn, burn like fabulous yellow roman candles exploding like spiders
across the stars.” -- Jack Kerouac
