On Sat, Aug 1, 2020 at 7:08 PM Matt Mahoney <mattmahone...@gmail.com> wrote:

>
> On Fri, Jul 31, 2020, 10:00 PM Ben Goertzel <b...@goertzel.org> wrote:
>
>> I think "mechanisms for how to predict the next word" is the wrong
>> level at which to think about the problem, if AGI is your interest...
>>
>
> Exactly. The problem is to predict the next bit. I mean my interest is in
> compression, but you still have to solve high level language modeling.
> Compression is not well suited to solving other aspects of AGI like
> robotics and vision where the signal is dominated by noise.
>

It doesn't matter whether predicting the next word is the right level at
which to think about a given problem, Matt. What matters is that this is the
first time the symbol grounding problem has been solved for any subset of
cognition. For any problem. This is the first.

I'm not sure Ben thinks it has been solved. He seems to think words are
still detached from their meaning in some important way. I disagree. I
think these GPT-x features are attaching words to meaning.

Perhaps we need a more powerful representation for that meaning. Something
like a hypergraph, no doubt. Something that will be populated by relating
text to richer sensory experience, surely. But the grounding is being done,
and this shows us how it can be done. How symbols can be related to
observation.

That's a big thing. And it is also a big thing that the way it has been
solved is by using billions of parameters calculated from simple relational
principles. So it was not solved by finding some small Holy Grail set of
parameters in one-to-one correspondence with the world, but by billions of
simple parameters formed by combining observations. And seemingly there is
no limit to how many you need. It matters that there turned out to be no
apparent limit on the number of useful parameters. And it matters that these
limitless numbers of parameters can be calculated from simple relational
principles.

This suggests that the solution to the grounding problem lies, first, in
limitless numbers of parameters which can resolve contradictions through
context. But importantly, also in the fact that these limitless numbers of
parameters can be calculated from simple relational principles.
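
To make that concrete, here is a toy sketch. It is purely my own
illustration, nothing like what GPT-x actually computes: take "simple
relational principle" to mean nothing more than counting which words occur
near which other words, and treat the whole table of counts as the
parameter set. Every parameter is a combined observation, and nothing in
the scheme caps how many of them you accumulate.

from collections import Counter, defaultdict

# Toy corpus standing in for "observations" (entirely made up).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

WINDOW = 2  # words within this distance count as "related"

# Every parameter is just a combined observation: a co-occurrence count.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        lo, hi = max(0, i - WINDOW), min(len(words), i + WINDOW + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[w][words[j]] += 1

# On this picture the "meaning" of a word is just its row of relational
# parameters. More text means more rows, more columns, more parameters.
print(dict(cooc["cat"]))
print(sum(len(row) for row in cooc.values()), "parameters so far")

The numbers here are trivial, but scale the corpus up and the parameter
count grows with it, which is the point.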

This insight can open the door to symbol grounding for all kinds of
cognitive structures. Personally I think causal invariance will be a big
one. The solution for language, it would seem. Grammar, anyway. I think for
vision too. But there may be others. Different forms of analogy, I don't
doubt. But all grounded in limitless numbers of parameters which can
resolve contradictions through context. And those limitless numbers of
parameters all calculated from simple relational principles.
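
And here is an equally crude sketch of the "resolve contradictions through
context" part, again my own toy and nowhere near what attention over
billions of parameters does. The bare symbol "bank" is contradictory on its
own, but if each occurrence is represented simply by the bag of words
around it, the two river occurrences come out closer to each other than
either does to the money occurrence.

import math
from collections import Counter

# Made-up sentences in which the bare symbol "bank" is ambiguous.
sentences = [
    "the boat drifted slowly along the river bank under willow trees",
    "herons fish from the muddy river bank near the reeds",
    "she visited the bank to deposit her savings account money",
]

WINDOW = 3

def context_bag(words, i):
    # Represent one occurrence of a word by the bag of its neighbours:
    # the context, not the bare symbol, carries the disambiguating signal.
    lo, hi = max(0, i - WINDOW), min(len(words), i + WINDOW + 1)
    return Counter(words[j] for j in range(lo, hi) if j != i)

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
    return dot / (norm(a) * norm(b))

occurrences = []
for s in sentences:
    words = s.split()
    occurrences.append(context_bag(words, words.index("bank")))

river1, river2, money = occurrences
print("river vs river:", round(cosine(river1, river2), 2))  # comes out higher
print("river vs money:", round(cosine(river1, money), 2))   # comes out lower

Crude as it is, it shows the shape of the thing: keep the contradictory
readings, and let the surrounding observations pick between them.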

Another way to look at this is to say that the solution to the symbol
grounding problem turned out to be an expansion of observation, not a
compression of it.

You can go on thinking the solution is to find some sanctified Holy Grail
small set of parameters. A God-given kernel of cognition. But meanwhile
what is working is just constantly unpacking structure by combining
observations, billions of features of it. The number is the thing. More
than we imagined. Contradictory, but resolved in context. Moving first to
networks, then to more and more parameters over those networks. That is
what is actually working: allowing the network to blow out and generate
ever more billions of parameters, which can resolve contradiction with
context.

-Rob

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T21c073d3fe3faef0-Md4d2a1a723ce7c2afad4db23
Delivery options: https://agi.topicbox.com/groups/agi/subscription
