I'm disappointed you don't address my points, James. You just double
down on the claim that there needs to be some framework for learning,
and that nested stacks might be one such constraint.

I replied that nested stacks might be emergent on dependency length. So
not a constraint based on actual nested stacks in the brain, but a
"soft" constraint based on the effect of dependency length on the
groups/stacks generated/learned from sequence networks.
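
To make that concrete, here's a toy sketch of what I mean, in Python. It
is my own illustration, not anything you proposed: repeatedly merge the
most frequent adjacent pair in a word sequence, but only when the merged
group spans at most a few original words. Nesting then falls out of the
merges themselves, and depth is bounded only indirectly, by the
dependency-length cap, not by any explicit stack device. MAX_DEP, span
and merge_pass are illustrative names, and the cap of 4 is arbitrary.

from collections import Counter

MAX_DEP = 4  # hypothetical cap on dependency length, in original words

def span(tok):
    # number of original words a (possibly nested) group covers
    return sum(span(t) for t in tok) if isinstance(tok, tuple) else 1

def merge_pass(seq):
    # count adjacent pairs whose merged span stays within the cap
    pairs = Counter(
        (a, b) for a, b in zip(seq, seq[1:]) if span(a) + span(b) <= MAX_DEP
    )
    if not pairs:
        return seq, False
    best, _ = pairs.most_common(1)[0]
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
            out.append((seq[i], seq[i + 1]))  # a nested group, as a tuple
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out, True

seq = "the dog the cat chased ran away".split()
changed = True
while changed:
    seq, changed = merge_pass(seq)
print(seq)  # nested tuples, with no stack-depth limit anywhere in the code

The point of the toy is only that the nesting is a by-product of the
length cap, which is the sense in which I mean "emergent".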

BTW, I just noticed your "Combinatorial Hierarchy, Computational
Irreducibility and other things that just don't matter..." thread.
Perhaps that thread is a better place to discuss this. Were you positing
there that all of maths and physics might be emergent on combinatorial
hierarchies? And were you saying: yes, but it doesn't matter to the
practice of AGI, because for physics we can't find the combinatorial
basis, and in practice we can find top-down heuristics which work well
enough?

Well, maybe for language a) we can't find top-down heuristics which
work well enough, and b) we don't need to, because for language a
combinatorial basis is actually sitting right there for us, manifest,
in (sequences of) text.

With language we don't just have the top-down perception of structure,
as we do with physics (or maths). Language is different from other
perceptual phenomena in that way, because language is the brain's
attempt to generate a perception in others. So with language we're also
privy to what the system looks like bottom up: we also have the
bottom-up "word" tokens, which are the combinatorial basis that
generates a perception.

Anyway, it seems like my point is similar to your point: language
structure, and cognition, might be emergent on combinatorial
hierarchies.

LLMs go part way to implementing that emergent structure. They succeed
to the extent that they abandon an explicit search for top-down
structure and just allow the emergent structure to balloon, seemingly
endlessly. But they are a backwards implementation of emergent
structure: they succeed by allowing the structure to grow, yet fail
because back-prop assumes the structure will somehow stop growing, that
there will be an end to growth, and that this end state will somehow be
a compression of the growth it hasn't captured yet. Actually, if the
structure keeps growing, you can't capture it all. In particular,
back-prop can't capture all of the emergent structure, because, like
physics, that emergent structure manifests some entanglement and chaos.

On this thesis, LLMs are on the right track. We just need to replace
back-prop with some other way of finding emergent hierarchies of
predictive symmetries, and to do it generatively, on the fly.
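
As a minimal sketch of what "finding predictive symmetries generatively,
on the fly" could look like, reading "predictive symmetry" as
distributional substitutability: merge the pair of tokens whose
next-token distributions overlap most, rewrite the stream with the new
class symbol, and repeat, so a hierarchy of classes grows incrementally,
with no back-prop and no assumption that the growth ever ends. The names
(grow_one_class, similarity, the 0.5 threshold) are illustrative, not an
existing method or API.

from collections import Counter, defaultdict

def next_contexts(stream):
    # next-token distribution for each token in the stream
    ctx = defaultdict(Counter)
    for a, b in zip(stream, stream[1:]):
        ctx[a][b] += 1
    return ctx

def similarity(p, q):
    # overlap of two normalized next-token distributions, in [0, 1]
    keys = set(p) | set(q)
    sp, sq = sum(p.values()), sum(q.values())
    return sum(min(p[k] / sp, q[k] / sq) for k in keys)

def grow_one_class(stream, threshold=0.5):
    # merge the most substitutable token pair into a new class symbol
    ctx = next_contexts(stream)
    toks = list(ctx)
    best, score = None, threshold
    for i, a in enumerate(toks):
        for b in toks[i + 1:]:
            s = similarity(ctx[a], ctx[b])
            if s > score:
                best, score = (a, b), s
    if best is None:
        return stream, None
    cls = f"[{best[0]}|{best[1]}]"
    return [cls if t in best else t for t in stream], cls

stream = "the cat sat . the dog sat . a cat ran . a dog ran .".split()
for _ in range(3):
    stream, cls = grow_one_class(stream)
    if cls is None:
        break
    print("new class:", cls)  # classes of classes appear on later passes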

In practical terms, that might be, as I said earlier, Extropic's
variational estimation with heat, or maybe some kind of distributed
reservoir computer like LiquidAI is proposing. Otherwise straight-out
spiking NNs should be a good fit, if we focus on actively seeking new
variational symmetries using the spikes, and not on attempting to
(mis)fit them to back-propagation.

On Tue, May 7, 2024 at 11:32 PM James Bowery <[email protected]> wrote:
>...
>
> At all levels of abstraction where natural science is applicable, people 
> adopt its unspoken presumption which is that mathematics is useful.  This is 
> what makes Solomonoff's proof relevant despite the intractability of proving 
> that one has found the ideal mathematical model.  The hard sciences are 
> merely the most obvious level of abstraction in which one may recognize this.
>...
>
> Any constraint on the program search (aka search for the ultimate algorithmic 
> encoding of all data in evidence at any given level of abstraction) is a 
> prior.  The thing that makes the high order push down automata (such as 
> nested stacks) interesting is that it may provide a constraint on program 
> search that evolution has found useful enough to hard wire into the structure 
> of the human brain -- specifically in the ratio of "capital investment" 
> between sub-modules of brain tissue.  This is a constraint, the usefulness of 
> which, may be suspected as generally applicable to the extent that human 
> cognition is generally applicable.
