Rob, not butting in, but rather adding to what you said (see quotation
below).

The conviction across industries that hierarchy (systems robustness) persists
only in descending and/or ascending structures, though true as far as it
goes, is somewhat incomplete.

There's another computational way to derive systems-control hierarchies.
This is the quantum-engineering way (referred to before), where hierarchy
lies hidden within contextual abstraction, is identified via case-based
decision making, and is represented via compound-functionality outcomes.
Hierarchy becomes a centre-outwards, emergent, essential characteristic of a
scalable system, not one that is deterministically specified.

In an evolutionary sense, hierarchies are N-nestable and self-discoverable.
With the addition of integrated vectors, knowledge graphs may also be
derived rather than crafted.
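
To make that concrete, here is a toy sketch (entirely my own illustration,
not something proposed in this thread): given a handful of placeholder
component vectors, a nested hierarchy can be discovered by greedy
centroid-based agglomeration, and knowledge-graph edges can be derived by
thresholding pairwise similarity rather than being hand-crafted. The names,
vectors and threshold are all invented for the example.

# Toy illustration (mine, not from the thread): discovering a nested
# hierarchy and a knowledge graph from a set of "integrated vectors",
# rather than hand-crafting either.
import numpy as np

rng = np.random.default_rng(0)
names = ["sensor", "actuator", "controller", "planner", "monitor"]
vectors = {n: rng.normal(size=8) for n in names}   # placeholder embeddings

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1. The hierarchy is *derived*: greedily merge the two most similar
#    clusters (by centroid cosine similarity); the merge history is the tree.
clusters = [([n], vectors[n]) for n in names]
tree = []
while len(clusters) > 1:
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            s = cosine(clusters[i][1], clusters[j][1])
            if best is None or s > best[0]:
                best = (s, i, j)
    _, i, j = best
    (mi, ci), (mj, cj) = clusters[i], clusters[j]
    merged = (mi + mj, (ci * len(mi) + cj * len(mj)) / (len(mi) + len(mj)))
    tree.append((mi, mj))                          # one nesting level
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

# 2. Knowledge-graph edges are *derived* by thresholding pairwise similarity.
edges = [(a, b) for a in names for b in names
         if a < b and cosine(vectors[a], vectors[b]) > 0.2]

print("nesting (bottom-up):", tree)
print("derived edges:", edges)

The only point of the sketch is that both structures fall out of the
vectors; neither is specified up front.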

Here, I'm referring to two systems hierarchies in particular: 'A', a
hierarchy of criticality (aka constraints), and 'B', a hierarchy of priority
(aka systemic order).
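
One hypothetical way to read 'A' and 'B' (my gloss, not a specification): if
you record the constraint dependencies between components, criticality falls
out as depth in the constraint graph, and priority as a topological order
over the same graph. The component names and edges below are invented.

# Hypothetical gloss on 'A' (criticality/constraints) and 'B'
# (priority/systemic order); names and dependencies are invented.
from graphlib import TopologicalSorter

constraints = {            # component -> the components it depends on
    "power": set(),
    "sensing": {"power"},
    "control": {"power", "sensing"},
    "reporting": {"control"},
}

def criticality(node):
    # 'A': how deep a component sits in the constraint hierarchy
    deps = constraints[node]
    return 0 if not deps else 1 + max(criticality(d) for d in deps)

hierarchy_A = {n: criticality(n) for n in constraints}             # constraints
hierarchy_B = list(TopologicalSorter(constraints).static_order())  # systemic order

print("A (criticality):", hierarchy_A)
print("B (priority):  ", hierarchy_B)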

Over the lifecycles of a growing system, as it mutates and evolves in
relevance (optimal semantics), its hierarchy would start to resemble,
without compromising, NNs and LLMs.

Yes, a more holistic envelope then: a new, quantum reality where fully
recursive functionality isn't only guaranteed, but correlation and
association become foundational, architectural principles.

This is the future of quantum systems engineering, which I believe quantum
computing will eventually lead all researchers to. Frankly, without it,
we'll remain stuck in the quagmire of functional analysis-paralysis we've
been in since the early 1990s, by whatever name.

I'll hold out hope for that one enlightened developer to make the quantum
leap into exponential systems computing. A sea change is needed.

Inter alia, Rob Freeman said: "And it seems by chance that the idea seems
consistent with the emergent structure theme of this thread. With the
difference that with language, we have access to the emergent system,
bottom-up, instead of top down, the way we do with physics, maths."

On Thu, May 9, 2024, 11:15 Rob Freeman <[email protected]> wrote:

> On Thu, May 9, 2024 at 6:15 AM James Bowery <[email protected]> wrote:
> >
> > Shifting this thread to a more appropriate topic.
> >
> > ---------- Forwarded message ---------
> >>
> >> From: Rob Freeman <[email protected]>
> >> Date: Tue, May 7, 2024 at 8:33 PM
> >> Subject: Re: [agi] Hey, looks like the goertzel is hiring...
> >> To: AGI <[email protected]>
> >
> >
> >> I'm disappointed you don't address my points James. You just double
> >> down that there needs to be some framework for learning, and that
> >> nested stacks might be one such constraint.
> > ...
> >> Well, maybe for language a) we can't find top down heuristics which
> >> work well enough and b) we don't need to, because for language a
> >> combinatorial basis is actually sitting right there for us, manifest,
> >> in (sequences of) text.
> >
> >
> > The origin of the Combinatorial Hierarchy thence ANPA was the Cambridge
> Language Research Unit.
>
> Interesting tip about the Cambridge Language Research Unit. Inspired
> by Wittgenstein?
>
> But this history means what?
>
> > PS:  I know I've disappointed you yet again for not engaging directly
> your line of inquiry.  Just be assured that my failure to do so is not
> because I in any way discount what you are doing -- hence I'm not "doubling
> down" on some opposing line of thought -- I'm just not prepared to defend
> Granger's work as much as I am prepared to encourage you to take up your
> line of thought directly with him and his school of thought.
> 
> Well, yes.
> 
> Thanks for the link to Granger's work. It looks like he did a lot on
> brain biology, and developed a hypothesis that the biology of the
> brain split into different regions is consistent with aspects of
> language suggesting limits on nested hierarchy.
> 
> But I don't see it engages in any way with the original point I made
> (in response to Matt's synopsis of OpenCog language understanding.)
> That OpenCog language processing didn't fail because it didn't do
> language learning (or even because it didn't attempt "semantic"
> learning first.) That it was somewhat the opposite. That OpenCog
> language failed because it did attempt to find an abstract grammar.
> And LLMs succeed to the extent they do because they abandon a search
> for abstract grammar, and just focus on prediction.
> 
> That's just my take on the OpenCog (and LLM) language situation.
> People can take it or leave it.
> 
> Criticisms are welcome. But just saying, oh, but hey look at my idea
> instead... Well, it might be good for people who are really puzzled
> and looking for new ideas.
> 
> I guess it's a problem for AI research in general that people rarely
> attempt to engage with other people's ideas. They all just assert
> their own ideas. Like Matt's reply to the above... "Oh no, the real
> problem was they didn't try to learn semantics..."
> 
> If you think OpenCog language failed instead because it didn't attempt
> to learn grammar as nested stacks, OK, that's your idea. Good luck
> trying to learn abstract grammar as nested stacks.
> 
> Actual progress in the field stumbles along by fits and starts. What's
> happened in 30 years? Nothing much. A retreat to statistical
> uncertainty about grammar in the '90s with HMMs? A first retreat to
> indeterminacy. Then, what, 8 years ago the surprise success of
> transformers, a cross-product of embedding vectors which ignores
> structure and focuses on prediction. Why did it succeed? You, because
> transformers somehow advance the nested stack idea? Matt, because
> transformers somehow advance the semantics first idea?
> 
> My idea is that they advance the idea that a search for an abstract
> grammar is flawed (in practice if not in theory.)
> 
> My idea is consistent with the ongoing success of LLMs. Which get
> bigger and bigger, and don't appear to have any consistent structure.
> But also their failures. That they still try to learn that structure
> as a fixed artifact.
> 
> Actually, as far as I know, the first model in the LLM style of
> indeterminate grammar as a cross-product of embedding vectors, was
> mine.
> 
> ***If anyone can point to an earlier precedent I'd love to see it.***
> 
> So LLMs feel like a nice vindication of those early ideas to me.
> Without embracing the full extent of them. They still don't grasp the
> full point. I don't see reason to be discouraged in it.
> 
> And it seems by chance that the idea seems consistent with the
> emergent structure theme of this thread. With the difference that with
> language, we have access to the emergent system, bottom-up, instead of
> top down, the way we do with physics, maths.
> 
> But everyone is working on their own thing. I just got drawn in by
> Matt's comment that OpenCog didn't do language learning.
> 
> -Rob

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M93c1991e98d5b3bad820c6f6
Delivery options: https://agi.topicbox.com/groups/agi/subscription