On Mon, Sep 1, 2025 at 11:27 PM Dorian Aur <[email protected]> wrote:

> ...
>
No issue with any of that, Dorian. I'm broadly aligned on the dynamics, and
on the need for dynamics to model cognition.

Note also that the need for dynamics to come into it at some point is a core
belief of the founder(?) of this list, Ben Goertzel. You may know he wrote
a book, "Chaotic Logic", in 1994, during another uptick of interest in
emergent models. Emergent Computation was on the first Gartner
Hype Cycle in 1995. People could see it, but it never really got traction,
much like neural networks in general at the time. Emergent computing
is awaiting its analogue of neural networks' Nvidia GPU moment to wake
these 30-year-old ideas.

Ben characterizes what he's been doing since as seeking that (Nvidia GPU?)
substrate on which to express the inevitable chaos. But there is broad
agreement that chaos will be necessary.

Perhaps the analogue is not Nvidia GPUs. Perhaps the analogue is "deep"
networks. Emergent computing is awaiting its "deep" structural insight.

I think the insight is... dynamics, yes, leading to chaos (and a certain
"quantum" quality). But this emerges from the same "shared context and
prediction" which is actually the basis of LLMs too. The single simplifying
link between structure and meaning has been staring us in the face.
Language leads you to it. Linguistics choked on it. It's just that LLMs,
because of their historical attachment to backprop, are not dynamic.

> *On Shared Context and Prediction*
>
> You bring up an important critique, *the lack of explicit modeling of
> shared context or prediction*, and I agree that’s a central theme that
> needs to be folded in more explicitly. At present, the model’s closest
> analog to this is through the *network coherence factor*, which weights
> how phase-synchronized or field-aligned different nodes or regions are
> during active processing. This isn’t prediction per se, however it does
> reflect *how distributed units align to form stable configurations*,
> which are often *the substrate for expectations, resonances, and temporal
> sequences*.
>
> I agree this doesn't go far enough to explicitly encode *semantic
> generalization or predictive symmetry*, and this may well be where your
> point about “internal meaning” can expand the model. Right now, the
> feedback loops are environmental, as you note (similar to Edelman). As you
> suggest, *recursive internal structure*, especially if structured around
> shared contexts, might allow *meaning to emerge endogenously*, without
> waiting for extrinsic signals to do the filtering.
>
Yes. Good. You have a "network coherence factor", and specifically it is
based on "phase synchrony"? Phase synchrony for memristors may not be the
same as for neuron spikes, but I'm also looking at phase synchrony as the
relevant parameter (in contrast to spike rate).
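For what it's worth, the standard way to quantify phase synchrony across a population, and one candidate form for such a coherence factor, is the Kuramoto order parameter. The only assumption in this sketch is that each unit (spike train or memristor oscillation) can be assigned an instantaneous phase:

```python
import cmath

def coherence_factor(phases):
    # Kuramoto order parameter: magnitude of the mean phase vector.
    # 1.0 = perfect phase synchrony, ~0.0 = fully incoherent phases.
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

print(coherence_factor([0.3, 0.3, 0.3, 0.3]))  # all in phase: ~1.0
print(coherence_factor([0.0, cmath.pi / 2, cmath.pi, 3 * cmath.pi / 2]))  # spread evenly: ~0.0
```

Partial synchrony gives intermediate values, so the same number can serve as a graded "how aligned is this assembly right now" weight.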

Phase synchrony need not mean prediction. But if the phases synchronize on
shared predictions, then it will. The question is how to make them
synchronize on shared prediction. You can make a sequence network easily
enough. But I struggled for a long time with how to recurrently feed back
information from the posterior context in the sequence. Given A->X->B and
A->Y->B, how does B feed back to X and Y to synchronize their phase? The
energy actually does cycle around recurrently for language, because most
words connect to most others. But it's not clear how it carries information
about the downstream context (B) when it does that.
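Just to make the problem concrete (a toy illustration, not a solution): if you model A, X, Y, B as Kuramoto-style phase oscillators and simply wire in an explicit backward coupling of strength k_back from B to X and Y, the shared posterior context does pull X and Y into tighter synchrony. But that hand-wired coupling is exactly the thing whose informational origin is unclear. All frequencies and coupling strengths below are arbitrary choices of mine:

```python
import math

def simulate(k_back, steps=2000, dt=0.01):
    # Sequence graph A->X->B and A->Y->B as coupled phase oscillators.
    # X and Y have different natural frequencies, so left to themselves
    # they drift apart; forward coupling comes from A, and k_back is a
    # hypothetical backward coupling from the posterior context B.
    theta = {"A": 0.0, "X": 1.0, "Y": -1.0, "B": 0.0}
    omega = {"A": 0.0, "X": 0.5, "Y": -0.5, "B": 0.0}
    for _ in range(steps):
        d = {
            "A": omega["A"],
            "X": omega["X"] + math.sin(theta["A"] - theta["X"])
                 + k_back * math.sin(theta["B"] - theta["X"]),
            "Y": omega["Y"] + math.sin(theta["A"] - theta["Y"])
                 + k_back * math.sin(theta["B"] - theta["Y"]),
            "B": omega["B"] + math.sin(theta["X"] - theta["B"])
                 + math.sin(theta["Y"] - theta["B"]),
        }
        for n in theta:
            theta[n] += dt * d[n]
    # Steady-state phase gap between X and Y, wrapped to [0, pi].
    return abs((theta["X"] - theta["Y"] + math.pi) % (2 * math.pi) - math.pi)

print(simulate(0.0))  # ~1.05 rad: X and Y stay offset without feedback from B
print(simulate(2.0))  # ~0.33 rad: coupling from B tightens X-Y synchrony
```

The open question is precisely how that backward influence could arise from the recurrent flow itself, carrying information about B, rather than being wired in by hand as here.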

I have an idea for that which I'm working on now. But maybe you have your
own ideas. I'm interested to hear suggestions.

Also note, shared context of this kind is also the basis of
Izhikevich's polychrony. But Izhikevich has X and Y jointly locking
together with B not through synchrony, but through coordinated delays. This
might be better than synchrony: much greater coding depth, and it natively
addresses sequence.
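The delay version is easy to picture: in a polychronous group, X and Y fire at different times, chosen so that after their (different) conduction delays the spikes arrive at B simultaneously. The numbers below are purely illustrative:

```python
# Hypothetical conduction delays from X and Y to B (ms).
delays = {"X": 5.0, "Y": 2.0}

# Polychronous firing: X and Y fire at different times...
fire_times = {"X": 10.0, "Y": 13.0}

# ...yet their spikes coincide at B, because the firing offsets
# exactly compensate the conduction delays.
arrivals = {n: t + delays[n] for n, t in fire_times.items()}
print(arrivals)  # {'X': 15.0, 'Y': 15.0} -> coincident arrival at B
```

The coding depth comes from the fact that a different pair of firing offsets over the same wiring addresses a different downstream coincidence, so one network supports many distinct groups, and the time offsets themselves natively encode sequence.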

But as I say, maybe you can think of another *network coherence factor* which
will reflect posterior context in a sequence network (if you can, the
sequence gives you meaning, and the job is done).

> *On Phase Transitions vs. Chaos*
>
> You mentioned concern that this might be leaning too much toward static
> attractors, and again, that's well taken. However, the goal isn’t to
> reduce dynamics to fixed points; rather, it’s to explore the *regime
> around the phase transition*, where stability and fluidity coexist.
> Walter Freeman’s work on chaotic attractors is deeply aligned here, and I’m
> glad you brought him up. The “quantized” phrasing may be a bit misleading;
> it’s meant to describe *threshold phenomena* in energy-coherence space,
> not rigid states.
>
I actually have no problem with the "quantized" phrasing. It was an early
observation of mine that these groupings (actually before looking at the
dynamics, just looking at meaningful groupings in language) had a kind of
"quantum" indeterminacy, contradiction, or "uncertainty principle".

This relates to the contradictory/subjective meaning idea which I think
prevents compression of "meaning", and which I believe is a key insight we're
ignoring in AI. I slip between this and chaos as the key insights (for
AGI?). Both seem to be powers of assemblies of elements to defy abstraction.
Perhaps chaos captures the growth/expansion aspect of it, and quantum
captures the contradiction/subjectivity aspect of it. So both may apply.

So I don't mind the quantum analogy at all. Though you need to be careful
it doesn't immediately make people think of a subatomic connection,
Penrose, etc. But in recent years I've found more and more people making
the quantum analogy. (Bob Coecke was one of the first, applying quantum
maths to distributional models of meaning, around 2007.)

> *Final Thought: A Potential Synthesis?*
>
> If we can bring coherence, context-sharing, and recursive reconfiguration
> into a unified model, where *meaning is emergent from stable-but-fluid
> predictive dynamics*, then I think we're close to something quite
> powerful. Your framing of oscillations encoding shared context fits
> beautifully into that trajectory, and I’d be interested in integrating that
> perspective further.
>
Great. I have some ideas I'm working on in a spiking neuron context. But
I'd be interested to hear any ideas you may have on the problem of a "network
coherence factor" for a sequence network in your hardware context. It may
be that all you need is to confront the idea that shared context in
sequence maps to meaning. You may immediately have ideas for how to extract
attractors based on that (which will then implement the "internal meaning"
we seek) in your (memristor?) context.

-R

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M9b6cbfced16eafd4e8270415
Delivery options: https://agi.topicbox.com/groups/agi/subscription
