Fields might sum contexts better. I wonder if spikes win on coding depth
though... That's something I'm working on at the moment.

Either way, spikes or fields, I think the information is in the network.
It's equivalent to the information now found by LLMs. But greater. And
creative of new meaning. And structured: chaos actually has more structure;
it only appears unstructured because that structure must be expressed
differently with each context.

The key insight is that it must be found dynamically, because it will be
borderline chaotic.

You might also want to look at a current focus of SingularityNET, their
ActPC (Active Predictive Coding) initiative. I think the concepts are
getting very close. Predictive coding can accommodate more powerful
dynamics; it is just that historically it too has been limited to
abstractions built on Bayesian statistics.

We should be able to fit the dynamics of finding our predictive patterns
with the "information geometries" the SingularityNET actPC team are working
on:

ActPC-Geom: Towards Scalable Online Neural-Symbolic Learning via
Accelerating Active Predictive Coding with Information Geometry & Diverse
Cognitive Mechanisms
Ben Goertzel
https://arxiv.org/html/2501.04832v1#S4
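
For concreteness, here's a minimal toy sketch of the kind of single-level,
Gaussian predictive coding those Bayesian abstractions usually boil down to
(my own toy numbers, nothing taken from the paper): inference as gradient
descent on precision-weighted prediction error.

# Toy single-level predictive coding update (Gaussian / least-squares form).
# A latent estimate mu predicts one observation x through a fixed generative
# weight w; the precisions pi_* act as Bayesian confidence weights.
def predictive_coding_step(mu, x, w=1.0, prior_mu=0.0,
                           pi_obs=1.0, pi_prior=0.1, lr=0.1):
    eps_obs = x - w * mu          # bottom-up prediction error
    eps_prior = mu - prior_mu     # error against the prior belief
    # Gradient descent on the precision-weighted squared errors (free energy).
    return mu + lr * (pi_obs * w * eps_obs - pi_prior * eps_prior)

mu = 0.0
for _ in range(200):
    mu = predictive_coding_step(mu, x=2.0)
print(mu)  # settles near the precision-weighted compromise between x and the prior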

On Fri, Sep 5, 2025 at 4:30 AM Dorian Aur <dorian...@gmail.com> wrote:

>
>
> On Wed, Sep 3, 2025 at 6:46 PM Rob Freeman <chaotic.langu...@gmail.com>
> wrote:
>
>> Dorian,
>>
>> On Thu, Sep 4, 2025 at 2:12 AM Dorian Aur <dorian...@gmail.com> wrote:
>>
>>>   ...
>>> in EDI systems, the network coherence factor is indeed *inspired by
>>> phase synchrony,* though its implementation in *memristor-based
>>> substrates* naturally diverges from traditional spike-based models like
>>> those in neuronal systems.
>>>
>>> Whereas neuronal synchrony is typically framed in terms of spike timing
>>> correlations, in *EDI*, coherence emerges through *field-aligned signal
>>> propagation,* where the *timing, energy phase,* and *recursive
>>> trajectory alignment* of memristive states reinforce each other
>>> dynamically. It’s less about discrete spike coincidences and more about 
>>> *continuous,
>>> phase-sensitive alignment across the network.*
>>>
>>> We’re not simply measuring oscillation phase across devices, but rather
>>> capturing how signal propagation patterns entrain one another over time, a
>>> kind of analog synchrony driven by *shared context and energy
>>> minimization.* You could think of it as *"propagation phase coherence"*
>>> rather than spike phase synchrony.
>>>
>>> This makes it particularly suited to detecting and reinforcing semantic
>>> convergence, especially in the presence of divergent inputs collapsing
>>> toward shared attractors. It's here that memristors shine, offering
>>> nonlinear, history-dependent modulation that makes phase alignment not just
>>> possible, but dynamically stable and meaningful.
>>>
>>
>> I can see how continuous waves could be better in some ways for summing
>> and feeding back downstream interference.
>>
>> The summing is crucial. I gave the A->X/Y->B example. But in practice
>> what you want is for many of these to stack: A->X/Y->B, C->X/Y->D,
>> E->X/Y->F... etc. The more contexts two elements share, the more they can
>> be assessed as semantically (conversely also syntactically) similar.
>>
>> Using the LLM language of "prompts", given a prompt AXB, you want the
>> system to expand out X, X={Y, ....}
>>
>> So you want the entire set of shared contexts A_B, C_D, E_F... to inform
>> an attractor around X, which will actually define the meaning of X.
>>
>> You can relate this back to LLMs as embeddings. X is "embedded" in a
>> vector space of its contexts, with components of the vector being "weights"
>> along the dimensions of the different contexts.
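
Just to make that stacking and embedding picture concrete, here's a minimal
static sketch (toy sequences and plain context counting; the names are made
up for illustration):

from collections import Counter
from math import sqrt

# Toy sequences: X and Y both occur in the shared frames A_B, C_D, E_F.
sequences = [("A", "X", "B"), ("A", "Y", "B"),
             ("C", "X", "D"), ("C", "Y", "D"),
             ("E", "X", "F"), ("E", "Y", "F"),
             ("G", "Z", "H")]

def context_vector(word):
    # Embed a word as counts over the (left, right) contexts it appears in.
    return Counter((left, right) for left, w, right in sequences if w == word)

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda c: sqrt(sum(n * n for n in c.values()))
    return dot / (norm(u) * norm(v))

print(cosine(context_vector("X"), context_vector("Y")))  # 1.0: all contexts shared
print(cosine(context_vector("X"), context_vector("Z")))  # 0.0: no contexts shared

The more frames two elements share, the closer their vectors come out; that's
the static version of the grouping I want to form around X.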
>>
>> But to do this dynamically, you want to sum all those contexts. And to do
>> that you really want to expand them.
>>
>> Spikes are very all or nothing. Analog waves might be easier to sum, as
>> feedback can be continuous.
>>
>> Currently I'm imagining that this "summing" might happen by way of these
>> inhibition "landscapes". The prompt sequence is presented, and the "holes"
>> it opens in the inhibition of noise spread outward. So for a prompt AXB, B
>> creates a "hole" which allows noise to spread also to Y, which causes D, F,
>> etc. to spike (and create their own "holes"...) The sum of the "holes"
>> creates the sum of contexts, defining the grouping generated around X.
>>
>> It's the ability to go "backwards" using the inhibition "holes" which
>> allows this summing. Just expanding over synapses from AXB won't sum over
>> all shared contexts.
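
A rough discrete caricature of what I have in mind, purely to illustrate
(the toy successor graph and the all-or-nothing spread are stand-ins for
what should really be continuous dynamics):

# Toy "inhibition hole" spread. succ[w] lists the words seen immediately
# after w in the toy sequences A X B, A Y B, C X D, C Y D, E X F, E Y F.
succ = {"A": {"X", "Y"}, "C": {"X", "Y"}, "E": {"X", "Y"},
        "X": {"B", "D", "F"}, "Y": {"B", "D", "F"}}

def pred(word):
    # Words that lead into `word` (so a hole at `word` disinhibits them).
    return {w for w, following in succ.items() if word in following}

def spread(prompt_tail):
    holes, spiked = {prompt_tail}, set()
    frontier = pred(prompt_tail)          # B's hole lets noise reach X and Y
    while frontier:
        w = frontier.pop()
        if w in spiked:
            continue
        spiked.add(w)                     # w spikes...
        for nxt in succ.get(w, set()):    # ...driving its successors,
            if nxt not in holes:
                holes.add(nxt)            # which open holes of their own,
                frontier |= pred(nxt)     # disinhibiting whatever shares them
    return spiked, holes

group, summed = spread("B")               # prompt A X B, B being the last element
print(group)   # {'X', 'Y'}: the grouping generated around X
print(summed)  # {'B', 'D', 'F'}: the summed shared contexts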
>>
>> Analog waves might do it better. If there were a way for B to affect Y,
>> and Y to recruit D, F, etc., summing them in real time rather than in
>> discrete steps over a cascade of spikes.
>>
>> Given the insight, it might not be too hard to do.
>>
>> Given there's not much general interaction on this thread, feel free to
>> write to me directly to discuss it.
>>
>> Cheers,
>>
>> Rob
>>
> Absolutely Rob, I think you're articulating something that maps elegantly
> onto the core mechanics of EDI-style propagation. What you’re describing,
> summing over shared contexts via analog, recursive feedback, is precisely
> where EDI diverges from conventional spike-based models. While spikes are
> often described as "all-or-nothing," they in fact have important spatial
> characteristics, including directionality of propagation and
> context-dependent modulation. However, their discrete nature still poses
> challenges for capturing distributed, recursive meaning integration.
>
> In contrast, EDI’s continuous, wave-like propagation allows overlapping
> input trajectories to superimpose and reinforce shared attractors. In your
> AXB → X = {Y,…} example, analog propagation allows X to resonate with all
> the “holes” left open by B—and others like D, F, etc.—reactivating past
> trajectories not discretely, but coherently. The key is that in EDI,
> recursive “pulling” isn't just a product of top-down inhibition, but a
> field-mediated reentrance, allowing Y to register B’s “echo” through shifts
> in phase configuration, and to activate semantically adjacent paths such as
> D or F.
>
> That’s where the term *propagation phase coherence* earns its weight:
> it’s not just synchrony, but a physically instantiated coherence driven by
> overlapping histories and energetically favorable feedback loops. The
> attractor that forms around X isn’t derived from symbolic abstraction, but
> from emergent resonance across recurrent paths, a kind of dynamic embedding
> realized in the substrate.
>
> So yes, your idea of inhibition landscapes combining with analog
> reentrance in EDI is a compelling convergence. The deeper insight is this:
> semantic meaning occurs as the stable convergence of shared predictive
> histories, and EDI makes that happen not symbolically, but physically.
>
> ---Dorian Aur
>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-Meaaa4a6cdba1f9e04580f86e
Delivery options: https://agi.topicbox.com/groups/agi/subscription
