Are you arguing for analog computation? Like how the brain uses 20 watts,
while an equivalent neural network on a 10 petaflop GPU cluster needs 1
megawatt? What technology would you use?

-- Matt Mahoney, [email protected]

On Tue, Dec 16, 2025, 7:51 PM Dorian Aur <[email protected]> wrote:

> This is a fascinating example of *self-replication* and imperfect copying
> in living systems, and it directly resonates with principles of
> Electrodynamic Intelligence (EDI). Concepts like instruction copying and
> emergent complexity can be modeled and explored through EDI frameworks,
> providing a deeper understanding of adaptive and evolving systems.
>
> Dorian Aur
>
> PS For those interested in a practical exploration of these ideas, there
> are resources that illustrate EDI in action and bridge these abstract
> biological concepts with computational models:
> https://dorianaur.gumroad.com/l/udtedn
>
> On Tue, Dec 16, 2025 at 6:20 PM Matt Mahoney <[email protected]>
> wrote:
>
>> Living organisms have the following properties that distinguish them
>> from non-living matter:
>> 1. They reproduce. This implies that they carry the instructions for
>> creating copies of themselves that contain copies of the instructions.
>> 2. The instruction copying is not perfect, allowing them to evolve and
>> gain complexity.
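>> The second property can be sketched as a toy mutation step during
>> copying (a hypothetical illustration, not the program from rsi.pdf):

```python
import random

def mutate(genome: str, rate: float = 0.01) -> str:
    """Copy a genome, flipping each symbol with probability `rate`."""
    alphabet = "01"
    return "".join(
        random.choice(alphabet) if random.random() < rate else symbol
        for symbol in genome
    )

parent = "0" * 64
child = mutate(parent)
# Copying is imperfect: most symbols are preserved, a few may differ,
# giving selection something to act on across generations.
```

>> With rate = 0 the copy is exact; any positive rate yields occasional
>> variation, which is all evolution needs to get started.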
>>
>> These properties can be reproduced in software. The first property in
>> pseudocode looks like this:
>>
>> Print the following twice, the second time in quotes.
>> "Print the following twice, the second time in quotes."
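>> A runnable Python version of this self-printing idea (one standard
>> quine construction; a sketch, not the program from rsi.pdf):

```python
# A quine: the two code lines below print themselves when run.
s = 's = %r\nprint(s %% s)'
print(s % s)  # %r inserts repr(s), reproducing both lines verbatim
```

>> The printed output reproduces the two code lines exactly, mirroring
>> the "print the following twice, the second time in quotes" trick.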
>>
>> An example of a program with both properties can be found in
>> https://mattmahoney.net/rsi.pdf
>>
>> On Tue, Dec 16, 2025 at 3:57 PM Dorian Aur <[email protected]> wrote:
>> >
>> >
>> > Certain properties attributed to “biological intelligence” may instead
>> arise from substrate-independent physical dynamics.
>> >
>> > I’m exploring whether some core properties commonly attributed to
>> biological intelligence might instead reflect substrate-independent
>> physical dynamics rather than biology per se.
>> >
>> > Concretely, in neural-scale measurements, we can interpret certain
>> correlations as arising from electrodynamic interactions that are not
>> specific to living tissue, and which may not be fully captured by
>> standard computational abstractions.
>> >
>> > Do you see a reason these dynamics must reduce to computation, or do
>> they represent additional physical constraints relevant for AGI
>> architectures?
>> >
>> > I have a short written summary, and a longer text/audio treatment for
>> those interested; I didn’t want to overload this list.
>> >
>> > Dorian Aur
>> >
>> 
>> --
>> -- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tca9651e0e10920d5-Me059507b463cda636a4dab6b
Delivery options: https://agi.topicbox.com/groups/agi/subscription
