"repeated pair tends to compress—MDL likes fewer nodes+wires"

Pareto frontiers don't provide a principled loss function.  At some point
you have to bite the bullet, bring the complexity measures into the same
units, and add them.
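To make "same units" concrete: the classic MDL move is to price both the model and the residual data in bits, so the two terms can simply be summed into one loss.  Here is a toy sketch of that for the "repeated pair" case; the particular encoding (literal bits vs. pattern-plus-count) is my own illustrative choice, not anyone's canonical scheme:

```python
import math

# Hedged sketch: a two-part MDL cost measured in one unit (bits),
# so model complexity and data cost can simply be added.
# The encodings below are toy choices for illustration only.

def raw_bits(s: str) -> int:
    """Cost of sending the binary string literally: 1 bit per symbol."""
    return len(s)

def repeat_model_bits(pattern: str, count: int) -> int:
    """Model = the pattern itself; data = an integer repeat count
    (priced Elias-gamma style: 2*floor(log2 n) + 1 bits)."""
    model_cost = len(pattern)
    count_cost = 2 * math.floor(math.log2(count)) + 1
    return model_cost + count_cost   # same units, so we can just add

s = "01" * 32                        # "0101...", 64 symbols
literal = raw_bits(s)                # 64 bits
compressed = repeat_model_bits("01", 32)   # 2 + 11 = 13 bits
print(literal, compressed)           # the repeated pair compresses
```

The point is only that once everything is in bits, "fewer nodes+wires" and "better fit" stop being incomparable axes of a Pareto frontier and become addends of a single objective.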

While I've done this in principle with NiNOR Complexity, to get rid of the
UTM "choice" in Kolmogorov Complexity, there's been no follow-up
theoretical work by anyone, let alone a reduction to executable code (even
if that code could take longer to run than the Hawking-radiation heat
death of the universe).

One interesting question for theory:

There is some lower bound on the NiNOR Complexity approach.  The most
obvious case where it bottoms out is:
010101010101010101010101...
which is just
x=NOR(x,x)
i.e. a single NOR gate fed back on itself: since NOR(x,x) = NOT x, the
one-node network oscillates forever, emitting the alternating sequence.
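As a sanity check, the single self-wired NOR gate really does generate that sequence; a minimal sketch (my own, just simulating the recurrence above):

```python
# A single NOR gate wired back to itself.
# NOR(x, x) == NOT x, so iterating it oscillates: 0, 1, 0, 1, ...

def nor(a: int, b: int) -> int:
    return 1 - (a | b)

x = 0
bits = []
for _ in range(12):
    bits.append(x)
    x = nor(x, x)        # one node, one self-wire

print("".join(map(str, bits)))   # -> 010101010101
```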

I would have thought the Santa Fe guys would at least have pursued
something along these lines with boolean network fixed points.
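For the boolean-network angle, the relevant computation is finding the attractor (fixed point or limit cycle) of a synchronously updated net.  A toy sketch, assuming a Kauffman-style network built only from NOR gates; the 3-node wiring here is my own arbitrary example, not anything from the thread:

```python
# Hedged sketch: attractor of a tiny synchronous Boolean network whose
# every node is a NOR gate (Kauffman-style update; wiring is arbitrary).

def nor(a: int, b: int) -> int:
    return 1 - (a | b)

# Node i reads the two state indices in wiring[i] and applies NOR.
wiring = [(1, 2), (0, 2), (0, 1)]

def step(state):
    return tuple(nor(state[i], state[j]) for i, j in wiring)

def attractor(state):
    """Iterate until a state repeats; return the periodic part."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state)
    return trajectory[seen[state]:]

cycle = attractor((0, 0, 0))
print(len(cycle), cycle)   # a period-2 cycle, the network-level 0101...
```

This particular net flips between all-zeros and all-ones, i.e. the network-scale analogue of the x=NOR(x,x) oscillation.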


On Thu, Nov 6, 2025 at 8:46 PM John Rose via AGI <[email protected]>
wrote:

> I take back all the bad things I said about ChatGPT :)
>
> https://chatgpt.com/share/690d49aa-7e78-8007-8daa-9a2777ad185b
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T19f9264b814f3dfa-M4a44422bd163b564628a9fbb
Delivery options: https://agi.topicbox.com/groups/agi/subscription