BTW, John, I encourage you to pursue the CTMU as an AGI framework.  I think
it is the best bet in terms of filling in AIXI's free parameters (UTM choice
& utility function choice) in a principled manner.  Aside from being a
plausible potential advance over AIXI as a top-down AGI theory, it would go
a long way toward heading off the "post-modernist" hysteria that is
attempting to put OUGHT before IS in the current gold rush, by instead
putting them on what I suppose Chris might call "an equal, alpha/omega,
self-dual footing".

I would probably spend some of my time pursuing that avenue myself were it
not for a disagreement I had with the Mega Foundation regarding its
management of volunteer resources.  This, in turn, put me in a position
where they rejected my $100/month sacrifices to that Foundation -- a pretty
serious rift which has nothing to do with the CTMU's validity or lack
thereof per se.  I'm instead putting $100/month into the Hutter Prize in
the form of Bitcoin -- which I just awarded to Saurabh Kumar.

On Mon, Aug 7, 2023 at 8:32 AM James Bowery <[email protected]> wrote:

> Chris Langan's CTMU does seem to offer a unification of IS with OUGHT
> within a computational framework, and that is indeed why I initially
> contacted him regarding Algorithmic Information Theory's potential to
> provide at least the IS in what he calls "The Linear Ectomorphic
> Semimodel of Reality", a.k.a. the ordinary linear time of mechanistic
> science.
>
> But really, John, give me a break.  The problem of getting people to be
> reasonable about just mechanistic science, given the noise imposed on
> science by the likes of Popper and Kuhn, is hard enough.
>
> On Mon, Aug 7, 2023 at 8:28 AM John Rose <[email protected]> wrote:
>
>> On Sunday, August 06, 2023, at 7:06 PM, James Bowery wrote:
>>
>> Better compression requires not just correlation but causation, which is
>> the entire point of going beyond the statistics/Shannon-information
>> criterion to the dynamics/algorithmic-information criterion.
>>
>> Regardless of your values, if you can't converge on a global dynamical
>> model of causation you are merely tinkering with subsystems in an
>> incoherent fashion.  You'll end up robbing Peter to pay Paul, with
>> unintended consequences rippling through your human ecologies, etc.
>>
>> That's why engineers need scientists -- why OUGHT needs IS -- why SDT
>> needs AIT -- etc.
>>
>>
>> I like listening to non-mainstream music for different perspectives. I
>> wonder what Christopher Langan thinks of the IS/OUGHT issue given his
>> atemporal, non-dualistic, protocomputational view of determinism/causality.
>> I like the idea of getting rid of time…  and/or multidimensional time… Also
>> I’m a big fan of free will. Free will gives us a tool to fight totalitarian
>> systems. We can choose not to partake in systems, for example modRNA
>> injections and CBDCs. So we need to fight to maintain free will, IMHO.
>>
>> https://www.youtube.com/watch?v=qBjmne9X1VQ
>>
>> John

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M92062bc9fb0ce4070d8161f1
Delivery options: https://agi.topicbox.com/groups/agi/subscription
