On Mon, Sep 25, 2023 at 12:11 PM Matt Mahoney <[email protected]>
wrote:

> On Mon, Sep 25, 2023, 2:15 AM Quan Tesla <[email protected]> wrote:
>
>>
>> I can't find one good reason why greater society (the world nations)
>> would all be ok with artificial control of their humanity and sources of
>> life by tyrants.
>>
>
> Because we want AGI to give us everything we want.
>

"We" is a big concept.


> Wolpert's law says that two computers cannot mutually model or predict
> each other. (Or else who would win rock-paper-scissors?)
>

To the best of my knowledge, Chris Langan's resolution of Newcomb's Paradox
<https://megasociety.org/noesis/44/> involves a self-dual stratification of
simulator and simulated, in which case "there is no contest" between the
"two computers": one is simulated by the other.  This can't be countered by
claiming it introduces an assumption of bidirectional causality, since it
is equally if not more valid to claim that the *constraint* of
unidirectionality is itself an assumption, and only *constraints* really
count as assumptions.
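
Wolpert's point is easy to see concretely. Below is a toy Python sketch of
my own (the player functions and the depth cutoff are illustrative, not
from Wolpert or Langan): two players each try to win rock-paper-scissors
by simulating the opponent. If both simulate, the recursion never bottoms
out; any cutoff that forces termination makes at least one player's
prediction wrong.

    # Each move is beaten by the value it maps to.
    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def player_a(depth=0):
        if depth > 3:
            return "rock"  # arbitrary move once the regress is cut off
        return BEATS[player_b(depth + 1)]  # simulate B, play what beats it

    def player_b(depth=0):
        if depth > 3:
            return "rock"  # same arbitrary cutoff move
        return BEATS[player_a(depth + 1)]  # simulate A, play what beats it

    # Each player predicted the other would play "rock" and answered
    # "paper" -- both predictions come out wrong.
    print(player_a(), player_b())

Remove the depth cutoff and the mutual simulation recurses forever; that
non-termination is the computational face of the impossibility. A
consistent arrangement has to be the asymmetric, stratified one, where one
machine is strictly the simulator and the other cannot simulate back.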
