Newcomb's paradox is another proof of Wolpert's theorem. It assumes that
you and ND (the predictor) can both predict each other's actions, and
shows that this assumption leads to a contradiction. ND can simulate a
copy of your mind and predict whether you will take one box or both. You
can simulate ND because you are given its rule: the opaque box contains
$1M only if ND predicts you won't take the clear $1000 box. Both
predictions can't be correct.
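
Here is a minimal Python sketch of that circularity (the function names
and the framing as callables are mine, not part of the paradox). The agent
decides by simulating ND, and ND decides whether to fill the opaque box by
simulating the agent, so neither simulation ever finishes:

import sys

def nd_fills_box(agent) -> bool:
    # ND's rule: put $1M in the opaque box iff the simulated agent one-boxes.
    return agent(nd_fills_box) == "one box"

def agent(predictor) -> str:
    # The agent simulates ND: take only the opaque box if ND will fill it,
    # otherwise take both boxes.
    return "one box" if predictor(agent) else "both boxes"

sys.setrecursionlimit(100)   # fail fast instead of looping for a long time
try:
    agent(nd_fills_box)
except RecursionError:
    print("you and ND cannot both finish predicting each other")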

Wolpert's proof. Suppose two programs simultaneously output a bit. One wins
if the bits are the same and the other wins if they differ. Each program
gets as input a copy of the other's source code and initial state, which it
can run to predict the other player's move. Who wins?
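
A sketch of the standoff, again with made-up names and with "source code"
simplified to a Python callable. Whichever player can actually finish
simulating a halting opponent wins its own game, so the two simulations
cannot both halt:

import sys

def copier(opponent) -> int:
    # Wants the two bits to match: run the opponent, then copy its bit.
    return opponent(copier)

def flipper(opponent) -> int:
    # Wants the two bits to differ: run the opponent, then flip its bit.
    return 1 - opponent(flipper)

def constant(opponent) -> int:
    # A player that ignores its input and always outputs 0.
    return 0

print(copier(constant))    # 0, the same bit as constant: copier wins
print(flipper(constant))   # 1, the opposite bit: flipper wins

# But copier and flipper cannot both finish simulating each other:
sys.setrecursionlimit(100)
try:
    copier(flipper)
except RecursionError:
    print("neither player's simulation halts")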

Corollary. A computer (or brain) cannot simulate or model itself. It cannot
predict its own output. Proof: this is the special case where the two
computers are identical.
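
The same point in diagonal form (a toy illustration, not Wolpert's formal
proof): a program that can consult a predictor about itself can always
output the opposite bit, so no predictor can be correct about a system it
is part of:

def contrarian(predict) -> int:
    # Ask the predictor what I will output, then output the opposite bit.
    return 1 - predict(contrarian)

def some_predictor(program) -> int:
    # Any fixed predictor must commit to a bit for contrarian.
    return 0

print(some_predictor(contrarian))   # predicts 0
print(contrarian(some_predictor))   # actually outputs 1, so the prediction fails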


On Mon, Sep 25, 2023, 2:02 PM James Bowery <jabow...@gmail.com> wrote:

>
>
> On Mon, Sep 25, 2023 at 12:11 PM Matt Mahoney <mattmahone...@gmail.com>
> wrote:
>
>> On Mon, Sep 25, 2023, 2:15 AM Quan Tesla <quantes...@gmail.com> wrote:
>>
>>>
>>> I can't find one good reason why greater society (the world nations)
>>> would all be ok with artificial control of their humanity and sources of
>>> life by tyrants.
>>>
>>
>> Because we want AGI to give us everything we want.
>>
>
> "We" is a big concept.
>
>
>> Wolpert's law says that two computers cannot mutually model or predict
>> each other. (Or else who would win rock scissors paper?)
>>
>
> To the best of my knowledge, Chris Langan's resolution of Newcomb's
> Paradox <https://megasociety.org/noesis/44/> involves a self-dual
> stratification of simulator/simulated, in which case "there is no contest"
> between the "two computers" as one is simulated by the other.  This can't
> be countered by claiming one is introducing an assumption of bidirectional
> causality since it is equally if not more valid to claim that the
> *constraint* of unidirectionality is an assumption -- and only
> *constraints* really count as assumptions.
>
