On Tuesday, September 24, 2019 at 8:40:57 AM UTC-5, John Clark wrote:
>
> On Mon, Sep 23, 2019 at 4:22 PM Jason Resch <[email protected]> wrote:
>
> > *This Halloween will mark 6 years since you agreed with Step 3,*
>
>
> *BULLSHIT! *
>
> This is the entire post, and even though 6 years have passed I stand by 
> every word and wouldn't change anything:
>
> On Thu, Oct 31, 2013 at 2:12 PM, Jason Resch <[email protected]> wrote:
>
> *>  A) The test described where the simulation process forks 8 times and 
>> 256 copies are created and they each see a different pattern of the ball 
>> changing color*
>
>
> Duplicating a brain is not enough; the intelligence has NOT forked until 
> there is something different about them, such as one remembering seeing a 
> red ball and the other remembering seeing a green ball. Only then do they 
> fork. It was the decision made by somebody or something outside the 
> simulation to make sure all 256 saw a different sequence of colored balls 
> that created 256 distinct minds. And to a simulated physicist a decision 
> made outside the simulation would be indistinguishable from being random, 
> that is to say the simulated laws of physics could not be used to figure 
> out what that decision would be.  
>
> *> B) A test where the AI is not duplicated but instead a random number 
> generator (controlled entirely outside the simulation) determines whether 
> the ball changes to red or blue with 50% probability, 8 times. Then the AI 
> (or AIs) could not say whether test A occurred first or test B occurred 
> first.*
>
>
> Both A and B are identical in that the intelligence doesn't know what it 
> is going to see next; but increasingly convoluted thought experiments are 
> not needed to demonstrate that everyday fact. The only difference is that 
> in A lots of copies are made of the intelligence and in B they are not; but 
> as the intelligence would have no way of knowing whether a copy had been 
> made of itself, nor any way of knowing whether it was the original or the 
> copy, subjectively it doesn't matter if A or B is true.
>
> So yes, subjectively the intelligence would have no way of knowing if A 
> was true or B, or to put it another way subjectively it would make no 
> difference.  
>
> *> I reformulated the UDA in a way that does not use any pronouns at all, 
>> and it doesn't matter if you consider the question from one view or from 
>> all the views, the conclusion is the same.*
>
>
> Yes, the conclusion is the same, and that is the not very profound 
> conclusion that you never know what you're going to see next, and Bruno's 
> grand discovery of First Person Indeterminacy is just regular old dull as 
> dishwater indeterminacy first discovered by Og the caveman. After the big 
> buildup it's a bit of a letdown actually.
>
>   John K Clark
>  
>
>> >> important it's crystal clear exactly what the correct prediction would 
>>> have turned out to be.  
>>>
>>
>> > I did a few days ago, but you didn't respond.  I'll post it again:
>>
>
>>
>>
>>
>>
>> *First, consider this experiment:*
>>
>> *Imagine there is a conscious AI (or uploaded mind) inside a virtual 
>> environment (an open field). Inside that virtual environment is a ball, 
>> which the AI is looking at, and next to the ball is a note which reads:*
>>
>> *"At noon (when the virtual sun is directly overhead) the protocol will 
>> begin. In the protocol, the process containing this simulation will fork 
>> (split in two); after the fork, the color of the ball will change to red 
>> for the parent process and to blue in the child process (forking 
>> duplicates a process into two identical copies, with one called the 
>> parent and the other the child). A second after the color of the ball is 
>> set, another fork will happen. This will happen 8 times, leading to 256 
>> processes, after which the simulation will end."*
>>
>> *Now, with the understanding of that experiment, consider the following: 
>> if the AI (or all of them) went through two tests, test A and test B*
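The fork protocol in the note above can be sketched in a few lines (a minimal simulation, assuming forking is modeled as duplicating a record of each process's color history rather than a real OS fork, so the run is deterministic and easy to inspect):

```python
def run_fork_protocol(rounds=8):
    """Simulate the note's protocol: each round every process forks;
    the parent sees the ball turn red, the child sees it turn blue."""
    processes = [[]]  # one initial process with an empty color history
    for _ in range(rounds):
        next_gen = []
        for history in processes:
            next_gen.append(history + ["red"])   # parent process
            next_gen.append(history + ["blue"])  # child process
        processes = next_gen
    return processes

copies = run_fork_protocol()
print(len(copies))                      # 256 processes after 8 forks
print(len({tuple(h) for h in copies}))  # 256 distinct color sequences
```

After 8 forks there are 2^8 = 256 processes, and because parent and child diverge at every round, each copy's 8-color history is unique.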
>
>  *A) The test described where the simulation process forks 8 times and 
>> 256 copies are created and they each see a different pattern of the ball 
>> changing color*
>
>  *B) A test where the AI is not duplicated but instead a random number 
>> generator (controlled entirely outside the simulation) determines whether 
>> the ball changes to red or blue with 50% probability 8 times*
>
> *Then the AI (or AIs) could not say whether test A occurred first or test 
>> B occurred first.*
>
> *Do you agree that it is impossible for any entity within the simulation 
>> to determine whether test A was executed first, or whether test B was 
>> executed first, with higher than a 50% probability?*
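The indistinguishability behind this question can be checked mechanically (a sketch under the stated assumptions of 8 rounds and a red/blue choice each round): test A produces exactly one copy per possible 8-color sequence, so any transcript test B's external coin can generate is also some test A copy's transcript, with the same 1/256 weight either way.

```python
import itertools
import random

# Test A: the full set of transcripts the 256 forked copies see
# (one copy per red/blue sequence of length 8).
test_a = set(itertools.product(["red", "blue"], repeat=8))

# Test B: one AI, 8 fair coin flips controlled from outside the simulation
# (seeded here only to make the sketch reproducible).
rng = random.Random(0)
test_b = tuple(rng.choice(["red", "blue"]) for _ in range(8))

# Any transcript test B can produce is exactly one test A copy's transcript,
# and both protocols assign every sequence probability (1/2)**8 == 1/256.
print(test_b in test_a)   # True
print(len(test_a))        # 256
```

From inside the simulation a single observer only ever has its own 8-color transcript, and that transcript carries no evidence about which protocol generated it.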
>
>
> Yes of course I agree with that, but that doesn't mean Bruno's "question" 
> isn't gibberish, as is his "proof"! Unlike Bruno's thought experiment you 
> did not use any personal pronouns and I congratulate you for that, although 
> why you made it so convoluted is a mystery to me. And unlike Bruno you 
> didn't demand predictions of events where the veracity of the predictions 
> could never be judged, not even long after the events in question were 
> over. Because of Quantum Indeterminacy you can't say for certain if an atom 
> of uranium will decay tomorrow, but at least the day after tomorrow you'll 
> know; with Bruno's "first person indeterminacy" no one and no thing 
> will ever have any way of knowing, or even know what he was supposed to 
> know, and yet he still talks about probability as if it has meaning in that 
> context.    
>
> You could have used personal pronouns in case B but not for case A because 
> in that case there is no such thing as *THE *first person, there are lots 
> of them, and as a result although your thought experiment didn't teach us 
> anything we didn't already know at least it didn't produce gibberish.
>
> There is one other point, for case A, the one that has relevance for Many 
> Worlds, you say "*after the fork, the color of the ball will change*" 
> however, and Carroll specifically mentions this in his book, a mind (not to 
> be confused with a brain) does not fork until AFTER a change is detected by 
> it. So in Bruno's thought experiment a mind is not duplicated and then 
> there is some sort of halfass metaphysical mystery as to how one of them is 
> chosen to see Washington and the other is chosen to see Moscow, instead the 
> very act of seeing Washington is what has turned the Helsinki Man into the 
> Washington Man. So there is no "first person indeterminacy" and the answer 
> to the grand question "Why am I the Washington Man?" has a mundane answer 
> that is nevertheless 100% correct, because you saw Washington.
>
> *> All I ask is whether or not any entity at any time has access to 
>> information that can distinguish between iterated forking or randomized 
>> switching.  *
>
>
> No, and that is why it's so hard to get experimental proof that Many 
> Worlds is correct or proof it is incorrect, and the same is true for every 
> other quantum interpretation.
>  
>
>> >>The  difference is in the Many Worlds case, after the universe splits, 
>>> if I asked *you* today what the correct answer *you* should have given 
>>> yesterday was:
>>>
>>> 1) It would be obvious who the question was directed to.
>>> 2) It would be obvious what would have been the correct answer.
>>>
>>> Neither of these things is true for Bruno's "question".
>>>
>>
>> > What's so special about duplicating universes? 
>>
>
> Well, for one thing in your thought experiment there is someone outside of 
> the simulation observing it, but by the definition of the multiverse there 
> is nobody and nothing outside of it to observe a universe splitting. And 
> that's pretty special.  
>
> And for another thing in the Many Worlds case, after the universe splits, 
> if I asked *you* today what the correct answer *you* should have given 
> yesterday was:
>
> 1) It would be obvious who the question was directed to.
> 2)  It would be obvious what would have been the correct answer.
>
> Neither of these things is true for Bruno's "question". 
>  
>
>> * > Perhaps you can explain why one leads to apparent randomness but the 
>> other does not lead to randomness *
>
>  
> One leads to apparent randomness, but Bruno's "question" does not lead to 
> randomness or non-randomness; it leads to gibberish.
>  
>
>> > *Is anything I said about Carroll wrong? *
>>
>
> Yes obviously, you said he would agree with Bruno.
>  
>
>> > *What do you hope I will learn from reading Carroll's book?*
>>
>
> You might learn what Many Worlds is saying, and just as important what it 
> is not saying.
>
> John K Clark
>


It's a bit like a Twilight Zone:

A Many Worlds defender is against Arithmetic Reality.

@philipthrift 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/dcb27fe0-2c4d-4e49-add0-bd80d14123b0%40googlegroups.com.
