> On 27 May 2018, at 23:57, Brent Meeker <[email protected]> wrote:
> 
> 
> 
> On 5/27/2018 1:22 AM, smitra wrote:
>> This is a physical version of what Bruno has been talking about on this list.
>> 
>> With "strong AI" I mean that any simulation of a person generates the mind 
>> of that person, and the subjective state of that person is independent of 
>> how that simulation is performed. So, what matters is if certain 
>> computations are performed correctly, not on how those computations are 
>> performed.
> 
> This is actually ambiguous.  First, a computation is an execution of a 
> function: so being "the same function" can only be defined if all possible 
> inputs are defined, which is excessive. 


A computation is the execution of a program, or of a machine, more than of a 
function, but "same function" means here simply the same input/output 
behaviour, whatever the inputs could be. And as we are locally finite, "all 
inputs" can be replaced by a sufficiently large finite set of inputs. It makes 
no sense to say "no" to the doctor just because the copy and the original (if 
not annihilated) might taste chocolate in slightly different ways in two 
billion years.
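
A minimal illustration of "same input/output behaviour", in Python, with names 
of my own choosing: two syntactically very different programs compute the same 
function, and extensional equality is checked on a finite set of inputs, which 
is all a locally finite being ever needs.

    # Two different programs computing the same function (doubling an integer).
    def double_by_addition(n: int) -> int:
        return n + n

    def double_by_shift(n: int) -> int:
        return n << 1          # a very different "physical" implementation

    # "Same function" here means: same outputs on a (finite) set of inputs.
    def same_io_behaviour(f, g, inputs) -> bool:
        return all(f(x) == g(x) for x in inputs)

    print(same_io_behaviour(double_by_addition, double_by_shift, range(10_000)))  # True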



> Can the output be the same only for some small subset of inputs?...for only 
> one?


No, it has to be the same for all the inputs that make sense during some 
lifetime.



> Second, what does it mean for the same function to be computed 
> differently...does it mean the same output of the human body given the input 
> of the whole environment?...or does it mean the same input-output of the 
> amygdala, the visual cortex, the cerebrum, the cerebral cortex, the frontal 
> lobe, the parietal lobe,....  Exactly where are the input boundary and the 
> output boundary?

That depends on the level of substitution. In case of doubt, and if you can 
afford it, it is better to bet on a low level. For me, it means that the 
behaviour is correct at the input/output of the cellular level, bacteria 
included. I would most certainly avoid a brain transplant which does not take 
into account the full metabolism of the glial cells in the brain.



> 
> I don't think such a boundary can make sense.

The boundaries are internal, at some description level. If that level does not 
exist, it means that your mind relies on actual infinities. That is why Turing 
insisted that a human is a finite being, and that there would be a confusion 
of mental states if we allowed genuinely continuous information. But the 
reasoning works even if you asked the doctor to copy a description of the 
entire physical universe at the superstring+gravitation level, with 
1000^(1000^1000) correct decimals.
Stubbornly enough, perhaps, the arithmetical relations simulate this. It is 
fortunate that we don't have to wait through the delays of the universal 
dovetailer ...
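
For readers who have not met the universal dovetailer: it is the program that 
generates every program and executes them all in interleaved fashion, giving 
each one ever more steps, so that no non-halting program blocks the others. 
Here is a toy Python sketch, under the simplifying assumption that the 
"programs" are just generators handed to us by some enumeration (the real UD 
enumerates the programs of a universal machine itself):

    from itertools import count

    def dovetail(programs):
        # programs: an infinite iterator of generators, each generator standing
        # for one computation (possibly non-halting). Interleave their
        # executions so every computation gets arbitrarily many steps.
        started = []
        for stage in count(1):
            started.append(next(programs))      # start one more computation
            for computation in started:         # advance every started one
                for _ in range(stage):          # by `stage` further steps
                    next(computation, None)     # halted ones yield nothing more

    # Toy enumeration: the i-th "program" counts upward in steps of i, forever.
    def counter(i):
        n = 0
        while True:
            n += i
            yield n

    # dovetail(counter(i) for i in count())   # by design, this never terminates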



>   I think the "mind" only exists in the context of a world, a world which is 
> effectively physical.



That just puts the emulation level very low, without changing anything, unless 
you add that the world must be a "real, primary ontological thing", in which 
case mechanism stops working; but that is like criticising the theory of 
evolution because it misses your favourite metaphysical option. It is like 
betting on something we have no evidence for, just to prevent a simpler theory 
from working. 



> 
>> 
>> While this doesn't seem to have anything to do with quantum mechanics, the 
>> MWI etc., it actually implies the MWI. This follows from the assumption that 
>> whatever subjective experience is generated by the simulation is independent 
>> of the implementation of the physical device performing the simulation. The 
>> time evolution operator defines a mapping from the past state of the device 
>> to the future state. This implies that the simulation of a person in a 
>> certain state is also a scrambled version of the simulation of that person 
>> at some time in the future or the past. Since by implementation independence 
>> that scrambling should not affect the subjective experience, one has to 
>> conclude that the consciousness of the person at any time in the future or 
>> the past is also generated by the simulation.
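
A minimal numerical sketch of this "scrambled version" point, with illustrative 
names of my own: a reversible time evolution on a tiny finite state space is an 
invertible re-encoding, so the state at any time determines, and can be 
recovered from, the state at any other time.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 8                                  # toy state space {0, ..., 7}
    U = rng.permutation(N)                 # reversible "time evolution operator"
    U_inv = np.argsort(U)                  # its inverse

    def evolve(state, steps):
        # Apply the deterministic evolution `steps` times (negative = backwards).
        step = U if steps >= 0 else U_inv
        for _ in range(abs(steps)):
            state = step[state]
        return state

    s0 = 3                                 # state of the device at t = 0
    s5 = evolve(s0, 5)                     # state at t = 5: a scrambled encoding of s0
    assert evolve(s5, -5) == s0            # the scrambling loses no information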
>> 
>> If we start running a simulation at t = 0 s and the simulation is programmed 
>> to shut down at t = 1000 s, then at any time, the running of the simulation 
>> generates the consciousness of the simulated person at all times between t = 
>> 0 and t = 1000. The computer plus the environment it is in contains all the 
>> information about the past and present, including how the computer was 
>> powered on, how the simulation was started, how the simulation will end, etc. 
>> This then contains all the information about the person between t = 0 and 
>> t = 1000, but not before or after this time period.
>> 
>> The next step is to consider running a simulation that depends on the 
>> outcome of a spin measurement in the real world. We polarize a spin in the 
>> positive x-direction and measure the z-component, feed the result of the 
>> measurement into the simulation, and that then affects the simulation: the 
>> simulated person will be made aware of something different depending on the 
>> outcome of the measurement.
>> 
>> The moment this experiment has been set up and is ready to go, the time 
>> evolution of the entire system that includes the experimental setup, the 
>> computer and everything else that is of relevance, is fixed. But this time 
>> evolution will bring the system into a superposition of the two experimental 
>> outcomes of the measurement and their consequences for the simulation.
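
A toy numerical rendering of this step (illustrative only, in standard 
two-qubit notation, not the actual setup described above): the spin starts 
polarized along +x, a unitary interaction copies its z-component into a 
"simulation register", and the joint state ends up as an equal superposition 
of the two branch records.

    import numpy as np

    # Basis order for (spin, register): |00>, |01>, |10>, |11>
    plus_x = np.array([1, 1]) / np.sqrt(2)      # spin polarized along +x
    reg0   = np.array([1, 0])                   # simulation register starts in |0>
    psi_in = np.kron(plus_x, reg0)              # joint initial state

    # Unitary "measurement interaction": copy the spin's z-value into the register
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    psi_out = CNOT @ psi_in
    print(psi_out)    # [0.707 0 0 0.707]: an equal superposition of the
                      # |spin up, register 0> and |spin down, register 1> branches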
> 
> This implies a completely deterministic evolution.  In that case it is not 
> clear how the Born rule is implemented in the simulation.

By what you said earlier: by conveniently looking, here and now, at where and 
when we are relative to the most probable sheaf of indistinguishable (below 
our substitution level) computations. Then the Born rule can be extracted from 
the (arithmetic-based, or self-referentially based) quantum logic semantics. 
If this does not work, the mechanist theory has to be abandoned or radically 
improved.
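
To be concrete about what "implementing the Born rule" amounts to operationally 
in such a branching picture, here is a minimal sketch (function name is mine); 
it takes the branch amplitudes as given and squares them, so it illustrates the 
rule rather than deriving it from self-reference:

    import numpy as np

    def born_weights(amplitudes):
        # Relative measure assigned to each branch: |amplitude|^2, normalised.
        amplitudes = np.asarray(amplitudes, dtype=complex)
        w = np.abs(amplitudes) ** 2
        return w / w.sum()

    # The two branches of the spin experiment above have equal amplitude 1/sqrt(2):
    print(born_weights([1/np.sqrt(2), 1/np.sqrt(2)]))          # [0.5 0.5]

    # An unequal split, e.g. the amplitudes of a tilted spin:
    theta = np.pi / 6
    print(born_weights([np.cos(theta/2), np.sin(theta/2)]))    # ~[0.933 0.067]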



> 
>> 
>> It then seems to be a matter of belief in the MWI whether one should 
>> believe that both possibilities happen or that only one of them is going to 
>> be real. But if we accept strong AI, then we have to accept that the 
>> physical state of the system before the measurement is performed will also 
>> generate the consciousness of the person after the measurement. Because the 
>> information present in the physical state before the measurement contains 
>> both branches, it generates the consciousness of the two versions of the 
>> person in both branches.
> 
> But of course the simulation can simply stop all the branches but one.

You can only stop those relative to you. You can't stop those emulated in 
arithmetic, any more than a mathematician could make the number 666 disappear 
from the arithmetical reality.

Everett and Mechanism go hand in hand. In his longer text Everett makes this 
remark with some precision. He just fails to address the mind-body problem, 
and he is unaware (like many) that elementary arithmetic does implement all 
computations, so that the wave itself must still be extracted from 
machine/number self-reference.

Bruno 



> 
> Brent
> 
>> 
>> Finally, one can argue that the conclusion applies to real persons that are 
>> not simulated by computers, because the brain computes us, and in the above 
>> argument it doesn't matter whether you use a computer or a biological brain. 
>> In fact one needs to appeal to the entire environment containing the 
>> computer, and to get a rigorous argument this has to be taken as large as 
>> the light cone starting from the moment the simulation starts.
>> 
>> Saibal
>> 
> 

