On Sun, Mar 11, 2012 at 10:43 PM, acw <a...@lavabit.com> wrote:

> On 3/11/2012 21:44, R AM wrote:
>> However, I think that if comp is true, future experience is not only
>> indeterminate, but also arbitrary: our future experience could be anything
>> at all. But given that this is not the case, shouldn't we conclude that
>> comp is false?
> You're basically presenting the "White Rabbit" problem here. I used to
> wonder if that is indeed the case, but after considering it further, it
> doesn't seem to be: your 1p is identified with some particular abstract
> machine - that part is mostly determinate and deterministic (or
> quasi-deterministic if you allow some leeway as to what constitutes personal
> identity) in its behavior, but below that substitution level, anything can
> change, as long as that machine is implemented correctly/consistently.

Not sure I understand you ... I was thinking of something like this: if
comp is true, then we can upload a mind into a computer and simulate its
environment. The simulator could be constructed so that the stimuli given
to the mind are a sequence of arbitrary "white rabbits". Is there something
in comp that makes the existence of such "evil" simulators unlikely?


> If the level is low enough and most of the machines implementing the lower
> layers that eventually implement our mind correspond to one world (such as
> ours), that would imply reasonably stable experience and some MWI-like laws
> of physics - not white noise experiences. That is to say that if we don't
> experience white noise, statistically our experiences will be stable - this
> does not mean that we won't have really unusual "jumps" or changes in
> laws-of-physics or experience when our measure is greatly reduced (such as
> the current statistically winning machines no longer being able to
> implement your mind - 3p death from the point of view of others).
> Also, one possible way of showing COMP false is to show that such stable
> implementations are impossible; however, this seems far from obvious to me. A
> more practical concern would be to consider what would happen if the
> substitution level is chosen slightly wrong or too high - would it lead to
> an overly unstable 1p, or merely allow the SIM (Substrate-Independent
> Mind) to more easily pick which lower-level machines implement it? (There
> is another thought experiment showing how this could be done, if a machine
> can find one of its own Godel numbers.)
>> Ricardo.
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To post to this group, send email to everything-list@googlegroups.com.
> To unsubscribe from this group, send email to
> everything-list+unsubscribe@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.
