I appreciate it. I am not sure this will convince people like Peter
Jones, who assume the existence of a primary material world and
insist that a material implementation has to exist at some level for
a computation to exist. I agree this is a poorly convincing sort of
magical hand-waving, but from a logical point of view an argument in
the style of the movie-graph or Olympia is still needed.
On 10 Aug 2008, at 05:56, Quentin Anciaux wrote:
> 1) Why is 1 more than 0, and simpler than n?
> 'Entia non sunt multiplicanda praeter necessitatem'... At first
> sight this would seem to mean that the one-universe hypothesis is
> simpler than MW. Yes, one universe involves far fewer universes than
> MW (where either a finite number of other universes exist, or
> infinitely many)... Then by Occam's razor (O'R) we should take the
> one-universe hypothesis as simpler, because it requires fewer
> universes (in this case 1). But that is a mistaken way of
> understanding O'R... It should be understood as saying something
> about the premises, the axioms... One shouldn't add an axiom
> unnecessarily. And in this sense none of the one-universe,
> 42-universe, or infinitely-many-universes hypotheses is simpler than
> the others... so O'R cannot help you choose, or if it helped at all
> it would be to choose the zero-universe hypothesis... well,
> 0 < 1 << oo, and this for all values of 0, even big ones :)
> 2) Why does Turing emulability of the mind entail first-person
> indeterminacy and/or MW?
> Because if you're a computation then you're not dependent on the
> substrate of the computation... but only on the computation itself. A
> computation is substrate independent.
> Well, you'd say it may be substrate independent but still needs a
> substrate to 'exist'. OK, let's accept that, but let's return to the
> mind and to the hypothesis that the mind is a computation and the
> brain the substrate on which it is run. As a computation is substrate
> independent, it follows that if the mind is a computation it can be
> run on another computational substrate, for example on a... computer.
> And 'the mind' wouldn't be able to tell whether it is run on a brain
> or on a computer. By our hypothesis the mind is a computation, and a
> computation depends only on its state and transition rule; if the
> same input is given to the same algorithm it will yield the same
> result, so seeing a brain is of no help, because you would see a
> brain even if the computation were run somewhere else on the same
> input.
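The determinism claim above (same state, same transition rule, same input, therefore same result, whatever the substrate) can be sketched in a few lines. This is only a toy illustration under the thread's assumption that a computation is a state plus a transition rule; the `transition` function here is a made-up stand-in, not anyone's theory of mind:

```python
# Toy illustration: a "computation" is a state plus a transition rule.
# Two deliberately different execution strategies stand in for two
# different "substrates"; both yield the same final state.

def transition(state, inp):
    # Hypothetical transition rule: any pure function would do.
    return (state * 31 + inp) % 1_000_003

def run_iterative(state, inputs):
    # "Substrate" 1: a simple loop.
    for inp in inputs:
        state = transition(state, inp)
    return state

def run_recursive(state, inputs):
    # "Substrate" 2: same rule, different execution strategy.
    if not inputs:
        return state
    return run_recursive(transition(state, inputs[0]), inputs[1:])

inputs = [3, 1, 4, 1, 5, 9, 2, 6]
a = run_iterative(0, inputs)
b = run_recursive(0, inputs)
assert a == b  # same rule and input => same result, substrate notwithstanding
```

From "inside" either run there is nothing that distinguishes which execution strategy produced the state, which is the point being made about brain versus computer.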
> So why does this entail first-person indeterminacy and/or MW? Let's
> assume we could replicate the computation of your mind (I have
> assumed by hypothesis that it is a computation, so replication can be
> done, even if currently we don't have a clue how and don't even know
> whether the mind is a computation... but here I assume it just for
> the sake of the argument, to see what it entails). Then I could
> execute the 'you' computation on a computer, and if I can do that, I
> can also run the 'you' computation not only on one computer but on
> many computers. OK, so now I have at least two computers running the
> same mind (computation)... I switch off one computer; does the mind
> die? Hell no: by our hypothesis the mind is the computation, and the
> computation is still running on the other computer. So from the point
> of view of the mind, unplugging one of the two computers didn't
> change a thing. Now I'm a real serial killer: I switch off the last
> computer running the computation/mind... so does the mind die now??
> Let's say I've done a program dump before stopping the last computer,
> and I decide 5 years later to rerun the computation from this save
> point onward. Wasn't the mind dead? If it was, and mind == the
> computation, how can I have the ability to run the computation
> without it being the mind? It also means that if you're a computation
> you can't know at which 'level' you're run (whether you're run on a
> VM running in a VM running in a VM, or on a non-emulated substrate).
> So if the mind is a computation, then to make correct predictions
> about your next state you must take all computations having the same
> state into account. Even under the 'real switch' theory, a mind could
> be run on different 'real' (composed of substance) substrates... and
> the mind will *have* to take these runs on real substrates into
> account to make correct predictions. Unplugging one real-substrate
> run will not kill the mind, and neither will unplugging them all. The
> only way would be not only to unplug them all but to guarantee that
> it will *never ever* be run *again* (even once).
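The program-dump step of the argument can be illustrated with a minimal checkpoint/restore sketch, again under the thread's toy model of a computation as state plus transition rule (the `transition` function is hypothetical). An uninterrupted run and a run that is dumped, stopped, and resumed later pass through exactly the same states:

```python
import copy

# Toy illustration of the checkpoint/restore argument: a dumped and
# later resumed run reaches exactly the states an uninterrupted run
# would have reached, so the gap is invisible from "inside".

def transition(state, inp):
    # Hypothetical transition rule standing in for the computation.
    return (state * 31 + inp) % 1_000_003

inputs = [2, 7, 1, 8, 2, 8]

# Uninterrupted run on "computer A".
state_a = 0
for inp in inputs:
    state_a = transition(state_a, inp)

# "Computer B": run halfway, dump the state, then "switch off"...
state_b = 0
for inp in inputs[:3]:
    state_b = transition(state_b, inp)
checkpoint = copy.deepcopy(state_b)  # the program dump

# ...5 years later, restore from the dump and continue.
state_b = checkpoint
for inp in inputs[3:]:
    state_b = transition(state_b, inp)

assert state_a == state_b  # the interruption leaves no trace in the states
```

Nothing in the resumed computation's sequence of states records that it was ever stopped, which is why, on the mind-is-computation hypothesis, the dump-and-restore poses the question of whether the mind "died" in between.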
> If I'm run on another computational substrate than my brain, and
> someone pulls the plug, do I die?
> Quentin Anciaux
> All those moments will be lost in time, like tears in rain.
You received this message because you are subscribed to the Google Groups
"Everything List" group.