On 12 May 2014, at 03:01, Pierz wrote:

I've been following the "Is consciousness computable?" thread and it occurs to me that there may be a contradiction in the UDA. Step 6 introduces the idea that we can teleport a "brain" (i.e., digitally instantiate a set of memories, predispositions, etc.) into a computed virtual environment.

OK. Precisely, we bet that there is a description level of the brain which will preserve its functioning, and manifest the person's consciousness with intact memories, etc.



Yet according to the final conclusion of the UDA, physics is necessarily non-computable, because it arises from an infinity of computations. If step 6 is to work, ISTM that physics has to be computable.

That does not follow.
The virtual environment simulating Moscow and Washington can even be very crude, so that you know in advance that you will be aware of being in a video game. But once you bet on a description level, you can bet that you will survive in such an environment, and that you will indeed discover it is fake, if you have enough time and fair means of exploring the environment. Step 6 is only about the invariance of the (local) probability calculus under the change from a real to a locally virtual reconstitution, *for a second*.
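A minimal sketch of that invariance, assuming the standard 50/50 first-person bet of the duplication protocol (the branch labels, function names and uniform sampling below are illustrative assumptions only, not part of the UDA itself):

    import random
    from collections import Counter

    def duplication_trial(branches):
        # One run of the protocol: the subject is reconstituted in every branch,
        # but from the first-person view "you" find yourself in exactly one of
        # them, here sampled uniformly (the assumed 50/50 bet).
        return random.choice(branches)

    def estimate(branches, n=100_000):
        # Monte Carlo estimate of the first-person probability of each branch.
        counts = Counter(duplication_trial(branches) for _ in range(n))
        return {b: counts[b] / n for b in branches}

    # Physical duplication: Washington / Moscow.
    print(estimate(["Washington", "Moscow"]))

    # Step 6 variant: one branch is a crude virtual Moscow. The one-step
    # first-person statistics are unchanged (still roughly 1/2 each); only
    # further exploration of the environment could reveal that it is fake.
    print(estimate(["Washington", "virtual Moscow"]))

Both calls print roughly 0.5 per branch, which is all that the invariance "for a second" requires.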




It will not be enough that we approximate physics computationally, because we can always imagine teleporting the brain from and into a physics lab where advanced particle experiments are being carried out. We can imagine here an arbitrarily advanced physics lab of the future, capable of carrying out the most advanced experiments that are theoretically possible. The simulated lab must reproduce exactly the same results as the actual lab, or the teleportation fails - the "brain" can tell there has been a switch. If the conclusion of the UDA is non-computable physics, but the reasoning to reach that conclusion depends on physics being computable, then clearly the argument is faulty. This might even constitute a real argument for primitive matter (not that I'm a fan of it), since primitive matter stops us from proceeding at step 7, thus saving us from the contradiction.

Bruno, how are you getting out of this one?

Well tried :)

But good point: the UDA benefits from two readings. In the second reading, you can keep in mind that we reason assuming those normal histories in which the duplication experiments, and experiences, are done. This does not solve the rabbit problem, but that is why I interview the machine on it.

Bruno











http://iridia.ulb.ac.be/~marchal/



