On 15 Jan 2014, at 16:43, Jason Resch wrote:
On Jan 15, 2014, at 2:34 AM, Bruno Marchal <[email protected]> wrote:
On 14 Jan 2014, at 22:29, Terren Suydam wrote:
condescending dismissal in 3... 2... 1...
On Tue, Jan 14, 2014 at 4:27 PM, LizR <[email protected]> wrote:
On 15 January 2014 06:53, Edgar L. Owen <[email protected]> wrote:
Liz,
See my response to Brent on consciousness of an hour ago. It
answers this question...
Actually, to answer your question properly you have to define
'person', what you mean by an 'AI', and what you mean by a
'simulation'. In the details of those definitions will be your
answer... It's arbitrary and ill-formed as asked....
Yeah, unlike waffle about "it's really real because it's real in
the real actual world, really, because I say so" (insert eye-
rolling emoticon here)
OK, let's say we simulate you in a virtual world. Or, to get a
particular scenario, let's assume some aliens with advanced
technology turned up last night and scanned your body, and created
a computer model of it. We won't worry about subtleties like
substitution levels and whether "you" are actually duplicated in
the process. It's enough for the present discussion that the
simulated Edgar feels it's you, believes it's you, thinks it's you,
and appears to have a body like yours which it can move around,
just as you do, in a world just like the one you're living in
(they have also modelled the Earth and its surroundings. Using
nanotechnology they can do all this inside a relatively small
space). The simulated Edgar will think just like you, assuming
your thoughts are, in fact, the product of computation in your
brain, and it has your memories, because the aliens were able to
model the part of your brain that stores them.
So, sim-Edgar wakes up the next morning and believes himself to be
earth-Edgar.
Would he know, or discover at some point, that he's a simulation
in a virtual world, and if so, how?
And the answer is "yes, he would know that, but not immediately".
So it would not change the indeterminacy, as he will not
immediately see that he is in a simulation; but, unless you
intervene repeatedly in the simulation, or manipulate his mind
directly, he can see that he is in a simulation by comparing the
comp physics ("in his head") with the physics of the
simulation.
The simulation is locally finite, and the comp-physics is
necessarily infinite (it emerges from the 1p indeterminacy on the
whole UD*), so, sooner or later, he will bet that he is in a
simulation (or that comp is wrong).
OK?
Bruno
Is this necessarily true if the simulation were run on a quantum
computer so the simulants observed a kind of FPI?
But even a classical computer inherits our FPI. The global FPI should
be completely invariant. You might only make the task of the simulant
more difficult, by some encoding.
In a sense, you could add complexity to the task of fooling them on
physics, as you exploit the root of the FPI (in case comp gives
exactly QM, and the Heisenberg uncertainty circumscribes our level of
substitution).
The simulants themselves follow the FPI at all times. The difficulty
is making them relate their normal histories.
I might try to think of a better answer. The difficulty is that a
precise answer to this depends on the precise relation between the
comp FPI and the Everett FPI.
Bruno
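[The "FPI" discussed above — first-person indeterminacy — can be illustrated with a toy classical sketch: a deterministic program enumerates every continuation of an observer who is repeatedly duplicated, yet any single first-person history reads like a random string of outcomes. The function names and the W/M labels (echoing the Washington/Moscow thought experiment) are illustrative choices of this editor, not anything specified in the thread.]

```python
import random

def duplicate_histories(steps):
    """Deterministically enumerate all first-person histories after
    `steps` self-duplications. Each duplication appends 'W' or 'M'
    (which copy 'I' turn out to be) to every existing history."""
    histories = ['']
    for _ in range(steps):
        histories = [h + branch for h in histories for branch in 'WM']
    return histories  # 2**steps histories; the 3p view is deterministic

def sampled_history_bias(steps, seed=0):
    """From the inside, a continuation is like one uniformly sampled
    history; return the fraction of 'W' outcomes it contains."""
    random.seed(seed)
    history = random.choice(duplicate_histories(steps))
    return history.count('W') / steps
```

[The third-person description (`duplicate_histories`) is fully deterministic and finite for any fixed number of steps, while a typical sampled first-person history shows roughly half 'W' and half 'M': indeterminacy appears only from the inside, which is the sense in which even a classical simulation "inherits" the FPI.]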
Jason
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.
http://iridia.ulb.ac.be/~marchal/