Jason Resch wrote:
> On Tue, Sep 9, 2008 at 7:44 PM, Stathis Papaioannou <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
> 2008/9/10 Jason Resch <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>>:
> > Uv,
> > One of the concerns people have with free will, or the lack
> > thereof, is that if physics is deterministic, one's future actions
> > can be predicted without them even having to exist. However, an
> > interesting consequence of computationalism is this: one's future
> > actions cannot be predicted without a simulation that goes into
> > enough detail to instantiate that person's consciousness. As
> > conscious creatures, our wills cannot be predicted without our
> > consciousness being invoked by the calculations, just as the
> > physics of this universe is doing now.
> Hm, sounds good, but is that true?
> I think it is, if you ignore unpredictability due to QM, measurement
> problems, the need to simulate the environment, etc. We can set aside
> debate on those other issues for the purposes of this thought experiment
> by supposing that a simulated mind and environment exist together inside
> a computer, and that both evolve according to deterministic rules which
> can be computed in finite time.
> Within that situation, it is clear that there is no way to leap to
> future states of the system other than having the computer compute each
> intermediate step. Skipping or abridging finer details of the system
> (environment or mind) will lead to ever-growing inaccuracies down the
> road, because of the sensitive dependence on initial conditions Rich
> mentioned. The only sure way to _know_ with certainty what the future
> holds is to process every instruction of the program. Unless you
> believe in the possibility of philosophical zombies, a conscious being
> cannot be accurately simulated without simulating its mind in enough
> detail for that being to be conscious.
The impossibility of a philosophical zombie means that if you simulate the
behavior of a conscious being you must thereby also instantiate consciousness.
But Stathis is questioning the converse. Is it possible to instantiate
consciousness without simulating everything about the behavior? And I'd say
this question is different from the question of predicting behavior. We might
be able to instantiate consciousness and yet fail in prediction because of
sensitivity to initial conditions and/or quantum noise.
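For what it's worth, the "sensitive dependence on initial conditions" point can be made concrete with a toy example (my own sketch, not anything from the thread): the logistic map, a standard chaotic system. Two runs whose starting states differ by one part in ten billion, standing in for a simulation that "abridged" a fine detail, agree for a while and then diverge completely.

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two simulations differing only by a 1e-10 "abridged detail" in the start state.
a = logistic_trajectory(0.4, 100)
b = logistic_trajectory(0.4 + 1e-10, 100)

diffs = [abs(x - y) for x, y in zip(a, b)]
# Early on the discrepancy is negligible; within a few dozen steps it is
# as large as the signal itself, so the coarse run predicts nothing.
print("step 5 error:", diffs[5])
print("worst error :", max(diffs))
```

The moral matches Jason's claim: there is no shortcut around computing every intermediate step at full detail, since any coarsening is eventually amplified into total disagreement.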
You received this message because you are subscribed to the Google Groups
"Everything List" group.