On 03-Sep-06, at 17:18, David Nyman wrote:

> Bruno Marchal wrote:
>> Maudlin first builds a digital machine, let us call it M, which 
>> performs a computation PI (Maudlin's name for it) that we suppose 
>> corresponds to a genuine conscious experience (for example, some 
>> remembering of the taste of cocoa).
> At this point we don't know whether the conscious experience is
> supposed to:
> 1) inhere in the computation independent of physical instantiation
>  or
> 2) inhere in some subset of the physical activity of the machine that
> is supposed to be 'relevant' for the computation
> It seems that what is intended under 2) must be *any* physical activity
> that could be construed as 'implementing' this computation, since
> syntactically equivalent hardware implementations aren't constrained to
> any particular set of physical activities.

All right.

>> Suppose that during the running of that particular computation PI, the
>> registers r1, ..., r67 are never used. Maudlin argues that if 
>> consciousness is attached to the physical activity relevant for the 
>> computation, we can remove those unused parts of the computer without 
>> changing the conscious experience.
> OK, under either assumption 1) or 2) above.
>> He then shows that he can manage to build a version of M, 
>> proto-Olympia (say), which has almost no physical activity at all when 
>> it follows the PI computation.
> But this will only preserve the conscious experience under the prior
> assumption of its invariance to physical activity.

Yes. OK.
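
To make the register point concrete, here is a toy sketch in Python (my 
own illustration, with an invented program and register names, not 
Maudlin's actual machine). It only shows the computational fact: if some 
registers are never read or written during one particular run, removing 
them leaves the step-by-step activity of that run unchanged.

def run(program, registers, max_steps=20):
    """Execute `program` and record every register access as a trace."""
    trace, pc = [], 0
    while pc < len(program) and len(trace) < max_steps:
        op, *args = program[pc]
        if op == "inc":                       # increment a register
            registers[args[0]] += 1
            trace.append(("inc", args[0], registers[args[0]]))
            pc += 1
        elif op == "jz":                      # jump if a register is zero
            trace.append(("jz", args[0], registers[args[0]]))
            pc = args[1] if registers[args[0]] == 0 else pc + 1
        elif op == "halt":
            trace.append(("halt",))
            break
    return trace

# This particular run (the analogue of PI) never touches r2 or r3.
program = [("inc", "r0"), ("inc", "r0"), ("jz", "r1", 3), ("halt",)]
full_machine    = {"r0": 0, "r1": 0, "r2": 0, "r3": 0}
trimmed_machine = {"r0": 0, "r1": 0}          # unused registers removed

assert run(program, dict(full_machine)) == run(program, dict(trimmed_machine))

Whether the conscious experience is preserved by that removal is, of 
course, exactly what 1), 2) and 3) disagree about.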

> If this invariance
> is false we have a third possibility:
> 3) consciousness inheres in *specific* physical activities (and
> consequently physically-instantiated comp is merely 'syntactic
> simulation')

Either those *specific* physical activities are Turing emulable, and we 
are back to 1) and 2), or they are not, and then comp is false. Recall 
that we assume comp.

> Under this assumption, changing the physical details of the
> implementation might have any arbitrary effect whatsoever on the
> original conscious experience.
>> Proto-Olympia is *physically* accidentally correct for PI, but no 
>> longer counterfactually correct.
> We don't know what effect the lack of counterfactuality would have on
> the conscious experience. None, if 3) is correct.

All right (but comp needs to be false).
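
To illustrate "accidentally correct but not counterfactually correct", 
here is another toy Python sketch (again my own invented illustration, 
not Maudlin's Olympia): a replay device agrees with the genuine 
computation on the input that actually occurs, but would give a wrong 
answer had the input been different.

def genuine(x):
    """The real computation: it doubles its input."""
    return 2 * x

actual_input = 21
recorded_output = genuine(actual_input)   # recorded during the real run

def replay(x):
    """Physically 'accidentally correct' for the actual run only."""
    return recorded_output                # ignores x entirely

# Correct for the run that actually happened ...
assert replay(actual_input) == genuine(actual_input)

# ... but not counterfactually correct: on another input it diverges.
assert replay(5) != genuine(5)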

>> Then Maudlin reintroduces the unused parts, the Klaras, which
>> reintroduce the counterfactual correctness, WITHOUT ADDING any 
>> comp-relevant physical activity (if they did, it would mean the level 
>> is incorrect(*)).
> Again, under 3) this wouldn't affect the conscious experience if the
> relevant physical invariance is preserved.
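
Continuing that toy sketch (still my own invented illustration, not 
Maudlin's actual Klaras): adding a stand-by component that would take 
over only on counterfactual inputs restores counterfactual correctness, 
while on the run that actually happens it contributes nothing.

def genuine(x):
    return 2 * x                     # the real computation

actual_input, recorded_output = 21, 42

def olympia_with_klara(x):
    """Replay plus an inert stand-by component."""
    if x == actual_input:            # the actual run: pure replay,
        return recorded_output       # the stand-by part never acts
    return genuine(x)                # it takes over only counterfactually

assert olympia_with_klara(actual_input) == genuine(actual_input)
assert olympia_with_klara(5) == genuine(5)   # counterfactual correctness back
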
>> So comp + physical supervenience (phys-sup) would force us to 
>> associate any conscious experience with any physical process.
> Under 3) it would force us to associate specific conscious experiences
> to specific physical processes, at the correct (physical) substitution
> level.
>> And that would kill comp! So phys-sup -> NOT comp, or equivalently 
>> comp -> NOT phys-sup.
> Under 3) it would kill comp as a theory of the invariance of
> consciousness to physical activity.


> It would be possible for a physical
> process that was conscious to be Turing-emulable, but for the conscious
> experience to be non-invariant to different instantiations of such
> emulation.

We would have zombies. Why not? Once comp is false ...

> This would follow from the inherence of consciousness in
> *specific* physical activities. I'm speaking here of comp as
> instantiated in a *physical* machine, and consequently this is no
> different to the claim that you can't drive a comp-emulated car down to
> the shops (at least not the ones *outside* of the machine). The car you
> need for your trip is non-invariant to turing-emulation.
> This is essentially the point I attempted to establish in my original
> 'anti-roadmap' post. Assumption 3 claims that 'conscious' activity must
> inhere in specific causal sequences seamlessly spanning the machine and
> the world outside it. Without this, it is difficult to see how
> 'consciousness' could be causally relevant to the intentional
> interaction of the machine with its environment.
> As conscious machines ourselves we understand very well the difference
> between the car we dream of (the 'emulated' Ferrari) and the one we
> actually drive (the VW we causally interact with).

OK in this situation. But comp makes it impossible to distinguish the 
experience of driving a car from the experience of driving a virtual 
car in a virtual environment, emulated at the right substitution level 
(or below). Then the movie-graph argument, or Maudlin's Olympia, shows 
that machines cannot even distinguish a physical virtual environment 
from a purely arithmetical virtual environment.


