On 03 Nov 2008, at 08:32, Brent Meeker wrote:

> I have reservations about #6:  Consciousness is a process, but  it
> depends on a context.

That is why I use the notion of a generalized brain: it takes into account 
the possible need of a context. The argument would break only if you 
stipulate that the context cannot be captured digitally, but this would 
make the generalized brain not Turing emulable, and this would mean 
comp is false. Recall that my point is that comp implies something, not 
that comp is true.

>  In the argument as to whether a stone is a
> computer, even a universal computer, the error is in ignoring that the
> computation in a computer has an interpretation which the programmer
> provides.

I don't see the relevance of this to step #6.
I have never written, nor indeed believed, that a stone can be a computer.

>  If he can provide this interpretation to the processes within
> a stone, then indeed it would be a computer; but in general he can't.

I agree with this, but I don't see the relevance.

>  I think consciousness is similar; it is a process but it only has an
> interpretation as a *conscious* process within a context of perception
> and action within a world.

In step six, the context is taken into account. Your argument will go 
through only if you think that the context is both needed integrally 
and not Turing emulable, but then comp is false.
Also, strictly speaking, consciousness makes sense only for the 
subject. If some direct access to a world were needed throughout, then 
even the experience of dreaming would become impossible.

> Which is why I think philosophical zombies
> are impossible.

If this were true, then the movie graph (step 8 without Occam) would 
not be needed. Arithmetical truth is provably full of philosophical 
zombies if comp is true and step 8 is false.

> But then, when you imagine reproducing someone's
> consciousness, in a computer and simulating all the input/output, i.e.
> all the context, then you have created a separate world in which there
> is a consciousness in the context of *that* world.  But it doesn't
> follow that it is a consciousness in this world.

To accept this I have to assume "I = the world", and that the world is 
not Turing emulable. But then comp is false.

> The identification of
> things that happen in the computer as "He experiences this." depend on
> our interpretation of the computer program.  There is no inherent,
> ding-an-sich consciousness.

Here I disagree. This would entail that if you beat a child in such a 
way that nobody knows, then the child does not suffer.

> Your step #6 can be saved by supposing that a robot is constructed so
> that the duplicated consciousness lives in the context of our world, 
> but
> this does not support the extension to the UD in step #7.  To identify
> some program the UD is generating as reproducing someone's 
> consciousness
> requires an interpretation.

With comp the universal machine is the interpreter. Again you are 
telling me that comp is false.

> But an interpretation is a mapping between
> the program states and the real world states - so it presumes a real 
> world.

Then dreaming could not be a conscious experience. But since the work of 
LaBerge and Hearne, all brain physiologists accept that it is.
I am afraid you put something magical (not Turing emulable) in the 
world and in consciousness. This would make us non-digital machines.

> I have several problems with step #8.  What are consistent 1-histories?

This notion is needed for the AUDA (the arithmetical translation of the 
UDA). The movie graph just explains that comp makes it impossible to 
attach consciousness to the physical activity of the running UD. It 
explains why we don't have to run the UD: digital machines cannot 
distinguish physical computations from arithmetical computations.
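The dovetailing idea itself is concrete enough to sketch. Below is a toy illustration of my own (not Bruno's UD: the real UD enumerates and interleaves *all* programs of a universal machine, whereas this sketch interleaves just two hand-picked programs, represented here as Python generators):

```python
def dovetail(programs, steps=5):
    """Toy dovetailer: interleave the execution of several (possibly
    non-halting) programs, advancing each one step per pass."""
    trace = []
    gens = [(name, make()) for name, make in programs]
    for _ in range(steps):
        for name, g in gens:
            try:
                trace.append((name, next(g)))
            except StopIteration:
                pass  # this program has halted; the dovetailer keeps going

    return trace

def halting():
    # a "program" that halts after three steps
    yield from range(3)

def looping():
    # a "program" that never halts
    n = 0
    while True:
        yield n
        n += 1

result = dovetail([("halting", halting), ("looping", looping)], steps=5)
```

The point of the interleaving is that the non-halting program never blocks the halting one: every program gets its n-th step executed eventually, which is what lets a dovetailer run infinitely many computations "at once".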

> Can they be characterized without reference to nomological consistency?
> The reduction to Platonia seems almost like a reduction argument 
> against
> comp.

This is certainly possible, but up to now nobody has been able to derive 
a contradiction. In the seventies, some people claimed that I had 
refuted comp by showing it entails many worlds. At least since 
Everett-Feynman-Deutsch, people have abandoned this idea (that many 
worlds = contradiction).

>  Except that comp was the assumption that one physical process can
> be replaced by another that instantiates the same physical relations.

No, comp involves the notion of "me" or of "my consciousness". Comp 
is just the assumption that my consciousness is unchanged when my 
(generalized) brain is substituted by digital devices at some level of 
description.

>  I
> don't see how it follows from that there need not be an instantiation 
> at
> all and we can just assume that the timeless existence in Platonia is
> equivalent.

Well, it comes from the impossibility of attaching consciousness to the 
exclusively physical: that is the point of the movie graph argument 
(and it is also entailed by Maudlin's Olympia argument).

> You  write: "...the appearance of physics must be recovered from some
> point of views emerging from those propositions."  But how does this
> "emergence" work?  Isn't it like saying if I postulate an absolute 
> whole
> that includes all logically possible relations then this must include
> the appearance of physics and all I need is the probability measure 
> that
> picks it out.  It's like Michelangelo saying, "This block of marble
> contains a statue of David.  All I need is the measure that assigns 0 
> to
> the part that's not David and 1 to the part that is David."

To select effectively the statue of David, so that it becomes manifest 
to his public, Michelangelo still has to remove the zero part. This can 
be done by ... sculpting.

>> To be sure, do you understand the nuance between the following theses:
>> WEAK AI: some machines can behave as if they were conscious (but
>> could as well be zombies)
>> STRONG AI: some machines can be conscious
>> COMP: I am a machine
>> We have:
>> WEAK AI does not imply STRONG AI, which does not imply COMP. (It is
>> not because machines can be conscious that we are necessarily
>> machines ourselves; of course, with Occam's razor, STRONG AI goes in
>> the direction of COMP.)
>> Do those nuances make sense? If not, (1...8) does not, indeed, make
>> sense. You just don't believe in consciousness and/or persons, as in
>> the eliminative materialism of neuro-philosophers (the Churchlands,
>> almost Dennett in "Consciousness Explained").
> I think they make some good arguments.  I don't think that 
> consciousness
> is a thing or can exist apart from a much larger context.

Again, if consciousness needs that context, then either the context is 
not Turing emulable, which means comp is false (because in that case 
consciousness needs something not Turing emulable), or the context is 
Turing emulable, and then the reasoning goes through.

You are not refuting the derivation. It seems you are hesitating 
between eliminating consciousness and making it rely on something 
not Turing emulable. In both cases comp is then false. You could as well 
say "Marchal shows that comp entails there is no fundamental physical 
universe, thus Marchal has refuted comp." But nobody has ever proved 
there is a primary physical universe, so I take it as premature to 
pretend that we have shown comp contradictory. All that comp implies is 
that the appearance of a physical universe has to emerge from the many 
computational histories already existing in elementary arithmetic.
The contradiction even disappears completely once we look at the 
translation of the UDA in arithmetic. Thanks to incompleteness, we have 
all the ingredients to explain how the laws of physics emerge from the 
computations as seen from internal points of view.
Is it so astonishing that, just as biological principles have emerged 
by natural selection, the laws of physics emerge from numbers by a 
logical and arithmetical selection principle?

Tell me if you understand why steps 6 and 7 are correct, before we 
tackle the more subtle step 8. Just remember that comp explicitly 
invokes the notion of consciousness, and recall the use of the 
generalized brain. It is very rare now that people still have problems 
with 1..7. To be sure, in the seventies people did understand 1..7, and 
since then I have never met (never, even at Brussels) someone who does 
not understand 1..7 in an oral presentation, despite the heavy use of 
the first-person-experience delay invariance. I hope you understand 
that your criticism of #6 does not go through. OK?

Bruno Marchal


You received this message because you are subscribed to the Google Groups 
"Everything List" group.