Re: Maudlin's Demon (Argument)

2007-10-08 Thread George Levy
Sorry Bruno, no disrespect, I meant to type "Hi Bruno".
George

George Levy wrote:

> Ho Bruno
>
> Sorry, I have been unclear with myself and with you. I have been 
> lumping together the assumption of an "objective physical world" and 
> an "objective platonic world". So you are right, I do reject the 
> objective physical world, but why stop there? Is there a need for an 
> objective platonic world? Would it be possible to go one more step - 
> the last step hopefully - and show that the world that we perceive 
> is solely tied to our own consciousness? So I am more extreme than you 
> thought. I believe that the only necessary assumption is the 
> subjective world. Just like Descartes said: Cogito...
>
> I think that the world and consciousness co-emerge together, and the 
> rules governing one are tied to the rules governing the other. In a 
> sense Church's thesis is tied to the Anthropic principle.  Subjective 
> reality also ties in nicely with relativity and with the relative 
> formulation of QT.
>
> This being said, I am not denying physical reality or objective 
> reality. However these may be derivable from purely subjective 
> reality. Our experience of a common physical reality and a common 
> objective reality requires the existence of a common physical frame of 
> reference and a common platonic frame of reference respectively.  A 
> common platonic frame of reference implies that there are other 
> platonic frames of reference. This is unthinkable... literally.  
> Maybe I have painted myself into a corner... Yet maybe not... No one 
> in this Universe can say...
>
> George
>
>
> Bruno Marchal wrote:
>
>>Hi George,
>>
>>I think that we agree on the main line. Note that I have never 
>>pretended that the conjunction of comp and weak materialism (the 
>>doctrine which asserts the existence of primary matter) gives a 
>>contradiction. What the filmed-graph and/or Maudlin shows is that comp 
>>makes materialism empty of any explicative power, so that your "ether" 
>>image is quite appropriate. Primary matter makes, through comp, the 
>>observation of matter (physics) and of course qualia devoid of any 
>>explanatory power, even about just the apparent presence of physical 
>>laws.
>>I do think nevertheless that you could be a little quick when asserting 
>>that the mind-body problem is solved at the outset when we abandon the 
>>postulate of an objective (I guess you mean physical) world. I hope you 
>>believe in some objective world, be it number theoretical or 
>>computer science theoretical, etc.
>>Your point "3)" (see below) is quite relevant, sure.
>>
>>Bruno
>>
>>
>>Le 08-oct.-07, à 05:10, George Levy a écrit :
>>
>>  
>>
>>>Bruno Marchal wrote:
>>>
>>>
>>>
>>>>I think that Maudlin refers to the conjunction of the comp hyp and
>>>>supervenience, where consciousness is supposed to be linked (most of
>>>>the time in a sort of "real-time" way) to the *computational activity*
>>>>of the brain, and not to the history of any of the states occurring in
>>>>that computation.
>>>>
>>>>If you decide to attach consciousness to the whole physical history,
>>>>then you can perhaps keep comp by making the substitution level very
>>>>low, but once the level is chosen, I am not sure how you will make it
>>>>possible for the machine to distinguish a purely arithmetical version
>>>>of that history (in the arithmetical "plenitude" (your wording)) from
>>>>a "genuinely physical one" (and what would that mean?). Hmmm...
>>>>perhaps I am quick here ...
>>>>
>>>>Maybe I also miss your point. This is vastly more complex than the
>>>>first seven steps of UDA, sure. I have to think how to make this
>>>>transparently clear or ... false.

>>>As you know I believe that the physical world can be derived from
>>>consciousness operating on a platonic "arithmetic plenitude."
>>>Consequently, tokens describing objective instances in a physical world
>>>cease to be fundamental. Instead, platonic types become fundamental. In
>>>the platonic world each type exists only once. Hence the whole concept
>>>of indexicals loses its functionality. Uniqueness of types leads
>>>naturally to the "merging universes": if two observers together with the
>>>world that they observe (within a light cone for example) are identical
>>>then these two observers are indistinguishable from each other and are
>>>actually one and the same.
>>>
>>>I have argued (off list) about my platonic outlook versus the more
>>>established (objective reality) Aristotelian viewpoint and I was told
>>>that I am attempting to undo more than 2000 years of philosophy going
>>>back to Plato. Dealing with types only presents formidable logical
>>>difficulties: how can types exist without tokens? I find it extremely
>>>difficult to "prove" the absence of an objective reality at the
>>>fundamental level. Similarly, about a century ago people were asking how
>>>can light travel without Ether. How can one "prove" that Ether does 

Re: Maudlin's Demon (Argument)

2007-10-08 Thread George Levy
Ho Bruno

Sorry, I have been unclear with myself and with you. I have been lumping 
together the assumption of an "objective physical world" and an 
"objective platonic world". So you are right, I do reject the objective 
physical world, but why stop there? Is there a need for an objective 
platonic world? Would it be possible to go one more step - the last step 
hopefully - and show that the world that we perceive is solely tied to 
our own consciousness? So I am more extreme than you thought. I believe 
that the only necessary assumption is the subjective world. Just like 
Descartes said: Cogito...

I think that the world and consciousness co-emerge together, and the 
rules governing one are tied to the rules governing the other. In a 
sense Church's thesis is tied to the Anthropic principle.  Subjective 
reality also ties in nicely with relativity and with the relative 
formulation of QT.

This being said, I am not denying physical reality or objective reality. 
However these may be derivable from purely subjective reality. Our 
experience of a common physical reality and a common objective reality 
requires the existence of a common physical frame of reference and a common 
platonic frame of reference respectively.  A common platonic frame of 
reference implies that there are other platonic frames of 
reference. This is unthinkable... literally.  Maybe I have painted 
myself into a corner... Yet maybe not... No one in this Universe can say...

George


Bruno Marchal wrote:

>Hi George,
>
>I think that we agree on the main line. Note that I have never 
>pretended that the conjunction of comp and weak materialism (the 
>doctrine which asserts the existence of primary matter) gives a 
>contradiction. What the filmed-graph and/or Maudlin shows is that comp 
>makes materialism empty of any explicative power, so that your "ether" 
>image is quite appropriate. Primary matter makes, through comp, the 
>observation of matter (physics) and of course qualia devoid of any 
>explanatory power, even about just the apparent presence of physical 
>laws.
>I do think nevertheless that you could be a little quick when asserting 
>that the mind-body problem is solved at the outset when we abandon the 
>postulate of an objective (I guess you mean physical) world. I hope you 
>believe in some objective world, be it number theoretical or 
>computer science theoretical, etc.
>Your point "3)" (see below) is quite relevant, sure.
>
>Bruno
>
>
>Le 08-oct.-07, à 05:10, George Levy a écrit :
>
>  
>
>>Bruno Marchal wrote:
>>
>>
>>
>>>I think that Maudlin refers to the conjunction of the comp hyp and
>>>supervenience, where consciousness is supposed to be linked (most of
>>>the time in a sort of "real-time" way) to the *computational activity*
>>>of the brain, and not to the history of any of the states occurring in
>>>that computation.
>>>
>>>If you decide to attach consciousness to the whole physical history,
>>>then you can perhaps keep comp by making the substitution level very
>>>low, but once the level is chosen, I am not sure how you will make it
>>>possible for the machine to distinguish a purely arithmetical version
>>>of that history (in the arithmetical "plenitude" (your wording)) from
>>>a "genuinely physical one" (and what would that mean?). Hmmm...
>>>perhaps I am quick here ...
>>>
>>>Maybe I also miss your point. This is vastly more complex than the
>>>first seven steps of UDA, sure. I have to think how to make this
>>>transparently clear or ... false.
>>>  
>>>
>>As you know I believe that the physical world can be derived from
>>consciousness operating on a platonic "arithmetic plenitude."
>>Consequently, tokens describing objective instances in a physical world
>>cease to be fundamental. Instead, platonic types become fundamental. In
>>the platonic world each type exists only once. Hence the whole concept
>>of indexicals loses its functionality. Uniqueness of types leads
>>naturally to the "merging universes": if two observers together with the
>>world that they observe (within a light cone for example) are identical
>>then these two observers are indistinguishable from each other and are
>>actually one and the same.
>>
>>I have argued (off list) about my platonic outlook versus the more
>>established (objective reality) Aristotelian viewpoint and I was told
>>that I am attempting to undo more than 2000 years of philosophy going
>>back to Plato. Dealing with types only presents formidable logical
>>difficulties: how can types exist without tokens? I find it extremely
>>difficult to "prove" the absence of an objective reality at the
>>fundamental level. Similarly, about a century ago people were asking how
>>can light travel without Ether. How can one "prove" that Ether does not
>>exist? Of course one can't, but one can show that Ether is not necessary
>>to explain wave propagation. Similarly, I think that the best one can
>>achieve is to show that the objective world is not necessary f

Re: Maudlin's Demon (Argument)

2007-10-08 Thread Bruno Marchal

Hi George,

I think that we agree on the main line. Note that I have never 
pretended that the conjunction of comp and weak materialism (the 
doctrine which asserts the existence of primary matter) gives a 
contradiction. What the filmed-graph and/or Maudlin shows is that comp 
makes materialism empty of any explicative power, so that your "ether" 
image is quite appropriate. Primary matter makes, through comp, the 
observation of matter (physics) and of course qualia devoid of any 
explanatory power, even about just the apparent presence of physical laws.
I do think nevertheless that you could be a little quick when asserting 
that the mind-body problem is solved at the outset when we abandon the 
postulate of an objective (I guess you mean physical) world. I hope you 
believe in some objective world, be it number theoretical or 
computer science theoretical, etc.
Your point "3)" (see below) is quite relevant, sure.

Bruno


Le 08-oct.-07, à 05:10, George Levy a écrit :

>
> Bruno Marchal wrote:
>
>> I think that Maudlin refers to the conjunction of the comp hyp and
>> supervenience, where consciousness is supposed to be linked (most of
>> the time in a sort of "real-time" way) to the *computational activity*
>> of the brain, and not to the history of any of the states occurring in
>> that computation.
>>
>> If you decide to attach consciousness to the whole physical history,
>> then you can perhaps keep comp by making the substitution level very
>> low, but once the level is chosen, I am not sure how you will make it
>> possible for the machine to distinguish a purely arithmetical version
>> of that history (in the arithmetical "plenitude" (your wording)) from
>> a "genuinely physical one" (and what would that mean?). Hmmm...
>> perhaps I am quick here ...
>>
>> Maybe I also miss your point. This is vastly more complex than the
>> first seven steps of UDA, sure. I have to think how to make this
>> transparently clear or ... false.
>
> As you know I believe that the physical world can be derived from
> consciousness operating on a platonic "arithmetic plenitude."
> Consequently, tokens describing objective instances in a physical world
> cease to be fundamental. Instead, platonic types become fundamental. In
> the platonic world each type exists only once. Hence the whole concept
> of indexicals loses its functionality. Uniqueness of types leads
> naturally to the "merging universes": if two observers together with the
> world that they observe (within a light cone for example) are identical
> then these two observers are indistinguishable from each other and are
> actually one and the same.
>
> I have argued (off list) about my platonic outlook versus the more
> established (objective reality) Aristotelian viewpoint and I was told
> that I am attempting to undo more than 2000 years of philosophy going
> back to Plato. Dealing with types only presents formidable logical
> difficulties: how can types exist without tokens? I find it extremely
> difficult to "prove" the absence of an objective reality at the
> fundamental level. Similarly, about a century ago people were asking how
> can light travel without Ether. How can one "prove" that Ether does not
> exist? Of course one can't, but one can show that Ether is not necessary
> to explain wave propagation. Similarly, I think that the best one can
> achieve is to show that the objective world is not necessary for
> consciousness to exist and to perceive or observe a world.
>
> However, some points can be made: getting rid of the objective world
> postulate has the following advantages:
>
> 1) The resulting theory (or model) is simpler and more universal (Occam's
> Razor)
> 2) The mind-body problem is eliminated at the outset.
> 3) Physics has been evolving toward greater and greater emphasis on the
> observer. So why not go all the way and see what happens?
>
> I don't find Maudlin's argument convincing. Recording the output of a
> computer and replaying the recording spreads out the processing in time
> and can be used to link various processes across time, but it does not
> prove that consciousness is independent of a physical substrate.
> Rearranging a tape interferes with the thought experiment and should not
> be allowed if we are going to play fair. By the way, I find the phrases
> "supervenience" and "physical supervenience" confusing. At first glance
> I am not sure if physical supervenience means the physical world
> supervening on the mental world or vice versa. I would prefer to use the
> active voice and say "the physical world supervening on the mental
> world," or even use the expression "the physical world acting as a
> substrate for consciousness".
>
> >
>
http://iridia.ulb.ac.be/~marchal/

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---

Re: Maudlin's Demon (Argument)

2007-10-07 Thread George Levy

Bruno Marchal wrote:

> I think that Maudlin refers to the conjunction of the comp hyp and 
> supervenience, where consciousness is supposed to be linked (most of 
> the time in a sort of "real-time" way) to the *computational activity* 
> of the brain, and not to the history of any of the states occurring in 
> that computation.
>
> If you decide to attach consciousness to the whole physical history, 
> then you can perhaps keep comp by making the substitution level very 
> low, but once the level is chosen, I am not sure how you will make it 
> possible for the machine to distinguish a purely arithmetical version 
> of that history (in the arithmetical "plenitude" (your wording)) from 
> a "genuinely physical one" (and what would that mean?). Hmmm... 
> perhaps I am quick here ...
>
> Maybe I also miss your point. This is vastly more complex than the 
> first seven steps of UDA, sure. I have to think how to make this 
> transparently clear or ... false.

As you know I believe that the physical world can be derived from 
consciousness operating on a platonic "arithmetic plenitude." 
Consequently, tokens describing objective instances in a physical world 
cease to be fundamental. Instead, platonic types become fundamental. In 
the platonic world each type exists only once. Hence the whole concept 
of indexicals loses its functionality. Uniqueness of types leads 
naturally to the "merging universes": if two observers together with the 
world that they observe (within a light cone for example) are identical 
then these two observers are indistinguishable from each other and are 
actually one and the same.

I have argued (off list) about my platonic outlook versus the more 
established (objective reality) Aristotelian viewpoint and I was told 
that I am attempting to undo more than 2000 years of philosophy going 
back to Plato. Dealing with types only presents formidable logical 
difficulties: how can types exist without tokens? I find it extremely 
difficult to "prove" the absence of an objective reality at the 
fundamental level. Similarly, about a century ago people were asking how 
can light travel without Ether. How can one "prove" that Ether does not 
exist? Of course one can't, but one can show that Ether is not necessary 
to explain wave propagation. Similarly, I think that the best one can 
achieve is to show that the objective world is not necessary for 
consciousness to exist and to perceive or observe a world.

However, some points can be made: getting rid of the objective world 
postulate has the following advantages:

1) The resulting theory (or model) is simpler and more universal (Occam's 
Razor)
2) The mind-body problem is eliminated at the outset.
3) Physics has been evolving toward greater and greater emphasis on the 
observer. So why not go all the way and see what happens?

I don't find Maudlin's argument convincing. Recording the output of a 
computer and replaying the recording spreads out the processing in time 
and can be used to link various processes across time, but it does not 
prove that consciousness is independent of a physical substrate. 
Rearranging a tape interferes with the thought experiment and should not 
be allowed if we are going to play fair. By the way, I find the phrases 
"supervenience" and "physical supervenience" confusing. At first glance 
I am not sure if physical supervenience means the physical world 
supervening on the mental world or vice versa. I would prefer to use the 
active voice and say "the physical world supervening on the mental 
world," or even use the expression "the physical world acting as a 
substrate for consciousness".
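The contrast George draws between a machine that computes and one that merely replays a recording can be sketched in a few lines (a toy illustration with invented names, not anything from Maudlin's paper):

```python
# A machine that computes vs. one that replays a recording of that
# computation. Only the first responds to counterfactual inputs.

def live_machine(inputs):
    """Compute: each output depends on the actual input received."""
    state, trace = 0, []
    for x in inputs:
        state = (state + x) % 16   # genuine state transition
        trace.append(state)
    return trace

def replay_machine(recording, inputs):
    """Replay: emit the stored trace and ignore the inputs entirely."""
    return list(recording)

recording = live_machine([3, 5, 7])

# Indistinguishable on the original run...
assert replay_machine(recording, [3, 5, 7]) == live_machine([3, 5, 7])

# ...but only the live machine responds to a changed input.
assert live_machine([4, 5, 7]) != replay_machine(recording, [4, 5, 7])
```

Whether such counterfactual sensitivity is what matters for consciousness is exactly what the thread is disputing; the sketch only fixes the vocabulary.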




Re: Maudlin's Demon (Argument)

2007-10-04 Thread Bruno Marchal
Hi George,



Le 03-oct.-07, à 01:52, George Levy a écrit :

>  Hi Bruno,
>  Yes I am still on the list, barely trying to keep up, but I have been 
> very busy. Actually the ball was in my court and I was supposed to 
> answer your last post to me from about a year ago!!! Generally I agree 
> with you on many things but here I am just playing the devil's 
> advocate. The Maudlin experiment reminds me of an attempt to prove the 
> falsity of the second law of thermodynamics using Newton's demon. As 
> you probably know, this attempt fails because the thermodynamic 
> effect on the demon is neglected when in fact it should not be. The 
> Newton Demon experiment is not thermodynamically closed. If you 
> include the demon in a closed system, then the second law is correct.
>  Similarly, Maudlin's experiment is not informationally closed because 
> Maudlin has interjected himself into his own experiment! The 
> "accidentally" correctly operating machines need to have their tape 
> rearranged to work correctly and Maudlin is the agent doing the 
> rearranging.
>
>  So essentially Maudlin's argument is not valid as an attack on 
> physical supervenience.


I am not sure. "Physical supervenience" is well defined (actually this 
is my terming; Maudlin just says "supervenience"). But here you are 
changing the definition of supervenience, and it seems to me you have 
to abandon comp to do that. If comp and supervenience are correct, 
the later machine (OLYMPIA + the KLARAs) should be conscious, with or 
without Maudlin's interjection.
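The OLYMPIA-plus-KLARAs setup can be caricatured in code: Olympia replays a pre-recorded run, and a Klara stands ready to compute only if a counterfactual input arrives. (A hypothetical sketch of the logical shape only; Maudlin's actual construction is mechanical, and every name and rule below is invented.)

```python
# Olympia replays a tape of one fixed run; a Klara computes for real
# only when the input deviates from that run (invented toy model).

def target_computation(inputs):
    """The computation Olympia was built to mirror on one fixed run."""
    out, acc = [], 0
    for x in inputs:
        acc ^= x                   # toy transition rule
        out.append(acc)
    return out

FIXED_RUN = [1, 0, 1, 1]
TAPE = target_computation(FIXED_RUN)   # the pre-recorded trace

def olympia_with_klaras(inputs):
    """Replay TAPE unless a counterfactual input wakes a Klara."""
    for i, x in enumerate(inputs):
        if x != FIXED_RUN[i]:
            return target_computation(inputs), "klara"
    return list(TAPE), "olympia"   # pure replay: no computation occurred

# On the fixed run the outputs are correct with zero computational work.
assert olympia_with_klaras(FIXED_RUN) == (TAPE, "olympia")
```

The point at issue in the thread: on the fixed run the counterfactuals are handled (the Klaras are present) yet nothing that obviously deserves the name "computational activity" takes place.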

>

>> Yes, you are right from a logical point of view, but only by assuming
>> some form of non-computationalism.
>> With comp + physical supervenience, you have to attach a consciousness
>> to the active boolean graph, and then, by physical supervenience, to
>> the later process, which no longer computes. (And then Maudlin shows
>> that you can change the second process so that it computes again, but
>> without any physical activity of the kind relevant to say that you
>> implement a computation.) So, physical supervenience is made wrong.
>>
>>
>
>  Yes but Maudlin cheated by interjecting himself into his experiment. 
> So this argument does not count.


I think that Maudlin refers to the conjunction of the comp hyp and 
supervenience, where consciousness is supposed to be linked (most of 
the time in a sort of "real-time" way) to the *computational activity* 
of the brain, and not to the history of any of the states occurring in 
that computation.

If you decide to attach consciousness to the whole physical history, 
then you can perhaps keep comp by making the substitution level very 
low, but once the level is chosen, I am not sure how you will make it 
possible for the machine to distinguish a purely arithmetical version 
of that history (in the arithmetical "plenitude" (your wording)) from a 
"genuinely physical one" (and what would that mean?).  Hmmm... perhaps 
I am quick here ...

Maybe I also miss your point. This is vastly more complex than the 
first seven steps of UDA, sure. I have to think how to make this 
transparently clear or ... false.
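The idea of a "purely arithmetical version" of a machine's history can be illustrated with a toy example (invented here, not part of UDA): the same trace is obtained by actually running a transition rule, or by reading it off a closed arithmetical definition.

```python
# One history, two descriptions: a mechanical run of a transition rule
# and a purely arithmetical definition of the same sequence.

def mechanical_history(steps):
    """Produce the history by running the transition state -> 3*state mod 7."""
    state, hist = 1, []
    for _ in range(steps):
        state = (3 * state) % 7
        hist.append(state)
    return hist

def arithmetical_history(steps):
    """The same history given directly as 3**n mod 7, no machine needed."""
    return [pow(3, n, 7) for n in range(1, steps + 1)]

# The trace itself cannot tell the two descriptions apart.
assert mechanical_history(5) == arithmetical_history(5)
```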

I will also be more and more busy over the next two months, so it may 
take me some time to comment on posts.

Best,

Bruno



http://iridia.ulb.ac.be/~marchal/




Re: Maudlin's Demon (Argument)

2007-10-03 Thread George Levy
Oops: replace Newton's demon by Maxwell's demon.
George

George Levy wrote:

> Hi Bruno,
> Yes I am still on the list, barely trying to keep up, but I have been 
> very busy. Actually the ball was in my court and I was supposed to 
> answer your last post to me from about a year ago!!! Generally I agree 
> with you on many things but here I am just playing the devil's 
> advocate. The Maudlin experiment reminds me of an attempt to prove the 
> falsity of the second law of thermodynamics using Newton's demon. As 
> you probably know, this attempt fails because the thermodynamic 
> effect on the demon is neglected when in fact it should not be. The 
> Newton Demon experiment is not thermodynamically closed. If you 
> include the demon in a closed system, then the second law is correct.
> Similarly, Maudlin's experiment is not informationally closed because 
> Maudlin has interjected himself into his own experiment! The 
> "accidentally" correctly operating machines need to have their tape 
> rearranged to work correctly and Maudlin is the agent doing the 
> rearranging.
>
> So essentially Maudlin's argument is not valid as an attack on 
> physical supervenience. As you know, I am at the extreme end of the 
> spectrum with regard to the physical world supervening on consciousness 
> (mind over matter instead of matter over mind), so I would very much 
> like to see an argument that could prove it, but in my opinion 
> Maudlin's does not cut it.  More comments below.
>
> Bruno Marchal wrote:
>
>>Hi George,
>>
>>Are you still there on the list?
>>I am really sorry to (re)discover your post just now, with a label 
>>saying that I have to answer it, but apparently I didn't. So here is 
>>the answer, with a delay of about one year :(
>>
>>
>>
>>Le 08-oct.-06, à 08:00, George Levy wrote :
>>
>>
>>  
>>
>>>Finally I read your filmed graph argument which I have stored in my
>>>computer. (The original at the Iridia web site
>>>http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume3CC/3%20%202%20.pdf
>>>is not accessible anymore. I am not sure why.)
>>>
>>>
>>
>>
>>Apparently it works now. You have to scroll on the pdf document to see 
>>the text.
>>
>>
>>
>>  
>>
>>>In page TROIS -61 you describe an experience of consciousness which is
>>>comprised partially of a later physical process and partially of the
>>>recording of an earlier physical process.
>>>
>>>
>>
>>
>>Right.
>>
>>
>>
>>  
>>
>>>It is possible to resolve the paradox simply by saying that
>>>consciousness involves two partial processes ...
>>>
>>>
>>
>>Why? With comp, consciousness can be associated with the active boolean 
>>graph, the one which will be recorded. No need of the second one.
>>
>>  
>>
> Yes, but only in the eyes of a materialist. I have restored the 
> possibility that consciousness can supervene on the physical. I have 
> exposed Maudlin's trickery. I agree that consciousness can be 
> associated with a boolean graph and that there is no need for a 
> physical substrate. However, Maudlin does not prove this case because 
> he got involved in his own experiment.
>
>>>... each occupying two
>>>different time intervals, the time intervals being connected by a
>>>recording, such that the earlier partial process is combined with the
>>>later partial process, the recording acting as a connection device.
>>>
>>>
>>
>>
>>But is there any sense in which consciousness can supervene on the 
>>later partial process? All the trouble is there, because the later 
>>process has the same physical process-features as the active brain, 
>>although by construction there is no sense in attributing to it any 
>>computational process (like a movie).
>>
>>
>>
>>  
>>
>>>I am not saying that consciousness supervenes on the physical substrate.
>>>
>>>
>>
>>
>>ok.
>>
>>
>>
>>  
>>
>>>All I am saying is that the example does not prove that consciousness
>>>does not supervene on the physical.
>>>
>>>
>>
>>
>>Yes, you are right from a logical point of view, but only by assuming 
>>some form of non-computationalism.
>>With comp + physical supervenience, you have to attach a consciousness 
>>to the active boolean graph, and then, by physical supervenience, to 
>>the later process, which no longer computes. (And then Maudlin shows 
>>that you can change the second process so that it computes again, but 
>>without any physical activity of the kind relevant to say that you 
>>implement a computation.) So, physical supervenience is made wrong.
>>
>>  
>>
>
> Yes but Maudlin cheated by interjecting himself into his experiment. 
> So this argument does not count.
>
>>>The example is just an instance of
>>>consciousness operating across two different time intervals by means of a
>>>physical substrate and a physical means (recording) of connecting these
>>>two time intervals.
>>>
>>>
>>
>>
>>The problem is that with comp, consciousness has to be associated with 
>>the first process, and by physical supervenience, it has to be attached 
>>also to the second process.

Re: Maudlin's Demon (Argument)

2007-10-02 Thread George Levy
Hi Bruno,
Yes I am still on the list, barely trying to keep up, but I have been 
very busy. Actually the ball was in my court and I was supposed to 
answer your last post to me from about a year ago!!! Generally I agree 
with you on many things but here I am just playing the devil's advocate. 
The Maudlin experiment reminds me of an attempt to prove the falsity of 
the second law of thermodynamics using Newton's demon. As you probably 
know, this attempt fails because the thermodynamic effect on the demon 
is neglected when in fact it should not be. The Newton Demon experiment 
is not thermodynamically closed. If you include the demon in a closed 
system, then the second law is correct.
Similarly, Maudlin's experiment is not informationally closed because 
Maudlin has interjected himself into his own experiment! The 
"accidentally" correctly operating machines need to have their tape 
rearranged to work correctly and Maudlin is the agent doing the rearranging.

So essentially Maudlin's argument is not valid as an attack on physical 
supervenience. As you know, I am at the extreme end of the spectrum with 
regard to the physical world supervening on consciousness (mind over 
matter instead of matter over mind), so I would very much like to see an 
argument that could prove it, but in my opinion Maudlin's does not cut 
it.  More comments below.

Bruno Marchal wrote:

>Hi George,
>
>Are you still there on the list?
>I am really sorry to (re)discover your post just now, with a label 
>saying that I have to answer it, but apparently I didn't. So here is 
>the answer, with a delay of about one year :(
>
>
>
>Le 08-oct.-06, à 08:00, George Levy wrote :
>
>
>  
>
>>Finally I read your filmed graph argument which I have stored in my
>>computer. (The original at the Iridia web site
>>http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume3CC/3%20%202%20.pdf
>>is not accessible anymore. I am not sure why.)
>>
>>
>
>
>Apparently it works now. You have to scroll on the pdf document to see 
>the text.
>
>
>
>  
>
>>In page TROIS -61 you describe an experience of consciousness which is
>>comprised partially of a later physical process and partially of the
>>recording of an earlier physical process.
>>
>>
>
>
>Right.
>
>
>
>  
>
>>It is possible to resolve the paradox simply by saying that
>>consciousness involves two partial processes ...
>>
>>
>
>Why? With comp, consciousness can be associated with the active boolean 
>graph, the one which will be recorded. No need of the second one.
>
>  
>
Yes, but only in the eyes of a materialist. I have restored the 
possibility that consciousness can supervene on the physical. I have 
exposed Maudlin's trickery. I agree that consciousness can be associated 
with a boolean graph and that there is no need for a physical substrate. 
However, Maudlin does not prove this case because he got involved in his 
own experiment.

>>... each occupying two
>>different time intervals, the time intervals being connected by a
>>recording, such that the earlier partial process is combined with the
>>later partial process, the recording acting as a connection device.
>>
>>
>
>
>But is there any sense in which consciousness can supervene on the 
>later partial process? All the trouble is there, because the later 
>process has the same physical process-features as the active brain, 
>although by construction there is no sense in attributing to it any 
>computational process (like a movie).
>
>
>
>  
>
>>I am not saying that consciousness supervenes on the physical substrate.
>>
>>
>
>
>ok.
>
>
>
>  
>
>>All I am saying is that the example does not prove that consciousness
>>does not supervene on the physical.
>>
>>
>
>
>Yes, you are right from a logical point of view, but only by assuming 
>some form of non-computationalism.
>With comp + physical supervenience, you have to attach a consciousness 
>to the active boolean graph, and then, by physical supervenience, to 
>the later process, which no longer computes. (And then Maudlin shows 
>that you can change the second process so that it computes again, but 
>without any physical activity of the kind relevant to say that you 
>implement a computation.) So, physical supervenience is made wrong.
>
>  
>

Yes, but Maudlin cheated by interjecting himself into his experiment, so 
this argument does not count.

>>The example is just an instance of
>>consciousness operating across two different time intervals by means of a
>>physical substrate and a physical means (recording) of connecting these
>>two time intervals.
>>
>>
>
>
>The problem is that with comp, consciousness has to be associated with 
>the first process and, by physical supervenience, it has to be attached 
>also to the second process. But then you can force the second process 
>to do the correct computation (meaning that it handles the 
>counterfactuals) without any genuine physical activity (reread Maudlin 
>perhaps, or its translation in terms of the filmed graph, as in chapter 
>trois of "Conscience et Mécanisme").

Re: Maudlin's Demon (Argument)

2007-10-02 Thread Bruno Marchal

Hi George,

Are you still there on the list?
I am really sorry to (re)discover your post just now, with a label 
saying that I have to answer it, but apparently I didn't. So here is 
the answer, with a delay of about one year :(



On 08 Oct 2006, at 08:00, George Levy wrote:


> Finally I read your filmed graph argument which I have stored in my
> computer. (The original at the Iridia web site
> http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume3CC/3%20%202%20.pdf
> is not accessible anymore. I am not sure why.)


Apparently it works now. You have to scroll through the pdf document to see 
the text.



>
> In page TROIS -61 you describe an experience of consciousness which is
> comprised partially of a later physical process and partially of the
> recording of an earlier physical process.


Right.



>
> It is possible to resolve the paradox simply by saying that
> consciousness involves two partial processes ...

Why? With comp, consciousness can be associated with the active boolean 
graph, the one which will be recorded. No need for the second one.


> ... each occupying two
> different time intervals, the time intervals being connected by a
> recording, such that the earlier partial process is combined with the
> later partial process, the recording acting as a connection device.


But is there any sense in which consciousness can supervene on the 
later partial process? All the trouble lies there, because the later 
process has the same physical process-features as the active brain, 
although by construction there is no sense in attributing any 
computational process to it (like a movie).



>
> I am not saying that consciousness supervenes on the physical substrate.


ok.



> All I am saying is that the example does not prove that consciousness
> does not supervene on the physical.


Yes, you are right from a logical point of view, but only by assuming 
some form of non-computationalism.
With comp + physical supervenience, you have to attach consciousness 
to the active boolean graph, and then, by physical supervenience, to 
the later process, which no longer computes. (And then Maudlin shows 
that you can change the second process so that it computes again, but 
without any physical activity of the kind relevant to saying that you 
implement a computation.) So physical supervenience is shown to be wrong.




> The example is just an instance of
> consciousness operating across two different time intervals by means of a
> physical substrate and a physical means (recording) of connecting these
> two time intervals.


The problem is that with comp, consciousness has to be associated with 
the first process and, by physical supervenience, it has to be attached 
also to the second process. But then you can force the second process 
to do the correct computation (meaning that it handles the 
counterfactuals) without any genuine physical activity (reread Maudlin 
perhaps, or its translation in terms of the filmed graph, as in chapter 
trois of "Conscience et Mécanisme").

So, postulating comp, we have to associate the many possible "physical 
brains" with a type of computation, and not the inverse.
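The distinction drawn here between a live computation, which handles counterfactuals, and a replayed recording, which does not, can be sketched in a toy model. The "boolean graph" below (a single NAND cell with feedback), its input streams, and the counterfactual probe are all invented for illustration; they are not the constructions actually used by Maudlin or in "Conscience et Mécanisme".

```python
# Illustrative sketch only: a toy "boolean graph" (one NAND cell with
# feedback), run live versus replayed from a recording of an earlier run.

def run_live(inputs, steps=4):
    """Actually compute: state evolves by NAND of input and previous state."""
    state, trace = 1, []
    for t in range(steps):
        state = 1 - (inputs[t] & state)   # NAND(input, state)
        trace.append(state)
    return trace

# Record the "physical activity" of one particular run.
recorded_inputs = [1, 0, 1, 1]
recording = run_live(recorded_inputs)

def run_replay(inputs, steps=4):
    """Replay the recording: same state sequence, inputs are ignored."""
    return recording[:steps]

# On the recorded history, live machine and replay are indistinguishable.
assert run_live(recorded_inputs) == run_replay(recorded_inputs)

# Counterfactual probe: change the input stream. The live graph responds;
# the replay still plays the old movie, so it handles no counterfactuals.
other_inputs = [0, 1, 0, 0]
print(run_live(other_inputs) == run_replay(other_inputs))  # prints: False
```

The replay has the same state trace as the original run, yet it implements no computation in the counterfactual sense: only the live graph would have behaved differently on different inputs.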

Does this help?


Bruno


http://iridia.ulb.ac.be/~marchal/

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: Maudlin's Demon (Argument)

2006-10-11 Thread Bruno Marchal

On 11 Oct 2006, at 05:46, George Levy wrote:





Therefore from the point of view of the second machine, the first machine appears conscious. Note that for the purpose of the argument WE don't have to assume initially that the second machine IS conscious, only that it can detect whether the first machine is conscious. Now, once we establish that the first machine is conscious, we can infer that the second machine is also conscious, simply because it is identical. 


How could any machine detect that another machine is conscious? I am not sure this makes sense, imo. We will have to come back to this in late November.



I'll be traveling to France in early November. We'll leave the detailed discussion for later in November.

I go to Bergen (Norway) tomorrow, where I will present the AUDA (Arithmetical UDA). The list has received the info; see here:
http://www.mail-archive.com/everything-list@eskimo.com/msg11097.html

We will most probably have some opportunity to e-talk before your trip to France,

Bye,

Bruno


http://iridia.ulb.ac.be/~marchal/



Re: Maudlin's Demon (Argument)

2006-10-10 Thread George Levy




Bruno Marchal wrote:

  
On 09 Oct 2006, at 21:54, George Levy wrote:
  
  
   To observe a split consciousness, you need an observer
who
is also split, 
  
  
?
  

This is simple. The time/space/substrate/level of the observer must
match the time/space/substrate/level of what he observes. The Leibniz
analogy is good. In your example, if one observes just the recording,
without observing the earlier creation of the recording and the later
utilization of the recording, then one may rightfully conclude that the
recording is not conscious.


  in sync with the split consciousness, across time, space,
substrate and level (a la Zelazny - Science Fiction writer). In your
example, for an observer to see consciousness in the machine, he must
be willing to exist at the earlier interval, skip over the time delay
carrying the recording and resume his existence at the later interval.
If he observes only a part of the whole thing, say the recording, he
may conclude that the machine is not conscious.

  
  
This is unclear to me. Unless you are just saying, like Leibniz, that
you will not "see" consciousness in a brain by examining it under a
microscope.
  
  
Note also that I could attribute consciousness to a recording, but
this makes sense only if the recording is precise enough that I
could add the "Klaras" or anything which would make it possible to
continue some conversation with the system. And then I do not
attribute consciousness to the physical appearance of the system, but
to some person who manifests him/her/itself through it.
  

Adding Klaras complicates the problem, but the result is the same. Klaras
must be programmed. Programming is like recording: a means of
inserting oneself at programming time for later playback at execution
time. I have already shown that Maudlin was cheating by rearranging his
tape, in effect programming the tape. So I agree with you, if you agree
that programming the tape sequence is just a means of connecting
different pieces of a conscious process, where each piece operates at a
different time.

   In addition, if we are going to split consciousness
maximally in this fashion, the concept of observer becomes important,
something you do not include in your example.
  
Could you elaborate? I don't understand. As a consequence of the
reasoning, the observer (like the knower, the feeler) will indeed be very
important (and will correspond to the hypostases (n-person pov)
in the AUDA). But in the reasoning, well, either we are valid in going
from one step to the next or we are not, and I don't see the relevance of
your point here. I guess I am missing something.
  
  

I do not understand the connection with the hypostases in the AUDA.
However, it is true that the conscious machine is its own observer, no
matter how split its operation is (i.e., time sharing, at different
levels, etc.). However, the examples would be more striking if a
separate observer were introduced. Of course, the separate observer will
have to track the time/space/substrate/level of the machine in order to
observe the machine to be conscious (possibly with a Turing test). Forgive me
for insisting on a separate observer, but I think that a relativity
approach could bear fruit.

You could even get rid of the recording and replace it with random
inputs (happy rays in your paper). 

As you can see, with random inputs the machine is not conscious to an
observer anchored in the physical. The machine just appears to follow a
random series of states.

But the machine can be observed to be conscious if it is observed
precisely at those times when the random inputs match the
counterfactual recording. So the observer needs to "open his eyes"
precisely and only at those times. So the observer needs to be linked in
some way to the machine being conscious. 
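The scenario above - random inputs standing in for the recording, and an observer who "opens his eyes" only when the noise happens to match - can be put in a toy sketch. The recorded traces, the matching rule, and the seed below are all invented for illustration; they are not the constructions actually discussed in the thread.

```python
# Illustrative sketch only: a replayed recording driven by random noise,
# with an observer who samples only the moments where noise and
# recording coincide.
import random

random.seed(1)  # illustrative seed, so the sketch is reproducible

# A recorded run of a toy machine: at step t, under recorded input
# recorded_inputs[t], the machine passed through state recorded_states[t].
recorded_inputs = [1, 0, 1, 1, 0, 1, 0, 0]
recorded_states = [0, 1, 0, 1, 1, 0, 1, 1]   # arbitrary toy trace

# Replace the input tape with random noise; the replay nevertheless
# steps through the recorded states, ignoring the inputs entirely.
random_inputs = [random.randint(0, 1) for _ in recorded_inputs]
replayed_states = recorded_states

# An observer anchored in the physical sees a machine driven by noise,
# i.e. one that "just appears to follow a random series of states".
# An observer synchronized with the machine samples only the moments
# where the random input coincides with the recorded one.
match_times = [t for t, (x, r) in enumerate(zip(random_inputs, recorded_inputs))
               if x == r]

# At those moments, input and state stand in exactly the recorded
# relation, so the machine looks as if it were computing correctly.
for t in match_times:
    assert (random_inputs[t], replayed_states[t]) == \
           (recorded_inputs[t], recorded_states[t])
```

By construction the synchronized observer can never catch the replay deviating from the recording - which is the point being made: whether the machine "appears conscious" depends on which moments the observer is coupled to.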

If the observer is the (self-reflecting) machine itself there is no
problem: the observer will automatically be conscious at those times.

If the observer is not the machine, we need to invoke a mechanism that
will force him to be conscious at those times. It will have to be
almost identical to the machine and will have to accept the same random
data. So, in a sense, the observer will have to be a parallel machine with
some possible variations, as long as these variations are not large
enough to make the observer and the machine exist on different
time/space/substrate/levels. 

Therefore from the point of view of the second machine, the first
machine appears conscious. Note that for the purpose of the argument WE
don't have to assume initially that the second machine IS conscious,
only that it can detect whether the first machine is conscious. Now, once
we establish that the first machine is conscious, we can infer that the
second machine is also conscious, simply because it is identical. 

The example is of course a representation of our own (many)world. 


  
  (**) I am open to discussing this thoroughly, for example in November.
Right now I am a bit over-busy (until the end of October).


Re: Maudlin's Demon (Argument)

2006-10-10 Thread jamikes



George and List:
A very naive question (even more naive than my other posts), since I miss lots of posts that have been exuded on this list (over a decade or so of my incompletely reading it):
Has it ever been formulated (and accepted on this list!) what we mean by the verb "to observe"? What does an 'observer' do in its (not his!!!) 'observer minute'? WHAT (and not 'who') is an observer?
 
I did not 'study' the concept, just developed a feeling.
An observer, in this feeling, is "anything" (including 'anybody') that absorbs some (any?) information - of course according to my vocabulary, as a difference to be acknowledged/absorbed.
An electron is an observer of the potential it senses/follows, and a reader/viewer is an observer of Shakespeare's plays. And anything in between. - This ID implies a target of the attention, not only the 'blank' observation itself.
 
In this sense it is in the ballpark of the consciousness domain - in my identification, of course - which calls for acknowledgment of and response to (my!) information as the *process* of that darn consciousness. (Including memory/experience, decision-making, moving the body or else, both sensorially and ideationally.)
 
Which comes close to an 'observer' being conscious. Not bad even for a machine, which WE ARE ourselves as well (Bruno).
(Well, also 'gods', for that matter.) A thermostat observes and responds consciously - at its own level. Not at the level of I. Kant.
 
John M
 

  - Original Message - 
  From: George Levy
  To: everything-list@googlegroups.com
  Sent: Monday, October 09, 2006 5:55 PM
  Subject: Re: Maudlin's Demon (Argument)
  David Nyman wrote:
  
On Oct 9, 8:54 pm, George Levy <[EMAIL PROTECTED]> wrote:

  
To observe a split consciousness, you need an observer who is also
split, in sync with the split consciousness, across time, space,
substrate and level (a la Zelazny - Science Fiction writer). In your
example, for an observer to see consciousness in the machine, he must be
willing to exist at the earlier interval, skip over the time delay
carrying the recording and resume his existence at the later interval.
If he observes only a part of the whole thing, say the recording, he may
conclude that the machine is not conscious.

Careful, George. Remember the observer *is* the machine. Consequently
he's never in a position to 'conclude that the machine is not
conscious', because in that case, it is precisely *he* that is not
conscious. 

  There is no question that the machine needs to be conscious - this is
  the whole point of the experiment. The observer *may* be the machine,
  but does not have to be (we could conduct a Turing test, for example).
  In any case I think there may be great benefit in decoupling the
  observer function explicitly. The presence of such an observer and its
  location with respect to the machine will force the issue on the first
  and third person perspective. In fact, the consciousness of the observer
  is not really at issue. What I think is at issue is the consciousness of
  the machine as seen from different perspectives. It may even be
  sufficient to make the observer some kind of testing program running on
  a computer. 
  But you're right IMO that the concatenation of these
observer moments represents the observer's conscious 'existence in
time' . The 1-person narrative of this concatenation is what comprises
IMO, the A-series (i.e. the conscious discriminability of observer
moments arising from the consistent 1-person compresence of global and
local aspects of the observer), whereas any 3-person account of this is
necessarily stripped back to a B-series that reduces, ultimately, to
Planck-length 'snapshots' devoid of temporality.

David






Re: Maudlin's Demon (Argument)

2006-10-10 Thread Bruno Marchal

On 09 Oct 2006, at 21:54, George Levy wrote:

To observe a split consciousness, you need an observer who is also split, 

?

in sync with the split consciousness, across time, space, substrate and level (a la Zelazny - Science Fiction writer). In your example, for an observer to see consciousness in the machine, he must be willing to exist at the earlier interval, skip over the time delay carrying the recording and resume his existence at the later interval. If he observes only a part of the whole thing, say the recording, he may conclude that the machine is not conscious.

This is unclear to me. Unless you are just saying, like Leibniz, that you will not "see" consciousness in a brain by examining it under a microscope.

Note also that I could attribute consciousness to a recording, but this makes sense only if the recording is precise enough that I could add the "Klaras" or anything which would make it possible to continue some conversation with the system. And then I do not attribute consciousness to the physical appearance of the system, but to some person who manifests him/her/itself through it.



In addition, if we are going to split consciousness maximally in this fashion, the concept of observer becomes important, something you do not include in your example. 


Could you elaborate? I don't understand. As a consequence of the reasoning, the observer (like the knower, the feeler) will indeed be very important (and will correspond to the hypostases (n-person pov) in the AUDA). But in the reasoning, well, either we are valid in going from one step to the next or we are not, and I don't see the relevance of your point here. I guess I am missing something.



(**) I am open to discussing this thoroughly, for example in November. 
Right now I am a bit over-busy (until the end of October).


 OK. Take your time.


I will, thanks. In the meanwhile, I would appreciate it if you could elaborate on your point.

Bruno


http://iridia.ulb.ac.be/~marchal/



Re: Maudlin's Demon (Argument)

2006-10-09 Thread George Levy




David Nyman wrote:

  

On Oct 9, 8:54 pm, George Levy <[EMAIL PROTECTED]> wrote:

  
  
To observe a split consciousness, you need an observer who is also
split, in sync with the split consciousness, across time, space,
substrate and level (a la Zelazny - Science Fiction writer). In your
example, for an observer to see consciousness in the machine, he must be
willing to exist at the earlier interval, skip over the time delay
carrying the recording and resume his existence at the later interval.
If he observes only a part of the whole thing, say the recording, he may
conclude that the machine is not conscious.

  
  
Careful, George. Remember the observer *is* the machine. Consequently
he's never in a position to 'conclude that the machine is not
conscious', because in that case, it is precisely *he* that is not
conscious. 

There is no question that the machine needs to be conscious - this is
the whole point of the experiment. The observer *may* be the machine,
but does not have to be (we could conduct a Turing test, for example).
In any case I think there may be great benefit in decoupling the
observer function explicitly. The presence of such an observer and its
location with respect to the machine will force the issue on the first
and third person perspective.

In fact the consciousness of the observer is not really at issue. What
I think is at issue is the consciousness of the machine as seen from
different perspectives. It may even be sufficient to make the observer
some kind of testing program running on a computer. 


  But you're right IMO that the concatenation of these
observer moments represents the observer's conscious 'existence in
time' . The 1-person narrative of this concatenation is what comprises
IMO, the A-series (i.e. the conscious discriminability of observer
moments arising from the consistent 1-person compresence of global and
local aspects of the observer), whereas any 3-person account of this is
necessarily stripped back to a B-series that reduces, ultimately, to
Planck-length 'snapshots' devoid of temporality.

David

  
  


  








Re: Maudlin's Demon (Argument)

2006-10-09 Thread David Nyman



On Oct 9, 8:54 pm, George Levy <[EMAIL PROTECTED]> wrote:

> To observe a split consciousness, you need an observer who is also
> split, in sync with the split consciousness, across time, space,
> substrate and level (a la Zelazny - Science Fiction writer). In your
> example, for an observer to see consciousness in the machine, he must be
> willing to exist at the earlier interval, skip over the time delay
> carrying the recording and resume his existence at the later interval.
> If he observes only a part of the whole thing, say the recording, he may
> conclude that the machine is not conscious.

Careful, George. Remember the observer *is* the machine. Consequently
he's never in a position to 'conclude that the machine is not
conscious', because in that case, it is precisely *he* that is not
conscious. But you're right IMO that the concatenation of these
observer moments represents the observer's conscious 'existence in
time' . The 1-person narrative of this concatenation is what comprises
IMO, the A-series (i.e. the conscious discriminability of observer
moments arising from the consistent 1-person compresence of global and
local aspects of the observer), whereas any 3-person account of this is
necessarily stripped back to a B-series that reduces, ultimately, to
Planck-length 'snapshots' devoid of temporality.

David


> Bruno Marchal wrote:
> >On 08 Oct 2006, at 08:00, George Levy wrote:
>
> >>Bruno,
>
> >>Finally I read your filmed graph argument which I have stored in my
> >>computer. (The original at the Iridia web site
> >>http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume3CC/3%20%202%20.pdf
> >>is not accessible anymore. I am not sure why.)
>
> >Thanks for telling. I know people are reconfiguring the main
> >server at IRIDIA; I hope it is only that.
>
> >>In page TROIS -61 you describe an experience of consciousness which is
> >>comprised partially of a later physical process and partially of the
> >>recording of an earlier physical process.
>
> >>It is possible to resolve the paradox simply by saying that
> >>consciousness involves two partial processes each occupying two
> >>different time intervals, the time intervals being connected by a
> >>recording, such that the earlier partial process is combined with the
> >>later partial process, the recording acting as a connection device.
>
> >I mainly agree. But assuming comp it seems to me this is just a
> >question of "acceptable" implementation of consciousness.
> >Once implemented in any "correct" way, the reasoning shows, or is
> >supposed to show, that the inner first person experience cannot be
> >attributed to the physical activity. The "physical" keeps an important
> >role by giving the frame of the possible relative manifestations of
> >consciousness. But already at this stage, consciousness can no longer
> >be attached to it. On the contrary, keeping the comp hyp, the
> >physical must emerge from the coherence of "enough" possible relative
> >manifestations.
>
> >>I am not saying that consciousness supervenes on the physical substrate.
> >>All I am saying is that the example does not prove that consciousness
> >>does not supervene on the physical. The example is just an instance of
> >>consciousness operating across two different time intervals by means of a
> >>physical substrate and a physical means (recording) of connecting these
> >>two time intervals.
>
> >In this case, were you to take this as an argument for the necessity of
> >the physical, you would be changing the notion of physical supervenience
> >a lot. You would be attaching consciousness to some history of physical
> >activity.
>
> I agree with all this. I would be changing the notion of physical
> supervenience such that the physical substrate can be split into time
> intervals connected by recordings. But why stop here? We could create
> an example in which the substrate is maximally split, across time,
> space, substrate and level.
>
> On the other hand, widening the domain of supervenience (time, space,
> substrate and level) does not seem to eliminate the need for the
> physical. Here I am arguing against myself... We may solve the problem
> if we make supervenience recursive, i.e., software supervening on itself
> without needing a physical substrate, just as photons do not need the Ether.
>
> In addition, if we are going to split consciousness maximally in this
> fashion, the concept of observer becomes important, something you do not
> include in your example.
>
> To observe a split consciousness, you need an observer who is also
> split, in sync with the split consciousness, across time, space,
> substrate and level (a la Zelazny - Science Fiction writer). In your
> example, for an observer to see consciousness in the machine, he must be
> willing to exist at the earlier interval, skip over the time delay
> carrying the recording and resume his existence at the later interval.
> If he observes only a part of the whole thing, say the recording, he may
> conclude that the machine is not conscious.

Re: Maudlin's Demon (Argument)

2006-10-09 Thread George Levy




Bruno Marchal wrote:

  
On 08 Oct 2006, at 08:00, George Levy wrote:

  
  
Bruno,

Finally I read your filmed graph argument which I have stored in my
computer. (The original at the Iridia web site
http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume3CC/3%20%202%20.pdf
is not accessible anymore. I am not sure why.)

  
  
Thanks for telling. I know people are reconfiguring the main
server at IRIDIA; I hope it is only that.



  
  
In page TROIS -61 you describe an experience of consciousness which is
comprised partially of a later physical process and partially of the
recording of an earlier physical process.

It is possible to resolve the paradox simply by saying that
consciousness involves two partial processes each occupying two
different time intervals, the time intervals being connected by a
recording, such that the earlier partial process is combined with the
later partial process, the recording acting as a connection device.

  
  
I mainly agree. But assuming comp it seems to me this is just a 
question of "acceptable" implementation of consciousness.
Once implemented in any "correct" way, the reasoning shows, or is 
supposed to show, that the inner first person experience cannot be 
attributed to the physical activity. The "physical" keeps an important 
role by giving the frame of the possible relative manifestations of 
consciousness. But already at this stage, consciousness can no longer 
be attached to it. On the contrary, keeping the comp hyp, the 
physical must emerge from the coherence of "enough" possible relative 
manifestations.



  
  
I am not saying that consciousness supervenes on the physical substrate.
All I am saying is that the example does not prove that consciousness
does not supervene on the physical. The example is just an instance of
consciousness operating across two different time intervals by means of a
physical substrate and a physical means (recording) of connecting these
two time intervals.

  
  
In this case, were you to take this as an argument for the necessity of 
the physical, you would be changing the notion of physical supervenience 
a lot. You would be attaching consciousness to some history of physical 
activity. 

I agree with all this. I would be changing the notion of physical
supervenience such that the physical substrate can be split into time
intervals connected by recordings. But why stop here? We could create
an example in which the substrate is maximally split, across time,
space, substrate and level.

On the other hand, widening the domain of supervenience (time, space,
substrate and level) does not seem to eliminate the need for the
physical. Here I am arguing against myself... We may solve the problem
if we make supervenience recursive, i.e., software supervening on
itself without needing a physical substrate, just as photons do not
need the Ether.

In addition, if we are going to split consciousness maximally in this
fashion, the concept of observer becomes important, something you do
not include in your example. 

To observe a split consciousness, you need an observer who is also
split, in sync with the split consciousness, across time, space,
substrate and level (a la Zelazny - Science Fiction writer). In your
example, for an observer to see consciousness in the machine, he must
be willing to exist at the earlier interval, skip over the time delay
carrying the recording and resume his existence at the later interval.
If he observes only a part of the whole thing, say the recording, he
may conclude that the machine is not conscious.


  But if you keep comp, you will not be able to make genuine use of 
that past physical activity. If you could, it would be like asking the 
doctor for an artificial brain with the guarantee that the hardware of 
that brain has gone through some genuine physical histories, 
although no memory of those histories is needed in the computation made 
by the new (artificial) brain; or, if such memories *are* needed, it would 
mean the doctor has not made the right level choice.
Now, when you say the reasoning does not *prove* that consciousness 
does not supervene on the physical, you are correct. But sup-phys says 
there is no consciousness without the physical, i.e. some physical 
primary ontology is needed for consciousness, and that is what the 
reasoning is supposed to show is absurd: not only do we not need the 
physical (just as thermodynamicians do not need "invisible horses pulling 
cars"), but MOVIE-GRAPH + UDA (*) makes obligatory the appearance of 
the physical emerging from *all* (relative) computations, making the 
concept of primitive matter doubly useless.
OK? ...I realize I could be clearer(**)

(*) Caution: in "Conscience et Mecanisme" the movie-graph argument 
precedes the UD argument (the first seven steps of the 8-step version 
of the current UDA). In my Lille thesis, the movie graph follows the UD 
argument, eliminating the use of the "existence of a universe" 
hypothesis; so there are some nuances between the different versions.

Re: Maudlin's Demon (Argument)

2006-10-09 Thread Bruno Marchal


On 08 Oct 2006, at 08:00, George Levy wrote:

>
> Bruno,
>
> Finally I read your filmed graph argument which I have stored in my
> computer. (The original at the Iridia web site
> http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume3CC/3%20%202%20.pdf
> is not accessible anymore. I am not sure why.)

Thanks for telling. I know people are reconfiguring the main
server at IRIDIA; I hope it is only that.



>
> In page TROIS -61 you describe an experience of consciousness which is
> comprised partially of a later physical process and partially of the
> recording of an earlier physical process.
>
> It is possible to resolve the paradox simply by saying that
> consciousness involves two partial processes each occupying two
> different time intervals, the time intervals being connected by a
> recording, such that the earlier partial process is combined with the
> later partial process, the recording acting as a connection device.

I mainly agree. But assuming comp it seems to me this is just a 
question of "acceptable" implementation of consciousness.
Once implemented in any "correct" way, the reasoning shows, or is 
supposed to show, that the inner first person experience cannot be 
attributed to the physical activity. The "physical" keeps an important 
role by giving the frame of the possible relative manifestations of 
consciousness. But already at this stage, consciousness can no longer 
be attached to it. On the contrary, keeping the comp hyp, the 
physical must emerge from the coherence of "enough" possible relative 
manifestations.



>
> I am not saying that consciousness supervenes on the physical substrate.
> All I am saying is that the example does not prove that consciousness
> does not supervene on the physical. The example is just an instance of
> consciousness operating across two different time intervals by means of a
> physical substrate and a physical means (recording) of connecting these
> two time intervals.

In this case, were you to take this as an argument for the necessity of 
the physical, you would be changing the notion of physical supervenience 
a lot. You would be attaching consciousness to some history of physical 
activity. But if you keep comp, you will not be able to make genuine use 
of that past physical activity. If you could, it would be like asking 
the doctor for an artificial brain with the guarantee that the hardware 
of that brain has gone through some genuine physical histories, although 
no memory of those histories is needed in the computation made by the 
new (artificial) brain; or, if such memories *are* needed, it would mean 
the doctor has not chosen the right level.
Now, when you say the reasoning does not *prove* that consciousness 
does not supervene on the physical, you are correct. But sup-phys says 
there is no consciousness without the physical, i.e. some primary 
physical ontology is needed for consciousness, and that is what the 
reasoning is supposed to show to be absurd: not only do we not need the 
physical (just as thermodynamicians do not need "invisible horses 
pulling cars"), but MOVIE-GRAPH + UDA (*) makes obligatory the 
appearance of the physical emerging from *all* (relative) computations, 
making the concept of primitive matter doubly useless.
OK? ... I realize I could be clearer. (**)

(*) Caution: in "Conscience et Mecanisme" the movie-graph argument 
precedes the UD argument (the seven first step of the 8-steps-version 
of the current UDA). In my Lille thesis, the movie graph follows the UD 
argument for eliminating the use of the "existence of a universe 
hypothesis"; so there are some nuances between the different versions.

(**) I am open to discussing this thoroughly, for example in November. 
Right now I am a bit over-busy (until the end of October).

Bruno

http://iridia.ulb.ac.be/~marchal/


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



Re: Maudlin's Demon (Argument)

2006-10-08 Thread jamikes

Stathis:
you see, we go on with newer assumptions.
You ask:
"...why did we not evolve to be zombie animals?"
Some of us did.

I believe the other animals are very much of the same build as this one
(Homo), with a rather quantitative difference (which, of course, turns into
a qualitative one - V.A. Lenin) due to the interactive connectivity of
different orders of magnitude of neurons.  "Zombie" is one of those
discussion-promoters I pointed to as 'assumed nonsense'.
Is digital vs. biological computing so different? Maybe simpler. I think you
are implying digital as the programmed and input-restricted embryonic level
of our PC etc., while the biological refers to unlimited connectivity to
totality, including (beyond-model) input in toto.
I find such a comparison unfair. (My opinion.)
Every animal has 'experience' and 'memory' (whatever these terms
mean) according to the level of its functional complexity. Even a hydra
learns. "WE" cannot explain with all our sophistication how simpler
organisms work, including the alleged puzzling 'collective consciousness' of
social insects.

John M



- Original Message -
From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
To: 
Sent: Saturday, October 07, 2006 7:50 PM
Subject: RE: Maudlin's Demon (Argument)



John Mikes writes:

> Stathis, your post is 'logical', 'professional', 'smart', - good.
> It shows why we have so many posts on this list and why we get nowhere.
> You handle an assumption (robot) - its qualia, characteristics, make up a
> "thought-situation" and ASK about its annexed details. Now, your style is
> such that one cannot just disregard the irrelevance. So someone (many, me
> included) respond with similar mindtwists and it goes on and on.
> Have you ever ps-analyzed a robot? Professionally, I mean.
> If it is a simple digital computer, it certainly has a memory, the one
fixed
> into chips as this PC I am using. Your and MY memory is quite different, I
> wish somebody could tell me acceptably, HOW???, but it is plastic,
> approximate, mixed with emotional changes, short and in cases false. I
would
> throw out a robot with such memory.

I did put in parentheses "this of course assumes a robot can have
experiences". We can't know that this is so, but it seems a reasonable
assumption to me. If evolution had worked with digital processors rather
than biological processors, do you think it would have been possible for
animals with behaviours similar to those with which we are familiar to
have developed? If so, do you think these animals would not really have
"experiences" despite behaving as if they did? Since evolution can only
work on behaviour, if zombie animals were possible why did we not evolve
to be zombie animals?

Stathis Papaioannou

> John,
>
> I should have been more precise with the terms "copy" and "emulate".
> What I was asking is whether a robot which experiences something while
> it is shovelling coal (this of course assumes that a robot can have
> experiences)
> would experience the same thing if it were fed input to all its sensors
> exactly
> the same as if it were doing its job normally, such that it was not aware
> the
> inputs were in fact a sham. It seems to me that if the answer is "no" the
> robot
> would need to have some mysterious extra-computational knowledge of the
> world, which I find very difficult to conceptualise if we are talking
about
> a standard
> digital computer. It is easier to conceptualise that such
non-computational
> effects
> may be at play in a biological brain, which would then be an argument
> against
> computationalism.
>
> Stathis Papaioannou
>
> > Stathis:
> > let me skip the quoted texts and ask a particular question.
> > - Original Message -
> > From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
> > Sent: Wednesday, October 04, 2006 11:41 PM
> > Subject: RE: Maudlin's Demon (Argument)
> > You wrote:
> > Do you believe it is possible to copy a particular consciousness by
> > emulating it, along
> > with sham inputs (i.e. in virtual reality), on a general purpose
computer?
> > Or do you believe
> > a coal-shovelling robot could only have the coal-shovelling experience
by
> > actually shovelling
> > coal?
> >
> > Stathis Papaioannou
> > -
> > My question is about 'copy' and 'emulate'.
> >
> > Are we considering 'copying' the model and its content (in which case
the
> > coal shoveling robot last sentence applies) or do we include the
> > interconnections unlimited in "experience", beyond the

Re: Maudlin's Demon (Argument)

2006-10-08 Thread jamikes

Bruno:
once I 'learn' about what you imply as a 3rd-pers. theory, my personal
interpretation absorbs it (partly, distorted, or perfectly) as MY 1st-pers.
knowledge. It ENTERS my knowledge and from there on I can formulate my
'theories' (models) about it. Whether it is true or not.
So when I hear you say that the moon is a big lighting ball, I know so;
it is not 'outside' my circle of information anymore.  3rd-pers. info is not
a catalyst, it is an addition. (Right/wrong, accepted/rejected.)
*
Sorry for the NESS after 'nothing-'.  I don't look for a model when there is
nothing to be found. Theory? maybe.
*
I drew a parallel (with the differences pointed out) between religion and
science in an earlier draft. Of course both are belief systems. And I don't
think I am talking about 'theology' when I say religion. "Th-y" is a
reductionist science of a non-science. It is speculation about the
belief. ONE belief. It tries to apply secular thinking to mystical stuff: an
oxymoron. In the logic of the believers (Oxym. No. 2). The Greeks were honest:
their gods cheated, lied, were adulterous, raped and stole etc., just like the
humans they were modeled after. The JudeoChrIslamics retained mostly the
vanity: fishing for praise, the uncritical obedience,
(religio?)chauvinism, wrath and punishment, vengefulness and a lot of
hypocrisy.
Science is subtle: the potentates just withhold publication, tenure and
grants from an opposing point of view - the establishment guards its
integrity against new theories (enlarged models).

John

- Original Message -
From: "Bruno Marchal" <[EMAIL PROTECTED]>
To: 
Sent: Sunday, October 08, 2006 10:15 AM
Subject: Re: Maudlin's Demon (Argument)




Le 07-oct.-06, à 22:24, <[EMAIL PROTECTED]> a écrit :

>
> "my" reductionism is simple: we have a circle of knowledge base
> and view
> the world as limited INTO such model. Well, it is not. The
> reductionist view
> enabled homo to step up into technological prowess but did not support
> an
> extension of understanding BEYOND the (so far) acquired
> knowledge-base. We
> simply cannot FIND OUT what we don't know of the world.
> Sciences are reductionistic, logic can try to step out, but that is
> simple
> sci-fi, use fantasy (imagination?) to bridge ignorance.
> I am stubborn in "I don't know what I don't know".


It is a little ambiguous, but if by "I" you refer to your first-person
view, I could agree with you.
But from the 3-person view, once I bet on a theory I can bet on
what I don't know. Example:

If I just look at the moon without a theory, I cannot know or describe
what I don't know.
As soon as I bet on a theory, like saying that the moon is a big ball,
then I can know a part of what I don't know (like whether there is a
life form on that sphere, or what the shape of the other face of that
sphere is).
From a third-person point of view, a theory (a model, in your terms) is
a catalyst for knowing that we don't know much, and then formulating
problems and then solving some of them, or sometimes changing the theory
(the model).





> Jump outside our knowledge? it is not 'ourselves', it is ALL we know
> and
> outside this is NOTHINGNESS for the mind to consider. Blank.


In which model (theory)?



> This is how most of the religions came about. Provide a belief.



Scientific theories also provide beliefs.
Theology was extracted from science for political purposes (about
1500 years ago), just to give a "name" to what are really economic, if
not simply xenophobic, conflicts. The same happened in the USSR with
genetics. No discipline, not even math, is immune to possible
human misuse.




> PS Er..., to Markpeaty and other readers of Parfit: I think that his
> use of the term "reductionist" is misleading, and due in part to his
> lack of clearcut distinction between the person points of view.


Well said.


Bruno



http://iridia.ulb.ac.be/~marchal/






Re: Maudlin's Demon (Argument)

2006-10-08 Thread Bruno Marchal


Le 07-oct.-06, à 22:24, <[EMAIL PROTECTED]> a écrit :

>
> "my" reductionism is simple: we have a circle of knowledge base 
> and view
> the world as limited INTO such model. Well, it is not. The 
> reductionist view
> enabled homo to step up into technological prowess but did not support 
> an
> extension of understanding BEYOND the (so far) acquired 
> knowledge-base. We
> simply cannot FIND OUT what we don't know of the world.
> Sciences are reductionistic, logic can try to step out, but that is 
> simple
> sci-fi, use fantasy (imagination?) to bridge ignorance.
> I am stubborn in "I don't know what I don't know".


It is a little ambiguous, but if by "I" you refer to your first-person
view, I could agree with you.
But from the 3-person view, once I bet on a theory I can bet on
what I don't know. Example:

If I just look at the moon without a theory, I cannot know or describe
what I don't know.
As soon as I bet on a theory, like saying that the moon is a big ball,
then I can know a part of what I don't know (like whether there is a
life form on that sphere, or what the shape of the other face of that
sphere is).
From a third-person point of view, a theory (a model, in your terms) is
a catalyst for knowing that we don't know much, and then formulating
problems and then solving some of them, or sometimes changing the theory
(the model).





> Jump outside our knowledge? it is not 'ourselves', it is ALL we know 
> and
> outside this is NOTHINGNESS for the mind to consider. Blank.


In which model (theory)?



> This is how most of the religions came about. Provide a belief.



Scientific theories also provide beliefs.
Theology was extracted from science for political purposes (about
1500 years ago), just to give a "name" to what are really economic, if
not simply xenophobic, conflicts. The same happened in the USSR with
genetics. No discipline, not even math, is immune to possible
human misuse.




> PS Er..., to Markpeaty and other readers of Parfit: I think that his
> use of the term "reductionist" is misleading, and due in part to his
> lack of clearcut distinction between the person points of view.


Well said.


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: Maudlin's Demon (Argument)

2006-10-07 Thread George Levy

Bruno,

Finally I read your filmed graph argument which I have stored in my 
computer. (The original at the Iridia web site 
http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume3CC/3%20%202%20.pdf
is not accessible anymore. I am not sure why.)

On page TROIS-61 you describe an experience of consciousness which is 
composed partially of a later physical process and partially of the 
recording of an earlier physical process.

It is possible to resolve the paradox simply by saying that 
consciousness involves two partial processes, each occupying one of two 
different time intervals, the time intervals being connected by a 
recording, such that the earlier partial process is combined with the 
later partial process, the recording acting as a connection device.

I am not saying that consciousness supervenes on the physical substrate. 
All I am saying is that the example does not prove that consciousness 
does not supervene on the physical. The example is just an instance of 
consciousness operating across two different time intervals by means of 
a physical substrate and a physical means (a recording) of connecting 
these two time intervals.

George




RE: Maudlin's Demon (Argument)

2006-10-07 Thread Stathis Papaioannou

Brent Meeker writes:

> > I did put in parentheses "this of course assumes a robot can have 
> > experiences". 
> > We can't know that this is so, but it seems a reasonable assumption to me. 
> > If we 
> > had evolution with digital processors rather than biological processors do 
> > you think 
> > it would have been possible for animals with similar behaviours to those 
> > with which 
> > we are familiar to have developed? If so, do you think these animals would 
> > not 
> > really have "experiences" despite behaving as if they did? Since evolution 
> > can only 
> > work on behaviour, if zombie animals were possible why did we not evolve to 
> > be 
> > zombie animals?
> > 
> > Stathis Papaioannou
> 
> Of course evolution is not some perfectly efficient optimizer.  Julian Jaynes 
> idea 
> the consciousness arises from internalizing speech perception would make 
> consciousness a round-about way of recalling complex practical instructions - 
> one 
> that from a final design standpoint could have been done without the 
> consciousness 
> but from an evolutionary standpoint might have been almost inevitable.  I 
> don't know 
> that I buy Jaynes explanation - but it shows that there might well be 
> consciousness 
> even if zombie animals would have done just as well.

That's strictly true. It's also possible that consciousness is a byproduct 
of biological brains but not of digital computers performing the same 
functions, even to the point of passing the Turing Test. We'll never know.

Stathis Papaioannou



Re: Maudlin's Demon (Argument)

2006-10-07 Thread Brent Meeker

Stathis Papaioannou wrote:
> John Mikes writes:
> 
> 
>>Stathis, your post is 'logical', 'professional', 'smart', - good.
>>It shows why we have so many posts on this list and why we get nowhere.
>>You handle an assumption (robot) - its qualia, characteristics, make up a
>>"thought-situation" and ASK about its annexed details. Now, your style is
>>such that one cannot just disregard the irrelevance. So someone (many, me
>>included) respond with similar mindtwists and it goes on and on.
>>Have you ever ps-analyzed a robot? Professionally, I mean.
>>If it is a simple digital computer, it certainly has a memory, the one fixed
>>into chips as this PC I am using. Your and MY memory is quite different, I
>>wish somebody could tell me acceptably, HOW???, but it is plastic,
>>approximate, mixed with emotional changes, short and in cases false. I would
>>throw out a robot with such memory.
> 
> 
> I did put in parentheses "this of course assumes a robot can have 
> experiences". 
> We can't know that this is so, but it seems a reasonable assumption to me. If 
> we 
> had evolution with digital processors rather than biological processors do 
> you think 
> it would have been possible for animals with similar behaviours to those with 
> which 
> we are familiar to have developed? If so, do you think these animals would 
> not 
> really have "experiences" despite behaving as if they did? Since evolution 
> can only 
> work on behaviour, if zombie animals were possible why did we not evolve to 
> be 
> zombie animals?
> 
> Stathis Papaioannou

Of course evolution is not some perfectly efficient optimizer.  Julian 
Jaynes' idea that consciousness arises from internalizing speech 
perception would make consciousness a round-about way of recalling 
complex practical instructions - one that from a final-design standpoint 
could have been done without the consciousness but from an evolutionary 
standpoint might have been almost inevitable.  I don't know that I buy 
Jaynes' explanation - but it shows that there might well be consciousness 
even if zombie animals would have done just as well.

Brent Meeker




RE: Maudlin's Demon (Argument)

2006-10-07 Thread Stathis Papaioannou

John Mikes writes:

> Stathis, your post is 'logical', 'professional', 'smart', - good.
> It shows why we have so many posts on this list and why we get nowhere.
> You handle an assumption (robot) - its qualia, characteristics, make up a
> "thought-situation" and ASK about its annexed details. Now, your style is
> such that one cannot just disregard the irrelevance. So someone (many, me
> included) respond with similar mindtwists and it goes on and on.
> Have you ever ps-analyzed a robot? Professionally, I mean.
> If it is a simple digital computer, it certainly has a memory, the one fixed
> into chips as this PC I am using. Your and MY memory is quite different, I
> wish somebody could tell me acceptably, HOW???, but it is plastic,
> approximate, mixed with emotional changes, short and in cases false. I would
> throw out a robot with such memory.

I did put in parentheses "this of course assumes a robot can have
experiences". We can't know that this is so, but it seems a reasonable
assumption to me. If evolution had worked with digital processors rather
than biological processors, do you think it would have been possible for
animals with behaviours similar to those with which we are familiar to
have developed? If so, do you think these animals would not really have
"experiences" despite behaving as if they did? Since evolution can only
work on behaviour, if zombie animals were possible why did we not evolve
to be zombie animals?

Stathis Papaioannou
 
> John,
> 
> I should have been more precise with the terms "copy" and "emulate".
> What I was asking is whether a robot which experiences something while
> it is shovelling coal (this of course assumes that a robot can have
> experiences)
> would experience the same thing if it were fed input to all its sensors
> exactly
> the same as if it were doing its job normally, such that it was not aware
> the
> inputs were in fact a sham. It seems to me that if the answer is "no" the
> robot
> would need to have some mysterious extra-computational knowledge of the
> world, which I find very difficult to conceptualise if we are talking about
> a standard
> digital computer. It is easier to conceptualise that such non-computational
> effects
> may be at play in a biological brain, which would then be an argument
> against
> computationalism.
> 
> Stathis Papaioannou
> 
> > Stathis:
> > let me skip the quoted texts and ask a particular question.
> > - Original Message -
> > From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
> > Sent: Wednesday, October 04, 2006 11:41 PM
> > Subject: RE: Maudlin's Demon (Argument)
> > You wrote:
> > Do you believe it is possible to copy a particular consciousness by
> > emulating it, along
> > with sham inputs (i.e. in virtual reality), on a general purpose computer?
> > Or do you believe
> > a coal-shovelling robot could only have the coal-shovelling experience by
> > actually shovelling
> > coal?
> >
> > Stathis Papaioannou
> > -
> > My question is about 'copy' and 'emulate'.
> >
> > Are we considering 'copying' the model and its content (in which case the
> > coal shoveling robot last sentence applies) or do we include the
> > interconnections unlimited in "experience", beyond the particular model we
> > talk about?
> > If we go "all the way" and include all input from the unlimited totality
> > that may 'format' or 'complete' the model-experience, then we re-create
> the
> > 'real thing' and it is not a copy. If we restrict our copying to the
> aspect
> > in question (model) then we copy only that aspect and should not draw
> > conclusions on the total.
> >
> > Can we 'emulate' totality? I don't think so. Can we copy the total,
> > unlimited wholeness? I don't think so.
> > What I feel is a restriction to "think" within a model and draw
> conclusions
> > from it towards beyond it.
> > Which looks to me like a category-mistake.
> >
> > John Mikes
> 
> 
> 
> 
> 
> 
> 
> > 





Re: Maudlin's Demon (Argument)

2006-10-07 Thread jamikes

Please see some remarks interleft between -lines.
John M
- Original Message -
From: "Bruno Marchal" <[EMAIL PROTECTED]>
To: 
Sent: Friday, October 06, 2006 9:43 AM
Subject: Re: Maudlin's Demon (Argument)




Le 05-oct.-06, à 13:55, <[EMAIL PROTECTED]> a écrit :

> Can we 'emulate' totality? I don't think so.


I don't always insist on this, but with just the "Church thesis" part of
comp it can be argued that we can emulate the third-person describable
totality, and indeed this is what the Universal Dovetailer does.

The key point, though a technical one (I was beginning to explain it to
Tom and George), is that such an emulation can be shown to destroy any
reductionist account of that totality and, still better, to make the
first-person totality (George's first-person plenitude, perhaps)
infinitely bigger (even non-computably bigger, even unnameable) than the
3-person totality.
There is a Skolem-Carroll phenomenon: the first-person "inside" view of
the 3-totality is infinitely bigger than the 3-totality, like in
"Wonderland", where a tree can hide a palace ...


JM:
"my" reductionism is simple: we have a circle of knowledge base and view
the world as limited INTO such model. Well, it is not. The reductionist view
enabled homo to step up into technological prowess but did not support an
extension of understanding BEYOND the (so far) acquired knowledge-base. We
simply cannot FIND OUT what we don't know of the world.
Sciences are reductionistic, logic can try to step out, but that is simple
sci-fi, use fantasy (imagination?) to bridge ignorance.
I am stubborn in "I don't know what I don't know".
--


> Can we copy the total,
> unlimited wholeness?

Not really. It is like quantum states: not clonable, but, if known,
preparable in many copies. At this stage it is only an analogy.


> I don't think so.
> What I feel is a restriction to "think" within a model and draw
> conclusions from it towards beyond it.

Mmmh... It is here that logicians have made progress over the last century,
but nobody (except the experts) knows about that progress.


JM:
Those "experts" must know that it is not confirmable even true.
That is why 'they' keep it to themselves.


> Which looks to me like a category-mistake.


It looks like one, but perhaps it isn't. I agree it seems unbelievable,
but somehow we (the machine) can jump outside ourselves ... (with some
risk, though).

-
JM:
Jump outside our knowledge? it is not 'ourselves', it is ALL we know and
outside this is NOTHINGNESS for the mind to consider. Blank.
This is how most of the religions came about. Provide a belief.
-

Bruno

PS Er..., to Markpeaty and other readers of Parfit: I think that his
use of the term "reductionist" is misleading, and due in part to his
lack of clearcut distinction between the person points of view.

-
John
-
http://iridia.ulb.ac.be/~marchal/







Re: Maudlin's Demon (Argument)

2006-10-07 Thread jamikes

Stathis, your post is 'logical', 'professional', 'smart', - good.
It shows why we have so many posts on this list and why we get nowhere.
You handle an assumption (robot) - its qualia, characteristics, make up a
"thought-situation" and ASK about its annexed details. Now, your style is
such that one cannot just disregard the irrelevance. So someone (many, me
included) respond with similar mindtwists and it goes on and on.
Have you ever ps-analyzed a robot? Professionally, I mean.
If it is a simple digital computer, it certainly has a memory, the one fixed
into chips as this PC I am using. Your and MY memory is quite different, I
wish somebody could tell me acceptably, HOW???, but it is plastic,
approximate, mixed with emotional changes, short and in cases false. I would
throw out a robot with such memory.

Best regards

John Mikes
- Original Message -
From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
To: 
Sent: Friday, October 06, 2006 8:09 AM
Subject: RE: Maudlin's Demon (Argument)



John,

I should have been more precise with the terms "copy" and "emulate".
What I was asking is whether a robot which experiences something while
it is shovelling coal (this of course assumes that a robot can have
experiences) would experience the same thing if it were fed input to all
its sensors exactly the same as if it were doing its job normally, such
that it was not aware the inputs were in fact a sham. It seems to me
that if the answer is "no" the robot would need to have some mysterious
extra-computational knowledge of the world, which I find very difficult
to conceptualise if we are talking about a standard digital computer. It
is easier to conceptualise that such non-computational effects may be at
play in a biological brain, which would then be an argument against
computationalism.

Stathis Papaioannou
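[A toy sketch of the point above, for concreteness. The controller, the
state-update rule and the input values are all hypothetical, invented
only for this illustration: a deterministic digital program fed a
recorded copy of its sensor stream goes through exactly the same
computation, step by step, as it did "live".]

```python
# Hypothetical illustration of the "sham inputs" point: a deterministic
# controller cannot distinguish a replayed recording of its sensor
# stream from the live stream, because its state trace is a pure
# function of the inputs it receives.

def controller(sensor_stream):
    """Toy coal-shovelling controller: folds sensor readings into a state trace."""
    state, trace = 0, []
    for reading in sensor_stream:
        state = (state * 31 + reading) % 997   # arbitrary deterministic update
        trace.append(state)
    return trace

live_inputs = [3, 1, 4, 1, 5, 9]     # what the sensors "really" delivered
recording = list(live_inputs)        # a replayed recording of the same stream

# The two runs are computationally indistinguishable from the inside.
assert controller(live_inputs) == controller(recording)
print("identical state traces:", controller(recording))
```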

> Stathis:
> let me skip the quoted texts and ask a particular question.
> - Original Message -
> From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
> Sent: Wednesday, October 04, 2006 11:41 PM
> Subject: RE: Maudlin's Demon (Argument)
> You wrote:
> Do you believe it is possible to copy a particular consciousness by
> emulating it, along
> with sham inputs (i.e. in virtual reality), on a general purpose computer?
> Or do you believe
> a coal-shovelling robot could only have the coal-shovelling experience by
> actually shovelling
> coal?
>
> Stathis Papaioannou
> -
> My question is about 'copy' and 'emulate'.
>
> Are we considering 'copying' the model and its content (in which case the
> coal shoveling robot last sentence applies) or do we include the
> interconnections unlimited in "experience", beyond the particular model we
> talk about?
> If we go "all the way" and include all input from the unlimited totality
> that may 'format' or 'complete' the model-experience, then we re-create
the
> 'real thing' and it is not a copy. If we restrict our copying to the
aspect
> in question (model) then we copy only that aspect and should not draw
> conclusions on the total.
>
> Can we 'emulate' totality? I don't think so. Can we copy the total,
> unlimited wholeness? I don't think so.
> What I feel is a restriction to "think" within a model and draw
conclusions
> from it towards beyond it.
> Which looks to me like a category-mistake.
>
> John Mikes










Re: Maudlin's Demon (Argument)

2006-10-06 Thread Bruno Marchal


On 05-Oct-06, at 13:55, <[EMAIL PROTECTED]> wrote:

> Can we 'emulate' totality? I don't think so.


I don't always insist on that, but with just the "Church thesis" part of 
comp it can be argued that we can emulate the third person describable 
totality, and indeed this is what the Universal Dovetailer does.

The key point, though technical (I was beginning to explain it to Tom and 
George), is that such an emulation can be shown to destroy any 
reductionist account of that totality, and, better still, to make the first 
person totality (George's first person plenitude perhaps) infinitely 
bigger (even non-computably bigger, even unnameable) than the 3-person 
totality.
There is a Skolem-Carroll phenomenon: the first person "inside" view of 
the 3-totality is infinitely bigger than the 3-totality, as in the 
"Wonderland" where a tree can hide a palace ...




> Can we copy the total,
> unlimited wholeness?

Not really. It is like quantum states: not clonable, but if known, 
preparable in many copies. At this stage it is only an analogy.


> I don't think so.
> What I feel is a restriction to "think" within a model and draw 
> conclusions
> from it towards beyond it.

Mmmh... It is here that logicians have made progress over the last century, 
but nobody (except the experts) knows about that progress.



> Which looks to me like a category-mistake.


It looks so, but perhaps it isn't. I agree it seems unbelievable, but 
somehow, we (the machines) can jump outside ourselves ... (with some risk, 
though).


Bruno

PS Er..., to Markpeaty and other readers of Parfit: I think that his 
use of the term "reductionist" is misleading, and due in part to his 
lack of a clear-cut distinction between the person points of view.


http://iridia.ulb.ac.be/~marchal/





RE: Maudlin's Demon (Argument)

2006-10-06 Thread Stathis Papaioannou

John, 

I should have been more precise with the terms "copy" and "emulate". 
What I was asking is whether a robot which experiences something while 
it is shovelling coal (this of course assumes that a robot can have 
experiences) 
would experience the same thing if it were fed input to all its sensors exactly 
the same as if it were doing its job normally, such that it was not aware the 
inputs were in fact a sham. It seems to me that if the answer is "no" the robot 
would need to have some mysterious extra-computational knowledge of the 
world, which I find very difficult to conceptualise if we are talking about a 
standard 
digital computer. It is easier to conceptualise that such non-computational 
effects 
may be at play in a biological brain, which would then be an argument against 
computationalism.

Stathis Papaioannou

> Stathis:
> let me skip the quoted texts and ask a particular question.
> - Original Message -
> From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
> Sent: Wednesday, October 04, 2006 11:41 PM
> Subject: RE: Maudlin's Demon (Argument)
> You wrote:
> Do you believe it is possible to copy a particular consciousness by
> emulating it, along
> with sham inputs (i.e. in virtual reality), on a general purpose computer?
> Or do you believe
> a coal-shovelling robot could only have the coal-shovelling experience by
> actually shovelling
> coal?
> 
> Stathis Papaioannou
> -
> My question is about 'copy' and 'emulate'.
> 
> Are we considering 'copying' the model and its content (in which case the
> coal shoveling robot last sentence applies) or do we include the
> interconnections unlimited in "experience", beyond the particular model we
> talk about?
> If we go "all the way" and include all input from the unlimited totality
> that may 'format' or 'complete' the model-experience, then we re-create the
> 'real thing' and it is not a copy. If we restrict our copying to the aspect
> in question (model) then we copy only that aspect and should not draw
> conclusions on the total.
> 
> Can we 'emulate' totality? I don't think so. Can we copy the total,
> unlimited wholeness? I don't think so.
> What I feel is a restriction to "think" within a model and draw conclusions
> from it towards beyond it.
> Which looks to me like a category-mistake.
> 
> John Mikes




Re: Maudlin's Demon (Argument)

2006-10-05 Thread David Nyman

George Levy wrote:

> The correct conclusion IMHO is that consciousness is independent of
> time, space, substrate and level and in fact can span all of these just
> as Maudlin partially demonstrated - but you still need an implementation
> -- so what is left? Like the Cheshire cat, nothing except the software
> itself: Consistent logical links operating in a bootstrapping reflexive
> emergent manner.

Surely this is the 'correct conclusion' only given that one first
*accepts comp*? We can show that a maximalist comp position (like the
UDA argument) cannot depend on 'computationally independent' (i.e. as
distinct from 'computationally substituted') physical supervention, at
root because such supervention can be shown to be arbitrary. That is:
any computation can be implemented in arbitrarily many physical
implementations that may incidentally be *interpreted* as being
computationally equivalent, without this having the slightest effect on
what actually occurs in the world.
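[Editorial note: David's point - that under physicalism a "computation" is an after-the-fact gloss on physical behaviour - can be made concrete. Given any sequence of distinct physical states, one can always construct an interpretation map under which that sequence "implements" a chosen computation. The physical states and the target computation below are invented purely for illustration.]

```python
def interpretation(physical_trace, computational_trace):
    """Read each physical state as a computational state - a mapping
    chosen entirely after the fact."""
    return dict(zip(physical_trace, computational_trace))

# Any arbitrary physical process with enough distinct states will do:
physical = ["rock-warm", "rock-cool", "rock-damp"]
# ... and here is the computation we *decide* that process implements:
computation = ["s0", "s1", "halt"]

m = interpretation(physical, computation)
decoded = [m[state] for state in physical]
# Under the map m, the rock's history "implements" the three-step program,
# without the slightest effect on what actually occurs in the world.
```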

In other words, given physicalism, comp can only be a metaphor, relying
entirely on physics to do whatever work is entailed in
acting-in-the-world. For supervention to be true (as it may be)
consciousness would have to map to, and co-vary with, *specific*
physical processes, that happen incidentally to be capable of
interpretation as computations. This is simply entailed by what we mean
by physicalism - all complex processes *reduce* in principle to unique
physical events. Conversely, for comp to be true, the 'physical' must
*emerge* from recursively nested computational operations - i.e. the
reverse explanatory direction.

This disjunction is in itself an extremely powerful result with
profound, and as yet unresolved, consequences for AI and the study of
consciousness. But as to which is true (or neither, for that matter) we
can only follow the consequences of our assumptions here - 'proof'
requires empirically falsifiable prediction and experiment.

David

> List members
>
> I scanned Maudlin's paper. Thank you Russell. As I suspected I found a
> few questionable passages:
>
> Page 417, line 14:
> "So the spatial sequence of the troughs need not reflect their
> 'computational sequence'. We may so contrive that any sequence of
> address lie next to each other spatially."
>
> Page 418, line 5:
> "The first step in our construction is to rearrange Klara's tape so
> that address T[0] to T[N] lie spatially in sequence, T[0] next to
> T[1] next to T[2], etc..."
>
> How does Maudlin know how to arrange the order of the tape locations? He
> must run his task Pi in his head or on a calculator.
>
> Maudlin reaches a quasi-religious conclusion when he states:
>
> "Olympia has shown us a least that some other level beside the
> computational must be sought. But until we have found that level and
> until we have explicated the relationship between it and the
> computational structure, the belief that ...of pure computationalism
> will ever lead to the creation of artificial minds or the the
> understanding of natural ones, remains only a pious hope."
>
>
> Let me try to summarize:
>
> Maudlin is wrong in concluding that there must be something
> non-computational necessary for consciousness.
>
> Maudlin himself was the unwitting missing consciousness piece inserted
> in his machine at programming time, i.e., the machine's consciousness
> spanned execution time and programming time. He himself was the
> unwitting missing piece when he designed his tape.
>
> The correct conclusion IMHO is that consciousness is independent of
> time, space, substrate and level and in fact can span all of these just
> as Maudlin partially demonstrated - but you still need an implementation
> -- so what is left? Like the Cheshire cat, nothing except the software
> itself: Consistent logical links operating in a bootstrapping reflexive
> emergent manner.
>
> Bruno is right in applying math/logic to solve the
> consciousness/physical world (Mind/Body) riddle. Physics can be derived
> from machine psychology.
>
> George
>
>
> Russell Standish wrote:
>
> >If I can summarise George's summary as this:
> >
> >In order to generate a recording, one must physically instantiate the
> >conscious computation. Consciousness supervenes on this, presumably.
> >
> >Maudlin says aha - let's take the recording, and add to it an inert
> >machine that handles the counterfactuals. This combined machine is
> >computationally equivalent to the original. But since the new machine
> >is physically equivalent to a recording, how could consciousness
> >supervene on it. If we want to keep supervenience, there must be
> >something noncomputational that means the first machine is conscious,
> >and the second not.
> >
> >Marchal says consciousness supervenes on neither of the physical
> >machines, but on the abstract computation, and there is only one
> >consciousness involved (not two).
> >
> >Of course, this all applies to dreaming machines, or mac

Re: Maudlin's Demon (Argument)

2006-10-05 Thread jamikes

David, thanks.
Hofstadter's G-E-B is a delightful (BIG) book; I regret that I have lost my
(voracious?) reading opportunities, especially the chance to re-read it.
Just next week I will quote GEB at a recital I will perform for our area
music club about the Wohltemperiertes, which "Bach wrote for his sons to
practice their fingers in piano-technique learning."
I also loved his "translation-book" about that French poem of one-word lines.
I cannot recall in which book I read that he was tricked by AI people into
asking esoteric questions of an AI computer - getting incredible answers,
until the next day, when 'they' confessed and showed him the five young guys
in another room who made up the replies for him.
Thanks for the URL

To the statement in question here: "a 'computation' can be
anything I say it is" - I find it true, as long as I feel free to identify
'comp' as I like (need) it. (Same for 'numbers' and 'consciousness'.)

John
- Original Message -
From: "David Nyman" <[EMAIL PROTECTED]>
To: "Everything List" 
Sent: Thursday, October 05, 2006 10:38 AM
Subject: Re: Maudlin's Demon (Argument)


>
> [EMAIL PROTECTED] wrote:
>
> > > >In other words, a 'computation' can be
> > > > anything I say it is (cf. Hofstadter for some particularly egregious
> > > > examples).
> > >
> > David, could you give us 'some' of these, or at least an URL to find
such?
>
> John
>
> I was thinking of various examples in 'Godel, Escher, Bach', and it's
> years since I read it. Here's a URL I just Googled that may be
> relevant:
>
> http://www.geocities.com/ResearchTriangle/6100/geb.html
>
> From memory, Hofstadter describes 'implementations' of computations that
> involve the detailed behaviour of anthills, and worse yet, detailed
> descriptions of 'Einstein's Brain' listed in a book of which you can
> supposedly ask questions and receive answers! Trouble is, Hofstadter is
> such a brilliantly witty and creative writer that I could never be
> completely sure whether he was deliberately torturing your credulity by
> putting these forward as tongue-in-cheek reductios (like Schroedinger
> with his cat apparently) or whether he was actually serious. I'll have
> to re-read the book.
>
> David
>
> > - Original Message -
> > Subject: Re: Maudlin's Demon (Argument)
> >  (Brent's quote):
> > > David Nyman wrote:
> > (I skip the discussion...)
> > >
> > > >In other words, a 'computation' can be
> > > > anything I say it is (cf. Hofstadter for some particularly egregious
> > > > examples).
> > >
> > David, could you give us 'some' of these, or at least an URL to find
such?
> >
> > John M
>





Re: Maudlin's Demon (Argument)

2006-10-05 Thread David Nyman

[EMAIL PROTECTED] wrote:

> > >In other words, a 'computation' can be
> > > anything I say it is (cf. Hofstadter for some particularly egregious
> > > examples).
> >
> David, could you give us 'some' of these, or at least an URL to find such?

John

I was thinking of various examples in 'Godel, Escher, Bach', and it's
years since I read it. Here's a URL I just Googled that may be
relevant:

http://www.geocities.com/ResearchTriangle/6100/geb.html

From memory, Hofstadter describes 'implementations' of computations that
involve the detailed behaviour of anthills, and worse yet, detailed
descriptions of 'Einstein's Brain' listed in a book of which you can
supposedly ask questions and receive answers! Trouble is, Hofstadter is
such a brilliantly witty and creative writer that I could never be
completely sure whether he was deliberately torturing your credulity by
putting these forward as tongue-in-cheek reductios (like Schroedinger
with his cat apparently) or whether he was actually serious. I'll have
to re-read the book.

David

> - Original Message -
> Subject: Re: Maudlin's Demon (Argument)
>  (Brent's quote):
> > David Nyman wrote:
> (I skip the discussion...)
> >
> > >In other words, a 'computation' can be
> > > anything I say it is (cf. Hofstadter for some particularly egregious
> > > examples).
> >
> David, could you give us 'some' of these, or at least an URL to find such?
> 
> John M





Re: Maudlin's Demon (Argument)

2006-10-05 Thread jamikes

Stathis:
let me skip the quoted texts and ask a particular question.
- Original Message -
From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
Sent: Wednesday, October 04, 2006 11:41 PM
Subject: RE: Maudlin's Demon (Argument)
You wrote:
Do you believe it is possible to copy a particular consciousness by
emulating it, along
with sham inputs (i.e. in virtual reality), on a general purpose computer?
Or do you believe
a coal-shovelling robot could only have the coal-shovelling experience by
actually shovelling
coal?

Stathis Papaioannou
-
My question is about 'copy' and 'emulate'.

Are we considering 'copying' the model and its content (in which case the
last sentence, about the coal-shovelling robot, applies), or do we include
the unlimited interconnections in "experience", beyond the particular model
we talk about?
If we go "all the way" and include all input from the unlimited totality
that may 'format' or 'complete' the model-experience, then we re-create the
'real thing' and it is not a copy. If we restrict our copying to the aspect
in question (model) then we copy only that aspect and should not draw
conclusions on the total.

Can we 'emulate' totality? I don't think so. Can we copy the total,
unlimited wholeness? I don't think so.
What I feel is a restriction to "think" within a model and draw conclusions
from it towards beyond it.
Which looks to me like a category-mistake.

John Mikes





Re: Maudlin's Demon (Argument)

2006-10-05 Thread jamikes


- Original Message - 
Subject: Re: Maudlin's Demon (Argument)
 (Brent's quote):
> David Nyman wrote:
(I skip the discussion...)
> 
> >In other words, a 'computation' can be
> > anything I say it is (cf. Hofstadter for some particularly egregious
> > examples).
> 
David, could you give us 'some' of these, or at least an URL to find such?

John M




Re: Maudlin's Demon (Argument)

2006-10-05 Thread David Nyman

Brent Meeker wrote:

> There is another possibility: that consciousness is relative to what it is 
> conscious
> *of* and any computation that implements consciousness must also implement 
> the whole
> world which the consciousness is conscious of.  In that case there may be 
> only one,
> unique physical universe that implements our consciousness.

But this is precisely my point - to sustain supervenience, there must
be a *unique* implementation of the 'computation' that is deemed
responsible for *unique* effects - in which case we are then free to
discard the metaphor of 'computation' and point to the physics as doing
the work. Perhaps there needs to be a distinction on this list (or have
I just missed it?) between:

C1) analyses of consciousness in terms of 'computation',
notwithstanding which any 'effect-in-the-world' is seen as reducing to
the behaviour of some specific physical implementation (i.e. as defined
on a non-computationally-established 'substrate');

C2) 'pure' computational analysis of consciousness, whereby any
'effect-in-the-world' is deemed invariant to 'implementation' (or more
precisely, all notions of 'implementation' - and hence 'the world' -
are alike defined on a computationally-established 'substrate').

C1 is computational theory within physicalism. C2 is what I understand
Bruno et al. to mean by 'comp'. The notion of 'implementation' doesn't
disappear in C2, it just becomes a set of nested 'substitution levels'
within a recursive computational 'reality'. This can be a major source
of confusion IMO. The point remains that you can't consistently hold
both C1 and C2 to be true. The belief that there is an invariant
mapping between consciousness and 'pure' computation (at the correct
substitution level) *entails* a belief in C2, and hence is inconsistent
with C1. This doesn't mean that C1 is *false*, but it isn't 'comp'.  C1
and C2 have precisely opposite explanatory directions and intent.
Hence...you pays your money etc. (but hopefully pending empirical
prediction and disconfirmation).

> This is switching "computation" in place of "consciousness": relying on the 
> idea that
> every computation is conscious?

I don't claim this, but this is apparently what Hofstadter et al. do
(IMO egregiously) maintain, having (apparently) missed the notion of
substitution level inherent in C2. Under C2, we can't be sure that
every 'computation' is conscious: because of substitution uncertainty
we always have the choice of saying 'No' to the doctor. Aunt Hillary is
a case in point - AFAICS the only way to make an ant hill 'conscious' -
however you may *interpret* its behaviour - is to eat it (ughh!!) and
thereby incorporate it at the correct level of substitution.  But for
me this would definitely be a case of 'No chef'.

David

> David Nyman wrote:
> > Russell Standish wrote:
> >
> >
> >>Maudlin says aha - let's take the recording, and add to it an inert
> >>machine that handles the counterfactuals. This combined machine is
> >>computationally equivalent to the original. But since the new machine
> >>is physically equivalent to a recording, how could consciousness
> >>supervene on it. If we want to keep supervenience, there must be
> >>something noncomputational that means the first machine is conscious,
> >>and the second not.
> >>
> >>Marchal says consciousness supervenes on neither of the physical
> >>machines, but on the abstract computation, and there is only one
> >>consciousness involved (not two).
> >
> >
> > Is there not a more general appeal to plausibility open to the
> > non-supervenience argument? We are after all attempting to show the
> > *consequences* of a thoroughgoing assumption of comp, not prove its
> > truth.  Under comp, a specific conscious state is taken as mapping to,
> > and consistently co-varying with, some equally specific, but purely
> > computationally defined, entity. The general problem is that any
> > attempt to preserve such consistency of mapping through supervention on
> > a logically and ontically prior 'physical' reality must fail, because
> > under physicalism comp *must* reduce to an arbitrary gloss on the
> > behaviour at an arbitrary level of arbitrarily many *physical*
> > architectures or substrates.
>
> There is another possibility: that consciousness is relative to what it is 
> conscious
> *of* and any computation that implements consciousness must also implement 
> the whole
> world which the consciousness is conscious of.  In that case there may be 
> only one,
> unique physical universe that implements our consciousness.
>
> >In other words, a 'computation' can be
> > anything I say it is (cf. Hofstadter for some particularly egregious
> > examples).
>
> This is switching "computation" in place of "consciousness": relying on the 
> idea that
> every computation is conscious?
> 
> Brent Meeker



Re: Maudlin's Demon (Argument)

2006-10-05 Thread Bruno Marchal


On 05-Oct-06, at 04:01, Brent Meeker wrote:

> There is another possibility: that consciousness is relative to what 
> it is conscious
> *of* and any computation that implements consciousness must also 
> implement the whole
> world which the consciousness is conscious of.  In that case there may 
> be only one,
> unique physical universe that implements our consciousness.

This is just saying that your generalized brain is the whole physical 
universe. Then either the physical universe is Turing emulable, and in 
that case the reasoning of Maudlin still works; or the physical universe 
is not Turing emulable, but then comp is false (given that here your 
brain is equal to the whole universe).
Note that in general, if your brain is not the entire universe, comp 
entails that the physical universe is not Turing emulable.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: Maudlin's Demon (Argument)

2006-10-04 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent Meeker writes:
> 
> 
>>David Nyman wrote:
>>
>>>Russell Standish wrote:
>>>
>>>
>>>
Maudlin says aha - let's take the recording, and add to it an inert
machine that handles the counterfactuals. This combined machine is
computationally equivalent to the original. But since the new machine
is physically equivalent to a recording, how could consciousness
supervene on it. If we want to keep supervenience, there must be
something noncomputational that means the first machine is conscious,
and the second not.

Marchal says consciousness supervenes on neither of the physical
machines, but on the abstract computation, and there is only one
consciousness involved (not two).
>>>
>>>
>>>Is there not a more general appeal to plausibility open to the
>>>non-supervenience argument? We are after all attempting to show the
>>>*consequences* of a thoroughgoing assumption of comp, not prove its
>>>truth.  Under comp, a specific conscious state is taken as mapping to,
>>>and consistently co-varying with, some equally specific, but purely
>>>computationally defined, entity. The general problem is that any
>>>attempt to preserve such consistency of mapping through supervention on
>>>a logically and ontically prior 'physical' reality must fail, because
>>>under physicalism comp *must* reduce to an arbitrary gloss on the
>>>behaviour at an arbitrary level of arbitrarily many *physical*
>>>architectures or substrates. 
>>
>>There is another possibility: that consciousness is relative to what it is 
>>conscious 
>>*of* and any computation that implements consciousness must also implement 
>>the whole 
>>world which the consciousness is conscious of.  In that case there may be 
>>only one, 
>>unique physical universe that implements our consciousness.
> 
> 
> Do you believe it is possible to copy a particular consciousness by emulating 
> it, along 
> with sham inputs (i.e. in virtual reality), on a general purpose computer? 

That would be my present guess.

>Or do you believe 
> a coal-shovelling robot could only have the coal-shovelling experience by 
> actually shovelling 
> coal?

Probably not.  But from a QM viewpoint the robot and the coal are inevitably 
entangled with the environment (i.e. the rest of the universe); so I don't 
consider 
it a knock-down argument.

Brent Meeker




RE: Maudlin's Demon (Argument)

2006-10-04 Thread Stathis Papaioannou

Brent Meeker writes:

> David Nyman wrote:
> > Russell Standish wrote:
> > 
> > 
> >>Maudlin says aha - let's take the recording, and add to it an inert
> >>machine that handles the counterfactuals. This combined machine is
> >>computationally equivalent to the original. But since the new machine
> >>is physically equivalent to a recording, how could consciousness
> >>supervene on it. If we want to keep supervenience, there must be
> >>something noncomputational that means the first machine is conscious,
> >>and the second not.
> >>
> >>Marchal says consciousness supervenes on neither of the physical
> >>machines, but on the abstract computation, and there is only one
> >>consciousness involved (not two).
> > 
> > 
> > Is there not a more general appeal to plausibility open to the
> > non-supervenience argument? We are after all attempting to show the
> > *consequences* of a thoroughgoing assumption of comp, not prove its
> > truth.  Under comp, a specific conscious state is taken as mapping to,
> > and consistently co-varying with, some equally specific, but purely
> > computationally defined, entity. The general problem is that any
> > attempt to preserve such consistency of mapping through supervention on
> > a logically and ontically prior 'physical' reality must fail, because
> > under physicalism comp *must* reduce to an arbitrary gloss on the
> > behaviour at an arbitrary level of arbitrarily many *physical*
> > architectures or substrates. 
> 
> There is another possibility: that consciousness is relative to what it is 
> conscious 
> *of* and any computation that implements consciousness must also implement 
> the whole 
> world which the consciousness is conscious of.  In that case there may be 
> only one, 
> unique physical universe that implements our consciousness.

Do you believe it is possible to copy a particular consciousness by emulating 
it, along 
with sham inputs (i.e. in virtual reality), on a general purpose computer? Or 
do you believe 
a coal-shovelling robot could only have the coal-shovelling experience by 
actually shovelling 
coal?

Stathis Papaioannou



Re: Maudlin's Demon (Argument)

2006-10-04 Thread George Levy




List members

I scanned Maudlin's paper. Thank you Russell. As I suspected I found a
few questionable passages:

Page 417, line 14:
"So the spatial sequence of the troughs need not reflect their
'computational sequence'. We may so contrive that any sequence of
address lie next to each other spatially."

Page 418, line 5:
"The first step in our construction is to rearrange Klara's tape so
that address T[0] to T[N] lie spatially in sequence, T[0] next to T[1]
next to T[2], etc..."

How does Maudlin know how to arrange the order of the tape locations?
He must run his task Pi in his head or on a calculator.
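[Editorial note: George's objection can be made concrete: to lay the tape addresses out in "computational sequence", Maudlin must first run the computation and record the order in which the head visits addresses - there is no way to know that order without executing, or at least simulating, the task. A toy sketch; the head-movement rule below is invented for illustration and stands in for the machine's real transition table.]

```python
def visit_order(steps):
    """Run a toy head-movement rule for `steps` steps and return tape
    addresses in the order they are first visited - the machine's
    'computational sequence', as opposed to its spatial layout."""
    pos, order, seen = 0, [], set()
    for t in range(steps):
        if pos not in seen:            # record each address on first visit
            seen.add(pos)
            order.append(pos)
        pos += 1 if t % 3 else -1      # arbitrary stand-in for the real rule
    return order

# The spatial layout (..., -1, 0, 1, 2, ...) differs from the computational
# sequence, which only running (or simulating) the machine reveals:
sequence = visit_order(7)
```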

Maudlin reaches a quasi-religious conclusion when he states:
"Olympia has shown us at
least that some other level beside the computational must be sought.
But until we have found that level and until we have explicated the
relationship between it and the computational structure, the belief
that ...of pure computationalism will ever lead to the creation of
artificial minds or the understanding of natural ones, remains only
a pious hope."


Let me try to summarize:

Maudlin is wrong in concluding that there must be something
non-computational necessary for consciousness.

Maudlin himself was the unwitting missing consciousness piece inserted
in his machine at programming time, i.e., the machine's consciousness
spanned execution time and programming time. He himself was the
unwitting missing piece when he designed his tape.

The correct conclusion IMHO is that consciousness is independent of
time, space, substrate and level and in fact can span all of these just
as Maudlin partially demonstrated - but you still need an
implementation -- so what is left? Like the Cheshire cat, nothing
except the software itself: Consistent logical links operating in a
bootstrapping reflexive emergent manner.

Bruno is right in applying math/logic to solve the
consciousness/physical world (Mind/Body) riddle. Physics can be derived
from machine psychology. 

George


Russell Standish wrote:

  If I can summarise George's summary as this:

In order to generate a recording, one must physically instantiate the
conscious computation. Consciousness supervenes on this, presumably.

Maudlin says aha - let's take the recording, and add to it an inert
machine that handles the counterfactuals. This combined machine is
computationally equivalent to the original. But since the new machine
is physically equivalent to a recording, how could consciousness
supervene on it. If we want to keep supervenience, there must be
something noncomputational that means the first machine is conscious,
and the second not.

Marchal says consciousness supervenes on neither of the physical
machines, but on the abstract computation, and there is only one
consciousness involved (not two).

Of course, this all applies to dreaming machines, or machines hooked
up to recordings of the real world. This is where I concentrate my
attack on the Maudlin argument (the Multiverse argument).

Cheers

  



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups "Everything List" group.  To post to this group, send email to everything-list@googlegroups.com  To unsubscribe from this group, send email to [EMAIL PROTECTED]  For more options, visit this group at http://groups.google.com/group/everything-list  -~--~~~~--~~--~--~---





RE: Maudlin's Demon (Argument)

2006-10-04 Thread Stathis Papaioannou

George,

By similar reasoning, might you not also say that no computer or computer
program could ever be conscious on its own, because at some point it
requires human intervention to produce it, even though once set up no
more human intervention is required?

Stathis Papaioannou




---
> Date: Wed, 4 Oct 2006 11:26:44 -0700
> From: [EMAIL PROTECTED]
> To: everything-list@googlegroups.com
> Subject: Re: Maudlin's Demon (Argument)
> 
> Bruno, Stathis,
> Thank you Stathis for the summary. I do have the paper now and I will read it 
> carefully. Based on Stathis's summary I still believe that Maudlin's argument 
> is fallacious. A computer program equivalent to Maudlin's construction can be 
> written as:
> IF (Input = -27098217872180483080234850309823740127)
> THEN (Output = 78972398473024802348523948518347109)
> ELSE Call Conscious_Subroutine
> ENDIF.
> If the input 27098217872180483080234850309823740127 is always given then the 
> ELSE clause is never invoked. The point is that to write the above piece of 
> code, Maudlin must go through the trouble of calculating perhaps on his hand 
> calculator the answer 78972398473024802348523948518347109 that the 
> Conscious_Subroutine would have produced had it been called. (Notice the 
> conditional tense indicating the counterfactual). He then inserts the answer 
> in the IF clause at programming time. In so doing he must instantiate in his 
> own mind and/or calculator the function of the Conscious_Subroutine for the 
> particular case in which input = 27098217872180483080234850309823740127,
> If the single numeral input is replaced by a function with multiple numerical 
> inputs, Maudlin's trick could be expanded by using tables to store the output, 
> and instead of using an IF statement, Maudlin could use a CASE statement. But 
> then, Maudlin would have to fill up the whole table with the answers that 
> the Conscious_Subroutine would have produced. In the ultimate case you could 
> conceive of a huge table that contains all the answers that the 
> Conscious_Subroutine would ever give to any question. This table however 
> must be filled up. In the process of filling up the table you must 
> instantiate all the states of consciousness of the Conscious_Subroutine.
> Bruno, says:
> BTW I thought you did understand the physics/psychology 
> (theology/computer-science/number-theory) reversal. What makes you changing 
> your mind? (just interested).
> I did not change my mind. I just believe that Maudlin's reasoning is faulty.
> By calculating the output Maudlin inserts himself and possibly his calculator 
> in the conscious process. To understand the insertion of Maudlin into the 
> consciousness of The Conscious_Subroutine, you must agree that this 
> consciousness is independent of time, space, substrate and level. This Maybe 
> is the Moral of Maudlin's Machinations...?
> George
> Bruno Marchal wrote:
> On 3 Oct 2006, at 21:33, George Levy wrote:
> Bruno,
> I looked on the web but could not find Maudlin's paper.
> Mmh... for those working in an institution affiliated to JSTOR, it is 
> available here:
> http://www.jstor.org/view/0022362x/di973301/97p04115/0
> I will search if some free versions are available elsewhere, or put a 
> pdf-version on my web page.
> So I just go by what you are saying.
> I still stand by the spirit of what I said but I admit I was misleading in 
> stating that Maudlin himself is part of the machine. It is not Maudlin, but 
> Maudlin's proxies or demons, the Klaras, which are now part of the machine. 
> Maudlin used the same trick that Maxwell used. He used the demon or proxy 
> to perform his (dirty) work.
> It seems to me that if you trace the information flow you probably can detect 
> that Maudlin is cheating: How are the protoolympia and the Klaras defined?
> Maudlin is cheating? No more than a doctor who builds an artificial brain by 
> copying an original at some level. Remember we *assume* the comp hypothesis.
> To design his protoolympia and the Klaras he must start with the information 
> about the machine and the task PI. If he changes task from PI to PIprime then 
> he has to apply a different protoolympia and different Klaras, and he has to 
> intervene in the process!
> Yes but only once. Changing PI to PIprime would be another thought 
> experiment. I don't see the relevance.
> I know you got the paper now. It will help in this debate.
> Maudlin's argument is far from convincing.
> BTW I thought you did understand the physics/psychology 
> (theology/computer-science/number-theory) reversal. What made you change 
> your mind? (just interested).
> Bruno
> http://iridia.ulb.ac.be/~marchal/
> 
__

Re: Maudlin's Demon (Argument)

2006-10-04 Thread Brent Meeker

David Nyman wrote:
> Russell Standish wrote:
> 
> 
>>Maudlin say aha - lets take the recording, and add to it an inert
>>machine that handles the counterfactuals. This combined machine is
>>computationally equivalent to the original. But since the new machine
>>is physically equivalent to a recording, how could consciousness
>>supervene on it. If we want to keep supervenience, there must be
>>something noncomputational that means the first machine is conscious,
>>and the second not.
>>
>>Marchal says consciousness supervenes on neither of the physical
>>machines, but on the abstract computation, and there is only one
>>consciousness involved (not two).
> 
> 
> Is there not a more general appeal to plausibility open to the
> non-supervenience argument? We are after all attempting to show the
> *consequences* of a thoroughgoing assumption of comp, not prove its
> truth.  Under comp, a specific conscious state is taken as mapping to,
> and consistently co-varying with, some equally specific, but purely
> computationally defined, entity. The general problem is that any
> attempt to preserve such consistency of mapping through supervenience on
> a logically and ontically prior 'physical' reality must fail, because
> under physicalism comp *must* reduce to an arbitrary gloss on the
> behaviour at an arbitrary level of arbitrarily many *physical*
> architectures or substrates. 

There is another possibility: that consciousness is relative to what it
is conscious *of*, and any computation that implements consciousness
must also implement the whole world which the consciousness is conscious
of. In that case there may be only one, unique physical universe that
implements our consciousness.

>In other words, a 'computation' can be
> anything I say it is (cf. Hofstadter for some particularly egregious
> examples).

This is switching "computation" in place of "consciousness": relying on
the idea that every computation is conscious?

Brent Meeker




Re: Maudlin's Demon (Argument)

2006-10-04 Thread David Nyman

Russell Standish wrote:

> Maudlin say aha - lets take the recording, and add to it an inert
> machine that handles the counterfactuals. This combined machine is
> computationally equivalent to the original. But since the new machine
> is physically equivalent to a recording, how could consciousness
> supervene on it. If we want to keep supervenience, there must be
> something noncomputational that means the first machine is conscious,
> and the second not.
>
> Marchal says consciousness supervenes on neither of the physical
> machines, but on the abstract computation, and there is only one
> consciousness involved (not two).

Is there not a more general appeal to plausibility open to the
non-supervenience argument? We are after all attempting to show the
*consequences* of a thoroughgoing assumption of comp, not prove its
truth.  Under comp, a specific conscious state is taken as mapping to,
and consistently co-varying with, some equally specific, but purely
computationally defined, entity. The general problem is that any
attempt to preserve such consistency of mapping through supervenience on
a logically and ontically prior 'physical' reality must fail, because
under physicalism comp *must* reduce to an arbitrary gloss on the
behaviour at an arbitrary level of arbitrarily many *physical*
architectures or substrates. In other words, a 'computation' can be
anything I say it is (cf. Hofstadter for some particularly egregious
examples).

So under physicalism, comp is condemned to be the ghost in the machine
- merely a *metaphor*, entirely dependent on physical implementation
for any 'reality': it's always the physics that does the work.
Consequently, if we would rescue comp, we must perforce reverse the
fundamental assumption, such that some set of logical/mathematical
entities and operations must be logically and ontically prior. The
project (e.g. Bruno's UDA) is then to show that some version of this
generates both the consistent mapping from consciousness to
computation, and a consistent 'physics' that is emergent from
computational psychology.

You pays your money and you takes your choice.

David

> If I can summarise George's summary as this:
>
> In order to generate a recording, one must physically instantiate the
> conscious computation. Consciousness supervenes on this, presumably.
>
> Maudlin says: aha, let's take the recording, and add to it an inert
> machine that handles the counterfactuals. This combined machine is
> computationally equivalent to the original. But since the new machine
> is physically equivalent to a recording, how could consciousness
> supervene on it? If we want to keep supervenience, there must be
> something noncomputational that means the first machine is conscious,
> and the second not.
>
> Marchal says consciousness supervenes on neither of the physical
> machines, but on the abstract computation, and there is only one
> consciousness involved (not two).
>
> Of course, this all applies to dreaming machines, or machines hooked
> up to recordings of the real world. This is where I concentrate my
> attack on the Maudlin argument (the Multiverse argument).
>
> Cheers
>





Re: Maudlin's Demon (Argument)

2006-10-04 Thread Russell Standish

If I can summarise George's summary as this:

In order to generate a recording, one must physically instantiate the
conscious computation. Consciousness supervenes on this, presumably.

Maudlin says: aha, let's take the recording, and add to it an inert
machine that handles the counterfactuals. This combined machine is
computationally equivalent to the original. But since the new machine
is physically equivalent to a recording, how could consciousness
supervene on it? If we want to keep supervenience, there must be
something noncomputational that means the first machine is conscious,
and the second not.

Marchal says consciousness supervenes on neither of the physical
machines, but on the abstract computation, and there is only one
consciousness involved (not two).

Of course, this all applies to dreaming machines, or machines hooked
up to recordings of the real world. This is where I concentrate my
attack on the Maudlin argument (the Multiverse argument).

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED] 
Australiahttp://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02






Re: Maudlin's Demon (Argument)

2006-10-04 Thread George Levy




Oops. Read: IF (Input = 27098217872180483080234850309823740127)
George 


George Levy wrote:

  
  
Bruno, Stathis,
  
Thank you Stathis for the summary. I do have the paper now and I will
read it carefully. Based on Stathis's summary I still believe that
Maudlin's argument is fallacious. A computer program equivalent to
Maudlin's construction can be written as:
  
IF (Input = -27098217872180483080234850309823740127) 
THEN (Output = 78972398473024802348523948518347109)
ELSE Call Conscious_Subroutine
ENDIF.
  
If the input 27098217872180483080234850309823740127 is always given
then the ELSE clause is never invoked. The point is that to write
the above piece of code, Maudlin must go through the trouble of
calculating perhaps on his hand calculator the answer
78972398473024802348523948518347109 that the Conscious_Subroutine would
have produced had it been called. (Notice the conditional tense
indicating the counterfactual). He then inserts the answer in the IF
clause at programming time. In so doing he must instantiate in his own
mind and/or calculator the function of the Conscious_Subroutine for the
particular case in which input = 27098217872180483080234850309823740127,
  
If the single numeral input is replaced by a function with multiple
numerical inputs, Maudlin's trick could be expanded by using tables to
store the output, and instead of using an IF statement, Maudlin could
use a CASE statement. But then, Maudlin would have to fill up the whole
table with the answers that the Conscious_Subroutine would have
produced. In the ultimate case you could conceive of a huge table that
contains all the answers that the Conscious_Subroutine would ever
give to any question. This table however must be filled up. In the
process of filling up the table you must instantiate all the states of
consciousness of the Conscious_Subroutine.
  
Bruno, says:
  
  BTW I thought you did understand the physics/psychology
(theology/computer-science/number-theory) reversal. What made you
change your mind? (just interested). 
  
  
I did not change my mind. I just believe that Maudlin's reasoning is
faulty.
  
By calculating the output Maudlin inserts himself and possibly his
calculator in the conscious process. To understand the insertion of
Maudlin into the consciousness of The Conscious_Subroutine, you must
agree that this consciousness is independent of
time, space, substrate and level. This Maybe is the Moral of Maudlin's
Machinations...?
  
George
  
Bruno Marchal wrote:
  

On 3 Oct 2006, at 21:33, George Levy wrote: 

 Bruno, 
  
I looked on the web but could not find Maudlin's paper. 



Mmh... for those working in an institution affiliated to JSTOR, it is
available here: 
http://www.jstor.org/view/0022362x/di973301/97p04115/0


I will search if some free versions are available elsewhere, or put a
pdf-version on my web page. 





So I just go by what you are saying. 
  
I still stand by the spirit of what I said but I admit I was
misleading in stating that Maudlin himself is part of the machine. It
is not Maudlin, but Maudlin's proxies or demons, the Klaras, which are
now part of the machine. Maudlin used the same trick that Maxwell used.
He used the demon or proxy to perform his (dirty) work. 
  
It seems to me that if you trace the information flow you probably
can detect that Maudlin is cheating: How are the protoolympia and the
Klaras defined? 



Maudlin is cheating? No more than a doctor who builds an artificial
brain by copying an original at some level. Remember we *assume* the
comp hypothesis. 




To design his protoolympia and the Klaras he must start with
the information about the machine and the task PI. If he changes task
from PI to PIprime then he has to apply a different protoolympia and
different Klaras, and he has to intervene in the process! 


Yes but only once. Changing PI to PIprime would be another thought
experiment. I don't see the relevance. 
I know you got the paper now. It will help in this debate. 



Maudlin's argument is far from convincing. 


BTW I thought you did understand the physics/psychology
(theology/computer-science/number-theory) reversal. What made you
change your mind? (just interested). 

Bruno 

http://iridia.ulb.ac.be/~marchal/




  
  
  
  








Re: Maudlin's Demon (Argument)

2006-10-04 Thread George Levy




Bruno, Stathis,

Thank you Stathis for the summary. I do have the paper now and I will
read it carefully. Based on Stathis's summary I still believe that
Maudlin's argument is fallacious. A computer program equivalent to
Maudlin's construction can be written as:

IF (Input = -27098217872180483080234850309823740127) 
THEN (Output = 78972398473024802348523948518347109)
ELSE Call Conscious_Subroutine
ENDIF.

If the input 27098217872180483080234850309823740127 is always given
then the ELSE clause is never invoked. The point is that to write
the above piece of code, Maudlin must go through the trouble of
calculating perhaps on his hand calculator the answer
78972398473024802348523948518347109 that the Conscious_Subroutine would
have produced had it been called. (Notice the conditional tense
indicating the counterfactual). He then inserts the answer in the IF
clause at programming time. In so doing he must instantiate in his own
mind and/or calculator the function of the Conscious_Subroutine for the
particular case in which input = 27098217872180483080234850309823740127,

If the single numeral input is replaced by a function with multiple
numerical inputs, Maudlin's trick could be expanded by using tables to
store the output, and instead of using an IF statement, Maudlin could
use a CASE statement. But then, Maudlin would have to fill up the whole
table with the answers that the Conscious_Subroutine would have
produced. In the ultimate case you could conceive of a huge table that
contains all the answers that the Conscious_Subroutine would ever
give to any question. This table however must be filled up. In the
process of filling up the table you must instantiate all the states of
consciousness of the Conscious_Subroutine.
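The IF/CASE construction above can be sketched as a precomputed lookup table whose very construction exercises the subroutine it is meant to bypass. A minimal Python sketch; all names and the trivial stand-in arithmetic are hypothetical illustrations, not Maudlin's actual machinery:

```python
# Stand-in for the "conscious" computation; its actual content is irrelevant
# to the argument.
def conscious_subroutine(x: int) -> int:
    return x * 2 + 1

def build_table(inputs):
    # Filling the table runs conscious_subroutine for every covered input:
    # the computation is instantiated here, at "programming time".
    return {x: conscious_subroutine(x) for x in inputs}

def olympia(x, table):
    # At "execution time" the machine merely looks up a stored answer.
    # The ELSE branch (the counterfactual handler) stays inert as long as
    # only tabulated inputs ever arrive.
    if x in table:
        return table[x]
    return conscious_subroutine(x)

table = build_table([27098217872180483080234850309823740127])
# The lookup reproduces the subroutine's answer without calling it at runtime.
assert olympia(27098217872180483080234850309823740127, table) == \
       conscious_subroutine(27098217872180483080234850309823740127)
```

The point the sketch makes concrete: `olympia` does no "conscious" work at execution time only because `build_table` already did all of it.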

Bruno, says:

BTW I thought you did understand the physics/psychology
(theology/computer-science/number-theory) reversal. What made you
change your mind? (just interested).
  


I did not change my mind. I just believe that Maudlin's reasoning is
faulty.

By calculating the output Maudlin inserts himself and possibly his
calculator in the conscious process. To understand the insertion of
Maudlin into the consciousness of The Conscious_Subroutine, you must
agree that this consciousness is independent of
time, space, substrate and level. This Maybe is the Moral of Maudlin's
Machinations...?

George

Bruno Marchal wrote:

  
On 3 Oct 2006, at 21:33, George Levy wrote:
  
  
   Bruno,


I looked on the web but could not find Maudlin's paper. 
  
  
  
Mmh... for those working in an institution affiliated to JSTOR, it is
available here:
  
http://www.jstor.org/view/0022362x/di973301/97p04115/0
  
  
I will search if some free versions are available elsewhere, or put a
pdf-version on my web page.
  
  
  
  
  
  
  So I just go by what you are saying. 

I still stand by the spirit of what I said but I admit I was
misleading in stating that Maudlin himself is part of the machine. It
is not Maudlin, but Maudlin's proxies or demons, the Klaras, which are
now part of the machine. Maudlin used the same trick that Maxwell used.
He used the demon or proxy to perform his (dirty) work. 

It seems to me that if you trace the information flow you probably
can detect that Maudlin is cheating: How are the protoolympia and the
Klaras defined? 
  
  
  
Maudlin is cheating? No more than a doctor who builds an artificial
brain by copying an original at some level. Remember we *assume* the
comp hypothesis.
  
  
  
  
  
  To design his protoolympia and the Klaras he must start with
the information about the machine and the task PI. If he changes task
from PI to PIprime then he has to apply a different protoolympia and
different Klaras, and he has to intervene in the process!

  
  
Yes but only once. Changing PI to PIprime would be another thought
experiment. I don't see the relevance.
  
I know you got the paper now. It will help in this debate.
  
  
  
  
Maudlin's argument is far from convincing.

  
  
BTW I thought you did understand the physics/psychology
(theology/computer-science/number-theory) reversal. What made you
change your mind? (just interested).
  
  
Bruno
  
  
http://iridia.ulb.ac.be/~marchal/
  
  
  
  








Re: Maudlin's Demon (Argument)

2006-10-04 Thread Bruno Marchal

On 3 Oct 2006, at 21:33, George Levy wrote:

Bruno,

I looked on the web but could not find Maudlin's paper. 


Mmh... for those working in an institution affiliated to JSTOR, it is available here:
http://www.jstor.org/view/0022362x/di973301/97p04115/0

I will search if some free versions are available elsewhere, or put a pdf-version on my web page.





So I just go by what you are saying. 

I still stand by the spirit of what I said but I admit I was misleading in stating that Maudlin himself is part of the machine. It is not Maudlin, but Maudlin's proxies or demons, the Klaras, which are now part of the machine. Maudlin used the same trick that Maxwell used. He used the demon or proxy to perform his (dirty) work. 

It seems to me that if you trace the information flow you probably can detect that Maudlin is cheating: How are the protoolympia and the Klaras defined? 


Maudlin is cheating? No more than a doctor who builds an artificial brain by copying an original at some level. Remember we *assume* the comp hypothesis.




To design his protoolympia and the Klaras he must start with the information about the machine and the task PI. If he changes task from PI to PIprime then he has to apply a different protoolympia and different Klaras, and he has to intervene in the process!

Yes but only once. Changing PI to PIprime would be another thought experiment. I don't see the relevance.
I know you got the paper now. It will help in this debate.


Maudlin's argument is far from convincing.

BTW I thought you did understand the physics/psychology (theology/computer-science/number-theory) reversal. What made you change your mind? (just interested).

Bruno

http://iridia.ulb.ac.be/~marchal/



Re: Maudlin's Demon (Argument)

2006-10-03 Thread George Levy




Bruno,

I looked on the web but could not find Maudlin's paper. So I just go by
what you are saying. 

I still stand by the spirit of what I said but I admit I was misleading
in stating that Maudlin himself is part of the machine. It is not
Maudlin, but Maudlin's proxies or demons, the Klaras, which are now part
of the machine. Maudlin used the same trick that Maxwell used. He used
the demon or proxy to perform his (dirty) work. 

It seems to me that if you trace the information flow you probably can
detect that Maudlin is cheating: How are the protoolympia and the Klaras
defined? To design his protoolympia and the Klaras he must start with
the information about the machine and the task PI. If he changes task
from PI to PIprime then he has to apply a different protoolympia and
different Klaras, and he has to intervene in the process!

Maudlin's argument is far from convincing.

George


Bruno Marchal wrote:

  
On 3 Oct 2006, at 06:56, George Levy wrote:
  
  
   Bruno Marchal wrote in explaining Maudlin's argument:


"For any given precise running computation associated to some inner
experience, you can modify the device in such a way that the amount of
physical activity involved is arbitrarily low, and even null for
dreaming experience which has no inputs and no outputs. Now, having
suppressed that physical activity present in the running computation,
the machine will only be accidentally correct. It will be correct only
for that precise computation, with unchanged environment. If it is
changed a little bit, it will make the machine running computation no
more relatively correct. But then, Maudlin ingeniously showed that
counterfactual correctness can be recovered, by adding non-active
devices which will be triggered only if some (counterfactual) change
would appear in the environment."
  



To reduce the machine's complexity Maudlin must perform a modicum of
analysis, simulation, etc., to predict how the machine performs in
different situations. Using his newly acquired knowledge, he then
maximally reduces the machine's complexity for one particular task,
keeping the machine fully operational for all other tasks. In effect
Maudlin has surreptitiously inserted himself into the mechanism. So now,
we don't have just the machine but we have the machine plus Maudlin.
The machine is not simpler or nonexistent. The machine is now Maudlin!

  
  
  
(We can come back on this real critics, but here is a short answer for
those who have Mauldlin's paper, we can find a version on the net now).
  
  
Olympia is "proto-olympia" + "the Klaras". Maudlin assumes comp and he
needs only the description of the original machine to build the Klaras
(for regaining counterfactual correctness) and add them to the
proto-olympia (the machine with no physical activity which is only
accidentally correct). Once added, the composite, Olympia =
"proto-olympia + Klaras", is independent of Maudlin, and is
computationally equivalent to the original machine.
  
  
So Olympia, once built, does not need Maudlin at all. Of course
with comp the building itself cannot influence the future possible
supervenience, for the same reason that if a doctor gives you an
artificial brain, the story of each individual component has no
relation with its later use (if not, it means the comp level has
not been chosen correctly).
  
  
  
  
  
In conclusion, the following conclusion, reached by Maudlin and Bruno,
is fallacious:

"Now this shows that any inner experience can be associated with an
arbitrarily low (even null) physical activity, and this in keeping
counterfactual correctness. And that is absurd with the conjunction of
both comp and materialism."
  



I think the paradox can be resolved by tracing how information flows
and Maudlin is certainly in the circuit, using information, just like
Maxwell's demon is affecting entropy.

  
  
  
Once Olympia is built, Maudlin is completely out of the circuit. I
think you forget the purpose of the Klaras. 
  
At least, George, this is a real attempt to find an error, and in the
8th step! I appreciate your try, but it seems to me you have just
forgotten that Maudlin did *program* his intervention: through the
Klaras, so that keeping comp at this stage makes Maudlin's special
role irrelevant. OK?
  
  
Bruno
  
  
  
http://iridia.ulb.ac.be/~marchal/
  
  
  
  


