Re: MGA 1

2008-11-28 Thread John Mikes
Thanks, Brent,

at least you read through my blurb. Of course I am vague - besides, I wrote
the post in a jiffy, not premeditatedly, I am sorry. Also, there is no
adequate language for those things I want to refer to, not even 'in situ';
the ideas and terms about an interefficient totality (IMO more than just the
TOE) are still being sought. We have only the old language of the (model-
based) quotidian, and scientific terms like yours 'in the physicists' sense'
and similar.

BTW: no action at a distance? What would you call the Mars-to-Earth case
when NASA sends an order and the module on Mars starts digging?  I
think you may consider the beam a 'connecting' (physical) space-term?

I hope to be in the ballpark of an *extension* of your model-based
(physicalistic) causality, in a sense (I never considered my position in an
'epistemic sense'): think of your (physical) distance as 'unrelated' -
relevant to more than just measurable space, in any 'dimension' we may (or
still cannot) think of. I sometimes consider
'causality' as something *backwards-deterministic*, in the sense that
everything is 'e/affected' by other changes (relations) - as in: nothing
generates itself. (In this respect I shove the ORIGIN under the rug,
because I acknowledge that it is beyond our limited mental capabilities -
and I don't want to start with unreasonable assumptions.)

(Yes, my 'narrative' about a Big Bang fantasy - one closer to *human common-
sense logic* - starts with a Plenitude assumption, a pretty undetailed image,
giving rise only to some physically-mathematically followable(?) process of
the *mandatory* occurrence of unlimited (both in quality and number)
*universes*, but I am ready to change it to a better idea any time.)

I wonder if I added to the obscurity of my language. If yes, I am sorry.

John M



On Thu, Nov 27, 2008 at 6:16 PM, Brent Meeker [EMAIL PROTECTED] wrote:


 John Mikes wrote:
  Brent wrote:
  ...
  *But is causality an implementation detail?  There seems to be an
 implicit
  assumption that digitally represented states form a sequence just
  because there
  is a rule that defines(*) that sequence, but in fact all digital (and
  other) sequences depend on(**) causal chains. ...*
 
  I would insert at (*): /*'in digitality'*/  -
  and at (**):
  /*'(the co-interefficiency of) unlimited'*/  - because in my vocabulary
  (and I do not expect the 'rest of the world' to accept it) the
  conventional term /'causality'/, meaning to find /A CAUSE/ within the
  (observed) topical etc. model that entails the (observed) 'effect' -
  has given place to the unlimited interconnections that - in their total
  interefficiency - result in the effect we observed within a
  model-domain, irrespective of the limits of the observed domain.
  Cause - IMO - is a limited term of ancient, narrow epistemic (model-
  based?) views, not fit for discussions in a TOE-oriented style.
  Using obsolete words impresses the conclusions as well.

 I think I agree with that last remark (although I'm not sure because the
 language seems obscure).  I meant causality in the physicists' sense of no
 action at a distance, not in an epistemic sense.

 Brent


 





Re: MGA 1

2008-11-27 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 25 Nov 2008, at 20:16, Brent Meeker wrote:
 
 Bruno Marchal wrote:
 
 Brent: I don't see why the mechanist-materialists are
 logically disallowed from incorporating that kind of physical
 difference into their notion of consciousness.

 Bruno: In our setting, it means that the neuron/logic gates have  
 some form of
 prescience.
 Brent: I'm not sure I agree with that.  If consciousness is a  
 process it may be
 instantiated in physical relations (causal?).  But relations are in  
 general not
 attributes of the relata.  Distance is an abstract relation but it  
 is always
 realized as the distance between two things.  The things themselves  
 don't have
 distance.  If some neurons encode my experience of seeing a rose  
 might not
 the experience depend on the existence of roses, the evolution of  
 sight, and the
 causal chain as well as the immediate state of the neurons?
 
 
 With *digital* mechanism, it would just mean that we have not chosen
 the right level of substitution. Once the level is well chosen, then
 we can no longer give a role to the implementation details. They can no
 longer be relevant, or we introduce prescience into the elementary
 components.

But is causality an implementation detail?  There seems to be an implicit 
assumption that digitally represented states form a sequence just because there 
is a rule that defines that sequence, but in fact all digital (and other) 
sequences depend on causal chains.

 
 

 Bostrom's views about fractional
 quantities of experience are a case in point.
 If that were true, why would you say yes to the doctor without
 knowing the thickness of the artificial axons?
 How can you be sure your consciousness will not half diminish when the
 doctor proposes to you the new, cheaper brain which uses thinner fibers,
 or half the number of redundant security fibers (thanks to progress
 in security software)?
 I would no longer dare to say yes to the doctor if I could lose a
 fraction of my consciousness and become a partial zombie.
 But who would say yes to the doctor if he said that he would take  
 a movie of
 your brain states and project it?  Or if he said he would just  
 destroy you in
 this universe and you would continue your experiences in other  
 branches of the
 multiverse or in platonia?  Not many I think.
 
 
 I agree with you. Not many will say yes to such a doctor!  Even  
 rightly so (with MEC). I think MGA 3 should make this clear.
 The point is just that if we assume both MEC  *and*  MAT, then the  
 movie is also conscious, but of course (well: by MGA 3) it is not  
 conscious qua computatio, so that we get the (NON COMP or NON MAT)  
 conclusion.

It's not so clear to me.  One argument leads to CONSCIOUS and the other leads to
NON-CONSCIOUS, but there is no direct contradiction - only a contradiction of
intuitions.  So it may be a fault of intuition in evaluating the thought
experiments.

Brent



 
 I keep COMP (as my working hypothesis, but of course I find it  
 plausible for many reasons), so I abandon MAT. With comp,  
 consciousness can still supervene on computations (in Platonia, or  
 more concretely in the universal deployment), but not on its physical  
 implementation. By UDA we have indeed the obligation now to explain  
 the physical, by the computational. It is the reversal I talked about.  
 Somehow, consciousness does not supervene on brain activity, but brain
 activity supervenes on consciousness. In short, this is because
 consciousness is now somehow related to the whole of arithmetical
 truth, and things are not so simple.
 
 Bruno
 http://iridia.ulb.ac.be/~marchal/
 
 
 
 
  
 





Re: MGA 1

2008-11-27 Thread John Mikes
Brent wrote:
...
*But is causality an implementation detail?  There seems to be an implicit
assumption that digitally represented states form a sequence just because
there
is a rule that defines(*) that sequence, but in fact all digital (and other)
sequences depend on(**) causal chains. ...*

I would insert at (*): *'in digitality'*  -
and at (**):
*'(the co-interefficiency of) unlimited'*  - because in my vocabulary (and I
do not expect the 'rest of the world' to accept it) the conventional term
*'causality'*, meaning to find *A CAUSE* within the (observed) topical
etc. model that entails the (observed) 'effect' - has given place to the
unlimited interconnections that - in their total interefficiency - result in
the effect we observed within a model-domain, irrespective of the limits of
the observed domain.
Cause - IMO - is a limited term of ancient, narrow epistemic (model-
based?) views, not fit for discussions in a TOE-oriented style.
Using obsolete words impresses the conclusions as well.

John Mikes
On Thu, Nov 27, 2008 at 3:43 PM, Brent Meeker [EMAIL PROTECTED] wrote:


 Bruno Marchal wrote:
 
  On 25 Nov 2008, at 20:16, Brent Meeker wrote:
 
  Bruno Marchal wrote:
 
  Brent: I don't see why the mechanist-materialists are
  logically disallowed from incorporating that kind of physical
  difference into their notion of consciousness.
 
  Bruno: In our setting, it means that the neuron/logic gates have
  some form of
  prescience.
  Brent: I'm not sure I agree with that.  If consciousness is a
  process it may be
  instantiated in physical relations (causal?).  But relations are in
  general not
  attributes of the relata.  Distance is an abstract relation but it
  is always
  realized as the distance between two things.  The things themselves
  don't have
  distance.  If some neurons encode my experience of seeing a rose
  might not
  the experience depend on the existence of roses, the evolution of
  sight, and the
  causal chain as well as the immediate state of the neurons?
 
 
   With *digital* mechanism, it would just mean that we have not chosen
   the right level of substitution. Once the level is well chosen, then
   we can no longer give a role to the implementation details. They can no
   longer be relevant, or we introduce prescience into the elementary
   components.

 But is causality an implementation detail?  There seems to be an implicit
 assumption that digitally represented states form a sequence just because
 there
 is a rule that defines that sequence, but in fact all digital (and other)
 sequences depend on causal chains.

 
 
 
  Bostrom's views about fractional
  quantities of experience are a case in point.
  If that were true, why would you say yes to the doctor without
  knowing the thickness of the artificial axons?
  How can you be sure your consciousness will not half diminish when the
  doctor proposes to you the new, cheaper brain which uses thinner fibers,
  or half the number of redundant security fibers (thanks to progress
  in security software)?
  I would no longer dare to say yes to the doctor if I could lose a
  fraction of my consciousness and become a partial zombie.
  But who would say yes to the doctor if he said that he would take
  a movie of
  your brain states and project it?  Or if he said he would just
  destroy you in
  this universe and you would continue your experiences in other
  branches of the
  multiverse or in platonia?  Not many I think.
 
 
  I agree with you. Not many will say yes to such a doctor!  Even
  rightly so (with MEC). I think MGA 3 should make this clear.
  The point is just that if we assume both MEC  *and*  MAT, then the
  movie is also conscious, but of course (well: by MGA 3) it is not
  conscious qua computatio, so that we get the (NON COMP or NON MAT)
  conclusion.

 It's not so clear to me.  One argument leads to CONSCIOUS and the other
 leads to NON-CONSCIOUS, but there is no direct contradiction - only a
 contradiction of intuitions.  So it may be a fault of intuition in
 evaluating the thought experiments.

 Brent



 
  I keep COMP (as my working hypothesis, but of course I find it
  plausible for many reasons), so I abandon MAT. With comp,
  consciousness can still supervene on computations (in Platonia, or
  more concretely in the universal deployment), but not on its physical
  implementation. By UDA we have indeed the obligation now to explain
  the physical, by the computational. It is the reversal I talked about.
  Somehow, consciousness does not supervene on brain activity, but brain
  activity supervenes on consciousness. In short, this is because
  consciousness is now somehow related to the whole of arithmetical
  truth, and things are not so simple.
 
  Bruno
  http://iridia.ulb.ac.be/~marchal/
 
 
 
 
  
 


 



Re: MGA 1

2008-11-27 Thread Brent Meeker

John Mikes wrote:
 Brent wrote:
 ...
 *But is causality an implementation detail?  There seems to be an implicit
 assumption that digitally represented states form a sequence just 
 because there
 is a rule that defines(*) that sequence, but in fact all digital (and 
 other) sequences depend on(**) causal chains. ...*
  
 I would insert at (*): /*'in digitality'*/  - 
 and at (**):
 /*'(the co-interefficiency of) unlimited'*/  - because in my vocabulary
 (and I do not expect the 'rest of the world' to accept it) the
 conventional term /'causality'/, meaning to find /A CAUSE/ within the
 (observed) topical etc. model that entails the (observed) 'effect' -
 has given place to the unlimited interconnections that - in their total
 interefficiency - result in the effect we observed within a
 model-domain, irrespective of the limits of the observed domain.
 Cause - IMO - is a limited term of ancient, narrow epistemic (model-
 based?) views, not fit for discussions in a TOE-oriented style.
 Using obsolete words impresses the conclusions as well.

I think I agree with that last remark (although I'm not sure because the 
language seems obscure).  I meant causality in the physicists' sense of no 
action at a distance, not in an epistemic sense.

Brent





Re: MGA 1

2008-11-26 Thread Stathis Papaioannou

2008/11/25 Kory Heath [EMAIL PROTECTED]:

 The answer I *used* to give was that it doesn't matter, because no
 matter what accidental order you find in Platonia, you also find the
 real order. In other words, if you find some portion of the digits
 of PI that seems to be following the rules of Conway's Life, then
 there is also (of course) a Platonic object that represents the
 actual computations that the digits of PI seem to be computing.
 This is, essentially, Bostrom's Unification in the context of
 Platonia. It doesn't matter whether or not accidental order in the
 digits of PI can be viewed as conscious, because either way, we know
 the real order exists in Platonia as well, and multiple
 instantiations of the same pain in Platonia wouldn't result in
 multiple pains.

 I'm uncomfortable with the philosophical vagueness of some of this. At
 the very least, I want a better handle on why Unification is correct
 and Duplication is not in the context of Platonia (or why that
 question is confused, if it is).

I'd agree with your first paragraph quoted above. It isn't possible to
introduce, eliminate or duplicate Platonic objects; they're all just
there, eternally.



-- 
Stathis Papaioannou




Re: MGA 1

2008-11-26 Thread Bruno Marchal


On 25 Nov 2008, at 20:16, Brent Meeker wrote:


 Bruno Marchal wrote:



 Brent: I don't see why the mechanist-materialists are
 logically disallowed from incorporating that kind of physical
 difference into their notion of consciousness.


 Bruno: In our setting, it means that the neuron/logic gates have  
 some form of
 prescience.

 Brent: I'm not sure I agree with that.  If consciousness is a  
 process it may be
 instantiated in physical relations (causal?).  But relations are in  
 general not
 attributes of the relata.  Distance is an abstract relation but it  
 is always
 realized as the distance between two things.  The things themselves  
 don't have
 distance.  If some neurons encode my experience of seeing a rose  
 might not
 the experience depend on the existence of roses, the evolution of  
 sight, and the
 causal chain as well as the immediate state of the neurons?


With *digital* mechanism, it would just mean that we have not chosen  
the right level of substitution. Once the level is well chosen, then  
we can no longer give a role to the implementation details. They can no  
longer be relevant, or we introduce prescience into the elementary  
components.




 Bostrom's views about fractional
 quantities of experience are a case in point.

 If that were true, why would you say yes to the doctor without
 knowing the thickness of the artificial axons?
 How can you be sure your consciousness will not half diminish when the
 doctor proposes to you the new, cheaper brain which uses thinner fibers,
 or half the number of redundant security fibers (thanks to progress
 in security software)?
 I would no longer dare to say yes to the doctor if I could lose a
 fraction of my consciousness and become a partial zombie.

 But who would say yes to the doctor if he said that he would take  
 a movie of
 your brain states and project it?  Or if he said he would just  
 destroy you in
 this universe and you would continue your experiences in other  
 branches of the
 multiverse or in platonia?  Not many I think.


I agree with you. Not many will say yes to such a doctor!  Even  
rightly so (with MEC). I think MGA 3 should make this clear.
The point is just that if we assume both MEC  *and*  MAT, then the  
movie is also conscious, but of course (well: by MGA 3) it is not  
conscious qua computatio, so that we get the (NON COMP or NON MAT)  
conclusion.

I keep COMP (as my working hypothesis, but of course I find it  
plausible for many reasons), so I abandon MAT. With comp,  
consciousness can still supervene on computations (in Platonia, or  
more concretely in the universal deployment), but not on their physical  
implementation. By UDA we have indeed the obligation now to explain  
the physical by the computational. It is the reversal I talked about.  
Somehow, consciousness does not supervene on brain activity, but brain  
activity supervenes on consciousness. In short, this is because  
consciousness is now somehow related to the whole of arithmetical  
truth, and things are not so simple.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 1 - (to B.M)

2008-11-25 Thread Bruno Marchal

John,

On 24 Nov 2008, at 00:19, John Mikes wrote:


 Bruno,
 right before my par on 'sharing a 3rd pers. opinion:

 more or less (maybe) resembling the original 'to
 be shared' one. In its (1st) 'personal' variation. (Cf: perceived
 reality).

 you included a remark not too dissimilar in essence, but with one
 word in it I want to reflect on:

 The third person part is what the first person variant is a variant  
 of.
 I don't pretend we can know it. But if we don't bet on it,  we become
 solipsist.

 Solipsist !! I don't consider it a 'dirty word'. WE ARE solipsists,


The first person is solipsist. But in science we bet on a sharable  
third person view, and in cognitive science (and in everyday life, once  
we are grown up) we bet on the possibility of other first persons, at  
least locally. I am betting right now that John Mikes has some inner  
knowledge, even though I cannot prove it.



 only
 our 1st person understanding represents the world for us, nothing
 else.
 I got that (and accepted it) from Colin, and have used the term ever
 since (see above as well): perceived reality.
 (I did not refer to that in Kim's question - sorry, Kim.)
 Our variant is a manipulated version of the portion we indeed
 received - in any way and quality - through our 'mindset': the previous
 experience we collected, the genetic makeup of reacting to ideas, the
 actual state of our psyche (Stathis could tell all that much more
 professionally...).
 Yet THAT variant is our (mini?) solipsism:
 that's what we are.


Well, that is what our first persons are.



 So we should not fight being called a solipsist.


Every soul is solipsist, in that sense. Even the universal machine  
agrees on this (cf the interview). Only eliminative materialism denies  
this.
But this is different from doctrinal solipsism, that is, from the word  
solipsist as used in philosophy. Such solipsism asserts that I am  
the only first person which exists. This could be true for the  
universal soul (S4Grz, the third hypostasis, etc.), but not for each  
of us right now, when entangled in a more probable computational  
history. I am not a solipsist just because I don't believe that you  
are a zombie.





 Without such there would be no discussion, just zombies' acceptance.


Absolutely. But this means only that, thankfully, you do believe in  
the existence of other first persons, other solipsists. This means  
you are not a doctrinal solipsist, who considers others as zombies,  
actually as non-existing at all, just fruits of his personal dream.  
I think we agree.

Best,


Bruno M


http://iridia.ulb.ac.be/~marchal/



 John M



 On 11/23/08, Bruno Marchal [EMAIL PROTECTED] wrote:

 On 23 Nov 2008, at 17:41, John Mikes wrote:


 On 11/23/08, Bruno Marchal [EMAIL PROTECTED] wrote:


  About mechanism, the optimist reasons like that. I love myself because
  I have such an interesting life with so many rich experiences. Now you
  tell me I am a machine. So I love machines, because machines *can* have
  rich experiences; indeed, I myself am an example.
  The pessimist reasons like that. I hate myself because my life is
  boringly uninteresting, without any rich experiences. Now you tell me I
  am a machine. I knew it! My own life confirms that rumor according to
  which machines are stupid automata. No meaning, no future.

 (JM): thanks Bruno, for the nice metaphor of 'machine' -


 It was the pessimist metaphor. I hope you know I am a bit more
 optimist, ... with regard to machines.



  In my vocabulary
  a machine is a model exercising a mechanism, but chacun à son goût.

 We agree on the definition.








 (JM): Bruno, in my opinion NOTHING is 'third person sharable' -  
 only a
 'thing' (from every- or no-) can give rise to develop a FIRST  
 personal
 variant of the sharing,

 The third person part is what the first person variant is a variant  
 of.
 I don't pretend we can know it. But if we don't bet on it,  we become
 solipsist.



 more or less (maybe) resembling the original 'to
 be shared' one. In its (1st) 'personal' variation. (Cf: perceived
 reality).

  Building theories helps us learn how false we can be. We have to take
  our theories seriously, make them precise and clear enough if we want
  to see the contradiction and learn from there. Oh, we can also
  contemplate, meditate, or listen to music; or use (legal) entheogens,
  why not; there are many paths, not incompatible. But reasoning up to a
  contradiction, pure or with the facts, is the way of the researcher.

 Bruno
 http://iridia.ulb.ac.be/~marchal/






Re: MGA 1

2008-11-25 Thread Bruno Marchal


On 25 Nov 2008, at 02:13, Kory Heath wrote:



 On Nov 24, 2008, at 11:01 AM, Bruno Marchal wrote:
 If your argument were not merely convincing but definitive, then I
 would not need to make MGA 3 for showing it is ridiculous to endow the
 projection of a movie of a computation with consciousness (in real
 space-time, like the physical supervenience thesis asked for).

 Ok, I think I'm following you now. You're saying that I'm failing to
 provide a definitive argument showing that it is ridiculous to endow
 the projection of a movie of a computation with consciousness. (Or, in
 my alternate thought experiment, I'm failing to provide a *definitive*
 reason why it's ridiculous to endow the playing back of the
 previously-computed block universe with consciousness.)

Yes.



 I concur -
 my arguments are convincing, but not definitive. If MGA 3 (or MGA 4,
 etc.) is definitive, or even just more convincing, so much the better.
 Please proceed!

So you agree that MGA 1 does show that Lucky Alice is conscious 
(logically).
Normally, this means the proof is finished for you (but that is indeed 
what you said before I began; everything is coherent).

About MGA 3, I feel almost a bit ashamed to explain that. To believe 
that the projection of the movie makes Alice conscious is almost like 
explaining why we should not send Roger Moore (James Bond) to jail, 
given that there are obvious movies where he clearly does not respect 
the speed limit (grin). Of course this is not an argument.

Bruno

http://iridia.ulb.ac.be/~marchal/





Re: MGA 1

2008-11-25 Thread Russell Standish

On Tue, Nov 25, 2008 at 11:55:37AM +0100, Bruno Marchal wrote:
 About MGA 3, I feel almost a bit ashamed to explain that. To believe 
 that the projection of the movie makes Alice conscious is almost like 
 explaining why we should not send Roger Moore (James Bond) to jail, 
 given that there are obvious movies where he clearly does not respect 
 the speed limit (grin). Of course this is not an argument.
 
 Bruno
 

There is a world of difference between the James Bond movie, which is
clearly not the same as the actor in flesh and blood, and the sort of
movie used in your MGA, which by definition is indistinguishable in
all important respects from the original conscious being. It is
important not to let our intuitions misguide us at this point. Brent
was effectively making the same point, about when unlikely events
become indistinguishable from impossible.

Cheers


-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au





Re: MGA 1

2008-11-25 Thread Bruno Marchal

Just to be clear on this, I obviously agree.

Best,

Bruno



Le 25-nov.-08, à 12:05, Russell Standish a écrit :


 On Tue, Nov 25, 2008 at 11:55:37AM +0100, Bruno Marchal wrote:
  About MGA 3, I feel almost a bit ashamed to explain that. To believe
  that the projection of the movie makes Alice conscious is almost like
  explaining why we should not send Roger Moore (James Bond) to jail,
  given that there are obvious movies where he clearly does not respect
  the speed limit (grin). Of course this is not an argument.

 Bruno


 There is a world of difference between the James Bond movie, which is
 clearly not the same as the actor in flesh and blood, and the sort of
 movie used in your MGA, which by definition is indistinguishable in
 all important respects from the original conscious being. It is
 important not to let our intuitions misguide us at this point. Brent
 was effectively making the same point, about when unlikely events
 become indistinguishable from impossible.

 Cheers



 

http://iridia.ulb.ac.be/~marchal/





Re: MGA 1

2008-11-25 Thread Kory Heath


On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
 So you agree that MGA 1 does show that Lucky Alice is conscious
 (logically).

I think I have a less rigorous view of the argument than you do. You  
want the argument to have the rigor of a mathematical proof. You say:  
"Let's start with the mechanist-materialist assumption that Fully- 
Functional Alice is conscious. We can replace her neurons one-by-one  
with random neurons that just happen to do what the fully-functional  
ones were going to do. By definition none of her exterior or interior  
behavior changes. Therefore, the resulting Lucky Alice must be exactly  
as conscious as Fully-Functional Alice."

To me, this argument doesn't have the full rigor of a mathematical  
proof, because it's not entirely clear what the mechanist-materialists  
really mean when they say that Fully-Functional Alice is conscious,  
and it's not clear whether or not they would agree that none of her  
exterior or interior behavior changes (in any way that's relevant).  
There *is* an objective physical difference between Fully-Functional  
Alice and Lucky Alice - it's precisely the (discoverable, physical)  
fact that her neurons are all being stimulated by cosmic rays rather  
than by each other. I don't see why the mechanist-materialists are  
logically disallowed from incorporating that kind of physical  
difference into their notion of consciousness.

Of course, in practice, Lucky Alice presents a conundrum for such  
mechanist-materialists. But it's not obvious to me that the conundrum  
is unanswerable for them, because the whole notion of consciousness  
in this context seems so vague. Bostrom's views about fractional  
quantities of experience are a case in point. He clearly takes a  
mechanist-materialist view of consciousness, and he believes that a  
grid of randomly-flipping bits cannot be conscious, no matter what it  
does. He would argue that, during Fully-Functional Alice's slide into  
Lucky Alice, her subjective quality of consciousness doesn't change,  
but her quantity of consciousness gradually reduces until it becomes  
zero. That seems weird to me, but I don't see how to logically prove  
that it's wrong. All I have are messy philosophical arguments and  
thought experiments - what Dennett calls intuition pumps.

That being said, I'm happy to proceed as if our hypothetical mechanist- 
materialists have accepted the force of your argument as a logical  
proof. "Yes," they claim, "given the assumptions of our mechanist- 
materialism, if Fully-Functional Alice is conscious, Lucky Alice must  
*necessarily* also be conscious. If the laser-graph is conscious, then  
the movie of it must *necessarily* be conscious. What's the problem?"  
(they ask). On to MGA 3.

-- Kory





Re: MGA 1

2008-11-25 Thread Bruno Marchal


On 25 Nov 2008, at 15:49, Kory Heath wrote:



 On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
 So you agree that MGA 1 does show that Lucky Alice is conscious
 (logically).

 I think I have a less rigorous view of the argument than you do. You
 want the argument to have the rigor of a mathematical proof.



Yes. But it is applied mathematics, in a difficult domain (psychology/ 
theology and the foundations of physics).

A minimum of common sense and candidness is asked for.  
The proof is rigorous in the sense that it should give anyone the feeling  
that it could be entirely formalized in some intensional mathematics,  
S4 with quantifiers, or in the modal variant of G and G*. This is  
eventually the purpose of the interview of the Löbian machine (using  
Theaetetus' epistemological definition). But this is normally not  
needed for conscious English-speaking beings with enough common sense  
and some interest in the matter.


 You say
 Let's start with the mechanist-materialist assumption that Fully-
 Functional Alice is conscious. We can replace her neurons one-by-one
 with random neurons


They are random in the sense that ALL strings are random. They are not  
random in the Kolmogorov sense, for example. MGA 2 should make this clear.



 that just happen to do what the fully-functional
 ones were going to do.


It is not random for that very reason. It is luckiness in MGA 1, and  
the record of computations in MGA 2.



 By definition none of her exterior or interior
 behavior changes.


I never use those terms in this context, except in comp jokes like  
"the brain is in the brain". It is dangerous because interior/exterior  
can refer both to in-the-skull/outside-the-skull and to objective/ 
subjective.

I just use the fact that you say "yes" to a doctor qua  
computatio (with or without MAT).



 Therefore, the resulting Lucky Alice must be exactly
 as conscious as Fully-Functional Alice.

 To me, this argument doesn't have the full rigor of a mathematical
 proof, because it's not entirely clear what the mechanist-materialists
 really mean when they say that Fully-Functional Alice is conscious,


Consciousness does not need to be defined more precisely than is  
needed for saying "yes" to the doctor qua computatio, just as a  
naturalist could say "yes" to an artificial heart.
Consciousness and (primitive) Matter don't need to be defined more  
precisely than needed to understand the physical supervenience thesis,  
despite terms like "existence of a primitive physical universe" or the  
very general term "supervenience" itself.
Could you perhaps still have a problem with the definitions or with  
the hypotheses?





 and it's not clear whether or not they would agree that none of her
 exterior or interior behavior changes (in any way that's relevant).
 There *is* an objective physical difference between Fully-Functional
 Alice and Lucky Alice - it's precisely the (discoverable, physical)
 fact that her neurons are all being stimulated by cosmic rays rather
 than by each other.



There is an objective difference between very young Alice with her  
biological brain and very young Alice the day after the digital  
graft. But taking both MEC and MAT together, you cannot use that  
difference. If you want to use that difference, you have to make changes  
to MEC and/or to MAT. You can always be confused by the reasoning in a  
way which pushes you to (re)consider MEC or MAT, and to interpret them  
more vaguely so that those changes are made possible. But then we  
learn nothing clear from the reasoning. We learn if we do the same,  
but precisely.





 I don't see why the mechanist-materialists are
 logically disallowed from incorporating that kind of physical
 difference into their notion of consciousness.


In our setting, it means that the neuron/logic gates have some form of  
prescience.




 Of course, in practice, Lucky Alice presents a conundrum for such
 mechanist-materialists. But it's not obvious to me that the conundrum
 is unanswerable for them, because the whole notion of consciousness
 in this context seems so vague.

No, what could be vague is the idea of linking consciousness with  
matter, but that is the point of the reasoning. If we keep comp, we  
have to (re)define the general notion of matter.



 Bostrom's views about fractional
 quantities of experience are a case in point.

If that were true, why would you say "yes" to the doctor without  
knowing the thickness of the artificial axons?
How can you be sure your consciousness will not half diminish when the  
doctor proposes to you the new, cheaper brain which uses thinner fibers,  
or half the number of redundant security fibers (thanks to progress  
in security software)?
I would no longer dare to say "yes" to the doctor if I could lose a  
fraction of my consciousness and become a partial zombie.


 He clearly takes a
 mechanist-materialist view of consciousness,


Many believe in naturalism. At least, his move shows that he is aware

Re: MGA 1

2008-11-25 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 25 Nov 2008, at 15:49, Kory Heath wrote:
 

 On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
 So you agree that MGA 1 does show that Lucky Alice is conscious
 (logically).
 I think I have a less rigorous view of the argument than you do. You
 want the argument to have the rigor of a mathematical proof.
 
 
 
 Yes. But it is applied mathematics, in a difficult domain (psychology/ 
 theology and foundation of physics).
 
 There is a minimum of common sense and candidness which is asked for.  
 The proof is rigorous in the way it should give to anyone the feeling  
 that it could be entirely formalized in some intensional mathematics,  
 S4 with quantifiers, or in the modal variant of G and G*. This is  
 eventually the purpose of the interview of the lobian machine (using  
 Theaetetus epistemological definition). But this is normally not  
 needed for conscious english speaking being with enough common sense  
 and some interest in the matter.
 
 
 You say
 Let's start with the mechanist-materialist assumption that Fully-
 Functional Alice is conscious. We can replace her neurons one-by-one
 with random neurons
 
 
 They are random in the sense that ALL strings are random. They are not  
 random in Kolmogorov sense for example. MGA 2 should make this clear.
 
 
 
 that just happen to do what the fully-functional
 ones were going to do.
 
 
 It is not random for that very reason. It is luckiness in MGA 1, and  
 the record of computations in MGA 2.
 
 
 
 By definition none of her exterior or interior
 behavior changes.
 
 
 I never use those terms in this context, except in comp jokes like  
 the brain is in the brain. It is dangerous because interior/exterior  
 can refer both to the in-the skull/outside-the-skull,  and objective/ 
 subjective.
 
 I just use the fact that you say yes to a doctor qua  
 computatio (with or without MAT).
 
 
 
 Therefore, the resulting Lucky Alice must be exactly
 as conscious as Fully-Functional Alice.

 To me, this argument doesn't have the full rigor of a mathematical
 proof, because it's not entirely clear what the mechanist-materialists
 really mean when they say that Fully-Functional Alice is conscious,
 
 
 Consciousness does not need to be defined more precisely than it is  
 needed for saying yes to the doctor qua computatio, like a  
 naturalist could say yes for an artificial heart.
 Consciousness and (primitive) Matter don't need to be defined more  
 precisely than needed to understand the physical supervenience thesis.
 Despite term like existence of a primitive physical universe or the  
 very general supervenience term itself.
 You could have perhaps still a problem with the definitions or with  
 the hypotheses?
 
 
 
 
 and it's not clear whether or not they would agree that none of her
 exterior or interior behavior changes (in any way that's relevant).
 There *is* an objective physical difference between Fully-Functional
 Alice and Lucky Alice - it's precisely the (discoverable, physical)
 fact that her neurons are all being stimulated by cosmic rays rather
 than by each other.
 
 
 
 There is an objective difference between very young Alice with her  
 biological brain and very young Alice the day after the digital  
 graft. But taking both MEC and MAT together, you cannot use that  
 difference. If you want use that difference, you have to make change  
 to MEC and/or to MAT. You can always be confused by the reasoning in a  
 way which pushes you to (re)consider MEC or MAT, and to interpret them  
 more vaguely so that those changes are made possible. But then we  
 learn nothing clear from the reasoning. We learn if we do the same,  
 but precisely.
 
 
 
 
 
 I don't see why the mechanist-materialists are
 logically disallowed from incorporating that kind of physical
 difference into their notion of consciousness.
 
 
 In our setting, it means that the neuron/logic gates have some form of  
 prescience.

I'm not sure I agree with that.  If consciousness is a process it may be 
instantiated in physical relations (causal?).  But relations are in general not 
attributes of the relata.  Distance is an abstract relation, but it is always 
realized as the distance between two things.  The things themselves don't have 
distance.  If some neurons encode my experience of "seeing a rose", might not 
the experience depend on the existence of roses, the evolution of sight, and the 
causal chain, as well as the immediate state of the neurons?

 
 

 Of course, in practice, Lucky Alice presents a conundrum for such
 mechanist-materialists. But it's not obvious to me that the conundrum
 is unanswerable for them, because the whole notion of consciousness
 in this context seems so vague.
 
 No, what could be vague is the idea of linking consciousness with  
 matter, but that is the point of the reasoning. If we keep comp, we  
 have to (re)define the general notion of matter.
 
 
 
 Bostrom's views about fractional
 quantities of experience

Re: MGA 1

2008-11-25 Thread Russell Standish

On Tue, Nov 25, 2008 at 11:16:55AM -0800, Brent Meeker wrote:
 
 But who would say yes to the doctor if he said that he would take a movie 
 of 
 your brain states and project it?  Or if he said he would just destroy you in 
 this universe and you would continue your experiences in other branches of 
 the 
 multiverse or in platonia?  Not many I think.
 
 Brent
 

Then perhaps nobody has sufficient faith in COMP!

Interestingly, I pointed out an inherent contradiction in the "Yes,
doctor" postulate a while back, to which I gather you're still thinking
of a response, Bruno. Let's call it the Standish wager, after Pascal's
wager about belief in God.

If YD is true, then you must also accept the consequences, namely
COMP-immortality. In which case you may as well say no to the
doctor, as COMP-immortality guarantees that you will survive the terminal
brain disease that brought you to the doctor in the first place.

Of course, in reality, it may be a very different choice being
presented. Perhaps Vinge's Singularity happens, and one is given the
choice between uploading into the hive mind, or being put to death on
the spot to conserve resources. Or more modestly, one is being given a
choice of whether to have a direct internet connection implanted in
your skull. In each of these cases, one should make the choice based
on whether the new configuration offers a better life over your
existing one, or not. Survival prospects really shouldn't enter into
it. In the event YD is false, you will then not be any worse off than
you were before.

BTW - I watched The Prestige on the weekend. Good recommendation,
Bruno! My wife enjoyed it greatly too, and wants to watch it again
sometime. I can't get her to read my book, though :(

Cheers

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au





Re: MGA 1

2008-11-25 Thread Kory Heath


On Nov 25, 2008, at 10:00 AM, Bruno Marchal wrote:
 You could have perhaps still a problem with the definitions or with
 the hypotheses?

I think I haven't always been clear on our definitions of mechanism  
and materialism. But I can understand and accept definitions of those  
terms under which MGA 1 shows that "it's logically necessary that Lucky  
Alice is conscious", and MGA 2 shows that "it's logically necessary that  
the projection of the movie makes Alice conscious" (your words from a  
previous email). I think we can proceed with that.

But can you clarify exactly what MECH+MAT is supposed to be saying  
about the movie? Does MECH+MAT say that something special is happening  
when we project the movie, or is the simple existence of the movie  
enough?

-- Kory





Re: MGA 1

2008-11-24 Thread Kory Heath


On Nov 23, 2008, at 4:18 AM, Bruno Marchal wrote:
 Let us consider your lucky teleportation case, where someone uses a
 teleporter which fails badly. So it just annihilates the original
 person, but then, by an incredible luck, the person is reconstructed
 with his right state after. If you ask him how he knows how to tie his
 shoes, and the person answers, after that bad but lucky
 teleportation, "because I learned it in my youth": he is correct.
 He is correct for the same reason Alice's answers to her exams were
 correct, even if luckily so.

I think it's (subtly) incorrect to focus on Lucky Alice's *answers* to  
her exams. By definition, she wrote down the correct answers. But (I  
claim) she didn't compute those answers. A bunch of cosmic rays just  
made her look like she did. Fully-Functional Alice, on the other hand,  
actually did compute the answers.

Let's imagine that someone interrupts Fully-Functional Alice while  
she's taking the exam and asks her, "Do you think that your actions  
right now are being caused primarily by a very unlikely sequence of  
cosmic rays?", and she answers "No". She is answering correctly. By  
definition, Lucky Alice will answer the same way. But she will be  
answering incorrectly. That is the sense in which I'm saying that  
Lucky Kory is making a false statement when he says "I learned to tie  
my shoes in my youth."

In the case of Lucky Kory, I concur that, despite this difference, his  
subjective consciousness is identical to what Kory's would have been  
if the teleportation was successful. But the reason I can view Lucky  
Kory as conscious at all is that once the lucky accident creates him  
out of whole cloth, his neurons are firing correctly, are causally  
connected to each other in the requisite ways, etc. I have a harder  
time understanding how Lucky Alice can be conscious, because at the  
time I'm supposed to be viewing her as conscious, she isn't meeting  
the causal / computational pre-requisites that I thought were  
necessary for consciousness. And I can essentially turn Lucky Kory  
into Lucky Alice by imagining that he is nothing but a series of  
lucky teleportations. And then suddenly I don't see how he can be  
conscious, either.

 Suppose I send you a copy of my SANE paper by the internet, and that
 the internet demolishes it completely, but that by an incredible
 chance your buggy computer rebuilds it in its exact original form. This
 will not change the content of the paper, and the paper will be
 correct or false independently of the way it has flown from me to
 you.

That's because the SANE paper doesn't happen to talk about its own  
causal history. Imagine that I take a pencil and a sheet of paper and  
write the following on it:

"The patterns of markings on this paper were caused by Kory Heath. Of  
course, that doesn't mean that the molecules in this piece of paper  
touched the hands of Kory Heath. Maybe the paper has been teleported  
since Kory wrote it, and reconstructed out of totally different  
molecules. But there is an unbroken causal chain from these markings  
back to something Kory once did."

If you teleport that paper normally, the statement on it remains true.  
If the teleportation fails, but a lucky accident creates an identical  
piece of paper, the statement on it is false. Maybe this has no  
bearing on consciousness or anything else, but I don't want to forget  
about the distinction until I'm sure it's not relevant.

 Of course, the movie
 has still some relationship with the original consciousness of Alice,
 and this will help us to save the MEC part of the physical
 supervenience thesis, giving rise to the notion of computational
 supervenience, but this form of supervenience does no more refer to
 anything *primarily* physical, and this will be enough preventing the
 use of a concrete universe for blocking the UDA conclusion.

I see what you mean. But for me, these thought experiments are making  
me doubt that I even have a coherent notion of computational  
supervenience.

-- Kory





Re: MGA 1

2008-11-24 Thread Bruno Marchal


On 24 Nov 2008, at 18:08, Kory Heath wrote:




 I see what you mean. But for me, these thought experiments are making
 me doubt that I even have a coherent notion of computational
 supervenience.



You are not supposed to have a coherent idea of what is computational  
supervenience. This belongs to the conclusion of the reasoning, and  
this will need elaboration on what is a computation. This is not so  
hard with ... computer science.

To understand that MEC+MAT is contradictory, you have only to  
understand them well enough so as to get up to the point where the  
contradiction occurs. You give us many quite good arguments for saying  
that Lucky Alice, and even Lucky Kory, are not conscious. I do agree,  
mainly, with those arguments.

So let me be clear: your arguments that, assuming MEC+MAT, Lucky Alice  
is not conscious, are almost correct, and very convincing. And so, of  
course, Lucky Alice is not conscious.

Now, MGA 1 is an argument showing that MEC+MAT, due to the physical  
supervenience thesis and the non-prescience of the neurons, entails  
that Lucky Alice is conscious. The question is: do you see this too?

If you see this, we have:

MEC+MAT entails Lucky Alice is not conscious (by your correct argument)
MEC+MAT entails Lucky Alice is conscious (by MGA 1)

Thus MEC+MAT entails (Lucky Alice is conscious AND Lucky Alice is not  
conscious), that is, MEC+MAT entails false, a contradiction.
And that is the point.
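
In propositional shorthand, writing M for the conjunction MEC $\land$ MAT and
C for "Lucky Alice is conscious" (this is just notation for the two entailments
above), the step is a plain reductio:

\[
\{\, M \to C,\; M \to \lnot C \,\} \;\vdash\; \lnot M ,
\]

via the intermediate M $\to$ (C $\land$ $\lnot$C). The reductio refutes only the
conjunction: one can keep MEC by abandoning MAT (as in the conclusion above),
or keep MAT by abandoning MEC.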

If your argument were not merely convincing but definitive, then I  
would not need to make MGA 3 for showing it is ridiculous to endow the  
projection of a movie of a computation with consciousness (in real  
space-time, like the physical supervenience thesis asked for).

OK?


Bruno





 -- Kory


 

http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-24 Thread Kory Heath


On Nov 22, 2008, at 6:24 PM, Stathis Papaioannou wrote:
 Similarly, whenever we
 interact with a computation, it must be realised on a physical
 computer, such as a human brain. But there is also the abstract
 computation, a Platonic object. It seems that consciousness, like
 threeness, may be a property of the Platonic object, and not of its
 physical realisation. This allows resolution of the apparent paradoxes
 we have been discussing.

For reasons that are (mostly) independent of all of these thought  
experiments, I suspect that there's something deeply correct about the  
idea that an abstract computation can be the substrate for  
consciousness. Or at least, I think there's something deeply correct  
about replacing the idea of physical existence with mathematical  
facts-of-the-matter. This immediately eliminates weird questions like  
"why is there something instead of nothing?", which seem unanswerable  
in the context of the normal view of physical existence.

But what I'm realizing is that I still don't have a clear conception  
of how consciousness is supposed to relate to these Platonic  
computations. (Or maybe I don't have a clear enough picture of what  
counts as a Platonic computation.) In a way, it feels to me as  
though I still have partial zombie problems, even in Platonia.

Let's imagine a block universe in Platonia - a 3D block of cells  
filled (in some order that we specify) with the binary digits of PI.  
Somewhere within this block, there are (I think) regions which look  
as if they're following the rules of Conway's Life, and some of  
those regions contain creatures that look as if they're conscious.  
Are they actually conscious? The move away from physical existence  
to mathematical existence (what I've called "mathematical  
physicalism") doesn't immediately help me answer this question.

The answer I *used* to give was that it doesn't matter, because no  
matter what accidental order you find in Platonia, you also find the  
real order. In other words, if you find some portion of the digits  
of PI that seems to be following the rules of Conway's Life, then  
there is also (of course) a Platonic object that represents the  
actual computations that the digits of PI seem to be computing.  
This is, essentially, Bostrom's Unification in the context of  
Platonia. It doesn't matter whether or not accidental order in the  
digits of PI can be viewed as conscious, because either way, we know  
the real order exists in Platonia as well, and multiple  
instantiations of the same pain in Platonia wouldn't result in  
multiple pains.
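
Purely as a toy illustration of what "accidental order" would amount to
here (a sketch under arbitrary assumptions: the decimal-parity encoding,
the 16x16 grid size, and the zero offset into PI are just choices made
for the example), one can check whether a given stretch of PI's digits
happens to realize a single step of Conway's Life:

from mpmath import mp, nstr

SIZE = 16
CELLS = SIZE * SIZE

def life_step(grid):
    """One synchronous Conway's Life update; cells off the edge count as dead."""
    nxt = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            n = sum(grid[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc)
                    and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE)
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt

def pi_bits(count, offset=0):
    """Bits taken from the parity of PI's decimal digits after the point."""
    mp.dps = offset + count + 10                    # enough working precision
    frac = nstr(mp.pi, offset + count + 5).replace('3.', '', 1)
    return [int(d) % 2 for d in frac[offset:offset + count]]

bits = pi_bits(2 * CELLS)
grid_a = [bits[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]
grid_b = [bits[CELLS + r * SIZE:CELLS + (r + 1) * SIZE] for r in range(SIZE)]

# "Accidental order" would mean grid_b just happens to equal life_step(grid_a);
# if the digits behave like fair coin flips, this has probability about 2**-256.
print("grid_b is one Life step of grid_a:", grid_b == life_step(grid_a))

For any fixed choice the test will almost surely print False; the point is
only that "accidental order" is a perfectly checkable property of the digit
string, while the "real order" is the Life computation itself, which exists
in Platonia whether or not any stretch of digits happens to mimic it.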

I'm uncomfortable with the philosophical vagueness of some of this. At  
the very least, I want a better handle on why Unification is correct  
and Duplication is not in the context of Platonia (or why that  
question is confused, if it is).

-- Kory





Re: MGA 1

2008-11-24 Thread Kory Heath


On Nov 24, 2008, at 11:01 AM, Bruno Marchal wrote:
 If your argument were not merely convincing but definitive, then I
 would not need to make MGA 3 for showing it is ridiculous to endow the
 projection of a movie of a computation with consciousness (in real
 space-time, like the physical supervenience thesis asked for).

Ok, I think I'm following you now. You're saying that I'm failing to  
provide a definitive argument showing that it is ridiculous to endow  
the projection of a movie of a computation with consciousness. (Or, in  
my alternate thought experiment, I'm failing to provide a *definitive*  
reason why it's ridiculous to endow the playing back of the  
previously-computed block universe with consciousness.) I concur -  
my arguments are convincing, but not definitive. If MGA 3 (or MGA 4,  
etc.) is definitive, or even just more convincing, so much the better.  
Please proceed!

-- Kory





Re: MGA 1

2008-11-23 Thread Bruno Marchal


On 21 Nov 2008, at 10:45, Kory Heath wrote:

 However, the materialist-mechanist still has some grounds to say that
 there's something interestingly different about Lucky Kory than
 Original Kory. It is a physical fact of the matter that Lucky Kory is
 not causally connected to Pre-Teleportation Kory. When someone asks
 Lucky Kory, Why do you tie your shoes that way?, and Lucky Kory
 says, Because of something I learned when I was ten years old, Lucky
 Kory's statement is quite literally false. Lucky Kory ties his shoes
 that way because of some cosmic rays. I actually don't know what the
 standard mechanist-materialist way of viewing this situation is. But
 it does seem to suggest that maybe breaks in the causal chain
 shouldn't affect consciousness after all.


You are right, at least when, for the sake of the argument, we  
continue to keep MEC and MAT, if only to single out the contradiction  
as transparently as possible.
Let us consider your lucky teleportation case, where someone uses a  
teleporter which fails badly: it simply annihilates the original  
person, but then, by incredible luck, the person is reconstructed  
afterwards in exactly the right state. If you ask him, after that bad  
but lucky teleportation, how he knows how to tie his shoes, and the  
person answers "because I learned it in my youth", he is correct.
He is correct for the same reason that Alice's answers to her exam were  
correct, even if luckily so.
Suppose I send you a copy of my SANE paper over the internet, the  
internet demolishes it completely, but by an incredible chance your  
buggy computer rebuilds it in its exact original form. This does not  
change the content of the paper, and the paper will be correct or false  
independently of the way it has travelled from me to you.
In the bad-lucky teleporter case, even with MAT (and MEC), it is still  
the right person who survived, with the correct representation of her  
right memories, and so on. Even if just luckily so.
MGA 2 then shows that the random appearance of the lucky event was a  
red herring, so that we have to admit that consciousness supervenes on  
the movie graph (the movie of the running of the boolean optical  
computer).

Of course I don't believe that consciousness supervenes on the physical  
activity of such a movie, but this means that I have to abandon  
physical supervenience altogether. I will read the other posts. I think  
many have understood and have already concluded. But from a strict  
logical point of view, perhaps some are willing to defend the idea  
that the movie-graph is conscious, and, in that case, I will present  
MGA 3, which is supposed to show that, well, a movie cannot think,  
through MEC (there is just no computation there). Of course, the movie  
still has some relationship with the original consciousness of Alice,  
and this will help us to save the MEC part of the physical  
supervenience thesis, giving rise to the notion of computational  
supervenience; but this form of supervenience no longer refers to  
anything *primarily* physical, and this will be enough to prevent the  
use of a concrete universe to block the UDA conclusion.

Bruno




http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-23 Thread Bruno Marchal

On 20 Nov 2008, at 21:27, Jason Resch wrote:



 On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED]  
 wrote:



  The state machine that would represent her in the case of  
 injection of random noise is a different state machine than the one  
 that would represent her normally functioning brain.


 Absolutely so.



 Bruno,

 What about the state machine that included the injection of lucky  
 noise from an outside source vs. one in which all information was  
 derived internally from the operation of the state machine itself?

At which times? How? Did MGA 2 clarify this?




 Would those two differently defined machines not differ and compute  
 something different?  Even though the computations are identical the  
 information that is being computed comes from different sources and  
 so carries with it a different connotation.

But the supervenience principle and the non-prescience of the neurons  
make it impossible for the machine to feel such connotations.



 Though the bits injected are identical, they inherently imply a  
 different meaning because the state machine in the case of injection  
 has a different structure than that of her normally operating  
 brain.  I believe the brain can be abstracted as a computer/ 
 information processing system, but it is not simply the computations  
 and the inputs into the logic gates at each step that are important,  
 but also the source of the input bits, otherwise the computation  
 isn't the same.

If the source differs below the substitution level, the machine cannot  
be aware of it. If she were, it would mean we had been wrong about the  
choice of the substitution level. OK? We can come back to this.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 1 bis (exercise)

2008-11-23 Thread Bruno Marchal


On 20 Nov 2008, at 19:38, Brent Meeker wrote:


  Talk about consciousness will seem as quaint
 as talk about the elan vital does now.


Then you are led to eliminativism about consciousness. This makes MEC+MAT  
trivially coherent. The price is big: consciousness no longer exists,  
like the elan vital. MEC becomes vacuously true: I say yes to  
the doctor, without even meaning it. But it seems to me that  
consciousness is not like the elan vital. I do have the, admittedly  
non-sharable, experience of consciousness all the time, so it seems to  
me that such a move consists in negating the data. If the idea of  
keeping the notion of primitive matter, which I recall is really a  
hypothesis, is so demanding that I have to abandon the idea that I am  
conscious, I will abandon the hypothetical notion of primitive matter  
instead.
But you make my point.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-23 Thread Bruno Marchal

On 20 Nov 2008, at 21:40, Gordon Tsai wrote:

 Bruno:
I think you and John touched the fundamental issues of human  
 rational. It's a dilemma encountered by phenomenology. Now I have a  
 question: In theory we can't distinguish ourselves from a Lobian  
 Machine.


Note that in the math part (Arithmetical UDA), I consider only  
*sound* Lobian machines. Sound means that they are never wrong  
(talking about numbers). Now, no sound Lobian machine can know that she  
is sound, and I am not yet sure I will find an interesting notion of  
lobianity for unsound machines; and a sound Lobian machine can easily  
become unsound, especially when she begins to confuse deductive  
inference and inductive inference. We just cannot know whether we are  
(sound) Lobian machines.
It is more something we should hope for ...




 But can lobian machines truly have sufficient rich experiences like  
 human?



You know, Mechanism is a bit like the half bottle of wine. The  
optimist thinks that the bottle is still half full, and the pessimist  
thinks that the bottle is already half empty.
About mechanism, the optimist reasons like this: I love myself because  
I have such an interesting life with so many rich experiences. Now you  
tell me I am a machine. So I love machines, because machines *can* have  
rich experiences; indeed, I myself am an example.
The pessimist reasons like this: I hate myself because my life is  
boringly uninteresting, without any rich experiences. Now you tell me I  
am a machine. I knew it! My own life confirms the rumor according to  
which machines are stupid automata. No meaning, no future.




 For example, is it possible for a lobian machine to still its mind'  
 or cease the computational logic like some eastern philosophy  
 suggested? Maybe any of the out-of-loop experience is still part of  
 the computation/logic, just as our out-of-body experiences are  
 actually the trick of brain chemicals?





The bad news is that the singular point is, imo, behind us. The  
universal machine you bought has been clever, but this has been  
shadowed by your downloading of so much special-purpose software.  
And then she needs to be in a body so that you can use it, as if it  
were a sort of slave, to send me a mail. It will take time for them  
too. And once a universal machine has a body or a relative  
representation,  the first person and the third person get rich and  
complex, but possibly confused. Its soul falls, Plotin would say. She  
can get hallucinated and all that.

With comp, to be very short and a bit provocative, the notion of out-of- 
body experience makes no sense at all, because we don't have a body to  
go out of in the first place. Your body is in your head, if I can say.

This is at least a *consequence* of the assumption of mechanism, and  
I'm afraid you have to understand that by yourself, a bit like a  
theorem in math. But it is third-person sharable, for example through  
the UDA, I think. It leads, I guess, to a different view of Reality  
(different from the usual theology of Aristotle, but not different from  
Plato's theology, roughly speaking).

You can ask any question, but my favorite ones are the naive questions :)

Bruno Marchal


http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-23 Thread Bruno Marchal


On 22 Nov 2008, at 11:06, Stathis Papaioannou wrote:

 Yes, there must be a problem with the assumptions. The only assumption
 that I see we could eliminate, painful though it might be for those of
 a scientific bent, is the idea that consciousness supervenes on
 physical activity. Q.E.D.


Logically you could also abandon MEC, but I guess you think, as I tend  
to think myself, that this could be even more painful for those of a  
scientific bent.
In the long run physicists could be very happy that their foundations  
rely on number relations (albeit statistical ones).

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-23 Thread Bruno Marchal


On 23 Nov 2008, at 03:24, Stathis Papaioannou wrote:


 2008/11/23 Kory Heath [EMAIL PROTECTED]:


 On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote:
 Yes, there must be a problem with the assumptions. The only  
 assumption
 that I see we could eliminate, painful though it might be for  
 those of
 a scientific bent, is the idea that consciousness supervenes on
 physical activity. Q.E.D.

 Right. But the problem is that that conclusion doesn't tell me how to
 deal with the (equally persuasive) arguments that convince me there's
 something deeply correct about viewing consciousness in computational
 terms, and viewing computation in physical terms. So I'm really just
 left with a dilemma. As I've hinted earlier, I suspect that there's
 something wrong with the idea of physical matter and related ideas
 like causality, probability, etc. But that's pretty vague.

 We could say there are two aspects to mathematical objects, a physical
 aspect and a non-physical aspect. Whenever we interact with the number
 three it must be realised, say in the form of three objects. But
 there is also an abstract three, with threeness properties, that lives
 in Platonia independently of any realisation. Similarly, whenever we
 interact with a computation, it must be realised on a physical
 computer, such as a human brain. But there is also the abstract
 computation, a Platonic object. It seems that consciousness, like
 threeness, may be a property of the Platonic object, and not of its
 physical realisation. This allows resolution of the apparent paradoxes
 we have been discussing.

I agree with you. It resolves the conceptual problems about mind and  
matter, but it forces us to redefine matter in terms of how consciousness  
differentiates in Platonia (this comes from MGA + UDA(1..7)). Comp  
really reduces the mind-body problem to the body problem: it remains to  
show that we don't have too many white rabbits. But that problem is now  
a pure problem in computer science.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-23 Thread John Mikes

On 11/22/08, Brent Meeker [EMAIL PROTECTED] wrote:

 John Mikes wrote:
 Brent,
 did your dog communicate to you (in dogese, of course) that she has - NO -
 INNER NARRATIVE? or you are just ignorant to perceive such?
 (Of course do not expect such at the complexity level of your 11b neurons)
 John M

 Of course not.  It's my inference from the fact that my dog has no outer
 narrative.  Have you read Julian Jaynes The Origin of Consciousness in the
 Breakdown of the Bicameral Mind?  He argues, persuasively in my opinion,
 that our inner narrative arises from internalizing our outer narrative, i.e.
 spoken communication with other people.

 Brent

Brent, I appreciate your 'consenting' reply <G> - however -
yes, I read (long ago) J. Jaynes and appreciated MOST of his ideas, but I do
not accept him as a substitute (verbal) opinion in our presently ongoing
discussion. We may have ideas generated after (in spite of?) J.J.
Yet - in your reply - the spoken communication with other people
refers in the present topic to communication in 'dogese' (with other
dogs?), so your argument is still in limbo.
Just for the fun of it

John Mikes


 





Re: MGA 1

2008-11-23 Thread John Mikes

On 11/23/08, Bruno Marchal [EMAIL PROTECTED] wrote:

 On 20 Nov 2008, at 21:40, Gordon Tsai wrote:

 Bruno:
I think you and John touched the fundamental issues of human
 rational. It's a dilemma encountered by phenomenology. Now I have a
 question: In theory we can't distinguish ourselves from a Lobian
 Machine.

(JM): Dear Gordon, thanks for your consent. My reply is shorter than
Bruno's (Indeed professional - long - one): If we say so:
 'We' created a machine as we wish and if we created it 'that way',
we cannot distinguish ourselves from it.

(Bruno):
 Note that in the math part (Arithmetical UDA), I consider only
 *Sound* Lobian machine. Sound means hat they are never wrong
 (talking about numbers). Now no sound Lobian machine can know that she
 is sound, and I am not yet sure I will find an interesting notion of
 lobianity for unsound machines, and sound Lobian Machine can easily
 get unsound, especially when they begin to confuse deductive inference
 and inductive inference. We just cannot know if we are (sound) Lobian
 Machine.
 It is more something we should hope for ...

 But can lobian machines truly have sufficient rich experiences like
 human?

 You know, Mechanism is a bit like the half bottle of wine. The
 optimist thinks that the bottle is yet half full, and the pessimist
 thinks that the bottles is already half-empty.
 About mechanism, the optimist reasons like that. I love myself because
 I have a so interesting life with so many rich experiences. Now you
 tell me I am a machine. So I love machine because machine *can* have
 rich experiences, indeed, myself is an example.
 The pessimist reasons like that. I hate myself because my life is
 boringly uninteresting without any rich experiences. Now you tell me I
 am a machine. I knew it! My own life confirms that rumor according to
 which machine are stupid automata. No meaning no future.

(JM): Thanks Bruno, for the nice metaphor of 'machine' - In my vocabulary
a machine is a model exercising a mechanism, but chacun à son goût.
With 'mechanism' I feel differently: I like to expand it into something like
'anything (process) that gets something entailed', without restrictions. But
again, I do not propose this for universal acceptance.

 For example, is it possible for a lobian machine to still its mind'
 or cease the computational logic like some eastern philosophy
 suggested? Maybe any of the out-of-loop experience is still part of
 the computation/logic, just as our out-of-body experiences are
 actually the trick of brain chemicals?

 The bad news is that the singular point is, imo, behind us. The
 universal machine you bought has been clever, but this has been
 shadowed by your downloadling on so many particular purposes software.
 And then she need to be in a body so that you can use it, as a if it
 was a sort of slave, to send me a mail. It will take time for them
 too. And once a universal machine has a body or a relative
 representation,  the first person and the third person get rich and
 complex, but possibly confused. Its soul falls, would say Plotin. She
 can get hallucinated and all that.

 With comp, to be very short and bit provocative, the notion of out-of-
 body experience makes no sense at all because we don't have a body to
 go out of it, at the start. Your body is in your head, if I can say.

 This is at least a *consequence* of the assumption of mechanism, and
 I'm afraid you have to understand that by yourself, a bit like a
 theorem in math. But it is third person sharable, for example by UDA,
 I think. it leads I guess to a different view on Reality (different
 from the usual Theology of Aristotle, but not different from Plato
 Theology, roughly speaking).
(JM): Bruno, in my opinion NOTHING is 'third person sharable' - only a
'thing' (from every- or no-) can give rise to develop a FIRST personal
variant of the sharing, more or less (maybe) resembling the original 'to
be shared' one. In its (1st) 'personal' variation. (Cf: perceived reality).


 You can ask any question, but my favorite one are the naive question :)

 Bruno Marchal


 http://iridia.ulb.ac.be/~marchal/

(JM): John Mikes


 





Re: MGA 1

2008-11-23 Thread Bruno Marchal

On 23 Nov 2008, at 17:41, John Mikes wrote:


 On 11/23/08, Bruno Marchal [EMAIL PROTECTED] wrote:


 About mechanism, the optimist reasons like that. I love myself  
 because
 I have a so interesting life with so many rich experiences. Now you
 tell me I am a machine. So I love machine because machine *can* have
 rich experiences, indeed, myself is an example.
 The pessimist reasons like that. I hate myself because my life is
 boringly uninteresting without any rich experiences. Now you tell  
 me I
 am a machine. I knew it! My own life confirms that rumor according to
 which machine are stupid automata. No meaning no future.

 (JM): thanks Bruno, for the nice metaphor of 'machine' -


It was the pessimist's metaphor. I hope you know I am a bit more  
optimistic, ... with regard to machines.



 In my vocabulary
 a machine is a model exercising a mechanism, but chacun à son goût.

We agree on the definition.








 (JM): Bruno, in my opinion NOTHING is 'third person sharable' - only a
 'thing' (from every- or no-) can give rise to develop a FIRST personal
 variant of the sharing,

The third person part is what the first person variant is a variant of.
I don't pretend we can know it. But if we don't bet on it, we become  
solipsists.



 more or less (maybe) resembling the original 'to
 be shared' one. In its (1st) 'personal' variation. (Cf: perceived  
 reality).

Building theories helps us learn how wrong we can be. We have to take  
our theories seriously, and make them precise and clear enough, if we  
want to see the contradictions and learn from them. Oh, we can also  
contemplate, meditate, or listen to music; or use (legal) entheogens,  
why not - there are many paths, and they are not incompatible. But  
reasoning up to a contradiction, pure or with the facts, is the way of  
the researcher.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-22 Thread Kory Heath


On Nov 21, 2008, at 6:53 PM, Jason Resch wrote:
 What about a case when only some of Alice's neurons have ceased  
 normal function and became dependent on the lucky rays?

Yes, those are exactly the cases that are highlighting the problem.  
(For me. For Bruno, Lucky Alice is still conscious. But he has the  
analogous problem when we remove half of the neurons from Lucky  
Alice's head.)

 I'm beginning to see how truly frustrating the MGA argument is: If  
 all her neurons break and are luckily fixed I believe she is a  
 zombie, if only one of her neurons fails but we correct it, I don't  
 think this would affect her consciousness in any perceptible way,  
 but cases where some part of her brain needs to be corrected are  
 quite strange, and almost maddeningly so.

I agree.

 I think you are right in that the split brain cases are very  
 different, but I think the similarity is that part of Alice's  
 consciousness would disappear, though the lucky effects ensure she  
 acts as if no change had occurred.

The tough part is that it's not just that she outwardly acts as if no  
change had occurred. It's that, if the mechanistic view of  
consciousness is correct, her subjective experience can't change,  
either - at least, not in any noticeable way. If it did, she would  
notice it and (probably) say something about it. And that can't  
happen, because the act of noticing something or saying something  
requires her neurons and her mouth to do something different.

The conclusion seems to be that, if mechanism is true, it's possible  
for any part of my brain, or all of it, to disappear without changing  
my conscious experience. That suggests a conceptual problem somewhere.

-- Kory





Re: MGA 1

2008-11-22 Thread Stathis Papaioannou

2008/11/22 Kory Heath [EMAIL PROTECTED]:

 If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
 then there are partial zombies halfway between them. Like you, I can't
 make any sense of these partial zombies. But I also can't make any
 sense of the idea that Empty-Headed Alice is conscious. Therefore, I
 don't think this argument shows that Empty-Headed Alice (and by
 extension, Lucky Alice) must be conscious. I think it shows that
 there's a deeper problem - probably with one of our assumptions.

Yes, there must be a problem with the assumptions. The only assumption
that I see we could eliminate, painful though it might be for those of
a scientific bent, is the idea that consciousness supervenes on
physical activity. Q.E.D.

 Even though I actually think that mechanist-materialists should view
 both Lucky Alice and Empty-Headed Alice as not conscious, I still
 think they have to deal with this problem. They have to deal with the
 spectrum of intermediate states between Fully-Functional Alice and
 Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)



-- 
Stathis Papaioannou




Re: MGA 1

2008-11-22 Thread Stathis Papaioannou

2008/11/22 Jason Resch [EMAIL PROTECTED]:

 What you described sounds very similar to a split brain patient I saw on a
 documentary.  He was able to respond to images presented to one eye, and
 ended up drawing them with a hand controlled by the other hemisphere, yet he
 had no idea why he drew that image when asked.  The problem may not be that
 he isn't experiencing the visualization, but that the part of his brain that
 is responsible for speech is disconnected from the part of his brain that
 can see.
 See: http://www.youtube.com/watch?v=ZMLzP1VCANo

This differs from the Lucky Alice example in that the split brain
patient notices that something is wrong, for the reason you give:
speech and vision are processed in different hemispheres. Another
interesting neurological example to consider is Anton's Syndrome, a
condition where people with lesions in their occipital cortex
rendering them blind don't seem to notice that they're blind. They
confabulate when they are asked to describe something put in front of
them and make up excuses when they walk into things. One can imagine a
kind of zombie vision if one of these patients were supplied with an
electronic device that sends them messages about their environment:
they would behave as if they can see as well as believe that they can
see, even though they lack any visual experiences. It should be noted,
however, that Anton's syndrome is a specific organic delusional
disorder, where a patient's cognition is affected in addition to the
perceptual loss, not just as a result of the perceptual loss. Blind or
deaf people who aren't delusional know they are blind or deaf.



-- 
Stathis Papaioannou




Re: MGA 1

2008-11-22 Thread Günther Greindl

Hmm,

 However, I do start getting uncomfortable when I realize that this  
 lucky teleportation can happen over and over again, and if it happens  
 fast enough, it just reduces to sheer randomness that just happens to  
 be generating an ordered pattern that looks like Kory. I have a hard  
 time understanding how a mechanist can consider a bunch of random  
 numbers to be conscious. If that's the kind of magic you're referring  

I think that is the major attraction of mathematical universes - that 
the order emerges after the fact - out of random patterns.

The order would take over the function of causality in a materialist 
picture.

Causality, as Brent (I think) has mentioned, is still not really 
understood. What physicists mean is actually a certain kind of locality 
- and macro-causality emerges as a statistical mean.

This is already not so far from order-from-randomness (in Platonia, 
locality would also be an after-the-fact, physical feature).

Bruno takes the whole step, dumps matter, and lets mind emerge from 
arithmetical truth.

What I find fascinating is why we then find ourselves as single 
persons - if one dumps matter, why isn't an arbitrary ordering out of 
the number mess conscious of being many persons at once (in the matter 
picture: being aware of superpositions)?

Why then the feeling of being a single person?
In Bruno's system: why are OM's tied to single persons?

Cheers,
Günther




Re: MGA 1

2008-11-22 Thread Günther Greindl



Kory Heath wrote:
 
 If Lucky Alice is conscious and Empty-Headed Alice is not conscious,  
 then there are partial zombies halfway between them. Like you, I can't  
 make any sense of these partial zombies. But 
  also can't make any

I think a materialist would either have to argue that Lucky Alice is 
conscious (if he focuses on physical states), with removing neurons 
leading to fading qualia (the partial zombies), or simply assume 
that Lucky Alice is already a zombie (because he focuses on causal 
dynamics).

(I would like to note that I have dropped MAT in the meantime and tend 
to MECH. Just wanted to simulate a materialist argumentation :-) - 
maybe I can convince myself of MAT and not MECH again *grin*)

Could we say that MAT focuses on _physical states_ (exclusively) and 
MECH on _dynamics_? And that MGA shows that one can't have both?


Cheers,
Günther




Re: MGA 1

2008-11-22 Thread Kory Heath


On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote:
 Yes, there must be a problem with the assumptions. The only assumption
 that I see we could eliminate, painful though it might be for those of
 a scientific bent, is the idea that consciousness supervenes on
 physical activity. Q.E.D.

Right. But the problem is that that conclusion doesn't tell me how to  
deal with the (equally persuasive) arguments that convince me there's  
something deeply correct about viewing consciousness in computational  
terms, and viewing computation in physical terms. So I'm really just  
left with a dilemma. As I've hinted earlier, I suspect that there's  
something wrong with the idea of physical matter and related ideas  
like causality, probability, etc. But that's pretty vague.

-- Kory





Re: MGA 1

2008-11-22 Thread Brent Meeker

Günther Greindl wrote:
 
 
 Kory Heath wrote:
 If Lucky Alice is conscious and Empty-Headed Alice is not conscious,  
 then there are partial zombies halfway between them. Like you, I can't  
 make any sense of these partial zombies. But 
   also can't make any

I don't see why partial zombies are problematic.  My dog is conscious of 
perceptions, of being an individual, of memories and even dreams, but he 
doesn't 
have an inner narrative - so is he a partial zombie?

Brent

 
 I think a materialist would either have to argue that Lucky Alice is 
 conscious (if he focuses on physical states) and that removing neurons 
 would lead to fading qualia (the partial zombies) or simply assume 
 that already Lucky Alice is a Zombie (because he focuses on causal 
 dynamics).
 
 (I would like to note that I have dropped MAT in the meantime and tend 
 to MECH. Just wanted to simulate a materialist argumentation :-) - 
 maybe I can convince myself of MAT and not MECH again *grin*)
 
 Could we say that MAT focuses on _physical states_ (exclusively) and 
 MECH on _dynamics_? And that MGA shows that one can't have both?
 
 
 Cheers,
 Günther
 
  
 





Re: MGA 1

2008-11-22 Thread John Mikes

Brent,
did your dog communicate to you (in dogese, of course) that she has - NO -
INNER NARRATIVE? or you are just ignorant to perceive such?
(Of course do not expect such at the complexity level of your 11b neurons)
John M

On 11/22/08, Brent Meeker [EMAIL PROTECTED] wrote:

 Günther Greindl wrote:


 Kory Heath wrote:
 If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
 then there are partial zombies halfway between them. Like you, I can't
 make any sense of these partial zombies. But
   also can't make any

 I don't see why partial zombies are problematic.  My dog is conscious of
 perceptions, of being an individual, of memories and even dreams, but he
 doesn't
 have an inner narrative - so is he a partial zombie?

 Brent


 I think a materialist would either have to argue that Lucky Alice is
 conscious (if he focuses on physical states) and that removing neurons
 would lead to fading qualia (the partial zombies) or simply assume
 that already Lucky Alice is a Zombie (because he focuses on causal
 dynamics).

 (I would like to note that I have dropped MAT in the meantime and tend
 to MECH. Just wanted to simulate a materialist argumentation :-) -
 maybe I can convince myself of MAT and not MECH again *grin*)

 Could we say that MAT focuses on _physical states_ (exclusively) and
 MECH on _dynamics_? And that MGA shows that one can't have both?


 Cheers,
 Günther

 



 





Re: MGA 1

2008-11-22 Thread Brent Meeker

John Mikes wrote:
 Brent,
 did your dog communicate to you (in dogese, of course) that she has - NO -
 INNER NARRATIVE? or you are just ignorant to perceive such?
 (Of course do not expect such at the complexity level of your 11b neurons)
 John M

Of course not.  It's my inference from the fact that my dog has no outer 
narrative.  Have you read Julian Jaynes's "The Origin of Consciousness in the 
Breakdown of the Bicameral Mind"?  He argues, persuasively in my opinion, that 
our inner narrative arises from internalizing our outer narrative, i.e. spoken 
communication with other people.

Brent




Re: MGA 1

2008-11-22 Thread Stathis Papaioannou

2008/11/23 Kory Heath [EMAIL PROTECTED]:


 On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote:
 Yes, there must be a problem with the assumptions. The only assumption
 that I see we could eliminate, painful though it might be for those of
 a scientific bent, is the idea that consciousness supervenes on
 physical activity. Q.E.D.

 Right. But the problem is that that conclusion doesn't tell me how to
 deal with the (equally persuasive) arguments that convince me there's
 something deeply correct about viewing consciousness in computational
 terms, and viewing computation in physical terms. So I'm really just
 left with a dilemma. As I've hinted earlier, I suspect that there's
 something wrong with the idea of physical matter and related ideas
 like causality, probability, etc. But that's pretty vague.

We could say there are two aspects to mathematical objects, a physical
aspect and a non-physical aspect. Whenever we interact with the number
three it must be realised, say in the form of three objects. But
there is also an abstract three, with threeness properties, that lives
in Platonia independently of any realisation. Similarly, whenever we
interact with a computation, it must be realised on a physical
computer, such as a human brain. But there is also the abstract
computation, a Platonic object. It seems that consciousness, like
threeness, may be a property of the Platonic object, and not of its
physical realisation. This allows resolution of the apparent paradoxes
we have been discussing.



-- 
Stathis Papaioannou




Re: MGA 1

2008-11-22 Thread Stathis Papaioannou

On 2008/11/23 Brent Meeker [EMAIL PROTECTED] wrote:

 I don't see why partial zombies are problematic.  My dog is conscious of
 perceptions, of being an individual, of memories and even dreams, but he 
 doesn't
 have an inner narrative - so is he a partial zombie?

Your dog has experiences, and that seems to me to be the most
important thing distinguishing zombie from non-zombie. If Lucky Alice
is a partial zombie, she is lacking in experiences of a certain kind,
such as visual perception, but behaves just the same and otherwise
thinks and feels just the same. She remembers visual experiences from
before she suffered brain damage and feels that they are just the same
as present visual experiences: so in what sense could she have a
deficit rendering her blind?


-- 
Stathis Papaioannou




Re: MGA 1

2008-11-21 Thread Kory Heath


On Nov 20, 2008, at 10:52 AM, Bruno Marchal wrote:
 I am afraid you are already too suspicious of the contradictory
 nature of MEC+MAT.
 Take the reasoning as a game. Try to keep both MEC and MAT; the game
 consists in showing as clearly as possible what will go wrong.

I understand what you're saying, and I accept the rules of the game. I  
*am* trying to keep both MEC and MAT. But it seems as though we differ  
on how we understand MEC and MAT, because in my understanding,  
mechanist-materialists should say that Bruno's Lucky Alice is not  
conscious (for the same reason that Telmo's Lucky Alice is not  
conscious).

 You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
 original Alice, well I mean the one in MGA 1, is functionally
 identical at the right level of description (actually she already has
 a digital brain). The physical instantiation of a computation is
 completely realized. No neuron can know that the info (correct and
 at the right places) does not come from the relevant neurons, but from
 a lucky beam.

I agree that the neurons don't know or care where their inputs are  
coming from. They just get their inputs, perform their computations,  
and send their outputs. But when it comes to the functional, physical  
behavior of Alice's whole brain, the mechanist-materialist is  
certainly allowed (indeed, forced) to talk about where each neuron's  
input is coming from. That's a part of the computational picture.

I see the point that you're making. Each neuron receives some input,  
performs some computation, and then produces some output. We're  
imagining that every neuron has been disconnected from its inputs, but  
that cosmic rays have luckily produced the exact same input that the  
previously connected neurons would have produced. You're arguing that  
since every neuron is performing the exact same computations that it  
would have performed anyway, the two situations are computationally  
identical.

But I don't think that's correct. I think that plain old, garden  
variety mechanism-materialism has an easy way of saying that Lucky  
Alice's brain, viewed as a whole system, is not performing the same  
computations that fully-functioning Alice's brain is. None of the  
neurons in Lucky Alice's brain are even causally connected to each  
other. That's a pretty big computational difference!
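
For what it's worth, the difference being pointed at here can be shown with
a toy Python sketch. Everything in it is an invented assumption: a
four-neuron boolean network, a fire-if-at-least-two-inputs-fire rule, and a
"lucky" input stream that is simply stipulated to match what the neighbours
would have sent. The per-neuron computations and the resulting trace are
identical in the two runs; only the causal source of each neuron's inputs
differs.

    def step(state, wiring):
        # Neuron i fires iff at least two of the neurons it listens to fired.
        return tuple(1 if sum(state[j] for j in wiring[i]) >= 2 else 0
                     for i in range(len(wiring)))

    wiring = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]   # invented toy wiring
    initial = (1, 0, 1, 1)

    # Wired run: each neuron's inputs come from the neighbouring neurons.
    wired_trace = [initial]
    for _ in range(5):
        wired_trace.append(step(wired_trace[-1], wiring))

    # Lucky run: each neuron is cut off from its neighbours and fed recorded
    # bits which, by stipulation, match what the neighbours would have sent.
    lucky_inputs = wired_trace[:-1]
    lucky_trace = [initial]
    for t in range(5):
        lucky_trace.append(step(lucky_inputs[t], wiring))

    print(wired_trace == lucky_trace)   # True: same outputs, different causes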

I am arguing, in essence, that for the mechanist-materialist,  
causality is an important aspect of computation and consciousness.  
Maybe your goal is to show that there's something deeply wrong with  
that idea, or with the idea of causality itself. But we're supposed  
to be starting from a foundation of MEC and MAT.

Are you saying that the mechanist-materialist *does* say that Lucky  
Alice is conscious, or only that the mechanist-materialist *should*  
say it? Because if you're saying the latter, then I'm playing the  
game better than you are! I'm pretty sure that Dennett (and the other  
mechanist-materialists I've read) would say that Lucky Alice is not  
conscious, and for them, they have a perfectly straightforward way of  
explaining what they *mean* when they say that she's not conscious.  
They mean (among other things) that the actions of her neurons are not  
being affected at all by the paper lying in front of her on the table,  
or the ball flying at her head. For Dennett, it's practically a non- 
sequitur to say that she's conscious of a ball that's not affecting  
her brain.

 But the physical difference does not play a role.

It depends on what you mean by play a role. You're right that the  
physical difference (very luckily) didn't change what the neurons did.  
It just so happens that the neurons did exactly what they were going  
to do anyway. But the *cause* of why the neurons did what they did is  
totally different. The action of each individual neuron was caused by  
cosmic rays rather than by neighboring neurons. You seem to be asking,  
Why should this difference play any role in whether or not Alice was  
conscious? But for the mechanist-materialist, the difference is  
primary. Those kinds of causal connections are a fundamental part of  
what they *mean* when they say that something is conscious.

 If you invoke it,
 how could you accept saying yes to a doctor, who introduce bigger
 difference?

Do you mean the teleportation doctor, who makes a copy of me,  
destroys me, and then reconstructs me somewhere else using the copied  
information? That case is not problematic in the way that Lucky Alice  
is, because there is an unbroken causal chain between the new me and  
the old me. What's problematic about Lucky Alice is the fact that  
her ducking out of the way of the ball (the movements of her eyes, the  
look of surprise, etc.) has nothing to do with the ball, and yet  
somehow she's still supposed to be conscious of the ball.

A much closer analogy to Lucky Alice would be if the doctor  
accidentally destroys me without making the copy, turns on the receiving  
teleporter in desperation, and then the exact copy that would have  
appeared anyway steps out, because (luckily!) cosmic rays hit the  
receiver's mechanisms in just the right way.

Re: MGA 1

2008-11-21 Thread Bruno Marchal
Hi Gordon,

On 20 Nov 2008, at 21:40, Gordon Tsai wrote:

 Bruno:
    I think you and John touched the fundamental issues of human 
 rational. It's a dilemma encountered by phenomenology. Now I have a 
 question: In theory we can't distinguish ourselves from a Lobian 
 Machine. But can lobian machines truly have sufficient rich 
 experiences like human?

This is our assumption. Assuming comp, we are machines, so certainly 
some machines can have our rich experiences. Indeed, us.



 For example, is it possible for a lobian machine to still its 
 mind' or cease the computational logic like some eastern philosophy 
 suggested? Maybe any of the out-of-loop experience is still part of 
 the computation/logic, just as our out-of-body experiences are 
 actually the trick of brain chemicals?


Eventually we will be led to the idea that it is the brain chemicals 
which are the result of a trick of universal consciousness, but here 
I am anticipating. Let us go carefully step by step.

I think I will have some time this afternoon to make MGA 2,

See you there ...

Bruno


http://iridia.ulb.ac.be/~marchal/




Re: MGA 1

2008-11-21 Thread Bruno Marchal


Jason,

Nice, you are anticipating MGA 2. So if you don't mind I will 
answer your post in MGA 2, or in comments you will perhaps make 
afterward.

... asap.

Bruno


On 20 Nov 2008, at 21:27, Jason Resch wrote:



 On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED] 
 wrote:



  The state machine that would represent her in the case of injection 
 of random noise is a different state machine than the one that would 
 represent her normally functioning brain.


 Absolutely so.



 Bruno,

 What about the state machine that included the injection of lucky 
 noise from an outside source vs. one in which all information was 
 derived internally from the operation of the state machine itself? 
  Would those two differently defined machines not differ and compute 
 something different?  Even though the computations are identical the 
 information that is being computed comes from different sources and so 
 carries with it a different connotation.  Though the bits injected 
 are identical, they inherently imply a different meaning because the 
 state machine in the case of injection has a different structure than 
 that of her normally operating brain.  I believe the brain can be 
 abstracted as a computer/information processing system, but it is not 
 simply the computations and the inputs into the logic gates at each 
 step that are important, but also the source of the input bits, 
 otherwise the computation isn't the same.

 Jason

  

http://iridia.ulb.ac.be/~marchal/





Re: MGA 1

2008-11-21 Thread Kory Heath


On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
 A variant of Chalmers' Fading Qualia argument
 (http://consc.net/papers/qualia.html) can be used to show Alice must
 be conscious.

The same argument can be used to show that Empty-Headed Alice must  
also be conscious. (Empty-Headed Alice is the version where only  
Alice's motor neurons are stimulated by cosmic rays, while all of the  
other neurons in Alice's head do nothing. Alice's body continues to  
act indistinguishably from the way it would have acted, but there's  
nothing going on in the rest of Alice's brain, random or otherwise.  
Telmo and Bruno have both indicated that they don't think this Alice  
is conscious. Or at least, that a mechanist-materialist shouldn't  
believe that this Alice is conscious.)

Let's assume that Lucky Alice is conscious. Every neuron in her head  
(they're all artificial) has become causally disconnected from all the  
others, but they (very improbably) continue to do exactly what they  
would have done when they were connected, due to cosmic rays. Let's  
say that we remove one of the neurons from Alice's head. This has no  
effect on her outward behavior, or on the behavior of any of her other  
neurons (since they're already causally disconnected). Of course, we  
can remove two neurons, and then three, etc. We can remove her entire  
visual cortex. This can't have any noticeable effect on her  
consciousness, because the neurons that do remain go right on acting  
the way they would have acted if the cortex was there. Eventually, we  
can remove every neuron that isn't a motor neuron, so that we have an  
empty-headed Alice whose body takes the exam, ducks when I throw the  
ball at her head, etc.
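
The removal step can be illustrated with the same kind of toy sketch as
before (again, purely invented assumptions: four neurons, one of them
labelled "motor", each fed only recorded bits). Because no neuron reads any
other neuron's actual output in the lucky regime, dropping the non-motor
neurons cannot change what the motor neuron computes.

    def lucky_outputs(t, recorded, wiring, present):
        # Outputs at step t of the neurons still present; each reads only its
        # recorded input bits, never another neuron's actual output.
        return {i: (1 if sum(recorded[t][j] for j in wiring[i]) >= 2 else 0)
                for i in present}

    wiring = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]   # invented toy wiring
    recorded = [(1, 0, 1, 1), (1, 1, 1, 0), (0, 1, 1, 1)]   # stipulated lucky bits
    motor = 3

    full_run   = [lucky_outputs(t, recorded, wiring, {0, 1, 2, 3})[motor]
                  for t in range(3)]
    pruned_run = [lucky_outputs(t, recorded, wiring, {motor})[motor]
                  for t in range(3)]
    print(full_run == pruned_run)   # True: the motor output never depended on the rest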

If Lucky Alice is conscious and Empty-Headed Alice is not conscious,  
then there are partial zombies halfway between them. Like you, I can't  
make any sense of these partial zombies. But I also can't make any  
sense of the idea that Empty-Headed Alice is conscious. Therefore, I  
don't think this argument shows that Empty-Headed Alice (and by  
extension, Lucky Alice) must be conscious. I think it shows that  
there's a deeper problem - probably with one of our assumptions.

Even though I actually think that mechanist-materialists should view  
both Lucky Alice and Empty-Headed Alice as not conscious, I still  
think they have to deal with this problem. They have to deal with the  
spectrum of intermediate states between Fully-Functional Alice and  
Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)

-- Kory





Re: MGA 1

2008-11-21 Thread Michael Rosefield
This is one of those questions where I'm not sure if I'm being relevant or
missing the point entirely, but here goes:

There are multiple universes which implement/contain/whatever Alice's
consciousness. During the period of the experiment, that universe may no
longer be amongst them but shadows along with them closely enough that it
certainly rejoins them upon its termination.

So, was Alice conscious during the experiment? Well, from Alice's
perspective she certainly has the memory of consciousness, and due to the
presence of the implementing universes there was certainly a conscious Alice
out there somewhere. Since consciousness has no intrinsic spatio-temporal
quality, there's no reason for that consciousness not to count.


2008/11/21 Kory Heath [EMAIL PROTECTED]



 On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
  A variant of Chalmers' Fading Qualia argument
  (http://consc.net/papers/qualia.html) can be used to show Alice must
  be conscious.

 The same argument can be used to show that Empty-Headed Alice must
 also be conscious. (Empty-Headed Alice is the version where only
 Alice's motor neurons are stimulated by cosmic rays, while all of the
 other neurons in Alice's head do nothing. Alice's body continues to
 act indistinguishably from the way it would have acted, but there's
 nothing going on in the rest of Alice's brain, random or otherwise.
 Telmo and Bruno have both indicated that they don't think this Alice
 is conscious. Or at least, that a mechanist-materialist shouldn't
 believe that this Alice is conscious.)

 Let's assume that Lucky Alice is conscious. Every neuron in her head
 (they're all artificial) has become causally disconnected from all the
 others, but they (very improbably) continue to do exactly what they
 would have done when they were connected, due to cosmic rays. Let's
 say that we remove one of the neurons from Alice's head. This has no
 effect on her outward behavior, or on the behavior of any of her other
 neurons (since they're already causally disconnected). Of course, we
 can remove two neurons, and then three, etc. We can remove her entire
 visual cortex. This can't have any noticeable effect on her
 consciousness, because the neurons that do remain go right on acting
 the way they would have acted if the cortex was there. Eventually, we
 can remove every neuron that isn't a motor neuron, so that we have an
 empty-headed Alice whose body takes the exam, ducks when I throw the
 ball at her head, etc.

 If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
 then there are partial zombies halfway between them. Like you, I can't
 make any sense of these partial zombies. But I also can't make any
 sense of the idea that Empty-Headed Alice is conscious. Therefore, I
 don't think this argument shows that Empty-Headed Alice (and by
 extension, Lucky Alice) must be conscious. I think it shows that
 there's a deeper problem - probably with one of our assumptions.

 Even though I actually think that mechanist-materialists should view
 both Lucky Alice and Empty-Headed Alice as not conscious, I still
 think they have to deal with this problem. They have to deal with the
 spectrum of intermediate states between Fully-Functional Alice and
 Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)

 -- Kory


 





Re: MGA 1

2008-11-21 Thread Bruno Marchal

On 21 Nov 2008, at 10:45, Kory Heath wrote:



 ...
 A much closer analogy to Lucky Alice would be if the doctor
 accidentally destroys me without making the copy, turns on the
 receiving teleporter in desperation, and then the exact copy that
 would have appeared anyway steps out, because (luckily!) cosmic rays
 hit the receiver's mechanisms in just the right way. I actually find
 this thought experiment more persuasive than Lucky Alice (although I'm
 sure some will argue that they're identical). At the very least, the
 mechanist-materialist has to say that the resulting Lucky Kory is
 conscious. I think it's also clear that Lucky Kory's consciousness
 must be exactly what it would have been if the teleportation had
 worked correctly. This does in fact lead me to feel that maybe
 causality shouldn't have any bearing on consciousness after all.


Very good. Thanks.




 However, the materialist-mechanist still has some grounds to say that
 there's something interestingly different about Lucky Kory than
 Original Kory. It is a physical fact of the matter that Lucky Kory is
 not causally connected to Pre-Teleportation Kory.


Keeping the comp hyp (cf. the 'qua computatio'), this would introduce  
magic.



 When someone asks
 Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory
 says, "Because of something I learned when I was ten years old", Lucky
 Kory's statement is quite literally false. Lucky Kory ties his shoes
 that way because of some cosmic rays. I actually don't know what the
 standard mechanist-materialist way of viewing this situation is. But
 it does seem to suggest that maybe breaks in the causal chain
 shouldn't affect consciousness after all.

Yes.

 .
 Of course I'm entirely on board with the spirit of your thought
 experiment. You think MECH and MAT implies that Lucky Alice is
 conscious, but I don't think it does. I'm not sure how important that
 difference is. It seems substantial. But I can also predict where
 you're going with your thought experiment, and it's the exact same
 place I go. So by all means, continue on to MGA 2, and we'll see what
 happens.


Thanks.  A last comment on your reply on Stathis' recent comment.

Stathis' argument, based on Chalmers' fading qualia, is mainly correct, I
think. And it could be that your answer to Stathis is correct too.
And this would finish our work. We would have a proof that Telmo's Alice
is unconscious and that Telmo's Alice is conscious, finishing the
reductio ad absurdum.
Keep in mind that we are doing a reductio ad absurdum. Those who are
convinced by both Stathis and Russell, Telmo, ... can already take
holidays!

I have to write MGA 2 for the others.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 3:45 AM, Kory Heath [EMAIL PROTECTED] wrote:




 However, the materialist-mechanist still has some grounds to say that
 there's something interestingly different about Lucky Kory than
 Original Kory. It is a physical fact of the matter that Lucky Kory is
 not causally connected to Pre-Teleportation Kory. When someone asks
 Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory
 says, "Because of something I learned when I was ten years old", Lucky
 Kory's statement is quite literally false. Lucky Kory ties his shoes
 that way because of some cosmic rays. I actually don't know what the
 standard mechanist-materialist way of viewing this situation is. But
 it does seem to suggest that maybe breaks in the causal chain
 shouldn't affect consciousness after all.


This is very similar to an existing thought experiment in identity theory:

http://en.wikipedia.org/wiki/Swamp_man

Jason




Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 5:45 AM, Stathis Papaioannou [EMAIL PROTECTED]wrote:


 A variant of Chalmers' Fading Qualia argument
 (http://consc.net/papers/qualia.html) can be used to show Alice must
 be conscious.

 Alice is sitting her exam, and a part of her brain stops working,
 let's say the part of her occipital cortex which enables visual
 perception of the exam paper. In that case, she would be unable to
 complete the exam due to blindness. But if the neurons in her
 occipital cortex are stimulated by random events such as cosmic rays
 so that they pass on signals to the rest of the brain as they would
 have normally, Alice won't know she's blind: she will believe she sees
 the exam paper, will be able to read it correctly, and will answer the
 questions just as she would have without any neurological or
 electronic problem.

 If Alice were replaced by a zombie, no-one else would notice, by
 definition; also, Alice herself wouldn't notice, since a zombie is
 incapable of noticing anything (it just behaves as if it does). But I
 don't see how it is possible that Alice could be *partly* zombified,
 behaving as if she has normal vision, believing she has normal vision,
 and having all the cognitive processes that go along with normal
 vision, while actually lacking any visual experiences at all. That
 isn't consistent with the definition of a philosophical zombie.


Stathis,

What you described sounds very similar to a split brain patient I saw on a
documentary.  He was able to respond to images presented to one eye, and
ended up drawing them with a hand controlled by the other hemisphere, yet he
had no idea why he drew that image when asked.  The problem may not be that
he isn't experiencing the visualization, but that the part of his brain that
is responsible for speech is disconnected from the part of his brain that
can see.

See: http://www.youtube.com/watch?v=ZMLzP1VCANo

Jason




Re: MGA 1

2008-11-21 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 20, 2008, at 10:52 AM, Bruno Marchal wrote:
 I am afraid you already suspect too much the contradictory
 nature of MEC+MAT.
 Take the reasoning as a game. Try to keep both MEC and MAT; the game
 consists in showing as clearly as possible what will go wrong.
 
 I understand what you're saying, and I accept the rules of the game. I  
 *am* trying to keep both MEC and MAT. But it seems as though we differ  
 on how we understand MEC and MAT, because in my understanding,  
 mechanist-materialists should say that Bruno's Lucky Alice is not  
 conscious (for the same reason that Telmo's Lucky Alice is not  
 conscious).
 
 You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
 original Alice, well I mean the one in MGA 1, is functionally
 identical at the right level of description (actually she already has a
 digital brain). The physical instantiation of a computation is
 completely realized. No neuron can know that the info (correct and
 at the right places) does not come from the relevant neurons, but from
 a lucky beam.
 
 I agree that the neurons don't know or care where their inputs are  
 coming from. They just get their inputs, perform their computations,  
 and send their outputs. But when it comes to the functional, physical  
 behavior of Alice's whole brain, the mechanist-materialist is  
 certainly allowed (indeed, forced) to talk about where each neuron's  
 input is coming from. That's a part of the computational picture.
 
 I see the point that you're making. Each neuron receives some input,  
 performs some computation, and then produces some output. We're  
 imagining that every neuron has been disconnected from its inputs, but  
 that cosmic rays have luckily produced the exact same input that the  
 previously connected neurons would have produced. You're arguing that  
 since every neuron is performing the exact same computations that it  
 would have performed anyway, the two situations are computationally  
 identical.
 
 But I don't think that's correct. I think that plain old, garden  
 variety mechanism-materialism has an easy way of saying that Lucky  
 Alice's brain, viewed as a whole system, is not performing the same  
 computations that fully-functioning Alice's brain is. None of the  
 neurons in Lucky Alice's brain are even causally connected to each  
 other. That's a pretty big computational difference!
 
 I am arguing, in essence, that for the mechanist-materialist,  
 causality is an important aspect of computation and consciousness.  
 Maybe your goal is to show that there's something deeply wrong with  
 that idea, or with the idea of causality itself. But we're supposed  
 to be starting from a foundation of MEC and MAT.
 
 Are you saying that the mechanist-materialist *does* say that Lucky  
 Alice is conscious, or only that the mechanist-materialist *should*  
 say it? Because if you're saying the latter, then I'm playing the  
 game better than you are! I'm pretty sure that Dennett (and the other  
 mechanist-materialists I've read) would say that Lucky Alice is not  
 conscious, and for them, they have a perfectly straightforward way of  
 explaining what they *mean* when they say that she's not conscious.  
 They mean (among other things) that the actions of her neurons are not  
 being affected at all by the paper lying in front of her on the table,  
 or the ball flying at her head. For Dennett, it's practically a non- 
 sequitur to say that she's conscious of a ball that's not affecting  
 her brain.
 
 But the physical difference does not play a role.
 
 It depends on what you mean by play a role. You're right that the  
 physical difference (very luckily) didn't change what the neurons did.  
 It just so happens that the neurons did exactly what they were going  
 to do anyway. But the *cause* of why the neurons did what they did is  
 totally different. The action of each individual neuron was caused by  
 cosmic rays rather than by neighboring neurons. You seem to be asking,
 "Why should this difference play any role in whether or not Alice was
 conscious?" But for the mechanist-materialist, the difference is
 primary. Those kinds of causal connections are a fundamental part of  
 what they *mean* when they say that something is conscious.
 
 If you invoke it,
 how could you accept saying yes to a doctor, who introduces a bigger
 difference?
 
 Do you mean the teleportation doctor, who makes a copy of me,  
 destroys me, and then reconstructs me somewhere else using the copied  
 information? That case is not problematic in the way that Lucky Alice  
 is, because there is an unbroken causal chain between the new me and  
 the old me. What's problematic about Lucky Alice is the fact that  
 her ducking out of the way of the ball (the movements of her eyes, the  
 look of surprise, etc.) has nothing to do with the ball, and yet  
 somehow she's still supposed to be conscious of the ball.
 
 A much closer

Re: MGA 1

2008-11-21 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
 A variant of Chalmers' Fading Qualia argument
 (http://consc.net/papers/qualia.html) can be used to show Alice must
 be conscious.
 
 The same argument can be used to show that Empty-Headed Alice must  
 also be conscious. (Empty-Headed Alice is the version where only  
 Alice's motor neurons are stimulated by cosmic rays, while all of the  
 other neurons in Alice's head do nothing. Alice's body continues to  
 act indistinguishably from the way it would have acted, but there's  
 nothing going on in the rest of Alice's brain, random or otherwise.  
 Telmo and Bruno have both indicated that they don't think this Alice  
 is conscious. Or at least, that a mechanist-materialist shouldn't  
 believe that this Alice is conscious.)
 
 Let's assume that Lucky Alice is conscious. Every neuron in her head  
 (they're all artificial) has become causally disconnected from all the  
 others, but they (very improbably) continue to do exactly what they  
 would have done when they were connected, due to cosmic rays. Let's  
 say that we remove one of the neurons from Alice's head. This has no  
 effect on her outward behavior, or on the behavior of any of her other  
 neurons (since they're already causally disconnected). Of course, we  
 can remove two neurons, and then three, etc. We can remove her entire  
 visual cortex. This can't have any noticeable effect on her  
 consciousness, because the neurons that do remain go right on acting  
 the way they would have acted if the cortex was there. Eventually, we  
 can remove every neuron that isn't a motor neuron, so that we have an  
 empty-headed Alice whose body takes the exam, ducks when I throw the  
 ball at her head, etc.
 
 If Lucky Alice is conscious and Empty-Headed Alice is not conscious,  
 then there are partial zombies halfway between them. Like you, I can't  
 make any sense of these partial zombies. But I also can't make any  
 sense of the idea that Empty-Headed Alice is conscious. Therefore, I  
 don't think this argument shows that Empty-Headed Alice (and by  
 extension, Lucky Alice) must be conscious. I think it shows that  
 there's a deeper problem - probably with one of our assumptions.
 
 Even though I actually think that mechanist-materialists should view  
 both Lucky Alice and Empty-Headed Alice as not conscious, I still  
 think they have to deal with this problem. They have to deal with the  
 spectrum of intermediate states between Fully-Functional Alice and  
 Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)

If they were just observing Alice's outward behavior they would say, "It appears
that Alice is a conscious being, but of course there's a 1e-100 chance that she's
just an automaton operated by cosmic rays."  If they were actually observing
her inner workings, they'd say, "Alice is just an automaton who, in an extremely
improbable coincidence, has appeared as if conscious, but we can easily prove she
isn't by watching her future behavior or even by blocking the rays."

Brent




Re: MGA 1

2008-11-21 Thread Brent Meeker

Jason Resch wrote:
 
 
 On Fri, Nov 21, 2008 at 5:45 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 
 
 A variant of Chalmers' Fading Qualia argument
 (http://consc.net/papers/qualia.html) can be used to show Alice must
 be conscious.
 
 Alice is sitting her exam, and a part of her brain stops working,
 let's say the part of her occipital cortex which enables visual
 perception of the exam paper. In that case, she would be unable to
 complete the exam due to blindness. But if the neurons in her
 occipital cortex are stimulated by random events such as cosmic rays
 so that they pass on signals to the rest of the brain as they would
 have normally, Alice won't know she's blind: she will believe she sees
 the exam paper, will be able to read it correctly, and will answer the
 questions just as she would have without any neurological or
 electronic problem.
 
 If Alice were replaced by a zombie, no-one else would notice, by
 definition; also, Alice herself wouldn't notice, since a zombie is
 incapable of noticing anything (it just behaves as if it does). But I
 don't see how it is possible that Alice could be *partly* zombified,
 behaving as if she has normal vision, believing she has normal vision,
 and having all the cognitive processes that go along with normal
 vision, while actually lacking any visual experiences at all. That
 isn't consistent with the definition of a philosophical zombie.
 
 
 Stathis,
 
 What you described sounds very similar to a split brain patient I saw on 
 a documentary.  He was able to respond to images presented to one eye, 
 and ended up drawing them with a hand controlled by the other 
 hemisphere, yet he had no idea why he drew that image when asked.  The 
 problem may not be that he isn't experiencing the visualization, but 
 that the part of his brain that is responsible for speech is 
 disconnected from the part of his brain that can see.
 
 See: http://www.youtube.com/watch?v=ZMLzP1VCANo
 
 Jason

I think experiments like this support the idea that consciousness is not a 
single thing.  We tend to identify conscious thought with the thought that is 
reported in speech.  But that's just because it is the thought that is readily 
accessible to experimenters.

Brent




Re: MGA 1

2008-11-21 Thread Kory Heath


On Nov 21, 2008, at 8:15 AM, Bruno Marchal wrote:
 On 21 Nov 2008, at 10:45, Kory Heath wrote:
 However, the materialist-mechanist still has some grounds to say that
 there's something interestingly different about Lucky Kory than
 Original Kory. It is a physical fact of the matter that Lucky Kory is
 not causally connected to Pre-Teleportation Kory.


 Keeping the comp hyp (cf the qua computatio) this would introduce  
 magic.

I'm not sure it has to. Can you elaborate on what magic you think it  
ends up introducing?

In the context of mechanism-materialism, I am forced to believe that  
Lucky Kory's consciousness, qualia, etc., are exactly what they would  
have been if the teleportation had worked properly. But I don't see  
how that forces me to accept any magic. It doesn't (for instance)  
force me to say that Kory's real consciousness magically jumped over  
to Lucky Kory despite the lack of the causal connection. As a  
mechanist, I don't think there's any sense in talking about  
consciousness in that way.

Dennett has a slogan: "When you describe what happens, you've
described everything." In this weird case, we have to fall back on
describing what happened. A pattern of molecules was destroyed, and
somewhere else that exact pattern was (very improbably) created by a  
random process of cosmic rays. Since we mechanists believe that  
consciousness and qualia are just aspects of patterns, the  
consciousness and qualia of the lucky pattern must (by definition) be  
the same as the original would have been. I don't think that causes  
any (immediate) problem for the mechanist. Is Lucky Kory the same  
person as Original Kory? I don't think the mechanist is committed to  
any particular answer to this question. We've already described what  
happened. Now it's just a matter of how we want to use our words. If  
we want to use them in a certain way, there is a sense in which we can  
say that Lucky Kory is not the same person as Original Kory, as long  
as we understand that *all* we mean is that Lucky Kory isn't causally  
connected to Original Kory (along with whatever else that implies).

However, I do start getting uncomfortable when I realize that this  
lucky teleportation can happen over and over again, and if it happens  
fast enough, it reduces to sheer randomness that just happens to
be generating an ordered pattern that looks like Kory. I have a hard  
time understanding how a mechanist can consider a bunch of random  
numbers to be conscious. If that's the kind of magic you're referring  
to, then I agree.

-- Kory





Re: MGA 1

2008-11-21 Thread Kory Heath

On Nov 21, 2008, at 8:52 AM, Jason Resch wrote:
 This is very similar to an existing thought experiment in identity  
 theory:

 http://en.wikipedia.org/wiki/Swamp_man

Cool. Thanks for that link!

-- Kory





Re: MGA 1

2008-11-21 Thread Kory Heath


On Nov 21, 2008, at 9:01 AM, Jason Resch wrote:
 What you described sounds very similar to a split brain patient I  
 saw on a documentary.

It might seem similar on the surface, but it's actually very  
different. The observers of the split-brain patient and the patient  
himself know that something is amiss. There is a real difference in  
his consciousness and his behavior. If cosmic rays randomly severed  
your corpus callosum right now, you would definitely notice a  
difference. (It's an empirical question whether or not you'd know it  
almost immediately, or if it would take a while for you to figure it  
out. I'm sure the neurologists and cognitive scientists already know  
the answer to that one.)

At no point during the replacement of Alice's fully-functioning  
neurons with cosmic-ray stimulated neurons (or during the replacement  
of cosmic-ray neurons with no neurons at all) will Alice notice any  
difference in her consciousness. In principle, she cannot notice it,  
since every one of her fully-functional neurons always continues to
do exactly what it would have done. This is a serious problem for the  
mechanistic view of consciousness.

-- Kory





Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 7:54 PM, Kory Heath [EMAIL PROTECTED] wrote:



 On Nov 21, 2008, at 9:01 AM, Jason Resch wrote:
  What you described sounds very similar to a split brain patient I
  saw on a documentary.

 It might seem similar on the surface, but it's actually very
 different. The observers of the split-brain patient and the patient
 himself know that something is amiss. There is a real difference in
 his consciousness and his behavior. If cosmic rays randomly severed
 your corpus callosum right now, you would definitely notice a
 difference. (It's an empirical question whether or not you'd know it
 almost immediately, or if it would take a while for you to figure it
 out. I'm sure the neurologists and cognitive scientists already know
 the answer to that one.)

 At no point during the replacement of Alice's fully-functioning
 neurons with cosmic-ray stimulated neurons (or during the replacement
 of cosmic-ray neurons with no neurons at all) will Alice notice any
 difference in her consciousness. In principle, she cannot notice it,
 since every one of her fully-functional neurons always continues to
 do exactly what it would have done. This is a serious problem for the
 mechanistic view of consciousness.


What about a case where only some of Alice's neurons have ceased normal
function and became dependent on the lucky rays?  Let's say the neurons in
her visual center stopped working but her speech center was unaffected.  In
this manner could she talk about what she saw without having any conscious
experience of sight?  I'm beginning to see how truly frustrating the MGA
argument is: if all her neurons break and are luckily fixed, I believe she is
a zombie; if only one of her neurons fails but we correct it, I don't think
this would affect her consciousness in any perceptible way; but cases where
some part of her brain needs to be corrected are quite strange, and almost
maddeningly so.

I think you are right in that the split brain cases are very different, but
I think the similarity is that part of Alice's consciousness would
disappear, though the lucky effects ensure she acts as if no change had
occurred.  If all of a sudden all her neurons started working properly
again, I don't think she would have any recollection of having lost any part
of her consciousess, the lucky effects should have fixed her memories as
well, and the parts of her brain which remained functional would also not
have detected any inconsitencies, yet the parts of her brain that depended
on lucky cosmic rays generated no subjective experience for whatever set of
information they were processing.  (So I would think)

Jason




Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath


On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
 So I'm puzzled as to how answer Bruno's question.  In general I  
 don't believe in
 zombies, but that's in the same way I don't believe my glass of  
 water will
 freeze at 20degC.  It's an opinion about what is likely, not what is  
 possible.

I take this to mean that you're uncomfortable with thought experiments  
which revolve around logically possible but exceedingly unlikely  
events. I think that's understandable, but ultimately, I'm on the  
philosopher's side. It really is logically possible - although  
exceedingly unlikely - for a random-number-generator to cause a robot  
to walk around, talk to people, etc. It really is logically possible  
for a computer program to use a random-number-generator to generate a  
lattice of changing bits that follows Conway's Life rule. Mechanism
and materialism need to answer questions about these scenarios,
regardless of how unlikely they are.
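
A minimal sketch of this scenario, assuming Python and a small toroidal grid
(the grid size N and both helper functions are illustrative, not from the
original post): the Life rule fixes what the next lattice "should" be, while a
random-number generator fills a lattice with no reference to the rule. The two
can coincide, which is the logical possibility at issue, but (assuming
independent uniform random bits) only with probability 2^-(N*N) per step.

import random

N = 8  # illustrative grid size

def life_step(grid):
    # One Conway's Life update of an N x N grid with toroidal wrap-around.
    nxt = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            neighbours = sum(
                grid[(i + di) % N][(j + dj) % N]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
            )
            nxt[i][j] = 1 if neighbours == 3 or (grid[i][j] == 1 and neighbours == 2) else 0
    return nxt

def random_lattice():
    # A lattice filled by a random-number generator, ignoring the rule entirely.
    return [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

grid = random_lattice()          # some starting configuration
lawful = life_step(grid)         # what the Life rule says comes next
lucky = random_lattice()         # what the RNG happens to produce instead
print("RNG reproduced the Life update:", lucky == lawful)  # almost always False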

-- Kory





Re: MGA 1

2008-11-20 Thread John Mikes

On 11/19/08, Bruno Marchal [EMAIL PROTECTED] wrote:
... Keep in mind we try to refute the  
 conjunction MECH and MAT.
 Nevertheless your intuition below is mainly correct, but the point is  
 that accepting it really works, AND keeping MECH, will force us to  
 negate MAT.

 Bruno
 http://iridia.ulb.ac.be/~marchal/
and lots of other things in the discussion.


the concept of Zombie emerged as questioned. Thinking about
it, (I dislike the entire field together with 'thought-experiments' and 
the fairy-tale processes of differentiated teleportations, etc.)
I concluded that a 'zombie' as used mostly, is a 'person(??)' with 
NO HUMAN CONSCIOUSNESS (whatever WE included in the 'C' 
term). I am willing to expand on it: a (humanly) zombie MAY HAVE 
mental functions beyond the (excluded) select ones WE use in our 
present potential as 'thinking humans'. It needs it, since assumed are 
the activities that must be directed by some form of mentality 
(call it 'physical?' ones). - Zombie does...
 
It boils down to my overall somewhat negative position (although 
I have no better one) of UDA, MPG, comp, etc. - all of them are 
products of HUMAN thinking and restrictions as WE can imagine 
the unfathomable existence (the totality - real TOE).
I find it a 'cousin' of the reductionistic conventional sciences, just 
a bit 'freed up'. Maybe a distant cousin. Meaning: it handles the 
totality WITHIN the framework of our limited (human) logic(s). 

The list's said 100 years 'ahead ways' of thinking (Bruno's 200) 
is still a mental activity of the NOW existing minds. 

Alas, we cannot do better. I just want to take all this mental 
exercise with the grain of salt of there may be more to all of it 
what we cannot even fancy (imagine, fantasize of) today, 
with our mind anchored in our restrictions. (Including 'digital',
 'numbers', learned wisdom, etc.). 

Sorry if I offended anyone on the list, it was not intended. 
I am not up to the level of the list, just 'freed up' my thinking 
into allowing further (unknown?) domains into our ignorance. 
I call it 'my' scientific agnosticism.

John M






Re: MGA 1 bis (exercise)

2008-11-20 Thread Bruno Marchal


On 19 Nov 2008, at 22:43, Brent Meeker wrote:


 Bruno Marchal wrote:

 On 19 Nov 2008, at 16:06, Telmo Menezes wrote:


 Bruno,

 If no one objects, I will present MGA 2 (soon).
 I also agree completely and am curious to see where this is going.
 Please continue!


 Thanks Telmo, thanks also to Gordon.

 I will try to send MGA 2 asap. But this asks me some time.  
 Meanwhile I
 suggest a little exercise, which, by the way, finishes the proof of
 MECH + MAT implies false, for those who think that there are no
 (conceivable) zombies. (They think that "zombies exist" *is* false.)

 Exercise (MAT+MEC implies zombies exist or are conceivable):

 Could you alter the so-lucky cosmic explosion beam a little bit so
 that Alice still succeeds at her math exam, but is, reasonably enough, a
 zombie during the exam? With zombie taken in the traditional sense
 of
 Kory and Dennett.
 Of course you have to keep well *both*  MECH *and* MAT.

 Bruno

 As I understand it a philosophical zombie is someone who looks and  
 acts just
 like a conscious person but isn't conscious, i.e. has no inner  
 narrative.


No inner narrative, no inner image, no inner souvenir, no inner  
sensation, no qualia, no subject, no first person notions at all. OK.




 Time and circumstance play a part in this.  As Bruno pointed out a  
 cardboard
 cutout of a person's photograph could be a zombie for a moment.  I  
 assume the
 point of the exam is that an exam is long enough in duration and  
 complex enough
 that it rules out the accidental, cutout zombie.

Well, given that it is a thought experiment, the resources are free,
and I can make the lucky cosmic explosion as lucky as you need for
making Alice apparently alive, and with COMP+MAT, indeed alive. All
her neurons break down all the time, and, because she is so lucky, an
event which occurred 10 billion years before sends to her, at just the
right moment and place (and thus this is certainly NOT random), the
lucky ray "plumber" which momentarily fixes the problem by triggering
the other neurons to which the broken neuron was supposed to send the info (for example).
Keeping COMP and MAT, making her unconscious here would be equivalent
to giving Alice's neurons a sort of physical prescience.


 But then Alice has her normal
 behavior restored by a cosmic ray shower that is just as improbable  
 as the
 accidental zombie, i.e. she is, for the duration of the shower, an  
 accidental
 zombie.


Well, with Telmo's solution of the MGA 1bis exercise, where only the
motor output neurons are fixed and where no internal neuron is fixed
(almost all the neurons), with MEC + MAT, Alice has no working brain at
all, is only a lucky puppet, and she has to be a zombie. But in the
original problem, all neurons are fixed, and then I would say Alice is
not a zombie (if not, you give a magical physical prescience to the
neurons).

But now, you are right that in both cases the luck can only be
accidental. If, in the same thought experiment, we keep the exact same
cosmic explosion, but now give a phone call to the teacher
or to Alice, so that she moves 1 mm away from the position she had in the
previous version, she will miss the lucky rays; most probably some
will go through in the wrong places, and most probably she will fail the
exam, and perhaps even die. So you are right: in Telmo's solution of the
MGA 1bis exercise she is an accidental zombie. But in the original
MGA 1, she should remain conscious (with MECH and MAT), even if
accidentally so.




 So I'm puzzled as to how answer Bruno's question.

Hope it is clear for everyone now?



  In general I don't believe in
 zombies, but that's in the same way I don't believe my glass of  
 water will
 freeze at 20degC.  It's an opinion about what is likely, not what is  
 possible.

OK. Accidental zombies are possible, but are very unlikely (but wait
for MGA 2 for a lessening of this statement).
Accidental consciousness (like in MGA 1, with MECH+MAT) is also possible,
and is just as unlikely (same remark).

Of course, however unlikely it is, nobody can test whether someone else
is really conscious or is an accidental zombie, because for any
series of tests you can imagine, you can conceive a sufficiently lucky
cosmic explosion.



 It seems similar to the question, could I have gotten in my car and  
 driven to
 the store, bought something, and driven back and yet not be  
 conscious of it.
 It's highly unlikely, yet people apparently have done such things.

(I think something different occurs here, concerning intensity of
attention with respect to different conscious streams, but it is
off-topic, I think).


Bruno


http://iridia.ulb.ac.be/~marchal/





Re: MGA 1

2008-11-20 Thread Bruno Marchal

On 19 Nov 2008, at 23:26, Jason Resch wrote:



 On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED]  
 wrote:

 On 19 Nov 2008, at 20:17, Jason Resch wrote:

 To add some clarification, I do not think spreading Alice's logic  
 gates across a field and allowing cosmic rays to cause each gate to  
 perform the same computations that they would had they existed in  
 her functioning brain would be conscious.  I think this because in  
 isolation the logic gates are not computing anything complex, only  
 AND, OR, NAND operations, etc.  This is why I believe rocks are not  
 conscious, the collisions of their molecules may be performing  
 simple computations, but they are never aggregated into complex  
 patterns to compute over a large set of information.


 Actually I agree with this argument. But it does not concern Alice,
 because I have provided her with an incredible amount of luck. The
 lucky rays fix the neurons in a genuine way (by that abnormally big
 amount of pure luck).

 If the cosmic rays are simply keeping her neurons working normally,  
 then I'm more inclined to believe she remains conscious, but I'm not  
 certain one way or the other.


I have no certainty either. But this, I feel, is related to my
instinctive, rather big uncertainty about the assumptions MECH and
MAT. Now if both MECH and MAT are, naively enough perhaps, assumed to
be completely true, I think I have no reason for not attributing
consciousness to Alice. If not, MECH breaks down, because I would have
to endow the neurons with some prescience. The physical activity is the
same, as far as it serves to instantiate a computation (cf the qua
computatio).




 If you doubt Alice remains conscious, how could you accept an
 experience of simple teleportation (UDA step 1 or 2)? If you can
 recover consciousness from a relative digital description, how could
 that consciousness distinguish between a recovery from a genuine
 description sent from earth (say), and a recovery from a description
 luckily generated by a random process?

 I believe consciousness can be recovered from a digital description,
 but I don't believe the description itself is conscious while being
 beamed from one teleporting station to the other.  I think it is
 only when the body/computer simulation is instantiated that
 consciousness can be recovered from the description.


I agree. No one said that the description was conscious. Only that
consciousness is related to a physical instantiation of a computation,
which unluckily breaks down all the time, but was fixed, at the genuine
places and moments, by an incredibly big (but finite) amount of luck
(assuming, consciously, MECH+MAT).




 Consider sending the description over an encrypted channel, without  
 the right decryption algorithm and key the description can't be  
 differentiated from random noise.  The same bits could be  
 interpreted entirely differently depending completely on how the  
 recipient uses it.  The meaning of the transmission is recovered  
 when it forms a system with complex relations, presumably the same  
 relations as the original one that was teleported, even though it  
 may be running on a different physical substrate, or a different  
 computer architecture.


No problem. I agree.




 I don't deny that a random process could be the source of a  
 transmission that resulted in the creation of a conscious being,  
 what I deny is that random *simple computations, lacking any causal  
 linkages, could form consciousness.



The way the lucky rays fixed Alice's neurons illustrates that they were
not random at all. That is why Alice is so lucky!





 * By simple I mean the types of computation done in discrete steps,  
 such as multiplication, addition, etc.  Those done by a single  
 neuron or a small collection of logic gates.

 If you recover from a description (comp), you cannot know if that  
 description has been generated by a computation or a random process,  
 unless you give some prescience to the logical gates. Keep in mind  
 we try to refute the conjunction MECH and MAT.

 Here I would say that consciousness is not correlated with the  
 physical description at any point in time, but rather the  
 computational history and flow of information, and that this is  
 responsible for the subjective experience of being Alice.  If  
 Alice's mind is described by a random process, albeit one which  
 gives the appearance of consciousness during her exam, she  
 nevertheless has no coherent computational history and her mind  
 contains no large scale informational structures.


If it was random, sure. But it was not. More will be said through MGA 2.




  The state machine that would represent her in the case of injection
 of random noise is a different state machine from the one that would
 represent her normally functioning brain.


Absolutely so.


Bruno
http://iridia.ulb.ac.be/~marchal/





Re: MGA 1 bis (exercise)

2008-11-20 Thread Bruno Marchal


On 20 Nov 2008, at 00:19, Telmo Menezes wrote:


 Could you alter the so-lucky cosmic explosion beam a little bit so
 that Alice still succeeds at her math exam, but is, reasonably enough, a
 zombie during the exam? With zombie taken in the traditional sense
 of
 Kory and Dennett.
 Of course you have to keep well *both*  MECH *and* MAT.

 I think I can...

 Instead of correcting the brain, the cosmic beams trigger output
 neurons in a sequence that makes Alice write the right answers. That
 is to say, the information content of the beams is no longer a
 representation of an area of Alice's brain, but a representation of
 the answers to the exam. An outside observer cannot distinguish one
 case from the other. In the first she is Alice, in the second she is a
 zombie.


Right.

I guess you see that such a zombie is an accidental zombie. We will  
have to come back later on this accidental part.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
 So I'm puzzled as to how answer Bruno's question.  In general I  
 don't believe in
 zombies, but that's in the same way I don't believe my glass of  
 water will
 freeze at 20degC.  It's an opinion about what is likely, not what is  
 possible.
 
 I take this to mean that you're uncomfortable with thought experiments  
 which revolve around logically possible but exceedingly unlikely  
 events. 

I think you really mean nomologically possible.  I'm not uncomfortable with
them, I just maintain a little skepticism.  For one thing, what is nomologically
possible or impossible is often reassessed.  Less than a century ago the
experimental results of Elitzur, Vaidman, Zeilinger, et al., on delayed choice,
interaction-free measurement, and other QM phenomena would all have been
dismissed in advance as logically impossible.

I think that's understandable, but ultimately, I'm on the  
 philosopher's side. It really is logically possible - although  
 exceedingly unlikely - for a random-number-generator to cause a robot  
 to walk around, talk to people, etc. It really is logically possible  
 for a computer program to use a random-number-generator to generate a  
 lattice of changing bits that follows Conway's Life rule. Mechanism  
 and materialism needs to answer questions about these scenarios,  
 regardless of how unlikely they are.

I don't disagree with that.  My puzzlement about how to answer Bruno's question 
comes from the ambiguity as to what we mean by a philosophical zombie.  Do we 
mean its outward actions are the same as a conscious person?  For how long? 
Under what circumstances?  I can easily make a robot that acts just like a 
sleeping person.  I think Dennett changes the question by referring to 
neurophysiological actions.  Does he suppose wetware can't be replaced by 
hardware?

In general, when I'm asked if I believe in philosophical zombies, I say no,
because I'm thinking that the zombie must outwardly behave like a conscious
person in all circumstances over an indefinite period of time, yet have no inner
experience.  I rule out an accidental zombie accomplishing this as too improbable
- not impossible.  In other words, if I were constructing a robot that had to act
as a conscious person would over a long period of time in a wide variety of
circumstances, I would have to build into the robot some kind of inner attention
module that selected what was important to remember, compressed it into a short
representation, and linked it to other memories.  And this would be an inner
narrative.  Similarly for the other inner processes.  I don't know if that's
really what it takes to build a conscious robot, but I'm pretty sure it's
something like that.  And I think once we understand how to do this, we'll stop
worrying about the "hard problem" of consciousness.  Instead we'll talk about
how efficient the inner narration module is, or the memory confabulation module,
or the visual imagination module.  Talk about consciousness will seem as quaint
as talk about the élan vital does now.

Brent


 
 -- Kory
 
 
  
 





Re: MGA 1

2008-11-20 Thread Bruno Marchal


On 20 Nov 2008, at 08:23, Kory Heath wrote:



 On Nov 18, 2008, at 11:52 AM, Bruno Marchal wrote:
 The last question (of MGA 1) is:  was Alice, in this case, a zombie
 during the exam?

 Of course, my personal answer would take into account the fact that I
 already have a problem with the materialist's idea of matter. But I
 think we're supposed to be considering the question in the context of
 mechanism and materialism. So I'll ask, what should a mechanist-
 materialist say about the state of Alice's consciousness during the
 exam?

 Maybe I'm jumping ahead, but I think this thought experiment creates a
 dilemma for the mechanist-materialist (which I think is Bruno's
 point). In contrast to many of the other responses in this thread, I
 don't think the mechanist-materialist should believe that Alice is
 conscious in the case when every gate has stopped functioning (but
 cosmic rays are randomly causing them to flip in the exact same way
 that they would have flipped if they were functioning). Alice is in
 that case functionally identical to a random-number generator. It
 shouldn't matter at all whether these cosmic rays are striking the
 broken gates in her head, or if the gates in her head are completely
 inert and the rays are striking the neurons in (say) her arms and her
 spinal cord, still causing her body to behave exactly as it would
 have without the breakdown. I agree with Telmo Menezes that the
 mechanist-materialist shouldn't view Alice as conscious in the latter
 case. But I don't think it's any different than the former case.


I am afraid you already suspect too much the contradictory
nature of MEC+MAT.
Take the reasoning as a game. Try to keep both MEC and MAT; the game
consists in showing as clearly as possible what will go wrong.
The goal is to help the others to understand, or to find an error
(fatal or fixable: in both cases we learn).




 It sounds like many people are under the impression that mechanism-
 materialism, with it's rejection of zombies, is committed to the view
 that Lucky Alice must be conscious, because she's behaviorally
 indistinguishable from the Alice with the correctly-functioning brain.

 But, in the sense that matters, Lucky Alice is *not* behaviorally
 indistinguishable from fully-functional Alice.

You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
original Alice, well I mean the one in MGA 1, is functionally
identical at the right level of description (actually she already has a
digital brain). The physical instantiation of a computation is
completely realized. No neuron can know that the info (correct and
at the right places) does not come from the relevant neurons, but from
a lucky beam.



 For the mechanist-
 materialist, everything physical counts as behavior. And there is a
 clear physical difference between the two Alices, which would be
 physically discoverable by a nearby scientist with the proper
 instruments.

But the physical difference does not play a role. If you invoke it,
how could you accept saying yes to a doctor, who introduces a bigger
difference?



 Lets imagine that, during the time that Alice's brain is broken but
 luckily acting as though it wasn't due to cosmic rays, someone
 throws a ball at Alice's head, and she (luckily) ducks out of the
 way. The mechanist-materialist may be happy to agree that she did
 indeed duck out of the way, since that's just a description of what
 her body did.

OK, for both the ALICE of Telmo's solution of MGA 1bis and the ALICE of MGA 1.


 But the mechanist-materialist can (and must) claim that
 Lucky Alice did not in fact respond to the ball at all.

Consciously or privately? Certainly not for the ALICE of MGA 1bis. But why
not for the ALICE of MGA 1? Please remember to try to naively, or candidly
enough, keep both MECH and MAT in mind. You are already reasoning
as if we were concluding some definitive things, but we are just
trying to build an argument. In the end, you will say "I knew it", but
the point is helping the others to know it too. Many here already have
the right intuition, I think. The point is to make that
intuition as communicable as possible.



 And that
 statement can be translated into pure physics-talk. The movements of
 Alice's body in this case are being caused by the cosmic rays. They
 are causally disconnected from the movements of the ball (except in
 the incidental way that the ball might be having some causal effect on
 the cosmic rays).


More on this after MGA 2. Hopefully tomorrow.



 When Alice's brain is working properly, her act of
 ducking *is* causally connected to the movement of the ball. And this
 kind of causal connection is an important part of what the mechanist-
 materialist means by consciousness.

Careful:  such kind of causality needs ... MAT.





 Dennett is able to - and in fact must - say that Alice is not
 conscious when all of her brain-gates are broken but very luckily
 being flipped by cosmic rays. When

Re: MGA 1

2008-11-20 Thread Bruno Marchal

Hi John,




 It boils down to my overall somewhat negative position (although
 I have no better one) of UDA, MPG, comp, etc. - all of them are
 products of HUMAN thinking and restrictions as WE can imagine
 the unfathomable existence (the totality - real TOE).
 I find it a 'cousin' of the reductionistic conventional sciences, just
 a bit 'freed up'. Maybe a distant cousin. Meaning: it handles the
 totality WITHIN the framework of our limited (human) logic(s).


I think that Human logic is already progress compared to Russian, or
Belgian, or Hungarian, or American logic, or ...

And then you know how much I agree with you, once you substitute
"human" by "lobian" (where a lobian machine/number is a universal
machine who knows she is universal, and bets she is a machine).


 Alas, we cannot do better.



I'm afraid so. Thanks for acknowledging.


  just want to take all this mental
 exercise with the grain of salt of there may be more to all of it


Sure. And if we take ourselves too seriously, we can miss the
ultimate cosmic divine joke (if there is one).




 what we cannot even fancy (imagine, fantasize of) today,
 with our mind anchored in our restrictions. (Including 'digital',
 'numbers', learned wisdom, etc.).


Be careful and be open to your own philosophy. The idea that digital
and numbers (the concept, not our human description of it) are
restrictions could be due to our human prejudice. Maybe a machine
could one day believe this is a form of unfounded prejudicial
exclusion.

I hope you don't mind my frank attitude, and I wish you the best,

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-20 Thread Bruno Marchal


On 19 Nov 2008, at 20:37, Michael Rosefield wrote:

 Are not logic gates black boxes, though? Does it really matter what  
 happens between Input and Output? In which case, it has absolutely  
 no bearing on Alice's consciousness whether the gate's a neuron, an  
 electronic doodah, a team of well-trained monkeys or a lucky quantum  
 event or synchronicity.


Good summary.



 It does not matter, really, where or when the actions of the gate  
 take place.


As far as they represent, physically or materially, the relevant  
computation, assuming MEC+MAT. OK.



MGA 2 will take one more step toward the idea that the materiality
cannot play a relevant part in the computation. I will try to do MGA 2
tomorrow. (It is 21h22m23s here, I mean 9h22m31s pm :).

I have to solve a conflict between two ways to make the MGA 2. If I  
don't succeed, I will make both.

Thanks for trying to understand,

Bruno




http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-20 Thread Jason Resch
On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED] wrote:




  The state machine that would represent her in the case of injection of
 random noise is a different state machine from the one that would represent her
 normally functioning brain.



 Absolutely so.



Bruno,

What about the state machine that includes the injection of lucky noise
from an outside source vs. one in which all information is derived
internally from the operation of the state machine itself?  Would those two
differently defined machines not compute something different?
 Even though the computations are identical, the information that is being
computed comes from different sources and so carries with it a different
connotation.  Though the bits injected are identical, they inherently
imply a different meaning, because the state machine in the case of injection
has a different structure than that of her normally operating brain.  I
believe the brain can be abstracted as a computer/information-processing
system, but it is not simply the computations and the inputs into the logic
gates at each step that are important, but also the source of the input
bits; otherwise the computation isn't the same.
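
A minimal sketch of this distinction, assuming Python (the transition rule and
all names here are illustrative, not from Jason's post): one run derives each
state from the machine's own transition function, the other simply adopts a
sequence of externally injected states. The state sequences can coincide, yet
the two systems are differently structured machines.

def internal_run(state, steps):
    # Each next state is computed from the current state by the machine's own rule.
    history = [state]
    for _ in range(steps):
        state = (state + 1) % 4   # a trivial transition function: count mod 4
        history.append(state)
    return history

def injected_run(injected_states):
    # The "machine" has no transition rule; it just adopts whatever is injected.
    return list(injected_states)

lawful = internal_run(0, 5)                # [0, 1, 2, 3, 0, 1], derived internally
lucky = injected_run([0, 1, 2, 3, 0, 1])   # the same sequence, supplied from outside
print(lawful == lucky)                      # True: identical states, different sources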

Jason




Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath


On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote:
 I think you really mean nomologically possible.

I mean logically possible, but I'm happy to change it to  
nomologically possible for the purposes of this conversation.

 I think Dennett changes the question by referring to
 neurophysiological actions.  Does he suppose wetware can't be  
 replaced by
 hardware?

No, he definitely argues that wetware can be replaced by hardware, as
long as the hardware retains the computational functionality of the
wetware.

 In general when I'm asked if I believe in philosophical zombies, I  
 say no,
 because I'm thinking that the zombie must outwardly behave like a  
 conscious
 person in all circumstances over an indefinite period of time, yet  
 have no inner
 experience.  I rule out an accidental zombie accomplishing this as  
 to improbable
 - not impossible.

I agree. But if you accept that it's nomologically possible for a  
robot with a random-number-generator in its head to outwardly behave  
like a conscious person in all circumstances over an indefinite period  
of time, then your theory of consciousness, one way or another, has to  
answer the question of whether or not this unlikely robot is  
conscious. Now, maybe your answer is "The question is misguided in
that case, and here's why..." But that's a significant burden.

-- Kory





Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote:
 I think you really mean nomologically possible.
 
 I mean logically possible, but I'm happy to change it to  
 nomologically possible for the purposes of this conversation.

Doesn't the question go away if it is nomologically impossible?

 
 I think Dennett changes the question by referring to
 neurophysiological actions.  Does he suppose wetware can't be  
 replaced by
 hardware?
 
 No, he definitely argues that wetware can be replaced by hardware, as
 long as the hardware retains the computational functionality of the  
 wetware.

But that's the catch. Computational functionality is a capacity, not a fact. 
Does a random number generator have computational functionality just in case it 
(accidentally) computes something?  I would say it does not.  But referring the 
concept of zombie to a capacity, rather than observed behavior, makes a 
difference in Bruno's question.

 
 In general when I'm asked if I believe in philosophical zombies, I  
 say no,
 because I'm thinking that the zombie must outwardly behave like a  
 conscious
 person in all circumstances over an indefinite period of time, yet  
 have no inner
 experience.  I rule out an accidental zombie accomplishing this as  
 to improbable
 - not impossible.
 
 I agree. But if you accept that it's nomologically possible for a  
 robot with a random-number-generator in its head to outwardly behave  
 like a conscious person in all circumstances over an indefinite period  
 of time, then your theory of consciousness, one way or another, has to  
 answer the question of whether or not this unlikely robot is  
 conscious. Now, maybe your answer is The question is misguided in  
 that case, and here's why... But that's a significant burden.

I would regard it as an empirical question about how the robot's brain worked. 
If the brain processed perceptual and memory data to produce the behavior, as 
in 
Jason's causal relations, I would say it is conscious in some sense (I think 
there are different kinds of consciousness, as evidenced by Bruno's list of 
first-person experiences).  If it were a random number generator, i.e. 
accidental behavior, I'd say not.  Observing the robot for some period of time, 
in some circumstances can provide strong evidence against the accidental 
hypothesis, but it cannot rule it out completely.

Brent




Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath


On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote:
 Doesn't the question go away if it is nomologically impossible?

I'm sort of the opposite of you on this issue. You don't like to use  
the term logically possible, while I don't like to use the term  
nomologically impossible. I don't see the relevance of nomological  
possibility to any philosophical question I'm interested in. For  
anything that's nomologically impossible, I can just imagine a  
cellular automaton or some other computational or mathematical  
physics in which that thing is nomologically possible. And then I  
can just imagine physically instantiating that universe on one of our  
real computers. And then all of my philosophical questions still apply.

I can certainly imagine objections to that viewpoint. But life is  
short. My point was that, since you already agreed that it's  
nomologically possible for a random robot to outwardly behave like a  
conscious person for some indefinite period of time, we can sidestep  
the (probably interesting) discussion we might have about nomological  
vs. logical possibility in this case.

 Does a random number generator have computational functionality just  
 in case it
 (accidentally) computes something?  I would say it does not.  But  
 referring the
 concept of zombie to a capacity, rather than observed behavior,  
 makes a
 difference in Bruno's question.

I think that Dennett explicitly refers to computational capacities  
when talking about consciousness (and zombies), and I follow him. But  
Dennett's point is that computational capacity is always, in  
principle, observed behavior - or, at least, behavior that can be  
observed. In the case of Lucky Alice, if you had the right tools, you  
could examine the neurons and see - based on how they were behaving! -  
that they were not causally connected to each other. (The fact that a  
neuron is being triggered by a cosmic ray rather than by a neighboring  
neuron is an observable part of its behavior.) That observed behavior  
would allow you to conclude that this brain does not have the  
computational capacity to compute the answers to a math test, or to  
compute the trajectory of a ball.
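A toy version of that observation (the two "neurons" and the ray are invented): intervene on the upstream signal and check whether the downstream element co-varies with it.

def connected_neuron(upstream):
    # Fires as a function of its upstream neighbour.
    return 1 if upstream > 0 else 0

def ray_driven_neuron(upstream, ray):
    # Ignores its neighbour; fires only when a cosmic ray happens to arrive.
    return ray

for upstream in (0, 1):
    print("upstream =", upstream,
          "connected:", connected_neuron(upstream),
          "ray-driven:", ray_driven_neuron(upstream, ray=1))
# Only the connected neuron's output changes with the intervention, which is
# the kind of behavioural difference an instrumented observer could detect.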

 I would regard it as an empirical question about how the robot's  
 brain worked.
 If the brain processed perceptual and memory data to produce the  
 behavior, as in
 Jason's causal relations, I would say it is conscious in some sense  
 (I think
 there are different kinds of consciousness, as evidenced by Bruno's  
 list of
 first-person experiences).  If it were a random number generator, i.e.
 accidental behavior, I'd say not.

I agree. But why do you say you're puzzled about how to answer Bruno's  
question about Lucky Alice? I think you just answered it - for you,  
Lucky Alice wouldn't be conscious. (Or do you think that Lucky Alice  
is different than a robot with a random-number-generator in its head?  
I don't.)

-- Kory





Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote:
 Doesn't the question go away if it is nomologically impossible?
 
 I'm sort of the opposite of you on this issue. You don't like to use  
 the term logically possible, while I don't like to use the term  
 nomologically impossible. I don't see the relevance of nomological  
 possibility to any philosophical question I'm interested in. For  
 anything that's nomologically impossible, I can just imagine a  
 cellular automaton or some other computational or mathematical  
 physics in which that thing is nomologically possible. And then I  
 can just imagine physically instantiating that universe on one of our  
 real computers. And then all of my philosophical questions still apply.
 
 I can certainly imagine objections to that viewpoint. But life is  
 short. My point was that, since you already agreed that it's  
 nomologically possible for a random robot to outwardly behave like a  
 conscious person for some indefinite period of time, we can sidestep  
 the (probably interesting) discussion we might have about nomological  
 vs. logical possibility in this case.
 
 Does a random number generator have computational functionality just  
 in case it
 (accidentally) computes something?  I would say it does not.  But  
 referring the
 concept of zombie to a capacity, rather than observed behavior,  
 makes a
 difference in Bruno's question.
 
 I think that Dennett explicitly refers to computational capacities  
 when talking about consciousness (and zombies), and I follow him. But  
 Dennett's point is that computational capacity is always, in  
 principle, observed behavior - or, at least, behavior that can be  
 observed. In the case of Lucky Alice, if you had the right tools, you  
 could examine the neurons and see - based on how they were behaving! -  
 that they were not causally connected to each other. (The fact that a  
 neuron is being triggered by a cosmic ray rather than by a neighboring  
 neuron is an observable part of its behavior.) That observed behavior  
 would allow you to conclude that this brain does not have the  
 computational capacity to compute the answers to a math test, or to  
 compute the trajectory of a ball.
 
 I would regard it as an empirical question about how the robot's  
 brain worked.
 If the brain processed perceptual and memory data to produce the  
 behavior, as in
 Jason's causal relations, I would say it is conscious in some sense  
 (I think
 there are different kinds of consciousness, as evidenced by Bruno's  
 list of
 first-person experiences).  If it were a random number generator, i.e.
 accidental behavior, I'd say not.
 
 I agree. But why do you say you're puzzled about how to answer Bruno's  
 question about Lucky Alice? I think you just answered it - for you,  
 Lucky Alice wouldn't be conscious. (Or do you think that Lucky Alice  
 is different than a robot with a random-number-generator in its head?  
 I don't.)

I think Alice is different.  She has the capacity to be conscious.  This is 
potentially, temporarily interrupted by some mysterious failure of gates (or 
neurons) in her brain - but wait, these failures are serendipitously canceled 
out by a burst of cosmic rays, so they all get the same input/output as if 
nothing had happened.  So, functionally, it's as if the gates didn't fail at 
all.  This functionality is beyond external behavior; it includes forming 
memories, paying attention, etc.  Of course we may say it is not causally 
related to Alice's environment, but this depends on a certain theory of 
causality, a physical theory.  If the cosmic rays exactly replace all the gate 
functions to maintain the same causal chains, then from an informational 
perspective we might say the rays were caused by the relations to her 
environment.

Brent




Re: MGA 1

2008-11-19 Thread Bruno Marchal


On 19 Nov 2008, at 07:13, Russell Standish wrote:


 I think Alice was indeed not a zombie,


I think you are right.
COMP + MAT implies Alice (in this setting) is not a zombie.



 and that her consciousness
 supervened on the physical activity stimulating her output gates (the
 cosmic explosion that produced the happy rays). Are you suggesting
 that she was a zombie?


Not at all.   (Not yet ...).




 I can see the connection with Tim Maudlin's argument, but in his case,
 the machinery known as Olympia is too simple to be conscious (being
 nothing more than a recording - simpler than most automata anyway),
 and the machinery known as Klara was in fact stationary, leading to a
 rather absurd proposition that consciousness would depend on a
 difference in an inactive machine.

 In your case, the cosmic explosion is far from inactive,



This makes the movie graph argument immune to the first half of 
Barnes' objection. But let us not anticipate the sequel.





 and if a star
 blew up in just such a way that its cosmic rays produced identical
 behaviour to Alice taking her exam (consciously), I have no problems
 in considering her consciousness as having supervened on the cosmic
 rays travelling from that star for that instant. It is no different to
 the proverbial tornado ripping through one of IBM's junk yards and
 miraculously assembling a conscious computer by chance.


Does everyone accept, like Russell, that, assuming COMP and MAT, Alice 
is not a zombie? I mean, does anyone object? Remember we are 
proving implications: MAT+MECH => something. We never try to argue 
about that something per se. Eventually we hope to prove MAT+MECH => 
false, that is NOT(MAT & MECH), which is equivalent to MAT implies NOT 
MECH, MECH => NOT MAT, etc.
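Restated in standard propositional notation (the same strategy as the sentence above, nothing added):

\neg(\mathrm{MAT} \land \mathrm{MECH}) \;\equiv\; (\mathrm{MAT} \Rightarrow \neg\mathrm{MECH}) \;\equiv\; (\mathrm{MECH} \Rightarrow \neg\mathrm{MAT})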

(By MAT I mean materialism, or naturalism, or physicalism, or more 
generally the physical supervenience thesis, according to which 
consciousness supervenes on the physical activity of the brain.)

If no one objects, I will present MGA 2 (soon).





 Of course you know my opinion that the whole argument changes once you
 consider the thought experiment taking place in a multiverse.


We will see (let us go step by step so as not to confuse the audience). 
Thanks for answering.


Bruno Marchal


http://iridia.ulb.ac.be/~marchal/





Re: MGA 1

2008-11-19 Thread Telmo Menezes

Bruno,

 If no one objects, I will present MGA 2 (soon).

I also agree completely and am curious to see where this is going.
Please continue!

Cheers,
Telmo Menezes.




Re: MGA 1

2008-11-19 Thread Gordon Tsai
Bruno:
 
   I'm interested to see the second part. Thanks!

--- On Wed, 11/19/08, Bruno Marchal [EMAIL PROTECTED] wrote:

From: Bruno Marchal [EMAIL PROTECTED]
Subject: Re: MGA 1
To: [EMAIL PROTECTED]
Date: Wednesday, November 19, 2008, 3:59 AM


On 19 Nov 2008, at 07:13, Russell Standish wrote:


 I think Alice was indeed not a zombie,


I think you are right.
COMP + MAT implies Alice (in this setting) is not a zombie.



 and that her consciousness
 supervened on the physical activity stimulating her output gates (the
 cosmic explosion that produced the happy rays). Are you
suggesting
 that she was a zombie?


Not at all.   (Not yet ...).




 I can see the connection with Tim Maudlin's argument, but in his case,
 the machinery known as Olympia is too simple to be conscious (being
 nothing more than a recording - simpler than most automata anyway),
 and the machinery known as Klara was in fact stationary, leading to a
 rather absurd proposition that consciousness would depend on a
 difference in an inactive machine.

 In your case, the cosmic explosion is far from inactive,



This makes the movie graph argument immune against the first half of 
Barnes objection. But let us not anticipate on the sequel.





 and if a star
 blew up in just such a way that its cosmic rays produced identical
 behaviour to Alice taking her exam (consciously), I have no problems
 in considering her consciousness as having supervened on the cosmic
 rays travelling from that star for that instant. It is no different to
 the proverbial tornado ripping through one of IBM's junk yards and
 miraculously assembling a conscious computer by chance.


Does everyone accept, like Russell,  that, assuming COMP and MAT, Alice 
is not a zombie? I mean, is there someone who object? Remember we are 
proving implications: MAT+MECH => something. We never try to argue 
about that something per se. Eventually we hope to prove MAT+MECH =>
false, that is NOT(MAT & MECH), which is equivalent to MAT implies NOT 
MECH, MECH => NOT MAT, etc.

(by MAT i mean materialism, or naturalism, or physicalism or more 
generally the physical supervenience thesis, according to which 
consciousness supervenes on the physical activity of the brain.

If no one objects, I will present MGA 2 (soon).





 Of course you know my opinion that the whole argument changes once you
 consider the thought experiment taking place in a multiverse.


We will see (let us go step by step for not confusing the audience). 
Thanks for answering.


Bruno Marchal


http://iridia.ulb.ac.be/~marchal/






  



Re: MGA 1

2008-11-19 Thread Jason Resch
To add some clarification, I do not think that Alice's logic gates, spread
across a field with cosmic rays causing each gate to perform the same
computations it would have performed in her functioning brain,
would be conscious.  I think this because in isolation the logic gates are
not computing anything complex, only AND, OR, NAND operations, etc.  This is
why I believe rocks are not conscious: the collisions of their molecules may
be performing simple computations, but they are never aggregated into
complex patterns to compute over a large set of information.
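A toy sketch of that contrast (invented gates and inputs): the same XOR/AND/OR operations evaluated in isolation compute three unrelated bits, while aggregated into a circuit they compute a full adder over a larger set of information.

def xor(a, b): return a ^ b
def and_(a, b): return a & b
def or_(a, b): return a | b

# Isolated gates: each sees only its own pair of bits.
isolated = [xor(1, 0), and_(1, 1), or_(0, 0)]

def full_adder(a, b, carry_in):
    # The same gate types, but outputs of some gates feed inputs of others.
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return total, carry_out

print(isolated)             # three unrelated bits
print(full_adder(1, 1, 1))  # (1, 1): binary for 1 + 1 + 1 = 3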
Jason

On Wed, Nov 19, 2008 at 12:50 PM, Jason Resch [EMAIL PROTECTED] wrote:



 On Wed, Nov 19, 2008 at 5:59 AM, Bruno Marchal [EMAIL PROTECTED] wrote:



 Does everyone accept, like Russell,  that, assuming COMP and MAT, Alice
 is not a zombie? I mean, is there someone who object? Remember we are
 proving implications: MAT+MECH => something. We never try to argue
 about that something per se. Eventually we hope to prove MAT+MECH =>
 false, that is NOT(MAT & MECH), which is equivalent to MAT implies NOT
 MECH, MECH => NOT MAT, etc.

 (by MAT i mean materialism, or naturalism, or physicalism or more
 generally the physical supervenience thesis, according to which
 consciousness supervenes on the physical activity of the brain.


 Bruno, I am on the fence as to whether or not Alice is a Zombie.  The
 argument for her not being conscious is related to the non-causal effect of
 information in this scenario.  A string of 1's and 0's which is simply
 defined out of nowhere, in my opinion cannot contain conscious observers,
 even if it could be considered to encode brain states of conscious observers or
 a universe with conscious observers.  To have meaningful information there
 must be relations between objects, such as the flow of information in the
 succession of states in a Turing machine.  In the case of Alice, the
 information coming from the cosmic rays is meaningless, and might as well
 have occurred in isolation.  If all of Alice's logic gates had been spread
 over a field, and made to fire in the same way due to cosmic rays and if all
 logic gates remained otherwise disconnected from each other, would anyone
 consider this field of logic gates to be conscious?

 I have an idea that consciousness is related to hierarchies of information,
 at the lowest levels of neural activity, simple computations of small
 amounts of information combine information into a result, and then these
 higher level results are passed up to higher levels of processing, etc.  For
 example the red/green/blue data from the eyes are combined into single
 pixels, these pixels are combined into a field of colors, this field of
 colors is then processed by object classification sections of the brain.  So
 my argument that Alice might not be conscious would be related to the
 skipping of steps through the injection of information which is empty (not
 having been computed from lower level sets of information and hence not
 actually conveying any information).
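A rough sketch of that hierarchy (levels, thresholds, and labels are all invented): channel readings are combined into pixels, pixels into a field, and the field handed to a crude classifier.

def combine_rgb(r, g, b):
    # Lowest level: merge three channel readings into one pixel value.
    return (r + g + b) / 3.0

def build_field(rgb_rows):
    # Middle level: combine pixels into a field of brightness values.
    return [[combine_rgb(*px) for px in row] for row in rgb_rows]

def classify(field):
    # Highest level: a toy stand-in for object classification over the field.
    mean = sum(sum(row) for row in field) / (len(field) * len(field[0]))
    return "bright object" if mean > 0.5 else "dark object"

raw = [[(0.9, 0.8, 0.7), (0.6, 0.5, 0.9)],
       [(0.2, 0.3, 0.1), (0.8, 0.9, 0.7)]]
print(classify(build_field(raw)))
# Skipping a level - injecting the "field" from nowhere - leaves nothing that
# was actually computed from the level below.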

 Jason






Re: MGA 1

2008-11-19 Thread Bruno Marchal

On 19 Nov 2008, at 20:17, Jason Resch wrote:

 To add some clarification, I do not think spreading Alice's logic  
 gates across a field and allowing cosmic rays to cause each gate to  
 perform the same computations that they would had they existed in  
 her functioning brain would be conscious.  I think this because in  
 isolation the logic gates are not computing anything complex, only  
 AND, OR, NAND operations, etc.  This is why I believe rocks are not  
 conscious, the collisions of their molecules may be performing  
 simple computations, but they are never aggregated into complex  
 patterns to compute over a large set of information.


Actually I agree with this argument. But it does not concern Alice,  
because I have provided her with an incredible amount of luck. The  
lucky rays fix the neurons in a genuine way (by that abnormally big  
amount of pure luck).
If you doubt that Alice remains conscious, how could you accept an  
experience of simple teleportation (UDA step 1 or 2)? If you can  
recover consciousness from a relative digital description, how could  
that consciousness distinguish between a recovery from a genuine  
description sent from earth (say), and a recovery from a description  
luckily generated by a random process? If you recover from a  
description (comp), you cannot know if that description has been  
generated by a computation or a random process, unless you give some  
prescience to the logical gates. Keep in mind we try to refute the  
conjunction MECH and MAT.

Nevertheless your intuition below is mainly correct, but the point is  
that accepting it really works, AND keeping MECH, will force us to  
negate MAT.

Bruno





 Jason

 On Wed, Nov 19, 2008 at 12:50 PM, Jason Resch [EMAIL PROTECTED]  
 wrote:


 On Wed, Nov 19, 2008 at 5:59 AM, Bruno Marchal [EMAIL PROTECTED]  
 wrote:


 Does everyone accept, like Russell,  that, assuming COMP and MAT,  
 Alice
 is not a zombie? I mean, is there someone who object? Remember we are
 proving implications: MAT+MECH => something. We never try to argue
 about that something per se. Eventually we hope to prove MAT+MECH =>
 false, that is NOT(MAT & MECH), which is equivalent to MAT implies NOT
 MECH, MECH => NOT MAT, etc.

 (by MAT i mean materialism, or naturalism, or physicalism or more
 generally the physical supervenience thesis, according to which
 consciousness supervenes on the physical activity of the brain.

 Bruno, I am on the fence as to whether or not Alice is a Zombie.   
 The argument for her not being conscious is related to the non  
 causal effect of information in this scenario.  A string of 1's and  
 0's which is simply defined out of nowhere, in my opinion cannot  
 contain conscious observers, even if it could be considered to  
 encode brain states conscious observers or a universe with conscious  
 observers.  To have meaningful information there must be relations  
 between objects, such as the flow of information in the succession  
 of states in a Turing machine.  In the case of Alice, the  
 information coming from the cosmic rays is meaningless, and might as  
 well have occurred in isolation.  If all of Alice's logic gates had  
 been spread over a field, and made to fire in the same way due to  
 cosmic rays and if all logic gates remained otherwise disconnected  
 from each other, would anyone consider this field of logic gates be  
 conscious?

 I have an idea that consciousness is related to hierarchies of  
 information, at the lowest levels of neural activity, simple  
 computations of small amounts of information combine information  
 into a result, and then these higher level results are passed up to  
 higher levels of processing, etc.  For example the red/green/blue  
 data from the eyes are combined into single pixels, these pixels are  
 combined into an field of colors, this field of colors is then  
 processed by object classification sections of the brain.  So my  
 argument that Alice might not be conscious would be related to the  
 skipping of steps through the injection of information which is  
 empty (not having been computed from lower level sets of  
 information and hence not actually conveying any information).

 Jason



 

http://iridia.ulb.ac.be/~marchal/







MGA 1 bis (exercise)

2008-11-19 Thread Bruno Marchal


On 19 Nov 2008, at 16:06, Telmo Menezes wrote:



 Bruno,

 If no one objects, I will present MGA 2 (soon).

 I also agree completely and am curious to see where this is going.
 Please continue!


Thanks Telmo, thanks also to Gordon.

I will try to send MGA 2 asap. But this will take me some time. Meanwhile I  
suggest a little exercise, which, by the way, finishes the proof of  
MECH + MAT implies false, for those who think that there are no  
(conceivable) zombies (they think that "zombies exist" *is* false).

Exercise (MAT+MECH implies zombies exist or are conceivable):

Could you alter the so-lucky cosmic explosion beam a little bit so  
that Alice still succeeds her math exam, but is, reasonably enough, a  
zombie during the exam, with zombie taken in the traditional sense of  
Kory and Dennett.
Of course you have to keep *both* MECH *and* MAT.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 1 bis (exercise)

2008-11-19 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 19 Nov 2008, at 16:06, Telmo Menezes wrote:
 
 
 Bruno,

 If no one objects, I will present MGA 2 (soon).
 I also agree completely and am curious to see where this is going.
 Please continue!
 
 
 Thanks Telmo, thanks also to Gordon.
 
 I will try to send MGA 2 asap. But this asks me some time. Meanwhile I  
 suggest a little exercise, which, by the way, finishes the proof of  
 MECH + MAT implies false, for those who thinks that there is no  
 (conceivable) zombies. (they think that exists zombie *is* false).
 
 Exercise (mat+mec implies zombie exists or are conceivable):
 
 Could you alter the so-lucky cosmic explosion beam a little bit so  
 that Alice still succeed her math exam, but is, reasonably enough, a  
 zombie  during the exam. With zombie taken in the traditional sense of  
 Kory and Dennett.
 Of course you have to keep well *both*  MECH *and* MAT.
 
 Bruno

As I understand it a philosophical zombie is someone who looks and acts just 
like a conscious person but isn't conscious, i.e. has no inner narrative. 
Time and circumstance play a part in this.  As Bruno pointed out a cardboard 
cutout of a person's photograph could be a zombie for a moment.  I assume the 
point of the exam is that an exam is long enough in duration and complex enough 
that it rules out the accidental, cutout zombie.  But then Alice has her normal 
behavior restored by a cosmic ray shower that is just as improbable as the 
accidental zombie, i.e. she is, for the duration of the shower, an accidental 
zombie.

So I'm puzzled as to how to answer Bruno's question.  In general I don't believe 
in zombies, but that's in the same way I don't believe my glass of water will 
freeze at 20°C.  It's an opinion about what is likely, not what is possible. 
It seems similar to the question: could I have gotten in my car and driven to 
the store, bought something, and driven back, and yet not been conscious of it? 
It's highly unlikely, yet people apparently have done such things.

Brent




Re: MGA 1

2008-11-19 Thread Jason Resch
On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED] wrote:


 On 19 Nov 2008, at 20:17, Jason Resch wrote:

 To add some clarification, I do not think spreading Alice's logic gates
 across a field and allowing cosmic rays to cause each gate to perform the
 same computations that they would had they existed in her functioning brain
 would be conscious.  I think this because in isolation the logic gates are
 not computing anything complex, only AND, OR, NAND operations, etc.  This is
 why I believe rocks are not conscious, the collisions of their molecules may
 be performing simple computations, but they are never aggregated into
 complex patterns to compute over a large set of information.



 Actually I agree with this argument. But it does not concern Alice, because
 I have provided her with an incredible amount of luck. The lucky rays fix
 the neurons in a genuine way (by that abnormally big amount of pure luck).


If the cosmic rays are simply keeping her neurons working normally, then I'm
more inclined to believe she remains conscious, but I'm not certain one way
or the other.



 If you doubt Alice remain conscious, how could you accept an experience of
 simple teleportation (UDA step 1 or 2). If you can recover consciousness
 from a relative digital description, how could that consciousness
 distinguish between a recovery from a genuine description send from earth
 (say), and a recovery from a description luckily generated by a random
 process?


I believe consciousness can be recovered from a digital description, but I
don't believe the description itself is conscious while being beamed from
one teleporting station to the other.  I think it is only when the
body/computer simulation is instantiated that consciousness can be recovered from
the description.

Consider sending the description over an encrypted channel: without the
right decryption algorithm and key, the description can't be differentiated
from random noise.  The same bits could be interpreted entirely differently
depending completely on how the recipient uses it.  The meaning of the
transmission is recovered when it forms a system with complex relations,
presumably the same relations as the original one that was teleported, even
though it may be running on a different physical substrate, or a different
computer architecture.
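A minimal sketch of the encryption point, using a toy XOR keystream (key, payload, and construction are invented, not a real protocol): without the key the transmitted bytes carry no recoverable structure, and the same bits mean nothing until the recipient instantiates them correctly.

import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (toy construction).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(data: bytes, stream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream))

description = b"stand-in for a brain description"
key = b"shared secret"

ciphertext = xor_bytes(description, keystream(key, len(description)))
print(ciphertext)                                                       # looks like noise
print(xor_bytes(ciphertext, keystream(key, len(ciphertext))))           # recovered
print(xor_bytes(ciphertext, keystream(b"wrong key", len(ciphertext))))  # still noise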

I don't deny that a random process could be the source of a transmission
that resulted in the creation of a conscious being, what I deny is that
random *simple computations, lacking any causal linkages, could form
consciousness.

* By simple I mean the types of computation done in discrete steps, such as
multiplication, addition, etc.  Those done by a single neuron or a small
collection of logic gates.

If you recover from a description (comp), you cannot know if that
 description has been generated by a computation or a random process, unless
 you give some prescience to the logical gates. Keep in mind we try to refute
 the conjunction MECH and MAT.


Here I would say that consciousness is not correlated with the physical
description at any point in time, but rather with the computational history and
flow of information, and that this is responsible for the subjective
experience of being Alice.  If Alice's mind is described by a random
process, albeit one which gives the appearance of consciousness during her
exam, she nevertheless has no coherent computational history and her mind
contains no large-scale informational structures.  The state machine that
would represent her in the case of injection of random noise is a different
state machine from the one that would represent her normally functioning brain.
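A toy sketch of the two state machines (states, stimuli, and the injection schedule are invented): one machine's next state depends on its current state and input, the other simply takes whatever is injected; the visible state history can coincide while the transition structure differs.

def normal_brain(state, stimulus):
    # Next state is a function of the current state and the stimulus.
    return (state + stimulus) % 4

def noise_driven_brain(state, stimulus, injected):
    # Next state is whatever the injection says; state and stimulus are ignored.
    return injected

stimuli = [1, 3, 2, 1]
normal_states, noisy_states = [0], [0]
for s in stimuli:
    normal_states.append(normal_brain(normal_states[-1], s))
    # The "lucky" injection happens to supply exactly the states above:
    noisy_states.append(noise_driven_brain(noisy_states[-1], s, normal_states[-1]))

print(normal_states, noisy_states, normal_states == noisy_states)
# Same visible history, but only the first machine's transitions encode a
# computational history.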

Jason



 Nevertheless your intuition below is mainly correct, but the point is that
 accepting it really works, AND keeping MECH, will force us to negate MAT.

 Bruno





 Jason

 On Wed, Nov 19, 2008 at 12:50 PM, Jason Resch [EMAIL PROTECTED]wrote:



 On Wed, Nov 19, 2008 at 5:59 AM, Bruno Marchal [EMAIL PROTECTED] wrote:



 Does everyone accept, like Russell,  that, assuming COMP and MAT, Alice
 is not a zombie? I mean, is there someone who object? Remember we are
 proving implications: MAT+MECH => something. We never try to argue
 about that something per se. Eventually we hope to prove MAT+MECH =>
 false, that is NOT(MAT & MECH), which is equivalent to MAT implies NOT
 MECH, MECH => NOT MAT, etc.

 (by MAT i mean materialism, or naturalism, or physicalism or more
 generally the physical supervenience thesis, according to which
 consciousness supervenes on the physical activity of the brain.


 Bruno, I am on the fence as to whether or not Alice is a Zombie.  The
 argument for her not being conscious is related to the non causal effect of
 information in this scenario.  A string of 1's and 0's which is simply
 defined out of nowhere, in my opinion cannot contain conscious observers,
 even if it could be considered to encode brain states conscious observers or
 a universe with conscious observers.  To have meaningful 

Re: MGA 1

2008-11-19 Thread Brent Meeker

Jason Resch wrote:
 
 
 On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
 
 
 On 19 Nov 2008, at 20:17, Jason Resch wrote:
 
 To add some clarification, I do not think spreading Alice's logic
 gates across a field and allowing cosmic rays to cause each gate
 to perform the same computations that they would had they existed
 in her functioning brain would be conscious.  I think this because
 in isolation the logic gates are not computing anything complex,
 only AND, OR, NAND operations, etc.  This is why I believe rocks
 are not conscious, the collisions of their molecules may be
 performing simple computations, but they are never aggregated into
 complex patterns to compute over a large set of information.
 
 
 Actually I agree with this argument. But it does not concern Alice,
 because I have provided her with an incredible amount of luck. The
 lucky rays fix the neurons in a genuine way (by that abnormally big
 amount of pure luck). 
 
 
 If the cosmic rays are simply keeping her neurons working normally, then 
 I'm more inclined to believe she remains conscious, but I'm not certain 
 one way or the other.
 
  
 
 If you doubt Alice remain conscious, how could you accept an
 experience of simple teleportation (UDA step 1 or 2). If you can
 recover consciousness from a relative digital description, how could
 that consciousness distinguish between a recovery from a genuine
 description send from earth (say), and a recovery from a description
 luckily generated by a random process?
 
 
 I believe consciousness can be recovered from a digital description, but 
 I don't believe the description itself is conscious while being beamed 
 from one teleporting station to the other.  I think it is only when the 
 body/computer simulation is instantiated can consciousness recovered 
 from the description.
 
 Consider sending the description over an encrypted channel, without the 
 right decryption algorithm and key the description can't be 
 differentiated from random noise.  The same bits could be interpreted 
 entirely differently depending completely on how the recipient uses it. 
  The meaning of the transmission is recovered when it forms a system 
 with complex relations, presumably the same relations as the original 
 one that was teleported, even though it may be running on a different 
 physical substrate, or a different computer architecture.

Right.  That's why I think that a simulation instantiating a conscious being 
would have to include a lot of environment and the being would only be 
conscious 
*relative to that environment*.  I think it is an interesting empirical 
question 
whether a person can be conscious with no interaction with their environment. 
It appears that it is possible for short periods of time, but I once read that 
in sensory deprivation experiments the subjects' minds would go into a loop 
after 
a couple of hours.  Is that still being conscious?

Brent Meeker

 
 I don't deny that a random process could be the source of a transmission 
 that resulted in the creation of a conscious being, what I deny is that 
 random *simple computations, lacking any causal linkages, could form 
 consciousness.
  
 * By simple I mean the types of computation done in discrete steps, such 
 as multiplication, addition, etc.  Those done by a single neuron or a 
 small collection of logic gates.
 
 If you recover from a description (comp), you cannot know if that
 description has been generated by a computation or a random process,
 unless you give some prescience to the logical gates. Keep in mind
 we try to refute the conjunction MECH and MAT.
 
 
 Here I would say that consciousness is not correlated with the physical 
 description at any point in time, but rather the computational history 
 and flow of information, and that this is responsible for the subjective 
 experience of being Alice.  If Alice's mind is described by a random 
 process, albeit one which gives the appearance of consciousness during 
 her exam, she nevertheless has no coherent computational history and her 
 mind contains no large scale informational structures.  The state 
 machine that would represent her in the case of injection of random 
 noise is a different state machine that would represent her normally 
 functioning brain. 
 
 Jason
  
 
 
 Nevertheless your intuition below is mainly correct, but the point
 is that accepting it really works, AND keeping MECH, will force us
 to negate MAT.
 
 Bruno
 
 
 
 

 Jason

 On Wed, Nov 19, 2008 at 12:50 PM, Jason Resch
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:



 On Wed, Nov 19, 2008 at 5:59 AM, Bruno Marchal
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:



 Does everyone accept, like Russell,  that, assuming COMP
 and MAT, Alice
 is not a 

Re: MGA 1 bis (exercise)

2008-11-19 Thread Telmo Menezes

 Could you alter the so-lucky cosmic explosion beam a little bit so
 that Alice still succeed her math exam, but is, reasonably enough, a
 zombie  during the exam. With zombie taken in the traditional sense of
 Kory and Dennett.
 Of course you have to keep well *both*  MECH *and* MAT.

I think I can...

Instead of correcting the brain, the cosmic beams trigger output
neurons in a sequence that makes Alice write the right answers. That
is to say, the information content of the beams is no longer a
representation of an area of Alice's brain, but a representation of
the answers to the exam. An outside observer cannot distinguish one
case from the other. In the first she is Alice, in the second she is a
zombie.
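A toy sketch of the two cases (problem, answer, and functions invented): in the first, the beam restores an internal gate and the answer is still derived inside the brain; in the second, the beam carries the answer itself and nothing inside computes it.

def solve(problem):
    # Stand-in for Alice's brain actually working the problem out.
    return problem * 2

def exam_with_corrected_gates(problem):
    # Beam repairs the broken gate, so the internal computation still happens.
    return solve(problem)

def exam_with_answer_beam(problem, beamed_answer):
    # Beam encodes the answer; the "brain" contributes nothing.
    return beamed_answer

problem = 21
print(exam_with_corrected_gates(problem))   # 42, computed internally
print(exam_with_answer_beam(problem, 42))   # 42, merely written down
# An external examiner sees identical answer sheets in both cases.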

Telmo.




Re: MGA 1

2008-11-19 Thread Kory Heath


On Nov 18, 2008, at 11:52 AM, Bruno Marchal wrote:
 The last question (of MGA 1) is:  was Alice, in this case, a zombie
 during the exam?

Of course, my personal answer would take into account the fact that I  
already have a problem with the materialist's idea of matter. But I  
think we're supposed to be considering the question in the context of  
mechanism and materialism. So I'll ask, what should a mechanist- 
materialist say about the state of Alice's consciousness during the  
exam?

Maybe I'm jumping ahead, but I think this thought experiment creates a  
dilemma for the mechanist-materialist (which I think is Bruno's  
point). In contrast to many of the other responses in this thread, I  
don't think the mechanist-materialist should believe that Alice is  
conscious in the case when every gate has stopped functioning (but  
cosmic rays are randomly causing them to flip in the exact same way  
that they would have flipped if they were functioning). Alice is in  
that case functionally identical to a random-number generator. It  
shouldn't matter at all whether these cosmic rays are striking the  
broken gates in her head, or if the gates in her head are completely  
inert and the rays are striking the neurons in (say) her arms and her  
spinal cord, still causing her body to behave exactly as it would  
have without the breakdown. I agree with Telmo Menezes that the  
mechanist-materialist shouldn't view Alice as conscious in the latter  
case. But I don't think it's any different than the former case.

It sounds like many people are under the impression that mechanism- 
materialism, with its rejection of zombies, is committed to the view  
that Lucky Alice must be conscious, because she's behaviorally  
indistinguishable from the Alice with the correctly-functioning brain.  
But, in the sense that matters, Lucky Alice is *not* behaviorally  
indistinguishable from fully-functional Alice. For the mechanist- 
materialist, everything physical counts as behavior. And there is a  
clear physical difference between the two Alices, which would be  
physically discoverable by a nearby scientist with the proper  
instruments.

Let's imagine that, during the time that Alice's brain is broken but  
luckily acting as though it wasn't due to cosmic rays, someone  
throws a ball at Alice's head, and she (luckily) ducks out of the  
way. The mechanist-materialist may be happy to agree that she did  
indeed duck out of the way, since that's just a description of what  
her body did. But the mechanist-materialist can (and must) claim that  
Lucky Alice did not in fact respond to the ball at all. And that  
statement can be translated into pure physics-talk. The movements of  
Alice's body in this case are being caused by the cosmic rays. They  
are causally disconnected from the movements of the ball (except in  
the incidental way that the ball might be having some causal effect on  
the cosmic rays). When Alice's brain is working properly, her act of  
ducking *is* causally connected to the movement of the ball. And this  
kind of causal connection is an important part of what the mechanist- 
materialist means by consciousness.

Dennett is able to - and in fact must - say that Alice is not  
conscious when all of her brain-gates are broken but very luckily  
being flipped by cosmic rays. When Dennett says that someone is  
conscious, he is referring precisely to these behavioral competences  
that can be described in physical terms. He means that this collection  
of physical stuff we call Alice really is responding to her immediate  
environment (like the ball), observing things, collecting data, etc.  
In that very objective sense, Lucky Alice is not responding to the  
ball at all. She's not conscious by Dennett's physicalist definition  
of consciousness. But she's also not a zombie, because she is behaving  
differently than fully-functional Alice. You just have to be able to  
have the proper instruments to know it.

If you still think that Dennett would claim that Lucky Alice is a  
zombie, take a look at this quote from 
http://ase.tufts.edu/cogstud/papers/zombic.htm 
  : Just remember, by definition, a zombie behaves indistinguishably  
from a conscious being–in all possible tests, including not only  
answers to questions [as in the Turing test] but psychophysical tests,  
neurophysiological tests–all tests that any 'third-person' science can  
devise. Lucky Alice does *not* behave indistinguishably from a  
conscious being in all possible tests. The proper third-person test  
examining her logic gates would show that she is not responding to her  
immediate environment at all. Dennett should claim that she's a non- 
conscious non-zombie.

Nevertheless, I think Bruno's thought experiment causes a problem for  
the mechanist-materialist, as it is supposed to. If we believe that  
the fully-functional Alice is conscious and the random-gate-brain  
Alice is not conscious, what happens when we start

MGA 1

2008-11-18 Thread Bruno Marchal

Hi,

Those who dislikes introduction can skip up to THE FIRST THOUGHT  
EXPERIMENT AND THE FIRST QUESTION.
-


INTRODUCTION

MGA is for Movie Graph Argument (like UDA is for Universal Dovetailer  
Argument).

By UDA(1...7), the first seven steps of the UDA, we have a proof or  
argument that


 (COMP + there is a concrete universe with a concrete universal  
dovetailer running forever in  it)

implies that

  physics is emerging statistically from the computations (as seen  
from a first person point of view).

Note: I will use computationalism, digital mechanism, and even just  
mechanism, as synonymous.

MGA is intended to eliminate the hypothesis that:

there is a concrete universe with a concrete universal dovetailer  
running forever


Leading to:  comp implies that physics is a branch of (mathematical)  
computer science.

Some nuances will have to be added. But I prefer to be slightly wrong,  
and understandable, rather than make a long list of vocabulary and  
pursue some obscure jargon.


But in case you have not read the UDA, there is no problem. MGA by  
itself shows something independent of the UDA, indeed it shows (is  
supposed to show) that the physical supervenience thesis is false.  
Consciousness does not supervene on the *physical activity* of the  
brain/computer/universe. This shows that mechanism is incompatible  
with materialism (even weak form) or naturalism or physicalism,  
because they traditionally assume the physical supervenience thesis.

It is more subtle than UDA, and I expect possible infinite  
discussions. (Zombies will come back!)


Now a preliminary remark for clarifying what we mean by MECHANISM.  
When the mechanist says yes to the doctor, it is because he believes  
(or hopes) he will survive QUA COMPUTATIO (sorry for the Latin). I  
mean he believes that he will survive because the computational device  
he will get in place of his old brain does the right computations  
(which exist by hypothesis). He does not believe something like this  
(although he could!): "I believe that there is a God who will, by his  
magic means, pull out my soul, and then put it back in the new  
computational device."
A mechanical theory of consciousness, as well explained by Dennett,  
should rely on the fact that we don't attribute knowledge or  
consciousness, still less prescience, to the neurons, or elementary  
logical gates, or quarks, ... that is, to the elementary parts of the  
computational device. (The elementary parts depend, of course, on the  
choice of substitution level.)

This means, assuming both mechanism and naturalism (i.e. the physical  
supervenience thesis), that when consciousness supervenes on the  
physical activity of a brain, no neuron is aware of the other neurons  
to which it is related. Each neuron is aware only of some  
information it gets from the other neurons, not of the neurons themselves. If  
that were not the case, so that some neurons had some prescience of  
the identity of the neurons to which they are connected, it would just  
mean, keeping the mechanist hypothesis, that we have not chosen  
the right level of substitution, and should go down further.

Now come the first thought experiment and the first question.
-

THE FIRST THOUGHT EXPERIMENT AND THE FIRST QUESTIONS   (MGA 1) : The  
lucky cosmic event.

One billion years ago, one billion light years away, somewhere in  
the universe (which exists by the naturalist hypothesis) a cosmic explosion  
occurred. And ...

... Alice had her math exam this afternoon.
 From 3 pm to 4 pm, she successfully solved a problem. She thought to  
herself: "Oh, easy. Oh, careful, there is a trap, yet I can solve it."

What really happened is this. Alice has had an artificial brain  
since a fatal brain tumor in her early childhood. At 3:17 pm one  
logical gate broke (resp. two logical gates, three, 24, 4567,  
234987, ... all of them).

But Alice was lucky (incredibly lucky). When the logical gate A  
broke, and for example did not send a bit to logical gate B, an  
energetic particle coming from the cosmic explosion, by pure chance,  
triggered the logical gate B at the right time. And just after this  
happened, another energetic particle fixed the gate.
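A toy simulation of that lucky mechanism (gate functions, inputs, and timing are invented): gate A fails to forward its bit, but an external particle supplies gate B's input at exactly the right step, so the output trace matches the unbroken run.

def gate_a(x):
    # A NOT gate, for illustration.
    return 1 - x

def gate_b(bit):
    return bit & 1

def run(inputs, a_broken=False, lucky_particles=None):
    outputs = []
    for t, x in enumerate(inputs):
        if a_broken:
            # A sends nothing; B fires only because a particle happens to
            # arrive carrying the very bit A would have sent at this step.
            bit = lucky_particles[t]
        else:
            bit = gate_a(x)
        outputs.append(gate_b(bit))
    return outputs

inputs = [0, 1, 1, 0]
healthy = run(inputs)
lucky = run(inputs, a_broken=True,
            lucky_particles=[gate_a(x) for x in inputs])  # the incredible luck
print(healthy == lucky)   # True: same output trace, no working gate A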

Question: did this change Alice's consciousness during the exam?

I ask the same question with 2440 broken gates. They broke, let us say,  
during an oral exam, and each time a gate broke, by sending wrong  
info, or by not sending some info, an energetic particle coming from  
that cosmic explosion did the job, and at some point in time, a bunch  
of energetic particles fixed Alice's brain.

Suppose that ALL the neurons/logical gates of Alice are broken during  
the exam, all the time. But Alice, I told you, is incredibly lucky,  
and that cosmic beam again manages to make each logical gate complete its  
work in the relevant places and times. And again at the end of the  
exam, a last cosmic beam fixed her