Re: MGA 1

2008-11-28 Thread John Mikes
Thanks, Brent, at least you read through my blurb. Of course I am vague; besides, I wrote the post in a jiffy, not premeditatedly, I am sorry. Also, there is no adequate language for those things I want to refer to, not even 'in situ': the ideas and terms about interefficient totality (IMO more

Re: MGA 1

2008-11-27 Thread Brent Meeker
Bruno Marchal wrote: On 25 Nov 2008, at 20:16, Brent Meeker wrote: Bruno Marchal wrote: Brent: I don't see why the mechanist-materialists are logically disallowed from incorporating that kind of physical difference into their notion of consciousness. Bruno: In our setting, it means

Re: MGA 1

2008-11-27 Thread John Mikes
Brent wrote: ... *But is causality an implementation detail? There seems to be an implicit assumption that digitally represented states form a sequence just because there is a rule that defines(*) that sequence, but in fact all digital (and other) sequences depend on(**) causal chains. ...* I

Re: MGA 1

2008-11-27 Thread Brent Meeker
John Mikes wrote: Brent wrote: ... *But is causality an implementation detail? There seems to be an implicit assumption that digitally represented states form a sequence just because there is a rule that defines(*) that sequence, but in fact all digital (and other) sequences depend

Re: MGA 1

2008-11-26 Thread Stathis Papaioannou
2008/11/25 Kory Heath [EMAIL PROTECTED]: The answer I *used* to give was that it doesn't matter, because no matter what accidental order you find in Platonia, you also find the real order. In other words, if you find some portion of the digits of pi that seems to be following the rules of

Re: MGA 1

2008-11-26 Thread Bruno Marchal
On 25 Nov 2008, at 20:16, Brent Meeker wrote: Bruno Marchal wrote: Brent: I don't see why the mechanist-materialists are logically disallowed from incorporating that kind of physical difference into their notion of consciousness. Bruno: In our setting, it means that the neuron/logic

Re: MGA 1 - (to B.M)

2008-11-25 Thread Bruno Marchal
John, On 24 Nov 2008, at 00:19, John Mikes wrote: Bruno, right before my par on 'sharing a 3rd pers. opinion: more or less (maybe) resembling the original 'to be shared' one. In its (1st) 'personal' variation. (Cf: perceived reality). you included a remark not too dissimilar in

Re: MGA 1

2008-11-25 Thread Bruno Marchal
On 25 Nov 2008, at 02:13, Kory Heath wrote: On Nov 24, 2008, at 11:01 AM, Bruno Marchal wrote: If your argument were not merely convincing but definitive, then I would not need to make MGA 3 for showing it is ridiculous to endow the projection of a movie of a computation with

Re: MGA 1

2008-11-25 Thread Russell Standish
On Tue, Nov 25, 2008 at 11:55:37AM +0100, Bruno Marchal wrote: About MGA 3, I feel almost a bit ashamed to explain that. To believe that the projection of the movie makes Alice conscious is almost like explaining why we should not send Roger Moore (James Bond) to jail, given that there

Re: MGA 1

2008-11-25 Thread Bruno Marchal
Just to be clear on this, I obviously agree. Best, Bruno On 25 Nov 2008, at 12:05, Russell Standish wrote: On Tue, Nov 25, 2008 at 11:55:37AM +0100, Bruno Marchal wrote: About MGA 3, I feel almost a bit ashamed to explain that. To believe that the projection of the movie makes Alice

Re: MGA 1

2008-11-25 Thread Kory Heath
On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote: So you agree that MGA 1 does show that Lucky Alice is conscious (logically). I think I have a less rigorous view of the argument than you do. You want the argument to have the rigor of a mathematical proof. You say Let's start with the

Re: MGA 1

2008-11-25 Thread Bruno Marchal
On 25 Nov 2008, at 15:49, Kory Heath wrote: On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote: So you agree that MGA 1 does show that Lucky Alice is conscious (logically). I think I have a less rigorous view of the argument than you do. You want the argument to have the rigor of a

Re: MGA 1

2008-11-25 Thread Brent Meeker
Bruno Marchal wrote: On 25 Nov 2008, at 15:49, Kory Heath wrote: On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote: So you agree that MGA 1 does show that Lucky Alice is conscious (logically). I think I have a less rigorous view of the argument than you do. You want the argument to have

Re: MGA 1

2008-11-25 Thread Russell Standish
On Tue, Nov 25, 2008 at 11:16:55AM -0800, Brent Meeker wrote: But who would say yes to the doctor if he said that he would take a movie of your brain states and project it? Or if he said he would just destroy you in this universe and you would continue your experiences in other branches

Re: MGA 1

2008-11-25 Thread Kory Heath
On Nov 25, 2008, at 10:00 AM, Bruno Marchal wrote: You could have perhaps still a problem with the definitions or with the hypotheses? I think I haven't always been clear on our definitions of mechanism and materialism. But I can understand and accept definitions of those terms under

Re: MGA 1

2008-11-24 Thread Kory Heath
On Nov 23, 2008, at 4:18 AM, Bruno Marchal wrote: Let us consider your lucky teleportation case, where someone uses a teleporter which fails badly. So it just annihilates the original person, but then, by incredible luck, the person is reconstructed in his right state afterward. If you ask

Re: MGA 1

2008-11-24 Thread Bruno Marchal
On 24 Nov 2008, at 18:08, Kory Heath wrote: I see what you mean. But for me, these thought experiments are making me doubt that I even have a coherent notion of computational supervenience. You are not supposed to have a coherent idea of what computational supervenience is. This

Re: MGA 1

2008-11-24 Thread Kory Heath
On Nov 22, 2008, at 6:24 PM, Stathis Papaioannou wrote: Similarly, whenever we interact with a computation, it must be realised on a physical computer, such as a human brain. But there is also the abstract computation, a Platonic object. It seems that consciousness, like threeness, may be a

Re: MGA 1

2008-11-24 Thread Kory Heath
On Nov 24, 2008, at 11:01 AM, Bruno Marchal wrote: If your argument were not merely convincing but definitive, then I would not need to make MGA 3 for showing it is ridiculous to endow the projection of a movie of a computation with consciousness (in real space-time, like the physical

Re: MGA 1

2008-11-23 Thread Bruno Marchal
On 21 Nov 2008, at 10:45, Kory Heath wrote: However, the materialist-mechanist still has some grounds to say that there's something interestingly different about Lucky Kory compared to Original Kory. It is a physical fact of the matter that Lucky Kory is not causally connected to Pre-Teleportation

Re: MGA 1

2008-11-23 Thread Bruno Marchal
On 20 Nov 2008, at 21:27, Jason Resch wrote: On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED] wrote: The state machine that would represent her in the case of injection of random noise is a different state machine than the one that would represent her normally functioning

Re: MGA 1 bis (exercise)

2008-11-23 Thread Bruno Marchal
On 20 Nov 2008, at 19:38, Brent Meeker wrote: Talk about consciousness will seem as quaint as talk about the elan vital does now. Then you are led to eliminativism of consciousness. This makes MEC+MAT trivially coherent. The price is big: consciousness no longer exists, like the

Re: MGA 1

2008-11-23 Thread Bruno Marchal
On 20 Nov 2008, at 21:40, Gordon Tsai wrote: Bruno: I think you and John touched on the fundamental issues of human rationality. It's a dilemma encountered by phenomenology. Now I have a question: In theory we can't distinguish ourselves from a Lobian Machine. Note that in the math

Re: MGA 1

2008-11-23 Thread Bruno Marchal
On 22 Nov 2008, at 11:06, Stathis Papaioannou wrote: Yes, there must be a problem with the assumptions. The only assumption that I see we could eliminate, painful though it might be for those of a scientific bent, is the idea that consciousness supervenes on physical activity. Q.E.D.

Re: MGA 1

2008-11-23 Thread Bruno Marchal
On 23 Nov 2008, at 03:24, Stathis Papaioannou wrote: 2008/11/23 Kory Heath [EMAIL PROTECTED]: On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote: Yes, there must be a problem with the assumptions. The only assumption that I see we could eliminate, painful though it might be for

Re: MGA 1

2008-11-23 Thread John Mikes
On 11/22/08, Brent Meeker [EMAIL PROTECTED] wrote: John Mikes wrote: Brent, did your dog communicate to you (in dogese, of course) that she has NO INNER NARRATIVE? Or are you just too ignorant to perceive it? (Of course, do not expect such at the complexity level of your 11b neurons) John

Re: MGA 1

2008-11-23 Thread John Mikes
On 11/23/08, Bruno Marchal [EMAIL PROTECTED] wrote: On 20 Nov 2008, at 21:40, Gordon Tsai wrote: Bruno: I think you and John touched on the fundamental issues of human rationality. It's a dilemma encountered by phenomenology. Now I have a question: In theory we can't distinguish ourselves

Re: MGA 1

2008-11-23 Thread Bruno Marchal
On 23 Nov 2008, at 17:41, John Mikes wrote: On 11/23/08, Bruno Marchal [EMAIL PROTECTED] wrote: About mechanism, the optimist reasons like this: I love myself because I have such an interesting life with so many rich experiences. Now you tell me I am a machine. So I love machines, because

Re: MGA 1 - (to B.M)

2008-11-23 Thread John Mikes
Bruno, right before my par on 'sharing a 3rd pers. opinion: more or less (maybe) resembling the original 'to be shared' one. In its (1st) 'personal' variation. (Cf: perceived reality). you included a remark not too dissimilar in essence, but with one word in it I want to reflect on: The

Re: MGA 1

2008-11-22 Thread Kory Heath
On Nov 21, 2008, at 6:53 PM, Jason Resch wrote: What about a case when only some of Alice's neurons have ceased normal function and become dependent on the lucky rays? Yes, those are exactly the cases that are highlighting the problem. (For me. For Bruno, Lucky Alice is still conscious.

Re: MGA 1

2008-11-22 Thread Stathis Papaioannou
2008/11/22 Kory Heath [EMAIL PROTECTED]: If Lucky Alice is conscious and Empty-Headed Alice is not conscious, then there are partial zombies halfway between them. Like you, I can't make any sense of these partial zombies. But I also can't make any sense of the idea that Empty-Headed Alice is

Re: MGA 1

2008-11-22 Thread Stathis Papaioannou
2008/11/22 Jason Resch [EMAIL PROTECTED]: What you described sounds very similar to a split brain patient I saw on a documentary. He was able to respond to images presented to one eye, and ended up drawing them with a hand controlled by the other hemisphere, yet he had no idea why he drew

Re: MGA 1

2008-11-22 Thread Günther Greindl
Hmm. However, I do start getting uncomfortable when I realize that this lucky teleportation can happen over and over again, and if it happens fast enough, it reduces to sheer randomness that just happens to be generating an ordered pattern that looks like Kory. I have a hard

Re: MGA 1

2008-11-22 Thread Günther Greindl
Kory Heath wrote: If Lucky Alice is conscious and Empty-Headed Alice is not conscious, then there are partial zombies halfway between them. Like you, I can't make any sense of these partial zombies. But I also can't make any I think a materialist would either have to argue that Lucky

Re: MGA 1

2008-11-22 Thread Kory Heath
On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote: Yes, there must be a problem with the assumptions. The only assumption that I see we could eliminate, painful though it might be for those of a scientific bent, is the idea that consciousness supervenes on physical activity. Q.E.D.

Re: MGA 1

2008-11-22 Thread Brent Meeker
Günther Greindl wrote: Kory Heath wrote: If Lucky Alice is conscious and Empty-Headed Alice is not conscious, then there are partial zombies halfway between them. Like you, I can't make any sense of these partial zombies. But I also can't make any I don't see why partial zombies

Re: MGA 1

2008-11-22 Thread John Mikes
Brent, did your dog communicate to you (in dogese, of course) that she has NO INNER NARRATIVE? Or are you just too ignorant to perceive it? (Of course, do not expect such at the complexity level of your 11b neurons) John M On 11/22/08, Brent Meeker [EMAIL PROTECTED] wrote: Günther Greindl

Re: MGA 1

2008-11-22 Thread Brent Meeker
John Mikes wrote: Brent, did your dog communicate to you (in dogese, of course) that she has NO INNER NARRATIVE? Or are you just too ignorant to perceive it? (Of course, do not expect such at the complexity level of your 11b neurons) John M Of course not. It's my inference from the fact

Re: MGA 1

2008-11-22 Thread Stathis Papaioannou
2008/11/23 Kory Heath [EMAIL PROTECTED]: On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote: Yes, there must be a problem with the assumptions. The only assumption that I see we could eliminate, painful though it might be for those of a scientific bent, is the idea that consciousness

Re: MGA 1

2008-11-22 Thread Stathis Papaioannou
On 2008/11/23 Brent Meeker [EMAIL PROTECTED] wrote: I don't see why partial zombies are problematic. My dog is conscious of perceptions, of being an individual, of memories and even dreams, but he doesn't have an inner narrative - so is he a partial zombie? Your dog has experiences, and

Re: MGA 1

2008-11-21 Thread Kory Heath
On Nov 20, 2008, at 10:52 AM, Bruno Marchal wrote: I am afraid you already suspect too much the contradictory nature of MEC+MAT. Take the reasoning as a game. Try to keep both MEC and MAT; the game consists in showing as clearly as possible what will go wrong. I understand

Re: MGA 1

2008-11-21 Thread Bruno Marchal
Hi Gordon, On 20 Nov 2008, at 21:40, Gordon Tsai wrote: Bruno: I think you and John touched on the fundamental issues of human rationality. It's a dilemma encountered by phenomenology. Now I have a question: In theory we can't distinguish ourselves from a Lobian Machine. But can lobian

Re: MGA 1

2008-11-21 Thread Stathis Papaioannou
A variant of Chalmers' Fading Qualia argument (http://consc.net/papers/qualia.html) can be used to show Alice must be conscious. Alice is sitting her exam, and a part of her brain stops working, let's say the part of her occipital cortex which enables visual perception of the exam paper. In that

Re: MGA 1

2008-11-21 Thread Bruno Marchal
Jason, Nice, you are anticipating MGA 2. So if you don't mind I will answer your post in MGA 2, or in comments you will perhaps make afterward. ... asap. Bruno On 20 Nov 2008, at 21:27, Jason Resch wrote: On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED] wrote:

Re: MGA 1

2008-11-21 Thread Kory Heath
On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote: A variant of Chalmers' Fading Qualia argument (http://consc.net/papers/qualia.html) can be used to show Alice must be conscious. The same argument can be used to show that Empty-Headed Alice must also be conscious. (Empty-Headed Alice

Re: MGA 1

2008-11-21 Thread Michael Rosefield
This is one of those questions where I'm not sure if I'm being relevant or missing the point entirely, but here goes: There are multiple universes which implement/contain/whatever Alice's consciousness. During the period of the experiment, that universe may no longer be amongst them, but shadows

Re: MGA 1

2008-11-21 Thread Bruno Marchal
On 21 Nov 2008, at 10:45, Kory Heath wrote: ... A much closer analogy to Lucky Alice would be if the doctor accidentally destroys me without making the copy, turns on the receiving teleporter in desperation, and then the exact copy that would have appeared anyway steps out, because

Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 3:45 AM, Kory Heath [EMAIL PROTECTED] wrote: However, the materialist-mechanist still has some grounds to say that there's something interestingly different about Lucky Kory compared to Original Kory. It is a physical fact of the matter that Lucky Kory is not causally

Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 5:45 AM, Stathis Papaioannou [EMAIL PROTECTED]wrote: A variant of Chalmers' Fading Qualia argument (http://consc.net/papers/qualia.html) can be used to show Alice must be conscious. Alice is sitting her exam, and a part of her brain stops working, let's say the part

Re: MGA 1

2008-11-21 Thread Brent Meeker
Kory Heath wrote: On Nov 20, 2008, at 10:52 AM, Bruno Marchal wrote: I am afraid you already suspect too much the contradictory nature of MEC+MAT. Take the reasoning as a game. Try to keep both MEC and MAT; the game consists in showing as clearly as possible what will go

Re: MGA 1

2008-11-21 Thread Brent Meeker
Kory Heath wrote: On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote: A variant of Chalmers' Fading Qualia argument (http://consc.net/papers/qualia.html) can be used to show Alice must be conscious. The same argument can be used to show that Empty-Headed Alice must also be

Re: MGA 1

2008-11-21 Thread Brent Meeker
Jason Resch wrote: On Fri, Nov 21, 2008 at 5:45 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote: A variant of Chalmers' Fading Qualia argument (http://consc.net/papers/qualia.html) can be used to show Alice must be conscious. Alice is

Re: MGA 1

2008-11-21 Thread Kory Heath
On Nov 21, 2008, at 8:15 AM, Bruno Marchal wrote: On 21 Nov 2008, at 10:45, Kory Heath wrote: However, the materialist-mechanist still has some grounds to say that there's something interestingly different about Lucky Kory compared to Original Kory. It is a physical fact of the matter that Lucky

Re: MGA 1

2008-11-21 Thread Kory Heath
On Nov 21, 2008, at 8:52 AM, Jason Resch wrote: This is very similar to an existing thought experiment in identity theory: http://en.wikipedia.org/wiki/Swamp_man Cool. Thanks for that link! -- Kory

Re: MGA 1

2008-11-21 Thread Kory Heath
On Nov 21, 2008, at 9:01 AM, Jason Resch wrote: What you described sounds very similar to a split brain patient I saw on a documentary. It might seem similar on the surface, but it's actually very different. The observers of the split-brain patient and the patient himself know that

Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 7:54 PM, Kory Heath [EMAIL PROTECTED] wrote: On Nov 21, 2008, at 9:01 AM, Jason Resch wrote: What you described sounds very similar to a split brain patient I saw on a documentary. It might seem similar on the surface, but it's actually very different. The

Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath
On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote: So I'm puzzled as to how to answer Bruno's question. In general I don't believe in zombies, but that's in the same way I don't believe my glass of water will freeze at 20°C. It's an opinion about what is likely, not what is possible.

Re: MGA 1

2008-11-20 Thread John Mikes
On 11/19/08, Bruno Marchal [EMAIL PROTECTED] wrote: ... Keep in mind we try to refute the conjunction MECH and MAT. Nevertheless your intuition below is mainly correct, but the point is that accepting it really works, AND keeping MECH, will force us to negate MAT. Bruno

Re: MGA 1 bis (exercise)

2008-11-20 Thread Bruno Marchal
On 19 Nov 2008, at 22:43, Brent Meeker wrote: Bruno Marchal wrote: On 19 Nov 2008, at 16:06, Telmo Menezes wrote: Bruno, If no one objects, I will present MGA 2 (soon). I also agree completely and am curious to see where this is going. Please continue! Thanks Telmo, thanks also to

Re: MGA 1

2008-11-20 Thread Bruno Marchal
On 19 Nov 2008, at 23:26, Jason Resch wrote: On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED] wrote: On 19 Nov 2008, at 20:17, Jason Resch wrote: To add some clarification, I do not think spreading Alice's logic gates across a field and allowing cosmic rays to cause

Re: MGA 1 bis (exercise)

2008-11-20 Thread Bruno Marchal
On 20 Nov 2008, at 00:19, Telmo Menezes wrote: Could you alter the so-lucky cosmic explosion beam a little bit so that Alice still succeeds at her math exam, but is, reasonably enough, a zombie during the exam. With zombie taken in the traditional sense of Kory and Dennett. Of course you

Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker
Kory Heath wrote: On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote: So I'm puzzled as to how to answer Bruno's question. In general I don't believe in zombies, but that's in the same way I don't believe my glass of water will freeze at 20°C. It's an opinion about what is likely, not

Re: MGA 1

2008-11-20 Thread Bruno Marchal
On 20 Nov 2008, at 08:23, Kory Heath wrote: On Nov 18, 2008, at 11:52 AM, Bruno Marchal wrote: The last question (of MGA 1) is: was Alice, in this case, a zombie during the exam? Of course, my personal answer would take into account the fact that I already have a problem with the

Re: MGA 1

2008-11-20 Thread Bruno Marchal
Hi John, It boils down to my overall somewhat negative position (although I have no better one) on UDA, MGA, comp, etc. - all of them are products of HUMAN thinking and restrictions as WE can imagine the unfathomable existence (the totality - real TOE). I find it a 'cousin' of the

Re: MGA 1

2008-11-20 Thread Bruno Marchal
On 19 Nov 2008, at 20:37, Michael Rosefield wrote: Are not logic gates black boxes, though? Does it really matter what happens between Input and Output? In which case, it has absolutely no bearing on Alice's consciousness whether the gate's a neuron, an electronic doodah, a team of

Re: MGA 1

2008-11-20 Thread Jason Resch
On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED] wrote: The state machine that would represent her in the case of injection of random noise is a different state machine than the one that would represent her normally functioning brain. Absolutely so. Bruno, What about the state
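Jason's distinction between the two state machines can be made concrete with a small sketch (purely illustrative; the function names are invented here, not taken from the thread): one process computes each state from its predecessor by a transition rule, the other merely replays a recording of the same states, so the traces coincide while the causal structure differs.

```python
# Illustrative sketch only: two processes that emit the same trace of
# states, but only the first derives each state from the previous one.

def causal_machine(steps):
    """Each state is produced from the previous state by a transition rule."""
    state, trace = 0, []
    for _ in range(steps):
        state = (state + 1) % 4   # the rule causally determines the next state
        trace.append(state)
    return trace

def replayed_machine(tape):
    """Emits pre-recorded states; no transition rule is ever consulted."""
    return list(tape)

recording = causal_machine(8)
# The traces are indistinguishable, yet in the replayed machine state n
# plays no role whatsoever in producing state n+1.
assert causal_machine(8) == replayed_machine(recording)
```

Whether that causal difference matters for consciousness is exactly what the thread disputes; the sketch only shows that identity of traces does not imply identity of machines.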

Re: MGA 1

2008-11-20 Thread Gordon Tsai
? Gordon --- On Thu, 11/20/08, Bruno Marchal [EMAIL PROTECTED] wrote: Hi John, It boils down to my overall somewhat negative position (although I have no better one

Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath
On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote: I think you really mean nomologically possible. I mean logically possible, but I'm happy to change it to nomologically possible for the purposes of this conversation. I think Dennett changes the question by referring to

Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker
Kory Heath wrote: On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote: I think you really mean nomologically possible. I mean logically possible, but I'm happy to change it to nomologically possible for the purposes of this conversation. Doesn't the question go away if it is

Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath
On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote: Doesn't the question go away if it is nomologically impossible? I'm sort of the opposite of you on this issue. You don't like to use the term logically possible, while I don't like to use the term nomologically impossible. I don't see the

Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker
Kory Heath wrote: On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote: Doesn't the question go away if it is nomologically impossible? I'm sort of the opposite of you on this issue. You don't like to use the term logically possible, while I don't like to use the term nomologically

Re: MGA 1

2008-11-19 Thread Bruno Marchal
On 19 Nov 2008, at 07:13, Russell Standish wrote: I think Alice was indeed not a zombie, I think you are right. COMP + MAT implies Alice (in this setting) is not a zombie. and that her consciousness supervened on the physical activity stimulating her output gates (the cosmic

Re: MGA 1

2008-11-19 Thread Telmo Menezes
Bruno, If no one objects, I will present MGA 2 (soon). I also agree completely and am curious to see where this is going. Please continue! Cheers, Telmo Menezes.

Re: MGA 1

2008-11-19 Thread Gordon Tsai
Bruno: I'm interested to see the second part. Thanks! --- On Wed, 11/19/08, Bruno Marchal [EMAIL PROTECTED] wrote: On 19 Nov 2008, at 07:13, Russell Standish wrote

Re: MGA 1

2008-11-19 Thread Jason Resch
On Wed, Nov 19, 2008 at 5:59 AM, Bruno Marchal [EMAIL PROTECTED] wrote: Does everyone accept, like Russell, that, assuming COMP and MAT, Alice is not a zombie? I mean, is there someone who objects? Remember we are proving an implication: MAT+MECH => something. We never try to argue about that

Re: MGA 1

2008-11-19 Thread Jason Resch
To add some clarification, I do not think spreading Alice's logic gates across a field and allowing cosmic rays to cause each gate to perform the same computations that they would had they existed in her functioning brain would result in consciousness. I think this because in isolation the logic gates are

Re: MGA 1

2008-11-19 Thread Michael Rosefield
Are not logic gates black boxes, though? Does it really matter what happens between Input and Output? In which case, it has absolutely no bearing on Alice's consciousness whether the gate's a neuron, an electronic doodah, a team of well-trained monkeys or a lucky quantum event or synchronicity. It
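Michael's black-box point has a familiar software analogue (a hypothetical sketch, not anything proposed in the thread; the class names are invented here): if only input/output behaviour is visible at the interface, a gate that computes its answer by a rule and a gate that reads its answer from a recorded table are indistinguishable from outside.

```python
from abc import ABC, abstractmethod

class NandGate(ABC):
    """The black box: callers see only input/output behaviour."""
    @abstractmethod
    def out(self, a: bool, b: bool) -> bool: ...

class ComputedNand(NandGate):
    """A gate that derives its output by a rule, like a neuron or circuit."""
    def out(self, a, b):
        return not (a and b)

class RecordedNand(NandGate):
    """A gate that merely looks up pre-recorded answers, like a movie frame."""
    TABLE = {(False, False): True, (False, True): True,
             (True, False): True, (True, True): False}
    def out(self, a, b):
        return self.TABLE[(a, b)]

# From outside the box the two implementations cannot be told apart:
inputs = [(a, b) for a in (False, True) for b in (False, True)]
assert all(ComputedNand().out(a, b) == RecordedNand().out(a, b)
           for a, b in inputs)
```

The dispute in the thread is precisely whether this behavioural equivalence is all that matters for consciousness, or whether what happens inside the box counts.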

Re: MGA 1

2008-11-19 Thread Bruno Marchal
On 19 Nov 2008, at 20:17, Jason Resch wrote: To add some clarification, I do not think spreading Alice's logic gates across a field and allowing cosmic rays to cause each gate to perform the same computations that they would had they existed in her functioning brain would result in consciousness.

Re: MGA 1 bis (exercise)

2008-11-19 Thread Brent Meeker
Bruno Marchal wrote: On 19 Nov 2008, at 16:06, Telmo Menezes wrote: Bruno, If no one objects, I will present MGA 2 (soon). I also agree completely and am curious to see where this is going. Please continue! Thanks Telmo, thanks also to Gordon. I will try to send MGA 2 asap.

Re: MGA 1

2008-11-19 Thread Jason Resch
On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED] wrote: On 19 Nov 2008, at 20:17, Jason Resch wrote: To add some clarification, I do not think spreading Alice's logic gates across a field and allowing cosmic rays to cause each gate to perform the same computations that they

Re: MGA 1

2008-11-19 Thread Brent Meeker
Jason Resch wrote: On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED] wrote: On 19 Nov 2008, at 20:17, Jason Resch wrote: To add some clarification, I do not think spreading Alice's logic gates across a field and allowing cosmic

Re: MGA 1 bis (exercise)

2008-11-19 Thread Telmo Menezes
Could you alter the so-lucky cosmic explosion beam a little bit so that Alice still succeeds at her math exam, but is, reasonably enough, a zombie during the exam. With zombie taken in the traditional sense of Kory and Dennett. Of course you have to keep well *both* MECH *and* MAT. I think I

Re: MGA 1

2008-11-19 Thread Kory Heath
On Nov 18, 2008, at 11:52 AM, Bruno Marchal wrote: The last question (of MGA 1) is: was Alice, in this case, a zombie during the exam? Of course, my personal answer would take into account the fact that I already have a problem with the materialist's idea of matter. But I think we're