Thanks, Brent,
at least you read through my blurb. Of course I am vague - besides, I wrote
the post in a jiffy, not premeditatedly, I am sorry. Also there is no
adequate language for those things I want to refer to, not even 'in situ',
the ideas and terms about interefficient totality (IMO more
Bruno Marchal wrote:
On 25 Nov 2008, at 20:16, Brent Meeker wrote:
Bruno Marchal wrote:
Brent: I don't see why the mechanist-materialists are
logically disallowed from incorporating that kind of physical
difference into their notion of consciousness.
Bruno: In our setting, it means
Brent wrote:
...
But is causality an implementation detail? There seems to be an implicit
assumption that digitally represented states form a sequence just because
there is a rule that defines(*) that sequence, but in fact all digital (and
other) sequences depend on(**) causal chains. ...
I
John Mikes wrote:
Brent wrote:
...
*But is causality an implementation detail? There seems to be an implicit
assumption that digitally represented states form a sequence just
because there
is a rule that defines(*) that sequence, but in fact all digital (and
other) sequences depend
2008/11/25 Kory Heath [EMAIL PROTECTED]:
The answer I *used* to give was that it doesn't matter, because no
matter what accidental order you find in Platonia, you also find the
real order. In other words, if you find some portion of the digits
of PI that seems to be following the rules of
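Kory's notion of "accidental order" among the digits of pi can be made concrete with a small sketch (my own toy illustration, not from the thread): scan a fixed, well-known prefix of pi's decimal digits for the longest strictly increasing run, a stretch that "follows a rule" purely by accident.

```python
# Toy illustration (not from the original post): "accidental order" in pi.
# We scan a fixed, well-known prefix of pi's decimal digits for the longest
# strictly increasing run of digits, i.e. a pattern that looks rule-governed
# but occurs by accident in an otherwise patternless-looking stream.

PI_DIGITS = "141592653589793238462643383279502884197169399375105820974944"

def longest_increasing_run(digits: str) -> str:
    # Track the current strictly increasing run and the best one seen so far.
    best = cur = digits[0]
    for d in digits[1:]:
        cur = cur + d if d > cur[-1] else d
        if len(cur) > len(best):
            best = cur
    return best

run = longest_increasing_run(PI_DIGITS)
print(run)  # an "accidentally ordered" stretch of the digit stream
```

Any sufficiently long "random-looking" digit stream contains such stretches, which is exactly what makes it hard to distinguish accidental order from rule-driven order by inspecting the states alone.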
On 25 Nov 2008, at 20:16, Brent Meeker wrote:
Bruno Marchal wrote:
Brent: I don't see why the mechanist-materialists are
logically disallowed from incorporating that kind of physical
difference into their notion of consciousness.
Bruno: In our setting, it means that the neuron/logic
John,
On 24 Nov 2008, at 00:19, John Mikes wrote:
Bruno,
right before my par on 'sharing a 3rd pers. opinion:
more or less (maybe) resembling the original 'to
be shared' one. In its (1st) 'personal' variation. (Cf: perceived
reality).
you included a remark not too dissimilar in
just more convincing, so much the better.
Please proceed!
So you agree that MGA 1 does show that Lucky Alice is conscious
(logically).
Normally, this means the proof is finished for you (but that is indeed
what you said before I began; everything is coherent).
About MGA 3, I feel almost a bit
On Tue, Nov 25, 2008 at 11:55:37AM +0100, Bruno Marchal wrote:
About MGA 3, I feel almost a bit ashamed to explain that. To believe
that the projection of the movie makes Alice conscious, is almost like
explaining why we should not send Roger Moore (James Bond) to jail,
given that there
Just to be clear on this, I obviously agree.
Best,
Bruno
On 25 Nov 2008, at 12:05, Russell Standish wrote:
On Tue, Nov 25, 2008 at 11:55:37AM +0100, Bruno Marchal wrote:
About MGA 3, I feel almost a bit ashamed to explain that. To believe
that the projection of the movie makes Alice
On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
So you agree that MGA 1 does show that Lucky Alice is conscious
(logically).
I think I have a less rigorous view of the argument than you do. You
want the argument to have the rigor of a mathematical proof. You say
Let's start
On 25 Nov 2008, at 15:49, Kory Heath wrote:
On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
So you agree that MGA 1 does show that Lucky Alice is conscious
(logically).
I think I have a less rigorous view of the argument than you do. You
want the argument to have the rigor
Bruno Marchal wrote:
On 25 Nov 2008, at 15:49, Kory Heath wrote:
On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
So you agree that MGA 1 does show that Lucky Alice is conscious
(logically).
I think I have a less rigorous view of the argument than you do. You
want the argument to have
On Tue, Nov 25, 2008 at 11:16:55AM -0800, Brent Meeker wrote:
But who would say yes to the doctor if he said that he would take a movie
of
your brain states and project it? Or if he said he would just destroy you in
this universe and you would continue your experiences in other branches
which MGA 1 shows that it's logically necessary that Lucky
Alice is conscious, and MGA 2 shows that it's logically necessary that
the projection of the movie makes Alice conscious (your words from a
previous email). I think we can proceed with that.
But can you clarify exactly what MECH+MAT
On Nov 23, 2008, at 4:18 AM, Bruno Marchal wrote:
Let us consider your lucky teleportation case, where someone uses a
teleporter which fails badly. So it just annihilates the original
person, but then, by an incredible luck the person is reconstructed
with his right state after. If you ask
, of
course Lucky Alice is not conscious.
Now, MGA 1 is an argument showing that MEC+MAT, due to the physical
supervenience thesis and the non-prescience of the neurons, entails
that Lucky Alice is conscious. The question is: do you see this, too?
If you see this, we have:
MEC+MAT entails Lucky
On Nov 22, 2008, at 6:24 PM, Stathis Papaioannou wrote:
Similarly, whenever we
interact with a computation, it must be realised on a physical
computer, such as a human brain. But there is also the abstract
computation, a Platonic object. It seems that consciousness, like
threeness, may be a
On Nov 24, 2008, at 11:01 AM, Bruno Marchal wrote:
If your argument were not merely convincing but definitive, then I
would not need to make MGA 3 to show it is ridiculous to endow the
projection of a movie of a computation with consciousness (in real
space-time, like the physical
On 21 Nov 2008, at 10:45, Kory Heath wrote:
However, the materialist-mechanist still has some grounds to say that
there's something interestingly different about Lucky Kory than
Original Kory. It is a physical fact of the matter that Lucky Kory is
not causally connected to Pre-Teleportation
On 20 Nov 2008, at 21:27, Jason Resch wrote:
On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED]
wrote:
The state machine that would represent her in the case of
injection of random noise is a different state machine that would
represent her normally functioning
On 20 Nov 2008, at 19:38, Brent Meeker wrote:
Talk about consciousness will seem as quaint
as talk about the elan vital does now.
Then you are led to eliminativism of consciousness. This makes MEC+MAT
trivially coherent. The price is big: consciousness no longer
exists, like the
On 20 Nov 2008, at 21:40, Gordon Tsai wrote:
Bruno:
I think you and John touched the fundamental issues of human
rationality. It's a dilemma encountered by phenomenology. Now I have a
question: In theory we can't distinguish ourselves from a Lobian
Machine.
Note that in the math
On 22 Nov 2008, at 11:06, Stathis Papaioannou wrote:
Yes, there must be a problem with the assumptions. The only assumption
that I see we could eliminate, painful though it might be for those of
a scientific bent, is the idea that consciousness supervenes on
physical activity. Q.E.D.
On 23 Nov 2008, at 03:24, Stathis Papaioannou wrote:
2008/11/23 Kory Heath [EMAIL PROTECTED]:
On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote:
Yes, there must be a problem with the assumptions. The only
assumption
that I see we could eliminate, painful though it might be for
On 11/22/08, Brent Meeker [EMAIL PROTECTED] wrote:
John Mikes wrote:
Brent,
did your dog communicate to you (in dogese, of course) that she has - NO -
INNER NARRATIVE? Or are you just unable to perceive such?
(Of course do not expect such at the complexity level of your 11b neurons)
John
On 11/23/08, Bruno Marchal [EMAIL PROTECTED] wrote:
On 20 Nov 2008, at 21:40, Gordon Tsai wrote:
Bruno:
I think you and John touched the fundamental issues of human
rationality. It's a dilemma encountered by phenomenology. Now I have a
question: In theory we can't distinguish ourselves
On 23 Nov 2008, at 17:41, John Mikes wrote:
On 11/23/08, Bruno Marchal [EMAIL PROTECTED] wrote:
About mechanism, the optimist reasons like that. I love myself
because
I have such an interesting life with so many rich experiences. Now you
tell me I am a machine. So I love machine because
On Nov 21, 2008, at 6:53 PM, Jason Resch wrote:
What about a case when only some of Alice's neurons have ceased
normal function and became dependent on the lucky rays?
Yes, those are exactly the cases that are highlighting the problem.
(For me. For Bruno, Lucky Alice is still conscious.
2008/11/22 Kory Heath [EMAIL PROTECTED]:
If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
then there are partial zombies halfway between them. Like you, I can't
make any sense of these partial zombies. But I also can't make any
sense of the idea that Empty-Headed Alice is
2008/11/22 Jason Resch [EMAIL PROTECTED]:
What you described sounds very similar to a split brain patient I saw on a
documentary. He was able to respond to images presented to one eye, and
ended up drawing them with a hand controlled by the other hemisphere, yet he
had no idea why he drew
Hmm,
However, I do start getting uncomfortable when I realize that this
lucky teleportation can happen over and over again, and if it happens
fast enough, it just reduces to sheer randomness that just happens to
be generating an ordered pattern that looks like Kory. I have a hard
Kory Heath wrote:
If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
then there are partial zombies halfway between them. Like you, I can't
make any sense of these partial zombies. But
I also can't make any
I think a materialist would either have to argue that Lucky
On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote:
Yes, there must be a problem with the assumptions. The only assumption
that I see we could eliminate, painful though it might be for those of
a scientific bent, is the idea that consciousness supervenes on
physical activity. Q.E.D.
Günther Greindl wrote:
Kory Heath wrote:
If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
then there are partial zombies halfway between them. Like you, I can't
make any sense of these partial zombies. But
I also can't make any
I don't see why partial zombies
Brent,
did your dog communicate to you (in dogese, of course) that she has - NO -
INNER NARRATIVE? Or are you just unable to perceive such?
(Of course do not expect such at the complexity level of your 11b neurons)
John M
On 11/22/08, Brent Meeker [EMAIL PROTECTED] wrote:
Günther Greindl
John Mikes wrote:
Brent,
did your dog communicate to you (in dogese, of course) that she has - NO -
INNER NARRATIVE? Or are you just unable to perceive such?
(Of course do not expect such at the complexity level of your 11b neurons)
John M
Of course not. It's my inference from the fact
2008/11/23 Kory Heath [EMAIL PROTECTED]:
On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote:
Yes, there must be a problem with the assumptions. The only assumption
that I see we could eliminate, painful though it might be for those of
a scientific bent, is the idea that consciousness
On 2008/11/23 Brent Meeker [EMAIL PROTECTED] wrote:
I don't see why partial zombies are problematic. My dog is conscious of
perceptions, of being an individual, of memories and even dreams, but he
doesn't
have an inner narrative - so is he a partial zombie?
Your dog has experiences, and
that Telmo's Lucky Alice is not
conscious).
You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
original Alice, well I mean the one in MGA 1, is functionally
identical at the right level of description (actually she already has
a digital brain). The physical instantiation
Hi Gordon,
On 20 Nov 2008, at 21:40, Gordon Tsai wrote:
Bruno:
I think you and John touched the fundamental issues of human
rationality. It's a dilemma encountered by phenomenology. Now I have a
question: In theory we can't distinguish ourselves from a Lobian
Machine. But can lobian
Jason,
Nice, you are anticipating MGA 2. So if you don't mind I will
answer your post in MGA 2, or in comments you will perhaps make
afterward.
... asap.
Bruno
On 20 Nov 2008, at 21:27, Jason Resch wrote:
On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED]
wrote:
On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
A variant of Chalmers' Fading Qualia argument
(http://consc.net/papers/qualia.html) can be used to show Alice must
be conscious.
The same argument can be used to show that Empty-Headed Alice must
also be conscious. (Empty-Headed Alice
This is one of those questions where I'm not sure if I'm being relevant or
missing the point entirely, but here goes:
There are multiple universes which implement/contain/whatever Alice's
consciousness. During the period of the experiment, that universe may no
longer be amongst them but shadows
On 21 Nov 2008, at 10:45, Kory Heath wrote:
...
A much closer analogy to Lucky Alice would be if the doctor
accidentally destroys me without making the copy, turns on the
receiving teleporter in desperation, and then the exact copy that
would have appeared anyway steps out, because
On Fri, Nov 21, 2008 at 3:45 AM, Kory Heath [EMAIL PROTECTED] wrote:
However, the materialist-mechanist still has some grounds to say that
there's something interestingly different about Lucky Kory than
Original Kory. It is a physical fact of the matter that Lucky Kory is
not causally
On Fri, Nov 21, 2008 at 5:45 AM, Stathis Papaioannou [EMAIL PROTECTED]wrote:
A variant of Chalmers' Fading Qualia argument
(http://consc.net/papers/qualia.html) can be used to show Alice must
be conscious.
Alice is sitting her exam, and a part of her brain stops working,
let's say the part
(for the same reason that Telmo's Lucky Alice is not
conscious).
You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
original Alice, well I mean the one in MGA 1, is functionally
identical at the right level of description (actually she already has
a digital brain). The physical
Kory Heath wrote:
On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
A variant of Chalmers' Fading Qualia argument
(http://consc.net/papers/qualia.html) can be used to show Alice must
be conscious.
The same argument can be used to show that Empty-Headed Alice must
also be
Jason Resch wrote:
On Fri, Nov 21, 2008 at 5:45 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:
A variant of Chalmers' Fading Qualia argument
(http://consc.net/papers/qualia.html) can be used to show Alice must
be conscious.
Alice is
On Nov 21, 2008, at 8:15 AM, Bruno Marchal wrote:
On 21 Nov 2008, at 10:45, Kory Heath wrote:
However, the materialist-mechanist still has some grounds to say that
there's something interestingly different about Lucky Kory than
Original Kory. It is a physical fact of the matter that Lucky
On Nov 21, 2008, at 8:52 AM, Jason Resch wrote:
This is very similar to an existing thought experiment in identity
theory:
http://en.wikipedia.org/wiki/Swamp_man
Cool. Thanks for that link!
-- Kory
On Nov 21, 2008, at 9:01 AM, Jason Resch wrote:
What you described sounds very similar to a split brain patient I
saw on a documentary.
It might seem similar on the surface, but it's actually very
different. The observers of the split-brain patient and the patient
himself know that
On Fri, Nov 21, 2008 at 7:54 PM, Kory Heath [EMAIL PROTECTED] wrote:
On Nov 21, 2008, at 9:01 AM, Jason Resch wrote:
What you described sounds very similar to a split brain patient I
saw on a documentary.
It might seem similar on the surface, but it's actually very
different. The
On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
So I'm puzzled as to how to answer Bruno's question. In general I
don't believe in
zombies, but that's in the same way I don't believe my glass of
water will
freeze at 20°C. It's an opinion about what is likely, not what is
possible.
On 11/19/08, Bruno Marchal [EMAIL PROTECTED] wrote:
... Keep in mind we try to refute the
conjunction MECH and MAT.
Nevertheless your intuition below is mainly correct, but the point is
that accepting it really works, AND keeping MECH, will force us to
negate MAT.
Bruno
the
exams, and perhaps even die. So you are right, in Telmo's solution of
MGA 1bis exercise she is an accidental zombie. But in the original
MGA 1, she should remain conscious (with MECH and MAT), even if
accidentally so.
So I'm puzzled as to how to answer Bruno's question.
Hope it is clear
On 19 Nov 2008, at 23:26, Jason Resch wrote:
On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED]
wrote:
On 19 Nov 2008, at 20:17, Jason Resch wrote:
To add some clarification, I do not think spreading Alice's logic
gates across a field and allowing cosmic rays to cause
On 20 Nov 2008, at 00:19, Telmo Menezes wrote:
Could you alter the so-lucky cosmic explosion beam a little bit so
that Alice still succeeds at her math exam, but is, reasonably enough, a
zombie during the exam. With zombie taken in the traditional sense
of
Kory and Dennett.
Of course you
Kory Heath wrote:
On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
So I'm puzzled as to how to answer Bruno's question. In general I
don't believe in
zombies, but that's in the same way I don't believe my glass of
water will
freeze at 20°C. It's an opinion about what is likely, not
On 20 Nov 2008, at 08:23, Kory Heath wrote:
On Nov 18, 2008, at 11:52 AM, Bruno Marchal wrote:
The last question (of MGA 1) is: was Alice, in this case, a zombie
during the exam?
Of course, my personal answer would take into account the fact that I
already have a problem
Hi John,
It boils down to my overall somewhat negative position (although
I have no better one) of UDA, MPG, comp, etc. - all of them are
products of HUMAN thinking and restrictions as WE can imagine
the unfathomable existence (the totality - real TOE).
I find it a 'cousin' of the
On 19 Nov 2008, at 20:37, Michael Rosefield wrote:
Are not logic gates black boxes, though? Does it really matter what
happens between Input and Output? In which case, it has absolutely
no bearing on Alice's consciousness whether the gate's a neuron, an
electronic doodah, a team of
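Rosefield's black-box point can be sketched concretely (a toy illustration of mine, not from the thread): two implementations of a NAND gate, one computed from a boolean expression and one merely replayed from a lookup table, are indistinguishable at the Input/Output interface, whatever happens between Input and Output.

```python
# Toy sketch (mine, not from the thread): two "implementations" of a NAND
# gate that are indistinguishable from outside -- a black box in Rosefield's
# sense. Whether the internals are a neuron, an electronic doodah, or a
# stored table, the Input -> Output mapping is the same.

def nand_logic(a: int, b: int) -> int:
    return 1 - (a & b)            # computed "causally" from the inputs

NAND_TABLE = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def nand_lookup(a: int, b: int) -> int:
    return NAND_TABLE[(a, b)]     # merely replayed from a stored table

# The two gates agree on every possible input pair.
for a in (0, 1):
    for b in (0, 1):
        assert nand_logic(a, b) == nand_lookup(a, b)
print("externally identical")
```

This is exactly what makes the question hard: functional equivalence at the interface says nothing about whether the internals compute or merely replay.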
On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal [EMAIL PROTECTED] wrote:
The state machine that would represent her in the case of injection of
random noise is a different state machine that would represent her normally
functioning brain.
Absolutely so.
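The two-state-machines point can be sketched concretely (a toy example of mine, not Bruno's or Jason's formalism): one machine consults a transition rule at each step, while the other merely replays a recorded sequence of states, as if driven by fortuitous external noise that happens to match; both yield the identical trace.

```python
# Toy sketch (my own, not a formalism from the thread): the same state trace
# produced by two different machines. The first follows its transition rule;
# the second replays recorded states with no rule consulted at all, like a
# brain driven by luckily matching random noise.

def step(s: int) -> int:
    return (3 * s + 1) % 8        # an arbitrary toy transition rule

def run_rule(s0: int, n: int) -> list:
    # Generate a trace of n steps by repeatedly applying the rule.
    trace, s = [s0], s0
    for _ in range(n):
        s = step(s)
        trace.append(s)
    return trace

recorded = run_rule(2, 5)         # a trace captured in advance

def run_replay(tape: list) -> list:
    # Reproduce the trace without ever consulting the transition rule.
    return list(tape)

assert run_rule(2, 5) == run_replay(recorded)  # identical traces, different machines
```

The traces are state-by-state identical, yet the counterfactual structure differs: only the first machine would respond correctly to a state it has never seen.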
Bruno,
What about the state
On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote:
I think you really mean nomologically possible.
I mean logically possible, but I'm happy to change it to
nomologically possible for the purposes of this conversation.
I think Dennett changes the question by referring to
Kory Heath wrote:
On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote:
I think you really mean nomologically possible.
I mean logically possible, but I'm happy to change it to
nomologically possible for the purposes of this conversation.
Doesn't the question go away if it is
On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote:
Doesn't the question go away if it is nomologically impossible?
I'm sort of the opposite of you on this issue. You don't like to use
the term logically possible, while I don't like to use the term
nomologically impossible. I don't see the
Kory Heath wrote:
On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote:
Doesn't the question go away if it is nomologically impossible?
I'm sort of the opposite of you on this issue. You don't like to use
the term logically possible, while I don't like to use the term
nomologically
On 19 Nov 2008, at 07:13, Russell Standish wrote:
I think Alice was indeed not a zombie,
I think you are right.
COMP + MAT implies Alice (in this setting) is not a zombie.
and that her consciousness
supervened on the physical activity stimulating her output gates (the
cosmic
Bruno,
If no one objects, I will present MGA 2 (soon).
I also agree completely and am curious to see where this is going.
Please continue!
Cheers,
Telmo Menezes.
Bruno:
I'm interested to see the second part. Thanks!
--- On Wed, 11/19/08, Bruno Marchal [EMAIL PROTECTED] wrote:
From: Bruno Marchal [EMAIL PROTECTED]
Subject: Re: MGA 1
To: [EMAIL PROTECTED]
Date: Wednesday, November 19, 2008, 3:59 AM
On 19 Nov 2008, at 07:13, Russell Standish wrote
To add some clarification, I do not think spreading Alice's logic gates
across a field and allowing cosmic rays to cause each gate to perform the
same computations that they would had they existed in her functioning brain
would result in consciousness. I think this because in isolation the logic gates are
On 19 Nov 2008, at 20:17, Jason Resch wrote:
To add some clarification, I do not think spreading Alice's logic
gates across a field and allowing cosmic rays to cause each gate to
perform the same computations that they would had they existed in
her functioning brain would result in consciousness.
On 19 Nov 2008, at 16:06, Telmo Menezes wrote:
Bruno,
If no one objects, I will present MGA 2 (soon).
I also agree completely and am curious to see where this is going.
Please continue!
Thanks Telmo, thanks also to Gordon.
I will try to send MGA 2 asap. But this will take me some time.
Bruno Marchal wrote:
On 19 Nov 2008, at 16:06, Telmo Menezes wrote:
Bruno,
If no one objects, I will present MGA 2 (soon).
I also agree completely and am curious to see where this is going.
Please continue!
Thanks Telmo, thanks also to Gordon.
I will try to send MGA 2 asap.
On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED] wrote:
On 19 Nov 2008, at 20:17, Jason Resch wrote:
To add some clarification, I do not think spreading Alice's logic gates
across a field and allowing cosmic rays to cause each gate to perform the
same computations that they
Jason Resch wrote:
On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal [EMAIL PROTECTED] wrote:
On 19 Nov 2008, at 20:17, Jason Resch wrote:
To add some clarification, I do not think spreading Alice's logic
gates across a field and allowing cosmic
Could you alter the so-lucky cosmic explosion beam a little bit so
that Alice still succeeds at her math exam, but is, reasonably enough, a
zombie during the exam. With zombie taken in the traditional sense of
Kory and Dennett.
Of course you have to keep well *both* MECH *and* MAT.
I think I
On Nov 18, 2008, at 11:52 AM, Bruno Marchal wrote:
The last question (of MGA 1) is: was Alice, in this case, a zombie
during the exam?
Of course, my personal answer would take into account the fact that I
already have a problem with the materialist's idea of matter. But I
think we're
THOUGHT EXPERIMENT AND THE FIRST QUESTIONS (MGA 1): The
lucky cosmic event.
One billion years ago, one billion light years away, somewhere in
the universe (which exists by the naturalist hypo) a cosmic explosion
occurred. And ...
... Alice had her math exam this afternoon.
From 3h