Re: UDA revisited and then some

2006-12-07 Thread Pete Carlton

A definitive treatment of this problem is Daniel Dennett's story "Where Am I?":
http://www.newbanner.com/SecHumSCM/WhereAmI.html

On Dec 6, 2006, at 4:06 PM, Brent Meeker wrote:


 Quentin Anciaux wrote:
 On Wednesday 6 December 2006 at 19:35, Brent Meeker wrote:
 Quentin Anciaux wrote:
 ...

 Another thing that puzzles me is that consciousness is supposed to be
 generated by the physical (and chemical, which is also physical)
 activities of the brain, yet I feel my consciousness (in fact me) is
 located in the upper front of my skull... Why then do neurons located
 in the back of my brain not generate conscious feeling? And if they do
 participate, why am I located in the front of my brain? Why this
 location? Why does only a tiny part of the brain feel conscious
 activity?

 Because you're not an ancient Greek.  They felt their consciousness was
 located in their stomach.

 Brent Meeker

 While I'm not, the questioning was serious... though I've never asked
 other people where they feel they are... I'm there (in the upper front
 of the brain)... Are my feelings not in accordance with yours?

 Quentin

 It might be because we're so visual and hence locate ourselves at the
 viewpoint of our vision (but that wouldn't explain the Greeks).  Or it
 might be because we've been taught that consciousness is done by the
 brain.

 Brent Meeker






RE: UDA revisited and then some

2006-12-06 Thread Stathis Papaioannou


Brent Meeker writes:

 Stathis Papaioannou wrote:
  
  Brent Meeker writes:
  
  I assume that there is some copy of me possible which preserves
  my 1st person experience. After all, physical copying literally
  occurs in the course of normal life and I still feel myself to be
  the same person. But suppose I am offered some artificial means
  of being copied. The evidence I am presented with is that Fred2
  here is a robot who behaves exactly the same as the standard
  human Fred: has all his memories, a similar personality, similar
  intellectual abilities, and passes whatever other tests one cares
  to set him. The question is, how can I be sure that Fred2 really
  has the same 1st person experiences as Fred? A software engineer
  might copy a program's look and feel without knowing anything
  about the original program's internal code, his goal being to
  mimic the external appearance as seen by the end user by whatever
  means available. Similarly with Fred2, although the hope was to
  produce a copy with the same 1st person experiences, the only
  possible research method would have been to produce a copy that
  mimics Fred's behaviour. If Fred2 has 1st person experiences at
  all, they may be utterly unlike those of Fred. Fred2 may even be
  aware that he is different but be extremely good at hiding it,
  because if he were not he would have been rejected in the testing
  process.
  
  If it could be shown that Fred2 behaves like Fred *and* is 
  structurally similar
  Or *functionally* similar at lower levels, e.g. having long and
  short-term memory, having reflexes, having mostly separate areas
  for language and vision.
  
  to Fred then I would be more confident in accepting copying. If
  behaviour is similar but the underlying mechanism completely
  different then I would consider that only by accident could 1st
  person experience be similar.
  I'd say that would still be the way to bet - just with less
  confidence.
  
  Brent Meeker
  
  It's the level of confidence which is the issue. Would it be fair to
  assume that a digital and an analogue audio source have the same 1st
  person experience (such as it may be) because their output signal is
  indistinguishable to human hearing and scientific instruments?
  
  Stathis Papaioannou 
 
 Fair is a vague term.  That they are the same would be my default 
 assumption, absent any other information.  Of course knowing that one is 
 analog and the other digital reduces my confidence in that assumption, but 
 with no theory of audio source experience I have no way to form a specific 
 alternative hypothesis.

You're implying that the default assumption should be that consciousness 
correlates more closely with external behaviour than with internal activity 
generating the behaviour: the tape recorder should reason that as the CD player 
produces the same audio output as I do, most likely it has the same experiences 
as I do. But why shouldn't the tape recorder reason: even though the CD player 
produces the same output as I do, it does so using completely different 
technology, so it most likely has completely different experiences to my own. 
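
To make the tape recorder / CD player point concrete, a minimal sketch 
(hypothetical Python, invented names, not from the original posts): two audio 
sources whose outputs are sample-for-sample identical but produced by entirely 
different internal mechanisms.

import math

class AnalogueStyleSource:
    # Computes each sample on the fly, like a continuously generated signal.
    def sample(self, t):
        return math.sin(2 * math.pi * 440 * t / 44100)

class DigitalStyleSource:
    # Plays back a precomputed lookup table, like a CD.
    def __init__(self):
        self.table = [math.sin(2 * math.pi * 440 * t / 44100)
                      for t in range(44100)]
    def sample(self, t):
        return self.table[t % 44100]

a, d = AnalogueStyleSource(), DigitalStyleSource()
# Indistinguishable output for any external listener or instrument...
assert all(abs(a.sample(t) - d.sample(t)) < 1e-12 for t in range(1000))
# ...yet nothing about that fact settles what, if anything, is alike "inside".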

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-06 Thread Quentin Anciaux

Hi Stathis,

On Wednesday 6 December 2006 at 10:23, Stathis Papaioannou wrote:
 Brent Meeker writes:
  Stathis Papaioannou wrote:
  Fair is a vague term.  That they are the same would be my default
  assumption, absent any other information.  Of course knowing that one is
  analog and the other digital reduces my confidence in that assumption,
but with no theory of audio source experience I have no way to form a
specific alternative hypothesis.

 You're implying that the default assumption should be that consciousness
 correlates more closely with external behaviour than with internal activity
 generating the behaviour: the tape recorder should reason that as the CD
 player produces the same audio output as I do, most likely it has the same
 experiences as I do. But why shouldn't the tape recorder reason: even
 though the CD player produces the same output as I do, it does so using
 completely different technology, so it most likely has completely different
 experiences to my own.

 Stathis Papaioannou

A tape recorder or a CD player has no external behavior that would mimic a 
human. But I really think that if something has the same external behavior as 
a human, then the copy (whatever it is made of) will be conscious. An exact 
replica means you can talk with the replica, it can learn, etc... It's not 
just sound (and/or movement). Even if I knew that the brain copy was made of 
smashed apples it would not change my mind ;) about it. The only evidence of 
others' consciousness is behavior, social interactions, ... You could scan a 
brain, yet you won't see consciousness.

Another thing that puzzles me is that consciousness is supposed to be 
generated by the physical (and chemical, which is also physical) activities 
of the brain, yet I feel my consciousness (in fact me) is located in the 
upper front of my skull... Why then do neurons located in the back of my 
brain not generate conscious feeling? And if they do participate, why am I 
located in the front of my brain? Why this location? Why does only a tiny 
part of the brain feel conscious activity?

Quentin






Re: UDA revisited and then some

2006-12-06 Thread Brent Meeker

Quentin Anciaux wrote:
...
 Another thing that puzzles me is that consciousness is supposed to be 
 generated by the physical (and chemical, which is also physical) activities 
 of the brain, yet I feel my consciousness (in fact me) is located in the 
 upper front of my skull... Why then do neurons located in the back of my 
 brain not generate conscious feeling? And if they do participate, why am I 
 located in the front of my brain? Why this location? Why does only a tiny 
 part of the brain feel conscious activity?

Because you're not an ancient Greek.  They felt their consciousness was 
located in their stomach.

Brent Meeker




Re: UDA revisited and then some

2006-12-06 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Brent Meeker writes:
 
 Stathis Papaioannou wrote:
 Brent Meeker writes:
 
 I assume that there is some copy of me possible which
 preserves my 1st person experience. After all, physical
 copying literally occurs in the course of normal life and I
 still feel myself to be the same person. But suppose I am
 offered some artificial means of being copied. The evidence I
 am presented with is that Fred2 here is a robot who behaves
 exactly the same as the standard human Fred: has all his
 memories, a similar personality, similar intellectual
 abilities, and passes whatever other tests one cares to set
 him. The question is, how can I be sure that Fred2 really has
 the same 1st person experiences as Fred? A software engineer 
 might copy a program's look and feel without knowing
 anything about the original program's internal code, his goal
 being to mimic the external appearance as seen by the end
 user by whatever means available. Similarly with Fred2,
 although the hope was to produce a copy with the same 1st
 person experiences, the only possible research method would
 have been to produce a copy that mimics Fred's behaviour. If
 Fred2 has 1st person experiences at all, they may be utterly
 unlike those of Fred. Fred2 may even be aware that he is
 different but be extremely good at hiding it, because if he
 were not he would have been rejected in the testing process.
 
 If it could be shown that Fred2 behaves like Fred *and* is 
 structurally similar
 Or *functionally* similar at lower levels, e.g. having long and
  short-term memory, having reflexes, having mostly separate
 areas for language and vision.
 
 to Fred then I would be more confident in accepting copying.
 If behaviour is similar but the underlying mechanism
 completely different then I would consider that only by
 accident could 1st person experience be similar.
 I'd say that would still be the way to bet - just with less 
 confidence.
 
 Brent Meeker
 It's the level of confidence which is the issue. Would it be fair
 to assume that a digital and an analogue audio source have the
 same 1st person experience (such as it may be) because their
 output signal is indistinguishable to human hearing and
 scientific instruments?
 
 Stathis Papaioannou
 Fair is a vague term.  That they are the same would be my default
 assumption, absent any other information.  Of course knowing that
 one is analog and the other digital reduces my confidence in that
 assumption, but with no theory of audio source experience I have no
 way to form a specific alternative hypothesis.
 
 You're implying that the default assumption should be that
 consciousness correlates more closely with external behaviour than
 with internal activity generating the behaviour: the tape recorder
 should reason that as the CD player produces the same audio output as
 I do, most likely it has the same experiences as I do. But why
 shouldn't the tape recorder reason: even though the CD player
 produces the same output as I do, it does so using completely
 different technology, so it most likely has completely different
 experiences to my own.

Here's my reasoning: We think other people (and animals) are conscious, have 
experiences, mainly because of the way they behave and to a lesser degree 
because they are like us in appearance and structure.  On the other hand we're 
pretty sure that consciousness requires a high degree of complexity, something 
supported by our theories and technology of information.  So we don't think 
that individual molecules or neurons are conscious - it must be something about 
how a large number of subsystems interact.  This implies that any one subsystem 
could be replaced by a functionally similar one, e.g. silicon neuron, and not 
change consciousness.  So our theory is that it is not technology in the sense 
of digital vs analog, but in some functional information processing sense.

So given two things that have the same behavior, the default assumption is they 
have the same consciousness (i.e. little or none in the case of CD and tape 
players).  If I look into them deeper and find they use different technologies, 
that doesn't do much to change my opinion - it's like a silicon neuron vs a 
biochemical one.  If I find the flow and storage of information is different, 
e.g. one throws away more information than the other, or one adds randomness, 
then I'd say that was evidence for different consciousness.
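
A minimal sketch of the kind of information-flow difference described here 
(hypothetical Python, illustrative only): three maps with superficially 
similar behaviour, one preserving information, one throwing it away, and one 
adding randomness.

import random

def faithful(samples):
    return list(samples)                        # preserves all information

def lossy(samples):
    return [round(s, 1) for s in samples]       # throws information away

def noisy(samples):
    return [s + random.gauss(0, 0.01) for s in samples]   # adds randomness

# Distinct inputs collide after the lossy map: information is destroyed.
assert lossy([0.123]) == lossy([0.149])
# The noisy map varies on identical inputs: randomness has been injected.
random.seed(1); a = noisy([0.5])
random.seed(2); b = noisy([0.5])
assert a != b
# On the argument above, such differences in information flow - unlike a mere
# change of substrate - would count as evidence for different consciousness.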

Brent Meeker




RE: UDA revisited and then some

2006-12-06 Thread Stathis Papaioannou


Hi Quentin,
 
 Hi Stathis,
 
 On Wednesday 6 December 2006 at 10:23, Stathis Papaioannou wrote:
  Brent Meeker writes:
   Stathis Papaioannou wrote:
   Fair is a vague term.  That they are the same would be my default
   assumption, absent any other information.  Of course knowing that one is
   analog and the other digital reduces my confidence in that assumption,
   but with no theory of audio source experience I have no way to form a
   specific alternative hypothesis.
 
  You're implying that the default assumption should be that consciousness
  correlates more closely with external behaviour than with internal activity
  generating the behaviour: the tape recorder should reason that as the CD
  player produces the same audio output as I do, most likely it has the same
  experiences as I do. But why shouldn't the tape recorder reason: even
  though the CD player produces the same output as I do, it does so using
  completely different technology, so it most likely has completely different
  experiences to my own.
 
  Stathis Papaioannou
 
 A tape recorder or a CD player has no external behavior that would mimic a 
 human. But I really think that if something has the same external behavior 
 as a human, then the copy (whatever it is made of) will be conscious. An 
 exact replica means you can talk with the replica, it can learn, etc... It's 
 not just sound (and/or movement). Even if I knew that the brain copy was 
 made of smashed apples it would not change my mind ;) about it. The only 
 evidence of others' consciousness is behavior, social interactions, ... You 
 could scan a brain, yet you won't see consciousness.

The tape recorder / CD player example was to show that two entities may have 
similar behaviour generated by completely different mechanisms. As you say, we 
can see the brain, we can see the behaviour, but we *deduce* the consciousness, 
unless it is our own. If someone has similar behaviour generated by a similar 
brain, then you would have to invoke magical processes to explain why he would 
not also have similar consciousness. But if someone has similar behaviour with 
a very different brain, I don't think there is anything in the laws of nature 
which says that he has to have the same consciousness, even if you say that he 
must have *some* sort of consciousness.

 Another thing that puzzles me is that consciousness is supposed to be 
 generated by the physical (and chemical, which is also physical) activities 
 of the brain, yet I feel my consciousness (in fact me) is located in the 
 upper front of my skull... Why then do neurons located in the back of my 
 brain not generate conscious feeling? And if they do participate, why am I 
 located in the front of my brain? Why this location? Why does only a tiny 
 part of the brain feel conscious activity?

That's just what our brains make us think. If our brains were slightly 
different our consciousness could seem to be located in our big toe, or on the 
moons of Jupiter.

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-06 Thread Quentin Anciaux

On Wednesday 6 December 2006 at 19:35, Brent Meeker wrote:
 Quentin Anciaux wrote:
 ...

  Another thing that puzzles me is that consciousness is supposed to be 
  generated by the physical (and chemical, which is also physical) 
  activities of the brain, yet I feel my consciousness (in fact me) is 
  located in the upper front of my skull... Why then do neurons located in 
  the back of my brain not generate conscious feeling? And if they do 
  participate, why am I located in the front of my brain? Why this location? 
  Why does only a tiny part of the brain feel conscious activity?

 Because you're not an ancient Greek.  They felt their consciousness was
 located in their stomach.

 Brent Meeker

While I'm not, the questioning was serious... though I've never asked other 
people where they feel they are... I'm there (in the upper front of the 
brain)... Are my feelings not in accordance with yours?

Quentin





Re: UDA revisited and then some

2006-12-06 Thread Russell Standish

On Wed, Dec 06, 2006 at 11:38:32PM +0100, Quentin Anciaux wrote:
 
  On Wednesday 6 December 2006 at 19:35, Brent Meeker wrote:
  Quentin Anciaux wrote:
  ...
 
    Another thing that puzzles me is that consciousness is supposed to be 
    generated by the physical (and chemical, which is also physical) 
    activities of the brain, yet I feel my consciousness (in fact me) is 
    located in the upper front of my skull... Why then do neurons located in 
    the back of my brain not generate conscious feeling? And if they do 
    participate, why am I located in the front of my brain? Why this 
    location? Why does only a tiny part of the brain feel conscious activity?
  
   Because you're not an ancient Greek.  They felt their consciousness was
   located in their stomach.
 
  Brent Meeker
 
  While I'm not, the questioning was serious... though I've never asked other 
  people where they feel they are... I'm there (in the upper front of the 
  brain)... Are my feelings not in accordance with yours?
  
  Quentin
 

I don't feel very pointlike. Rather my consciousness feels distributed over
a volume that is usually a substantial fraction of my brain. When
meditating my consciousness feels like it expands to fill the room or
maybe even larger.

What of it? Probably not significant.

-- 


A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au







Re: UDA revisited and then some

2006-12-06 Thread Brent Meeker

Quentin Anciaux wrote:
 On Wednesday 6 December 2006 at 19:35, Brent Meeker wrote:
 Quentin Anciaux wrote:
 ...

 Another thing that puzzles me is that consciousness is supposed to be 
 generated by the physical (and chemical, which is also physical) activities 
 of the brain, yet I feel my consciousness (in fact me) is located in the 
 upper front of my skull... Why then do neurons located in the back of my 
 brain not generate conscious feeling? And if they do participate, why am I 
 located in the front of my brain? Why this location? Why does only a tiny 
 part of the brain feel conscious activity?
 Because you're not an ancient Greek.  They felt their consciousness was
 located in their stomach.

 Brent Meeker
 
 While I'm not, the questioning was serious... though I've never asked other 
 people where they feel they are... I'm there (in the upper front of the 
 brain)... Are my feelings not in accordance with yours?
 
 Quentin

It might be because we're so visual and hence locate ourselves at the 
viewpoint of our vision (but that wouldn't explain the Greeks).  Or it might 
be because we've been taught that consciousness is done by the brain.

Brent Meeker





RE: UDA revisited and then some

2006-12-06 Thread Stathis Papaioannou


Brent Meeker writes:

  You're implying that the default assumption should be that
  consciousness correlates more closely with external behaviour than
  with internal activity generating the behaviour: the tape recorder
  should reason that as the CD player produces the same audio output as
  I do, most likely it has the same experiences as I do. But why
  shouldn't the tape recorder reason: even though the CD player
  produces the same output as I do, it does so using completely
  different technology, so it most likely has completely different
  experiences to my own.
 
 Here's my reasoning: We think other people (and animals) are conscious, have 
 experiences, mainly because of the way they behave and to a lesser degree 
 because they are like us in appearance and structure.  On the other hand 
 we're pretty sure that consciousness requires a high degree of complexity, 
 something supported by our theories and technology of information.  So we 
 don't think that individual molecules or neurons are conscious - it must be 
 something about how a large number of subsystems interact.  This implies that 
 any one subsystem could be replaced by a functionally similar one, e.g. 
 silicon neuron, and not change consciousness.  So our theory is that it is 
 not technology in the sense of digital vs analog, but in some functional 
 information processing sense.
 
 So given two things that have the same behavior, the default assumption is 
 they have the same consciousness (i.e. little or none in the case of CD and 
 tape players).  If I look into them deeper and find they use different 
 technologies, that doesn't do much to change my opinion - it's like a silicon 
 neuron vs a biochemical one.  If I find the flow and storage of information 
 is different, e.g. one throws away more information than the other, or one 
 adds randomness, then I'd say that was evidence for different consciousness.

I basically agree, but with qualifications. If the attempt to copy human 
intelligence is bottom up, for example by emulating neurons with electronics, 
then I think it is a good bet that if it behaves like a human and is based on 
the same principles as the human brain, it probably has the same types of 
conscious experiences as a human. But long before we are able to build such 
artificial brains, we will probably have the equivalent of characters in 
advanced computer games designed to pass the Turing Test using technology 
nothing like a biological brain. If such a computer program is conscious at all 
I would certainly not bet that it was conscious in the same way as a human is 
conscious, just because it is able to fool us into thinking it is human.
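
A deliberately crude sketch of that worry (hypothetical Python, nothing like 
a real game character): a responder with no model of meaning at all, just 
canned pattern matching, can still pass a shallow behavioural probe.

CANNED = {
    "how are you": "Fine thanks - bit tired, long day.",
    "what colour is the sky": "Blue, last time I looked.",
}

def canned_reply(question):
    # No perception, no memory, no reasoning: a lookup with a stock fallback.
    return CANNED.get(question.lower().rstrip("?!. "), "Hmm, tell me more.")

# A test probing only input/output behaviour cannot distinguish this from a
# system that gives the same answers for human-like reasons.
for q in ["How are you?", "What colour is the sky?"]:
    print(q, "->", canned_reply(q))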

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-05 Thread Bruno Marchal


On 5 Dec 2006, at 00:31, Stathis Papaioannou wrote:


 Well, in the case where comp is refuted (for example by predicting that
 electrons weigh one ton, or by predicting non-eliminable white rabbits),
 then everyone will be able to guess that those people were committing
 suicide. The problem is that we will probably copy brains at some level
 well before refuting comp, if ever.
 The comp hyp. entails the existence of possible relative zombies, but
 from the point of view of those who accept artificial brains, if they
 survive, they will survive where the level has been correctly chosen. A
 linguistic difficulty is that the "where" does not denote a place in a
 universe, but many similar instants in many consistent histories.



  But how good a predictor of the right level having been chosen is 3rd 
  person observable behaviour?


Stathis, I don't understand the question. Could you elaborate just a 
few bits, thanks.

Bruno

http://iridia.ulb.ac.be/~marchal/






RE: UDA revisited and then some

2006-12-05 Thread Stathis Papaioannou


Bruno Marchal writes:

  Well, in the case where comp is refuted (for example by predicting that
  electrons weigh one ton, or by predicting non-eliminable white rabbits),
  then everyone will be able to guess that those people were committing
  suicide. The problem is that we will probably copy brains at some level
  well before refuting comp, if ever.
  The comp hyp. entails the existence of possible relative zombies, but
  from the point of view of those who accept artificial brains, if they
  survive, they will survive where the level has been correctly chosen. A
  linguistic difficulty is that the "where" does not denote a place in a
  universe, but many similar instants in many consistent histories.
 
 
 
   But how good a predictor of the right level having been chosen is 3rd 
   person observable behaviour?
 
 
 Stathis, I don't understand the question. Could you elaborate just a 
 few bits, thanks.

I assume that there is some copy of me possible which preserves my 1st person 
experience. After all, physical copying literally occurs in the course of 
normal life and I still feel myself to be the same person. But suppose I am 
offered some artificial means of being copied. The evidence I am presented with 
is that Fred2 here is a robot who behaves exactly the same as the standard 
human Fred: has all his memories, a similar personality, similar intellectual 
abilities, and passes whatever other tests one cares to set him. The question 
is, how can I be sure that Fred2 really has the same 1st person experiences as 
Fred? A software engineer might copy a program's look and feel without 
knowing anything about the original program's internal code, his goal being to 
mimic the external appearance as seen by the end user by whatever means 
available. Similarly with Fred2, although the hope was to produce a copy with 
the same 1st person experiences, the only possible research method would 
have been to produce a copy that mimics Fred's behaviour. If Fred2 has 1st 
person experiences at all, they may be utterly unlike those of Fred. Fred2 may 
even be aware that he is different but be extremely good at hiding it, because 
if he were not he would have been rejected in the testing process. 

If it could be shown that Fred2 behaves like Fred *and* is structurally similar 
to Fred then I would be more confident in accepting copying. If behaviour is 
similar but the underlying mechanism completely different then I would consider 
that only by accident could 1st person experience be similar.

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-05 Thread Brent Meeker

Stathis Papaioannou wrote:
... 
 I assume that there is some copy of me possible which preserves my
 1st person experience. After all, physical copying literally occurs
 in the course of normal life and I still feel myself to be the same
 person. But suppose I am offered some artificial means of being
 copied. The evidence I am presented with is that Fred2 here is a
 robot who behaves exactly the same as the standard human Fred: has
 all his memories, a similar personality, similar intellectual
 abilities, and passes whatever other tests one cares to set him. The
 question is, how can I be sure that Fred2 really has the same 1st
 person experiences as Fred? A software engineer might copy a
 program's look and feel without knowing anything about the original
 program's internal code, his goal being to mimic the external
 appearance as seen by the end user by whatever means available.
 Similarly with Fred2, although the hope was to produce a copy with
 the same 1st person experiences, the only possible research method
 would have been to produce a copy that mimics Fred's behaviour. If
 Fred2 has 1st person experiences at all, they may be utterly unlike
 those of Fred. Fred2 may even be aware that he is different but be
 extremely good at hiding it, because if he were not he would have
 been rejected in the testing process.
 
 If it could be shown that Fred2 behaves like Fred *and* is
 structurally similar 

Or *functionally* similar at lower levels, e.g. having long and short-term 
memory, having reflexes, having mostly separate areas for language and vision.

to Fred then I would be more confident in
 accepting copying. If behaviour is similar but the underlying
 mechanism completely different then I would consider that only by
 accident could 1st person experience be similar.

I'd say that would still be the way to bet - just with less confidence.

Brent Meeker





RE: UDA revisited and then some

2006-12-05 Thread Stathis Papaioannou


Brent Meeker writes:

  I assume that there is some copy of me possible which preserves my
  1st person experience. After all, physical copying literally occurs
  in the course of normal life and I still feel myself to be the same
  person. But suppose I am offered some artificial means of being
  copied. The evidence I am presented with is that Fred2 here is a
  robot who behaves exactly the same as the standard human Fred: has
  all his memories, a similar personality, similar intellectual
  abilities, and passes whatever other tests one cares to set him. The
  question is, how can I be sure that Fred2 really has the same 1st
  person experiences as Fred? A software engineer might copy a
  program's look and feel without knowing anything about the original
  program's internal code, his goal being to mimic the external
  appearance as seen by the end user by whatever means available.
  Similarly with Fred2, although the hope was to produce a copy with
  the same 1st person experiences, the only possible research method
  would have been to produce a copy that mimics Fred's behaviour. If
  Fred2 has 1st person experiences at all, they may be utterly unlike
  those of Fred. Fred2 may even be aware that he is different but be
  extremely good at hiding it, because if he were not he would have
  been rejected in the testing process.
  
  If it could be shown that Fred2 behaves like Fred *and* is
  structurally similar 
 
 Or *functionally* similar at lower levels, e.g. having long and short-term 
 memory, having reflexes, having mostly separate areas for language and vision.
 
 to Fred then I would be more confident in
  accepting copying. If behaviour is similar but the underlying
  mechanism completely different then I would consider that only by
  accident could 1st person experience be similar.
 
 I'd say that would still be the way to bet - just with less confidence.
 
 Brent Meeker

It's the level of confidence which is the issue. Would it be fair to assume 
that a digital and an analogue audio source have the same 1st person experience 
(such as it may be) because their output signal is indistinguishable to human 
hearing and scientific instruments? 

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-05 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Brent Meeker writes:
 
 I assume that there is some copy of me possible which preserves
 my 1st person experience. After all, physical copying literally
 occurs in the course of normal life and I still feel myself to be
 the same person. But suppose I am offered some artificial means
 of being copied. The evidence I am presented with is that Fred2
 here is a robot who behaves exactly the same as the standard
 human Fred: has all his memories, a similar personality, similar
 intellectual abilities, and passes whatever other tests one cares
 to set him. The question is, how can I be sure that Fred2 really
 has the same 1st person experiences as Fred? A software engineer
 might copy a program's look and feel without knowing anything
 about the original program's internal code, his goal being to
 mimic the external appearance as seen by the end user by whatever
 means available. Similarly with Fred2, although the hope was to
 produce a copy with the same 1st person experiences, the only
 possible research method would have been to produce a copy that
 mimics Fred's behaviour. If Fred2 has 1st person experiences at
 all, they may be utterly unlike those of Fred. Fred2 may even be
 aware that he is different but be extremely good at hiding it,
 because if he were not he would have been rejected in the testing
 process.
 
 If it could be shown that Fred2 behaves like Fred *and* is 
 structurally similar
 Or *functionally* similar at lower levels, e.g. having long and
 short-term memory, having reflexes, having mostly separate areas
 for language and vision.
 
 to Fred then I would be more confident in accepting copying. If
 behaviour is similar but the underlying mechanism completely
 different then I would consider that only by accident could 1st
 person experience be similar.
 I'd say that would still be the way to bet - just with less
 confidence.
 
 Brent Meeker
 
 It's the level of confidence which is the issue. Would it be fair to
 assume that a digital and an analogue audio source have the same 1st
 person experience (such as it may be) because their output signal is
 indistinguishable to human hearing and scientific instruments?
 
 Stathis Papaioannou 

Fair is a vague term.  That they are the same would be my default assumption, 
absent any other information.  Of course knowing that one is analog and the 
other digital reduces my confidence in that assumption, but with no theory of 
audio source experience I have no way to form a specific alternative 
hypothesis.

Brent Meeker




Re: UDA revisited and then some

2006-12-04 Thread Bruno Marchal


On 1 Dec 2006, at 20:05, Brent Meeker wrote:


 Bruno Marchal wrote:

 On 1 Dec 2006, at 10:24, Stathis Papaioannou wrote:


 Bruno Marchal writes:

 snip

  We can assume that the structural difference makes a difference to 
  consciousness but not external behaviour. For example, it may cause 
  spectrum reversal.

  Let us suppose you are right. This would mean that there is a 
  substitution level such that the digital copy person would act AS IF 
  she had been duplicated at the correct level, but having or living a 
  (1-person) spectrum reversal.

  Now what could that mean? Let us interview the copy and ask her the 
  color of the sky. Having the same external behavior as the original, 
  she will tell us the usual answer: blue (I suppose a sunny day!).

  So, apparently she is not 1-aware of that spectrum reversal. This means 
  that from her 1-person point of view, there was no spectrum reversal, 
  but obviously there is no 3-description of it either 

  So I am not sure your assertion makes sense. I agree that if we take an 
  incorrect substitution level, the copy could experience a spectrum 
  reversal, but then the person will complain to her doctor saying 
  something like "I have not been copied correctly", and will not pay her 
  doctor's bill (but this is a different external behaviour, ok?)
  I don't doubt that there is some substitution level that preserves 3rd 
  person behaviour and 1st person experience, even if this turns out to mean 
  copying a person to the same engineering tolerances as nature has 
  specified for ordinary day to day life. The question is, is there some 
  substitution level which preserves 3rd person behaviour but not 1st person 
  experience? For example, suppose you carried around with you a device 
  which monitored all your behaviour in great detail, created predictive 
  models, compared its predictions with your actual behaviour, and 
  continuously refined its models. Over time, this device might be able to 
  mimic your behaviour closely enough such that it could take over control 
  of your body from your brain and no-one would be able to tell that the 
  substitution had occurred. I don't think it would be unreasonable to 
  wonder whether this copy experiences the same thing when it looks at the 
  sky and declares it to be blue as you do before the substitution.



 Thanks for the precision.
 It *is* as reasonable to ask such a question as it is reasonable to ask 
 if tomorrow my first person experience will not indeed permute my blue 
 and orange qualia *including my memories of it* in such a way that my 
 3-behavior will remain unchanged. In that case we are back to the 
 original spectrum reversal problem.
 This is a reasonable question in the sense that the answer can be shown 
 relatively (!) undecidable: it is not verifiable by any external means, 
 nor by the first person itself. We could as well conclude that such a 
 change occurs each time the magnetic poles permute, or that it changes 
 at each season, etc.
 *But* (curiously enough perhaps) such a change can be shown to be 
 guessable by some richer machine.
 The spectrum reversal question points to the gap between the 1 and 3 
 descriptions. With comp your question should be addressable in terms 
 of the modal logics Z and X, or more precisely Z1* minus Z1 and 
 X1* minus X1, that is their true but unprovable (and undecidable) 
 propositions. Note that the question makes no sense at all for the 
 pure 1-person because S4Grz1* minus S4Grz1 is empty.
 So your question makes sense because at the level of the fourth and 
 fifth hypo your question can be translated into purely arithmetical 
 propositions, which although highly undecidable by the machine itself 
 can be decided by some richer machine.
 And I would say, without doing the calculus which is rather complex, 
 that the answer could very well be positive indeed, but this remains to 
 be proved. At least the unexpected nuances between computability, 
 provability, knowability, observability, perceivability (all redefined 
 by modal variants of G) give plenty of room for this, indeed.

 Bruno

 So what does your calculus say about the experience of people who wear 
 glasses which invert their field of vision?


This is just an adaptation process. If I remember correctly, people wearing 
those glasses are aware of the inversion of their field of vision until their 
brain generates an unconscious correction. All this can be explained 
self-referentially in G without problem, and even without mentioning the 
qualia (which would need the Z* or X*). Stathis' remark on the existence of 
qualia changes without first person knowledge of the change is far less 
obvious.
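
For reference, a sketch only: G here is the Gödel-Löb provability logic and 
S4Grz the Grzegorczyk logic; their characteristic axiom schemes, over the 
usual distribution axiom and necessitation rule, are as below in standard 
LaTeX notation (the starred systems and the Z and X variants mentioned above 
extend these and are not shown).

\[
\textbf{G (L\"ob's axiom):}\qquad \Box(\Box p \to p) \to \Box p
\]
\[
\textbf{S4Grz (Grzegorczyk's axiom):}\qquad \Box(\Box(p \to \Box p) \to p) \to p
\]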

Bruno


http://iridia.ulb.ac.be/~marchal/



Re: UDA revisited and then some

2006-12-04 Thread Bruno Marchal


On 2 Dec 2006, at 06:11, Stathis Papaioannou wrote:

 In addition to spectrum reversal type situations, where no change is noted 
 from either 3rd or 1st person perspective (and therefore it doesn't really 
 matter to anyone: as you say, it may be occurring all the time anyway and 
 we would never know), there is the possibility that a change is noted from 
 a 1st person perspective, but never reported. If you consider a practical 
 research program to make artificial replacement brains, all the researchers 
 can ever do is build a brain that behaves like the original. It may do this 
 because it thinks like the original, or it may do it because it is a very 
 good actor and is able to pretend that it thinks like the original. Those 
 brains which somehow betray the fact that they are acting will be rejected, 
 but the ones that never betray this fact will be accepted as true 
 replacement brains when they are actually not. Millions of people might 
 agree to have these replacement brains and no-one will ever know that they 
 are committing suicide.


Well, in the case where comp is refuted (for example by predicting that 
electrons weigh one ton, or by predicting non-eliminable white rabbits), 
then everyone will be able to guess that those people were committing 
suicide. The problem is that we will probably copy brains at some level 
well before refuting comp, if ever.
The comp hyp. entails the existence of possible relative zombies, but 
from the point of view of those who accept artificial brains, if they 
survive, they will survive where the level has been correctly chosen. A 
linguistic difficulty is that the "where" does not denote a place in a 
universe, but many similar instants in many consistent histories.

Bruno


http://iridia.ulb.ac.be/~marchal/






RE: UDA revisited and then some

2006-12-04 Thread Stathis Papaioannou


Bruno Marchal writes:

 On 2 Dec 2006, at 06:11, Stathis Papaioannou wrote:
 
  In addition to spectrum reversal type situations, where no change is noted 
  from either 3rd or 1st person perspective (and therefore it doesn't really 
  matter to anyone: as you say, it may be occurring all the time anyway and 
  we would never know), there is the possibility that a change is noted from 
  a 1st person perspective, but never reported. If you consider a practical 
  research program to make artificial replacement brains, all the 
  researchers can ever do is build a brain that behaves like the original. 
  It may do this because it thinks like the original, or it may do it 
  because it is a very good actor and is able to pretend that it thinks like 
  the original. Those brains which somehow betray the fact that they are 
  acting will be rejected, but the ones that never betray this fact will be 
  accepted as true replacement brains when they are actually not. Millions 
  of people might agree to have these replacement brains and no-one will 
  ever know that they are committing suicide.
 
 
 Well, in the case where comp is refuted (for example by predicting that 
 electrons weigh one ton, or by predicting non-eliminable white rabbits), 
 then everyone will be able to guess that those people were committing 
 suicide. The problem is that we will probably copy brains at some level 
 well before refuting comp, if ever.
 The comp hyp. entails the existence of possible relative zombies, but 
 from the point of view of those who accept artificial brains, if they 
 survive, they will survive where the level has been correctly chosen. A 
 linguistic difficulty is that the "where" does not denote a place in a 
 universe, but many similar instants in many consistent histories.

But how good a predictor of the right level having been chosen is 3rd person 
observable behaviour?

Stathis Papaioannou




RE: UDA revisited and then some

2006-12-02 Thread Stathis Papaioannou


Brent Meeker writes:
 
  I don't doubt that there is some substitution level that preserves 3rd 
  person behaviour and 1st person experience, even if this turns out to mean 
  copying a person to the same engineering tolerances as nature has 
  specified for ordinary day to day life. The question is, is there some 
  substitution level which preserves 3rd person behaviour but not 1st person 
  experience? For example, suppose you carried around with you a device 
  which monitored all your behaviour in great detail, created predictive 
  models, compared its predictions with your actual behaviour, and 
  continuously refined its models. Over time, this device might be able to 
  mimic your behaviour closely enough such that it could take over control 
  of your body from your brain and no-one would be able to tell that the 
  substitution had occurred. I don't think it would be unreasonable to 
  wonder whether this copy experiences the same thing when it looks at the 
  sky and declares it to be blue as you do before the substitution.
 
That's a précis of Greg Egan's short story "Learning to Be Me".  I wouldn't 
call it unreasonable to wonder whether the copy experiences the same qualia, 
but I'd call it unreasonable to conclude that it did not on the stated 
evidence.  In fact I find it hard to think of what evidence would count 
against it having some kind of qualia.

It would be a neat theory if any machine that processed environmental 
information in a manner analogous to an animal had some level of conscious 
experience (and consistent with Colin's no zombie scientists hypothesis, 
although I don't think it is a conclusion he would agree with). It would 
explain consciousness as a corollary of this sort of information processing. 
However, I don't know how such a thing could ever be proved or disproved.

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-02 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Brent Meeker writes:
  
 I don't doubt that there is some substitution level that preserves 3rd 
 person behaviour and 1st person experience, even if this turns out to mean 
 copying a person to the same engineering tolerances as nature has specified 
 for ordinary day to day life. The question is, is there some substitution 
 level which preserves 3rd person behaviour but not 1st person experience? 
 For example, suppose you carried around with you a device which monitored 
 all your behaviour in great detail, created predictive models, compared its 
 predictions with your actual behaviour, and continuously refined its 
 models. Over time, this device might be able to mimic your behaviour 
 closely enough such that it could take over control of your body from your 
 brain and no-one would be able to tell that the substitution had occurred. 
 I don't think it would be unreasonable to wonder whether this copy 
 experiences the same thing when it looks at the sky and declares it to be 
 blue as you do before the substitution.
 That's a précis of Greg Egan's short story "Learning to Be Me".  I wouldn't 
 call it unreasonable to wonder whether the copy experiences the same 
 qualia, but I'd call it unreasonable to conclude that it did not on the 
 stated evidence.  In fact I find it hard to think of what evidence would 
 count against it having some kind of qualia.
 
 It would be a neat theory if any machine that processed environmental 
 information in a manner analogous to an animal had some level of conscious 
 experience (and consistent with Colin's no zombie scientists hypothesis, 
 although I don't think it is a conclusion he would agree with). It would 
 explain consciousness as a corollary of this sort of information 
 processing. However, I don't know how such a thing could ever be proved or 
 disproved.
 
 Stathis Papaioannou

Things are seldom proved or disproved in science.  Right now I'd say the 
evidence favors the no-zombie theory.  The only evidence beyond observation of 
behavior that I can imagine is to map processes in the brain and determine how 
memories are stored and how manipulation of symbolic and graphic 
representations are done.  It might then be possible to understand how a 
computer/robot could achieve the same behavior with a different functional 
structure; analogous say to imperative vs functional programs.  But then we'd 
only be able to infer that the robot might be conscious in a different way.  I 
don't see how we could infer that it was not conscious.
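
To make the imperative vs functional analogy concrete, a hypothetical Python 
sketch (invented example, not from the post): two programs with identical 
input/output behaviour but differently organised internals.

from functools import reduce

def factorial_imperative(n):
    # Imperative: state mutated step by step in a loop.
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_functional(n):
    # Functional: no mutable state, just composition of pure operations.
    return reduce(lambda acc, k: acc * k, range(2, n + 1), 1)

# Externally indistinguishable behaviour from structurally different
# "mechanisms" - analogous to inferring that a robot might be conscious in a
# different way, but never that it is not conscious.
assert all(factorial_imperative(n) == factorial_functional(n)
           for n in range(10))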

On a related point, it is often said here that consciousness is ineffable: what 
it is like to be someone cannot be communicated.  But there's another side to 
this: it is exactly the content of consciousness that we can communicate.  We 
can tell someone how we prove a theorem: we're conscious of those steps.  But 
we can't tell someone how our brain came up with the proof (the Poincaré 
effect) or why it is persuasive.

Brent Meeker




RE: UDA revisited and then some

2006-12-01 Thread Stathis Papaioannou


Bruno Marchal writes:

 snip
 
  We can assume that the structural difference makes a difference to 
  consciousness but
  not external behaviour. For example, it may cause spectrum reversal.
 
 
 Let us suppose you are right. This would mean that there is a 
 substitution level such that the digital copy person would act AS IF 
 she had been duplicated at the correct level, but having or living a 
 (1-person) spectrum reversal.
 
 Now what could that mean? Let us interview the copy and ask her the 
 color of the sky. Having the same external behavior as the original, 
 she will tell us the usual answer: blue (I suppose a sunny day!).
 
 So, apparently she is not 1-aware of that spectrum reversal. This means 
 that from her 1-person point of view, there was no spectrum reversal, 
 but obviously there is no 3-description of it either 
 
 So I am not sure your assertion makes sense. I agree that if we take an 
 incorrect substitution level, the copy could experience a spectrum 
 reversal, but then the person will complain to her doctor saying 
 something like "I have not been copied correctly", and will not pay her 
 doctor's bill (but this is a different external behaviour, ok?)

I don't doubt that there is some substitution level that preserves 3rd person 
behaviour and 1st person experience, even if this turns out to mean copying a 
person to the same engineering tolerances as nature has specified for 
ordinary day to day life. The question is, is there some substitution level 
which preserves 3rd person behaviour but not 1st person experience? For 
example, suppose you carried around with you a device which monitored all 
your behaviour in great detail, created predictive models, compared its 
predictions with your actual behaviour, and continuously refined its models. 
Over time, this device might be able to mimic your behaviour closely enough 
such that it could take over control of your body from your brain and no-one 
would be able to tell that the substitution had occurred. I don't think it 
would be unreasonable to wonder whether this copy experiences the same thing 
when it looks at the sky and declares it to be blue as you do before the 
substitution.
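
A minimal caricature of such a device (hypothetical Python, vastly simpler 
than anything that could mimic a person): it observes stimulus-response 
pairs, builds a predictive table, and could take over once its predictions 
match.

from collections import Counter, defaultdict

class Mimic:
    # Learns the most frequent response to each observed stimulus.
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, stimulus, response):
        self.counts[stimulus][response] += 1

    def predict(self, stimulus):
        seen = self.counts.get(stimulus)
        return seen.most_common(1)[0][0] if seen else None

# The "person" being modelled: a fixed behavioural disposition.
person = {"greeting": "hello", "sky?": "blue"}

mimic = Mimic()
for _ in range(100):                      # a long observation phase
    for stimulus, response in person.items():
        mimic.observe(stimulus, response)

# Once predictions match behaviour, control could be handed over with no
# outwardly detectable change - which is exactly the question raised above.
assert all(mimic.predict(s) == r for s, r in person.items())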

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-01 Thread Bruno Marchal


On 01 Dec 2006, at 10:24, Stathis Papaioannou wrote:



 Bruno Marchal writes:

 snip


 I don't doubt that there is some substitution level that preserves 3rd 
 person
 behaviour and 1st person experience, even if this turns out to mean 
 copying
 a person to the same engineering tolerances as nature has specified 
 for ordinary
 day to day life. The question is, is there some substitution level 
 which preserves
 3rd person behaviour but not 1st person experience? For example, 
 suppose
 you carried around with you a device which monitored all your 
 behaviour in great
 detail, created predictive models, compared its predictions with your 
 actual
 behaviour, and continuously refined its models. Over time, this device 
 might be
 able to mimic your behaviour closely enough such that it could take 
 over control of
 your body from your brain and no-one would be able to tell that the 
 substitution
 had occurred. I don't think it would be unreasonable to wonder whether 
 this copy
 experiences the same thing when it looks at the sky and declares it to 
 be blue as
 you do before the substitution.



Thanks for the clarification.
It *is* as reasonable to ask such a question as it is to ask whether 
tomorrow my first person experience will permute my blue and orange 
qualia *including my memories of them* in such a way that my 3-behavior 
remains unchanged. In that case we are back to the original spectrum 
reversal problem.
This is a reasonable question in the sense that the answer can be shown 
relatively (!) undecidable: it is not verifiable by any external means, 
nor by the first person itself. We could as well conclude that such a 
change occurs each time the magnetic poles permute, or that it changes 
at each season, etc.
*But* (curiously enough perhaps) such a change can be shown to be 
guess-able by some richer machine.
The spectrum reversal question points to the gap between the 1 and 3 
descriptions. With acomp your question should be addressable in the 
terms of the modal logic Z and X, or more precisely Z1* minus Z1 and 
X1* minus X1, that is their true but unprovable (and undecidable) 
propositions. Note that the question makes no sense at all for the 
pure 1-person because S4Grz1* minus S4Grz1 is empty.
So your question makes sense because at the level of the fourth and 
fifth hypo your question can be translated into purely arithmetical 
propositions, which although highly undecidable by the machine itself 
can be decided by some richer machine.
And I would say, without doing the calculus which is rather complex, 
that the answer could very well be positive indeed, but this remains to 
be proved. At least the unexpected nuances between computability, 
provability, knowability, observability and perceivability (all redefined 
by modal variants of G) give plenty of room for this, indeed.
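
A compact restatement of the notation, reading it on the usual G/G* 
pattern (an assumption on my part, not something spelled out above): for 
each machine logic L, the starred logic L* is its truth-level 
counterpart, and the "minus" picks out the true-but-unprovable part,

  \[ L^{*} \setminus L \;=\; \{\varphi \mid \varphi \text{ is true of the machine but not provable by it}\} \]

so the claim amounts to: Z1* \ Z1 and X1* \ X1 are non-empty (room for a 
reversal that is real but undetectable at that level), while 
S4Grz1* \ S4Grz1 = \emptyset (no such room for the pure 1-person).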

Bruno






http://iridia.ulb.ac.be/~marchal/






Re: UDA revisited and then some

2006-12-01 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Bruno Marchal writes:
 
 snip

 
 I don't doubt that there is some substitution level that preserves 3rd person 
 behaviour and 1st person experience, even if this turns out to mean copying 
 a person to the same engineering tolerances as nature has specified for 
 ordinary 
 day to day life. The question is, is there some substitution level which 
 preserves 
 3rd person behaviour but not 1st person experience? For example, suppose 
 you carried around with you a device which monitored all your behaviour in 
 great 
 detail, created predictive models, compared its predictions with your actual 
 behaviour, and continuously refined its models. Over time, this device might 
 be 
 able to mimic your behaviour closely enough such that it could take over 
 control of 
 your body from your brain and no-one would be able to tell that the 
 substitution 
 had occurred. I don't think it would be unreasonable to wonder whether this 
 copy 
 experiences the same thing when it looks at the sky and declares it to be 
 blue as 
 you do before the substitution.

That's a précis of Greg Egan's short story Learning to Be Me (the story 
of the jewel).  I wouldn't call it unreasonable to wonder whether the copy 
experiences the same qualia, but I'd call it unreasonable to conclude that 
it did not on the stated evidence.  In fact I find it hard to think of 
what evidence would count against its having some kind of qualia.

Brent Meeker




Re: UDA revisited and then some

2006-12-01 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 01 Dec 2006, at 10:24, Stathis Papaioannou wrote:
 

 snip
 
 
 
 Thanks for the clarification.
 It *is* as reasonable to ask such a question as it is to ask whether 
 tomorrow my first person experience will permute my blue and orange 
 qualia *including my memories of them* in such a way that my 3-behavior 
 remains unchanged. In that case we are back to the 
 original spectrum reversal problem.
 This is a reasonable question in the sense that the answer can be shown 
 relatively (!) undecidable: it is not verifiable by any external means, 
 nor by the first person itself. We could as well conclude that such a 
 change occurs each time the magnetic poles permute, or that it changes 
 at each season, etc.
 *But* (curiously enough perhaps) such a change can be shown to be 
 guess-able by some richer machine.
 The spectrum reversal question points to the gap between the 1 and 3 
 descriptions. With acomp your question should be addressable in the 
 terms of the modal logic Z and X, or more precisely Z1* minus Z1 and 
 X1* minus X1, that is their true but unprovable (and undecidable) 
 propositions. Note that the question makes no sense at all for the 
 pure 1-person because S4Grz1* minus S4Grz1 is empty.
 So your question makes sense because at the level of the fourth and 
 fifth hypo your question can be translated into purely arithmetical 
 propositions, which although highly undecidable by the machine itself 
 can be decided by some richer machine.
 And I would say, without doing the calculus which is rather complex, 
 that the answer could very well be positive indeed, but this remains to 
 be proved. At least the unexpected nuances between computability, 
 provability, knowability, observability and perceivability (all redefined 
 by modal variants of G) give plenty of room for this, indeed.
 
 Bruno

So what does your calculus say about the experience of people who wear glasses 
which invert their field of vision?

Brent Meeker





RE: UDA revisited and then some

2006-12-01 Thread Stathis Papaioannou


In addition to spectrum reversal type situations, where no change is noted from 
either 3rd or 1st person perspective (and therefore it doesn't really matter to 
anyone: 
as you say, it may be occurring all the time anyway and we would never know), 
there is 
the possibility that a change is noted from a 1st person perspective, but never 
reported. 
If you consider a practical research program to make artificial replacement 
brains, all the 
researchers can ever do is build a brain that behaves like the original. It may 
do this because 
it thinks like the original, or it may do it because it is a very good actor 
and is able to pretend 
that it thinks like the original. Those brains which somehow betray the fact 
that they are 
acting will be rejected, but the ones that never betray this fact will be 
accepted as true 
replacement brains when they are actually not. Millions of people might agree 
to have these 
replacement brains and no-one will ever know that they are committing suicide. 
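
A sketch of why the selection step cannot do better, in Python (the 
tests, answers and names are all invented for illustration; only the 
shape of the acceptance criterion matters):

def behaves_like_original(candidate, tests):
    # The research program's entire acceptance criterion.
    return all(candidate(question) == expected for question, expected in tests)

TESTS = [("what colour is the sky?", "blue"),
         ("what is your favourite food?", "olives")]

def true_duplicate(question):
    # Supposed to think (and, we hope, feel) like the original.
    return {"what colour is the sky?": "blue",
            "what is your favourite food?": "olives"}[question]

def perfect_actor(question):
    # Supposed to merely act like the original, with no inner life.
    return {"what colour is the sky?": "blue",
            "what is your favourite food?": "olives"}[question]

for brain in (true_duplicate, perfect_actor):
    print(brain.__name__, behaves_like_original(brain, TESTS))
# Both print True: the criterion that accepts or rejects candidate
# brains has no term in it that could separate the two cases.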

Stathis Papaioannou



 From: [EMAIL PROTECTED]
 Subject: Re: UDA revisited and then some
 Date: Fri, 1 Dec 2006 12:27:37 +0100
 To: everything-list@googlegroups.com
 
 
 
  On 01 Dec 2006, at 10:24, Stathis Papaioannou wrote:
 
  snip

Re: UDA revisited and then some

2006-11-30 Thread Bruno Marchal


On 29 Nov 2006, at 06:33, Stathis Papaioannou wrote:


snip

 We can assume that the structural difference makes a difference to 
 consciousness but
 not external behaviour. For example, it may cause spectrum reversal.


Let us suppose you are right. This would mean that there is a 
substitution level such that the digital copy person would act AS IF 
she has been duplicated at the correct level, but having or living a 
(1-person) spectrum reversal.

Now what could that mean? Let us interview the copy and ask her the 
color of the sky. Having the same external behavior as the original, 
she will tell us the usual answer: blue (I suppose a sunny day!).

So, apparently she is not 1-aware of that spectrum reversal. This means 
that from her 1-person point of view, there was no spectrum reversal, 
but obviously there is no 3-description of it either 

So I am not sure your assertion makes sense. I agree that if we take an 
incorrect substitution level, the copy could experience a spectrum 
reversal, but then the person will complain to her doctor, saying 
something like "I have not been copied correctly", and will not pay her 
doctor's bill (but this is a different external behaviour, ok?)

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: UDA revisited

2006-11-29 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 David Nyman writes:
 
 You're right - it's muddled, but as you imply there is the glimmer of
 an idea trying to break through. What I'm saying is that the
 'functional' - i.e. 3-person description - not only of the PZ, but of
 *anything* - fails to capture the information necessary for PC. Now,
 this isn't intended as a statement of belief in magic, but rather that
 the 'uninstantiated' 3-person level (i.e. when considered abstractly)
 is simply a set of *transactions*.  But - beyond the abstract - the
 instantiation or substrate of these transactions is itself an
 information 'domain' - the 1-person level - that in principle must be
 inaccessible via the transactions alone - i.e. you can't see it 'out
 there'. But by the same token it is directly accessible via
 instantiation - i.e. you can see it 'in here'

 For this to be what is producing PC, the instantiating, or
 constitutive, level must be providing whatever information is necessary
 to 'animate' 3-person transactional 'data' in phenomenal form, and in
 addition whatever processes are contingent on phenomenally-animated
 perception must be causally effective at the 3-person level (if we are
 to believe that possessing PC actually makes a difference). This seems
 a bit worrying in terms of the supposed inadmissibility of 'hidden
 variables' in QM (i.e. the transactional theory of reality).
 Notwithstanding this, if what I'm saying is true (which no doubt it
 isn't), then it would appear that information over and above what is
 manifested transactionally would be required to account for PC, and for
 whatever transactional consequences are contingent on the possession of
 PC.

 Just to be clear about PZs, it would be a consequence of the foregoing
 that a functionally-equivalent analog of a PC entity *might* possess
 PC, but that this would depend critically on the functional
 *substitution level*. We could be confident that physical cloning
 (duplication) would find the right level, but in the absence of this,
 and without a theory of instantiation, we would be forced to rely on
 the *behaviour* of the analog in assessing whether it possessed PC.
 But, on reflection, this seems right.
 
 You seem to be implying that there is something in the instantiation which 
 cannot be captured in the 3rd person description. Could this something just 
 be identified as the raw feeling of PC from the inside, generated by 
 perfectly 
 well understood physics, with no causal effects of its own? 
 
 Let me give a much simpler example than human consciousness. Suppose that 
 when a hammer hits a nail, it groks the nail. Grokking is not something that 
 can 
 be explained to a non-hammer. There is no special underlying physics: 
 whenever 
 momentum is transferred from the hammer to the nail, grokking necessarily 
 occurs. 
 It is no more possible for a hammer to hit a nail without grokking it than it 
 is 
 possible for a hammer to hit a nail without hitting it. Because of this, it 
 doesn't 
 really make sense to say that grokking causes anything: the 3rd person 
 describable physics completely defines all hammer-nail interactions, which is 
 why 
 we have all gone through life never suspecting that hammers grok. 
 
 The idea of a zombie (non-grokking) hammer is philosophically problematic. We 
 would 
 have to invoke magic to explain how, of two physically identical hammers doing 
 identical 
 things, one is a zombie and the other is normal. (There is no evidence that 
 there is 
 anything magic about grokking. Mysterious though it may seem, it's just a 
 natural part 
 of being a hammer). Still, we can imagine that God has created a zombie 
 hammer, 
 indistinguishable from normal hammers no matter what test we put it through. 
 This would 
 imply that there is some non-third person describable aspect of hammers 
 responsible for 
 their ability to grok nails. OK: we knew that already, didn't we? It is what 
 makes grokking 
 special, private, and causally irrelevant from a third person perspective. 
 
 Stathis Papaioannou

Very well put, Stathis.

And an apt example since "to grok" actually is an English word meaning "to 
understand intuitively".  So when you understand that "A and B" entails "A", it 
is because you grok "and".  Intuitive understanding is not communicable 
directly.

Brent Meeker




Re: UDA revisited

2006-11-28 Thread 1Z


David Nyman wrote:
 1Z wrote:

  But PC isn't *extra* information It is a re-presentation of
  what is coming in through the senses by 3rd person mechanisms.

 How can you be confident of that?

Because phenomenal perception wouldn't be perception otherwise.

Non-phenomenal sense data (pulse trains, etc.) has to co-vary with
external events, or it is useless as a guide to what is going on
outside
your head. Likewise, the phenomenal re-presentation has
to co-vary with the data. And if A co-varies with
B, A contains essentially the same information as B. If PC were
a free variable, it would not present, or re-present anything outside
itself.
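
A toy calculation of that last point (entirely my own illustration; 
mutual information is one standard way to cash out "contains essentially 
the same information"):

from collections import Counter
from math import log2

def mutual_information(pairs):
    # I(A;B) in bits, estimated from a list of (a, b) samples.
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

b = [0, 1, 0, 1, 1, 0, 0, 1] * 50                # pre-phenomenal sensory data
covarying = [("red" if x else "green", x) for x in b]
free_variable = [("red", x) for x in b]          # a display that ignores the data

print(mutual_information(covarying))      # 1.0 bit: all of B is present in A
print(mutual_information(free_variable))  # 0.0: it presents nothing outside itself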

 We can see that transactional
 information arrives in the brain and is processed in a 3-person
 describable manner. We don't have the glimmer of a theory of how this
 could of itself produce anything remotely like PC, or indeed more
 fundamentally account for the existence of any non-3-personal 'pov'
 whatsoever.

How, no. But when PC is part of perception, that sets
constraints on *what* is happening.

 What I'm suggesting is that 'phenomenality' is inherently
 bound up with instantiation, and that it thereby embodies (literally)
 information that is inaccessible from the 3-person (i.e. disembodied)
 pov.

Information about how information is embodied (this is a First Folio
of Shakespeare, that is a CD-ROM of Shakespeare) is always
extra. However, if the information itself is extra, there is no
phenomenal perception.

 This is why 'qualia' aren't 'out there'.  Of course this doesn't
 imply that electrons are conscious or whatever, because the typical
 content and 'grasp' of PC would emerge at vastly higher-order levels of
 organisation. But my point is that *instantiation* makes the difference
 - the world looks *like* something (actually, like *me*) to an
 instantiated entity, but not like anything (obviously) to a
 non-instantiated entity.

 PZs, as traditionally conceived, are precisely that - non-instantiated,
 abstract, and hence not 'like' anything at all.

Huh? PZ's are supposed to be physical.

 The difference between
 a PZ and a traditionally-duplicated PC human is that we *can't help*
 but get the phenomenality when we follow the traditional process of
 constructing people. But a purely 3-person functional theory doesn't
 tell us how. And consequently we can't find a purely functional
 *substitution level* that is guaranteed to produce PC, except by
 physical duplication. Or - as in the 'yes doctor' gamble - by observing
 the behaviour of the entity and drawing our own conclusions.
 
 David






Re: UDA revisited

2006-11-28 Thread 1Z


Colin Geoffrey Hales wrote:

 Hi Brent,
 Please see the post/replies to Quentin/1Z.
 I am trying to understand the context in which I can be wrong and how
 other people view the proposition. There can be a mixture of mistakes and
 poor communication and I want to understand all the ways in which these
 things play a role in the discourse.

 So...

  So, I have my zombie scientist and my human scientist and I
  ask them to do science on exquisite novelty. What happens?
  The novelty is invisible to the zombie, who has the internal
  life of a dreamless sleep.
 
  Scientists don't literally see novel theories - they invent
  them by combining other ideas.  Invisible is just a metaphor.

 I am not talking about the creative process. I am talking about the
 perception of a natural world phenomenon that has never before been
 encountered. There can be no a-priori scientific knowledge in such
 situations. It is as far from a metaphor as you can get. I mean literal
 invisibility. See the red photon discussion in the 1Z posting. If all you
 have is a-priori abstract (non-phenomenal) rules of interpretation of
 sensory signals to go by, then one day you are going to misinterpret
 because the signals came in the same form from a completely different source
 and you'd never know it. That is the invisibility I claim at the center of
 the zombie's difficulty.

 
  The reason it is invisible is because there is no phenomenal
  consciousness. The zombie has only sensory data to use to
  do science. There are an infinite number
  of ways that same sensory data could arrive from an infinity
  of external natural world situations. The sensory data is
  ambiguous - it's all the same - action potential pulse trains
  traveling from sensors to brain. The zombie cannot possibly
  distinguish the novelty from the sensory data
 
  Why can it not distinguish them as well as the limited human scientist?

 Because the human scientist is distinguishing them within the phenomenal
 construct made from the sensory data, not directly from the sensory data -
 which all the zombie has. The zombie has no phenomenal construct of the
 external world. It has an abstraction entirely based on the prior history
 of non-phenomenal sensory input.

All the evidence indicates that humans have only an
abstraction entirely based on the prior history
of phenomenal sensory input -- which itself contains only information
previously present in an abstraction entirely based on the prior history
of non-phenomenal sensory input.

 
  and has no awareness of the external world or even its own boundary.
 
  Even simple robots like the Mars Rovers have awareness of the
  world, where they are, their internal states, and

 No they don't. They have an internal state sufficiently complex to
 navigate according to the rules of the program (a-priori knowledge) given
 to them by humans, who are the only beings that are actually aware where
 the rover is. Look at what happens when the machine gets hung up on
 novelty... like the rock nobody could allow for. Who digs it out?
 Not the rover... humans do...

Because it lacks phenomenality? Or because it is not
a very smart robot?

 The rover has no internal life at all. Going
 'over there' is what the human sees. 'Actuate this motor until this
 number equals that number' is what the rover does.
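
Taken literally, that inner loop is just the following (a deliberately 
bare sketch of mine; the encoder and motor are fakes standing in for 
hardware):

ticks = 0          # wheel-encoder count: the rover's entire 'world'

def read_encoder():
    return ticks

def actuate_motor():
    global ticks
    ticks += 1     # one motor step, reflected back as a number

def drive(target_ticks):
    # 'actuate this motor until this number equals that number'
    while read_encoder() != target_ticks:
        actuate_motor()

drive(1000)
print(ticks)       # 1000 -- which a human, not the rover, reads as 'over there'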

 
  No.  You've simply assumed that you know what awareness is and you
 have then defined a zombie as not having it.  You might as
  well have just defined zombie as just like a person, but can't do
 science or can't whistle.  Whatever definition you give
  still leaves the question of whether a being whose internal
  processes (and a fortiori the external processes) are
  functionally identical with a human's is conscious.

 This is the nub of it. It's where I struggle to see the logic others see.
 I don't think I have done what you describe. I'll walk myself through it.

 What I have done is try to figure out a valid test for phenomenal
 consciousness.

 When you take away phenomenal consciousness what can't you do? It seems
 science is a unique/special candidate for a variety of reasons. Its
 success is critically dependent on the existence of a phenomenal
 representation of the external world.

So is art. So is walking around without bumping into things.
So, no science is not unique.

 The creature that is devoid of such constructs is what we typically call a
 zombie. It may be a mistake to call it that. No matter.

 OK, so the real sticking point is the 'phenomenal construct'. The zombie
 could have a 'construct' with as much detail in it as the human phenomenal
 construct, but that is phenomenally inert (a numerical abstraction). Upon
 what basis could the zombie acquire such a construct?

The same way a human did, but without the phenomenality, I suppose.

  It can't get it from
 sensory feeds without knowing already what 

Re: UDA revisited: Digital Physics

2006-11-28 Thread 1Z


Colin Geoffrey Hales wrote:

  (And analogue physics might turn out to be digital)
 

 Digital is a conceptual representation metaphor only.

Not necessarily.

http://en.wikipedia.org/wiki/Digital_physics

http://www.mtnmath.com/digital.html





RE: UDA revisited

2006-11-28 Thread Stathis Papaioannou


I was using David Chalmers's terminology. The science, however advanced it 
might become, is the easy problem. Suppose alien scientists discover that 
human consciousness is caused by angels that reside in tiny black holes inside 
every neuron. They study these angels so closely that they come to understand 
them as well as humans understand hammers or screwdrivers today: well enough 
to build a human and predict his every response. Despite such detailed 
knowledge, 
they might still have no idea that humans are conscious, or what it is like to 
be a 
human, or how having one type of angel in your head feels different to having a 
different type of angel. For that matter, we have no idea whether hammers and 
screwdrivers have any kind of phenomenal consciousness. We assume that they do 
not, but maybe they experience something which for us is utterly beyond 
imagination. 
It's not a question science can ever answer, even in principle.

Stathis Papaioannou
 


  The hard problem is not that we haven't discovered the physics that
  explains
  consciousness, it is that no such explanation is possible. Whatever
  Physics X
  is, it is still possible to ask, Yes, but how can a blind man who
  understands
  Physics X use it to know what it is like to see? As far as the hard
  problem goes,
  Physics X (if there is such a thing) is no more of an advance than knowing
  which
  neurons fire when a subject has an experience.
 
  Stathis Papaioannou
 
 I think you are mixing up modelling and explanation. It may be that 'being
 something' is the only way to describe it. Why is that invalid science?
 Especially when 'being something' is everything that enables science.
 
 Every object in the universe has a first person story to tell. Not just us.
 Voicelessness is just a logistics issue.
 
 Colin

_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---




RE: UDA revisited and then some

2006-11-28 Thread Stathis Papaioannou


Colin Hales writes:

  I think it is logically possible to have functional equivalence but
  structural
  difference with consequent difference in conscious state even though
  external behaviour is the same.
 
  Stathis Papaioannou
 
 Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
 Replace every neuron with a silicon functional equivalent and (b) hold
 the external behaviour identical.

I would guess that such a 1-for-1 replacement brain would in fact have the same 
PC as the biological original, although this is not a logical certainty. But 
what I was 
thinking of was the equivalent of copying the look and feel of a piece of 
software 
without having access to the source code. Computers may one day be able to copy 
the look and feel of a human not by directly modelling neurons but by 
completely 
different mechanisms. Even if such computers were conscious, there seems no 
good 
reason to assume that their experiences would be similar to those of a 
similarly 
behaving human. 
 
 If the 'structural difference' (accounting for consciousness) has a
 critical role in function then the assumption of identical external
 behaviour is logically flawed. This is the 'philosophical zombie'. Holding
 the behaviour to be the same is a meaningless impossibility in this
 circumstance.

We can assume that the structural difference makes a difference to 
consciousness but 
not external behaviour. For example, it may cause spectrum reversal.
 
 In the case of Chalmers silicon replacement it assumes that everything
 that was being done by the neuron is duplicated. What the silicon model
 assumes is a) that we know everything there is to know and b) that silicon
 replacement/modelling/representation is capable of delivering everything,
 even if we did 'know  everything' and put it in the model. Bad, bad,
 arrogant assumptions.

Well, it might just not work, and you end up with an idiot who slobbers and 
stares into 
space. Or you might end up with someone who can do calculations really well but 
displays 
no emotions. But it's a thought experiment: suppose you use whatever advanced 
technology 
it takes to create a being with *exactly* the same behaviours as a biological 
human. Can 
you be sure that this being would be conscious? Can you be sure that this being 
would be 
conscious in the same way you and I are conscious?
 
 This is the endless loop that comes about when you make two contradictory
 assumptions without being able to know that you are, explore the
 consequences and decide you are right/wrong, when the whole scenario is
 actually meaningless because the premises are flawed. You can be very
 right/wrong in terms of the discussion (philosophy) but say absolutely
 nothing useful about anything in the real world (science).

I agree that the idea of a zombie identical twin (i.e. same brain, same 
behaviour but no PC) 
is philosophically dubious, but I think it is theoretically possible to have a 
robot twin which is 
if not unconscious at least differently conscious.

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---




RE: UDA revisited

2006-11-28 Thread Stathis Papaioannou


Quentin Anciaux writes:

 On Tuesday 28 November 2006 00:00, Stathis Papaioannou wrote:
  Quentin Anciaux writes:
    But the point is to assume this nonsense to draw a conclusion, to see
    where it leads. Why imagine a possible zombie which is functionally
    identical if there weren't any dualistic view in the first place! Only
    in a dualistic framework is it possible to imagine a functional
    equivalent to a human yet lacking consciousness; the other way is that
    functional equivalence *requires* consciousness (you can't have
    functional equivalence without consciousness).
 
  I think it is logically possible to have functional equivalence but
   structural difference with consequent difference in conscious state even
  though external behaviour is the same.
 
  Stathis Papaioannou
 
 Do you mean you can have an exact human external behavior replica without 
 consciousness? Or with a different consciousness (than a human)?
 
 If the 1st case: if you can't find any difference between a real human and 
 the replica lacking consciousness, how could you tell the replica is lacking 
 consciousness (or that the human has consciousness)?
 
 In the second case, I don't understand what a different consciousness 
 could be; could you elaborate?

See my answer to Colin on this point. I assume that you are conscious in much 
the same 
way I am because (roughly speaking) you have a similar brain to mine *and* your 
behaviour is similar to mine. If only one of us were conscious we would have to 
invoke 
magic to explain it: God has decided to give only one of us an immaterial, 
undetectable 
soul which does not make any difference to behaviour. 

On the other hand, if it turns out that you are an alien robot designed to fool 
us into 
thinking you are human, based on technology utterly different to that in a 
biological brain, 
it is not unreasonable to wonder whether you are conscious at all, or if you 
are whether 
your conscious experience is anything like a human's.

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---




Re: UDA revisited and then some

2006-11-28 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Colin Hales writes:
 
 I think it is logically possible to have functional equivalence but
 structural
  difference with consequent difference in conscious state even though
 external behaviour is the same.

 Stathis Papaioannou
 Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
 Replace every neuron with a silicon functional equivalent and (b) hold
 the external behaviour identical.
 
 I would guess that such a 1-for-1 replacement brain would in fact have the 
 same 
  PC as the biological original, although this is not a logical certainty. But 
 what I was 
 thinking of was the equivalent of copying the look and feel of a piece of 
 software 
 without having access to the source code. Computers may one day be able to 
 copy 
 the look and feel of a human not by directly modelling neurons but by 
 completely 
 different mechanisms. Even if such computers were conscious, there seems no 
 good 
 reason to assume that their experiences would be similar to those of a 
 similarly 
 behaving human. 
  
 If the 'structural difference' (accounting for consciousness) has a
 critical role in function then the assumption of identical external
 behaviour is logically flawed. This is the 'philosophical zombie'. Holding
  the behaviour to be the same is a meaningless impossibility in this
 circumstance.
 
 We can assume that the structural difference makes a difference to 
 consciousness but 
 not external behaviour. For example, it may cause spectrum reversal.
  
 In the case of Chalmers silicon replacement it assumes that everything
 that was being done by the neuron is duplicated. What the silicon model
 assumes is a) that we know everything there is to know and b) that silicon
 replacement/modelling/representation is capable of delivering everything,
 even if we did 'know  everything' and put it in the model. Bad, bad,
 arrogant assumptions.
 
 Well, it might just not work, and you end up with an idiot who slobbers and 
 stares into 
 space. Or you might end up with someone who can do calculations really well 
 but displays 
 no emotions. But it's a thought experiment: suppose you use whatever advanced 
 technology 
 it takes to create a being with *exactly* the same behaviours as a biological 
 human. Can 
 you be sure that this being would be conscious? Can you be sure that this 
 being would be 
 conscious in the same way you and I are conscious?

Consciousness would be supported by the behavioral evidence.  If it were 
functionally similar at a low level I don't see what evidence there would be 
against it. So the best conclusion would be that the being was conscious.

If we knew a lot about the function of the human brain and we created this 
behaviorally identical being but with different functional structure, then we 
would have some evidence against the being having human type consciousness - 
but I don't think we'd be able to assert that it was not conscious in any way.

Brent Meeker





Re: UDA revisited

2006-11-27 Thread 1Z


Colin Geoffrey Hales wrote:
 
 
  Colin Hales writes:
 
  The very fact that the laws of physics, derived and validated using
  phenomenality, cannot predict or explain how appearances are generated
  is
  proof that the appearance generator is made of something else and that
  something else else is the reality involved, which is NOT
  appearances, but independent of them.
 
  I know that will sound weird...
 
  
   The only science you can do is I hypothesise that when I activate
  this
   nerve, that sense nerve and this one do this
  
   And I call regularities in my perceptions the external world, which
   becomes so
   familiar to me that I forget it is a hypothesis.
 
  Except that in time, as people realise what I just said above, the
  hypothesis has some empirical support: if the universe were made of
  appearances, then when we opened up a cranium we'd see them. We don't. We see
  something generating/delivering them - a brain. That difference is the
  proof.
 
  I don't really understand this. We see that chemical reactions
  in the brain generate consciousness, so why not stop at that?
  In Gilbert Ryle's words, the mind is what
  the brain does. It's mysterious, and it's not well
  understood, but it's still just chemistry.

 I have heard this 3 times now!

 1) Marvin Minski... not sure where but people quote it.
 2) Derek Denton, The primordial emotions...
 and now
 3) Gilbert Ryle!

 Who really said it? Not that it matters. OK... back to business

 ask your self:

 If the mind is what the brain does, then what exactly is a coffee cup
 doing?

It's not mind-ing.

 For that question is just as valid and has just as complex an
 answer...

Of course not.

 ...yet we do not ask it. Every object in the universe is like this.
 This is the mother of all anthropomorphisms.

 There is a view of the universe from the perspective of being a coffee cup

No there isn't. It has no internal representation of anything else.

This isn't a mysterious qualia issue. Things like digital cameras
and tape recorders demonstrably contain representations.Things like
coffee cups don't.

 and it is being equivalently created by whatever is the difference between
 it and a brain. And you are not entitled to say 'Nothing', all you can say
 is that there's no brain material, so it isn't like a brain. You can make
 no assertion as to the actual experience because describing a brain does
  NOT explain the causality of it... Hot cup? Cold cup? Full? Empty? All the
  same? Not the same? None of these questions are helped by the 'what the
  brain does' bandaid excuse for proper science. Glaring missing physics.

 Zombie room has been deployed... OK dogs... do your worst! Attack!
 
 :-)
 
 Colin





Re: UDA revisited

2006-11-27 Thread Bruno Marchal


On 26 Nov 2006, at 07:09, Colin Geoffrey Hales wrote:



 I know your work is mathematics, not philosophy. Thank goodness! I can 
 see
 how your formalism can tell you 'about' a universe. I can see how
 inspection of the mathematics tells a story about the view from within 
 and
 without. Hypostases and all that. I can see how the whole picture is
 constructed of platonic objects interacting according to their innate
 rules.

 It is the term 'empirically falsifiable' I have trouble with. For that 
 to
 have any meaning at all it must happen in our universe, not the 
 universe
 of your formalism.


Let us say that my formalism (actually the Universal Machine talk) is 
given by the 8 hypostases. Then, just recall that the UDA+Movie Graph 
explains why the appearance of the physical world is described by some 
of those hypostases (person point of view, pov).





 A belief in the falsifiability of a formalism that does
 not map to anything we can find in our universe is problematic.


The intelligible matter hypo *must* map the observations. If not, the 
comp formalism remains coherent, but would be falsified.




 In the platonic realm of your formalism arithmetical propositions of 
 the
 form  (A v ~A) happen to be identical to our empirical laws:

 It is an unconditional truth about the natural world that either (A is
 true about the natural world) or (A is not true about the natural 
 world)


Physics does not appear at that level.






 (we do the science dance by making sure A is good enough so that the 
 NOT
 clause never happens and voila, A is an empirical 'fact')

 Call me thick but I don't understand how this correspondence between
 platonic statements and our empirical method makes comp falsifiable in 
 our
 universe. You need to map the platonic formalism to that which drives 
 our
 reality and then say something useful we can test. You need to make a 
 claim
 that is critically dependent on 'comp' being true.


If comp is true the propositional logic of the certain observable obeys 
the logic of the intelligible matter hypostases, which are perfectly 
well defined and comparable to the empirical quantum logic, for 
example.
We can already prove that comp makes that logic non boolean.



 I would suggest that claim be about the existence or otherwise of
 phenomenal consciousness PC would be the best bet.


You have not yet convinced me that PC can be tested.




 There is another more subtle psychological issue in that a belief that
 comp is empirically testable in principle does not entail that acting 
 as
 if it were true is valid.


You are right. The contrary is true. We should act as if we were 
doubting that comp is true. (Actually comp is special in that regard: 
if true we have to doubt it).
Note the funny situation: in 2445 Mister Alfred accepts an artificial 
digital brain (betting on comp). He lives happily (apparently) until 
2620, when at last comp is tested in the sense I describe above, and 
is refuted (say).
Should we conclude that M. Alfred, from 2445, is a zombie?




 Sometimes I think that is what is going on
 around here.

 Do you have any suggested areas where comp might be tested and have any
 ideas what the test might entail?


Naive comp predicts that every cup of coffee will turn into white rabbits 
or weirder in less than two seconds. Let us look at this cup of coffee 
right now. After more than two seconds I see it has not changed into a 
white rabbit. Naive comp has been refuted.
Now, computer science gives precise reasons to expect that the comp 
predictions are more difficult to make, but UDA shows (or should show) 
that the whole of physics is derivable from comp (that is, all the 
empirical physical laws---the rest is geography). So testing comp 
requires doing two things:
- deriving physics from arithmetic in the way comp predicts this must be 
done (that is from the pov hypostases)
- comparing with observations.

The interest of comp is that it explains 8 povs, but only some of them 
are empirically testable; the others appear to be indirectly 
testable because all the povs are related.

To sum up quickly: Comp entails the following mystical proposition: 
the whole truth is in your head. But I have shown that this entails 
that the whole truth is in the head of any universal machine. I 
explain how to look inside the head of a universal machine and how to 
distinguish (in that head) the physical truth (quanta) from other sorts 
of truth (like qualia). Then you can test comp by comparing the 
structure of the quanta you will find in the universal machine head, 
and those you see around you in the physical universe.

It is not at all different from the usual work by physicists, even 
though it makes machine's physics(*) a branch of number theory. We can compare 
that machine's physics with usual empirical physics and test that 
machine's physics.

(*) machine's physics really means here the physics extracted by an 
ideally self-observing machine.



 ... I 

Re: UDA revisited

2006-11-27 Thread Colin Geoffrey Hales


 If the mind is what the brain does, then what exactly is a coffee cup
 doing?

 It's not mind-ing.

 For that question is just as valid and has just as complex an
 answer...

 Of course not.


  ...yet we do not ask it. Every object in the universe is like this.
 This is the mother of all anthropomorphisms.

 There is a view of the universe from the perspective of being a coffee
 cup

 No there isn't. It has no internal representation of anything else.


 This isn't a mysterious qualia issue. Things like digital cameras
 and tape recorders demonstrably contain representations.Things like
 coffee cups don't.


 and it is being equivalently created by whatever is
 the difference between it and a brain. And you are
 not entitled to say 'Nothing', all you can say
 is that there's no brain material, so it isn't
 like a brain. You can make no assertion as to the
 actual experience because describing a brain does
  NOT explain the causality of it... Hot cup? Cold
 cup? Full? Empty? All the same? Not the same? None
  of these questions are helped by the 'what the
  brain does' bandaid excuse for proper science.
 Glaring missing physics.


What you need to do is deliver a law of nature that says representation
makes qualia. Some physical law. I have at least found candidate real
physics to hypothesise and it indicates that representation is NOT causal
of anything other than representation.

metaphorically:

You have no paint on your paint brush. You are telling me you don't need
any. You assume the act of painting art, and ONLY the act of painting art
makes paint. Randomly spraying paint everywhere is painting, just not
necessarily art. That act still uses paint and is visible.

A brain has a story to tell that is more like art.
A coffee cup has a story to tell that definitely isn't art. But it's not
necessarily nothing.

Your assumptions in respect of representation are far far more
unjustified, magical, mystical and baseless than any of my propositions in
respect of physics. I have a hypothesis for a physical process for 'paint'
that exists in brain material. The suggested physics involved means I
could make a statement 'it's not like anything' to be a coffee cup because
that physics is not present in the necessary form. That explanation has
NOTHING to do with representation.

You have nothing but assumptions that paint is magic.

All your comments are addressed in the last paragraph of my original
(above), negating all your claims... which you then didn't respond to or
acknowledge. You have obviously responded without reading everything
first. Would you please stop wasting my time like this. Endless gainsay is
not an argument (now say... yes it is!).

Colin Hales






RE: UDA revisited

2006-11-27 Thread Stathis Papaioannou


Quentin Anciaux writes:

 But the point is to assume this nonsense to draw a conclusion, to see 
 where it leads. Why imagine a possible zombie which is functionally 
 identical if there weren't any dualistic view in the first place! Only in 
 a dualistic framework is it possible to imagine a functional equivalent to 
 a human yet lacking consciousness; the other way is that functional 
 equivalence *requires* consciousness (you can't have functional 
 equivalence without consciousness).

I think it is logically possible to have functional equivalence but structural 
difference with consequent difference in conscious state even though 
external behaviour is the same. 

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---




Re: UDA revisited

2006-11-27 Thread Quentin Anciaux

Hi, 
On Tuesday 28 November 2006 00:00, Stathis Papaioannou wrote:
 Quentin Anciaux writes:
  But the point is to assume this nonsense to draw a conclusion, to see
  where it leads. Why imagine a possible zombie which is functionally
  identical if there weren't any dualistic view in the first place! Only
  in a dualistic framework is it possible to imagine something functionally
  equivalent to a human yet lacking consciousness; the alternative is that
  functional equivalence *requires* consciousness (you can't have
  functional equivalence without consciousness).

 I think it is logically possible to have functional equivalence but
 structural difference, with consequently a difference in conscious state,
 even though external behaviour is the same.

 Stathis Papaioannou

Do you mean you can have an exact replica of human external behaviour
without consciousness? Or with a different consciousness (than a human's)?

In the first case, if you can't find any difference between a real human and
the replica lacking consciousness, how could you tell that the replica lacks
consciousness (or that the human has it)?

In the second case, I don't understand what a different consciousness could
be; could you elaborate?

Quentin Anciaux





Re: UDA revisited

2006-11-26 Thread Brent Meeker

1Z wrote:
 
 Colin Geoffrey Hales wrote:
 Stathis,
...
 Whatever 'reality' is, it is regular/persistent,
 repeatable/stable enough to do science on it via
 our phenomenality and come
 up with laws that seem to characterise how it will appear
 to us in our phenomenality.
 You could say: my perceptions are
 regular/persistent/repeatable/stable enough to assume an
 external reality generating them and to do science on. And if
 a machine's central processor's perceptions are similarly
 regular/persistent/, repeatable/stable, it could also do
 science on them. The point is, neither I nor
 the machine has any magical knowledge of an external world.
 All we have is regularities in perceptions, which we assume
 to be originating from the external world because that's
 a good model which stands up no matter what we throw
 at it.
 Oops. Maybe I spoke too soon! OK.
 Consider... ...stable enough to assume an external reality...

 You are a zombie. What is it about sensory data that suggests an external
 world?
 
 What is it about sensory data that suggests an external world to
 human?
 
  Well, of course, we have a phenomenal view. But there is no information
  in the phenomenal display that was not first in the pre-phenomenal
  sensory data.

No, I think Colin has a point there.  Your phenomenal view adds a lot of
assumptions to the sensory data in constructing an internal model of what you
see.  These assumptions are hard-wired by evolution.  It is situations in which
these assumptions are false that produce optical illusions.

Brent Meeker




Re: UDA revisited

2006-11-26 Thread Brent Meeker

1Z wrote:
 
 Brent Meeker wrote:
 
 No, I think Colin has point there.  Your phenomenal view adds a lot of 
 assumptions to the sensory data in constructing an internal model of what 
 you see.  These assumptions are hard-wired by evolution.  It is situations 
 in which these assumptions are false that produce optical illusions.
 
 It depends on what you mean by information. Our hardwiring allows us to
 make
 better-than-chance guesses about what is really out there. But it is
 not
 information *about* what is really out there -- it doesn't come from
 the external world in the way sensory data does.

Not in the way that sensory data does, but it comes from the external world via 
evolution.  I'd say it's information about what's out there just as much as the 
sensory data is.  Whether it's about what's *really* out there invites 
speculation about what's *really real*.  I'd agree that it provides our best 
guess at what's real.

Brent Meeker





Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
  Scientific behaviour demanded of the zombie condition is a clearly
  identifiable behavioural benchmark where we can definitely claim that
  phenomenality is necessary...see below...
 
  It is all too easy to consider scientific behaviour without
  phenomenality.
  Scientist looks at test-tube -- scientist makes note in lab
  journal...

 'Looks' with what?

Eyes, etc.

 Scientist has no vision system.

A zombie scientist has a complete visual system except for whatever
it is that causes phenomenality. Since we don't
know what it is, we can imagine a zombie scientist as having
a complete neural system for processing vision.

 There are eyes and
 optic chiasm, LGN and all that. But no visual scene.


 The scientist is
 blind.

The zombie scientist is a functional duplicate. The zombie scientist
will behave as though it sees. It will also behave the same in novel
situations -- or it would not be  a functional duplicate.


  I spent tens of thousands of hours designing, building, benchtesting and
  commissioning zombies. On the benchtop I have pretended to be their
  environment and they had no 'awareness' they weren't in their real
  environment. It's what makes bench testing possible. The universe of the
  zombies was the universe of my programming. The zombies could not tell
  if
  they were in the factory or on the benchtop.
 
  According to solipsists, humans can't either. You seem
  to think PC somehow tells you reality is really real,
  but you haven't shown it. Counterargument: we have
  PC during dreaming, but dreams aren't real.

 I say nothing about the 'really realness' of 'reality'. It's irrelevant.
 Couldn't care less. _Whatever it is_, it's relentlessly consistent to all
 of us in regular ways sufficient to characterise it scientifically.
 Our
 visual phenomenal scene depicts it well enough to do science.

So there are no blind scientists?

Without that
 visual depiction we can't do science.

Unless we find another way.

But a functional duplicate is a functional duplicate.

 Yes we have internal imagery. Indeed it is an example supporting what I am
 saying! The scenes and the sensing are 2 separate things. You can have one
 without the other. You can hallucinate - internal imagery overrides that
 of the sensing stimulus. Yes! That is the point. It is a representation
 and we cannot do science without it.

Unless we find another way. Maybe the zombies could find one.

  None of it says anything about WHY the input did what it did. The
  causality outside the zombie is MISSING from these signals.
 
  The causality outside the human is missing from the signals.
  A photon is a photon, it doesn't come with a biography.

 Yep. That's the point. How does the brain make sense of it? By making use
 of some property of the natural world which makes a phenomenal scene.

The process by which we infer the real-world objects that
caused our sense-data can be treated in information
processing terms, for all that it is presented to us
phenomenally. You haven't demonstrated that
unplugging phenomenality stymies the whole process.

   They have no
  intrinsic sensation to them either. The only useful information is that
  the body knows implicitly where they came from... which still is not
  enough because:
 
  Try swapping the touch nerves for 2 fingers. You 'touch' with one and
  feel
  the touch happen on the other. The touch sensation is created as
  phenomenal consciousness in the brain using the measurement, not the
  signal measurement itself.
 
  The brain attaches meaning to signals according to the channel they
  come in on, hence phantom limb pain and so on. We still
  don't need PC to explain that.

 Please see the recent post to Brent re pain and nociception. Pain IS
 phenomenal consciousness (a phenomenal scene).

Pain is presented phenomenally, but neurologists can
identify pain signals without being able to peek into
other people's qualia.

 How do you think the phantom
 limb gets there?  It's a brain/phenomenal representation.

Yes.

 It IS phenomenal
 consciousness.

Not all representations are phenomenal.

 Of a limb that isn't actually there.



  Now think about the touch... the same sensation of touch could have been
  generated by a feather or a cloth or another finger or a passing car.
  That
  context is what phenomenal consciousness provides.
 
  PC doesn't miraculously provide the true context. It can
  be fooled by dreams and hallucination.

 Yes it can misdirect, be wrong, be pathologically constituted. But at
 least we have it. We could not survive without it. We could not do
 science without it.

Unless we find another way. Most people move around using
their legs. But legless people can find other ways of moving.

 It situates us in an external world which we would
 otherwise find completely invisible.

Blindsight, remember.

  And it doesn't have
  access to information that the physical brain doesn't have access
  to.

 The physical brain generates it! The 

Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales


 Except that in time, as people realise what I just said above, the
 hypothesis has some empirical support: If the universe were made of
 appearances, when we opened up a cranium we'd see them. We don't.

 Or appearances don't appear to be appearances to a third party.


Precisely. Now ask yourself...
What kind of universe could make that possible?
It is not the kind of universe depicted by laws created using appearances.

  I do need some rules or knowledge to begin with if I
  am to get anywhere with interpreting sense data.

 You do NOT interpret sense data! In conscious activity
 you interpret the phenomenal scene generated using the
 sense data.

 But that is itself an interpretation, for reasons you yourself have
 spelt out. Sensory pulse-trains don't have any meaning in themselves.

An interpretation that is hard-coded into your biology a-priori. You do
not manufacture it from your own knowledge (unless you are hallucinating!);
your knowledge is a-posteriori.


  Habituated/unconscious
 reflex behaviour with fixed rules uses sense data directly.

 Does that make it impossible to have
 adaptive responses to sense data?

Not at all. That adaptation is based on what rule, acquired how? Adaptation
is another rule assuming the meaning of all novelty. Where does that come
from? You're stuck in a loop assuming your knowledge is in the zombie.
Stop it!



 Think about driving home on a well travelled route. You don't even know
 how you got home. Yet if something unusual happened on the drive - ZAP -
 phenomenality kicks in and phenomenal consciousness handles the
 novelty.

 Is that your only evidence for saying that it is impossible
 to cope with novelty without phenomenality?

I am claiming that the only way to find out the laws of nature is through
the capacity to experience the novelty in the natural world OUTSIDE the
scientist, not the novelty in the sensory data.

This is about science, not any old behaviour. The fact is that most
novelty can be handled by any old survivable rule. That rule is just a
behaviour rule, not a law of the natural world. The scientist needs to be
able to act 'as-if' a rule was operating OUTSIDE themselves in order that
testing happen.


  With living organisms, evolution provides this
  knowledge

 Evolution provided
 a) a learning tool (brain) that knows how to learn from phenomenal
consciousness, which is an adaptive presentation of real
external-world a-priori knowledge.
 b) Certain simple reflex behaviours.

  while with machines the designers provide it.

 Machine providers do not provide (a)


 They only provide (b), which includes any adaptivity rules, which are
 just
 more rules.

 How do you know that (a) isn't just rules? What's the difference?

Yes rules in our DNA give us the capacity to create the scenes in a
repeatable way. Those are natural rules. (Not made BY us). The physics
that actually does it in response to the sensory data is a natural rule.
The physics that makes it an experience is another natural rule. All these
are natural rules.

You are assuming that rules are experienced, regardless of their form. You
are basing this assumption on your own belief (another assumption) that
we know everything there is to know about physics. You act in denial of
something you can prove to yourself exists with simple experiments.

You should be proving to me why we don't need phenomenal consciousness,
not the other way around.



 You seem to think there is an ontological gulf between (a) and (b). But
 that seems arbitrary.

Only under the assumptions mentioned above. These are assumptions I do not
make.


 Amazing but true. Trial and error. Hypothesis/Test in a brutal live-or-die
 laboratory called The Earth. Notice that the process
 selected for phenomenal consciousness early on

 But that slides past the point. The development of phenomenal
 consciousness was an adaptation that occurred without PC.

 Hence, PC is not necessary for all adaptation.

I am not claiming that. I am claiming it is necessary for scientific
behaviour. It can be optional in an artifact or animal. The constraints of
that situation merely need to be consistent with survival. The fact that
most animals have it is proof of its efficacy as a knowledge source, not a
disproof of my claim.

Read the rest of my paragraph before you blurt.


 which I predict will eventually be
 proven to exist in nearly all animal cellular life (vertebrate and
 invertebrate and even single celled organisms) to some extent. Maybe
 even
 in some plant life.

 'Technology' is a loaded word... I suppose I mean 'human-made'
 technology.
 Notice that chairs and digital watches did not evolve independently of
 humans. Nor did science. Novel technology could be re-termed 'non-DNA
 based' technology, I suppose. A bird flies. So do planes. One is DNA
 based.
 The other is not DNA based, but created by a DNA-based creature called the
 human. Eventually conscious machines will create novel technology too -
 including new 

Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales


 Absolutely! But the humans have phenomenal consciousness in lieu of ESP,
 which the zombies do not.

 PC doesn't magically solve the problem. It just involves a more
 sophisticated form of guesswork. It can be fooled.

We've been here before and I'll say it again if I have to.

Yes! It can be fooled. Yes! It can be wrong. Yes! It can be pathologically
affected. Nevertheless without it we are unaware of anything, and we could
not do science on novelty in the world outside. The act of doing science
proves we have phenomenal consciousness, and its third-person verification
proves that whatever reality is, it's the same for us all.


 To bench test a human I could not merely
 replicate sensory feeds. I'd have to replicate the factory!

 As in brain-in-vat scenarios. Do you have a way of showing
 that BIV would be able to detect its status?

I think the BIV is another oxymoron like the philosophical zombie. It
assumes that the distal processes originating the causality behind the
impinging sense data (from the external/distal world) are not involved at
all in the internal scene generation. An assumption I do not make.

I would predict that the scenes related to the 'phantom' body might work,
because there are (presumably) the original internal (brain-based) body
maps that can substitute for the lack of the actual body. But the scenes
related to the 'phantom external world' I would predict wouldn't work. So
I see the basic assumption of the BIV as flawed. It assumes that all
there is to the scene generation is what there is at the boundary where
the sense measurement occurs.

Virtual reality works, I think, because in the end actual photons fly at
you from outside. Actual phonons impinge on your ears, and so forth.


 The human is
 connected to the external world (as mysterious as that may be and it's
 not
 ESP!). The zombie isn't, so faking it is easy.

 No. They both have exactly the same causal connections. The zombie's
 lack of phenomenality is the *only* difference. By definition.


 And every nerve that a human has is a sensory feed. You just have to
 feed
 data into all of them to fool PC. As in a BIV scenario.

See above


 Phenomenal scenes can combine to produce masterful, amazing
 discriminations. But how does the machine, without being
 told already by a
 human, know one from the other?

 How do humans know without being told by God?

You are once again assuming that existing scientific knowledge is 100%
equipped. Then, when it fails to have anything to say about phenomenality,
you invoke god, the Berkeleyan informant.

How about a new strategy: we don't actually know everything. The universe
seems to quite naturally deliver phenomenality. This is your problem, not
its problem.


 Having done that how can it combine and
 contextualise that joint knowledge? You have to tell it how to learn.
 Again a-priori knowledge ...

 Where did we get our apriori knowledge from? If it wasn't
 a gift from God, it must have been a natural process.

Yes. Now how might that be? What sort of universe could do that?
This is where I've been. Go explore.


 (And what has this to do with zombies? Zombies
 lack phenomenality, not apriori knowledge).

They lack the a-priori knowledge that is delivered in the form of
phenomenality, from which all other knowledge is derived. The a-priori
knowledge (say in the baby zombie) is all pre-programmed reflex -
unconscious internal processes all about the self - not the external
world... except for bawling... another reflex.

All of which is irrelevant to my main contention which is about science
and exquisite novelty.


 You're talking about cross-correlating sensations, not sensory
 measurement. The human has an extra bit of physics in the
 generation of the
 phenomenal scenes which allows such contextualisations.

 Why does it need new physics? Is that something you
 are assuming or something you are proving?

I am conclusively proving that science, scientists and novel technology
are literally scientific proof that phenomenality is a real, natural
process in need of explanation. The whole world admits to the 'hard
problem'. For 2500 years!

The new physics is something I am proving is necessarily there to be
found. Not what it is, but merely that a new way of thinking is needed.
It is the permission we need to scientifically explore the underlying
reality of the universe.

That is what this is saying. Phenomenality is evidence of something causal
of it. That causality is NOT that depicted by the appearances it delivers
or we'd already predict it!

Our total inability to predict it, and our total dependence on it for
scientific evidence, is proof that allowing yourself to explore universes
causal of phenomenality that are also causal of atoms and scientists is the
new physics rule-set to find - and it is NOT the physics rule-set
delivered by using the appearances thus delivered. The two are intimately
related and equally valid, just not about the same point of view.

Colin Hales

Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales



 Colin Geoffrey Hales wrote:
  Scientific behaviour demanded of the zombie condition is a clearly
  identifiable behavioural benchmark where we can definitely claim that
  phenomenality is necessary...see below...
 
  It is all too easy to consider scientific behaviour without
  phenomenality.
  Scientist looks at test-tube -- scientist makes note in lab
  journal...

 'Looks' with what?

 Eyes, etc.

 Scientist has no vision system.

 A zombie scientist has a complete visual system except for whatever
 it is that causes phenomenality. Since we don't
 know what it is, we can imagine a zombie scientist as having
 a complete neural system for processing vision.

 There are eyes and
 optic chiasm, LGN and all that. But no visual scene.


 The scientist is
 blind.

 The zombie scientist is a functional duplicate. The zombie scientist
 will behave as though it sees. It will also behave the same in novel
 situations -- or it would not be  a functional duplicate.

Oh god, here we go again. I have to comply with the strictures of a
philosophical zombie or I'm not saying anything. I wish I'd never
mentioned the damned word.









Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales


 Colin Geoffrey Hales wrote:
 SNIP
 No confusion at all. The zombie is behaving. 'Wide awake'
 in the sense that it is fully functional.
 Well, adaptive behaviour -- dealing with novelty --- is functioning.

 Yes - but I'm not talking about merely functioning. I am talking about
 the
 specialised function called scientific behaviour in respect of the
 natural
 world outside. The adaptive behaviour you speak of is adaptivity in
 respect of adherence or otherwise to an internal rule set, not
 adaptation
 in respect of the natural world outside.

 BTW 'adaptive' means change, and change means novelty has occurred. If you
 have no phenomenality you must already have a rule as to how to adapt
 to
 all change - ergo you know everything already.

 So you deny that life has adapted through Darwinian evolution.

 Brent Meeker


Adaptation in KNOWLEDGE?
Adaptation in reflex behaviour?
Adaptation in the creature's hardware?
Adaptation in the capacity to learn?

All different.
Dead end. No more.










Re: UDA revisited

2006-11-26 Thread Brent Meeker

Colin Geoffrey Hales wrote:
 Colin Geoffrey Hales wrote:
 But you have no way to know whether phenomenal scenes are created by a
 particular computer/robot/program or not, because it's just a mystery
 property defined as whatever creates phenomenal scenes.  You're going
 around in circles.  At some point you need to anchor your theory to an
 operational definition.
 OK. There is a proven mystery called the hard problem. Documented to
 death
 and beyond.
 It is discussed in documents - but it is not documented and it is not
 proven.
 
 It's enshrined in encyclopedias! Yes, it's a problem. We don't know. It
 was #2 in the big questions in Science magazine last year.
 
 It is predicted (by Bruno, to take a nearby example) that a
 physical system that replicates the functions of a human (or dog) brain at
 the level of neural activity, and receives the same inputs, will implement
 phenomenal consciousness.
 
 Then the proposition should be able to say exactly where, why and how. It
 can't, it hasn't.

Where? In the brain.  Science doesn't usually answer why questions except
in the general sense of evolutionary adaptation.  How? We don't know exactly.
But having an unanswered question doesn't constitute a deep mystery that
demands new physics.

 
 is that the physics (rule set) of appearances and the physics (rule
 set) of the universe capable of generating appearances are not the same
 rule set! That the universe is NOT made of its appearance, it's made of
 something _with_ an appearance that is capable of making an appearance
 generator.
 It is a commonplace that the ontology of physics may be mistaken (that's
 how science differs from religion) and hence one can never be sure that
 his theory refers to what's really real - but that's the best bet.
 
 Yes, but in order that you be mistaken you have to be aware you have made a
 mistake,

Do you ever read what you write?  That sounds like something George W. Bush
believes.

which means admitting you have missed something. The existence of
 an apparently unsolvable problem... isn't that a case for that kind of
 behaviour? (see below to see what science doesn't know it doesn't know
 about itself)
 
 That's it. Half the laws of physics are going neglected merely because
 we
 won't accept phenomenal consciousness ITSELF as evidence of anything.
 We accept it as evidence of extremely complex neural activity - can you
 demonstrate it is not?
 
 You have missed the point again.
 
 a) We demand the CONTENTS OF phenomenal consciousness (that which is
 perceived) as all scientific evidence,
 
 but
 
 b) we do NOT accept phenomenal consciousness ITSELF (the perceiving) as
 scientific evidence of anything.

Sure we do.  We accept it as evidence of our evolutionary adaptation to 
survival on Earth.

 
 Evidence (a) is impotent to explain (b). 

That's your assertion - but repeating it over an over doesn't add anything to 
its support.

Maybe some new physics is implied by consciousness (as in Penrose's suggestion)
or a complete revolution (as in Bruno's UD), but it is far from proven.  I
don't see even a suggestion from you - just repeated complaints that we're not
recognizing the need for some new element and claims that you've proven we need
one.

Brent Meeker





Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales


 On Sunday 26 November 2006 22:54, Colin Geoffrey Hales wrote:
 SNIP
 What point is there in bothering with it. The philosophical zombie is
 ASSUMED to be equivalent! This is failure before you even start! It's
 wrong, and it's proven wrong, because there is a conclusively logically
 and
 empirically provable function that the zombie cannot possibly do without
 phenomenality: SCIENCE. The philosophical zombie would have to know
 everything a-priori, which makes science meaningless. There is no
 novelty
 to a philosophical zombie. It would have to anticipate all forms of
 randomness or chaotic behaviour. NUTS.

 But that's exactly what all the argument is about!! Either identical
 functional behaviour entails consciousness, or some magical
 property is needed, in addition to identical functional behaviour, to
 entail consciousness.

 This is failure before you even start!

 But the point is to assume this nonsense to draw a conclusion, to see
 where it leads. Why imagine a possible zombie which is functionally
 identical if there weren't any dualistic view in the first place! Only in
 a dualistic framework is it possible to imagine something functionally
 equivalent
 to
 a human yet lacking consciousness; the alternative is that functional
 equivalence *requires* consciousness (you can't have functional
 equivalence without consciousness).

 This is failure before you even start!

 That's what you're doing... you haven't proved that a zombie can't do
 science, because the zombie question is not about what they can do or not;
 it is whether acting as we act (the human way) necessarily entails having
 consciousness or not (meaning that there would exist an extra property
 beyond behaviour, an extra thing undetectable from
 seeing/living/speaking/...
 with the zombie, that gives rise to consciousness).

 You haven't proved that a zombie can't do science, because you assert it at
 the start of the argument. The argument should be whether or not it is
 possible to have a *complete* *functional* (human) replica yet lacking
 consciousness.

 Quentin


Scientist_A does science.

Scientist_A closes his eyes and finds the ability to do science radically
altered.

Continue the process and you eliminate all scientific behaviour.

The failure of scientific behaviour correlates perfectly with the lack of
phenomenal consciousness.

Empirical fact:

Human scientists have phenomenal consciousness

also
Phenomenal consciousness is the source of all our scientific evidence

ergo

Phenomenal consciousness exists and is sufficient and necessary for human
scientific behaviour

No need to mention zombies, sorry I ever did.
No more times round the loop, thanks.

Colin Hales







Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
 
  You are a zombie. What is it about sensory data that suggests an
  external world?
 
  What is it about sensory data that suggests an external world to
  human?

 Nothing. That's the point. That's why we incorporate the usage of natural
 world properties to contextualise it in the external world.

Huh???

 Called
 phenomenal consciousness... that makes us not a zombie.

That's not what phenomenal consciousness means...or usually
means...

 
  Well, of course, we have a phenomenal view. But there is no information
  in the phenomenal display that was not first in the pre-phenomenal
  sensory data.

 Yes there is. Mountains of it. It's just that the mechanism and the need
 for it is not obvious to you.

Things that don't exist tend not to be obvious.

 Some aspects of the external world must be
 recruited to some extent in the production of the visual field, for
 example. None of the real spatial relative location qualities, for
 example, are inherent in the photons hitting the retina. Same with the
 spatial nature of a sound field. That data is added through the mechanisms
 for generation of phenomenality.

It's not added. It's already there. It needs to be made explicit.

  The science you can do is the science of zombie sense data, not an
  external world.
 
  What does of mean in that sentence? Human science
  is based on human phenomenality which is based on pre-phenomenal
  sense data, and contains nothing beyond it informationally.

 No, science is NOT done on pre-phenomenal sense data. It is done on the
 phenomenal scene.

Which in turn is derived from sense data. If A is informative about B
and B is informative about C, A is informative about C.

 This is physiological fact. Close your eyes and see how
 much science you can do.

That shuts off sense-data, not just phenomenality.

 I don't seem to be getting this obvious, simple thing past the pre-judgements.



 
  Humans unconsciously make guesses about the causal origins
  of their sense-data in order to construct the phenomenal
  view, which is then subjected to further educated guesswork
  as part of the scientific process (which may contradict the
  original guesswork, as in the detection of illusions)

 No, they unconsciously generate a phenomenal field and then make judgements
 from it. Again, close your eyes and explore what effect it has on your
 judgements. Hard-coded a-priori reflex systems such as those that make the
 hand-eye reflex work in blindsight are not science and exist nowhere else
 except in reflex behaviour.


In humans. That doesn't mean phenomenality is necessary for adaptive
behaviour in other entities.

  Your hypotheses about an external world would be treated
  as wild metaphysics by your zombie friends
 
  Unless they are doing the same thing. why shouldn't
  they be? It is function/behaviour afer all. Zombies
  are suppposed to lack phenomenality, not function.
 

 You are stuck on the philosophical zombie! Ditch it! Not what we are
 talking about. The philosophical zombie is an oxymoron.

If *you're* not talking about Zombies,
why use the word?

  (none of which you cen ever be
  aware of, for they are in this external world..., so there's another
  problem :-) Very tricky stuff, this.
  The only science you can do is "I hypothesise that when I activate this
  nerve, that sense nerve and this one do this". You then publish in
  Nature
  and collect your prize. (Except the external world this assumes is not
  there, from your perspective... life is grim for the zombie.)
 
  Assuming, for some unexplained reasons, that zombies cannot
  hypothesise about an external world without phenomena.

 Again you are projecting your experiences onto the zombie. There is no
 body, no boundary, no NOTHING for the zombie to even conceive of to
 hypothesise about. They are a toaster, a rock.

Then there is no zombie art or zombie work or zombie anything.

Why focus on science?

  We have to admit to this ignorance and accept that we don't know
  something
  fundamental about the universe. BTW this means no magic, no ESP, no
  dualism - just basic physics, an explanatory mechanism that is right in
  front of us that our 'received view' finds invisible.
 
  Errr, yes. Or our brains don't access the external world directly.

 That is your preconception, not mine.

It's not a preconception. There just isn't any evidence of
clairvoyance or ESP.

  Try and imagine the ways in which
 you would have to think to make sense of phenomenality. Here's one:

 That there is no such thing as 'space' or 'things' or 'distance' at all.
 That we are all actually in the same place. You can do this and not
 violate any laws of nature at all, and it makes phenomenality easy -
 predictable in brain material. The fact that it predicts itself, when
 nothing else has... now what could that mean?

I have no idea what you are talking about.

 Colin Hales



Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

 That's it. Half the laws of physics are going neglected merely because
 we
 won't accept phenomenal consciousness ITSELF as evidence of anything.
 We accept it as evidence of extremely complex neural activity - can you
 demonstrate it is not?

 You have missed the point again.

 a) We demand the CONTENTS OF phenomenal consciousness (that which is
 perceived) as all scientific evidence,

 but

 b) we do NOT accept phenomenal consciousness ITSELF (the perceiving) as
 scientific evidence of anything.

 Sure we do.  We accept it as evidence of our evolutionary adaptation to
 survival on Earth.

Evidence of anything CAUSAL OF PHENOMENAL CONSCIOUSNESS. You are quoting
evidence (a) at me.



 Evidence (a) is impotent to explain (b).

 That's your assertion - but repeating it over an over doesn't add anything
 to its support.

It is logically impossible for the apparent causality depicted by objects in
phenomenal scenes to betray anything about what caused the scenes themselves.
This is like concluding that the objects in the image in a mirror caused
the reflecting surface that is the mirror.

This is NOT just assertion.

"Empirical evidence derives no necessity for causal relationships" - Nagel.

Well proven. Accepted. Not mine. All empirical science is like this: there
is no causality in any of it. Phenomenality is CAUSED by something.
Whatever that is also caused all our empirical evidence.


 Maybe some new physics is implied by consciousness (as in Penrose's
 suggestion) or a complete revolution (as in Bruno's UD), but it is far
 from proven.  I don't see even a suggestion from you - just repeated
 complaints that we're not recognizing the need for some new element and
 claims that you've proven we need one.

 Brent Meeker


OK. Well I'll just get on with making my chips then. I have been exploring
the physics in question for some time now and it pointed me at exactly the
right place in brain material. I am just trying to get people to make the
first steps I did.

It involves accepting that you don't know everything, and that exactly what
you don't know is why our universe produces phenomenality. There is an
anomaly in our evidence system which is an indicator of how to change.
That anomaly means that investigating underlying realities consistent with
the causal production of phenomenal consciousness is viable science.

The thing is you have to actually do it to get anywhere. Killing your
darlings is not easy.

Colin Hales








Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
 
  On Sunday 26 November 2006 22:54, Colin Geoffrey Hales wrote:
  SNIP
  What point is there in bothering with it. The philosophical zombie is
  ASSUMED to be equivalent! This is failure before you even start! It's
  wrong, and it's proven wrong, because there is a conclusively logically
  and
  empirically provable function that the zombie cannot possibly do without
  phenomenality: SCIENCE. The philosophical zombie would have to know
  everything a-priori, which makes science meaningless. There is no
  novelty
  to a philosophical zombie. It would have to anticipate all forms of
  randomness or chaotic behaviour. NUTS.
 
  But that's exactly what all the argument is about!! Either identical
  functional behaviour entails consciousness, or some magical
  property is needed, in addition to identical functional behaviour, to
  entail consciousness.
 
  This is failure before you even start!
 
  But the point is to assume this nonsense to draw a conclusion, to see
  where it leads. Why imagine a possible zombie which is functionally
  identical if there weren't any dualistic view in the first place! Only in
  a dualistic framework is it possible to imagine something functionally
  equivalent
  to
  a human yet lacking consciousness; the alternative is that functional
  equivalence *requires* consciousness (you can't have functional
  equivalence without consciousness).
 
  This is failure before you even start!
 
  That's what you're doing... you haven't proved that a zombie can't do
  science, because the zombie question is not about what they can do or not;
  it is whether acting as we act (the human way) necessarily entails having
  consciousness or not (meaning that there would exist an extra property
  beyond behaviour, an extra thing undetectable from
  seeing/living/speaking/...
  with the zombie, that gives rise to consciousness).
 
  You haven't proved that a zombie can't do science, because you assert it
  at the start of the argument. The argument should be whether or not it is
  possible to have a *complete* *functional* (human) replica yet lacking
  consciousness.
 
  Quentin
 

 Scientist_A does science.

 Scientist_A closes his eyes and finds the ability to do science radically
 altered.

 Continue the process and you eliminate all scientific behaviour.

 The failure of scientific behaviour correlates perfectly with the lack of
 phenomenal consciousness.

Closing your eyes cuts off sensory data as well. So: not proven.

 Empirical fact:

 Human scientists have phenomenal consciousness

 also
 Phenomenal consciousness is the source of all our scientific evidence

 ergo

 Phenomenal consciousness exists and is sufficient and necessary for human
 scientific behaviour

Doesn't follow. The fact that you use X to do Y doesn't make
X necessary for Y. Something else could be used instead. Legs and
locomotion...






Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

The discussion has run its course. It has taught me a lot about the sorts
of issues and mindsets involved.

It has also given me the idea for the methodological-zombie-room, which I
will now write up. Maybe it will depict the circumstances and role of
phenomenality better than I have thus far.

Meanwhile I'd ask you to think about what sort of universe could make it
the case that if matter (A) acts 'as if' it interacted with matter (B), it
literally reifies aspects of that interaction, even though matter (B) does
not exist. For that is what I propose constitutes the phenomenal scenes.
It happens in brain material at the membranes of appropriately configured
neurons and astrocytes. Matter (B) is best classed as virtual bosons.

Just have a think about how that might be and what the universe that does
that might be made of. It's not made of the things depicted by the virtual
bosons.

cheers,

colin






Re: UDA revisited

2006-11-26 Thread David Nyman


On Nov 26, 11:50 pm, 1Z [EMAIL PROTECTED] wrote:

 Why use the word if you don't like the concept?

I've been away for a bit and I can't pretend to have absorbed all the
nuances of this thread but I have some observations.

1. To coherently conceive that a PZ which is a *functional* (not
physical) duplicate can nonetheless lack PC - and for this to make any
necessary difference to its possible behaviour - we must believe that
the PZ thereby lacks some crucial information.
2. Such missing information consequently can't be captured by any
purely *functional* description (however defined) of the non-PZ
original.
3. Hence having PC must entail the possession and utilisation of
information which *in principle* is not functionally (3-person)
describable, but which, in *instantiating* 3-person data, permits it to
be contextualised, differentiated, and actioned in a manner not
reproducible by any purely functional (as opposed to constructable)
analog.

Now this seems to tally with what Colin is saying about the crucial
distinction between the *content* of PC and whatever is producing it.
It implies that whatever is producing it isn't reducible to sharable
3-person quanta. This seems also (although I may be confused) to square
with Bruno's claims for COMP that the sharable 3-person emerges from
(i.e. is instantiated by) the 1-person level. As he puts it -'quanta
are sharable qualia'. IOW, the observable - quanta - is the set of
possible transactions between functionally definable entities
instantiated at a deeper level of representation (the constitutive
level). This is why we see brains not minds.

It seems to me that the above, or something like it, must be true if we
are to take the lessons of the PZ to heart. IOW, the information
instantiated by PC is in principle inaccessible to a PZ because the
specification of the PZ as a purely functional 3-person analog is
unable to capture the necessary constitutive information. The
specification is at the wrong level. It's like trying to physically
generate a new computer by simply running more and more complex
programs on the old one. It's only by *constructing* a physical
duplicate (or some equivalent physical analog) that the critical
constitutive - or instantiating - information can be captured.

We have to face it.  We won't find PC 'out there' - if we could, it
would (literally) be staring us in the face. I think what Colin is
trying to do is to discover how we can still do science on PC despite
the fact that whatever is producing it isn't capturable by 'the
observables', but rather only in the direct process and experience of
observation itself.

David

 Colin Geoffrey Hales wrote:
  SNIP
   No confusion at all. The zombie is behaving. 'Wide awake'
   in the sense that it is fully functional.

   Well, adaptive behaviour -- dealing with novelty --- is functioning.

  Yes - but I'm not talking about merely functioning. I am talking about the
  specialised function called scientific behaviour in respect of the natural
  world outside.

 You assume, but have not shown, that it is in a class of its own.

  The adaptive behaviour you speak of is adaptivity in
  respect of adherence or otherwise to an internal rule set, not adaptation
  in respect of the natural world outside.

 False dichotomy.
 Any adaptive system adapts under the influence of
 external impacts, and there are always some underlying rules, if only
 the rules of physics.

  BTW 'adaptive' means change, and change means novelty has occurred. If you
  have no phenomenality you must already have a rule as to how to adapt to
  all change - ergo you know everything already.

 Rules to adapt to change don't have to stipulate novel inputs in
 advance.

   I spent tens of thousands of hours designing, building,
   benchtesting and commissioning zombies. On the benchtop I
   have pretended to be their environment and they had no 'awareness'
   they weren't in their real environment. It's what makes bench
testing possible. The universe of the zombies was the
   universe of my programming. The zombies could not tell if
   they were in the factory or on the benchtop. That's why I
   can empathise so well with zombie life. I have been
   literally swatted by zombies (robot/cranes and other machines)
   like I wasn't there... scares the hell
   out of you! Some even had 'vision systems' but were still
   blind. So yes, the zombie can 'behave'. What I am claiming
   is they cannot do _science_ i.e. they cannot behave
   scientifically. This is a very specific claim, not a general
   claim.

   I see nothing to support it.

  I have already shown you conclusive empirical evidence you can
  demonstrate on yourself.

 No you haven't. Zombies aren't blind in the sense
 of not being able to see at all. You are just juggling
 different definitions of zombie.

  Perhaps the 'zombie room' will do it.
   - it's all the
same - action potential pulse trains traveling from sensors to
  brain.

No, 

RE: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales


 Of course they are analogue devices, but their analogue nature makes no
 difference to the computation. If the ripple in the power supply of a TTL
 circuit were 4 volts then the computer's true analogue nature would
 intrude and it would malfunction.

 Stathis Papaioannou

Of course you are right... The original intent of my statement was to try
and correct any misunderstandings about the difference between the
real piece of material manipulating charge and the notional 'digital'
abstraction represented by it. I hope I did that.
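
The point can be made quantitative with the standard TTL thresholds
(textbook nominal values; a back-of-envelope sketch in Python, not a
circuit simulation). The digital abstraction holds only while analogue
noise stays inside the noise margins:

    # Standard TTL logic thresholds (nominal textbook values, in volts).
    VOL_MAX = 0.4   # worst-case output LOW
    VOH_MIN = 2.4   # worst-case output HIGH
    VIL_MAX = 0.8   # highest input voltage still read as LOW
    VIH_MIN = 2.0   # lowest input voltage still read as HIGH

    low_margin = VIL_MAX - VOL_MAX    # 0.4 V of tolerable noise on a LOW
    high_margin = VOH_MIN - VIH_MIN   # 0.4 V of tolerable noise on a HIGH

    ripple = 4.0  # the 4 volt supply ripple from the example above
    # With 4 V of ripple the analogue reality swamps both margins, and the
    # 'digital' reading of the voltage can no longer be trusted.
    print(ripple <= min(low_margin, high_margin))  # False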

Colin





RE: UDA revisited

2006-11-26 Thread Stathis Papaioannou


Colin Hales writes:

 The very fact that the laws of physics, derived and validated using
 phenomenality, cannot predict or explain how appearances are generated is
 proof that the appearance generator is made of something else, and that
 something else is the reality involved, which is NOT
 appearances, but independent of them.
 
 I know that will sound weird...
 
 
  The only science you can do is I hypothesise that when I activate this
  nerve, that sense nerve and this one do this
 
  And I call regularities in my perceptions the external world, which
  becomes so
  familiar to me that I forget it is a hypothesis.
 
 Except that in time, as people realise what I just said above, the
 hypothesis has some empirical support: If the universe were made of
 appearances, when we opened up a cranium we'd see them. We don't. We see
 something generating/delivering them - a brain. That difference is the
 proof.

I don't really understand this. We see that chemical reactions in the brain 
generate 
consciousness, so why not stop at that? In Gilbert Ryle's words, the mind is 
what 
the brain does. It's mysterious, and it's not well understood, but it's still 
just chemistry.

  If I am to do more I must have a 'learning rule'. Who tells me the
  learning rule? This is a rule of interpretation. That requires context.
  Where does the context come from? There is none. That is the situation
  of
  the zombie.
 
  I do need some rules or knowledge to begin with if I am to get anywhere
  with interpreting sense data.
 
 You do NOT interpret sense data! In conscious activity you interpret the
 phenomenal scene generated using the sense data. Habituated/unconscious
 reflex behaviour with fixed rules uses sense data directly.

You could equally well argue that my computer does not interpret keystrokes, 
nor the 
electrical impulses that travel to it from the keyboard, but rather it creates 
a phenomenal 
scene in RAM based on those keystrokes. 
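
To push the analogy (a toy sketch; the few scancode values follow the
classic PC scan-code set, but treat the details as illustrative): the
keyboard delivers bare scancodes, and everything 'scene-like' is built by
the machine from rules it already holds.

    # Raw scancodes carry no intrinsic meaning; the machine constructs
    # its internal "scene" (here, just a text buffer) from stored rules.
    SCANCODE_MAP = {30: 'a', 48: 'b', 46: 'c'}

    def build_scene(scancodes):
        scene = []                        # the in-RAM representation
        for code in scancodes:
            scene.append(SCANCODE_MAP.get(code, '?'))
        return ''.join(scene)

    print(build_scene([46, 30, 48]))      # -> 'cab'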

 Think about driving home on a well travelled route. You don't even know
 how you got home. Yet if something unusual happened on the drive - ZAP -
 phenomenality kicks in and phenomenal consciousness handles the novelty.

If something unusual happens I'll try to match it as closely as I can to 
something I have 
already encountered and act accordingly. If it's like nothing I've ever 
encountered before 
I guess I'll do something random, and on the basis of the effect this has 
decide what I 
will do next time I encounter the same situation. 
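
That strategy is itself mechanisable. A minimal sketch (the similarity
measure, the threshold and all names are invented for illustration,
assuming situations arrive as numeric feature tuples):

    import random

    memory = []   # experienced (situation, action, outcome) triples

    def similarity(a, b):
        # Toy measure: negative distance between feature tuples.
        return -sum(abs(x - y) for x, y in zip(a, b))

    def act(situation, actions, threshold=-2.0):
        if memory:
            past, action, outcome = max(
                memory, key=lambda m: similarity(m[0], situation))
            # Close enough to something already encountered, and it
            # worked last time: reuse the matched action.
            if similarity(past, situation) >= threshold and outcome > 0:
                return action
        # Like nothing encountered before: try something at random.
        return random.choice(actions)

    def learn(situation, action, outcome):
        # Record the effect, shaping what is done next time.
        memory.append((situation, action, outcome))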

  With living organisms, evolution provides this
  knowledge
 
 Evolution provided
 a) a learning tool (brain) that knows how to learn from phenomenal
consciousness, which is an adaptive presentation of real
external-world a-priori knowledge.
 b) Certain simple reflex behaviours.
 
  while with machines the designers provide it.
 
 Machine providers do not provide (a)
 
 They only provide (b), which includes any adaptivity rules, which are just
 more rules.
 
 
 
  Incidentally, you have stated in your paper that novel technology as the
  end
  product of scientific endeavour is evidence that other people are not
  zombies, but
  how would you explain the very elaborate technology in living organisms,
  created
  by zombie evolutionary processes?
 
  Stathis Papaioannou
 
 Amazing but true. Trial and error. Hypothesis/Test in a brutal live-or-die
 laboratory called The Earth. Notice that the process selected for
 phenomenal consciousness early on... which I predict will eventually be
 proven to exist in nearly all animal cellular life (vertebrate and
 invertebrate and even single-celled organisms) to some extent. Maybe even
 in some plant life.
 
 'Technology' is a loaded word... I suppose I mean 'human-made' technology.
 Notice that chairs and digital watches did not evolve independently of
 humans. Nor did science. Novel technology could be re-termed 'non-DNA
 based' technology, I suppose. A bird flies. So do planes. One is DNA based.
 The other is not DNA based, but created by a DNA-based creature called the
 human. Eventually conscious machines will create novel technology too -
 including new versions of themselves. It doesn't change any part of the
 propositions I make - just contextualises them inside a fascinating story.

The point is that a process which is definitely non-conscious, i.e. evolution,
produces novel machines, some of which are themselves conscious at that.

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 

RE: UDA revisited

2006-11-26 Thread Stathis Papaioannou


Colin Hales writes:

 OK. There is a proven mystery called the hard problem. Documented to death
 and beyond. Call it Physics X. It is the physics that _predicts_ (NOT
 DESCRIBES) phenomenal consciousness (PC). We have, through all my fiddling
 about with scientists, conclusive scientific evidence PC exists and is
 necessary for science.
 
 So what next?
 
 You say to yourself... none of the existing laws of physics predict PC.
 Therefore my whole conception of how I understand the universe
 scientifically must be missing something fundamental. Absolutely NONE of
 what we know is part of it. What could that be?.

The hard problem is not that we haven't discovered the physics that explains
consciousness, it is that no such explanation is possible. Whatever Physics X
is, it is still possible to ask, "Yes, but how can a blind man who understands
Physics X use it to know what it is like to see?" As far as the hard problem
goes, Physics X (if there is such a thing) is no more of an advance than
knowing which neurons fire when a subject has an experience.

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---




RE: UDA revisited

2006-11-25 Thread Stathis Papaioannou


Colin Hales writes:


  So, I have my zombie scientist and my human scientist and
  I ask them to do science on exquisite novelty. What happens?
  The novelty is invisible to the zombie, who has the internal
  life of a dreamless sleep. The reason it is invisible is
  because there is no phenomenal consciousness. The zombie
  has only sensory data to use to do science. There are an
  infinite number of ways that same sensory data could arrive
  from an infinity of external natural world situtations.
  The sensory data is ambiguous - it's all the same - action
  potential pulse trains traveling from sensors to brain.
 
 Stathis:
  All I have to work on is sensory data also.
 
 No you don't! You have an entire separate set of perceptual/experiential
 fields constructed from sensory feeds. The fact of this is proven - think
 of hallucination, when the sensory data gets overridden by internal
 imagery (schizophrenia). Sensing is NOT our perceptions. It is these
 latter phenomenal fields that you consciously work from as a scientist,
 not the sensory feeds. This seems to be a recurring misunderstanding or
 something people seem to be struggling with. It feels like it's coming from
 your senses but it's all generated inside your head.

OK, I'll revise my claim: all I have to work with is perceptions, which I 
assume are coming from sense data, which I assume is coming from the real 
world impinging on my sense organs. The same is true of a machine which 
receives environmental input and processes it. At the processing stage, this 
is the equivalent of perception. The processor assumes that the information it 
is processing originates from sensors which are responding to real world 
stimuli, but it has no way of knowing if the data actually arose from 
spontaneous or externally induced activity at any point along the chain - the 
sensors, transducers, conductors, or components of the processor itself: 
whether they are hallucinations, in fact. There might be some clue that it is 
not a legitimate sensory feed, but if the hallucination is perfect it is by 
definition impossible to detect. 

  I can't be certain that there is a real world out there, and
  even if there is, all I can possibly do is create a virtual
  reality in my head which correlates with the patterns of sense
  data I receive.
 
 Yes - the virtual reality is the collection of phenomenal scenes
 mentioned above  is what you use to learn from, not the sense data.
 Put more accurately - you learn things that are consistent with the
 phenomenal scenes. There is a tendency in some circles to think of
 consciousness as an epiphenomenal irrelevance, devoid of causal
 efficacy... I would disagree in that it's causal efficacy is in CHANGE of
 belief (learning), not the holding of static belief. Scientific behaviour
 is all about changing belief.
 
 reality of the external world? It doesn't matter what you believe about
 the existence or otherwise of 'reality'. Whatever it is, we have an
 a-priori tool for perceiving it that is a phenomenon. i.e. Phenomenality
 is a real world phenomenon just as real as a rock. Leave the reality
 discussion to the campfire.

OK, let's avoid that thankless discussion...

 Whatever 'reality' is, it is regular/persistent/repeatable/stable enough
 to do science on it via our phenomenality and come up with laws that seem
 to characterise how it will appear to us in our phenomenality.

You could say: my perceptions are regular/persistent/repeatable/stable enough 
to assume an external reality generating them and to do science on. And if a 
machine's central processor's perceptions are similarly regular/persistent/
repeatable/stable, it could also do science on them. The point is, neither I 
nor the machine has any magical knowledge of an external world. All we have is 
regularities in perceptions, which we assume to be originating from the 
external world because that's a good model which stands up no matter what we 
throw at it. 

  Certainly, it is ambiguous, and that is why we have science: we
  come up with a model or hypothesis consistent with the sense data,
  then we look for more sense data to test it.
 
 You describe scientific behaviour...yes, but the verification is not
 through sense data but through phenomenal fields. The phenomenal fields
 are NOT the sense data. Phenomenal fields can be ambiguous, yes.
 Scientific procedure deals with that.
 ..but..
 The sense data is separate and exquisitely ambiguous and we do not look
 for sense data to verify scientific observations! We look for
 perceptual/phenomenal data. Experiences. Maybe this is yet another
 terminological issue. Sensing is not perception.

If the perception is less ambiguous than the sense data, that is a false 
certainty. 

  Any machine which looks for regularities in sensory feeds
  does the same thing. Are you saying that such a machine could
  not find the regularities or that if it did find the
  regularities it would thereby be conscious?
 
 I 

RE: UDA revisited

2006-11-25 Thread Stathis Papaioannou


Colin hales writes:

  You don't think paramecium behaviour could be modelled on a computer?
 
  Stathis Papaioannou
 
 A paramecium can behave like it's perceiving something. I haven't observed
 it myself but I have spoken to people who have and they say they have
 behaviours which betray some sort of awareness beyond the scope of their
 boundary. A teeny paramecium-sized primitive external world model. A teeny
 bit of adaptive behaviour.
 
 So a computer model?
 
 A) that included a model of those aspects of the physics participating in
 what the paramecium could have as experiences.
 B) That included all the molecular pathways (cilia molecules, the lot)
 C) that included a model of the response to the perceptual physics
 D) That included a model of the environment of the paramecium
 
 would be pretty good. But the model would not be having experiences.
 There's the age old distinction between [modelling perfectly] and [the
 perfect model]. The former aims at realistic replication. The latter
 aims at suited to task. I think you could get pretty close to it
 behaviourally. Maybe indistinguishable.
 
 The way to test it? Make the model drive a nano-robot paramecium shell.
 Then let it live with real paramecium. Then expose both to novelty and see
 what the differences are.
 
 I don't think any amount of detail will ever make the model or the
 computer it is running on have experiences... the only perfect model of
 the paramecium is a paramecium. Also...paramecium is not noted for its
 scientific behaviour!

The computer driving the paramecium shell might be difficult to build, but in 
principle it would be the same sort of task as, say, a computer running an 
analogue clock or projecting a film (i.e., one originally filmed on a 
celluloid strip) onto a screen. With sufficient attention to detail, it should 
be impossible to distinguish the digital replica from the original. If you 
don't believe the paramecium replica can be made indistinguishable from the 
original, which part of the paramecium is it that would be so hard to 
simulate? If you do manage to simulate it, down to the quantum level if 
necessary, then how could it possibly not behave like a real paramecium? 
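
For concreteness, here is a toy behavioural sketch in Python (purely 
illustrative and hypothetical - a crude stimulus-response abstraction, 
nothing like the full molecular detail Colin lists; all numbers and the 
avoidance rule are invented):

import math
import random

class ToyParamecium:
    """Crude run-and-avoid behaviour: swim forward, back off and
    reorient at random when an aversive patch is sensed."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0      # radians
        self.speed = 1.0

    def senses_aversive(self, patch):
        # Toy "sensor": is the aversive patch within one body length?
        return math.hypot(self.x - patch[0], self.y - patch[1]) < 1.0

    def step(self, patch):
        if self.senses_aversive(patch):
            # Avoidance reaction: back up, then pick a new heading.
            self.x -= self.speed * math.cos(self.heading)
            self.y -= self.speed * math.sin(self.heading)
            self.heading = random.uniform(0.0, 2.0 * math.pi)
        else:
            # Otherwise keep swimming forward.
            self.x += self.speed * math.cos(self.heading)
            self.y += self.speed * math.sin(self.heading)

p = ToyParamecium()
for _ in range(20):
    p.step(patch=(3.0, 0.0))
print(round(p.x, 2), round(p.y, 2))

The point is only that behaviour of this kind is straightforwardly 
computable; whether any such program has experiences is exactly what is in 
dispute.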

Stathis Papaioannou




Re: UDA revisited

2006-11-25 Thread 1Z


Colin Geoffrey Hales wrote:

 BTW there's no such thing as a truly digital computer. They are all
 actually analogue. We just ignore the analogue parts of the state
 transitions and time it all so it makes sense.

And if the analogue part intrudes, the computer has malfunctioned
in some way. So correctly functioning computers are digital.

(And analogue physics might turn out to be digital)
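
To illustrate what "ignoring the analogue parts" amounts to (a hedged sketch 
in Python; the thresholds are invented and belong to no particular logic 
family):

V_LOW_MAX = 0.8    # assumed: anything at or below this reads as 0
V_HIGH_MIN = 2.0   # assumed: anything at or above this reads as 1

def read_bit(voltage):
    # Inside the noise margins the continuous voltage is treated as a
    # clean bit; in between, the analogue part has "intruded".
    if voltage <= V_LOW_MAX:
        return 0
    if voltage >= V_HIGH_MIN:
        return 1
    raise ValueError("voltage %.2f V is in the forbidden zone: malfunction"
                     % voltage)

print([read_bit(v) for v in (0.1, 3.2, 0.5)])   # -> [0, 1, 0]
try:
    read_bit(1.4)
except ValueError as err:
    print(err)                                  # the malfunction case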





Re: UDA revisited

2006-11-25 Thread 1Z


Colin Geoffrey Hales wrote:
 
  You don't think paramecium behaviour could be modelled on a computer?
 
  Stathis Papaioannou

 A paramecium can behave like it's perceiving something. I haven't observed
 it myself but I have spoken to people who have and they say they have
 behaviours which betray some sort of awareness beyond the scope of their
 boundary.

Perception can occur *at* a boundary. Touch is perception.

 A teeny paramecium-sized primitive external world model. A teeny
 bit of adaptive behaviour.

 So a computer model?

 A) that included a model of those aspects of the physics participating in
 what the paramecium could have as experiences.
 B) That included all the molecular pathways (cilia molecules, the lot)
 C) that included a model of the response to the perceptual physics
 D) That included a model of the environment of the paramecium

 would be pretty good. But the model would not be having experiences.

Because...?

 There's the age old distinction between [modelling perfectly] and [the
 perfect model]. The former aims at realistic replication. The latter
 aims at suited to task. I think you could get pretty close to it
 behaviourally. Maybe indistinguishable.

 The way to test it? Make the model drive a nano-robot paramecium shell.
 Then let it live with real paramecium. Then expose both to novelty and see
 what the differences are.

Why should there be any? Any AI worthy of the name will learn
from experience. Doing so at a human level has not been
achieved; doing so at the paramecium level might be a lot easier.

 I don't think any amount of detail will ever make the model or the
 computer it is running on have experiences...

That's an opinion, not an argument.





Re: UDA revisited

2006-11-25 Thread 1Z


Bruno Marchal wrote:
 Le 24-nov.-06, à 05:48, Colin Geoffrey Hales a écrit :

  I agree very 'not interesting' ... a bit like saying assuming comp
  endlessly.and never being able to give it teeth.

 I guess you don't know about my work (thesis). I know there are some
 philosopher who considers it controversial, but it is not a work in
 philosophy

Yes it is. You need the premiss that
numbers exist Platonically (or that Truth and Existence
merge in the Plotinian One) which is as philosophical
as anything could be.

Consider also that your argument embeds the movie graph
argument, which, as you admit, is equivalent to
Maudlin's Olympia argument... which is philosophy,
and presented as such by Maudlin.






Re: UDA revisited

2006-11-25 Thread 1Z


Colin Geoffrey Hales wrote:
 
 
  Colin Hales writes:
 
  So, I have my zombie scientist and my human scientist and
  I ask them to do science on exquisite novelty. What happens?
  The novelty is invisible to the zombie, who has the internal
  life of a dreamless sleep. The reason it is invisible is
  because there is no phenomenal consciousness. The zombie
  has only sensory data to use to do science. There are an
  infinite number of ways that same sensory data could arrive
  from an infinity of external natural world situations.
  The sensory data is ambiguous - it's all the same - action
  potential pulse trains traveling from sensors to brain.

 Stathis:
  All I have to work on is sensory data also.

 No you don't! You have an entire separate set of perceptual/experiential
 fields constructed from sensory feeds. The fact of this is proven - think
 of hallucination, when the sensory data gets overridden by the internal
 imagery (schizophrenia). Sensing is NOT our perceptions. It is these
 latter phenomenal fields that you consciously work from as a scientist.

So what? There is nothing in them that was
not first in sensing, unless magic is taking place.

 Not the sensory feeds. This seems to be a recurring misunderstanding or
 something people seem to be struggling with. It feels like it's coming from
 your senses but it's all generated inside your head.

Only in dreaming and hallucination.

  I can't be certain that there is a real world out there, and
  even if there is, all I can possibly do is create a virtual
  reality in my head which correlates with the patterns of sense
  data I receive.

 Yes - the virtual reality is the collection of phenomenal scenes
 mentioned above  is what you use to learn from, not the sense data.
 Put more accurately - you learn things that are consistent with the
 phenomenal scenes. There is a tendency in some circles to think of
 consciousness as an epiphenomenal irrelevance, devoid of causal
 efficacy... I would disagree in that it's causal efficacy is in CHANGE of
 belief (learning), not the holding of static belief. Scientific behaviour
 is all about changing belief.

But it does not follow that without consciousness there is no
change of belief. Any more than without legs there is no
locomotion.

You probably have some adaptive software on your PC...

 reality of the external world? It doesn't matter what you believe about
 the existence or otherwise of 'reality'. Whatever it is, we have an
 a-priori tool for perceiving it that is a phenomenon.

It doesn't follow that it can't
be perceived without phenomenality.
(Or that there is any special certainty to human perceptions).

  i.e. Phenomenality
 is a real world phenomenon just as real as a rock. Leave the reality
 discussion to the campfire.

 Whatever 'reality' is, it is regular/persistent/repeatable/stable enough
 to do science on it via our phenomenality and come up with laws that seem
 to characterise how it will appear to us in our phenomenality.

We do it via our phenomenality. That doesn't mean
that entities devoid of phenomenality can't do it.

  Certainly, it is ambiguous, and that is why we have science: we
  come up with a model or hypothesis consistent with the sense data,
  then we look for more sense data to test it.

 You describe scientific behaviour...yes, but the verification is not
 through sense data but through phenomenal fields. The phenomenal fields
 are NOT the sense data. Phenomenal fields can be ambiguous, yes.
 Scientific procedure deals with that.

None of that shows that phenomenal fields are essential
to the process. legs and locomotion again.

 ..but..
 The sense data is separate and exquisitely ambiguous and we do not look
 for sense data to verify scientific observations! We look for
 perceptual/phenomenal data. Experiences. Maybe this is yet another
 terminological issue. Sensing is not perception.

Disjoint sense data are ambiguous. They need to be
contextualised with other sense data, memories, innate
reflexes and so on. The $64,000 question is
whether you can have all that without phenomenality.

  Any machine which looks for regularities in sensory feeds
  does the same thing. Are you saying that such a machine could
  not find the regularities or that if it did find the
  regularities it would thereby be conscious?
 
  Stathis Papaioannou

 I am saying the machine can find regularity in the sensory feeds - easily.
 That is does so does not mean it is conscious.

If it can find regularities without consciousness, Zombies *can* do
science.

  It does not mean it has
 access to the external natural world.

Despite having sensory feeds? What kind of sense doesn't
give you access to the external world?

 ..and that is not what WE dowe find regularity in the perceptual fields.

 Looking for regularity in sensory data is a totally different process from
 looking for regularity in a perceptual field. Multiple sensory feeds can
 lead to the same perceptual field.
  Multiple 

Re: UDA revisited

2006-11-25 Thread 1Z


Colin Geoffrey Hales wrote:


 Perceptual fields can misrepresent reality. The 'virtual reality' generator
 is not perfect, but it's pretty good. Scientific procedure aims to
 eliminate the effects of such misdirection. Through the use of test and
 control and review and critical argument. The main thing is that there is
 a perceptual field of the external world. Without it we wouldn't even have
 a chance to be mistaken about the external world!

We would have the chance to make cognitive mistakes.





Re: UDA revisited

2006-11-25 Thread 1Z


Colin Geoffrey Hales wrote:
 Colin
 That is the invisibility I claim at the center of
  the zombie's difficulty.

 Brent
 But it will also present the same difficulty to the human scientist.  And
 in fact it is easy to build a robot that detects and responds to radio
 waves that are completely invisible to a human scientist.

 Colin
 I'm not talking about invisibility of within a perceptual field. That is
 an invisibility humans can deal with to some extent using instruments. We
 inherit the limits of that process, but at least we have something
 presented to us from the outside world. The invisibility I speak of is the
 invisibility of novel behaviour in the natural world within a perceptual
 field.


To an entity without a phenomenal field, novel
behaviour will be phenomenally invisible. Everything
will be phenomenally invisible. That doesn't
mean it won't be able to have non-phenomenal
access to events, including novel ones.

 Without a phenomenal representation of the external world we cannot
 use existing knowledge to predict anything 'out there' that we can
 reliably be surprised about. There is no 'out there' without phenomenal
 representation.

That's a claim -- that any projection from internal sense-data to a
hypothetical
external source is necessarily phenomenal -- not an argument.

 Brent:
 Are you saying that a computer cannot have any pre-programmed rules for
 dealing with sensory inputs, or if it does it's not a zombie.

 Colin:
 I would say that a computer can have any amount of pre-programmed rules
 for dealing with sensory inputs. Those rules are created by humans and

Yes.

 grounded in the perceptual experiences of humans.

Not necessarily. AI researchers try to generalise as much as possible.

 That would be a-priori
 knowledge. The machine itself has no experiences related to the rules or
 its sensing, hence it is a zombie. The possession of behavioural rules
 does not entail zombie-ness. The lack of possession of perceptual fields
 does.

 Brent:
 Or are you claiming that humans have some pre-scientific knowledge that
 cannot be implemented in a computer.

 Colin:
 Yes! Humans have a genetically bestowed capacity to make cellular material
 which takes advantage of (as yet un-described) attributes of the natural
 world that enable sensory feeds to create phenomenal fields, thus
 connecting the human with the external natural world.

You pass easily from "humans are connected to the external
world phenomenally" to "no entity can be connected to the external
world except phenomenally".

 This is before any
 derived knowledge (scientific or not). So I suppose 'pre-scientific' is a
 good term for it. Innate a-priori knowledge (not learned or 'learned'
 during construction).

 I am saying that it cannot be computed. The experiences must be had. This
 does not preclude a different sort of chip that does have experiences
 because it replicates (not models) the actual physics of the phenomenal
 fields. This physics could be mixed into a computational substrate.

 So I'm saying that 'computing' grounded in perceptual fields is non-zombie.
 But we don't have that form of computing. We have numerical/symbolic
 models based on/grounded in human perception.
 
 cheers,
 
 Colin





Re: UDA revisited

2006-11-25 Thread 1Z


Colin Geoffrey Hales wrote:
 The PHENOMENAL

 Colin
  What I have done is try to figure out a valid test for phenomenal
  consciousness.

 Brent
 What is the functional definition of phenomenal?
 Is there non-phenomenal consciousness?

 Colin
 Phenomena are things that happen in the universe.

That's a loose and popular usage. It doesn't distinguish
phenomenal consciousness from cognition, because cognition happens
in the universe too.





Re: UDA revisited

2006-11-25 Thread Colin Geoffrey Hales

 If all you have is a bunch of numbers (or 4-20mA current loop
 signals or 1-5V signals) dancing away, and you have no
 a-priori knowledge of the external world, how are you to
 create any sort of model of the external world in the first
 place? You don't even know it is there. That is the world
 devoid of phenomenal consciousness.

 You could say exactly the same thing about a bunch of neurons and chemicals.
Yet they produce consciousness.

Yes. Neurons and their chemicals do contrive to construct phenomenal
scenes. The question to ask yourself is: what is different about their
circumstance that this be so? Or better: what is missing from my way of
thinking that has me unable to imagine how neuron behaviour can produce
such a thing - and what is the difference between that and wired signals,
numbers and rules?


 If humans give you a model, you still don't know what it is about.
 It's just a bunch of rules (when this input does this, do that...
 and so on). None of which is an experience. None of which gives
 you any innate awareness of the external world.

 If you can learn and act you do know what it is about.
 You're just making an assertion that none of it is
 experience or innate awareness.

Hmmm. OK. So you're 'learning', are you? What rules of learning are there
and how did you get them? How do you 'know' what appropriate action to
take? Rules for learning are rules like the others. Tell me how a system
devoid of a phenomenal representation of the external world could ever
form a representation of the external world without knowing how to do that
already.
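
To make "rules for learning are rules like the others" concrete, here is a 
hedged Python sketch (standard k-means on made-up 1-D data, chosen only for 
brevity): every step the learner takes is itself an a-priori rule handed to 
it in advance.

def kmeans_1d(data, k=2, iters=10):
    centres = list(data[:k])        # a-priori rule: where to start
    for _ in range(iters):
        # A-priori rule 1: assign each point to the nearest centre.
        groups = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centres[i]))
            groups[nearest].append(x)
        # A-priori rule 2: move each centre to its group's mean.
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres

print(kmeans_1d([1.0, 1.2, 0.9, 5.0, 5.3, 4.8]))   # two clusters emerge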


 That's why a real phenomenon happening in your head that innately
 connects to the real phenomena in the external natural world
 and constructs an experienced representation of it, devoid of
 all knowledge 'about it', is necessary before you can know
 anything about it _at all_.

 Unsupported assertion.

OK...want proof? Let's do a test. You are a scientist. You are about to do
science on a coffee cup in front of you. Close your eyes.

Now explain how you could possibly be as scientifically broad/adept in
your description of coffee cups. (Meanwhile I have filled the coffee cup
with acid, something of which you are completely unaware). The whole
phenomenal scene connecting you with the external world is GONE. What more
support do you want? What more is possible before a simple statement such
as the one I make above becomes reasonable?

Or, put it another way... exactly what is it that you are asserting acts in
its place? Maybe if you could tell me what that is, I might understand.


 There is a natural tendency to anthropomorphise our experiences
 into the artifact. Imagine yourself in a black silent room with
 a bunch of numbers streaming by and a bunch of dials you can
 use to send numbers back out. Now tell me how you can ever deduce
 the real external world from all those numbers. You can't.
 You can say 'when I poke this dial that number over there does
 that'. That is your whole universe. You have to stop thinking
 like a human and really imagine 'what it is like' to be a zombie.

 To imagine is to create an internal model - so imagining what it
 is like to be a zombie seems to be a contradiction in terms.

 Brent Meeker

Erm I am trying to convey what it is like to be a human contrasted
with a zombie. YES...I am using human imagination to do that. You can't
use that back against me... please try to do the imagining I suggest
instead of criticising the attempt. I KNOW humans have imagination and
zombies don't...sheesh! cut me some slack here!

OK... If you like then... consider this bit:

 Imagine yourself in a black silent room with
 a bunch of numbers streaming by and a bunch
 of dials you can use to send numbers back out.

Delete all that as well. NOTHING. No awareness of even the numbers. For that
is what the zombie has... even worse. No awareness even of its own sensing.
Nothing. Now put yourself in the zombie's shoes for a while.

Colin








Re: UDA revisited

2006-11-24 Thread Quentin Anciaux

Hi,

Le Vendredi 24 Novembre 2006 22:54, Colin Geoffrey Hales a écrit :
 Now that there is a definite role of consciousness (access to novelty),
 the statement 'functional equivalent' makes the original 'philosophical
 zombie' an oxymoron...

But functional equivalence is a requisite! By definition, a zombie is 
a creature which acts and looks like any other conscious creature (human), 
whose behavior is indistinguishable from a real person's but which lacks any 
conscious experience. What this statement says is that you should accept one 
of these propositions:

1) Consciousness is not tied to a given behavior nor to a given physical 
attribute; replicating these does not give consciousness. (dualism)

2) Zombies are impossible: if you make a functionally identical being, it will 
be conscious. (physicalism or computationalism)

 the first premise of 'functional equivalence' is 
 wrong. The zombie can't possibly be functionally identical without
 consciousness, which stops it being a zombie!

Since you accept point 2), I don't understand why you continue to 
see any explanatory power in the zombie concept.

 To move forward, the 'breed' of the zombie in the paper is not merely
 'functionally identical'. That requirement is relaxed. Instead it is
 physically identical in all respects except the brain. 

It is not a zombie then (not the philosophical zombie we're talking about). 
What you're doing is taking as premise that the supposed zombie is not 
functionally identical (because that's how you differentiate it from 
the real scientist) and then concluding that it isn't 
functionally identical to the real scientist... in other words, it is false 
because it is false.

 This choice is 
 justified empirically - the brain is known to be where it happens. Then
 there is an exploration of the difference between the human and zombie
 brains that could account for why/how one is conscious and the other is
 not. 

The point shows that functional equivalence leads to consciousness, or 
else you should be a dualist.

SNIP


Regards,
Quentin Anciaux





Re: UDA revisited

2006-11-24 Thread 1Z


Colin Geoffrey Hales wrote:
 Hi Quentin,
 
  Hi Colin,
 
 snip
  ... I am more interested in proving scientists aren't/can't be
  zombiesthat it seems to also challenge computationalism in a
 certain
  sense... this is a byproduct I can't help, not the central issue. Colin
 
 
  I don't see how the idea of zombies could challenge computationalism...
  Zombie is an argument against dualism... in other words it is the ability
  to construct a functionally identical being to a conscious one, yet the
  zombie is not conscious. Computationalism does not predict zombies simply
  because computationalism is one way to explain consciousness.
 
  Quentin
 

 Now that there is a definite role of consciousness (access to novelty),
 the statement 'functional equivalent' makes the original 'philosophical
 zombie' an oxymoron...the first premise of 'functional equivalence' is
 wrong. The zombie can't possibly be functionally identical without
 consciousness, which stops it being a zombie!

You need to distinguish between having a function and being a function.
Locomotion is a function. Legs have the function of
locomotion. But wheels or wings or flippers could fulfil the same
function.

 To move forward, the 'breed' of the zombie in the paper is not merely
 'functionally identical'. That requirement is relaxed. Instead it is
 physically identical in all respects except the brain. This choice is
 justified empirically - the brain is known to be where it happens. Then
 there is an exploration of the difference between the human and zombie
 brains that could account for why/how one is conscious and the other is
 not. At that point (post-hoc) one can assess functional equivalence. The
 new zombie is born.

 Now...If I can show even one behaviour that the human can do that the new
 zombie can't replicate then I have got somewhere. The assessment benchmark
 chosen is 'scientific behaviour'. This is the 'function' in which
 equivalence is demanded. Of all human behaviours this one is unique
 because it is directed at the world _external_ to the scientist.

Surely just about every action is directed towards the external world.

 It also
 produces something that is demanded externalised (a law of nature, 3rd
 person corroboration). The last unique factor is that the scientist
 creates something previously unknown by ALL. It is unique in this regard
 and the perfect benchmark behaviour to contrast the zombie and the human.

 So, I have my zombie scientist and my human scientist and I ask them to do
 science on exquisite novelty. What happens? The novelty is invisible to
 the zombie, who has the internal life of a dreamless sleep.

I think you are confusing lack of phenomenality with lack of
response to the environment. Simple sensors
can respond without (presumably) phenomenality.
So can humans with blindsight (but not very efficiently).

  The reason it
 is invisible is because there is no phenomenal consciousness. The zombie
 has only sensory data to use to do science. There are an infinite number
 of ways that same sensory data could arrive from an infinity of external
 natural world situations. The sensory data is ambiguous

That doesn't follow. The Zombie can produce different responses
on the basis of physical differences in its input, just as
a machine can.

- it's all the
 same - action potential pulse trains traveling from sensors to brain.

No, it's not all the same. It's coded in a very complex way. It's
like saying the information in your computer is all the same -- it's
all ones and zeros.

 The zombie cannot possibly distinguish the novelty from the sensory data
 and has no awareness of the external world or even its own boundary.

Huh? It's perfectly possible to build a robot
that produces a special signal when it encounters input it has
not encountered before.
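
A minimal sketch of such a signal (illustrative only; a real robot would use 
something statistical rather than exact matching, and the input frames here 
are invented):

seen = set()

def process(frame):
    # Emit the 'special signal' the first time an input is encountered.
    if frame not in seen:
        seen.add(frame)
        return "NOVEL"
    return "familiar"

for frame in ["red", "red", "blue", "red", "green"]:
    print(frame, "->", process(frame))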

 OK.

 Now, we have the situation where in order that science be done by a human
 we must have phenomenal consciousness. This is 'phenomena' - actual
 natural world 'STUFF' behaving in a certain way. If I was to do science on
 a rock... that rock is a natural world phenomenon. So is consciousness. The
 fact that our present scientific modes of thinking make the understanding
 of it as a phenomenon difficult is irrelevant. The reality of the existence
 of it is proven because science exists.

 How does this reach computationalism?

 Well if consciousness is a phenomenon like any other, as it must be, then
 phenomena of the type applicable to consciousness (whatever the mysterious
 hard problem solution is) must be present in order that scientific
 behaviour can happen. The phenomena in a computational artifact - one that
 is manipulating symbols - are the phenomena of the artifact, not those
 represented by any symbols being manipulated.

 So the idea of a functional equivalent based on manipulation of symbols
 alone is arguably/demonstrably wrong in one case only: scientific
 behaviour. From an AGI engineering perspective it means pure computation
 won't do it. So 

Re: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales


 Hi,

 Le Vendredi 24 Novembre 2006 22:54, Colin Geoffrey Hales a écrit :
 Now that there is a definite role of consciousness (access to novelty),
 the statement 'functional equivalent' makes the original 'philosophical
 zombie' an oxymoron...

 But functional equivalence is a requisite! By definition, a zombie is
 a creature which acts and looks like any other conscious creature (human),
 whose behavior is indistinguishable from a real person's but which lacks
 any conscious experience. What this statement says is that you should
 accept one of these propositions:

 1) Consciousness is not tied to a given behavior nor to a given physical
 attribute; replicating these does not give consciousness. (dualism)

 2) Zombies are impossible: if you make a functionally identical being, it
 will be conscious. (physicalism or computationalism)


OK. Skip the whole zombie description. Call them Scientist_A and
Scientist_B. Let phenomenal consciousness = PC (because I am tired of
writing it!). Scientist_A has PC. Scientist_B doesn't.

Q. Is there any sort of brain we can give Scientist_B that
enables scientific behaviour without PC?

A. NO. Reasons in the paper.

Scientist_B does not have to have Scientist_A's type of brain. But what
Scientist_B MUST have is PC. Otherwise Scientist_B will not be able to do
science. This is only claimed for scientific behaviour. Nothing else. It
clearly has implications elsewhere, though.

Where this fits into physicalism, computationalism, functionalism,
dualism, Xism, Yism, Zism? ...frankly it won't change anything knowing
that... my guide is nature, not philosophy... although I would like to
calibrate the ideas in those terms so I can communicate it to those who do
care and explore the implications.

Practical Implications (reasons why I want this aired so badly):

1) There is a reason for computationalism being challenged. I care about
the future of AGI. For if $millions are being spent on AGI in the belief
that somehow the work involves the actual or eventual creation of
consciousness then those involved need to know there are the beginnings of
an argument against that belief. Also there is a valid reason to caution
those involved as to the expectations of performance: the zombies they are
making will be very habitat-bound and fragile in the face of novelty. This
is already seen in all AGI experiments to date. Talk to Rodney Brooks and
Gerald Edelman.

2) There is a reason for scientists (=me) to be challenged. Scientists are,
themselves, for the very first time, empirical evidence of something.
Scientists are empirical evidence of the reality of the existence of PC.
Therefore PC is now evidence of something. That something is currently not
being explored and never has in any focussed fashion. (a) PC delivers
scientific evidence and (b) it is also evidence in and of itself.
Scientists currently allow/demand (a) and disallow (b) for no good reason.
Now we have the evidence we can start to get used to the idea that (b) is
OK to explore. Scientists may say there is no doubt as to the existence of
consciousness and they may actually believe that... but if they really
did then activity (b) should be valid science - and it is not accepted as
such. This is a fundamental inconsistency at the heart of science.

That is ultimately what my paper is all about: cultural change.

But if you must have an 'ism bucket to put me in... let's see.

I am a radical dual aspect monist. I think.

regards,

Colin Hales






RE: UDA revisited

2006-11-24 Thread Stathis Papaioannou


Colin Hales writes:

 I would also predict that a UD reified in our universe would be like
 that...'not much' consciousness (the consciousness of the computer = that
 of which it is made, not that of the program). There are no phenomena
 reified as a result of the UD operating. The only phenomena happening are
 the machinations of the hardware of the UD.
 
 BTW I am not saying that abstract computation of some sort cannot be
 involved in artificial general intelligence. What I am saying is that
 bolted to the computation are non-negotiable aspects that have to be
 done in real phenomena or the machine will have no phenomenal
 consciousness. The phenomena are of a type found in brain material.
 Conversely the sort of universe that makes brain material have
 consciousness is not made of platonic arithmetical entities.
 
 In one sense 'comp', I gather, is the claim that computation of an
 abstracted realm-X done in realm-Y can create realm-X phenomena. By
 extension it means 'arithmetic realism' of the classical platonic kind is
 false.

It still isn't clear to me whether you believe it is possible for a digital 
computer to be conscious or not.

Stathis Papaioannou




RE: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales

Stathis wrote:

 It still isn't clear to me whether you believe it is possible
 for a digital computer to be conscious or not.


Digital computers of the type we currently have?
In any/all combinations, including the whole internet?

No... that they have the consciousness of the kind we have.
No... that they have the consciousness of the programs they run.
But
Yes.. in that they may be having some sort of experience... whatever it is
like to be a hot electrically noisy chunk of silicon with all sorts of
dopants shuffling charge around in circles...

But...future computers of a new type?
Yes... Except the percentage 'digital' (meaning abstract symbol
manipulation) will be far less than 100% and could conceivably be zero. I
suspect hybrids will be common. It'll be up to us. I do not mean quantum
computers. They will be shown to be zombies too. Being a zombie able to
manipulate a zillion abstract symbols simultaneously just means you get to
be a very powerful zombie. A megazombie that is just as inept at science.
It will be able to make mistakes far more quickly though. That's my
prediction, anyway.

BTW there's no such thing as a truly digital computer. They are all
actually analogue. We just ignore the analogue parts of the state
transitions and time it all so it makes sense.

The first one I plan to build (my PhD project is doing modelling for it)
will be a hybrid for a whole bunch of reasons. It will self-verify the
existence of experiences by putting multiple scientists together. That
is, several (probably four) 'scientists' will share/compare experiences
and do primitive science. There will be 4 independent 'selves' in there,
all able to have each other's experiences. It's the only way you can
verify the physics is doing what you say. Eventually the chips may be able
to be integrated into a human to augment/restore vision (eyeballs not
needed) etc. That's when the real test will be possible.

My PhD restarts in January, which is why I have enough time to write odd
papers and be a nuisance here! :-)

cheers,

colin








RE: UDA revisited

2006-11-24 Thread Stathis Papaioannou


Colin Hales writes:

 I would predict extremely primitive phenomenal scenes to a single cell
 organism... perhaps LIGHT/NOT-LIGHT... with causal efficacy. I think
 paramecium might operate this way. 'Phenomenal scenes' and 'abstraction
 from/via phenomenal scenes' are two independent axes of intellect. It is
 the single cell version which I would hold responsible for the Cambrian
 explosion. Ants may have a collective intellect, but it's based on
 primitive phenomenal scenes (not zombies). The very first single-cell
 non-zombie had an amazing survival advantage even if its reflex
 behaviour was random (anything rather than nothing). In my AGI model
 certain types of single celled creatures cannot help but have 'phenomena'
 - it comes with their membrane and can't be helped. Fill it with genetic
 material and the cell goes negative... make it selectively permeable...
 job done.

You don't think paramecium behaviour could be modelled on a computer?

Stathis Papaioannou




RE: UDA revisited

2006-11-24 Thread Stathis Papaioannou


Colin Hales writes:
 
 So, I have my zombie scientist and my human scientist and I ask them to do
 science on exquisite novelty. What happens? The novelty is invisible to
 the zombie, who has the internal life of a dreamless sleep. The reason it
 is invisible is because there is no phenomenal consciousness. The zombie
 has only sensory data to use to do science. There are an infinite number
 of ways that same sensory data could arrive from an infinity of external
 natural world situations. The sensory data is ambiguous - it's all the
 same - action potential pulse trains traveling from sensors to brain.

All I have to work on is sensory data also. I can't be certain that there is a 
real world out there, and even if there is, all I can possibly do is create a 
virtual reality in my head which correlates with the patterns of sense data I 
receive. Certainly, it is ambiguous, and that is why we have science: we come 
up with a model or hypothesis consistent with the sense data, then we look for 
more sense data to test it. Any machine which looks for regularities in sensory 
feeds does the same thing. Are you saying that such a machine could not find 
the regularities or that if it did find the regularities it would thereby be 
conscious?
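
For instance, a toy Python sketch of a machine "finding a regularity" in a 
feed (the pulse stream is invented; real sensory data would need something 
statistical rather than this exact scoring):

feed = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # hypothetical pulse train

def best_period(stream, max_period=6):
    # Score each candidate period by how often the stream agrees with
    # itself that many steps later, and return the best scorer.
    def score(p):
        pairs = zip(stream, stream[p:])
        return sum(a == b for a, b in pairs) / (len(stream) - p)
    return max(range(1, max_period + 1), key=score)

print(best_period(feed))   # -> 3: the feed repeats every three samples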

Stathis Papaioannou




Re: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales


Hi Brent,
Please see the post/replies to Quentin/1Z.
I am trying to understand the context in which I can be wrong and how
other people view the proposition. There can be a mixture of mistakes and
poor communication and I want to understand all the ways in which these
things play a role in the discourse.

So...

 So, I have my zombie scientist and my human scientist and I
 ask them to do science on exquisite novelty. What happens?
 The novelty is invisible to the zombie, who has the internal
 life of a dreamless sleep.

 Scientists don't literally see novel theories - they invent
 them by combining other ideas.  Invisible is just a metaphor.

I am not talking about the creative process. I am talking about the
perception of a natural world phenomena that has never before been
encountered. There can be no a-priori scientific knowledge in such
situations. It is as far from a metaphor as you can get. I mean literal
invisibility. See the red photon discussion in the 1Z posting. If all you
have is a-priori abstract (non-phenomenal) rules of interpretation of
sensory signals to go by, then one day you are going to misinterpret
because the signals came in the same way from a completely different source
and you'd never know it. That is the invisibility I claim at the center of
the zombie's difficulty.


 The reason it is invisible is because there is no phenomenal
 consciousness. The zombie has only sensory data to use to
 do science. There are an infinite number
 of ways that same sensory data could arrive from an infinity
 of external natural world situations. The sensory data is
 ambiguous - it's all the same - action potential pulse trains
 traveling from sensors to brain. The zombie cannot possibly
 distinguish the novelty from the sensory data

 Why can it not distinguish them as well as the limited human scientist?

Because the human scientist is distinguishing them within the phenomenal
construct made from the sensory data, not directly from the sensory data -
which is all the zombie has. The zombie has no phenomenal construct of the
external world. It has an abstraction entirely based on the prior history
of non-phenomenal sensory input.


 and has no awareness of the external world or even its own boundary.

 Even simple robots like the Mars Rovers have awareness of the
 world, where they are, their internal states, and

No they don't. They have an internal state sufficiently complex to
navigate according to the rules of the program (a-priori knowledge) given
to them by humans, who are the only beings that are actually aware where
the rover is. Look at what happens when the machine gets hung up on
novelty... like the rock nobody could allow for. Who digs it out of it?
Not the rover... humans do. The rover has no internal life at all. Going
'over there' is what the human sees. 'Actuate this motor until this
number equals that number' is what the rover does.
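
Schematically, and only as a hedged sketch (this is not actual rover code; 
the names and numbers are invented), the rover-side story is nothing more 
than:

def drive_until(encoder_count, target_count, step=5):
    # "Actuate this motor until this number equals that number."
    while encoder_count != target_count:
        delta = target_count - encoder_count
        move = max(-step, min(step, delta))   # never overshoot the target
        encoder_count += move
        # actuate_motor(move) would go here on real hardware (hypothetical)
    return encoder_count

print(drive_until(encoder_count=0, target_count=100))   # -> 100

'Going over there' exists only for the humans reading the telemetry.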


 No.  You've simply assumed that you know what awareness is and you
 have then defined a zombie as not having it.  You might as
 well have just defined zombie as just like a person, but can't do
science or can't whistle.  Whatever definition you give
 still leaves the question of whether a being whose internal
 processes (and a fortiori the external processes) are
 functionally identical with a human's is conscious.

This is the nub of it. It's where I struggle to see the logic others see. 
I don't think I have done what you describe. I'll walk myself through it.

What I have done is try to figure out a valid test for phenomenal
consciousness.

When you take away phenomenal consciousness what can't you do? It seems
science is a unique/special candidate for a variety of reasons. Its
success is critically dependent on the existence of a phenomenal
representation of the external world.

The creature that is devoid of such constructs is what we typically call a
zombie. May be a mistake to call it that. No matter.

OK, so the real sticking point is the 'phenomenal construct'. The zombie
could have a 'construct' with as much detail in it as the human phenomenal
construct, but that is phenomenally inert (a numerical abstraction). Upon
what basis could the zombie acquire such a construct? It can't get it from
sensory feeds without knowing already what sensory feeds relate to what
part of the natural world. That a-priori knowledge is not available. It's
what the zombie is trying to find out. This is the logical loop from my
perspective.

So who's in the logical loop here? I am assuming zero a-priori scientific
knowledge in the human and the zombie. How does each get to a state of
non-zero scientific knowledge of the external natural world? For this is
what has actually happened in an evolutionary sense. We have phenomenal
consciousness for a reason.

If you zero out all a-priori knowledge in two entities, one with and one
without phenomenal consciousness the only 

RE: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales



 Colin Hales writes:

 So, I have my zombie scientist and my human scientist and
 I ask them to do science on exquisite novelty. What happens?
 The novelty is invisible to the zombie, who has the internal
 life of a dreamless sleep. The reason it is invisible is
 because there is no phenomenal consciousness. The zombie
 has only sensory data to use to do science. There are an
 infinite number of ways that same sensory data could arrive
 from an infinity of external natural world situations.
 The sensory data is ambiguous - it's all the same - action
 potential pulse trains traveling from sensors to brain.

Stathis:
 All I have to work on is sensory data also.

No you don't! You have an entire separate set of perceptual/experiential
fields constructed from sensory feeds. The fact of this is proven - think
of hallucination, when the sensory data gets overridden by the internal
imagery (schizophrenia). Sensing is NOT our perceptions. It is these
latter phenomenal fields that you consciously work from as a scientist.
Not the sensory feeds. This seems to be a recurring misunderstanding or
something people seem to be struggling with. It feels like it's coming from
your senses but it's all generated inside your head.

 I can't be certain that there is a real world out there, and
 even if there is, all I can possibly do is create a virtual
 reality in my head which correlates with the patterns of sense
 data I receive.

Yes - the virtual reality is the collection of phenomenal scenes
mentioned above  is what you use to learn from, not the sense data.
Put more accurately - you learn things that are consistent with the
phenomenal scenes. There is a tendency in some circles to think of
consciousness as an epiphenomenal irrelevance, devoid of causal
efficacy... I would disagree in that it's causal efficacy is in CHANGE of
belief (learning), not the holding of static belief. Scientific behaviour
is all about changing belief.

reality of the external world? It doesn't matter what you believe about
the existence or otherwise of 'reality'. Whatever it is, we have an
a-priori tool for perceiving it that is a phenomenon. i.e. Phenomenality
is a real world phenomenon just as real as a rock. Leave the reality
discussion to the campfire.

Whatever 'reality' is, it is regular/persistent/repeatable/stable enough
to do science on it via our phenomenality and come up with laws that seem
to characterise how it will appear to us in our phenomenality.

 Certainly, it is ambiguous, and that is why we have science: we
 come up with a model or hypothesis consistent with the sense data,
 then we look for more sense data to test it.

You describe scientific behaviour...yes, but the verification is not
through sense data but through phenomenal fields. The phenomenal fields
are NOT the sense data. Phenomenal fields can be ambiguous, yes.
Scientific procedure deals with that.
..but..
The sense data is separate and exquisitely ambiguous and we do not look
for sense data to verify scientific observations! We look for
perceptual/phenomenal data. Experiences. Maybe this is yet another
terminological issue. Sensing is not perception.

 Any machine which looks for regularities in sensory feeds
 does the same thing. Are you saying that such a machine could
 not find the regularities or that if it did find the
 regularities it would thereby be conscious?

 Stathis Papaioannou

I am saying the machine can find regularity in the sensory feeds - easily.
That it does so does not mean it is conscious. It does not mean it has
access to the external natural world.

...and that is not what WE do... we find regularity in the perceptual fields.

Looking for regularity in sensory data is a totally different process from
looking for regularity in a perceptual field. Multiple sensory feeds can
lead to the same perceptual field. Multiple perceptual fields can arise
out of the same sensory feeds. No matter how weird it sounds, our brains
can map sensory fields to the world outside... The sensory data is
intrinsically ambiguous and not about the external world, but the world at
the location of the transduction that created the sensory measurement. And
an indirect transduction at that (a retinal cone protein isomerisation by
a red photon is not a 'red photon experience'; it is a
protein-isomerisation experience - i.e. no experience at all!)

I had no idea people were so confused about the distinction between
sensing and perception. I hope I am helping.

now...to the paramecium!

cheers

colin








Re: UDA revisited

2006-11-24 Thread Brent Meeker

Colin Geoffrey Hales wrote:
 
 Hi Brent,
 Please see the post/replies to Quentin/1Z.
 I am trying to understand the context in which I can be wrong and how
 other people view the proposition. There can be a mixture of mistakes and
 poor communication and I want to understand all the ways in which these
 things play a role in the discourse.
 
 So...
 
 So, I have my zombie scientist and my human scientist and I
 ask them to do science on exquisite novelty. What happens?
 The novelty is invisible to the zombie, who has the internal
 life of a dreamless sleep.
 Scientists don't literally see novel theories - they invent
 them by combining other ideas.  Invisible is just a metaphor.
 
 I am not talking about the creative process. I am talking about the
 perception of a natural world phenomena that has never before been
 encountered. There can be no a-priori scientific knowledge in such
 situations. It is as far from a metaphor as you can get. I mean literal
 invisibility. See the red photon discussion in the 1Z posting. If all you
 have is a-priori abstract (non-phenomenal) rules of interpretation of
 sensory signals to go by, then one day you are going to misinterpret
 because the signals came in the same way from a completely different source
 and you'd never know it. 

Yes, that's a mistake humans make too.  Even simpler, have you ever seen the 
demonstration in which a long red rod is hung in a rotating trapezoidal white 
window frame?  In spite of knowing exactly what is happening, the window frame 
appears to be a square frame that is oscillating while the rod rotates and in 
some way passes through the material of the frame.  Your pre-scientific 
hard-wiring misleads you.

That is the invisibility I claim at the center of
 the zombie's difficulty.

But it will also present the same difficulty to the human scientist.  And in 
fact it is easy to build a robot that detects and responds to radio waves that 
are completely invisible to a human scientist.

Are you saying that a computer cannot have any pre-programmed rules for dealing 
with sensory inputs, or that if it does it's not a zombie? Or are you claiming 
that humans have some pre-scientific knowledge that cannot be implemented in a 
computer?  
 
 The reason it is invisible is because there is no phenomenal
 consciousness. The zombie has only sensory data to use to
 do science. There are an infinite number
 of ways that same sensory data could arrive from an infinity
 of external natural world situations. The sensory data is
 ambiguous - it's all the same - action potential pulse trains
 traveling from sensors to brain. The zombie cannot possibly
 distinguish the novelty from the sensory data
 Why can it not distinguish them as well as the limited human scientist?
 
 Because the human scientist is distinguishing them within the phenomenal
 construct made from the sensory data, not directly from the sensory data -
 which all the zombie has. The zombie has no phenomenal construct of the
 external world. It has an abstraction entirely based on the prior history
 of non-phenomenal sensory input.

It is very confusing that you make assertions about zombies and I can't tell 
whether you're defining zombie or you suppose the assertion follows from 
something else.   Why does the zombie have no phenomenal construct?  Certainly 
a computer can take sensory data and create a model of the world from it.  Is 
phenomenal a special word that is supposed to make this different from what 
people do, like qualia?

 
 and has no awareness of the external world or even its own boundary.
 Even simple robots like the Mars Rovers have awareness of the
 world, where they are, their internal states, and
 
 No they don't. They have an internal state sufficiently complex to
 navigate according to the rules of the program (a-priori knowledge) given
 to them by humans, 

But humans have a-priori knowledge given them by evolution.  

who are the only beings that are actually aware of where
 the rover is. Look at what happens when the machine gets hung up on
 novelty... like the rock nobody could allow for. Who digs it out?
 Not the rover... humans do. The rover has no internal life at all. Going
 'over there' is what the human sees. 'Actuate this motor until this
 number equals that number' is what the rover does.
 
 No.  You've simply assumed that you know what awareness is and you
 have then defined a zombie as not having it.  You might as
 well have just defined zombie as just like a person, but can't do
 science or can't whistle.  Whatever definition you give
 still leaves the question of whether a being whose internal
 processes (and a fortiori the external processes) are
 functionally identical with a human's is conscious.
 
 This is the nub of it. It's where I struggle to see the logic others see. 
 I don't think I have done what you describe. I'll walk myself through it.
 

Re: UDA revisited

2006-11-24 Thread Brent Meeker

Colin Geoffrey Hales wrote:

 Colin Hales writes:

 So, I have my zombie scientist and my human scientist and
 I ask them to do science on exquisite novelty. What happens?
 The novelty is invisible to the zombie, who has the internal
 life of a dreamless sleep. The reason it is invisible is
 because there is no phenomenal consciousness. The zombie
 has only sensory data to use to do science. There are an
 infinite number of ways that same sensory data could arrive
 from an infinity of external natural world situations.
 The sensory data is ambiguous - it's all the same - action
 potential pulse trains traveling from sensors to brain.
 
 Stathis:
 All I have to work on is sensory data also.
 
 No you don't! You have an entire separate set of perceptual/experiential
 fields constructed from sensory feeds. The fact of this is proven - think
 of hallucination, when the sensory data gets overridden by internal
 imagery (schizophrenia). Sensing is NOT our perception. It is these
 latter phenomenal fields that you consciously work from as a scientist,
 not the sensory feeds. This seems to be a recurring misunderstanding or
 something people seem to be struggling with. It feels like it's coming from
 your senses but it's all generated inside your head.
 
 I can't be certain that there is a real world out there, and
 even if there is, all I can possibly do is create a virtual
 reality in my head which correlates with the patterns of sense
 data I receive.
 
 Yes - the virtual reality is the collection of phenomenal scenes
 mentioned above and is what you use to learn from, not the sense data.
 Put more accurately - you learn things that are consistent with the
 phenomenal scenes. There is a tendency in some circles to think of
 consciousness as an epiphenomenal irrelevance, devoid of causal
 efficacy... I would disagree, in that its causal efficacy is in CHANGE of
 belief (learning), not the holding of static belief. Scientific behaviour
 is all about changing belief.
 
 reality of the external world? It doesn't matter what you believe about
 the existence or otherwise of 'reality'. Whatever it is, we have an
 a-priori tool for perceiving it that is itself a phenomenon, i.e.
 phenomenality is a real-world phenomenon just as real as a rock. Leave the
 reality discussion to the campfire.
 
 Whatever 'reality' is, it is regular/persistent/repeatable/stable enough
 to do science on it via our phenomenality and come up with laws that seem
 to characterise how it will appear to us in our phenomenality.
 
 Certainly, it is ambiguous, and that is why we have science: we
 come up with a model or hypothesis consistent with the sense data,
 then we look for more sense data to test it.
 
 You describe scientific behaviour...yes, but the verification is not
 through sense data but through phenomenal fields. The phenomenal fields
 are NOT the sense data. Phenomenal fields can be ambiguous, yes.
 Scientific procedure deals with that.
 ..but..
 The sense data is separate and exquisitely ambiguous and we do not look
 for sense data to verify scientific observations! We look for
 perceptual/phenomenal data. Experiences. Maybe this is yet another
 terminological issue. Sensing is not perception.
 
 Any machine which looks for regularities in sensory feeds
 does the same thing. Are you saying that such a machine could
 not find the regularities or that if it did find the
 regularities it would thereby be conscious?

 Stathis Papaioannou
 
 I am saying the machine can find regularity in the sensory feeds - easily.
 That it does so does not mean it is conscious. It does not mean it has
 access to the external natural world.
 
 ..and that is not what WE do... we find regularity in the perceptual fields.
 
 Looking for regularity in sensory data is a totally different process from
 looking for regularity in a perceptual field. Multiple sensory feeds can
 lead to the same perceptual field. Multiple perceptual fields can arise
 out of the same sensory feeds. No matter how weird it sounds, our brains
 can map sensory fields to the world outside... The sensory data is
 intrinsically ambiguous and not about the external world, but the world at
 the location of the transduction that created the sensory measurement. And
 an indirect transduction at that (a retinal cone protein isomerisation by
 a red photon is not a 'red photon experience', it is a
 protein-isomerisation experience - i.e. no experience at all!)
 
 I had no idea people were so confused about the distinction between
 sensing and perception. I hope I am helping.
 
 now...to the paramecium!


I understand that there is a difference between sensing and perception.  
Perception includes sensing and also interpreting the sensations in a model of 
the world.  Which is why unusual appearances can literally be difficult to 
perceive.  But you still have not said why a digital computer cannot have an 
internal model and modify that model based on sensory data.

Brent Meeker
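
As a toy illustration of the kind of loop in question here - a digital
system keeping an internal model and revising it from a stream of ambiguous,
noisy sensor numbers, with novelty showing up as prediction error - consider
this minimal sketch in Python. Every name and number in it is invented for
the example; it is nobody's proposed architecture.

import random

class InternalModel:
    # A one-number 'world model' revised by each sensor reading.
    def __init__(self, estimate=0.0, gain=0.2, surprise=3.0):
        self.estimate = estimate    # current belief about the world
        self.gain = gain            # how strongly new data revises it
        self.surprise = surprise    # prediction-error novelty threshold

    def update(self, reading):
        error = reading - self.estimate      # prediction error
        novel = abs(error) > self.surprise   # flag unmodelled change
        self.estimate += self.gain * error   # revise the model
        return novel

# A hypothetical external world the model never sees directly:
# a constant signal that jumps at t = 50 (the 'novelty').
model = InternalModel()
for t in range(100):
    world = 1.0 if t < 50 else 8.0
    reading = world + random.gauss(0, 0.5)   # ambiguous sensory feed
    if model.update(reading):
        print(f"t={t}: prediction error exceeds threshold -> novelty")

Whether such a loop amounts to perception, or merely to processing that
mimics it, is exactly the point in dispute.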


RE: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales


 You don't think paramecium behaviour could be modelled on a computer?

 Stathis Papaioannou

A paramecium can behave like it's perceiving something. I haven't observed
it myself but I have spoken to people who have, and they say paramecia show
behaviours which betray some sort of awareness beyond the scope of their
boundary. A teeny paramecium-sized primitive external world model. A teeny
bit of adaptive behaviour.

So a computer model?

A) that included a model of those aspects of the physics participating in
what the paramecium could have as experiences
B) that included all the molecular pathways (cilia molecules, the lot)
C) that included a model of the response to the perceptual physics
D) that included a model of the environment of the paramecium

would be pretty good. But the model would not be having experiences.
There's the age-old distinction between [modelling perfectly] and [the
perfect model]. The former aims at realistic replication. The latter
aims at being suited to the task. I think you could get pretty close to it
behaviourally. Maybe indistinguishable.

The way to test it? Make the model drive a nano-robot paramecium shell.
Then let it live with real paramecia. Then expose both to novelty and see
what the differences are.

I don't think any amount of detail will ever make the model or the
computer it is running on have experiences... the only perfect model of
the paramecium is a paramecium. Also... the paramecium is not noted for its
scientific behaviour!

cheers,

colin
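
For concreteness, here is what the behavioural end of such a model might look
like: a toy Python sketch of the paramecium's well-known 'avoiding reaction'
(swim forward; when obstructed, back up and tumble to a new random heading).
It is a behavioural mimic only, roughly item C of the list above, and nothing
in it is claimed to have experiences - which is precisely the point at issue.

import random

def paramecium_step(position, heading, blocked):
    # One step of the avoiding reaction: reverse and re-orient
    # when obstructed, otherwise keep swimming.
    if blocked:
        position -= heading                # back up briefly
        heading = random.choice([-1, 1])   # tumble to a new direction
    else:
        position += heading                # keep swimming
    return position, heading

# A one-dimensional 'pond' with walls at 0 and 20.
pos, heading = 10, 1
for t in range(30):
    blocked = not (0 < pos + heading < 20)
    pos, heading = paramecium_step(pos, heading, blocked)
print("final position:", pos)

The nano-robot test described above would then ask whether any such rule set,
however refined, ever stops being a mimic.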





Re: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales

Colin:
 When you take away phenomenal consciousness what can't you do?

Brent:
I don't know, because I don't know what it is.

What it is?
..is what changes radically when you close your eyes.
..is what you lose when you have a dreamless sleep.
..is what totally stops you doing science when it's not there.
..is as real as a rock
..is our only interface with reality outside ourselves

What causes it?
..that's a different question and not actually needed to make progress in
this discussion. We just have to admit that we don't know and examine our
assumptions about the universe that create our lack of ability to sort it
out.

colin






Re: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales

Colin
 I am not talking about the creative process. I am talking about the
 perception of a natural-world phenomenon that has never before been
 encountered. There can be no a-priori scientific knowledge in such
 situations. It is as far from a metaphor as you can get. I mean literal
 invisibility. See the red photon discussion in the LZ posting. If all you
 have is a-priori abstract (non-phenomenal) rules of interpretation of
 sensory signals to go by, then one day you are going to misinterpret,
 because the signals came in the same but from a completely different source
 and you'd never know it.

Brent
Yes, that's a mistake humans make too.  Even simpler, have you ever seen the
demonstration in which a long red rod is hung in a rotating trapezoidal
white window frame?  In spite of knowing exactly what is happening, the
window frame appears to be a square frame that is oscillating while the rod
rotates and in some way passes through the material of the frame.  Your
pre-scientific hard-wiring misleads you.

Colin
Perceptual fields can misrepresent reality. The 'virtual reality' generator
is not perfect, but it's pretty good. Scientific procedure aims to
eliminate the effects of such misdirection, through the use of test and
control and review and critical argument. The main thing is that there is
a perceptual field of the external world. Without it we wouldn't even have
a chance to be mistaken about the external world!







Re: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales

Colin
That is the invisibility I claim at the center of
 the zombie's difficulty.

Brent
But it will also present the same difficulty to the human scientist.  And
in fact it is easy to build a robot that detects and responds to radio
waves that are completely invisible to a human scientist.

Colin
I'm not talking about invisibility within a perceptual field. That is
an invisibility humans can deal with to some extent using instruments. We
inherit the limits of that process, but at least we have something
presented to us from the outside world. The invisibility I speak of is the
invisibility of novel behaviour in the natural world within a perceptual
field. Without a phenomenal representation of the external world we cannot
use existing knowledge to predict anything 'out there' that we can
reliably be surprised about. There is no 'out there' without phenomenal
representation.


Brent:
Are you saying that a computer cannot have any pre-programmed rules for
dealing with sensory inputs, or that if it does it's not a zombie?

Colin:
I would say that a computer can have any amount of pre-programmed rules
for dealing with sensory inputs. Those rules are created by humans and
grounded in the perceptual experiences of humans. That would be a-priori
knowledge. The machine itself has no experiences related to the rules or
its sensing, hence it is a zombie. The possession of behavioural rules
does not entail zombie-ness. The lack of possession of perceptual fields
does.

Brent:
Or are you claiming that humans have some pre-scientific knowledge that
cannot be implemented in a computer?

Colin:
Yes! Humans have a genetically bestowed capacity to make cellular material
which takes advantage of (as yet undescribed) attributes of the natural
world that enable sensory feeds to create phenomenal fields, thus
connecting the human with the external natural world. This is before any
derived knowledge (scientific or not). So I suppose 'pre-scientific' is a
good term for it. Innate a-priori knowledge (not learned or 'learned'
during construction).

I am saying that it cannot be computed. The experiences must be had. This
does not preclude a different sort of chip that does have experiences
because it replicates (not models) the actual physics of the phenomenal
fields. This physics could be mixed into a computational substrate.

So I'm saying that 'computing' grounded in perceptual fields is non-zombie.
But we don't have that form of computing. We have numerical/symbolic
models based on/grounded in human perception.

cheers,

Colin






Re: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales

The PHENOMENAL

Colin
 What I have done is try to figure out a valid test for phenomenal
 consciousness.

Brent
What is the functional definition of phenomenal?
Is there non-phenomenal consciousness?

Colin
Phenomena are things that happen in the universe.
Those things are perceived by humans.
That perception is called phenomenal consciousness.
The complete collection of phenomenal scenes is consciousness.

Yes, there is non-phenomenal consciousness. It is what survives a dreamless
sleep. Chalmers called it 'psychological consciousness'. Block called it
'access consciousness'. All the beliefs (innate or learned) that you have,
none of which have any intrinsic experiential qualities until brought into
consciousness... these are non-phenomenal. This includes any scientific
beliefs such as F = ma.

Colin
 It seems
 science is a unique/special candidate for a variety of reasons. Its
 success is critically dependent on the existence of a phenomenal
 representation of the external world.

Brent
It's critically dependent on having a representation of the external world
- I don't know what phenomenal adds to that.

Colin
OK. I could make a 'representation' out of any old abstraction I wanted.
There are an infinite number of ways to abstract something, none of which
are experienced through the act of using them to do anything. This is
where it gets self-referential and confusing.

The provision of phenomenal consciousness is a phenomenon, as real as a
rock. It's just that the phenomenon is not a 'thing' like a rock, it's a
thing that has intrinsic 'aboutness' as its primary quality. The fact that
this sounds weird and nobody explains how (except me, elsewhere) is
irrelevant at this point.

You can't have a life with a rock in it without having a rock.
Likewise:
You can't be conscious without having phenomenal consciousness.

Both are just physics of our universe doing what it does. Mostly the rocks
are external to the cranium. Although in the case of my dog and
politicians I wonder. :-)

Colin
 OK, so the real sticking point is the 'phenomenal construct'. The zombie
 could have a 'construct' with as much detail in it as the human phenomenal
 construct, but that is phenomenally inert (a numerical abstraction).

Brent
Again you seem to be calling on phenomenal to do all the work of denying
consciousness to the zombie.  You could just use human instead.

Colin
Phenomenal calls up real physics. Human or artefact - it does not
matter. Without the phenomena that create phenomenal consciousness it
won't be conscious.

I hope the above has helped with 'PHENOMENAL'.

I've been on this all afternoon. It's been very instructive to understand
how my words can be heard by others. We all hear different things.
Time out!

cheers
colin






Re: UDA revisited

2006-11-24 Thread Colin Geoffrey Hales



 I understand that there is a difference between sensing and perception.
 Perception includes sensing and also interpreting the sensations in a
 model of the world.  Which is why unusual appearances can literally be
 difficult to perceive.  But you still have not said why a digital computer
 cannot have an internal model and modify that model based on sensory data.

 Brent Meeker

If all you have is a bunch of numbers (or 4-20mA current loop signals or
1-5V signals) dancing away, and you have no a-priori knowledge of the
external world, how are you to create any sort of model of the external
world in the first place? You don't even know it is there. That is the
world devoid of phenomenal consciousness.

If humans give you a model, you still don't know what it is about. It's
just a bunch of rules (when this input does this, do that...and so on).
None of which is an experience. None of which gives you any innate
awareness of the external world.

That's why a real phenomenon happening in your head that innately connects
to the real phenomena in the external natural world and constructs an
experienced representation of it, devoid of all knowledge 'about it', is
necessary before you can know anything about it _at all_.

There is a natural tendency to anthropomorphise our experiences into the
artifact. Imagine yourself in a black silent room with a bunch of numbers
streaming by and a bunch of dials you can use to send numbers back out.
Now tell me how you can ever deduce the real external world from all those
numbers. You can't. You can say 'when I poke this dial that number over
there does that'. That is your whole universe.

You have to stop thinking like a human and really imagine 'what it is
like' to be a zombie. You don't have any awareness of your own boundary,
let alone the external world. The most exquisitely detailed abstraction you
can possibly imagine won't make the tiniest bit of difference to that
situation.

cheers,
colin







Re: UDA revisited

2006-11-24 Thread Brent Meeker

Colin Geoffrey Hales wrote:
 I understand that there is a difference between sensing and perception.
 Perception includes sensing and also interpreting the sensations in a
 model of the world.  Which is why unusual appearances can literally be
 difficult to perceive.  But you still have not said why a digital computer
 cannot have an internal model and modify that model based on sensory data.

 Brent Meeker
 
 If all you have is a bunch of numbers (or 4-20mA current loop signals or
 1-5V signals) dancing away, and you have no a-priori knowledge of the
 external world, how are you to create any sort of model of the external
 world in the first place? You don't even know it is there. That is the
 world devoid of phenomenal consciousness.

You could say exactly the same thing about a bunch of neurons and chemicals.  Yet 
they produce consciousness.
 
 If humans give you a model, you still don't know what it is about. It's
 just a bunch of rules (when this input does this, do that...and so on).
 None of which is an experience. None of which gives you any innate
 awareness of the external world.

If you can learn and act you do know what it is about.  You're just making an 
assertion that none of it is experience or innate awareness.
 
 That's why a real phenomenon happening in your head that innately connects
 to the real phenomena in the external natural world and constructs an
 experienced representation of it, devoid of all knowledge 'about it', is
 necessary before you can know anything about it _at all_.

Unsupported assertion.

 There is a natural tendency to anthropomorphise our experiences into the
 artifact. Imagine yourself in a black silent room with a bunch of numbers
 streaming by and a bunch of dials you can use to send numbers back out.
 Now tell me how you can ever deduce the real external world from all those
 numbers. You can't. You can say 'when I poke this dial that number over
 there does that'. That is your whole universe.
 
 You have to stop thinking like a human and really imagine 'what it is
 like' to be a zombie. 

To imagine is to create an internal model - so imagining what it is like to be 
a zombie seems to be a contradiction in terms.

Brent Meeker




Re: UDA revisited

2006-11-23 Thread Russell Standish

On Fri, Nov 24, 2006 at 12:12:07PM +1100, Colin Geoffrey Hales wrote:
 
 My paper proves zombies can't do science. You have all said that the UD is
 not conscious. This is another way of saying that any creatures within
 (computed by) a UD have no consciousness. The UD is therefore a zombie
 realm. Hence computationalism is false. Yes?

Not at all. We would also say that the universe is not conscious. But
that doesn't mean that there aren't some conscious creatures located within the
universe.









Re: UDA revisited

2006-11-23 Thread Colin Geoffrey Hales



With reference to the other thread

Re: Hypostases (was: Natural Order & Belief)

 The other problem is how all
 of this logic connects to Everything.  That is why I am trying to
 understand the 0-person.  I think questioning the 0-person might be the
 same thing as questioning the assumption of Arithmetic Realism (AR),
 but I'm not sure.


You are mainly right. Strictly speaking any sufficiently rich notion of
truth would work. If you are interested in the theology of an angel
(a non-Turing-emulable entity like Analysis + omega rule (anomega)) you
will have to take a notion of analytical truth, but for any digital
machine arithmetical truth is enough (even for ZF, but this is hard to
show: better to take the X-notion of truth for a machine talking on
X-objects).
Anyone believing in a notion of independent (from oneself) truth
believes in a notion of zero-person. With comp AR is enough.
--

I suppose what I am claiming is that 0-person exists.
In a way I would also claim Arithmetic Realism (AR) to be real.

But what I am claiming is, in effect, that the 'number base' of the
'arithmetic' is not platonic/ideal/integers etc., but something else. The
numbers I claim to exist are those we find when we look = random events.

The difference is subtle but the consequences are far reaching.

Colin






Re: UDA revisited

2006-11-23 Thread Colin Geoffrey Hales


 On Fri, Nov 24, 2006 at 12:12:07PM +1100, Colin Geoffrey Hales wrote:

 My paper proves zombies can't do science. You have all said that the UD
 is
 not conscious. This is another way of saying that any creatures within
 (computed by) a UD have no consciousness. The UD is therefore a zombie
 realm. Hence computationalism is false. Yes?

 Not at all. We would also say the the universe is not conscious. But
 that doesn't mean that there aren't some conscious creatures located
 within the universe.


We could argue that we humans 'are' the consciousness of the universe. But
it would add nothing to the discussion! :-) A tad too anthropomorphic...

I assume by the universe you mean ours. Understanding human
consciousness properly means we will eventually be able to prescribe what
level of consciousness applies to the rest of the universe that is 'not
humans'. Including animals... I predict 'not as much'. Rocks, fridges
etc. I predict 'not much at all'.

I would also predict that a UD reified in our universe would be like
that...'not much' consciousness (the consciousness of the computer = that
of which it is made, not that of the program). There are no phenomena
reified as a result of the UD operating. The only phenomena happening are
the machinations of the hardware of the UD.

BTW I am not saying that abstract computation of some sort cannot be
involved in artificial general intelligence. What I am saying is that
bolted to the computation are non-negotiable aspects that have to be
done in real phenomena or the machine will have no phenomenal
consciousness. The phenomena are of a type found in brain material.
Conversely the sort of universe that makes brain material have
consciousness is not made of platonic arithmetical entities.

In one sense 'comp', I gather, is the claim that computation of an
abstracted realm-X done in realm-Y can create realm-X phenomena. By
extension it means 'arithmetic realism' of the classical platonic kind is
false.

Who'd have thought I'd have to bother with all this stuff... all I want
to do is build my chips and get on with AGI! Here I am proving zombies
can't do science? Sheesh!

cheers,

Colin






Re: UDA revisited

2006-11-23 Thread Russell Standish

On Fri, Nov 24, 2006 at 01:12:17PM +1100, Colin Geoffrey Hales wrote:
 
 We could argue that we humans 'are' the consciousness of the universe. But
 it would add nothing to the discussion! :-) A tad too anthropomorphic...

Indeed.

 
 I assume by the universe you mean ours. Understanding human
 consciousness properly means we will eventually be able to prescribe what
 level of consciousness applies to the rest of the universe that is 'not
 humans'. Including animals... I predict 'not as much'. Rocks, fridges
 etc. I predict 'not much at all'.

I am extremely sceptical of claims of consciousness going down in some
degree to simpler animals, plants, nonliving things. My main
counterargument is the Why ants are not conscious argument, which is
in my book, but I haven't published separately yet.

There is still room for consciousness in some higher-order animals -
chimpanzees, dolphins, elephants perhaps.

 
 I would also predict that a UD reified in our universe would be like
 that...'not much' consciousness (the consciousness of the computer = that
 of which it is made, not that of the program). There are no phenomena
 reified as a result of the UD operating. The only phenomena happening are
 the machinations of the hardware of the UD.
 

Fair enough, but this is a direct contradiction with the assumption of
computationalism. 

 
 Who'd have thought I'd have to bother with all this stuff all I want
 to do is build my chips and get on with AGI! Here I am proving zombies
 can't do science? sheesh!
 
 cheers,
 
 Colin
 

C'est la vie.








Re: UDA revisited

2006-11-23 Thread Colin Geoffrey Hales


 I assume by the universe you mean ours. Understanding human
 consciousness properly means we will eventually be able to prescribe
 what
 level of consciousness applies to the rest of the universe that is 'not
 humans'. Including animals... I predict 'not as much'. Rocks, fridges
 etc. I predict 'not much at all'.

 I am extremely sceptical of claims of consciousness going down in some
 degree to simpler animals, plants, nonliving things. My main
 counterargument is the Why ants are not conscious argument, which is
 in my book, but I haven't published separately yet.

 There is still room for consciousness in some higher-order animals -
 chimpanzees, dolphins, elephants perhaps.

I would predict extremely primitive phenomenal scenes to a single-cell
organism... perhaps LIGHT/NOT-LIGHT... with causal efficacy. I think
paramecium might operate this way. 'Phenomenal scenes' and 'abstraction
from/via phenomenal scenes' are two independent axes of intellect. It is
the single-cell version which I would hold responsible for the Cambrian
explosion. Ants may have a collective intellect, but it's based on
primitive phenomenal scenes (not zombies). The very first single-cell
non-zombie had an amazing survival advantage even if its reflex
behaviour was random (anything rather than nothing). In my AGI model
certain types of single-celled creatures cannot help but have 'phenomena'
- it comes with their membrane and can't be helped. Fill it with genetic
material and the cell goes negative... make it selectively permeable... job
done.



 I would also predict that a UD reified in our universe would be like
 that...'not much' consciousness (the consciousness of the computer =
 that
 of which it is made, not that of the program). There are no phenomena
 reified as a result of the UD operating. The only phenomena happening
 are the machinations of the hardware of the UD.


 Fair enough, but this is a direct contradiction with the assumption of
 computationalism.

This is an 'assume comp' playground only? I am up for not assuming
anything... but if computationalism is actually false then it becomes a
religion or a club or something.

I have no emotional/religious attachment... I just want what works. I can
mount (and have) a case for it being false (zombies can't do
science)... also computationalism has produced nothing but failure in
AGI to date... I have physics to point at in brain material perfectly
suited to the type of phenomena (virtual bosons) needed for phenomenal
consciousness using neurons/astrocytes... I have a mathematical formalism
(EC yes... under construction!!!) that predicts it be like that... I have
a complete set of evolutionary cues that support it... I have consistency
with every pathology I have thrown at it... I have ethological
consistency... the latest empirical neuroscience evidence confirms that
small groups/single cells have phenomenality (outside cortex)... so I
have an entire axis of neural modelling that is currently missing and has
been missing ever since Hodgkin and Huxley and before, which is
explanatory of why AGI and related neural modelling won't work... so

until someone can undo all of it (in particular, just now, the zombie
scientist issue)... comp is false. I look forward to useful encounters to
that effect, not just 'assume this... believe that'

Colin





Re: UDA revisited

2006-11-23 Thread Russell Standish

On Fri, Nov 24, 2006 at 02:47:46PM +1100, Colin Geoffrey Hales wrote:
 
 
 
  I would also predict that a UD reified in our universe would be like
  that...'not much' consciousness (the consciousness of the computer =
  that
  of which it is made, not that of the program). There are no phenomena
  reified as a result of the UD operating. The only phenomena happening
  are the machinations of the hardware of the UD.
 
 
  Fair enough, but this is a direct contradiction with the assumption of
  computationalism.
 
 This is an 'assume comp' playground only? I am up for not assuming
 anything... but if computationalism is actually false then it becomes a
 religion or a club or something.

Not at all. I don't even subscribe to computationalism most days, but
it is a powerful metaphor for reasoning. Nevertheless it is important
to know in any argument if you assume it or not. Otherwise you may
have the sort of argument:

  If computationalism is false, then I show that computationalism is false.

which is not especially interesting.









Re: UDA revisited

2006-11-23 Thread Colin Geoffrey Hales

  Fair enough, but this is a direct contradiction with the assumption of
  computationalism.

 This is an 'assume comp' playground only? I am up for not assuming
 anything... but if computationalism is actually false then it becomes a
 religion or a club or something.

 Not at all. I don't even subscribe to computationalism most days, but
 it is a powerful metaphor for reasoning. Nevertheless it is important
 to know in any argument if you assume it or not. Otherwise you may
 have the sort of argument:

   If computationalism is false, then I show that computationalism is
 false.

 which is not especially interesting.


I agree, very 'not interesting'... a bit like saying assuming comp
endlessly... and never being able to give it teeth.

... I am more interested in proving scientists aren't/can't be
zombies... that it seems to also challenge computationalism in a certain
sense... this is a byproduct I can't help, not the central issue.

Colin






RE: UDA revisited

2006-11-22 Thread Stathis Papaioannou


Russell Standish writes:

 This is also a response to one of Bruno's comments.
 
 When talking about minds, the self/other boundary need not occur on
 the biological boundary (skin). I would say that when dreaming, or
 hallucinating, the random firing we perceive as coming from our input
 centres (visual cortex for instance) is coming from outside our minds
 (although still within our heads).

What if I'm not dreaming or hallucinating but just thinking abstract thoughts 
about number theory or philosophy? I'm conscious, but I don't necessarily have 
any sense of input from outside myself, whether real or imagined. I could live 
my whole life like this, and if I ever suspected that something other than my 
own mind existed it would be just another theory created by my mind on its own. 
 
 There is a strong selective pressure to align our psychological
 self/other boundary to our biological one - hence hallucinations are
 typically not adaptive, and I guess dreaming is tolerated as a byproduct of
 whatever sleep is useful for (committing things to long term memory perhaps)
 
 The self-other distinction is fairly strong. When it goes wrong (ie is
 not aligned with biology), things become really strange, with people
 thinking they're possessed, or thinking that their arms belong to
 someone else etc.

That's true, but psychotic people are still conscious.

Stathis Papaioannou




Re: UDA revisited

2006-11-22 Thread Bruno Marchal


On 21 Nov 2006, at 22:58, Russell Standish wrote:



 Fair enough. I was just meaning that one cannot 1-tell as you put
 it. I agree that it may be possible to empirically distinguish between
 living in a UD and not in a UD, although that remains to be seen.



OK. Note that I have predicted the many-worlds appearance from comp, 
including the comp-suicide, well before knowing the quantum and the 
quantum MW. From an intuitive point of view the UDA explains also why 
to expect some classical tautologies to be physically wrong, that is, 
the UDA already justifies some form of non-locality (spatial and 
temporal) for probabilities mainly due to the impossibility for any 
digital machine to be aware of the delay a UD would introduce between 
generation of successive steps in the running of all programs.

Now, with the AUDA, we have precise quantitative propositions to test. 
We know that nature's observables violate the classical tautology

(p & q) -> ((p & r) v (q & ~r))

What remains to be seen is that the comp-nature violates it too, that is 
that at least S4Grz1, or Z1*, or X1* does not prove the quantized 
version of the Bell inequality (p, q represent the sigma_1 sentences 
in the arithmetic translation, B = box, D = diamond):

BD [(BD p & BD q) -> (BD((BD p) & (BD r)) v (BD q & BD(~(BD r))))]

using the Goldblatt (cf also Rawling and Selesnick) quantization rule T:

T(p) = BD p    (p atomic)
T(A & B) = T(A) & T(B)
T(~A) = BD(~T(A))

Now, in BD p the box and the diamond are of course the ones of the 
third, fourth and fifth hypostases (the soul, intelligible matter, and 
sensible matter), so here the B something is really defined recursively 
by B something & D something (and more complex if some modality 
appears in the something). Just to say that when you translate the 
modal Bell inequality into the logic G*, and then G, you get a huge 
proposition, and although both G and G* are decidable, that Bell 
inequality remains intractable.
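
As a quick sanity check (a Python sketch, not part of the formalism), one can
brute-force all eight Boolean valuations and confirm that the formula above
is indeed a classical tautology - the interesting claim being that it fails
for quantum observables, not that it fails in Boolean logic:

from itertools import product

def implies(a, b):
    return (not a) or b

# (p & q) -> ((p & r) v (q & ~r))
def bell_like(p, q, r):
    return implies(p and q, (p and r) or (q and not r))

# True in every row of the truth table, i.e. a Boolean tautology.
assert all(bell_like(p, q, r)
           for p, q, r in product([False, True], repeat=3))
print("a classical tautology in all 8 valuations")

The quantized version BD[...] is of course not checkable this way; as noted,
even though G and G* are decidable, the translated proposition is huge.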


In another post you say:


 What Bruno is now calling the 3rd person point of view I label 1st
 person plural. Bruno is now distinguishing several different types of 
 1st
 person plural viewpoints.

 I believe I took an accurate snapshot of the terminological usage at
 the time I wrote the book, but terminology in this field does have a
 habit of moving on (and so it should).

I don't think I have ever changed the nomenclature. (My fault if I have 
not been clear enough.) Although G is the logic of self-reference, I 
take care to make precise that it is a third person self-reference, like 
when you talk about your brain/body/universe with your doctor. What is new 
(since my thesis defense) is that, since I have (re)read Plotinus (and 
got more scholarly confirmation of my reading), I am willing to call 
0-person point of view the notion of (arithmetical) truth. It helps to 
fit the whole lobian interview in the (neo)platonist paradigm. By 
Tarski's theorem (the non-definability by M of the full notion of truth 
about M), the 0-person view is akin to the quasi-impersonal unnameable 
ONE of Plotinus.
Such a notion of truth is not normative. If it is true that the machine 
M will stay a billion years in some purgatory, then the proposition 
asserting this will be correct theology. Purely theological 
propositions, belonging to G* minus G, are not knowable in general, but 
may be, correctly sometimes,  bettable and some of them empirically 
falsifiable, or first (plural) personally confirmable (like the 
quantum).


In yet another post you say:

 When talking about minds, the self/other boundary need not occur on
 the biological boundary (skin). I would say that when dreaming, or
 hallucinating, the random firing we perceive as coming from our input
 centres (visual cortex for instance) is coming from outside our minds
 (although still within our heads).

I can accept this. It is consistent with the idea that the UD is not 
conscious, despite generating all possible forms of (comp) 
consciousness.

Bruno

http://iridia.ulb.ac.be/~marchal/






Re: UDA revisited

2006-11-22 Thread Colin Geoffrey Hales

Bruno wrote:

 In yet another post you say:

 When talking about minds, the self/other boundary need not occur on the
biological boundary (skin). I would say that when dreaming, or
hallucinating, the random firing we perceive as coming from our input
centres (visual cortex for instance) is coming from outside our minds
(although still within our heads).

 I can accept this. It is consistent with the idea that the UD is not
conscious, despite generating all possible form of (comp)
 consciousness.

 Bruno

Thinking out loud here.

Yeah, this is how I am coming to view COMP. The logic and analysis
involved means you can step back, point at certain aspects of the analysis
it provides and say... this corresponds to aspect X of 'reality', that
corresponds to aspect Y of reality... and so forth. It's a sort of
generalised abstracting framework within which forms or classes of
knowledge are exhibited.

That these categorisations can be seen in the abstract realm of number is one
thing. What does this say about a non-abstract/real realm of STUFF, i.e.
one constructed of something else? A lot.

I suppose I'm grappling with the idea that COMP is true... but it's true in
its own realm. The realm of STUFF has a complete set of equivalent
truths. We discuss/contrast/compare the two realms. But that's where it
ends.

...in the sense that the UD made of STUFF does not implement everything
that 'being' STUFF has in the STUFF realm. Conversely in the UD number
realm, a UD made of number but computing in STUFF-as-number would not have
everything that the STUFF realm has.

So in this context the idea that COMP is declared right or wrong (proven)
in the STUFF realm is meaningless. Likewise a proof of the 'STUFF-HYPOTHESIS'
in the number realm would be meaningless?

So it's like there's only ever useful correspondence between realms, not
any sort of literal equivalence. This is how COMP can be true (in the
number realm) and yet false in our STUFF realm. False in the sense that a
STUFF-UD computing number will not be conscious in our realm in the same
way that a NUMBER-UD computing STUFF would not be conscious in the number
realm.

Am I making sense? It seems plausible. If so it means that those things
depicted using COMP are true, but computationalism, put as the expectation
of computer science, is false. In the number realm there would be
number-computer-science beings able to make the equivalent statement that
'stuffialisationism' is false.

This just came out of my head.

BTW has anyone made any sense of my appendices yet?

cheers

colin










Re: UDA revisited

2006-11-19 Thread Russell Standish

I haven't cast anything into set language, just using words.

However, if you identify a universal dovetailer UD with its execution
trace UD*, then UD* would indeed not satisfy the foundation axiom.
However, I tend not to make that identification, and also just to play
it safe I'm in the habit of using the term ensemble when referring
to things like UD*.
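
For readers new to the term, here is a toy sketch (in Python, with an
invented stand-in for the program enumeration) of a dovetailer and its
execution trace: at stage n the dovetailer admits program n and runs every
program admitted so far one more step, so each program gets unboundedly many
steps without any having to halt first.

from itertools import count, islice

def dovetail(programs):
    # 'programs' maps an index to a fresh generator standing in for
    # the i-th program; the yielded event stream plays the role of UD*.
    running = []
    for n in count():
        running.append(programs(n))       # admit program n at stage n
        for i, prog in enumerate(running):
            try:
                yield (i, next(prog))     # one more step of program i
            except StopIteration:
                pass                      # halted programs are skipped

# A hypothetical 'enumeration': program k outputs k, 2k, 3k, ...
def toy_programs(k):
    return (k * t for t in count(1))

# The first 15 events of the trace: (program index, output) pairs.
print(list(islice(dovetail(toy_programs), 15)))

Identifying the dovetailer with the stream of events it yields, rather than
with the generating procedure, is the distinction drawn above.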

As for causal chains - what does set theory have to do with that?

Cheers

On Sat, Nov 18, 2006 at 11:20:26PM -0500, Stephen Paul King wrote:
 
 Hi Russell,
 
 Are you assuming non-well founded sets?
 
 http://en.wikipedia.org/wiki/Non-well-founded_set_theory
 
 Onward!
 
 Stephen
 
 - Original Message - 
 From: Russell Standish [EMAIL PROTECTED]
 To: everything-list@googlegroups.com
 Sent: Saturday, November 18, 2006 3:12 AM
 Subject: Re: UDA revisited
 
 
 
  On Sun, Nov 19, 2006 at 02:36:04PM +1100, Stathis Papaioannou wrote:
 
  But if a physical universe is needed to run the UD, without a physical 
  universe
  there is no UD. It's a circular argument unless you have some other 
  argument
  showing a computation can run without physical hardware.
 
  Stathis Papaioannou
 
  The argument is that it's turtles all the way down, or in other words
  that there is no first cause.
 
  It seems that there are three possibilities:
 
  1. Causal chains are infinite and unbounded
  2. Causal chains are infinite but bounded (the causal chain is
  circular).
  3. Causal chains are finite and bounded (first cause is needed)
 
  Only in case 3 is a physical universe needed to run the UD. My
  personal taste is for case 2, but I doubt there is any way of
  empirically settling the matter, and many people find all 3 options
  distasteful.
 
  Cheers
 
  PS - I'll need to think a bit about Colin's post... :)
 
 
 
 






Re: UDA revisited

2006-11-19 Thread John M

Please see interspersed remarks (JM) as well.
A general addition I would start with:
In our present views, based on the limited capabilities of the mind-brain 
activity we can only muster for the time being...
(Our mental event-horizon reaches only so far)
John
- Original Message - 
From: Russell Standish [EMAIL PROTECTED]
To: everything-list@googlegroups.com
Sent: Saturday, November 18, 2006 3:12 AM
Subject: Re: UDA revisited



 On Sun, Nov 19, 2006 at 02:36:04PM +1100, Stathis Papaioannou wrote:

 But if a physical universe is needed to run the UD, without a physical 
 universe
 there is no UD. It's a circular argument unless you have some other 
 argument
 showing a computation can run without physical hardware.

 Stathis Papaioannou

 The argument is that it's turtles all the way down, or in other words
 that there is no first cause.

(JM):
At least not within our present mental horizon. All possible things are 
not restricted to our present knowledge-limits. An expression like 'there is 
none' seems like a current 'theory-based' exaggeration.

 It seems that there are three possibilities:

 1. Causal chains are infinite and unbounded
 2. Causal chains are infinite but bounded (the causal chain is
 circular).
 3. Causal chains are finite and bounded (first cause is needed)

 Only in case 3 is a physical universe needed to run the UD. My
 personal taste is for case 2, but I doubt there is any way of
 empirically settling the matter, and many people find all 3 options 
 distasteful.

(JM):
In my 'wholeness-view' (not yet realizable) #1 is the version.
Cause in this case is the impact-result of the ever-changing totality, 
while any other (picked?) cause(s) are within a model-view.
#2 seems to me like 'eat your cake and have it'.
Empirically based? Do we include mental experiencing, to exceed the 
'physical-world-based (conventional) observation' figment?
Even 'logically acceptable' seems restricted to our human ways.
I resort to the (humble) position that we are not (yet?) set to say a 
'final' word upon more remote features than how far our present mental event 
horizon reaches. (Turtle is OK.)
In spite of a '-*nescio* non est scientia-' (my version) maxim.

 Cheers

(JM)
Cheers - John

 PS - I'll need to think a bit about Colin's post... :)






Re: UDA revisited

2006-11-19 Thread Russell Standish

On Sun, Nov 19, 2006 at 01:33:13PM +1100, Colin Geoffrey Hales wrote:
 
 snip
 Since it makes no difference in any observable respect whether we are
living in a computer simulation running on a bare substrate, as one that
is incidentally computed as part of a universal dovetailer, or an
infinite chain of dovetailers, we really can make use of Laplace's riposte
to Napoleon, Sire, I have no need of that hypothesis, with
respect to a concrete computer running our world.
 snip
 
 Sorry Russell, I disagree with this claim.
 
 To say that the universe is computation does not imply any old substitute
 computed abstraction will be identical in all respects.
 
 In particular there is a blizzard of virtual theorems made available
 because of the intrinsic parallelism of 'reality as computation'. These
 are NOT explicitly computed. Abstract it and all the virtual
 theorems/computations are gone.
 
 To see a computational equivalent check out ANY cellular automaton. There
 is a perfectly computational but uncomputed relationship between any cell
 and _all_ other cells (NOT just the local cells explicit to the rule set
 used). Yet the only thing that was actually computed was the cell contents
 using local cells incorporated in the cell rules.  The universe is
 equivalent. It is computation and can be regarded/treated as a massively
 parallel CA. All the virtual theorems (computations) actually exist.
 
 So: Computationalism is the statement that I am a computation.
 
 It is correct in that the universe is computation, but incorrect in that
 an abstraction on a substrate will replicate everything - it cannot/does
 not replicate the virtual theorems. SO... I have shown you a _physical_
 but virtual computation that is NOT replicated by the UDA abstraction.
 This makes your original assertion incorrect.

I have never heard of your virtual theorems before, but assuming
they're analogous to the implied computations that occur in your CA
example, the difference would be the other way around. A UD will
actually compute all these implied computations, whereas they are only
virtual with respect to a direct computation.

So you could make a statement that the difference between living in a
direct computation and living in a UD is that in the direct
computation there isn't this blizzard of virtual theorems. But
there is another difference, as pointed out by Bruno - in a UD, first
person histories are non-deterministic. The point has been well argued
in this list as to whether this is significant or not (I for one happen to
think it is significant).

Finally, there is a problem that UDs are far simpler programs than the
one implementing our universe, so assuming they exist (which they do
by virtue of computationalism), we are far more likely to find
ourselves in a UD than in a simple direct computation.

 
 The story is bigger than this in that I hold the virtual theorems to be
 the substrate for subjective experience... but my claims in this regard do

Interesting claim. And if what I stated above holds, it meshes quite
nicely with the view that counterfactuals are essential for conscious
experience. 








Re: UDA revisited

2006-11-19 Thread Hal Ruhl

Hi Russell

At 09:53 PM 11/17/2006, you wrote:


To say that there must be a physical computer on which the dovetailer
should run, is rather similar to saying there must be an ultimate
turtle upon which the world rests. The little old lady was right in
saying it's turtles all the way down. Of course it is also analogous
to saying there must be a prime mover to start the causal chain.

Back on 9/4/06 I posted a more recent version of my model.  While 
this model continues to evolve, I define Physical Reality as the 
property of an object that allows objects to interact - it is the 
only property of an object allowed to change.

I think one can make a loose correspondence between my list and its 
dynamic and the UD as follows:

1) Some of my objects correspond to all the states of all the programs 
in all the possible UD trace states [however, I suspect some if not 
most of my objects are not Turing computable].

2) My object-to-object interaction [according to local interaction 
rules] corresponds to the operation of the UD itself [a current 
program state producing the next state via the particular computer's 
configuration/state].  My list would have a locus containing a group 
of closely related properties regarding the level of the Physical 
Reality of an object.  Within this locus objects interact by pushing 
each other's boundaries around by rules local to those objects.  This 
boundary pushing is like what the UD's operation does [it seems to me].

3) The current Physical Reality distribution among some of my 
objects corresponds to the current state of the UD trace.

However, the UD trace is binary in that it is only active at one 
state at a time and all other states are inactive.  I think that is 
a disallowed selection from a more general multiplicity of 
possibilities.  Also, I have more objects than possible program states.

Nevertheless I see no need for a fundamental material matrix in which 
to play out the dynamics of my list.  In fact I think it would be 
logically excluded as a global necessity, since it would select just a 
subset of the possible dynamics of interaction between my objects, 
because the matrix would have to have some properties.

I think that this argument, as far as a material matrix goes, is to a 
degree along the lines of your own, but clearly I presently see 
the UD as just a subset of my list's dynamic.

Hal Ruhl








Re: UDA revisited

2006-11-18 Thread Colin Geoffrey Hales

snip
 Since it makes no difference in any observable respect whether we are
living in a computer simulation running on a bare substrate, in one that
is incidentally computed as part of a universal dovetailer, or in an
infinite chain of dovetailers, we really can make use of Laplace's riposte
to Napoleon - "Sire, I have no need of that hypothesis" - with
respect to a concrete computer running our world.
snip

Sorry Russell, I disagree with this claim.

To say that the universe is computation does not imply that any old
substituted, computed abstraction will be identical to it in all respects.

In particular there is a blizzard of virtual theorems made available
because of the intrinsic parallelism of 'reality as computation'. These
are NOT explicitly computed. Abstract it, and all the virtual
theorems/computations are gone.

To see a computational equivalent, check out ANY cellular automaton. There
is a perfectly computational but uncomputed relationship between any cell
and _all_ other cells (NOT just the local cells explicit to the rule set
used). Yet the only thing actually computed was each cell's contents,
using the local cells incorporated in the cell rules. The universe is
equivalent. It is computation and can be regarded/treated as a massively
parallel CA. All the virtual theorems (computations) actually exist.
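
A toy rendering of this (my own minimal sketch in Python; rule 110 is
an arbitrary choice of local rule):

    RULE = 110  # any elementary CA rule would do

    def step(cells):
        # Each cell is updated from its three LOCAL neighbours only -
        # this is the sum total of what is explicitly computed.
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4
                          + cells[i] * 2
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1  # single seed cell
    for t in range(16):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

No rule above ever mentions the relation between, say, cell 0 and
cell 30, yet after the run that relation is completely determined - an
uncomputed ('virtual') consequence of the purely local computation.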

So: "Computationalism is the statement that I am a computation"

is correct in that the universe is computation, but incorrect in that
an abstraction on a substrate will replicate everything - it cannot/does
not replicate the virtual theorems. SO... I have shown you a _physical_
but virtual computation that is NOT replicated by the UDA abstraction.
This makes your original assertion incorrect.

The story is bigger than this in that I hold the virtual theorems to be
the substrate for subjective experience... but my claims in this regard do
not affect my treatment of your claim in respect of computationalism. The
UDA throws away a very, very, very large number of virtual theorems.  The
UDA does NOT do massively parallel theorem proving, therefore it loses all
the virtual theorems. Note that a massively parallel computer made of
STUFF does NOT recreate the virtual theorems inherent in the actual
computation that _is_ STUFF.

Put it this way... TWO theorem-proofs actually deliver THREE truths:
TRUTH_1, TRUTH_2 and the relation between the two. Traverse TRUTH_1 back
down to the common axiom set and then back up to TRUTH_2. This corresponds
to an 'as-if' direct TRUTH_1_to_2 or TRUTH_2_to_1 being enacted/proven when
it was not actually proven explicitly. It comes about because TRUTH_1 and
TRUTH_2 were 'computed' in parallel by the universe-as-computation. If the
universe is computation and computes matter, then the virtual theorems are
'virtual matter'.
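
A toy Python sketch of my reading (the step names are hypothetical):

    # A proof is modelled as a chain of steps up from a shared axiom.
    proof_1 = ["axiom", "lemma_a", "TRUTH_1"]
    proof_2 = ["axiom", "lemma_b", "TRUTH_2"]

    def virtual_proof(p, q):
        # Walk p back down to the common axiom, then climb q: a
        # derivation linking the two theorems that neither explicit
        # proof ever enacted.
        return list(reversed(p)) + q[1:]

    print(virtual_proof(proof_1, proof_2))
    # -> ['TRUTH_1', 'lemma_a', 'axiom', 'lemma_b', 'TRUTH_2']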

This is the literal origin of Gödel's incompleteness theorem! It's _why_
it applies - the parallelism of theorem proving is neglected by
mathematicians in the construction of calculus/logic. In the process a
whole pile of theorems become true but unprovable; or conversely, there's a
whole pile of truths that are provable but not actually proven, yet can be
implicitly proven (via the method of 'virtual theorem proving' shown
above).

So... if the UDA is an abstraction made of 'STUFF', then it has no virtual
theorems, whereas the STUFF itself has them. A UDA made of anything but
STUFF is meaningless from my point of view. I want to build real AGI, not
play in ideal realms (no matter how much fun it is!).

If I can get this EC/lambda calculus thing sorted I'll be able to show you
formally. All in good time.

regards,

Colin Hales








RE: UDA revisited

2006-11-18 Thread Stathis Papaioannou


Russell Standish writes:

 I had a thought about an alternative way of expressing the UDA
 (universal dovetailer argument).
 
 Computationalism is the statement that I am a computation. To use
 the RITSIAR acronym, computations are real in the sense I am real. But
 the Church-Turing thesis gives a particular model of a computation; it
 effectively defines a computation as something that some Turing
 machine emulates. 
 
 The universal dovetailer is a computation. Contained within its
 execution trace are the execution traces of all other programs,
 including itself. If we are a program, we can be found inside a
 universal dovetailer, which can be found within another UD (infinitely
 many, in fact).
 
 To say that there must be a physical computer on which the dovetailer
 should run is rather similar to saying there must be an ultimate
 turtle upon which the world rests. The little old lady was right in
 saying it's "turtles all the way down". Of course it is also analogous
 to saying there must be a prime mover to start the causal chain. If
 God created the world, then it immediately poses the question "Who
 created God?"
 
 Since it makes no difference in any observable respect whether we are
 living in a computer simulation running on a bare substrate, in one
 that is incidentally computed as part of a universal dovetailer, or in an
 infinite chain of dovetailers, we really can make use of Laplace's
 riposte to Napoleon - "Sire, I have no need of that hypothesis" - with
 respect to a concrete computer running our world.
 
 To rescue the primitive matter world, we need to deny the existence of
 the universal dovetailer. But this denies the Church-Turing thesis -
 we have to say some computations exist (e.g. ourselves), but others
 don't. To make computationalism compatible with primitive materialism
 requires us to abandon the Church-Turing thesis and redefine what we
 mean by computation.

But if a physical universe is needed to run the UD, then without a physical 
universe there is no UD. It's a circular argument, unless you have some other 
argument showing that a computation can run without physical hardware.

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d



