Re: SV: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Lennart Nilsson wrote:
...
 But my point is that this may come down to what we would mean by a computer
 being 
 conscious.  Bruno has an answer in terms of what the computer can prove.
 Jaynes (and 
 probably John McCarthy) would say a computer is conscious if it creates a
 narrative 
 of its experience which it can access as memory.
 
 Brent Meeker
 
 Humphrey says it has to have an evolutionary past.
 LN

I've read some of Humphrey's books, but I don't recall that.  What's his 
argument? 
What's the citation?

Brent Meeker

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou

Peter Jones writes:

 Stathis Papaioannou wrote:
 
  Like Bruno, I am not claiming that this is definitely the case, just that 
  it is the case if
  computationalism is true. Several philosophers (eg. Searle) have used the 
  self-evident
  absurdity of the idea as an argument demonstrating that computationalism is 
  false -
  that there is something non-computational about brains and consciousness. I 
  have not
  yet heard an argument that rejects this idea and saves computationalism.
 
 [ rolls up sleeves ]
 
 The idea is easily refuted if it can be shown that computation doesn't
 require interpretation at all. It can also be refuted more circuitously by
 showing that computation is not entirely a matter of interpretation. In
 everythingism, everything is equal. If some computations (the ones that
 don't depend on interpretation) are more equal than others, the way is
 still open for the Somethingist to object that interpretation-independent
 computations are really real, and the others are mere possibilities.
 
 The claim has been made that computation is not much use without an
 interpretation. Well, if you define a computer as something that is used
 by a human, that is true. It is also very problematic for the
 computationalist claim that the human mind is a computer. Is the human
 mind of use to a human? Well, yes, it helps us stay alive in various ways.
 But that is more to do with reacting to a real-time environment than with
 performing abstract symbolic manipulations or elaborate
 re-interpretations. (Computationalists need to be careful about how they
 define "computer". Under some perfectly reasonable definitions -- for
 instance, defining a computer as a human invention -- computationalism is
 trivially false.)

I don't mean anything controversial (I think) when I refer to interpretation of 
computation. Take a mercury thermometer: it would still do its thing if all 
sentient life in the universe died out, or even if there were no sentient life 
to build it in the first place and by amazing luck mercury and glass had come 
together in just the right configuration. But if there were someone around to 
observe it and understand it, or if it were attached to a thermostat and heater, 
the thermometer would have extra meaning - the same thermometer, doing the same 
thermometer stuff. Now, if thermometers were conscious, then part of their 
thermometer stuff might include knowing what the temperature was - all by 
themselves, without benefit of external observer. Furthermore, if thermometers 
were conscious, they might be dreaming of temperatures, or contemplating the 
meaning of consciousness, again in the absence of external observers, and this 
time in the absence of interaction with the real world.
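
A minimal sketch of the thermostat case above, just to make the point concrete 
(the code and names are my own Python illustration, not anything from the 
discussion): the same reading function yields the same number whether or not 
anything uses it; only the surrounding control loop gives that number a 
use/interpretation.

import random

def read_thermometer():
    # The thermometer doing its thermometer stuff: it just yields a number.
    return 18.0 + random.random() * 6.0   # degrees C, say

def thermostat_step(setpoint=21.0):
    # The extra layer that supplies the interpretation: reading -> heater command.
    temperature = read_thermometer()
    heater_on = temperature < setpoint
    return temperature, heater_on

for _ in range(3):
    t, on = thermostat_step()
    print(f"reading {t:.1f} C -> heater {'ON' if on else 'OFF'}")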

This, then, is the difference between a computation and a conscious computation. 
If a computation is unconscious, it can only have meaning/use/interpretation in 
the eyes of a beholder or in its interaction with the environment. If a 
computation is conscious, it may have meaning/use/interpretation in interacting 
with its environment, including other conscious beings, and for obvious reasons 
all the conscious computations we encounter will fall into that category; but a 
conscious computation can also have meaning all by itself, to itself. You might 
argue, as Brent Meeker has, that a conscious being would quickly lose 
consciousness if environmental interaction were cut off, but I think that is 
just a contingent fact about brains, and in any case, as Bruno Marchal has 
pointed out, you only need a nanosecond of consciousness to prove the point.

 It is of course true that the output of a programme intended to do one
 thing (system S, say) could be re-interpreted as something else. But what
 does it *mean*?
 If computationalism is true, whoever or whatever is doing the
 interpreting is another computational process. So the ultimate result is
 formed by system S in conjunction with another system. System S is merely
 acting as a subroutine. The Everythingist's intended conclusion is that
 every physical system implements every computation.

That's what I'm saying, but I certainly don't think everyone on the list agrees 
with me, and I'm not completely decided as to which of the three is most absurd: 
that every physical system implements every conscious computation, that no 
physical system implements any conscious computation (they are all implemented 
non-physically in Platonia), or that a computation can be conscious in the 
first place.
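
To make the mapping claim under discussion concrete, here is a minimal sketch 
(my own toy Python illustration, not anyone's formal argument): any sequence of 
distinct physical states can be relabelled, after the fact, as the state trace 
of a chosen computation, and the sceptic's point is that all the work is then 
done by the mapping (the interpretation), not by the physical system.

def target_trace(n):
    # State trace of a chosen computation: counting down from n to 0.
    state = n
    trace = [state]
    while state > 0:
        state -= 1
        trace.append(state)
    return trace

# Arbitrary "physical" history: any distinct tokens will do.
physical_history = ["rock@t0", "rock@t1", "rock@t2", "rock@t3"]
computation = target_trace(3)                      # [3, 2, 1, 0]

# The interpretation: physical state -> computational state.
interpretation = dict(zip(physical_history, computation))

# Under this mapping the rock's history "implements" the countdown...
print([interpretation[s] for s in physical_history])   # [3, 2, 1, 0]
# ...but a different dict would make it "implement" any other computation of
# the same length, which is why the mapping, not the rock, carries the content.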

 But the evidence -- the re-interpretation scenario -- only supports the
 idea that any computational system could become part of a larger system
 that is doing something else. System S cannot be said to be simultaneously
 performing every possible computation *itself*. The multiple-computation --
 i.e. multiple-interpretation -- scenario is dependent on an interpreter.
 Having made computation dependent on interpretation, 

RE : computationalism and supervenience

2006-09-11 Thread Bruno Marchal

Brent Meeker wrote (through many posts):


 I won't insist, because you might be right, but I don't think that is  
 proven.  It may
 be that interaction with the environment is essential to continued  
 consciousness.



Assuming comp, I think that this is a red herring. To make this clear I  
use a notion of generalized brain in some longer version of the UDA.  
See perhaps:

http://groups.google.com/group/everything-list/browse_frm/thread/4c995dee307def3b/9f94f4d49cb2b9e6?q=universal+dovetailer&rnum=1#9f94f4d49cb2b9e6

The generalized brain is by definition the portion of whatever you need  
to turing-emulate to experience nothing or to survive in the relative  
way addressed through comp. It can contain any part of the environment.  
Note that in that case, assuming comp, such a part has to be assumed  
turing-emulable, or comp is just false.

Of course, if the generalized brain is the entire multiverse, the  
thought experiment with the doctor is harder to figure out, certainly.  
But already at the seventh step of the 8-steps-version of the UDA, you  
can understand that in front of the infinitely (even just potentially  
from all actual views) running UD, comp makes all your continuations  
UD-accessed. It would just mean, in that case, that there is a unique  
winning program with respect to building you. I doubt that, but that is  
not the point.

By the same token, it is also not difficult to get the evolution of the  
brain into the notion of generalized brain, so that evolution is  
also a red herring when used as a criticism of comp, despite the  
possibility of non-computational aspects of evolution like the geographical  
randomization à la Washington/Moscow.



 I would bet on computationalism too.  But I still think the conclusion  
 that every
 physical process, even the null one, necessarily implements all  
 possible
 consciousness is absurd.


OK, but the point is just that comp implies that physical processes do  
not implement consciousness per se. They implement  
consciousness only insofar as they make that consciousness able to manifest  
itself relative to its most probable computational history (among a  
continuum).






 Reductio ad absurdum of what? Comp or  (weak) Materialism?

 Bruno

 Dunno.  A reductio doesn't tell you which premise is wrong.



Nice. So you seem to agree with the UDA+movie-graph argument; we have:

not comp v not physical-supervenience.

This is equivalent to both:

comp -> not physical supervenience, and
physical supervenience -> not comp.
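
Spelling out the propositional step (my shorthand, not Bruno's notation), write 
C for comp and S for physical-supervenience; the equivalence is just material 
implication:

\[
  \neg C \lor \neg S \;\Longleftrightarrow\; (C \rightarrow \neg S)
  \;\Longleftrightarrow\; (S \rightarrow \neg C)
\]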

Now I agree that at this stage (after the UDA) it would be natural to  
abandon comp, but given computer science and the translation of the UDA  
into the language of a universal Turing machine (sufficiently rich, or  
Löbian), such an abandonment could be premature (to say the least).  
Incompleteness should make us skeptical of any intuitive and  
too rapid conclusion.





 That's generally useful; but when we understand little about  
 something, such as
 consciousness, we should be careful about assuming what's  
 theoretically possible;
 particularly when it seems to lead to absurdities.


Mmh... If we assume theoretical possibilities and are then led to  
absurdities, then we have learned something: evidence against the  
theoretical assumptions. If the absurdities can be transformed into  
clear contradictions, perhaps by making the theoretical assumptions  
clearer, then we have proved something: the falsity of the assumptions.
I think you know that, and you were just being quick, isn't it?






 Stathis: In discussing Tim Maudlin's paper, Bruno has concluded
 that either computationalism is false or the supervenience theory is  
 false.

 As I understand it Bruno would say that physics supervenes on number  
 theory and
 consciousness supervenes on physics.  So physics is eliminable.


Note that Maudlin arrives at the same conclusion as me: NOT comp OR  
NOT physical-supervenience. Maudlin then concludes, assuming  
sup-phys, that comp is problematic (although he realized that not-comp  
is yet more problematic). I conclude, just because I keep comp at  
this stage, that sup-phys is false, and this makes primary matter  
eliminable. Physics as a field is not eliminated, of course, but it is  
eliminated as a fundamental field. That is not so astonishing given that  
physics does not often seriously address the mind/body puzzle, and when  
it does (cf. Bunge) it still uses the Aristotelian means to put the problem  
under the rug.




 That interpretation can be reduced to computation is implicit in  
 computationalism.
 The question is what, if anything, is unique about those computations  
 that execute
 interpretation.


Interpretations are done by interpreters, that is, *universal* (Turing)  
machines.
Perhaps we should agree on a definition, at least for the 3-notions: a  
3-interpretation can be encoded through a (in general infinite) trace  
of a computation.
With the [Fi, ...] and Fu being a universal function, and 
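
Purely as an illustration of the "trace of a computation" idea (a toy machine 
of my own in Python, not Bruno's [Fi, Fu] notation): an interpretation-as-trace 
is just the, in general infinite, sequence of successive machine states, of 
which we can only ever inspect a finite prefix.

def step(state):
    # One step of a toy machine: halve even numbers, 3n+1 on odd ones.
    return state // 2 if state % 2 == 0 else 3 * state + 1

def trace(initial_state, max_steps=10):
    # Finite prefix of the computation's trace starting from initial_state.
    states = [initial_state]
    for _ in range(max_steps):
        if states[-1] == 1:          # conventional halting state for this toy
            break
        states.append(step(states[-1]))
    return states

print(trace(6))   # [6, 3, 10, 5, 16, 8, 4, 2, 1]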

Re: ROADMAP (SHORT)

2006-09-11 Thread Tom Caylor

[EMAIL PROTECTED] wrote:
 - Original Message -
 From: Tom Caylor [EMAIL PROTECTED]
 To: Everything List everything-list@googlegroups.com
 Sent: Wednesday, September 06, 2006 3:23 PM
 Subject: Re: ROADMAP (SHORT)



 You wrote:
 What is the non-mathematical part of UDA?  The part that uses Church
 Thesis?  When I hear non-mathematical I hear non-rigor.  Define
 rigor that is non-mathematical.  I guess if you do then you've been
 mathematical about it.  I don't understand.

 Tom
 --
 Smart: whatever I may come up with, as a different type of vigor
 (btw is this term well identified?) you will call it math - just a
 different type.
 John M

The root of the word math means learning, study, or science.  Math is
the effort to make things precise.  So in my view applied math would be
taking actual information and trying to make the science precise in
order to further our learning and quest of the truth in the most
efficient manner possible.  I think that this is the concept that is
captured by the term rigor.  But what's in a name?  I call it math
and I think that a good many people would agree, but others might call
it something else, like rigor.  I think that it's an intuitive
concept limited by our finite capabilities, as you so many times point
out, John.

Tom





Re: ROADMAP (SHORT)

2006-09-11 Thread jamikes

Tom, thanks, you said it; I will try to spell it out, interjected in your
reply.
John
- Original Message -
From: Tom Caylor [EMAIL PROTECTED]
To: Everything List everything-list@googlegroups.com
Sent: Monday, September 11, 2006 12:21 PM
Subject: Re: ROADMAP (SHORT)



 [EMAIL PROTECTED] wrote:
  - Original Message -
  From: Tom Caylor [EMAIL PROTECTED]
  To: Everything List everything-list@googlegroups.com
  Sent: Wednesday, September 06, 2006 3:23 PM
  Subject: Re: ROADMAP (SHORT)
 
 
 
  You wrote:
  What is the non-mathematical part of UDA?  The part that uses Church
  Thesis?  When I hear non-mathematical I hear non-rigor.  Define
  rigor that is non-mathematical.  I guess if you do then you've been
  mathematical about it.  I don't understand.
 
  Tom
  --
  Smart: whatever I may come up with, as a different type of vigor
  (btw is this term well identified?) you will call it math - just a
  different type.
  John M

 The root of the word math means learning, study, or science.  Math is
 the effort to make things precise.  So in my view applied math would be
 taking actual information and trying to make the science precise in
 order to further our learning and quest of the truth in the most
 efficient manner possible.
Applied math is a sore point for me. As long as I accept (theoretical)
Math as a language of logical thinking (IMO a one-plane one, but it is not
the point now) I cannot condone the APPLIED math version, (math) using
the results of Math for inrigorating (oops!) the imprecise model-values
(reductionist) 'science' is dealing with.
Precise it will be, right it won't, because it is based on a limited view
within the boundaries of (topical) science observations. It makes the
imprecise value-system look precise.

 I think that this is the concept that is
 captured by the term rigor.  But what's in a name?  I call it math
 and I think that a good many people would agree, but others might call
 it something else, like rigor.  I think that it's an intuitive
 concept limited by our finite capabilities, as you so many times point
 out, John.
I did, indeed, and am glad that someone noticed. Your term 'rigor' is pretty
wide; you call it 'math' (if not Math), including all those qualia-domains
which are under discussion as being 'numbers(?)' or not. OK, I don't deny your
godfatherish right to call anything by any name, but then - please - tell me
what name to call the old mathematical math (i.e. churning conventional
numbers like 1, 2, 3) by?

 Tom

John






Re: ROADMAP (SHORT)

2006-09-11 Thread Tom Caylor

[EMAIL PROTECTED] wrote:
 Tom, thanks, you said it as I will try to spell it out  interjected in your
 reply.
 John
 - Original Message -
 From: Tom Caylor [EMAIL PROTECTED]
 To: Everything List everything-list@googlegroups.com
 Sent: Monday, September 11, 2006 12:21 PM
 Subject: Re: ROADMAP (SHORT)


 
  [EMAIL PROTECTED] wrote:
   - Original Message -
   From: Tom Caylor [EMAIL PROTECTED]
   To: Everything List everything-list@googlegroups.com
   Sent: Wednesday, September 06, 2006 3:23 PM
   Subject: Re: ROADMAP (SHORT)
  
  
  
   You wrote:
   What is the non-mathematical part of UDA?  The part that uses Church
   Thesis?  When I hear non-mathematical I hear non-rigor.  Define
   rigor that is non-mathematical.  I guess if you do then you've been
   mathematical about it.  I don't understand.
  
   Tom
   --
   Smart: whatever I may come up with, as a different type of vigor
   (btw is this term well identified?) you will call it math - just a
   different type.
   John M
 
  The root of the word math means learning, study, or science.  Math is
  the effort to make things precise.  So in my view applied math would be
  taking actual information and trying to make the science precise in
  order to further our learning and quest of the truth in the most
  efficient manner possible.
 Applied math is a sore point for me. As long as I accept (theoretical)
 Math as a language of logical thinking (IMO a one-plane one, but it is not
 the point now) I cannot condone the APPLIED math  version, (math) using
 the results of Math for inrigorating (oops!) the imprecise model-values
 (reductionist) 'science' is dealing with.
 Precise it will be, right it won't, because it is based on a limited vue
 within the boundaries of (topical) science observations. It makes the
 imprecise value-system looking precise.
 
  I think that this is the concept that is
  captured by the term rigor.  But what's in a name?  I call it math
  and I think that a good many people would agree, but others might call
  it something else, like rigor.  I think that it's an intuitive
  concept limited by our finite capabilities, as you so many times point
  out, John.
 I did, indeed and am glad that someone noticed. Your term 'rigor'  is pretty
 wide, you call it 'math' (if not Math) including all those qualia-domains
 which are under discussion to be 'numbers(?) or not'. OK, I don't deny your
 godfatherish right to call anything by any name, but then - please - tell me
 what name to call the old mathematical math? (ie. churning conventional
 numbers like 1,2,3) by?
 
  Tom
 
 John

That is called arithmetic.  I don't really want to pursue a discussion
on terminology, but thanks for your thoughts.





RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou

Brent Meeker writes:

 Why not?  Can't we map bat conscious-computation to human 
 conscious-computation; 
 since you suppose we can map any computation to any other.  But, you're 
 thinking, 
 since there a practical infinity of maps (even a countable infinity if you 
 allow 
 one-many) there is no way to know which is the correct map.  There is if 
 you and the 
 bat share an environment.
  
  
  You're right that the correct mapping is the one in which you and the bat 
  share the 
  environment. That is what interaction with the environment does: forces us 
  to choose 
  one mapping out of all the possible ones, whether that involves talking to 
  another person 
  or using a computer. However, that doesn't mean I know everything about 
  bats if I know 
  everything about bat-computations. If it did, that would mean there was no 
  difference 
  between zombie bats and conscious bats, no difference between first person 
  knowledge 
  and third person or vicarious knowledge.
  
  Stathis Papaioannou
 
 I don't find either of those conclusions absurd.  Computationalism is 
 generally 
 thought to entail both of them.  Bruno's theory that identifies knowledge 
 with 
 provability is the only form of computationalism that seems to allow the 
 distinction 
 in a fundamental way.

The Turing test would seem to imply that if it behaves like a bat, it has the 
mental states of a 
bat, and maybe this is a good practical test, but I think we can keep 
computationalism/strong AI 
and allow that it might have different mental states and still behave the same. 
A person given 
an opioid drug still experiences pain, although less intensely, and would be 
easily able to fool the 
Turing tester into believing that he is experiencing the same pain as in the 
undrugged state. By 
extension, it is logically possible, though unlikely, that the subject may have 
no conscious experiences 
at all. The usual argument against this is that by the same reasoning we cannot 
be sure that our 
fellow humans are conscious. This is strictly true, but we have two reasons for 
assuming other 
people are conscious: they behave like we do and their brains are similar to 
ours. I don't think 
it would be unreasonable to wonder whether a digital computer that behaves like 
we do really 
has the same mental states as a human, while still believing that it is 
theoretically possible that a 
close enough analogue of a human brain would have the same mental states.

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou

Brent Meeker writes:

  I think it goes against standard computationalism if you say that a 
  conscious 
  computation has some inherent structural property. Opponents of 
  computationalism 
  have used the absurdity of the conclusion that anything implements any 
  conscious 
  computation as evidence that there is something special and 
  non-computational 
  about the brain. Maybe they're right.
  
  Stathis Papaioannou
 
 Why not reject the idea that any computation implements every possible 
 computation 
 (which seems absurd to me)?  Then allow that only computations with some 
 special 
 structure are conscious.

It's possible, but once you start in that direction you can say that only 
computations 
implemented on this machine rather than that machine can be conscious. You need 
the 
hardware in order to specify structure, unless you can think of a God-given 
programming 
language against which candidate computations can be measured.

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-11 Thread Colin Hales



 -Original Message-
Stathis Papaioannou
 
 Brent Meeker writes:
 
  Why not?  Can't we map bat conscious-computation to human conscious-
 computation;
  since you suppose we can map any computation to any other.  But,
 you're thinking,
  since there a practical infinity of maps (even a countable infinity if
 you allow
  one-many) there is no way to know which is the correct map.  There is
 if you and the
  bat share an environment.
  
  
   You're right that the correct mapping is the one in which you and the
 bat share the
   environment. That is what interaction with the environment does:
 forces us to choose
   one mapping out of all the possible ones, whether that involves
 talking to another person
   or using a computer. However, that doesn't mean I know everything
 about bats if I know
   everything about bat-computations. If it did, that would mean there
 was no difference
   between zombie bats and conscious bats, no difference between first
 person knowledge
   and third person or vicarious knowledge.
  
   Stathis Papaioannou
 
  I don't find either of those conclusions absurd.  Computationalism is
 generally
  thought to entail both of them.  Bruno's theory that identifies
 knowledge with
  provability is the only form of computationalism that seems to allow the
 distinction
  in a fundamental way.
 
 The Turing test would seem to imply that if it behaves like a bat, it has
 the mental states of a
 bat, and maybe this is a good practical test, but I think we can keep
 computationalism/strong AI
 and allow that it might have different mental states and still behave the
 same. A person given
 an opiod drug still experiences pain, although less intensely, and would
 be easily able to fool the
 Turing tester into believing that he is experiecing the same pain as in
 the undrugged state. By
 extension, it is logically possible, though unlikely, that the subject may
 have no conscious experiences
 at all. The usual argument against this is that by the same reasoning we
 cannot be sure that our
 fellow humans are conscious. This is strictly true, but we have two
 reasons for assuming other
 people are conscious: they behave like we do and their brains are similar
 to ours. I don't think
 it would be unreasonable to wonder whether a digital computer that behaves
 like we do really
 has the same mental states as a human, while still believing that it is
 theoretically possible that a
 close enough analogue of a human brain would have the same mental states.
 
 Stathis Papaioannou

I am so glad to hear this come onto the list, Stathis. Your argument is
logically equivalent... I took this argument (from the recent thread) over
to the JCS-ONLINE forum and threw it in there to see what would happen. As a
result I wrote a short paper, ostensibly to dispose of the solipsism argument
once and for all by demonstrating empirical proof of the existence of
consciousness (if not any particular details within it). In it is some of
the stuff from the thread... and an acknowledgement to the list.

I expect it will be rejected as usual... regardless...it's encouraging to at
least see a little glimmer of hope that some of the old arguments that get
trotted out are getting a little frayed around the edges..

If anyone wants to see it they are welcome... just email me. Or perhaps I
could put it in the google forum somewhere... it can do that, can't it?

BTW: The 'what it is like' of a Turing machine = what it is like to be a
tape and tape reader, regardless of what is on the tape. 'tape_reader_ness',
I assume... :-)

Regards,


Colin Hales






Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Stathis Papaioannou wrote:
 Brent Meeker writes:
 
 
Why not?  Can't we map bat conscious-computation to human 
conscious-computation; 
since you suppose we can map any computation to any other.  But, you're 
thinking, 
since there a practical infinity of maps (even a countable infinity if you 
allow 
one-many) there is no way to know which is the correct map.  There is if 
you and the 
bat share an environment.


You're right that the correct mapping is the one in which you and the bat 
share the 
environment. That is what interaction with the environment does: forces us 
to choose 
one mapping out of all the possible ones, whether that involves talking to 
another person 
or using a computer. However, that doesn't mean I know everything about bats 
if I know 
everything about bat-computations. If it did, that would mean there was no 
difference 
between zombie bats and conscious bats, no difference between first person 
knowledge 
and third person or vicarious knowledge.

Stathis Papaioannou

I don't find either of those conclusions absurd.  Computationalism is 
generally 
thought to entail both of them.  Bruno's theory that identifies knowledge 
with 
provability is the only form of computationalism that seems to allow the 
distinction 
in a fundamental way.
 
 
 The Turing test would seem to imply that if it behaves like a bat, it has the 
 mental states of a 
 bat, and maybe this is a good practical test, but I think we can keep 
 computationalism/strong AI 
 and allow that it might have different mental states and still behave the 
 same. A person given 
 an opiod drug still experiences pain, although less intensely, and would be 
 easily able to fool the 
 Turing tester into believing that he is experiecing the same pain as in the 
 undrugged state. By 
 extension, it is logically possible, though unlikely, that the subject may 
 have no conscious experiences 
 at all. The usual argument against this is that by the same reasoning we 
 cannot be sure that our 
 fellow humans are conscious. This is strictly true, but we have two reasons 
 for assuming other 
 people are conscious: they behave like we do and their brains are similar to 
 ours. I don't think 
 it would be unreasonable to wonder whether a digital computer that behaves 
 like we do really 
 has the same mental states as a human, while still believing that it is 
 theoretically possible that a 
 close enough analogue of a human brain would have the same mental states.
 
 Stathis Papaioannou

I agree with that.  It would be hard to say whether a robot whose computation 
was via 
a digital computer implementing something like a production system was 
conscious or 
not even if its behavior were very close to human.  On the other hand it would 
also 
be hard to say that another robot, whose computation was by digital simulation 
of a 
neural network modeled on a mammalian brain and whose behavior was very close 
to 
human, was *not* conscious.

Brent Meeker




Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Stathis Papaioannou wrote:
 Brent Meeker writes:
 
 
I think it goes against standard computationalism if you say that a 
conscious 
computation has some inherent structural property. Opponents of 
computationalism 
have used the absurdity of the conclusion that anything implements any 
conscious 
computation as evidence that there is something special and 
non-computational 
about the brain. Maybe they're right.

Stathis Papaioannou

Why not reject the idea that any computation implements every possible 
computation 
(which seems absurd to me)?  Then allow that only computations with some 
special 
structure are conscious.
 
 
 It's possible, but once you start in that direction you can say that only 
 computations 
 implemented on this machine rather than that machine can be conscious. You 
 need the 
 hardware in order to specify structure, unless you can think of a God-given 
 programming 
 language against which candidate computations can be measured.

I regard that as a feature - not a bug. :-)

Disembodied computation doesn't quite seem absurd - but our empirical sample 
argues 
for embodiment.

Brent Meeker




RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou



Brent Meeker writes:

 I could make a robot that, having suitable thermocouples, would quickly 
 withdraw it's 
 hand from a fire; but not be conscious of it.  Even if I provide the 
 robot with 
 feelings, i.e. judgements about good/bad/pain/pleasure I'm not sure it 
 would be 
 conscious.  But if I provide it with attention and memory, so that it 
 noted the 
 painful event as important and necessary to remember because of it's 
 strong negative 
 affect; then I think it would be conscious.
 
 
 It's interesting that people actually withdraw their hand from the fire 
 *before* they experience 
 the pain. The withdrawl is a reflex, presumably evolved in organisms with 
 the most primitive 
 central nervour systems, while the pain seems to be there as an 
 afterthought to teach us a 
 lesson so we won't do it again. Thus, from consideration of evolutionary 
 utility consciousness 
 does indeed seem to be a side-effect of memory and learning. 
 
 Even more curious, volitional action also occurs before one is aware of it. 
 Are you 
 familiar with the experiments of Benjamin Libet and Grey Walter?
  
  
  These experiments showed that in apparently voluntarily initiated motion, 
  motor cortex activity 
  actually preceded the subject's awareness of his intention by a substantial 
  fraction of a second. 
  In other words, we act first, then decide to act. These studies did not 
  examine pre-planned 
  action (presumably that would be far more technically difficult) but it is 
  easy to imagine the analogous 
  situation whereby the action is unconsciously planned before we become 
  aware of our decision. In 
  other words, free will is just a feeling which occurs after the fact. This 
  is consistent with the logical 
  impossibility of something that is neither random nor determined, which is 
  what I feel my free will to be.
  
  
 I also think that this is an argument against zombies. If it were possible 
 for an organism to 
 behave just like a conscious being, but actually be unconscious, then why 
 would consciousness 
 have evolved? 
 
 An interesting point - but hard to give any answer before pinning down what 
 we mean 
 by consciousness.  For example Bruno, Julian Jaynes, and Daniel Dennett 
 have 
 explanations; but they explain somewhat different consciousnesses, or at 
 least 
 different aspects.
  
  
  Consciousness is the hardest thing to explain but the easiest thing to 
  understand, if it's your own 
  consciousness at issue. I think we can go a long way discussing it assuming 
  that we do know what 
  we are talking about even though we can't explain it. The question I ask 
  is, why did people evolve 
  with this consciousness thing, whatever it is? The answer must be, I think, 
  that it is a necessary 
  side-effect of the sort of neural complexity that underpins our behaviour. 
  If it were not, and it 
  were possible that beings could behave exactly like humans and not be 
  conscious, then it would 
  have been wasteful of nature to have provided us with consciousness. 
 
 This is not necessarily so.  First, evolution is constrained by what goes 
 before. 
 Its engineering solutions often seem rube-goldberg, e.g. backward retina in 
 mammals. 

Sure, but vision itself would not have evolved unnecessarily.

   Second, there is selection against some evolved feature only to the extent 
 it has a 
 (net) cost.  For example, Jaynes explanation of consciousness conforms to 
 these two 
 criteria.  I think that any species that evolves intelligence comparable to 
 ours will 
 be conscious for reasons somewhat like Jaynes theory.  They will be social 
 and this 
 combined with intelligence will make language a good evolutionary move.  Once 
 they 
 have language, remembering what has happened, in order to communicate and 
 plan, in 
 symbolic terms will be a easy and natural evolvement.  Whether that leads to 
 hearing 
 your own narrative in your head, as Jaynes supposes, is questionable; but it 
 would be 
 consistent with evolution. It takes advantage of existing structure and 
 functions to 
 realize a useful new function.

Agreed. So consciousness is either there for a reason or it's a necessary 
side-effect of the sort 
of brains we have and the way we have evolved. It's still theoretically 
possible that if the latter 
is the case, we might have been unconscious if we had evolved completely 
different kinds of 
brains, but similar behaviour - although I think it unlikely.
 
 This does not necessarily 
  mean that computers can be conscious: maybe if we had evolved with 
  electronic circuits in our 
  heads rather than neurons consciousness would not have been a necessary 
  side-effect. 
 
 But my point is that this may come down to what we would mean by a computer 
 being 
 conscious.  Bruno has an answer in terms of what the computer can prove.  
 Jaynes (and 
 probably John McCarthy) would say a computer is conscious if it creates a 
 narrative 
 of its experience 

RE: computationalism and supervenience

2006-09-11 Thread Colin Hales


Stathis Papaioannou
snip
 Maybe this is a copout, but I just don't think it is even logically
 possible to explain what consciousness
 *is* unless you have it. It's like the problem of explaining vision to a
 blind man: he might be the world's
 greatest scientific expert on it but still have zero idea of what it is
 like to see - and that's even though
 he shares most of the rest of his cognitive structure with other humans,
 and can understand analogies
 using other sensations. Knowing what sort of program a conscious computer
 would have to run to be
 conscious, what the purpose of consciousness is, and so on, does not help
 me to understand what the
 computer would be experiencing, except by analogy with what I myself
 experience.
 
 Stathis Papaioannou
 

Please consider the plight of the zombie scientist with a huge set of
sensory feeds and similar set of effectors. All carry similar signal
encoding and all, in themselves, bestow no experiential qualities on the
zombie.

Add a capacity to detect regularity in the sensory feeds.
Add a scientific goal-seeking behaviour.

Note that this zombie...
a) has the internal life of a dreamless sleep
b) has no concept or percept of body or periphery
c) has no concept that it is embedded in a universe.

I put it to you that science (the extraction of regularity) is the science
of zombie sensory fields, not the science of the natural world outside the
zombie scientist. No amount of creativity (except maybe random choices)
would ever lead to any abstraction of the outside world that gave it the
ability to handle novelty in the natural world outside the zombie scientist.

No matter how sophisticated the sensory feeds and any guesswork as to a
model (abstraction) of the universe, the zombie would eventually find
novelty invisible because the sensory feeds fail to depict the novelty, i.e.
the same sensory feeds for different behaviour of the natural world.

Technology built by a zombie scientist would replicate zombie sensory feeds,
not deliver an independently operating novel chunk of hardware with a
defined function (if the idea of function even has meaning in this instance).

The purpose of consciousness is, IMO, to endow the cognitive agent with at
least a repeatable (not accurate!) simile of the universe outside the
cognitive agent so that novelty can be handled. Only then can the zombie
scientist detect arbitrary levels of novelty and do open ended science (or
survive in the wild world of novel environmental circumstance).

In the absence of the functionality of phenomenal consciousness, and with
finite sensory feeds, you cannot construct any world-model (abstraction) in
the form of an innate (a priori) belief system that will deliver an endless
ability to discriminate novelty. In a very Gödelian way a limit would
eventually be reached where the abstracted model could not make any prediction
that can be detected. The zombie is, in a very real way, faced with 'truths'
that exist but can't be accessed/perceived. As such its behaviour will be
fundamentally fragile in the face of novelty (just like all computer
programs are).
---
Just to make the zombie a little more real... consider the industrial
control system computer. I have designed and installed hundreds, and wired up
tens (hundreds?) of thousands of sensors and an unthinkable number of
kilometers of cables. (NEVER again!) In all cases I put it to you that the
phenomenal content of sensory connections may, at best, be characterised as
whatever it is like to have electrons crash through wires, for that is what
is actually going on. As far as the internal life of the CPU is concerned...
whatever it is like to be an electrically noisy hot rock, regardless of the
program... although the character of the noise may alter with different
programs!

I am a zombie expert! No, that didn't come out right... erm...
perhaps... I think I might be a world expert in zombies. Yes, that's
better.
:-)
Colin Hales





Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 
 Brent meeker writes:
 
 
I could make a robot that, having suitable thermocouples, would quickly 
withdraw it's 
hand from a fire; but not be conscious of it.  Even if I provide the 
robot with 
feelings, i.e. judgements about good/bad/pain/pleasure I'm not sure it 
would be 
conscious.  But if I provide it with attention and memory, so that it 
noted the 
painful event as important and necessary to remember because of it's 
strong negative 
affect; then I think it would be conscious.


It's interesting that people actually withdraw their hand from the fire 
*before* they experience 
the pain. The withdrawl is a reflex, presumably evolved in organisms with 
the most primitive 
central nervour systems, while the pain seems to be there as an 
afterthought to teach us a 
lesson so we won't do it again. Thus, from consideration of evolutionary 
utility consciousness 
does indeed seem to be a side-effect of memory and learning. 

Even more curious, volitional action also occurs before one is aware of it. 
Are you 
familiar with the experiments of Benjamin Libet and Grey Walter?


These experiments showed that in apparently voluntarily initiated motion, 
motor cortex activity 
actually preceded the subject's awareness of his intention by a substantial 
fraction of a second. 
In other words, we act first, then decide to act. These studies did not 
examine pre-planned 
action (presumably that would be far more technically difficult) but it is 
easy to imagine the analogous 
situation whereby the action is unconsciously planned before we become 
aware of our decision. In 
other words, free will is just a feeling which occurs after the fact. This 
is consistent with the logical 
impossibility of something that is neither random nor determined, which is 
what I feel my free will to be.



I also think that this is an argument against zombies. If it were possible 
for an organism to 
behave just like a conscious being, but actually be unconscious, then why 
would consciousness 
have evolved? 

An interesting point - but hard to give any answer before pinning down what 
we mean 
by consciousness.  For example Bruno, Julian Jaynes, and Daniel Dennett 
have 
explanations; but they explain somewhat different consciousnesses, or at 
least 
different aspects.


Consciousness is the hardest thing to explain but the easiest thing to 
understand, if it's your own 
consciousness at issue. I think we can go a long way discussing it assuming 
that we do know what 
we are talking about even though we can't explain it. The question I ask is, 
why did people evolve 
with this consciousness thing, whatever it is? The answer must be, I think, 
that it is a necessary 
side-effect of the sort of neural complexity that underpins our behaviour. 
If it were not, and it 
were possible that beings could behave exactly like humans and not be 
conscious, then it would 
have been wasteful of nature to have provided us with consciousness. 

This is not necessarily so.  First, evolution is constrained by what goes 
before. 
Its engineering solutions often seem rube-goldberg, e.g. backward retina in 
mammals. 
 
 
 Sure, but vision itself would not have evolved unnecessarily.
 
 
  Second, there is selection against some evolved feature only to the extent 
 it has a 
(net) cost.  For example, Jaynes explanation of consciousness conforms to 
these two 
criteria.  I think that any species that evolves intelligence comparable to 
ours will 
be conscious for reasons somewhat like Jaynes theory.  They will be social 
and this 
combined with intelligence will make language a good evolutionary move.  Once 
they 
have language, remembering what has happened, in order to communicate and 
plan, in 
symbolic terms will be a easy and natural evolvement.  Whether that leads to 
hearing 
your own narrative in your head, as Jaynes supposes, is questionable; but it 
would be 
consistent with evolution. It takes advantage of existing structure and 
functions to 
realize a useful new function.
 
 
 Agreed. So consciousness is either there for a reason or it's a necessary 
 side-effect of the sort 
 of brains we have and the way we have evolved. It's still theoretically 
 possible that if the latter 
 is the case, we might have been unconscious if we had evolved completely 
 different kinds of 
 brains, but similar behaviour - although I think it unlikely.
  
 
This does not necessarily 
mean that computers can be conscious: maybe if we had evolved with 
electronic circuits in our 
heads rather than neurons consciousness would not have been a necessary 
side-effect. 

But my point is that this may come down to what we would mean by a computer 
being 
conscious.  Bruno has an answer in terms of what the computer can prove.  
Jaynes (and 
probably John McCarthy) would say a computer is conscious if it creates a 
narrative 
of its experience which it can access as memory.
 
 
 Maybe this is a copout, but I just don't think it is even 

Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Colin Hales wrote:
 
 Stathis Papaioannou
 snip
 
Maybe this is a copout, but I just don't think it is even logically
possible to explain what consciousness
*is* unless you have it. It's like the problem of explaining vision to a
blind man: he might be the world's
greatest scientific expert on it but still have zero idea of what it is
like to see - and that's even though
he shares most of the rest of his cognitive structure with other humans,
and can understand analogies
using other sensations. Knowing what sort of program a conscious computer
would have to run to be
conscious, what the purpose of consciousness is, and so on, does not help
me to understand what the
computer would be experiencing, except by analogy with what I myself
experience.

Stathis Papaioannou

 
 
 Please consider the plight of the zombie scientist with a huge set of
 sensory feeds and similar set of effectors. All carry similar signal
 encoding and all, in themselves, bestow no experiential qualities on the
 zombie.
 
 Add a capacity to detect regularity in the sensory feeds.
 Add a scientific goal-seeking behaviour.
 
 Note that this zombie...
 a) has the internal life of a dreamless sleep
 b) has no concept or percept of body or periphery
 c) has no concept that it is embedded in a universe.
 
 I put it to you that science (the extraction of regularity) is the science
 of zombie sensory fields, not the science of the natural world outside the
 zombie scientist. No amount of creativity (except maybe random choices)
 would ever lead to any abstraction of the outside world that gave it the
 ability to handle novelty in the natural world outside the zombie scientist.
 
 No matter how sophisticated the sensory feeds and any guesswork as to a
 model (abstraction) of the universe, the zombie would eventually find
 novelty invisible because the sensory feeds fail to depict the novelty .ie.
 same sensory feeds for different behaviour of the natural world.
 
 Technology built by a zombie scientist would replicate zombie sensory feeds,
 not deliver an independently operating novel chunk of hardware with a
 defined function(if the idea of function even has meaning in this instance).
 
 The purpose of consciousness is, IMO, to endow the cognitive agent with at
 least a repeatable (not accurate!) simile of the universe outside the
 cognitive agent so that novelty can be handled. Only then can the zombie
 scientist detect arbitrary levels of novelty and do open ended science (or
 survive in the wild world of novel environmental circumstance).

Almost all organisms have become extinct.  Handling *arbitrary* levels of 
novelty is 
probably too much to ask of any species; and it's certainly more than is 
necessary to 
survive for millennia.

 
 In the absence of the functionality of phenomenal consciousness and with
 finite sensory feeds you cannot construct any world-model (abstraction) in
 the form of an innate (a-priori) belief system that will deliver an endless
 ability to discriminate novelty. In a very Godellian way eventually a limit
 would be reach where the abstracted model could not make any prediction that
 can be detected. 

So that's how we got string theory!

The zombie is, in a very real way, faced with 'truths' that
 exist but can't be accessed/perceived. As such its behaviour will be
 fundamentally fragile in the face of novelty (just like all computer
 programs are).

How do you know we are so robust?  Planck said, "A new idea prevails, not by 
the 
conversion of adherents, but by the retirement and demise of opponents."  In 
other 
words, only the young have the flexibility to adopt new ideas.  Ironically 
Planck 
never really believed quantum mechanics was more than a calculational trick.

 ---
 Just to make the zombie a little more real... consider the industrial
 control system computer. I have designed, installed hundreds and wired up
 tens (hundreds?) of thousands of sensors and an unthinkable number of
 kilometers of cables. (NEVER again!) In all cases I put it to you that the
 phenomenal content of sensory connections may, at best, be characterised as
 whatever it is like to have electrons crash through wires, for that is what
 is actually going on. As far as the internal life of the CPU is concerned...
 whatever it is like to be an electrically noisy hot rock, regardless of the
 programalthough the character of the noise may alter with different
 programs!

That's like saying that whatever it is like to be you is, at best, some waves of 
chemical 
potential.  You don't *know* that the control system is not conscious - unless 
you 
know what structure or function makes a system conscious.

Brent Meeker
