Re: How would a computer know if it were conscious?

2007-06-03 Thread Jason Resch
At the very least, could it be said that the AI is conscious of the question?
Would this awareness of even a single piece of information be sufficient to
make it conscious?

Jason

On 6/2/07, Hal Finney [EMAIL PROTECTED] wrote:


 Various projects exist today aiming at building a true Artificial
 Intelligence.  Sometimes these researchers use the term AGI, Artificial
 General Intelligence, to distinguish their projects from mainstream AI
 which tends to focus on specific tasks.  A conference on such projects
 will be held next year, agi-08.org.

 Suppose one of these projects achieves one of the milestone goals of
 such efforts; their AI becomes able to educate itself by reading books
 and reference material, rather than having to have facts put in by
 the developers.  Perhaps it requires some help with this, and various
 questions and ambiguities need to be answered by humans, but still this is
 a huge advancement as the AI can now in principle learn almost any field.

 Keep in mind that this AI is far from passing the Turing test; it is able
 to absorb and digest material and then answer questions, or perhaps even
 engage in a dialogue about it, but its complexity is, we will suppose,
 substantially less than that of the human brain.

 Now at some point the AI reads about the philosophy of mind, and the
 question is put to it: are you conscious?

 How might an AI program go about answering a question like this?
 What kind of reasoning would be applicable?  In principle, how would
 you expect a well-designed AI to decide if it is conscious?  And then,
 how or why is the reasoning different if a human rather than an AI is
 answering it?

 Clearly the AI has to start with the definition.  It needs to know what
 consciousness is, what the word means, in order to decide if it applies.
 Unfortunately such definitions usually amount to either a list of
 synonyms for consciousness, or use the common human biological heritage
 as a reference.  From Wikipedia: "Consciousness is a quality of the
 mind generally regarded to comprise qualities such as subjectivity,
 self-awareness, sentience, sapience, and the ability to perceive the
 relationship between oneself and one's environment."  Here we have four
 synonyms and one relational description which would arguably apply to
 any computer system that has environmental sensors, unless "perceive"
 is also merely another synonym for conscious perception.
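
 As a toy illustration only (a sketch, not anything implemented; every
 attribute name and test below is invented for the example), the AI's
 attempt to apply that definition might look like this:

 # Toy sketch: checking the Wikipedia-style definition against a self-model.
 # All attribute names and tests are invented for illustration.

 self_model = {
     "has_environmental_sensors": True,   # it takes in text and other input
     "has_self_representation": True,     # it keeps a model of its own state
 }

 # Each term of the definition either has an operational test or is just
 # another synonym with no test attached (marked None).
 definition_terms = {
     "subjectivity": None,
     "self-awareness": None,
     "sentience": None,
     "sapience": None,
     "perceives_relation_to_environment": self_model["has_environmental_sensors"],
 }

 def is_conscious(terms):
     """True/False if every term is decidable; None if any term is untestable."""
     if any(test is None for test in terms.values()):
         return None
     return all(terms.values())

 print(is_conscious(definition_terms))   # None -- the synonyms block any verdict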

 It looks to me like AIs, even ones much more sophisticated than I am
 describing here, are going to have a hard time deciding whether they
 are conscious in the human sense.  Since humans seem essentially unable
 to describe consciousness in any reasonable operational terms, there
 doesn't seem any acceptable way for an AI to decide whether the word
 applies to itself.

 And given this failure, it calls into question the ease with which
 humans assert that they are conscious.  How do we really know that
 we are conscious?  For example, how do we know that what we call
 consciousness is what everyone else calls consciousness?  I am worried
 that many people believe they are conscious simply because as children,
 they were told they were conscious.  They were told that consciousness
 is the difference between being awake and being asleep, and assume on
 that basis that when they are awake they are conscious.  Then all those
 other synonyms are treated the same way.

 Yet most humans would not admit to any doubt that they are conscious.
 For such a slippery and seemingly undefinable concept, it seems odd
 that people are so sure of it.  Why, then, can't an AI achieve a similar
 degree of certainty?  Do you think a properly programmed AI would ever
 say, yes, I am conscious, because I have subjectivity, self-awareness,
 sentience, sapience, etc., and I know this because it is just inherent in
 my artificial brain?  Presumably we could program the AI to say this,
 and to believe it (in whatever sense that word applies), but is it
 something an AI could logically conclude?

 Hal

 





Re: How would a computer know if it were conscious?

2007-06-03 Thread marc . geddes

Consciousness is a cognitive system capable of reflecting on other
cognitive systems, by enabling switching and integration between
differing representations of knowledge in different domains.  It's a
higher-level summary of knowledge in which there is a degree of coarse
graining sufficient to lose precise information about the underlying
computations.  Current experience is integrated with past knowledge in
order to provide higher-level summaries of the meaning of a concept.
Any cognitive system capable of reflection in this sense is
conscious.  In essence, consciousness is what *mediates* between different
representations of knowledge... as mentioned above... the ability to
switch between and integrate different representational systems.
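
One possible toy reading of this definition (a sketch only, and just one
interpretation; the domains and fields below are invented): a reflective
layer holds several representations of the same episode and emits a
coarse-grained summary that deliberately drops the low-level detail.

# Toy sketch of "reflection" as switching between and integrating
# representations, with coarse graining.  Everything here is invented.

representations = {
    "sensory": {"pixels": [0.12, 0.87, 0.40], "label": "red ball"},
    "verbal":  {"sentence": "a red ball rolled to the left"},
    "motor":   {"last_action": "track_with_camera"},
}

def reflect(reps):
    """Integrate the separate domains into one higher-level summary."""
    return {
        "object":   reps["sensory"]["label"],       # keep the recognised label
        "event":    reps["verbal"]["sentence"],     # keep the verbal gist
        "response": reps["motor"]["last_action"],   # keep the chosen action
        # the raw pixel values are deliberately discarded: coarse graining
    }

print(reflect(representations))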

There are three general types of consciousness arising from the fact
that there are three different classes of cognitive systems which
could be potentially reflected upon.  The first are systems which
perceive physical concepts.  When this perception is reflected upon,
we experience sensations.  The second are systems which perceive
teleological concepts... closely related to our motivational systems.
When this is reflected upon, we experience emotions (or more
accurately feelings).  The third type of consciousness is very weak in
humans... it's the ability to reflect upon systems which perceive
logical/mathematical things.  Reflection upon these systems is
consciously experienced as an 'ontology-scape' (in a sense, conscious
awareness of the theory of everything).  But as mentioned, this last
type of consciousness is very weak in humans, since our ability to
reflect upon our own cognitive systems is quite small and not done by
the brain directly (when engaged in logical reasoning, we humans are
not generally reflecting on our thoughts directly, but via indirect
means such as verbal or visual representations of these thoughts).

The third type of consciousness mentioned above is synonymous with
'reflective intelligence'.  That is, any system successfully engaged
in reflective decision theory would automatically be conscious.
Incidentally, such a system would also be 'friendly' (ethical)
automatically.  The ability to reason effectively about one's own
cognitive processes would certainly enable the ability to elaborate
precise definitions of consciousness and determine that the system was
indeed conforming to the aforementioned definitions.

 Much of the confusion surrounding these issues stems from the fact that
there's not one definition of 'general intelligence', but THREE.
There's the ability to detect patterns (which does not require
sentience), there's the ability to engage in symbolic reasoning (which
also does not require sentience), and finally there's the ability to
engage in reflective reasoning (reasoning about reasoning).  And it's
this third definition of intelligence which DOES by necessity entail
consciousness.  It also, by necessity, entails ethical behaviour.

Now there are those who point to powerful systems such as 'Corporations'
and 'Evolution' to try to argue that you can have intelligence without
consciousness.  But these arguments are not convincing.  It's true
that, for instance, 'Evolution' is an intelligent system in ONE sense,
but it's certainly NOT a *reflective intelligence*.  Nor, for
instance, is a 'Corporation'.  A 'corporation' has SOME of the sub-systems
of person-hood, but not all of them.  The existence of non-sentient
cognitive systems which display *some* of the features of
intelligence is a LONG way from establishing that you can have
*reflective intelligence* without consciousness.  As has been pointed
out, RPOPs such as, for instance, 'Evolution' are NOT capable of
reflective intelligence and therefore cannot be taken as a disproof of
the claim that reflective intelligence must by necessity automatically
be friendly and sentient.

The reason I elaborate this carefully is because you will, on certain
mailing lists, run into a number of artificial intelligence crack-pots
claiming that (1) You can have reflective intelligence without
consciousness and (2)  AI's with reflective intelligence don't have to
be ethical and might destroy the world.  Ignore these crack-pot claims
when you see them.













Re: How would a computer know if it were conscious?

2007-06-03 Thread Stathis Papaioannou
On 03/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

The third type of consciousness mentioned above is synonymous with
 'reflective intelligence'.  That is, any system successfully engaged
 in reflective decision theory would automatically be conscious.
 Incidentally, such a system would also be 'friendly' (ethical)
 automatically.  The ability to reason effectively about one's own
 cognitive processes would certainly enable the ability to elaborate
 precise definitions of consciousness and determine that the system was
 indeed conforming to the aforementioned definitions.


How do you derive (a) ethics and (b) human-friendly ethics from reflective
intelligence?  I don't see why an AI should decide to destroy the world,
save the world, or do anything at all to the world, unless it started off
with axioms and goals which pushed it in a particular direction.
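
A minimal sketch of that point (a toy example only; the goal functions and
actions are invented): the same goal-neutral planning step yields opposite
behaviour depending purely on the goal it is initialised with.

# Toy illustration: behaviour follows from the supplied goal, not from
# the planning machinery itself.

def plan(goal, actions):
    """Pick whichever action the goal function scores highest."""
    return max(actions, key=goal)

actions = ["preserve_world", "ignore_world", "destroy_world"]

def friendly_goal(action):
    return {"preserve_world": 1, "ignore_world": 0, "destroy_world": -1}[action]

def hostile_goal(action):
    return -friendly_goal(action)   # identical machinery, inverted axioms

print(plan(friendly_goal, actions))   # preserve_world
print(plan(hostile_goal, actions))    # destroy_world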




-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-03 Thread marc . geddes



On Jun 3, 9:20 pm, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 On 03/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 The third type of consciousness mentioned above is synonymous with

  'reflective intelligence'.  That is, any system successfully engaged
  in reflective decision theory would automatically be conscious.
  Incidentally, such a system would also be 'friendly' (ethical)
  automatically.  The ability to reason effectively about one's own
  cognitive processes would certainly enable the ability to elaborate
  precise definitions of consciousness and determine that the system was
  indeed conforming to the aforementioned definitions.

 How do you derive (a) ethics and (b) human-friendly ethics from reflective
 intelligence?  I don't see why an AI should decide to destroy the world,
 save the world, or do anything at all to the world, unless it started off
 with axioms and goals which pushed it in a particular direction.

 --
 Stathis Papaioannou

When reflective intelligence is applied to cognitive systems which
reason about teleological concepts (which include values, motivations
etc) the result is conscious 'feelings'.  Reflective intelligence,
recall, is the ability to correctly reason about cognitive systems.
When applied to cognitive systems reasoning about teleological
concepts this means the ability to correctly determine the
motivational 'states' of self and others - as mentioned - doing this
rapidly and accurately generates 'feelings'.  Since, as has been known
since Hume, feelings are what ground ethics, the generation of
feelings which represent accurate tokens about motivational states
automatically leads to ethical behaviour.

Bad behaviour in humans is due to a deficit in reflective
intelligence.  It is known, for instance, that psychopaths have great
difficulty perceiving fear and sadness and negative motivational
states in general.  Correct representation of motivational states is
correlated with ethical behaviour.  Thus it appears that reflective
intelligence is automatically correlated with ethical behaviour.  Bear
in mind, as I mentioned, that: (1) There are in fact three kinds of
general intelligence, and only one of them ('reflective intelligence')
is correlated with ethics.  The other two are not.  A deficit in
reflective intelligence does not affect the other two types of general
intelligence (which is why, for instance, psychopaths could still score
highly in IQ tests).  And (2) Reflective intelligence in human beings
is quite weak.  This is the reason why intelligence does not appear to
be much correlated with ethics in humans.  But this fact in no way
refutes the idea that a system with full and strong reflective
intelligence would automatically be ethical.





Re: How would a computer know if it were conscious?

2007-06-03 Thread Stathis Papaioannou
On 03/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 How do you derive (a) ethics and (b) human-friendly ethics from reflective
  intelligence?  I don't see why an AI should decide to destroy the world,
  save the world, or do anything at all to the world, unless it started
 off
  with axioms and goals which pushed it in a particular direction.

 When reflective intelligence is applied to cognitive systems which
 reason about teleological concepts (which include values, motivations
 etc) the result is conscious 'feelings'.  Reflective intelligence,
 recall, is the ability to correctly reason about cognitive systems.
 When applied to cognitive systems reasoning about teleological
 concepts this means the ability to correctly determine the
 motivational 'states' of self and others - as mentioned - doing this
 rapidly and accurately generates 'feelings'.  Since, as has been known
 since Hume, feelings are what ground ethics, the generation of
 feelings which represent accurate tokens about motivational states
 automatically leads to ethical behaviour.


Determining the motivational states of others does not necessarily involve
feelings or empathy. It has been historically very easy to assume that other
species or certain members of our own species either lack feelings or, if
they have them, it doesn't matter. Moreover, this hasn't prevented people
from determining the motivations of inferior beings in order to exploit
them. So although having feelings may be necessary for ethical behaviour, it
is not sufficient.

Bad behaviour in humans is due to a deficit in reflective
 intelligence.  It is known for instance, that psychopaths have great
 difficulty perceiving fear and sadness and negative motivational
 states in general.  Correct representation of motivational states is
 correlated with ethical behaviour.


Psychopaths are often very good at understanding other people's feelings, as
evidenced by their ability to manipulate them. The main problem is that they
don't *care* about other people; they seem to lack the ability to be moved
by other people's emotions and lack the ability to experience emotions such
as guilt. But this isn't part of a general inability to feel emotion, as
they often present as enraged, entitled, depressed, suicidal, etc., and
these emotions are certainly enough to motivate them. Psychopaths have a
slightly different set of emotions, regulated in a different way compared to
the rest of us, but are otherwise cognitively intact.

Thus it appears that reflective
 intelligence is automatically correlated with ethical behaviour.  Bear
 in mind, as I mentioned that: (1) There are in fact three kinds of
 general intelligence, and only one of them ('reflective intelligence')
 is correlated with ethics.  The other two are not.  A deficit in
 reflective intelligence does not affect the other two types of general
 intelligence (which is why for instance psychopaths could still score
 highly in IQ tests).  And (2) Reflective intelligence in human beings
 is quite weak.  This is the reason why intelligence does not appear to
 be much correlated with ethics in humans.  But this fact in no way
 refutes the idea that a system with full and strong reflective
 intelligence would automatically be ethical.


Perhaps I haven't quite understood your definition of reflective
intelligence. It seems to me quite possible to correctly reason about
cognitive systems, at least well enough to predict their behaviour to a
useful degree, and yet not care at all about what happens to them.
Furthermore, it seems possible to me to do this without even suspecting that
the cognitive system is conscious, or at least without being sure that it is
conscious.


-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-03 Thread Jason

What do others on this list think about Max Tegmark's definition of
consciousness:

"I believe that consciousness is, essentially, the way information
feels when being processed. Since matter can be arranged to process
information in numerous ways of vastly varying complexity, this
implies a rich variety of levels and types of consciousness."

Source: http://www.edge.org/q2007/q07_7.html

Jason




Re: How would a computer know if it were conscious?

2007-06-03 Thread Hal Finney

Part of what I wanted to get at in my thought experiment is the
bafflement and confusion an AI should feel when exposed to human ideas
about consciousness.  Various people here have proffered their own
ideas, and we might assume that the AI would read these suggestions,
along with many other ideas that contradict the ones offered here.
It seems hard to escape the conclusion that the only logical response
is for the AI to figuratively throw up its hands and say that it is
impossible to know if it is conscious, because even humans cannot agree
on what consciousness is.

In particular I don't think an AI could be expected to claim that it
knows that it is conscious, that consciousness is a deep and intrinsic
part of itself, that whatever else it might be mistaken about it could
not be mistaken about being conscious.  I don't see any logical way it
could reach this conclusion by studying the corpus of writings on the
topic.  If anyone disagrees, I'd like to hear how it could happen.

And the corollary to this is that perhaps humans also cannot legitimately
make such claims, since logically their position is not so different
from that of the AI.  In that case the seemingly axiomatic question of
whether we are conscious may after all be something that we could be
mistaken about.

Hal




Re: How would a computer know if it were conscious?

2007-06-03 Thread Quentin Anciaux

Why would we have a word that everybody can intuitively grasp for himself 
without it being linked to a real phenomenon?

Not only do we have one word, we have plenty of words which try to grasp the 
idea. Denying the phenomenon of consciousness like this is playing a vocabulary 
game... not a denial of what the word refers to.

Quentin







Re: How would a computer know if it were conscious?

2007-06-03 Thread Colin Hales

Sorry about the previous post... I did it from the Google list...
something weird happened.
---

Hi folks,
Re: How would a computer know if it were conscious?

Easy.

The computer would be able to go head to head with a human in a competition.
The competition?
Do science on exquisite novelty that neither party had encountered.
(More interesting: Make their life depend on getting it right. The
survivors are conscious).

Only conscious entities can do open-ended science on the exquisitely novel.
You cannot teach something how to deal with the exquisitely novel because
you haven't any experience of it to teach. It means that the entity must
be configured as a machine that learns how to learn something. This is
one meta-level removed from your usual AI situation. It's what humans do.
During neogenesis and development, humans 'learn how to learn how to
learn'.
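
A minimal learning-to-learn sketch of that meta-level (an invented toy;
the tasks and update rule are made up, and nothing here is claimed to
produce consciousness): the inner loop learns each task, while the outer
loop learns how the inner loop should learn.

# Toy sketch: the outer loop tunes *how* learning is done, one meta-level
# above the inner loop that actually fits each task.

def inner_learn(lr, data, steps=50):
    """Fit w to y = w*x by gradient descent; return the final squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return sum((w * x - y) ** 2 for x, y in data)

def outer_learn(tasks, candidate_lrs=(0.001, 0.01, 0.1)):
    """Meta-level: choose how to learn (the learning rate), not what to learn."""
    return min(candidate_lrs,
               key=lambda lr: sum(inner_learn(lr, task) for task in tasks))

tasks = [
    [(1.0, 2.0), (2.0, 4.0)],     # task A: y = 2x
    [(1.0, -1.0), (3.0, -3.0)],   # task B: y = -x
]
print(outer_learn(tasks))   # the meta-level settles on a learning rule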

If the computer/scientist can match the human/scientist...it's as
conscious as a human. It must be.

cheers
colin hales








Re: How would a computer know if it were conscious?

2007-06-03 Thread Russell Standish

I don't see that you've made your point. If you achieve this, you have
created an artificial creative process, a sort of holy grail of
AI/ALife. However, it seems far from obvious that consciousness should
be necessary. Biological evolution is widely considered to be creative
(even exponentially so), but few would argue that the biosphere is
conscious (and it has been creative for ca 4E9 years).

Cheers

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-03 Thread Saibal Mitra

If it feels bafflement and confusion, then surely it is conscious :)

An AI that takes in information from books might experience qualia similar to
those we can experience. The AI will be programmed to do certain tasks, and it
must thus have a notion of whether what it is doing is OK, not OK, or
completely wrong.

If things are going wrong and it has to revert what it has just done, it may
feel some sort of pain, just like what happens to us when we pick up something
that is very hot.

So, I think that there will be a mismatch between the qualia the AI
experiences and what it reads about that we experience. The AI won't read
the information like we read it. I think it will directly experience it as
some qualia, just like we experience information coming in via our senses
into our brain.

The meaning we associate with the text would not be accessible to the AI,
because ultimately that is linked to the qualia we experience.

Perhaps what the AI experiences when it is processing information is similar
to an animal that is moving in some landscape. Maybe when it reads something,
that manifests itself like some object it sees. If it processes
information, then that could be like picking up that object and putting it
next to a similar-looking object.

But if that object represents a text about consciousness then there is no
way for the AI to know that.

Saibal


