Quentin Anciaux wrote:
> I'd like to add a "definition" of consciousness.
>
> Consciousness is the inner narrative composed of sounds/images/feelings
> which presents itself as 'I'. What the origin or meaning of 'I' is, I don't
> know, but 'I' is the consciousness.
>
> Quentin
John McCarthy notes that consciousness is not a single thing. He has written some essays on what it would mean to create a conscious artificial intelligence:

http://www-formal.stanford.edu/jmc/consciousness.html
http://www-formal.stanford.edu/jmc/zombie.pdf

Brent Meeker

> On Saturday 02 June 2007 22:13:30 Hal Finney wrote:
>
>> Various projects exist today aiming at building a true Artificial Intelligence. Sometimes these researchers use the term AGI, Artificial General Intelligence, to distinguish their projects from mainstream AI, which tends to focus on specific tasks. A conference on such projects will be held next year, agi-08.org.
>>
>> Suppose one of these projects achieves one of the milestone goals of such efforts: their AI becomes able to educate itself by reading books and reference material, rather than having to have facts put in by the developers. Perhaps it requires some help with this, and various questions and ambiguities need to be answered by humans, but this is still a huge advance, as the AI can now in principle learn almost any field.
>>
>> Keep in mind that this AI is far from passing the Turing test; it is able to absorb and digest material and then answer questions, or perhaps even engage in a dialog about it. But its complexity is, we will suppose, substantially less than that of the human brain.
>>
>> Now at some point the AI reads about the philosophy of mind, and the question is put to it: are you conscious?
>>
>> How might an AI program go about answering a question like this? What kind of reasoning would be applicable? In principle, how would you expect a well-designed AI to decide if it is conscious? And then, how or why is the reasoning different if a human rather than an AI is answering it?
>>
>> Clearly the AI has to start with the definition. It needs to know what consciousness is, what the word means, in order to decide if it applies. Unfortunately such definitions usually amount to either a list of synonyms for consciousness, or use the common human biological heritage as a reference. From Wikipedia: "Consciousness is a quality of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment." Here we have four synonyms and one relational description which would arguably apply to any computer system that has environmental sensors, unless "perceive" is also merely another synonym for conscious perception.
>>
>> It looks to me like AIs, even ones much more sophisticated than the one I am describing here, are going to have a hard time deciding whether they are conscious in the human sense. Since humans seem essentially unable to describe consciousness in any reasonable operational terms, there doesn't seem to be any acceptable way for an AI to decide whether the word applies to itself.
>>
>> And this failure calls into question the ease with which humans assert that they are conscious. How do we really know that we are conscious? For example, how do we know that what we call consciousness is what everyone else calls consciousness? I am worried that many people believe they are conscious simply because, as children, they were told they were conscious. They were told that consciousness is the difference between being awake and being asleep, and assume on that basis that when they are awake they are conscious.
>> Then all those other synonyms are treated the same way.
>>
>> Yet most humans would not admit to any doubt that they are conscious. For such a slippery and seemingly undefinable concept, it seems odd that people are so sure of it. Why, then, can't an AI achieve a similar degree of certainty? Do you think a properly programmed AI would ever say, yes, I am conscious, because I have subjectivity, self-awareness, sentience, sapience, etc., and I know this because it is just inherent in my artificial brain? Presumably we could program the AI to say this, and to believe it (in whatever sense that word applies), but is it something an AI could logically conclude?
>>
>> Hal
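To illustrate Hal's point that the quoted definition does no real work, here is a minimal sketch (Python, not taken from any of the posts above) of what a naive "checklist" test against it would look like. The names (Agent, is_conscious_by_definition, the sample sensors) are invented for the example. The only operational clause, perceiving a relationship between oneself and one's environment, is satisfied by anything that has a sensor, so the test tells us nothing about consciousness in the human sense.

# Hypothetical "checklist" test of the quoted Wikipedia definition.
# Everything here is invented for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Agent:
    name: str
    sensors: List[str] = field(default_factory=list)

    def perceives_self_environment_relation(self) -> bool:
        # The only operational clause in the definition: any sensor reading
        # already relates the agent's state to its surroundings.
        return len(self.sensors) > 0

    def is_conscious_by_definition(self) -> bool:
        # "Subjectivity, self-awareness, sentience, sapience" are synonyms
        # with no operational test, so the checklist collapses to the
        # sensor clause above.
        return self.perceives_self_environment_relation()


if __name__ == "__main__":
    robot = Agent("vacuum robot", sensors=["bump sensor", "IR cliff sensor"])
    print(robot.is_conscious_by_definition())  # prints True

That the check returns True for a vacuum robot is exactly the problem Hal raises: unless "perceive" already smuggles in conscious perception, the definition is trivially satisfiable, and an AI asked "are you conscious?" gets no guidance from it.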

