Robert,

All this discussion is unfortunately moot, since it depends entirely on how the computer is programmed, what kinds of sensory inputs it is hooked up to, and what kind of control over active devices with feedback it is given. There is nearly infinite possible variation here, so one can trivially imagine computers of any type with any set of capabilities. To see this, consider my contention that if a computer were constructed out of biological materials according to a human genetic blueprint, it would be indistinguishable from a human. The question of whether computers are sentient or conscious therefore depends entirely on the definition of consciousness and the design of the computer. Whether computers are, or can be, conscious is trivially simple to answer: the answer is YES. One simply defines consciousness and then builds the computer to those specs. For the very conservative, that might mean building one to human specs (at what age level?), but it is always theoretically possible.


In my view, all computers are conscious of what they are conscious of, as are all organisms. One must never fall into the fallacy of confusing self-consciousness with consciousness. Self-consciousness is just consciousness of the concept of a self, which is only one of the many contents of consciousness.

Edgar



On Aug 24, 2008, at 5:18 AM, Robert Karl Stonjek wrote:



----- Original Message -----
From: Stan Franklin
To: Evolutionary Psychology list
Sent: Sunday, August 24, 2008 5:36 AM
Subject: Fwd: [evol-psych] Essay: The Shy Computer


Robert, I enjoyed your essay The Shy Computer. Thanks for it. My comments are below. All the best. Stan

Our first observation is that, in response to stimuli or instruction, the robot can indeed perform many cognitive functions. But when the response to stimuli is complete, it more or less stops. Its curiosity appears to be largely absent, and the only conversation it engages in revolves around its basic needs (as programmed in to simulate the living, e.g. hunger and thirst).

Curiosity (exploration) is often built into software agents and even some robots as being necessary for learning. Some autonomous software agents and robots generate their own internal stimuli to which they respond. They have their own agenda. My IDA agent would sometimes initiate correspondence with a sailor about new jobs when the sailor had not contacted IDA in a timely fashion.
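To give a flavour of what "generating internal stimuli" can mean in code, here is a toy sketch in Python. It is only an illustration of the general idea, not IDA's actual design; the timeout value and the send_email stub are invented for the example:

import time

FOLLOW_UP_AFTER = 7 * 24 * 3600   # illustrative: follow up after a week of silence

def send_email(address, text):
    # stand-in for real correspondence
    print(f"to {address}: {text}")

class ProactiveAgent:
    def __init__(self, sailor_address):
        self.sailor_address = sailor_address
        self.last_contact = time.time()

    def on_external_stimulus(self, message):
        # respond to mail from the sailor (an external stimulus)
        self.last_contact = time.time()
        send_email(self.sailor_address, "Thanks - here are jobs matching your request.")

    def tick(self):
        # called periodically; the agent may generate an internal stimulus
        if time.time() - self.last_contact > FOLLOW_UP_AFTER:
            # internal stimulus: no one has written, but the agent acts on its own agenda
            send_email(self.sailor_address, "You haven't replied - new jobs have come up.")
            self.last_contact = time.time()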

... how does one program in a general curiosity without telling the robot what to be curious about?

Every autonomous agent must have some built-in primitive sensing, motivating, and acting capabilities. Built-in curiosity may simply be the motivation to do, or to pay attention to, the novel. This, of course, requires memory of an appropriate type, which can also be built-in.
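As a rough illustration of curiosity as a built-in motivation to attend to the novel, backed by a memory of past stimuli, something like the following toy Python sketch would do. The feature-overlap similarity and the threshold are only illustrative choices, not anything from IDA or LIDA:

from collections import deque

class NoveltyDrivenAgent:
    def __init__(self, memory_size=100, novelty_threshold=0.5):
        self.memory = deque(maxlen=memory_size)    # built-in memory of past stimuli
        self.novelty_threshold = novelty_threshold

    def _similarity(self, a, b):
        # toy similarity: overlap between two sets of features
        a, b = set(a), set(b)
        return len(a & b) / max(len(a | b), 1)

    def novelty(self, stimulus):
        # novelty = 1 - similarity to the most similar remembered stimulus
        if not self.memory:
            return 1.0
        return 1.0 - max(self._similarity(stimulus, m) for m in self.memory)

    def attend(self, stimuli):
        # pay attention to (and remember) the most novel stimulus, if it is novel enough
        score, chosen = max(((self.novelty(s), s) for s in stimuli), key=lambda x: x[0])
        if score >= self.novelty_threshold:
            self.memory.append(chosen)
            return chosen     # the agent "explores" this stimulus
        return None           # nothing novel enough to be curious about

# The agent ignores a familiar stimulus and explores an unfamiliar one.
agent = NoveltyDrivenAgent()
agent.attend([("red", "ball")])
print(agent.attend([("red", "ball"), ("blue", "cube")]))   # -> ('blue', 'cube')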

But the robot takes no interest in the drawings it has made previously nor those made by other people.

It certainly can, depending on the built-in motivations.

RKS:
You would have to specify each domain of curiosity quite specifically. This is not human-like. Humans are continuously finding new areas of curiosity. I was considering the computer as an analogue of human consciousness or human evolved behaviour rather than the idea of making a software agent.

Sure you can manually program a software agent to be curious in a particular way in a particular domain, but is this how humans do it?

I would also point out that when humans need extra information to complete a task they seek it, but that is not the same as curiosity in the common usage of the word. Curiosity is the seeking of information for no obvious or immediate reason.


The robot talks almost endlessly, but mainly not to anyone - it just talks. And it doesn't take much interest in the chatter of others.

There's no reason the robot couldn't speak only when it seems appropriate to the robot, or take an interest in what others say.

RKS:
If expression (output) were programmed in the way I have suggested, then a robot would start acting like a toddler - toddlers talk endlessly, but not to anyone (90% of verbal output occurs without a listener present). This is what I am trying to understand - what would make a robot start acting like a child?


Thus three processes are in play - expression, curiosity, and the processing of information into a single composite (self) image. As both the internal and external consciousness can sense both the world and the physical self and each other, the stage is set for all the complexity normally found in humans.

Expression, in the form of email messages in English, and the processing of both internal and external stimuli are both a part of our IDA software agent. To set the stage for human-like complexity, more is needed, namely learning in several modes. This led us to LIDA, Learning IDA.

RKS:
Why does it learn? Because you have specifically instructed it to do so. This form of prescriptive instruction is not allowed in my model, as I mentioned.

A robot designed with human-like modules would not be anything like IDA or her sister LIDA. A human-like robot would have to sleep or would stop functioning normally after a few days (for instance).

Note that your software agent always does as you have instructed it to do. Does it, after learning, ever want to take on some other task? Does it want to become a music composition agent instead of filling out forms for the military?

This uncertainty, which we would find in the model I have suggested, means that the robot in my piece is an analogue of human consciousness and so could not be reliably set up for one particular task.

Note also that the more effective the information-gathering skills of the agent are, the less information you need to supply initially. For instance, a pony can walk just hours after being born, can recognise its mother, and can feed by itself. Humans cannot, but humans are born with far less innate knowledge - we are more effective information gatherers and processors.

Thus, if your software agent were to evolve toward human-ness, you would find that the amount of prescriptive information required falls off as it evolves, but the number of personalities, predispositions and career options balloons considerably, until the only way you could reliably create an agent for a specific task would be to select, from a population of agents, those that are 'naturally' predisposed to the proposed career.
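To make that selection step concrete, a toy sketch might look like the following (Python; the trait names and the matching rule are purely illustrative, not drawn from any existing agent):

import random

TRAITS = ["curiosity", "sociability", "persistence", "precision"]

def random_agent():
    # an "agent" here is just its innate predisposition profile
    return {t: random.random() for t in TRAITS}

def suitability(agent, task_profile):
    # how closely the agent's predispositions match what the task demands
    return -sum(abs(agent[t] - task_profile[t]) for t in TRAITS)

# a hypothetical task profile, e.g. a form-filling job wanting precision and persistence
form_filling = {"curiosity": 0.2, "sociability": 0.3, "persistence": 0.9, "precision": 0.9}

population = [random_agent() for _ in range(1000)]
recruits = sorted(population, key=lambda a: suitability(a, form_filling), reverse=True)[:5]

for agent in recruits:
    print({t: round(v, 2) for t, v in agent.items()})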

Kind Regards
Robert Karl Stonjek


--
Stan Franklin   Professor   Computer Science
W. Harry  Feinstone  Interdisciplinary  Research   Professor
Institute for Intelligent Systems
FedEx Institute of Technology
The University of Memphis
Memphis, TN 38152 USA
phone 901-678-1341
fax 901-678-5129             [EMAIL PROTECTED]
www.cs.memphis.edu/~franklin


