Edgar, Robert, Stan, et al...
I am not a participant on the Evolutionary-Psychology Forum, but I followed this
thread from another forum that I do participate in with Edgar. I
read the referenced essay. I don't have a particular interest in the subject
of machine consciousness, but I do have a lot of experience in developing
and applying machine intelligence (decision-making) solutions in the
commercial sector. This work has involved solutions implemented on digital
computers, based on techniques as mundane as statistical regression and as
interesting as genetic and evolutionary algorithms. I
know the systems you are talking about here are much more advanced than
these, but I would like to offer some observations I've made and conclusions
I've reached from my work.
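For what it's worth, the flavour of evolutionary technique mentioned above can be sketched in a few lines of Python. Everything here (the bit-string encoding, the toy "one-max" fitness function, the population size and rates) is an illustrative assumption, not a description of any system discussed in this thread:

```python
import random

def fitness(bits):
    """Toy 'one-max' fitness: count the 1-bits in the genome."""
    return sum(bits)

def evolve(pop_size=20, genome_len=16, generations=40, seed=1):
    rng = random.Random(seed)
    # Start from a random population of bit-string genomes.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover plus occasional mutation refills the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:          # mutation: flip one bit
                i = rng.randrange(genome_len)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fitter half survives each generation unchanged, the best genome can only improve; the interesting part is that nothing in the loop "knows" what a good answer looks like beyond the fitness score.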
- All attempts at developing machine intelligence that I have seen or read
about are based on a flawed assumption. That flawed assumption is that the human
decision-making process is based on logic. Human intelligence is both much more
and much less than that. The bottom line is that we attempt to construct
machines that think the way we think we think, not how we really think,
because we don't really know how we really think. It is my personal belief
that humans come to decisions based on a myriad of inputs, and only
after the fact apply logic to validate or justify the decision.
- All machine solutions I know of operate entirely in the realm of 'what'
and 'when', but cannot adequately explain 'why' (except as a function of
what and when). I think this is because 'what' and 'when' are (in a
practical sense) measurable qualities and therefore absolute, whereas 'why'
is a totally ambiguous and relative quality.
- If there is any hope for human-like machine intelligence (and maybe machine
consciousness), my suggestion is to abandon the current and seemingly
single-minded focus on digital computers, and concentrate on analog
computers. Digital computers work great on purely logical problems:
mathematics and logic itself. It is my belief that raw sensory input
(sound, light, tactile) is analog. I believe our bodies and our
pre-thinking processes operate on these analog inputs. It is our perception
that first filters, and our rational mind that then converts these inputs to
digital form to allow us to apply rationality and logic. The best digital
computers can do is EMULATE human intelligence. My intuition is that
analog computers can successfully and fully RECREATE the human thinking process.
For what it's worth... Bill
From: Zen_Forum@yahoogroups.com [mailto:[EMAIL PROTECTED]] On Behalf Of Edgar Owen
Sent: Sunday, August 24, 2008 7:22 PM
To: [EMAIL PROTECTED]; Zen_Forum@yahoogroups.com
Subject: [Zen] Re: [evol-psych] Essay: The Shy Computer
All this discussion is unfortunately moot, since it depends entirely on how
the computer is programmed, what kinds of sensory inputs it is exposed
to, and what kind of control over active devices with feedback it has.
There is near infinite possible variation here. So one can construct
computers of any type with any set of capabilities. To see this, just
consider my contention that if a computer were constructed out of
biological materials according to a human genetic blueprint, then it would be
indistinguishable from a human. Therefore the question of whether computers
are sentient or conscious or not depends entirely on the definition of
consciousness and the design of the computer. It is trivially easy to answer
whether computers are/can be conscious or not. The answer is YES. One defines
consciousness and then one builds the computer to those specs. For the
conservative that might mean building one to human specs (what age ...),
but it is always theoretically possible.
In my view all computers are conscious of what they are conscious of, as are
all organisms. One must never fall into the fallacy of confusing
self-consciousness with consciousness. Self-consciousness is just consciousness
of the concept of a self, which is only one of the many contents of
consciousness.
On Aug 24, 2008, at 5:18 AM, Robert Karl Stonjek wrote:
----- Original Message -----
From: Stan Franklin
To: Evolutionary Psychology list
Sent: Sunday, August 24, 2008 5:36 AM
Subject: Fwd: [evol-psych] Essay: The Shy Computer
Robert, I enjoyed your The Shy Computer essay. Thanks for it. My comments
are below. All the best. Stan
Our first observation is that, in response to stimuli or questions, the
robot can indeed perform many cognitive functions. But when the
stimulus is complete it more or less stops. Its curiosity appears
largely absent, and the only conversation it engages in revolves around
basic needs (as programmed in to simulate the living, e.g. hunger, thirst).
Curiosity (exploration) is often built in to software agents and autonomous
robots as being necessary for learning. Some autonomous software agents and
robots generate their own internal stimuli to which they respond, following
their own agenda. My IDA agent would sometimes initiate contact with a
sailor about new jobs when the sailor had not contacted IDA in a while.
... how does one program in a general curiosity without telling the robot
what to be curious about?
Every autonomous agent must have some built-in primitive sensing,
motivating, and acting capabilities. Built-in curiosity may simply be a
motivation to do, or to pay attention to, the novel. This, of course,
requires memory of an appropriate type, which can also be built in.
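A minimal sketch of that idea in Python - curiosity as a built-in motivation to attend to the novel, backed by a built-in memory - might look like this. The stimulus representation (plain strings) and the novelty measure are my own illustrative assumptions, not part of IDA:

```python
from collections import Counter

class CuriousAgent:
    """Attends to whichever stimulus its memory has seen least often."""

    def __init__(self):
        self.memory = Counter()   # built-in memory of past stimuli

    def novelty(self, stimulus):
        # Novel = rarely (or never) encountered before.
        return 1.0 / (1 + self.memory[stimulus])

    def attend(self, stimuli):
        # Built-in motivation: pick the most novel stimulus, then remember it.
        choice = max(stimuli, key=self.novelty)
        self.memory[choice] += 1
        return choice

agent = CuriousAgent()
agent.attend(["red ball", "green cube"])
```

On the first call every stimulus is equally novel and ties are broken by list order; after that, attention shifts to whatever the memory has seen least, without anyone telling the agent what to be curious about.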
But the robot takes no interest in the drawings it has made, nor in
those made by other people.
It certainly can, depending on the built-in motivations.
You would have to specify each domain of curiosity quite specifically, which
is not human-like. Humans are continuously finding new areas of interest.
I was considering the computer as an analogue of human consciousness and
human evolved behaviour, rather than the idea of making a software agent.
Sure, you can manually program a software agent to be curious in a particular
way in a particular domain, but is this how humans do it?
I would also point out that when humans need extra information to complete a
task they seek it, but that is not the same as curiosity by the common
usage of the word. Curiosity is the seeking of information for no specific
or immediate reason.
The Robot talks almost endlessly, but mainly not to anyone - it just
talks. And it doesn't take much interest in the chatter of others.
There's no reason the robot shouldn't speak only when it seems appropriate
to the robot, nor that it shouldn't be interested in what others say.
If expression (output) were programmed in the way I have suggested, the
robot would start acting like a toddler - they talk endlessly, but not to
anyone (90% of verbal output occurs without a listener present). That is
what I am trying to understand - what would make a robot start acting like a
toddler?
Thus three processes are in play - expression, curiosity, and the integration
of information into a single composite (self) image. As both the internal
and external consciousness can sense both the world and the self,
and each other, the stage is set for all the complexity normally associated
with consciousness.
Expression, in the form of email messages in English, and the sensing of
both internal and external stimuli are both a part of our IDA software
agent. To set the stage for human-like complexity, more is needed, particularly
learning in several modes. This led us to LIDA, Learning IDA.
Why does it learn? Because you have specifically instructed it to learn.
This form of prescriptive instruction is not allowed in my model, as I have
described. A robot designed with human-like modules would not be anything like
IDA or her sister LIDA. A human-like robot would have to sleep or would stop
functioning normally after a few days (for instance).
Note that your software agent always does as you have instructed it to.
Does it, after learning, want to take on some other task? Does it ever
become a music composition agent instead of filling out forms for the Navy?
This uncertainty, which we would find in the model I have suggested, means
that the robot in my piece is an analogue of human consciousness that
could not be reliably set up for one particular task.
Note also that the more effective the information gathering skills of an
agent are, the less information you need to supply initially. For instance,
a pony can walk just hours after being born, can recognise its mother, and can
feed by itself. Humans can not, but humans are born with far less prescriptive
knowledge - we are more effective information gatherers and learners.
Thus if your software agent was to evolve toward human-ness, you would find
that the amount of prescriptive information required falls off as it evolves,
but the number of personalities, predispositions and career options increases
considerably, until the only way that you could reliably create an agent for
a specific task is to select from a population of agents those that are
'naturally' predisposed to the proposed career.
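That closing suggestion - selecting suitable agents from a varied population rather than prescriptively programming one - can be shown in a toy Python sketch. The trait names, the random-predisposition model, and the threshold are all illustrative assumptions of mine:

```python
import random

TRAITS = ["music", "form_filling", "navigation"]

def make_agent(rng):
    # Each agent is 'born' with random predispositions, not instructions.
    return {trait: rng.random() for trait in TRAITS}

def select_for_career(population, career, threshold=0.8):
    """Keep only agents 'naturally' predisposed to the proposed career."""
    return [a for a in population if a[career] >= threshold]

rng = random.Random(0)
population = [make_agent(rng) for _ in range(100)]
musicians = select_for_career(population, "music")
```

No agent was ever told to be musical; the reliability comes entirely from selection over a varied population, which is the point of the paragraph above.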
Robert Karl Stonjek
Stan Franklin, Professor of Computer Science
W. Harry Feinstone Interdisciplinary Research Professor
Institute for Intelligent Systems
FedEx Institute of Technology
The University of Memphis
Memphis, TN 38152 USA
fax 901-678-5129 [EMAIL PROTECTED]