Hi Bill,

I posted it here because consciousness is the essence of Zen. Zen is precisely pure consciousness, so the more we understand about the true nature of consciousness, the more we understand about Zen. I believe we can get some good insights by considering non-human consciousnesses. I have a couple of papers on the nature of consciousness and the 'true nature of things'; if anyone is interested, I can provide links.


On Aug 25, 2008, at 12:32 AM, <[EMAIL PROTECTED]> wrote:

Edgar, Robert, Stan, et al...

I am not a participant on the Evolutionary-Psychology Forum but picked this thread up from another forum that I do participate in with Edgar. I did not read the referenced essay. I don't have a particular interest in the topic of machine consciousness, but I do have a lot of experience in developing and applying machine intelligence (decision-making) solutions in the commercial sector. This work has involved solutions implemented on digital computers, ranging from techniques as mundane as statistical regression to much more interesting ones such as genetic and evolutionary programming. I know the systems you are talking about here are much more advanced than these, but I would like to offer some observations I've made and conclusions I've reached from my work.

- All attempts at developing machine intelligence that I have seen or read about are based on a flawed assumption: that our decision-making process is based on logic. Human intelligence is much more and much less than that. The bottom line is that we attempt to construct machines that think the way we think we think, not how we really think, because we don't really know how we really think. It is my personal belief that humans come to decisions based on a myriad of inputs, and only after the fact apply logic to validate or justify the decision.

- All machine solutions I know of operate entirely in the realm of 'what' and 'when', but cannot adequately explain 'why' (except as a function of what and when). I think this is because 'what' and 'when' are (in a practical sense) measurable qualities and therefore absolute, whereas 'why' is a totally ambiguous and relative quality.

- If there is any hope for human-like machine intelligence (and maybe
consciousness), my suggestion is to abandon the current and seemingly
single-minded focus on digital computers, and concentrate on analog
computers. Digital computers work great on purely logical processes, like
mathematics and logic itself. It is my belief that raw sensory input (sound, light, tactile) is analog. I believe our bodies and our pre-rational thinking processes operate on these analog inputs. It is our consciousness that first filters, and our rational mind that then converts, these inputs to digital to allow us to apply rationality and logic. The best digital computers can do is EMULATE human intelligence. My intuition is that only analog computers can successfully and fully RECREATE the human thinking process.

For what it's worth...Bill!

From: Zen_Forum@yahoogroups.com [mailto:[EMAIL PROTECTED]] On Behalf
Of Edgar Owen
Sent: Sunday, August 24, 2008 7:22 PM
To: [EMAIL PROTECTED]; Zen_Forum@yahoogroups.com
Subject: [Zen] Re: [evol-psych] Essay: The Shy Computer


All this discussion is unfortunately moot, since it depends entirely on how the computer is programmed, what kinds of sensory inputs it is hooked up to, and what kind of control over active devices with feedback it is given. There is near infinite possible variation here. So one can trivially imagine computers of any type with any set of capabilities. To see this, just consider my contention that if a computer were constructed out of biological materials according to a human genetic blueprint, then it would be indistinguishable from a human. Therefore the question of whether computers are sentient or conscious depends entirely on the definition of consciousness and the design of the computer. Whether computers are, or can be, conscious is trivially simple: the answer is YES. One simply defines consciousness and then builds the computer to those specs. For the very conservative that might mean building one to human specs (what age level?) but it is always theoretically possible.

In my view all computers are conscious of what they are conscious of, as are all organisms. One must never fall into the fallacy of confusing self-consciousness with consciousness. Self-consciousness is just consciousness of the concept of a self, which is only one of the many contents of consciousness.


On Aug 24, 2008, at 5:18 AM, Robert Karl Stonjek wrote:

----- Original Message -----
From: Stan Franklin
To: Evolutionary Psychology list
Sent: Sunday, August 24, 2008 5:36 AM
Subject: Fwd: [evol-psych] Essay: The Shy Computer

Robert, I enjoyed your essay The Shy Computer. Thanks for it. My comments
are below. All the best. Stan

Our first observation is that, in response to stimuli or instruction, the robot can indeed perform many cognitive functions. But when the response to stimuli is complete, it more or less stops. Its curiosity appears to be largely absent, and the only conversation it engages in revolves around its basic needs (as programmed in to simulate the living - e.g. hunger, thirst).

Curiosity (exploration) is often built into software agents and even some robots as being necessary for learning. Some autonomous software agents and robots generate their own internal stimuli, to which they respond. They have their own agenda. My IDA agent would sometimes initiate correspondence with a sailor about new jobs when the sailor had not contacted IDA in a timely fashion.
... how does one program in a general curiosity without telling the robot
what to be curious about?

Every autonomous agent must have some built-in primitive sensing,
motivating, and acting capabilities. Built-in curiosity may simply be the
motivation to do, or to pay attention to, the novel. This, of course,
requires memory of an appropriate type, which can also be built-in.
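A minimal sketch of what such built-in motivation might look like in code (Python, with toy names of my own - NoveltyMemory, CuriousAgent - and not Franklin's IDA/LIDA code): curiosity here is nothing more than a drive to attend to whichever stimulus is least like anything already in memory, so no domain of curiosity has to be specified in advance.

import random

# Toy names; illustrative only, not from IDA/LIDA.
class NoveltyMemory:
    """Remembers past stimuli so novelty can be measured."""
    def __init__(self):
        self.seen = []

    def novelty(self, stimulus):
        # Novelty = squared distance to the nearest remembered stimulus.
        if not self.seen:
            return float("inf")
        return min(sum((a - b) ** 2 for a, b in zip(stimulus, s))
                   for s in self.seen)

    def remember(self, stimulus):
        self.seen.append(stimulus)

class CuriousAgent:
    """Attends to whichever available stimulus is most novel."""
    def __init__(self):
        self.memory = NoveltyMemory()

    def attend(self, stimuli):
        target = max(stimuli, key=self.memory.novelty)
        self.memory.remember(target)
        return target

agent = CuriousAgent()
world = [(random.random(), random.random()) for _ in range(5)]
for _ in range(3):
    print("attending to", agent.attend(world))

Because attention is driven by distance from memory rather than by any pre-listed topic, the same loop stays 'curious' in whatever domain its stimuli happen to come from.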
But the robot takes no interest in the drawings it has made previously nor those made by other people.

It certainly can, depending on the built-in motivations.

You would have to specify each domain of curiosity quite precisely. This is not human-like: humans are continuously finding new areas of curiosity. I was considering the computer as an analogue of human consciousness, or of human evolved behaviour, rather than as an exercise in making a software agent.

Sure you can manually program a software agent to be curious in a particular
way in a particular domain, but is this how humans do it?

I would also point out that when humans need extra information to complete a task they seek it, but that is not the same as curiosity by the common usage of the word. Curiosity is the seeking of information for no obvious or immediate reason.

The robot talks almost endlessly, but mainly not to anyone - it just talks. And it doesn't take much interest in the chatter of others.

There's no reason the robot shouldn't speak only when it seems appropriate to the robot, nor any reason it shouldn't be interested in what others say.

If expression (output) were programmed in the way I have suggested, then a robot would start acting like a toddler - they talk endlessly, but not to anyone (90% of verbal output occurs without a listener present). This is what I am trying to understand - what would make a robot start acting like a toddler?

Thus three processes are in play - expression, curiosity, and the processing of information into a single composite (self) image. As both the internal and external consciousness can sense both the world and the physical self and each other, the stage is set for all the complexity normally found in humans.
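A minimal sketch of how those three processes might interact, under assumptions of my own (the Robot class and its method names are invented for illustration, not taken from the essay): curiosity gates what gets integrated, integration folds internal and external sensing into one composite self-image, and expression emits output regardless of any listener.

import random

class Robot:
    def __init__(self):
        self.self_image = {}  # single composite picture of world and body

    def sense_world(self):    # stands in for the external consciousness
        return {"light": round(random.random(), 2)}

    def sense_body(self):     # stands in for the internal consciousness
        return {"battery": round(random.random(), 2)}

    def curious(self, reading):
        # Curiosity: a reading matters when it differs from what
        # the self-image already holds.
        return any(self.self_image.get(k) != v for k, v in reading.items())

    def integrate(self, external, internal):
        # Fold both senses into the one composite (self) image, so
        # each can 'see' the other's contribution on the next cycle.
        self.self_image.update(external)
        self.self_image.update(internal)

    def express(self):
        # Expression: output fires whether or not anyone is listening,
        # like the toddler's running commentary.
        print("I notice:", self.self_image)

robot = Robot()
for _ in range(3):
    world, body = robot.sense_world(), robot.sense_body()
    if robot.curious(world) or robot.curious(body):
        robot.integrate(world, body)
    robot.express()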

Expression, in the form of email messages in English, and the processing of
both internal and external stimuli are both a part of our IDA software
agent. To set the stage for human-like complexity, more is needed, namely
learning in several modes. This led us to LIDA, Learning IDA.

Why does it learn? Because you have specifically instructed it to do so. This form of prescriptive instruction is not allowed in my model, as I have explained.

A robot designed with human-like modules would not be anything like IDA or
her sister LIDA.  A human-like robot would have to sleep or would stop
functioning normally after a few days (for instance).

Note that your software agent always does as you have instructed it to do. Does it, after learning, want to take on some other task? Does it want to become a music composition agent instead of filling out forms for the Navy?
This uncertainty, which we would find in the model I have suggested, means that the robot in my piece is an analogue of human consciousness and so
could not be reliably set up for one particular task.

Note also that the more effective the information-gathering skills of the agent are, the less information you need to supply initially. For instance, a pony can walk just hours after being born, can recognise its mother, and can feed by itself. Humans cannot, but humans are born with far less innate knowledge - we are more effective information gatherers and processors.

Thus if your software agent were to evolve toward human-ness, you would find that the amount of prescriptive information required falls off as it evolves, but the number of personalities, predispositions and career options balloons considerably, until the only way you could reliably create an agent for a specific task is to select, from a population of agents, those that are 'naturally' predisposed to the proposed career.
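A toy illustration of that last selection step, again with invented names rather than anything from IDA or LIDA: instead of instructing an agent, one generates a population with random predispositions and keeps those whose strongest pull happens to match the task.

import random

CAREERS = ["form-filling", "music composition", "exploration"]

def spawn_agent():
    # Random predisposition weights stand in for an evolved,
    # non-prescriptive starting state.
    return {career: random.random() for career in CAREERS}

def predisposed_to(agent, career):
    # An agent suits a career when that career is its strongest pull.
    return max(agent, key=agent.get) == career

population = [spawn_agent() for _ in range(100)]
recruits = [a for a in population if predisposed_to(a, "form-filling")]
print(f"{len(recruits)} of {len(population)} agents suit the task")

On average only about a third of such a population will turn out 'naturally' suited to any one of the three careers, which is exactly the selection cost the paragraph above describes.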

Kind Regards
Robert Karl Stonjek

Stan Franklin   Professor   Computer Science
W. Harry  Feinstone  Interdisciplinary  Research   Professor
Institute for Intelligent Systems
FedEx Institute of Technology
The University of Memphis
Memphis, TN 38152 USA
phone 901-678-1341
fax 901-678-5129             [EMAIL PROTECTED]
