On 9/14/2011 8:07 PM meekerdb said the following:
On 9/14/2011 10:34 AM, Evgenii Rudnyi wrote:
On 9/13/2011 9:23 PM meekerdb said the following:
On 9/13/2011 12:00 PM, Craig Weinberg wrote:
It's easy to assume that it helps, just as it's easy for me to
assume that we have free will. If we don't need our conscious
mind to make decisions, then we certainly don't need the
fantasyland associated with our conscious minds to help with
that process. Think of building a robot that walks around and
looks for food and avoids danger. Why would it help to
construct some kind of Cartesian theater inside of it?
Functionally, there is no reasonable explanation for perception
or experience, especially if you believe in determinism.

It would help, even be essential, to the robot learning for it
to remember things. But not just everything. It needs to
remember important things, like what it was doing just before it
fell down the stairs. So you design it to continually construct a
narrative history and if something important happens you tuck
that piece of narrative history into a database for future
reference by associative memory ('near stairs'? don't back up).
This memory consists of connected words learned by the
speech/hearing module and images. For efficiency you use these
same modules for associative construction of the narrative memory
and for recall. Hence part of the same processing is used for
recall and cogitation as well as perception and learning. That's
why thinking has similarity to perception, i.e. sitting in a
Cartesian theater.
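The narrative-memory design described above (a rolling narrative history, important episodes tucked into a database, associative recall by words like "near stairs") can be sketched as a toy program. This is only an illustration of the scheme as described; all class and method names here are invented for the sketch, not anything from the discussion.

```python
# Toy sketch of the narrative-memory scheme: the robot keeps a rolling
# narrative buffer, and when something important happens (e.g. a fall)
# it stores the recent narrative in an associative store keyed by the
# words occurring in it, for later cue-based recall.
from collections import deque, defaultdict

class NarrativeMemory:
    def __init__(self, window=5):
        self.buffer = deque(maxlen=window)   # rolling narrative history
        self.index = defaultdict(list)       # word -> stored episodes

    def observe(self, event):
        """Continually construct the narrative history."""
        self.buffer.append(event)

    def mark_important(self):
        """Tuck the recent narrative into the store for future reference."""
        episode = tuple(self.buffer)
        for event in episode:
            for word in event.split():
                if episode not in self.index[word]:
                    self.index[word].append(episode)

    def recall(self, cue):
        """Associative recall: episodes sharing any word with the cue."""
        episodes = []
        for word in cue.split():
            for ep in self.index.get(word, []):
                if ep not in episodes:
                    episodes.append(ep)
        return episodes

mem = NarrativeMemory()
for step in ["walking forward", "near stairs", "backing up", "fell down stairs"]:
    mem.observe(step)
mem.mark_important()              # the fall makes this episode memorable
print(mem.recall("near stairs"))  # the cue retrieves the stored episode
```

The point of the sketch is that the same associative index serves both storage and recall, which is the efficiency argument made above: one mechanism doing double duty for perception-time learning and later cogitation.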

I would agree that thinking would be easy to obtain provided that perception is already there. It is, though, an open question what perception by a robot means. Does, for example, an automatic door perceive?


That seems to be a question for the makers of dictionaries. We know
that the automatic door receives input from its environment and acts
on it. Do we want to call that "perceiving"? We could. Or we could
reserve "perceiving" for cases where there is memory and learning, e.g.
an automatic door that learns face recognition. Or we could also
require that there be some self-awareness, e.g. an automatic door
that runs a BIT (built-in test) before either working or reporting a problem with its
mechanism. This is part of my point that when we have fully
developed, engineering level understanding of intelligence and
consciousness we will see them as very complex, multi-dimensional
subjects. That is one reason I do not embrace Bruno's idea that every
Lobian computing machine is equally intelligent and conscious. That
may be true, but in some abstract, uninteresting sense.


Nowadays one can imagine an intelligent automatic door with many functions: for example, when I say "Remember", it stores my image, and afterwards it opens only for persons that it remembers, not for everybody. I believe that such functionality is already a reality. So the question is the same: does such an intelligent door perceive? I agree that this is also a matter of definitions, but that would be the goal of the example, to define perception in a more definite way. Along this way I would suggest introducing conscious perception and unconscious perception, and I would say that such a door could have unconscious perception but not conscious perception.
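The remembering door, as described, can be sketched in a few lines; on this view its "unconscious perception" is nothing more than a stored set and a membership test. The face recognition itself is stubbed out here, and all names are invented for the illustration.

```python
# Minimal sketch of the "remembering door": on the command "Remember"
# it stores the current visitor's identity, and it opens only for
# identities it has stored. Recognition is reduced to a set lookup.
class IntelligentDoor:
    def __init__(self):
        self.known_faces = set()

    def remember(self, face_id):
        """On the spoken command 'Remember', store the current image."""
        self.known_faces.add(face_id)

    def should_open(self, face_id):
        """Open only for persons the door remembers, not for everybody."""
        return face_id in self.known_faces

door = IntelligentDoor()
door.remember("evgenii")
print(door.should_open("evgenii"))   # True: a remembered visitor
print(door.should_open("stranger"))  # False: the door stays closed
```

That the whole behaviour fits in a lookup is perhaps the clearest way to put the question: if this counts as perception, it seems natural to call it unconscious perception.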

An even better example would be Big Dog:


Here it would be hard to say that Big Dog does not perceive. Yet the question remains whether this perception is still unconscious, or whether one can already find some elements of conscious perception in it.

Then we can ask ourselves the same question about insects, and finally go onward along the tree of life.



You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.