On 11.10.2012 11:36 Evgenii Rudnyi said the following:
On 26.09.2012 20:35 meekerdb said the following:
An interesting paper which comports with my idea that "the problem
of consciousness" will be "solved" by engineering. Or John
Clark's point that consciousness is easy, intelligence is hard.
Consciousness in Cognitive Architectures: A Principled Analysis of
RCS, Soar and ACT-R
I have started reading the paper. Thanks a lot for the link.
I have finished reading the paper. I should say that I am not impressed.
First, interestingly enough:
p. 30 "The observer selects a system according to a set of main features
which we shall call traits."
Presumably this means that without an observer a system does not exist.
In a way this is logical: without a human being, what is available is
just an ensemble of interacting strings.
Now let me quote some passages to show you what the authors mean by
consciousness, in the order they appear in the paper.
p. 45 "This makes that, in reality, the state of the environment, from
the point of view of the system, will not only consist of the values of
the coupling quantities, but also of its conceptual representations of
it. We shall call this the subjective state of the environment."
p. 52 "These principles, biologically inspired by the old metaphor –or
not so metaphor but an actual functional definition– of the brain-mind
pair as the controller-control laws of the body –the plant–, provides a
base characterisation of cognitive or intelligent control."
p. 60 "Principle 5: Model-driven perception — Perception is the
continuous update of the integrated models used by the agent in a
model-based cognitive control architecture by means of real-time […]"
p. 61 "Principle 6: System awareness—A system is aware if it is
continuously perceiving and generating meaning from the countinuously […]"
p. 62 "Awareness implies the partitioning of predicted futures and
postdicted pasts by a value function. This partitioning we call meaning
of the update to the model."
p. 65 "Principle 7: System attention — Attentional mechanisms allocate
both physical and cognitive resources for system processes so as to […]"
p. 116 "From this perspective, the analysis proceeds in a similar way:
if modelbased behaviour gives adaptive value to a system interacting
with an object, it will give also value when the object modelled is the
system itself. This gives rise to metacognition in the form of
metacontrol loops that will improve operation of the system overall."
p. 117 "Principle 8: System self-awareness/consciousness — A system is
conscious if it is continuously generating meanings from continously
updated self-models in a model-based cognitive control architecture."
p. 122 'Now suppose that for adding consciousness to the operation of
the system we add new processes that monitor, evaluate and reflect the
operation of the “unconscious” normal processes (Fig.
fig:cons-processes). We shall call these processes the “conscious” ones.'
If I understood it correctly, when the authors develop software they
simply mark some bits as the subjective state and some processes as
conscious. Voilà! We have a conscious robot.
Let us see what happens.
You received this message because you are subscribed to the Google Groups
"Everything List" group.