On 10/8/2012 11:45 AM, Alberto G. Corona wrote:
Deutsch is right about the need to advance Popperian epistemology,
which ultimately is evolutionary epistemology: how evolution makes a
portion of matter ascertain what is true, in virtue of what, and for
what purpose. The idea of intelligence needs not only a knowledge of
what is true but also a motive for acting and therefore for using this
intelligence. If there is no purpose there is no acting; if no acting, no
selection of intelligent behaviours; if no evolution, no intelligence.
Intelligence is not only made for acting according to arbitrary
purposes: it has evolved from the selection of resulting behaviours for
precise purposes.

An ordinary purpose is not separable from other purposes that are
coordinated toward a particular higher purpose, and the chain of
reasoning and acting means that a designed intelligent robot also needs
an ultimate purpose. Otherwise it would be a sequencer and achiever of
disconnected goals; at some level the goals would never have
coordination, that is, it would not be intelligent.

I agree that intelligence cannot be separated from purpose. I think that's why projects aimed at creating AGI flounder - a "general" purpose tends to be no purpose at all. But I'm not so sure about an ultimate goal, at least not in the sense of a single goal. I can imagine an intelligent robot that has several high-level goals which are to be satisfied but not necessarily summed or otherwise combined into a single goal.
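To make the distinction concrete, here is a minimal sketch (all names and numbers are hypothetical, purely for illustration) of keeping several high-level goals as separate conditions to satisfy, versus collapsing them into one weighted "ultimate" objective:

    # Several high-level goals, each tracked separately (hypothetical example).
    goal_status = {
        "keep battery charged": 0.9,   # degree to which each goal is currently satisfied
        "avoid damage": 1.0,
        "collect samples": 0.4,
    }

    def all_goals_satisfied(status, threshold=0.7):
        # Each goal must be met on its own terms; nothing is summed into one number.
        return all(level >= threshold for level in status.values())

    def single_combined_goal(status, weights):
        # The alternative: collapse everything into one utility, which forces a
        # choice of weights and so implicitly defines a single ultimate goal.
        return sum(weights[g] * level for g, level in status.items())

    print(all_goals_satisfied(goal_status))                 # False: sampling lags behind
    print(single_combined_goal(goal_status,
                               {"keep battery charged": 0.3,
                                "avoid damage": 0.5,
                                "collect samples": 0.2}))   # 0.85

The point of the sketch is only that the first robot has no single number it is trying to maximize, yet it still acts coherently.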

  This is somewhat different from humans, because many of our goals are
hardcoded and not accessible to introspection, although we can use
evolutionary reasoning to obtain falsifiable hypotheses about
apparently irrational behaviour, like love, anger, aesthetics, pleasure
and so on.

There's no reason to give a Mars Rover introspective knowledge of its hardcoded goals. A robot would only need introspective knowledge of goals if there were the possibility of changing them - i.e. if they were not hardcoded.

However, men from time to time ask themselves about the deep
meaning of what they do, especially when a whole chain of goals
has failed and they are at a bottleneck, because this is the right
thing to do for intelligent beings. A truly intelligent being therefore
has existential, moral and belief problems. If an artificial
intelligent being has these problems, the designer has solved the
problem of AGI at the deepest level.

I think it's a matter of depth. A human is generally more complex and has a hierarchy of goals. A dead end in trying to satisfy some goal occasions reflection on how that goal relates to some higher goal: how to backtrack. So a Mars Rover may find itself in a box canyon so that it has to backtrack, and this makes the journey to its objective too long to complete before winter, so it has to select a secondary objective to reach. But it can't reflect on whether gathering data and transmitting it is good or not.
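A small sketch of that kind of limited backtracking (the goal names and sol estimates are made up for illustration): the rover can replan among sub-goals, but the top-level goal is fixed and never itself examined.

    # Hypothetical goal hierarchy: one fixed top-level goal, ordered sub-goals.
    GOALS = {
        "gather and transmit data": ["reach primary site", "reach secondary site"],
    }

    ESTIMATED_SOLS = {"reach primary site": 120, "reach secondary site": 40}

    def reachable_before_winter(sub_goal, sols_remaining):
        # Hypothetical feasibility check: estimated drive time vs. time left.
        return ESTIMATED_SOLS[sub_goal] <= sols_remaining

    def replan(top_goal, sols_remaining):
        # Backtrack within the hierarchy: try each sub-goal in priority order,
        # but never question whether the top-level goal is worth pursuing.
        for sub_goal in GOALS[top_goal]:
            if reachable_before_winter(sub_goal, sols_remaining):
                return sub_goal
        return None  # no feasible sub-goal; there is no deeper level to appeal to

    # After a box-canyon detour, only 60 sols remain before winter:
    print(replan("gather and transmit data", sols_remaining=60))
    # -> "reach secondary site"

The backtracking stops at the top of the hierarchy; that is the depth the rover lacks and the human has.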

A designed AGI has no such "core engine" of impulses and perceptions
that drives intelligence to action in the first place: curiosity,
fame and respect, power, social navigation instincts. It has to start
from scratch. Concerning perceptions, a man has hardwired
perceptions that create meaning: there is brain circuitry at
various levels that makes him feel that the person in front of him is
another person. But really it is his evolved circuitry that creates the
impression that this is a person and that this is true, instead of a
bunch of moving atoms. Popperian evolutionary epistemology builds from
this. All of this links computer science with philosophy at the deepest level.

And because man evolved as a social animal he is hard wired to want to exchange knowledge with other humans.

Another comment concerning design: evolutionary designs are
different from rational designs. The modularity in rational design
arises from the fact that reason cannot reason with many variables at
the same time. Reason uses divide and conquer. Object-oriented design,
modular architecture and so on are a consequence of that limitation.
These designs are understandable by other humans, but they are not the
most efficient. In contrast, modularity in evolution is functional.
That means that if one brain structure is near another in the brain,
forming a greater structure, it is for reasons of efficiency,

Are you saying spatial modularity implies functional modularity?

not for
reasons of modularity.

No, it may be for reasons of adaptability. Evolution has no way to reason about efficiency, or even a measure of efficiency. It can only try random variations and copy the ones that work.

The interfaces between modules are not
discrete, but pervasive. This makes a reverse engineering
of the brain essentially impossible.

And not even desirable.

