Ken,

Wow.  I was going to say, this is one of the most interesting posts I have
read on the AGI list in a while, until I realized it wasn't on the AGI list.
Too bad.  I have copied this response and your original email (below) to the
AGI list to share the inspiration.

In the following I have copied certain parts of your post and followed them
by questions or comments.

>KEN LAWS=====> And much of the advanced robotic planning software
developed at NASA Ames is based on particle filters,
a method of representing probability distributions
as they pass through various nonlinear transformations.
(It remains to be seen whether any of this software
will find use in actual missions, but I'm betting it will
be used in the next Mars rovers.)

ED PORTER=====> Probability particle filters sound cool.  I assume it means
you only consider or transmit information about probability (or
probabilistic implication) distributions, or changes in such distributions,
that have more than a certain concentration in a given portion of space/time
at those locations in space/time.  Is that correct?  And what sort of
nonlinear transforms are you talking about?

 
>KEN LAWS=====> Artificial neural networks, like humans, have a remarkable
ability to deal with noise inputs and under-constrained models,
but the learning is very slow.  That's why evolution has
provided us with a priori brain structuring, instead of
a tabula rasa mind.  It makes the learning tractable.

ED PORTER=====> Other than the way sensory, homeostatic, body-sensation, and
other info is mapped into our brains, the cortico-basal-ganglia-thalamic
feedback loop, the cortico-cerebellum-thalamic feedback loop, and the other
pre-designed plug-and-play interfaces to the reptile brain (all of which
establish a certain type of architecture and control structure), what are
you talking about?

I would be very interested in knowing what types of constraints, other than
these basic architectural ones, are involved.

I just attended a lecture this weekend where a Harvard researcher on the
unconscious mind said that one-day-old babies have been shown to be able to
mimic a few basic facial expressions, such as sticking out their tongue and
putting their mouth into a small circle, as when saying "who".  This is hard
to understand, because one would imagine that at this age a child fresh from
the fog of the womb would not have had time to build up the visual patterns
enabling it to recognize facial features, and furthermore would not have
had time to map the correspondence between that blob of pink sticking out of
the hole in somebody's face and the baby's own tongue or mouth.  (I don't
know how much evidence supports this study.)

I have not had anybody explain to me how such instinctual programming could
be represented in advance of the learned, experientially derived patterns
out of which most mind patterns are built.  The one exception is Sam
Adams's explanation, in an off-stage discussion at the Singularity
Conference, of how newborns are designed to visually focus on the female
areola, because tests have shown their visual system is pre-wired to detect
circular patterns.

>KEN LAWS=====>For those who prefer fine-scale brain/mind modeling,
look into the decades of theoretical and simulation work
by the SOAR community, and by the APEX community,
for human sensor/manipulator learning simulations.

ED PORTER=====> I haven't read about SOAR for ten years.  It struck me as a
generalized expert system (if-then rules), but one with a relatively
enlightened goal structure and learning structure for an expert system.  Has
it grown into a real contender for human-level AGI, and what sorts of tasks
is it currently capable of?  Re APEX, I have never heard of it.  Do you have
any good URLs summarizing the nature and capabilities of each?

>KEN LAWS=====>
 ...

Early pattern recognition researchers had high hopes
for statistical learning, but eventually realized
that the magic is almost always in feature extraction
rather than the statistical back end.  Represent a problem
well and it may be easy to solve; badly and you'll need
more computing power than you can afford.
...

ED PORTER=====> I think most intelligent AGI models envision a system that
has many representations which compete for existence based on usefulness.
That is one way of addressing this problem.  Another is to understand that
for certain complex problems, such as those we who are trying to design AGI
often face, part of the problem is creating the proper novel representation.
That can involve a lot of trial and error and exploration, which hopefully
tends to build up patterns representing partial or not-quite-right solutions
that over time probabilistically increase the chance of synthesizing a
better representation.

The Novamente-OpenCog approach should be able to use both of these methods
to find proper representations, although the system should be biased toward
learning how to learn, which includes learning how to select appropriate
representations.

Do you agree?

With regard to your general discussion about different levels of sensory
input, my feeling is that one can develop and test many of the aspects of a
generalized AGI architecture on many different types and sizes of problems,
although certain additional issues may have to be addressed as you change
the nature, diversity, and size of those problems.

If one takes the Novamente/OpenCog approach, it seems to me that many of the
same issues would be involved in creating an NL understanding and generation
system as in creating a visual understanding and output system, or a robotic
environmental/mechanical/goal-state recognition and generation system.  In
my mind, purely cerebral AGIs are mind robots, involving goals, hierarchical
behaviors, task-feedback monitoring, etc., just like a physical robot.

Thus I think the basic idea behind OpenCog makes sense: if it can get a
large enough community to give it some momentum, the basic architecture
should emerge, and that architecture should be flexible enough to adapt
itself to many different types of problems.

Do you agree?  

If not, what other approaches would you suggest?

Thanks for your post.

Ed Porter




-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Ken Laws
Sent: Tuesday, December 04, 2007 4:13 PM
To: [EMAIL PROTECTED]
Subject: Re: A global approach to AI in virtual, artificial and real worlds


> -- The car from
> Stanford that won the original DARPA Grand Challenge was controlled
> based on probabilistic robotics....

And much of the advanced robotic planning software
developed at NASA Ames is based on particle filters,
a method of representing probability distributions
as they pass through various nonlinear transformations.
(It remains to be seen whether any of this software
will find use in actual missions, but I'm betting it will
be used in the next Mars rovers.)
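[For readers unfamiliar with the technique: a minimal, generic sketch of one
particle-filter cycle, in Python.  This is purely illustrative and is not
the NASA Ames software; the toy dynamics and noise levels are my own
assumptions.]

```python
import math
import random

def particle_filter_step(particles, move, measure, observation):
    """One predict-weight-resample cycle of a basic particle filter.

    particles:   list of states (here, floats) approximating the belief
    move:        (possibly nonlinear) state-transition function with noise
    measure:     maps a state to its expected observation
    observation: the actual noisy measurement
    """
    # Predict: push each particle through the nonlinear dynamics.
    predicted = [move(p) for p in particles]
    # Weight: score each particle by how well it explains the observation
    # (Gaussian likelihood, unnormalized).
    weights = [math.exp(-0.5 * (measure(p) - observation) ** 2)
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new population in proportion to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))

# Toy example: track a scalar state with nonlinear dynamics x' = x + sin(x).
random.seed(0)
particles = [random.uniform(-1.0, 1.0) for _ in range(500)]
true_state = 0.5
for _ in range(10):
    true_state = true_state + math.sin(true_state)
    obs = true_state + random.gauss(0.0, 0.1)   # noisy measurement
    particles = particle_filter_step(
        particles,
        move=lambda x: x + math.sin(x) + random.gauss(0.0, 0.05),
        measure=lambda x: x,
        observation=obs,
    )
estimate = sum(particles) / len(particles)
```

The point is that the particle cloud represents the whole probability
distribution, so it survives the nonlinear `move` step where a closed-form
Gaussian filter would not.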


However, as Ben said, probabilistic methods have
computational tractability issues.  You can't just
"use a physical stereoscopic array of cameras and microphones
to build a basic object library and vocabulary via stationary video
and audio sensors" because of the problem's dimensionality.
Artificial neural networks, like humans, have a remarkable
ability to deal with noise inputs and under-constrained models,
but the learning is very slow.  That's why evolution has
provided us with a priori brain structuring, instead of
a tabula rasa mind.  It makes the learning tractable.


> We, humans, are what we are because we have 5 senses
> (6 if you add proprioception).

And many more, if you care to include them.  See
http://en.wikipedia.org/wiki/Senses .
And even that leaves out the social senses that we
are discovering in the form of mirror neurons.

One reason we have so many senses is that they help
keep pattern recognition problems from being so massively
under-determined.  The only question that a situated agent
really has to answer, moment by moment, is "What do I do now?"
The more potentially relevant inputs, the easier and
more reliable the solution and the less need there is
for high-level reasoning.

Consider that man on the beach, for instance.  He doesn't
need to compile thousands of years of experience with
near and distant dogs in various crowd situations.
He can look around and see that no one else is
worried about the distant dog, hence he needn't worry
either.  The "wisdom of crowds" has already compiled
and presented the probability that he needs.

One problem with simulated worlds is that they lack this
physical and social richness.  This forces attention
away from the immediate "What do I do now?" problem
toward a more abstract level of reasoning.  However,
the toy worlds also provide a pre-parsed experience
of reality, much like the simplified world that parents
present to their children.  This can be helpful.

Early pattern recognition researchers had high hopes
for statistical learning, but eventually realized
that the magic is almost always in feature extraction
rather than the statistical back end.  Represent a problem
well and it may be easy to solve; badly and you'll need
more computing power than you can afford.  (A classic
example is the question of whether you can cover
a chess board with dominoes if you've first removed
the two chess board squares at opposite corners.)
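[The chessboard example can be made concrete in a few lines.  Represented
naively, deciding tileability looks like a search over domino placements;
represented via the square coloring, a simple parity count settles the
opposite-corners case.  A sketch of the counting argument:]

```python
def passes_parity_check(removed):
    """Necessary condition for tiling the mutilated chessboard.

    A domino always covers one black and one white square, so a tiling
    can exist only if the remaining squares have equal color counts.
    (Equal counts are necessary, not sufficient in general, but the
    coloring alone already rules out the opposite-corner case.)
    """
    colors = [(r + c) % 2 for r in range(8) for c in range(8)
              if (r, c) not in removed]
    return colors.count(0) == colors.count(1)

# Opposite corners (0,0) and (7,7) share a color, so tiling is impossible:
print(passes_parity_check({(0, 0), (7, 7)}))  # False
# Two adjacent (opposite-colored) squares leave the parity balanced:
print(passes_parity_check({(0, 0), (0, 1)}))  # True
```

The coloring is exactly the "good representation": it collapses an
exponential search into a count.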

The advantage of a simulated world is that the coarse-scale
learning is easier; the disadvantage is that you can only
learn the world structure that is presented.  One of
the lessons of the expert systems fad was that the world
is seldom fully represented from any one perspective,
and so one can't expand the capabilities of an expert system
merely by adding more knowledge.  It may be necessary
to use multiple representations, plus meta-level arbiters
to combine them.  (Exactly what human brain centers
seem designed to do.  For instance, we humans use
something like 20 different analogies to reason about love,
according to George Lakoff.  Love is a road; love is
a carriage; etc.  Robert J. Sternberg has likewise identified
at least two dozen conflicting models of marriage.)

Roboticists learned, many decades ago, that success
in simulation is only a tiny step toward success in
the real world.  Labs that could afford real hardware
had no interest in simulations (even when I, as a funder,
urged that approach).  I see that Stanford now has impressive
simulation software for robot dynamics, so perhaps the
earlier lesson was over-learned.  Still, beware the idea
that simulation addresses real-world problems.  It may,
or it may not.

Ben commented that he can't afford a robotic approach
for this project, and I agree.  However, those who really
want to go that route shouldn't despair; there are ways
to do it.  Remember Seymour Papert's turtle graphics,
and the robot turtles that resulted?  A more modern
distributed educational robotics project was developed
by Prof. Illah Nourbakhsh as the Personal Rover Project,
http://www.cs.cmu.edu/~personalrover/, which has now
become the TeRK project, http://www.terk.ri.cmu.edu/ .
I haven't been following it, but Nourbakhsh did develop
a roving robotic platform affordable by individual children,
plus ways of running robots remotely and of collaborating
on the development of software libraries.

For those who prefer fine-scale brain/mind modeling,
look into the decades of theoretical and simulation work
by the SOAR community, and by the APEX community,
for human sensor/manipulator learning simulations.
I believe that these and OpenCog have much in common,
though SOAR and APEX are more closely tied to human
emulation at the level of memory chunking and
millisecond response times.

OpenCog is focused on a much coarser level of Turing test.
It can be taken in many directions.  I'm a bit skeptical
of the dog-parrot-monkey trajectory though, because
I'm not convinced that interest can be sustained
or that the results will teach us much of anything.
(But what do I know?  Not much about it, yet.)

One concern is that the whole attraction of training a dog
is that it is difficult to do well, hence challenging to attempt
and impressive when accomplished.  Designing an AI
to be difficult to train seems counter-productive, but
necessary for the application.  (I acknowledge, of course,
that even "difficult to train" is an advance over the recent
state of the art, "impossible to train.")

It's a baby step, but at least it's a step.


One final note:

> I think that issues like that would be better discussed on the AGI list
> [email protected]
> than on this list which is supposed to be devoted to the OpenCog project
> in particular ;-)

Wow, that takes me back to the old Usenet days, and to
my time as the AIList Arpanet bboard moderator.  We techies
keep organizing societies around the Dewey Decimal System,
but the truth is that each discussion has a social dimension.

We are gathered in this virtual time and place to discuss
specific issues and implementations, but also to discuss
whatever the hell we feel like discussing.  The same topic
will develop differently in different lists, entailing differing
contributions and effects.  I'm not a member of AGI,
for instance, so this contribution would be absent there.

The best way to handle discussion overlap is for interested people
to report on any discussions that need cross-communication,
summarizing relevant questions and conclusions and also
pointing people to other conversations they might wish to join.
Indeed, such reporting can also take place across time,
as in some of my comments above.

Let a thousand blogs bloom.

--
Ken Laws

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"OpenCog.org (Open Cognition Framework)" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to
[EMAIL PROTECTED]
For more options, visit this group at
http://groups.google.com/group/opencog?hl=en
-~----------~----~----~----~------~----~------~--~---

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=72059876-578332
