WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and
responses

BELOW ARE THE MOST RECENT DISCUSSIONS CONCERNING POSSIBLE MISSING CONCEPTUAL
PIECES --- PROBLEMS THAT MIGHT STAND BETWEEN US AND AGI.

THESE COMMENTS ALL RELATE TO IMPORTANT ISSUES TO BE DEALT WITH --- BUT IT IS
NOT CLEAR THAT ANY OF THEM REPRESENTS A MAJOR CONCEPTUAL PROBLEM FOR WHICH WE
HAVE NO REASONABLE POTENTIAL SOLUTION.



==========================================================
==== Richard Loosemore Sat 4/19/2008 7:57 PM
==========================================================
Ed Porter wrote:
> RICHARD,
> 
> I can't provide "concrete reasons" why Novamente and roughly similar 
> approaches will work --- precisely because they are designed to 
> operate in the realm Wolfram calls computationally irreducible --- 
> meaning it cannot be modeled properly by anything substantially less 
> complex than itself.  Thus, whether or not it will work cannot be 
> formally proven in advance.

Just a few small details:

As I understand it, Ben has argued vehemently that Novamente is not 
subject to the computational irreducibility (complex systems) issue.

And complexity does not mean that "it cannot be modeled properly by anything 
substantially less complex than itself".  What it does mean is that it 
cannot be explained in an "analytic" manner.

====<ED PORTER= According to the definition of computational irreducibility
in Wikipedia, it is a concept that applies in varying degrees to differing
levels and types of description of a computation, whether in physical
reality or in a computer.  A large Novamente system would have so many
complex things going on that it would probably take more effort to devise a
model that would accurately predict its behavior, under the various types of
complex states it would develop, than to actually build the system itself.
I have a hunch that Ben would agree.  I think what Ben was saying is that
his system, like Hofstadter's Copycat, would be able to avoid the
potentially disastrous effects of complexity that you have often ominously
warned about.
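
To make Wolfram's point concrete, here is a minimal Python sketch --- my own
illustration, assuming nothing about Novamente's actual code --- using his
Rule 30 cellular automaton, the textbook example of computational
irreducibility: as far as anyone knows, the only way to learn the cell
values at step n is to actually run all n steps; there is no substantially
cheaper predictive shortcut.

def rule30_step(cells):
    """One Rule 30 update: new cell = left XOR (center OR right)."""
    n = len(cells)
    return tuple(cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                 for i in range(n))

def run(width=31, steps=16):
    """Print the evolution from a single live center cell."""
    row = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run()
>
</ED PORTER>====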

I would not ask anyone to formally prove that the systems at AGI 2008 
will work (me, of all people!).  Not formal proof, just something other 
than a hunch.

====<ED PORTER= Glad you of all people would not demand formal proof.  I
think Ben and people like him have something more than just a hunch.  Ben
has built a lot of AI systems; he's not an amateur.  Joscha Bach claims that
his system, which in some ways is more similar to some of my thinking than
Novamente is, has been tested and shown to scale efficiently (although,
admittedly, still in small systems), and that it has actually performed well
at automatically learning and creating hierarchical memory representations
that learn to perform desired functionality in the toy world his hardware
budget can support.  So these are not just hunches. >
</ED PORTER>====

> 
> I assume it would have been equally hard to provide "concrete reasons" 
> why Hofstadter's Copycat and his similar systems would work before 
> they were built.  But they did work.

Not really germane:  Copycat was unbelievably simple compared to AGI 
systems, and Hofstadter would never have claimed ahead of time that it 
would do anything, because it was an experimental system.

====<ED PORTER= I don't know how much or how little Hofstadter predicted
about his systems ahead of time.  Agreed, AGI is much more complex than
Copycat, but not by as much as you may think, because one of the basic
concepts in most AGI approaches is to apply a common architecture to many
different AI tasks.  I do think a lot of experimentation in terms of
parameter tuning, refinements, etc., will be required to get a
Novamente-like system to work, so to a certain extent it will be an
experimental system. >
</ED PORTER>====

> Since computational irreducibility is something you make a big point 
> about in one of your own papers you frequently cite --- you --- of all 
> people --- should not deny the right to believe in approaches to AGI 
> before they are supported by concrete proof.

Quite the reverse:  all of these people deny that the complexity issue 
is at all relevant to their systems.  You cannot say, on their behalf, 
that complexity is an excuse for not being able to predict why the 
systems should work, while at the same time they protest that my complex 
systems analysis is wrong.  ;-)

====<ED PORTER= Richard, if you will remember, I actually wrote a post,
admitting to having to eat my words, at least in part, and saying there was
something to your complex-systems-analysis viewpoint. >
</ED PORTER>====
 

> But I do have, what I consider, rational reasons for believing a 
> Novamente-like system will work.
> 
> One of them is that you have failed to describe to me any major 
> conceptual problem for which the AGI community does not have what 
> appear to be valid approaches.

Me?  What did my question have to do with me?  I asked about your optimism.

But since you mention me, I have (if I understand your statement, which 
was a little confusingly worded) described a major, crippling reason why 
the AGI community does not have valid approaches.

> 
> For more reasons why I believe a Novamente-like approach might work, I 
> copy the following portion of a former post from about 5 months ago 
> --- describing my understanding of how Novamente itself would probably 
> work, based on reading material from Novamente and my own other 
> reading and thinking.

But this general description below does not really say why everything is 
on track to succeed.

====<ED PORTER= I agree your complexity arguments have to be kept in mind,
just as does combinatorial explosion.  Keeping a highly dynamic system, such
as a complex Novamente system, from being blown away from productively
performing its intended function is an important design concern --- but,
like combinatorial explosion, there are reasons to believe we can deal with
it.  One of the best of these is that, even if the system has its dynamism
heavily damped down, you should still be able to get useful work out of it.
But it will probably take a lot of experimental tuning and refinement to
learn how to run the system with the most efficient and productive form of
dynamic control.  The availability of more cheap, massively parallel
hardware that will increasingly arrive over the next decade should make it
possible to perform such tuning experiments in parallel, which should speed
them up considerably.

So net-net, Richard, I don't currently consider this a major conceptual
problem --- although it might be.  >
</ED PORTER>====



==========================================================
==== William Pearson Sun 4/20/2008 4:45 PM
==========================================================

I'm not quite sure how to describe it, but this brief sketch will have to do
until I get some more time. These ideas may appear in some newer AI
material, but I haven't had the chance to read up much recently.

Linguistic information and other non-inductive information integrated into
learning/modelling strategies, including the learning of linguistic rules.

Consider an AI learning chess: it is told in plain English that "Knights
move two hops in one direction and one hop 90 degrees to that".

Now, our AI has learnt English, so how do we hook this knowledge into our
modelling system, so that it can predict when it might lose or take a piece
because of the position of a knight?
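
====<ED PORTER= To make this concrete: the English sentence about knights
has to somehow bottom out in operational content like the following minimal
Python sketch (my own illustration, not any existing system's code).
Writing this function by hand is trivial; the question William is raising,
as I read it, is how a system gets from the parsed sentence to something
like this automatically:

def knight_moves(file, rank):
    """Squares a knight on (file, rank) can reach on an 8x8 board:
    two hops in one direction and one hop at 90 degrees to that."""
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(file + df, rank + dr)
            for df, dr in deltas
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(knight_moves(0, 0))   # a cornered knight has only two legal moves
>
</ED PORTER>====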

Consider also the sentence, "There are words such as verbs, that are doing
words, you need to put a pronoun or noun before the verb".

People are given this sort of information when learning languages, and it
seems to help them. How and why does it help them?

====<ED PORTER= 
William, I assume you are asking how a system designed to automatically
learn from experience would know how to handle knowledge handed to it in a
natural language declarative form.  I do not see this as a problem.  

A Novamente-like approach records, generalizes over, and creates
compositions out of its experience, and creates a multi-level hierarchical
memory of such generalizations and compositions, along with episodic
experiences represented as networks within such memory.  As Jeff Hawkins and
others point out, hierarchical memory provides invariant representation.
This means it can not only recognize multiple different sets of inputs, such
as different views of an object, as corresponding to a given concept, such
as a given type of object, but can also take a given concept and map an
appropriate version of it into a current context.  This includes both
context-appropriate imagining of sensory information and the generating of
context-specific behaviors.

Natural language involves such perception and behavior: not just the
perception of words, but also their experienced connections in the
hierarchical memory to other sensory, emotional, and higher-level patterns
and episodes.

So when people are given a sentence such as the one you quoted about verbs,
pronouns, and nouns, then, presuming they have some knowledge of most of the
words in the sentence, they will understand the concept that verbs "are
doing words."  This is because of the groupings of words that tend to occur
in certain syntactic linguistic contexts: the word senses most associated
with the types of experiences the mind links with "doing" would largely be
verbs, and the mind's experience and learned patterns would show that such
words are most often preceded by nouns or pronouns.  So all this stuff falls
out of the magic of spreading activation in a Novamente-like hierarchical
experiential memory (with the help of a considerable control structure, such
as that envisioned for Novamente).
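
To show what I mean by spreading activation at toy scale, here is a minimal
Python sketch --- my own illustration with made-up link weights, not
Novamente's actual mechanism: activating "doing" pushes activation through
learned association links, and the verb word senses come out on top.

from collections import defaultdict

class AssociativeMemory:
    """A toy associative memory: weighted, symmetric links between
    concepts."""
    def __init__(self):
        self.links = defaultdict(list)   # concept -> [(neighbor, weight)]

    def associate(self, a, b, weight):
        self.links[a].append((b, weight))
        self.links[b].append((a, weight))

    def spread(self, seeds, decay=0.5, steps=2):
        """Push activation outward from the seed concepts for a few
        steps."""
        activation = defaultdict(float)
        frontier = {node: 1.0 for node in seeds}
        activation.update(frontier)
        for _ in range(steps):
            nxt = defaultdict(float)
            for node, act in frontier.items():
                for neighbor, w in self.links[node]:
                    nxt[neighbor] += act * w * decay
            for node, act in nxt.items():
                activation[node] += act
            frontier = nxt
        return sorted(activation.items(), key=lambda kv: -kv[1])

memory = AssociativeMemory()
memory.associate("doing", "run(verb)", 0.9)     # made-up weights standing
memory.associate("doing", "jump(verb)", 0.8)    # in for learned
memory.associate("doing", "table(noun)", 0.1)   # association strengths
memory.associate("run(verb)", "verb", 0.9)
memory.associate("jump(verb)", "verb", 0.9)

print(memory.spread({"doing"}))   # verb senses dominate the activation

In a real system the weights would themselves be learned from experienced
co-occurrence, and something like Novamente's importance values would gate
how far activation spreads.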

Declarative information learned through natural language gets projected into
the same type of activations in the hierarchical memory as would actual
experiences that teach the same thing; but, at least as episodes, and in
some patterns generalized from episodes, such declarative information would
remain linked to the experience of having been learned from reading, or from
hearing other humans.

So, in summary, a Novamente-like system should be able to handle this
alleged problem, and at the moment it does not appear to present a major
unanswered conceptual problem. >
</ED PORTER>====




==========================================================
==== Derek Zahn Sun 4/20/2008 6:29 PM
==========================================================

William Pearson writes:

> Consider an AI learning chess, it is told in plain english that...
 
I think the points you are striving for (assuming I understand what you
mean) are very important and interesting.  Even the first, simplest steps
toward this clear and (seemingly) simple task baffle me.  How does the
concept of 'knight' poof into existence during the conversation?  How does a
system learn how to learn to play a game in the first place?  I like this
task as a tool for considering whether a potential AGI approach is truly
general -- by asking over and over again "how and why could that happen"
for any imagining of how each sentence could be processed.
 
Now, Edward, I hope you are right about Novamente, but I don't quite follow
the reasoning behind your confidence.  I'm imagining that in a previous life
you'd pointed me toward a drawing of a da Vinci flying machine, excitedly
projecting 3-8 years until we'd be flying around.  Now da Vinci's a bright
guy (smarter than me) and it's a nice concept, and I can't prove it won't
work -- I'd have to invent a pretty effective aerodynamic science to do so.
I still might not be convinced.  Absence of disproof is not necessarily
strong evidence.
 
I'm looking forward to getting more info about Novamente soon, and hopefully
to understanding the nuts and bolts of how it could do tasks like the ones
William wrote about.  I have some concerns about things like whether
propagating truth values around is really a very effective modeling
substrate for the world of objects and ideas we live in -- but since I don't
understand Novamente well enough, there's little I can say pro or con beyond
those vague intuitions (and the last thing I'd want to do is bug Ben with
questions like "how would Novamente do X?  How about Y?"  He has plenty of
real work to do.)

====<ED PORTER COMMENT= Derek, 
How the concept of "knight" poofs into existence during a conversation about
chess is no great mystery for a Novamente-like system.  If a Novamente has
former experience with chess, it will have, within its hierarchical memory,
recorded patterns and experiences with chess knights, and links between them
and the representation for "knight."  When the context suggests the sound
"night" or "knight" refers to chess, those chess "knight" patterns and
experiences get activated sufficiently to be brought to the consciousness of
the system.

Regarding projections of a da Vinci flying machine operating in 3-8 years:
if we had no helicopters, but had the rest of today's technology, machines
based on the concept of da Vinci's helicopter-like design would be flying
within 3-8 years.  By analogy to AGI, the hardware for interesting AGI test
systems is already here in the form of powerful PCs.  Today, for 40K, you
can buy a system with 16 2-GHz cores and 256 GB of DRAM, which allows even
more powerful test systems.  And in five or six years, for even less, you
should be able to buy systems that should be able to prove, in the form of
smaller prototypes, most of the key design elements of a human-level AGI.

Regarding the sufficiency of truth values: Novamente also uses importance
values, which are just as important as truth values.

From my reading and remembering of Novamente, I don't remember how it dealt
with the type of representations that have traditionally been represented in
narrow AI as vectors, other than to say that Novamente nodes can represent
vectors and matrices upon which standard mathematical techniques can be
used.  But I don't see this as a major problem.  Throw in GNG (Growing
Neural Gas), tune it to fit with the Novamente architecture, and you have a
way to learn vector representations that operate much like the other
components in the Novamente hypergraph, and which could be used in
hierarchical memory systems like the one shown to be very powerful in Thomas
Serre's "Learning a Dictionary of Shape-Components in Visual Cortex:
Comparison with Neurons, Humans and Machines."  (In fact, one could argue
the main Novamente architecture already supports something very similar to
GNG.)  This should allow Novamente to learn sensory patterns, and behaviors
within them, as well as semantic ones.
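
For readers who have not run across it, here is a minimal GNG sketch in
Python, after Fritzke's 1995 algorithm --- the parameter values are merely
illustrative, removal of edge-starved nodes is omitted for brevity, and none
of this is from the Novamente codebase:

import numpy as np

def gng(data, max_nodes=30, eps_b=0.05, eps_n=0.006,
        age_max=50, lam=100, alpha=0.5, d=0.995, seed=0):
    """Grow a graph of prototype vectors that tracks the input
    distribution."""
    rng = np.random.default_rng(seed)
    nodes = [rng.random(data.shape[1]), rng.random(data.shape[1])]
    error = [0.0, 0.0]          # accumulated squared error per node
    edges = {}                  # (i, j) with i < j  ->  age

    def key(i, j):
        return (min(i, j), max(i, j))

    for step, x in enumerate(data, 1):
        # 1. find the nearest (s1) and second-nearest (s2) units
        dists = [float(np.sum((x - w) ** 2)) for w in nodes]
        order = np.argsort(dists)
        s1, s2 = int(order[0]), int(order[1])
        error[s1] += dists[s1]
        # 2. move the winner toward x, age its edges, drag neighbors along
        nodes[s1] += eps_b * (x - nodes[s1])
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1
                other = j if i == s1 else i
                nodes[other] += eps_n * (x - nodes[other])
        edges[key(s1, s2)] = 0   # refresh (or create) the winner-pair edge
        edges = {e: a for e, a in edges.items() if a <= age_max}
        # 3. every lam inputs, grow a new unit in the highest-error region
        if step % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: error[n])
                nodes.append((nodes[q] + nodes[f]) / 2.0)
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
                r = len(nodes) - 1
                edges.pop(key(q, f), None)
                edges[key(q, r)] = 0
                edges[key(f, r)] = 0
        error = [e * d for e in error]   # 4. decay all accumulated errors
    return np.array(nodes), edges

# usage: learn a small codebook for points scattered on a noisy ring
rng = np.random.default_rng(1)
angles = rng.uniform(0.0, 2.0 * np.pi, 2000)
ring = np.c_[np.cos(angles), np.sin(angles)] + 0.05 * rng.normal(size=(2000, 2))
prototypes, topology = gng(ring)
print(len(prototypes), "prototype vectors,", len(topology), "edges")

The resulting prototype vectors and their edge topology are exactly the kind
of structure one could imagine hanging off nodes in a hypergraph like
Novamente's, though how well that marriage works is the sort of thing that
would need the experimental tuning discussed above.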

Of course, a Novamente-like system should be allowed to take advantage of
all the wonderful front ends that have been developed for sensory and other
types of information over the years by narrow-AI research.

So, again, I do not see this issue of appropriate understanding as a major
missing conceptual piece of the AGI problem. >
</ED PORTER>====




==========================================================
==== Linas Vepstas Mon 4/21/2008 9:12 AM
==========================================================

On 20/04/2008, Derek Zahn <[EMAIL PROTECTED]> wrote: 
> William Pearson writes:
>
> > Consider an AI learning chess, it is told in plain english that...
>
> I think the points you are striving for (assuming I understand what you
> mean) are very important and interesting.  Even the first, simplest steps
> toward this clear and (seemingly) simple task baffle me.  How does the
> concept of 'knight' poof into existence during the conversation?

One has to have grounding: prior experience with checkers, or Parcheesi, or
other board games, and thus the concept of moving a game piece on a board.
And, prior to that, the concept of 3D space, e.g. that of shoving a toy
around on the floor.  Also, the concept of having something taken away:
something one wants to have but can't.  In humans, desires seem grounded in
biology, but then, like colorful tropical birds with bizarre mating rituals,
grow to be their own (biology-unmotivated) thing.  Only after one has
mastered these concepts is one ready to understand a knight.

> How does a system learn how to learn to play a game in the first place?

Well, one has to be psychologically motivated to participate. There has to
be some motivator to make one want to learn. My experience with children
shows that they lack motivators for most things, bar one: if they can get
Dad's attention, they're willing to try anything.

> I like this task as a tool for considering whether a potential AGI
> approach is truly general -- by asking over and over again "how and why
> could that happen" for any imagining of how each sentence could be
> processed.

I think most researchers are in general agreement with you on this. Thus the
current focus on integrating knowledge-bases, and 3D spatial knowledge, with
a 3D (virtual) body, and, to a lesser degree, psychological/motivational
desires/needs.

====<ED PORTER= I agree with this comment.  It supports what I have said
above in response to comments by William Pearson and Derek Zahn.  >
</ED PORTER>====


==========================================================

IN SUMMARY --- FROM THE POSTS SO FAR --- I HAVE YET TO SEE ANYONE SHOW ME A
MAJOR CONCEPTUAL PROBLEM BETWEEN US AND AGI --- JUST LOTS OF WORK TO BE
DONE TO BUILD, TUNE, AND REFINE OUR CURRENT DESIGNS.

BUT I DON'T DENY THAT --- AS WE START TO GET PROTOTYPES OF SUCH SYSTEMS UP
AND, AT LEAST PARTIALLY, RUNNING --- WE MAY LEARN OF MAJOR CONCEPTUAL
PROBLEMS OF WHICH WE ARE CURRENTLY EITHER UNAWARE OR INSUFFICIENTLY
APPRECIATIVE.
