Re Derek Zahn [EMAIL PROTECTED] Mon 4/21/2008 11:50 AM and 12:33 PM

 

 

>====Zahn===>

In the scenario where somebody verbally explains chess there are no prior
sensory experiences with knights to draw from... 

 

====Porter===>

By the time anybody is in a position to understand anything about chess, they
normally have a very large vocabulary of linguistic and experiential
patterns in hierarchical networked memory, and they interpret what they are
told about a chess knight in terms of those patterns.  Before I had ever heard
about chess knights, I had been read, or had seen, many stories about knights
of old.  I had also played checkers and other board games and understood the
idea of game pieces moving on a board according to certain rules.  I knew that
in the real world some things, like cars, normally only move along a surface,
and that others, like people or horses, can jump.  So when a discussion of
chess knights took place, successive networks of activation of patterns
previously formed from such experiences would be recorded and associated with
the word "knight" --- and it would be from such associations that the hearer
would derive the meaning of that word.
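
To make that concrete, here is a minimal Python sketch of what I mean by
"associating the word with prior patterns."  The node names and weights are my
own illustrative inventions, not anything from Novamente:

    # The word "knight" acquires meaning by being linked to previously
    # formed pattern nodes (stories of knights, board-game pieces, things
    # that jump), weighted by how active each was when the word was heard.

    prior_patterns = {
        "stories_of_knights_of_old": 0.9,
        "board_game_piece":          0.8,
        "moves_according_to_rules":  0.7,
        "can_jump_like_a_horse":     0.6,
    }

    lexicon = {}

    def associate(word, active_patterns, lexicon):
        """Record which previously learned patterns were active while the
        word was being used; the weighted links are the word's meaning."""
        links = lexicon.setdefault(word, {})
        for pattern, activation in active_patterns.items():
            links[pattern] = links.get(pattern, 0.0) + activation
        return links

    associate("knight", prior_patterns, lexicon)
    print(lexicon["knight"])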

 

>====Zahn===>

but that is not the central point I was trying to get at.  For me it is not
quite enough to say that somewhere in a vaguely-described hierarchical
memory there will be some unspecified patterns that correspond in some
unclear way to chess knights, and that these representations will through
some method I don't fully understand get clustered into something that will
do what a "knight" concept should do (although we can't say for sure exactly
what that is).
 
Note that the flaw here is with my understanding -- because I cannot "see"
exactly how these things would happen and work for a specific case, I can't
conclude for myself that they would do so, however brilliant Ben is and
however fascinating on a general level his ideas are (and they are extremely
interesting to me).
 
You seem to be looking for a disproof, but you won't find one... you can't
disprove something about a system that is not fully understood.  Similarly,
that is the reason I'm questioning the forcefulness of your belief.  You
seem to have drawn conclusions about the technical capabilities of a system
given only a sketchy English description of it.  To me that's a leap of
faith.
 
Note that I am not criticizing Novamente; I think it's the most interesting
AGI system out there and it has a chance of succeeding.


====Porter===>

Your doubt is natural.  I am not totally without doubt about Novamente
myself, although my doubt about whether a Novamente-like system could be made
to work within 10 years --- given $1 billion invested in it and multiple teams
selected by the right people --- is very small.

 

One of the reasons I probably have a lower level of doubt about it than you do
is that I, largely on my own, came up with a surprisingly similar approach by
reading AI and brain science articles and spending thousands of hours over
many years thinking about it.  So when I read about Novamente, images of the
type of semantic net I would use pop into my mind.

 

The Serre article I cited in my last post demonstrates the surprising power
of hierarchical memory and how much of it can be learned automatically.
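
The flavor of Serre's model, as I read it, is alternating layers of template
matching and max pooling, with the templates simply sampled ("imprinted") from
earlier responses rather than hand-designed.  The toy sketch below is my own
rendering of that idea, not Serre's code or parameters:

    import numpy as np

    def simple_layer(patches, templates, sigma=1.0):
        # Each template responds to each patch by Gaussian similarity
        # ("tuning"), the S-layer operation.
        return np.array([[np.exp(-np.sum((p - t) ** 2) / (2 * sigma ** 2))
                          for p in patches] for t in templates])

    def complex_layer(responses, pool=4):
        # Max-pool each template's responses over neighbouring positions,
        # the C-layer operation, which buys position invariance.
        n = responses.shape[1] // pool
        return np.array([[r[i * pool:(i + 1) * pool].max() for i in range(n)]
                         for r in responses])

    rng = np.random.default_rng(0)
    patches = rng.random((32, 8))                     # toy "image" patches
    templates = patches[rng.choice(32, size=4, replace=False)]  # imprinted, not designed
    c1 = complex_layer(simple_layer(patches, templates))
    print(c1.shape)                                   # (4 templates, 8 pooled positions)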

 

>====Zahn===>

> Regarding the sufficiency of truth values, Novamente also 
> uses importance values, which are just as important as truth values. 

Yes, that's true, I should have written:  [I] have some concerns about
things like whether propagating truth+importance values around is really a
very effective modeling substrate for the world of objects and ideas we live
in [...]



====Porter===>

The truth values and importances represent only the activation of nodes; the
nodes themselves are represented by what they are connected to in a
generalization and compositional hierarchy.  At the lowest level these
connections bottom out in activation levels of sets of one or more sensory or
emotional inputs.  So the representation is much richer than just
truth+importance.  Other elements are also involved, such as the relative
timing of activation of different nodes.
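
As a rough illustration of what I mean by a node being represented by its
connections, here is a sketch in Python; the particular fields and names are
my own, not Novamente's actual atom types:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str
        truth: float = 0.0           # current estimated truth / activation
        importance: float = 0.0      # how much attention and space it merits
        generalizations: List[str] = field(default_factory=list)  # "is-a" links
        parts: List[str] = field(default_factory=list)            # compositional links
        activation_times: List[float] = field(default_factory=list)  # relative timing

    knight = Node(
        name="chess_knight",
        truth=0.8,
        importance=0.5,
        generalizations=["game_piece", "thing_that_can_jump"],
        parts=["L_shaped_move", "horse_shaped_token"],
    )
    knight.activation_times.append(12.7)   # when it fired, relative to other nodes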

 

>====Zahn===>

The usual response to questions about Novamente's capabilities seems to be
to say "it can do that, but the method hasn't been described yet" or "ok,
but all we have to do is add [neural gas, or whatever] to it and hook it up,
and then we're good to go."



I hope those things are true and look forward to seeing the engineering play
out.  But you can't blame people for retaining a "we'll see" attitude at the
present time, I think.

 

====Porter===>

Again, if I had not already developed significant parts of what Novamente
describes in my own mind, I would have had trouble understanding how the
system worked or what its potential was just from reading Ben's currently
published writings.  This is not meant as a criticism of Ben; rather, it
reflects the complexity of the subject matter.  Even though the write-up of
Novamente is the most complete description of an AGI I have read, I would
have had trouble understanding its real significance had I not had a
substantial base of prior related knowledge in which to interpret it.

 

>====Zahn===>

Derek Zahn [EMAIL PROTECTED] Mon 4/21/2008 12:33 PM

 

One more bit of ranting on this topic, to try to clarify the sort of thing
I'm trying to understand.
 
Some dude is telling my AGI program:  "There's a piece called a 'knight'.
It moves by going two squares in one direction and then one in a
perpendicular direction.  And here's something neat:  Except for one other
obscure case I'll tell you about later, it's the only piece that moves by
jumping through the air instead of moving a square at a time on its
journey."
 
When I try to think about how an intelligence works, I wonder about specific
cases like these (and thanks to William Pearson for inventing this one) --
the genesis of the "knight" concept from this specific purely verbal
exchange.  How could this work?  What is it about the specific word
sequences and/or the conversational context that creates this new "thing" --
the Knight?  It would have to be a hugely complicated language processing
system... so where did that language processing system come from?  Did
somebody hardcode a model of language and conversation and explicitly insert
"generate concept here" actions?  That sounds like a big job.  If it was
learned (much better), how was it learned?  What is the internal
representation of the language processing model that leads to this
particular concept formation, and how was it generated?  If I can see
something specific like that in a system (say Novamente) I can start to
really understand the theory of mind it expresses.

 

====Porter===>

You're right.  Human-level language processing is hugely complicated.  This
is, among other reasons, because --- as a young Thinking Machines AI
scientist told me in the late '80s --- you can't have good natural language
processing without good world-knowledge processing.

 

Re your remark about "generate concept here" --- concept creation is one of
the basic processes in the type of system I am thinking of.  Concepts include:
activation states within the hierarchical memory; episodic memories recording
the more important features of many such activation states; generalizations
created from similar patterns occurring in such episodic memories, which then
become available for use as activated nodes in future activation states;
compositions represented by the activation of multiple such states in episodic
recordings, and compositional patterns formed by generalization from such
compositions; and so on.  So there is no surprise about "generate concept
here".  In Novamente, as in the brain, there is a vocabulary of hundreds of
millions or billions of patterns that have to compete, in terms of their
perceived usefulness and emotional importance, to retain space in the
hierarchical memory.  So the memories that remain tend to be ones that are
appropriate and useful for some purpose.
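
A toy sketch of the loop I have in mind: episodes record co-active features, a
generalization node is built from their common core, and patterns then compete
on importance for memory space.  The names and thresholds are, again, purely
my own illustrations:

    from collections import Counter

    # Each episode records the features that were strongly active together.
    episodes = [
        {"L_shaped_move", "jumps_over_pieces", "horse_token", "white_square"},
        {"L_shaped_move", "jumps_over_pieces", "horse_token", "black_square"},
        {"L_shaped_move", "jumps_over_pieces", "horse_token", "captures_pawn"},
    ]

    def generalize(episodes, min_support=2):
        """Features shared by at least min_support episodes become the
        core of a new generalization node --- a freshly generated concept."""
        counts = Counter(f for ep in episodes for f in ep)
        return {f for f, c in counts.items() if c >= min_support}

    knight_core = generalize(episodes)
    # -> {"L_shaped_move", "jumps_over_pieces", "horse_token"}

    def prune(memory, capacity):
        """Patterns compete on importance; only the top `capacity` keep
        their space in the hierarchical memory."""
        return dict(sorted(memory.items(), key=lambda kv: kv[1],
                           reverse=True)[:capacity])

    memory = {"knight_core": 0.9, "white_square_detail": 0.1,
              "captures_pawn_detail": 0.2}
    memory = prune(memory, capacity=2)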

 

I believe such a system can automatically build up not only appropriate
perceptual representations, but also appropriate physical and mental
behaviors.  Because these representations are relatively invariant, as Jeff
Hawkins describes, they can both (1) recognize as corresponding to the same
concept sensory inputs that are very different at the lower levels, but which
have nevertheless been learned to share important common properties that
warrant their belonging to a common concept, and (2) project down from a
higher-level concept (such as "hit your opponent") to lower-level
representations, such as behavioral outputs, that can vary tremendously
depending on the context.
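
A minimal sketch of those two directions, with all the mappings and move names
chosen by me purely for illustration:

    # Upward: very different low-level inputs map to one invariant concept.
    upward = {
        "plastic_horse_figurine": "knight",
        "printed_letter_N":       "knight",
        "spoken_word_knight":     "knight",
    }

    # Downward: one abstract goal expands into very different concrete
    # behaviors depending on context.
    downward = {
        ("attack_opponent", "knight_on_f3"): "play Ng5",
        ("attack_opponent", "knight_on_c3"): "play Nd5",
    }

    def recognize(percept):
        return upward.get(percept, "unknown")

    def act(goal, context):
        return downward.get((goal, context), "no plan yet")

    print(recognize("printed_letter_N"))            # knight
    print(act("attack_opponent", "knight_on_c3"))   # play Nd5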

 

It takes a (little, big, huge --- take your pick) leap of faith to believe
such a system could automatically learn all the important patterns of word
form, syntax, discourse, and models of mind, and all the semantics
represented in a normal person's world knowledge, well enough to properly
understand and generate human spoken communication --- but I make such a
leap of faith.  For me the leap is not that big.  

 

But for me to believe in the power of such hierarchical memory, there have to
be proper control mechanisms to perform tasks such as selecting foci of
attention; dynamically distributing and focusing spreading activation and
inference; assigning and grading importance; selecting among competing mental
and physical behaviors, and substantially committing to such behaviors once
selected; and pruning memory.  Novamente provides mechanisms for these, but
getting them all to work together well, automatically, is for me probably the
biggest challenge.  
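
To show the shape of the control problem (not Novamente's actual mechanisms,
which are far richer), here is a toy step of such a loop: pick foci of
attention from the most important nodes, spread decaying activation along
their links, then prune whatever falls below a floor:

    def control_step(links, importance, decay=0.5, focus_size=2, floor=0.05):
        # 1. Select foci of attention: the currently most important nodes.
        foci = sorted(importance, key=importance.get, reverse=True)[:focus_size]
        # 2. Spread activation outward from each focus, attenuated by decay.
        for node in foci:
            for neighbour, weight in links.get(node, {}).items():
                importance[neighbour] = (importance.get(neighbour, 0.0)
                                         + decay * weight * importance[node])
        # 3. Prune: nodes below the floor lose their space in memory.
        return {n: v for n, v in importance.items() if v >= floor}

    links = {"knight": {"L_shaped_move": 0.9, "horse_token": 0.4},
             "L_shaped_move": {"jumps_over_pieces": 0.8}}
    importance = {"knight": 1.0, "L_shaped_move": 0.3, "horse_token": 0.02}
    importance = control_step(links, importance)
    print(importance)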

 

But with the falling price of hardware it will become cheaper and faster to
test, tune, and refine such control systems.  I find it hard to believe that
within 3-8 years we won't see substantial strides made towards building
roughly Novamente-like machines.  Within 8 to 20 years I would be surprised if
we do not see machines that are at least at human level in virtually all the
mental skills it is desirable for machines to have. 

 

 

 
