----- Original Message -----

From: Pei Wang <[EMAIL PROTECTED]> 

To: [email protected] 

Sent: Saturday, October 21, 2006 7:03:39 PM 

Subject: Re: [agi] SOTA 

 

>Well, in that sense NARS also has some resemblance to a neural
>network, as well as many other AI systems.

 

Also to Novamente, if I understand correctly.  Terms are linked by a
probability and a confidence.  This seems to me an optimization over a neural
network or connectionist model, which is restricted to one number per link,
representing probability.  To model confidence with single-number links, you
would have to make redundant copies of the input and output units and their
connections, which would of course be inefficient.
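To make the two-number idea concrete, here is a minimal Python sketch of a
NARS-style truth value as I understand it (the k = 1 evidential horizon is my
assumption): frequency comes from the ratio of positive evidence, and
confidence grows with the total amount of evidence, which is exactly the
information a single link weight cannot carry.

```python
# Sketch of a NARS-style (frequency, confidence) truth value.
# The evidential-horizon constant k = 1 is my assumption here.
K = 1

def truth_value(positive, total):
    """Compute (frequency, confidence) from evidence counts."""
    frequency = positive / total          # ratio of positive evidence
    confidence = total / (total + K)      # c = w / (w + k): grows with evidence
    return frequency, confidence

# Same frequency, very different confidence: 3-of-4 vs. 75-of-100 evidence.
f1, c1 = truth_value(3, 4)      # frequency 0.75, confidence 0.8
f2, c2 = truth_value(75, 100)   # frequency 0.75, confidence ~0.99
```

A single-weight link could encode the 0.75 but not the difference between the
two evidence totals, which is the point about needing redundant unit copies.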

 

One aspect of NARS and many other structured or semi-structured knowledge
representations that concerns me is the direct representation of concepts such
as "is-a", equivalence, logic ("if-then", "and", "or", "not"), quantifiers
("all", "some"), time ("before" and "after"), and so on.  These concepts seem
fundamental to knowledge but are very hard to represent in a neural network, so
it seems expedient to add them directly.  My concern is that directly encoding
such knowledge greatly complicates attempts to use natural language, which
remains an unsolved problem.  Language is the only aspect of intelligence that
separates humans from other animals.  Without language, you do not have AGI
(IMHO).

 

My concern is that structured knowledge is inconsistent with the development of
language in children.  As I mentioned earlier, natural language has a structure
that allows direct training in neural networks using fast, online algorithms
such as perceptron learning, rather than slow algorithms with hidden units such
as back propagation.  Each feature is a linear combination of previously
learned features followed by a nonlinear clamping or threshold operation.
Working in this fashion, we can represent arbitrarily complex concepts.  In a
connectionist model, we have, for example:

 

- pixels
- line segments
- letters
- words
- phrases, parts of speech
- sentences

etc.
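To illustrate what I mean by a fast, online algorithm with no hidden units,
here is a toy Python sketch (my own illustration, not anyone's actual system):
a single feature unit computes a linear combination of previously learned
features, thresholds it, and is trained with the online perceptron update.

```python
# Toy online perceptron: one new feature learned as a thresholded linear
# combination of lower-level features.  No hidden units, no back propagation.

def threshold(x):
    """Nonlinear clamping: fire (1) only above the threshold."""
    return 1 if x > 0 else 0

def perceptron_train(samples, n_inputs, lr=1.0, epochs=20):
    """samples: list of (feature_vector, target) pairs; returns (weights, bias)."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = threshold(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = t - y                      # online error-driven update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# e.g. learn a higher feature that fires only when BOTH lower features fire
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data, 2)
```

Each layer of the hierarchy above (line segments from pixels, letters from line
segments, and so on) would be units of this kind, trained one level at a time.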

 

Children also learn language as a progression toward increasingly complex 
patterns. 

 

- phonemes, beginning at 2-4 weeks
- phonological rules for segmenting continuous speech, at 7-10 months [1]
- words (semantics), beginning at 12 months
- simple sentences (syntax), at 2-3 years
- compound sentences, around 5-6 years

 

Attempts to change this modeling order are generally unsuccessful.  For
example, attempting to parse a sentence first and then extract its meaning does
not work, because you cannot parse a sentence without semantics: the correct
parse of "I ate pizza with NP" depends on whether NP is "pepperoni", "a fork",
or "Sam".
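A toy sketch of that example (the mini-lexicon and the attach_pp function are
hypothetical, just to make the point): the attachment of "with NP" is decided
by the semantic type of NP, not by syntax alone.

```python
# Hypothetical mini-lexicon: semantic types for the example NPs.
SEMANTIC_TYPE = {
    "pepperoni": "food",    # a topping -> modifies "pizza"
    "a fork":    "tool",    # an instrument -> modifies "ate"
    "Sam":       "person",  # a companion -> modifies "ate"
}

def attach_pp(np):
    """Return which constituent the phrase 'with NP' attaches to."""
    t = SEMANTIC_TYPE.get(np)
    if t == "food":
        return "pizza"      # noun attachment: pizza with pepperoni
    elif t in ("tool", "person"):
        return "ate"        # verb attachment: ate with a fork / with Sam
    return "ambiguous"      # syntax alone cannot decide
```

The syntax of the three sentences is identical; only the semantic lookup
distinguishes the parses, which is why parsing before semantics fails.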

 

Now when we hard-code knowledge about logic, quantifiers, time, and other such
concepts and then try to retrofit NLP to it, we are modeling language in the
worst possible order.  These concepts, needed to form compound sentences, are
learned at the last stage of language development.  In fact, some tribal
languages such as Piraha [2] never reach this stage, even for adults.

 

My caution is that any language model we develop has to be trainable in order
from simple to complex.  The model has to be able to first learn simple
sentences in the absence of any knowledge of logical relations, and then there
must be a mechanism for learning such relations.  I realize that human models
of logical relations must be horribly inefficient, given how long it takes
children to learn them.  I think that to solve AGI, we need to develop a better
understanding of such models.  I do not hold out much hope for a
computationally efficient solution, given our long record of failure.

  

[1] Jusczyk, Peter W. (1996), "Investigations of the word segmentation
abilities of infants", 4th Intl. Conf. on Speech and Language Processing,
Vol. 3, pp. 1561-1564.

 

[2] "The Piraha challenge: an Amazonian tribe takes grammar to a strange
place", Science News, Dec. 10, 2005.
http://www.findarticles.com/p/articles/mi_m1200/is_24_168/ai_n16029317/pg_1

 

 

-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]