--- On Fri, 9/19/08, Jan Klauck <[EMAIL PROTECTED]> wrote:

> Formal logic doesn't scale up very well in humans. That's why this
> kind of reasoning is so unpopular. Our capacities are that
> small and we connect to other human entities for a kind of
> distributed problem solving. Logic is just a tool for us to
> communicate and reason systematically about problems we would
> mess up otherwise.

Exactly. That is why I am critical of probabilistic or uncertain logic. Humans 
are not very good at logic and arithmetic problems requiring long sequences of 
steps, but duplicating these defects in machines does not help. It does not 
solve the problem of translating natural language into formal language and 
back. When we need to solve such a problem, we use pencil and paper, or a 
calculator, or we write a program. The problem for AI is to convert natural 
language to formal language or a program and back. The formal reasoning we 
already know how to do.

Even though a language model is probabilistic, probabilistic logic is not a 
good fit. For example, in NARS we have deduction (P->Q, Q->R) => (P->R), 
induction (P->Q, P->R) => (Q->R), and abduction (P->R, Q->R) => (P->Q). 
Induction and abduction are not strictly true, of course, but in a 
probabilistic logic we can assign them partial truth values.
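As a toy sketch (not NARS's actual truth-value arithmetic; the tuple encoding of an implication as (antecedent, consequent) is my own), the three patterns can be written as operations on implication statements, which makes it visible that only deduction is truth-preserving:

```python
def deduction(r1, r2):
    """(P->Q, Q->R) => (P->R): valid whenever both premises hold."""
    (p, q1), (q2, r) = r1, r2
    return (p, r) if q1 == q2 else None

def induction(r1, r2):
    """(P->Q, P->R) => (Q->R): only a hypothesis, not a guaranteed truth."""
    (p1, q), (p2, r) = r1, r2
    return (q, r) if p1 == p2 else None

def abduction(r1, r2):
    """(P->R, Q->R) => (P->Q): likewise only a hypothesis."""
    (p, c1), (q, c2) = r1, r2
    return (p, q) if c1 == c2 else None

print(deduction(("rain", "clouds"), ("clouds", "humidity")))  # ('rain', 'humidity')
print(induction(("rain", "clouds"), ("rain", "wet streets")))  # a hypothesis
print(abduction(("rain", "wet"), ("sprinkler", "wet")))        # a hypothesis
```

In a probabilistic logic the two hypothesis-forming rules would carry partial truth values; here they simply return candidate implications.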

For language modeling, we can simplify the logic. If we accept the "converse" 
rule (P->Q) => (Q->P) as partially true (if rain predicts clouds, then clouds 
may predict rain), then we can derive induction and abduction from deduction 
and converse: for induction, (P->Q, P->R) => (Q->P, P->R) => (Q->R), applying converse to the first premise and then deduction. Abduction is similar: (P->R, Q->R) => (P->R, R->Q) => (P->Q). With converse allowed, the statement (P->Q) is really a fuzzy equivalence or association (P ~ Q), e.g. (rain ~ clouds).
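The derivation can be checked mechanically. A minimal sketch, encoding an implication as an (antecedent, consequent) tuple of my own invention rather than any particular system's API:

```python
def converse(rule):
    """(P->Q) => (Q->P): only partially true, so conclusions that pass
    through this step should be held with reduced confidence."""
    p, q = rule
    return (q, p)

def deduction(r1, r2):
    """(P->Q, Q->R) => (P->R)."""
    (p, q1), (q2, r) = r1, r2
    return (p, r) if q1 == q2 else None

def induction(r1, r2):
    """(P->Q, P->R) => (Q->R): converse on the first premise, then deduction."""
    return deduction(converse(r1), r2)

def abduction(r1, r2):
    """(P->R, Q->R) => (P->Q): converse on the second premise, then deduction."""
    return deduction(r1, converse(r2))

print(induction(("rain", "clouds"), ("rain", "wet streets")))  # ('clouds', 'wet streets')
print(abduction(("rain", "wet"), ("sprinkler", "wet")))        # ('rain', 'sprinkler')
```

Both derived rules reduce to one converse step plus one deduction step, which is the simplification claimed above.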

A language model is a set of associations between concepts. Language learning 
consists of two operations carried out on a massively parallel scale: forming 
associations and forming new concepts by clustering in context space. An 
example of the latter is:

the dog is
the cat is
the house is
...
the (noun) is

So if we read "the glorp is" we learn that "glorp" is a noun. Likewise, we 
learn something of its meaning from its more distant context, e.g. "the glorp 
is eating my flowers". We do this by the transitive property of association, 
e.g. (glorp ~ eating flowers ~ rabbit).
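A minimal sketch of both learning operations on a toy corpus (the corpus, the frame representation, and the co-occurrence measure are all invented for illustration):

```python
from collections import defaultdict

corpus = [
    "the dog is barking",
    "the cat is sleeping",
    "the house is old",
    "the rabbit is eating my flowers",
    "the glorp is eating my flowers",
]

# Operation 1: cluster words by shared local context (left, right) frames.
frames = defaultdict(set)   # (left word, right word) -> words seen between them
# Operation 2: associate words that occur in the same sentence.
cooccur = defaultdict(set)  # word -> other words it co-occurs with
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        if 0 < i < len(words) - 1:
            frames[(words[i - 1], words[i + 1])].add(w)
        cooccur[w].update(set(words) - {w})

# The frame "the _ is" collects the noun-like cluster, so "glorp" is a noun:
print(sorted(frames[("the", "is")]))  # ['cat', 'dog', 'glorp', 'house', 'rabbit']

# Transitive association: "glorp" and "rabbit" never co-occur, but both
# associate with "eating" and "flowers", giving (glorp ~ eating flowers ~ rabbit):
shared = cooccur["glorp"] & cooccur["rabbit"]
print(sorted(shared - {"the", "is", "my"}))  # ['eating', 'flowers']
```

Both operations are just counting over contexts, which is what makes a massively parallel implementation plausible.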

This is not to say that NARS or other such systems are wrong, but rather that they have more capability than we need to solve the reasoning problem in AI. Whether the extra capability helps is a question for experimental verification.

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/