--- On Sat, 9/20/08, Pei Wang <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> I really hope NARS can be simplified, but until you give me the
> details, such as how to calculate the truth value in your "converse"
> rule, I cannot see how you can do the same things with a simpler
> design.

You're right. Given P(A), P(B), and P(A->B) = P(B|A), you could derive P(A|B) 
using Bayes' law. But you can't assume this knowledge is available.
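For concreteness, here is a minimal sketch of that derivation, assuming all
three quantities happen to be known (which, as noted, they usually are not):

```python
def converse(p_a, p_b, p_b_given_a):
    """Derive P(A|B) from P(A), P(B), and P(B|A) via Bayes' law:
    P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Example values (illustrative only):
# P(A) = 0.3, P(B) = 0.5, P(B|A) = 0.8  ->  P(A|B) = 0.48
print(converse(0.3, 0.5, 0.8))
```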

> For your original claim that "The brain does not
> implement formal
> logic", my brief answers are:
> 
> (1) So what? Who said AI must duplicate the brain? Just
> because we cannot imagine another possibility?

It doesn't. The problem is that none of the probabilistic logic proposals I 
have seen address the problem of converting natural language to formal 
statements. I see this as a language modeling problem that can be addressed 
by the two fundamental language learning processes: learning to associate 
time-delayed concepts, and learning new concepts by clustering in context 
space. Arithmetic and logic can be solved directly in the language model by 
learning the rules for converting to formal statements and the rules for 
manipulating those statements as grammar rules, e.g. "I had $5 and 
spent $2" -> "5 - 2" -> "3".

But a better model would deviate from the human model and use an exact 
formal logic system (such as a calculator) when long sequences of steps or 
many variables are required. My vision of AI is more like a language model 
that knows how to write programs and has a built-in computer. Neither 
component requires probabilistic logic.
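A toy sketch of that pipeline, covering only the single example pattern above 
(the rule, regex, and function names are all hypothetical, for illustration; 
a real language model would learn such rules rather than hard-code them):

```python
import re

def to_formal(sentence):
    """Convert one narrow class of natural-language statements
    into a formal arithmetic expression (a learned 'grammar rule')."""
    m = re.match(r"I had \$(\d+) and spent \$(\d+)", sentence)
    if not m:
        raise ValueError("pattern not covered by this sketch")
    return f"{m.group(1)} - {m.group(2)}"

def calculate(expr):
    """The exact 'built-in calculator': evaluate a binary expression."""
    a, op, b = expr.split()
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y}
    return ops[op](int(a), int(b))

print(to_formal("I had $5 and spent $2"))  # "5 - 2"
print(calculate("5 - 2"))                  # 3
```

The point of the split is that the conversion step is fuzzy and learned, 
while the evaluation step is exact and needs no probabilistic machinery.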

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com