Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 10:35, Ben Goertzel wrote: Amusingly, one of my projects at the moment is to show that Novamente's economic attention allocation module can display Hopfield net type content-addressable-memory behavior on simple examples. As a preliminary step to integrating it with
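
A minimal sketch of the content-addressable behavior under discussion, assuming bipolar (+1/-1) patterns and the standard Hebbian outer-product rule; the sizes and the 10% corruption level are arbitrary, and none of this is Novamente code:

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 5                        # neurons, stored patterns
    patterns = rng.choice([-1, 1], size=(P, N))

    W = (patterns.T @ patterns) / N      # Hebbian outer-product rule
    np.fill_diagonal(W, 0)

    def recall(cue, sweeps=10):
        # Asynchronous updates; the energy never increases, so this settles.
        s = cue.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    cue = patterns[0].copy()
    flip = rng.choice(N, size=10, replace=False)
    cue[flip] *= -1                      # corrupt 10% of the bits
    print(np.array_equal(recall(cue), patterns[0]))   # True: pattern recovered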

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel
On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Monday 27 November 2006 10:35, Ben Goertzel wrote: Amusingly, one of my projects at the moment is to show that Novamente's economic attention allocation module can display Hopfield net type content-addressable-memory behavior on

Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 10:35, Ben Goertzel wrote: ... An issue with Hopfield content-addressable memories is that their memory capability gets worse and worse as the networks get sparser and sparser. I did some experiments on this in 1997, though I never bothered to publish the results
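
The degradation being described can be reproduced in a few lines by randomly deleting synapses and counting how many stored patterns remain stable; the dilution levels below are arbitrary, and this is only a stand-in for the 1997 experiment, not a reconstruction of it:

    import numpy as np

    rng = np.random.default_rng(1)
    N, P = 200, 10
    patterns = rng.choice([-1, 1], size=(P, N))
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0)

    def stable(W, p, sweeps=10):
        s = p.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                s[i] = 1 if W[i] @ s >= 0 else -1
        return np.array_equal(s, p)

    for keep in (1.0, 0.5, 0.25, 0.1):    # fraction of synapses retained
        upper = np.triu(rng.random((N, N)) < keep, 1)
        mask = upper | upper.T            # keep the weight matrix symmetric
        hits = sum(stable(W * mask, p) for p in patterns)
        print(f"density {keep:4.2f}: {hits}/{P} patterns recalled")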

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel
My approach, admittedly unusual, is to assume I have all the processing power and memory I need, up to a generous estimate of what the brain provides (a petaword and 100 petaMACs), and then see if I can come up with operations that do what it does. If not, it would be silly to try and do the
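
A back-of-envelope reading of that budget; only the petaword and 100 petaMACs figures come from the post, and the 2006 commodity-PC numbers below are rough assumptions:

    brain_words = 1e15            # "a petaword" of memory
    brain_macs  = 100e15          # "100 petaMACs" per second

    pc_words = 1e9                # ~8 GB of RAM in 8-byte words (assumed)
    pc_macs  = 1e10               # ~10 GMAC/s per CPU (assumed)

    print(f"memory gap:  ~{brain_words / pc_words:,.0f}x a single PC")
    print(f"compute gap: ~{brain_macs / pc_macs:,.0f}x a single PC")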

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/24/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote: You talked mainly about how sentences require vast amounts of external knowledge to interpret, but it does not imply that those sentences cannot be represented in (predicate)

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/27/06, Ben Goertzel [EMAIL PROTECTED] wrote: An issue with Hopfield content-addressable memories is that their memory capability gets worse and worse as the networks get sparser and sparser. I did some experiments on this in 1997, though I never bothered to publish the results ... some

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/26/06, Pei Wang [EMAIL PROTECTED] wrote: Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most natural and

Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
Philip Goetz [EMAIL PROTECTED] wrote: The use of predicates for representation, and the use of logic for reasoning, are separate issues. I think it's pretty clear that English sentences translate neatly into predicate logic statements, and that such a transformation is likely a useful first step
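
The claimed translation is easy to sketch for a toy subject-verb-object fragment; svo_to_fol below is a hypothetical three-word grammar, and the thread's disagreement is precisely over whether anything like it scales to real English:

    def svo_to_fol(sentence):
        # Assumes exactly "Subject Verb Object." with no modifiers or clauses
        subj, verb, obj = sentence.lower().rstrip(".").split()
        return f"{verb}({subj}, {obj})"

    print(svo_to_fol("John loves Mary."))    # loves(john, mary)
    print(svo_to_fol("Fido bit Postman."))   # bit(fido, postman)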

Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Tuesday 28 November 2006 14:47, Philip Goetz wrote: The use of predicates for representation, and the use of logic for reasoning, are separate issues. I think it's pretty clear that English sentences translate neatly into predicate logic statements, and that such a transformation is likely

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
I think that Matt and Josh are both misunderstanding what I said in the same way. Really, you're both attacking the use of logic on the predicates, not the predicates themselves as a representation, and so ignoring the distinction I was trying to draw. I am not saying that rewriting English

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
Oops, Matt actually is making a different objection than Josh. Now it seems to me that you need to understand sentences before you can translate them into FOL, not the other way around. Before you can translate to FOL you have to parse the sentence, and before you can parse it you have to
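
The point is visible in the classic prepositional-phrase attachment case: one surface string, two parses, and hence two FOL readings. Both readings below are hand-written and illustrative; no parser is involved:

    sentence = "I saw the man with the telescope"
    readings = [
        "exists e,m. see(e, i, m) & man(m) & instrument(e, telescope)",
        "exists e,m. see(e, i, m) & man(m) & has(m, telescope)",
    ]
    for r in readings:
        print(r)
    # Choosing between them takes world knowledge, which is the objection above.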

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: Sorry -- should have been clearer. Constructive Solid Geometry. Manipulating shapes in high- (possibly infinite-) dimensional spaces. Suppose I want to represent a face as a point in a space. First, represent it as a raster. That is in
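
The raster idea in one small sketch: flatten each image into a vector, and content addressing becomes nearest-neighbor geometry in that space. Random arrays stand in for real face images, and the 32x32 raster size is assumed:

    import numpy as np

    rng = np.random.default_rng(2)
    H, W = 32, 32                        # toy raster size (assumed)
    faces = rng.random((10, H, W))       # stand-ins for 10 face images

    points = faces.reshape(10, -1)       # each face is now a point in R^1024

    # Content addressing by geometry: nearest stored point to a noisy query
    query = (faces[3] + 0.1 * rng.standard_normal((H, W))).ravel()
    nearest = int(np.argmin(np.linalg.norm(points - query, axis=1)))
    print(nearest)                       # 3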

Re: [agi] Natural versus formal AI interface languages

2006-11-28 Thread Philip Goetz
On 11/9/06, Eric Baum [EMAIL PROTECTED] wrote: It is true that much modern encryption is based on simple algorithms. However, some crypto-experts would advise more primitive approaches. RSA is not known to be hard; even if P!=NP, someone may find a number-theoretic trick tomorrow that factors.
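
The dependence on factoring is concrete even at classroom scale. A toy RSA sketch with tiny primes (the modular inverse via pow needs Python 3.8+), showing that recovering p and q from n immediately yields the private key:

    p, q = 61, 53
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

    m = 42
    c = pow(m, e, n)                      # encrypt
    assert pow(c, d, n) == m              # decrypt

    # Attacker factors n by trial division (trivial only at this size):
    fp = next(k for k in range(2, n) if n % k == 0)
    d2 = pow(e, -1, (fp - 1) * (n // fp - 1))
    assert pow(c, d2, n) == m             # key recovered from the factorization
    print("factoring n broke the key")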

Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
First-order logic (FOL) is good for expressing simple facts like "all birds have wings" or "no bird has hair", but not for statements like "most birds can fly". To do that you have to at least extend it with fuzzy logic (probability and confidence). A second problem is, how do you ground the terms?
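
A minimal sketch of the extension Matt describes, attaching a (probability, confidence) pair to each statement instead of a bare truth value; the chaining rule at the end is one simple assumed choice, not a standard:

    from dataclasses import dataclass

    @dataclass
    class Statement:
        text: str
        prob: float    # estimated frequency with which the statement holds
        conf: float    # how much evidence backs that estimate

    birds_fly = Statement("bird(x) -> fly(x)", prob=0.90, conf=0.95)
    tweety    = Statement("bird(tweety)",      prob=1.00, conf=0.99)

    # Chaining multiplies probabilities and discounts confidence (assumed rule)
    conclusion = Statement("fly(tweety)",
                           prob=birds_fly.prob * tweety.prob,
                           conf=birds_fly.conf * tweety.conf)
    print(conclusion)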