Re: [agi] Understanding Natural Language

2006-11-30 Thread James Ratcliff
Once you have these sentences in predicate form, it becomes much easier to do some statistical matching on them, and group and classify them together to generate a set of more logical statements, and to disambiguate the simple English term you use first into a single Term entity in the knowledg
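A minimal sketch of what grouping predicate-form sentences might look like (the tuple layout, the toy facts, and the term_entity table below are illustrative assumptions, not James's actual system):

    from collections import defaultdict

    # Hypothetical predicate form produced by an earlier parsing stage.
    facts = [
        ("own", "john", "dog"),
        ("own", "mary", "dog"),
        ("own", "john", "canine"),   # same sense as "dog" in this toy lexicon
        ("chase", "dog", "cat"),
    ]

    # Toy disambiguation table mapping surface English terms to a single Term entity.
    term_entity = {"dog": "Canine#1", "canine": "Canine#1", "cat": "Feline#1",
                   "john": "Person#John", "mary": "Person#Mary"}

    def normalize(fact):
        pred, a, b = fact
        return (pred, term_entity.get(a, a), term_entity.get(b, b))

    # Group normalized facts by predicate so statistical matching
    # (here just counting) can run over each group.
    groups = defaultdict(list)
    for f in facts:
        nf = normalize(f)
        groups[nf[0]].append(nf)

    for pred, fs in groups.items():
        print(pred, len(fs), fs)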

Re: [agi] Understanding Natural Language

2006-11-29 Thread Russell Wallace
On 11/28/06, Philip Goetz <[EMAIL PROTECTED]> wrote: I see evidence of dimensionality reduction by humans in the fact that adopting a viewpoint has such a strong effect on the kind of information a person is able to absorb. In conversations about politics or religion, I often find ideas that to

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Wednesday 29 November 2006 17:23, Philip Goetz wrote: > What is a pointer-and-tag record structure, and what's it got to do > with n-dim vectors? I was using the phrase to cover the typical data structures representing "objects" or "frames" in standard AI (and much of mainstream programming)

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/29/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: On Wednesday 29 November 2006 16:04, Philip Goetz wrote: > On 11/29/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: > > There will be many occurrences of the smaller subregions, corresponding to > > all different sizes and positions

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Wednesday 29 November 2006 16:04, Philip Goetz wrote: > On 11/29/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: > > There will be many occurrences of the smaller subregions, corresponding to > > all different sizes and positions of Tom's face in the raster. In other > > words, the Tom's face

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/29/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: There will be many occurrences of the smaller subregions, corresponding to all different sizes and positions of Tom's face in the raster. In other words, the Tom's face region is fractal. So, of course, is the Dick's face region, but n

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote: Either that, or I wouldn't do a purely syntactic parse. It doesn't work very well to try to handle syntax first, then semantics. Bother. I've made some contradictory statements. I started out by saying that you could parse English into pr

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Wednesday 29 November 2006 13:56, Matt Mahoney wrote: > How is a raster scan (16K vector) of an image useful? The difference > between two images of faces is the RMS of the differences of the images > obtained by subtracting pixels. Given an image of Tom, how do you compute > the set of all im
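For concreteness, the RMS pixel difference Matt refers to can be computed roughly as follows (a sketch; the 128x128 raster and the random stand-in images are assumptions):

    import random, math

    N = 128 * 128  # a 16K-pixel raster flattened into a single vector

    def random_image():
        return [random.random() for _ in range(N)]

    def rms_difference(img_a, img_b):
        # Root-mean-square of the per-pixel differences between two rasters.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / N)

    tom_1, tom_2 = random_image(), random_image()
    print(rms_difference(tom_1, tom_2))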

Re: [agi] Understanding Natural Language

2006-11-29 Thread Matt Mahoney
On Tuesday 28 November 2006 17:50, Philip Goetz wrote: > I see that a raster is a vector. I see that you can have rasters at > different resolutions. I don't see what you mean by "map the regions > that represent the

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/29/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: Presumably you would produce multiple parses for syntactically ambiguous sentences: flies(time,like(arrow)) like(time(flies),arrow) ? Either that, or I wouldn't do a purely syntactic parse. It doesn't work very well to try to hand
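A toy illustration of returning both predicate readings of the ambiguous sentence (the nested-tuple encoding is an assumption, not an actual parser):

    # Reading 1: "time flies like an arrow"   -> flies(time, like(arrow))
    # Reading 2: "time-flies like an arrow"    -> like(time(flies), arrow)
    def parses(sentence):
        if sentence == "time flies like an arrow":
            return [
                ("flies", "time", ("like", "arrow")),
                ("like", ("time", "flies"), "arrow"),
            ]
        return []  # a real parser would generate readings compositionally

    for p in parses("time flies like an arrow"):
        print(p)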

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Wednesday 29 November 2006 12:28, Philip Goetz wrote: > Oops - looking back at my earlier post, I said that "English sentences > translate neatly into predicate logic statements". I should have left > out "logic". I like using predicates to organize sentences. I made > that post because Josh

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
Oops - looking back at my earlier post, I said that "English sentences translate neatly into predicate logic statements". I should have left out "logic". I like using predicates to organize sentences. I made that post because Josh was pointing out some of the problems with logic, but then makin

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/28/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: First order logic (FOL) is good for expressing simple facts like "all birds have wings" or "no bird has hair", but not for statements like "most birds can fly". To do that you have to at least extend it with fuzzy logic (probability and conf
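A minimal sketch of the kind of extension Matt describes: attach a (probability, confidence) pair to each statement instead of a bare truth value (the numbers below are made up):

    # Each rule carries (probability, confidence) rather than True/False.
    rules = {
        ("bird", "has_wings"): (1.00, 0.95),   # "all birds have wings"
        ("bird", "has_hair"):  (0.00, 0.95),   # "no bird has hair"
        ("bird", "can_fly"):   (0.90, 0.80),   # "most birds can fly"
    }

    def query(category, prop):
        # Unknown statements default to maximum ignorance.
        return rules.get((category, prop), (0.5, 0.0))

    print(query("bird", "can_fly"))   # -> (0.9, 0.8)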

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Tuesday 28 November 2006 17:50, Philip Goetz wrote: > I see that a raster is a vector. I see that you can have rasters at > different resolutions. I don't see what you mean by "map the regions > that represent the same face between higher and lower-dimensional > spaces", or what you are takin

Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
and you will see what I mean. -- Matt Mahoney, [EMAIL PROTECTED] ----- Original Message from Philip Goetz: Oops, Matt actually is making a different ob

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/28/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: Sorry -- should have been clearer. Constructive Solid Geometry. Manipulating shapes in high- (possibly infinite-) dimensional spaces. Suppose I want to represent a face as a point in a space. First, represent it as a raster. That is in

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
Oops, Matt actually is making a different objection than Josh. Now it seems to me that you need to understand sentences before you can translate them into FOL, not the other way around. Before you can translate to FOL you have to parse the sentence, and before you can parse it you have to und

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
I think that Matt and Josh are both misunderstanding what I said in the same way. Really, you're both attacking the use of logic on the predicates, not the predicates themselves as a representation, and so ignoring the distinction I was trying to create. I am not saying that rewriting English as

Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Tuesday 28 November 2006 14:47, Philip Goetz wrote: > The use of predicates for representation, and the use of logic for > reasoning, are separate issues. I think it's pretty clear that > English sentences translate neatly into predicate logic statements, > and that such a transformation is lik

Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
" and "pizza with a fork". A parser needs to know millions of rules like this. -- Matt Mahoney, [EMAIL PROTECTED]
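A toy sketch of the kind of rule Matt has in mind, resolving prepositional-phrase attachment from lexical preferences (the tiny preference table is invented for illustration):

    # Does "with X" attach to the noun ("pizza") or to the verb ("eat")?
    # A real parser would need an entry (or statistics) like this for
    # millions of word combinations.
    noun_attach = {("pizza", "anchovies"): True, ("pizza", "fork"): False}

    def attach_pp(verb, noun, pp_object):
        if noun_attach.get((noun, pp_object), False):
            return "%s(%s(with(%s)))" % (verb, noun, pp_object)  # PP modifies the noun
        return "%s(%s, with(%s))" % (verb, noun, pp_object)      # PP modifies the verb

    print(attach_pp("eat", "pizza", "anchovies"))  # eat(pizza(with(anchovies)))
    print(attach_pp("eat", "pizza", "fork"))       # eat(pizza, with(fork))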

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/26/06, Pei Wang <[EMAIL PROTECTED]> wrote: Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most natural and

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/27/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: An issue with Hopfield content-addressable memories is that their memory capability gets worse and worse as the networks get sparser and sparser. I did some experiments on this in 1997, though I never bothered to publish the results ... some
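For readers unfamiliar with the reference, a bare-bones Hopfield content-addressable memory looks roughly like this (a fully connected sketch; sparsifying the weight matrix is what degrades capacity in the experiments being discussed):

    import random

    def train(patterns, n):
        # Hebbian outer-product rule over +/-1 patterns; zero diagonal.
        w = [[0.0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        w[i][j] += p[i] * p[j] / n
        return w

    def recall(w, cue, steps=5):
        s = list(cue)
        for _ in range(steps):   # synchronous updates, for simplicity
            s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
                 for i in range(len(s))]
        return s

    n = 30
    stored = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(3)]
    w = train(stored, n)
    noisy = list(stored[0]); noisy[0] = -noisy[0]; noisy[1] = -noisy[1]
    print(recall(w, noisy) == stored[0])   # usually True for a few stored patterns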

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/24/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote: > You talked mainly about how sentences require vast amounts of external > knowledge to interpret, but it does not imply that those sentences cannot > be represented in (predic

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel
My approach, admittedly unusual, is to assume I have all the processing power and memory I need, up to a generous estimate of what the brain provides (a petaword and 100 petaMACs), and then see if I can come up with operations that do what it does. If not, it would be silly to try and do the same

Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 10:35, Ben Goertzel wrote: >... > An issue with Hopfield content-addressable memories is that their > memory capability gets worse and worse as the networks get sparser and > sparser. I did some experiments on this in 1997, though I never > bothered to publish the resul

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel
On 11/28/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: On Monday 27 November 2006 10:35, Ben Goertzel wrote: > Amusingly, one of my projects at the moment is to show that > Novamente's "economic attention allocation" module can display > Hopfield net type content-addressable-memory behavior

Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 10:35, Ben Goertzel wrote: > Amusingly, one of my projects at the moment is to show that > Novamente's "economic attention allocation" module can display > Hopfield net type content-addressable-memory behavior on simple > examples. As a preliminary step to integrating it

Re: [agi] Understanding Natural Language

2006-11-27 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 11:49, YKY (Yan King Yin) wrote: > To illustrate it with an example, let's say the AGI can recognize apples, > bananas, tables, chairs, the face of Einstein, etc, in the n-dimensional > feature space. So, Einstein's face is defined by a hypersurface where each > point i

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread YKY (Yan King Yin)
On 11/28/06, Mike Dougherty <[EMAIL PROTECTED]> wrote: perhaps my view of a hypersurface is wrong, but wouldn't a subset of the dimensions associated with an object be the physical dimensions? (ok, virtual physical dimensions) Is "On" determined by a point of contact between two objects? (A

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread Mike Dougherty
On 11/27/06, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote: The problem is that this thing, "on", is not definable in n-space via operations like AND, OR, NOT, etc. It seems that "on" is not definable by *any* hypersurface, so it cannot be learned by classifiers like feedforward neural networks
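One way to see the point under debate: "on" is naturally a relation over pairs of objects, computable from their geometry, rather than a region (hypersurface) in a single object's feature space. A toy version, with invented boxes and thresholds:

    # An object is a 2-D axis-aligned box: (x_min, x_max, y_min, y_max).
    def on(a, b, tol=0.05):
        ax0, ax1, ay0, ay1 = a
        bx0, bx1, by0, by1 = b
        overlaps_horizontally = ax0 < bx1 and bx0 < ax1
        rests_on_top = abs(ay0 - by1) <= tol   # a's bottom touches b's top
        return overlaps_horizontally and rests_on_top

    cup   = (0.4, 0.6, 1.0, 1.2)
    table = (0.0, 2.0, 0.0, 1.0)
    print(on(cup, table))   # True
    print(on(table, cup))   # False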

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread YKY (Yan King Yin)
I'm not saying that the n-space approach wouldn't work, but I have used that approach before and faced a problem. It was because of that problem that I switched to a logic-based approach. Maybe you can solve it. To illustrate it with an example, let's say the AGI can recognize apples, bananas,

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread Ben Goertzel
Amusingly, one of my projects at the moment is to show that Novamente's "economic attention allocation" module can display Hopfield net type content-addressable-memory behavior on simple examples. As a preliminary step to integrating it with other aspects of Novamente cognition (reasoning, evolut

Re: [agi] Understanding Natural Language

2006-11-27 Thread J. Storrs Hall, PhD.
On Sunday 26 November 2006 18:02, Mike Dougherty wrote: > I was thinking about the N-space representation of an idea... Then I > thought about the tilting table analogy Richard posted elsewhere (sorry, > I'm terrible at citing sources) Then I started wondering what would > happen if the N-space

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Andrii (lOkadin) Zvorygin
t. As what has been explained once doesn't have to be again -- this is of course assuming the small functions connected to distributed network to redistribute functions to other computers that will look for them without any need of user interference (mmorpg will probably run on same ne

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Matt Mahoney
s haven't done it suggests the problem requires vast computational resources and training data. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Understanding Natural Language

2006-11-26 Thread Mike Dougherty
On 11/26/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: But I really think that the metric properties of the spaces continue to help even at the very highest levels of abstraction. I'm willing to spend some time giving it a shot, anyway. So we'll see! I was thinking about the N-space rep

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Andrii (lOkadin) Zvorygin
interference (mmorpg will probably run on same network). -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Understanding Natural Language

2006-11-26 Thread J. Storrs Hall, PhD.
On Sunday 26 November 2006 14:14, Pei Wang wrote: > > In this design, the tough job is to make the agents work together > to cover all kinds of tasks, and for this part, I'm afraid that the > multi-dimensional space representation won't help much. Also, we > haven't seen much work on high-level

Re: [agi] Understanding Natural Language

2006-11-26 Thread Richard Loosemore
J. Storrs Hall, PhD. wrote: My best ideas at the moment don't have one big space where everything sits, but something more like a Society of Mind where each agent has its own space. New agents are being tried all the time by some heuristic search process, and will come with new dimensions if th

Re: [agi] Understanding Natural Language

2006-11-26 Thread Pei Wang
That makes much more sense. If your system consists of special-purpose subsystems (call them agents or whatever), then for some of them multi-dimensional space may be the best KR framework. I guess for the sensorimotor part this may be the case, as the works of Brooks and Albus show. In this desi

Re: [agi] Understanding Natural Language

2006-11-26 Thread J. Storrs Hall, PhD.
My best ideas at the moment don't have one big space where everything sits, but something more like a Society of Mind where each agent has its own space. New agents are being tried all the time by some heuristic search process, and will come with new dimensions if that does them any good. Equall

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Pei Wang
On 11/26/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: Hi, > Therefore, the problem of using an n-space representation for AGI is > not its theoretical possibility (it is possible), but its practical > feasibility. I have no doubt that for many limited applications, > n-space representation is the

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Ben Goertzel
Hi, Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most natural and efficient choice. However, for a general pur

Re: [agi] Understanding Natural Language

2006-11-26 Thread Pei Wang
In Section 2.2.1 of http://www.springerlink.com/content/978-1-4020-5045-9 (also briefly in http://nars.wang.googlepages.com/wang.AGI-CNN.pdf ) I compared the three major traditions of formalization used in AI: *. dynamical system. In this framework, the states of the system are described as point

Re: [agi] Understanding Natural Language

2006-11-26 Thread J. Storrs Hall, PhD.
On Saturday 25 November 2006 13:52, Ben Goertzel wrote: > About Teddy Meese: a well-designed Teddy Moose is almost surely going > to have the big antlers characterizing a male moose, rather than the > head-profile of a female moose; and it would be disappointing if a > Teddy Moose had the head an

Re: Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Ben Goertzel
ght be usable by machines but not by humans. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Matt Mahoney
machines but not by humans. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Understanding Natural Language

2006-11-25 Thread J. Storrs Hall, PhD.
I have several motivations for chasing after what is admittedly a very non-standard form of representation and one certainly not guaranteed of success! First is that low-level insect-like controllers are a snap and a breeze to do in it, and I'm guessing that the whole brain developed from such

Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Ben Goertzel
Hi, On the other hand, somewhat simpler blends can be done by simple interpolation or mappings like the analogical quadrature I mentioned. For example, you will instantly understand "teddy moose" to be that which is to a moose as a teddy bear is to a bear, i.e. a stuffed-animal toy caricature. I
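In vector terms, the analogical quadrature being discussed is often sketched as simple arithmetic: teddy_moose = moose + (teddy_bear - bear). A toy version over made-up three-dimensional concept vectors:

    # Made-up coordinates: (size, furriness, toy-ness).
    bear       = [2.0, 0.8, 0.0]
    teddy_bear = [0.3, 0.9, 1.0]
    moose      = [2.5, 0.6, 0.0]

    # teddy_moose is to moose as teddy_bear is to bear.
    teddy_moose = [m + (tb - b) for m, tb, b in zip(moose, teddy_bear, bear)]
    print(teddy_moose)   # small, furry, toy-like: [0.8, 0.7, 1.0]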

Re: [agi] Understanding Natural Language

2006-11-25 Thread J. Storrs Hall, PhD.
On Saturday 25 November 2006 12:42, Ben Goertzel wrote: > I'm afraid the analogies between vector space operations and cognitive > operations don't really take you very far. > > For instance, you map conceptual blending into quantitative > interpolation -- but as you surely know, it's not just **a

Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Ben Goertzel
I constructed a while ago (mathematically) a detailed mapping from Novamente Atoms (nodes/links) into n-dimensional vectors. You can certainly view the state of a Novamente system at a given point in time as a collection of n-vectors, and the various cognition methods in Novamente as mappings fro
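Ben's actual mapping is not shown in the thread, but the general idea of viewing a node-and-link store as a set of n-vectors can be sketched like this (the particular encoding below, one vector per node with link weights as coordinates, is only an illustrative assumption):

    # A toy weighted-link store: (source, target) -> link strength.
    links = {("cat", "animal"): 0.9, ("dog", "animal"): 0.9, ("cat", "pet"): 0.7,
             ("dog", "pet"): 0.8, ("rock", "mineral"): 0.9}

    nodes = sorted({n for pair in links for n in pair})

    def node_vector(node):
        # One coordinate per possible target node; the value is the link weight.
        return [links.get((node, other), 0.0) for other in nodes]

    state = {n: node_vector(n) for n in nodes}   # system state as a set of n-vectors
    print(state["cat"])
    print(state["dog"])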

Re: [agi] Understanding Natural Language

2006-11-25 Thread J. Storrs Hall, PhD.
On Friday 24 November 2006 10:26, William Pearson wrote: > On 24/11/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: > > The open questions are representation -- I'm leaning towards CSG > > Constructive solid geometry? You could probably go quite far towards a > real world navigator with this,

Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Andrii (lOkadin) Zvorygin
On 11/24/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: Andrii (lOkadin) Zvorygin <[EMAIL PROTECTED]> wrote: >I personally don't understand why everyone seems to insist on using >ambiguous illogical languages to express things when there are viable >alternatives available. I think because an AGI ne

Re: Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Ben Goertzel
On 11/24/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: Andrii (lOkadin) Zvorygin <[EMAIL PROTECTED]> wrote: >I personally don't understand why everyone seems to insist on using >ambiguous illogical languages to express things when there are viable >alternatives available. I think because an AGI ne

Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Matt Mahoney
age. -- Matt Mahoney, [EMAIL PROTECTED] ----- Original Message from Andrii (lOkadin) Zvorygin: "It was a true solar-plexus blow, and compl

Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Andrii (lOkadin) Zvorygin
"It was a true solar-plexus blow, and completely knocked out, Perkins staggered back against the instrument-board. His outflung arm pushed the power-lever out to its last notch, throwing full current through the bar, which was pointed straight up as it had been when they made their landing." LOJ

Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Ben Goertzel
Oh, I think the representation is quite important. In particular, logic lets you in for gazillions of inferences that are totally inapropos, with no good way to say which is better. Logic also has the enormous disadvantage that you tend to have frozen the terms and levels of abstraction. Actual word

Re: [agi] Understanding Natural Language

2006-11-24 Thread William Pearson
On 24/11/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: The open questions are representation -- I'm leaning towards CSG Constructive solid geometry? You could probably go quite far towards a real world navigator with this, but I'm not sure how you plan to get it to represent the intern

Re: [agi] Understanding Natural Language

2006-11-24 Thread J. Storrs Hall, PhD.
On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote: > You talked mainly about how sentences require vast amounts of external > knowledge to interpret, but it does not imply that those sentences cannot > be represented in (predicate) logical form. Substitute "bit string" for "predicate log

Re: [agi] Understanding Natural Language

2006-11-24 Thread YKY (Yan King Yin)
You talked mainly about how sentences require vast amounts of external knowledge to interpret, but it does not imply that those sentences cannot be represented in (predicate) logical form. I think there should be a working memory in which sentences under attention would "bring up" other sentences
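A very rough sketch of the working-memory retrieval YKY describes, where the sentence under attention brings up stored sentences that share terms with it (the word-overlap score is an assumed stand-in for a real relevance measure):

    memory = [
        "the cup is on the table",
        "the table is in the kitchen",
        "birds can fly",
    ]

    def bring_up(sentence, store, k=2):
        focus = set(sentence.split())
        # Score each stored sentence by how many words it shares with the focus.
        scored = sorted(store, key=lambda s: -len(focus & set(s.split())))
        return scored[:k]

    print(bring_up("where is the cup", memory))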

Re: [agi] Understanding Natural Language

2006-11-23 Thread Richard Loosemore
Excellent. Summarizing: the idea of "understanding" something (in this case a fragment of (written) natural language) involves many representations being constructed on many levels simultaneously (from word recognition through syntactic parsing to story-archetype recognition). There is not