Pei Wang wrote:
On 5/12/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
In a recent interview
(http://discovermagazine.com/2007/jan/interview-minsky/) Marvin Minsky
says that one of the key things which an intelligent system ought to
be able to do is reason by analogy.

  "His thoughts tumbled in his head, making and breaking alliances
like underpants in a dryer without Cling Free."

Which made me wonder, can Novamente, NARS or any other prospective AGI
system do this kind of reasoning?

In a broad sense, almost all inference in NARS is analogy --- in a
term logic, each statement indicates the possibility of one term
being used (in a certain way) as another, and inference on these
statements builds new "can be used as" relations (which are
technically called inheritance, similarity, etc.) among terms.

In a narrow sense, NARS has an analogy rule which takes "X and Y are
similar" and "X has property P" as premises to derive a conclusion "Y
has property P" (premises and conclusions are all true to various
degrees). See http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt
for concrete examples by searching for "analogy" in the file.
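The narrow-sense rule described above can be sketched in a few lines of Python. This is only an illustration of the rule's shape: the `Judgment` type and the truth function used here (multiplying frequencies and confidences) are simplified stand-ins, not NARS's actual representation or truth functions.

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    statement: str
    frequency: float   # degree of truth, in [0, 1]
    confidence: float  # amount of evidential support, in [0, 1]

def analogy(similarity: Judgment, property_stmt: Judgment) -> Judgment:
    """Narrow-sense analogy: from "X and Y are similar" and
    "X has property P", derive "Y has property P".

    The truth function below (multiplying frequencies and confidences)
    is a simplified stand-in for illustration, not the one NARS uses.
    """
    f = similarity.frequency * property_stmt.frequency
    c = similarity.confidence * property_stmt.confidence
    return Judgment("Y has property P", f, c)

sim = Judgment("X <-> Y", 0.9, 0.8)     # "X and Y are similar"
prop = Judgment("X has P", 1.0, 0.9)    # "X has property P"
conclusion = analogy(sim, prop)          # true to a lesser degree
```

Note how the conclusion is weaker than either premise, matching the remark that premises and conclusions are all true to various degrees.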

For the analogy with the form "X:Y = Z:?", NARS needs more than one
step. It first looks for a relation between X and Y, then looks for
Z's "image" under the relation.
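The two-step procedure for "X:Y = Z:?" can be sketched as a search over a knowledge base of relation triples. The knowledge base, triple format, and exhaustive search here are hypothetical simplifications for illustration; NARS works over its own term-logic representation, not flat triples.

```python
# Hypothetical knowledge base of (relation, subject, object) triples.
KB = [
    ("capital_of", "Paris", "France"),
    ("capital_of", "Tokyo", "Japan"),
    ("speaks", "Paris", "French"),
]

def proportional_analogy(x, y, z, kb):
    """Solve "X : Y = Z : ?" in two steps, as described above:
    1. find a relation R such that R(X, Y) is in the knowledge base;
    2. collect every W with R(Z, W), i.e. Z's "image" under R.
    """
    answers = []
    for rel, a, b in kb:
        if a == x and b == y:                  # step 1: R holds between X and Y
            for rel2, c, d in kb:
                if rel2 == rel and c == z:     # step 2: image of Z under R
                    answers.append(d)
    return answers

# Paris : France = Tokyo : ?
result = proportional_analogy("Paris", "France", "Tokyo", KB)
```

As the paragraphs below argue, the hard part hidden by this sketch is step 1: when many relations hold between X and Y, selecting the appropriate one is itself the core problem.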

Hmmmm....

But do you think this captures *all* of the idea of what "analogy" is in the human case? Most of it?

How would you say that this squares with Hofstadter's ideas about what analogy might be?

Looking for a relation between X and Y is all very well, but one of the things that DRH is fond of telling us is that not just any old relation will do. And if there is a (quasi-)infinite number of possible relations between X and Y, doesn't the selection of the appropriate one become the heart of the analogy process, rather than just a subsidiary step? (In just the same way that reasoning systems in general need an inference control mechanism that, in practice, determines how the system actually behaves.)

What I am getting at here is that I think the concept of an "analogy mechanism" has not even become clear yet, and so for some people to say that they believe that their systems already have a kind of analogy mechanism is to jump the gun a little.

In my system, all relationships can be "opened up" by being operated on, but there is no fixed class of operators that does the job: instead they are built on the fly, in a manner that is sensitive to context. So the nature of X, Y and also Z will be able to have a diffuse effect on the operator construction process that is trying to find a good relationship between X and Y. The process of "finding" an operator can have general meta-operators that govern how the process happens (in other words there can be "general, analogy-finding strategies"). Those meta-operators could be called the thing that "is" the analogy mechanism, but that would be an oversimplification.



Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email