Mike Tintner,

Firstly, to ground your discussion of analogy in the AI field, you might
like to look at

"Fluid Concepts and Creative Analogies" by Douglas Hofstadter

and  "Metaphor and Cognition" by Bipin Indurkhya, online at
http://www.iiit.ac.in/~bipin/

Like many natural language words, "analogy" is a bit ambiguous....

It can be used to refer to low-level inference operations like the example
Pei gave...
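To make the low-level sense concrete, here is a toy sketch of that single inference step: from "X and Y are similar" and "X has property P", conclude "Y has property P" to some degree. The truth-value arithmetic is a deliberate simplification (a crude product rule), not the actual NARS truth functions.

```python
# Toy sketch of a single low-level analogy inference step.
# The combination rule below is invented for illustration,
# NOT the real NARS truth-value function.

def analogy_step(similarity, property_strength):
    """Combine the degree of 'X ~ Y' with the degree of 'X has P'
    to estimate the degree of 'Y has P'."""
    return similarity * property_strength  # crude product rule

# "swan ~ goose" to degree 0.8, "swan is white" to degree 0.9
print(round(analogy_step(0.8, 0.9), 2))  # 0.72
```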

Or, it can be used to refer to higher-level processes that involve loads
of low-level coordinated inference operations... say

"This dog is attacking me... how can I get rid of it?  Well, when the
rabbit was attacking me, I got rid of it by attracting a cat, which scared
away the rabbit.  Analogously, maybe I could get rid of the dog by
finding something to scare it away....  Hmm... what about that lion
who lives next door.  'Hey Mr. Lion!  Come over to play!' "

The above analogy can be broken down into a series of low-level
inference steps, including among them some that match the simple
logical form that Pei called "analogy" in his example.  Doing so, as a
human, is just a simple textbook exercise...
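As a rough illustration of that decomposition, the dog/lion story can be caricatured as three small retrieval-and-mapping steps over a fact base. All predicates and facts below are invented for the example:

```python
# Hypothetical decomposition of the dog/lion analogy into small steps.
# Facts are (subject, relation) or (subject, relation, object) tuples,
# all invented for illustration.

facts = {
    ("rabbit", "attacked_me"),
    ("cat", "scares", "rabbit"),
    ("dog", "attacked_me"),
    ("lion", "scares", "dog"),
}

# Step 1: find a past episode resembling the current one.
past_attacker = next(x for (x, rel) in (f for f in facts if len(f) == 2)
                     if rel == "attacked_me" and x != "dog")
# Step 2: recall what resolved it: something that scared that attacker.
past_fix = next(x for (x, rel, y) in (f for f in facts if len(f) == 3)
                if rel == "scares" and y == past_attacker)
# Step 3: map the solution schema ("find a scarer") onto the new attacker.
solution = next(x for (x, rel, y) in (f for f in facts if len(f) == 3)
                if rel == "scares" and y == "dog")

print(past_attacker, past_fix, solution)  # rabbit cat lion
```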

The drawing of analogies, given an appropriate selection of a small
amount of relevant knowledge, is not an incredibly hard problem...

What is more difficult is: Given a large body of knowledge, fish out
the analogies that are going to be relevant and useful (because there
are VERY many possible analogies, and most will be very dumb).
But this is really just the "uncertain inference control" and "attention
allocation" problem in general, not a specific problem to do with
analogies.
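A minimal sketch of that harder retrieval problem: given many stored episodes, rank candidates by relevance before doing any inference on them. The feature-overlap scoring and salience weighting here are invented stand-ins for whatever an actual attention-allocation mechanism would do:

```python
# Invented sketch of fishing relevant analogies out of a larger memory.
# Scoring = feature overlap with the current situation, weighted by a
# per-episode salience factor (a stand-in for attention allocation).

def rank_analogies(current, memory, salience):
    """Rank stored episodes by overlap with the current situation."""
    scored = [(ep, len(current & feats) * salience.get(ep, 1.0))
              for ep, feats in memory.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

memory = {
    "rabbit-attack": {"attacker", "small", "scared-by-cat"},
    "rainy-picnic":  {"outdoors", "wet"},
    "bee-sting":     {"attacker", "small", "flying"},
}
current = {"attacker", "large"}

best, _ = rank_analogies(current, memory, {"rabbit-attack": 2.0})[0]
print(best)  # rabbit-attack
```

Most candidate episodes score near zero (the "very dumb" analogies); the control problem is keeping the scoring cheap enough to run over a large memory.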

Unlike Tintner, I don't see why analogy has to be visual, though it
can be.  The scientific study of analogy in cog sci suggests that
some analogies are visually grounded but many are not.

-- Ben



On 5/12/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Pei Wang wrote:
> On 5/12/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
>> In a recent interview
>> (http://discovermagazine.com/2007/jan/interview-minsky/) Marvin Minsky
>> says that one of the key things which an intelligent system ought to
>> be able to do is reason by analogy.
>>
>>   "His thoughts tumbled in his head, making and breaking alliances
>> like underpants in a dryer without Cling Free."
>>
>> Which made me wonder, can Novamente, NARS or any other prospective AGI
>> system do this kind of reasoning?
>
> In a broad sense, almost all inference in NARS is analogy --- in a
> term logic, each statement indicates the possibility of one term
> being used (in a certain way) as another, and inference on these
> statements builds new "can be used as" relations (which technically
> are called inheritance, similarity, etc.) among terms.
>
> In a narrow sense, NARS has an analogy rule which takes "X and Y are
> similar" and "X has property P" as premises to derive a conclusion "Y
> has property P" (premises and conclusions are all true to various
> degrees). See
> http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt
> for concrete examples by searching for "analogy" in the file.
>
> For the analogy with the form "X:Y = Z:?", NARS needs more than one
> step. It first looks for a relation between X and Y, then looks for
> Z's "image" under the relation.

Hmmmm....

But do you think this captures *all* of the idea of what "analogy" is
in the human case?  Most of it?

How would you say this squares with Hofstadter's ideas about what
analogy might be?

Looking for a relation between X and Y is all very well, but one of the
things that DRH is fond of telling us is that not just any old relation
will do.  And if there are a (quasi-)infinite number of possible
relations between X and Y, doesn't the selection of the appropriate one
become the heart of the analogy process, rather than just a subsidiary
step?  (In just the same way that reasoning systems in general need to
have an inference control mechanism that, in practice, determines how
the system actually behaves.)

What I am getting at here is that I think the concept of an "analogy
mechanism" has not even become clear yet, and so for some people to say
that they believe that their systems already have a kind of analogy
mechanism is to jump the gun a little.

In my system, all relationships can be "opened up" by being operated on,
but there is no fixed class of operators that does the job:  instead
they are built on the fly, in a manner that is sensitive to context.  So
the nature of X, Y and also Z will be able to have a diffuse effect on
the operator construction process that is trying to find a good
relationship between X and Y.  The process of "finding" an operator can
have general meta-operators that govern how the process happens (in
other words there can be "general, analogy-finding strategies").  Those
meta-operators could be called the thing that "is" the analogy
mechanism, but that would be an oversimplification.
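One way to caricature "operators built on the fly, sensitive to context" in code (this is an invented illustration, not Richard's actual system): construct the comparison operator from the current context's features, so that no fixed operator set exists ahead of time.

```python
# Invented caricature of context-sensitive operator construction:
# the relation-finding operator is assembled from the current context
# rather than drawn from a fixed class of operators.

def build_operator(context):
    """Construct a comparison operator whose feature weights are
    biased by how often each feature occurs in the context."""
    weights = {feat: 1.0 + context.count(feat) for feat in set(context)}
    def operator(x_feats, y_feats):
        shared = set(x_feats) & set(y_feats)
        return sum(weights.get(f, 0.5) for f in shared)
    return operator

# A "danger"-heavy context makes danger-related matches score higher.
op = build_operator(["danger", "danger", "animal"])
print(op({"danger", "teeth"}, {"danger", "mane"}))  # 3.0
```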



Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

