Ben,

Many thanks for refs. and detailed reply. Much appreciated and v. interesting.

After reading around this area, and cog sci re analogy, here are my v. cursory 
- & as usual tendentious - impressions (so blitz me down). Basically, my ideas 
about the importance of sensory/visual graphics and images for both analogy and 
AGI were enriched but not fundamentally changed.

1) Formal ("Literate") Analogy - nearly all the science seems to be about 
analogies using formal sign systems, esp. language. But as Gentner 
acknowledges, there is little work re

2) Informal ("Pre-Literate") Analogy (esp. animals but also possibly infants) - 
which has to be primary. That worm, working out that many different-shaped 
objects would stop up its burrow, was using analogy - i.e. saying in effect: 
"these objects are loosely shaped like / will fit into that hole." Ditto the 
bird that fashions a hook to manipulate objects. But I don't see symbols 
playing any part in their (evolutionarily primary) analogical reasoning.

3) Structural Mapping - the research seems to concentrate mainly on mapping 
sets of symbolic relationships, and that means depending on one-to-one, 
identical elements to draw analogies (whereas sensory/visual mapping/analogy 
is not at all so constrained - see the toy sketch after point 4), and ...

4) Symbolically-Derived Analogies are largely trivial (!) - that's my 
definitely cursory conclusion, but I'm betting that none of the computational 
systems so far have produced any striking analogies. It all seems to be about 
simple numerical, alphabetic and verbal/logical analogies. Can you give 
instances of any interesting analogical results here, either from computers or 
ordinary human intelligence, that show any promise for adaptive intelligence? 
(First impressions of Gentner and Hofstadter's work here - not impressed).
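
To pin down what I mean in 3) and 4) about depending on one-to-one, identical 
elements, here is a deliberately crude toy sketch in Python - my own caricature, 
emphatically not Gentner's actual SME code, with made-up facts and names - in 
which an analogy only gets drawn where the relation symbols are literally 
identical and the object bindings stay one-to-one:

# Toy caricature of symbolic structure mapping (NOT Gentner's SME - just an
# illustration of the constraint): a base fact maps onto a target fact only if
# the relation symbols are literally identical, and the resulting
# object-to-object bindings must stay one-to-one.

def map_structures(base, target):
    """base, target: lists of (relation, arg1, arg2) facts."""
    bindings = {}                 # base object -> target object (one-to-one)
    matches = []
    for rel, b1, b2 in base:
        for rel_t, t1, t2 in target:
            if rel != rel_t:      # identical relation symbol required
                continue
            used = set(bindings.values())
            ok1 = bindings.get(b1) == t1 or (b1 not in bindings and t1 not in used)
            ok2 = bindings.get(b2) == t2 or (b2 not in bindings and t2 not in used)
            if ok1 and ok2 and t1 != t2:
                bindings[b1], bindings[b2] = t1, t2
                matches.append(((rel, b1, b2), (rel_t, t1, t2)))
                break
    return bindings, matches

# The stock solar-system -> atom example:
solar = [("attracts", "sun", "planet"), ("revolves_around", "planet", "sun")]
atom = [("attracts", "nucleus", "electron"), ("revolves_around", "electron", "nucleus")]
print(map_structures(solar, atom))

Rename "attracts" to "pulls_on" in just one of those lists and that part of the 
analogy simply disappears - which is exactly the brittleness I'm complaining 
about; a sensory/visual matcher wouldn't care what the relation happened to be 
called.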

5) Striking analogies seem to be all sensory/visual-derived - all the classic 
sci. discovery examples, e.g. Kepler, Kekule, Duncker, seem to me obviously 
derived by way of sensory/visual graphics/images, not symbols. Reading through 
the rich Indurkhya book, all the metaphors listed (that I've read so far) seem 
to me similarly derived. "Skies crying" (rain = tears) etc. do not seem to be 
in any way symbolically derived.

6) Any Visual Analogy Computers?

    a) can any computers draw visual analogies? - e.g. can any do what a human 
can do: think of a fortress tower/turret and quickly (as I did) come up with 
loose sensory analogies: "fork", "teeth (with gaps)", "fingers"?

    b) can any digital computers truly map, period? - put one shape on top of 
another and see immediately that they fit exactly/loosely (without first 
breaking them down into bits/formulae)? - or would it take an analogical 
computer to do this? (See the toy sketch after (c) below for what the 
bits-first route looks like.)

    c) has anyone incorporated in their AI/AGI system, as my ideas suggest they 
should, a cartoon unit and a movie unit, for the purposes of reasoning? (I'm 
still not sure re yours.)
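
Re (b): for contrast, here is roughly what the conventional digital route looks 
like - a toy Python sketch of my own (nobody's actual system), in which "how 
well do these two shapes fit?" is answered precisely by first breaking both 
shapes down into bits (grid cells) and counting:

# Toy sketch: the standard digital answer to "how well do these shapes fit?".
# Both shapes are first rasterised into a grid of bits, then the overlap is
# counted (intersection-over-union): 1.0 = exact fit, smaller = looser fit.

def rasterise(inside, size=20):
    """inside(x, y) -> bool; returns the set of grid cells lying inside the shape."""
    return {(x, y) for x in range(size) for y in range(size) if inside(x, y)}

def fit_score(shape_a, shape_b):
    a, b = rasterise(shape_a), rasterise(shape_b)
    return len(a & b) / len(a | b)

# A round plug and a slightly larger round hole, both centred at (10, 10):
plug = lambda x, y: (x - 10) ** 2 + (y - 10) ** 2 <= 36   # radius 6
hole = lambda x, y: (x - 10) ** 2 + (y - 10) ** 2 <= 49   # radius 7

print(round(fit_score(plug, hole), 2))   # a loose fit, well short of 1.0

Whether that counts as "truly mapping" in my sense, or is just bit-bookkeeping, 
is exactly the question.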

7) Facility of Analogy Retrieval:
"What is more difficult is: Given a large body of knowledge, fish out 
the analogies that are going to be relevant and useful (because there
are VERY many possible analogies, and most will be very dumb)"

a) off the top of my head - I wonder whether any sensorily derived analogies 
are dumb? Not so great, like my turret analogies, but still relevant and 
legitimate - precisely because they literally fit.

b) much more importantly - you may be asking for the impossible - analogy 
retrieval, it seems to me, is fundamentally a RISKY, UNCERTAIN business - 
hit-and-miss search. Creatives get paid many thousands of dollars to sit around 
for weeks and come up with new analogies to "Coke (or whatever product) is as 
refreshing as..." And that process is very laborious, with lots of trite, 
non-striking analogies coming up, and long pauses. The adaptive drawing of 
sensory analogies is fundamentally an adventurous exploration, because you are 
trying to connect up domains that have never been connected before. There is, 
by definition, no formula for it, and no guarantee that it will work. Perhaps 
if we ever do have truly superintelligent robots, they will be able to do it 
all much faster (although by then we may link into their brains), but they will 
still be engaged in uncertain exploration with no definable time period.

Only just beginning to get into this! But better stop there. Thanks again. P.S. 
If interested, I can send a copy of Grounding Cognition, ed. D. Pecher & R. 
Zwaan, C.U.P. - v. recent sci. research which tends, broadly, to support my 
ideas.


  ----- Original Message ----- 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Saturday, May 12, 2007 7:22 PM
  Subject: Re: [agi] All these moments will be lost in time, like tears in rain



  Mike Tintner,

  Firstly, to ground your discussion of analogy in the AI field, you might like 
to look at

  "Fluid Concepts and Creative Analogies" by Douglas Hofstadter

  and  "Metaphor and Cognition" by Bipin Indurkhya, online at 
  http://www.iiit.ac.in/~bipin/

  Like many natural language words, "analogy" is a bit ambiguous....   

  It can be used to refer to low-level inference operations like the example 
  Pei gave...

  Or, it can be used to refer to higher-level processes that involve loads
  of low-level coordinated inference operations... say

  "This dog is attacking me... how can I get rid of it?  Well, when the 
  rabbit was attacking me, I got rid of it by attracting a cat, which scared
  away the rabbit.  Analogously, maybe I could get rid of the dog by
  finding something to scare it away....  Hmm... what about that lion
  who lives next door.  'Hey Mr. Lion!  Come over to play!' "

  The above analogy can be broken down into a series of low-level
  inference steps, including among them some that match the simple
  logical form that Pei called "analogy" in his example.  Doing so, as a 
  human, is just a simple textbook exercise...

  The drawing of analogies, given an appropriate selection of a small
  amount of relevant knowledge, is not an incredibly hard problem...

  What is more difficult is: Given a large body of knowledge, fish out 
  the analogies that are going to be relevant and useful (because there
  are VERY many possible analogies, and most will be very dumb).
  But this is really just the "uncertain inference control" and "attention 
  allocation" problem in general, not a specific problem to do with
  analogies.

  Unlike Tintner, I don't see why analogy has to be visual, though it
  can be.  The scientific study of analogy in cog sci suggests that 
  some analogies are visually grounded but many are not.

  -- Ben




  On 5/12/07, Richard Loosemore < [EMAIL PROTECTED]> wrote:
    Pei Wang wrote:
    > On 5/12/07, Bob Mottram < [EMAIL PROTECTED]> wrote:
    >> In a recent interview
    >> (http://discovermagazine.com/2007/jan/interview-minsky/ ) Marvin Minsky
    >> says that one of the key things which an intelligent system ought to
    >> be able to do is reason by analogy.
    >>
    >>   "His thoughts tumbled in his head, making and breaking alliances 
    >> like underpants in a dryer without Cling Free."
    >>
    >> Which made me wonder, can Novamente, NARS or any other prospective AGI
    >> system do this kind of reasoning?
    >
    > In a broad sense, almost all inference in NARS is analogy --- in a 
    > term logic, each statement indicates the possibility of one term
    > being used (in a certain way) as another, and inference on these
    > statements builds new "can be used as" relations (which technically 
    > are called inheritance, similarity, etc) among terms.
    >
    > In a narrow sense, NARS has an analogy rule which takes "X and Y are
    > similar" and "X has property P" as premises to derive a conclusion "Y 
    > has property P" (premises and conclusions are all true to various
    > degrees). See 
http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt 
    > for concrete examples by searching for "analogy" in the file.
    >
    > For the analogy with the form "X:Y = Z:?", NARS needs more than one
    > step. It first looks for a relation between X and Y, then looks for 
    > Z's "image" under the relation.

    Hmmmm....

    But do you think this captures *all* of the idea of what "analogy" is in
    the human case?  Most of it?

    How would you say that this squared with Hofstadter's ideas about what 
    analogy might be?

    Looking for a relation between X and Y is all very well, but one of the
    things that DRH is fond of telling us is that not just any old relation
    will do.  And if there are a (quasi-)infinite number of possible 
    relations between X and Y, doesn't the selection of the appropriate one
    become the heart of the analogy process, rather than just a subsidiary
    step?  (In just the same way that reasoning systems in general need to 
    have an inference control mechanism that, in practice, determines how
    the system actually behaves)?

    What I am getting at here is that I think the concept of an "analogy
    mechanism" has not even become clear yet, and so for some people to say 
    that they believe that their systems already have a kind of analogy
    mechanism is to jump the gun a little.

    In my system, all relationships can be "opened up" by being operated on,
    but there is no fixed class of operators that does the job:  instead 
    they are built on the fly, in a manner that is sensitive to context.  So
    the nature of X, Y and also Z will be able to have a diffuse effect on
    the operator construction process that is trying to find a good
    relationship between X and Y.  The process of "finding" an operator can
    have general meta-operators that govern how the process happens (in
    other words there can be "general, analogy-finding strategies").  Those 
    meta-operators could be called the thing that "is" the analogy
    mechanism, but that would be an oversimplification.



    Richard Loosemore.







