Josh, in your 10/4/2007 9:57 AM post you wrote:

“Research in analogy-making is slow -- I can only think of Gentner and
Hofstadter and their groups as major movers. We don't have a solid theory
of analogy yet (structure-mapping to the contrary notwithstanding). It's
clearly central, and so I don't understand why more people aren't working
on it. (btw: anytime you're doing anything that even smells like subgraph
isomorphism, big iron is your friend.)”

You appear to know more about current analogy-making research than I do.
So could you please explain what the major open problems are in trying to
figure out how to draw analogies using a structure-mapping approach that
has a mechanism for coordinating similarity slippage, an approach somewhat
similar to Hofstadter's approach in Copycat?

Let's say we want a system that could draw analogies in real time, at the
level people can, when generating natural language output. Assume there is
some roughly semantic-net-like representation of world knowledge, and
assume we have roughly brain-level hardware, whatever that is. What are
the current major problems?
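To make the question concrete: a toy sketch of what structure mapping amounts to computationally, using the classic solar-system/atom example often cited in Gentner's work. The relation names and the brute-force search are my own invention for illustration; this is not Gentner's SME algorithm, just the bare idea of scoring entity mappings by how much relational structure they preserve.

```python
from itertools import permutations

# Two domains as (relation, arg1, arg2) triples.
base = {("attracts", "sun", "planet"),
        ("revolves_around", "planet", "sun"),
        ("more_massive", "sun", "planet")}
target = {("attracts", "nucleus", "electron"),
          ("revolves_around", "electron", "nucleus")}

def score(mapping, base, target):
    """Count base relations that are preserved in the target
    under the given entity mapping."""
    return sum((rel, mapping[a], mapping[b]) in target
               for rel, a, b in base)

base_ents = sorted({e for _, a, b in base for e in (a, b)})
targ_ents = sorted({e for _, a, b in target for e in (a, b)})

# Brute-force search over all one-to-one entity mappings.
best = max((dict(zip(base_ents, p)) for p in permutations(targ_ents)),
           key=lambda m: score(m, base, target))

print(best)                        # {'planet': 'electron', 'sun': 'nucleus'}
print(score(best, base, target))   # 2
```

Even this toy version makes the hard parts visible: the mapping search is combinatorial, and nothing here handles slippage, i.e., letting "revolves_around" match a merely similar relation in the target rather than an identical one.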

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 9:57 AM
To: [email protected]
Subject: Re: [agi] breaking the small hardware mindset


On Wednesday 03 October 2007 09:37:58 pm, Mike Tintner wrote:

> I disagree also re how much has been done. I don't think AGI - correct
> me - has solved a single creative problem - e.g. creativity -
> unprogrammed adaptivity - drawing analogies - visual object recognition
> - NLP - concepts - creating an emotional system - general learning -
> embodied/grounded knowledge - visual/sensory thinking - every dimension
> in short of "imagination". (Yes, vast creativity has gone into narrow
> AI, but that's different).

Ah, the Lorelei sings so sweetly. That's what happened to AI in the 80's
-- it went off chasing "human-level performance" at specific tasks, which
requires a completely different mindset (and something of a different
toolset) than solving the general AI problem. To repeat a previous letter,
solving particular problems is engineering, but AI needed science.

There are, however, several subproblems that may need to be solved to make
a general AI work. General learning is surely one of them. I happen to
think that analogy-making is another. But there has been a significant
amount of basic research done on these areas. 21st century AI, even narrow
AI, looks very different from, say, 80's expert systems. Lots of new
techniques that work a lot better. Some of them require big iron, some
don't.

Research in analogy-making is slow -- I can only think of Gentner and
Hofstadter and their groups as major movers. We don't have a solid theory
of analogy yet (structure-mapping to the contrary notwithstanding). It's
clearly central, and so I don't understand why more people aren't working
on it. (btw: anytime you're doing anything that even smells like subgraph
isomorphism, big iron is your friend.)
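[The subgraph-isomorphism point can be made concrete with a naive sketch, not from the original email: matching a k-vertex pattern into an n-vertex graph by brute force means trying up to n!/(n-k)! injective vertex mappings, which is why the problem eats hardware so quickly. The graphs below are invented for the example.]

```python
from itertools import permutations

def subgraph_isomorphic(pattern_edges, graph_edges):
    """Naive subgraph-isomorphism test: try every injective mapping
    of pattern vertices into graph vertices. The candidate count
    grows as n!/(n-k)!, hence the appetite for big iron."""
    p_verts = sorted({v for e in pattern_edges for v in e})
    g_verts = sorted({v for e in graph_edges for v in e})
    tried = 0
    for perm in permutations(g_verts, len(p_verts)):
        tried += 1
        m = dict(zip(p_verts, perm))
        # Every pattern edge must map onto an (undirected) graph edge.
        if all((m[a], m[b]) in graph_edges or (m[b], m[a]) in graph_edges
               for a, b in pattern_edges):
            return m, tried
    return None, tried

# Find a triangle inside a small graph.
pattern = {(0, 1), (1, 2), (0, 2)}
graph = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
match, tried = subgraph_isomorphic(pattern, graph)
```

Here the first candidate already succeeds, but with a 20-vertex graph and a 10-vertex pattern the same loop faces roughly 6.7 * 10^11 candidates, which is the scaling behind the "big iron" remark.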

One main reason I support the development of AGI as a serious subfield is
not that I think any specific approach here is likely to work (even mine),
but that there is a willingness to experiment and a tolerance for new and
odd-sounding ideas that spells a renaissance of science in AI.

Josh



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;
