Josh,
(Speaking of breaking the small-hardware mindset, thank God for the
company with the largest hardware mindset -- or at least the largest
physical embodiment of one -- Google. Without them I wouldn't have known
what FARG meant, and would have had to either (1) read your valuable
response with less than the understanding it deserves or (2) embarrass
myself by admitting ignorance and asking for a clarification.)
With regard to your answer, copied below: I suspected it would be
something like that.
So which of the following types of representational problem explain why
their basic approach is not automatically extensible?
1. They have no general-purpose representation that can
represent almost anything in a sufficiently uniform scheme to let their
analogy-net matching algorithm be applied universally, without requiring
custom patches for each new type of thing to be represented.
2. They have no general-purpose mechanism for determining
which similarities and generalities are relevant enough to allow
slippage for purposes of analogy.
3. They have no general-purpose mechanism for
automatically finding which compositional patterns map to which
lower-level representations, and which of those compositional patterns
are similar to each other in a way appropriate for slippages.
4. They have no general-purpose mechanism for
automatically determining what would constitute appropriately
coordinated slippages in semantic hyperspace.
5. Some reason not listed above.
I don't know the answer, and there is no reason why you should. But if
you -- or any other interested reader -- do, or if you have any good
thoughts on the subject, please tell me.
I may be naïve. I may be overly big-hardware optimistic. But based on
the architecture I have in mind, I think a Novamente-type system, if it is
not already architected to do so, could be modified to handle all of these
problems (except perhaps 5, if there is a 5) and, thus, provide powerful
analogy drawing across virtually all domains.
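To make the representation-brittleness Josh describes in his reply below
concrete, here is a toy sketch -- my own construction for this email, not
actual FARG or Copycat code, and all names are made up -- of a naive
structure mapper that succeeds or fails purely on whether two encodings
happen to spell their predicates the same way:

```python
# Toy illustration (not FARG/Copycat code) of why structure mapping is
# brittlely dependent on representation: the mapper below can only pair
# facts whose predicate names are identical.

def structure_map(source, target):
    """Naive structure mapping: pair up facts with matching predicates
    and return a consistent object-to-object correspondence, or None."""
    mapping = {}
    for pred_s, *args_s in source:
        for pred_t, *args_t in target:
            if pred_s == pred_t and len(args_s) == len(args_t):
                for a, b in zip(args_s, args_t):
                    if mapping.setdefault(a, b) != b:
                        return None  # inconsistent correspondence
    return mapping or None

# The classic solar-system / atom analogy, encoded as relation tuples.
solar = [("attracts", "sun", "planet"), ("revolves", "planet", "sun")]
atom  = [("attracts", "nucleus", "electron"),
         ("revolves", "electron", "nucleus")]
print(structure_map(solar, atom))
# -> {'sun': 'nucleus', 'planet': 'electron'}

# The same atomic knowledge, encoded with different predicate names:
atom2 = [("pulls", "nucleus", "electron"),
         ("orbits", "electron", "nucleus")]
print(structure_map(solar, atom2))
# -> None: the analogy silently disappears under re-encoding
```

The point of the sketch is that the sun-to-nucleus mapping is found only
because both encodings happen to use the same relation names; re-encode
one side and the analogy vanishes, which is exactly the kind of
custom-patching problem item 1 above is about.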
Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]
-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 1:44 PM
To: [email protected]
Subject: Re: [agi] breaking the small hardware mindset
On Thursday 04 October 2007 10:56:59 am, Edward W. Porter wrote:
> You appear to know more on the subject of current analogy-drawing
> research than I do. So could you please explain what major problems
> people are currently having in trying to figure out how to draw
> analogies using a structure-mapping approach that has a mechanism for
> coordinating similarity slippage -- an approach somewhat similar to
> Hofstadter's approach in Copycat?
> Let's say we want a system that could draw analogies in real time when
> generating natural language output at the level people can, assuming
> there is some roughly semantic-net-like representation of world
> knowledge, and let's say we have roughly brain-level hardware, whatever
> that is. What are the current major problems?
The big problem is that structure mapping is brittlely dependent on
representation, as Hofstadter complains, but the FARG school hasn't
really come up with a generative theory: every Copycat-like analogizer
requires a pile of human-written Codelets that grows linearly with the
knowledge base -- and thus there is a real problem building a Copycat
that can learn its concepts.
In my humble opinion, of course.
Josh
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&