On 10/11/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  Edward,
>
> Thanks for interesting info - but if I may press you once more. You talk
> of different systems, but you don't give one specific example of the kind of
> useful (& significant for AGI) inferences any of them can produce - as I do
> with my cat example. I'd especially like to hear of one or more from
> Novamente, or Copycat.
>

As a very simple example, if Novamente's evolutionary learning engine has
learned a procedure to play fetch, Novamente's inference engine may allow it
to utilize this procedure as guidance in learning to play hide-and-seek, so
that the system may learn to play hide-and-seek more easily than if it had
never learned to play fetch.
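To caricature the idea in code (a deliberately toy sketch -- the bit-vector representation, the greedy search, and the task encodings below are inventions for illustration, nothing like Novamente's actual MOSES/PLN machinery): if two tasks share structure, starting the search for the second task from the first task's learned solution takes fewer steps than starting from scratch.

```python
# Toy illustration of transfer between tasks (NOT Novamente's machinery):
# a "procedure" is just a bit-vector, and "learning" is greedy search
# toward a target behaviour, one bit-flip per step.

def steps_to_learn(start, target):
    """Count greedy bit-flip steps from `start` until it matches `target`."""
    current = list(start)
    steps = 0
    while current != list(target):
        # Flip the first bit that still disagrees with the target.
        i = next(j for j in range(len(current)) if current[j] != target[j])
        current[i] = target[i]
        steps += 1
    return steps

# Hypothetical task encodings: fetch and hide-and-seek share their first
# four "skill bits" (approach, track, locomote, return), differ in the rest.
fetch_solution = [1, 1, 1, 1, 0, 0, 0, 0]   # already learned
hide_and_seek  = [1, 1, 1, 1, 0, 0, 1, 1]   # new target behaviour

scratch_steps  = steps_to_learn([0] * 8, hide_and_seek)   # no prior skill
transfer_steps = steps_to_learn(fetch_solution, hide_and_seek)

print(scratch_steps, transfer_steps)  # 6 2
```

The point of the toy is only that shared structure between the two tasks is what makes the old procedure useful as a starting point.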

This sort of inferencing, however, involves a number of probabilistic
inference rules acting together in a coordinated way; it's not a single
inference step that I can just paste into an email.

In a logic-based system, as in an NN-based system or anything else, inferences
(including analogical or metaphorical ones) with commonsensical meaning emerge
as complex combinations of simpler, atomic steps.
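As a concrete (and drastically simplified) picture of what "atomic steps combining" means, here is a single independence-style deduction rule chained twice; real PLN deduction carries correction terms and confidence values that this toy omits:

```python
def deduce(p_ab, p_bc):
    # One atomic inference step: estimate P(C|A) from P(B|A) and P(C|B),
    # under the crude assumption that B screens A off from C.
    # (Actual PLN deduction adds correction terms; this is a toy.)
    return p_ab * p_bc

# Chain two atomic steps into a longer inference A -> B -> C -> D.
p_ab, p_bc, p_cd = 0.9, 0.8, 0.7
p_ad = deduce(deduce(p_ab, p_bc), p_cd)
print(round(p_ad, 3))  # 0.504
```

A useful commonsense inference in a real system is a long, coordinated chain of many such steps, not any one of them in isolation.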


> Can you think of a single analogy or metaphor, in addition, that is purely
> symbolic?
>


I don't really understand your definitions of the terms analogy, metaphor or
symbolic...

-- Ben G

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=52800333-ed509f
