Three issues have been raised in this thread, by different people:

1.  Richard Loosemore:  Symbol names -- should they be system-generated or
human-entered?

This is a good question.  In Cyc there are so-called "pretty names" (English
terms that describe Cyc concepts), but they are not sophisticated enough to
allow automatic translation of the Cyc KB into English.

The dilemma is that 1) if symbol names are machine-generated, they will not
be human-readable; but 2) the machine must be able to *create* new
symbols during learning.

My solution:  all symbols in the system have *no names*;  they are
referenced only by number.  The natural-language part of the KB contains
facts/rules for translating symbols into words.  For example, if block
A is on top of block B, there will be a link for "on_top_of", but it will
just be a nameless link.  The natural-language KB will contain rules for
translating that "on_top_of" into human language.

In other words, when we specify the knowledge representation scheme, we
specify the symbols as if they were machine-generated, and we simultaneously
specify the rules for natural-language translation.
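To make the idea concrete, here is a minimal sketch of a KB with nameless, number-only symbols and a separate natural-language translation table. All the identifiers, IDs, and the triple layout below are hypothetical illustrations, not part of any actual system:

```python
# Symbols are just numbers; the KB never stores human-readable names.
# (All IDs and names here are made up for illustration.)
BLOCK_A = 1001
BLOCK_B = 1002
ON_TOP_OF = 2001  # a nameless relation, known only by its ID

# Facts are triples of symbol IDs: (relation, arg1, arg2).
facts = [(ON_TOP_OF, BLOCK_A, BLOCK_B)]

# The natural-language part of the KB holds the translation rules;
# they live *beside* the symbols, not inside them.
nl_rules = {
    ON_TOP_OF: "{0} is on top of {1}",
    BLOCK_A: "block A",
    BLOCK_B: "block B",
}

def translate(fact):
    """Apply the NL rules to render a nameless fact as English."""
    rel, a, b = fact
    return nl_rules[rel].format(nl_rules[a], nl_rules[b])

print(translate(facts[0]))  # -> block A is on top of block B
```

The point of the split is that deleting `nl_rules` leaves the reasoning KB untouched: inference operates on the numbers alone, exactly as it would on machine-generated symbols.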

The bottom line:  we can insert rules into the system *as if* they were
machine-generated.  After that they are indistinguishable from machine-
generated representations.  So why so much aversion to human-entered
facts/rules?

2.  Josh Storrs, John Scanlon: "Numeric representation better than symbolic
representation".

Ben has given his version of the answer.  My view is very similar to his.
Josh keeps saying that logic cannot represent certain things, e.g. a chipmunk
resembling a leaf blowing in the wind.  In probabilistic logic this CAN be
represented, because the definition of "leaf" is a *weighted sum*
of features, and the jumping chipmunk shares many of those features with
the leaf.
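A toy sketch of that weighted-sum idea (the features and weights are purely illustrative numbers I have made up, not taken from any real system):

```python
# "Leaf" defined as a weighted sum of features; weights are invented
# for illustration and sum to 1.0.
leaf_weights = {
    "small": 0.2,
    "light": 0.2,
    "erratic_motion": 0.4,
    "brown_or_green": 0.1,
    "flat_shape": 0.1,
}

def leafness(observed_features):
    """Degree to which an observation matches 'leaf': the weighted
    sum of the leaf features it exhibits."""
    return sum(w for f, w in leaf_weights.items() if f in observed_features)

# A jumping chipmunk shares most of the leaf's features, so it
# scores high on 'leafness' without literally being a leaf.
chipmunk_in_wind = {"small", "light", "erratic_motion", "brown_or_green"}
print(leafness(chipmunk_in_wind))  # high score, close to a true leaf's
```

Nothing here requires a vector space: the graded resemblance falls out of the probabilistic (weighted) definition itself.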

If probabilistic logic can handle *all* aspects of AGI, why use a more
complicated method such as a vector space?  I'm not a master of math as Ben
is, but I think many logical operations (e.g. abduction) are so non-linear
that their n-space counterparts are almost impossible to fathom.

Bottom line:  what does the numerical representation offer that the
logico-numerical form does not?

3.  Mark Waser, David Clark:  "Many representations, communicating via
protocols".

My proposal is to have the entire AGI run by a *central* inference engine
(plus truth maintenance system etc etc, let's call the whole shebang a
"cognitive engine").  For this to work, the representation has to be
uniform.

Why use a *centralized* cognitive engine, you ask?  Well, basically because
such an engine is very complicated and not easy to build -- it has to have
probabilistic logic, an efficient deduction algorithm, efficient search
mechanisms, etc.  It would be a good thing if we could design and write
it just once and solve the whole AGI problem.

The alternative is to let people in different AI subfields write their own
modules and glue them together via communication protocols.  But what
if the AGI faces a *new*, unseen problem?  The whole point of having an AGI
is that it is *general*, isn't it?  At first it might seem that having
specialized algorithms for (say) vision is cool, but when you give the
vision module a long list of requirements -- reading fonts, playing
board games, understanding drawings, etc. -- you may find that the vision
module needs to become more and more *general*, to the point where you're
almost building the general cognitive engine again.

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email