I agree with Ben's post that this kind of system has been tried many times
and has produced very little.  How can a collection of statements like
"Cats have claws;  Kitty is a cat;  therefore Kitty has claws" capture the
relationship between "cat" and "Kitty", or the fact that "kitty" is slang
normally used for a young cat?  A database of this type seems like the
Chinese room dilemma: even if you got something that looked intelligent
out of the system, you would know for a fact that no intelligence exists.
Knowing that a cat is a mammal, as are people and dogs, can only come from
a huge collection of interrelated models that show the relationships,
properties, abilities, etc. of all of these things.  Such models could
(probably) be created automatically from the kind of information tidbits
you suggest, but the process would be very messy and the database would be
enormous.  It would amount to the AI trying to extract the rules and
relations of things from a huge pile of word facts.  Why not build the
rules and relationships into the AI from the beginning, populating the
models with relevant facts as you go?  This could be done with much less
labor by using the AI itself to build the models, with multiple
individuals applying progressively higher levels of teaching methods.
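To make the limitation concrete, here is a minimal Python sketch (purely
illustrative, not anyone's actual system) of a flat store of word facts
plus the single inheritance rule Ben quotes below, "if X is-a Y and Z(Y),
then Z(X)":

facts = {
    ("kitty", "is-a", "cat"),
    ("cat", "is-a", "mammal"),
    ("mammal", "is-a", "animal"),
    ("cat", "has", "claws"),
}

def derive(facts):
    """Apply the is-a inheritance rule until no new facts appear."""
    known = set(facts)
    while True:
        new = set()
        for (x, rel, y) in known:
            if rel != "is-a":
                continue
            # X inherits every fact stated about Y, including is-a links.
            for (subj, rel2, obj) in known:
                if subj == y:
                    new.add((x, rel2, obj))
        if new <= known:
            return known
        known |= new

closure = derive(facts)
print(("kitty", "has", "claws") in closure)    # True: the syllogism works
print(("kitty", "is-a", "mammal") in closure)  # True: is-a chains compose
# But no triple here can say that "kitty" is slang for a young cat; that
# kind of knowledge has no place in a flat pile of word facts.

The deduction goes through, but everything the triples cannot express is
exactly what the interrelated models would have to supply.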



Computer languages already use a strict subset of English for their
syntax.  People use English to communicate with each other.  Why would we
want to use a new language like Lojban when we already use subsets of
English with computers?  Why does an arbitrary English sentence have to be
unambiguous?  Most of the time ambiguity isn't a problem for English
speakers, and where it might be, why couldn't it be clarified the same way
we humans clarify things all the time?  The teachers of the AI could
intentionally use an unambiguous subset of English and gradually use more
and more sophisticated sentences as the intelligence of the AI progressed.
Isn't this what we do with children as they grow up?  Most people verify
that they understand instructions before they actually act on them, and
potential misunderstandings are normally avoided.  Why can't we do the
same with an AI?  Adding another language won't eliminate the need for the
humans to use English or for the computer to use its English-subset
language.  Whatever ambiguity problem exists between humans and computers
will simply be moved to between the human and the new language, for no net
benefit.
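
As a toy illustration (hypothetical sentence patterns of my own invention,
not a proposal for a real grammar), a teacher's sentences could be limited
to a few fixed forms that each map onto exactly one assertion, with
anything outside the subset triggering a request to rephrase:

import re

# Hypothetical controlled-English patterns; each sentence form maps to
# exactly one relation, so there is nothing to disambiguate.
PATTERNS = [
    (re.compile(r"^(\w+) is a (\w+)\.$", re.IGNORECASE), "is-a"),
    (re.compile(r"^(\w+) has (\w+)\.$", re.IGNORECASE), "has"),
    (re.compile(r"^(\w+) can (\w+)\.$", re.IGNORECASE), "can"),
]

def parse(sentence):
    """Return (subject, relation, object), or None so the AI can ask the
    teacher for clarification -- just as people do with each other."""
    for pattern, relation in PATTERNS:
        match = pattern.match(sentence.strip())
        if match:
            return (match.group(1).lower(), relation,
                    match.group(2).lower())
    return None  # outside the subset: ask the teacher to rephrase

print(parse("Kitty is a cat."))            # ('kitty', 'is-a', 'cat')
print(parse("Time flies like an arrow."))  # None -> request clarification

As the AI progressed, the teachers would add and loosen patterns, the same
way we grade our language for children.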

David Clark

----- Original Message ----- 
From: "Benjamin Goertzel" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Thursday, January 18, 2007 1:28 PM
Subject: Re: [agi] Project proposal: MindPixel 2


> YKY, this kind of thing has been tried many dozens of times in the
> history of AI.
>
> It does not lead to interesting results!  Alas...
>
> The key problem is that you can't feasibly encode enough facts to
> allow interesting commonsense inferences -- commonsense inference
> seems to require a very massive store of highly uncertain
> knowledge-items, rather than a small store of certain ones.
>
> BTW the rule
>
>  "if X is-a Y and Z(Y), then Z(X)".
>
> exists (in a slightly different form) in Novamente and many other
> inference systems...
>
> I feel like you are personally rediscovering GOFAI, the kind of AI
> that I read about in textbooks when I first started exploring the
> field in the early 1980's!!!!
>
> Ben G
>
> > Thanks for the tips.  My idea is quite simple, slightly innovative, but
> > not groundbreaking.  Basically, I want to collect a knowledgebase of
> > facts as well as rules.  Facts are like "water is wet" etc.  The rules
> > I explain with this example:  "Cats have claws;  Kitty is a cat;
> > therefore Kitty has claws."  Here is an implicit rule that says "if X
> > is-a Y and Z(Y), then Z(X)".  I call rules like this the "Rules of
> > Thought".  They are not logical tautologies but they express some
> > common thought patterns.
> >
> > My theory is that if we collect a bunch of these rules, add a database
> > of common sense facts, and add a rule-based FOPL inference engine
> > (which may be enhanced with eg Pei Wang's numerical logic), then we
> > have a common sense reasoner.  That's what I'm trying to build as a
> > first-stage AGI.
> >
> > If it does work, there may be some commercial applications for such a
> > reasoner.  Also it would serve as the base to build a full AGI capable
> > of machine learning etc (I have crudely worked out the long-term plan).
> >
> > So, is this a good business idea?
> >
> > YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
