--- Peter Voss <[EMAIL PROTECTED]> wrote:
> My question: What specifically is Cyc unable to do? What are the tests, and
> how did it fail?

Cyc has not solved the user interface problem.  It does not understand natural
language, it does not learn, and it requires users to enter knowledge manually
in an obscure, structured language.  This is simply not a practical approach.
They have been hand-entering common-sense information since 1984.  In 1994
Lenat predicted that within 5 years Cyc would be on every computer and would
solve the software brittleness bottleneck.  But if you play the FACTory game
at http://www.cyc.com/ you will get a sense of how shallow this database still
is compared to what the average human knows.  Its deductive reasoning mostly
generates useless facts, such as (from the game) "most shirts weigh more than
most appendixes".  Logic is a poor model of human knowledge.

Cyc, like a lot of AGI projects, seems to lack a well-defined goal: "Let's
build it and see what happens; we will know AGI when we see it."  No, we
won't.  As Russell Wallace said, intelligence is not a scalar quantity.  We
already have machines that are vastly more intelligent than humans in some
areas and less in others.  If the goal of Cyc is to make computers more
usable, it has certainly not done that.

Common sense is useless without natural language.  Solve the language problem
first.  I believe this has to be done by modeling childhood development. 
Human knowledge is like a broad pyramid with sensory and motor I/O at the base
and abstract, adult level knowledge at the tip.  Cyc and other expert systems
skip right to the abstract knowledge for the sake of computational efficiency.
But to communicate with users you need the whole pyramid.  Natural language
has a structure that allows it to be learned bottom up, layer by layer:
phonemes (one month), then word segmentation (7 months), then word semantics
(12 months), then grammatical structures (2 years), and only then can you
teach it logical connectives (and, or, if) and arithmetic (5 years).  In NLP
there are many examples of successes using bottom up models (e.g. information
retrieval), and many examples of failures using top down models (e.g. parsers,
expert systems).
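
To make the word-segmentation layer concrete, here is a minimal sketch of a
bottom-up statistical segmenter.  The toy corpus, the unseen-word penalty, and
the function names are my own illustrative assumptions, not anything from Cyc
or any particular NLP system: word frequencies are learned from running text,
and unspaced input is then segmented by dynamic programming over negative log
probabilities.

```python
import math
from collections import Counter

# Hypothetical toy corpus; a real system would learn counts from large text.
corpus = "the cat sat on the mat the cat ate".split()
counts = Counter(corpus)
total = sum(counts.values())

def word_cost(word):
    # Negative log probability of a word; unseen strings get a heavy
    # per-character penalty (an arbitrary assumed constant).
    if word in counts:
        return -math.log(counts[word] / total)
    return 10.0 * len(word)

def segment(text, max_len=10):
    # Dynamic programming: best[i] = lowest cost to segment text[:i].
    best = [0.0] + [float("inf")] * len(text)
    back = [0] * (len(text) + 1)
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - max_len), i):
            cost = best[j] + word_cost(text[j:i])
            if cost < best[i]:
                best[i], back[i] = cost, j
    # Recover the boundaries by backtracking.
    words, i = [], len(text)
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words))
```

The point of the sketch is that nothing top-down (no grammar, no lexicon
entered by hand) is needed: the boundaries fall out of frequency statistics,
which is roughly what a 7-month-old appears to exploit.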


-- Matt Mahoney, [EMAIL PROTECTED]
