On Mon, Nov 12, 2007 at 08:44:58PM -0500, Mark Waser wrote:
> 
> >So perhaps the AGI question is, "what is the difference between
> >a know-it-all mechano-librarian, and a sentient being?"
> 
> I wasn't assuming a mechano-librarian.  I was assuming a human that could 
> (and might be trained to) do some initial translation of the question and 
> some final rephrasing of the answer.

I'm surprised by your answer. 

I don't see that the "hardest part" of AGI is NLP i/o. To put it into
perspective: one can fake up some trivial NLP i/o now, and with a bit of
effort, one can improve significantly on that.  Sure, the result would be
child-like conversation, and the system would be incapable of learning
new idioms, expressions, etc., but I don't see that you'd need a human
to translate the question into some formal reasoning-engine language.

The hard part of NLP is being able to read complex texts, whether
Alexander Pope or Karl Marx; but a basic NLP i/o interface stapled to
a reasoning engine doesn't really need to do that, or at least not well.
Yet these two stapled together would qualify as a "mechano-librarian"
for me.
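To make concrete what I mean by "trivial NLP i/o stapled to a reasoning
engine", here is a toy sketch. Everything in it (the fact base, the
question patterns, the function names) is made up for illustration; a
real system of this shape would of course be far more elaborate:

```python
import re

# A minimal "reasoning engine": a fact base plus one-step lookup.
# (Hypothetical facts, chosen only for the example.)
FACTS = {
    ("horse", "is-a"): "mammal",
    ("horse", "eats"): "grass",
}

def parse_question(text):
    """Crude NLP input: map a stock question pattern to a formal query."""
    m = re.match(r"what does an? (\w+) eat\?", text.lower())
    if m:
        return (m.group(1), "eats")
    m = re.match(r"what is an? (\w+)\?", text.lower())
    if m:
        return (m.group(1), "is-a")
    return None  # no idiom matched; the system cannot learn new ones

def answer(text):
    """Query the engine, then rephrase the result as child-like English."""
    query = parse_question(text)
    if query is None:
        return "I don't understand the question."
    value = FACTS.get(query)
    if value is None:
        return "I don't know."
    subject, relation = query
    phrase = "is a" if relation == "is-a" else relation
    return f"A {subject} {phrase} {value}."

print(answer("What does a horse eat?"))  # -> A horse eats grass.
```

The point of the sketch is the division of labor: the NLP layer is a
handful of fixed patterns, and all the actual work would live in the
fact base and whatever inference sits behind it.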

To me, the hard part is still the reasoning engine itself, and the 
pruning, and the tailoring of responses to the topic at hand. 

So let me rephrase the question: If one had
1) A reasoning engine that could provide short yet appropriate responses
   to questions, and
2) A simple NLP interface to that reasoning engine,

would that be AGI?  I imagine most folks would say "no", so let me throw
in: 

3) The ability to learn new NLP idioms, so that the system can eventually
come to understand the sentences and paragraphs that make Karl Marx so
hard to read.

With this enhanced reading ability, it could then presumably become a
"know-it-all" ultra-question-answerer. 

Would that be AGI? Or is there yet more? Well, of course there's more:
one expects creativity, aesthetics, ethics. But we know just about
nothing about those.

This is the thing that I think is relevant to Robin Hanson's original
question.  I think we can build 1+2 in short order, and maybe 3 in a
while longer.  But the result of 1+2+3 will almost surely be an
idiot-savant: it would know everything about horses, and could talk
about them at length, but, like a pedantic lecturer, the droning would
put you to sleep.  So is there more to AGI, and exactly how do we start
laying hands on that?

--linas

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=64661358-af169f
