> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> 
> The reason it reminds me of this episode is that you are calmly talking
> here about "the high dimensional problem of seeking to understand the
> meaning of text, which often involves multiple levels of implication,
> which would normally be accomplished by some sort of search of a large
> semantic space" ......... this is your equivalent of the anti-gravity
> drive.  This is the part that needs extremely detailed knowledge of AI
> and psychology, just to understand the nature of the problem (never
> mind to solve it).  If you had any idea about how to solve this part of
> the problem, everything else would drop into your lap.  You wouldn't
> need a P2P AGI-at-home system, because with this solution in hand you
> would have people beating down your door to give you a supercomputer.


This is naïve. It almost never works that way: someone with a solution to a
well-known unsolved engineering problem rarely finds that resources simply
come knocking at the door.

 
> Meanwhile, unfortunately, solving all those other issues like making
> parsers and trying to do word-sense disambiguation would not help one
> whit to get the real theoretical task done.

This is impractical. ... 
 
> I am not being negative, I am just relaying the standard understanding
> of priorities in the AGI field as a whole.  Send complaints addressed to
> "AGI Community", not to me, please.

You are being negative! And since when has the understanding of priorities
in the AGI field been standardized? Perhaps that assumption is part of the
limiting factor, and a source of self-defeating narrow-mindedness.

John

