Well-developed AGI ideas can be quite concrete and can also pay off in the
near future. Our project's business plan aims at both.

Peter


-----Original Message-----
From: Eliezer S. Yudkowsky [mailto:[EMAIL PROTECTED] 

Additional factor:  AGI ideas are often vague or analogical.  Even the 
ideas with mathematically describable internals are often vague in the 
explanation of what they are supposed to do, or why they are supposed to 
be "intelligent".  It would be harder to cooperate on a project like 
that than on developing a faster sorting algorithm.  Fuzzy beliefs are 
harder to communicate, and communication is the essence of cooperation.


-----Original Message-----
From: Neil H. [mailto:[EMAIL PROTECTED] 

Of course, one might also argue that they simply didn't venture far
enough to see the proverbial "light at the end of the tunnel." I
suppose one of the downsides of AGI is that, unlike more focused AI
research (vision, NLP, etc.), there really aren't any intermediate
payoffs between now and the "holy grail."


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]