I hope this question isn't too forward, but it would certainly help clarify the possibilities for AGI.
 
To those doing AGI development: if, at the end of your project's development stage (say, after roughly five years), you find that it has failed technically to the point of being unsalvageable, what do you think is most likely to have caused it? Let's exclude financial and management considerations from this discussion, and take for granted that a failure is simply a learning opportunity for the next step.

Answers can address functionality or implementation. Some examples: true general intelligence in some areas, but such unintelligence in others that the system is useless; superintelligence in principle, but so limited by hardware capacity as to be useless in practice. But of course, I'm interested in _your_ answers.

Thanks,

Joshua


This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/[EMAIL PROTECTED]