Looking at past and current (likely) failures –

 

  • trying to solve the wrong problem in the first place, or
  • not having good enough theory/approaches to solving the right problems, or
  • poor implementation

 

However, even though you specifically restricted your question to technical matters, by far the most important reasons are 'managerial' – i.e. staying focused on general intelligence, funding, project management, etc.

 

Peter

 

http://adaptiveai.com/faq/index.htm#little_progress

 


From: Joshua Fox [mailto:[EMAIL PROTECTED]
Sent: Monday, September 25, 2006 5:53 AM
To: agi@v2.listbox.com
Subject: [agi] Failure scenarios

 

I hope this question isn't too forward, but it would certainly help clarify the possibilities for AGI.

 

To those doing AGI development: If, at the end of the development stage of your project -- say, after approximately five years -- you find that it has failed technically to the point that it is not salvageable, what do you think is most likely to have caused it? Let's exclude financial and management considerations from this discussion, and let's take for granted that a failure is just a learning opportunity for the next step.


Answers can be oriented to functionality or implementation. Some examples: true general intelligence in some areas, but so unintelligent in others as to be useless; super-intelligence in principle, but so severely limited by hardware capacity as to be useless. But of course, I'm interested in _your_ answers.


Thanks,

Joshua

 


This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/[EMAIL PROTECTED]
