The way I see it, when a project reaches that "non-salvageable"
status, one is likely to find serious theoretical errors, possibly
made at the beginning of the journey. That's a problem we cannot
avoid, because none of us knows precisely what we must do to achieve
general intelligence. Hell, we don't even agree on what intelligence
is in the first place. To design and build an intelligent system we
invariably pose as a group of engineers. But this is not like building
a sophisticated bridge or an innovative building. We don't have a set
of physical laws, or accumulated chemical knowledge about concrete. We
don't have a good set of universally accepted theoretical principles.
The easiest way to see this is to compare the enormous variation in
"first principles" embraced by different projects (try listing the
ones for symbol-based systems, neural systems, evolutionary systems,
hybrid systems, interactive systems, knowledge-based systems, etc.).
Most projects start from incompatible premises. So what one can expect
to learn, upon reaching that sad point of seeing no way to continue,
is just this: that one's premises weren't good. Not many clues about
what the right ones might be.
However, as things stand today, I would say that we can list some
principles with which any successful project must comply. Anyone want
to start the list?
 
Sergio Navega.
 
 
----- Original Message -----
From: Joshua Fox
Sent: Monday, September 25, 2006 9:52 AM
Subject: [agi] Failure scenarios

I hope this question isn't too forward, but answering it would certainly help clarify the possibilities for AGI.
 
To those doing AGI development: If, at the end of the development stage of your project -- say, after approximately five years -- you find that it has failed technically to the point that it is not salvageable, what do you think is most likely to have caused it? Let's exclude financial and management considerations from this discussion; and let's take for granted that a failure is just a learning opportunity for the next step.

Answers can be oriented to functionality or implementation. Some examples: true general intelligence in some areas, but so unintelligent in others as to be useless; super-intelligence in principle, but severely limited by hardware capacity to the point of uselessness. But of course, I'm interested in _your_ answers.

Thanks,

Joshua

 
