In my way of seeing things, when projects reach that "non-salvageable" status, one is likely to find serious theoretical errors, possibly made at the beginning of the journey. That's a problem we cannot avoid, because none of us knows precisely what we must do to achieve general intelligence. Hell, we don't even agree on what intelligence is in the first place. To design and build an intelligent system we invariably pose as a group of engineers. But this is not like building a sophisticated bridge or an innovative building. We don't have a set of physical laws, or accumulated chemical experience with concrete. We don't have a good set of universally accepted theoretical principles. The easiest way to see this is to compare the enormous variation in "first principles" embraced by different projects (try listing the ones for symbol-based systems, neural systems, evolutionary systems, hybrid systems, interactive systems, knowledge-based systems, etc.). Most projects start from incompatible premises. So what one can expect to learn on reaching that sad point of seeing no way to continue is just this: that one's premises weren't good. Not many clues about what the right ones might be.
However, these days I would say that we can list some principles with which any successful project must comply. Anyone want to start the list?
Sergio Navega.
