On 5/1/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Well, that really frustrates me. You just can't produce a machine that's going to work, unless you start with its goal/function.
I think you are making the error of projecting methodologies appropriate for narrow, purpose-specific machines onto the quite different problem of designing AGIs... My colleagues at Novamente LLC have built plenty of purpose-specific software systems for customers, so it's not as though we're unable to work in the manner you're suggesting. We just find it inappropriate for the AGI task.
> The obvious and most basic type of adaptive problem it seems to me that agents/robots should start with is navigational.
Navigation IMO is a relatively narrow problem that can likely be solved pretty effectively by narrow-AI methods, without need for a really broad and robust AGI. So I don't view it as a great "incremental problem" for AGI. On the other hand, a task like "learning the rules of new games via communication with humans, and then being able to play those games effectively" does seem to me like an appropriate "incremental problem" to orient one's work toward, on the gradual path toward AGI.

That said, we will likely be approaching the navigation problem with Novamente during the next year, due to our intended business course of applying our proto-AGI system to control virtual agents in simulation worlds.

-- Ben G

----- This list is sponsored by AGIRI: http://www.agiri.org/email
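[Editorial illustration of the point above: that basic navigation is tractable with standard narrow-AI search, no AGI required. A minimal sketch using breadth-first search on a grid map; the grid, start, and goal are illustrative assumptions, not anything from the Novamente system discussed in the thread.]

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Return a shortest path from start to goal as a list of (row, col)
    cells, or None if the goal is unreachable. Cells with value 1 are
    obstacles; 0 is free space. Plain BFS -- a textbook narrow-AI method."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # maps each visited cell to its parent
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

# Example: navigate around a wall in a 3x3 map.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))
```

A real robot would add sensing, mapping, and continuous motion (e.g. A* or D* over an occupancy grid), but the core of the problem stays a well-understood search task.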
