Mike Tintner wrote:
James,
It's interesting - there is a huge general block here - and culture-wide - against thinking about intelligence in terms of problems as opposed to the means and media of solution (or, if you like, the tasks vs. the tools). Your list is all about means - AGI that uses this language or that, and that uses a body or not. Similarly, Pei's and Ben's expositions of their systems are all about how-it-works rather than what-it-does. Everything I listed started at the other end - with types of problems. And that is how you do indeed have to start.

Mike,

I think your comments in this thread are an interesting mix of good
insight and (unfortunately) bad mistakes.  ;-)

On one level, I think you are showing insight by feeling frustrated with
some of the things that you believe are missing from the AGI approaches
you have read about here.  I feel frustrated too, so I am the last
person to disagree with you, in general.

But you are making the wrong criticisms: you are advocating a strategy that was tried and failed, and out of whose failure was born the very focus on mechanism that you see now.

In other words, long ago in AI people really did believe that they
should figure out what problem their system should solve, and then focus
on how to get it to solve that problem.  The result of that attitude was
a long period when people attacked all kinds of problems, but in such a
narrow way that nothing could be generalized.  End result:  Narrow AI,
which we are all now trying to avoid.

I have a feeling that you are speaking on the basis of only a surface scan of the subject. Forgive the implied criticism, but greater depth of reading might answer some of your worries.

Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936