I have REPEATEDLY said I am talking about defining general problem classes,
rather than setting narrow, specialised AI problems. (Show me, BTW, where
there has been discussion of general problem classes - I'd be v. interested.)
Inevitably, in short posts, there are going to be misunderstandings. I
suggest you check whether you've properly understood me, and ask questions
rather than jumping to dismiss me.
Your conclusion, for example, that I was "sadly mistaken" - and that what I
was saying about how the brain makes sense of language and information
generally has all been said before by Kosslyn & co. - is nonsense. What I was
talking
about can be classified under the heading of "psychosemiotics" - the study
of how the hierarchy of human sign systems reflects a parallel hierarchy in
the human brain's information processing. That field doesn't exist yet -
it's virgin territory. And the whole, related area of embodied cognition in
cognitive psychology is also still in its infancy.
----- Original Message -----
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Thursday, May 03, 2007 3:27 PM
Subject: Re: [agi] The University of Phoenix Test [was: Why do you think
your AGI design will work?]
Mike Tintner wrote:
James,
It's interesting - there is a huge block here - and culture-wide - to
thinking about intelligence in terms of problems as opposed to the means
and media of solution (or, if you like, the tasks vs. the tools).
Your list is all about means - AGI that uses this language or that, and
that uses a body or not. Similarly, Pei's and Ben's expositions of their
systems are all about how they work rather than what they do.
Everything I listed started at the other end - with types of problems.
And that is how you do indeed have to start.
Mike,
I think your comments in this thread are an interesting mix of good
insight and (unfortunately) bad mistakes. ;-)
On one level, I think you are showing insight by feeling frustrated with
some of the things that you believe are missing from the AGI approaches
you have read about here. I feel frustrated too, so I am the last
person to disagree with you, in general.
But you are making the wrong criticisms. You are advocating a strategy
that has been tried and has failed, and out of whose failure was born the
focus on mechanism that you see now.
In other words, long ago in AI people really did believe that they
should figure out what problem their system should solve, and then focus
on how to get it to solve that problem. The result of that attitude was
a long period when people attacked all kinds of problems, but in such a
narrow way that nothing could be generalized. End result: Narrow AI,
which we are all now trying to avoid.
I have a feeling that you are speaking on the basis of only a surface scan
of the subject: forgive the implied criticism, but greater depth of
reading might answer some of your worries.
Richard Loosemore.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&