John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
It is easy for a research field to agree that certain problems are
really serious and unsolved.

A hundred years ago, the results of the Michelson-Morley experiments
were a big unsolved problem, and pretty serious for the foundations of
physics.  I don't think it would have been "self-defeating
narrow-mindedness" for someone to have pointed to that problem and said
"this is a serious problem".


Well, the definitions of the problems, and the approaches to solving them, can themselves be narrow-minded, or be viewed from a narrowly human-psychological AI perspective.

Most of these problems boil down to engineering problems, and the theory already exists in some other form; it is a matter of putting the pieces together, IMO.

I think this is a very important issue in AGI, which is why I felt compelled to say something.

As you know, I keep trying to get meaningful debate to happen on the subject of *methodology* in AGI. That is what my claims about the complex systems problem are all about: the very serious possibility that the existing AGI/AI methodology is so badly broken that virtually everything going on right now will be written off by future historians as a complete waste of effort.

In that context - where there is something of an agreement about what the big unsolved problems are, and where I have raised questions about the very foundations of today's AGI methodology - it is truly astonishing to hear people talk about these issues as being more or less solved, bar the shouting.



Richard Loosemore

P.S. BTW, it isn't really anything to do with taking a cognitive science perspective. Don't forget that I come from a hybrid background: I am not a cognitive scientist encroaching on hard-science AI and computing; I have done both sides in equal measure.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71525665-a80bc7
