Ben said:

> The article contains the quote:
>
> **
> "David Yarowsky, associate professor of computer science, co-leads the
> Natural Language Processing, or NLP, research group. "A lot of people in
> computer science don't worry about whether computers think, or what
> qualifies as intelligence," says Yarowsky. "That is a philosophical
> question in the realm of Sartre or Kierkegaard, up there with the
> question of 'What is the meaning of life?' After a while, what does it
> matter? If the computer gets so good at something that it looks like
> it's intelligence, maybe you can just call it that."
> **
>
> This is a very typical attitude in the academic AI community. I feel
> it's partly right.
I was about to quote the same paragraph, and to say that it is totally
wrong, and that it is a major reason why mainstream AI has been
unsuccessful.

> Yes, the exact true definition of "intelligence" is merely a subject
> for a philosophical debate. In fact, it's a fairly pointless
> philosophical debate, much more so in my view than the issues pursued
> by Kierkegaard or Sartre, who were considering more essential things.

Yes, there is no such thing as a "true definition of intelligence", but
that doesn't mean that every definition is equally good, nor that anyone
can be neutral on this issue --- there is no way to work on AI without a
working definition of intelligence, though many people take their
definition to be "natural" or "obvious", without justification. To call
it "merely a philosophical debate" carries the danger of unconsciously
taking a bad philosophical position.

The key issue is: different definitions lead to different research
paradigms, and finally, to different results. For a long argument on
this topic, see
http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/#intelligence.
I'm not claiming that my definition is the correct one, but that this is
a crucial issue to be considered by anyone in the field.

I share the bad feeling toward "pointless philosophical debate", which
usually ends up with nothing achieved. However, that doesn't mean that
all philosophical issues are irrelevant to AI, or that we can be
"philosophy-free".

> On the other hand, this doesn't mean that making ANY distinctions
> regarding intelligence is meaningless.
>
> I continue to believe that "degree of generality of scope" is a
> meaningful qualifier to apply to intelligent systems, so that we can
> speak about narrow AI vs. general AI.

I agree, though I think the difference in scope is secondary. To me, the
primary difference is: most AI studies are about concrete "capacities",
but I believe AI should be about abstract "principles".
Again, see the above paper for a long argument (with which Ben is
already familiar, but other people on the list may not be).

Pei
