Joshua Fox wrote:
I'd like to raise a FAQ: Why is so little AGI research and development being done?

The answers of Goertzel, Moravec, Kurzweil, Voss, and others all agree on this (no need to repeat them here), and I've read Are We Spiritual Machines, but I come away unsatisfied. (Still, if there is nothing more to say on this question, please do the AGIRI-equivalent of sniping this thread immediately.)

I respect existing AGI researchers, but I am surprised that more members of the "establishment" are not on board. I just can't believe that, for example, almost all leading computer-science/cognitive-science professors are herd-following, closed-minded stick-in-the-muds. The leading universities do have their share of creative, free-thinking, inquisitive people, and the same goes for other parts of the "establishment". To clarify what I am looking for, I should describe a recent conversation. I spoke to an open-minded and intelligent friend who has a PhD from, and does research in, a top university. His research is in exactly the sort of technologies used in brain scanning. I asked him about Kurzweil's trends on the accelerating advance of human-brain-scanning technologies. He did not agree with Kurzweil's conclusions, and explained why.

Likewise, I'm looking for input from an open-minded, intelligent computer/cognitive scientist (who does not strongly support AGI research) on the above question. I don't know where to find them, so perhaps someone on this list could role-play one.

What would s/he say if I asked "Why do you not pursue or support AGI research? Even if you believe that implementation is a long way off, surely academia can study, and has studied for thousands of years, impractical but interesting pie-in-the-sky topics, including human cognition? And AGI, if nothing else, models (however partially and imperfectly with our contemporary technology) essential aspects of some philosophically very important problems."

I think the simple answer (all I've got time for now :-)) is twofold:

1) If you ask why Kurzweil's ideas are not immediately infectious, it is because his claims (and all singularity claims) are not just a few steps beyond the current state of the art: they *look* like a wild leap into the realms of speculation. Not much to be done about this. Slowly, over the next few years, it will become more respectable, and then one day you will wake up to find every researcher on the planet trying to get grants in the new "singularity" field-cum-bandwagon.

2) Researchers need small, biteable, 6-months-to-publishable-paper projects to get their teeth into. They would say that their Narrow-AI research projects ARE the biteable chunks for today that will lead to AGI tomorrow. Why do they do this? Because the people higher up will crucify them if their work starts to get oriented towards anything but a high publication rate in "respectable" journals: stray from that, and they will find promotions slipping, or they'll simply be dumped. Short-term results pressure, in other words.

Richard Loosemore.


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
