Hi,

The following was my brief reply when someone recently asked me why I think AGI is coming:
1. New constructive theories and engineering plans for AGI have begun to appear after decades of vacancy on this topic --- AGI won't be possible until someone begins to try.

2. All proposed arguments for the impossibility of AGI have failed to settle the debate --- if something isn't proven impossible, it remains possible.

3. More and more people are getting disappointed by mainstream AI research --- if you want AGI, you must work on it directly, not on a "piece" cut from it arbitrarily.

4. Advances in computer techniques, both in hardware and software, make system development much easier --- an individual or a small team can go quite far.

5. The Web lets the small number of AGI believers speak to and hear from each other, and an AGI community is forming --- it is no longer only the widely accepted opinions that can be heard.

6. There has been theoretical progress in the related cognitive sciences --- to build AGI, we first need to understand the "I" in it.

As for the "rapid progress" part of your question, of course it will be considered "rapid" compared to the last two decades, when there wasn't much progress in this direction at all.

I don't expect the above answer to "convince a wide academic audience" --- that requires a much more detailed and technical analysis. In my opinion, even when AGI is finally achieved, it will still take some people some time to acknowledge its intelligence, since it will be very different from their expectations.

Pei Wang
http://nars.wang.googlepages.com/

On Nov 10, 2007 6:41 AM, Robin Hanson <[EMAIL PROTECTED]> wrote:
> I've been invited to write an article for an upcoming special issue of
> IEEE Spectrum on "Singularity", which in this context means rapid and
> large social change from human-level or higher artificial intelligence.
> I may be among the most enthusiastic authors in that issue, but even I
> am somewhat skeptical.
> Specifically, after ten years as an AI researcher, my inclination has
> been to see progress as very slow toward an explicitly-coded AI, and so
> to guess that the whole brain emulation approach would succeed first if,
> as it seems, that approach becomes feasible within the next century.
>
> But I want to try to make sure I've heard the best arguments on the
> other side, and my impression was that many people here expect more
> rapid AI progress. So I am here to ask: where are the best analyses
> arguing the case for rapid (non-emulation) AI progress? I am less
> interested in the arguments that convince you personally than arguments
> that can or should convince a wide academic audience.
>
> [I also posted this same question to the sl4 list.]
>
> Robin Hanson [EMAIL PROTECTED] http://hanson.gmu.edu
> Research Associate, Future of Humanity Institute at Oxford University
> Associate Professor of Economics, George Mason University
> MSN 1D3, Carow Hall, Fairfax VA 22030-4444
> 703-993-2326 FAX: 703-993-2323

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=63814140-f1835c
