Robin Hanson wrote:
I've been invited to write an article for an upcoming special issue of
IEEE Spectrum on "Singularity", which in this context means rapid and
large social change from human-level or higher artificial
intelligence. I may be among the most enthusiastic authors in that
issue, but even I am somewhat skeptical. Specifically, after ten years
as an AI researcher, my inclination has been to see progress as very
slow toward an explicitly-coded AI, and so to guess that the whole brain
emulation approach would succeed first if, as it seems, that approach
becomes feasible within the next century.
But I want to try to make sure I've heard the best arguments on the
other side, and my impression was that many people here expect more
rapid AI progress. So I am here to ask: where are the best analyses
arguing the case for rapid (non-emulation) AI progress? I am less
interested in the arguments that convince you personally than arguments
that can or should convince a wide academic audience.
I gave my answer to this question in a paper I presented at the 2006
AGIRI workshop on Artificial General Intelligence [1].
Stripped to its core, the argument is that AI progress has been slow for
a specific reason, not because the problem is intrinsically hard. The
reason for the slow progress is a fundamental misperception of the
nature of the AI problem: intelligent systems (by which I mean
completely general intelligent systems that are capable of acquiring
knowledge on their own initiative) *probably* contain an irreducible
element of complexity, in the 'complex systems' sense of 'complexity'.
The two main consequences of this complexity are that (1) we would expect
some of an AI's low-level mechanisms to have an opaque relationship to
the AI's overall behavior (i.e. there are mechanisms down there that do
not look like they have any bearing whatsoever on the intelligence of
the overall system, and yet they play an indispensable role in the
system's intelligent performance), and (2) the only way to get around
the problems caused by (1) would be to make a systematic effort to
emulate the human cognitive system -- not at the neural level, mark you,
but at the cognitive level.
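As a toy illustration of this 'complex systems' sense of complexity (an elementary cellular automaton standing in for the real case, not a model of an AI), consider a system whose entire low-level mechanism is an eight-entry rule table: the table's relationship to the global behavior is opaque, and changing a single entry flips that behavior completely.

```python
# Toy illustration: an elementary cellular automaton. The eight-entry
# local rule table has an opaque relationship to the global pattern it
# produces -- a one-bit change flips the behavior from chaotic to trivial.

def step(cells, rule):
    """Apply an elementary CA rule number to one row (wrap-around edges)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=64, steps=32):
    """Evolve from a single live cell; return the list of rows."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

# Rule 30 and Rule 14 differ in exactly one bit of the rule table
# (30 ^ 14 == 16, i.e. the entry for neighborhood '100'), yet Rule 30
# is chaotic while Rule 14 merely shifts a two-cell block leftwards.
chaotic = run(30)
trivial = run(14)
print(sum(chaotic[-1]), sum(trivial[-1]))
```

Nothing about Rule 30's table "looks" chaotic; the complexity only appears when the system runs -- which is the sense in which an AI's low-level mechanisms could be indispensable and yet appear to have no bearing on the system's intelligence.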
The final conclusion of the argument I give in the paper is an
interesting sociology-of-science observation that bears directly on your
question of how rapidly we could get to full AGI: unfortunately, the AI
community is populated with people who have an extremely strong bias
against accepting these arguments, and this strong bias is what is
holding back progress. Basically, 'traditional' AI people have an
almost theological aversion to the idea that the task of building an AI
might involve having to learn (and deconstruct!) a vast amount of
cognitive science, and then use an experimental-science methodology to
find the mechanisms that really give rise to AI. AI people are, at
heart, mathematicians, and this is a serious problem if the only way to
succeed has little to do with mathematics.
Looked at in this way, the answer to your question is that if a new type
of AI comes along (what I have dubbed 'theoretical psychology' because
of its unique relationship to AI and psychology) and if it gathers
enough support, we could find that the progress rate of this new
approach bears no relationship to the progress rate of AI over the last
fifty years.
I have started the process of building the infrastructure needed to do
this kind of work. So far this is working well: among other things, a
colleague of mine (Trevor Harley) and I have started re-analyzing the
literature of cognitive science to bring it into line with the new
approach, and our efforts have met with some surprising early successes
(the first fruits of this effort being a cognitive neuroscience paper
that is currently in press [2]). From my point of view, old-style
cognitive science and old-style AI are both falling neatly and elegantly
into this new framework, so my personal feeling is that a new period of
rapid progress is just over the horizon, and that human-level AGI might
happen in the coming decade.
If it were not for this particular way of seeing the problems of AI, I
would be with the skeptics: I think that conventional AI will not yield
a singularity-class AGI for a long time (if ever), and I believe that
the brain-emulation folks are being wildly optimistic about what they
can achieve, because they are blind to functional-level issues, and do
not have the resolution or in-vivo tools needed to reach their goals.
Regards,
Richard Loosemore
References.
[1] Loosemore, R.P.W. (2007). Complex Systems, Artificial Intelligence
and Theoretical Psychology. In B. Goertzel & P. Wang (Eds.), Proceedings
of the 2006 AGI Workshop. Amsterdam: IOS Press. Available online at
http://www.agiri.org/wiki/Workshop_Proceedings (chapter 11).
[2] Loosemore, R.P.W. & Harley, T.A. (in press). "Brains and Minds: On
the Usefulness of Localisation Data to Cognitive Psychology". In
M. Bunzl & S. J. Hanson (Eds.), Philosophical Foundations of fMRI.
Cambridge, MA: MIT Press.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email