Hi Robin.  In part it depends on what you mean by "fast".
 
1. "Fast" -> less than 10 years.
 
I do not believe there are any strong arguments for general-purpose AI being 
developed in this timeframe.  The argument here is not that it is likely, but 
rather that it is *possible*.  Some AI researchers, such as Marvin Minsky, 
believe that we already have the necessary hardware commonly available, if we 
only knew what software to write for it.  If, as seems likely, there is a large 
economic incentive for the development of this software, it seems reasonable to 
grant the possibility that it will be developed.
 
Following that line of reasoning, a computation of "probability * impact" 
yields a large number even for small probabilities, since the impact of a 
technological singularity could be very large.  So planning for the 
possibility seems prudent.
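
To make that concrete, here is a toy expected-value sketch in Python.  Both 
numbers are illustrative assumptions of mine, not estimates from anyone in 
this thread:

    # Toy expected-value calculation.  Both numbers are illustrative
    # assumptions, not actual estimates.
    p_agi_within_10_years = 0.01    # a "small" probability (assumed)
    singularity_impact = 1e6        # relative impact (assumed scale)

    expected_value = p_agi_within_10_years * singularity_impact
    print(expected_value)           # 10000.0 -- large even at a 1% chance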
 
2. "Fast" -> less than 50 years.
 
For this timeframe, just dust off Moravec's old computer speed chart.  On such 
a chart I think we're supposed to be at something like mouse level right now -- 
and in fact we have seen supercomputers beginning to take a shot at simulating 
mouse-brain-like structures.  It does not feel so wrong to think that the robot 
cars succeeding in the DARPA challenges are maybe up to mouse-level 
capabilities.
 
It is certainly possible that once computers surpass the raw processing power 
of the human brain by 10, 100, or 1,000 times, we will somehow prove too 
stupid to keep up with their capabilities.  But it seems like a more 
reasonable bet to me that the economic pressure to make even somewhat good 
use of available computing resources will win out.
 
AI is often called a perpetual failure, but from this perspective that is not 
true at all; AI has been a spectacular success.  It's very impressive that the 
early researchers were able to get computers with nematode-level "nervous 
systems" to show any interesting cognitive behavior at all.  At worst, AI is 
keeping up with the available machine capabilities admirably.
 
Still, putting aside the "brain simulation" route, we do have to build models 
of mind that actually work.  As Pei Wang just pointed out, we are beginning to 
see models such as Ben Goertzel's Novamente that at least seem like they might 
have a shot at sufficiency.  That is not proof, but it is an indication that we 
may not be overmatched by this challenge, once the machinery becomes available.
 
If something like Moore's law continues (I suppose it's a cognitive bias to 
assume it will continue and a different bias to assume it won't), who wants to 
bet that computers 10,000, 100,000, or 1,000,000 times as powerful as our 
brains will go to waste?  Add as many zeros as you want... they cost five years 
each.
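
That last line is just doubling-time arithmetic: at an assumed doubling time 
of about 18 months, a factor of ten takes log2(10), roughly 3.3, doublings, 
or about five years.  A quick Python sketch (the 18-month figure is my 
assumption):

    import math

    doubling_time_years = 1.5            # assumed ~18-month doubling time
    doublings_per_zero = math.log2(10)   # ~3.32 doublings per factor of 10

    years_per_zero = doublings_per_zero * doubling_time_years
    print(f"{years_per_zero:.1f} years per factor of 10")   # ~5.0 years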
 
-----
 
Having written that, I confess it is not completely convincing.  There are a 
lot of assumptions involved.  I don't think there *is* an objectively 
convincing argument.  That's why I never try to convince anybody... I can play 
in the intersection between engineering and wishful thinking if I want, simply 
because it amuses me more than watching football.
 
Hopefully some folks with more earnest beliefs will have better arguments for 
you.
 
 
