On 18/10/2007, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
> With regard to the fact that many people who promised to produce AI in the
> past have failed -- I repeat what I have said on this list many times -- you
> can't do the type of computation the human brain does without at least
> something within several orders of magnitude of the computational,
> representational, and (importantly) interconnect capacity of the human
> brain.  And to the best of my knowledge, most AI projects until very
> recently have been run on hardware with roughly one 100 millionth to about
> one 100,000th such capacity.
>
> So it is no surprise they failed.  What is surprising is that they were so
> blind to the importance of hardware.
>

This isn't because previous generations of AI researchers were in denial
about the amount of hardware they needed - that's a whiggish view of recent
history.  Estimates of the computational capacity of the human brain have
always been flaky, because ultimately we still don't really know what the
essential function of a neuron is (the part which can be abstracted from the
biology).  The figures that you're giving are presumably derived from Hans
Moravec's calculations which were based upon the amount of information your
retina can process whilst observing a screen at a distance of a few metres.
Even assuming that he's right, the uncertainty bounds which he puts on these
calculations could delay human-equivalent computation by a few decades -
a wider uncertainty margin than the usual "5-10 years to AGI"
mantra.  Even so, a few decades isn't much if you're a "Long Now" kind of
person.  And of course this is all based upon the assumption that to build a
successful AGI you need enough computation to simulate the equivalent number
of neurons and their interactions.
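For what it's worth, Moravec's arithmetic can be sketched in a few lines.  The
figures below are his published ballpark numbers (retina throughput and the
brain-to-retina scaling factor), order-of-magnitude only, so treat the result
as a rough illustration rather than a measurement:

```python
# Sketch of Moravec's retina-based estimate of brain capacity.
# All figures are his order-of-magnitude ballpark numbers, not precise values.

# The retina performs roughly 10 million edge/motion detections per second,
# each taken as equivalent to ~100 computer instructions.
retina_ops_per_sec = 1e7 * 100          # ~1e9 instructions/s

# The whole brain has roughly 75,000 times the neural mass of the retina.
brain_to_retina_ratio = 75_000

brain_ops_per_sec = retina_ops_per_sec * brain_to_retina_ratio

print(f"Retina: ~{retina_ops_per_sec:.0e} instructions/s")
print(f"Brain:  ~{brain_ops_per_sec:.0e} instructions/s")
```

This lands at roughly 10^14 instructions per second - and an order of magnitude
of slack in any one of those inputs shifts the "human-equivalent hardware" date
by years, which is the point about flaky estimates above.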


But the hardware barrier to the creation of human-level AGI is being
> removed.
>

I agree with this, but hardware alone is not enough.  Even if I had a
machine on my desk today capable of carrying out any arbitrarily large
computation instantaneously, I still wouldn't have sufficient knowledge to be
able to build a human equivalent AI.  I think Hugo de Garis has for some
time had systems capable of evolving neural nets "at electronic speeds", but
what's missing so far is a good idea of what to do with them.

Add all these things together and I think it is clear that if a well funded
> AGI initiative gave the money to the right people (not just spread it
> throughout academic AI based on seniority or somebody's buddy system), it
> would be almost certain that stunning strides could be made in the power of
> artificial intelligence in 5 to 10 years.
>

Anyone remember the Fifth Generation project?

I agree that a relatively small team of the best AI people, if funded
generously and possessing a detailed AGI design, could make good progress
over a ten-year period, but I remain skeptical about large-scale governmental
projects or notions of throwing cash at the problem in an indiscriminate way
(which in practice is often what governments do).  Personally, I don't
believe that the problem is primarily one of funding, although funding
certainly helps.  In my opinion any third world villager with a laptop and
internet access could make significant progress in AGI if they're able to
conceptualise the problem in the right way, although I realise that this is
not a widely held view.


But the chance that such a project would create dramatic and extremely
> valuable advances in the power of artificial intelligence in all of these
> areas in 10 years – advances  that would be worth many times the $2 Billion
> dollar investment -- would be at least 99%.
>

Unfortunately, cognitive biases may play a role when statements like this
are made.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email