In response to Bob Mottram's Thursday, October 18, 2007 1:47 PM post.




>This isn't because previous generations of AI researchers were in denial
about the amount of hardware they needed - a whiggish view of recent
history.



I have been told for years that “the problem is not hardware, it’s
software.”  I have probably been told that at least thirty times over at
least 22 years.



>Even if I had a machine on my desk today capable of carrying out any
arbitrarily large computation instantaneously, I still wouldn't have
sufficient knowledge to be able to build a human-equivalent AI.



I wouldn’t either, at the start, but if I had human-level hardware and were
starting with Novamente (if it is as good as it seems to be from my
limited reading) and building on that, I think I could create some very
impressive results very quickly.



>In my opinion any third world villager with a laptop and internet access
could make significant progress in AGI if they're able to conceptualise
the problem in the right way, although I realise that this is not a widely
held view.





I think there are some first-world people, like Ben Goertzel, and I assume
others, who have already done a lot of very good thinking.



>Unfortunately, cognitive biases may play a role when statements like this
are made.



No one is without biases.  But there are very good reasons to believe
extremely large strides can be made in the next ten years if the right
people are funded.  There has recently been a great increase in our
understanding of the problem, and there will be a great increase in the
hardware.  It is hard to imagine that those two factors would not
contribute to massive advances in AGI.








Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Bob Mottram [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 18, 2007 1:47 PM
To: agi@v2.listbox.com
Subject: Re: [agi] More public awarenesss that AGI is coming fast




On 18/10/2007, Edward W. Porter <[EMAIL PROTECTED]> wrote:

With regard to the fact that many people who promised to produce AI in the
past have failed -- I repeat what I have said on this list many times --
you can't do the type of computation the human brain does without at least
something within several orders of magnitude of the computational,
representational, and (importantly) interconnect capacity of the human
brain.  And to the best of my knowledge, most AI projects until very
recently have been run on hardware with roughly one hundred-millionth to
about one hundred-thousandth of that capacity.

So it is no surprise they failed.  What is surprising is that they were so
blind to the importance of hardware.


This isn't because previous generations of AI researchers were in denial
about the amount of hardware they needed - a whiggish view of recent
history.  Estimates of the computational capacity of the human brain have
always been flaky, because ultimately we still don't really know what the
essential function of a neuron is (the part which can be abstracted from
the biology).  The figures you're giving are presumably derived from
Hans Moravec's calculations, which were based upon the amount of
information your retina can process whilst observing a screen at a
distance of a few metres.  Assuming that he's right, the uncertainty
bounds he puts on these calculations could delay human-equivalent
computation by a few decades, which is a wider uncertainty margin than the
usual 5-10-years-to-AGI mantra.  Even so, a few decades isn't much if
you're a "Long Now" kind of person.  And of course this is all based upon
the assumption that to build a successful AGI you need enough computation
to simulate the equivalent number of neurons and their interactions.
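
Very roughly, that style of estimate works out something like the sketch
below.  Every constant in it is my own illustrative guess (the pixel count,
frame rate, ops per point, brain-to-retina scaling, and the machine
fractions quoted above), not a figure taken from Moravec's papers:

# A back-of-the-envelope reconstruction of a Moravec-style estimate (Python).
# All constants are illustrative assumptions, not Moravec's published numbers.

RETINA_POINTS   = 1e6   # assumed: ~1 million useful image points in the retina
FRAMES_PER_SEC  = 10    # assumed: ~10 useful "frames" processed per second
OPS_PER_POINT   = 100   # assumed: edge/motion detection cost per point
BRAIN_TO_RETINA = 1e5   # assumed: brain has ~100,000x the retina's neural volume

retina_ops = RETINA_POINTS * FRAMES_PER_SEC * OPS_PER_POINT   # ~1e9 ops/sec
brain_ops  = retina_ops * BRAIN_TO_RETINA                     # ~1e14 ops/sec

# Applying the "one hundred-millionth to one hundred-thousandth" fractions
# quoted above gives the rough capacity of the machines older AI projects ran on.
for label, fraction in [("low end (1/100,000,000)", 1e-8),
                        ("high end (1/100,000)", 1e-5)]:
    print(f"{label}: ~{brain_ops * fraction:.0e} ops/sec available, "
          f"vs ~{brain_ops:.0e} assumed for the whole brain")

# Shifting any one of these guesses by an order of magnitude moves the
# "human-equivalent hardware" date by years, which is why the uncertainty
# bounds on such estimates are so wide.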




But the hardware barrier to the creation of human-level AGI is being
removed.


I agree with this, but hardware alone is not enough.  Even if I had a
machine on my desk today capable of carrying out any arbitrarily large
computation instantaneously, I still wouldn't have sufficient knowledge to
be able to build a human-equivalent AI.  I think Hugo de Garis has for
some time had systems capable of evolving neural nets "at electronic
speeds", but what's missing so far is a good idea of what to do with them.



Add all these things together and I think it is clear that if a well
funded AGI initiative gave the money to the right people (not just spread
it throughout academic AI based on seniority or somebody's buddy system),
it would be almost certain that stunning strides could be made in the
power of artificial intelligence in 5 to 10 years.


Anyone remember the 5th Generation project?

I agree that a relatively small team of the best AI people, if funded
generously and possessing a detailed AGI design, could make good progress
over a ten-year period, but I remain skeptical about large-scale
governmental projects or notions of throwing cash at the problem in an
indiscriminate way (which in practice is often what governments do).
Personally, I don't believe that the problem is primarily one of funding,
although funding certainly helps.  In my opinion any third-world villager
with a laptop and internet access could make significant progress in AGI
if they're able to conceptualise the problem in the right way, although I
realise that this is not a widely held view.




But the chance that such a project would create dramatic and extremely
valuable advances in the power of artificial intelligence in all of these
areas in 10 years -- advances that would be worth many times the $2
billion investment -- would be at least 99%.


Unfortunately, cognitive biases may play a role when statements like this
are made.


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=55060103-0acab2
