Robin,


I am an evangelist for the view that powerful AI could arrive very rapidly if there were reasonable funding for the right people. There is a small but increasing number of people who pretty much understand how to build artificial brains as powerful as those of humans -- not 100%, but probably at least 90% at an architectural level.  What is needed is funding.  It will come, but exactly how fast, and to which people, is the big question.  The paper below is written with the assumption that someone -- some VCs, governments, Google, Microsoft, Intel, some Chinese multi-billionaire -- makes a significant investment in the right people.



I have cobbled this together rapidly from some similar prior writings, so
please forgive the typos.  I assume you will only pick through it for
ideas, so exact language is not important.



If you have any questions, please call or email me.



Ed Porter





==================================================================



The Time for Powerful General AI

is Rapidly Approaching

by Edward Porter



The time for powerful general AI is rapidly approaching.  Its beginnings could be here in two to ten years if the right people get the right funding.  Within two years it could begin providing the first in a series of ever-more-powerful, ever-more-valuable, market-dominating products.  In five to ten years it could be delivering true superhuman intelligence.  In that time frame, for example, this would enable software running on hardware costing less than $3 million to write reliable code faster than a thousand human programmers -- or, with a memory swap, to remember every word, every concept, every stated rationale in a world-class law library and to reason from that knowledge hundreds to millions of times faster than a human lawyer, depending on the exact nature of the reasoning task.



You should be skeptical.  The AI field has been littered with false claims before.  But for each of history's long-sought but long-delayed technical breakthroughs, there has always come a time when it finally happened.  There is strong reason to believe that for powerful machine intelligence, that time is now.



What is the evidence?  It has two major threads.



The first is that, for the first time in history, we have hardware with the computational power to support near-human intelligence, and in five to seven years the cost of hardware powerful enough to support superhuman intelligence could be as low as $200,000 to $3,000,000, meaning that virtually every mid-size or larger organization will want many such machines.



The second is that, due to advances in brain science and in AI itself, there are starting to be people, like those at Novamente LLC, who have developed reasonable and detailed architectures for how to use such powerful hardware efficiently to create near- or super-human intelligence.




THE HARDWARE



To do computation of the general type and sophistication of the human brain, you need something within at least several orders of magnitude of the capacity of the human brain itself in each of three dimensions: representational, computational, and intercommunication capacity.  You can't have the common sense, intuition, and context-appropriateness of a human mind unless you can represent, and rapidly draw generalizations from and inferences between, substantially all parts of world knowledge -- where "world knowledge" is the name given to the extremely large body of experientially derived knowledge most humans have.



Most past AI work has been done on machines that have less than one one-millionth the capacity in one or more of these three dimensions.  This is like trying to do what the human brain does with a brain roughly 2000 times smaller than that of a rat.
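
To make the scale of that handicap concrete, here is a quick back-of-envelope check in Python (the neuron counts are rough published order-of-magnitude estimates I am assuming, not figures from this paper):

    # Rough neuron counts: human ~86e9, rat ~2e8 (order-of-magnitude
    # estimates assumed purely for illustration).
    human_neurons, rat_neurons = 86e9, 2e8

    machine_equivalent = human_neurons * 1e-6  # one one-millionth of human capacity
    print(rat_neurons / machine_equivalent)    # ~2300: roughly "2000 times
                                               # smaller than a rat's brain"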



No wonder most prior attempts at human-level AI have made so many false promises and suffered so many failures.  No wonder the correct, large-hardware approaches have until very recently been impossible to properly demonstrate and, thus, to get funding for.  And thus no wonder the AI establishment does not understand such correct approaches.



But human-level hardware is coming soon.  Systems are already available for under ten million dollars (with roughly 4,500 2-GHz four-core processors, 168 teraflops, a nominal bandwidth of 4 TB/s, and massive hard disk storage) that are very roughly human-level in two of the above three dimensions.  These machines are very roughly 1000 times slower than humans with regard to messaging interconnect, but they are also hundreds of millions of times faster than humans at many of the tasks at which machines already outperform us.
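
To see how such a system lines up against the three dimensions above, here is an illustrative Python comparison.  The brain-side figures are rough assumptions of my own, chosen to reflect the order-of-magnitude ratios just claimed, not measured facts:

    # Illustrative capacity comparison (brain figures are assumed
    # order-of-magnitude estimates, not measurements).
    BRAIN = {
        "representation_bytes": 1e14,  # ~10^14 synapses at ~1 byte each
        "ops_per_sec":          1e14,  # ~10^14 synapses firing ~1-10 Hz
        "interconnect_Bps":     4e15,  # rough total synaptic message traffic
    }
    MACHINE = {  # the ~$10M cluster described above
        "representation_bytes": 1e14,    # ~100 TB of RAM plus disk (assumed)
        "ops_per_sec":          1.68e14, # 168 teraflops
        "interconnect_Bps":     4e12,    # 4 TB/s nominal bandwidth
    }
    for dim in BRAIN:
        print(f"{dim:22s} machine/brain = {MACHINE[dim] / BRAIN[dim]:.3g}")
    # Representation and computation come out near 1x; interconnect comes
    # out near 1/1000 -- the gap noted above.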



Even machines with much less hardware could provide marketable, powerful intelligences.  AIs that were substantially sub-human at some tasks could combine that sub-human intelligence with the skills at which computers greatly outperform us, producing combined intelligences that could be extremely valuable for many tasks.



Furthermore, not only is Moore's Law expected to keep going for at least several more generations but, perhaps even more importantly, there has been a growing trend toward more AI-capable hardware architectures.  This is indicated by the trend toward putting more and more processor cores, all with high-speed interconnect, on a chip.  Intel and IBM both have R&D chips with 80 or so such mesh-networked processors.  There are also plans to provide high-bandwidth memory connections to each such networked processor, with the memory placed on multiple semiconductor layers above or below the processors and a huge number of data-transferring vias connecting the layers.  This will substantially break the von Neumann bottleneck, a well-known hardware limitation that has greatly restricted the usefulness of traditional computers for many tasks involving large amounts of complexly interconnected data, such as those involved in computing from world knowledge.
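
A toy calculation shows why that bottleneck matters so much for world-knowledge-style workloads (all parameters here are illustrative assumptions of mine, not measurements of any particular chip):

    # A minimal roofline-style sketch of the von Neumann bottleneck
    # (illustrative parameters only).
    def attainable_flops(peak_flops, mem_bandwidth_bps, flops_per_byte):
        """Throughput is capped by whichever is scarcer: raw compute,
        or the rate at which memory can feed the processor."""
        return min(peak_flops, mem_bandwidth_bps * flops_per_byte)

    peak = 10e9        # a 10-GFLOPS core
    bw_classic = 10e9  # ~10 GB/s over a shared off-chip memory bus
    bw_stacked = 1e12  # ~1 TB/s via stacked-memory vias (hypothetical)

    # Graph-like world-knowledge workloads reuse little data -- often
    # well under one operation per byte fetched:
    for name, bw in [("classic bus", bw_classic), ("stacked memory", bw_stacked)]:
        gflops = attainable_flops(peak, bw, flops_per_byte=0.25) / 1e9
        print(f"{name}: {gflops:.1f} GFLOPS attainable")
    # The classic bus strands three quarters of the core's compute;
    # stacked memory lets it run at full speed.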



With the highly redundant designs made possible by such grids of tiled, networked processors, wafer-scale and multi-level wafer-scale manufacturing techniques (or equivalents of them provided by Sun Microsystems' capacitive-coupling interconnects) become extremely practical and can greatly decrease the cost of manufacturing massive amounts of memory and processing power, all connected by very high internal bandwidths.  When you combine this with the rapid increases in the bandwidth of optical interconnect being made by companies such as Luxtera, it becomes possible to extend this extremely high bandwidth into a third dimension, making it possible to create computers not only with much more memory and computational power than the human brain, but also with much greater interconnect.



In fact, if the ITRS roadmap projections continue to be met through the 22nm node as expected, and if hardware were specifically designed to support general-purpose AI, it is highly likely that roughly brain-level AI hardware could -- if the Intels and Samsungs of the world focused on it -- be sold at an 80% markup over marginal cost for between $200,000 and $3,000,000 in just five to seven years.



As a former head of DARPA's AI funding said, "The hardware being there is a given; it's the software that is needed."



SOFTWARE ARCHITECTURES



Tremendous advances have been made in artificial intelligence in the recent past, due in part to the ever-increasing rate of progress in brain science and the increasing power of the computers that brain scientists and AI researchers have to experiment with.  For example, the paper "Learning a Dictionary of Shape-Components in Visual Cortex:...", by Thomas Serre of Tomaso Poggio's group at MIT ( http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf ), provides a somewhat limited, but still amazingly powerful, simulation of human visual perception.  It is just one of many possible examples of how much our understanding of the brain and its functions has grown.  The system learns and uses patterns in a generalizational and compositional hierarchy, which allows for efficient reuse of representational components and matching computation, and which allows the system to learn in compositional increments.  Similarly amazing advances are being made in understanding other brain systems, including those that control and coordinate the behavior of multiple areas of the brain -- enough so that, for the first time, we really have enough understanding from which to design artificial minds.
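
To illustrate the kind of hierarchy I mean, here is a toy Python sketch of the alternating "template match, then pool" scheme -- my own simplified illustration of the idea, not Serre and Poggio's actual model or code:

    import numpy as np

    def s_layer(image, templates):
        """Template matching: respond wherever a learned patch appears."""
        h, w = image.shape
        th, tw = templates[0].shape
        out = np.zeros((len(templates), h - th + 1, w - tw + 1))
        for k, t in enumerate(templates):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    patch = image[i:i + th, j:j + tw]
                    out[k, i, j] = np.exp(-np.sum((patch - t) ** 2))
        return out

    def c_layer(responses, pool=2):
        """Max pooling: tolerate small shifts in where a patch appears."""
        k, h, w = responses.shape
        h2, w2 = h // pool * pool, w // pool * pool
        return responses[:, :h2, :w2].reshape(
            k, h2 // pool, pool, w2 // pool, pool).max(axis=(2, 4))

    rng = np.random.default_rng(0)
    image = rng.random((16, 16))
    templates = [rng.random((3, 3)) for _ in range(4)]  # stand-ins for learned patches
    level1 = c_layer(s_layer(image, templates))  # reusable low-level components
    print(level1.shape)  # deeper levels would compose these into larger shapes

The key property is that each level's components are reused by many patterns at the level above -- the efficient, compositional reuse described in the paper.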



The most impressive current brain architecture of which I am aware is the Novamente architecture from Novamente LLC, a start-up headed by Ben Goertzel, the former CTO of the $20 million startup IntelliGenesis, which showed great promise in the dot-com boom until its financial plug was pulled, with less than a day's notice, during the dot-com crash.  There may be other impressive brain architectures, but since I don't know of them, let me give a brief -- but hopefully revealing -- description of the Novamente architecture as a good example of the state of the art, since it has a rather detailed blueprint for how to build powerful, even superhuman, artificial minds.



Novamente starts with a focus on "General Intelligence", which it defines as "the ability to achieve complex goals in complex environments."  It is focused on automatic, interactive learning, experiential grounding, self-understanding, and both conscious (focus-of-attention) and unconscious (currently less attended) thought.



It records experience, finds repeated patterns in it, and makes generalizations and compositions out of such patterns -- all through multiple generalizational and compositional levels -- based on spatial, temporal, and learned-pattern-derived relationships.  It uses a novel form of inference, firmly grounded in Bayesian mathematics, for deriving inferences from many millions of activated patterns at once.  This provides probabilistic reasoning much more powerful and flexible than any prior Bayesian techniques.
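
I cannot reproduce Novamente's actual inference rules here, but a generic sketch conveys the flavor of pooling evidence from many activated patterns at once.  This is a naive log-odds combination of my own, purely for illustration -- it assumes independent evidence, which the real system does not:

    import math

    def combine(prior, likelihood_ratios):
        """Fold many pieces of evidence into one posterior by summing
        log-odds, which stays numerically stable even when millions of
        patterns contribute."""
        log_odds = math.log(prior / (1 - prior))
        for lr in likelihood_ratios:
            log_odds += math.log(lr)
        return 1 / (1 + math.exp(-log_odds))

    # Three activated patterns each make a hypothesis twice as likely;
    # a fourth makes it half as likely:
    print(combine(prior=0.1, likelihood_ratios=[2.0, 2.0, 2.0, 0.5]))  # ~0.31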



Patterns -- which can include behaviors (including mental behaviors) -- are formed, modified, generalized, and deleted all the time.  They have to compete for their computational resources and continued existence.  This results in a self-organizing network of similarity, generalizational, and compositional patterns and relationships that all must continue to prove their worth in a survival-of-the-fittest, goal-oriented, experiential-knowledge ecology.
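
As a toy illustration of that ecology (my own sketch, not Novamente's actual attention-allocation machinery), think of patterns competing for a fixed memory budget, with the least worthwhile forgotten:

    class PatternStore:
        """Toy survival-of-the-fittest store: a fixed capacity forces
        the least valuable pattern out when a new one arrives."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.patterns = {}  # pattern name -> accumulated worth

        def add(self, name, worth):
            self.patterns[name] = worth
            if len(self.patterns) > self.capacity:
                loser = min(self.patterns, key=self.patterns.get)
                del self.patterns[loser]  # forgotten

        def reward(self, name, amount):
            # patterns that help achieve goals accumulate worth
            self.patterns[name] = self.patterns.get(name, 0.0) + amount

    store = PatternStore(capacity=3)
    for name, worth in [("edge", 0.9), ("dog", 0.5), ("noise", 0.1), ("cat", 0.6)]:
        store.add(name, worth)
    print(store.patterns)  # "noise" has lost the competition and is gone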



Reinforcement learning is used to weight patterns, both for general long-term and for context-specific importance, based on the direct or indirect roles they have played in achieving the system's goals in the past.  These indications of importance -- along with a deep memory for past similar experiences, goals, contexts, and similar inferencing and learning patterns -- significantly narrow and focus attention, avoiding the pitfalls of combinatorial explosion and resulting in context-appropriate decision making.  Genetic learning algorithms, made more efficient by the system's experience and probabilistic inferencing, give the system the ability to learn new behaviors, classifiers, and creative ideas.
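
The exact update rule is not something I can cite, so purely for illustration here is one plausible form of that two-timescale weighting -- exponential moving averages, one slow (general importance) and one fast (context-specific importance):

    def update_importance(long_term, short_term, reward, slow=0.01, fast=0.3):
        """Blend a new goal-achievement reward into a slow, general
        importance estimate and a fast, context-specific one."""
        long_term = (1 - slow) * long_term + slow * reward
        short_term = (1 - fast) * short_term + fast * reward
        return long_term, short_term

    lt, st = 0.2, 0.2
    for r in [1.0, 1.0, 0.0]:  # a pattern helps twice, then fails once
        lt, st = update_importance(lt, st, r)
    print(round(lt, 3), round(st, 3))  # the fast estimate reacts to the
                                       # failure; the slow one barely moves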



Taken together, all these features, and many more, will allow the system to automatically learn, reason, plan, imagine, and create with a sophistication and power never before possible -- not even for the most intelligent of humans.



Of course, it will take some time for the first such systems to learn the important aspects of world knowledge.  Most of the valuable patterns in the minds of such machines will come from machine learning, not human programming.  Such learning can be greatly sped up if such machines are taught, at least partially, the way human children are.  But once knowledge has been learned by such systems, much of it can be quickly replicated into, or shared between, other machines.
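
That replication point is worth making concrete: learned patterns, unlike human skills, are just data.  Here is a sketch, with an entirely hypothetical serialization format, of one machine importing another's expertise:

    import json

    def export_knowledge(patterns):
        """Serialize a machine's learned pattern weights."""
        return json.dumps(patterns)

    def merge_knowledge(own, exported, trust=0.5):
        """Merge another machine's pattern weights into our own,
        discounted by how much we trust the source."""
        merged = dict(own)
        for name, worth in json.loads(exported).items():
            merged[name] = max(merged.get(name, 0.0), trust * worth)
        return merged

    machine_a = {"contract-clause": 0.9, "tort-precedent": 0.7}
    machine_b = {"contract-clause": 0.4}
    machine_b = merge_knowledge(machine_b, export_knowledge(machine_a))
    print(machine_b)  # B acquires A's legal expertise in seconds, not years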



So will it work?



The answer is yes - because that is substantially how the human brain
works.



It is hard to overstate the economic value and transformative power of such machines.  The $3 million system described at the start of this paper, able to do the work of thousands of programmers or lawyers, could be rented on the web for under $400 an hour.  And if nano-electronics delivers on its promise, within twenty-five years such a machine might cost no more than a PC, and the work of people like programmers, lawyers, doctors, teachers, psychologists, and investment counselors could be largely replaced for roughly a penny an hour.
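
A back-of-envelope check of that rental figure (the amortization period, utilization, and wage numbers are my own assumptions):

    hardware_cost = 3_000_000     # dollars
    years, utilization = 3, 0.70  # assumed amortization and usage
    billable_hours = years * 365 * 24 * utilization
    print(f"break-even: ${hardware_cost / billable_hours:,.0f}/hour")  # ~$163/hour

    # So $400/hour leaves a healthy margin, while the thousand programmers
    # it replaces cost vastly more (at an assumed $50/hour each):
    print(f"human-equivalent cost: ${1000 * 50:,}/hour")  # $50,000/hour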



All of a sudden the price of human labor, even in places like China, India, and Haiti, becomes uncompetitive for most current employment.  Marx's labor theory of value gets thrown on its ass and is almost entirely replaced by the machine theory of value.  Commercially, politically, and geo-politically it's a whole new ball game.  It could go either way.  It could greatly enlighten and improve human existence, or it could greatly darken it, or it could do some of both.  One of the biggest challenges is making our social and political institutions intelligent enough to deal with it well.



We are truly talking about a "singularity" -- a technology so powerful that, when combined with the massive acceleration it will cause in the web, in electronics, robotics, and nano- and biotechnology, it will warp the very fabric of human economy and society in somewhat the same way the singularity of a black hole warps the fabric of space-time.







Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]


 -----Original Message-----
From: Robin Hanson [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 10, 2007 6:41 AM
To: agi@v2.listbox.com
Subject: [agi] What best evidence for fast AI?



I've been invited to write an article for an upcoming special issue of
IEEE Spectrum on "Singularity", which in this context means rapid and
large social change from human-level or higher artificial intelligence.
I may be among the most enthusiastic authors in that issue, but even I am
somewhat skeptical.   Specifically, after ten years as an AI researcher,
my inclination has been to see progress as very slow toward an
explicitly-coded AI, and so to guess that the whole brain emulation
approach would succeed first if, as it seems, that approach becomes
feasible within the next century.

But I want to try to make sure I've heard the best arguments on the other
side, and my impression was that many people here expect more rapid AI
progress.   So I am here to ask: where are the best analyses arguing the
case for rapid (non-emulation) AI progress?   I am less interested in the
arguments that convince you personally than arguments that can or should
convince a wide academic audience.

[I also posted this same question to the sl4 list.]


Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu
Research Associate, Future of Humanity Institute at Oxford University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323

