HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS

Of course, the best way is to have results.  But we are not there yet, and
the issue is: how to get funding so we can get there.  As I said in one of
my posts of yesterday, if Pell’s Powerset, Omohundro’s Self-Aware Systems,
or Goertzel’s Novamente comes through half as well as suggested, we will
have initial results exciting enough to greatly increase good buzz within
a year or two.

Until then -- or if such results don’t happen for three, four, or five
years  -- we should focus on trying to understand and make a good case for
funding AGI now.

My initial suggestions for such arguments are as follows:

THERE IS A POWERFUL CONVERGENCE OF TRENDS INDICATING THAT THE TIME FOR
STRONG AI IS RAPIDLY APPROACHING – THESE INCLUDE:


1. Moore’s Law And The Fact That We Will Be Able To Build Brain Level
Machines At Ever Decreasing Price.

                Moore is more.  I have had a theory for more than three
decades that you can’t do what the brain does without representational
capacity, computational power, and (for the last decade or two) internal
bandwidth within several orders of magnitude of the human brain’s.  There
are already machines within several orders of magnitude of the brain, but
they are very expensive and, as far as I know, aren’t being used for AGI.

                Moore’s law is expected to get us at least to the 22nm
node, perhaps as soon as 2012.  The roughly eight-fold increase in
component density offered by this change is powerful, but perhaps even
more important for AI is the trend toward massive core counts, networks on
chips, and massive memory bandwidth using through-chip vias.  Not only can
AI’s massive parallelism find a perfect use for such parallel processing,
but the relative uniformity of the resulting architecture allows the cost
of making large brainware to be much lower, such as by (don’t laugh)
wafer-scale integration of multiple wafer levels. (I was told by one of
the leading semiconductor people at IBM that he knew of no reason why such
circuitry would not be viable, provided it was designed with fault
tolerance in mind.)

                So we can for the first time realistically talk about
machines that reason from human-level world knowledge with roughly the
same depth of computational search as the human mind, for prices that
would make them valuable for all sorts of commercial uses.  This change is
new and important, and it offers the chance to overcome most of the
traditional failings of past AI.
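
                The "several orders of magnitude" arithmetic can be made
explicit.  The numbers below are illustrative assumptions only, not
measurements -- published estimates of synapse counts and update rates vary
widely, which is exactly why the argument is framed in orders of magnitude:

```python
import math

# Illustrative assumptions, not measurements: common literature estimates
# put the brain at very roughly 1e14 synapses updating at up to ~100 Hz.
SYNAPSES = 1e14           # rough synapse count for a human brain
UPDATE_HZ = 100           # rough upper bound on synaptic update rate
brain_ops = SYNAPSES * UPDATE_HZ       # ~1e16 "synaptic ops" per second

machine_ops = 1e12        # a teraflop-class machine: expensive, but buildable

# How far short is the affordable machine, in orders of magnitude?
gap_orders = math.log10(brain_ops / machine_ops)

# At a Moore's-law doubling every ~18 months, the gap closes in:
years_to_close = gap_orders * math.log2(10) * 1.5

print(f"gap: ~{gap_orders:.0f} orders of magnitude")
print(f"closed in ~{years_to_close:.0f} years at historical doubling rates")
```

Under these debatable inputs the gap is about four orders of magnitude and
closes in roughly two decades; tweaking any input shifts the date but not
the shape of the argument.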


2. The Combination of Brain Research and Neural Modeling Has Created an
Explosion in Brain Understanding Enabling Us to Really Understand, To a
Surprising Degree of Detail, How Human-Level Intelligences Might Operate

                I did a lot of reading on brain science ten years ago, and
I can tell you that the descriptive and explanatory quality of articles
coming out now is at a totally different, higher, and more integrated level
than it was just a decade ago.  I am not suggesting there was no amazing
work in brain science back then; I am just saying there is much more
extremely exciting work now.

                I have cited three relatively recent articles below which
provide examples of how amazing current models of the brain have become.

                The first provides a detailed discussion of how to make a
simplified, but still surprisingly powerful, system for rapid pattern
recognition which does substantial automatic learning, uses a hierarchical
memory representation alternating matching with max pooling, and which
claims to be a surprisingly accurate recreation of part of the human
brain.
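
                A minimal one-dimensional sketch of that alternating
match/max-pool idea is below.  The templates and input are toy values of my
own invention, not anything from the article -- the point is only the two
alternating stages: a matching stage that responds where a local patch
resembles a stored template, and a pooling stage whose max operation buys
tolerance to position:

```python
import numpy as np

def match_layer(signal, templates):
    # "S" stage: score how well each local patch matches each stored
    # template (negative distance, so 0 means a perfect match)
    k = templates.shape[1]
    patches = np.lib.stride_tricks.sliding_window_view(signal, k)
    return -np.linalg.norm(patches[None, :, :] - templates[:, None, :], axis=2)

def max_pool(responses, width):
    # "C" stage: take the max over a neighborhood, trading exact position
    # for shift tolerance
    n = responses.shape[1] // width
    return responses[:, :n * width].reshape(responses.shape[0], n, width).max(axis=2)

# toy input and two hand-picked "learned" templates
signal = np.array([0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0], dtype=float)
templates = np.array([[0, 1, 0], [1, 1, 0]], dtype=float)

s = match_layer(signal, templates)   # per-position match scores, shape (2, 10)
c = max_pool(s, width=5)             # shift-tolerant features, shape (2, 2)
```

A real system would stack several such S/C pairs and learn the templates
automatically; this sketch only shows the alternation itself.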

                The second is a hypothesis on how the basal ganglia might
control mental functions.

                The third explains more about basal ganglia control of the
brain, but also suggests that the cortico-thalamic loop may have a built-in
system for serializing the spreading activation that represents different
aspects of a current brain state into, in effect, a brain activation
grammar, so as to allow recognition or activation of neural patterns in
the brain that have been developed to record and respond to such temporal
patterns.

                The flurry of articles in the last few years contains many
conflicting hypotheses, but there is much commonality on many points, and
many suggestions for how the functions of artificial minds, at many
different levels, could be performed.  From such articles, combined with
an understanding of current AI -- and if you have been thinking about how
to design an artificial brain for some time -- one can truly begin to feel
that we really do understand the basic functions needed for an AGI, and
ways in which all those functions can be accomplished.

3. The Rapid Rate Of Improvements From The Above Two Trends Combined With
The Huge Bag Of Powerful Tricks We Have Already Learned In AI, Suggests
That We Have, Or Very Shortly Will Have, All Of The Tools Necessary To
Build Powerful AGIs.

                The combination of traditional AI techniques that have
been yielding results for years, the fact that the cost of hardware
appropriate for computing AI is almost certain to drop drastically over
the next ten years, the surprising amount we are learning about how
human-level intelligence is produced in the brain, and the fact that some
very reasonable AGI architectures have recently been proposed, such as
Goertzel’s Novamente, all suggest the time is drawing nigh.


4. AGI’s That We Could Start Designing and Building Today Have Time
Horizons Compatible With Venture Capital  and Government Funding, and Can
Have Many Valuable Uses Sufficient To Draw the Funding Necessary to
Finance the Transition to Human Level AGI

                There are a lot of commercial environments where systems
that are on the transition path to full AGI can be commercially valuable,
such as in on-line search, on-line video games, database mining, improved
machine translation, improved speech and vision recognition, corporate
intelligence, medical research, medical diagnosis, military awareness, and
national security.  Because hardware powerful enough for computing AGI
well is currently expensive, careful thinking should be done about what
applications have enough perceived government or market value to get
sufficient funding now.

                Such applications currently exist aplenty, provided we
can convince the funders that the initial AGI’s will yield sufficiently
improved performance.  And since such relatively early AIs will be able to
compute, for the first time, from something approaching human level world
knowledge and context sensitivity, there is a strong argument for such
improved performance.

                And as hardware prices decrease, the sweet spot will
continue to grow and grow and grow.


In summary, it is absolutely clear that, barring some major setback to
civilization, the power of hardware to compute artificial intelligence
will increase drastically in the next decade.  It is absolutely clear that
we have learned a tremendous amount about how human-level intelligence is
generated in the brain, and thus how it could be created by analogy in
machines.  And such knowledge is increasing rapidly.

What is more important, reasonable architectures have already been
designed that, if they don’t create human-level intelligence, are almost
certain, in the hands of a good team, to create artificial intelligences
much more capable than anything we have today.

So why wouldn’t anyone believe that everything is in place to make
consistent strides toward powerful AGI’s in the next three to ten years?

There is the sticky little problem of getting it all to work together well
automatically.  That requires really good automatic learning of many
things and it requires really good context sensitivity to automatically
guide search and reasoning.

But there are multiple approaches for dealing with these problems.  And
because of the importance of AGI, there should be funding for multiple
different teams to ensure that different approaches and multiple different
human talents are applied to the problem.



And now for the buzz buster...

AGI Will Be The Most Powerful Technology In Human History – In Fact, So
Powerful that it Threatens Us

                To all you singularity types out there I don’t have to
belabor this point.  Ray has out-radiated me on this one.

                The problem is not that anyone who understands AGI doubts
it is of earth-shaking importance; it is that the sheer extremity of its
power makes it a two-edged sword, one that arguably can hurt humanity
much more than help it.

                To military funders it is easy to make the argument that
AGI is such a militarily and economically powerful technology that we dare
not let our enemies get it first.  And because it is so within reach, not
aggressively pursuing it risks letting our enemies get its tremendous
power first.

                To many companies like Google, Microsoft, and IBM similar
fears of competitive disadvantage, combined with realistic expectations of
returns, should be compelling.

                To companies like Intel and Samsung, it represents one of
the fattest markets for semiconductors imaginable.

                But to society as a whole there is a need for a lot of
thinking about the issues raised by Dr. James Hughes at the recent
Singularity Summit (assuming he’s the one who showed the cartoon robot
with a smoking gun saying “Hasta la vista, meat bag”).

                Of course, early-level AGI’s need not be threatening.
Because I believe that human-level intelligence requires certain minimal
levels of storage, messaging, and ops, I don’t think we have to worry
about a machine that all of a sudden becomes hundreds or thousands or
millions of times more efficient merely through self-modification -- that
is, unless we do an absolutely terribly inefficient job of designing it in
the first place.  Initially we humans will control the rate at which such
machines are made, and it is not likely to be until we make the mistake of
putting machines that cannot be easily controlled in charge of our vital
systems (e.g., internet, power grid, air traffic control, military warning
systems, etc.) that we will be vulnerable.  Of course, machines already
control a lot of such vital functions, and they currently present dangers
through being hacked or through software bugs -- but we should be careful
about putting machines that have too much of a mind of their own in charge
of any of them.

                With regard to controlling machines, shortly after doing
my independent study based on a reading list from Minsky in 1970, I came
to the firm belief, based largely on the K-Line theory, that with enough
hardware we could make artificial brains (even though at the time I had
much, much less of an understanding as to how to do so than I do now).

                 It was clear to me that machines could be brighter than
we are, and that they would be an inherent threat.  That’s when I came up
with the idea of the Fido AI -- that is, an AI designed or bred, like a
dog, to like people and be faithful to them.  I appreciated
that this would mean such computers would not have the same potential as a
machine of equal hardware that had more freedom of thought.  But I felt
that, to mix the metaphor, hundreds of millions of humans -- each riding
their own extremely powerful, yet less free, Fidos, to amplify their
ability to think in the same way that humans ride horses to amplify their
ability to move – would give us at least a temporary advantage over the
more dangerous, free minded machines.

                At the Singularity Summit this general concept was called
IA, or intelligence augmentation.  I think it is vital.  It, along with
the collective intelligence discussed below, could at least buy us some
time as we make the trans-humanist transition -- hopefully a generation or
two or three of time.

                Besides the threat of machines taking over, there is the
threat of how they could change human existence even if they stay under
human control.

                AGI probably will -- within twenty to thirty years after
it achieves near-human-level capability -- replace all but relatively low
level human labor.  And even that will be replaced if AGI helps engineer
much better robots and cheap means for manufacturing them.

                In other words, AGI flips Marx on his ass.  The Labor
Theory of Value gets totally replaced by the Capital Theory of Value (as
long as machines are still considered capital).  Absent some new sort of
social contract, this would tend to increasingly concentrate wealth in
those people who control the most machines and have the most AGI
amplification.  A very ugly class system could evolve, in which the
machines are used to monitor, control, punish, entertain, and pacify.  Low-
cost housing might be replaced by uploading into low-rent AGI’s, and then
killing, those people who don’t provide an attractive backdrop for the
lives of the powerful (a la The Matrix).  Hopefully the powerful will be
merciful enough to let every mind be a king in its own virtual castle, but
then why should they -- why shouldn’t they just pull the plug?  After all,
the uploaded are just machines, with some delusion of being human.

                And then there is the problem of the “undead”: both the
elderly who become increasingly bionic but may maintain all or a remnant
of their human brain, and the uploaded (the geek equivalent of those
lifted up by “The Rapture”).  This raises all sorts of kinky issues, such
as:  Will Ray Kurzweil, upon being uploaded, let Eliezer Yudkowsky
castrate his brain to ensure he stays as much a friendly AI as his current
human brain is AI-friendly?  What will it be like when there are more of
these mainly bionic and uploaded beings, all demanding human rights, than
there are of what we currently consider viable human beings?  What are the
chances that the uploaded will feel more kinship with the AGI’s and
conspire with them against the biological humans?

                It is non-trivial to both get AI a lot of buzz and keep
the types of fears expressed above well corked.

Associating AGI With Human Collective Intelligence Makes The AGI Future
Safer And Less Dehumanizing and, Thus, Easier to Sell

                That is why I think one of the most important things to
develop in conjunction with AGI is collective human intelligence.

                We as a species need to be more intelligent in the ways we
act together in our families, with our friends, in our work place, in our
institutions, in our media, in our government, and in the governance of
the world as a whole.

                When I was doing my independent study under Marvin Minsky
in my senior year ’69-70 at Harvard, the leftists, several of whom told me
they were going to kill me after “the revolution”, preached participatory
democracy, which in their Leninist interpretation usually meant a
gathering at which only they had microphones.

                As part of this participatory democracy, the SDS and the
more moderate anti-war groups organized a meeting in Harvard’s football
stadium which the whole school was asked to attend.  Again it was another
meeting with only the organizers controlling who had microphones.

                Like probably half the people in the stadium on that sunny
day I was stoned (after all, it was ’69-’70), and I remember thinking
about how one could create a truly participatory democracy that would, to
some extent, function as a brain, in that it would bring valuable voices
up from the subconscious into the conscious (replacing the self-important
drones who currently had the microphone).

                During the meeting I came up with the basic architecture I
called multi-consciousness, which was based on a market in which
consciousness for a speaker, an argument, or a statement (i.e., being
heard by the whole stadium or getting time on a widely watched TV show)
was obtained by buying it with votes from others with whom you would, over
time, have developed networks of who was likely to vote for what types of
things.  And there would be brokers to whom people would entrust the
allocation of their votes, or at least the relaying of messages to them
about whether they would support a given speaker or argument with their
votes.  People who had the consciousness could also directly ask for more
votes, to expand on their current point or to be trusted to represent
similar ideas and values in the future.
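
                The mechanism can be caricatured in a few lines of code.
Everything here -- the class name, the speakers, the vote counts -- is a
hypothetical illustration of the idea, not a description of any built
system:

```python
from collections import defaultdict

class FloorMarket:
    """Speakers bid for the 'consciousness' (the floor) with pledged votes."""

    def __init__(self):
        self.pledges = defaultdict(int)   # speaker -> total votes pledged

    def pledge(self, backer, speaker, votes):
        # a voter, or a broker acting for many voters, backs a speaker
        self.pledges[speaker] += votes

    def next_speaker(self):
        # the floor goes to whoever has gathered the most votes; the spent
        # votes are consumed, modelling "buying" the airtime
        speaker = max(self.pledges, key=self.pledges.get)
        spent = self.pledges.pop(speaker)
        return speaker, spent

market = FloorMarket()
market.pledge("alice", "speaker_A", 40)
market.pledge("broker_1", "speaker_B", 90)   # a broker relaying many votes
market.pledge("bob", "speaker_A", 30)

winner = market.next_speaker()   # -> ('speaker_B', 90)
```

A fuller version would let the current speaker solicit further votes
mid-speech and let brokers poll their members before pledging, but the
core is just this auction for attention.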

                Within the following days, weeks, and years, I came up
with many different variations.  With the advent of the internet in the
early ‘90s I designed an internet-based version, which had the advantage
of not needing to operate in real time.

                The relevance to our present times is not my particular
system; it is the general concept that, with computer science, the
internet, and, soon, AGI to rapidly help organize similar views and search
the world’s knowledge bases for knowledge and understanding, it is
possible for people to communicate in groups in a much more intelligent,
important, and productive manner.  Instead of having politics be largely a
two-year media discussion of “the horse race”, in which raising money for
30-second TV ads is the most important virtue, much more attention would
be paid to issues and to the rationales for supporting or opposing them.
This is, to a certain extent, already happening on the web, but Silicon
Valley and socially conscious nerds around the world should focus on
making it much better.  There is a real limit to how enlightened human
society can become -- we are basically flawed creatures -- but we are
nowhere near the limits of potential human and social enlightenment.

                One of the main goals should be to give people a voice
according both to their number and to the ability of their arguments to
convince others of their validity.  The goal is not just to give
everybody a vote, but also to give them a say, and to weight that say by
how many other people they can get to share that view.  All of this can
occur in a forum that -- through a much better-than-current Google-like
AGI -- lets the best evidence for and against points be rounded up by
anyone who wants it within seconds.

                For best effect, collective intelligence should be
combined with both individual and group intelligence augmentation.

                Collective intelligence is required to reach a new social
contract, ultimately one that spans the earth.  It will help us create a
contract that continues to give meaning to most human lives at a time
when most things of value, both material and intellectual, are created by
machines.  It will help us create a social contract that lets us develop
institutions to protect us and our human and/or machine descendants from
oppression.

                Creating a buzz for human collective intelligence will
help sell the buzz for AGI, because it associates a level of equality,
true participatory democracy, human community, and enlightenment with AGI,
rather than just machines that are better at everything than we are.
Unless mankind gets its act together, AGI will demean and kill us.  But if
we can become a much more enlightened species with a much greater sense of
shared values and understandings, if we amplify the collective
intelligence that we as a species have, we are much more likely to stretch
out the time during which AGI’s stay under our control -- at least long
enough to let mankind make an emotionally acceptable transition to a
trans-humanist future.




References:

“Learning a Dictionary of Shape-Components in Visual Cortex: Comparison
with Neurons, Humans and Machines”, by Thomas Serre.

“Towards an executive without a homunculus: computational models of the
prefrontal cortex/basal ganglia system”, by Thomas E. Hazy, Michael J.
Frank and Randall C. O’Reilly

“Engines of the brain: The computational instruction set of human
cognition”, by Richard Granger



Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email