Matt Mahoney wrote:
AGI does not need promoting. AGI could potentially replace all human labor,
currently valued at US $66 trillion per year worldwide. Google has gone from
nothing to the fifth biggest company in the U.S. in 10 years by solving just a
little bit of the AI problem better than its competitors.
We should be more concerned about the risks of AGI. When humans can make
machines smarter than themselves, then so can those machines. The result will
be an intelligence explosion. http://mindstalk.net/vinge/vinge-sing.html
The problem is that humans cannot predict -- and therefore cannot control --
machines that are vastly smarter. The SIAI ( http://www.singinst.org/ ) has
tried to address these risks, so far without success. This really is a
fundamental problem, proved in a more formal sense by Shane Legg (
http://www.vetta.org/documents/IDSIA-12-06-1.pdf ). Recursive self
improvement is a probabilistic, evolutionary process that favors rapid
reproduction and acquisition of computing resources (aka intelligence),
regardless of its initial goals. Each successive generation gets smarter,
faster, and less dependent on human cooperation.
Whether this is good or bad is a philosophical question we can't answer. It
is what it is. The brain is a computer, programmed through evolution with
goals that maximize fitness but limit our capacity for rational introspection.
Could your consciousness exist in a machine with different goals or different
memories? Do you become the godlike intelligence that replaces the human
race?
This is the worst possible summary of the situation, because instead of
treating each issue as having many possible outcomes, it pretends that
each issue has only one.
In this respect it is as bad as (or worse than) all the science fiction
nonsense that has distorted AI since before AI even existed.
Example 1: "...humans cannot predict -- and therefore cannot control --
machines that are vastly smarter." According to some interpretations of
how AI systems will be built, this is simply not true at all. If AI
systems are built with motivation systems that are stable, then we could
predict that they will remain synchronized with the goals of the human
race until the end of history. This does not mean that we could
"predict" them in the sense of knowing everything they would say and do
before they do it, but it would mean that we could know what their goals
and values were - and this would be the only important sense of the
word "predict".
Example 2: "This really is a fundamental problem, proved in a more
formal sense by Shane Legg
(http://www.vetta.org/documents/IDSIA-12-06-1.pdf)." This paper "proves"
nothing whatever about the issue!
Example 3: "Recursive self improvement is a probabilistic, evolutionary
process that favors rapid reproduction and acquisition of computing
resources (aka intelligence), regardless of its initial goals." This is
a statement about the goal system of an AGI, but it is extraordinarily
presumptuous. I can think of many, many types of non-goal-stack
motivational systems for which this statement is a complete falsehood.
I have described some of those systems on this list before, but this
paragraph simply pretends that all such motivational systems just do not
exist.
Example 4: "Each successive generation gets smarter, faster, and less
dependent on human cooperation." Absolutely not true. If "humans" take
advantage of the ability to enhance their own intelligence up to the
same level as the AGI systems, the amount of "dependence" between the
two groups will stay exactly the same, for the simple reason that there
will not be a sensible distinction between the two groups.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email