Thinkism (The Technium, 9/29/08)
Here is why you don't have to worry about the Singularity in your
lifetime: thinkism doesn't work.

First, some definitions. According to Wikipedia, the Singularity is "a
theoretical future point of unprecedented technological progress,
caused in part by the ability of machines to improve themselves using
artificial intelligence." According to Vernor Vinge and Ray Kurzweil, a
smarter than human artificial intelligence will bring about yet smarter
intelligence, which in turn will rapidly solve related scientific
problems (including how to make yet smarter intelligence), expanding
intelligence until all technical problems are quickly solved, so that
society's overall progress makes it impossible for us to imagine what
lies beyond the Singularity's birth. Oh, and it is due to happen no
later than 2045.

I agree with parts of that. There appears to be nothing in the
composition of the universe, or our minds, that would prevent us from
making a machine as smart as us, and probably (but not as surely)
smarter than us. My current bet is that this smarter-than-us
intelligence will not be created by Apple, or IBM, or two unknown guys
in a garage, but by Google; that is, it will emerge sooner or later as
the World Wide Computer on the internet. And it is very possible that
this other intelligence beyond ours will emerge by 2045, or even long before.

Let's say that on Kurzweil's 97th birthday, February 12, 2045, a
no-kidding smarter-than-human AI is recognized on the web. What happens
the next day? Answer: not much. But according to Singularitans what
happens is that "a smarter-than-human AI absorbs all unused computing
power on the then-existent Internet in a matter of hours; uses this
computing power and smarter-than-human design ability to crack the
protein folding problem for artificial proteins in a few more hours;
emails separate rush orders to a dozen online peptide synthesis labs,
and in two days receives via FedEx a set of proteins which, mixed
together, self-assemble into an acoustically controlled nanodevice
which can build more advanced nanotechnology." Ad infinitum.

Ray Kurzweil, whom I greatly admire, is working to "cross the bridge to
the bridge." He is taking 250 pills a day so that he might live to be
97, old enough to make the Singularity date, which would in turn take
him across to immortality. For, obviously, to him this super-super
intelligence would be able to use advanced nanotechnology (which it had
invented a few days before) to cure cancer, heart disease, and death
itself in the few years before Ray would otherwise die. If you can live long
enough to see the Singularity, you'll live forever. More than one
Singularitan is preparing for this.

Setting aside the Maes-Garreau effect, the major trouble with this
scenario is a confusion between intelligence and work. The notion of an
instant Singularity rests upon the misguided idea that intelligence
alone can solve problems. As an essay called "Why Work Toward the
Singularity" lets slip: "Even humans could probably solve those
difficulties given hundreds of years to think about it." In this
approach one only has to think about problems smartly enough to solve
them. I call that "thinkism."

Let's take curing cancer or prolonging longevity. These are problems
that thinking alone cannot solve. No amount of thinkism will discover
how the cell ages, or how telomeres fall off. No intelligence, no
matter how super-duper, can figure out how the human body works simply by
reading all the known scientific literature in the world and then
contemplating it. No super AI can simply think about all the current
and past nuclear fission experiments and then come up with working
nuclear fusion in a day. Between not knowing how things work and
knowing how they work lies a lot more than thinkism. It takes tons of
experiments in the real world, yielding tons and tons of data, to
form the correct working hypothesis. Thinking about
the potential data will not yield the correct data. Thinking is only
part of science; maybe even a small part. We don't have enough proper
data to come close to solving the death problem. And in the case of
living organisms, most of these experiments take calendar time. They
take years, or months, or at least days, to get results. Thinkism may
be instant for a super AI, but experimental results are not instant.

There is no doubt that a super AI can accelerate the process of
science, as even non-AI computation has already sped it up. But the
slow metabolism of a cell (which is what we are trying to augment)
cannot be sped up. If we want to know what happens to subatomic
particles, we can't just think about them. We have to build very large,
very complex, very tricky physical structures to find out. Even if the
smartest physicists were 1,000 times smarter than they are now, without
a collider they would know nothing new. Sure, we can make a computer
simulation of an atom or cell (and will someday). We can speed up these
simulations by many factors, but the testing, vetting, and proving of those
models still has to take place in calendar time, to match the rate of
their targets.

To be useful, artificial intelligences have to be embodied in the world,
and that world will often set the pace of their innovations. Thinkism is
not enough. Without conducting experiments, building prototypes,
having failures, and engaging in reality, an intelligence can have
thoughts but not results. It cannot think its way to solving the
world's problems. There won't be instant discoveries the minute, hour,
day or year a smarter-than-human AI appears. The rate of discovery will
hopefully be significantly accelerated. Even better, a super AI will
ask questions no human would ask. But, to take one example, it will
require many generations of experiments on living organisms, to say
nothing of humans, before such a difficult achievement as immortality
is gained.

Because thinkism doesn't work, you can relax.

The Singularity is an illusion that will be constantly retreating --
always "near" but never arriving. We'll wonder why it never came after
we got AI. Then one day in the future, we'll realize it already
happened. The super AI came, and all the things we thought it would
bring instantly -- personal nanotechnology, brain upgrades, immortality
-- did not come. Instead, other benefits accrued, which we did not
anticipate and were slow to appreciate. Since we did not see them
coming, we will look back and say, yes, that was the Singularity.
