Ohh! My favorite kind of essay: one that takes a dig at Singularitarianism
and Libertarianism.

Thaths
PS: Posting in HTML to preserve links

http://www.antipope.org/charlie/blog-static/2011/06/reality-check-1.html

Three arguments against the singularity
By Charlie Stross <http://www.antipope.org/mt/mt-cp.cgi?__mode=view&blog_id=1&id=2>

I periodically get email from folks who, having read "Accelerando", assume I
am some kind of fire-breathing extropian zealot who believes in the
imminence of the singularity, the uploading of the libertarians, and the
rapture of the nerds. I find this mildly distressing, and so I think it's
time to set the record straight and say what I *really* think.

Short version: *Santa Claus doesn't exist.*

Long version:

I'm going to take it as read that you've read Vernor Vinge's essay on the
coming technological singularity (1993)
<http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html>, are
familiar with Hans Moravec's concept of mind uploading
<http://www.aleph.se/Trans/Global/Uploading/>, and know about Nick
Bostrom's Simulation argument <http://www.simulation-argument.com/>.
*If not, stop right now and read them before you continue with this piece.*
Otherwise you're missing out on the fertilizer in which the whole field of
singularitarian SF, not to mention posthuman thought, is rooted. It's
probably a good idea to also be familiar with Extropianism and to have read
the posthumanism FAQ <http://www.extropy.org/faq.htm>, because if you
haven't you'll have missed out on the salient social point that posthumanism
has a posse.

(In passing, let me add that I am not an extropian, although I've hung out
on and participated in their online discussions since the early 1990s. I'm
*definitely* not a libertarian: economic libertarianism is based on the same
reductionist view of human beings as rational economic actors as 19th
century classical economics — a drastic over-simplification of human
behaviour. Like Communism, Libertarianism is a superficially comprehensive
theory of human behaviour that is based on flawed axioms and, if acted upon,
would result in either failure or a hellishly unpleasant state of
post-industrial feudalism.)

But anyway ...

I can't prove that there isn't going to be a hard take-off singularity in
which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood.
Nor can I prove that mind uploading won't work, or that we are or aren't
living in a simulation. *Any* of these things would require me to prove the
impossibility of a highly complex activity which nobody has really attempted
so far.

However, I can make some guesses about their likelihood, and the prospects
aren't good.

First: super-intelligent AI is unlikely because, if you pursue Vernor's
program, you get there incrementally by way of human-equivalent AI, and
human-equivalent AI is unlikely. The *reason* it's unlikely is that human
intelligence is an emergent phenomenon of human physiology, and it only
survived the filtering effect of evolution by enhancing human survival
fitness in some way. Enhancements to primate evolutionary fitness are not
much use to a machine, or to people who want to extract useful payback (in
the shape of work) from a machine they spent lots of time and effort
developing. We may want machines that can recognize and respond to our
motivations and needs, but we're likely to leave out the annoying bits, like
needing to sleep for roughly 30% of the time, being lazy or emotionally
unstable, and having motivations of their own.

(This is all aside from the gigantic can of worms that is the ethical status
of artificial intelligence; if we ascribe the value inherent in human
existence to conscious intelligence, then before creating a conscious
*artificial* intelligence we have to ask if we're creating an entity
deserving of rights. Is it murder to shut down a software process that is in
some sense "conscious"? Is it genocide to use genetic algorithms to evolve
software agents towards consciousness? These are huge show-stoppers — it's
possible that just as destructive research on human embryos is tightly
regulated and restricted, we may find it socially desirable to restrict
destructive research on borderline autonomous intelligences ... lest we
inadvertently open the door to inhumane uses of human beings as well.)

We clearly want machines that perform human-like tasks. We want computers
that recognize our language and motivations and can take hints, rather than
requiring instructions enumerated in mind-numbingly tedious detail. But
whether we want them to be *conscious* and *volitional* is another question
entirely. I don't want my self-driving car to argue with me about where we
want to go today. I don't want my robot housekeeper to spend all its time in
front of the TV watching contact sports or music videos. And I certainly
don't want to be sued for maintenance by an abandoned software development
project.

Karl Schroeder <http://www.kschroeder.com/> suggested one interesting
solution to the AI/consciousness ethical bind, which I used in my novel Rule
34 <http://www.amazon.com/Rule-34-Charles-Stross/dp/0441020348/charlieswebsi-20>.
Consciousness seems to be a mechanism for recursively modeling internal
states within a body. In most humans, it reflexively applies to the human
being's own person: but some people who have suffered neurological damage
(due to cancer or traumatic injury) project their sense of identity onto an
external object. Or they are convinced that they are dead, even though they
know their body is physically alive and moving around.

If the subject of consciousness is not intrinsically pinned to the conscious
platform, but can be arbitrarily re-targeted, then we may want AIs that
focus reflexively on the needs of the humans they are assigned to — in other
words, their sense of self is focussed on us, rather than internally. They
perceive our needs as being their needs, with no internal sense of self to
compete with our requirements. While such an AI might accidentally
jeopardize its human's well-being, it's no more likely to *deliberately* turn
on its external "self" than you or I are to shoot ourselves in the head.
And it's no more likely to try to bootstrap itself to a higher level of
intelligence that has different motivational parameters than your right hand
is likely to grow a motorcycle and go zooming off to explore the world
around it without you.

Uploading ... is not obviously impossible unless you are a crude mind/body
dualist <http://en.wikipedia.org/wiki/Dualism_%28philosophy_of_mind%29>.
However, if it becomes plausible in the near future we can expect extensive
theological arguments over it. If you thought the abortion debate was
heated, wait until you have people trying to become immortal via the wire.
Uploading implicitly refutes the doctrine of the existence of an immortal
soul, and therefore presents a raw rebuttal to those religious doctrines
that believe in a life after death. People who believe in an afterlife will
go to the mattresses to maintain a belief system that tells them their dead
loved ones are in heaven rather than rotting in the ground.

But even if mind uploading *is* possible and eventually happens, as Hans
Moravec remarks <http://www.primitivism.com/pigs.htm>, "Exploration and
colonization of the universe awaits, but earth-adapted biological humans are
ill-equipped to respond to the challenge. ... Imagine most of the inhabited
universe has been converted to a computer network — a cyberspace — where
such programs live, side by side with downloaded human minds and
accompanying simulated human bodies. A human would likely fare poorly in
such a cyberspace. Unlike the streamlined artificial intelligences that zip
about, making discoveries and deals, reconfiguring themselves to efficiently
handle the data that constitutes their interactions, a human mind would
lumber about in a massively inappropriate body simulation, analogous to
someone in a deep diving suit plodding along among a troupe of acrobatic
dolphins. Every interaction with the data world would first have to be
analogized as some recognizable quasi-physical entity ... Maintaining such
fictions increases the cost of doing business, as does operating the mind
machinery that reduces the physical simulations into mental abstractions in
the downloaded human mind. Though a few humans may find a niche exploiting
their baroque construction to produce human-flavored art, more may feel a
great economic incentive to streamline their interface to the cyberspace." (
*Pigs in Cyberspace*, 1993.)

Our form of conscious intelligence emerged from our evolutionary heritage,
which in turn was shaped by our biological environment. We are not evolved
for existence as disembodied intelligences, as "brains in a vat", and we
ignore E. O. Wilson's Biophilia Hypothesis
<http://en.wikipedia.org/wiki/Biophilia_hypothesis> at
our peril; I strongly suspect that the hardest part of mind uploading won't
be the mind part, but the body and its interactions with its surroundings.

Moving on to the Simulation Argument: I can't disprove that, either. And it
has a deeper-than-superficial appeal, insofar as it offers a deity-free
afterlife, as long as the ethical issues involved in creating ancestor
simulations are ignored. (Is it an act of genocide to create a software
simulation of an entire world and its inhabitants, if the conscious
inhabitants are party to an act of genocide?) Leaving aside the sneaking
suspicion that anyone capable of creating an ancestor simulation wouldn't be
focussing their attention on any ancestors as primitive as *us*, it would
make a good free-form framework for a postmodern high-tech religion.
Unfortunately it seems to be unfalsifiable, at least by the inmates (us).

Anyway, in summary ...

This is my take on the singularity: we're not going to see a hard take-off,
or a slow take-off, or any kind of AI-mediated exponential outburst. What
we're going to see is increasingly solicitous machines defining our
environment — machines that sense and respond to our needs "intelligently".
But it will be the intelligence of the serving hand rather than the
commanding brain, and we're only at risk of disaster if we harbour
self-destructive impulses.

We *may* eventually see mind uploading, but there'll be a holy war to end
holy wars before it becomes widespread: it will literally overturn
religions. That *would* be a singular event, but beyond giving us an
opportunity to run Nozick's experience machine
<http://en.wikipedia.org/wiki/Experience_machine> thought
experiment for real, I'm not sure we'd be able to make effective use of it —
our hard-wired biophilia will keep dragging us back to the real world, or to
simulations indistinguishable from it.

Finally, the simulation hypothesis builds on this and suggests that if we
*are* already living in a cyberspatial history simulation (and not a
philosopher's hedonic thought experiment) we might not be able to apprehend
the underlying "true" reality. In fact, the gap between here and there might
be non-existent. Either way, we can't actually prove anything about it,
unless the designers of the ancestor simulation have been kind enough to
gift us with an afterlife as well.

Any way you cut these three ideas, they don't provide much in the way of
reference points for building a good life, especially if they turn out to be
untrue or impossible (the null hypothesis). Therefore I conclude that, while
not ruling them out, it's unwise to live on the assumption that they're
coming down the pipeline within my lifetime.

I'm done with computational theology: I think I need a drink!

*Update*: Today appears to be Steam Engine day: Robin Hanson on why he
thinks a singularity is unlikely
<http://www.overcomingbias.com/2011/06/the-betterness-explosion.html>.
Go read.
-- 
Marge: Quick, somebody perform CPR!
Homer: Umm (singing) I see a bad moon rising.
Marge: That's CCR!
Homer: Looks like we're in for nasty weather.
Sudhakar Chandra                                    Slacker Without Borders
