A fun, thought-provoking talk by Charlie Stross.
This man belongs on silk. Cory, want to do the honours? :)
Udhay
http://www.antipope.org/charlie/blog-static/2007/05/shaping_the_future.html
Shaping the future
(One of the things that goes with being an SF
writer is that people expect you to talk about,
well, the future. Last week, engineering
consultancy TNG Technology Consulting invited me
to Munich to address one of their technology open
days. Here's a transcript of my talk, which
discusses certain under-considered side effects
of some technologies that you're probably already
becoming familiar with. Note that this is a long
blog entry even by my verbose standards so
you'll need to hit on the "continue reading" link to see the whole thing.)
Good afternoon, and thank you for inviting me
here today. I understand that you're expecting a
talk about where the next 20 years are taking us,
how far technology will go, how people will use
the net, and whether big shoulder pads and food
pills will be fashionable. Personally, I'm still
waiting for my personal jet car; I've been
waiting about fifty years now, and I mention
this as a note of caution: while personal jet
cars aren't obviously impossible, their
non-appearance should give us some insights into
how attempts to predict the future go wrong.
I'm a science fiction writer by trade, and people
often think that means I spend a lot of time
trying to predict possible futures. Actually,
that's not the job of the SF writer at all:
we're not professional futurologists, and we
probably get things wrong as often as anybody
else. But because we're not tied to a specific
technical field, we are at least supposed to keep our eyes open for surprises.
So I'm going to ignore the temptation to talk
about a whole lot of subjects (global warming,
bioengineering, the green revolution, the
intellectual property wars) and explain why,
sooner or later, everyone in this room is going
to end up in Wikipedia. And I'm going to get us there the long way round ...
Speed
The big surprise in the 20th century (remember
that personal jet car?) was the redefinition of
progress that took place some time between 1950 and 1970.
Before 1800, human beings didn't travel faster
than a horse could gallop. The experience of
travel was that it was unpleasant, slow, and
usually involved a lot of exercise or the
hazards of the seas. Then something odd happened:
a constant that had held for all of human history,
the upper limit on travel speed, turned into a
variable. By 1980, the upper limit on travel
speed had risen (for some lucky people on some
routes) to just over Mach Two, and to just under
Mach One on many other, shorter routes. But from
1970 onwards, travel speeds stopped increasing;
to all intents and purposes, we aren't any faster
today than we were when the Comet and Boeing 707 airliners first flew.
We can plot this increase in travel speed on a
graph (better still, plot the increase in
maximum possible speed) and it looks quite
pretty; it's a classic sigmoid curve, initially
rising slowly, then with the rate of change
peaking between 1920 and 1950, before tapering
off again after 1970. Today, the fastest vehicle
ever built, NASA's New Horizons spacecraft, en
route to Pluto, is moving at approximately 21
kilometres per second, only twice as fast as an
Apollo spacecraft from the late 1960s. Forty-five
years to double the maximum velocity; back in the
1930s it was happening in less than a decade.
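The sigmoid shape described above can be sketched numerically with a logistic function. The ceiling, midpoint, and steepness below are illustrative assumptions chosen to echo the narrative (plateau near Mach Two, steepest growth in the 1930s), not fitted data.

```python
import math

def logistic(t, ceiling, midpoint, steepness):
    """Classic sigmoid: slow start, steep middle, plateau at the ceiling."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Illustrative parameters: top passenger speed plateaus around Mach Two
# (~2450 km/h), with the steepest growth near 1935.
for year in (1800, 1900, 1935, 1970, 2000):
    speed = logistic(year, ceiling=2450, midpoint=1935, steepness=0.05)
    print(year, round(speed), "km/h")
```

Plotting these values reproduces the curve in the text: nearly flat before 1900, steep through the mid-century, and flattening out after 1970.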
One side-effect of faster travel was that people
traveled more. A brief google told me that in
1900, the average American traveled 210 miles per
year by steam-traction railroad, and 130 miles by
electric railways. Today, comparable travel
figures are 16,000 miles by road and air, a
fifty-fold increase in distance traveled. I'd
like to note that the new transport technologies
consume one-fifth the energy per
passenger-kilometer, but overall energy
consumption is much higher because of the
distances involved. We probably don't spend
significantly more hours per year aboard aircraft
than our 1900-period ancestors spent aboard steam
trains, but, at twenty times the velocity or
more, we travel much further and consume energy faster while we're doing so.
Information
Around 1950, everyone tended to look at what the
future held in terms of improvements in transportation speed.
But as we know now, that wasn't where the big
improvements were going to come from. The
automation of information systems just wasn't on
the map, other than in the crudest sense:
punched-card sorting and collating machines and desktop calculators.
We can plot a graph of computing power against
time that, prior to 1900, looks remarkably
similar to the graph of maximum speed against
time. Basically it's a flat line from prehistory
up to the invention, in the seventeenth or
eighteenth century, of the first mechanical
calculating machines. It gradually rises as
mechanical calculators become more sophisticated,
then in the late 1930s and 1940s it starts to
rise steeply. From 1960 onwards, with the
transition to solid state digital electronics,
it's been necessary to switch to a logarithmic
scale to even keep sight of this graph.
It's worth noting that the complexity of the
problems we can solve with computers has not
risen as rapidly as their performance would
suggest to a naive bystander. This is largely
because interesting problems tend to be complex,
and computational complexity rarely scales
linearly with the number of inputs; we haven't
seen the same breakthroughs in the theory of
algorithmics that we've seen in the engineering
practicalities of building incrementally faster machines.
Speaking of engineering practicalities, I'm sure
everyone here has heard of Moore's Law. Gordon
Moore of Intel coined this one back in 1965, when
he observed that the transistor count on an
integrated circuit for minimum component cost
doubles every 24 months. This isn't just
about the number of transistors on a chip, but
the density of transistors. A similar law seems
to govern storage density in bits per unit area for rotating media.
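A strict 24-month doubling compounds dramatically, which a minimal sketch makes concrete. The 1965 starting count of 64 components is an assumed round figure for illustration, roughly in line with the chips Moore was describing.

```python
def components(year, base_year=1965, base_count=64, doubling_years=2):
    """Component count under a strict 24-month doubling law."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Doubling every two years: ten doublings per twenty years.
for year in (1965, 1975, 1985, 2005):
    print(year, f"{components(year):,.0f}")
```

Twenty years of doubling multiplies the count by about a thousand; forty years, by about a million, which is why the graph needs a logarithmic scale.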
As a given circuit becomes physically smaller,
the time taken for a signal to propagate across
it decreases, and if it's printed on a material
of a given resistivity, the amount of power
dissipated in the process decreases. (I hope I've
got that right: my basic physics is a little
rusty.) So we get faster operation, or we get
lower-power operation, by going smaller.
We know that Moore's Law has some way to run
before we run up against the irreducible limit to
downsizing. However, it looks unlikely that we'll
ever be able to build circuits where the
component count exceeds the number of component
atoms, so I'm going to draw a line in the sand
and suggest that this exponential increase in
component count isn't going to go on forever;
it's going to stop around the time we wake up and
discover we've hit the nanoscale limits.
The cultural picture in computing today therefore
looks much as it did in transportation technology
in the 1930s: everything tomorrow is going to be
wildly faster than it is today, let alone
yesterday. And this progress has been running for
long enough that it's seeped into the public
consciousness. In the 1920s, boys often wanted to
grow up to be steam locomotive engineers;
politicians and publicists in the 1930s talked
about "air-mindedness" as the key to future
prosperity. In the 1990s it was software
engineers and in the current decade it's the politics of internet governance.
All of this is irrelevant. Because computers and
microprocessors aren't the future. They're
yesterday's future, and tomorrow will be about something else.
Bandwidth
I don't expect I need to lecture you about
bandwidth. Let's just say that our communication
bandwidth has been increasing in what should by
now be a very familiar pattern since, oh, the
eighteenth century, and the elaborate system of
semaphore stations the French crown used for its own purposes.
Improvements in bandwidth are something we get
from improvements in travel speed or information
processing; you should never underestimate the
bandwidth of a pickup truck full of magnetic
tapes driving cross-country (or an Airbus full of
DVDs), and similarly, moving more data per unit
time over fiber requires faster switches at each end.
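That truck-of-tapes observation is easy to put numbers on. The payload and trip figures below are illustrative assumptions, not real logistics data.

```python
# Effective bandwidth of physical media in transit: enormous capacity,
# terrible latency. Assumed figures: 1,000 tapes at 800 MB each,
# driven cross-country in 48 hours.
tapes = 1_000
bytes_per_tape = 800 * 10**6    # ~800 MB per tape (assumed)
trip_seconds = 48 * 3600        # a two-day drive (assumed)

bits_total = tapes * bytes_per_tape * 8
bandwidth_bps = bits_total / trip_seconds
print(f"{bandwidth_bps / 10**6:.0f} Mbit/s sustained")
```

Even with these modest assumptions the truck sustains tens of megabits per second, far more than the long-haul links of the filing-cabinet-modem era; the catch is a latency of two days.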
Now, with little or no bandwidth, when it was
expensive and scarce and modems were boxes the
size of filing cabinets that could pump out a few
hundred bits per second, computers weren't that
interesting; they tended to be big, centralized
sorting machines that very few people could get
to and make use of, and they tended to be used
for the kind of jobs that can be centralized, by
large institutions. That's the past, where we've come from.
With lots of bandwidth, the picture is very
different, but you don't get lots of bandwidth
without also getting lots of cheap information
processing: lots of small but dense circuitry,
hordes of small computers spliced into everything
around us. So the picture we've got today is of a
world where there are nearly as many mobile
phones in the EU as there are people, where each
mobile phone is a small computer, and where the
fast 3G, UMTS phones are moving up to a megabit
or so of data per second over the air, and the
next-generation 4G standards are looking to move
100 Mbps of data. So that's where we are now. And
this picture differs from the past in a very
interesting way: because lots of people are interacting with them.
(That, incidentally, is what makes the world wide
web possible; it's not the technology but the
fact that millions of people are throwing random
stuff into their computers and publishing on it.
You can't do that without ubiquitous cheap
bandwidth and cheap terminals to let people
publish stuff. And there seems to be a critical
threshold for it to work; any BBS or network
system seems to require a certain size of user
base before it begins to acquire a culture of its own.)
Which didn't happen before, with computers. It's
like the difference between having an
experimental test plane that can fly at 1000
km/h, and having thousands of Boeings and
Airbuses that can fly at 1000 km/h and are used
by millions of people every month. There will be
social consequences, and you can't easily predict
the consequences of the mass uptake of a
technology by observing the leading-edge consequences when it first arrives.
Unintended Consequences
It typically takes at least a generation before
the social impact of a ubiquitous new technology becomes obvious.
We are currently aware of the consequences of the
switch to personal high-speed transportation (the
car) and road freight distribution. It
shapes our cities and towns, dictates where we
live and work, and turns out to have
disadvantages our ancestors were not aware of,
from particulate air pollution to suburban sprawl
and the decay of city centers in some countries.
We tend to be less aware of the social
consequences, too. Compare that 1900-era figure
of 340 miles per year traveled by rail against
the 16,000 miles of a typical modern American. It
is no longer rare to live a long way from
relatives, workplaces, and educational
institutions. Countries look much more
homogeneous on the large scale (the same shops
in every high street) because community has
become delocalized from geography. Often we don't
know our neighbours as well as we know people who
live hundreds of kilometers away. This is the
effect of cheap, convenient, high-speed transport.
Now, we're still in the early stages of the
uptake of mobile telephony, but some lessons are already becoming clear.
Traditional fixed land-lines connect places, not
people; you dial a number and it puts you through
to a room in a building somewhere, and you hope
the person you want to talk to is there.
Mobile phones in contrast connect people, not
places. You don't necessarily know where the
person at the other end of the line is, what room
in which building they're in, but you know who they are.
This has interesting social effects. Sometimes
it's benign; you never have to wonder if someone
you're meeting is lost or unable to find the
venue, and you never lose track of people. On the
other hand, it has bad effects, especially when
combined with other technologies: bullying via
mobile phone is rife in British schools, and
"happy slapping" wouldn't be possible without
them. (Happy slapping: assaulting people while an
accomplice films it with a cameraphone, for the
purpose of sending the movie footage around;
often used for intimidation, sometimes just for vicarious violent fun.)
Convergence
It's even harder to predict the second-order
consequences of new technologies when they start
merging at the edges, and hybridizing.
A modern cellphone is nothing like a late-1980s
cellphone. Back then, the cellphone was basically
a voice terminal. Today it's as likely as not to
be a video and still camera, a GPS navigation
unit, and an MP3 player; to have a keyboard for
texting and a screen for surfing the web; and it
may also be a full-blown business computer with
word processing and spreadsheet applications aboard.
In future it may end up as a pocket computer that
simply runs voice-over-IP software, using the
cellular telephony network or WiFi or WiMax or
just about any other transport layer that comes
to hand to move speech packets back and forth with acceptable latency.
And it's got peripherals. GPS location, cameras,
text input. What does it all mean?
Putting it all together
Let's look at our notional end-point, where the
bandwidth and information processing revolutions
are taking us, as far ahead as we can see
without positing real breakthroughs and new
technologies, such as cheap quantum computing,
pocket fusion reactors, and an artificial
intelligence that is as flexible and
unpredictable as ourselves. It's about 25-50 years away.
Firstly, storage. I like to look at the trailing
edge: how much non-volatile solid-state storage
can you buy for, say, ten euros? (I don't like
rotating media; they tend to be fragile, slow,
and subject to amnesia after a few years. So this
isn't the cheapest storage you can buy, just the
cheapest reasonably robust solid-state storage.)
Today, I can pick up about 1Gb of FLASH memory in
a postage-stamp-sized card for that much money.
Fast-forward a decade and that'll be 100Gb. Two
decades and we'll be up to 10Tb.
10Tb is an interesting number. That's a megabit
for every second in a year (there are roughly 10
million seconds per year). That's enough to store
a live DivX video stream (compressed a lot
relative to a DVD, but at the same overall
resolution) of everything I look at for a year,
including time I spend sleeping, or in the
bathroom. Realistically, with multiplexing, it
puts three or four video channels and a sound
channel and other telemetry (a heart monitor,
say, a running GPS/Galileo location signal,
everything I type and every mouse event I send)
onto that chip, while I'm awake. All the time.
It's a life log; replay it and you've got a
journal file for my life. Ten euros a year in
2027, or maybe a thousand euros a year in 2017.
(Cheaper if we use those pesky rotating hard
disks; it's actually about five thousand euros
if we want to do this right now.)
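The budget arithmetic above is worth spelling out, using the talk's own round figures. The channel allocation at the end is my illustrative assumption of how the megabit might be split.

```python
# Lifelog budget: a 10 Tb chip spread over the talk's round figure of
# roughly 10 million seconds in a year.
storage_bits = 10 * 10**12   # 10 Tb of solid-state storage
seconds = 10 * 10**6         # the talk's rounding for a year
budget_bps = storage_bits / seconds
print(int(budget_bps), "bits per second")   # one megabit per second

# One illustrative way to multiplex that budget (assumed figures):
channels = {
    "three or four video streams": 800_000,
    "audio": 128_000,
    "heart monitor + GPS + keystrokes": 72_000,
}
assert sum(channels.values()) <= budget_bps
```

The point is simply that a sustained megabit per second, heavily compressed, is enough for several low-resolution video channels plus the telemetry, which is why 10Tb per year is the interesting threshold.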
Why would anyone want to do this?
I can think of several reasons. Initially, it'll
be edge cases. Police officers on duty: it'd be
great to record everything they see, as evidence.
Folks with early-stage neurodegenerative
conditions like Alzheimer's: with voice tagging
and some sophisticated searching, it's a memory prosthesis.
Add optical character recognition on the fly for
any text you look at, speech-to-text for anything
you say, and it's all indexed and searchable.
"What was the title of the book I looked at and
wanted to remember last Thursday at 3pm?"
Think of it as google for real life.
We may even end up being required to do this, by
our employers or insurers. In many towns in the
UK, it is impossible for shops to get insurance,
a condition of doing business, without
demonstrating that they have CCTV cameras in
place. Having such a lifelog would certainly make
things easier for teachers and social workers at
risk of being maliciously accused by a student or client.
(There are also a whole bunch of very nasty
drawbacks to this technology; I'll talk about
some of them later, but right now I'd just like
to note that it would fundamentally change our
understanding of privacy, redefine the boundary
between memory and public record, and be subject
to new and excitingly unpleasant forms of abuse.
But I suspect it's inevitable, and rather than
asking whether this technology is avoidable, I
think we need to be thinking about how we're going to live with it.)
Now, this might seem as if it's generating
mountains of data, but really, it isn't. There
are roughly 80 million people in Germany. Let's
assume they all have lifelogs. They're generating
something like 10Tb of data each, 10^13 bits, per
year, or 10^21 bits for the entire nation every year: 10^23 bits per century.
Is 10^23 bits a huge number? No, it isn't, when we
pursue Moore's Law to the bitter end.
There's a model for long term high volume storage
that I like to use as a reference point.
Obviously, we want our storage to be as compact
as possible: one bit per atom, ideally, if not
more; one bit per atom, at least, seems as if it
might be achievable. We want it to be stable, too. (In
the future, the 20th century will be seen as a
dark age: while previous centuries left books
and papers that are stable for centuries with
proper storage, and many of the early analog
recordings were stable enough to survive for
decades, the digital media and magnetic tapes
and optical disks of the latter third of the 20th
century decay in mere years. And if they don't
decay, they become unreadable: the original tapes
of the slow-scan video from the first moon
landing, for example, appear to be missing, and
the much lower quality broadcast images are all
that remain. So stability is important, and I'm
not even going to start on how we store data and
the metainformation describing it.)
My model of a long-term, high-volume data storage
medium is a synthetic diamond. Carbon occurs in a
variety of isotopes, of which the stable ones are
carbon-12 and carbon-13 (carbon-12 being by far
the more abundant in nature). We can speculate that if
molecular nanotechnology as described by, among
others, Eric Drexler, is possible, we can build a
device that will create a diamond, one layer at a
time, atom by atom, by stacking individual atoms,
and with enough discrimination to stack
carbon-12 and carbon-13 where we choose, we've got a tool for
writing memory diamond. Memory diamond is quite
simple: at any given position in the rigid carbon
lattice, a carbon-12 followed by a carbon-13
means zero, and a carbon-13 followed by a
carbon-12 means one. To rewrite a zero to a one,
you swap the positions of the two atoms, and vice versa.
It's hard, it's very stable, and it's very dense.
How much data does it store, in practical terms?
The capacity of memory diamond storage is of the
order of Avogadro's number of bits per two molar
weights. For diamond, that works out at 6.022 x
10^23 bits per 25 grams. So, going back to my
earlier figure for the combined lifelog data
streams of everyone in Germany: twenty-five
grams of memory diamond would store six years' worth of data.
Six hundred grams of this material would be
enough to store lifelogs for everyone on the
planet (at an average population of, say, eight
billion people) for a year. Sixty kilograms can
store a lifelog for the entire human species for a century.
In more familiar terms: by the best estimate I
can track down, in 2003 we as a species recorded
2,500 petabytes (2.5 x 10^18 bytes) of data.
That's almost ten milligrams. The Google cluster,
as of mid-2006, was estimated to have 4 petabytes
of RAM. In memory diamond, you'd need a microscope to see it.
So, it's reasonable to conclude that we're not
going to run out of storage any time soon.
Now, capturing the data, indexing and searching
the storage, and identifying relevance: that's
another matter entirely, and it's going to be one
that imprints the shape of our current century on
those ahead, much as the great 19th-century
infrastructure projects (that gave our cities
paved roads and sewers and railways) define that era for us.
I'd like to suggest that really fine-grained
distributed processing is going to help; small
processors embedded with every few hundred
terabytes of storage. You want to know something,
you broadcast a query: the local processors
handle the problem of searching their respective
chunks of the 128-bit address space, and when one
of them finds something, it reports back. But
this is actually boring. It's an implementation detail.
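A toy version of that broadcast-and-report scheme is easy to sketch. The chunk contents and the thread-based "local processors" here are purely illustrative stand-ins for storage shards.

```python
from concurrent.futures import ThreadPoolExecutor

# Each small processor owns one chunk of the address space and scans
# only its own records.
chunks = [
    ["grocery list", "photo of bridge"],
    ["book title: Accelerando", "lecture notes"],
    ["photo of cat", "book title: Glasshouse"],
]

def local_search(chunk_id, chunk, query):
    """A local processor reports (chunk_id, record) for each hit."""
    return [(chunk_id, rec) for rec in chunk if query in rec]

def broadcast(query):
    """Broadcast the query; gather whatever the shards report back."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(
            lambda ic: local_search(ic[0], ic[1], query),
            enumerate(chunks))
        return [hit for partial in partials for hit in partial]

print(broadcast("book title"))
```

The design point is that no node ever scans the whole store; the query fans out, each shard does a small local search, and only the hits travel back.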
What I'd like to look at is the effect this sort
of project is going to have on human civilization.
The Singularity reconsidered
Those of you who're familiar with my writing
might expect me to spend some time talking about
the singularity. It's an interesting term, coined
by computer scientist and SF writer Vernor Vinge.
Earlier, I was discussing the way in which new
technological fields show a curve of accelerating
progress until it hits a plateau and slows down
rapidly. It's the familiar sigmoid curve. Vinge
asked, "what if there exist new technologies
where the curve never flattens, but looks
exponential?" The obvious example to him was
Artificial Intelligence. It's still thirty years
away today, just as it was in the 1950s, but the
idea of building machines that think has been
around for centuries, and more recently, the idea
of understanding how the human brain processes
information and coding some kind of procedural
system in software for doing the same sort of
thing has soaked up a lot of research.
Vernor came up with two postulates. Firstly, if
we can design a true artificial intelligence,
something that's cognitively our equal, then we
can make it run faster by throwing more computing
resources at it. (Yes, I know this is
questionable; it raises the question of whether
intelligence is parallelizable, or what
resources it takes.) And if you can make it run
faster, you can make it run much faster:
hundreds, millions, of times faster. Which means
problems get solved fast. This is your basic
weakly superhuman AI: the one you deploy if you
want it to spend an afternoon and crack a problem
that's been bugging everyone for a few centuries.
He also noted something else: we humans are
pretty dumb. We can see most of the elements of
our own success in other species, and
individually, on average, we're not terribly
smart. But we've got the ability to communicate,
to bind time, and to plan, and we've got a theory
of mind that lets us model the behaviour of other
animals. What if there can exist other forms of
intelligence, other types of consciousness, which
are fundamentally better than ours at doing
whatever it is that consciousness does? Just as a
quicksort algorithm that sorts in O(n log n)
comparisons is fundamentally better (except on
very small sets) than a bubble sort that typically takes O(n^2) comparisons.
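The gap between those two complexity classes is easy to demonstrate by counting comparisons directly. This sketch uses a middle-element-pivot quicksort and a deliberately unfavourable (reversed) input for bubble sort.

```python
def bubble_sort(xs):
    """O(n^2): compares every adjacent pair, pass after pass."""
    xs, comparisons = list(xs), 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

def quicksort(xs):
    """O(n log n) on average: middle-element pivot, recursive partition.
    Counts ~2 comparisons per element per partition (the < and > scans)."""
    if len(xs) <= 1:
        return list(xs), 0
    pivot = xs[len(xs) // 2]
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    lo_sorted, c1 = quicksort(lo)
    hi_sorted, c2 = quicksort(hi)
    return lo_sorted + eq + hi_sorted, 2 * len(xs) + c1 + c2

data = list(range(100, 0, -1))   # reversed: worst case for bubble sort
_, slow = bubble_sort(data)
_, fast = quicksort(data)
print(slow, "vs", fast, "comparisons")
```

For a hundred elements bubble sort makes 4,950 comparisons while quicksort needs only a small fraction of that, and the ratio keeps widening as n grows; that widening ratio is the analogy being drawn to qualitatively better kinds of intelligence.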
If such higher types of intelligence can exist,
and if a human-equivalent intelligence can build
an AI that runs one of them, then it's going to
appear very rapidly after the first weakly
superhuman AI. And we're not going to be able to
second guess it because it'll be as much smarter than us as we are than a frog.
Vernor's singularity is therefore usually
presented as an artificial intelligence induced
leap into the unknown: we can't predict where
things are going on the other side of that event
because it's simply unprecedented. It's as if the
steadily steepening rate of improvement in
transportation technologies that gave us the
Apollo flights by the late 1960s kept on going,
with a Jupiter mission in 1982, a fast
relativistic flight to Alpha Centauri by 1990, a
faster than light drive by 2000, and then a time
machine so we could arrive before we set off. It
makes a mockery of attempts to extrapolate from prior conditions.
Of course, aside from making it possible to write
very interesting science fiction stories, the
Singularity is a very controversial idea. For one
thing, there's the whole question of whether a
machine can think, although as the late, eminent
professor Edsger Dijkstra said, "the question of
whether machines can think is no more interesting
than the question of whether submarines can
swim". A secondary pathway to the Singularity is
the idea of augmented intelligence, as opposed to
artificial intelligence: we may not need machines
that think, if we can come up with tools that
help us think faster and more efficiently. The
world wide web seems to be one example. The
memory prostheses I've been muttering about are another.
And then there's a school of thought that holds
that, even if AI is possible, the Singularity
idea is hogwash: it just looks like an
insuperable barrier or a permanent step change
because we're too far away from it to see the
fine-grained detail. Canadian SF writer Karl
Schroeder has explored a different hypothesis:
that there may be an end to progress. We may
reach a point where the scientific enterprise is
done, where all the outstanding questions have
been answered and the unanswered ones are
physically impossible for us to address. (He's
also opined that the idea of an AI-induced
Singularity is actually an example of erroneous
thinking that makes the same mistake as the
proponents of intelligent design (Creationism):
the assumption that complex systems cannot be
produced by simple, non-consciously-directed
processes.) An end to science is still a very
long way away right now; for example, I've
completely failed to talk about the real elephant
in the living room, the recent explosion in our
understanding of biological systems that started
in the 1950s but only really began to gather pace in the 1990s. But what then?
Well, we're going to end up with at the least
lifelogs, ubiquitous positioning and
communication services, a civilization where
every artifact more complicated than a spoon is
on the internet and attentive to our moods and
desires, cars that drive themselves, and a whole
lot of other mind-bending consequences. All
within the next two or three decades. So what can
we expect of this collision between
transportation, information processing, and bandwidth?
Drawing Conclusions
We're already living in a future nobody
anticipated. We don't have personal jet cars, but
we have ridiculously cheap intercontinental
airline travel. (Holidays on the Moon? Not yet,
but if you're a billionaire you can pay for a
week in orbit.) On the other hand, we discovered
that we do, in fact, require more than four
computers for the entire planet (as Thomas Watson
is alleged to have said). An increasing number of
people don't have telephone lines any more; they
rely on a radio network instead.
The flip side of Moore's Law, which we don't pay
much attention to, is that the cost of electronic
components is in a deflationary free fall of a kind
that would have given a Depression-era economist
nightmares. When we hit the brick wall at the end
of the road (when further miniaturization is
impossible), things are going to get very bumpy
indeed, much as the aerospace industry hit the
buffers at the end of the 1960s in North America
and elsewhere. This stuff isn't big, and it
doesn't have to be expensive, as the One Laptop
Per Child project is attempting to demonstrate.
Sooner or later there won't be a new model to
upgrade to every year, the fab lines will have
paid for themselves, and the bottom will fall out
of the consumer electronics industry, just as it
did for the steam locomotive workshops before them.
Before that happens, we're going to get used to
some very disorienting social changes.
Hands up, anyone in the audience, who owns a
slide rule? Or a set of trigonometric tables?
Who's actually used them, for work, in the past year? Or decade?
I think I've made my point: the pocket calculator
and the computer algebra program have effectively
driven those tools into obsolescence. This
happened some time between the early 1970s and
the late 1980s. Now we're about to see a whole
bunch of similar and much weirder types of obsolescence.
Right now, Nokia is designing global positioning
system receivers into every new mobile phone they
plan to sell. GPS receivers in a phone SIM card
have been demonstrated. GPS is exploding
everywhere. It used to be for navigating
battleships; now it's in your pocket, along with
a moving map. And GPS is pretty crude: you need
open line of sight to the satellites, and the
signal's easily messed up. We can do better than this,
and we will. In five years, we'll all have phones
that connect physical locations again, instead of
(or as well as) people. And we'll be raising a
generation of kids who don't know what it is to
be lost, to not know where you are and how to get
to some desired destination from wherever that is.
Think about that. "Being lost" has been part of
the human experience ever since our hominid
ancestors were knuckle-walking around the plains
of Africa. And we're going to lose it; at least,
we're going to make it as unusual an experience
as finding yourself out in public without your underpants.
We're also in some danger of losing the concepts
of privacy, and warping history out of all recognition.
Our concept of privacy relies on the fact that
it's hard to discover information about other
people. Today, you've all got private lives that
are not open to me. Even those of you with blogs,
or even lifelogs. But we're already seeing some
interesting tendencies in the area of attitudes
to privacy on the internet among young people,
under about 25; if they've grown up with the
internet they have no expectation of being able
to conceal information about themselves. They
seem to work on the assumption that anything that
is known about them will turn up on the net
sooner or later, at which point it is trivially searchable.
Now, in this age of rapid, transparent
information retrieval, what happens if you've got
a lifelog, registering your precise GPS
coordinates and scanning everything around you?
If you're updating your whereabouts via a
lightweight protocol like Twitter and keeping in
touch with friends and associates via a blog?
It'd be nice to tie your lifelog into your blog
and the rest of your net presence, for your
personal convenience. And at first, it'll just be
the kids who do this: kids who've grown up with
little expectation of, or understanding of,
privacy. Well, it'll be the kids and the folks on
the Sex Offenders Register who're forced to
lifelog as part of their probation terms, but
that's not our problem. Okay, it'll also be
people in businesses with directors who want to
exercise total control over what their employees
are doing, but they don't have to work there ... yet.
You know something? Keeping track of those quaint
old laws about personal privacy is going to be
really important. Because in countries with no
explicit right to privacy (I believe the US
constitution is mostly silent on the subject)
we're going to end up blurring the boundary
between our Second Lives and the first life, the
one we live from moment to moment. We're
time-binding animals, and nothing binds time
tighter than a cradle-to-grave recording of our every moment.
The political hazards of lifelogging are, or
should be, semi-obvious. In the short term, we're
going to have to learn to do without a lot of bad
laws. If it's an offense to pick your nose in
public, someone, sooner or later, will write a
'bot to hunt down nose-pickers and refer them to
the police. Or people who put the wrong type of
rubbish in the recycling bags. Or cross the road
without using a pedestrian crossing, when there's
no traffic about. If you dig hard enough,
everyone is a criminal. In the UK, today, there
are only about four million public CCTV
surveillance cameras; I'm asking myself, what is
life going to be like when there are, say, four
hundred million of them? And everything they see
is recorded and retained forever, and can be
searched retroactively for wrong-doing.
One of the biggest risks we face is that of
sleep-walking into a police state, simply by
mistaking the ability to monitor everyone for
even the most minute legal infraction for an imperative to do so.
And then there's history.
History today is patchy. I never met either of my
grandfathers; both of them died before I was
born. One of them I recognize from three
photographs; the other, from two photographs and
about a minute of cine film. Silent, of course.
Going back further, to their parents ... I know
nothing of these people beyond names and dates.
(They died thirty years before I was born.)
This century we're going to learn a lesson about
what it means to be unable to forget anything.
And it's going to go on, and on. Barring a
catastrophic universal collapse of human
civilization (which, I should note, was widely
predicted from August 1945 onward, and hasn't
happened yet) we're going to be laying down
memories in diamond that will outlast our bones,
and our civilizations, and our languages. Sixty
kilograms will handily sum up the total history
of the human species, up to the year 2000. From
then on ... we still don't need much storage, in
bulk or mass terms. There's no reason not to
massively replicate it and ensure that it survives into the deep future.
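The sixty-kilogram figure is easy to sanity-check with back-of-envelope arithmetic. Here is a minimal sketch in Python, where every number (the count of humans who have ever lived, the bytes of recoverable record per person, and the storage density of diamond) is an illustrative assumption of mine, not a figure from the talk:

```python
# Purely illustrative back-of-envelope check of the "sixty kilograms" claim.
# Every figure below is an assumption, not a number from the talk.

HUMANS_EVER_LIVED = 100e9      # rough demographic estimate, ~10^11 people
BYTES_PER_PERSON = 1e6         # assume ~1 MB of recoverable pre-2000 record each
BYTES_PER_KG_DIAMOND = 1e20    # assumed molecular-scale storage density
                               # (far below one bit per carbon atom, so conservative)

total_bytes = HUMANS_EVER_LIVED * BYTES_PER_PERSON   # 1e17 bytes, ~100 petabytes
mass_kg = total_bytes / BYTES_PER_KG_DIAMOND

print(f"total archive: {total_bytes:.1e} bytes")
print(f"mass at assumed density: {mass_kg:.1e} kg")
```

Even with generous per-person allowances, the mass comes out orders of magnitude below sixty kilograms, which is the point: in mass terms, the complete pre-2000 record of our species is negligible.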
And with ubiquitous lifelogs, and the internet,
and attempts at providing a unified interface to
all interesting information (Wikipedia, let's
say) we're going to give future historians a
chance to build an annotated, comprehensive
history of the entire human race. Charting the
relationships and interactions between everyone
who's ever lived since the dawn of history, or
at least the dawn of the new kind of history
that is about to be born this century.
Total history (a term I'd like to coin, by
analogy to total war) is something we haven't
experienced yet. I'm really not sure what its
implications are, but then, I'm one of the odd
primitive shadows just visible at one edge of the
archive: I expect to live long enough to be
lifelogging, but my first forty or fifty years
are going to be very poorly documented, mere
gigabytes of text and audio to document decades
of experience. What I can be fairly sure of is
that our descendants' relationship with their
history is going to be very different from our
own, because they will be able to see it with a
level of depth and clarity that nobody has ever experienced before.
Meet your descendants. They don't know what it's
like to be involuntarily lost, don't understand
what we mean by the word "privacy", and will have
access (sooner or later) to a historical
representation of our species that defies
understanding. They live in a world where history
has a sharply-drawn start line, and everything
they individually do or say will sooner or later
be visible to everyone who comes after them,
forever. They are incredibly alien to us.
And yet, these trends are emergent from the
current direction of the telecommunications
industry, and are likely to become visible as
major cultural changes within the next ten to
thirty years. None of them require anything but a
linear progression from where we are now, in a
direction we're already going in. None of them
take into account external technological
synergies, stuff that's not obviously predictable
like brain/computer interfaces, artificial
intelligences, or magic wands. I've purposefully
ignored discussion of nanotechnology, tissue
engineering, stem cells, genomics, proteomics,
the future of nuclear power, the future of
environmentalism and religion, demographics, our
environment, peak oil and our future energy
economy, space exploration, and a host of other topics.
The wrap
As projections of a near future go, the one I've
presented in this talk is pretty poor. In my
defense, I'd like to say that the only thing I
can be sure of is that I'm probably wrong, or at
least missing something as big as the internet, or antibiotics.
(I know: driverless cars. They're going to
redefine our whole concept of personal autonomy.
Once autonomous vehicle technology becomes
sufficiently reliable, it's fairly likely that
human drivers will be forbidden, except under
very limited conditions. After all, human drivers
are the cause of about 90% of traffic accidents:
recent research shows that in about 80% of
vehicle collisions the driver was distracted in
the 3 seconds leading up to the incident. There's
an inescapable logic to taking the most common
point of failure out of the control loop; my
freedom to drive should not come at the risk of
life and limb to other road users, after all. But
because cars have until now been marketed to us
by appealing to our personal autonomy, there are
going to be big social changes when we switch over to driverless vehicles.
(Once all on-road cars are driverless, the
current restrictions on driving age and
sobriety will cease to make sense. Why
require a human driver to take an eight year old
to school, when the eight year old can travel by
themselves? Why not let drunks go home, if
they're not controlling the vehicle? So the rules
over who can direct a car will change. And
shortly thereafter, the whole point of owning
your own car (that you can drive it yourself,
wherever you want) is going to be subtly
undermined by the redefinition of the car from an
expression of independence to a glorified taxi.
If I was malicious, I'd suggest that the move to
autonomous vehicles will kill the personal
automobile market; but instead I'll assume that
people will still want to own their own
four-wheeled living room, even though their
relationship with it will change fundamentally. But I digress ...)
Anyway, this is the future that some of you are
building. It's not the future you thought you
were building, any more than the rocket designers
of the 1940s would have recognized a future in
which GPS-equipped hobbyists go geocaching at
weekends. But it's a future that's taking shape
right now, and I'd like to urge you to think hard
about what kind of future you'd like your
descendants, or yourselves, to live in.
Engineers and programmers are the often-anonymous
architects of society, and what you do now could
make a huge difference to the lives of millions,
even billions, of people in decades to come.
Thank you, and good afternoon.
Posted by Charlie Stross on May 13, 2007 10:44 PM