I'm a sucker for well-argued predictions. Whether they come true or not
is secondary to the "well-argued" part, as their primary function, for
me, is to provoke thought.

Udhay

http://www.antipope.org/charlie/blog-static/2011/08/usenix-2011-keynote-network-se.html

USENIX 2011 Keynote: Network Security in the Medium Term, 2061-2561 AD
By Charlie Stross

Good afternoon, and thank you for inviting me to speak at USENIX Security.

Unlike you, I am not a security professional. However, we probably share
a common human trait, namely that none of us enjoy looking like a fool
in front of a large audience. I therefore chose the title of my talk to
minimize the risk of ridicule: if we should meet up in 2061, much less
in the 26th century, you’re welcome to rib me about this talk. Because
I’ll be happy to still be alive to be ribbed.

So what follows should be seen as a farrago of speculation by a guy who
earns his living telling entertaining lies for money.

The question I’m going to spin entertaining lies around is this: what is
network security going to be about once we get past the current sigmoid
curve of accelerating progress and into a steady state, when Moore’s
first law is long since burned out, and networked computing appliances
have been around for as long as steam engines?

I’d like to start by making a few basic assumptions about the future,
some implicit and some explicit: if only to narrow the field.

For starters, it’s not impossible that we’ll render ourselves extinct
through warfare, be wiped out by a gamma ray burster or other
cosmological sick joke, or experience the economic equivalent of a
kernel panic – an unrecoverable global error in our technosphere. Any of
these could happen at some point in the next five and a half centuries:
survival is not assured. However, I’m going to spend the next hour
assuming that this doesn’t happen – otherwise there’s nothing much for
me to talk about.

The idea of an AI singularity has become common currency in SF over the
past two decades – that we will create human-equivalent general
artificial intelligences, and they will proceed to bootstrap themselves
to ever-higher levels of nerdish god-hood, and either keep us as pets or
turn us into brightly coloured machine parts. I’m going to palm this
card because it’s not immediately obvious that I can say anything useful
about a civilization run by beings vastly more intelligent than us. I’d
be like an australopithecine trying to visualize daytime cable TV. More
to the point, the whole idea of artificial general intelligence strikes
me as being as questionable as 19th century fantasies about
steam-powered tin men. I do expect us to develop some eerily purposeful
software agents over the next decades, tools that can accomplish
human-like behavioural patterns better than most humans can, but all
that’s going to happen is that those behaviours are going to be
reclassified as basically unintelligent, like playing chess or Jeopardy.

In addition to all this Grinch-dom, I’m going to ignore a whole grab-bag
of toys from science fiction’s toolbox. It may be fun in fiction, but if
you start trying to visualize a coherent future that includes aliens,
telepathy, faster than light travel, or time machines, your futurology
is going to rapidly run off the road and go crashing around in the blank
bits of the map that say HERE BE DRAGONS. This is non-constructive. You
can’t look for ways to harden systems against threats that emerge from
the existence of Leprechauns or Martians or invisible pink unicorns. So,
no Hollywood movie scenarios need apply.

Having said which, I cheerfully predict that at least one barkingly
implausible innovation will come along between now and 2061 and turn
everything we do upside down, just as the internet has pretty much
invalidated any survey of the future of computer security that might
have been carried out in 1961.

So what do I expect the world of 2061 to look like?

I am going to explicitly assume that we muddle through our current
energy crises, re-tooling for a carbon-neutral economy based on a
mixture of power sources. My crystal ball is currently predicting that
base load electricity will come from a mix of advanced nuclear fission
reactor designs and predictable renewables such as tidal and
hydroelectric power. Meanwhile, intermittent renewables such as solar
and wind power will be hooked to batteries for load smoothing, used to
power up off-grid locations such as much of the (current) developing
world, and possibly used on a large scale to produce storable fuels –
hydrocarbons via Fischer-Tropsch synthesis, or hydrogen gas via
electrolysis.

We are, I think, going to have molecular nanotechnology and atomic scale
integrated circuitry. This doesn’t mean magic nanite pixie-dust a la
Star Trek; it means, at a minimum, what today we’d consider to be exotic
structural materials. It also means engineered solutions that work a bit
like biological systems, but much more efficiently and controllably, and
under a much wider range of temperatures and pressures.

Mature nanotechnology is going to resemble organic life forms the way a
Boeing 737 resembles thirty tons of seagull biomass. Both the Boeing and
the flock of seagulls can fly, and both of them need a supply of organic
fuel to oxidize in order to do so. But a flock of thirty tons of
seagulls can’t carry a hundred passengers across the Atlantic, and
Boeings don’t lay eggs. Designed nanosystems don’t need to be
general-purpose replicators.

(Incidentally, I’ve been banging on about energy and mechanical
efficiency because without energy we don’t have a technological
civilization, and without a technological civilization questions of
network security take second place to where to get a new flint
arrowhead. Communications infrastructure depends on power; without it,
we’re nowhere.)

Where I’m going to stick my neck out, is that I predict great things for
medicine and biology over the next century. At the beginning of this
talk I said that if we’re still alive in the 26th century you’re welcome
to remind me of what I got wrong in this talk. I’m actually agnostic
about the possibility that some of us may still be around then. There
are a bunch of major medical obstacles to smash flat before that becomes
possible, but we’re living through the early days of a revolution in
genomics and biology – one made possible by massively parallel
number-crunching and networking – and we’re beginning to work out just
how complex our intracellular machinery is. Whole new areas of cellular
biology have opened up in the past decade: RNA interference as a
mechanism for modulating gene expression, how the topology of
chromosomal folding affects genetics, a bunch of stuff that wasn’t even
on the horizon in 2001. Getting a handle on your ignorance is the first
step on the road to understanding. And the long-term benefits are likely
to be huge.

We haven’t yet managed to raise the upper limit on human life expectancy
(it’s currently around 120 years), but an increasing number of us are
going to get close to it. And I think it’s quite likely that within
another century the mechanisms underlying cellular senescence will be
understood and treatable like other inborn errors of metabolism – in
this case, ones that aren’t weeded out by natural selection because they
do not impact the organism’s survival fitness until it has already long
since passed reproductive age.

This, incidentally, leads me to another prediction: something outwardly
resembling democracy everywhere.

Since 1911, democratic government by a republic has gone from being an
eccentric minority practice to the default system of government
world-wide – there are now more democracies than any other system, and
even authoritarian tyrannies find it expedient to ape at least the
outward symbolism of democratic forms, via rigged elections and life
presidencies.

As the collapse of the Warsaw Pact, and then the Arab Spring
demonstrated, popular support for democracy and freedom of speech is not
exceptional: it exists and is expressed everywhere where it is not
actively suppressed. Democracy is a lousy form of government in some
respects – it is particularly bad at long-term planning, for no event
that lies beyond the electoral event horizon can compel a politician to
pay attention to it – but it has two gigantic benefits: it handles
transfers of power peacefully, and provides a pressure relief valve for
internal social dissent. If enough people get angry they can vote the
bums out, and the bums will go – you don’t need to hold a civil war.

Unfortunately there are problems with democracy. In general,
democratically elected politicians are forced to focus on short-term
solutions to long-term problems because their performance is evaluated
by elections held on a time scale of single-digit years: if a project
doesn’t come to fruition within their time in office, it’s less than
useful to them. Democratic systems are prone to capture by special
interest groups that exploit the information asymmetry that’s endemic in
complex societies, or disciplined radical parties that simply refuse to
negotiate. The adversarial two-party model is a very bad tool for
generating consensus on how to tackle difficult problems with no
precedents – such as responses to climate instability or resource
shortages or new communications media. Finally, representative democracy
scales up badly – on the largest scales, those of national governments
with populations in the tens to hundreds of millions, it tends towards
voter apathy and alienation from the process of government – a pervasive
sense that “voting doesn’t change anything” – because individual voters
are distant from their representatives. Questionable e-voting
technologies with poor anonymization or broken audit capabilities don’t
help, of course.

Nor are governments as important as they used to be. National
governments are constrained by external treaties – by some estimates, up
to two-thirds of primary legislation in the UK has its origins in EU
directives or WTO and WIPO trade treaties. Even the US government, the
largest superpower on the block right now, is tightly constrained by the
international trade system it promoted in the wake of the second world war.

Ultimately, a lot of the decision-making power of government in the 21st
century is pushed down a level, to civil service committees and special
interest groups: and so we have democratic forms of government, without
the transparency and accountability. At least, until we invent something
better – which I expect will become an urgent priority before the end of
the century.

Now I want to talk a bit about economic development, because that’s one
of the key determinants of the shape of the world we live in.

Having asserted – as I said, you can point and mock later – that we’re
going to solve the energy crises, continue to burn non-fossil oil for
transport, and get better materials and medical treatments, I’d like to
look at the shape of our civilization.

Here in San Francisco it probably sometimes seems like the United States
of America is the centre of the world. In a very real way it was, within
living memory: in 1945, about 60% of global GDP came out of this nation,
because the rest of the developed world had been pounded flat. Today,
the United States, with around 5% of the planet’s population, is
responsible for around 25% of planetary GDP. Note that this isn’t an
absolute decline – the USA today is richer than it was in 1945. Rather,
the rest of the world has been playing catch-up.

Something similar happened in the 19th century; in 1860 the United
Kingdom, cradle of the industrial revolution, occupied about the same
position relative to the rest of the world that the USA occupied in
1945. Today, the UK is down to 3.5% of planetary GDP, albeit with less
than 1% of population. The good news is, we’re a lot richer than our
ancestors. Relative decline is not tragic in a positive-sum world.

I’m not telling you anything new if I mention that the big story of the
period from 1985 to 2015 is the development of China and India. Both
nations – together they account for 2.5 billion people, more than three
times the USA and EU combined – are sustaining economic growth at close
to 10% per annum, compounded over long periods. Assuming that they
survive the obstacles on the road to development, this process is going
to end fairly predictably: both India and China will eventually converge
with a developed world standard of living, while undergoing the
demographic transition to stable or slowly declining populations that
appears to be an inevitable correlate of development. (The population
time bomb that mesmerized futurologists in the 1970s has, happily, fizzled.)

A less noticed phenomenon is that of Africa’s development. Africa is a
huge continent, home to around a billion people in 53 nations. Africa
entered the 1980s in dismal shape; even today, 34 of those 53 nations
are ranked among the UN’s list of 48 least developed countries.

However, a quiet economic revolution is sweeping Africa. During the past
decade, overall economic growth averaged a more-than-respectable 5% per
annum; some areas are experiencing growth in the 6–7% range. Africa in
2011 is still very poor, but it is the poverty of the 1860s, not the
1660s. The short term prognosis for Africa as a continent is good – and
I would hazard a guess that, barring unexpected setbacks such as an even
larger war than the Congo conflict, Africa will follow China and India
up the development curve before 2040.

In 2006, for the first time, more than half of the planet’s human
population lived in cities. And by 2061 I expect more than half of the
planet’s human population will live in conditions that correspond to the
middle class citizens of developed nations.

We’re used to thinking of our world as being divided into economic zones
– the developed nations, primarily urban and relatively wealthy,
surmounting an immense pool of misery and mediaeval rural deprivation.
But by 2061 we or our children are going to be living on an urban
middle-class planet, with a globalized economic and financial
infrastructure recognizably descended from today’s system, and
governments that at least try to pay lip service to democratic norms.

(Checks wrist-watch.)

This is USENIX Security and I’m 10 minutes into my talk and I haven’t
mentioned computers or networks. Some of you are probably getting bored
or irritated by now, and it’s too early for an after-lunch nap, and I’m
too low-tech to give you hypnosis by powerpoint. So let me get round to
the stuff you came to hear about.

And let me say, before I do, that the picture I just painted – of the
world circa 2061, which is to say of the starting point from which the
world of 2561 will evolve – is bunk. It’s a normative projection, an
if-this-goes-on kind of future, based on the assumption that lots of
stuff won’t happen. No fourth world war, no alien invasion, no AI
singularity, no rapture of the nerds, no singularity of the baptists. In
actual fact, I’m pretty certain that something utterly unexpected will
come along and up-end all these projections – something as weird as the
world wide web would have looked in 1961. But even if no such black
swans take wing, the world of 2061 is still going to be really odd,
because the pervasive spread of networking technologies that we’ve
witnessed over the past half century is only the beginning. And while
the outer forms of that comfortable, middle-class urban developed-world
planetary experience might look familiar to us, the internal
architecture will be unbelievably different.

Let me start with an analogy.

Let’s imagine that, circa 1961 – just fifty years ago – a budding
Nikola Tesla or Bill Packard somewhere in big-city USA is tinkering in
his garage and succeeds in building a time machine.

Being adventurous – but not too adventurous – he sets the controls for
fifty years in the future, and arrives in downtown San Francisco.

What will he see, and how will he interpret it?

Well, a lot of the buildings are going to be familiar. Those that aren’t
– much of the skyline – will at least look like the city of the future
so familiar to us from magazines of the 1930s through the 1960s.
Automobiles are automobiles, even if the ones he sees look kind of
melted, and an obsession with aerodynamics might be taken for just
another fashion fad. (The ten million lines of code and multiple
microprocessors within the average 2011 car are, of course, invisible.)
Fashion? Hats are out, clothing has mutated in strange directions, but
that’s to be expected.

He may be thrown by the number of pedestrians walking around with wires
in their ears, or holding these cigarette-pack-sized boxes with glowing
screens. But earphones weren’t unheard of in 1961, and pocket television
sets were one of the routine signs that you’re in the future now, as far
back as Dick Tracy.

But there seem to be an awful lot of mad people walking around with bits
of plastic clipped to their ears, talking to themselves … and why do all
the advertising posters have a line of alphabet soup ending in ‘.com’ on
them?

The outward shape of the future contains the present and the past,
embedded within it like flies in amber. Our visitor from 1961 is
familiar with cars and clothes and buildings: after all, they all
existed in his own time. But he hasn’t heard of packet switched
networks, and thinks of computers as mainframes trapped in
air-conditioned rooms, and if he’s even aware of hypertext as a concept
it’s in the form Vannevar Bush described in his essay “As We May Think”
published in The Atlantic in 1945. Cellular radio networks lay in the
future; the 1961 iteration of mobile telephony was the MTS network,
which was entirely operator-assisted and ran over a bunch of about 25
VHF frequencies. Licklider’s paper proposing a packet-switched network
to allow general communications among computer users wasn’t published
until 1962. And while Jack Kilby’s first working example of an
integrated circuit was demonstrated in fall of 1958, the first IC based
computers didn’t come along until the 1960s.

Our time traveller from 1961 has a steep learning curve if he wants to
understand the technology the folks with the cordless headsets are
using. And as for the social consequences of the technologies in
question – beyond lots of people wandering the streets holding
conversations with imaginary companions – that’s an even longer reach.
The social consequences of a new technology are almost always impossible
to guess in advance.

Let me take mobile phones as an example. They let people talk to one
another – that much is obvious. What is less obvious is that for the
first time the telephone network connects people, not places – it’s
possible for new social behaviours to emerge. For example, people who
are wandering a city’s streets can contact one another and decide to
meet at a coffee shop, even if neither of them is near a fixed land line
for which the other has a phone number. This example might strike you as
trivial, but it represents an immense behavioural shift. Add in text
messaging and GPS and mobile internet access and yet more behavioural
shifts are possible. The current riots in London and elsewhere in the
UK, for example, appear to be coordinated by Blackberry Messenger and
text messaging, as looters exchange information about promising
locations that are unprotected by the police. And the behavioural
consequences of mobile phones go right off the map once we have a mature
LTE or WiMax network, and once the telcos have been bludgeoned into
providing plumbing rather than trying to rent you the dishwasher and
the kitchen taps.

For example, we’re currently raising the first generation of kids who
won’t know what it means to be lost – everywhere they go, they have GPS
service and a moving map that will helpfully show them how to get
wherever they want to go. It’s not hard to envisage an app that goes a
step beyond Google Maps on your smartphone, whereby it not only shows
you how to get from point A to point B, but it can book transport to get
you there – by taxi, ride-share, or plane – within your budgetary and
other constraints. That’s not even far-fetched: it’s just what you get
when you tie the mutant offspring of Hipmunk or Kayak into Google, and
add Paypal. But to our time traveller from 1961, it’s magic: you have a
little glowing box, and if you tell it “I want to visit my cousin Bill,
wherever he is,” a taxi will pull up and take you to Bill’s house (if he
lives nearby), or a Greyhound bus station, or the airport. (Better hope
he’s not visiting Nepal; that could be expensive.)

Smartphones aren’t merely there to make your high school geography
teacher weep, just as pocket calculators made maths teachers cry a
generation ago. Something like 50% of smartphone users check their work
email while they’re on vacation; a vast majority check work email when
they’re away from their desk. The whole point of the desk was, for many
people in the 20th century, to be a place of contact where co-workers or
customers could reach them reliably during fixed office hours. Our
phones aren’t quite up to the job of replacing the office desk today,
but with picoprojectors and more bandwidth in the pipeline they show
every sign of hitting that point within the next five to ten years. Even
today, a typical iOS or Android handset has about as many MIPS as a
workstation circa 2001–2003. It’s not immediately obvious why most
office workers need many more orders of magnitude more computing power
than that to get the job done – at least in their pocket, as opposed to
in a server farm somewhere in the cloud.

We already saw a bunch of changes in office jobs come in with laptops.
Hot-desking isn’t easy when employees are tied to a specific location by
wires. Meanwhile, some sectors are going in the opposite direction;
following a spate of embarrassing “laptop left in taxi” stories in the
last few years, large chunks of the British civil service no longer use
laptops, instead relying on desktop PCs and wired ethernet, with
biometrically authenticated thumb drives to store employee-specific
credentials.

All this, of course, assumes we have jobs to go to. The whole question
of whether a mature technosphere needs three or four billion full-time
employees is an open one, as is the question of what we’re all going to
do if it turns out that the future can’t deliver jobs. So I’m going to
tip-toe away from that ticking bomb …

What of the non-employment-related impact of smartphones? Most people
spend most of their lives away from the desk, away from work, doing
other stuff. Surfing the web for silly cat photographs or porn, trying
to keep the multiple facets of their identity from colliding messily on
Facebook – forget online dating, how many teens have met their
girlfriend or boyfriend’s parents for the first time via FB? – checking
competitors’ prices from the aisles in WalMart, and texting while
driving. We’re still in the first decade of mass mobile internet uptake,
and we still haven’t seen what it really means when the internet becomes
a pervasive part of our social environment, rather than something we
have to specifically sit down and plug ourselves in to, usually at a desk.

So let me start by trying to predict the mobile internet of 2061.

To some extent, the shape of the future depends on whether whoever
provides the basic service of communication – be it fibre in the ground
or wireless or optical frequencies over the air – funds their service by
charging for bandwidth or charging for a fixed infrastructure cost. The
latter is vastly preferable. I’m British, and I’m carrying a smartphone
today that has an IQ of about 70 – a full-fat iPhone 4 that is, despite
everything, as dumb as a brick. The trouble is that I’m using it with a
SIM from a British cellco whose international roaming data rate is
around US $5 per megabyte. Welcome back to the internet of the early
1990s! To get around the problem I’ve got a Virgin Mobile MiFi, a
cellular wifi router. But it comes with an unwelcome constraint – even
the so-called “unlimited” tariff limits me to 2.5 GB of data in a month.
If I go over that cap, the connection will be throttled. And I could
blow through that cap in a couple of hours, just by downloading a new
software update for my laptop.
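For a sense of scale, here is a quick back-of-envelope sketch. The $5/MB roaming rate and 2.5 GB cap are the figures quoted above; the 2 GB update size is a number I have made up for illustration:

```python
# Hypothetical worked example using the tariffs quoted in the talk.
ROAMING_PER_MB = 5.00    # US $ per megabyte, international roaming rate
CAP_MB = 2.5 * 1024      # "unlimited" MiFi tariff cap (~2.5 GB), in MB

update_mb = 2 * 1024     # an assumed 2 GB laptop OS update

# One OS update over roaming would cost about as much as a used car,
# and eat most of the monthly MiFi allowance on its own.
print(f"Roaming cost of one update: ${ROAMING_PER_MB * update_mb:,.0f}")
print(f"Share of monthly cap used: {update_mb / CAP_MB:.0%}")
```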

From my dumb consumer’s perspective, it’d be preferable to pay a higher,
but fixed, price for infrastructure – then I could download all the OS
updates, or movies, or whatever, at will. But this is obviously less
than appealing to Sprint, or AT&T, or whoever – they’d all much rather
artificially depress demand by imposing punitive charges on heavy users,
because installing new infrastructure is expensive.

These two models for pricing imply very different network topologies. A
model where bandwidth on the backhaul is capped implies lots of
peer-to-peer traffic over local area networks and lots of caching, but
relatively little long haul traffic. It implies that more powerful
processors are needed at the edge of the network because offloading the
job of OCR’ing that fifty page construction contract to the cloud means
uploading fifty pages of bitmapped scans, which will cost money: you
can’t outsource your brains. In contrast, a model where you just rent
the fibre or wireless spectrum for a fixed price implies a lot more
activity in the cloud, with thin clients. (And yes, I know full well
we’ve been chewing over this tension between business models since the
1990s.)

This leaves aside a third model, that of peer to peer mesh networks with
no actual cellcos as such – just lots of folks with cheap routers. I’m
going to provisionally assume that this one is hopelessly utopian, a GNU
vision of telecommunications that can’t actually work on a large scale
because the routing topology of such a network is going to be
nightmarish unless there are some fat fibre optic cables somewhere in
the picture. It’s kind of a shame – I’d love to see a future where no
corporate behemoths have a choke hold on the internet – but humans
aren’t evenly distributed geographically.

Our best hope may be that the new middle-class African, Indian and
Chinese populations will benefit from the kind of shiny new
infrastructure that we in crumbling Europe and America can only dream
of – and that eventually this goads our local infrastructure services
into raising their game. Or that some radically disruptive new
technology comes along: open access peer to peer mesh networks using DIY
remotely piloted drones whipped up on garage 3D printers as home brew
laser relays to span long distances and fill the fibre gap, for example.
Mind you, the security problems of a home-brew mesh network are enormous
and gnarly; when any enterprising gang of scammers can set up a public
router, who can you trust? Such a world is going to be either
crime-ridden or pervasively encrypted and inhabited by natives who are
required to be perfectly spherical cypherpunks – just like my
eighty-something parents. Not!

A brief aside on storage density is in order at this point. I’m throwing
around fairly gigantic amounts of data in this talk – where are we going
to store it all? The answer is, as Richard Feynman put it in 1959,
there’s plenty of room at the bottom. Let’s hypothesize a very high
density, non-volatile serial storage medium that might be manufactured
using molecular nanotechnology: I call it memory diamond. It’s a
diamondoid mesh, within which the state of a single data bit is encoded
in each atom: because we want it to be rigid and stable, we use a
carbon-12 nucleus to represent a zero, and a carbon-13 nucleus to
represent a one. How we read and write these bits is left as an exercise
for the student of mature molecular nanotechnology, but we can say with
some certainty that we can store Avogadro’s number of bits – 6 x 10^23 –
in 12.5 grams of carbon: around six billion terabytes per gram, or
roughly 170 billion terabytes in an ounce of memory diamond. Going by the
figures in a report from UCSD last year, the average worker processed or
consumed 3 terabytes per year, and there are around 3.18 billion workers
– about 9.5 zettabytes in all – which works out at under two grams of
memory diamond needed to store everything without compression or
deduplication.
At a guess, once you take out cute captioned cat videos and downloads
that annoy the hell out of the MPAA you can reduce that by an order of
magnitude.
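Taking the premises above at face value – one bit per carbon atom, a 12.5 g/mol carbon-12/13 mix, and the UCSD consumption figures quoted in the talk – the arithmetic can be checked in a few lines (a sanity-check sketch, not a storage design):

```python
AVOGADRO = 6.022e23      # atoms (and therefore bits) per mole of carbon
GRAMS_PER_MOLE = 12.5    # mean molar mass of the C-12/C-13 mix, per the talk
OUNCE_G = 28.35          # grams per ounce

# Density of memory diamond at one bit per atom.
bits_per_gram = AVOGADRO / GRAMS_PER_MOLE
tb_per_gram = bits_per_gram / 8 / 1e12           # ~6e9 TB per gram
print(f"{tb_per_gram:.2e} TB per gram")
print(f"{tb_per_gram * OUNCE_G:.2e} TB per ounce")

# World demand per the UCSD figures quoted in the talk.
workers = 3.18e9
tb_per_worker_year = 3
world_tb = workers * tb_per_worker_year          # ~9.5 zettabytes per year
grams_needed = world_tb / tb_per_gram
print(f"{grams_needed:.2f} g of memory diamond per year")
```

On these assumptions a year of everything the world’s workers process fits in a pinch of diamond dust, which is the point of the passage.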

(So I conclude that yes, in the long term we will have more storage
capacity than we necessarily know what to do with.)

Now, wireless bandwidth appears to be constrained fundamentally by the
transparency of air to electromagnetic radiation. I’ve seen some
estimates that we may be able to punch as much as 2 Tb/sec through air;
then we run into problems. This bandwidth is spread between all the
users within a given cell – the smaller the cell the better, obviously.
I’ve recently been hearing some interesting noise about the possibility
of using multiple-input/multiple-output to get around Shannon’s Law –
notably from a startup called OnLive – using multiple transmitters to
produce constructive or destructive interference around each receiver’s
antenna. This may be snake oil, or there may be something there. Even
so, even if MIMO works perfectly, it’s hard to see how we can get past
that hard limit of 2 Tb/sec per wireless host.

So, let’s approximate the upper limit on bandwidth to 2 Tb/sec per
person, by postulating a mixture of novel compression algorithms and
really tiny cells.

What can you do with 2 terabits per second per human being on the
planet? (Let alone 2 Tb/sec per wireless device, given that we’re already
within a handful of years of having more wireless devices than people?)

One thing you can do trivially with that kind of capacity is full
lifelogging for everyone. Lifelogging today is in its infancy, but it’s
going to be a major disruptive technology within two decades.

The basic idea behind lifelogging is simple enough: wear a couple of
small, unobtrusive camera chips and microphones at all times. Stream
their output, along with metadata such as GPS coordinates and a time
sequence, to a server somewhere. Working offline, the server performs
speech-to-text on all the dialogue you utter or hear, face recognition
on everyone you see, OCR on everything you read, and indexes it against
images and location. Whether it’s performed in the cloud or in your
smartphone is irrelevant – the resulting search technology essentially
gives you a prosthetic memory.

We’re already used to prosthetic memory to some extent; I used Google
multiple times in the preparation of this talk, to retrieve specific
dates and times of stuff I vaguely recalled but couldn’t bring directly
to memory. But Google and other search engines are a collective
prosthetic memory that can only scrutinize the sunlit upper waters of
the sea of human experience, the ones that have been committed to
writing and indexed. Lifelogging offers the promise of indexing and
retrieving the unwritten and undocumented. And this is both a huge
promise and an enormous threat.

Initially I see lifelogging having specific niches; as an aid for people
with early-stage dementia or other memory impairments, or to allow
students to sleep through lectures. Police in the UK are already
experimenting with real time video recording of interactions with the
public – I suspect that before long we’re going to see cops required to
run lifelogging apps constantly when on duty, with the output locked
down as evidence. And it’ll eventually become mandatory for other people
who work in professions where they are exposed to any risk that might
result in a significant insurance claim – surgeons, for example, or
truck drivers – not by force of law but as a condition of insurance cover.

Lifelogging into the cloud doesn’t require much bandwidth in absolute
terms, although it will probably take a few years to take off if the
cellcos succeed in imposing bandwidth caps. A few terabytes per year per
person should suffice for a couple of basic video streams and full
audio, plus locational metadata – multiply by ten if you want high
definition video at a high frame rate. And the additional hardware –
beyond that which comes in a 2011 smartphone – is minimal: a couple of
small webcams and microphones connected over some sort of short range
personal area network, plus software to do the offline indexing.
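That back-of-the-envelope figure is easy to sanity-check. Here’s a minimal Python sketch; the individual bitrates – two basic video streams at 250 kbit/s each, 64 kbit/s audio – are my own assumptions for what counts as “basic” in 2011 terms, not figures from the talk:

```python
# Back-of-the-envelope check of the "few terabytes per year" lifelogging
# estimate. The bitrates below are assumptions: two modest video streams
# at 250 kbit/s each, one full audio stream at 64 kbit/s, plus a sliver
# for GPS and timestamp metadata.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.16e7 seconds

video_bps = 2 * 250_000    # two basic video streams, bits per second
audio_bps = 64_000         # full audio
meta_bps  = 1_000          # locational metadata, essentially negligible

total_bits = (video_bps + audio_bps + meta_bps) * SECONDS_PER_YEAR
terabytes = total_bits / 8 / 1e12
print(f"{terabytes:.1f} TB/year")              # → 2.2 TB/year
```

Multiply the video bitrates by ten for high-definition streams and you land in the “few tens of terabytes” range – still modest against the cost curves of 2011 storage.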

Lifelogging raises huge privacy concerns, of course. Under what
circumstances can your lifelog legally be accessed by third parties? And
how do privacy laws apply? It should be clear that anyone currently
lifelogging in this way takes their privacy – and that of the people
around them – very lightly: as far as governments are concerned they can
subpoena any data they want, usually without even needing a court
warrant. Projects such as the UK’s Interception Modernization Program –
essentially a comprehensive internet communications retention system
mandated by government and implemented by ISPs – mean that if you become
a person of interest to the security services they’d have access to
everything. The prudent move would be to lifelog to encrypted SSDs in
your personal possession. Or not to do it at all. The security
implications are monstrous: if you rely on lifelogging for your memory
or your ability to do your job, then the importance of security is
pushed down Maslow’s hierarchy of needs. When only elite computer
scientists on ARPANet had accounts so they could telnet into mainframes at
another site, security was just a desirable luxury item – part of the
apex of the pyramid of needs. But when it’s your memory or your ability
to do paid employment, security gets to be something close to food and
water and shelter: you can’t live without it.

On the up side, if done right, widespread lifelogging to cloud based
storage would have immense advantages for combating crime and preventing
identity theft. Coupled with some sort of global identification system
and a system of access permissions that would allow limited queries
against a private citizen’s lifelog, it’d be very difficult to fake an
alibi for a crime, or to impersonate someone else. If Bill the Gangster
claims he was in the pub the night of a bank robbery, you can just query
the cloud of lifelogs with a hash of his facial features, the GPS
location of the pub, and the time he claims he was there. If one or more
people’s lifelogs provide a match, Bill has an alibi. Alternatively, if
a whole bunch of folks saw him exiting the back of the bank with a sack
marked SWAG, that tells a different story. Faking up an alibi in a
pervasively lifelogged civilization will be very difficult, requiring
the simultaneous corruption of multiple lifelogs in a way that portrays
a coherent narrative.
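The alibi query itself is conceptually just a filter over recorded sightings. A hypothetical Python sketch – the data model, the planar coordinates standing in for GPS, and the distance and time thresholds are all invented for illustration:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Sighting:
    face_hash: str   # hash of recognized facial features
    x: float         # simplified planar coordinates standing in for GPS
    y: float
    t: float         # timestamp in seconds

def corroborates(sightings, face_hash, x, y, t, radius=50.0, window=1800.0):
    """Return lifelog sightings consistent with a claimed alibi: same
    face hash, within `radius` metres and `window` seconds of the claim."""
    return [s for s in sightings
            if s.face_hash == face_hash
            and hypot(s.x - x, s.y - y) <= radius
            and abs(s.t - t) <= window]

# Hypothetical data: two independent lifelogs place "bill" near the pub;
# a third sighting is across town and doesn't count.
logs = [Sighting("bill", 10, 12, 1000),
        Sighting("bill", 14, 9, 1200),
        Sighting("bill", 5000, 5000, 1100)]
print(len(corroborates(logs, "bill", 12, 10, 1100)))  # → 2, alibi holds
```

The point of requiring matches from multiple independent lifelogs is exactly the one made above: forging an alibi means corrupting several witnesses’ records into one coherent story, not just one.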

So whether lifelogging becomes a big social issue depends partly on the
nature of our pricing model for bandwidth, and how we hammer out the
security issues surrounding the idea of our sensory inputs being logged
for posterity.

Lifelogging need not merely be something for humans. You can already buy
a collar-mounted camera for your pet dog or cat; I think it’s pretty
likely that we’re going to end up instrumenting farm animals as well,
and possibly individual plants such as tomato vines or apple trees –
anything of sufficient value that we don’t kill it after it has fruited.
Lifelogging for cars is already here, if you buy a high end model;
sooner or later they’re all going to be networked, able to book
themselves in for preventative maintenance rather than running until
they break down and disrupt your travel. Not to mention snitching on
your acceleration and overtaking habits to the insurance company, at
least until the self-driving automobile matches and then exceeds human
driver safety.

And then there are the other data sources we can import into the internet.

We’re currently living through a period in genomics research that is
roughly equivalent to the early 1960s in computing. In particular,
there’s a huge boom in new technologies for high speed gene sequencing.
Back in 1989, it was anticipated that the human genome project would
cost on the order of $3Bn and take two decades. In practice, it cost a
lot less, and was completed well ahead of schedule – with around 90% of
the work being done in the last 18 months, thanks to the development of
automated high speed sequencers. We’re now seeing a mass of disruptive
technologies for high throughput DNA sequencing appear, with full genome
sequencing for individuals now available for around US $30,000, and
expected to drop to around $1000–3000 within a couple of years. The
technologies in question, such as DNA microarrays, are benefiting from
the same miniaturisation cycle as semiconductors three to five decades
ago, and analysis of the resulting data in turn relies on the
availability of cheap supercomputing clusters. The Archon X-Prize for
genomics, established in 2006 and likely to be won in the next couple of
years, promises $10 million to “the first Team that can build a device
and use it to sequence 100 human genomes within 10 days or less, with
certain accuracy constraints, at a recurring cost of no more than
$10,000 (US) per genome.”

Now, there’s probably no reason to exhaustively sequence your own genome
more than once. And your genome isn’t the rigid determinant of your
health and metabolic fate that it was believed to be in the naive,
pre-scientific dark ages of the late 1980s. We are, if anything, only
just beginning to scope out the extent of our own ignorance of how our
cellular biology really works. But we live within an ecosystem of other
organisms. Each of us is carrying around a cargo of 1–3 kilograms of
bacteria and other unicellular organisms, which collectively outnumber
the cells of our own bodies by a thousand to one. (They’re mostly much
smaller than our own cells.) These are for the most part commensal
organisms – they live in our guts and predigest our food, or on our skin
– and they play a significant role in the functioning of our immune
system. One of the most dangerous medical crises we face today is the
spread of antibiotic resistance among pathogenic organisms: lest we
forget, as recently as the 1930s fully 30% of us could expect to die of
a fulminating bacterial infection. Old threats like tuberculosis are
re-emerging in the form of multidrug resistant strains; and new ones are
also appearing. Most members of the public seem not to understand how
close we came to catastrophe in 2003 with SARS – a disease not unlike
the common cold in its infectious virulence, but with a 15% associated
mortality rate. Only the rapid development of DNA assays for SARS – it
was sequenced within 48 hours of its identification as a new pathogenic
virus – made it possible to build and enforce the strict quarantine
regime that saved us from somewhere between two hundred million and a
billion deaths.

A second crisis we face is that of cancer, almost invariably the
emergent consequence of a malfunction within the genetic machinery of a
cell that causes it to start dividing and refuse to stop. Today, 40% of
the population of the UK, where I live, are expected to develop some
sort of cancer at some point in their lives.

With genome sequencing microarrays tending towards the price and
efficiency of VLSI circuits, and serious bandwidth available to upload
and process the data stream they deliver, we can expect eventually to
see home genome monitoring – both looking for indicators of precancerous
conditions or immune disorders within our bodies, and performing
metagenomic analysis on our environment. This will deliver both personal
benefits – catching early signs of infectious diseases or cancer – but
also, more importantly, providing health agencies with early warning of
epidemics. If our metagenomic environment is routinely included in
lifelogs, we have the holy grail of epidemiology within reach; the
ability to exhaustively track the spread of pathogens and identify how
they adapt to their host environment, right down to the level of
individual victims.

Is losing your genomic privacy an excessive price to pay for surviving
cancer and evading plagues?

Is compromising your sensory privacy through lifelogging a reasonable
price to pay for preventing malicious impersonation and apprehending
criminals?

Is letting your insurance company know exactly how you steer and hit the
gas and brake pedals, and where you drive, an acceptable price to pay
for cheaper insurance?

In each of these three examples of situations where personal privacy may
be invaded, there exists a strong argument for doing so in the name of
the common good – for prevention of epidemics, for prevention of crime,
and for prevention of traffic accidents. They differ fundamentally from
the currently familiar arguments for invasion of our data privacy by law
enforcement – for example, to read our email or to look for evidence of
copyright violation. Reading our email involves our public and private
speech, and looking for warez involves our public and private assertion
of intellectual property rights … but eavesdropping on our metagenomic
environment and our sensory environment impinges directly on the very
core of our identities.

I’m not talking about our identities in the conventional information
security context of our access credentials to information resources, but
of our actual identities as physically distinct human beings. We use the
term “identity theft” today to talk about theft of access credentials –
in this regime, “identity theft” means something radically more drastic.
If we take a reductionist view of human nature – as I’m inclined to –
our metagenomic context (including not just our own genome and proteome,
but the genome of our gut flora and fauna and the organisms we coexist
with) and our sensory inputs actually define who we are, at least from
the outside. And that’s not a lot of data to capture, if you look at it
in the context of two terabits per second of bandwidth per person.
Assume a human life expectancy of a century, and a terabit per second of
data to log everything about that person, and you can capture a human
existence in roughly 3.15 × 10²¹ bits … or about 65 milligrams of memory
diamond.
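The arithmetic behind that figure can be sketched in a few lines of Python. The one-bit-per-carbon-atom storage density for memory diamond is my assumption, not something stated here; it happens to make the mass come out near the 65 milligrams quoted:

```python
# Reproducing the lifetime-capture figure: a terabit per second logged
# over a hundred-year lifespan, then the mass of "memory diamond" needed
# to hold it, assuming one bit per carbon atom (an assumption chosen to
# match the quoted ~65 mg).
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600   # ~3.16e9 seconds
bits = 1e12 * SECONDS_PER_CENTURY                # ~3.16e21 bits

CARBON_ATOM_KG = 12 * 1.6605e-27                 # mass of one carbon-12 atom
milligrams = bits * CARBON_ATOM_KG * 1e6
print(f"{bits:.2e} bits, ~{milligrams:.0f} mg of diamond")  # ~63 mg
```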

With lifelogging and other forms of ubiquitous computing mediated by
wireless broadband, securing our personal data will become as important
to individuals as securing our physical bodies. Unfortunately we can no
more expect the general public to become security professionals than we
can expect them to become judo black-belts or expert marksmen. Security
is going to be a perpetual, on-going problem.

Moreover, right now we have the luxury of a short history; the world
wide web is twenty years old, the internet is younger than I am, and the
shifting sands of software obsolescence have for the most part buried
our ancient learning mistakes. Who remembers GeoCities today? Nor is
there much to be gained by a black hat from brute-force decrypting a
bunch of ten year old credit card accounts.

But it’s not going to be like that in the future. We can expect the pace
of innovation to slow drastically, once we can no longer count on
routinely more powerful computing hardware or faster network connections
coming along every eighteen months or so. But some forms of personal
data – medical records, for example, or land title deeds – need to
remain accessible over periods of decades to centuries. Lifelogs will be
similar; if you want at age ninety to recall events from age nine, then
a stable platform for storing your memory is essential, and it needs to
be one that isn’t trivially crackable in less than eighty-one years and
counting.

Robustness and durability are going to be at a premium in the future –
even if we don’t get those breakthroughs in life extension medicine that
will allow you to mock me for getting it wrong when we meet again in 2561.

So, to summarize: we’re moving towards an age where we may have enough
bandwidth to capture pretty much the totality of a human lifespan,
everything except for what’s going on inside our skulls. Storing and
indexing the data from such exhaustive lifelogging is, if not trivial,
certainly do-able (the average human being utters around 5000 words per
day, and probably reads less than 50,000; these aren’t impossible
speech-to-text and OCR targets). And while there are plausible reasons
why we might not be able to assert the overriding importance of personal
privacy in such data, it’s also clear that a complete transcript of
every word you ever utter in your life (or hear uttered), with
accompanying visuals and (for all I know) smell and haptic and
locational metadata, is of enormous value.
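The “certainly do-able” claim checks out with simple arithmetic. A Python sketch, assuming an average of six bytes per word – my figure, not the talk’s:

```python
# A lifetime text transcript is trivially small: ~5,000 words uttered or
# heard and ~50,000 words read per day, over a century, at an assumed
# average of 6 bytes per word (including whitespace).
DAYS_PER_CENTURY = 100 * 365.25
words_per_day = 5_000 + 50_000
total_words = words_per_day * DAYS_PER_CENTURY   # ~2e9 words
gigabytes = total_words * 6 / 1e9
print(f"{total_words:.2e} words, ~{gigabytes:.0f} GB as plain text")  # ~12 GB
```

Twelve gigabytes for every word of a century-long life: the hard part was never storage, it’s the indexing, the access control, and the threat model.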

Which leads me to conclude that it’s nearly impossible to overestimate
the political significance of information security on the internet of
the future. Rather than our credentials and secrets being at risk – our
credit card accounts and access to our email – our actual memories and
sense of self may be vulnerable to tampering by a sufficiently deft
attacker. From being an afterthought or a luxury – relevant only to the
tiny fraction of people with accounts on time-sharing systems in the
1970s – security is pushed down the pyramid of needs until it’s
important to all of us. Because it’s no longer about our property,
physical or intellectual, or about authentication: it’s about our actual
identity as physical human beings.

Posted by Charlie Stross at 13:35 on August 16, 2011 | Comments (136)
