<<http://www.schneier.com/crypto-gram-0404.html>>

National ID Cards 

As a security technologist, I regularly encounter people who say the
United States should adopt a national ID card. How could such a program
not make us more secure, they ask? 

The suggestion, when it's made by a thoughtful civic-minded person like
Nicholas Kristof in the New York Times, often takes on a tone that is
regretful and ambivalent: Yes, indeed, the card would be a minor invasion
of our privacy, and undoubtedly it would add to the growing list of
interruptions and delays we encounter every day; but we live in dangerous
times, we live in a new world.... 

It all sounds so reasonable, but there's a lot to disagree with in such
an attitude. 

The potential privacy encroachments of an ID card system are far from
minor. And the interruptions and delays caused by incessant ID checks
could easily proliferate into a persistent traffic jam in office lobbies
and airports and hospital waiting rooms and shopping malls. 

But my primary objection isn't the totalitarian potential of national
IDs, nor the likelihood that they'll create a whole immense new class of
social and economic dislocations. Nor is it the opportunities they will
create for colossal boondoggles by government contractors. My objection
to the national ID card, at least for the purposes of this essay, is much
simpler. 

It won't work. It won't make us more secure. 

In fact, everything I've learned about security over the last 20 years
tells me that once it is put in place, a national ID card program will
actually make us less secure. 

My argument may not be obvious, but it's not hard to follow, either. It
centers around the notion that security must be evaluated not based on
how it works, but on how it fails. 

It doesn't really matter how well an ID card works when used by the
hundreds of millions of honest people who would carry it. What matters
is how the system might fail when used by someone intent on subverting
that system: how it fails naturally, how it can be made to fail, and how
failures might be exploited. 

The first problem is the card itself. No matter how unforgeable we make
it, it will be forged. And even worse, people will get legitimate cards
in fraudulent names. 

Two of the 9/11 terrorists had valid Virginia driver's licenses in fake
names. And even if we could guarantee that everyone who issued national
ID cards couldn't be bribed, initial cardholder identity would be
determined by other identity documents... all of which would be easier to
forge. 

Not that there would ever be such a thing as a single ID card. Currently
about 20 percent of all identity documents are lost per year. An entirely
separate security system would have to be developed for people who lost
their card, a system that itself is capable of abuse. 

Additionally, any ID system involves people... people who regularly make
mistakes. We all have stories of bartenders falling for obviously fake
IDs, or sloppy ID checks at airports and government buildings. It's not
simply a matter of training; checking IDs is a mind-numbingly boring
task, one that is guaranteed to have failures. Biometrics such as
thumbprints show some promise here, but bring with them their own set of
exploitable failure modes. 

But the main problem with any ID system is that it requires the existence
of a database. In this case it would have to be an immense database of
private and sensitive information on every American -- one widely and
instantaneously accessible from airline check-in stations, police cars,
schools, and so on. 

The security risks are enormous. Such a database would be a kludge of
existing databases; databases that are incompatible, full of erroneous
data, and unreliable. As computer scientists, we do not know how to keep
a database of this magnitude secure, whether from outside hackers or the
thousands of insiders authorized to access it. 

And when the inevitable worms, viruses, or random failures happen and the
database goes down, what then? Is America supposed to shut down until
it's restored? 

Proponents of national ID cards want us to assume all these problems, and
the tens of billions of dollars such a system would cost -- for what? For
the promise of being able to identify someone? 

What good would it have been to know the names of Timothy McVeigh, the
Unabomber, or the DC snipers before they were arrested? Palestinian
suicide bombers generally have no history of terrorism. The goal here is
to know someone's intentions, and their identity has very little to do
with that. 

And there are security benefits in having a variety of different ID
documents. A single national ID is an exceedingly valuable document, and
accordingly there's greater incentive to forge it. There is more security
in alert guards paying attention to subtle social cues than in bored
minimum-wage guards blindly checking IDs. 

That's why, when someone asks me to rate the security of a national ID
card on a scale of one to 10, I can't give an answer. It doesn't even
belong on a scale. 

...

Stealing an Election 

There are major efforts by computer security professionals to convince
government officials that paper audit trails are essential in any
computerized voting machine. They have conducted actual examinations of
the software, engaged in letter-writing campaigns, testified before
government bodies, and, collectively, have maintained visibility and
public awareness of the issue. 

The track record of the computerized voting machines used to date has
been abysmal; stories of errors are legion. Here's another way to look at
the issue: what are the economics of trying to steal an election? 

Let's look at the 2002 election results for the 435 seats in the House of
Representatives. In order to gain control of the House, the Democrats
would have needed to win 23 more seats. According to actual voting data
(pulled off the ABC News website), the Democrats could have won these 23
seats by swinging 163,953 votes from Republican to Democrat, out of the
total 65,812,545 cast for both parties. (The total number of votes cast
is actually a bit higher; this analysis only uses data for the winning
and second-place candidates.) 

This means that the Democrats could have gained the majority in the House
by switching less than 1/4 of one percent of the total votes -- less than
one in 250 votes. 
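
Spelled out, the arithmetic is easy to check. Here's a quick sanity check
in Python -- just a sketch, with the two inputs taken straight from the
figures quoted above:

# Sanity check of the swing figures quoted above; the two inputs are the
# ABC News-derived totals from the text.
votes_swung = 163_953
two_party_votes = 65_812_545

fraction = votes_swung / two_party_votes
print(f"Fraction of votes swung: {fraction:.5f}")          # ~0.00249
print(f"As a percentage:         {fraction * 100:.3f}%")   # ~0.249%, under 1/4 of one percent
print(f"That is one vote in every {1 / fraction:.0f}")     # ~1 in 400, so fewer than 1 in 250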

Of course, this analysis is done in hindsight. In practice, more cheating
would be required to be reasonably certain of winning. Even so, the
Democrats could have won the House by shifting well below 0.5% of the
total votes cast across the election. 

Let's try another analysis: What is it worth to compromise a voting
machine? In contested House races in 2002, candidates typically spent $3M
to $4M, although the highest was over $8M. The outcomes of the 20 closest
races would have changed by swinging an average of 2,593 votes each.
Assuming (conservatively) a candidate would pay $1M to switch 5,000
votes, votes are worth $200 each. The actual value is probably closer to
$500, but I figured conservatively here to reflect the additional risk of
breaking the law. 

If a voting machine collects 250 votes (about 125 for each candidate),
rigging the machine to swing all of its votes would be worth $25,000.
That's going to be detected, so it is unlikely to happen. Swinging 10% of
the votes on any given machine would be worth $2,500. 
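
The same back-of-the-envelope valuation, spelled out as a Python sketch;
the $1M-for-5,000-votes figure is the conservative assumption above, and
everything else follows from it:

# Rough valuation of a single rigged machine, using the assumptions above.
dollars_per_vote = 1_000_000 / 5_000   # $200 per swung vote (the conservative figure)
votes_per_machine = 250                # roughly 125 for each candidate

max_swing = votes_per_machine // 2     # flipping the opponent's ~125 votes swings the whole machine
full_rig_value = max_swing * dollars_per_vote   # $25,000 -- blatant, and likely to be detected
subtle_rig_value = 0.10 * full_rig_value        # $2,500 for shifting only 10% of that swing

print(f"Value per swung vote:     ${dollars_per_vote:,.0f}")
print(f"Swing the whole machine:  ${full_rig_value:,.0f}")
print(f"Swing 10% of the machine: ${subtle_rig_value:,.0f}")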

This suggests that attacks against individual voting machines have to be
treated as a serious risk. 

Computerized voting machines run software, which means we need to figure
out what it's worth to compromise a voting machine's software design or
code, and not just individual machines. Any voting machine type deployed
in 25% of precincts would register enough votes that malicious software
could swing the balance of power without creating terribly obvious
statistical abnormalities. 

In 2002, all the Congressional candidates together raised over $500M. As
a result, one can conservatively conclude that affecting the balance of
power in the House of Representatives is worth at least $100M to the
party who would otherwise be losing. So when designing the security
behind the software, one must assume an attacker with a $100M budget. 

Conclusion: The risks to electronic voting machine software are even
greater than they first appear. 

This essay was written with Paul Kocher. 

...

Man-in-the-Middle Attack
 
The phrase "man-in-the-middle attack" is used to describe a computer
attack where the adversary sits in the middle of a communications channel
between two people, fooling them both. It is an important attack, and
causes all sorts of design considerations in communications protocols. 
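
To make the protocol version concrete, here is a minimal sketch of the
attack against a toy, unauthenticated Diffie-Hellman key exchange. The
names Alice, Bob, and Mallory and the parameters are purely illustrative;
the point is only that an attacker in the middle ends up sharing a key
with each side while both believe they are talking to each other -- which
is exactly why real protocols need authentication.

import random

# Toy unauthenticated Diffie-Hellman, deliberately simplified. The modulus
# is the Mersenne prime 2**127 - 1; any large prime would do for the demo.
P = 2**127 - 1
G = 5

def keypair():
    private = random.randrange(2, P - 1)
    return private, pow(G, private, P)

# Alice and Bob each generate a key pair, intending to talk to each other.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Mallory, sitting in the middle, intercepts both public values and sends
# each party her own public value instead.
mallory_priv, mallory_pub = keypair()

# Alice "agrees" on a key -- with Mallory, though she believes it's Bob.
key_alice = pow(mallory_pub, alice_priv, P)
# Bob does the same on his side.
key_bob = pow(mallory_pub, bob_priv, P)

# Mallory can compute both keys, so she can read, alter, and re-encrypt
# everything that passes between them, fooling both ends.
assert key_alice == pow(alice_pub, mallory_priv, P)
assert key_bob == pow(bob_pub, mallory_priv, P)

print("Mallory shares one key with Alice and another with Bob;")
print("Alice and Bob never actually shared a key at all.")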

But it's a real-life attack, too. Here's a story of a woman who posts an
ad requesting a nanny. When a potential nanny responds, she asks for
references for a background check. Then she places another ad, using the
reference material as a fake identity. She gets a job with the good
references -- they're real, although for another person -- and then robs
the family who hires her. And then she repeats the process. 

Look what's going on here. She inserts herself in the middle of a
communication between the real nanny and the real employer, pretending to
be one to the other. The nanny sends her references to someone she
assumes to be a potential employer, not realizing that it is a criminal.
The employer receives the references and checks them, not realizing that
they don't actually belong to the person who is sending them. 

It's a nasty piece of crime. 

...