We're all in the middle of a maze trying to get back. It's easier to understand
things if you start at the beginning and walk your way forward. (It's often
even easier to start at the end and walk backwards, too, but I don't think we
have that option.)
When public-key crypto was created, it liberated us from shared secrets. Alice
can talk secretly with Bob without pre-exchanging keys. It's important to also
remember that if Alice and Bob can pre-exchange public keys, then it's a huge
win over pre-exchanging mere secrets.
Let me start with a story.
Mulla Nasrudin walked into a haberdashery. As he came in, he said to the
proprietor, "First things first, my good man." (You can tell from the "my good
man" that he'd been spending too much time with Richard Burton.) "Did you just
see me walk in?"
The haberdasher said, "Why, yes, good Mulla, I did." (The intelligent reader
wonders how the haberdasher knew he was a Mulla. The astute one doesn't.)
Nasrudin replied, "Ah *HAH*!" (Thus showing how intelligent he was.) He
narrowed his eyes at the haberdasher and spoke slowly to him. "I want you to
answer a question, my good man, and think very carefully before you answer."
The haberdasher nodded assent and Nasrudin asked slowly, pausing between each
word, "Have. You. Ever. Seen me. Before?"
The haberdasher thought for a moment and then replied, "No, Mulla, I haven't."
Nasrudin snorted contemptuously and said, "Then how did you know it was me?"
And indeed, how *did* the haberdasher know that it was Nasrudin?
One of the great technological triumphs of public-key cryptography is that that
joke is no longer funny. Perhaps the other major triumph is that if you have a
well-defended hypothesis about how the haberdasher knew it was Nasrudin, you
can get a Ph.D. at any of the world's finest universities. My discussion
continues now.
As you see, public-key cryptography creates the problem that Alice needs to
know that she's really talking to Bob, and not someone pretending to be Bob.
I'm going to call this the misidentification problem. It is *the* problem that
is created by public-key cryptography; before 1975, it was just a cheap Nasrudin
joke. Alice can accidentally misidentify the wrong Bob, or a malicious person
can pretend to be Bob.
The so-called Man-in-the-Middle problem is merely misidentification going
maximally wrong: Mallory talks to both Alice and Bob, pretending to be each to
the other. I think there's too much stressing over MITM problems, and a good
deal of that stress exists because MITM is a subset of misidentification. MITM is a horrible
name for the class of problems, but we're really stuck with it. All I'll say is
that anyone who wants to think about this in depth is advised to remember that
the real problem is misidentification. There are many places where you can
create identification technology and MITM protection follows from that.
Laotse famously observed that before there were locks there were no burglars.
(Many people think that good Master Lao was against property, but actually he
was a security guy. Before there were locks, there were thieves, but there were
no burglars. Creating locks solved old problems but created new ones. Locks
thwart thieves and create burglars.) I'll paraphrase him here and say that
before Diffie-Hellman, there was no Mallory. (He's called Mallory because that
was the name of Nasrudin's haberdasher.)
There are two basic ways to solve the misidentification problem: key continuity
and certification. Each of these has a set of advantages and disadvantages, but
they're really the only two ways to solve the problem.
Let's talk about key continuity first. It's also a horrible name. The idea here
is that I know that the endpoint I'm talking to now is the same endpoint that I
was talking to last time. SSH does this, as does ZRTP. (Full disclosure, I am a
ZRTP co-author.) A cool thing about continuity (note that I've dropped the key)
is that it doesn't need an infrastructure. It just works. If I call Bob's
phone, I know I'm talking to Bob's phone. Mostly.
If the first time I try to talk to Bob, I get Mallory instead, I'm screwed. On
the other hand, Mallory has to *always* answer every time I call Bob or I
detect the break in continuity. Even if I don't twig to Mallory being there,
thinking perhaps that it's a network error, Mallory has to start all over
again. Even when Mallory succeeds, Mallory has a lot of work to do to keep up
the ruse. This is a cool thing because it makes Mallory's problem both hard and
brittle. It thus eliminates most Mallories.
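The continuity check itself is tiny. Here's a minimal sketch of trust-on-first-use (the names and the in-memory store are mine for illustration, not SSH's or ZRTP's actual mechanisms):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """A short identifier derived from the peer's public key."""
    return hashlib.sha256(public_key).hexdigest()

def check_continuity(known: dict, peer: str, public_key: bytes) -> str:
    """Trust-on-first-use: remember the key the first time we see a peer,
    and complain if it ever changes afterward."""
    fp = fingerprint(public_key)
    if peer not in known:
        known[peer] = fp           # first contact: record it and proceed
        return "first-contact"     # Mallory wins only if he's here *now*
    if known[peer] == fp:
        return "continuity-ok"     # same endpoint as last time
    return "continuity-broken"     # key changed: Mallory, or a reinstall
```

Everything rides on that first contact, which is exactly the trade described above: no infrastructure needed, but Mallory has to be there from call one and stay there forever.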
Yes, Mallory may only need to keep up the ruse for a short amount of time.
However, Mallory also has to be quasi-prescient in the abstract case. This is
why continuity works for a lot of problems. Sometimes it's easy to be
quasi-prescient, and in those cases continuity works badly.
Otherwise sane people worry far too much about these problems. Let me illustrate
with two anecdotes.
* Many years ago, Don Eastlake asked me if I wanted to go out to dinner with
Carl Ellison. I went out to dinner and a person introduced himself as Carl
Ellison. He gave me a cheap business card printed on orange paper with a PGP
public key fingerprint on it, and also a horrid grayscale bitmap photo. The
card claimed to be Carl Ellison's. In the intervening years, I've
conversed with this person by email, phone, and in person. Nonetheless, I am to
this day aware of at least three grave security errors I made, continue to
make, and no doubt will make in the future. I'm sure you see them too. Last
year, to my chagrin, I learned that despite the intervening time, I've been
talking to Nicholas Bourbaki, and I forgot to ask his Erdos number.
* Just the other week, Tamzen and I were driving and my mobile phone rang.
Let's just assume for giggles that the incoming call was protected with ZRTP. I
handed the phone to Tamzen and she answered the call saying, "Jon Callas's
phone." "Yes, this is his wife. We're in the car and he's driving. I can relay
a message." The caller did let her relay a message to me. Ooooo, pwned!
As absurd as those two stories are, there are people who actually consider each
of them to be a security problem that needs to be solved in the protocol, and
that a protocol that doesn't solve them is broken.
The bottom line is that there are places that continuity works well -- phone
calls are actually a good one. There are places it doesn't. The SSL problem
that Lucky has talked about so well is a place where it doesn't. Amazon can't
use continuity. It is both inconvenient and insecure.
It is inconvenient because it doesn't scale well. Amazon doesn't want everyone
to have to go through a ritual saying that they think this storefront they've
never been to before is Amazon. That doesn't pass the Grandma test. It's asking
everyone to be smarter than Mulla Nasrudin.
It's mind-bogglingly insecure. For something like a storefront, it doesn't even
pretend to solve the identification problem, let alone the misidentification
problem. The attacker can play the same game that spammers and phishers do.
They cast a broad net and only need a very small percentage of hits because a
small percentage times a large number works for them. This is a way of being
quasi-prescient.
The nicest thing you can say about it is that it replaces an identification problem
Grandma can understand (how do you know that's Amazon -- a store you've never
been to before?) with an identification problem that she can't (how do you know
that's not someone who isn't Amazon pretending to be Amazon, a store you've
never heard of before).
Nonetheless, despite the fact that continuity works badly for the SSL problem,
for problems like SSH and ZRTP, continuity works very well.
Now on to certification.
It's also very simple. We know Alice is Alice because she wears a name tag that
says, "Hello, my name is ALICE." Similarly, Bob wears a name tag, but his has
"BOB" written on it instead of "ALICE." That's it. We're done. Almost.
If Alice wrote her own name tag, then that's self-certification. It works well,
unless her name isn't Alice, in which case we're back to misidentification.
The obvious way around this is to get someone else to say that they think
that's Alice, or someone like her. That's what we usually mean by certification.
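In toy form (this model and its names are mine, not any real protocol's), the whole distinction is just who signed the tag:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NameTag:
    name: str       # "ALICE"
    key: str        # stand-in for the holder's public key
    signer: str     # the key that vouches for the name/key binding

def is_self_certified(tag: NameTag) -> bool:
    """Alice wrote her own tag: the signer is her own key."""
    return tag.signer == tag.key

def is_believable(tag: NameTag, trusted: set) -> bool:
    """Certification: someone we already trust vouched for the binding."""
    return tag.signer in trusted

alice_self = NameTag("ALICE", "key-alice", "key-alice")
alice_vouched = NameTag("ALICE", "key-alice", "key-nasrudin")
```

Self-certification tells you nothing unless her name really is Alice; the vouched tag is only as good as your reasons for trusting Nasrudin, which is the rest of this discussion.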
There are a lot of ways to do certification, but they all boil down to two
kinds: certification by consensus or by an authority.
PGP is of course the most notorious consensus system. There's a lot of good
things about it. It's very resilient in the face of unreliable authorities
(think Nasrudin). A number of proposals on how to fix the SSL problem adopt a
quasi-PGP system. I will flatter myself by assuming I don't need to describe
how it works.
There are a number of problems with the consensus approach, though. They
include:
* It's pseudonym-surly. If you want to use a pseudonym, you have to get it
certified by people whom you tell your pseudonym to, which kinda defeats the
point of having a pseudonym. All the many years I worked on PGP, I worried that
I was going to wake up one day and discover that I completed the panopticon
that Jeremy Bentham started.
At least with authorities, you can go up to Nasrudin and say, "Mulla, I know it
says Nicholas Bourbaki on my driver's license, but I'd like to be known as Carl
Ellison." If he agrees, then only he knows. In fact, given who he is, he's
likely to tell you, "I think that is wise, my child. Nicholas Bourbaki is a
really stupid name and you'd have a lot of trouble getting anyone with a lick
of sense to think it's your real name. But if I were you, I'd call myself John
Wilson instead."
* The system can be gamed. It can be charming, or it can be evil. One of the
most charming bits of gaming I remember from PGP in the late '90s was that
someone created a set of PGP keys for the Clinton White House (Bill, Hillary,
Al, etc.) and stuck them in the key server. They had a nice little web of trust
there among them, which is how you knew they were real!
The gaming can also be accidental. Careless people who certify things they
don't understand can create weird effects. More in the next bullet.
We all know that because you're selecting your own trust points, you can
untangle these things, but there are two problems. Grandma can't be expected to
do it, and the other is that as the whole web of trust gets bigger, a 4chan attack
on the web of trust causes work for everyone to unravel it. It means that as
the web gets bigger, there's more lulz in mischief.
* You can't stop people from certifying things. Lots of people end up with
certifications on their key by people they don't know. I disliked that so much
that when PGP became OpenPGP, I put in features to allow key owners to have
control on who certifies what.
For a number of years, one of my fears was an attack that I called "signature
harassment." It's very simple. Create a key with a name that will offend
people. It can be the name of a person who would upset some people, or you can
be blunt and just put racial/sexual/religious epithets in a key and start
signing away.
It's childish, but as we all know, certification works because people believe
it, and this is every bit as much an attack on the ecosystem as attacking an
authority.
I'm glad it never turned into a problem. It was one of my big worries and I
never mentioned it in more than a whisper. Any sort of crowd-sourced system has
to worry about a peanut gallery.
* Authorities arise no matter how much you avoid them. Or, self-organizing
structures have a tendency to self-organize. In the old PGP environment, people
who looked at trust paths noted that as one would expect, the Web O' Trust had
formed into a scale-free network with a relatively small number of
super-nodes. These super-nodes were arguably the equivalents of CAs. The
SSL/X.509 world had VeriSign and Thawte, but we had Ted T'so and Rodney Thayer.
For a time, there were people who wanted me to pursue becoming a super-node, as
if that were a good thing (as opposed to just a thing; no slight is meant to
Ted or Rodney, who were each mildly creeped out by the knowledge that they were
emergent authorities).
Imagine what would happen to a web of trust in a world full of botnets,
Anonymous, spammers, 4chan, phishing, and so on.
* Weird social effects trump security. There are what I call "fashionable
certs." They're like autographs. ZOMG! Phil Zimmermann signed my key! I'll
never wash it again! It's hard enough to get people who should know better
(like me) to make new keys as it is. It's worse when they lose social status.
When famous people sign a key, it pushes the Too Big To Fail problem to every
person who got a key signed by them. That key becomes Too Cool To Fail.
Ironically, the best way to solve this problem (and others above) is to create
a slut certifier that will sign anything. The only certificate worth having is
one that's not worth having.
Frankly, anyone who thinks that crowdsourcing will improve the problems we have
today hasn't thought it through. I have. At best, you'll trade one problem for
another. At worst, you end up with something that can't be fixed because it
would require a unanimous consensus to fix it. (It's possible that's true in
the X.509 world too, but if it is, they start with fewer authorities.)
Okay, so let's go on to explicit authorities, like CAs. If you have an
authority sign the name on your name tag, Nasrudin is vetting your name. What
happens when Nasrudin writes, "Robert" on your name tag instead of "Bob"? This
is a real problem when Alice knows you as Bob and wants to identify you that
way.
If you give Nasrudin 10^100 reasons why you should be Bob and not Robert and he
still says no, then Robert you are. Imagine the problem if Nasrudin believes
that no one in the world *could* be named Violet Blue or *should* be called
Identity Woman (because that would be evil) and that even after 10^100-plus
reasons, he says no. You have no other choice than to say "badda-bing" and
search for a new authority.
Now despite the fact that being able to call yourself what you want to be
called is one of the most fundamental rights there is, there also have to be
limits. At the rock-bottom level, if you want to solve the
misidentification problem, you can't just let bad actors be misidentified. I
think there is a very interesting argument to be made that the
misidentification problem can't be solved and so therefore isn't worth trying
to solve, but I'm not interested in making it today. I could go on for
paragraphs why I think identification is a good thing, and I'm sure that you'd
agree with the basics.
The inevitable conflicts often boil down to a simple question, "Who made you
an authority?" There are people who are rightly contemptuous of Nasrudin. There
are people who are wrongly contemptuous of Nasrudin. There are people who think
no mulla should be an authority, and those who think that only mullas should be
authorities. There are people who have their own favorite authority. There are
people who just don't like authorities and will pick a non-authority to be
their authority. In my opinion, it all comes back to "Who made you an
authority?" and all that that question brings with it.
What we know as cross-certification comes from the simple fact that no one
wants anyone else to be the ultimate creator of name tags. If Friar Tuck says,
"I delegate my own tag creation to Nasrudin" then a couple of interesting
things happen. He can gain the acceptance of all Nasrudin's work. His own
followers, many of whom have real problems with Mullas, can be told that those
are really the Friar's name tags and that way Bob doesn't have to have two name
tags and figure out which to use where. There is value to it. This is also a
way for an authority to do a power grab from other authorities. It's also a way
for cooperating authorities to divvy up the work if they have to create
millions, billions, or even 10^100 name tags. But there are lots of problems
here, especially when Nasrudin and Tuck delegate to each other. The whole point of
an authority is that they're authoritative, and when they point at each other,
they are neither and both authorities at the same time. That makes my brain
hurt, and fortunately for Grandma, she won't even recognize that those sounds
are words. There are many good reasons and venal reasons for
cross-certification, but I think cross-certification is a bad idea because it should
always be clear what authority is doing what.
I think it's obvious to all that I'm suspicious of authorities in general. My
last two involvements with cryptosystems, ZRTP and DKIM, were each
*intentionally* authority-less. My suspicion, though, comes from the "Who made
you an authority?" part more than the belief that authorities themselves
shouldn't exist. The whole point of OpenPGP was democratizing authorities, not
eliminating them.
It's less well known that my last project at PGP was to create a CA. I called
the project Casablanca. Partially because it begins and ends with a CA, but
also because I love the ambiguous nature of all the characters in that movie.
None are truly good or truly bad. But most of all it's because I still smile
really big when I say, "I am shocked, shocked that PGP is issuing X.509
certificates." I figured that since I've never gotten a good answer to "Who
made you an authority?" then I might as well be one myself. For better or
worse, Casablanca got bought up with all the rest of PGP, and is at Symantec
with all of Greater VeriSign.
The problems we're having now occur because some authorities have not been
keeping track of their pen. From where I sit, the righteous anger and general
ferment (which I share) don't address that. I hear people who seem to be
saying if the authorities wrote name tags with green ink instead of black, we'd
all be safe. I hear people talking about how the name tags are printed or
filled out, and how some other way would fix it. I hear people who don't like
the present authorities suggest new ways to make name tags, or new ways to hand
out pens. None of them really get to what I think are the key issues.
* It's obvious that incompetent authorities should be divested of their pens.
How can you argue with that? We take the licenses from quack doctors. We disbar
rotten lawyers. But I don't know that mere misuse of a pen is evidence of
incompetence. We don't want to create evolutionary pressure on the authorities
so that the worst customer service wins. Misplaced righteous anger teaches
ass-covering. I think that the biggest problem with the authorities is that the
use models presume that the dominant technological gadget is a rotary-dial
phone, not laptops and smartphones. The infrastructure is only now starting to
come to terms with a ubiquitous Internet. We need that infrastructure to run
faster, not slower, and running faster is going to mean that errors are
inevitable. The solution is to make errors easy to recover from, not hard to
make.
* If I were king, I'd just get rid of cross-certification. Peter and Lucky have
said it all, for me. I'd like someone to explain bridge CAs in a way that I
could understand them after a night's sleep. When I was a practicing
mathematician, I once proved a theorem I called, "I think that I should never
see // A graph as simple as a tree" and this week I'm highly in favor of
graph-theoretic simplicity.
* The infrastructure we have now is really, really passive-aggressive. On the
one hand, it's designed to succeed, not to fail. That's good. If it were
designed to fail, we'd be yelling about how it's worthless because it's always
giving false positives. For example, hardly anyone ever checks revocation.
(Which, by the way, I think is actually good in the aggregate. More in a bit.)
On the other hand, a couple browsers (I'm looking at you, Firefox and
especially you, Chrome) have gotten utterly stupid about self-signed
certificates. I have a NAS box in my house that has a web management console,
and spins its own self-signed certificates for SSL. When I attach to it, Chrome
puts up a special page with danger icons and a blood-red background and it
says, "ZOMG! This is self-signed and if you proceed, BABIES WILL DIE!" After
proceeding, there's a blood-red X through the lock and blood-red strike-through
on the "https" part of the URL. Give. Me. An. Effing. Break. Really, these
people just don't comprehend the difference between security and trust. If you
want to see the right way, look at it in Safari.
* Related to that, there are people who think that aggressive revocation
checking is good. It's not. Geolocating IP addresses is so good that every time
you send a packet, you locate your user in space and time. Just accept that
every packet is a privacy leak of where your user is. Aggressive OCSP checks,
for example, tell any CA where the customers of its customers are. It's the
perfect surveillance tool. And I'm not really worried about CAs doing analytics
on people, I'm worried about someone else hacking systems close to the OCSP
responders and surveilling that way. It turns SSL into the world's biggest
privacy leak.
But worse than that, it turns those revocation servers into critical Internet
infrastructure. What fun Anonymous can have when they turn the LOIC not on
Mastercard, but on an OCSP server and thus cause commerce failures to happen
everywhere. Locks create burglars, guys.
* It's obvious to me, if not to others, that if the life of a certificate were
measured in seconds (or hours) rather than months or years, many problems would
fall away. The first fundamental axiom of data-driven programming comes from
one of Mulla Nasrudin's disciples, Kahlil Gibran. My translation is not poetry,
but I translate it as, "a datum, having been emitted, cannot be recalled." You
can't revoke a certificate. You can't. Can't as in can not. The sooner we all
*believe* that revocation is impossible in the general case, the better we can
really improve things. Kerberos had it right.
If you have very short-lived certificates, the value of a temporary compromise
of an authority is small. That means that a compromise has to be long-term, and that
makes it harder. It also makes cleanup easier. Everything is easier with
short-term certs, and gosh darn it, we have this Internet thingie now.
The way to fix the broken revocation infrastructure is to dump revocation. It's
good for privacy, it's good for everyone. Alas, it would take a lot more
structural work, but it would be worth it. Fixing OCSP, for example, is
addressing the headache, not the brain tumor.
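To make the short-lived point concrete, here's a toy validity check (the field names are mine, not X.509's): a cert that lives an hour needs no revocation infrastructure at all, only a refusal to re-issue.

```python
import time

def is_valid(cert: dict, now=None) -> bool:
    """A certificate is good only inside its validity window."""
    now = time.time() if now is None else now
    return cert["not_before"] <= now < cert["not_after"]

def issue(subject: str, lifetime_seconds: float) -> dict:
    """Toy issuance: with short lifetimes, 'revocation' is simply
    the authority declining to re-issue on the next cycle."""
    issued = time.time()
    return {"subject": subject,
            "not_before": issued,
            "not_after": issued + lifetime_seconds}

short_cert = issue("bob", 3600)   # an hour, not a year
```

A compromised cert like this expires on its own before most attackers can use it, which is the Kerberos lesson restated.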
* Crowd-sourcing, or merely creating checks and balances on authorities can
improve the problem if you do it right, but it screws it up if you do it wrong.
I haven't seen any that make it better, only different. Frankly, suggestions I
have seen just create a half-assed version of PGP's mechanisms. We should go
for the full ass, at the very least.
* I have a half-baked, but fully-assed proposal that I think can actually work,
but I'll write it up later. I've been thinking about it for a year or two, and
did the original design after the Comodo hack. Yes, I need to finish and
circulate it. I see it as solving the maze by entering the exit.
* Nonetheless, I don't think you can get rid of authorities. I think they are
emergent or intentional, take your pick. Frankly, I think I'd rather have a
community of intentional authorities than a vague, faceless cloud. Not the
least of the reasons is that I think you should have authoritative pseudonyms.
That's my discussion. I think it's important to rewind to the beginning of the
discussion, because we're all too caught up in details and not in the structure
of the situation. The situation dictates what solutions are even possible. Much
of what I hear is just problem shuffling, not problem solving.
I'll sum up with one more thing. What we call "trust" is really knowing how to
believe a name tag. I paraphrase Laotse one more time. He said that you cannot
create kindness with cruelty, no matter how much cruelty you use. Well, you
cannot create trust with cryptography, no matter how much cryptography you use.
Trust comes from the same human qualities that kindness does. The ones who make
you an authority are the community, and they do it because you act like one.
Jon