On 09/16/2011 03:58 AM, Ben Laurie wrote:
> On Fri, Sep 16, 2011 at 8:57 AM, Peter Gutmann
> <[email protected]> wrote:
>> Marsh Ray writes:
>>> The CAs can each fail on you independently. Each one is a potential
>>> weakest link in the chain that the Relying Party's security hangs from.
>>> So their reliability statistics multiply:
>>>
>>>   one CA:    0.99       = 99% reliability
>>>   two CAs:   0.99*0.99  = 98% reliability
>>>   100 CAs:   0.99**100  = 37% reliability
>> I realise that this is playing with numbers to some extent (i.e. we don't
>> know what the true reliability figure actually is), but once you take it
>> out to what we currently have in browsers:
> We could have a stab at it.
>
> A = (integral of the number of CAs in the trusted root over time)
>     / (number of years CAs have been around)
>   = average number of trusted CAs = ? (I'd guess 100?)
>
> This data could probably be collected quite accurately with some code
> archeology.
>
> B = total failures / number of years = ? (1, maybe?)
Difficult to know quantitatively, even about the present.
Iran may have the dubious distinction of mounting the first MitM attack
with a CA-issued certificate to have failed (i.e., been detected). I don't
believe it was the first to have occurred, but up until very recently,
some asserted that it had never happened.
> So failure rate = B/A = 1% p.a., giving reliability of 99% p.a. What do
> you know?
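
As a quick sanity check, Ben's arithmetic and my original numbers in a
few lines of Python (the 100-CA and one-failure-per-year inputs are the
guesses from this thread, not measurements):

    # Back-of-the-envelope check of the figures above. The inputs are
    # guesses from this thread, not measured data.
    avg_cas = 100          # A: rough average number of trusted CAs
    failures_per_year = 1  # B: rough number of public CA failures per year

    per_ca_failure_rate = failures_per_year / avg_cas   # B/A = 1% p.a.
    per_ca_reliability = 1 - per_ca_failure_rate        # 99% p.a.

    # If any one of n independently-failing CAs can betray you, the
    # per-CA reliabilities multiply:
    for n in (1, 2, 100):
        print(f"{n:>3} CAs: {per_ca_reliability ** n:.0%} reliability")
    # ->   1 CAs: 99%    2 CAs: 98%    100 CAs: 37%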
It's got to be worse than that, because we know that for several months
in 2011 there was an attacker who could mint himself whatever certs he
wanted from Comodo and DigiNotar (he claims to still have this capability
from several others). As far as I can tell, my Android phone is still
vulnerable to his *.*.com certificate. Several months of compromise out
of the roughly 16 years SSL has been around is about 3% of its lifetime,
so the SSL-PKI cannot be more than about 97% reliable.
Anyone got better numbers?
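
For concreteness, that 97% bound in the same style of sketch, reading
"several months" loosely as half a year (an assumption on my part):

    # Upper bound on SSL-PKI reliability from the 2011 compromises alone.
    compromised_years = 0.5    # "several months", read as half a year
    ssl_lifetime_years = 16    # rough lifetime of SSL so far
    bound = 1 - compromised_years / ssl_lifetime_years
    print(f"at most {bound:.0%} reliable")   # -> at most 97% reliable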
There are some complicating factors that make it difficult for this
reliability analysis to get better than a back-of-the-envelope upper bound.
* The actual number of organizations holding private keys that can sign
something the user's browser will trust is unknown. Yes, that's right:
CAs refuse to disclose how many sub-CAs they have issued, or to whom. The
SSL Observatory gives us only a lower bound.
* A 'failure' of a CA is probably often perceived by only a small set of
users, or even a single user. Presumably some of these sub-CAs have been
loaded into SSL-intercepting firewalls (e.g., BlueCoat devices) by
corporations; indeed, this is one of the arguments given for their
issuance. Whether or not you believe that practice is legitimate, it
certainly seems plausible that a guest on such a network could be
intercepted without prior informed consent.
* The degree of independence between the trusted root CAs and their
sub-CAs is not well understood (the 150 vs 600 vs 1500 debate).
* The issuance of a cert, a user's reliance on a cert, successful and
failed attacks on users, attacks on CAs, etc. are discrete events and may
be better modeled stochastically. Bayesian methods may be better at
dealing with the large unknowns; a toy sketch follows after this list.
(Alas, my own Bayes-fu and Markov-fu are not presently up to the
challenge.)
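
For what it's worth, here is a minimal Bayesian sketch. It assumes (my
assumption, not an established model) that each CA-year is an independent
Bernoulli trial with unknown failure probability p, under a uniform prior,
with the observation counts again being the rough guesses from this thread:

    # Toy Beta-Bernoulli model: each CA-year is a coin flip with unknown
    # failure probability p; start from a uniform Beta(1, 1) prior.
    failures = 1      # observed public CA failures (a guess)
    ca_years = 100    # total CA-years of operation (a guess)

    a = 1 + failures               # Beta posterior parameters after a
    b = 1 + (ca_years - failures)  # standard conjugate update
    posterior_mean = a / (a + b)

    print(f"posterior mean failure rate: {posterior_mean:.1%} per CA-year")
    # -> about 2.0%, roughly double the naive 1% point estimate, because
    # one observed failure in so few trials leaves wide uncertainty.

It captures none of the Markov structure of real attacks, of course; it
mainly illustrates how thin the data is.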
Whether you arrive at 37% or 99% reliable, any honest analysis will show
the current system is ineffective against a significant set of adversaries.
- Marsh
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography