On 26/06/11 5:50 AM, Ralph Holz wrote:
Hi,

>> Any model that offers a security feature to a trivially tiny minority,
>> at the expense of the dominant majority, is daft.  The logical
>> conclusion of 1.5 decades' worth of experience with centralised root
>> lists is that we, in the aggregate, may as well trust Microsoft's and the
>> other root vendors' root lists entirely.
>>
>> Or: find another model.  Change the assumptions.  Re-do the security
>> engineering.

> You have avoided the wording "find a better model" - intentionally so?

:) It's very hard to word proposals that go against the beliefs of many without being inflammatory. If it is too inflammatory, nobody reads it. Even if it is right. We lose another few years...

> Because such work would only be meaningful if we could show we have
> achieved an improvement by doing it.

Yeah. So we have a choice: improve the overall result of the current model, or try another model.

The point of the subject line is that certain options are fantasy. In the current model, we're rather stuck with a global solution.

So, fixing it for CNNIC is ... changing the model.

> Which brings us to the next point: how do we measure improvement? What
> we would need - and don't have, and likely won't have for another long
> while - are numbers that are statistically meaningful.

Right, indeed.  The blind leading the blind :)

> On moz.dev.sec.policy, the proposal is out that CAs need to publicly
> disclose security incidents and breaches.

Yes, but they (we) haven't established why or what yet.

> This could actually be a good
> step forward. If the numbers show that incidents are far more frequent
> than generally assumed, this would get us away from the "low frequency,
> high impact" scenario that we all currently seem to assume, and which is
> so hard to analyse. If the numbers show that incidents are very rare -
> fine, too. Then the current model is maybe not too bad (apart from the
> fact that one rotten apple will still spoil everything, and government
> interference will still likely remain undetected).

Except, we've known that the number of security patches released by Microsoft tells us ... nothing. We need more than "numbers" and "research" to justify a disclosure.
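(As an aside on what "statistically meaningful" would buy us: even crude disclosure counts could be turned into a rate estimate with error bars, which is the minimum needed to test the "low frequency, high impact" assumption. A minimal sketch, with entirely hypothetical figures - the incident count and observation window below are made up for illustration, not drawn from any real disclosure data:)

```python
import math

# Hypothetical disclosure data (illustrative only): incidents observed
# across all CAs over a monitoring window.
incidents = 12     # assumed number of publicly disclosed incidents
ca_years = 600     # assumed total CA-years observed (e.g. ~150 CAs x 4 years)

# Point estimate of the incident rate per CA-year.
rate = incidents / ca_years

# Rough 95% interval for a Poisson count, using the normal
# approximation: count +/- 1.96 * sqrt(count), then scaled to a rate.
lo = max(incidents - 1.96 * math.sqrt(incidents), 0) / ca_years
hi = (incidents + 1.96 * math.sqrt(incidents)) / ca_years

print(f"rate ~ {rate:.3f} incidents per CA-year "
      f"(95% CI roughly {lo:.3f} to {hi:.3f})")
```

(The width of that interval is the point: with numbers this small, the data cannot yet distinguish "rare" from "routinely swept under the rug".)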

> The problem is that CAs object to disclosure on the simple grounds that
> public disclosure hurts them. Even StartCom, otherwise aiming to present
> a clean record, has not yet disclosed what happened on June 15th this year.

Yes, it's hilarious isn't it :)

> Mozilla seems to take the stance that incidents should, at most, be
> disclosed to Mozilla, not the general public. While understandable from
> Moz's point of view

Mozilla are doing it because it makes them feel more in control. They are not as yet able to fully explain what the benefit is, nor what the costs are.

> - you don't want to hurt the CAs too badly if you
> are a vendor - it still means researchers won't get the numbers they
> need. And the circle closes - no numbers, no facts, no improvements,
> other than those subjectively perceived.


OK.  So we need to show why researchers can benefit us with those numbers :)

(IMHO, the point is nothing to do with researchers. It's all to do with reputation. It's the only tool we have. So disclosure as a blunt weapon might work.)



iang
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography
