On 19/08/14 19:00, elijah wrote:
> I agree CT is off topic, but on topic to the degree to which it keeps
> being suggested for user keys...
> 

I didn't mean to suggest it's off-topic - just to point out some issues with 
discussions about it.

> On 08/19/2014 05:45 AM, Ximin Luo wrote:
> 
>> I think people keep making the same mistake of treating CT as 
>> "providing key validity". It does *not* provide key validity, nor 
>> binding; it provides transparency *to enable systems that do provide
>>  the former*. In other words, via the auditing and monitoring 
>> components, then you gain "some confidence probability" that the key
>>  is valid.
> 
> ....
> 
>>> (1) The web server problem: a present server needs to prove itself
>>>  to a present visitor. Addressed by CT, etc.
> 
>> CT does not address (1) any more than it addresses (2) or (3). It is
>>  the auditing and monitoring surrounding CT that provides (1), and 
>> even currently there are known gaps and it is only half-implemented 
>> (no gossip protocol). As you say, "ultimately only the recipient 
>> knows for sure which public keys are correct for the recipient". This
>> is true *for webservers as well*.
> 
> You are making a distinction between the cryptographic log operations CT
> and the monitor and auditor operations. I have not heard anyone describe
> the monitors and auditors as "not CT". Is this semantic distinction
> accurate?
> 

Perhaps not, but monitors and gossip for non-X.509 CT logs will need to do 
different things, and people tend to neglect these specifics when they talk 
about applying CT to a different application. The failure modes would be 
different, too.

>> Overall however, this is not a cryptographic problem, but a 
>> logistical one. (This is why I choose the term "key validity", to 
>> distinguish it from cryptographic "authenticity" once you have 
>> actually validated the key.)
> 
> I thought the point of CT was that it is a logistical approach that
> includes cryptography, but it does not provide cryptographic
> authentication. By your distinction here, CT does provide "key
> validity" but not "key authenticity", yes?
> 

Not sure what you mean by either concept. Let me try to define things from 
basics. Alice wants to talk to Bob. From Alice's POV there's a few things:

1. Bob in the real world
2. Bob's current key
3. The stuff she wants to send to Bob

We need to bind 1 -> 2; this is what I mean by "key validity". Then, 2 -> 3 is 
standard cryptographic authentication (which assumes you already know the 
right key). In PGP, OTR, and others, you can bind 1 -> 2 by meeting up 
physically and comparing keys. This is the strongest way of doing bindings.
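To make the two bindings concrete, here is a toy Python sketch (all names are 
mine, and the HMAC "signature" is just a self-contained stand-in, not real 
public-key crypto): the 1 -> 2 binding is established out-of-band by comparing 
fingerprints, and only after that does 2 -> 3 authentication mean anything.

```python
import hashlib
import hmac

def fingerprint(public_key: bytes) -> str:
    """What Alice and Bob read aloud when they meet in person (the 1 -> 2 step)."""
    return hashlib.sha256(public_key).hexdigest()[:16]

def bind_key(trusted: dict, name: str, public_key: bytes, spoken_fpr: str) -> bool:
    """Record the 1 -> 2 binding only if the out-of-band fingerprint matches."""
    if fingerprint(public_key) != spoken_fpr:
        return False
    trusted[name] = public_key
    return True

# Toy stand-in for a signature scheme, for the 2 -> 3 step.
# (A real system would use e.g. Ed25519; HMAC just keeps the sketch
#  self-contained -- it is symmetric, NOT public-key crypto.)
def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def authenticate(trusted: dict, name: str, msg: bytes, sig: bytes) -> bool:
    """Meaningless unless bind_key succeeded first: validity precedes authenticity."""
    key = trusted.get(name)
    return key is not None and hmac.compare_digest(sign(key, msg), sig)
```

The point of the sketch is only the ordering: `authenticate` is worthless 
against a key that never passed `bind_key`.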

Some systems do things differently: they have

1a. Some URI to contact Bob by.

Alice is expected to bind 1 -> 1a by various social/human/non-crypto means 
(e.g. reading lots of independent secure websites with the same contact 
details), and the system tries to bind 1a -> 2.

CT helps to bind 1a -> 2 by forcing all binding-claims ("certificates") 1a -> 2 
to be published in a log. This way, Alice can be confident that Bob can also 
see all claims 1a -> 2. Then, we need some way for Bob to denounce[1] bad 
claims, such that this denouncement can itself be bound back to Bob, and for 
Alice to ensure that her view is fresh. This is neither well-specified nor 
finalised yet, even in plain CT, and is the bulk of the issue (more so than 
the log itself), in terms of both importance and difficulty.
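Mechanically, "published in a log" means the claims go into an append-only 
Merkle tree, and anyone can check a compact inclusion proof against the signed 
tree head. A rough Python sketch of just that part (the function names and the 
odd-node padding rule are my own simplifications, not the RFC 6962 
construction):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash over the claims; the log operator signs this value."""
    level = [h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate odd node (toy padding rule)
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes from leaf to root -- what the log hands to Alice."""
    level = [h(b"\x00" + leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-on-the-left)
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root) -> bool:
    """Alice recomputes the path; no trust in the log needed for this step."""
    node = h(b"\x00" + leaf)
    for sibling, is_left in proof:
        node = h(b"\x01" + sibling + node) if is_left else h(b"\x01" + node + sibling)
    return node == root
```

Note what this does and does not give you: it proves a claim *is in* the log; 
it says nothing about whether the claim is *correct*, which is exactly the 
monitoring/denouncement gap above.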

And how secure can a binding of the form 1a -> 2 be? I think we don't have 
anything that *actually does this*. For X.509, the claim is from a "trusted" 
third party that it performed some (non-standard, vendor-specific) 
verification process, which is itself probably open to attack (especially if 
it uses email or DNS, both insecure). Often there is some other entity that 
controls 1a *by logistics* and is therefore always able to forge the binding 
(e.g. your email provider). So even if the CA acts in good faith and does 
everything competently, it might still issue false bindings of the form 1a -> 
2. How should we react in that case?

So, we have two bindings 1 -> 1a and 1a -> 2, each with less-than-perfect 
levels of assurance. For sure, often this is "good enough" when considered 
against convenience, but one ought to be precise in reasoning about which parts 
of it have holes, because only then can we improve security. One simple 
improvement is to have multi-signed certs, as well as stricter standards for 
CAs. [2] is actually pretty decent already, but could be stricter (e.g. 
open-source verification code, continual verification) if we had some way to 
actually enforce the policy for the "too big to fail" CAs.
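One way to read "multi-signed certs": accept a 1a -> 2 binding only if some 
threshold of independent CAs vouch for it, so that one misbehaving or 
compromised CA cannot forge it alone. A toy sketch (the threshold policy, 
names, and the HMAC stand-in for CA signatures are my own assumptions):

```python
import hashlib
import hmac

def ca_sign(ca_secret: bytes, claim: bytes) -> bytes:
    """Stand-in for a CA's signature over a binding claim (HMAC for brevity;
    a real design would use public-key signatures so verifiers hold no secrets)."""
    return hmac.new(ca_secret, claim, hashlib.sha256).digest()

def accept_binding(claim: bytes, signatures: dict, ca_keys: dict,
                   threshold: int) -> bool:
    """Accept 1a -> 2 only if >= threshold *distinct* known CAs signed
    the exact same claim; one bad or bogus signature does not help the attacker."""
    valid = {name for name, sig in signatures.items()
             if name in ca_keys
             and hmac.compare_digest(ca_sign(ca_keys[name], claim), sig)}
    return len(valid) >= threshold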

For small-scale personal communications, though, I would rather place my bets on 
the 1 -> 2 binding, so I argue that every end-to-end strong-security system 
should offer this option.

X

(Re-reading your email, I think the point about unique entries for user-keys is 
not that important. The system must be able to identify badly-bound keys *that 
are already in the log*, this is needed for vanilla-CT too. OTOH if a user 
wants to have multiple short-lived keys, they can issue bindings that have 
expiry dates just like PGP.)
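(The short-lived-key idea then just amounts to putting a validity window in 
each logged claim, PGP-style, and having monitors treat only unexpired claims 
as live. A minimal sketch, with field names invented for illustration:

```python
import time

def make_claim(uri: str, key_id: str, lifetime_s: int, now=None):
    """A binding claim 1a -> 2 with an expiry date, as it would appear in the log."""
    now = time.time() if now is None else now
    return {"uri": uri, "key_id": key_id,
            "not_before": now, "not_after": now + lifetime_s}

def is_live(claim, now=None) -> bool:
    """Monitors ignore expired claims; the log itself stays append-only."""
    now = time.time() if now is None else now
    return claim["not_before"] <= now < claim["not_after"]
```

Expiry never removes anything from the log; it only changes which claims a 
monitor considers currently binding.)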

[1] This is not the same as revocation - only the person that issued a cert can 
revoke it.
[2] https://wiki.mozilla.org/CA:Recommended_Practices

-- 
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
