On Tuesday, June 30, 2015 at 2:36:57 PM UTC-4, Richard Barnes wrote:
> Dear dev.security.policy,
> 
> I wanted to let you all know of some correspondence that happened recently

I understand root certificate bundles that are managed by the browser, either as 
part of the OS key store or a software key store. The software, whether through 
pinning, reference to a Merkle hash tree, or reference to an authoritative or 
well-known list, can identify self-signed roots. In addition, any user can add 
their own root and set the trust bits for themselves. So it appears users have 
the ultimate decision in evaluating trust, along with some tools that aid them 
in that decision.
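To illustrate the Merkle hash tree idea mentioned above, here is a minimal 
sketch in pure Python of how software could prove that one root is a member of 
an authoritative list without shipping the whole list. The leaf values are 
hypothetical placeholders, not real root certificates:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root over a list of leaf values (bytes)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to prove leaves[index] is in the tree."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1             # sibling is the neighbour in the pair
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path from leaf to root using the sibling hashes."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Hypothetical authoritative list of four roots.
roots = [b"root-CA-1", b"root-CA-2", b"root-CA-3", b"root-CA-4"]
r = merkle_root(roots)
p = merkle_proof(roots, 2)
print(verify(b"root-CA-3", p, r))   # True  -- membership proven
print(verify(b"rogue-CA", p, r))    # False -- not in the list
```

The point is only that verification needs the root hash plus a logarithmic 
number of sibling hashes, which is why such trees scale to large trust lists.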

I have also read the Certificate Policy and Certification Practice Statements 
of many of the CAs that would typically be found in a browser store.

The question to me is how this scales, in terms of the trust model, when there 
is obvious manipulation of code-signing certificates by state-level actors 
deploying malware.

I have seen this brought up in discussions of domain (name) constraints and how 
they are implemented in code.
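For what it's worth, the core of dNSName constraint checking is small. A 
simplified sketch (pure Python, hypothetical inputs; real RFC 5280 validation 
also covers other name types, IP ranges, and directory names):

```python
def dns_name_satisfies(name: str, constraint: str) -> bool:
    """RFC 5280 dNSName matching: a name satisfies a constraint if it
    equals the constraint or adds labels only on the left-hand side."""
    name, constraint = name.lower(), constraint.lower()
    return name == constraint or name.endswith("." + constraint)

def check_name_constraints(san_dns_names, permitted, excluded):
    """True if every SAN dNSName falls inside some permitted subtree
    and outside every excluded subtree (either list may be empty)."""
    for name in san_dns_names:
        if any(dns_name_satisfies(name, c) for c in excluded):
            return False
        if permitted and not any(dns_name_satisfies(name, c) for c in permitted):
            return False
    return True

# A sub-CA constrained to example.com cannot vouch for other domains.
print(check_name_constraints(["www.example.com"], ["example.com"], []))  # True
print(check_name_constraints(["www.evil.com"], ["example.com"], []))     # False
```

This is exactly the lever that lets an organization take on only its own slice 
of the namespace rather than the entire Internet.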

I think this brings us squarely back to 1996, when included bundles were the 
logical choice for browser users: keep it simple, and don't require mutual TLS, 
which would also require client-side certificates.

Then came industry groups like the CA/Browser Forum promoting enhanced identity 
verification in exchange for the added green indicator, a step in the right 
direction.

But this was already part of X.509v1, prior to the RFC 5280 extensions. However 
unwieldy the X.509v1 model was for the Internet, there was a very close binding 
between the X.500 Directory model and authentication to the Directory (X.509 in 
its original form), so identity was already established in the Directory.

Here in the U.S. that organizational identity is supplied by ANSI, which 
registers Administrative Domains, as opposed to DNS domains. The door left open 
in the security of binding a DNS domain to the certificate, say in the subject 
alternative name field, is that it lacks the functionality of the original 
model, in which there was a 1:1 relationship between the identity of the X.500 
object and the user certificate bound to that object.

Many organizations don't try to bite off the entire Internet in terms of scope. 
We see constraints applied everywhere, and they all feed back to enabling 
legislation, such as the EU trust list example. We are seeing an unprecedented 
disruption of the CA PKI model with Let's Encrypt, which does fit the bill of 
enabling every site that wants to do TLS to do so.

I respect Mozilla's commitment to develop policy artifacts in this difficult 
area and the work of industry fora to build trusted communities. I do think the 
problem has to be constrained in some shape or manner, and what has existed as 
the "Internet death for CAs that don't adequately protect their private signing 
keys" discussion is a valid lever the community can use, or one that could even 
be put into the hands of the end user. I don't typically go to web sites that 
fully leverage the power of the Internet to connect me to 1:* possible 
connections. Sometimes.

I do know that when I tried to engineer in a no-sub-CA clause, my CA balked; 
they would not put it into the contract. I do think the DNS is inadequate 
compared to the original X.509v1 model in establishing identity bound to the 
subject of the certificate. Does that mean Mozilla should allow dynamic 
updating of certificates in Firefox via LDAP? Or DNS? That seems to already be 
available through DISA X.500, but I am restrained from using that software 
add-on from Milforge. Or is this already built in?
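The no-sub-CA clause I wanted has a technical counterpart: the 
pathLenConstraint in the Basic Constraints extension. A toy sketch of the check 
(certificates reduced to dicts, hypothetical chains, not real path validation):

```python
def chain_respects_path_len(chain):
    """chain: list of cert dicts ordered leaf -> root, each with
    'is_ca' (bool) and 'path_len' (int or None, the basicConstraints
    pathLenConstraint; None means no limit asserted).  Returns True
    if no CA is followed toward the leaf by more subordinate CA
    certificates than its pathLenConstraint allows."""
    for i, cert in enumerate(chain):
        if not cert["is_ca"] or cert["path_len"] is None:
            continue
        # CA certificates sitting between this CA and the leaf.
        subordinate_cas = sum(1 for c in chain[:i] if c["is_ca"])
        if subordinate_cas > cert["path_len"]:
            return False
    return True

leaf = {"is_ca": False, "path_len": None}
sub  = {"is_ca": True,  "path_len": None}
root = {"is_ca": True,  "path_len": 0}   # pathLen 0: "no sub-CAs"

print(chain_respects_path_len([leaf, root]))        # True
print(chain_respects_path_len([leaf, sub, root]))   # False
```

A root asserting pathLenConstraint=0 can only issue end-entity certificates 
directly, which is roughly the guarantee I could not get into the contract.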

_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy
