Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Sean Leonard
I ended up writing a lot of text in response to this post, so, I am 
breaking up the response into three mini-responses.


Part I

On 1/18/2012 4:23 PM, Brian Smith wrote:
 Sean Leonard wrote:
 The most glaring problem however is that when validation fails, such
 as in the case of a revoked certificate, the API returns no
 certificate chains

 My understanding is that when you are doing certificate path 
building, and you have to account for multiple possibilities at any
point in the path, there is no partial chain that is better to return 
than any other one, so libpkix is better off not even trying to return a 
partial chain. The old code could return a partial chain somewhat 
sensibly because it only ever considered one possible cert (the best 
one, ha ha) at each point in the chain.



For our application--and I would venture to generalize that for all 
sophisticated certificate-using applications (i.e., applications that 
can act upon more than just valid/not valid)--more information is a 
lot better than less.


I have been writing notes on "Sean's Comprehensive Guide to Certification 
Path Validation". Here are a few paragraphs of Draft 0:


Say you have a cert. You want to know if it's valid. How do you 
determine if it's valid?


A certificate is valid if it satisfies the RFC 5280 Certification Path 
Validation Algorithm. Given:
* a certification path of length n (the leaf cert and all certs up to 
the trust anchor--in RFC 5280, it is said that cert #1 is the one 
closest to the trust anchor, and cert n is the leaf cert you're validating),

* the time,
* policy-stuff, -- hand-wavy because few people in the SSL/TLS world 
worry about this but it's actually given a lot of space in the RFC

* permitted name subtrees,
* excluded name subtrees,
* trust anchor information (issuer name, public key info)

you run the algorithm, and out pops:
* success/failure,
* the working public key (of the cert you're validating),
* policy-stuff, -- again, hand-wavy
and anything else that you could have gleaned on the way.
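
To make the shape of that algorithm concrete, here is a minimal sketch in 
C of those inputs and outputs. Every type and field name below is 
hypothetical (this is not the NSS API), and "policy-stuff" stays as 
hand-wavy as above:

#include <stddef.h>
#include <time.h>

/* Hypothetical types sketching the I/O shape of RFC 5280 path
 * validation. None of these names are NSS API. */
typedef struct Cert Cert;             /* a parsed certificate */
typedef struct PolicyTree PolicyTree; /* the "policy-stuff" */

typedef struct {
    Cert **path;            /* path[0] nearest the trust anchor,
                             * path[n-1] = the leaf being validated */
    size_t n;               /* path length */
    time_t validationTime;
    PolicyTree *initialPolicies;     /* hand-wavy policy inputs */
    const char **permittedSubtrees;  /* permitted name subtrees */
    const char **excludedSubtrees;   /* excluded name subtrees */
    const char *anchorName;          /* trust anchor issuer name */
    const unsigned char *anchorSpki; /* trust anchor public key info */
} PathValidationInput;

typedef struct {
    int valid;                             /* success/failure */
    const unsigned char *workingPublicKey; /* of the validated cert */
    PolicyTree *validPolicyTree;           /* hand-wavy policy output */
} PathValidationOutput;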


But, this doesn't answer the obvious initial question: how do you 
construct a certification path of length n if you only have the 
initial cert? RFC 5280 doesn't prescribe any particular algorithm, but 
it does have some requirements (i.e., if you say you support X, you MUST 
support it by doing it in way Y).


Certification Path Construction is where we get into a little bit more 
black art and try to make some tradeoffs based on speed, privacy, 
comprehensiveness, and so forth.


Imagine that you know all the certificates ever issued in the known 
universe. Given a set of trust anchors (ca name + public key), you 
should be able to draw lines from your cert through some subset of 
certificates to your trust anchors. What you'll find is that you've got 
a big tree (visually, but not necessarily in the computer science sense; 
it's actually a directed acyclic graph), where your cert is at the root 
and the TAs are at the leaves. The nodes are linked by virtue of the 
fact that the issuer DN in the prior cert is equal to the subject DN in 
the next cert, or to the ca name in the trust anchor.


Practically, you search the local database(s) for all certificates whose 
subject DN matches the issuer DN of the cert in hand. If no certificates (or in your 
opinion, an insufficient number of certificates) are returned, then, you 
will want to resort to other methods, such as using the caIssuers AIA 
extension (HTTP or LDAP), looking in other remote stores, or otherwise.
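
As a sketch of that local-database step in NSS terms: 
CERT_CreateSubjectCertList matches a DER-encoded name against subject 
names, so handing it the target cert's derIssuer should yield the local 
candidate issuers. Treat the details here as my assumption, not gospel:

#include "cert.h"
#include "prtime.h"

/* Find local candidate issuers of `cert`: every cert whose subject DN
 * equals cert's issuer DN. A NULL/empty result is the cue to fall back
 * to caIssuers AIA fetching or another remote store. */
static CERTCertList *
FindCandidateIssuers(CERTCertDBHandle *handle, CERTCertificate *cert)
{
    /* derIssuer is the DER-encoded issuer name of the cert in hand;
     * CERT_CreateSubjectCertList matches it against subject names. */
    return CERT_CreateSubjectCertList(NULL, handle, &cert->derIssuer,
                                      PR_Now(), PR_FALSE /* validOnly */);
}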


The ideal way (Way #1) to represent the output is by a tree, where each 
node has zero or more children, and the root node is your target cert. 
In lieu of a tree, you can represent it as an array of cert paths 
(chains) (Way #2). Way #2 is, more or less, how the Microsoft 
CertGetCertificateChain validation function returns its results.
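
A minimal sketch of the Way #1 tree, with hypothetical types (this is 
not an NSS structure):

#include <stddef.h>
#include "cert.h"

/* Way #1: a tree rooted at the target cert; each node's children are
 * the candidate issuers found for that node's cert. Hypothetical
 * structure, not an NSS type. */
typedef struct PathNode PathNode;
struct PathNode {
    CERTCertificate *cert;
    PathNode **children;    /* candidate issuers of `cert` */
    size_t childCount;
    int reachesTrustAnchor; /* set when `cert` chains to a TA */
};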


Once you have all of these possibilities, you'll want to start pruning, 
which involves non-cryptography (e.g., checking for basic constraints), 
actual cryptography (digital signature verification), and more 
non-cryptography (e.g., time bounds and name constraints). The general 
received wisdom is to start verifying signatures from the trust anchor 
public key(s) down to the leaf, rather than the other way around, 
because otherwise an attacker can DoS your algorithm by putting in an 
absurdly large RSA key or some such. Incidentally, this is also one 
argument why "unknown/untrusted issuer" is much worse than some folks 
want to assume, but I understand that is a sensitive point among some 
technical people, so the main point is that you have to provide as much 
of this information as possible to the validation-using application 
(Firefox, Thunderbird, Penango, IPsec kernel, whatever) so that the 
application can figure out these tradeoffs. If you keep it in the tree 
form, you can eliminate whole branches of the tree. "Eliminate" could mean 
a) don't report the path at all, or b) report the path anyway but stop 
reporting 

Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Sean Leonard

Part II

On 1/18/2012 4:23 PM, Brian Smith wrote:
 Sean Leonard wrote:

 and no log information.

 Firefox has also been bitten by this, and it is one of the things 
blocking the switch to libpkix as the default mechanism in Firefox. 
However, sometime soon I may just propose that we change to handle 
certificate overrides like Chrome does, in which case the log would 
become much less important for us. See bug 699874 and the bugs that are 
referred to by that bug.


 The only output (in the revoked case) is
 SEC_ERROR_REVOKED_CERTIFICATE. This is extremely unhelpful because it
 is a material distinction to know that the EE cert was revoked,
 versus an intermediary or root CA.

 Does libpkix return SEC_ERROR_REVOKED_CERTIFICATE in the case where 
an intermediate has been revoked? I would kind of expect that it would 
return whatever error it returns for "could not build a path to a trust 
anchor" instead, for the same reason I think it cannot return a partial 
chain.


When I last tested it, I recall that SEC_ERROR_REVOKED_CERTIFICATE was 
returned for intermediate certs.


When certLog is returned from CERT_VerifyCertificate, all validation 
errors with all certs (in the single path) are added. The 
CERTVerifyLogNode (certt.h) includes the depth, so multiple log entries 
can have the same depth (aka, same cert) but different error codes. It 
is up to the application to make sense of it and to correlate them 
together, but at least you can get all of the errors out.
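
For illustration, here is roughly how an application might walk that 
log; the CERTVerifyLog/CERTVerifyLogNode fields are per my reading of 
certt.h, so double-check them against your NSS version:

#include <stdio.h>
#include "certt.h"

/* Walk a CERTVerifyLog after CERT_VerifyCertificate and print every
 * (depth, error) pair; several nodes may share a depth (same cert)
 * with different error codes. */
static void
DumpVerifyLog(const CERTVerifyLog *log)
{
    const CERTVerifyLogNode *node;
    for (node = log->head; node != NULL; node = node->next) {
        printf("depth %u: error %ld (%s)\n",
               node->depth, node->error,
               node->cert ? node->cert->subjectName : "(no cert)");
    }
}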


 Such an error also masks other possible problems, such as whether
 a certificate has expired, lacks trust bits, or other information.

 Hopefully, libpkix at least returns the most serious problem. Have 
you found this to be the case? I realize that "most serious" is a 
judgement call that may vary by application, but at least Firefox 
separates cert errors into two buckets: overridable (e.g. expiration, 
untrusted issuer) and too-bad-to-allow-user-override (e.g. revocation).


As suggested in Part I, "most serious problem" really depends on your 
perspective and application. Let's take "revoked" as an example. 
Revocation has reason codes in CRLs, and in OCSP responses too under the 
RevokedInfo revocationReason element. keyCompromise(1) is a fairly 
serious situation, but in that case, you may actually want to invalidate 
(i.e., treat as not valid) the cert *prior to* the revocation time, such 
as with the RFC 5280 sec. 5.3.2 Invalidity Date extension.


Contrast this with privilegeWithdrawn(9), which we joke internally is 
the "failure to pay" reason code. If someone fails to pay for their 
cert, that is bad, but probably not *as* bad in the grand scheme of 
things as keyCompromise(1). It also may trigger a different UI: "this 
deadbeat failed to pay" versus "some Evil Eve stole this person's 
private key". In contrast, expiration--particularly expiration from a 
long time ago--is probably worse than privilegeWithdrawn(9).
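
To make "depends on your application" concrete, here is one possible 
mapping from CRLReason codes to severities. The numeric codes are from 
RFC 5280 sec. 5.3.1; the ranking itself is purely illustrative, not a 
recommendation:

/* CRLReason codes are from RFC 5280 sec. 5.3.1; the severity ranking
 * here is purely illustrative -- every application will differ. */
typedef enum { SEV_WARN, SEV_FATAL } Severity;

static Severity
SeverityForReason(int crlReason)
{
    switch (crlReason) {
        case 1: /* keyCompromise: consider invalidating even before
                 * the revocation time (Invalidity Date, sec. 5.3.2) */
        case 2: /* cACompromise */
            return SEV_FATAL;
        case 9:  /* privilegeWithdrawn: the "failure to pay" code */
        default: /* unspecified(0), affiliationChanged(3), ... */
            return SEV_WARN;
    }
}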


Regarding the buckets: that is all well and good. It's worth driving 
home that it would be nice if all applications that use NSS/libpkix 
started with the same fat deck of cards, which they could then separate 
into buckets of their choosing.



 Per above, we never used non-blocking I/O from libpkix; we use it in
 blocking mode but call it on a worker thread. Non-blocking I/O never
 seemed to work when we tried it, and in general we felt that doing
 anything more than absolutely necessary on the main thread was a
 recipe for non-deterministic behavior.

 This is also what Firefox and Chrome do internally, and this is why 
the non-blocking I/O feature is not seen as being necessary.


ok

Removing non-blocking I/O completely from libpkix may also save a 
non-negligible amount of codegen. Some libpkix entry points (such as 
PKIX_ValidateChain_NB) are not used at all, and therefore should be 
optimized away, but there are non-trivial parts of functions that check 
if (nonblocking) and such that are almost certainly not optimized away 
in the current code.


 The downside to blocking mode is that the API is one-shot: it is not
 possible to check on the progress of validation until it magically
 completes. When you have CRLs that are > 10MB, this is an issue.
 However, this can be worked around (e.g., calling it twice: once for
 constructing a chain without revocation checking, and another time
 with revocation checking), and one-shot definitely simplifies the
 API for everyone.

 As I mentioned in another thread, it may be the case that we have to 
completely change the way CRL, OCSP, and cert fetching is done in 
libpkix, or in libpkix-based applications anyway, for performance 
reasons. I have definitely been thinking about doing things in Gecko in 
a way that is similar to what you suggest above.


Which thread?

Correction: I said "a chain" but I should have said "a chain, but 
ideally, chains."


On the topic of chains, comparing the behavior of 
CertGetCertificateChain is very useful. In the MS API (which has been 
around 

Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Sean Leonard

Part III

On 1/18/2012 4:23 PM, Brian Smith wrote:

Sean Leonard wrote:

 We do not currently use HTTP or LDAP certificate stores with respect
 to libpkix/the functionality that is exposed by CERT_PKIXVerifyCert.
 That being said, it is conceivable that others could use this feature,
 and we could use it in the future. We have definitely seen LDAP URLs in
 certificates that we have to validate (for example), and although
 Firefox does not ship with the Mozilla Directory (LDAP) SDK,
 Thunderbird does. Therefore, we encourage the maintainers to leave it
 in. We can contribute some test LDAP services if that is necessary for
 real-world testing.

 Definitely, I am concerned about how to test and maintain the LDAP 
code. And, I am not sure LDAP support is important for a modern web 
browser at least. Email clients may be a different story. One option may 
be to provide an option to CERT_PKIXVerifyCert to disable LDAP fetching 
but keep HTTP fetching enabled, to allow applications to minimize 
exposure to any possible LDAP-related exploits.


I'll see what we can do about setting up some example LDAP servers. From 
my own experience, I have seen several major CAs run by governments in 
production that include LDAP URLs. If the web browsers are being used on 
internal/intranet networks (as is increasingly the case with webapps 
taking over the world) then LDAP URLs remain useful for web browsers. In 
my review of RFC 5280 vs. CERT_PKIXVerifyCert, I devoted a section to 
this topic (see "Access Methods").


nsNSSCallbacks.cpp is where the NSS-Necko bindings live. See 
nsNSSHttpInterface. SEC_RegisterDefaultHttpClient registers these 
bindings with NSS; then, libpkix (in pkix_pl_httpcertstore.c, and 
pkix_pl_ocspresponse.c) obtains these pointers with 
SEC_GetRegisteredHttpClient.


LDAP services are channeled through pkix_pl_ldapcertstore.c, and 
serviced by the default LDAP client, which exists in 
pkix_pl_ldapdefaultclient.c. This in turn relies on pkix_pl_socket.c for 
sundries such as pkix_pl_Socket_Create, which in turn (finally!) rely on 
NSPR sockets, with functions like PR_NewTCPSocket and PR_Send. Unlike 
HTTP, LDAP is actually implemented by libpkix itself. The advantage is 
that LDAP should work on every platform, without OpenLDAP or Wldap32, 
and without the Mozilla Directory (LDAP) SDK--which means that it ought 
to work in Firefox. The disadvantage is that LDAP may not take advantage 
of SOCKS or other proxies that are configured at the Necko layer.
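
For reference, the NSPR layer that pkix_pl_socket.c bottoms out on looks 
roughly like this minimal client. This is a sketch of the NSPR 
primitives only, not of libpkix's actual LDAP client, and it has no 
proxy awareness (which is exactly the disadvantage noted above):

#include "nspr.h"

/* A minimal NSPR TCP exchange -- the kind of primitive that libpkix's
 * built-in LDAP client is layered on. Sketch only: no proxies, no
 * LDAP framing, minimal error handling. */
static PRInt32
SendAndReceive(const PRNetAddr *addr, const char *req, PRInt32 reqLen,
               char *buf, PRInt32 bufLen)
{
    PRInt32 n = -1;
    PRFileDesc *fd = PR_NewTCPSocket();
    if (!fd)
        return -1;
    if (PR_Connect(fd, addr, PR_SecondsToInterval(30)) == PR_SUCCESS &&
        PR_Send(fd, req, reqLen, 0, PR_INTERVAL_NO_TIMEOUT) == reqLen) {
        n = PR_Recv(fd, buf, bufLen, 0, PR_INTERVAL_NO_TIMEOUT);
    }
    PR_Close(fd);
    return n;
}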



 Congruence or mostly-similar
 behavior with Thunderbird is also important, as it is awkward to
 explain to users why Penango provides materially different
 validation results from Thunderbird.

 I expect Thunderbird to change to use CERT_PKIXVerifyCert 
exclusively around the time that we make that change in Firefox, if not 
exactly at the same time.


ok

As I understand it, there are currently no less than six APIs (and four 
different sets of functionality) that can be used to verify certificates:


CERT_PKIXVerifyCert, the long-term preferred one.


CERT_VerifyCertChain, which depending on 
CERT_GetUsePKIXForValidation/CERT_SetUsePKIXForValidation, calls 
cert_VerifyCertChainPkix (which uses libpkix but actually uses a 
slightly different code path compared to CERT_PKIXVerifyCert) or 
cert_VerifyCertChainOld [which is REALLY old]
 NB: by setting the 
not-really-documented-but-appears-in-a-few-scattered-bugzilla-bugs 
environment variable, NSS_ENABLE_PKIX_VERIFY, a user can flip the 
SetUsePKIXForValidation switch.
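
In code, flipping the same switch explicitly looks like this; 
CERT_GetUsePKIXForValidation/CERT_SetUsePKIXForValidation are the 
cert.h entry points, and the sketch is otherwise trivial:

#include "cert.h"

/* Opt the process into libpkix-based validation for the
 * CERT_VerifyCert* family -- the same switch that the
 * NSS_ENABLE_PKIX_VERIFY environment variable flips. */
static void
EnablePkixValidation(void)
{
    if (!CERT_GetUsePKIXForValidation()) {
        (void)CERT_SetUsePKIXForValidation(PR_TRUE);
    }
}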



CERT_VerifyCertificate, which is a gross amalgamation of a lot of 
hairballs improved over time, but seems to be the one that is actually 
used by the vast majority of Mozilla applications; unless you call one 
of the PSM functions and set the boolean pref 
security.use_libpkix_verification, in which case PSM will attempt 
(mostly) to use CERT_PKIXVerifyCert. However, an application that calls 
CERT_VerifyCertificate directly will not be affected.

 - CERT_VerifyCertificateNow (just uses PR_Now())


CERT_VerifyCert, which is a likewise gross amalgamation, except that in 
the middle of the gross amalgamation it calls CERT_VerifyCertChain (so 
it has mostly equivalent but not exactly the same functionality as 
CERT_VerifyCertChain, including the NSS_ENABLE_PKIX_VERIFY detour). I 
thought this one was not supposed to be used, as there is a comment: 
"obsolete, do not use for new code" on CERT_VerifyCertNow, but there it 
is, plain as day, in nsNSSCallbacks.cpp, nsNSSCertificate.cpp, and 
nsNSSCertificateDB.cpp.

 - CERT_VerifyCertNow (just uses PR_Now())


Firefox and Thunderbird appear to use CERT_PKIXVerifyCert and 
CERT_VerifyCertificate(Now), and CERT_VerifyCert(Now) in different places.
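
For comparison, a bare-bones CERT_PKIXVerifyCert call looks roughly like 
the following; the parameter tags are per my reading of certt.h, so 
treat this as a sketch rather than canonical usage:

#include "cert.h"
#include "certt.h"

/* Minimal CERT_PKIXVerifyCert invocation: no extra inputs configured,
 * just ask for the validated chain back. Sketch only. */
static SECStatus
PkixVerify(CERTCertificate *cert, void *pinArg)
{
    SECStatus rv;
    CERTValInParam in[1];
    CERTValOutParam out[2];

    in[0].type = cert_pi_end;       /* no extra input parameters */

    out[0].type = cert_po_certList; /* request the built chain */
    out[0].value.pointer.chain = NULL;
    out[1].type = cert_po_end;

    rv = CERT_PKIXVerifyCert(cert, certificateUsageSSLServer,
                             in, out, pinArg);
    if (out[0].value.pointer.chain) {
        CERT_DestroyCertList(out[0].value.pointer.chain);
    }
    return rv;
}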


Consolidating these API calls to one API would seem to be sorely desired 
(and, if alternate APIs are removed or simplified, may result in a 
non-trivial size reduction); *except* that each API call has its own 
strange idiosyncrasies and are 

Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Ryan Sleevi
Sean,

The Path Building logic/requirements/concerns you described are best
described within RFC 4158, which has been mentioned previously.

As Brian mentioned in the past, this was 'lumped in' with the description
of RFC 5280, but it's really its own thing.

libpkix reflects the union of RFC 4158's practices and RFC 5280's
requirements. As you note in your spreadsheet, libpkix already implements
the majority of 5280 (at least, the parts important to browsers / commonly
used in PKIs, including Internet PKIs). While libpkix tries for some of
4158, it isn't exactly the most robust, nor is 4158 the end-all and be-all
of path building strategies.

I believe that over time, it would be useful (ergo likely) to implement
some of the scoring logic described in 4158 and hand-waved at by
Microsoft's CryptoAPI documentation, rather than its current logic of just
applying its checkers to see if the path MIGHT be valid in a DFS search,
so that libpkix returns not just a good path, but a close-to-optimal path,
and can also provide diagnostics for the paths not taken.

Ryan


Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Sean Leonard

Ryan,

I agree; while I did not mention RFC 4158, it is a good reference. I 
echo your hope that someday, CERT_PKIXVerifyCert/libpkix will provide 
additional diagnostic information.


Some of my own observations:
- while a scoring method is useful (and certainly, an objective method 
is best), there is no universal scoring algorithm. We can, however, sort 
into two big piles: valid paths, and invalid paths.


- scoring and returning multiple paths imply that the system will 
compute all paths, rather than the minimum number of paths needed to 
identify a valid path (and then, if a valid path is found, quit).


- in the current libpkix design, an "application" could supply 
PKIX_CertSelector_MatchCallback (see PKIX_CertSelector->matchCallback 
and pkix_Build_InitiateBuildChain) to execute custom selection logic. I 
put "application" in quotes, because CERT_PKIXVerifyCert does not appear 
to have a mechanism to set the matchCallback.


- failing this, an "application" could attempt to search the local 
stores itself, then supply the candidate certificate path in 
cert_pi_certList. Unfortunately, the quotes apply here too: 
CERT_PKIXVerifyCert does not actually implement cert_pi_certList!
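
For the record, the intended shape would be something like the sketch 
below; per the above, CERT_PKIXVerifyCert ignores cert_pi_certList, so 
this illustrates intent (tags and union members per my reading of 
certt.h), not working code:

#include "cert.h"
#include "certt.h"

/* What supplying a pre-built candidate path via cert_pi_certList
 * *would* look like -- per the above, CERT_PKIXVerifyCert does not
 * actually honor it, so this illustrates intent, not working code. */
static SECStatus
VerifyWithCandidatePath(CERTCertificate *cert, CERTCertList *candidatePath,
                        void *pinArg)
{
    CERTValInParam in[2];
    CERTValOutParam out[1];

    in[0].type = cert_pi_certList;
    in[0].value.pointer.chain = candidatePath;
    in[1].type = cert_pi_end;

    out[0].type = cert_po_end;

    return CERT_PKIXVerifyCert(cert, certificateUsageSSLServer,
                               in, out, pinArg);
}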


-Sean


Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-18 Thread Brian Smith
Sean Leonard wrote:
 The most glaring problem however is that when validation fails, such
 as in the case of a revoked certificate, the API returns no
 certificate chains 

My understanding is that when you are doing certificate path building, and you 
have to account for multiple possibilities at any point in the path, there is 
no partial chain that is better to return than any other one, so libpkix is 
better off not even trying to return a partial chain. The old code could return 
a partial chain somewhat sensibly because it only ever considered one possible 
cert (the best one, ha ha) at each point in the chain.

 and no log information.

Firefox has also been bitten by this and this is one of the things blocking the 
switch to libpkix as the default mechanism in Firefox. However, sometime soon I 
may just propose that we change to handle certificate overrides like Chrome 
does, in which case the log would become much less important for us. See bug 
699874 and the bugs that are referred to by that bug.

 The only output (in the revoked case) is
 SEC_ERROR_REVOKED_CERTIFICATE. This is extremely unhelpful because it
 is a material distinction to know that the EE cert was revoked,
 versus an intermediary or root CA.

Does libpkix return SEC_ERROR_REVOKED_CERTIFICATE in the case where an 
intermediate has been revoked? I would kind of expect that it would return 
whatever error it returns for "could not build a path to a trust anchor" 
instead, for the same reason I think it cannot return a partial chain.

 Such an error also masks other possible problems, such as whether
 a certificate has expired, lacks trust bits, or other information.

Hopefully, libpkix at least returns the most serious problem. Have you found 
this to be the case? I realize that "most serious" is a judgement call that may 
vary by application, but at least Firefox separates cert errors into two 
buckets: overridable (e.g. expiration, untrusted issuer) and 
too-bad-to-allow-user-override (e.g. revocation).

 Per above, we never used non-blocking I/O from libpkix; we use it in
 blocking mode but call it on a worker thread. Non-blocking I/O never
 seemed to work when we tried it, and in general we felt that doing
 anything more than absolutely necessary on the main thread was a
 recipe for non-deterministic behavior.

This is also what Firefox and Chrome do internally, and this is why the 
non-blocking I/O feature is not seen as being necessary.

 The downside to blocking mode is that the API is one-shot: it is not
 possible to check on the progress of validation until it magically
 completes. When you have CRLs that are > 10MB, this is an issue.
 However, this can be worked around (e.g., calling it twice: once for
 constructing a chain without revocation checking, and another time
 with revocation checking), and one-shot definitely simplifies the
 API for everyone.

As I mentioned in another thread, it may be the case that we have to completely 
change the way CRL, OCSP, and cert fetching is done in libpkix, or in 
libpkix-based applications anyway, for performance reasons. I have definitely 
been thinking about doing things in Gecko in a way that is similar to what you 
suggest above.

 We do not currently use HTTP or LDAP certificate stores with respect
 to libpkix/the functionality that is exposed by CERT_PKIXVerifyCert.
 That being said, it is conceivable that others could use this feature,
 and we could use it in the future. We have definitely seen LDAP URLs in
 certificates that we have to validate (for example), and although
 Firefox does not ship with the Mozilla Directory (LDAP) SDK,
 Thunderbird does. Therefore, we encourage the maintainers to leave it
 in. We can contribute some test LDAP services if that is necessary for
 real-world testing.

Definitely, I am concerned about how to test and maintain the LDAP code. And, I 
am not sure LDAP support is important for a modern web browser at least. Email 
clients may be a different story. One option may be to provide an option to 
CERT_PKIXVerifyCert to disable LDAP fetching but keep HTTP fetching enabled, to 
allow applications to minimize exposure to any possible LDAP-related exploits.

 Congruence or mostly-similar
 behavior with Thunderbird is also important, as it is awkward to
 explain to users why Penango provides materially different
 validation results from Thunderbird.

I expect that Thunderbird to change to use CERT_PKIXVerifyCert exclusively 
around the time that we make that change in Firefox, if not exactly at the same 
time.

 From our testing, libpkix/PKIX_CERTVerifyCert is pretty close to RFC
 5280 as it stands. It would be cheaper and more useful for the
 Internet community if the maintainers put the 5% more effort necessary
 to finish the job, than the 95% to break compliance. If this is
 something that you want to see to believe, I can try to compile some
 kind of a spreadsheet that illustrates how RFC 5280 stacks up with
 the current PKIX_CERTVerifyCert 

Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-13 Thread Gervase Markham
On 13/01/12 00:01, Brian Smith wrote:
 Ryan seems to be a great addition to the team. Welcome, Ryan!

Ryan - could you take a moment to introduce yourself? (Apologies if I
missed an earlier introduction.)

* We will drop the idea of supporting non-NSS certificate 
  library APIs, and we will remove the abstraction layers
  over NSS's certhigh library. That means dropping the idea
  of using libpkix in OpenSSL or in any OS kernel, for
  example. 

For my info: has anyone ever expressed interest in doing that, or did it
just seem like a useful capability to have in case someone needed it?

Thanks for this summary - it's great to hear that the NSS team are of
one mind :-))

Gerv


RE: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-13 Thread Stephen Hanna
Let me just jump in and say that I'm also glad to see
libpkix being used and useful. I was the leader of the
team at Sun Labs that created libpkix (and the Java
CertPath libraries before them). Actually, it's an
exaggeration to say we created libpkix. We started
the work on it and then it took off. Lots of other
people have worked on it since then, probably putting
in many more hours than we did in creating it.

I'm mainly a lurker on this list since I don't do
much with PKI any more. I moved on to a new job
more than seven years ago, working on security
integration standards like TNC and NEA.

But if I can help answer an occasional question,
I'd be glad to do that. I'm having lunch today
with Yassir Elley, who did most of the coding
for the first version of libpkix. He works on
the same team as I do now, at Juniper. We'll
mull over this question and see if we can recall
why we included those layers of abstraction APIs.
I suspect it was because we wanted this to be
a PKIX-compliant library that could be used by
any project for any purpose in any environment.
That's also why it ended up being a bit bloated.
Maybe you could say it was a bit of a second
system effect, following CertPath as it did.

I apologize for whatever weaknesses we put into
libpkix but I'm glad to see that it's useful.
Feel free to adapt it as you see fit.

Thanks,

Steve Hanna



libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-12 Thread Brian Smith
We (me, Kai, Bob, Wan-Teh, Ryan, Elio, Kai) had a meeting today to discuss the 
issues raised in this thread. We came to the following conclusions:

Ryan seems to be a great addition to the team. Welcome, Ryan!

Gecko (Firefox and Thunderbird) will make the switch to libpkix. See Ryan's 
comments about his ideas for expanding Chromium's usage of libpkix.

We will reduce the complexity of libpkix in the following ways:

   * We will drop the idea of supporting non-NSS certificate 
 library APIs, and we will remove the abstraction layers
 over NSS's certhigh library. That means dropping the idea
 of using libpkix in OpenSSL or in any OS kernel, for
 example. Basically, much of the pkix_pl_nss layer can be
 removed and/or folded into the core libpkix layer or into
 certhigh, if doing so would be helpful.

   * We will drop support for non-blocking I/O from libpkix.
 It isn't working now, and we will remove the code that
 handles the non-blocking case as we fix bugs, to make 
 the code easier to maintain.

   * More generally, we will simplify the coding style to make
 it easier to read, understand, and maintain. This includes
 splitting large functions into smaller functions, removing
 unnecessary abstractions, removing simple getter/setter
 functions, potentially renaming internal (to libpkix)
 functions to make the code easier to read, removing
 non-PKCS#11 certificate stores (e.g. HTTP, LDAP), etc.
 (I think we agreed to remove LDAP support, but also agreed
 that it wasn't a high priority. This is a little unclear to
 me.)

We are not going to attempt any kind of spring cleaning sprint on libpkix. 
Basically, developers working on libpkix should feel free to do any of the 
above when it helps simplify the implementation of an important fix or 
enhancement to libpkix.

We will not consider complete RFC 5280 (et. al.) support a priority. We will 
basically implement a subset of RFC 5280 (et al.), with an emphasis on features 
used in the existing PKITS tests, and with the primary emphasis on making 
existing real websites work securely and reliably. We will evaluate new RFC 
5280 features and/or new additions to PKITS critically and make cost/benefit 
and priority decisions on a feature-by-feature basis. Do not expect significant 
new RFC 5280 (et. al.) functionality to be added to libpkix any time soon, even 
if that functionality is specified by some (old) RFC already, unless that 
functionality already has significant usage. If there is RFC 5280 (et al.) 
functionality in libpkix that goes beyond what PKITS tests, then we may even 
consider removing that functionality if it causes problems (e.g. security 
vulnerabilities) and a proper fix for that feature is too time consuming. (I 
don't think others are as eager to do this as I am, and it is difficult 
to determine whether a feature is actually being relied upon or not, so I 
consider this last thing to be somewhat unlikely and rare if it ever happens.)

We did not come up with a plan on how to end-of-life the old "classic" 
certificate path validation/building. It might be the case that certhigh is 
implemented in a way that enables us to easily make enhancements to it to 
improve libpkix-based processing without breaking the old classic API. I am a 
little skeptical that it will be easy to make improvements to certhigh to 
improve libpkix without having to do significant extra work to keep the 
classic API working.

In my opinion, it is a very good idea for applications to remove their 
dependencies on the classic API. Once Firefox is using libpkix exclusively, 
there will be little interest from Mozilla in fixing bugs in the classic 
library, and I got the idea that others feel similarly.

Let me know if there is anything I missed or am mistaken about.

Cheers,
Brian


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Robert Relyea
On 01/04/2012 05:56 PM, Brian Smith wrote:
 Robert Relyea wrote:
 On 01/04/2012 04:18 PM, Brian Smith wrote:
 Are you actually fetching intermediates?

 In the cases where you fetch the intermediates, the old code will not
 work! We don't fetch the intermediate if we already have it, or it's
 already sent in the SSL chain.

 If you are seeing some performance issue, perhaps it some other
 issue? (are you turning on CRL fetching?).

I think we are misunderstanding each other.

I'm not talking about revocation on intermediates. I'm talking about
fetching intermediates themselves because they weren't included in the
chain. I thought that is what you were talking about. That was certainly
what I was talking about.


 We can just tell libpkix not to do OCSP fetching for intermediates. So, this 
 particular performance issue isn't a blocker for switching to libpkix, as 
 long as we make such a change before making libpkix the default.

 My point is that, in order to actually enable libpkix's ability to fetch 
 intermediate certificates in Firefox, we will have to do a substantial amount 
 of work to eliminate the performance regression that is inherent with the 
 serial fashion that libpkix does OCSP fetching. In some ways, this might be a 
 question of fast vs right but I am not sure that the right here is 
 enough of benefit to justify the performance cost. Still, I would like to do 
 the intermediate OCSP fetching if it can be made close to free, which means 
 doing it in parallel with the EE OCSP fetch, AFAICT.
If the OCSP responder is the same for the EE and intermediate certs, you
can issue a single OCSP request (at least in theory). It would require
some NSS code.

 (Persistent) caching of OCSP responses will help. But, caching won't help for 
 the I just installed Firefox and now I am going to see how fast it is by 
 going to twitter.com test. And, also, we haven't even started working on the 
 persistent caching of OCSP responses in Firefox yet.
What is the actual cost, BTW? Persistent caching of OCSP responses is
not likely to buy a whole lot. You still have to fetch a fresh OCSP
response once the validity period of the cached response (usually
something like 24 hours) expires.

bob

 - Brian



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Jean-Marc Desperrier

Robert Relyea wrote:

7. libpkix can actually fetch CRL's on the fly. The old code can only
use CRL's that have been manually downloaded. We have hacks in PSM to
periodically load CRL's, which work for certain enterprises, but not
with the internet.


PSM's periodic CRL download is certainly quite broken, but OTOH 
on-the-fly CRL fetching certainly won't work on the Internet either, 
given the delay it induces.



I'm ok if someone wanted to rework the libpkix code itself, but trying
to shoehorn in the libpkix features into the old cert processing code is
the longer path to getting to something stable. Note that the decision
to move away from the old code was made by those who knew it best.


Probably quite true, but the question of why libpkix is so big remains; it 
is very unlikely that it brings value proportionate to its size.


In the best of worlds, I'd vote for a complete reworking of it.



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Jean-Marc Desperrier

Brian Smith wrote:

3. libpkix can enforce certificate policies (e.g. requiring EV policy
OIDs). Can the non-libpkix validation?


EV policies have been defined in a way that means they could be supported 
by code that handles an extremely tiny part of all that's possible 
with RFC 5280 certificate policies.


They could even not be supported at all by NSS, and instead handled by a 
short bit of code inside PSM that inspects the certificate chain and 
extracts the values of the OIDs. Given that the code above NSS needs to 
have a hard-coded list of EV OIDs/CA names anyway (*if* I'm correct, I 
might be wrong on that one), it wouldn't change things that much actually.



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Ryan Sleevi
  On 01/04/2012 03:51 PM, Brian Smith wrote:
  Ryan Sleevi wrote:
  IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation,
  while non-libpkix is not. That isn't to say the primitives don't exist
  -
  they do, and libpkix uses them - but that the non-libpkix path doesn't
  use
  them presently, and some may be non-trivial work to implement.
  It would be helpful to get some links to some real-world servers that
  would require Firefox to do complex path building.
  Mostly in the government. They hire 3rd parties to replace our current
  path processing because it is non-conformant. In the real world, FF is
  basically holding the web back because we are the only major browser
  that is not RFC compliant! We should have had full pkix processing 5
  years ago!


To echo what Bob is saying here, in past work I saw problems on a weekly
basis with non-3280 validating libraries within the areas of government,
military, and education - and these are not just US-only problems. The
'big ideas' of PKI tended not to take off commercially, especially in the
realm of ecommerce, but huge amounts of infrastructure and energy has been
dedicated to the dream of PKI elsewhere.

While you talk about the needs of Firefox with regards to NSS' future, I
think it is important to realize that libpkix is the only /open/
implementation (at least, as far as I know) that even comes close to
3280/5280, at least as is available to C/C++ applications. The next
closest is probably Peter Gutmann's cryptlib, which unfortunately is not
widely used in open-source projects. Note, for other languages, you have
Sun/Oracle's Java implementation (which libpkix mirrors a very early
version of, as discussed in the libpkix history) and the Legion of the
Bouncy Castle's C# implementation.

These are the same customers who are often beholden to keep IE 6/ActiveX
around for legacy applications. So while much energy is being put forth
(including from Microsoft) to move these organizations to 'modern' systems
that can support a richer web, if their security needs can't be met by
Firefox, then there will be a problem (or, like Bob said, they'll make
their own - and weigh that as a cost against switching from MSFT).

A couple examples would include the GRID project (which uses a
cross-certified mesh - http://www.igtf.net/), the US government's Federal
PKI Bridge CA (
https://turnlevel.com/wp-content/uploads/2010/05/FederalBridge.gif ), and
the DOD/DISA's PKI setup. The layout of the DOD PKI is fairly similar to
those among various European identity card PKIs, with added
cross-certification for test roots so that third-parties can develop
interoperable software.

However, even outside the spectrum of government/enterprise, you still see
issues that 3280/5280 address better than the current non-libpkix
implementation. EV certificates (and soon, the CA/B Forum Baseline
Specifications) rely on proper policy OID validation - but the failure to
match the OID is not a validation failure, it's just a sign of a 'lesser'
level of identity assurance. CA key rollover is incredibly common.
Likewise, as CAs buy each other out, you end up with effectively bridge or
mesh topologies where they cross-certify each other for legacy systems.

As far as non-TLS-compliant servers, I think that's an
oversimplification. It relies on the assumption that 1) There is one and
only one root certificate 2) the server knows all the trust anchors of the
client. Both statements can be shown to be demonstrably false (just look
at how many cross-certified verisign or entrust roots there are, due to CA
key rollovers). So there is no reasonable way for a server to send a
client a 'complete' chain, nor to send them a chain that they can know
will validate to the client's trust anchors. At best, only the EE cert
matters.

For all of these reasons, I really do think libpkix is a huge step forward
- and its many nuances and bugs are things we should work on solving,
rather than trying to determine some minimal set of functionality and
graft that onto the existing pre-libpkix implementation.

Speaking with an individual hat on, there are only a few reasons I can
think of why Chromium /wouldn't/ want to use libpkix universally on all
supported platforms:
1) On Windows, CryptoAPI simply is a more robust (5280 compliant) and
extendable implementation - and many of these government/enterprise
sectors have extended it, in my experience, so having Chromium ignore
those could be problematic.
2) On Mac, I haven't had any time to explore developing a PKCS#11 module
that can read Keychain/CDSA-based trust anchors and trust settings.


I would be absolutely thrilled to be able to use libpkix for the Mac
implementation - Apple's path building/chain validation logic is horrid
(barely targets RFC 2459), and they're on their way to deprecating every
useful API that returns meaningful information, over-simplifying it to
target the iOS market. This has been a sore point for many Apple users in

Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Brian Smith
Jean-Marc Desperrier wrote:
 Brian Smith wrote:
  3. libpkix can enforce certificate policies (e.g. requiring EV
  policy OIDs). Can the non-libpkix validation?
 
 EV policy have been defined in a way that means they could be
 supported by a code that handles an extremely tiny part of all what's
 possible with RFC5280 certificate policies.

Right. How much of PKIX a client actually needs to implement is still an open 
question in my mind.

 They could even not be supported at all by NSS, and instead handled
 by a short bit of code inside PSM that inspects the certificate chain
 and extract the value of the OIDs. Given that the code above NSS needs
 anyway to have a list of EV OIDs/CA name hard coded (*if* I'm
 correct, I might be wrong on that one), it wouldn't change things that
 much actually.

AFAICT, it is important that you know the EV policy OID you are looking for 
during path building, because otherwise you might build a path that has a cert 
without the EV policy even when there is another possible path that uses certs 
that all have the policy OID.
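
Concretely, this is what the cert_pi_policyOID in-parameter to 
CERT_PKIXVerifyCert is for; a sketch, with tags and union members per my 
reading of certt.h and error handling elided:

#include "cert.h"
#include "certt.h"
#include "secoid.h"

/* Ask libpkix to accept only paths in which every cert carries the
 * given policy OID (e.g. an EV policy). Sketch; error handling elided. */
static SECStatus
VerifyRequiringPolicy(CERTCertificate *cert, SECOidTag evPolicy,
                      void *pinArg)
{
    CERTValInParam in[2];
    CERTValOutParam out[1];

    in[0].type = cert_pi_policyOID;
    in[0].value.array.oids = &evPolicy;
    in[0].value.arraySize = 1;
    in[1].type = cert_pi_end;

    out[0].type = cert_po_end;

    return CERT_PKIXVerifyCert(cert, certificateUsageSSLServer,
                               in, out, pinArg);
}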

On the other hand, do we really need to do path building at all? It seems 
reasonable to me to require that sites that want EV treatment to return (in 
their TLS Certificates message) a pre-constructed path with the correct certs 
(all with the EV policy OID) to verify (sans root), which is what the TLS 
specification requires anyway. So, I would say that, AFAICT, practical EV 
support doesn't really require PKIX processing, though other things might.

- Brian

Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Jean-Marc Desperrier

Robert Relyea wrote:

On 01/04/2012 05:56 PM, Brian Smith wrote:

  Robert Relyea wrote:

  On 01/04/2012 04:18 PM, Brian Smith wrote:
  In the cases where you fetch the intermediates, the old code will not
  work!


[...] I'm talking about
fetching intermediates themselves because they weren't included in the
chain. I thought that is what you were talking about. That was certainly
what I was talking about.


Well, as Rob noted that's *very* surprising because the standard code 
will *not* work in that case, so you're talking about a case that's 
broken in the non-libpkix world which should be a rare case.

And not the one where performance is the main concern.

Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Gervase Markham
On 04/01/12 00:59, Brian Smith wrote:
 5. libpkix has better AIA/CRL fetching: 5.a. libpkix can fetch
 revocation information for every cert in a chain. The non-libpkix
 validation cannot (right?). 5.b. libpkix can (in theory) fetch using
 LDAP in addition to HTTP. non-libpkix validation cannot. 

5b) is not a significant advantage; everything CABForum is doing
requires HTTP access to revocation information, as many SSL clients
don't have LDAP capabilities.

Gerv



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Robert Relyea
On 01/03/2012 04:59 PM, Brian Smith wrote:
 1. libpkix can handle cross-signed certificates correctly, without getting 
 stuck in loops. Non-libpkix validation cannot.

 2. libpkix can accept parameters that control each individual validation, 
 whereas non-libpkix validation relies on global settings.
 2.a. libpkix can control OCSP/CRL/cert fetching on a per-validation basis.
 2.b. libpkix can restrict the set of roots that are validated. non-libpkix 
 validation cannot.

 3. libpkix can enforce certificate policies (e.g. requiring EV policy OIDs). 
 Can the non-libpkix validation?

 4. libpkix can return the full certificate chain to the caller. The 
 non-libpkix validation cannot.

 5. libpkix has better AIA/CRL fetching:
 5.a. libpkix can fetch revocation information for every cert in a chain. The 
 non-libpkix validation cannot (right?).
Yes, well, for OCSP. For CRLs, non-libpkix does check the
revocation status, but it doesn't refresh or even update the CRL. If the
CRL is out of date, the validation just fails (though I'm not sure what
the current definition of 'out-of-date' is for the old code).
 5.b. libpkix can (in theory) fetch using LDAP in addition to HTTP. 
 non-libpkix validation cannot.
 5.c. libpkix checks for revocation information while walking from a trusted 
 root to the EE. The non-libpkix validation does the fetching while walking 
 from the EE to the root.

 Are there any other benefits?
6. libpkix can actually fetch missing certs in the chain. This has been
an issue for a very long time.

(actually most of the features in libpkix have been issues for a very
long time).

7. libpkix can actually fetch CRL's on the fly. The old code can only
use CRL's that have been manually downloaded. We have hacks in PSM to
periodically load CRL's, which work for certain enterprises, but not
with the internet.

 As for #5, I don't think Firefox is going to be able to use libpkix's current 
 OCSP/CRL fetching anyway, because libpkix's fetching is serialized and we 
 will need to be able to fetch revocation for every cert in the chain in 
 parallel in order to avoid regressing performance (too much) when we start 
 fetching intermediate certificates' revocation information. I have an idea 
 for how to do this without changing anything in NSS, doing all the OCSP/CRL 
 fetching in Gecko instead.
OCSP responses are cached, so OCSP fetching on common intermediates
should not be a significant performance hit. Chrome is using this
feature (we know because we've had some intermediates that were revoked).

 It seems to me that it would be relatively easy to add #2, #3, and #4 to the 
 non-libpkix validation engine, especially since we can reference the libpkix 
 code
No, it's going to be a real bear to do so. And in the long run, we are
still far away from our goal of being compliant with RFC 3280.
Number 1 is *very* tricky, which is why it was punted in the original code.

Also, 5 is a *very* important feature in the new world. We now have
revoked intermediates in the wild!

 I don't know how much effort it would take to implement #1, but to my naive 
 mind it seems like we could get something very serviceable pretty easily by 
 trying every matching cert at each point in the chain, instead of checking 
 only the best match. Is there some complexity there that I am missing?

 I know that just about everybody has expressed concerns about how difficult 
 libpkix is to maintain. And, also, it is huge. I am not aware of all the 
 problems that the older validation code has, so it seems like it might be 
 somewhat reasonable to extend the old validation code to add the features it 
 is missing, and avoid using libpkix at all. 
I'm OK if someone wants to rework the libpkix code itself, but trying
to shoehorn the libpkix features into the old cert processing code is
the longer path to getting to something stable. Note that the decision
to move away from the old code was made by those who knew it best.
Getting RFC 3280 processing of certificates is long overdue in NSS,
and in Firefox in particular. It's time to just get on with it. We have
code that works. I'm OK with a plan to replace it with something else,
but right now it's the code we have. Trying to graft things onto the old
code (which is really 4 separate implementations anyway) is not a good
path forward.

bob

 Thoughts?

 Thanks,
 Brian



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Ryan Sleevi wrote:
 IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation,
 while non-libpkix is not. That isn't to say the primitives don't exist -
 they do, and libpkix uses them - but that the non-libpkix path doesn't use
 them presently, and some may be non-trivial work to implement.

It would be helpful to get some links to some real-world servers that would 
require Firefox to do complex path building.

No conformant TLS server can require RFC 4158 path building. I would like to 
understand better how much of RFC 3280, 4158, and 5280 is actually required for 
an HTTPS client. (Non-TLS usage like S/MIME in Thunderbird is a separate 
issue.) After all, the TLS specifications are pretty clear that the server is 
*supposed* to provide the full path to the root in its Certificate message, so 
even the dumbest path building code will work with any TLS-conformant server. 
Then, for Firefox, all of the complexity of the libpkix path building is purely 
there to handle non-conformant servers.

AFAICT, we can split these non-conformant servers into two classes: 
misconfigured servers, and enterprise/government servers. It seems very likely 
to me that simpler-than-RFC4158 processing will work very well for 
misconfigured servers (maybe "just do AIA cert fetching" is enough?). But, how 
much of RFC3280/4158 do real-world TLS-non-conformant government/enterprise 
servers without AIA cert information in the certs use? (Knowing nothing about 
this topic, I wouldn't be surprised if "just do AIA cert fetching" works even 
for these cases.)
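
(As an aside: "just do AIA cert fetching" is a switch libpkix already exposes. 
A hedged sketch of turning it on through CERT_PKIXVerifyCert's input 
parameters, assuming cvin is the CERTValInParam array being set up for the 
call:)

    CERTValInParam cvin[2];
    /* Follow caIssuers AIA URLs to fetch missing intermediates
       during path building. */
    cvin[0].type = cert_pi_useAIACertFetch;
    cvin[0].value.scalar.b = PR_TRUE;
    cvin[1].type = cert_pi_end;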

 I find it much more predictable and reasonable than some of the
 non-3280 implementations - both non-libpkix and entirely non-NSS
 implementations (eg: OS X's Security.framework)

Thanks. This is very helpful to know.

 The problem that I fear is that once you start trying to go down the
 route of replacing libpkix, while still maintaining 3280 (or even
 better, 5280) compliance, in addition to some of the path building
 (/not/ verification) strategies of RFC 4158, you end up with a lot
 of 'big' and 'complex' code that can be a chore to maintain because
 PKI/PKIX is an inherently hairy and complicated beast.

 So what new value is trying to be accomplished? As best I can
 tell, it seems focused on the fact that libpkix is big, scary (macro-based
 error handling galore), and has bugs, but only a few people have the
 expert/domain knowledge of the code to fix them? Does a new
 implementation do much to solve that?

I am not thinking of converting any existing code into another conformant RFC 
3280/4158/5280 implementation. My goal is to make things work in Firefox. It 
seems like "conform to RFC 3280/4158/5280" isn't a sufficient condition, and I 
am curious if it is even a necessary condition. If RFC 3280/4158/5280 is a 
necessary condition (again, for a *web browser* only, not for S/MIME and 
related things), then fixing existing problems with libpkix seems like the more 
reasonable path. My question is whether those RFCs actually describe what a web 
browser needs to do.

  As for #5, I don't think Firefox is going to be able to use
  libpkix's current OCSP/CRL fetching anyway, because libpkix's
  fetching is serialized and we will need to be able to fetch
  revocation for every cert in the chain in parallel in order
  to avoid regressing performance (too much) when we start
  fetching intermediate certificates' revocation information. I
  have an idea for how to do this without changing anything in NSS,
  doing all the OCSP/CRL fetching in Gecko instead.
 
 A word of caution - this is a very contentious area in the PKIX WG.

I am aware of all of that. But, I know some people don't want to turn on 
intermediate revocation fetching in Firefox at all (by default) because of the 
horrible performance regression it will induce. We can (and should) also 
improve our caching of revocation information to help mitigate that, but the 
fact is that there will be many important cases where fetching intermediate 
certs will cause a serious performance regression. There are other things we 
could do to avoid the performance regression instead of parallelizing the 
revocation status requests but they are also significant departures from the 
standards.

 While not opposed to exploring, I am trying to play the proverbial
 devil's advocate for security-sensitive code used by millions of
 users, especially for what sounds at first blush like a "cut our
 losses" proposal.

A few months ago, I had a discussion with Kai, where he asked me a question 
that he said Wan-Teh had asked him: are we committed to making libpkix work or 
not? This thread is the start of answering that question.

I am concerned that the libpkix code is hard to maintain and that there are 
very few people available to maintain it. If we have a group of people who are 
committed to making it work, then Mozilla relying on libpkix is probably 
workable. But, it is a little distressing that Google Chrome seems to avoid 
libpkix whenever possible, and that Sun/Oracle [redacted]. And, generally, 
nobody I have talked to seems happy with libpkix in practice, even though it 
seems to be the right choice in theory. Literally, the best thing that has been 
said about it is "it's the only choice we have." I wonder if that is really true.

- Brian

Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Gervase Markham wrote:
 On 04/01/12 00:59, Brian Smith wrote:
  5. libpkix has better AIA/CRL fetching: 5.a. libpkix can fetch
  revocation information for every cert in a chain. The non-libpkix
  validation cannot (right?). 5.b. libpkix can (in theory) fetch
  using
  LDAP in addition to HTTP. non-libpkix validation cannot.
 
 5b) is not a significant advantage; everything CABForum is doing
 requires HTTP access to revocation information, as many SSL clients
 don't have LDAP capabilities.

That is true for Firefox, but the LDAP code might be(come) useful for 
Thunderbird. I don't know how well tested it is or even if it works, though.

- Brian


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Robert Relyea wrote:
 7. libpkix can actually fetch CRL's on the fly. The old code can only
 use CRL's that have been manually downloaded. We have hacks in PSM to
 periodically load CRL's, which work for certain enterprises, but not
 with the internet.

I am not too concerned with the fetching stuff. Fetching is not a hard problem 
to solve other ways, AFAICT.

 OCSP responses are cached, so OCSP fetching on common intermediates
 should not be a significant performance hit. Chrome is using this
 feature (we know because we've had some intermediates that were
 revoked).

When I browse with libpkix enabled (which also enables the intermediate 
fetching), connecting to HTTPS websites (like mail.mozilla.com).

Also, Chrome only uses libpkix on Linux, right?

Like I said in my other message, my main concern is that libpkix is huge and we 
don't have a lot of people lined up to maintain it or even understand it.

Ryan's comments are encouraging though.

- Brian


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Brian Smith wrote:
 Robert Relyea wrote:
 When I browse with libpkix enabled (which also enables the
 intermediate fetching), connecting to HTTPS websites (like
 mail.mozilla.com)

... is much slower, at least when the browser starts up. We may be able to fix 
this with persistent caching of intermediates but it is still going to be slow 
the first time you go somewhere that uses a new intermediate--including the 
first time you browse to any HTTPS website after installing Firefox, which is 
critical, because users start judging us at that point, not after we've filled 
and warmed up our various caches.

- Brian


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Robert Relyea
On 01/04/2012 03:51 PM, Brian Smith wrote:
 Ryan Sleevi wrote:
 IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation,
 while non-libpkix is not. That isn't to say the primitives don't exist -
 they do, and libpkix uses them - but that the non-libpkix path doesn't use
 them presently, and some may be non-trivial work to implement.
 It would be helpful to get some links to some real-world servers that would 
 require Firefox to do complex path building.
Mostly in the government. They hire 3rd parties to replace our current
path processing because it is non-conformant. In the real world, FF is
basically holding the web back because we are the only major browser
that is not RFC compliant! We should have had full PKIX processing 5
years ago!


bob



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Robert Relyea
On 01/04/2012 03:51 PM, Brian Smith wrote:
 I am concerned that the libpkix code is hard to maintain and that
 there are very few people available to maintain it. If we have a group
 of people who are committed to making it work, then Mozilla relying on
 libpkix is probably workable. But, it is a little distressing that
 Google Chrome seems to avoid libpkix whenever possible, and that
 Sun/Oracle [redacted]. And, generally, nobody I have talked to seems
 happy with libpkix in practice, even though it seems to be the right
 choice in theory. Literally, the best thing that has been said about
 it is "it's the only choice we have." I wonder if that is really true.
 - Brian 

Again, I'm OK with reworking libpkix. Trying to bend the old code into
doing RFC 3280 processing though will not help in the maintainability
arena. Let's get down to one set of code that meets the standards and
work on improving it.

bob



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Robert Relyea
On 01/04/2012 04:18 PM, Brian Smith wrote:
 Brian Smith wrote:
 Robert Relyea wrote:
 When I browse with libpkix enabled (which also enables the
 intermediate fetching), connecting to HTTPS websites (like
 mail.mozilla.com)
 ... is much slower, at least when the browser starts up. We may be able to 
 fix this with persistent caching of intermediates but it is still going to be 
 slow the first time you go somewhere that uses a new intermediate--including 
 the first time you browse to any HTTPS website after installing Firefox, 
 which is critical, because users start judging us at that point, not after 
 we've filled and warmed up our various caches.
Are you actually fetching intermediates?

In the cases where you fetch the intermediates, the old code will not work!
We don't fetch the intermediate if we already have it, or it's already
sent in the SSL chain.

If you are seeing some performance issue, perhaps it's some other issue?
(Are you turning on CRL fetching?)

bob

 - Brian



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Robert Relyea wrote:
 On 01/04/2012 04:18 PM, Brian Smith wrote:
 Are you actually fetching intermediates?
 
 In the cases where you fetch the intermediates, the old code will not
 work! We don't fetch the intermediate if we already have it, or it's
 already sent in the SSL chain.
 
 If you are seeing some performance issue, perhaps it's some other
 issue? (Are you turning on CRL fetching?)

We can just tell libpkix not to do OCSP fetching for intermediates. So, this 
particular performance issue isn't a blocker for switching to libpkix, as long 
as we make such a change before making libpkix the default.
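
(A sketch of what "tell libpkix not to do OCSP fetching for intermediates" 
could look like, via the cert_pi_revocationFlags input to CERT_PKIXVerifyCert. 
The struct and flag names are from NSS's certt.h as I understand them; the 
exact policy below is illustrative, not a tested configuration, and cvin is 
assumed to be the CERTValInParam array being set up for the call.)

    PRUint64 leafFlags[2];   /* indexed by cert_revocation_method_* */
    PRUint64 chainFlags[2];
    CERTRevocationFlags rev;

    /* EE cert: check OCSP, allow network fetching. */
    leafFlags[cert_revocation_method_crl] =
        CERT_REV_M_DO_NOT_TEST_USING_THIS_METHOD;
    leafFlags[cert_revocation_method_ocsp] =
        CERT_REV_M_TEST_USING_THIS_METHOD |
        CERT_REV_M_ALLOW_NETWORK_FETCHING;

    /* Intermediates: check OCSP against cached/local data only;
       never hit the network. */
    chainFlags[cert_revocation_method_crl] =
        CERT_REV_M_DO_NOT_TEST_USING_THIS_METHOD;
    chainFlags[cert_revocation_method_ocsp] =
        CERT_REV_M_TEST_USING_THIS_METHOD |
        CERT_REV_M_FORBID_NETWORK_FETCHING;

    rev.leafTests.number_of_defined_methods = cert_revocation_method_count;
    rev.leafTests.cert_rev_flags_per_method = leafFlags;
    rev.leafTests.number_of_preferred_methods = 0;
    rev.leafTests.preferred_methods = NULL;
    rev.leafTests.cert_rev_method_independent_flags = 0;
    rev.chainTests = rev.leafTests;
    rev.chainTests.cert_rev_flags_per_method = chainFlags;

    cvin[i].type = cert_pi_revocationFlags;
    cvin[i].value.pointer.revocation = &rev;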

My point is that, in order to actually enable libpkix's ability to fetch 
intermediate certificates in Firefox, we will have to do a substantial amount 
of work to eliminate the performance regression that is inherent with the 
serial fashion in which libpkix does OCSP fetching. In some ways, this might be 
a question of "fast" vs. "right", but I am not sure that the "right" here is 
enough of a benefit to justify the performance cost. Still, I would like to do 
the intermediate OCSP fetching if it can be made close to "free", which means doing 
it in parallel with the EE OCSP fetch, AFAICT.
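
(The shape of "in parallel with the EE OCSP fetch" could be as simple as one 
NSPR thread per cert in the chain. A sketch, where fetch_and_cache_ocsp() is a 
hypothetical helper standing in for the real OCSP fetch-and-cache machinery, 
not an actual NSS or Gecko function:)

    #include "nspr.h"

    /* Hypothetical: fetch the OCSP response for one cert and put it
       in the OCSP cache. */
    extern void fetch_and_cache_ocsp(void *cert);

    static void fetch_thread(void *arg)
    {
        fetch_and_cache_ocsp(arg);
    }

    /* Start one fetch per cert, then wait for all of them; total
       latency is the max of the fetches rather than their sum. */
    static void
    fetch_chain_revocation_in_parallel(void **certs, int n)
    {
        PRThread **threads = (PRThread **)PR_Malloc(n * sizeof(PRThread *));
        int i;
        for (i = 0; i < n; i++) {
            threads[i] = PR_CreateThread(PR_USER_THREAD, fetch_thread,
                                         certs[i], PR_PRIORITY_NORMAL,
                                         PR_GLOBAL_THREAD,
                                         PR_JOINABLE_THREAD, 0);
        }
        for (i = 0; i < n; i++) {
            if (threads[i]) {
                PR_JoinThread(threads[i]);
            }
        }
        PR_Free(threads);
    }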

(Persistent) caching of OCSP responses will help. But, caching won't help for 
the "I just installed Firefox and now I am going to see how fast it is by going 
to twitter.com" test. And, also, we haven't even started working on the 
persistent caching of OCSP responses in Firefox yet.

- Brian


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Wan-Teh Chang
On Wed, Jan 4, 2012 at 3:51 PM, Brian Smith bsm...@mozilla.com wrote:

 But, it is a little distressing that Google Chrome seems to avoid libpkix
 whenever possible, ...

This is not true.  In fact, Google Chrome is an early adopter of libpkix,
and works very hard to fix or work around the bugs in libpkix.  (Google
Chrome uses the libpkix in system NSS, so it has to work around
libpkix bugs before the fixes appear in system NSS.)

Wan-Teh


What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-03 Thread Brian Smith
1. libpkix can handle cross-signed certificates correctly, without getting 
stuck in loops. Non-libpkix validation cannot.

2. libpkix can accept parameters that control each individual validation, 
whereas non-libpkix validation relies on global settings.
2.a. libpkix can control OCSP/CRL/cert fetching on a per-validation basis.
2.b. libpkix can restrict the set of roots that are validated. non-libpkix 
validation cannot.

3. libpkix can enforce certificate policies (e.g. requiring EV policy OIDs). 
Can the non-libpkix validation?

4. libpkix can return the full certificate chain to the caller. The 
non-libpkix validation cannot.

5. libpkix has better AIA/CRL fetching:
5.a. libpkix can fetch revocation information for every cert in a chain. The 
non-libpkix validation cannot (right?).
5.b. libpkix can (in theory) fetch using LDAP in addition to HTTP. non-libpkix 
validation cannot.
5.c. libpkix checks for revocation information while walking from a trusted 
root to the EE. The non-libpkix validation does the fetching while walking from 
the EE to the root.

Are there any other benefits?

As for #5, I don't think Firefox is going to be able to use libpkix's current 
OCSP/CRL fetching anyway, because libpkix's fetching is serialized and we will 
need to be able to fetch revocation for every cert in the chain in parallel in 
order to avoid regressing performance (too much) when we start fetching 
intermediate certificates' revocation information. I have an idea for how to do 
this without changing anything in NSS, doing all the OCSP/CRL fetching in Gecko 
instead.

It seems to me that it would be relatively easy to add #2, #3, and #4 to the 
non-libpkix validation engine, especially since we can reference the libpkix 
code

I don't know how much effort it would take to implement #1, but to my naive 
mind it seems like we could get something very serviceable pretty easily by 
trying every matching cert at each point in the chain, instead of checking only 
the best match. Is there some complexity there that I am missing?
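
(What's being described is a depth-first search with backtracking. A minimal 
sketch, assuming two hypothetical helpers, find_issuer_candidates() and 
is_trust_anchor(), which are not real NSS calls, plus loop protection against 
the cross-signing cycles from #1:)

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Cert Cert;   /* opaque certificate handle */

    /* Hypothetical helpers, not real NSS APIs. */
    extern size_t find_issuer_candidates(const Cert *c, const Cert **out,
                                         size_t max);
    extern bool is_trust_anchor(const Cert *c);
    extern bool already_in_path(const Cert *c, const Cert **path, size_t len);

    /* Try every candidate issuer at each step; backtrack on dead ends.
       On success, returns true with path[] filled EE-first. */
    static bool
    build_path(const Cert *cert, const Cert **path, size_t depth, size_t max)
    {
        if (depth >= max || already_in_path(cert, path, depth)) {
            return false;               /* loop detected or path too long */
        }
        path[depth] = cert;
        if (is_trust_anchor(cert)) {
            return true;
        }
        const Cert *cand[16];
        size_t n = find_issuer_candidates(cert, cand, 16);
        for (size_t i = 0; i < n; i++) {
            if (build_path(cand[i], path, depth + 1, max)) {
                return true;            /* first complete chain wins */
            }
        }
        return false;                   /* dead end: caller backtracks */
    }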

I know that just about everybody has expressed concerns about how difficult 
libpkix is to maintain. And, also, it is huge. I am not aware of all the 
problems that the older validation code has, so it seems like it might be 
somewhat reasonable to extend the old validation code to add the features it is 
missing, and avoid using libpkix at all. 

Thoughts?

Thanks,
Brian


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-03 Thread Ryan Sleevi
Snip
  Are there any other benefits?

IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation, while
non-libpkix is not. That isn't to say the primitives don't exist - they
do, and libpkix uses them - but that the non-libpkix path doesn't use them
presently, and some may be non-trivial work to implement.

One benefit of libpkix is that it reflects much of the real world
experience and practical concerns re: PKI that were distilled in RFC 4158.
I also understand that it passes all the PKITS tests (
http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html ),
while non-libpkix does not (is this correct?)

Don't get me wrong, I'm not trying to be a libpkix apologist - I've had
more than my share of annoyances (latest is http://crbug.com/108514#c3 ),
but I find it much more predictable and reasonable than some of the
non-3280 implementations - both non-libpkix and entirely non-NSS
implementations (eg: OS X's Security.framework)

The problem that I fear is that once you start trying to go down the route
of replacing libpkix, while still maintaining 3280 (or even better, 5280)
compliance, in addition to some of the path building (/not/ verification)
strategies of RFC 4158, you end up with a lot of 'big' and 'complex' code
that can be a chore to maintain because PKI/PKIX is an inherently hairy
and complicated beast.

So what new value is trying to be accomplished? As best I can tell, it
seems focused on the fact that libpkix is big, scary (macro-based error
handling galore), and has bugs, but only a few people have the expert/domain
knowledge of the code to fix them? Does a new implementation do much to
solve that?

From your list of pros/cons, it sounds like you're primarily focused on
the path verification aspects (policies, revocation), but a very important
part of what libpkix does is the path building/locating aspects (depth
first search, policy/constraint based edge filtering, etc). While it's not
perfect ( https://bugzilla.mozilla.org/show_bug.cgi?id=640892 ), as an
algorithm it's more robust than the non-libpkix implementation in my
experience.

  As for #5, I don't think Firefox is going to be able to use libpkix's
  current OCSP/CRL fetching anyway, because libpkix's fetching is serialized
  and we will need to be able to fetch revocation for every cert in the
  chain in parallel in order to avoid regressing performance (too much) when
  we start fetching intermediate certificates' revocation information. I
  have an idea for how to do this without changing anything in NSS, doing
  all the OCSP/CRL fetching in Gecko instead.

A word of caution - this is a very contentious area in the PKIX WG. The
argument is that a correct implementation should only trust data as far
as it can throw it (or as far as it can be chained to a trusted root).
Serializing revocation checking by beginning at the root and then working
down /is/ the algorithm described in RFC 3280 Section 6.3. In short, the
argument goes that you shouldn't be trusting/operating on ANY information
from the intermediate until you've processed the root - since it may be a
hostile intermediate.

libpkix, like CryptoAPI and other implementations, defers revocation
checking until all trust paths are validated, but even then checks
revocation serially/carefully.
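
(Concretely, with a validated path ordered trust-anchor-first, RFC 3280 
Section 6.3-style checking is roughly the loop below; check_revocation() is a 
hypothetical stand-in for whatever CRL/OCSP machinery is in use, not an NSS 
call:)

    /* chain[0] is the cert closest to the trust anchor, chain[n-1] the
       EE. A cert's revocation data is consulted only after everything
       above it in the path has already been checked. */
    for (size_t i = 0; i < n; i++) {
        if (check_revocation(chain[i]) != 0) {
            return -1;   /* revoked or status unavailable: fail */
        }
    }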

Now, I recognize that such an approach/interpretation is not universally
agreed upon, but I just want to make sure you realize there is a reasoning
for the approach it currently uses. For some people, even AIA chasing is
seen as a 'bad' idea - even if, in practice, every sane user agent does it
because of so many broken TLS implementations/webservers out there.

While not opposed to exploring, I am trying to play the proverbial devil's
advocate for security-sensitive code used by millions of users, especially
for what sounds at first blush like a "cut our losses" proposal.

Ryan



