Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-05-22 Thread Brian Smith via dev-security-policy
Ryan Sleevi  wrote:

>
>
>> It would be easier to understand whether this is true if the proposed text
>> cited the RFCs, like RFC 4055, that actually impose the requirements that
>> result in the given encodings.
>>
>
> Could you clarify, do you just mean adding references to each of the
> example encodings (such as the above example, for the SPKI encoding)?
>

Exactly. That way, it is clear that the given encodings are not imposing a
new requirement, and it would be clear which standard is being used to
determine the correct encoding.

I realize that determining the encoding from each of these cited specs
would require understanding more specifications, including in particular
how ASN.1 DER requires DEFAULT values to be encoded. I would advise against
calling out all of these details individually, lest people get confused by
inevitable omissions.
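
For concreteness, the kind of encoding detail at stake looks like this at
the byte level. The byte values below are the standard DER encodings from
RFC 4055 and RFC 5758; the snippet itself is only an illustrative sketch:

    # AlgorithmIdentifier for sha256WithRSAEncryption (RFC 4055):
    # SEQUENCE { OID 1.2.840.113549.1.1.11, NULL } -- explicit NULL parameters.
    SHA256_WITH_RSA = bytes.fromhex("300d06092a864886f70d01010b0500")

    # AlgorithmIdentifier for ecdsa-with-SHA256 (RFC 5758): parameters absent.
    ECDSA_WITH_SHA256 = bytes.fromhex("300a06082a8648ce3d040302")

    # DER is canonical (for example, a value equal to a DEFAULT must be
    # omitted), so each allowed algorithm has exactly one valid encoding and
    # a byte-for-byte comparison is a complete conformance check.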

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-05-09 Thread Brian Smith via dev-security-policy
On Fri, Apr 26, 2019 at 11:39 AM Wayne Thayer  wrote:

> On Wed, Apr 24, 2019 at 10:02 AM Ryan Sleevi  wrote:
>
>> Thank you David and Ryan! This appears to me to be a reasonable
>> improvement to our policy.
>>
>
> Brian: could I ask you to review the proposed change?
>
>
>> This does not, however, address the last part of what Brian proposes -
>> which is examining if, how many, and which CAs would fail to meet these
>> encoding requirements today, either in their roots, subordinates, or leaf
>> certificates.
>>
>>
> While I agree that this would be useful information, for the purpose of
> moving ahead with this policy change would it instead be reasonable to set
> an effective date and require certificates issued (notBefore) after that
> date to comply, putting the burden on CAs to verify their implementations
> rather than relying on someone else to do that work?
>

My understanding here is that the proposed text is not imposing a new
requirement, but more explicitly stating a requirement that is already
imposed by the BRs. AFAICT the BRs require syntactically valid X.509
certificates, RFC 5280 defines what's syntactically valid, RFC 5280 defers
to other documents about what is allowed for each algorithm identifier, and
this is an attempt to collect all those requirements into one spot for
convenience.

It would be easier to understand whether this is true if the proposed text cited
the RFCs, like RFC 4055, that actually impose the requirements that result
in the given encodings.


>
> While this includes RSA-PSS, it's worth noting that mozilla::pkix does not
>> support these certificates, and also worth noting that the current encoding
>> scheme is substantially more verbose than desirable.
>>
>
I agree the encoding is unfortunate. But, also, there's no real prospect of
a shorter encoding being standardized and implemented in a realistic time
frame.

Cheers,
Brian
--
https://briansmith.org/


Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-04-22 Thread Brian Smith via dev-security-policy
Wayne Thayer  wrote:

> Brian Smith  wrote:
>
>> Ryan Sleevi wrote:
>>
>>> Given that CAs have struggled with the relevant encodings, both for the
>>> signatureAlgorithm and the subjectPublicKeyInfo field, I’m curious if
>>> you’d
>>> be open to instead enumerating the allowed (canonical) encodings for
>>> both.
>>> This would address open Mozilla Problematic Practices as well - namely,
>>> the
>>> encoding of NULL parameters with respect to certain signature algorithms.
>>>
>>
>>
> I would be happy with that approach if it makes our requirements clearer -
> I'm just not convinced that doing so will eliminate the confusion I
> attempted to describe.
>

There are three (that I can think of) sources of confusion:

1. Is there any requirement that the signature algorithm that is used to
sign a certificate be correlated in any way to the algorithm of the public
key of the signed certificate? AFAICT, the answer is "no."

2. What combinations of public key algorithm (RSA vs. ECDSA vs. EdDSA),
curve (N/A vs. P-256 vs. P-384 vs. Ed25519), and digest algorithm (SHA-256,
SHA-384, SHA-512) are allowed? This is quite difficult to get *precisely*
right in natural language, but easy to get right with a list of encodings
(see the sketch after this list).

3. Given a particular combination of algorithm, curve, and digest
algorithm, which encodings of that information are acceptable? For example,
when is a NULL parameter required and when is it optional? Again, this is
hard to get right in natural language, and again, listing the encodings
makes this trivial to get exactly right.
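
For example, point 2 reduces to membership in an agreed-upon table. A
sketch, in Python; the combinations listed are illustrative, not a proposal:

    # Illustrative (public key algorithm, curve, digest) combinations only;
    # the actual allowed set is exactly what the policy needs to pin down.
    ALLOWED_COMBINATIONS = {
        ("RSA",   None,      "SHA-256"),
        ("RSA",   None,      "SHA-384"),
        ("ECDSA", "P-256",   "SHA-256"),
        ("ECDSA", "P-384",   "SHA-384"),
        ("EdDSA", "Ed25519", None),  # Ed25519 fixes its own digest
    }

    def combination_allowed(key_alg, curve, digest):
        return (key_alg, curve, digest) in ALLOWED_COMBINATIONS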

> Agreed - is someone willing to take on this task?
>

I could transform what I did with webpki into some text.

However, first I think it would be useful if somebody could check that the
encodings that webpki expects actually match what certificates in
Certificate Transparency are doing. For example, does every CA already
encode a NULL parameter when one is required by RFC 4055 (which is included
by reference from RFC 5280)? Are there any algorithm combinations in use
that aren't in webpki's list? This is something I don't have time to
thoroughly check.
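
A sketch of the kind of survey I mean (the inputs are hypothetical; a real
check would need a CT log client plus a DER parser to extract the raw
AlgorithmIdentifier bytes from each certificate):

    from collections import Counter

    def survey(sig_algs_seen, allowed_encodings):
        # sig_algs_seen: iterable of raw DER AlgorithmIdentifier bytes
        # extracted from CT entries.
        # allowed_encodings: the set of exact encodings webpki accepts.
        # Returns the encodings seen in the wild that webpki would reject.
        return Counter(alg for alg in sig_algs_seen
                       if alg not in allowed_encodings)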

Thanks,
Brian


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-17 Thread Brian Smith via dev-security-policy
Wayne Thayer via dev-security-policy 
wrote:

> My conclusion from this discussion is that we should not add an explicit
> requirement for EKUs in end-entity certificates. I've closed the issue.
>

What will happen to all the certificates without an EKU that currently
exist, which don't conform to the program requirements?

For what it's worth, I don't object to a requirement for having an explicit
EKU in certificates covered by the program. Like I said, I think every
certificate that is issued should be issued with a clear understanding of
what applications it will be used for, and having an EKU extension does
achieve that.

The thing I am attempting to avoid is the implication that a missing EKU
implies a certificate is not subject to the program's requirements.

Cheers,
Brian


Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-04-04 Thread Brian Smith via dev-security-policy
Ryan Sleevi wrote:

> Given that CAs have struggled with the relevant encodings, both for the
> signatureAlgorithm and the subjectPublicKeyInfo field, I’m curious if you’d
> be open to instead enumerating the allowed (canonical) encodings for both.
> This would address open Mozilla Problematic Practices as well - namely, the
> encoding of NULL parameters with respect to certain signature algorithms.
>

I agree with Ryan. It would be much better to list more precisely what
algorithm combinations are allowed, and how exactly they should be encoded.
From my experience in implementing webpki [1], knowing the exact allowed
encodings makes it much easier to write software that deals with
certificates and also makes it easier to validate that certificates conform
to the requirements.

These kinds of details are things that CAs need to delegate to their
technical staff for enforcement, and IMO it would make more sense to ask a
programmer in this space to draft the requirements, and then have other
programmers verify the requirements are accurate. In particular, it is
hugely inefficient for non-programmers to attempt to draft these
technical requirements and then ask programmers and others to check them
because it's unreasonable to expect people who are not programmers to be
able to see which details are important and which aren't.

You can find all the encodings of the algorithm identifiers at [2].

[1] https://github.com/briansmith/webpki
[2] https://github.com/briansmith/webpki/tree/master/src/data
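
As an illustration of why exact encodings are easy to enforce in software,
a sketch (the path and glob below are assumptions for illustration; see [2]
for the actual files):

    from pathlib import Path

    # Load the exact DER AlgorithmIdentifier encodings that are accepted.
    ALLOWED = {p.read_bytes() for p in Path("webpki/src/data").glob("*.der")}

    def signature_algorithm_allowed(alg_der: bytes) -> bool:
        # DER is canonical, so byte-for-byte equality is a complete check.
        return alg_der in ALLOWED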

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-03 Thread Brian Smith via dev-security-policy
Wayne Thayer  wrote:

> On Mon, Apr 1, 2019 at 5:36 PM Brian Smith via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Here when you say "require EKUs," you mean that you are proposing that
>> software that uses Mozilla's trust store must be modified to reject
>> end-entity certificates that do not contain the EKU extension, if the
>> certificate chains up to the roots in Mozilla's program, right?
>
>
> That would be a logical goal, but I was only contemplating a policy
> requirement.
>

OK, let's say the policy were to change to require an EKU in every
end-entity certificate. Then, would the policy also require that existing
unexpired certificates that lack an EKU be revoked? Would the issuance of a
new certificate without an EKU be considered a policy violation that would
put the CA at risk of removal?

The thing I want to avoid is saying "It is OK for the CA to issue an
end-entity certificate without an EKU and if there is no EKU we will
consider it out of scope of the program." In particular, I don't want to
put software that (correctly) implements the "no EKU extension implies all
usages are acceptable" rule at risk.


>
> If so, how
>> would one implement the "chain[s] up to roots in our program" check?
>> What's
>> the algorithm? Is that actually well-defined?
>>
>>
> My starting proposal would be to reject all EE certs issued after a
> certain future date that don't include EKU(s), or that assert anyEKU. If
> your point is that it's not so simple and that this will break things, I
> suspect that you are correct.
>

The part that seems difficult to implement is the differentiation of a
certificate that chains up to a root in Mozilla's program from one that
doesn't. I don't think there is a good way to determine, given the
information that the certificate verifier has, whether a certificate chains
up to a root in Mozilla's program or not, so to be safe software has to
apply the same rules regardless of whether the certificate appears to
chain up to a root in Mozilla's program or not.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-01 Thread Brian Smith via dev-security-policy
Wayne Thayer via dev-security-policy 
wrote:

> This leads to confusion such as [1] in
> which certificates that are not intended for TLS or S/MIME fall within the
> scope of our policies.
>

I disagree that there is any confusion. The policy is clear, as noted in
https://bugzilla.mozilla.org/show_bug.cgi?id=1523221#c3.

> Simply requiring EKUs in S/MIME certificates won't solve the problem unless
> we are willing to exempt certificates without an EKU from our policies, and
> doing that would create a rather obvious loophole for issuing S/MIME
> certificates that don't adhere to our policies.
>

I agree that a requirement to add an EKU to certificates does not solve the
problem, because the problem is that software (Mozilla's and others')
interprets the lack of an EKU extension as meaning "there is no restriction
on the EKU," which is the correct interpretation.


> The proposed solution is to require EKUs in all certificates that chain up
> to roots in our program, starting on some future effective date (e.g. April
> 1, 2020).


Here when you say "require EKUs," you mean that you are proposing that
software that uses Mozilla's trust store must be modified to reject
end-entity certificates that do not contain the EKU extension, if the
certificate chains up to the roots in Mozilla's program, right? If so, how
would one implement the "chain[s] up to roots in our program" check? What's
the algorithm? Is that actually well-defined?


> Alternately, we could easily argue that section 1.1 of our existing policy
> already makes it clear that CAs must include EKUs other than
> id-kp-serverAuth and id-kp-emailProtection in certificates that they wish
> to remain out of scope for our policies.
>

I agree the requirements are already clear. The problem is not the clarity
of the requirements. Anybody can define a new EKU, because EKUs are listed
in the certificate by OIDs and anybody can make up a new OID; a standard
isn't required. Further, not agreeing on a specific EKU OID
for a particular kind of usage is poor practice, and we should discourage
that poor practice.

Cheers,
Brian
-- 
https://briansmith.org/


Re: SHA256 for OCSP response issuer hashing

2016-12-20 Thread Brian Smith
Roland Shoemaker  wrote:
> Let's Encrypt is currently considering moving away from using SHA1 as
> the issuer subject/public key hashing function in OCSP responses and
> using SHA256 instead. Given a little investigation this seems like a
> safe move to make but we wanted to check with the community to see if
> anyone was aware of legacy (or contemporary) software issues that may
> cause us any trouble.

I'm not sure I understand you correctly, but see:
https://bugzilla.mozilla.org/show_bug.cgi?id=966856
https://hg.mozilla.org/mozilla-central/annotate/578899c0b819/security/pkix/lib/pkixocsp.cpp#l717
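
For reference, the hashes in question are the CertID issuerNameHash and
issuerKeyHash from RFC 6960. A sketch of computing them, where
issuer_name_der (the DER-encoded issuer Name) and issuer_spk_bits (the
value octets of the issuer's subjectPublicKey BIT STRING) are assumed
inputs:

    import hashlib

    def cert_id_hashes(issuer_name_der: bytes, issuer_spk_bits: bytes,
                       alg=hashlib.sha256):
        # RFC 6960: issuerNameHash is computed over the DER issuer Name;
        # issuerKeyHash over the subjectPublicKey BIT STRING contents
        # (excluding tag, length, and the unused-bits octet). Moving from
        # SHA-1 to SHA-256 just swaps the hash function.
        return alg(issuer_name_der).digest(), alg(issuer_spk_bits).digest()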

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.4 Proposal: Use language of capability throughout

2016-12-16 Thread Brian Smith
Gervase Markham  wrote:
> On 10/12/16 21:25, Brian Smith wrote:
>> Again, it doesn't make sense to say that the forms of names matter for
>> name constraints, but don't matter for end-entity certificates. If an
>> end-entity certificate doesn't contain any names of the forms dNSName,
>> iPAddress, SRVName, rfc822Name, then it shouldn't be in scope.
>
> Why would it have id-kp-serverAuth or id-kp-emailProtection and not have
> any names of those forms?

I'm more thinking of certificates that don't have an EKU extension but
do have names of those forms. Such certificates should be in scope.

Otherwise, a CA can easily issue a certificate that is trusted by
every browser except Firefox, which is out of scope for Mozilla's
CA program. A CA might do this, for example, if Mozilla were being
more difficult than other root stores and/or the customer in question
doesn't care if the site works in Firefox or not.

>> Also, the way that the text is worded the above means that an
>> intermediate certificate that contains anyExtendedKeyUsage in its EKU
>> would be considered out of scope of Mozilla's policy. However, you
>> need to have such certificates be in scope so that you can forbid them
>> from using anyExtendedKeyUsage.
>
> Well, there are two responses to that.
>
> Firstly, no. The certs in scope are: "Intermediate certificates ...
> which are not technically constrained such that they are unable to issue
> working server or email certificates." If an intermediate certificate
> has an EKU with anyEKU, it is able to issue working server or email
> certificates. So it's in scope. (This utilises my use of "working"
> rather than "trusted", as noted in my previous email.)

What does "working" mean? If I were a CA I would interpret "working"
to mean "works in Firefox" which would then allow me to issue
certificates that violate Mozilla's CA policies by issuing them from
an intermediate that has (only) anyExtendedKeyUsage, so that they work
in every browser except Firefox and are out of scope of your policy.

> But secondly, I'm not banning the use of anyEKU, because Firefox doesn't
> trust cert chains that rely on it, so there's no need to ban it. Is there?

Again, the reason for banning anyEKU is to prevent, through policy,
CAs from using/issuing intermediate certificates that work in every
browser except Firefox, for whatever reason (most likely, to work
around a CA policy disagreement).

Cheers,
Brian
-- 
https://briansmith.org/


Re: Taiwan GRCA Root Renewal Request

2016-12-15 Thread Brian Smith
Kathleen Wilson  wrote:
> How about the following?

That sounds right to me.

It is important to fix the DoS issue with the path building when there
are many choices for the same subject. SKI/AKI matching only fixes the
DoS issue for benign cases, not malicious cases. Therefore some way of
limiting the resource usage without relying on AKI/SKI matching is
needed.

I'm not sure how to incorporate the possibility of the issue being
fixed into your text.

Cheers,
Brian


Re: Taiwan GRCA Root Renewal Request

2016-12-14 Thread Brian Smith
On Tue, Dec 13, 2016 at 12:36 PM, Kathleen Wilson  wrote:
> Question: Do I need to update
> https://wiki.mozilla.org/CA:How_to_apply#Root_certificates_with_the_same_subject_and_different_keys ?

That description seems to have been written to describe the behavior
of the old, non-libpkix, NSS verification code. NSS's libpkix probably
works differently than that. Also, that description is not accurate
and is somewhat misleading for mozilla::pkix.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.4 Proposal: Require all OCSP responses to have a nextUpdate field

2016-12-10 Thread Brian Smith
Gervase Markham  wrote:
> On 08/12/16 12:46, Brian Smith wrote:
>> Are you intending to override the BR laxness for maximum OCSP lifetime
>> for intermedaites, or just match the BR requirements?
>
> The wider context of this section includes an "For end-entity
> certificates:". So the wording as proposed matches the BRs in terms of
> duration.

OK. This means that the policy isn't really sufficient for use with
the OCSP multi-stapling extension. Multi-stapling only works well when
the OCSP responses for the intermediate CA certificates are treated
like what is proposed for end-entity certificates w.r.t. nextUpdate.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.4 Proposal: Use language of capability throughout

2016-12-10 Thread Brian Smith
On Thu, Dec 8, 2016 at 10:46 AM, Gervase Markham  wrote:
> We want to change the policy to make it clear that whether a cert is
> covered by our policy or not is dependent on whether it is technically
> capable of issuing server certs, not whether it is intended by the CA
> for issuing server certs.

I'll quote part of the proposed change here:

> 2. Intermediate certificates which have at least one valid, unrevoked chain up
> to such a CA certificate and which are not technically constrained such
> that they are unable to issue server or email certificates. Such technical
> constraints could consist of either:
> * an Extended Key Usage (EKU) extension which does not contain either of the
>   id-kp-serverAuth and id-kp-emailProtection EKUs; or:
> * name constraints which do not allow SANs of any of the following types:
>   dNSName, iPAddress, SRVName, rfc822Name
>
> 3. End-entity certificates which have at least one valid, unrevoked chain up 
> to
> such a CA certificate through intermediate certificates which are all in
> scope, such end-entity certificates having either:
> * an Extended Key Usage (EKU) extension which contains one or more of the
>   id-kp-serverAuth and id-kp-emailProtection EKUs; or:
> * no EKU extension.

Again, it doesn't make sense to say that the forms of names matter for
name constraints, but don't matter for end-entity certificates. If an
end-entity certificate doesn't contain any names of the forms dNSName,
iPAddress, SRVName, rfc822Name, then it shouldn't be in scope.

Also, the way that the text is worded the above means that an
intermediate certificate that contains anyExtendedKeyUsage in its EKU
would be considered out of scope of Mozilla's policy. However, you
need to have such certificates be in scope so that you can forbid them
from using anyExtendedKeyUsage.

Cheers,
Brian
--
https://briansmith.org/


Re: Policy 2.4 Proposal: Use language of capability throughout

2016-12-10 Thread Brian Smith
Gervase Markham  wrote:
> On 08/12/16 13:06, Brian Smith wrote:
>> In particular, I suggest replacing "unable to issue server or email
>> certificates" with "unable to issue *trusted* server or email
>> certificates" or similar.
>
> I think I would prefer not to make that tie, because the obvious
> question is "trusted in which version of Firefox"? I would prefer to
> modify Firefox and the policy to match, but have the ability to skew
> those two updates as necessary, rather than tie the policy to what
> Firefox does directly.

"Unable to issue" means "unable to sign with the private key" which
can only happen if they don't have the private key. But they do have
the private key so they're always able to issue certificates with any
contents they want. Thus "unable to issue" is not a useful criterion,
since no CA meets it, and so you need a different criterion.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Taiwan GRCA Root Renewal Request

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> Just to help me be clear: the request is for the inclusion of a root
> with the same DN as a previous root, which will still be included after
> the addition? Or the problem with duplicate DNs occurs further down the
> hierarchy?

Some people claimed some software may be unable to cope with two
different CA certificates with the same subject DNs. Nobody claimed
that Firefox is unable to cope with two CA certificates having the
same subject DN. It should work fine in Firefox because Firefox will
attempt every CA cert it finds with the same DN.

One caveat: If there are "too many" CA certificates with the same
subject DN, Firefox will spend a very long time searching through
them. This is a bug in Firefox that's already on file.

> Does Firefox build cert chains using DNs, or using Key Identifiers as
> Wen-Cheng says it should? I assume it's the former, but want to check.

Firefox doesn't even parse the key identifiers. Using the key
identifiers is only helpful when a CA does the thing that this
particular CA does, using the same subject DN for multiple CA
certificates, to prevent the "too many" problem mentioned above.

I'm unconvinced that it is worthwhile to add the Key Identifier stuff
just to accommodate this one public CA plus any private CAs that do
similarly. I think it's better to ask this CA to instead do things the
way all the other public CAs do (AFAIK). In other words, this is kind
of where the Web PKI diverges from PKIX.

However, the CA changing its practices could be done on a
going-forward basis; the existing instances shouldn't be problematic
and so I don't think they should be excluded on the basis of what they
already did.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.4 Proposal: Use language of capability throughout

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> We want to change the policy to make it clear that whether a cert is
> covered by our policy or not is dependent on whether it is technically
> capable of issuing server certs, not whether it is intended by the CA
> for issuing server certs.

NIT: The issue isn't whether it is technically capable of *issuing* a
cert, but whether the certificates it issues are trusted by Firefox (or, more
abstractly, TLS certificates trusted by a TLS client or email
certificates trusted by an S/MIME-capable email client).

In particular, I suggest replacing "unable to issue server or email
certificates" with "unable to issue *trusted* server or email
certificates" or similar.

Cheers,
Brian

> Until we change Firefox to require id-kp-serverAuth, the policy will
> define "capable" as "id-kp-serverAuth or no EKU".

This would allow anyExtendedKeyUsage in a way that isn't what you
intend, AFAICT. I suggest "without an EKU constraint that excludes
id-kp-serverAuth." This suggested new wording doesn't explicitly allow
anyExtendedKeyUsage to be used, nor does it exclude a cert with
anyExtendedKeyUsage from being in scope, but otherwise accomplishes
the same thing.

More generally, I suggest you use the wording in my latest message in
the id-kp-serverAuth thread. In particular, id-kp-serverAuth doesn't
apply to email certificates; id-kp-emailProtection does. Thus, you
need to consider which EKU, which trust bit, and which types of names
are relevant separately for email and TLS.

Cheers,
Brian


Re: Policy 2.4 Proposal: Require all OCSP responses to have a nextUpdate field

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> Add a requirement that every OCSP response must have a nextUpdate field.
> This is required to ensure that OCSP stapling works reliably with all
> (at least most) server and client products.
>
> Proposal: update the second bullet in point 3 of the Maintenance section
> so that the last sentence reads:
>
> OCSP responses from this service must have a defined value in the
> nextUpdate field, and it must be no more than ten days after the
> thisUpdate field.

The baseline requirements have different requirements for end-entity
and intermediate certificates. They require the nextUpdate field to be
no more than 10 days after the thisUpdate field, but they don't have
the same requirement for intermediates.

Are you intending to override the BR laxness for maximum OCSP lifetime
for intermediates, or just match the BR requirements?

If you are intending to be stricter than the BRs require, then your
change sounds good but maybe call out specifically that this is
stricter for intermediates than what the BRs require.

Otherwise, if you're intending to match the BRs then I would remove
the ", and it must be no more than ten days after the thisUpdate
field."

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> On 05/12/16 12:43, Brian Smith wrote:
>> However, I do think that if a CA certificate is name constrained to not
>> allow any dNSName or iPAddress names, and/or it EKU that doesn't contain
>> id-kp-serverAuth, then it shouldn't be in scope of the proposal. Either
>> condition is sufficient.
>
> Can we get a whitelist of types we care about? Note that we care about
> email as well as server certs.
>
> dNSName
> iPAddress (this covers both v4 and v6?)
> rfc822Name
> SRVName

Here are the choices (from RFC 5280):

GeneralName ::= CHOICE {
otherName   [0] OtherName,
rfc822Name  [1] IA5String,
dNSName [2] IA5String,
x400Address [3] ORAddress,
directoryName   [4] Name,
ediPartyName[5] EDIPartyName,
uniformResourceIdentifier   [6] IA5String,
iPAddress   [7] OCTET STRING,
registeredID[8] OBJECT IDENTIFIER }

Note that RFC 4985 defines srvName as an otherName { id-on 7}. See
also 
http://www.iana.org/assignments/smi-numbers/smi-numbers.xhtml#smi-numbers-1.3.6.1.5.5.7.8

If the issuing CA is trusted for email (as determined by root email
trust bit, and there are no EKU constraints that exclude
id-kp-emailProtection), then a certificate with an rfc822Name would be
in scope, unless all such names were excluded by the name constraints.

If the issuing CA is trusted for TLS (as determined by root SSL trust
bit, and there are no EKU constraints that exclude id-kp-serverAuth),
then a certificate with a dNSName, iPAddress, or srvName (a subtype of
otherName) would be in scope, unless all such names were excluded by
the name constraints.

Not sure about whether you would want to include the URL type.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> On 07/12/16 12:44, Brian Smith wrote:
>> Notice in the BRs that the KeyUsage extension (not to be confused with the
>> ExtendedKeyUsage extension we're talking about here) is optional. Why is it
>> OK to be optional? Because the default implementation allows all usages,
>> including in particular the usages that browsers need.
>
> The fact that some defaults are suitable doesn't mean that all defaults
> are suitable. You are assuming what you seek to prove.

In your proposal, an end-entity certificate is allowed to have any
EKUs in addition to id-kp-serverAuth, right? So, all EKUs are indeed
acceptable and so the default is acceptable.

> Changing the BRs in this way would (arguably, as the scope of the BRs is
> a matter of ongoing debate, something we hope this line of work will
> eventually clarify) bring a whole load of certs which are not currently
> issued under the BRs and which aren't supposed to be under the BRs,
> under the BRs.

If a certificate is in scope of the BRs then it must conform to the
requirements. In particular, it isn't the case that any certificate
that conforms to the requirements is in scope. Therefore, loosening
the requirements doesn't change the scope.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> On 05/12/16 12:43, Brian Smith wrote:
>> Let's consider the cases:
>>
>> A root CA: It is in scope if it has the SSL trust bit.
>>
>> An intermediate CA: It is in scope unless all the trusted certificates
>> issued for it have an EKU that excludes id-kp-serverAuth.
>
> No; it's in scope unless it has constraints which prevent the issue of
> (or rather, the trust of) certificates which contain id-kp-serverAuth.

Note I said "issued for", not "issued by". In particular, we are
saying the same thing, except that my statement acknowledges that an
intermediate CA may have multiple (intermediate) certificates with
different EKU (or name) constraints.

>> This is true regardless of whether you require an explicit id-kp-serverAuth
>> in the end-entity certificates, and/or if you base it on the subjectAltName
>> entries like I suggest, right? Because, at any time, the CA could issue an
>> end-entity certificate with id-kp-serverAuth in its EKU and/or a
>> certificate with a dNSName or iPAddress in its subjectAltNames.
>
> Right, so your initial formulation is not correct. It's in scope whether
> or not "all the trusted certificates issued for it [so far] have an EKU
> that excludes id-kp-serverAuth".

It's in scope regardless of the contents of the certificates issued
*by* it. The certificates issued *for* a CA are the intermediate CA
certificates that chain to it. The certificates issued *by* a CA are
the end-entity (or intermediate, if no path length constraints)
certificates.

Note that your proposal is talking about requiring id-kp-serverAuth in
the end-entity certificates issued *by* the CA, which shouldn't matter
insofar as determining scope is concerned.

>> However, I do think that if a CA certificate is name constrained to not
>> allow any dNSName or iPAddress names, and/or its EKU doesn't contain
>> id-kp-serverAuth, then it shouldn't be in scope of the proposal. Either
>> condition is sufficient.
>
> Are there not other relevant name types, such as svrName (or whatever
> it's called)?

Fair point. But the constraint can be specified as a whitelist instead
of a blacklist: If the subjectAltName (and subject CN) consists only
of emailAddress and/or other non-webby names then it isn't in scope,
otherwise it is in scope.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-07 Thread Brian Smith
Rob Stradling  wrote:

> Mozilla's CA Certificate Inclusion Policy already requires that "issuance
> of certificates to be used for SSL-enabled servers must also conform to"
> the BRs, and most other browser providers already require this too.
>
> For Subscriber Certificates, the CABForum BRs already require that...
>   "F. extKeyUsage (required)
>Either the value id‐kp‐serverAuth [RFC5280] or id‐kp‐clientAuth
>[RFC5280] or both values MUST be present."
>
> Since the policy already requires #3, ISTM that the technical
> implementation should enforce #3 (unless there's a really good reason not
> to).
>

The policy (including the BRs) can and should be changed.

Notice in the BRs that the KeyUsage extension (not to be confused with the
ExtendedKeyUsage extension we're talking about here) is optional. Why is it
OK to be optional? Because the default implementation allows all usages,
including in particular the usages that browsers need.

Similarly, why are name constraints optional? Because the default is no
name constraints. Why are policy constraints optional? Because the default
is no constraints. In all these cases the defaults might not be as strict
as possible, but they work out OK.


> For it to make any sense to not enforce #3 technically, there would need
> to be cross-industry agreement (and a corresponding update to the BRs) that
> end-entity serverAuth certs need not contain the EKU extension. Good luck
> with that!!
>

I would expect it to be easier than most other changes to the BRs, because
it doesn't require anybody to do any work.


> How much effort should we go to just to shave 21 bytes off the size of
> each end-entity serverAuth cert?
>

My proposal requires as close to zero effort as any proposal to change the
BRs.

In isolation, shaving off 21 bytes isn't a huge win. However, IIRC, based
on the last time we measured this, combined with other changes it adds up,
on average, to more than the size of the SCTs that we're adding to
certs, and/or less than the size of an additional OCSP response (without
embedded signing cert), and/or the cost of a minimal OCSP response signing
cert.

Why don't browsers support OCSP multi-stapling (OCSP for intermediate CA
certs)? Part of the reason is that it would be inefficient because
certificates and OCSP responses are too big. Also some choose to avoid
using X.509 certificates completely due to size issues.

People are working on certificate compression to make certificates smaller.
See, for example, the messages in this thread:
https://www.ietf.org/mail-archive/web/tls/current/msg22065.html. Also see
Google's QUIC protocol, which implements compression. Unfortunately, not
every implementation can support GZIP-based compression, and it's a good
idea to minimize the size of the decompressed certs in any case. Also see
the work that BoringSSL/Chrome is doing to de-dupe certs in memory because
certs are taking up too much memory.

Also, like I said in my previous message, it seems like requiring the EKU
in the end-entity certificates doesn't actually solve the problem that it
was proposed to solve, so I'm not sure if there is any motivation for
requiring it now.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-05 Thread Brian Smith
Gervase Markham  wrote:

> On 04/12/16 19:11, Brian Smith wrote:
> > If certificates without an EKU have dNSName or iPAddress subjectAltName
> > entries, then they should be considered in scope. Otherwise they don't need
> > to be considered in scope as long as Firefox doesn't use the Subject CN as
> > a dNSName. You've already started down the path of fixing the Subject CN
> > issue in https://bugzilla.mozilla.org/show_bug.cgi?id=1245280 and maybe
> > elsewhere.
>
> That would be an alternative way to do it. The problem is that if you
> try and do it this way, the issuing CA is always in scope for all its
> issuances, because whether the certs it issues have these features is
> inevitably a matter of CA policy, and could change at any time.
> Therefore, the issuing CA has to be run in a BR-compliant way all the time.
>

Let's consider the cases:

A root CA: It is in scope if it has the SSL trust bit.

An intermediate CA: It is in scope unless all the trusted certificates
issued for it have an EKU that excludes id-kp-serverAuth.

This is true regardless of whether you require an explicit id-kp-serverAuth
in the end-entity certificates, and/or if you base it on the subjectAltName
entries like I suggest, right? Because, at any time, the CA could issue an
end-entity certificate with id-kp-serverAuth in its EKU and/or a
certificate with a dNSName or iPAddress in its subjectAltNames.


> Doing it based on the technical capabilities of the issuing CA allows us
> to say "we don't care about any certs this CA issues", rather than "we
> might care about some of the certs this CA has issued; we now have to
> find and examine them all to see".
>

I do very much agree with this! But, what matters is the constraints placed
on the CA's certificate, not on the end-entity certificates it issues,
right?

> Do you know what kind of impact it would have on the 60+ CAs in
> our root program to tell them that they have to reissue every
> intermediate in their publicly-trusted hierarchies to contain
> non-server-auth name constraints?
>

I wasn't suggesting anything to do with name constraints.

However, I do think that if a CA certificate is name constrained to not
allow any dNSName or iPAddress names, and/or its EKU doesn't contain
id-kp-serverAuth, then it shouldn't be in scope of the proposal. Either
condition is sufficient.



> > AFAICT almost all Mozilla software except for Firefox and Thunderbird,
> > would still trust the EKU-less certificates for id-kp-serverAuth. Thus
> > requiring an explicit id-kp-serverAuth in Firefox wouldn't even have the
> > intended ramifications for all of Mozilla's products.
>
> Are you talking about Firefox for iOS? Or something else?
>

Besides Firefox for iOS, most other Mozilla software, such as software
written in Go or Python or anything except {Firefox, Thunderbird} for {Mac,
Windows, Linux}. IIUC, Firefox for Android sometimes contacts https://
servers using the native Android software stack, which won't require an
explicit EKU either.


> > Also, pretty much all non-Mozilla software is using RFC 5280 semantics
> > already. So, such a change wouldn't do anything to help non-Mozilla
> > software nor even all of Mozilla's products.
>
> Well, our policy doesn't take explicit account of non-Mozilla software :-)
>

That's true, but at the same time we want to have standardized behavior
right? Mozilla is or will be asking people to go beyond existing standards
by:

1. Honoring Microsoft's semantics for EKU in intermediates.
2. Dropping support for interpreting the subject CN as a domain name or IP
address.
3. Maybe requiring an explicit EKU in the end-entity certificate.

My point is that #1 and #2 are already sufficient to solve this problem.
Let's make it easy for people to agree with Mozilla, by not requiring more
than is necessary, #3.


> I want the scope of what our software trusts to match the scope of what
> our policy controls, and I want that united scope to both incorporate
> all or the vast majority of certificates people are actually using for
> server-auth, and to have clear and enforceable boundaries which are
> administratively convenient for CAs and us. I think requiring
> id-kp-serverAuth does that.
>

I think my proposal does the same, but in a way that's easier for other
software to adopt.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-04 Thread Brian Smith
Gervase Markham  wrote:

> On 03/12/16 03:42, Brian Smith wrote:
> > The solution to this problem is to get rid of the idea of "intent" from the
> > CA policy (including the baseline requirements, or in spite of the BRs if
> > the BRs cannot be changed), so that all that matters is the RFC 5280
> > "trusted for" semantics.
>
> We intend to do that for the Mozilla policy. However, that then widens
> the scope of the policy massively. It makes it cover, if I remember the
> example correctly, millions of EKU-less certs on smart cards used for a
> government ID program in Europe. These are not intended for server use,
> but they are trusted for it, and so a misissuance here such that
> something appears in a CN or SAN that is domain-name-like would mean
> that cert could be used for a server.
>

If certificates without an EKU have dNSName or iPAddress subjectAltName
entries, then they should be considered in scope. Otherwise they don't need
to be considered in scope as long as Firefox doesn't use the Subject CN as
a dNSName. You've already started down the path of fixing the Subject CN
issue in https://bugzilla.mozilla.org/show_bug.cgi?id=1245280 and maybe
elsewhere.
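
A sketch of the scope rule I'm suggesting for EKU-less certificates (the
name-type strings stand in for the parsed GeneralName choices):

    WEBBY_NAME_TYPES = {"dNSName", "iPAddress"}

    def eku_less_cert_in_scope(san_types):
        # An EKU-less certificate is in scope only if it carries names
        # that are usable on the web.
        return any(t in WEBBY_NAME_TYPES for t in san_types)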


> This is why a change to "trusted for" rather than "intended for" needs
> to be accompanied by a change to explicitly require id-kp-serverAuth, in
> order to keep the scope correct, and stop the Mozilla policy extending
> to cover certificates it's not supposed to cover and which CAs don't
> want it to cover.
>

See above. I bet that you can make the subjectAltName restrictions tighter
so that "trusted for" already works without requiring an explicit EKU.


> Requiring that every issuance under the publicly-trusted roots which is
> using no EKU and which is not intended for server auth change to use an
> EKU which explicitly does not include id-kp-serverAuth would have
> unknown ramifications of unknown size.


This is a textbook instance of FUD.


> Changing both the Mozilla policy
> and Firefox to require id-kp-serverAuth is reasonably confidently known
> to have only minor ramifications, and we know what they will be.
>

AFAICT almost all Mozilla software except for Firefox and Thunderbird,
would still trust the EKU-less certificates for id-kp-serverAuth. Thus
requiring an explicit id-kp-serverAuth in Firefox wouldn't even have the
intended ramifications for all of Mozilla's products.

Also, pretty much all non-Mozilla software is using RFC 5280 semantics
already. So, such a change wouldn't do anything to help non-Mozilla
software nor even all of Mozilla's products.


> >> The advantage of doing this is that it makes it much easier to scope our
> >> root program to avoid capturing certs it's not meant to capture.
> >
> > This is not true. Since no EKU extension implies id-kp-serverAuth, certs
> > without an EKU extension or with an EKU extension containing
> > id-kp-serverAuth or anyExtendedKeyUsage (even though Firefox doesn't
> > support that) should be within the scope of the program.
>
> But given the current situation, doing that extends the scope of the
> program beyond where we want it to be, and where CAs want it to be.
>

Limiting the scope to things that contain dNSName and iPAddress
subjectAltNames addresses this, right? And, more importantly, it addresses
it in a way that makes sense considering that other software doesn't
require an explicit EKU, because it doesn't rely on Firefox-specific
semantics.

To be clear, I'm not against the idea of Firefox making a technical change
that others have to catch up with, like name constraints. However, that
should be a last resort. There seems to be a better alternative in this
case.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-02 Thread Brian Smith
On Tue, Nov 8, 2016 at 11:58 PM, Gervase Markham  wrote:

> At the moment, Firefox recognises an EE cert as a server cert if it has
> an EKU extension with id-kp-serverAuth, or if it has no EKU at all.
>

The EKU extension indicates the limits of the key usage. A certificate
without an EKU extension has no limits on its key usage. In particular,
when no EKU is present, id-kp-serverAuth is allowed, as far as the CA is
concerned. Many X.509 features are defined this way, where the default is
"no limit"--pretty much all of them. The advantage of omitting these
extensions is that the resulting certificates are smaller. Smaller
certificates are better. Therefore, Mozilla should encourage behaviors that
result in smaller certificates, including in particular omitting the EKU
extension and other extensions where the defaults "work."

The problem is that CAB Forum stuff is defined in terms of "intended for,"
which is different than "trusted for." So, for example, some CAs have
argued that they issue certificates that say they are trusted for
id-kp-serverAuth (because they have no EKU), but since they're not
"intended for" id-kp-serverAuth, the baseline requirements don't apply to
them.

The solution to this problem is to get rid of the idea of "intent" from the
CA policy (including the baseline requirements, or in spite of the BRs if
the BRs cannot be changed), so that all that matters is the RFC 5280
"trusted for" semantics.

> So, it is now possible to change Firefox to mandate the presence of
> id-kp-serverAuth for EE server certs from Mozilla-trusted roots? Or is
> there some reason I've missed we can't do that?
>

I'd like to point out that I've given the above explanation to you multiple
times.


> The advantage of doing this is that it makes it much easier to scope our
> root program to avoid capturing certs it's not meant to capture.
>

This is not true. Since no EKU extension implies id-kp-serverAuth, certs
without an EKU extension or with an EKU extension containing
id-kp-serverAuth or anyExtendedKeyUsage (even though Firefox doesn't
support that) should be within the scope of the program. You simply need to
define the scope of the program in terms of the **technical** semantics.
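
Stated as code, the RFC 5280 semantics described above amount to this
sketch:

    ID_KP_SERVER_AUTH = "1.3.6.1.5.5.7.3.1"
    ANY_EKU = "2.5.29.37.0"  # anyExtendedKeyUsage

    def allows_server_auth(eku_oids):
        # eku_oids is None when the certificate has no EKU extension,
        # which implies no limit on key usage.
        if eku_oids is None:
            return True
        return ID_KP_SERVER_AUTH in eku_oids or ANY_EKU in eku_oids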

Cheers,
Brian


Re: Technically Constrained Sub-CAs

2016-11-21 Thread Brian Smith
Ryan Sleevi  wrote:

> On Mon, Nov 21, 2016 at 11:01 AM, Brian Smith 
> wrote:
> > Absolutely we should be encouraging them to proliferate. Every site that is
> > doing anything moderately complex and/or that wants to use key pinning
> > should be using them.
>
> I do hope you can expand upon the former as to what you see.
> As to the latter, key pinning is viable without the use of TCSCs.


A lot of people disagree, perhaps because they read the text after
"WARNING:" in
https://noncombatant.org/2015/05/01/about-http-public-key-pinning/.

If nothing else, using your own intermediate can help avoid the problems
with Google Chrome's implementation. (FWIW, Firefox's implementation also
can be coerced into behaving as badly as Chrome's, in some situations,
IIRC.)


> > My hypothesis is that CAs would be willing to start selling such
> > certificates under reasonable terms if they weren't held responsible for
> > the things signed by such sub-CAs. It would be good to hear from CAs who
> > would be interested in that to see if that is true.
>
> That would require a change to the BRs, right? So far, no CAs have
> requested such a change, so why do you believe such CAs exist?
>

It would require changes to browsers' policies. Changing the BRs is one way
to do that, but it seems like CAB Forum is non-functional right now so it
might be better to simply route around the BRs.


> Why should a technical document be blocked on the policy document?
>

Nobody said anything about blocking 6962-bis. Removing that one section is
a smaller change than the change Google made to the document just
last week, as far as the practical considerations are concerned.

Regardless, the argument for removing it is exactly your own arguments for
why you don't want to do it in Chrome. Read your own emails to learn more
about my technical objections to it.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Technically Constrained Sub-CAs

2016-11-21 Thread Brian Smith
Gervase Markham  wrote:

> On 18/11/16 19:13, Brian Smith wrote:
> > Regardless, the main point of that message of mine was left out: You
> could
> > limit, in policy and in code, the acceptable lifetime of name-constrained
> > externally-operated sub-CAs
>
> Presumably the "externally-operated" part would need to be policy, or a
> code-detectable marker enforced by policy, because there's no way of
> detecting that otherwise?
>

In another message in this thread, I suggested one way to mark intermediate
certificates as meeting the criteria of a name-constrained
externally-operated sub-CA, using certificate policy OIDs. That proposed
mechanism also ensures externally-operated sub-CAs comply with Mozilla's
technical requirements (e.g. SHA-1 deprecation and future deprecations or
transitions).


>
> > and/or the end-entity certificates they issue
> > strictly, independently of whether it can be done for all certificates,
> and
> > doing so would be at least part of the solution to making
> name-constrained
> > externally-operated sub-CAs actually a viable alternative in the market.
>
> I'm not sure what you mean by "a viable alternative" - I thought the
> concern was to stop them proliferating,


Absolutely we should be encouraging them to proliferate. Every site that is
doing anything moderately complex and/or that wants to use key pinning
should be using them.


> if what's underneath them was
> opaque? And if it's not opaque,


If draft-ietf-trans-rfc6962-bis section 4.2 discourages Mozilla from making
externally-operated name-constrained certificates viable then please have
somebody from Mozilla write to the TRANS list asking for section 4.2 to be
removed from the draft.


> why are they not a viable alternative
> now, and why would restricting their capabilities make them _more_ viable?
>

Go out and try to find 3 different CAs that will sell you a
name-constrained sub-CA certificate where you maintain control of the
private key and with no strings attached (no requirement that you implement
the same technical controls as root CAs or be audited to the same level
as them). My understanding is that you won't be able to find any that will
do so, because if you go off and issue a google.com certificate then
Mozilla and others will then hold the issuing root CA responsible for that.

My hypothesis is that CAs would be willing to start selling such
certificates under reasonable terms if they weren't held responsible for
the things signed by such sub-CAs. It would be good to hear from CAs who
would be interested in that to see if that is true.

To reiterate, I disagree that the name-constraint redaction is bad because
the certificates issued by the externally-operated name-constrained CAs
must be subject to all the terms of browsers' policies, including the BRs.
That kind of thinking is 100% counter to the reason Mozilla created the
exceptions for externally-operated name-constrained CAs in its policy in
the first place. (Similarly, the requirements on externally-operated
name-constrained CAs in the baseline requirements defeat the purpose of
treating them specially.) However, I do agree that the technical details
regarding (externally-operated) name-constrained CAs in Mozilla's policy
and in draft-ietf-trans-rfc6962-bis are insufficient, and that's why I
support (1) removing section 4.2 from draft-ietf-trans-rfc6962-bis-20, and
(2) improving Mozilla's policy and the BRs so that the technical details do
become sufficient. After that we can then see if it makes sense to revise
rfc6962-bis to add redaction based on the revised details of how root
stores treat name-constrained externally-operated sub-CAs.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Technically Constrained Sub-CAs

2016-11-18 Thread Brian Smith
Gervase Markham  wrote:

> RFC 6962bis (the new CT RFC) allows certs below technically-constrained
> sub-CAs (TCSCs) to be exempt from CT. This is to allow name privacy.
> TCSCs themselves are also currently exempt from disclosure to Mozilla in
> the Common CA Database.
>
> If this is the only privacy mechanism available for 6962bis,


First, here's the RFC 6962-bis draft:
https://tools.ietf.org/html/draft-ietf-trans-rfc6962-bis-20#section-4.2.

Please see my other messages in this thread, where I pointed out that
Mozilla's own definition of externally-operated name-constrained sub-CAs
should be improved because name constraints don't mitigate every serious
concern one might have regarding technically-constrained sub-CAs. I think
that's clearly true for what RFC 6962-bis is trying to do with name
constraints too.

I think there might be ways to fix the name-constrained sub-CA stuff for
RFC 6962-bis, but those kinds of improvements are unlikely to happen in RFC
6962-bis itself, it seems. They will have to happen in an update to RFC
6962-bis.

I also disagree with Google's position that it is OK to leave bad stuff in
the spec and then ignore it. The WGLC has passed, but that doesn't mean
that the spec can't be changed. Google's already proposed a hugely
significant change to the spec in the last few days (which I support),
which demonstrates this.

Accordingly, I think the exception mechanism for name-constrained sub-CAs
(section 4.2) should be removed from the spec. This is especially the case
if there are no browsers who want to implement it. If the draft contains
things that clients won't implement, then that's an issue that's relevant
for the IETF last call, as that's against the general IETF philosophy of
requiring running code.

Cheers,
Brian


Re: Technically Constrained Sub-CAs

2016-11-18 Thread Brian Smith
Gervase Markham  wrote:

> On 18/11/16 01:43, Brian Smith wrote:
> > The fundamental problem is that web browsers accept certificates with
> > validity periods that are years long. If you want to have the agility to
> > fix things with an N month turnaround, reject certificates that are valid
> > for more than N months.
>
> That's all very well to say. The CAB Forum is deadlocked over a proposal
> to reduce the max validity of everything to 2 years + 3 months; some
> people like it because it removes a disadvantage of EV (which already
> has this limit), others don't like it because people like not having to
> change their cert and are willing to pay for longer. Mozilla is in
> support, but without agreement, we can hardly implement unilaterally -
> the breakage would be vast.
>

Regardless, the main point of that message of mine was left out: You could
limit, in policy and in code, the acceptable lifetime of name-constrained
externally-operated sub-CAs and/or the end-entity certificates they issue
strictly, independently of whether it can be done for all certificates, and
doing so would be at least part of the solution to making name-constrained
externally-operated sub-CAs actually a viable alternative in the market.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Technically Constrained Sub-CAs

2016-11-17 Thread Brian Smith
Andrew Ayer  wrote:

> The N month turnaround is only a reality if operators of TCSCs start
> issuing certificates that comply with the new rules as soon as the new
> rules are announced.  How do you ensure that this happens?
>

Imagine that the TCSCs are also required to have a short validity period, N
months. Further, require that each TCSC indicate, using a certificate policy
(as already in the spec, or perhaps some simpler mechanism), the version of
the technical requirements on certificates that that TCSC is trusted for.
Then the end-entity certificates are similarly marked.
Each policy implicitly maps to a period of time for which that policy
applies. At any given time, trusted CAs are only allowed to issue TCSCs
with validity periods that are within the period of time specified by all
policies listed in that TCSC.
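
To make that concrete, here is a minimal sketch, in Python, of the
client-side check this implies; the policy OIDs (from the 2.999 example
arc) and the validity windows are invented for illustration:

from datetime import datetime

# Hypothetical policy OIDs and the validity windows a root program
# assigns to them; both are made up for this sketch.
POLICY_WINDOWS = {
    "2.999.1.1": (datetime(2014, 1, 1), datetime(2016, 1, 1)),  # allow-SHA1
    "2.999.1.2": (datetime(2016, 1, 1), datetime(2017, 1, 1)),  # only-SHA2
}

def tcsc_validity_ok(not_before, not_after, policy_oids):
    # A TCSC's validity period must fall inside the window of every
    # policy it asserts; unknown policies are rejected outright.
    for oid in policy_oids:
        if oid not in POLICY_WINDOWS:
            return False
        start, end = POLICY_WINDOWS[oid]
        if not_before < start or not_after > end:
            return False  # notAfter past a deprecation deadline
    return True

# An allow-SHA1 TCSC must not outlive that policy's 2016-01-01 deadline.
assert not tcsc_validity_ok(datetime(2015, 3, 1), datetime(2016, 3, 1),
                            ["2.999.1.1"])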

Let's say that this was implemented already two years ago. At that time CAs
could still issue SHA-1 certificates, so a TCSC could be issued with an
allow-SHA1 policy that browsers understand to permit SHA-1 issuance. Root
programs require that all such TCSCs expire before January 1, 2016, because
that's when SHA-1 issuance became disallowed. Also, browsers have code in
them that makes certificates without that policy OID untrusted for SHA-1.

Now, let's say I got a TCSC for example.com in March 2015, and I want to
issue SHA-1 certificates, so I ask for that allow-SHA1 policy OID to be
included in my TCSC. That means my certificate will expire in January 2016,
because that's the end date for the allow-SHA1 policy. And also, browsers
would be coded to not recognize that policy OID after January 2016 anyway.

Now, December 2015 rolls around and I get another TCSC for January
2016-January 2017. But, the allow-SHA1 policy isn't allowed for that
validity period, so my TCSC won't have that policy; instead it will have
the only-SHA2 policy.

Now, here are my choices:

* Do nothing. My intermediate will expire, and all my servers' certificates
will become untrusted.

* Issue new SHA-1 end-entity certificates from my new only-SHA2
intermediate. But, browsers would not trust these because even if the
end-entity cert contains the allow-SHA1 policy OID, my TCSC won't include
it.

* Issue new SHA-2 end-entity certificates from my new only-SHA2
intermediate.

The important aspects with this idea are:
1. Every TCSC has to be marked with the policies that they are to be
trusted for.
2. Root store policies assign a validity period to each policy.
3. Browsers must enforce the policies in code, and the code for enforcing a
policy must be deployed in production before the end (or maybe the
beginning) of the policy's validity period.
4. A TCSC's validity period must be within all the validity periods for
each policy they are marked with; that is, a TCSC's notAfter must never be
allowed to be after any deprecation deadline that would affect it.

Note that for the latest root store policies, we may not know the end date
of the validity period for the policy. This is where we have to choose an
amount of time, e.g. 12 months, and say we're never going to deprecate
anything with less than 12 months' notice (unless there's some emergency or
whatever), and so we'll allow TCSCs issued today for the current policies
to be valid for up to 12 months.

Also note that the existing certificate policy infrastructure used for the
EV indicator could probably be used, so the code changes to certificate
validation libraries would likely be small.

Thoughts?

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Technically Constrained Sub-CAs

2016-11-17 Thread Brian Smith
Ryan Sleevi  wrote:

> On Thu, Nov 17, 2016 at 3:12 PM, Nick Lamb  wrote:
> > There's a recurring pattern in most of the examples. A technical
> counter-measure would be possible, therefore you suppose it's OK to
> screw-up and the counter-measure saves us. I believe this is the wrong
> attitude. These counter-measures are defence in depth. We need this defence
> because people will screw up, but that doesn't make screwing up OK.
>
> I think there's an even more telling pattern in Brian's examples -
> they're all looking in the past. That is, the technical mitigations
> only exist because of the ability of UAs to change to implement those
> mitigations, and the only reason those mitigations exist is because
> UAs could leverage the CA/B Forum to prevent issues.
>
> That is, imagine if this was 4 years ago, and TCSCs were the vogue,
> and as a result, most major sites had 5 year 1024-bit certificates.
> The browser wants the lock to signify something - that there's some
> reasonable assurance of confidentiality, integrity, and authenticity.
> Yet neither 5 year certs nor 1024-bit certificates met that bar.
>

The fundamental problem is that web browsers accept certificates with
validity periods that are years long. If you want to have the agility to
fix things with an N month turnaround, reject certificates that are valid
for more than N months.

In fact, since TCSCs that use name constraints as the technical constraint
basically do not exist, you could even start enforcing stricter rules for
them than for other certificates. For example, externally-operated name
constrained intermediates could be limited to 12 months of validity even if
other certificates aren't so restricted. Just make sure you actually
enforce it in the browser.
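
As a minimal sketch of what that browser-side enforcement might look like
(assuming a pyca/cryptography-style certificate object; the 12-month figure
is just the example above, not an agreed number):

from datetime import timedelta

# Cap the lifetime of externally-operated name-constrained intermediates,
# regardless of what the issuing CA actually signed. 366 days allows for
# a leap year; the exact cap is a policy choice.
MAX_TCSC_LIFETIME = timedelta(days=366)

def accept_external_tcsc(intermediate):
    lifetime = intermediate.not_valid_after - intermediate.not_valid_before
    return lifetime <= MAX_TCSC_LIFETIME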

If you have a better plan for getting people to actually issue TCSCs of the
name constrained variety, let's hear it.

Cheers,
Brian.
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Technically Constrained Sub-CAs

2016-11-17 Thread Brian Smith
Nick Lamb  wrote:

> There's a recurring pattern in most of the examples. A technical
> counter-measure would be possible, therefore you suppose it's OK to
> screw-up and the counter-measure saves us.


Right.


> I believe this is the wrong attitude. These counter-measures are defence
> in depth. We need this defence because people will screw up, but that
> doesn't make screwing up OK.
>

With that attitude, CAs would never issue intermediate CAs with name
constraints as the technical constraint on reasonable terms (not costing a
fortune, not forcing you to let the issuing CA have the private key), and
key pinning would remain too dangerous for the vast majority of sites to
ever deploy. Giving up those things would be a huge cost. What's the actual
benefit to end users in giving them up?

(Note: Key pinning isn't the only advantage to being able to freely operate
your own intermediate CA.)

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Technically Constrained Sub-CAs

2016-11-17 Thread Brian Smith
On Mon, Nov 14, 2016 at 6:39 PM, Ryan Sleevi  wrote:

> As Andrew Ayer points out, currently, CAs are required to ensure TCSCs
> comply with the BRs. Non-compliance is misissuance. Does Mozilla share
> that view? And is Mozilla willing to surrender the ability to detect
> misissuance, in favor of something which clearly doesn't address the
> use cases for redaction identified in the CA/Browser Forum and in the
> IETF?
>

I don't agree that a third-party TCSC failing to conform to the BRs should
be considered misissuance in every case, when the technical constraint is
name constraints.

Let's run with an example where I am Example Corp, I own example.com, I
want to get a name-constrained CA certificate for example.com and
*.example.com.

Let's say I screw up something and accidentally issue a certificate from my
sub-CA for google.com or addons.mozilla.org. Because of the name
constraints, this is a non-issue and shouldn't result in any sanctions on
the original root CA or Example Corp. (Note that this means that relying
parties need to implement name constraints, as Mozilla products do, and so
this should be listed as a prerequisite for using Mozilla's trust anchor
list in any non-Mozilla product.)

Let's say I issue a SHA-1-signed certificate for
credit-card-readers.example.com. Again, that's 100% OK, if unfortunate,
because after 2017-1-1 one shouldn't be using Mozilla's trust store in a
web browser or similar consumer product if they accept SHA-1-signed
certificates.

Let's say that the private key for https://www.example.com gets
compromised, but I didn't create any revocation structure so I can't revoke
the certificate for that key. That's really, really unfortunate, and that
highlights a significant problem with the definition of name-constrained
TCSCs now. In particular, it should be required that the name-constrained
intermediate be marked using this mechanism
https://tools.ietf.org/html/rfc7633#section-4.2.2 in order to be considered
technically-constrained.
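
To illustrate, here is a sketch of such an intermediate built with the
pyca/cryptography library: name-constrained to example.com and marked with
the RFC 7633 TLS feature (status_request, i.e. must-staple) extension. The
keys, names, and dates are placeholders, not a recommendation.

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

issuer_key = ec.generate_private_key(ec.SECP256R1())  # root's key (placeholder)
subca_key = ec.generate_private_key(ec.SECP256R1())   # Example Corp's key

tcsc = (
    x509.CertificateBuilder()
    .subject_name(x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp Sub-CA")]))
    .issuer_name(x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")]))
    .public_key(subca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime(2016, 11, 1))
    .not_valid_after(datetime.datetime(2017, 11, 1))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # A dNSName constraint of "example.com" covers example.com and every
    # subdomain, so *.example.com needs no separate entry.
    .add_extension(x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("example.com")],
        excluded_subtrees=None), critical=True)
    # RFC 7633 section 4.2.2: mark the chain as must-staple.
    .add_extension(x509.TLSFeature([x509.TLSFeatureType.status_request]),
                   critical=False)
    .sign(issuer_key, hashes.SHA256())
)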

Let's say I issue a malformed certificate from my name-constrained
intermediate and it gets rejected. Again, IMO, we simply shouldn't care too
much. The important thing is that implementations don't implement
workarounds to accommodate such broken certificates.

Let's say I issue a SHA-2 certificate that is valid for 20 years from my
name-constrained certificate. Again, that is not good, but it won't matter
as long as clients are rejecting certificates that are valid for too long,
for whatever definition of "too long" is decided.

Why is it so important to be lenient like this for name-constrained TCSCs?
One big reason is that HPKP is dangerous to use now. Key pinning is really
important, so we should fix it by making it less dangerous. The clearest
way to make it safer is to pin only the public keys of multiple TCSCs,
where each public key is in an intermediate issued by multiple CAs. But,
basically no CAs are even offering TCSCs using name constraints as the
technical constraint, which means that websites can't do this, and so for
the most part can't safely use key pinning. Absolving CAs from having to
babysit their customers' use of their certificates will make it more
practical for them to make this possible.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Proposed limited exception to SHA-1 issuance

2016-02-25 Thread Brian Smith
Gervase Markham  wrote:

> On 23/02/16 18:57, Gervase Markham wrote:
> > Mozilla and other browsers have been approached by Worldpay, a large
> > payment processor, via Symantec, their CA. They have been transitioning
> > to SHA-2 but due to an oversight have failed to do so in time for a
> > portion of their infrastructure, and failed to renew some SHA-1 server
> > certificates before the issuance deadline of 31st December 2015.
>
> In relation to this issue, we just published a blog post:
>
> https://blog.mozilla.org/security/2016/02/24/payment-processors-still-using-weak-crypto/


This is all very disappointing. Effectively, Mozilla is punishing,
economically, all of WorldPay's and Symantec's competitors who spent real
money and/or turned down money in an effort to comply with Mozilla's
guidance on SHA-1. Meanwhile, no doubt Symantec receives a hefty fee in
return for issuing these certificates. Thus, Mozilla has effectively
reversed the economic incentives for CAs so that it is profitable to go
against Mozilla's initiatives to improve web security. And, in the course
of doing so, Mozilla has damaged its own credibility and reduced leverage
in enforcing its CA policies going forward.

Even worse, Firefox still hasn't been changed to block SHA-1 certificates
that chain to publicly-trusted CAs with a notBefore date after 2016-01-01.
After I left Mozilla, I continued to work on mozilla::pkix in part to make
it easy for Mozilla to implement such blocking, specifically, so I know as
well as anybody that it is easy to do. If such blocking were implemented
then Firefox users wouldn't even be affected by the above-mentioned
certificates. This was (is) an opportunity for Firefox to lead other
browsers in at least a small part of certificate security. The existing bug
[1] for this was closed when the botched attempt to implement it was
checked in, but it wasn't re-opened when the botched patch was reverted.
I've reopened the bug. It would be great to see somebody working on it.

Even the bug about passively warning users about SHA-1 certificates in the
chain [2] is currently assigned to *nobody*. AFAICT, Google Chrome has been
doing this since 2014. Firefox needs to catch up, at least.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=942515
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1183718

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Name issues in public certificates

2015-11-19 Thread Brian Smith
Peter Bowen  wrote:

> Robin Alden  wrote:
> Given that it doesn't, but that that the BRs say "MUST be either a
> dNSName containing the Fully‐Qualified Domain Name or an iPAddress
> containing the IP address", it is clear we still need to have a valid
> FQDN.  I'll update my scanner to allow "_" in the labels that are not
> registry controlled or in the label that is immediately to the left of
> the registry controlled labels.  Give me a little while and I'll
> upload a revised data set with this fix.


See https://bugzilla.mozilla.org/show_bug.cgi?id=1136616. In mozilla::pkix,
we had to allow the underscore because of AWS.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Name issues in public certificates

2015-11-18 Thread Brian Smith
Peter Bowen  wrote:

> 2) For commonName attributes in subject DNs, clarify that they can only
> contain:
>
> - IPv4 address in dotted-decimal notation (specified as IPv4address
> from section 3.2.2 of RFC 3986)
> - IPv6 address in coloned-hexadecimal notation (specified as
> IPv6address from section 3.2.2 of RFC 3986)
> - Fully Qualified Domain Name or Wildcard Domain Name in the
> "preferred name syntax" (specified by Section 3.5 of RFC1034 and as
> modified by Section 2.1 of RFC1123)
> - Fully Qualified Domain Name or Wildcard Domain Name in containing
> u-labels (as specified in RFC 5890)


> 3) Forbid commonName attributes in subject DNs from containing a Fully
> Qualified Domain Name or Wildcard Domain Name that contains both one
> or more u-labels and one or more a-labels (as specified in RFC 5890).
>

I don't think these rules are necessary, because CAs are already required
to encode all this information in the SAN, and if there is a SAN with a
dNSName and/or iPAddress the browser is required to ignore the subject CNs.
That is, if the certificate has a SAN with a dNSName and/or iPAddress entry,
then it doesn't really matter how the CN is encoded as long as it isn't
misleading.


> If the Forum decides to allow an exception to RFC 5280 to permit IP
> address strings in dNSName general names, then require the same format
> as allowed for common names.
>

That should not be done. As I mentioned in my other reply in this thread,
Ryan Sleevi already described a workaround that seems to work very well:
Encode the IP addresses in the SubjectAltName as iPAddress entries, and
then also encode them as (normalized) ASCII dotted/colon-separated text in
the subject CN, using more than one subject CN if there is more than one IP
address.
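
A sketch of that workaround with the pyca/cryptography library (the
addresses are documentation examples):

import ipaddress
from cryptography import x509
from cryptography.x509.oid import NameOID

addrs = [ipaddress.ip_address("192.0.2.1"), ipaddress.ip_address("2001:db8::1")]

# The real, machine-readable addresses go in the SAN as iPAddress entries...
san = x509.SubjectAlternativeName([x509.IPAddress(a) for a in addrs])

# ...and the same addresses, as normalized text, become one CN each.
subject = x509.Name(
    [x509.NameAttribute(NameOID.COMMON_NAME, str(a)) for a in addrs])

# subject and san would then be fed to a CertificateBuilder as usual.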

By the way, I believe that mozilla::pkix will reject all the invalid names
that you found, except it accepts "_" in dNSNames. If you found some names
that mozilla::pkix accepts that you think are invalid, I would love to hear
about that.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Name issues in public certificates

2015-11-18 Thread Brian Smith
On Tue, Nov 17, 2015 at 4:40 PM, Richard Wang  wrote:

> So WoSign only left IP address issue that we added both IP address and DNS
> Name since some browser have warning for IP address only in SAN.
>

Put the IP addresses in the SAN as an iPAddress and then also put them in
the Subject CN, one CN per IP address. Then all browsers will accept the
certs and they will conform to the baseline requirements (IIUC).

Note that this is Ryan Sleevi's good idea.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update: section 8 of Maintenance Policy

2015-11-06 Thread Brian Smith
Kathleen Wilson  wrote:

> Bug https://bugzilla.mozilla.org/show_bug.cgi?id=1129083 was filed to
> remove support for certs signed using SHA-512-based signatures, but it was
> closed as invalid, and SHA-512 support was fixed via
> https://bugzilla.mozilla.org/show_bug.cgi?id=1155932


A P-256 signature cannot hold an entire SHA-384 or SHA-512 hash; the hash
will get truncated to 256 bits. Similarly, a P-384 signature cannot hold a
SHA-512 hash. While it isn't completely wrong to use a too-big hash, it is
kind of silly to do so.
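
A worked sketch of the truncation rule (per ANSI X9.62 / FIPS 186-4) in
Python:

import hashlib

def ecdsa_hash_to_int(message, hash_name, order_bits):
    # ECDSA uses only the leftmost min(hash_bits, order_bits) bits of
    # the digest as the input to signing and verification.
    digest = hashlib.new(hash_name, message).digest()
    e = int.from_bytes(digest, "big")
    hash_bits = 8 * len(digest)
    if hash_bits > order_bits:
        e >>= hash_bits - order_bits  # the truncation described above
    return e

# With P-256 (a 256-bit group order), a SHA-512 digest is cut down to
# its leftmost 256 bits, so the extra hash output buys nothing.
assert ecdsa_hash_to_int(b"example", "sha512", 256).bit_length() <= 256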

> Bug https://bugzilla.mozilla.org/show_bug.cgi?id=1129077 was filed to
> remove support for certs that use the P-521 curve. But this is still up
> for discussion.

The issue with P-521 is simply one of compatibility with the broadest set
of products. Products basically *have* to support P-256 and P-384 because
that is what CAs are already using. But, lots of products can (and, it
seems, are planning to, or already are) omitting support for P-521. Thus,
even though Mozilla's products support P-521, it is worth steering towards
the more-compatible algorithms.

Also, is NSS's P-521 implementation actually production-quality? Has it
received proper QA? Check out:
https://bugzilla.mozilla.org/show_bug.cgi?id=650338
https://bugzilla.mozilla.org/show_bug.cgi?id=536389
https://bugzilla.mozilla.org/show_bug.cgi?id=325495
https://bugzilla.mozilla.org/show_bug.cgi?id=319252

I've forgotten exactly why now, but I remember thinking that I didn't feel
good about the P-521 implementation. And, IMO, it isn't worth spending time
working on P-521 considering the amount of work that is pending for
Curve25519, P-256, P-384, and Ed448.

I recommend that we change it to the following:
> ~~
> 8. We consider the following algorithms and key sizes to be acceptable and
> supported in Mozilla products:
> - SHA-256, SHA-384, SHA-512;
> - Elliptic Curve Digital Signature Algorithm (using ANSI X9.62) over SECG
> and NIST named curves P-256, P-384, and P-521; and
> - RSA 2048 bits or higher.
> ~~
>

I suggest:
~~
8. We consider the following algorithms and key sizes to be acceptable and
supported in Mozilla products:
- ECDSA using the P-256 curve and SHA-256.
- ECDSA using the P-384 curve and SHA-384.
- RSA using a 2048-bit or larger modulus, using SHA-256, SHA-384, or
SHA-512.
~~




> Another option is to delete this section from Mozilla's policy, because it
> is covered by the Baseline Requirements. However, the Baseline Requirements
> allows for DSA, which Mozilla does not support.
> The “Key Sizes” section of the Baseline Requirements allows for:
> SHA‐256, SHA‐384 or SHA‐512
> NIST P‐256, P‐384, or P‐521
> DSA L= 2048, N= 224 or L= 2048, N= 256
>

I suggest that Mozilla use the text I suggest above, and also propose it to
CABForum as the new CABForum language. Then, if/when CABForum adopts it,
replace the Mozilla policy text with a reference to the CABForum text in a
future version.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Remove Email Trust Bit

2015-10-15 Thread Brian Smith
On Tue, Oct 13, 2015 at 5:04 AM, Kathleen Wilson 
wrote:

> I believe that such a resource commitment would satisfy all of the
> arguments against the Email trust bit that Ryan so eloquently summarized.
> [3]
>
> Is this a fair assessment?
>
> Is there anything else that should be added to the "job description" above?


I think your summary of what needs to be done with respect to the email
trust bit is good.

In an earlier message, you mentioned the idea of splitting the S/MIME
policy into a separate document from the TLS policy. I think that such a
split would be good and I think it should happen early on in the process
for version 2.3 of the policy. In particular, such a split would enable us
to have simpler language in the TLS policy, especially with respect to the
Extended Key Usage (EKU) extension.

I also think it would be good to have CAs apply for the TLS trust bit
separately from the email trust bit. In particular, when it comes time for
the public review of a CA inclusion request or update, I think it would be
better to have a separate email threads for the public discussions of the
granting of the TLS trust bit and the granting of the S/MIME trust bit, for
the same CA.

Note that certificates for TLS and for S/MIME are much more different than
they may first appear. In particular, it is very reasonable to have a
public log of issued certificates for TLS (Certificate Transparency) and
revocation via short-lived certificates and OCSP stapling should eventually
work. However, email certificates often contain personally identifiable
information (PII) and it isn't clear how to deal with that in CT. Also, the
privacy/security trade-off for revocation checking for S/MIME is much
different--much more difficult--than for TLS. So, I expect the technical
aspects of the TLS and S/MIME policies to be quite different going forward.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Align with RFC 3647 now

2015-10-15 Thread Brian Smith
Ryan Sleevi  wrote:

> On Thu, October 15, 2015 12:30 pm, Kathleen Wilson wrote:
> >  It was previously suggested[1] that we align Mozilla's CA Certificate
> >  Policy to RFC 3647, so CAs can compare their CP/CPS side-by-side with
> >  Mozilla's policy, as well as the BRs and audit criteria (such as the
> >  forthcoming ETSI 319 411 series).
>
> Kathleen,
>
> I remain incredibly dubious and skeptical of the proposed value, and thus
> somewhat opposed. Though I've been a big proponent of adopting the 3647
> format for the CA/Browser Forum documents, I don't believe that root store
> requirements naturally fit into that form, nor should they.


I agree with Ryan. The organization of Mozilla's policy is good. The
technical requirements need to be improved. We should focus on improving
the technical requirements, not the organization.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Fwd: Policy Update Proposal: Remove Code Signing Trust Bit

2015-10-02 Thread Brian Smith
On Fri, Oct 2, 2015 at 7:41 AM, Joshua Cranmer 🐧 
wrote:

> On 10/2/2015 11:36 AM, Brian Smith wrote:
>
>> First of all, there is a widely-trusted set of email roots: Microsoft's.
>> Secondly, there's no indication that having a widely-trusted set of email
>> roots *even makes sense*. Nobody has shown any credible evidence that it
>> even makes sense to use publicly-trusted CAs for S/MIME. History has shown
>> that almost nobody wants to use publicly-trusted CAs for S/MIME, or even
>> S/MIME at all.
>>
>
> There is demonstrably more use of S/MIME than PGP. So, by extension of
> your argument, almost nobody wants to use secure email, and there is
> therefore no point in supporting them.


I think it is fair to say the empirical evidence does support the claim
that the vast majority of people don't want to, or can't, use S/MIME or GPG
as it exists today. I do think that almost everybody does want secure
email, though, if we can find a way to give it to them that they can
actually use.


> I do realize that I'm using strong language, but this does feel to me to
> be part of a campaign to intentionally sabotage Thunderbird development
> simply because it's not Firefox


It is much simpler than that: I don't want the S/MIME-related stuff to keep
getting in the way of the SSL-related stuff in Mozilla's CA inclusion
policy. People argue that the S/MIME stuff must keep being in the way of
the SSL-related stuff because of Thunderbird and other NSS-related
projects. I just want to point out that the "dereliction of duty," as you
put it, in the maintenance of that software seems to make that argument
dubious, at best.

Cheers,
Brian
--
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Fwd: Policy Update Proposal: Remove Code Signing Trust Bit

2015-10-02 Thread Brian Smith
-- Forwarded message --
From: Brian Smith 
Date: Thu, Oct 1, 2015 at 7:15 AM
Subject: Re: Policy Update Proposal: Remove Code Signing Trust Bit
To: Gervase Markham 
Cc: "kirk_h...@trendmicro.com" 


On Wed, Sep 30, 2015 at 11:05 PM, Gervase Markham  wrote:

> On 01/10/15 02:43, Brian Smith wrote:
> > Perhaps nobody's is, and the whole idea of using publicly-trusted CAs for
> > code signing and email certs is flawed and so nobody should do this.
>
> I think we should divide code-signing and email here. I can see how one
> might make an argument that using Mozilla's list for code-signing is not
> a good idea; a vendor trusting code-signing certs on their platform
> should choose which CAs they trust themselves.
>
> But if there is no widely-trusted set of email roots, what will that do
> for S/MIME interoperability?
>

First of all, there is a widely-trusted set of email roots: Microsoft's.
Secondly, there's no indication that having a widely-trusted set of email
roots *even makes sense*. Nobody has shown any credible evidence that it
even makes sense to use publicly-trusted CAs for S/MIME. History has shown
that almost nobody wants to use publicly-trusted CAs for S/MIME, or even
S/MIME at all.

Further, there's been actual evidence presented that Mozilla's S/MIME
software is not trustworthy due to lack of maintenance. And, really, what
does Mozilla even know about S/MIME? IIRC, most of the S/MIME stuff in
Mozilla products was made by Sun Microsystems. (Note: Oracle acquired Sun
Microsystems in January 2010. But, I don't remember any Oracle
contributions related to S/MIME. So, yes, I really mean Sun Microsystems
that hasn't even existed for almost 6 years.)

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal: Remove Code Signing Trust Bit

2015-09-30 Thread Brian Smith
On Wed, Sep 30, 2015 at 3:11 PM, kirk_h...@trendmicro.com <
kirk_h...@trendmicro.com> wrote:

> The Mozilla NSS root store is used by some well-known applications as
> discussed, but also by many unknown applications.  If the trust bits are
> removed, CAs who issue code signing or email certs may find multiple
> environments dependent on the NSS root store where the CA's products will
> no longer work - and we don't have a list of those environments today.
>

That's OK.


> Mozilla does a sensible public review of a CA's practices for code signing
> and email certs before turning on the trust bits - and if Mozilla's review
> isn't sufficient, whose is?


Perhaps nobody's is, and the whole idea of using publicly-trusted CAs for
code signing and email certs is flawed and so nobody should do this.


> Who can conduct this review better than Mozilla?  (Answer: no one, and no
> one else will bother to do the review.).


If nobody will do it then that means nobody thinks it is important enough
to invest in. Why should Mozilla bother doing it if nobody cares enough to
invest in it?


> Without Mozilla trust bits, the trustworthiness of these types of certs
> will likely go down.
>

Isn't that a good thing? If the issuing policies have been insufficiently
reviewed, then that means Mozilla's current endorsement of these CAs is
misleading people into trusting these certs more than they should be.
Dropping these trust bits would be a clear sign that trust in these certs
should be re-evaluated, which is a good thing.


> Finally, if the trust bits are turned off, I'm concerned that some
> applications that use code signing and email certs will just go static on
> their trusted roots


A vendor that does that is a bad vendor with bad judgement and you should
probably not trust any of their products.


> Trusted by default, but can lose the trust bits by bad actions.
>

I wish you would have led with this completely ridiculous suggestion
instead of the only-slightly-less ridiculous stuff that preceded it.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Specify audit criteria according to trust bit

2015-09-22 Thread Brian Smith
Joshua Cranmer 🐧  wrote:

> Kathleen Wilson wrote:
>
>> Large parts of it are
>>> out of date and the people who maintain the certificate validation logic
>>> aren't required to keeping S/MIME stuff working. In particular, it is OK
>>> according to current development policies for us to change Gecko's
>>> certificate validation logic so that it works for SSL but doesn't
>>> (completely) work for S/MIME. So, basically, Mozilla doesn't implement
>>> software that can properly use S/MIME certificates, as far as we know.
>>>
>>>
>> Is this true? Can some at Mozilla confirm or deny this statement about
>> current development policies?
>>
>
> Last I checked, Thunderbird is a product whose trademark is owned by
> Mozilla, whose infrastructure is paid for by Mozilla, and whose developers
> are Mozilla community members. And it is still a product with active
> development.
>
> So saying that Mozilla doesn't have any software that uses S/MIME is a lie.


Literally nobody said that. I said "Mozilla doesn't implement software that
can properly use S/MIME certificates, as far as we know." The key word is
*properly*. I cited two pieces of evidence in support of that.

Also, Joshua, I wish that the situation with Thunderbird was the opposite
of what it is. But, it is what it is and we have to acknowledge that.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Specify audit criteria according to trust bit

2015-09-22 Thread Brian Smith
Kathleen Wilson  wrote:

> * It is better to spend energy improving TLS-related work than
>>
> S/MIME-related stuff. The S/MIME stuff distracts too much from the TLS
>> work.
>>
>>
> Please further explain whose energy this is referring to, and who is
> getting distracted too much from the TLS work.


Everybody who reads or writes email on this mailing list, for one. Anybody
who has to write text for Mozilla's CA policy and/or propose changes for
another.


> * We can simplify the policy and tighten up the policy language more if the
>> policy only has to deal with TLS certificates.
>>
>
> Another approach would be to separate the policy language that is specific
> to the "Email trust bit" certs.


That also seems reasonable. If the email policy were completely separate
then people could ignore it.


> * Mozilla's S/MIME processing isn't well supported.
>>
>
> Mozilla is not the only consumer of the NSS root store.


Yes. But, I don't think that an organization that does not have a strong
interest in how the email trust bit affects its products is a good choice
to run a program for email CA trust, despite the good intentions and hard
work of the people within that organization to try to do something good.


> Large parts of it are
>> out of date and the people who maintain the certificate validation logic
>> aren't required to keeping S/MIME stuff working. In particular, it is OK
>> according to current development policies for us to change Gecko's
>> certificate validation logic so that it works for SSL but doesn't
>> (completely) work for S/MIME. So, basically, Mozilla doesn't implement
>> software that can properly use S/MIME certificates, as far as we know.
>>
>
> Is this true? Can some at Mozilla confirm or deny this statement about
> current development policies?


You can see an example of this policy at work at
https://bugzilla.mozilla.org/show_bug.cgi?id=1114787.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Refer to BRs for NameConstraintsRequirement

2015-09-22 Thread Brian Smith
Rob Stradling  wrote:

> https://aka.ms/rootcert Section 4.A.12, for example, says...
>   "Rollover root certificates, or certificates which are intended to
> replace previously enrolled but expired certificates, will not be accepted
> if they combine server authentication with code signing uses unless the
> uses are separated by application of Extended Key Uses (“EKU”s) at the
> intermediate CA certificate level that are reflected in the whole
> certificate chain."
>

My reading of that is this: If you ask Microsoft to enable the code signing
bit and the server authentication bit for the same root CA, then you must
have separate intermediates for code signing and for server authentication,
and those separate intermediates must have EKU extensions. But, if a given
root certificate is only trusted for server authentication, then there is
no requirement that the intermediate CA certificates contain EKU extensions.

So, in fact, I think many CAs--e.g., ones that don't do code signing, or
that have separate roots for code signing--would benefit from such a change
because they'd be allowed to issue smaller certificates. And, that's a
good thing.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Specify audit criteria according to trust bit

2015-09-22 Thread Brian Smith
On Tue, Sep 22, 2015 at 1:47 AM, Brian Smith  wrote:

> * Mozilla's S/MIME processing isn't well supported. Large parts of it are
> out of date and the people who maintain the certificate validation logic
> aren't required to keeping S/MIME stuff working. In particular, it is OK
> according to current development policies for us to change Gecko's
> certificate validation logic so that it works for SSL but doesn't
> (completely) work for S/MIME. So, basically, Mozilla doesn't implement
> software that can properly use S/MIME certificates, as far as we know.
>

Here is a good example to show that the security of Thunderbird's S/MIME
handling is not properly managed:
https://bugzilla.mozilla.org/show_bug.cgi?id=1178032

The bug report is that email that the user tried to encrypt was sent
unencrypted. The bug was filed months ago, but hasn't been triaged so that
it is marked as a serious security issue, and the validity of the bug
report hasn't been investigated by anybody.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Specify audit criteria according to trust bit

2015-09-22 Thread Brian Smith
Kathleen Wilson  wrote:

> Arguments for removing the Email trust bit:
> - Mozilla's policies regarding Email certificates are not currently
> sufficient.
> - What else?
>
>
* It isn't clear that S/MIME using certificates from publicly-trusted CAs
is a model of email security that is worth supporting. Alternatives with
different models exist, such as GPG and TextSecure. IMO, the TextSecure
model is more in line with what Mozilla is about than the S/MIME model.

* It is better to spend energy improving TLS-related work than
S/MIME-related stuff. The S/MIME stuff distracts too much from the TLS work.

* We can simplify the policy and tighten up the policy language more if the
policy only has to deal with TLS certificates.

* Mozilla's S/MIME processing isn't well supported. Large parts of it are
out of date and the people who maintain the certificate validation logic
aren't required to keep S/MIME stuff working. In particular, it is OK
according to current development policies for us to change Gecko's
certificate validation logic so that it works for SSL but doesn't
(completely) work for S/MIME. So, basically, Mozilla doesn't implement
software that can properly use S/MIME certificates, as far as we know.

Just to make sure people understand the last point: I think it is great
that people try to maintain Thunderbird. But, it was a huge burden on Gecko
developers to maintain Thunderbird on top of maintaining Firefox, and some
of us (including me, when I worked at Mozilla) lobbied for a policy change
that let us do our work without consideration for Thunderbird. Thus, when
we completely replaced the certificate verification logic in Gecko last
year, we didn't check how it affected Thunderbird's S/MIME processing.
Somebody from the Thunderbird maintenance team was supposed to do so, but I
doubt anybody actually did. So, it would be prudent to assume that
Thunderbird's S/MIME certificate validation is broken.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Refer to BRs for Name ConstraintsRequirement

2015-09-22 Thread Brian Smith
On Tue, Sep 22, 2015 at 12:51 AM, Rob Stradling 
wrote:

> On 22/09/15 01:01, Brian Smith wrote:
> 
>
>> But, if the intermediate CA certificate is allowed to issue SSL
>> certificates, then including the EKU extension with id-kp-serverAuth is
>> just wasting space. Mozilla's software assumes that when the intermediate
>> CA certificate does not have an EKU, then the certificate is valid for all
>> uses. So, including an EKU with id-kp-serverAuth is redundant. And, the
>> wasting of space within certificates has material consequences that affect
>> performance and thus indirectly security.
>>
>
> Brian,
>
> Given that the BRs require id-kp-serverAuth in Technically Constrained
> intermediates, what would be the point of Mozilla dropping that same
> requirement?
>
> There seems little point providing options that, in reality, CAs are never
> permitted to choose.


It would be the first step towards changing the BRs in the analogous manner.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Refer to BRs for Name Constraints Requirement

2015-09-21 Thread Brian Smith
On Mon, Sep 21, 2015 at 4:02 PM, Kathleen Wilson 
wrote:

> Section 7.1.5 of version 1.3 of the Baseline Requirements says:
> The proposal is to simplify item #9 of the Inclusion Policy,
>
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/inclusion/
> by referring to the BRs in the first bullet point, as follows:
> ~~
> We encourage CAs to technically constrain all subordinate CA certificates.
> For a certificate to be considered technically constrained, the certificate
> MUST include an Extended Key Usage (EKU) extension specifying all extended
> key usages that the subordinate CA is authorized to issue certificates for.


I think it is better to resolve whether email certificates and code signing
certificates are in or out of scope for Mozilla's policy first. I would
prefer that email and code signing certificates be considered out of
scope. In that case, the requirement that the intermediate certificate must
contain an EKU extension can clearly be removed.

The EKU-in-intermediate-certificate mechanism is most useful for the case
where the intermediate CA is NOT allowed to issue SSL certificates. In that
case, the EKU extension MUST be included, and the id-kp-serverAuth OID must
NOT be included in it.
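
For example, an email-only intermediate might carry this EKU (a
pyca/cryptography sketch):

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

# id-kp-serverAuth (ExtendedKeyUsageOID.SERVER_AUTH) is deliberately
# absent, so this intermediate cannot appear in a valid SSL chain.
email_only_eku = x509.ExtendedKeyUsage([ExtendedKeyUsageOID.EMAIL_PROTECTION])
# builder.add_extension(email_only_eku, critical=False) when building
# the intermediate.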

But, if the intermediate CA certificate is allowed to issue SSL
certificates, then including the EKU extension with id-kp-serverAuth is
just wasting space. Mozilla's software assumes that when the intermediate
CA certificate does not have an EKU, then the certificate is valid for all
uses. So, including an EKU with id-kp-serverAuth is redundant. And, the
wasting of space within certificates has material consequences that affect
performance and thus indirectly security.



> - If the certificate includes the id-kp-emailProtection extended key
> usage, then all end-entity certificates MUST only include e-mail addresses
> or mailboxes that the issuing CA has confirmed (via technical and/or
> business controls) that the subordinate CA is authorized to use.
> - If the certificate includes the id-kp-codeSigning extended key usage,
> then the certificate MUST contain a directoryName permittedSubtrees
> constraint where each permittedSubtree contains the organizationName,
> localityName (where relevant), stateOrProvinceName (where relevant) and
> countryName fields of an address that the issuing CA has confirmed belongs
> to the subordinate CA.
> ~~
>

These requirements can be removed pending the resolution of the
email/code-signing trust bit issues. If Mozilla is only going to manage a
root program for SSL certificates, then it shouldn't impose requirements on
certificates that are marked (by having an EKU extension that does not
assert id-kp-serverAuth) as not relevant to SSL.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Pre-cert misissuance

2015-09-19 Thread Brian Smith
On Sat, Sep 19, 2015 at 7:20 AM, Gervase Markham  wrote:

> Symantec just fired people for mis-issuing a google.com 1-day pre-cert:
>

By the way, Symantec didn't say "pre-cert," they said "certificates".

Also, I think we shouldn't be splitting hairs over the difference between
pre-certificates and certificates as far as mis-issuance detection is
concerned. If people think there is a meaningful (technical, legal, etc.)
distinction between a pre-certificate being logged via CT and the
corresponding certificate being logged in CT, then we should consider
removing the pre-certificate mechanism from CT so that there's no doubt.
My view is that there is no meaningful difference.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Pre-cert misissuance

2015-09-19 Thread Brian Smith
On Sat, Sep 19, 2015 at 7:20 AM, Gervase Markham  wrote:

> Symantec just fired people for mis-issuing a google.com 1-day pre-cert:
>
> http://www.symantec.com/connect/blogs/tough-day-leaders
>
>
> http://googleonlinesecurity.blogspot.co.uk/2015/09/improved-digital-certificate-security.html
>
> Google: "Our primary consideration in these situations is always the
> security and privacy of our users; we currently do not have reason to
> believe they were at risk."
>
> Gerv
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>

People have been fired for worse reasons.

Good job, Google!

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal: Remove Code Signing Trust Bit

2015-09-11 Thread Brian Smith
On Thu, Sep 10, 2015 at 1:20 PM, Kathleen Wilson 
wrote:

> Proposal for version 2.3 of Mozilla's CA Certificate Policy:
>
> Remove the code signing trust bit.
>
> If this proposal is accepted, then there would be follow-up action items
> that would need to happen after version 2.3 of the policy is published:
> 1) Remove any root certificates that do not have the Websites and/or Email
> trust bit set.
> 2) Remove references to Code Signing trust bits from Mozilla’s wiki pages.
>

FWIW, I think this is a great and long-overdue idea. Mozilla can't do
everything; it has to make trade-offs on what to spend its time on. And, it
makes much more sense to stop caring about code signing trust bits in NSS
to make time for solve more important issues that are more relevant to
Mozilla's mission.

Building a properly-run code signing certificate program would be a ton of
work that Mozilla simply has never done. I think some of the arguments in
this thread for keeping code signing in Mozilla's program aren't fully
informed on just how little Mozilla actually did with respect to code
signing CA trust.

The same argument applies to email. Nobody wants to admit that Thunderbird
is dead, it is uncomfortable to know that the S/MIME handling in
Thunderbird has been unmaintained for at least half a decade, and it's a
little embarrassing to admit that the model we use for deciding which CAs
get the SSL trust bit works even less well for S/MIME and that basically
nobody cares about the S/MIME or code signing bits. But that's all true.
It's my professional opinion that if you actually care about S/MIME
security then it would be a mistake to use Thunderbird. (Sorry, people
volunteering to keep Thunderbird going.)

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Updating Mozilla's CA Certificate Policy

2015-08-24 Thread Brian Smith
On Mon, Aug 24, 2015 at 5:53 AM, Gervase Markham  wrote:

> On 20/08/15 19:12, Kathleen Wilson wrote:
> > It's time to begin discussions about updating Mozilla's CA Certificate
> > Policy.
>
> Great :-)
>
> > A list of the things to consider changing is here:
> > https://wiki.mozilla.org/CA:CertPolicyUpdates#Consider_for_Version_2.3
>
> How do you want to deal with this list? Is it "default-do" or
> "default-don't-do"? That is, should I spend my time arguing for the
> changes I would like to see, arguing against the changes I think are
> bogus, or a combination of the two?
>

I also have this same question.


> > Please review the list to let me know if there are any topics missing.
>

1. Mozilla recently asked some CAs about their practices in issuing
certificates that are syntactically invalid in various ways, and we got a
lot of good responses [1]. I was struck by the responses like GlobalSign's
that basically said, paraphrasing, "we intend to continue knowingly violating
the baseline requirements by issuing syntactically invalid certificates." I
think it would be good to make it clearer that producing syntactically
valid certificates is **required**. In particular, I think that Mozilla
should audit a CA's recently-issued certificates and automatically reject a
CA's request for inclusion or membership renewal if there are a non-trivial
number of certificates that have the problems mentioned in [2]. (Also, I
have some new information about problematic practices to expand the list in
[2], which I hope to share next week.)

2. Last week (or so), one of GlobalSign's OCSP response signing
certificates expired before the OCSP responses signed by the certificate
expired (IIUC), which caused problems for multiple websites, particularly
ones that use OCSP stapling. Please make it a requirement that every OCSP
response must have a nextUpdate field that is before or equal to the
notAfter date of the certificate that signs it. This should be easy for CAs
to comply with.

3. Please add a requirement that every OCSP response must have a nextUpdate
field. This is required to ensure that OCSP stapling works *reliably* with
all (at least most) server and client products.

4. Please add a requirement that the nextUpdate field must be no longer
than 72 hours after the thisUpdate field, i.e. that OCSP responses expire
within 3 days, for every certificate, for both end-entity certificates and
CA certificates. (A sketch of checks 2-4 appears after this list.)

5. On the page you linked to, there are items about removing support for
SHA-512-signed and P-521-signed certificates. Those were suggested by me
previously. I would like to change my suggestion to just recommending that
CAs avoid SHA-512 and P-521, especially in their CA certificates. Again,
this is to ensure interoperability, as SHA-512 and (especially) P-521 are
less well-supported than the other algorithms. (Note: On the page you
linked to, P-521 is incorrectly spelled "P-512".)
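
Here is a sketch of checks 2-4 using the pyca/cryptography OCSP interface;
resp_der (a DER-encoded response) and signer_cert (the certificate that
signed it) are assumed inputs:

import datetime
from cryptography.x509 import ocsp

def ocsp_response_ok(resp_der, signer_cert):
    resp = ocsp.load_der_ocsp_response(resp_der)
    if resp.next_update is None:
        return False  # (3) nextUpdate must be present for reliable stapling
    if resp.next_update > signer_cert.not_valid_after:
        return False  # (2) the response must not outlive its signer
    if resp.next_update - resp.this_update > datetime.timedelta(hours=72):
        return False  # (4) responses must expire within 72 hours
    return True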

Thanks,
Brian

[1]
https://mozillacaprogram.secure.force.com/Communications/CommunicationActionOptionResponse?CommunicationId=a04o00M89RCAAZ&Question=ACTION%20%234:%20Workarounds%20were%20implemented
[2]
https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Things_for_CAs_to_Fix
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-19 Thread Brian Smith
On Fri, Jun 19, 2015 at 1:38 PM, Ryan Sleevi <
ryan-mozdevsecpol...@sleevi.com> wrote:

> On Fri, June 19, 2015 11:10 am, Brian Smith wrote:
> >  The current set of roots is already too big for small devices to
> >  reasonably
> >  manage, and that problem will get worse as more roots are added. Thus,
> >  small devices have to take a subset of Mozilla's/Microsoft's/Apple's
> >  roots.
>
> It's also worth noting that these devices (which Mozilla does not develop
> code for, AFAIK) can further optimize their handling by treating roots as
> trust anchors (Subject + public key),


+ name constraints, at least. This thread is about, AFAICT, making the
amount of name constraint information in the root database quite large, and
making it necessary to update that name constraint information frequently.
Your suggestion to minimize the amount of data stored per
root is a good one. Having to store and update sets of name constraints for
each root seems counterproductive to that, to put it mildly.


> the same way that NSS trust records
> are expressed, rather than storing the full certificate. NSS's libpkix was
> certainly designed for this, although I believe mozilla::pkix requires a
> full cert?
>

I agree that it is wasteful to encode trust anchors as full X.509
certificates. Many things in Gecko expect the trust anchors to be encoded
that way, though, so in order to accommodate that and to keep things
simple, mozilla::pkix work like that. This is one of the improvements I'm
looking forward to with the certificate verification stuff you are working
on now.


> Of course, it's completely unreasonable to talk about the constraints of
> IoT security on the internet when many of the devices being produced lack
> a basic capability of updating or security fixes. If you want to posit
> that these devices 'divide the internet' (as you suggest), then the first
> and foremost you must acknowledge the potential harm and self-inflicted
> wounds these devices are causing, before it be suggested that it's
> Mozilla's responsibility.
>

I agree that we should consider devices designed to have terrible security
to be out of scope. Not every device is inherently non-secure, though.


> So if
> there was an argument of "Mozilla's policy X would make it hard for
> downstream user Y to consume and reuse the trust store", then I think
> that's entirely reasonable.
>

Any time any root program adds a new root, it costs many people something
in terms of (a) increased risk of mis-issuance by the new root, (b)
increased update costs, (c) increased resource consumption, and (d)
increased compatibility risk. It's worth considering whether the addition
of a root is justified given these costs.


> Now, if the only option for recognizing these certificates was that these
> CAs would have to be globally accessible / serve a global market (again, a
> definitional issue that is surprisingly hard to pin down if you work
> through it), then the natural outcome of this is policies that go from:
> "We serve [population of users X] and employ [more rigorous method Y]"
> to
> "We serve all users globally. For [population of users X], we employ [more
> rigorous method Y]. For [everyone else], we employ [as little as
> possible]."
>

With the "best N" approach I'm suggesting, a new CA's "as
little as possible" would have to be better than one of the N best CAs' "as
little as possible" in order to even be considered. I also think it is
likely that Let's Encrypt and similar initiatives will put root programs in
a better position to hold out for "more rigorous" AND "global," at least
for future CA inclusions. If we raise the "as little as possible" bar high
enough, then I'm not sure it's worthwhile to bother considering "more
rigorous" anyway.


> requires either an element of lipservice (on the CA) or of arbitrary and
> capricious judgement (in the case of Mozilla operating the root policy and
> determining whether or not it's "global" enough).
>

Saying whatever is expected to get included is already part of what an
organization buying inclusion into the root programs pays for; i.e., we are
already dealing with the lip service problem. Root programs already reserve
the right to make arbitrary judgements. I think risking being too arbitrary
in assessing the marginal utility and risk/reward as part of an inclusion
request, with a default "no" policy, seems less harmful than just accepting
everybody who writes down the right things in the application and passes
the audits.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-19 Thread Brian Smith
On Fri, Jun 19, 2015 at 7:24 AM, Gervase Markham  wrote:

> On 17/06/15 22:50, Brian Smith wrote:
> > By "small scope," I'm referring to CAs who limit their scope to a certain
> > geographical region, language, or type of institution.
>
> I'm not sure how that neuters my objection. CAs who do more than DV will
> need to have local infrastructure in place for identity validation. Are
> you saying that a CA who can't do that worldwide from the beginning is
> unsuitable for inclusion?
>

There should be a ramp-up period. But, any intent other than issuing
world-wide to the public should be rejected as not worth the risk or time
spent considering it.


> > For example, thinking about it more, I think it is bad to include
> > government-only CAs at all, because including government-only CAs means
> > that there would eventually be 196 government-only CAs
>
> Not necessarily at all; not all governments appear to be interested in
> running CAs for public use. The slope is not that slippery.
>

Mozilla's whole inclusion policy is based on that kind of short-sighted
thinking. If a group or individual is willing to spend ~$1 million and some
time, they could get into Mozilla's CA program. (It might cost more than $1
million to create a *good* CA, but if your goal is just to get your CA
trusted, you probably don't even need to spend $1 million.) Thus, even if
Let's Encrypt completely destroys the commercial CA market, the number of
trusted roots in Mozilla's program is still likely to keep growing over
time.

The current set of roots is already too big for small devices to reasonably
manage, and that problem will get worse as more roots are added. Thus,
small devices have to take a subset of Mozilla's/Microsoft's/Apple's roots.
As a result, these smaller devices can only access a subset of the secure
internet. Dividing the internet into subsets seems counter to the overall
Mozilla mission. In order to avoid the splitting of the internet into
subsets, I think Mozilla and other root programs should switch from a
"anybody with $1 million" inclusion policy to a "best N" inclusion policy.
Preferably, N is much smaller than the number of roots already included in
Mozilla's program. At least Mozilla should put into place some way to limit
growth while it works out how to do that.

In particular, Mozilla should put into place a mechanism to ensure that it
doesn't end up with ~400 government roots, instead of just hoping that 200
governments don't apply.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-17 Thread Brian Smith
Gervase Markham  wrote:

> On 06/06/15 02:12, Brian Smith wrote:
> > Richard Barnes  wrote:
> >
> >> Small CAs are a bad risk/reward trade-off.
> >
> > Why do CAs with small scope even get added to Mozilla's root program in
> the
> > first place? Why not just say "your scope is too limited to be worthwhile
> > for us to include"?
>
> There's the difficultly. All large CAs start off as (one or more :-)
> small CAs. If we admit no small CAs, we freeze the market with its
> current players.
>

By "small scope," I'm referring to CAs who limit their scope to a certain
geographical region, language, or type of institution.

For example, thinking about it more, I think it is bad to include
government-only CAs at all, because including government-only CAs means
that there would eventually be 196 government-only CAs and if each one has
just 1 ECDSA root and 1 RSA root, then that's 392 roots to deal with. If we
assume that every government will eventually want as many roots as Symantec
has, then there will be thousands of government-only roots. It's not
reasonable.

StartCom and Let's Encrypt are converse examples, because even though they
had issued certificates to zero websites when they started, their intent is
to issue certificates to every website.

For example, if Amazon had applied with a CP/CPS that limited the scope to
only their customers, then I would consider them to have too small a scope
to be included.

> Mozilla already tried that with the HARICA CA. But, the result was
> somewhat
> > nonsensical because there is no way to explain the intended scope of
> HARICA
> > precisely enough in terms of name constraints.
>
> Can you expand on that a little?
>

I did, in my original message. HARICA's constraint includes *.org, which is
much broader in scope than they intend to issue certificates for. dNSName
constraints can't describe HARICA's scope.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-05 Thread Brian Smith
Richard Barnes  wrote:

> Small CAs are a bad risk/reward trade-off.
>

Why do CAs with small scope even get added to Mozilla's root program in the
first place? Why not just say "your scope is too limited to be worthwhile
for us to include"?


> One way to balance this equation better is to scope the risk to the scope
> of the CA.  If a CA is only serving a small slice of the web, then they
> should only be able to harm a small slice of the web.  A CA should only be
> able to harm the entire web if it's providing benefit to a significant part
> of it.
>
> I wonder if we can agree on this general point -- That it would be
> beneficial to the PKI if we could create a mechanism by which CAs could
> disclose the scope of their operations, so that relying parties could
> recognize when the CA makes a mistake or a compromise that goes outside
> that scope, and prevent harm being done.
>

Mozilla already tried that with the HARICA CA. But, the result was somewhat
nonsensical because there is no way to explain the intended scope of HARICA
precisely enough in terms of name constraints.


> I think of this as CA scope transparency.  Not constraining what the CAs
> do, but asking them to be transparent about what they do.  That way if they
> do something they said they don't do, we can recognize it and reject it
> proactively.
>

In general, it sounds sensible. But, just like when we try to figure out
ways to restrict government CAs, it seems like when we look at the details, we
see that the value of the name constraints seems fairly limited. For
example, in the HARICA case, their name constraint still includes "*.org"
which means they can issue certificates for *.mozilla.org which means they
are a risk to the security of the Firefox browser (just like any other CA
that can issue for *.mozilla.org) except when the risk is limited by key
pinning.
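
To make the problem concrete, here is a toy model in Python of the RFC
5280 section 4.2.1.10 dNSName matching rule. It is much simplified from
what mozilla::pkix actually does (it ignores case normalization,
trailing dots, and non-DNS name forms), but it shows why an "*.org"
permitted subtree is barely a constraint at all:

    def dns_name_matches(name: str, constraint: str) -> bool:
        # A name satisfies a dNSName constraint if zero or more labels
        # can be prepended to the constraint to produce the name.
        return name == constraint or name.endswith("." + constraint)

    # A permitted subtree of "org" admits every name under .org:
    assert dns_name_matches("www.mozilla.org", "org")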

It would be illustrative to see the list of CAs that volunteer to be
constrained such that they cannot issue certificates for any domains in
*.com. It seems like there are not many such CAs. Without having some way
to protect *.com, what's the point?

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Name-constraining government CAs, or not

2015-05-31 Thread Brian Smith
On Sun, May 31, 2015 at 12:43 PM, Ryan Sleevi <
ryan-mozdevsecpol...@sleevi.com> wrote:

> However, that you later bring in the idea that governments may pass laws
> that make it illegal for browsers to take enforcement action is, arguably,
> without merit or evidence. If we accept that "governments may pass laws to
> do X", then we can also logically assume two related statements
>
> 1) Governments may pass laws to compel a CA to issue a certificate to a
> party other than a domain applicant.
> 2) Governments may pass laws to compel browser makers to include
> particular roots as trusted.
>
> The added tinge of uncertainty "In fact, it might already be illegal to
> do so in some circumstances" adds to the fear and doubt already sowed here.
>

The practical effects of FUD is really the concern I am raising. The
question I was responding to is basically "How is the threat model
different for Government CAs vs non-Government CAs?" Are browser makers
going to have additional fear, uncertainty, and doubt about taking
enforcement action that would negatively impact governments' essential
services? I think the publicly-available information on the DigiNotar and
ANSSI incidents at least hints that the answer is "yes." Do governments
have an unfair advantage as far as legal action regarding root removal is
concerned, to the point where browser makers should be especially concerned
about legal fights with them? I think it's kind of obvious that the answer
is "yes," though I agree my reasoning is based on speculation.

I agree with you that these concerns also apply in scenerios where
governments have some kind of relationship with a commercial CA.


> After all, why wouldn't we argue that the risk of being sued for tortious
> interference exists if a browser removes trust, ergo they can't enforce
> their CA policies in any meaningful way?
>

It seems like a safe bet that whoever has the most money is most likely to
win a legal battle. Governments generally have more money than anybody
else. It's better to avoid getting into such legal battles in the first
place.

> > More generally, browsers should encourage CAs to agree to name
> > constraints, regardless of the "government" status of the CA.
>
> Of this, I absolutely agree. But I think there's a fine line between
> "encouraging" and "requiring", and how it's presented is key.
>
> Most importantly, I don't believe for a second that constraints justify a
> relaxation of security policy - they're an optional control for the CA to
> opt-in to, as a means of reducing their exposure.
>
> Name constraints can't replace compliance with the Mozilla Security
> Policy, nor should it, in part or in whole.
>

My hope, which may be too optimistic, is that the USG is content with
issuing itself certificates only for *.gov and *.mil, that they are willing
to be constrained to issuing certificates only to *.gov and *.mil, and that
we can easily help them get to the point where they are **effectively**
constrained to *.gov and *.mil. More generally, I hope that other
government CAs would do likewise. Obviously there are a lot of cases
where the distinction between government CA and commercial CA is blurred,
but I don't think we should let those less clear situations stop us from
working with governments to improve the situation for situations where the
distinction is clear and meaningful.


> To be clear, my words are strong here because your argument is so
> appealing and so enchanting in a world of post-Snowdonia, in which "trust
> no one" is the phrase du jour, in which the NSA is the secret puppet
> master of all of NIST's activities, and in which vanity crypto sees a
> great resurgence. Your distinctions of government CAs, and their ability,
> while well intentioned, rest on arguments that are logically unsound and
> suspect, though highly appetizing for those who aren't following the
> matter closely.
>

I didn't and don't intend to make that kind of argument. My argument is
simpler: I expect less enforcement action against government CAs than
against commercial CAs by browser makers and I expect more incidents of
non-compliance from government CAs. In the situations where
governments--indeed any CAs--are willing to accept constraints, at least,
we should add the constraints, to reduce the negative consequences of a
government CA being non-compliant and a browser delaying or forgoing
enforcement action.

I freely admit that my thinking is based on speculative inferences from my
understanding of the publicly-available history of browser CA policy
enforcement. I agree with you that the browser makers shouldn't delay or
forgo enforcement in these situations. In a perfect world we wouldn't need
to consider mitigations for if/when they do.

My specific concern is that, while some proposals for the use of name
constraints aren't good, this discussion is moving toward us not trying
very hard to convince governments to accept very useful and practical name
constraints. Name cons

Re: Name-constraining government CAs, or not

2015-05-30 Thread Brian Smith
Gervase Markham  wrote:

> 1) "Is the security analysis relating to government CAs, as a class,
> different to that relating to commercial CAs? If so, how exactly?"
>

It seems reasonable to assume that governments that have publicly-trusted
roots will provide essential government services from websites secured
using certificates that depend on those roots staying publicly-trusted.
Further, it is likely that, especially in the long run, they will do
things, including passing legislation, that would make it difficult for them
to offer these services using certificates issued by CAs other than
themselves, as being in full control will be seen as being a national
security issue. Further, governments may even pass laws that make it
illegal for browser makers to take any enforcement action that would reduce
or eliminate access to these government services. In fact, it might already
be illegal to do so in some circumstances.

The main stick that browsers have in enforcing their CA policies is the
threat of removal. However, such a threat seems completely empty when
removal means that essential government services become inaccessible and
when the removal would likely lead to, at best, a protracted legal battle
with the government--perhaps in a secret court. Instead, it is likely that
browser makers would find that they cannot enforce their CA policies in any
meaningful way against government CAs. Thus, government CAs' ability to
create and enforce real-world laws likely will make them "above the law" as
far as browsers' CA policies are concerned.

Accordingly, when a browser maker adds a government CA to their default
trust store, and especially when that government CA has jurisdiction over
them, the browser maker should assume that they will never be able to
enforce any aspect of their policy for that CA in a way that would affect
existing websites that use that CA. And, they will probably never be able
to remove that CA, even if that CA were to be found to mis-issue
certificates or even if that CA established a policy of openly
man-in-the-middling websites.

IIRC, in the past, we've seen CAs that lapse in compliance with Mozilla's
CA policies and that have claimed they cannot do the work to become
compliant again until new legislation has passed to authorize their budget.
These episodes are mild examples showing that government legislative processes
already have a negative impact on government CAs' compliance with browsers'
CA policies.

2) "If it is different, does name-constraining government CAs make
> things better, or not?"
>

Name constraints would allow governments that insist on providing
government services using certificates that they've issued themselves to do
so in a way that is totally independent of any browser policies. When a
government agrees to the name constraints, as in the case of the US FPKI,
it seems like a no-brainer to add them.

More generally, browsers should encourage CAs to agree to name constraints,
regardless of the "government" status of the CA.

As far as what to do when a CA that is--or seems like--a government CA
wants to be able to issue certificates for everybody, I agree with Ryan
Sleevi and the other Googlers. In general, it seems like CT or similar
technology is needed to deal with the fact that browsers have (probably)
admitted, and will admit, untrustworthy CAs into their programs.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Requirements for CNNIC re-application

2015-05-30 Thread Brian Smith
On Tue, May 26, 2015 at 5:50 AM, Gervase Markham  wrote:

> On 24/05/15 06:19, percyal...@gmail.com wrote:
> > This is Percy from GreatFire.org. We have long advocated for the
> > revoking of CNNIC.
> >
> > https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=site%3Agreatfire.org%20cnnic
> >
> > If CNNIC were to be re-included, CT MUST be implemented.
>
> At the moment, Mozilla does not have an official position of support for
> CT - we are "watching with interest" :-) Therefore, it's not really
> appropriate for Mozilla to mandate CT-related things as conditions of
> reinclusion for CNNIC.
>

We should be careful we don't turn that into "Mozilla doesn't
implement CT, so Mozilla has to allow CNNIC back in without requiring CT,
even if it would be clearly less safe to do so." A better interpretation
would be "Mozilla can't let CNNIC back in until it implements CT or
similar, because doing so would be clearly less safe."

By the way, what is Firefox's market share in China and other places that
commonly use CNNIC-issued certificates? My understanding is that it is
close to 0%. That's why it was relatively easy to remove them in the first
place. It also means that there's no need to rush to add them back, AFAICT.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DRAFT of next CA Communication

2015-04-13 Thread Brian Smith
Kathleen Wilson  wrote:
> ACTION #4
> Workarounds were implemented to allow mozilla::pkix to handle the things
> listed here:
> https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Things_for_CAs_to_Fix

Hi Kathleen,

Thanks for including this in the CA communication.

That list of workarounds is out of date. I think it would be useful to
re-triage the fixed and still-open bugs in the PSM component related
to certificate verification and look for ones that were fixed by
implementing a workaround for a certificate with malformed or
deprecated content.

For example, here are some other things that should be on the list:

* Bug 1152515: CAs should ensure that all times in all certificates
are encoded in a way that conforms to the stricter requirements in
RFC 5280. In particular, the timezone must always be specified as "Z"
(Zulu/GMT).

* CAs should ensure, when signing OCSP responses with a delegated OCSP
response signing certificate, that the delegated OCSP response signing
certificate will not expire before the OCSP response expires.
Otherwise, when doing OCSP stapling, some servers will cache the OCSP
response past the point where the delegated response signing
certificate expires, and then Firefox will reject the connection.

* Bug 970760: CAs should ensure that all RSA end-entity certificates
that have a KeyUsage extension include keyEncipherment in the
KeyUsage extension if the subscriber intends for the certificate to be
used for RSA key exchange in TLS. In other words, include
keyEncipherment in RSA certificates--but not ECDSA
certificates--unless the subscriber asks for it not to be included.
This way, Firefox can start enforcing the correct KeyUsage in
certificates sooner.

* CAs must ensure they include the subjectAltName extension with
appropriate dNSName/iPAddress entries in all certificates. Hopefully
soon Firefox and Chrome will be able to stop falling back on the
subject CN when there are no dNSName/iPAddress SAN entries.
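(This item and the KeyUsage item above are illustrated in a sketch after
this list.)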

* CAs should stop using any string types other than PrintableString
and UTF8String in DirectoryString components of names. In particular,
RFC 5280 says "TeletexString, BMPString, and UniversalString are
included for backward compatibility, and SHOULD NOT be used for
certificates for new subjects." Hopefully we will stop accepting
certificates that use those obsolete encodings soon.
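
For CAs that want to self-check before browsers start enforcing these,
here is a minimal lint sketch for two of the items above (the
subjectAltName and KeyUsage checks), using the pyca/cryptography Python
library. It is only illustrative, not Mozilla's compliance tooling:

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa

    def lint(cert: x509.Certificate) -> list:
        problems = []
        # Every TLS end-entity certificate must carry subjectAltName.
        try:
            cert.extensions.get_extension_for_class(
                x509.SubjectAlternativeName)
        except x509.ExtensionNotFound:
            problems.append("no subjectAltName extension")
        # RSA end-entity certs that have a KeyUsage extension should
        # assert keyEncipherment if RSA key exchange is intended.
        try:
            ku = cert.extensions.get_extension_for_class(
                x509.KeyUsage).value
            if (isinstance(cert.public_key(), rsa.RSAPublicKey)
                    and not ku.key_encipherment):
                problems.append("RSA cert without keyEncipherment")
        except x509.ExtensionNotFound:
            pass  # no KeyUsage extension at all is a separate question
        return problems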

There are other issues that should be on that list, but these are the
main ones off the top of my head.

Again, thanks for putting this together.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Requirements for CNNIC re-application

2015-04-10 Thread Brian Smith
Richard Barnes  wrote:
>> My argument is that if we think that CNNIC is likely to cause such
>> mis-issuance to occur because it runs the registry for those TLDs,
>> then there should be additional controls in place so that control over
>> those registries won't result in misissuance.
>
> Constraining what a registry can do for names over which it is authoritative
> is exactly what things like pinning and CT are for.  So maybe what you're
> actually saying is that there should be a requirement for CT as a check on
> CNNIC's ability to issue even for names for which they are authoritative?

Yes.

If a US-based CA were in a similar situation, would we consider name
constraining them to *.com, *.org, *.net, *.us? No, because that's not
much of a constraint. For people within China and others, a name
constraint of "*.cn" isn't much different than that. I think such a
constraint gives most of the people on this list a false sense of
resolution, because *.cn websites aren't relevant to our
security, so constraining CNNIC to *.cn is basically equivalent to
keeping them out of the program. But, there are many millions of
people for whom the security of *.cn websites does matter, and name
constraints don't help them.

Also, given how things seem to go in China, it seems reasonable to
expect some authorities in China to react to removal or limiting CNNIC
by blocking Let's Encrypt from operating correctly for *.cn and/or for
servers operating in China. Consequently, I'm doubting that building a
wall is ultimately what's best in the long term. The advantage of the
CT-based approach is that it avoids being such a wall.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Consequences of mis-issuance under CNNIC

2015-04-02 Thread Brian Smith
Florian Weimer  wrote:
> Gervase Markham wrote:
>> On 24/03/15 09:35, Florian Weimer wrote:
>>> Sadly, name constraints do not work because they do not constrain the
>>> Common Name field.  The IETF PKIX WG explicitly rejected an erratum
>>> which corrected this oversight.
>>>
>>> NSS used to be different (before the mozilla::pkix rewrite), but it's
>>> not PKIX-compliant.
>>
>> My understanding is that we continue to constrain the CN field using
>> name constraints, even after adopting mozilla::pkix; do you know
>> differently?
>
> I simply have not investigated, my comment was poorly phrased in this
> regard.

mozilla::pkix does enforce name constraints on domain names in the CN
attribute of the subject field.

https://mxr.mozilla.org/mozilla-central/source/security/pkix/test/gtest/pkixnames_tests.cpp#2186

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Name Constraints

2015-04-02 Thread Brian Smith
Florian Weimer  wrote:
> A PKIX-compliant implementation of Name Constraints is not effective
> in the browser PKI because these constraints are not applied to the
> Common Name.
>
> NSS used to be non-compliant (and deliberately so), so the constraints
> do work there, but I don't know if that's still the case.

mozilla::pkix does apply name constraints to domain names in the CN attribute.

https://mxr.mozilla.org/mozilla-central/source/security/pkix/test/gtest/pkixnames_tests.cpp#2186

Note that this is "PKIX-compliant" because RFC 5280 lets an
implementation apply additional constraints on top of the RFC 5280
rules.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Require separation between Issuing CAs and Policy CAs

2015-03-25 Thread Brian Smith
Peter Bowen  wrote:
> One possible solution is to require that all certificates for CAs that
> issue Subscriber certificates (those without CA:TRUE) have zero path
> length constraint in the basic constraints extension. All CAs with
> certificates with a longer allowed path length or no length constraint
> would only be allowed to issue certificate types that a Root CA is
> allowed to issue.

Consider a wildcard certificate for *.example.com. Now, consider an
intermediate CA certificate name constrained to .example.com. I don't
see why it is bad for the same CA certificate to be used to issue
both. In fact, I think forbidding it would be problematic, because it
would add friction for websites to switch from wildcard certificates
to name-constrained intermediate certificates. That switch is
generally a good thing.

However, I do see how it could be valuable to separate non-constrained
intermediate CA certificates from the rest, because that would make
HPKP more effective. However, that would require not only that a
different CA certificate is used, but also that different keys were
used by the CA certificates.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Tightening up after the Lenovo and Comodo MITM certificates.

2015-02-24 Thread Brian Smith
Daniel Veditz  wrote:
> I don't think we can restrict it to add-ons since external programs like
> Superfish (and the Lenovo removal tool, for that matter) write directly
> into the NSS profile database. It would be a bunch of work for precisely
> zero win.

mozilla::pkix makes it so that you can ignore the NSS profile
database, if you wish to do so.

> Could we make the "real" and only root accepted by Firefox be a Mozilla
> root, which cross-signs all the built-in NSS roots as well as any
> corporate roots submitted via this kind of program?

This is effectively what the built-in roots module already does,
except the Mozilla root CA certificate is implied instead of explicit.

> I thought pkix gave us those kinds of abilities.

mozilla::pkix offers a lot of flexibility in terms of how certificate
trust is determined.

> Or we could reject any added root that wasn't logged in CT, and then put
> a scanner on the logs looking for self-signed CA=true certs. Of course
> that puts the logs in the crosshairs for spam and DOS attacks.

Those spam and DoS attacks are why logs are specified (required?
recommended?) to not accept those certificates.

If Mozilla wanted to, it is totally possible to make an extension API
that allows an extension, when it is not disabled, to provide PSM with
a list of roots that should be accepted as trust anchors. And, it is
totally possible for PSM to aggregate those lists of
extension-provided trust anchors and use that list, in conjunction
with the read-only built-in roots module, to determine certificate
trust, while ignoring the read/write profile certificate database.

Whether or not that is a good idea is not for me to decide. But, it
would not be a huge amount of work to implement.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: IdenTrust Root Renewal Request

2014-11-20 Thread Brian Smith
Renne Rodriguez  wrote:
> Comment 3:
> The OCSP responders both include too many certificates, this has a 
> performance impact for your users; no need to include intermediate and root 
> certificates in the response. Not a blocker.
> [IdenTrust] You are correct that there is some performance impact.
>  However, this approach is consistent with the RFC 6960 section 4.2.2 - Basic 
> Response: "The responder MAY include certificates in the certs field of 
> BasicOCSPResponse that help the OCSP client verify the responder's signature."
> In our experience, SSL certificates are used by clients other than browsers; 
> and, unfortunately, some clients are not able to do proper path construction. 
> For those cases, and we have had some, we provide those certificates.

How does this fit with Section 4.2.2.2, though? Either the OCSP
response has to be signed by the issuing certificate, in which case no
certificates need to be included in the OCSP response, or it must be
signed with a delegated OCSP responder certificate that is directly
issued by the issuing certificate, in which case only one certificate
is required. It seems to me like a correct implementation of OCSP
response verification should never need more than one certificate in a
reasonably-produced OCSP response, at least in the way that such
things are used by browsers.
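
As a quick way to see what a responder actually embeds, here is a
sketch using the pyca/cryptography Python library (the file name is an
assumption). For browser-style verification, zero certificates (direct
signing) or one (a delegated responder issued by the same CA) should
be all that is ever needed:

    from cryptography.x509 import ocsp

    with open("response.der", "rb") as f:
        resp = ocsp.load_der_ocsp_response(f.read())
    # 0 for direct signing, 1 for a delegated responder; more than
    # one suggests the responder is shipping chain-building helpers.
    print(len(resp.certificates))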

Thanks,
Brian

[1] http://tools.ietf.org/html/rfc6960#section-4.2.2.2
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Trusted PEM distribution of Mozilla's CA bundle

2014-10-20 Thread Brian Smith
On Mon, Oct 20, 2014 at 8:33 AM, Ryan Sleevi <
ryan-mozdevsecpol...@sleevi.com> wrote:

> On Mon, October 20, 2014 7:17 am, Anne van Kesteren wrote:
> >  On Mon, Oct 20, 2014 at 3:41 PM, Gervase Markham 
> wrote:
> > > Perhaps we just need to jump that gap and accept what is /de facto/
> > > true.
> >
> >  Yeah, as with publicsuffix.org we should own this up.
>
> I would, in fact, argue strongly against this, despite recognizing the
> value that the open root program has.
>

I strongly agree with Ryan. Besides his very good points, I will add:

Not all of the relevant information about the roots is even available in
certdata.txt. For example, the name constraints on DCSSI are not encoded in
certdata.txt. For a long time there were hard-coded restrictions on some
Comodo and the Diginotar certificates, which weren't encoded in
certdata.txt. None of Google's CRLSet information is in certdata.txt, and
none of Mozilla's OneCRL information is in certdata.txt. None of the key
pinning information is in certdata.txt, either.

More generally, when Mozilla or Google address issues related to the root
program, they may do so with a code change to Firefox or Chrome that never
touches certdata.txt. And, they might do so in a hidden bug that people
trying to reuse certdata.txt may not see for many months. It's not
reasonable to give everybody who wants to use certdata.txt access to those
bugs, and it's not reasonable to constrain fixes to require they be done
through certdata.txt. AFAICT, none of the libraries or programs that try to
reuse the Mozilla root set even have enough flexibility to add all of these
additional data sources to their validation process.

For example, let's say some CA in Mozilla's root program mis-issues a
google.com certificate. Because google.com is pinned to certain CAs in
Firefox, Mozilla might not take any immediate action and may not make the
public aware of the issue for a long time (or ever).

Note that, as a consequence of this, even applications that use the NSS cert
verification APIs might not be as safe as they expect to be when trusting
the NSS root CA set.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Short-lived certs

2014-09-17 Thread Brian Smith
On Fri, Sep 5, 2014 at 2:43 AM, Gervase Markham  wrote:
> On 05/09/14 00:06, Brian Smith wrote:
>> Precisely defining a short-lived certificate is a prerequisite for a
>> proper discussion of policy for short-lived certificates. It seems
>> likely to me that short-lived certificates will be defined as
>> certificates that would expire before the longest-acceptable-life OCSP
>> response for that certificate would expire. Then it would be easy to
>> understand the security properties of short-lived certificates, given
>> that we understand the security properties of OCSP.
>
> I strongly want to avoid ratholing on this discussion; if I say "OK,
> let's say for the sake of argument that short-lived is the same as the
> max OCSP lifetime", then someone else will say "but that's still too
> long!" and so on.

I agree, because the maximum allowed OCSP response lifetime *is* too long,
regardless of whether you want to do short-lived certificates or not.

>> Previously, we decided it was important that we have evidence that the
>> OCSP responder knows about all certificates that were issued by the CA,
>> so we made it a requirement that OCSP responders must not
>> return "Good" for certificates that they do not know about. But,
>> accepting short-lived certificates is equivalent to an OCSP responder
>> returning "Good" for all certificates, whether it knows about them or
>> not.
>
> Is that actually true? I am assuming that if a cert is mis-issued, for a
> few minutes at least the CA will stand by their issuance, and that the
> attacker can obtain a good OCSP response for it with a lifetime of X,
> and staple that response during their attack. So the security properties
> of that are about the same as those for a cert with lifetime X.

Then what was the value in adding the requirement that an OCSP responder
cannot return "Good" for an unknown cert to the BRs? We added that
requirement specifically in response to Diginotar. Note that this
requirement has caused Firefox more trouble than anybody, because
Firefox is the only browser that tries to enforce this in a useful
way. Consequently, sites stop working (only) in Firefox when a
website replaces its cert from a CA (in particular, StartSSL) that
does not keep its OCSP responder in sync with its cert issuance.
(Search Twitter for "sec_error_ocsp_unknown_cert").
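
For anyone who wants to reproduce this, here is a sketch of the check
that trips sec_error_ocsp_unknown_cert, using the pyca/cryptography
Python library. `cert` and `issuer` are parsed x509.Certificate
objects and `url` is the OCSP URI from the certificate's AIA
extension; this is an illustration, not a Mozilla tool:

    from urllib.request import Request, urlopen
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.hashes import SHA1
    from cryptography.x509 import ocsp

    def responder_status(cert, issuer, url):
        req = ocsp.OCSPRequestBuilder().add_certificate(
            cert, issuer, SHA1()).build()
        http = Request(
            url,
            data=req.public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"})
        resp = ocsp.load_der_ocsp_response(urlopen(http).read())
        return resp.certificate_status  # GOOD, REVOKED, or UNKNOWN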

> Hmm... is there some mileage in saying that OCSP responses for certs
> during their first week of existence must have a max lifetime of
> significantly less than for the rest of their lives? That wouldn't
> increase OCSP server load much, but would perhaps mitigate this issue if
> the CA were to discover the misissuance soon after it happened.

There are also reasons for keeping the max lifetime of an OCSP
response short well beyond the initial issuance window. In particular, a server
getting its private key stolen is almost definitely more common than a
CA mis-issuing a certificate, and a stolen private key also requires a
quick response time.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Short-lived certs

2014-09-17 Thread Brian Smith
On Wed, Sep 17, 2014 at 12:25 AM, Gervase Markham  wrote:
> On 16/09/14 23:13, Richard Barnes wrote:
>> From a browser perspective, I don't care at all whether certificates
>> are excused from containing revocation URLs if they're sufficiently short
>> lived.
>
> From a technical perspective, that is true. However, if we have an
> interest in making short-lived certs a usable option, we have to
> consider the ecosystem. CAs will have to do engineering work to issue
> (and reissue) such certs every 24 hours, and sites will have to do
> engineering work to request and deploy those certificates.

Changing a server to properly and safely support replacing its
certificate on the fly is a very error-prone and difficult thing to
do, compared to changing a server to properly and safely support OCSP
stapling. For example, when the server updates its certificate, it
needs to verify that the new certificate is the right one. Otherwise,
the updated certificate could contain a public key for which an
attacker owns the private key, and the server would be facilitating
its own compromise by switching to that new certificate.
In contrast, with OCSP stapling, an attacker can never replace your
server's public key, and so there is much less risk of catastrophe
with OCSP stapling.
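
Here is a minimal sketch of that verification, assuming the
pyca/cryptography Python library: before swapping in an automatically
fetched certificate, the server confirms that the new certificate
certifies the key pair it already controls.

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat, load_pem_private_key)

    def safe_to_deploy(new_cert_pem: bytes, key_pem: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(new_cert_pem)
        key = load_pem_private_key(key_pem, password=None)
        def spki(k):
            return k.public_bytes(Encoding.DER,
                                  PublicFormat.SubjectPublicKeyInfo)
        # Deploy only if the new cert's public key is our own.
        return spki(cert.public_key()) == spki(key.public_key())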

Because of the added risk and added complication of short-lived
certificates relative to OCSP stapling, and because OCSP stapling is
already well-specified and quite widely implemented (though not yet
commonly enabled), it would be better to prioritize shortening the
maximum acceptable OCSP response validity period (e.g. to 72 hours)
and to define and implement Must-Staple, over defining new standards
for short-lived certificates. Those two improvements would have an
immediate positive impact.

Note, also, that browsers already effectively support short-lived
certificates, even without any CABForum or browser policy work. Also,
I do support defining standards for short-lived certificates; I
just think that fixing OCSP stapling should be a higher priority.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Short-lived certs

2014-09-17 Thread Brian Smith
On Wed, Sep 17, 2014 at 12:34 AM, Kurt Roeckx  wrote:
> On 2014-09-17 09:25, Gervase Markham wrote:
>>
>> A short-lived cert _without_ an OCSP URI also works with legacy
>> browsers. Unless you are using some other definition of "works"?
>
> A browser could perfectly reject a certificate that doesn't comply with the
> BR because the required OCSP URI is missing.

A browser can reject any certificate it wants to. No browsers are
rejecting certificates because the OCSP URI is missing, and if they
were to implement such policies, they'd have to do so in concert with
support for short-lived certificates, unless they are trying to
prevent short-lived certificates from becoming a thing.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Short-lived certs

2014-09-04 Thread Brian Smith
On Thu, Sep 4, 2014 at 6:04 AM, Gervase Markham  wrote:
> On 04/09/14 12:52, Hubert Kario wrote:
>> It all depends on the exact definition of "short-lived". If the definition
>> is basically the same as for OCSP responses or shorter, then yes, they
>> provide the same security as regular certs with hard fail for OCSP
>> querying/stapling.
>
> The exact definition of "short-lived" is something I want to declare out
> of scope for this particular discussion.

Precisely defining a short-lived certificate is a prerequisite for a
proper discussion of policy for short-lived certificates. It seems
likely to me that short-lived certificates will be defined as
certificates that would expire before the longest-acceptable-life OCSP
response for that certificate would expire. Then it would be easy to
understand the security properties of short-lived certificates, given
that we understand the security properties of OCSP.
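
A sketch of that candidate definition, where the 10-day bound is an
assumption (roughly the Baseline Requirements' maximum validity for
subscriber-certificate OCSP responses), not settled policy:

    from datetime import timedelta

    MAX_OCSP_LIFETIME = timedelta(days=10)  # assumed policy knob

    def is_short_lived(cert) -> bool:
        # cert is a parsed certificate exposing its validity period,
        # e.g. an x509.Certificate from pyca/cryptography.
        return (cert.not_valid_after - cert.not_valid_before
                <= MAX_OCSP_LIFETIME)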

Previously, we decided it was important that we have evidence that the
OCSP responder knows about all certificates that were issued by the CA,
so we made it a requirement that OCSP responders must not
return "Good" for certificates that they do not know about. But,
accepting short-lived certificates is equivalent to an OCSP responder
returning "Good" for all certificates, whether it knows about them or
not. So, we need to decide whether this aspect (a type of multi-factor
authentication or counter-signature mechanism) is really important or
not. It seems wrong for us to make it mandatory for long-lived
certificates but not short-lived certificates, considering that the
highest period of risk is immediately after issuance.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removal of 1024 bit CA roots - interoperability

2014-08-04 Thread Brian Smith
On Mon, Aug 4, 2014 at 7:03 AM, Hubert Kario  wrote:
> it has limited effect on overall security of connection (if we assume 80 bit
> level of security for both SHA1 and 1024 bit RSA and ignore signature
> algorithm on the root certs):

Hi Hubert,

Thanks for doing that.

Note that because 1024-bit-to-2048-bit cross-signing certificates
exist for many CAs, removal of these roots alone isn't going to
have a big effect. Instead, removal of these roots is a
stepping stone. The next step is to stop accepting <2048-bit
*intermediate* CA certificates from the built-in trust anchors, even
if they chain to a trusted >=2048-bit root.
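
Auditing for that next step is straightforward. Here is a sketch using
the pyca/cryptography Python library that flags any RSA key below 2048
bits anywhere in a served chain (PEM-encoded, leaf first):

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa

    def weak_links(chain_pems):
        for pem in chain_pems:
            cert = x509.load_pem_x509_certificate(pem)
            key = cert.public_key()
            if isinstance(key, rsa.RSAPublicKey) and key.key_size < 2048:
                yield cert.subject.rfc4514_string()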

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removal of 1024 bit CA roots - interoperability

2014-08-04 Thread Brian Smith
On Mon, Aug 4, 2014 at 3:52 PM, Kathleen Wilson  wrote:
> It turns out that including the 2048-bit version of the cross-signed
> intermediate certificate does not help NSS at all. It would only help
> Firefox, and would cause confusion.

That isn't true, AFAICT.

> It works for Firefox, because mozilla::pkix keeps trying until it finds a
> certificate path that works.

NSS's libpkix also keeps trying until it finds a certificate path that
works. libpkix is used by Chromium and by Oracle's products (IIUC).

> Therefore, it looks like including the 2048-bit intermediate cert directly
> in NSS would cause different behavior depending on where the root store is
> being used. This would lead to confusion.

IMO, it isn't reasonable to make decisions like this based on the
behavior of the "classic" NSS path building. Really, the classic NSS
path building logic is obsolete, and anybody still using it is going
to have lots of compatibility problems due to this change and other
things, some of which are out of our control.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removal of 1024 bit CA roots - interoperability

2014-07-31 Thread Brian Smith
Hubert Kario  wrote:
> Brian Smith wrote:
>> It depends on your definition of "help." I assume the goal is to
>> encourage websites to migrate from 1024-bit signatures to RSA-2048-bit
>> or ECDSA-P-256 signatures. If so, then including the intermediates in
>> NSS so that all NSS-based applications can use them will be
>> counterproductive to the goal, because when the system administrator
>> is testing his server using those other NSS-based tools, he will not
>> notice that he is depending on 1024-bit certificates (cross-signed or
>> root) because everything will work fine.
>
> The point is not to ship a 1024 bit cert, the point is to ship a 2048 bit 
> cert.
>
> So for sites that present a chain like this:
>
> 2048 bit host cert <- 2048 bit old sub CA <- 1024 bit root CA
>
> we can find a certificate chain like this:
>
> 2048 bit host cert <- 2048 bit new cross-signed sub CA <- 2048 bit root CA
>
> where the cross-signed sub CA is shipped by NSS

Sure. I have no objection to including cross-signing certificates
where both the subject public key and the issuer public key are 2048
bits (or more). I am objecting only to including any cross-signing
certificates of the 1024-bit-subject-signed-by-2048-bit-issuer
variety. It has been a long time since we had the initial
conversation, but IIRC both types of cross-signing certificates exist.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removal of 1024 bit CA roots - interoperability

2014-07-30 Thread Brian Smith
On Mon, Jul 28, 2014 at 12:05 PM, Kai Engert  wrote:
> On Mon, 2014-07-28 at 21:02 +0200, Kai Engert wrote:
>> On Mon, 2014-07-28 at 11:00 -0700, Brian Smith wrote:
>> > I suggest that, instead of including the cross-signing certificates in
>> > the NSS certificate database, the mozilla::pkix code should be changed
>> > to look up those certificates when attempting to find them through NSS
>> > fails.
>>
>> We are looking for a way to fix all applications that use NSS, not just
>> Firefox. Only Firefox uses the mozilla::pkix library.
>
> Actually, including intermediates in the Mozilla root CA list should
> even help applications that use other crypto toolkits (not just NSS).

It depends on your definition of "help." I assume the goal is to
encourage websites to migrate from 1024-bit signatures to RSA-2048-bit
or ECDSA-P-256 signatures. If so, then including the intermediates in
NSS so that all NSS-based applications can use them will be
counterproductive to the goal, because when the system administrator
is testing his server using those other NSS-based tools, he will not
notice that he is depending on 1024-bit certificates (cross-signed or
root) because everything will work fine.

Similarly, as you note, many non-NSS-based tools copy the NSS
certificate set into their own certificate databases. Thus, the effect
of encouraging the continued dependency on 1024-bit signatures would
have an even wider impact beyond NSS-based applications.

I remember that we had a discussion about this a long time ago, but I
think it might have been private. In the previous discussion, I noted
that removing a 1024-bit root but still supporting a
1024-bit-to-2048-bit cross-signed intermediate results in no
improvement in security, but it does have a negative performance
impact because all the affected certificate chains grow by one
certificate. That's why I've been against removing the 1024-bit roots
while continuing to trust the 1024-bit-to-2048-bit cross-signing
certificates.

It is important to understand the cryptographic aspect of why 1024-bit
signatures are bad. The concern is that it may be feasible for some
attackers to forge valid signatures under a 1024-bit key even though
they never held the private key. The only way to protect against
somebody with this capability is to reject ANY 1024-bit signature,
whether it is in a cross-signing certificate or a root certificate or
something else.

If it is not reasonable to reject all 1024-bit signatures, then I'd
suggest trying to find a different approach for gradually removing
support for 1024-bit signatures. For example, Firefox could keep
trusting 1024-bit signatures for most websites, but start rejecting
them for HSTS sites and for key-pinned websites. This would provide a
useful level of protection for those sites at least, even if it
wouldn't afford any protection for other websites. That would be an
improvement over the current change, which seems to hurt compatibility
and/or performance without improving security for any websites.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Dynamic Path Resolution in AIA CA Issuers

2014-07-30 Thread Brian Smith
On Wed, Jul 30, 2014 at 12:17 PM, Kathleen Wilson  wrote:
> On 7/28/14, 11:00 AM, Brian Smith wrote:
>>
>> I suggest that, instead of including the cross-signing certificates in
>> the NSS certificate database, the mozilla::pkix code should be changed
>> to look up those certificates when attempting to find them through NSS
>> fails. That way, Firefox and other products that use NSS will have a
>> lot more flexibility in how they handle the compatibility logic.
>
> There's already a bug for fetching missing intermediates:
> https://bugzilla.mozilla.org/show_bug.cgi?id=399324
>
> I think it would help with removal of roots (the remaining 1024-bit roots,
> non-BR-complaint roots, SHA1 roots, retired roots, etc.), and IE has been
> supporting this capability for a long time.

First of all, there is no such thing as a SHA1 root. Unlike the public
key algorithm, the hash algorithm is NOT fixed per root. That means
any RSA-2048 root can already issue certificates signed using SHA256
instead of SHA1. AFAICT, there's no reason for a CA to insist on
adding new roots for SHA256 support.
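
This is easy to confirm by inspection, e.g. with the pyca/cryptography
Python library (the file name is an assumption): the hash is a property
of each signature, not of the root.

    from cryptography import x509

    with open("issued.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    # e.g. "sha256", even when the issuer is an old root
    print(cert.signature_hash_algorithm.name)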

Other desktop browsers do support AIA certificate fetching, but many
mobile browsers don't. For example, Chrome on Android does not support
AIA fetching (at least, at the time I tried it) but Chrome on desktop
does support it. So, if Firefox were to add support for AIA
certificate fetching, it would be encouraging website administrators
to create websites that don't work on all browsers.

The AIA fetching mechanism is not reliable, for the same reasons that
OCSP fetching is not reliable. So, if Firefox were to add support for
AIA certificate fetching, it would be encouraging the creation of
websites that don't work reliably.

The AIA fetching process and OCSP fetching are both very slow--much
slower than the combination of all other SSL handshaking and
certificate verification. So, if Firefox were to add support for AIA
certificate fetching, it would be encouraging websites to create slow
websites.

The AIA fetching mechanism and OCSP fetching require an HTTP
implementation in order to verify certificates, and both of those
mechanisms require (practically, if not theoretically) the fetching to
be done over unauthenticated and unencrypted channels. It is not a
good idea to add the additional attack surface of an entire HTTP stack
to the certificate verification process.

If we are willing to encourage administrators to create websites that
don't work with all browsers, then we should just preload the
commonly-missing intermediate certificates into Firefox and/or NSS.
This would avoid all the performance problems, reliability problems,
and additional attack surface, and still provide a huge compatibility
benefit. In fact, most misconfigured websites would then work better
(faster, more reliably) in Firefox than in other browsers.

One of the motivations for creating mozilla::pkix was to make it easy
for Firefox to preload these certificates without having to have them
preloaded into NSS, because Wan-Teh had objected to preloading them
into NSS when I proposed it a couple of years ago. So, I think the
best course of action would be for us to try the preloading approach
first, and then re-evaluate whether AIA fetching is necessary later,
after measuring the results of preloading.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removal of 1024 bit CA roots - interoperability

2014-07-28 Thread Brian Smith
On Fri, Jul 25, 2014 at 3:11 PM, Kathleen Wilson  wrote:
> == Possible Solution ==
> One possible way to help mitigate the pain of migration from an old root is
> to directly include the cross-signed intermediate certificate that chains up
> to the new root in NSS for 1 or 2 years.

I suggest that, instead of including the cross-signing certificates in
the NSS certificate database, the mozilla::pkix code should be changed
to look up those certificates when attempting to find them through NSS
fails. That way, Firefox and other products that use NSS will have a
lot more flexibility in how they handle the compatibility logic. Also,
leaving out the cross-signing certificates is a more secure default
configuration for NSS. We should be encouraging more secure default
configurations in widely-used crypto libraries instead of adding
compatibility hacks to them that are needed by just a few products.

> are considered until path validation succeeds. Therefore, directly including
> the cross-signed intermediate certificate for a while could provide a
> smoother transition. Presumably over that time, the SSL certs will expire
> and the web server operators will upgrade to the new cert chains.

I am not so sure. If the websites are using a cert chain like:

EE <- intermediate-1024 <- root-1024

then you are right. But, if the websites are using a cert chain like these:

   EE <- intermediate-2048 <- root-1024
   EE <- intermediate-2048 <- intermediate-1024 <- root-1024

Then it is likely that many of the websites may not update enough of
the cert chain to make the use of 1024-bit certificates to go away.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Proposal: Advocate to get Section 9.3.1 (Reserved Certificate Policy Identifiers) made mandatory.

2014-07-25 Thread Brian Smith
On Fri, Jul 25, 2014 at 8:59 AM, Ryan Sleevi
 wrote:
> I think we need to be careful in suggesting arbitrary and capricious
> requirements that fail to move the security needle further in a particular
> direction.

I agree with everything that Ryan said in his email...

> Do I wish everyone would include the u in favourite and colour?

... except this, which is 100% wrong. :)

Requiring these policies to be asserted in the certificates would make
certificates slightly worse (bigger), would create more work for
everybody involved, and would have no practical end-user benefit.

Cheeurs,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Proposal: Switch generic icon to negative feedback for non-https sites

2014-07-22 Thread Brian Smith
[+keeler, +cviecco]

On Tue, Jul 22, 2014 at 1:55 PM, Chris Palmer  wrote:
> On Tue, Jul 22, 2014 at 3:01 AM, Hubert Kario  wrote:
>
>>> I'm pretty sure Firefox merely remembers your decision to click
>>> through the warning, not that it pins the keys/certificates in the
>>> chain you clicked through on.
>>
>> No, I'm sure it remembers the certificate.
>
> 1. Generate a self-signed cert; configure Apache to use it; restart Apache.
> 2. Browse to the server with Firefox. Add Exception for the cert.
> 3. Quit Firefox; restart Firefox; browse to server again. Everything is good.
> 4. Generate a *new* self-signed cert; configure Apache to use it;
> restart Apache.
> 5. Quit Firefox; restart Firefox; browse to server again.
>
> Results:
>
> A. On first page-load after step (5), no certificate warning. (I
> assume a cached page was being shown.)
> B. Reload the page; now I get a cert warning as expected. But,
> crucially, this not a key pinning validation failure; just an unknown
> authority error. (Error code: sec_error_untrusted_issuer)

Firefox's cert override mechanism uses a different pinning mechanism
than the "key pinning" feature. Basically, Firefox saves a tuple
(domain, port, cert fingerprint, isDomainMismatch,
isValidityPeriodProblem, isUntrustedIssuer) into a database. When it
encounters an untrusted certificate, it computes that tuple and tries
to find a matching one in the database; if so, it allows the
connection.
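
A toy model of that lookup (not Firefox's actual schema; the field
names are illustrative):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Override:
        domain: str
        port: int
        cert_fingerprint: str
        domain_mismatch: bool
        validity_problem: bool
        untrusted_issuer: bool

    def connection_allowed(stored: set, candidate: Override) -> bool:
        # Only an exact match with a previously stored exception
        # permits the otherwise-untrusted connection. A new
        # self-signed cert changes the fingerprint, so the lookup
        # fails and the warning reappears, as in result B above.
        return candidate in stored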

> C. I do the clicks to Add Exception, but it fails: In the Add Security
> Exception dialog, the [ ] Permanently store this exception checkbox is
> grayed out, and the [ Confirm Security Exception ] button is also
> grayed out. I can only click [ Cancel ].
>
> I take it this is a Firefox UI bug...? Everything was working as I
> expected except (C). I think the button and the checkbox should be
> active and should work as normal.

It seems like a UI bug to me.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Proposal: Switch generic icon to negative feedback for non-https sites

2014-07-22 Thread Brian Smith
On Mon, Jul 21, 2014 at 4:10 PM, Adrienne Porter Felt  wrote:
> I would very much like to make http sites look insecure.
>
> But we face a very real problem: a large fraction of the web is still
> http-only. That means that:
>
>- Users will get used to the insecure icon, and it will start looking
>meaningless pretty quickly.
>- This might also make users ignore the broken https icon.
>
> I'm not sure how to reconcile this.

I think the key to reconciling this is to recognize that the primary
audience for the address bar UI elements for this are website
*makers*, not website visitors, regardless of what we'd like. That is,
if the indicators in the address bar are already so confusing or
useless for end-users that they generally ignore them or take them to
have the opposite meaning from what's intended, and yet users are
still using our products, then that means that we don't have to worry
so much about the possibility of adding end-user confusion by making
such a change. Yet, it is in the economic interests of every website
to avoid being branded "not secure"; it is likely that the marginal
utility of avoiding that is significant enough that it will be the
tipping point for many websites to make the switch. To see if this is
a workable strategy, we should learn whether or not end-user apathy
and confusion is so high that we can turn it from a negative into a
positive this way.

Further, like I said in my previous message, we should be able to do a
lot more to ensure that the browser navigates to https:// instead of
http:// when https:// is available. This would likely significantly
reduce the number of websites for which the negative branding would be
shown.

Having said all of that, I remember that Mozilla did some user
research ~3 years ago that showed that when we show a negative
security indicator like the broken lock icon, a significant percentage
of users interpreted the problem to lie in the browser, not in the
website--i.e. the security problem is Firefox's fault, not their
favorite website. It would be important to do research to confirm or
(hopefully) refute this finding.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Problem (Error Code: sec_error_bad_der)

2014-07-21 Thread Brian Smith
On Thu, Jul 10, 2014 at 6:36 AM, Ernesto Acosta  wrote:
> With Firefox 30 everything works fine for me so far with the tests I've
> done. But with Firefox Nightly I have problems when trying to access my
> business sites that do not have a valid SSL certificate.
>
> When I try to access some of these sites, I get the following message:
>
> "Secure connection failed
>
> An error occurred during a connection to uinfo.correos.cu. security library: 
> DER encoded incorrectly formatted message. (Error code: sec_error_bad_der)

Please list the sites that you are having the trouble with.
https://uinfo.correos.cu doesn't connect for me now in any browser.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Proposal: Switch generic icon to negative feedback for non-https sites

2014-07-21 Thread Brian Smith
On Mon, Jul 21, 2014 at 8:50 PM, Eric Mill  wrote:
> Not claiming to have the solution at hand, but the best first step might be
> non-scolding, non-lock-related imagery that clearly and affirmativ' ely gets
> across that this is a *public* connection.

I think you have the right idea. Keep in mind that browsers reserve a
significant amount of space in the address bar for the organization
name in an EV certificate. So, we don't have to limit ourselves to the
square space that the lock icon occupies. For example, we could
replace the globe icon with gray text "Not Secure." That would be a
clear message for people who looked at it, and it would encourage
websites to switch to HTTPS, but it probably wouldn't be overly scary
(at least it's not red!). People who object to getting a certificate
for their website should be willing to accept browsers saying their
non-secure website is not secure.

Although the lock icon is often interpreted to mean "Secure," we know
that there are a lot of factors that go into whether a website is
secure. But, clearly, HTTPS is a necessary condition. Thus, it makes
sense to say "Not Secure" for non-HTTPS, but it doesn't make sense to
say explicitly "Secure" for HTTPS.

Further, this would work better if we stopped cutting off the
"http://" prefix for non-secure sites, and if browsers made more of an
effort to try https:// URIs when the scheme is omitted from a domain
name or URL typed (or pasted) into the address bar. Right now,
browsers omit the "http://" as a hint that it is not necessary to type
it in. But, we should also make it unnecessary to type in "https://"
to get the secure variant of a page, so the current UI doesn't make
sense.

A good start for this might be building, maintaining, and sharing a
list of websites that should default to https:// in the address bar,
even if they are not HSTS. This would include, for example,
https://www.google.com, https://en.wikipedia.org/, and
https://bing.com/.
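
To make the idea concrete, here is a minimal sketch of how an address
bar might consult such a preloaded list. The list contents and all
names here are invented for illustration; nothing like this exists in
Gecko today, as far as I know.

#include <set>
#include <string>

// Hypothetical preloaded list; the entries are just the examples given
// above, not a real or complete list.
static const std::set<std::string> kHTTPSDefaultHosts = {
  "www.google.com", "en.wikipedia.org", "bing.com",
};

// When the user types a bare hostname, default to https:// for hosts on
// the list and to http:// otherwise.
std::string DefaultURLForTypedHost(const std::string& host)
{
  const char* scheme =
      kHTTPSDefaultHosts.count(host) != 0 ? "https://" : "http://";
  return scheme + host;
}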

I fully support efforts to make address bar UI changes like this
happen. They are overdue; at the least, it is unlikely that anything
will change in the future to make such changes easier later than they
are now.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Checking certificate requirements

2014-05-28 Thread Brian Smith
On Wed, May 28, 2014 at 4:42 PM, Ryan Sleevi <
ryan-mozdevsecpol...@sleevi.com> wrote:

> Whether it's version 1 or 3 has no effect on path building. If the policy
> does require this, it's largely for cosmetic reasons than any strong
> technical reasons.
>
> That said, cutting a new v3 root may involve bringing the root signing key
> out of storage, hoisting a signing ceremony, etc. It may not be worth the
> cost. NSS could, if it wanted, create dummy certs (with invalid
> signatures) that looked just like the real thing, and things 'should' just
> work (mod, you know, the inevitable avalanche of bugs that crop up when I
> make statements like this).
>

mozilla::pkix will not trust a v1 certificate as an intermediate CA, but it
does accept v1 root certificates for backward compatibility with NSS and
for the reasons Ryan mentioned.

v1 TLS end-entity certificates do not comply with our policy because a v1
certificate cannot (according to the spec) contain a subjectAltName
extension and we require all TLS end-entity certificates to contain
subjectAltName. Similarly, v1 certificates cannot legally contain an OCSP
responder URI, which is also required as a practical matter.
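
Sketched as code, the combination of the library rule and the policy
rule above is roughly the following. This is a simplification of mine,
not the actual mozilla::pkix source, and the names are invented.

// Simplified sketch (mine, not the mozilla::pkix source) of the rules
// described above.
enum class CertRole { TrustAnchor, IntermediateCA, EndEntity };

bool V1CertAcceptable(CertRole role)
{
  switch (role) {
    case CertRole::TrustAnchor:
      return true;   // library: accepted for NSS backward compatibility
    case CertRole::IntermediateCA:
      return false;  // library: v1 is never trusted as an intermediate CA
    case CertRole::EndEntity:
      return false;  // policy: a v1 cert cannot carry the required
                     // subjectAltName or OCSP responder URI
  }
  return false;      // unreachable; keeps compilers happy
}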

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA Communication - May 12, 2014

2014-05-14 Thread Brian Smith
On Wed, May 14, 2014 at 10:06 AM, Patrick Kobly  wrote:

> Perhaps I'm dense and missing something or perhaps this isn't the right
> place to be asking.  Why would this necessitate bringing the CA online when
> responses can be signed by an Authorized Responder (i.e. cert with EKU
> id-kp-OCSPSigning)?
>

Right. Bulk preproduction of direct-signed OCSP responses is another way of
handling it. Nobody wants CA signing keys to be online more than otherwise
necessary just to support shorter validity periods for OCSP responses.


> FWIW, Rob's concerns regarding the change process are certainly reasonable.
>

We did not intentionally want to short-circuit any process. I implemented
the restriction to 10 days due to a misunderstanding of the baseline
requirements, and then we decided my misunderstanding is better than what
the BRs would say, so we considered leaving my misunderstanding in the code
while we concurrently worked to improve the BRs to match my
misunderstanding. Ultimately, we decided to revert to the less-reasonable
but more compatible behavior.

It is OK (good even) for us to add additional requirements that go beyond
the baseline & EV requirements and not everything has to be approved
through CAB Forum. We do it all the time (otherwise our CA program
documentation would consist solely of "See the Baseline Requirements and EV
Requirements"). Google is doing the same with their proposed CT
requirements for EV. In this case, though, it was just an accident.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: EKUs covered in the Mozilla CA Program

2014-05-13 Thread Brian Smith
On Tue, May 13, 2014 at 6:01 PM, Peter Bowen  wrote:

> On Tue, May 13, 2014 at 11:45 AM, David Keeler 
> wrote:
> > On 05/13/2014 06:48 AM, Peter Bowen wrote:
> >> I think the biggest question probably is id-kp-clientAuth.  From a
> >> quick scan of the NSS certdb code, it seems that setting this EKU in a
> >> CA cert would allow it to issue serverAuth and emailProtection certs.
> >> Therefore it would seem reasonable to include this as well.
> >
> > That may well be the case for NSS. However, the new certificate
> > verification library under development and in use by default in
> > Firefox >= 31 does not allow this.
> >
> > In case you hadn't heard about it, the new library is "mozilla::pkix".
>
> In the certdata.txt file, there are only four trust attributes used.
> No certificate has CKA_TRUST_CLIENT_AUTH or CKA_TRUST_TIME_STAMPING.
> Does this mean that, with the switch to mozilla::pkix, Mozilla and NSS
> is not defining any CA as trusted to issue certificates for client
> authentication or time stamping?
>

How to interpret the NSS trust bits for client authentication is the
subject of bug 982340 [1]. I am not sure that NSS even has a way of
indicating trust anchors for time stamping. Further, Gecko doesn't use time
stamping, so time stamping isn't relevant to mozilla::pkix.

mozilla::pkix is currently built on top of NSS and we didn't change
anything about how NSS works for mozilla::pkix. The way trust information
is stored is pluggable in mozilla::pkix via the TrustDomain [2] interface
and mozilla::pkix doesn't decide itself which certificates are trusted. In
Gecko, we have an implementation of TrustDomain that uses trust information
stored in NSS called NSSCertDBTrustDomain [3], and we have another one
called AppTrustDomain [4] that is hard-coded to trust only one certificate
only for code signing.

Cheers,
Brian

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=982340
[2]
http://mxr.mozilla.org/mozilla-central/source/security/pkix/include/pkix/pkixtypes.h#48
[3]
https://mxr.mozilla.org/mozilla-central/source/security/certverifier/NSSCertDBTrustDomain.cpp?rev=daee17c14581#70
[4]
https://mxr.mozilla.org/mozilla-central/source/security/apps/AppTrustDomain.cpp?rev=c968e47ef708#105
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DRAFT: May CA Communication

2014-05-13 Thread Brian Smith
On Tue, May 13, 2014 at 6:07 AM, Gervase Markham  wrote:

> The Firefox requirement is that serverAuth be included. It doesn't say
> anyEKU must be not included.
>

NSS's classic cert verification and mozilla::pkix do not implement the
special semantics for anyExtendedKeyUsage, and apparently it is extremely
uncommon (based on no pleas for us to add support), so it is good for us to
discourage its use for compatibility reasons and to allow for simpler
implementations.

If the certs you mention require EKU not to be present (what spec says
> they can even do that?), then are these certs that need to be recognised
> in Firefox, or not?
>

EKU is an *optional* extension according to RFC5280. The default (no EKU)
is like including anyExtendedKeyUsage. Since some libraries do not
implement support for anyExtendedKeyUsage, using the default (omitted EKU
extension) behavior is a valid (according to the specification) way to get
the effect of anyExtendedKeyUsage while being compatible with products that
don't support anyExtendedKeyUsage. Additionally, adding an EKU extension
with (just) anyExtendedKeyUsage is wasteful in terms of space usage in the
cert.
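
In code, the semantics described above amount to roughly the following
sketch. This is mine, not actual NSS or mozilla::pkix code, and OID is
just a placeholder type.

#include <algorithm>
#include <vector>

using OID = std::vector<unsigned char>;  // placeholder for an encoded OID

// Returns whether a cert whose EKU extension is `eku` (nullptr when the
// extension is absent) permits `requiredUsage`, under an implementation
// that does not give anyExtendedKeyUsage special treatment.
bool UsagePermitted(const std::vector<OID>* eku, const OID& requiredUsage)
{
  if (!eku) {
    return true;  // no EKU extension: RFC 5280's default, all usages OK
  }
  // EKU present: only an explicit match counts, because
  // anyExtendedKeyUsage is not honored.
  return std::find(eku->begin(), eku->end(), requiredUsage) != eku->end();
}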

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DRAFT: May CA Communication

2014-05-08 Thread Brian Smith
On Thu, May 8, 2014 at 6:40 AM, Gervase Markham  wrote:

> On 06/05/14 20:58, Brian Smith wrote:
> > That isn't quite right either. It is OK for the intermediate certificate
> to
> > omit the EKU extension entirely.
>
> Well, not if we fix
> https://bugzilla.mozilla.org/show_bug.cgi?id=968817
> which Brian agreed that we could do.
>

We can *try* doing it for *end-entity* certificates. However:

1. IIRC, in that bug we were using information from Google's CT database to
conclude that every end-entity cert we can find has an EKU with
id-kp-serverAuth. However, Google's CT database doesn't (IIUC) include any
certificates for custom trust anchors. So, it could be the case that for
compatibility with custom root CAs, we may not be able to enforce this
extra requirement on top of RFC 5280.

2. The discussion in bug 968817
(https://bugzilla.mozilla.org/show_bug.cgi?id=968817) was/is about
end-entity certificates. It isn't reasonable for us to enforce
a requirement that intermediate certificates have an id-kp-serverAuth EKU,
because that is non-standard (hopefully just not-yet standard).


> I think we should be aiming to require serverAuth in all intermediates
> and EE certs for SSL. I think that makes it much less likely that we
> will end up accepting as valid for SSL a cert someone has issued for
> another purpose entirely (e.g. smartcards).
>

It seems impractical to do that any time soon since that requirement is
stricter than any standard requires and I it is not hard to show
significant counter-examples where that would break the web, e.g.
https://mail.google.com. That is why I stated my suggestions for this the
way I did.

Kathleen is right that our policy doesn't require the use of technical
constraints if the certificate is audited as a public CA would be. However,
I think it is obvious that even publicly-disclosed & audited (sub-)CAs
benefit from implementing technical constraints. Consequently, it still
makes sense for us to recommend the use of technical constraints for
publicly-disclosed and audited external sub-CAs, and to require them for
other external sub-CAs.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Behavior changes - inhibitAnyPolicy extension

2014-05-06 Thread Brian Smith
On Tue, May 6, 2014 at 3:48 PM, Kathleen Wilson  wrote:

> It has been brought to my attention that the above statement is very
> difficult to understand.
>




> Any preference?
>

Let's just fix bug 989051 so that we can remove this statement completely.
It makes more sense to fix our bugs than it does to wordsmith a suggestion
to CAs for how to work around our bugs. The other things we're asking CAs
to do are actual problematic practices that need to be addressed, and we're
better off letting them focus on those things than on working around our bugs.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DRAFT: May CA Communication

2014-05-06 Thread Brian Smith
On Mon, May 5, 2014 at 4:45 PM, Kathleen Wilson  wrote:

> OK. Changed to the following.
>
> https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Things_for_CAs_to_Fix
> --
> 1. For all new intermediate certificate issuance, use the "TLS Web Server
> Authentication (1.3.6.1.5.5.7.3.1)" (serverAuth) EKU if that intermediate
> certificate will be signing SSL certificates. Mozilla will stop recognizing
> the "Netscape Server Gated Crypto (2.16.840.1.113730.4.1)" EKU.
>

That isn't quite right either. It is OK for the intermediate certificate to
omit the EKU extension entirely. But, if an intermediate does include an
EKU extension then it must include id-kp-serverAuth (1.3.6.1.5.5.7.3.1).
Additionally, no certificates should contain the Netscape Server Gated
Crypto (2.16.840.1.113730.4.1) EKU, which is already no longer recognized
for end-entity certificates and which will be no longer supported for
intermediate certificates soon.

New externally-operated subordinate CA certificates should/must include an
EKU extension that does NOT contain id-kp-serverAuth (1.3.6.1.5.5.7.3.1) or
anyExtendedKeyPurpose (2.5.29.37.0) if the subordinate CA is not authorized
to issue TLS server certificates. Conversely, new externally-operated
subordinate CA certificates should/must include an EKU extension with
id-kp-serverAuth (1.3.6.1.5.5.7.3.1) if they are allowed to issue TLS
certificates.

Remember that we added the new enforcement of EKU in intermediates in
mozilla::pkix in order to enhance the ability of CAs to technically
constrain externally-operated sub-CAs.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Behavior changes - inhibitAnyPolicy extension

2014-04-28 Thread Brian Smith
[+dev-tech-crypto; Please discuss technical details of mozilla::pkix on
dev-tech-crypto and save dev-security-policy for discussion about Mozilla's
CA inclusion policies. There has been and will be a lot of technical
discussion on the behavior differences and rationale for those
differences--e.g. why policy mapping isn't supported and why policy
constraints are ignored--on the dev-tech-crypto mailing list. You can
subscribe at https://lists.mozilla.org/listinfo/dev-tech-crypto]

Responses inline.

On Mon, Apr 28, 2014 at 4:52 PM, Brown, Wendy (10421) <
wendy.br...@protiviti.com> wrote:

> Kathleen -
>
> In looking at this Draft CA Communications, I looked at the description of
> Behavior change and #5 doesn't look like the change is the right
> interpretation:
>
> 5. If the inhibitAnyPolicy extension is present in an intermediate
> certificate or trust anchor and children certificates have a certificate
> policy extension the verification will fail. bug 989051
>

I believe the above description of the behavior is wrong in a small but
important way. A better description would be "A certificate will not be
considered an EV certificate if mozilla::pkix cannot build a path to a
trusted root that does not contain any certificates with the
inhibitAnyPolicy extension. However, such certificates will still validate
as non-EV as long as there are no non-policy-related issues."
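
Sketched as code (hypothetical; this paraphrases the behavior rather
than quoting the implementation, and the names are mine):

// First an EV verification is attempted with the CA's specific EV policy
// OID; a path containing a cert with inhibitAnyPolicy is rejected for
// that attempt. Verification is then retried with anyPolicy, yielding an
// ordinary (non-EV) result if any acceptable path exists.
enum class Validity { EV, NonEV, Invalid };

Validity ClassifyCert(bool evPathWithoutInhibitAnyPolicyExists,
                      bool anyPolicyPathExists)
{
  if (evPathWithoutInhibitAnyPolicyExists) {
    return Validity::EV;
  }
  if (anyPolicyPathExists) {
    return Validity::NonEV;  // e.g. only paths using inhibitAnyPolicy
  }
  return Validity::Invalid;
}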

Please check out bug 989051:
https://bugzilla.mozilla.org/show_bug.cgi?id=989051. We know the current
behavior is wrong (at least, non-optimal). At the same time, it is a
low-priority issue because we only care about certificate policies for
determining whether or not to show the green site identity block for EV
certificates, and we suspect that no EV CAs in Mozilla's program use
inhibitAnyPolicy.

Some background: The original implementation of mozilla::pkix forced
inhibitAnyPolicy=true; i.e. we did not support anyPolicy at all. My hope
was that, when we define the WebPKI profile of X.509, that we would not
have anyPolicy in the profile. However, we found that some CAs in our CA
program depend on anyPolicy support for the EV certificates they issued, so
we had to add anyPolicy support after the fact, and we did it in the
simplest way possible because we were eager to work on higher-priority
issues. If somebody contributes better inhibitAnyPolicy support (with
tests) then that patch would be accepted. (Compare that to policy mapping,
where I probably wouldn't accept a patch even if one was contributed for
free.)

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: No CRL UI as of Firefox 24

2014-04-14 Thread Brian Smith
On Sun, Apr 13, 2014 at 7:41 AM, Michael Ströder wrote:

> Brian Smith wrote:
> > I always thought that the CRL requirement was in there because long ago
> > we expected that we'd eventually start fetching CRLs at some point, and
> > then it was left in there due to inertia, mostly.
> >
> > Keep in mind that S/MIME and TLS have different requirements.
>
> I really wonder why this is discussed *after* quickly hunking out CRL
> support
> even from Thunderbird and Seamonkey.
>

I spent a long time talking with Brendan and also with my former managers
at Mozilla about how we support Thunderbird while we improve Gecko for
Firefox. Unfortunately, Mozilla hasn't done a good job of frankly
explaining the policy. Basically, Gecko developers are allowed to work
almost as though Thunderbird doesn't exist; if we break something in
Thunderbird, it's up to the Thunderbird developers to fix it. It isn't
clear whether Gecko developers actually have a responsibility to help
integrate fixes for Thunderbird, but I and others do review the
Thunderbird team's patches to Gecko that fix breakage we caused.

In this instance, if we've broken something regarding S/MIME, it's up to
the Thunderbird developers to notice and fix it. Unfortunately, there are
no automated tests of S/MIME functionality (AFAICT) so it is likely that
nobody will notice if/when we break anything with S/MIME in Thunderbird.
AFAICT, the Thunderbird project is in need of somebody to volunteer to
maintain the S/MIME functionality, and has been in need of somebody for a
long time.

For a long time I've been wanting to remove the S/MIME code from Gecko
completely. It is likely that will happen the next time the S/MIME code
inconveniences us during Firefox development. I would guess that the
Thunderbird team would then import the S/MIME code into comm-central and
maintain it there.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: OCSP and must staple

2014-04-10 Thread Brian Smith
On Thu, Apr 10, 2014 at 3:54 PM, Phillip Hallam-Baker wrote:

> One of the problems with OCSP is the hardfail issue. Stapling reduces
> latency when a valid OCSP token is supplied but doesn't allow a server
> to hardfail if the token isn't provided as there is currently no way
> for a client to know if a token is missing because the server has been
> borked or if the server doesn't staple.
>
> This draft corrects the problem. It has been in IETF limbo due to the
> OID registry moving. But I now have a commitment from the AD that they
> will approve the OID assignment if there is support for this proposal
> from a browser provider:
>

David Keeler was working on implementing Must-Staple in Gecko. You can
point them to these two bugs:

https://bugzilla.mozilla.org/show_bug.cgi?id=921907
https://bugzilla.mozilla.org/show_bug.cgi?id=901698

The work got stalled because we decided to fix some infrastructure issues
(like the new mozilla::pkix cert verification library) first. Now that
work is winding down, I think we'll be able to finish the Must-Staple
implementation soon. Check with David.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: OCSP and must staple

2014-04-10 Thread Brian Smith
On Thu, Apr 10, 2014 at 3:54 PM, Phillip Hallam-Baker wrote:

> One of the problems with OCSP is the hardfail issue. Stapling reduces
> latency when a valid OCSP token is supplied but doesn't allow a server
> to hardfail if the token isn't provided as there is currently no way
> for a client to know if a token is missing because the server has been
> borked or if the server doesn't staple.
>
> This draft corrects the problem. It has been in IETF limbo due to the
> OID registry moving. But I now have a commitment from the AD that they
> will approve the OID assignment if there is support for this proposal
> from a browser provider:
>
> https://tools.ietf.org/html/draft-hallambaker-tlsfeature-02
>
> So anyone in mozilla space willing to co-author?
>

Hi Phillip,

I am working on another draft to do something similar with an HTTP header
(like Strict-Transport-Security) and I would be happy to co-author with
you. Note that I am not a Mozilla Corp employee any more though.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: No CRL UI as of Firefox 24

2014-03-14 Thread Brian Smith
On Fri, Mar 14, 2014 at 4:05 AM, Gervase Markham  wrote:

> On 13/03/14 19:20, Rick Andrews wrote:
> > Is it because Mozilla intends to build CRLs sets in the future?
>
> Yes.
>

I always thought that the CRL requirement was in there because long ago
we expected that we'd eventually start fetching CRLs at some point, and
then it was left in there due to inertia, mostly.

Keep in mind that S/MIME and TLS have different requirements. OCSP is a
significant privacy issue for both. We can resolve that for TLS by
switching to OCSP stapling. But, there's no good, practical stapling
solution for S/MIME (S/MIME can in theory do stapling, but nobody does). It
may be that we need to have separate requirements for S/MIME and TLS.

I think it will make sense to revise the CRL requirement in the future,
after we've figured out how we're going to build the revocation lists, and
after we've figured out Must-Staple, and after we've figured out the issue
in the preceding paragraph. If we don't end up using CRLs for building the
revocation list then CRLs could very well be useless. Also, nobody has
proposed a plan for using CRLs to build the revocation list, though Google
does do that.

Practically speaking, I'd argue strongly against including any CAs that
don't support OCSP in our root store and/or EV programs. I wouldn't
argue, at this time, against any CAs that don't do CRLs, but that could
change.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert Request to Include Renewed Roots

2014-01-28 Thread Brian Smith
On Tue, Jan 28, 2014 at 8:45 PM, David E. Ross  wrote:
> On 1/28/2014 4:37 PM, Brian Smith wrote:
>> Benefits of my counter-proposal:
>> 1. Fewer roots for us to manage.
>> 2. Sites that forget to include their intermediates in their TLS cert
>> chain are more likely to work in Firefox, without us having to do AIA
>> caIssuers, because of us preloading the intermediates.
>> 3. Because of #1, there is potential for us to design a simpler root
>> certificate management UI.
>> 4. We can do optimizations with the preloading of intermediates to
>> avoid building the whole chain every time. (That is, we can
>> precalculate the trust of the intermediates.)
>
> I do not consider "Benefit #2" to be a benefit.  This would mean that
> Mozilla is enabling poor security practices by allowing server
> administrators to be lazy and incompetent -- allowing them to tell users
> their browsing session is secure while the server is incompletely
> configured.

First, let me split my proposal into two parts:

Part 1:
I'm proposing that we add five certs that are equivalent to the five
certs that DigiCert wants to add, EXCEPT that only one of them would
be a trusted root, and the other four would be intermediates of that
root. So, as far as what I'm proposing here is concerned, there would
be no change as to what websites would be required or not required to
send in their SSL handshakes, if DigiCert continues to require an
intermediate between the end-entity cert and any of those five certs.

Part 2:
It is considered bad practice by some to issue certificates directly
from a root. But, since four of those certificates wouldn't be roots,
then DigiCert could issue certificates directly off of them without
doing the thing that is perceived to be bad. If they did so, then,
because those intermediates would be preloaded into NSS, we would be
able to tolerate the failure of a website to send the intermediate
certificate.

I understand that it is not 100% great to do things that encourage
websites to skip the inclusion of intermediates in their certificate
chains, but we're currently on the losing side of this compatibility
issue since we also do not implement caIssuers. And, we've helped make
the problem bad by caching intermediates collected from surfing the
internet; the consequence of this is that when a website admin is
testing their broken configuration in Firefox, they often won't notice
the missing intermediate because Firefox has papered over the issue by
having cached the needed intermediate from the CA's website. I'd like
us to stop doing that, and it is likely that doing so will require us
to preload quite a few intermediates to maintain compatibility.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert Request to Include Renewed Roots

2014-01-28 Thread Brian Smith
On Tue, Jan 28, 2014 at 4:25 PM, Kathleen Wilson  wrote:
> DigiCert has applied to include 5 new root certificates that will eventually
> replace the 3 DigiCert root certificates that were included in NSS via bug
> #364568. The request is to turn on all 3 trust bits and enable EV for all of
> the new root certs.
>
> 1) DigiCert Assured ID Root G2 -- This SHA-256 root will eventually replace
> the SHA-1 “DigiCert Assured ID Root CA” certificate.
>
> 2) DigiCert Assured ID Root G3 -- The ECC version of the Assured ID root.
>
> 3) DigiCert Global Root G2 -- This SHA-256 root will eventually replace the
> SHA-1 “DigiCert Global Root CA” certificate.
>
> 4) DigiCert Global Root G3 -- The ECC version of the Global root.
>
> 5) DigiCert Trusted Root G4 -- This SHA-384 root will eventually replace the
> SHA-1 “DigiCert High Assurance EV Root CA” certificate.

I object, only on the grounds that there is no technical need to have
more than one root. I have a counter-proposal:

1. Add DigiCert Trusted Root G4 with all three trust bits set.
2. Ask DigiCert to issue versions of their intermediates that are
signed/issued by "DigiCert Trusted Root G4".
3. Remove the existing DigiCert roots.
4. Preload all the intermediates signed by DigiCert Trusted Root G4
(with no trust bits, so they inherit trust from DigiCert Trusted Root
G4) into NSS.

Benefits of my counter-proposal:
1. Fewer roots for us to manage.
2. Sites that forget to include their intermediates in their TLS cert
chain are more likely to work in Firefox, without us having to do AIA
caIssuers, because of us preloading the intermediates.
3. Because of #1, there is potential for us to design a simpler root
certificate management UI.
4. We can do optimizations with the preloading of intermediates to
avoid building the whole chain every time. (That is, we can
precalculate the trust of the intermediates.)

This would set a good precedent for us to follow with all other CAs.
By working with all CAs to do something similar, we would end up with
one root per CA, and with a bunch of preloaded intermediates. Then we
can separate the view of intermediates from the view of roots in the
UI, and the UI will become much simpler.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Please Advise: What is the preferred source for 3rd parties to pull certdata.txt?

2013-12-17 Thread Brian Smith
On Tue, Dec 17, 2013 at 5:01 AM, Leif W  wrote:

> Hello,
>
> Many 3rd party software applications pull copies of the certdata.txt to
> generate PEM files (perhaps other uses). Recently, for example, I was
> looking at curl's mk-ca-bundle script, and it pulls from MXR's mozilla[1]
> which is nearly a year old.
>

This is better discussed on the dev-tech-crypto mailing list.

You should always be able to retrieve the file over HTTPS from
hg.mozilla.org. If I had to choose a version to use for a non-Firefox
application, I guess I would choose one of the tagged release versions from
https://hg.mozilla.org/projects/nss.

However, be aware that certdata.txt is designed to be used (only) by NSS's
build system. In theory the file format could change at any time and/or we
could change the way NSS works so that you need additional information to
construct an equivalent trust policy to NSS. Similarly, we could change how
Gecko works so that it has different (probably stricter) criteria than NSS.
For example, we are looking into the possibility of a short-term solution
for name-constraining some root certificate(s) and AFAICT there is no way
to generate a ca-bundle file that will be equivalent to the trust policy
that NSS and/or Gecko would have.

Consequently, I recommend that you manually review whichever certdata.txt
you use, and also look at the release notes for the version of NSS that you
retrieved it from. I would consider such manual review to be necessary but
still insufficient for safely using certdata.txt outside of NSS.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


As of Firefox 28, Firefox will not fetch CRLs during EV certificate validation

2013-12-12 Thread Brian Smith
Previously, Firefox would try to use OCSP to check revocation of an EV
certificate, and then fall back to CRLs if there was no OCSP URI in the AIA
extension or if the OCSP fetch failed. In Firefox 28 and later, Firefox
will stop trying to use CRLs. See bug 585122 [1] which is fixed in Firefox
28. Firefox 28 will be released 2014-03-18.

Because Firefox does not fall back from a failed OCSP request to a CRL
request, an unreliable OCSP responder will now be more likely to result in
an EV certificate being displayed as a non-EV certificate. Websites can
avoid this issue, mostly, by supporting OCSP stapling. (I say "mostly"
because it is still an issue for the intermediate certificate). For this
reason, I highly recommend that all EV sites enable OCSP stapling.

Another consequence of this is that certificates that are missing the OCSP
URI in the AIA extension, but which are otherwise valid EV certificates,
will no longer be displayed as EV certificates. Previous versions of
Firefox (27 and before) would display these certificates as EV certificates
if revocation checking via CRL succeeded.

If a website has a certificate without an OCSP AIA URL, but which is
otherwise a valid EV certificate, then the website administrator can get
the EV indicator back by either replacing the certificate with one that
does include an OCSP URI, or by manually configuring OCSP on the server to
use a fixed OCSP URI. In Apache, the option for setting the OCSP responder
URI is SSLStaplingForceURL [2]. I recommend you only use
SSLStaplingForceURL if your certificate does not contain an OCSP URI in the
AIA extension.
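
For example, a minimal Apache httpd 2.4 configuration might look like
the following. This is illustrative only: the paths and the responder
URL (ocsp.example.com is a placeholder) must be adapted, and
SSLStaplingForceURL should be set only when the certificate lacks an
OCSP URI in its AIA extension.

# Server-wide: enable stapling and its (required) cache.
SSLUseStapling on
SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"

<VirtualHost *:443>
  SSLEngine on
  SSLCertificateFile    /path/to/server.crt
  SSLCertificateKeyFile /path/to/server.key
  # Only because this cert has no OCSP URI in its AIA extension:
  SSLStaplingForceURL http://ocsp.example.com
</VirtualHost>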

If the certificate chain that the CA provided is missing the OCSP URI in
the intermediates' AIA extension(s), then the certificate chain will need
to be replaced with one where all the necessary intermediates have the OCSP
URI in the AIA extension, in order for the EV indicator to be displayed in
Firefox.

For most users, this change marks the end of CRL support in Firefox (at
least in Firefox's default configuration), and we can act as though CRLs no
longer exist. (CRLs can still be imported into Firefox's certificate
database manually with command line tools, for now. However, people should
not expect Firefox to notice CRLs in the CRL database in Firefox 29 or
later.) Note that Firefox has never supported CRL fetching for non-EV
certificates.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=585122
[2] http://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslstaplingforceurl

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Exceptions to 1024-bit cert revocation requirement

2013-12-11 Thread Brian Smith
On Wed, Dec 11, 2013 at 3:47 PM,  wrote:

> Well let's be clear about one thing: in Firefox land (as in others) there
> is no such thing as revocation; there is only changing the code.
>

Changing the code is required because currently-standardized revocation
mechanisms don't work effectively or in a reasonable way.

As far as our support for currently-standardized revocation mechanisms goes,
we do support OCSP stapling in Firefox and we even currently still support
fetching OCSP responses from CA's websites.

As far as our support for future revocation mechanisms goes, we are currently
doing the foundational work to add new features for making OCSP stapling
more effective and also better-performing, and we will work with others in
standards organizations to get such improvements standardized and widely
deployed.

Getting to a state where revocation checking is effective and performant
requires CAs, server software developers, server administrators, and
clients (browsers) to cooperate. The more cooperation there is, the better
things will work.

People who are system administrators of websites should enable OCSP
stapling. If your web server doesn't support OCSP stapling then please ask
your vendor to add OCSP stapling support. If your CA issued you a
certificate without an OCSP responder URI then please ask your CA to
replace it with one that has an OCSP responder URI. Then you will have
minimized the future work you need to do to support effective revocation
mechanisms.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

