Re: [FORGED] Re: Germany's cyber-security agency [BSI] recommends Firefox as most secure browser

2019-10-18 Thread Peter Bowen via dev-security-policy
On Fri, Oct 18, 2019 at 6:31 PM Peter Gutmann via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Paul Walsh via dev-security-policy 
> writes:
>
> >I have no evidence to prove what I’m about to say, but I *suspect* that
> the
> >people at BSI specified “EV” over the use of other terms because of the
> >consumer-visible UI associated with EV (I might be wrong).
>
> Except that, just like your claims about Mozilla, they never did that, they
> just give a checklist of cert types, DV, OV, and EV.  If there was a
> Mother-validated cert type, the list would no doubt have included MV as
> well.
>

I think this is even easier. Kirk linked the article which links to the
actual requirements at
https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Mindeststandards/Mindeststandard_Sichere_Web-Browser_V2_0.pdf

In section SW.2.1.01, it says "Zertifikate mit domainbasierter Validierung
(Domain-Validated-Zertrifikate, DV), mit organisationsbasierter Validierung
(Organizational-Validated-Zertifikate, OV) sowie Zertifikate mit
erweiterter Prüfung (Extended-Validation-Zertifikate) MÜSSEN unterstützt
werden".

Bing Microsoft Translator says the English translation is "Certificates
with domain-based validation (domain-validated certrifikate, DV), with
organization-based validation (Organizational-Validated Certificates, OV)
as well as certificates with Extended Validation Certificates MUST be
supported"

This appears to be the only reference to EV in the requirements.  Given the
discussion has been around moving the UI treatment of EV to match OV
(versus having a distinct EV-only UI treatment), I don't think there is
likely to be any impact on the BSI conformance results.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-30 Thread Peter Bowen via dev-security-policy
On Fri, Aug 30, 2019 at 10:22 AM Kirk Hall via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I'll just reiterate my point and then drop the subject.  EV certificate
> subject information is used by anti-phishing services and browser phishing
> filters, and it would be a loss to the security ecosystem if this EV data
> disappears (meaning that the decision on removal of the EV UI has greater
> repercussions than just whether or not users can tell in the primary UI if
> their website does or does not have any confirmed identity information).
>

Kirk,

I have to admit that the first time I ever heard of browser phishing
filters and Internet security products (such as Trend Micro, Norton,
McAfee, etc.) differentiating between DV and EV SSL certificates as part of
their algorithm is in this thread, from you.  As someone who has a website,
I would really appreciate it if you could point to where this is
documented.  This morning I looked at a couple of network security vendor
products I've used and couldn't find any indication they differentiate, but
if there are ones that do it would certainly influence my personal decision
on the kind of certificates to use and to recommend others to use.

I'm not personally aware of anyone doing this.  Are you aware of any
product literature that discusses this?

Thanks,
Peter


Representing one's employer

2019-08-29 Thread Peter Bowen via dev-security-policy
(forking this to a new subject)

On Thu, Aug 29, 2019 at 5:54 PM Kirk Hall via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> What the heck does it mean when sometimes you say you are posting "in a
> personal capacity" and sometimes you don't?  To me, it always appears that
> your postings on the Mozilla list are always the same as your postings on
> the CA/Browser Forum list and are always for the purpose of promoting [your
> employer's] policies and objectives.  Is there really a difference?
>

Kirk,

You ask a very important question that deserves a clear answer.  Yes, there
is a difference.  If I'm posting on behalf of my employer, the post can be
attributed to my employer and could be quoted as $EMPLOYER says ... while
if I'm posting as an individual, this is not true.

Many people, including myself and many others who participate in this
group, work for companies they do not control.  These companies frequently
have specific policies for their employees about who can speak on behalf of
the company and under what circumstances they can speak on behalf of the
company.  See, for example, https://www.ibm.com/blogs/zz/en/guidelines.html

The concept of authority to represent a legal entity and the fact that not
everyone who works for an entity has authority to commit the entity to
agreements is fairly well known.  The CA/Browser Forum EV Guidelines
recognize this when they require that the "CA MUST verify that the Contract
Signer is authorized by the Applicant to enter into the Subscriber
Agreement (and any other relevant contractual obligations) on behalf of the
Applicant".  I expect that many questions would come up if someone
indicated they are employed as a summer intern yet authorized to obligate
their employer to an agreement.

You point out that frequently personal opinions and the opinions of one's
employer align.  This is not all that surprising to me.  What it tells me
is that the poster is probably influential in their organization and has
convinced those who determine the position of the legal entity to align the
position with their thinking.  IBM says in their guidelines "the following
standard disclaimer should be prominently displayed: 'The postings on this
site are my own and don't necessarily represent IBM's positions, strategies
or opinions'" when posting.  Note that it doesn't say "don't represent",
rather "don't necessarily represent".  There are cases where an employee's
personal opinions will be aligned with their employer's and vice versa; this
does not mean they always will align.

Another way to think about this is that participation in Mozilla may easily
exceed the duration of one's employment with a given employer.  Looking
back, my first bug filed with Mozilla was 21 years and several employers
ago (https://bugzilla.mozilla.org/show_bug.cgi?id=7368) and my first
certificate related bug was filed before I worked for any part of Amazon (
https://bugzilla.mozilla.org/show_bug.cgi?id=546176).  I can assure you I
wasn't speaking on behalf of those employers then and I'm not speaking for
my current employer in this post.

I've tried to make clear for whom I'm speaking by using different email
addresses; @gmail.com for personal posts and @.com for the rare
times I'm speaking on behalf of my employer.  As you have pointed out,
identity is important in order to know with whom you are interacting.

Thanks,
Peter

(not speaking for my employer)


Re: DigiCert OCSP services returns 1 byte

2019-08-29 Thread Peter Bowen via dev-security-policy
On Thu, Aug 29, 2019 at 10:38 AM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, Aug 29, 2019 at 1:15 PM Jeremy Rowley via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Thanks for posting this Curt.  We investigated and posted an incident
> > report on Bugzilla. The root cause was related to pre-certs and an error
> in
> > generating certificates for them. We're fixing the issue (should be done
> > shortly).  I figured it'd be good to document here why pre-certs fall
> under
> > the requirement so there's no confusion for other CAs.
> >
>
> Oh, Jeremy, you were going so well on the bug, but now you've activated my
> trap card (since you love the memes :) )
>
> It's been repeatedly documented every time a CA tries to make this
> argument.
>
> Would you suggest we remove that from the BRs? I'm wholly supportive of
> this, since it's known I was not a fan of adding it to the BRs for
> precisely this sort of creative interpretation. I believe you're now the
> ... fourth... CA that's tried to skate on this?
>
> Multiple root programs have clarified: The existence of a pre-certificate
> is seen as a binding commitment, for purposes of policy, by that CA, that
> it will or has issued an equivalent certificate.


Is there a requirement that a CA return a valid OCSP response for a
pre-cert if they have not yet issued the equivalent certificate?

Is there a requirement that a CA return a valid OCSP response for a serial
number that has never been assigned?  I know of several OCSP responders
that return a HTTP error in this case.

Thanks,
Peter


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-23 Thread Peter Bowen via dev-security-policy
On Thu, Aug 22, 2019 at 1:44 PM kirkhalloregon--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Some have responded there is no research saying EV sites have
> significantly less phishing (and are therefore safer) than DV sites – Tim
> has listed two studies that say exactly that, and I’m not aware of any
> studies that say the opposite.  I can tell you that anti-phishing services
> and browser phishing filters have also have concluded that EV sites are
> very unlikely to be phishing sites and so are safer for users.
>
> Some opponents of the EV UI say it should go away because users don’t
> understand or know how to evaluate the specific organization information
> that’s displayed.  That’s true to a point – but an improved EV UI for
> Firefox could follow Apple’s example by showing a binary “identity/no
> identity” UI that would be easy for users to understand – green lock symbol
> and URL for identity (EV), black for no identity (DV).  If users want to
> see the specific organization information for the identity sites, it can be
> displayed with one click on the green lock symbol.
>


> users will have different needs to scrutinize identity information at
> different times.  Let’s look at currency, for example.  Currency contains
> many marks to validate its legitimacy such as watermarks, holograms, and
> the like.  The same person may treat currency differently based on
> context.  The same person might take cash out of the ATM with little close
> scrutiny but then look closely at the money received from a scalper at a
> sporting event or concert.  In the first case, the context is considered to
> be low risk, and in the second it’s considered to be high risk.  The
> security indicators are always there, so the relying party can take
> advantage of them when they’re warranted.



> To close - browsers love data, and Mozilla has a lot of really smart
> engineers.  That’s why I hope Mozilla will come up with innovative ways to
> use EV data, and not just drop it.
>

Kirk,

I think you hit the nail on the head here.  One of the big advantages of
the PKI model used in the public Internet is that certificates are
independent of browsers.   Different systems can use information contained
in the same certificate in different ways.  The validated information is
present in the certificate regardless of the browser UI.  Many browser
users have plugins installed to help detect malicious websites and software
downloads (frequently as part of an overall Internet security suite along
with anti-virus and anti-malware scanners).  These Internet security tools
can use the EV data to help implement user controllable policies completely
independent of the core browser UI.

Additionally, many people and organizations have filtering proxies that can
do some level of introspection.  My home router can do network filtering
and I know large enterprise firewalls do the same.  In TLS 1.2, they can
review the certificate and terminate the connection if it doesn't meet the
policies of the proxy owner.  This is an ideal place to check EV and does
not rely upon the end user remembering to check if the lock is black or
green.
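As an illustration of the kind of check such a proxy could perform: a policy
engine can look for the CA/Browser Forum EV policy OID (2.23.140.1.1) in a
certificate's certificatePolicies extension.  Here is a minimal sketch using
the third-party Python `cryptography` package; the self-signed certificate is
only a stand-in for a real server certificate, and the names are invented:

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# CA/Browser Forum Extended Validation certificate policy identifier
EV_POLICY = x509.ObjectIdentifier("2.23.140.1.1")

def asserts_ev(cert: x509.Certificate) -> bool:
    """Return True if the certificate claims the EV policy OID."""
    try:
        policies = cert.extensions.get_extension_for_class(
            x509.CertificatePolicies
        ).value
    except x509.ExtensionNotFound:
        return False
    return any(p.policy_identifier == EV_POLICY for p in policies)

# Build a throwaway self-signed certificate carrying the EV OID,
# purely to exercise the check above.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=1))
    .add_extension(
        x509.CertificatePolicies([x509.PolicyInformation(EV_POLICY, None)]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
```

A real deployment would of course validate the chain first and would treat
the policy OID as one signal among many, not as proof of anything on its own.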

There are also opportunities for browsers here.  I have to admit I
primarily use Google Chrome, rather than Firefox, so my observations may be
a little tainted, but I see various places where signals far more valuable
than the green lock could be implemented.  Consider that most browsers
recognize credit card entry fields -- wouldn't it be great if clicking on
one on an EV site showed a little drop down under the input box that said
"[CA name here] has certified that [EV info here] is receiving your credit
card information"?

I don't see the currently proposed change in the Firefox UI as having a
notable impact on the future of EV certificates.

Thanks,
Peter


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-14 Thread Peter Bowen via dev-security-policy
On Wed, Aug 14, 2019 at 10:16 AM Jakob Bohm wrote:

> On 14/08/2019 18:18, Peter Bowen wrote:
> > On thing I've found really useful in working on user experience is to
> > discuss things using problem & solution statements that show the before
> and
> > after.  For example, "It used to take 10 minutes for the fire sprinklers
> to
> > activate after sensing excessive heat in our building.  With the new
> > sprinkler heads we installed they will activate within 15 seconds of
> > detecting heat above 200ºC, which will enable fire suppression long
> before
> > it spreads."
> >
>
> It used to be easy for fraudsters to get an OV certificate with untrue
> company information from smaller CAs.  By only displaying company
> information for more strictly checked EV certificates, it now becomes
> much more difficult for fraudsters to pretend to be someone else, making
> fewer users fall for such scams.
>
> Displaying an overly truncated form of the company information, combined
> with genuine high-trust companies (banks, credit card companies) often
> using obscure subsidiary names instead of their user trusted company
> names for their EV certs has greatly reduced this benefit.
>
> > If we assume for a minute that Firefox had no certificate information
> > anywhere in the UI (no subject info, no issuer info, no way to view
> chains,
> > etc), what user experience problem would you be solving by adding
> > information about certificates to the UI?
>
> This hasn't been the case since before Mozilla was founded.
>
> But lets assume we started from there, the benefit would be to tell
> users when they were dealing with the company they know from the
> physical world versus someone almost quite unlike them.
>
> Making this visible with as few (maybe 0) extra user actions increases
> the likelihood that users will spot the problem when there is one.
>

What is the problem being solved?  You specify the benefit but I'm still
not clear why this info is needed in the first place.

Thanks,
Peter


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-14 Thread Peter Bowen via dev-security-policy
On Tue, Aug 13, 2019 at 4:24 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> A policy of switching from positive to negative indicators of security
> differences is no justification to switch to NO indication.  And it
> certainly doesn't help user understanding of any indicator to
> arbitrarily change it with 3 days of no meaningful discussion.
>
> The only thing that was insecure with Firefox EV has been that the
> original EV indicator only displayed the O= and C= field without enough
> context (ST, L).  The change fixes nothing, but instead removes the direct
> indication of
> the validation strength (low-effort DV vs. EV) AND removes the one piece
> of essential context that was previously there (country).
>
> If something should be done, it would be to merge the requirements for
> EV and OV with an appropriate transition period to cause the distinction
> to disappear (so at least 2 years from new issuance policy).  UI
> indication should continue to distinguish between properly validated OV
> and the mere "enable encryption with no real checks" DV certificates.
>

I have to admit that I'm a little confused by this whole discussion.  While
I've been involved with PKI for a while, I've never been clear on the
problem(s) that need to be solved that drove the browser UIs and creation
of EV certificates.

On thing I've found really useful in working on user experience is to
discuss things using problem & solution statements that show the before and
after.  For example, "It used to take 10 minutes for the fire sprinklers to
activate after sensing excessive heat in our building.  With the new
sprinkler heads we installed they will activate within 15 seconds of
detecting heat above 200ºC, which will enable fire suppression long before
it spreads."

If we assume for a minute that Firefox had no certificate information
anywhere in the UI (no subject info, no issuer info, no way to view chains,
etc), what user experience problem would you be solving by adding
information about certificates to the UI?

Thanks,
Peter

(speaking only for myself, not my employer)


Re: Disclosure and CP/CPS for Cross-Signed Roots

2019-07-18 Thread Peter Bowen via dev-security-policy
On Thu, Jul 18, 2019 at 11:40 AM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Andrew Ayer filed two bugs yesterday that might be worthy of a bit
> of discussion. They both appear to be in reference to root certificates
> included in the Mozilla program that are cross-signed by a different TSP
> (CA). In both cases the TSP that signed the cross-certificate has had it
> audited, and disclosed it in CCADB as operating under their own CPS.
>
> For example:
> TSP 1 has Root A (subject A, issuer A, public key A) included in the
> Mozilla root store
> TSP 2 has Root B (subject B, issuer B, public key B) also included in the
> Mozilla root store
> TSP 2 has signed a cross certificate (subject A, issuer B, public key A)
> with Root B.
> TSP 2 has disclosed the cross-certificate in CCADB, has it included in
> their audit, and asserts that it is operated under their CP/CPS.
>
> One issue, that I recall having been previously discussed, is that TSP
> 1 has no way of knowing if another TSP has cross-signed one of their CA
> certificates, so it makes sense to require disclosure from the TSP that
> issued the cross-certificate.
>
> I think Andrew is asserting that the cross-certificate is really operated
> by the root TSP that is in control of the key-pair (TSP 1), and should be
> audited and disclosed as such. Should that be our policy?
>

I think this confusion stems from the fact that the CCADB mixes the concept
of trust anchors and certificates (nodes and edges in the trust graph).  A
certificate is a link between two entities - the issuer and the subject.
The creation of a certificate is governed by the issuer's practices but the
use of the key that is certified by the issuer is governed by the subject's
practices.

When it comes to audits, the issuer and subject can each make different
auditable assertions.  The issuer can assert that the certificate was
created in accordance with the issuer's practices and the issuer can
describe controls around publishing status and revocation information about
the certificate.  The subject can describe controls around the generation
of the subject key and storage and usage of the private key associated with
the certified public key.  Unfortunately this nuance is currently not
describable in the CCADB.  It expects that a certificate hash is included
in the audit report but does not allow separate listing of trust anchors.

I think that the process should be updated to list CAs (subject, subject
public key, subject key identifier), in addition to listing the CA
certificates.
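The node/edge distinction can be made concrete with Wayne's TSP example
above (a toy sketch; the labels come straight from the example):

```python
# Nodes: CAs identified by (subject, public key).
ca_a = ("Subject A", "Public Key A")  # key pair controlled by TSP 1
ca_b = ("Subject B", "Public Key B")  # key pair controlled by TSP 2

# Edges: certificates, each linking an issuer node to a subject node.
certificates = [
    {"subject": ca_a, "issuer": ca_a, "label": "Root A (self-signed)"},
    {"subject": ca_b, "issuer": ca_b, "label": "Root B (self-signed)"},
    {"subject": ca_a, "issuer": ca_b, "label": "cross-cert from TSP 2"},
]

def cross_signers(node):
    """CAs other than the node itself that have certified its key."""
    return [c["issuer"] for c in certificates
            if c["subject"] == node and c["issuer"] != node]

# One trust anchor (node) can be the subject of several certificates
# (edges), which is why an audit report that lists only certificate
# hashes cannot unambiguously identify the CA being audited.
nodes = {c["subject"] for c in certificates}
```

Here two nodes carry three certificates, and the cross-certificate for
`ca_a` was created under TSP 2's practices even though the key it certifies
is operated under TSP 1's.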

Thanks,
Peter


Re: Policy 2.7 Proposal: Exclude Policy Certification Authorities from EKU Requirement

2019-04-29 Thread Peter Bowen via dev-security-policy
I support this, as long as Policy CAs meet the same operational standards
and have the same issuance restrictions as root CAs. This would result in
no real change to policy, as I assume roots not directly included in the
Mozilla root store were already considered “roots” for this part of the
policy.

On Fri, Apr 26, 2019 at 4:02 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> In version 2.6 of our Root Store Policy, we added the requirement to
> section 5.3 that intermediate certificates contain an EKU and separate
> serverAuth and emailProtection uses. Version 2.6.1 updated the requirement
> to exclude cross certificates [1]. Last month, an issue [2] was filed
> requesting that we add "Policy Certification Authorities" (PCAs) as another
> exception.
>
> PCAs are described in RFC 5280 as a CA certificate that is only used to
> issue other CA certificates, so excluding PCAs from this requirement would
> not in theory weaken it. However, I'm not aware of any way to technically
> enforce that PCAs not issue end-entity certificates, and allowing more
> exceptions would seem to make this policy more difficult to enforce. In
> addition, RFC 5280 section 3.2 appears to reference PCAs as an example of
> an architecture that should be abandoned in favor of x509v3 certificate
> extensions:
>
>With X.509 v3, most of the requirements addressed by RFC 1422 can be
>addressed using certificate extensions, without a need to restrict
>the CA structures used.  In particular, the certificate extensions
>relating to certificate policies obviate the need for PCAs...
>
> This is https://github.com/mozilla/pkipolicy/issues/172
>
> I will appreciate everyone's input on this proposal.
>
> - Wayne
>
> [1]
>
> https://github.com/mozilla/pkipolicy/commit/a8353e12db6128d9a01de7ab94949180115a2d92
> [2] https://github.com/mozilla/pkipolicy/issues/172
>


Re: Applicability of SHA-1 Policy to Timestamping CAs

2019-03-22 Thread Peter Bowen via dev-security-policy
On Fri, Mar 22, 2019 at 11:51 AM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I've been asked if the section 5.1.1 restrictions on SHA-1 issuance apply
> to timestamping CAs. Specifically, does Mozilla policy apply to the
> issuance of a SHA-1 CA certificate asserting only the timestamping EKU and
> chaining to a root in our program? Because this certificate is not in scope
> for our policy as defined in section 1.1, I do not believe that this would
> be a violation of the policy. And because the CA would be in control of the
> entire contents of the certificate, I also do not believe that this action
> would create an unacceptable risk.
>
> I would appreciate everyone's input on this interpretation of our policy.
>

Do you have any information about the use case behind this request?  Are
there software packages that support a SHA-2 family hash for the issuing CA
certificate for the signing certificate but do not support SHA-2 family
hashes for the timestamping CA certificate?


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-14 Thread Peter Bowen via dev-security-policy
On Thu, Mar 14, 2019 at 4:33 AM Rob Stradling via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 14/03/2019 01:09, Peter Gutmann via dev-security-policy wrote:
> 
> > I'd already asked previously whether any CA wanted to indicate publicly
> that
> > they were compliant with BR 7.1, which zero CAs responded to (I counted
> them
> > twice).
>
> Peter,
>
> Mozilla Root Store Policy section 2.3 [1] requires CAs to conform to the
> latest version of the Baseline Requirements.  So ISTM that until or
> unless a CA publicly states that they are non-compliant with BR 7.1, we
> should act as if that CA has publicly stated that they are compliant
> with BR 7.1.
>
> FWIW though, you can find a public statement from Sectigo at [2].
>
>
> [1]
>
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#23-baseline-requirements-conformance
>
> [2]
>
> https://sectigo.com/blog/all-sectigo-public-certificates-meet-64-bit-serial-number-requirements


As I posted in a related thread, we can see that both Boulder and R509
implement serial generation which conforms to BR 7.1.  Both are open
source CA software packages written by organizations that run CAs in the
Mozilla program.  Unless the public code has different generation
semantics than the production code (which would be very strange), one can
surmise users of these packages are compliant.  Additionally, many other
CAs are known to have built their own software and/or use software other
than EJBCA, so making any generalization isn't really valid.
Thanks,
Peter


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-11 Thread Peter Bowen via dev-security-policy
On Mon, Mar 11, 2019 at 10:00 AM Daymion Reynolds via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Glad you agree 64bit serial numbers can have no fixed bits, as a fixed bit
> in a 64 bit serial number would result in less than 64 bits of entropy.  If
> you are going to fix a significant bit it must be beyond the 64th bit.  If
> your 64 bit serial number does not contain 1's in the significant byte, as
> long as you still write 64 full bits of data to the cert with 0's left
> padded, then the desired entropy is achieved and is valid. CAs should keep
> this in mind while building their revocation lists.
>

You can't left-pad with zeros in DER.  DER permits at most one leading
zero octet, and only when it is needed to keep a value with the high bit
set positive; otherwise leading zeros are forbidden.

You could go for more than a 64-bit serial length and set the upper two
bits to 01 to avoid the issue, so the most significant byte is between 64
and 127 inclusive.  You would need at least 9 octets for the serial
number, but this is no more than what you have 50% of the time now.
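The scheme described above can be sketched as follows (an illustration
only, not production CA code; the function name is invented):

```python
import os

def generate_serial(num_octets: int = 9) -> int:
    """Sketch of the scheme above: a 9-octet serial whose most
    significant byte is forced into the range 64-127 (top bits '01').
    The DER INTEGER is then always exactly 9 content octets: the value
    is positive without a sign-padding octet, has no leading zeros to
    strip, and still contains well over 64 bits of CSPRNG output."""
    raw = bytearray(os.urandom(num_octets))
    raw[0] = 0x40 | (raw[0] & 0x3F)  # force MSB into [0x40, 0x7F]
    return int.from_bytes(raw, "big")

serial = generate_serial()
# DER content length of a positive INTEGER is bit_length // 8 + 1
# (the +1 covers the sign-padding octet when the high bit is set).
assert serial.bit_length() // 8 + 1 == 9
```

Fixing the top two bits still leaves 70 bits of CSPRNG output in the
serial, comfortably above the 64 bits BR 7.1 requires.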

Thanks,
Peter


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-08 Thread Peter Bowen via dev-security-policy
On Fri, Mar 8, 2019 at 7:55 PM Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Fri, Mar 8, 2019 at 9:49 PM Ryan Sleevi  wrote:
>
> > I consider that only a single CA has represented any ambiguity as being
> > their explanation as to why the non-compliance existed, and even then,
> > clarifications to resolve that ambiguity already existed, had they simply
> > been sought.
> >
>
> Please contemplate this question, which is intended as rhetorical, in the
> most generous and non-judgmental light possible.  Have you contemplated the
> possibility that only one CA attempted to do so because you've stated your
> interpretation and because they're subject to your judgement and mercy,
> rather than because the text as written reflects a single objective
> mechanism which matches your own position?
>

Matthew,

I honestly doubt so.  It seems that one CA software vendor had a buggy
implementation, but we know this is not universal.  For example,
https://github.com/r509/r509/blob/05aaeb1b0314d68d2fcfd2a0502f31659f0de906/lib/r509/certificate_authority/signer.rb#L132
and https://github.com/letsencrypt/boulder/blob/master/ca/ca.go#L511 are
open source CA software packages that clearly do not have the issue.
Further, at least one CA has publicly stated their in-house written CA
software does not have the issue.

I know, as the author of cablint, that I didn't have any confusion on this
point.  I didn't add more checks because of the false positive rate: if I
checked for 64 or more bits, the check would be wrong 50% of the time.  The
rate is still unacceptable with even looser rules; in 1/256 cases the top 8
bits will all be zero, leaving the serial a whole byte shorter.
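The 50% and 1-in-256 figures can be checked with a quick simulation
(illustrative only; the helper name is invented):

```python
import random

def der_int_len(value: int) -> int:
    """Content octets in the DER encoding of a positive INTEGER
    (one extra octet when the high bit is set, for the sign)."""
    return value.bit_length() // 8 + 1

rng = random.Random(0)  # fixed seed so the run is repeatable
trials = 200_000
samples = [rng.getrandbits(64) for _ in range(trials)]

# ~50% of raw 64-bit values have the top bit set, so their DER
# encoding needs a 9th (sign-padding) octet ...
frac_nine_octets = sum(1 for v in samples if der_int_len(v) == 9) / trials
# ... and ~1/256 have an all-zero top byte, so a fixed 8-octet serial
# field would carry a leading zero that DER requires be stripped.
frac_zero_top_byte = sum(1 for v in samples if v >> 56 == 0) / trials
```

This is why a lint that simply demanded "64 or more bits" of encoded serial
would misfire on perfectly compliant CAs.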

I do personally think that the CAs using EJBCA should not be faulted here;
their vendor added an option to be compliant with the BRs and it was very
non-obvious that it had a bug in the implementation.  Based on my
experience with software development, we should be encouraging CAs to use
well tested software rather than inventing their own, when possible.

Thanks,
Peter


Re: The current and future role of national CAs in the root program

2019-03-07 Thread Peter Bowen via dev-security-policy
On Thu, Mar 7, 2019 at 11:45 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Currently the Mozilla root program contains a large number of roots that
> are apparently single-nation CA programs serving their local community
> almost exclusively, including by providing certificates that they can
> use to serve content with the rest of the world.
>
> For purposes of this, I define a national CA as a CA that has publicly
> self-declared that it serves a single geographic community almost
> exclusively, with that area generally corresponding to national borders
> of a country or territory.
>


> 5. Should the root program policies provide rules that enforce the
>   self-declared scope restrictions on a CA[?]


This has been discussed and the decision was no.  This in turn moots your
6-9.

> 10. The root trust data provided in the Firefox user interface does not
>   clearly indicate the national or other affiliation of the trusted
>   roots, such that concerned users may make informed decisions
>   accordingly.  Ditto for the root program dumps provided to other
>   users of the Mozilla root program data (inside and outside the Mozilla
>   product family).  For example, few users outside Scandinavia would
>   know that "Sonera" is really a national CA for the countries in which
>   Telia-Sonera is the incumbent Telco (Finland, Sweden and Åland).
>

Mozilla has specifically chosen to not distinguish between "government
CAs", "national CAs", "commercial CAs", "global CAs", etc.  The same rules
apply to every CA in the program.  Therefore, the "national or other
affiliation" is not something that is relevant to the end user.

These have all been discussed before and do not appear to be relevant to
any current conversation.

Thanks,
Peter


Re: DarkMatter Concerns

2019-03-07 Thread Peter Bowen via dev-security-policy
On Thu, Mar 7, 2019 at 12:09 AM Benjamin Gabriel via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> A fair and transparent public discussion requires full disclosure of each
> participant's motivations and ultimate agenda.  Whether in CABForum, or
> Mozilla-dev-security-policy, I represent the viewpoints of my employer
> DarkMatter and passionately believe in our unflagging efforts to provide
> the citizens, residents and visitors to the United Arab Emirates with the
> same internet security and privacy protections that are taken for granted
> in other parts of the world.
>
> On Wednesday, March 6, 2019 7:51 PM, Ryan Sleevi wrote:
> >  (Writing in a personal capacity)
>
> Until such time as we have been formally advised by your employer
> (Google), that you no longer represent their views in CABForum, or in this
> Mozilla-dev-security-policy forum, we will proceed on the basis that all of
> your statements are the official viewpoint of your employer (Google).
>

Benjamin,

This statement is at odds with how the mozilla.dev.security.policy group
works.  Many people who are active in the Mozilla community, both in this
group and others, do so independently of their employer. I think it is safe
to assume that the majority of people you will meet in the Mozilla
community have paid employment; those employers may or may not be involved
with Mozilla.  When participants make it clear they are writing in a
personal capacity, or when they explicitly state they are representing
their employer, then that is what we as fellow participants should accept.
There is a page on the Mozilla wiki (
https://wiki.mozilla.org/CA/Policy_Participants ) that has a list of common
participants and whether they are speaking for anyone else.

I will note that in this specific email, to be clear, I am writing in a
personal capacity.  I have not discussed this with anyone else at my
employer and my employer may not even agree with what is in this email.  Or
they may. I simply do not know and would have to ask someone who is in a
position to represent my employer to find out.  This is what I mean when I
say I'm writing in a personal capacity.


> sovereign nations have the fundamental right to provide digital services
> to their own citizens, utilizing their own national root, without being
> held hostage by a provider situated in another nation.  You should note
> that DarkMatter's request is also for the inclusion of UAE's national root.
>
> Benjamin Gabriel
> General Counsel
> Dark Matter Group
>
>
> Benjamin Gabriel | General Counsel & SVP Legal
>

I think this is a great example of why it is important to be clear on who
you are speaking for when participating in public groups.  As per
Kathleen's post (which was clear it was posted in her role as a Mozilla
module owner), this discussion is about the subordinate CAs which Dark
Matter operates.  It was my impression that these are operated on a
commercial basis by one of the Dark Matter Group of companies and you are
writing as a representative of the Dark Matter Group.  You then raise that
DarkMatter is also requesting inclusion of the United Arab Emirates
national root.  This would appear to imply that DarkMatter is also acting
as a representative of the Government of the UAE.  We have seen other
governments use privately owned contractors to help operate their national
PKIs and these contractors have participated in the Mozilla groups.

Can you please clarify if you are speaking for Dark Matter as a commercial
entity or if you are speaking for the Government of the UAE?

Thanks,
Peter


Re: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-25 Thread Peter Bowen via dev-security-policy
On Fri, Jan 25, 2019 at 10:40 AM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I mean, it's using an ACE label. That's where Ballot 202 would have
> clarified and required more explicit validation of the ACE labels to
> address the SHOULD NOT from https://tools.ietf.org/html/rfc3490#section-5
> to
> a MUST NOT.
>
> The CA can perform ToASCII(ToUnicode(label)) == label to validate.
>

 Ballot 202 explicitly required that ToUnicode(label) works (i.e. is valid
Punycode).  ToASCII() has a number of different parameters and different
clients use different parameter values.  I don't think the BRs should
require that CAs use a specific combination because that would effectively
mean that certain clients would not be able to use TLS with IDNs.
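The round-trip check Ryan suggests can be sketched concretely. This is an illustration using Python's stdlib `encodings.idna` (an IDNA2003 implementation with one fixed set of parameters), not the exact procedure any CA uses:

```python
# Sketch of ToASCII(ToUnicode(label)) == label, the round-trip test
# mentioned above.  Python's stdlib encodings.idna implements IDNA2003
# with a single fixed parameter choice, which is exactly the issue:
# other clients may pick different ToASCII parameters.
from encodings import idna

def ace_label_round_trips(label: str) -> bool:
    try:
        u_label = idna.ToUnicode(label)   # A-label -> U-label
        a_label = idna.ToASCII(u_label)   # U-label -> A-label (bytes)
    except UnicodeError:
        return False
    return a_label.decode("ascii").lower() == label.lower()

print(ace_label_round_trips("xn--bcher-kva"))  # True: valid Punycode ("bücher")
print(ace_label_round_trips("plain-label"))    # True: non-ACE labels pass trivially
```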

Thanks,
Peter


Re: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-24 Thread Peter Bowen via dev-security-policy
On Thu, Jan 24, 2019 at 7:36 AM Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 2019-01-24 15:41, Rob Stradling wrote:
> >
> > Here's an example cert containing the A-label in the SAN:dNSName and the
> > U-label in the CN.  (It was issued by Sectigo, known back then as Comodo
> > CA, before we switched to always putting the A-label in the CN):
> >
> > https://crt.sh/?id=213062481&opt=cablint,x509lint,zlint
> >
> > x509lint agrees with your opinion (unsurprisingly!), but both cablint
> > and zlint complain.
>
> x509lint doesn't do anything related to this. I've disabled the code to
> check that the CN is one of the SANs because I didn't write the code
> related to the conversion from the U-label to the A-label yet. It used
> to behave exactly like zlint and say it doesn't match, but I think
> that's wrong. It's was clearly my intention to say that a certificate
> like that is the correct way to do it. One of the reasons I didn't do
> this is that it was not obvious to me at that time which is the correct
> standard to use, which I guess is why this thread was started.


You don’t need to choose between IDNA2003 and 2008 to do A-label to
U-label. That direction is identical for both.   So you can try each of the
SANs and see if it decodes to the CN.
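Concretely, that comparison can be sketched with the stdlib IDNA2003 codec; A-label to U-label conversion gives the same answer under IDNA2008, so no policy choice is needed. The certificate names below are invented for illustration:

```python
# Sketch: decode each SAN dNSName from A-labels to U-labels and
# compare against the CN.  A-label -> U-label decoding is identical
# under IDNA2003 and IDNA2008, so either flavor of library would do.
from encodings import idna

def to_u_labels(name):
    """Convert each label of a dotted name to its U-label form."""
    return ".".join(idna.ToUnicode(label) for label in name.split("."))

def cn_matches_some_san(cn, sans):
    return any(cn == to_u_labels(san) for san in sans)

sans = ["xn--bcher-kva.example.com", "example.com"]
print(cn_matches_some_san("bücher.example.com", sans))  # True
```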



Re: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-24 Thread Peter Bowen via dev-security-policy
On Thu, Jan 24, 2019 at 4:17 AM Buschart, Rufus via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello
>
> > -Ursprüngliche Nachricht-
> > Von: Hanno Böck 
> > Gesendet: Donnerstag, 24. Januar 2019 12:36
> >
> > On Thu, 24 Jan 2019 11:14:11 + Buschart, Rufus wrote:
> >
> > > You are right, of course there are mandatory RFC to take into account.
> > > But there is - to my knowledge - no RFC that says, you MUST NOT issue
> > > a certificate to a domain that could be interpreted as an
> > > IDNA2008 punycode.
> >
> > https://tools.ietf.org/html/rfc5891
> >
> > 4.2.3.1.  Hyphen Restrictions
> >
> >The Unicode string MUST NOT contain "--" (two consecutive hyphens) in
> >the third and fourth character positions and MUST NOT start or end
> >with a "-" (hyphen).
> >
> > This means you can't have a valid host name that is just
> xn--[something]. You can only have it if it is also a valid IDN name.
> >
> I don't read it like this. This chapter describes the "Unicode string"
> which is the U-label before conversion. The hostname is the A-label after
> conversion and in the certificate you find the hostname. The RFC 3490
> clearly addressed this issue:
>
>While all ACE labels begin with the ACE prefix, not all labels
>beginning with the ACE prefix are necessarily ACE labels.  Non-ACE
>labels that begin with the ACE prefix will confuse users and SHOULD
>NOT be allowed in DNS zones.
>
> But first of all this is only a SHOULD requirement and second it places
> the burden on the operator of the DNS zones.
>

I agree with Rufus.  There are really two issues here:

1) The original reports to the CAs claimed an issue because RFC 5280
references the original IDNA RFCs (now known as IDNA2003).

RFC 5280 says "Rules for encoding internationalized domain names are
specified in Section 7.2."
Section 7.2 says: "one choice in GeneralName is the dNSName field, which is
defined as type IA5String. IA5String is limited to the set of ASCII
characters.  To accommodate internationalized domain names in the current
structure, conforming implementations MUST convert internationalized domain
names to the ASCII Compatible Encoding (ACE) format as specified in Section
4 of RFC 3490 before storage in the dNSName field."

This makes it clear that the conversion requirement only applies when a
name does not already meet IA5String semantics.  Therefore
"xn--foo-bar-ghost.example.com" and "zq--special.example.com" are both
acceptable in certificates, as neither needs encoding and both are valid
preferred name syntax.

2) How should CAs handle this going forward?

RFC 8399, dated May 2018, explicitly updates RFC 5280.  It says "Conforming
CAs SHOULD ensure that IDNs are valid.  This can be done by validating all
code points according to IDNA2008 [RFC5892]."  Note that this is only a
"SHOULD".  The CA/Browser Forum ballot 202 attempted to make this stricter,
requiring that CAs not issue for names that contain Reserved LDH labels
unless they start with the ACE prefix and the remainder is valid Punycode.
However this ballot failed.

This leaves us at the point that CAs "SHOULD" ensure IDNs are valid, but
they may issue for names with any LDH label, provided the name passes the
domain control validation required by the BRs.
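For illustration, the stricter rule that the failed ballot 202 attempted, as summarized above, might look like this sketch. It is my paraphrase of the intent, not the ballot text:

```python
# Hedged sketch: any "Reserved LDH label" (hyphens in the third and
# fourth character positions, per RFC 5890/5891 terminology) must
# carry the "xn--" ACE prefix, and the remainder must decode as
# valid Punycode.  Ordinary LDH labels are out of scope.

def label_ok_under_ballot_202(label: str) -> bool:
    if label[2:4] != "--":
        return True                      # ordinary LDH label: out of scope
    if not label.lower().startswith("xn--"):
        return False                     # reserved label without ACE prefix
    try:
        label[4:].encode("ascii").decode("punycode")
        return True                      # valid Punycode after the prefix
    except UnicodeError:
        return False

print(label_ok_under_ballot_202("example"))        # True
print(label_ok_under_ballot_202("xn--bcher-kva"))  # True
print(label_ok_under_ballot_202("zq--special"))    # False
```

Under the rule that actually stands, "zq--special" remains issuable; under the ballot it would not have been.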

Maybe Mozilla should add something about acceptable LDH labels to the CA
policy?

Thanks,
Peter


Re: When should honest subscribers expect sudden (24 hours / 120 hours) revocations?

2018-12-29 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 8:43 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> So absent a bad CA, I wonder where there is a rule that subscribers
> should be ready to quickly replace certificates due to actions far
> outside their own control.


 Consider the following cases:

- A company grows and moves to larger office space down the street.  It
turns out that the new office is in a different city even though the move
was only two blocks away.  The accounting department sends the CA a move
notice so the CA sends invoices to the new address.  Does this mean the CA
has to revoke all existing certificates in 5 days?
- Widget LLC is a startup with widgetco.example.  They want to take
investment so they change to a C-corp and become Widget, Inc.  Widget Inc
now is the registrant for widgetco.example. Does this now trigger the 5 day
rule?
- Same example as above, but the company doesn't remember to update the
domain registration.  It therefore is invalid, as it points to a
non-existence entity.  Does this trigger the 5 day rule?

- The IETF publishes a new RFC that "Updates: 5280".  It removes a previously valid
feature in certificates.  Do all certificates using this feature need to be
revoked within 5 days?

- The IETF publishes a new RFC that "Updates: 5280".  It says it updates 5280 as follows:

Old: Conforming CAs SHOULD use the UTF8String encoding for explicitText,
but MAY use IA5String. Conforming CAs MUST NOT encode explicitText as
VisibleString or BMPString.

New: Conforming CAs SHOULD use the UTF8String encoding for explicitText.
VisibleString or BMPString are acceptable but less preferred alternatives.
Conforming CAs MUST NOT encode explicitText as IA5String.

Must a CA revoke all certificates that use IA5String?

- A customer has a registered domain name that has characters that current
internationalized domain name RFCs do not allow (for example xn--df-oiy.ws,
the A-label form of ✪df.ws).  A CA issues because this is a registered
domain name according to
the responsible TLD registry.  Must this be revoked within 5 days if the CA
notices?

- A customer has a certificate with a single domain name in the SAN which
is an internationalized domain name.  The commonName attribute in the
subject contains the IDN.  However the CN attribute uses U-labels while the
SAN uses A-labels.  Whether this is allowed has been the subject of debate
at the CA/Browser Forum as neither BRs nor RFCs make this clear.  Do any
certificates using U-labels in the CN need to be revoked?

The list can continue to go on, but I bring these up as examples of
reasonable cases that may have surprising results.

Thanks,
Peter



Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 9:04 AM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, 27 Dec 2018 15:30:01 +0100
> Jakob Bohm via dev-security-policy
>  wrote:
>
> > The problem here is that the prohibition lies in a complex legal
> > reading of multiple documents, similar to a situation where a court
> > rules that a set of laws has an (unexpected to many) legal
> > consequence.
>
> I completely disagree. This prohibition was an obvious fact, well known
> to (I had assumed prior to this present fever) everyone who cared about
> the Internet's underlying infrastructure.
>
> The only species of technical people I ever ran into previously who
> professed "ignorance" of the rule were the sort who see documents like
> RFCs as descriptive rather than prescriptive and so their position
> would be (as it seems yours is) "Whatever I can do is allowed". Hardly
> a useful rule for the Web PKI.
>

As I wrote in the thread on underscores, I am one of the people who
believed it was not clear if underscores were allowed or not.  This was
reflected in the earliest versions of certlint/cablint.

If you think it should have been clear, consider the following examples
from the real world:
- The character Asterisk (U+002A, '*') is not allowed in dNSName SANs per
the same rule forbidding Low Line (U+005F, '_').   RFC 5280 does say:
"Finally, the semantics of subject alternative names that include wildcard
characters (e.g., as a placeholder for a set of names) are not addressed by
this specification.  Applications with specific requirements MAY use such
names, but they must define the semantics."  However it never defines what
"wildcard characters" are acceptable.  As Wikipedia helpfully documents,
there are many different characters that can be wildcards:
https://en.wikipedia.org/wiki/Wildcard_character.  The very same ballot
that attempted to clarify the status of the Low Line character tried to
clarify wildcards, but it failed.  The current BRs state "Wildcard FQDNs
are permitted." in the section about subjectAltName, but the term "Wildcard
FQDN" is never defined.  Given the poor drafting, I might be able to argue
that Low Line should be considered a wildcard character that is designed to
match a single character, similar to Full Stop (U+002E, '.') in regular
expressions.

- The meaning of the extendedKeyUsage extension in a CA certificate is
unclear.  There are at least two views: 1) It constrains the use of the
public key in the certificate and 2) It constrains the use of end-entity
public keys certified by the CA named in the CA certificate.  This has been
discussed multiple times on the IETF PKIX mailing list and no consensus has
been reached.  Similarly, the X.509 standard does not clarify.  Mozilla
takes the second option, but it is entirely possible that a clarification
could show up in a future RFC or X.500-series doc that goes with the first
option.

These are just two cases where the widely deployed and widely accepted
status does not match the RFC.


> > It would benefit the honesty of this discussion if the side that won
> > in the CAB/F stops pretending that everybody else "should have known"
> > that their victory was the only legally possible outcome and should
> > never have acted otherwise.
>
> I would suggest it would more benefit the honesty of the discussion if
> those who somehow convinced themselves of falsehood would accept this
> was a serious flaw and resolve to do better in future, rather than
> suppose that it was unavoidable and so we have to expect they'll keep
> doing it.
>

Of course people are going to try to do better, but part of that is
understanding that people are not perfect and that even automation can
break. I wrote certlint/cablint with hundreds of tests and continue to get
reports of gaps in the tests.  Yes, things will get better, but we need to
get them there in an orderly way.

Thanks,
Peter


Re: Underscore characters

2018-12-27 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 12:53 PM thomas.gh.horn--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> As to why these certificates have to be revoked, you should see this the
> other way round: as a very generous service of the community to you and
> your customers!
>
> Certificates with (pseudo-)hostnames in them are clearly invalid, so a
> conforming implementation should not accept them for anything and they
> should not pose any security risk. Based on this assessment (no revokation
> if no security risk), a CA could very well issue a certificate including
> any of the (psuedo-)hostnames "example.com_cvs.com", "example.com/cvs.com",
> "cvs.com/example.com", "https://example.com/cvs.com", "example@cvs.com"
> to the owner of example.com (who, arguably, has the exact same right to
> them as the owner of cvs.com has) and refuse to revoke them.
>

I'm not clear how you get that the owner of example.com is covered anywhere
here.  Parsed into labels, these all have 'com' as the label closest to the
root and then have 'com_cvs', 'com/cvs', 'com/example', 'com/cvs', and
'example@cvs' as the next label respectively.  None have 'example' as the
next label.
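The parse is easy to see by splitting each claimed name into DNS labels:

```python
# Splitting the quoted (pseudo-)hostnames into DNS labels on '.':
# the label adjacent to the root label "com" is never "example".
names = [
    "example.com_cvs.com",
    "example.com/cvs.com",
    "cvs.com/example.com",
    "https://example.com/cvs.com",
    "example@cvs.com",
]
for name in names:
    labels = name.split(".")
    print(labels[-2])  # com_cvs, com/cvs, com/example, com/cvs, example@cvs
```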


> As to the consequences (in case this really becomes an incident
> report/incident reports): this shows a SEVERE lack of ability to revoke
> certificates on DigiCert's side, which must have been known AND ACCEPTED
> for a long time (this cannot be the first "blackout period" of (in the best
> case) 3.5 months).


I don't see how this follows.  DigiCert has made it clear they are able to
technically revoke these certificates and presumably are contractually able
to revoke them as well.  What is being said is that their customers are
asking them to delay revoking them because the _customers_ have blackout
periods where the customers do not want to make changes to their systems.
DigiCert's customers are saying that they are judging the risk from
revocation is greater than the risk from leaving them unrevoked and asking
DigiCert to not revoke. DigiCert is then presenting this request along to
Mozilla to get feedback from Mozilla.


> Thus, it seems to be a good idea to:
>
> 1. Henceforth, make NSS only accept certificates by DigiCert with a
> maximum validity of 100 days. Let's Encrypt has shown that this is clearly
> feasible.
>
> or
>
> 2. Henceforth, require DigiCert to revoke a small, randomly (e.g., using
> RFC 3797) selected subset of their certificates every day (within 7 days).
> If this, e.g., for the same reasons as outlined in these incident reports,
> is not possible, it will trigger (a incrementally decreasing number of)
> more incident reports.
>
> Both proposals would lead to more automation and a better understanding of
> the requirement of timely revocation, while pushing the ecosystem in the
> right direction. For its easiness, the first proposal would be my favorite
> but I would be very interested in hearing other people's thoughts about
> these proposals.
>

I don't agree that demanding all certificate customers have "more
automation" is desirable.  I am very familiar with the Chaos Monkey
approach Netflix has implemented and companies like Gremlin that offer
similar "Failure as a Service" products, but forcing this on customers
seems like a poor idea.

Thanks,
Peter


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 12:12 PM Wayne Thayer  wrote:

> On Wed, Dec 26, 2018 at 2:42 PM Peter Bowen via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> In the discussion of how to handle certain certificates that no longer
>> meet
>> CA/Browser Forum baseline requirements, Wayne asked for the "Reason that
>> publicly-trusted certificates are in use" by the customers.  This seems to
>> imply that Mozilla has an opinion that the default should not be to use
>> "publicly-trusted certificates".  I've not seen this previously raised, so
>> I want to better understand the expectations here and what customers
>> should
>> consider for their future plans.
>>
>
> The context for the question is that at least one of the organizations
> having difficulty with the underscore sunset stated that they couldn't just
> replace the certificates - they need to ship updates to the client. If you
> are hard-coding certificate information into client software, it's fair to
> ask why you're using publicly-trusted certificates (PTCs).
>

I was not aware of this being an issue in this case.  Thanks for this
explanation.

I believe a similar concern was discussed at length during the SHA-1 sunset
> in relation to payment terminals. As has been suggested, maybe it's simply
> a matter of cost. I suspect, however, that it is more about a lack of
> recognition of the responsibilities that come along with using PTCs. In the
> spirit of incident reporting, I think it would help to have a better
> understanding of the decisions that are driving the use of PTCs in these
> use cases
>

I agree that many people developing products do not understand the full
scope of the responsibilities that come with using Mozilla PTCs.  From what
I've personally observed, the requirements are frequently: "I want to have
a third party manage the CA at no cost to me", "I want that third party to
make it relatively easy and fairly inexpensive for arbitrary people and
organizations to get certificates that are signed by/chain to the CA", "I
want some level of assurance that the third party is doing the right things
without having to figure out what the right things are", and (usually only
realized much later) "I want to be able to make a decision on whether the
risk of not revoking a given a certificate outweigh the benefit of leaving
it unrevoked and have the third party not suffer any negative consequences
from my decision".

I have seen these requirements from organizations large and small.  They
are not usually written out in these terms, rather there are other
requirements that boil down to these.


> Is the expectation that "publicly trusted certificates" should only be used
>> by customers for servers that are:
>> - meant to be accessed with a Mozilla web browser, and
>>
>
> No.
>
> - publicly accessible on the Internet (meaning the DNS name is publicly
>> resolvable to a public IP), and
>>
>
> No.
>
> - committed to complying with a 24-hour (wall time) response time
>> certificate replacement upon demand by Mozilla?
>>
>> Committed to comply with section 4.9.1.1 (Reasons for Revoking a
> Subscriber Certificate) of the BRs - yes.
>

In recent revisions to the BRs, it seems that this is extended to 5 days
for many cases, including this underscore case.  However I think that many
customers ("subscribers" in BR terminology) would be very surprised at this
requirement, even though it is long standing.


> Is the recommendation from Mozilla that customers who want to allow Mozilla
>> browsers to access sites but do not want to meet one or both of the other
>> two use the Firefox policies for Certificates (
>>
>> https://github.com/mozilla/policy-templates/blob/master/README.md#certificates
>> ) to add a new CA to the browser?
>>
>>  No, that was not my intent. Rather, I am hoping for a better recognition
> of the commitments (per the Subscriber Agreement and CPS) and risks
> involved when an organization chooses to use PTCs, especially for
> non-browser use cases.]
>

I think this is a good callout.  Mozilla PTCs are a fairly unique situation
because there is very little ability to negotiate terms. Most large
organizations are accustomed to having a set of requirements as a starting
point but working person to person (or organization to organization) to
modify the terms to meet their needs.  It is clear that this is not an
option for Mozilla PTCs and this lack of option is very surprising to the
organizations.  I'm not sure what can be done about existing deployments of
roots in places other than Mozilla software, but it is clear that CAs
should be working on options for future non-Mozilla software cases if their
customers need more policy flexibility and do not need compatibility with
Mozilla software.

Thanks,
Peter


Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Peter Bowen via dev-security-policy
On Thu, Dec 27, 2018 at 8:34 AM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, Dec 27, 2018 at 11:12 AM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Yes, you are consistently mischaracterizing everything I post.
> >
> > My question was a refinement of the original question to the one case
> > where the alternative in the original question (configuring the browser
> > to trust a non-default PKI) would not be meaningful.
> >
>
> I hope you can understand my confusion, as again, you've provided a
> statement, but not an actual question.
>
> Peter provided two, fairly simple to understand, very direct questions:
>

From earlier messages, I realized that the answer to my initial question is
obviously "no", because there is at least one more supported Mozilla
product that uses the same trust store: Thunderbird.  The second part is
also faulty, because it doesn't account for certificates for public IP
addresses.  Fixing this makes the question more complex:

Is it the expectation of Mozilla that "publicly trusted certificates" for
server authentication should only be used by customers for servers that are:
a) meant to be accessed by Mozilla Firefox and/or Mozilla Thunderbird
  - This effectively means the server is serving at least one of HTTP, FTP,
WS (WebSocket), NNTP, IMAP, POP3, SMTP, IRC, or XMPP over TLS (including
iCalendar, CalDAV, WCAP, RSS, and Twitter API over one of the supported
protocols)
b) are publicly accessible on the Internet
  - This means the server is accessed either via an IP address that is a
public IP or via a hostname that publicly resolves to a public IP
  - Thunderbird does do SRV record lookups, but SRV records are just
pointers to a hostname, so this does not change the above
c) committed to complying with a 24-hour (wall time) response time
certificate replacement upon demand by Mozilla?

This is a longer question, but more accurately reflects how Mozilla uses
publicly trusted certificates.

Is the expectation that "publicly trusted certificates" should only be used
> > by customers for servers that are:
> > - meant to be accessed with a Mozilla web browser, and
> > - publicly accessible on the Internet (meaning the DNS name is publicly
> > resolvable to a public IP), and
> > - committed to complying with a 24-hour (wall time) response time
> > certificate replacement upon demand by Mozilla?
>

Thanks,
Peter


Use cases of publicly-trusted certificates

2018-12-26 Thread Peter Bowen via dev-security-policy
In the discussion of how to handle certain certificates that no longer meet
CA/Browser Forum baseline requirements, Wayne asked for the "Reason that
publicly-trusted certificates are in use" by the customers.  This seems to
imply that Mozilla has an opinion that the default should not be to use
"publicly-trusted certificates".  I've not seen this previously raised, so
I want to better understand the expectations here and what customers should
consider for their future plans.

Is the expectation that "publicly trusted certificates" should only be used
by customers for servers that are:
- meant to be accessed with a Mozilla web browser, and
- publicly accessible on the Internet (meaning the DNS name is publicly
resolvable to a public IP), and
- committed to complying with a 24-hour (wall time) response time
certificate replacement upon demand by Mozilla?

Is the recommendation from Mozilla that customers who want to allow Mozilla
browsers to access sites but do not want to meet one or both of the other
two use the Firefox policies for Certificates (
https://github.com/mozilla/policy-templates/blob/master/README.md#certificates
) to add a new CA to the browser?

Thanks,
Peter


Re: Underscore characters

2018-12-18 Thread Peter Bowen via dev-security-policy
On Tue, Dec 18, 2018 at 6:52 PM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Ballot 202 failed. I’m not sure how it’s relevant other than to indicate
> there was definite disagreement about whether underscores were permitted or
> not. As previously mentioned, I didn’t consider underscore characters
> prohibited until the ballot was proposed eliminating them in Oct. I know
> the general Mozilla population disagrees but, right or wrong, that’s the
> root cause of it all. I can explain my reasoning again here, but I doubt it
> materially alters the conversation and outcome.
>

I agree that Jeremy that the situation with underscores was unclear prior
to the ballot in October.  Three years ago when I was writing certlint, my
very first public commit has the comment:
# Allow RFC defying '*' and '_'

I honestly haven't been paying a lot of attention to the CA/Browser Forum
recently.  Given that the rationale for getting rid of underscores is RFC
compliance, did the ballot also disallow asterisks?  They are also not
allowed by the "preferred name syntax", as specified by Section 3.5 of
RFC 1034 and as modified by Section 2.1 of RFC 1123.
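For reference, the "preferred name syntax" label grammar (RFC 1034 Section 3.5, with the RFC 1123 Section 2.1 relaxation allowing a leading digit) can be sketched as a check that rejects both '*' and '_':

```python
import re

# LDH label per RFC 1034 preferred name syntax, as modified by
# RFC 1123: letters or digits at the ends, letters, digits, and
# hyphens inside.  Neither '*' nor '_' is permitted anywhere.
LDH_LABEL = re.compile(r"[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?")

def is_preferred_name(name):
    return all(LDH_LABEL.fullmatch(label) for label in name.split("."))

print(is_preferred_name("www.example.com"))        # True
print(is_preferred_name("*.example.com"))          # False: '*' is not LDH
print(is_preferred_name("some_host.example.com"))  # False: '_' is not LDH
```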

Thanks,
Peter


Re: Re: Google Trust Services Root Inclusion Request

2018-09-27 Thread Peter Bowen via dev-security-policy
Richard,

Unfortunately Gerv is no longer with us, so he cannot respond to this
accusation.  Having been involved in many discussions on m.d.s.p and with
Gerv directly, I am very sure Gerv deeply owned the decisions on StartCom
and WoSign.  It was by no means Ryan telling Gerv or Mozilla what to do.
Gerv put many hours into researching the issues and is the one who wrote
the wiki and summary docs.

Please give Gerv credit where credit is due.

Thanks,
Peter

On Wed, Sep 26, 2018 at 11:55 PM Richard Wang via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Sorry, I don't agree with this point. Ryan Sleevi is the Mozilla Module
> Peer who put too much pressure on the m.d.s.p. community, misleading
> the Community into letting Mozilla make the decision that Google wanted.
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: GoDaddy Revocations Due to a Variety of Issues

2018-07-26 Thread Peter Bowen via dev-security-policy
On Wed, Jul 25, 2018 at 2:08 PM Joanna Fox via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, July 20, 2018 at 9:39:04 PM UTC-7, Peter Bowen wrote:
> > > *Total of 17 certificates issued in 2018 were revoked due to invalid
> > > extended ascii characters.  CertLint was not catching these issues,
> which
> > > would have prevented issuance. We have since remediated these
> problems, and
> > > are adding zLint to our certificate issuance process as a second check.
> > > Issued in 2018 certificate serial numbers 4329668077199547083,
> > > 8815069853166416488, 8835430332440327484, 13229652153750393997,
> > > 12375089233389451640, 11484792606267277228, 11919098489171585007,
> > > 9486648889515633287, 14583473664717830410, 7612308405142602244,
> > > 4011153125742917275, 6919066797946454186, 15449193186990222652,
> > > 14380872970193550115, 1792501994142248245, 12601193235728728125,
> > > 10465762057746987360
> > > crt.sh was unavailable when this was crafted, else I would provide
> links
> > > to the 4 certs which were CT logged.
> >
> >
> >  https://crt.sh/?id=294808610&opt=zlint,cablint is one of the
> > certificates.  It is not clear to me that there is an error here.  The
> DNS
> > names in the SAN are correctly encoded and the Common Name in the subject
> > has one of the names found in the SAN.  The Common Name contains a DNS
> name
> > that is the U-label form of one of the SAN entries.
> >
> > It is currently undefined if this is acceptable or unacceptable for
> > certificates covered by the BRs.  I put a CA/Browser Forum ballot
> forward a
> > while ago to try to clarify it was not acceptable, but it did not pass as
> > several CAs felt it was not only acceptable but is needed and desirable.
> >
> > If Mozilla (or another browser) puts forward a policy on this, I'm happy
> to
> > update certlint to reflect the policy.
>


> Using the example provided of
> https://crt.sh/?id=294808610&opt=zlint,cablint, the error to which we
> were addressing is, “ERROR: Characters in labels of DNSNames MUST be
> alphanumeric, - , _ or *”. RFC 5280 states that the SAN field can contain a
> dnsName but it must be in the IA5String format.  IA5String is defined as
> the first 128 characters in the ASCII alphabet.  Right now as this is
> defined, it does not include international variants of ISO 646.  Should we
> revisit this issue to clarify if international characters should be
> included?  GoDaddy would be in support of adding this clarification.


That error is coming from zlint and appears, from my reading, to be a bug
in zlint.  The DNSName entries in the SAN in that certificate only contain
allowable characters.  The commonName attribute value in the Subject does
have characters that are not allowed in a DNSName entry, but commonName
is allowed to be a UTF8String, which allows these characters.  Further, the
commonName contains a fully qualified domain name that also appears in the
SAN, so that BR requirement is met.

The challenge in this case is that the BRs do not specify the required
encoding of domain names.  This doesn't really matter when handling pre-IDN
domain names, but with Internationalized Domain Names (IDNs), encoding does
need to be specified.  Until Mozilla or the CA/Browser Forum clarifies, I
do not think this certificate has any errors.
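To illustrate the U-label/A-label distinction, here is a rough sketch using Python's built-in "idna" codec. Note the codec implements IDNA 2003; real linters would use IDNA 2008 / UTS #46, so treat this as an approximation, not a description of zlint or certlint internals:

```python
# A SAN dNSName carries the ACE (A-label) form, which is IA5String-safe,
# while a UTF8String commonName may carry the U-label form of the same
# name.  Converting between the two shows they name the same domain.
san_dns_name = "xn--bcher-kva.example"            # A-label form
common_name = san_dns_name.encode("ascii").decode("idna")
print(common_name)                                # bücher.example

# The BR rule "CN must repeat a name from the SAN" therefore needs a
# normalization step before comparison:
def cn_matches_san(cn, san_entries):
    ace = cn.encode("idna").decode("ascii")
    return ace in san_entries

print(cn_matches_san("bücher.example", ["xn--bcher-kva.example"]))  # True
```

A linter that compares the raw CN string against the raw SAN entries, without this normalization, would flag exactly the kind of certificate discussed above.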

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: GoDaddy Revocations Due to a Variety of Issues

2018-07-20 Thread Peter Bowen via dev-security-policy
On Fri, Jul 20, 2018 at 6:39 PM Daymion Reynolds via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> The certificates were identified by analyzing results from both zlint and
> certlint. We also verified all lint findings against current and past BRs.
> We discovered multiple defects with the linters, and submitted pull
> requests to correct them. See below.
>
> CertLint PRs to correct issues:
>
> In Progress, will publish if requested.
>

Yes, I would very much like to have either PRs or just a list of issues.


> | e_dnsname_not_valid_tld,            |    |               |
> | e_subject_common_name_not_from_san, |    |               |
> | e_dnsname_bad_character_in_label    | 4  | *7/5/18 11:48 |
>
> | e_subject_common_name_not_from_san, |    |               |
> | e_dnsname_bad_character_in_label    | 28 | *7/9/18 21:12 |
>
> *Total of 17 certificates issued in 2018 were revoked due to invalid
> extended ascii characters.  CertLint was not catching these issues, which
> would have prevented issuance. We have since remediated these problems, and
> are adding zLint to our certificate issuance process as a second check.
> Issued in 2018 certificate serial numbers 4329668077199547083,
> 8815069853166416488, 8835430332440327484, 13229652153750393997,
> 12375089233389451640, 11484792606267277228, 11919098489171585007,
> 9486648889515633287, 14583473664717830410, 7612308405142602244,
> 4011153125742917275, 6919066797946454186, 15449193186990222652,
> 14380872970193550115, 1792501994142248245, 12601193235728728125,
> 10465762057746987360
> crt.sh was unavailable when this was crafted, else I would provide links
> to the 4 certs which were CT logged.


 https://crt.sh/?id=294808610&opt=zlint,cablint is one of the
certificates.  It is not clear to me that there is an error here.  The DNS
names in the SAN are correctly encoded and the Common Name in the subject
has one of the names found in the SAN.  The Common Name contains a DNS name
that is the U-label form of one of the SAN entries.

It is currently undefined if this is acceptable or unacceptable for
certificates covered by the BRs.  I put a CA/Browser Forum ballot forward a
while ago to try to clarify it was not acceptable, but it did not pass as
several CAs felt it was not only acceptable but is needed and desirable.

If Mozilla (or another browser) puts forward a policy on this, I'm happy to
update certlint to reflect the policy.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] TeletexString

2018-07-08 Thread Peter Bowen via dev-security-policy
On Sun, Jul 8, 2018 at 2:34 PM Kurt Roeckx  wrote:
> On Sun, Jul 08, 2018 at 04:41:27PM -0400, Ryan Sleevi wrote:
> >
> > Is that because you believe it forbidden by spec, or simply unwise?
>
> It's because nobody implements the spec. Those that claim some
> support for it are just broken. I have yet to see a certificate
> that doesn't just put latin1 in it, which should get rejected.
>
> Anyway, at some point I started writing a proper parser for
> teletexstring. But I don't think it's worth my time if there are 0
> valid certificates using it. If someone can point me to a proper
> parser of it, that is open source, I'm willing to use that.

My solution was somewhat pragmatic and somewhat lazy:
https://github.com/awslabs/certlint/blob/master/lib/certlint/certlint.rb#L138

NULL is always bad.  Other than that, if we find any escape characters
in the string, we let it pass unchecked; otherwise we do what Kurt suggested.

This avoids false hits of properly encoded strings at the cost of
skipping some improperly encoded strings.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


TeletexString

2018-07-06 Thread Peter Bowen via dev-security-policy
In reviewing a recent CA application, the question came up of what is
allowed in a certificate in data encoded as "TeletexString" (which is
also sometimes called T61String).

Specifically, certlint will report an error if a TeletexString
contains any characters not in the "Teletex Primary Set of Graphic
Characters" unless the TeletexString contains an escape sequence. For
example, including 'ä', or 'ö' will trigger this error unless preceded
by an escape sequence.

In order to figure out what can be used, one needs to reference X.690
Table 3, which notes that G0 is assumed to start with character set
102.  Character set 102 is defined at
https://www.itscj.ipsj.or.jp/iso-ir/102.pdf.  Note that 102 isn't the
same as ASCII, nor is it the same as the first part of Unicode.

I hope that this helps explain why these errors show in certlint.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-15 Thread Peter Bowen via dev-security-policy
I don't think that is true.  Remember that for OV/IV/EV certificates, the
Subscriber is the natural person or Legal Entity identified in the
certificate Subject.  If the Subscriber is using the certificate on a
CDN, it is probably better to have the CDN generate the key rather
than the Subscriber.  The key is never being passed around, in PKCS#12
format or otherwise, even though the Subscriber isn't generating the
key.

On Tue, May 15, 2018 at 9:17 PM, Tim Hollebeek via dev-security-policy
 wrote:
> My only objection is that this will cause key generation to shift to partners 
> and
> affiliates, who will almost certainly do an even worse job.
>
> If you want to ban key generation by anyone but the end entity, ban key
> generation by anyone but the end entity.
>
> -Tim
>
>> -Original Message-
>> From: dev-security-policy [mailto:dev-security-policy-
>> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Wayne
>> Thayer via dev-security-policy
>> Sent: Tuesday, May 15, 2018 4:10 PM
>> To: Dimitris Zacharopoulos 
>> Cc: mozilla-dev-security-policy 
>> 
>> Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key
>> generation to policy)
>>
>> I'm coming to the conclusion that this discussion is about "security 
>> theater"[1].
>> As long as we allow CAs to generate S/MIME key pairs, there are gaping holes
>> in the PKCS#12 requirements, the most obvious being that a CA can just
>> transfer the private key to the user in pem format! Are there any objections 
>> to
>> dropping the PKCS#12 requirements altogether and just forbidding key
>> generation for TLS certificates as follows?
>>
>> CAs MUST NOT generate the key pairs for end-entity certificates that have an
>> EKU extension containing the KeyPurposeIds id-kp-serverAuth or
>> anyExtendedKeyUsage.
>>
>> - Wayne
>>
>> [1] https://en.wikipedia.org/wiki/Security_theater
>>
>> On Tue, May 15, 2018 at 10:23 AM Dimitris Zacharopoulos 
>> wrote:
>>
>> >
>> >
>> > On 15/5/2018 6:51 μμ, Wayne Thayer via dev-security-policy wrote:
>> >
>> > Did you consider any changes based on Jakob’s comments?  If the
>> > PKCS#12 is distributed via secure channels, how strong does the password
>> need to be?
>> >
>> >
>> >
>> >
>> >
>> > I think this depends on our threat model, which to be fair is not
>> > something we've defined. If we're only concerned with protecting the
>> > delivery of the
>> > PKCS#12 file to the user, then this makes sense. If we're also
>> > concerned with protection of the file while in possession of the user,
>> > then a strong password makes sense regardless of the delivery mechanism.
>> >
>> >
>> > I think once the key material is securely delivered to the user, it is
>> > no longer under the CA's control and we shouldn't assume that it is.
>> > The user might change the passphrase of the PKCS#12 file to whatever,
>> > or store the private key without any encryption.
>> >
>> >
>> > Dimitris.
>> >
>> ___
>> dev-security-policy mailing list
>> dev-security-policy@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security-policy
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: c=US policy layer in development

2018-04-10 Thread Peter Bowen via dev-security-policy
As far as I know, this has nothing to do with Mozilla policy.

On Mon, Apr 9, 2018 at 10:28 PM westmail24--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> If Mozilla develops an open product, then why are some discussions
> unavailable to users even for reading? (I'm not sure that this will protect
> against the PRISM intelligence system inside Google groups, so you have
> secrets from random users?)
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Audits for new subCAs

2018-04-06 Thread Peter Bowen via dev-security-policy
On Mon, Apr 2, 2018 at 5:15 PM, Wayne Thayer via dev-security-policy
 wrote:
> On Mon, Apr 2, 2018 at 4:36 PM, Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>>
>> While Entrust happens to do this, as a relying party, I dislike frequent
>> updates to CP/CPS documents just for such formal changes.
>>
> This creates a huge loophole. The CP/CPS is the master set of policies the
> TSP agrees to be bound by and audited against. If a TSP doesn't include a
> new subCA certificate in the scope of their CP/CPS, then from an audit
> perspective  there is effectively no policy that applies to the subCA.
> Similarly, if the TSP claims to implement a new policy but doesn't include
> it in their CP/CPS, then the audit will not cover it (unless it's a BR
> requirement that has made it into the BR audit criteria).

A CP is an optional document and may be maintained by an entity other
than the CA.  For example there may be a common policy that applies to
all CAs that have a path to a certain anchor.  So including the CA
list in a CP is not useful.

I also don't think that the CPS is the right place to list the CAs.
The CPS of a CA is an attribute of the CA, but the CAs are not an
attribute of a CPS.  This is why I strongly suggest that the CA make a
signed binding statement that a new subCA will follow CPS X and that
the new subCA will be included in the next audit.  It gets the
relationship correct.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Audits for new subCAs

2018-03-26 Thread Peter Bowen via dev-security-policy
Both :)

Having a new audit per online CA is going to be very expensive and
will cause TSPs to heavily limit the number of online CAs they have.
Additionally, all of these would be point-in-time audits, which only
report on the design of controls.  Assuming the design is consistent
between CAs, there is no value in having additional audit
reports.

However I do think there is value in having the CA provide a statement
that the same controls will apply to the new CA and commit to
including the CA in the next audit.  Otherwise it could be over a year
before it is noted that the CA does not have adequate controls.  The
"same audit as parent" does not make sense for new CAs, as the new CA
is not included in the audit scope.  I'm sure automated audit report
processing will notice this and throw an error once it is online.

On Mon, Mar 26, 2018 at 9:24 AM, Wayne Thayer  wrote:
> Peter,
>
> Are you advocating for option #2 (TSP self-attestation) because you think
> that option #3 (audit) is unreasonable, or because you believe there is a
> benefit to Mozilla's users in a self-attestation beyond what we get from the
> existing requirement for CCADB disclosure?
>
> On Fri, Mar 23, 2018 at 6:18 PM, Peter Bowen  wrote:
>>
>> On Fri, Mar 23, 2018 at 11:34 AM, Wayne Thayer via dev-security-policy
>>  wrote:
>> > Recently I've received a few questions about audit requirements for
>> > subordinate CAs newly issued from roots in our program. Mozilla policy
>> > section 5.3.2 requires these to be disclosed "within a week of
>> > certificate
>> > creation, and before any such subCA is allowed to issue certificates.",
>> > but
>> > says nothing about audits.
>> >
>> > The fundamental question is 'when must a new subCA be audited?' It is
>> > clear
>> > that the TSP's [1] next period-of-time statement must cover all subCAs,
>> > including any new ones. However, it is not clear if issuance from a new
>> > subCA is permitted prior to being explicitly included in an audit.
>> >
>> > I believe that it is common practice for TSPs to begin issuing from new
>> > subCAs prior to inclusion in an audit. This practice is arguably
>> > supported
>> > by paragraph 3 of BR 8.1 which reads:
>> >
>> > If the CA has a currently valid Audit Report indicating compliance with
>> > an
>> >> audit scheme listed in Section 8.1, then no pre-issuance readiness
>> >> assessment is necessary.
>> >>
>> >
>> > When disclosing a new subCA, the TSP can select "CP/CPS same as parent"
>> > and
>> > "Audits same as parent" in CCADB to indicate that the same policies
>> > apply
>> > to the new subordinate as to the root.
>> >
>> > This issue was raised at the CA/Browser Forum meeting in October 2016
>> > [2].
>> >
>> > Three options have been proposed to resolve this ambiguity:
>> > 1. Permit a new subCA to be used for issuance prior to being listed on
>> > an
>> > audit report.
>> > 2. Require the TSP to attest that the new subCA complies with a set of
>> > existing policies prior to issuance [3].
>> > 3. Require an audit report (point-in-time or period-of-time) covering
>> > the
>> > new subCA before any issuance (possibly with an exception for test
>> > certificates or certificates required for audit purposes).
>> >
>> > Please consider these options in the context of a TSP with a current
>> > audit
>> > for the parent root that has issued a new subCA, and for which the new
>> > subCA is operating under existing policies and in an existing
>> > operational
>> > environment. If this is not the case, I would propose that a new audit
>> > covering the subCA be required.
>>
>> Unsurprisingly, I support option #2.  However I think it is important
>> that there are three distinct things that need to be covered:
>>
>> 1) Key generation for the new CA
>> 2) Assertion of controls for the new CA
>> 3) Issuance of a CA certificate, by an existing trusted CA, that names
>> the new CA as the subject
>>
>> It does make sense to allow a slight delay in disclosure such that a
>> single ceremony can be used to generate the key and issue a CA
>> certificate, but a week seems plenty generous.
>>
>> Thanks,
>> Peter
>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Audits for new subCAs

2018-03-23 Thread Peter Bowen via dev-security-policy
On Fri, Mar 23, 2018 at 11:34 AM, Wayne Thayer via dev-security-policy
 wrote:
> Recently I've received a few questions about audit requirements for
> subordinate CAs newly issued from roots in our program. Mozilla policy
> section 5.3.2 requires these to be disclosed "within a week of certificate
> creation, and before any such subCA is allowed to issue certificates.", but
> says nothing about audits.
>
> The fundamental question is 'when must a new subCA be audited?' It is clear
> that the TSP's [1] next period-of-time statement must cover all subCAs,
> including any new ones. However, it is not clear if issuance from a new
> subCA is permitted prior to being explicitly included in an audit.
>
> I believe that it is common practice for TSPs to begin issuing from new
> subCAs prior to inclusion in an audit. This practice is arguably supported
> by paragraph 3 of BR 8.1 which reads:
>
> If the CA has a currently valid Audit Report indicating compliance with an
>> audit scheme listed in Section 8.1, then no pre-issuance readiness
>> assessment is necessary.
>>
>
> When disclosing a new subCA, the TSP can select "CP/CPS same as parent" and
> "Audits same as parent" in CCADB to indicate that the same policies apply
> to the new subordinate as to the root.
>
> This issue was raised at the CA/Browser Forum meeting in October 2016 [2].
>
> Three options have been proposed to resolve this ambiguity:
> 1. Permit a new subCA to be used for issuance prior to being listed on an
> audit report.
> 2. Require the TSP to attest that the new subCA complies with a set of
> existing policies prior to issuance [3].
> 3. Require an audit report (point-in-time or period-of-time) covering the
> new subCA before any issuance (possibly with an exception for test
> certificates or certificates required for audit purposes).
>
> Please consider these options in the context of a TSP with a current audit
> for the parent root that has issued a new subCA, and for which the new
> subCA is operating under existing policies and in an existing operational
> environment. If this is not the case, I would propose that a new audit
> covering the subCA be required.

Unsurprisingly, I support option #2.  However I think it is important
that there are three distinct things that need to be covered:

1) Key generation for the new CA
2) Assertion of controls for the new CA
3) Issuance of a CA certificate, by an existing trusted CA, that names
the new CA as the subject

It does make sense to allow a slight delay in disclosure such that a
single ceremony can be used to generate the key and issue a CA
certificate, but a week seems plenty generous.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Security Blog re Symantec TLS Certs

2018-03-13 Thread Peter Bowen via dev-security-policy
On Tue, Mar 13, 2018 at 7:55 AM, Kai Engert via dev-security-policy
 wrote:
> On 13.03.2018 15:35, Ryan Sleevi via dev-security-policy wrote:
>>
>>> Are the DigiCert transition CAs, which are part of the exclusion list,
>>> and which you say are used for "Managed Partner Infrastructure",
>>> strictly limited to support the needs of the Apple and Google companies?
>>
>>
>> No.
>
> If the answer is "no", it means there are additional beneficials besides
> Apple and Google.
>
>
>> Apple is Apple. Google is Google. DigiCert is running the Managed Partner
>> Infrastructure from the consensus plan, using the two transition CAs, in
>> addition to the two pre-existing roots participating in Mozilla's root
>> store.
>
> Which companies, other than Apple and Google, benefit from DigiCert
> running the Manager Partner Infrastructure and from DigiCert being part
> of the exclusion list?

An unlimited set.  Any company that purchases a certificate from
DigiCert that is issued by one of the Managed Partner Infrastructure
CAs benefits.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla Security Blog re Symantec TLS Certs

2018-03-13 Thread Peter Bowen via dev-security-policy
On Tue, Mar 13, 2018 at 7:19 AM, Kai Engert via dev-security-policy
 wrote:
> On 13.03.2018 14:59, Ryan Sleevi wrote:
>> the blog post says, the subCAs controlled by Apple and Google are the
>> ONLY exceptions.
>>
>> However, the Mozilla Firefox code also treats certain DigiCert subCAs as
>> exceptions.
>>
>> Based on Ryan Sleevi's recent comments on this list, I had concluded
>> that the excluded DigiCert subCAs are used to support companies other
>> than Apple and Google. Is my understanding right or wrong?
>>
>>
>> I think your understanding is incorrect. The DigiCert SubCAs are being
>> treated as part of the Managed Partner Infrastructure (aka the consensus
>> plan), and the (cross-signed DigiCert Roots) are excluded to avoid path
>> building issues in Firefox.
>
> Your earlier explanations were very complex, and had increased my
> uncertainty about who is covered by the Managed Partner Infrastructure.
>
> In your earlier explanations, you had mentioned additional company names
> besides Apple and Google. This had given me the impression that the
> Managed Partner Infrastructure isn't limited to support the Apple and
> Google companies, but to also support other companies.
>
>
>> That is, the exclusion of those DigiCert Sub-CAs *is* the consensus plan
>> referred to - what else could it be?
>>
>>
>> Are Apple and Google really the only beneficials of the exceptions, or
>> should the blog post get updated to mention the additional exceptions?
>>
>>
>> Do you think the above clarifies?
>
> I hope we are close.
>
> I really wish we could bring it down to a simple yes or no question, and
> you being able to respond with a clear yes or no.
>
> Let me try again.
>
> Are the DigiCert transition CAs, which are part of the exclusion list,
> and which you say are used for "Managed Partner Infrastructure",
> strictly limited to support the needs of the Apple and Google companies?

I'll try answering and let Ryan correct me.

Managed Partner Infrastructure CAs are NOT strictly limited to support
the needs of Apple/Google.

As I understand it, there are five different sets of CAs when it comes
to applying trust rules:

1) CAs that are not cross-signed by any of the roots owned by Symantec
as of June 2017 ("Symantec roots").  This is the majority of CAs in
the world.

2) Online/Non-root CAs that are cross-signed by a Symantec root and
which had their own non-Symantec audit as of June 2017 and have
current audits - this is currently a set of CAs owned by Alphabet and
Apple companies

3) Root CAs that are cross-signed by a Symantec root and which had
their own non-Symantec audit as of June 2017 and have current audits -
this is currently a set of root CAs that are owned by DigiCert and
that existed prior to DigiCert acquiring the Symantec roots

4) CAs that are cross-signed by a Symantec root which were explicitly
created for compatibility with existing clients.  These are not
cross-signed by any roots that are not Symantec roots.  These were
created by DigiCert but are not under the DigiCert-branded CAs; they
are the "Managed Partner Infrastructure" CAs.

5) Any CAs not covered above (that is a CAs cross-signed by a Symantec
root but not in #2, #3, or #4).

CAs in group #2, #3, and #4 are able to continue issuing.  #4 have a
maximum validity period restriction that is less than the BR maximum.
#5 CAs are not trusted for certificates issued after
2017-12-01T00:00:00Z or before 2016-06-01T00:00:00Z.
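Sketched as code, the decision logic above might look like the following. This is a hedged reading of the rules in this message, not the actual Firefox or Chromium implementation; the group numbers are just the set labels used above:

```python
from datetime import datetime, timezone

# Trust window for "set 5" CAs, per the dates in this message.
EARLIEST_TRUSTED = datetime(2016, 6, 1, tzinfo=timezone.utc)
LATEST_TRUSTED = datetime(2017, 12, 1, tzinfo=timezone.utc)

def symantec_trust_decision(chains_to_symantec_root, group, not_before):
    """Classify a certificate under the five-set rules described above.

    group is 2, 3, or 4 for the allow-listed sets, None otherwise.
    """
    if not chains_to_symantec_root:
        return "trusted"        # set 1: unaffected by the distrust
    if group in (2, 3, 4):
        return "trusted"        # allow-listed managed/independent CAs
    # set 5: trusted only for certificates issued inside the window
    if not_before < EARLIEST_TRUSTED or not_before > LATEST_TRUSTED:
        return "distrusted"
    return "trusted"
```

(The additional validity-period cap on set 4 is omitted here for brevity.)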

Does this make it clear?
Ryan, did I get this wrong?

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: How do you handle mass revocation requests?

2018-02-28 Thread Peter Bowen via dev-security-policy
On Wed, Feb 28, 2018 at 11:29 AM, Wayne Thayer via dev-security-policy
 wrote:
> On Wed, Feb 28, 2018 at 12:13 PM, timx84039--- via dev-security-policy 
>  wrote:
>
>>
>> Regarding to our investigation they were only able to send the private
>> keys for those certificates where the CSR / private key pair were generated
>> within their online private key generating tool. This has to be the 23k
>> amount of keys which Jeremy received.
>>
>> I am not aware of guidelines of the CA/B Forum, but keeping 23,000 (!)
>> private keys at your online platform seems more than alarming and is
>> careless and the public should be made aware of this fact.
>>
> I agree with this sentiment, but I also think it creates another policy
> question with respect to DigiCert's decision to revoke due to key
> compromise: were these 23,000 keys really compromised? The BR definition of
> Key Compromise is:
>
> A Private Key is said to be compromised if its value has been disclosed to
> an unauthorized person, an unauthorized person has had access to it, or
> there exists a practical technique by which an unauthorized person may
> discover its value. A Private Key is also considered compromised if methods
> have been developed that can easily calculate it based on the Public Key
> (such as a Debian weak key, see http://wiki.debian.org/SSLkeys) or if there
> is clear evidence that the specific method used to generate the Private Key
> was flawed.
>
> In this case it might be reasonable to argue that Trustico was unauthorized
> (unless their customers agreed to key escrow when using the online key
> generation tool). However, in the case of a hosting provider reselling
> certificates for use on their platform, it's required that they hold the
> private key and we don't consider that a Key Compromise.

Jeremy's email suggests that the keys were emailed to him.  If this is
accurate, then it is reasonable to conclude that they have been "disclosed
to an unauthorized person".  The only other alternative, again assuming
Jeremy did receive the keys, is to determine that he was authorized by
the subscriber to access the keys.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: How do you handle mass revocation requests?

2018-02-28 Thread Peter Bowen via dev-security-policy
On Wed, Feb 28, 2018 at 9:37 AM, Jeremy Rowley via dev-security-policy
 wrote:
> Once we were alerted, the team kicked
> off a debate that I wanted to bring to the CAB Forum. Basically, our
> position is that resellers do not constitute subscribers under the Baseline
> Requirement's definitions (Section 1.6.1). As such, we needed to confirm
> that either the key was compromised or that the revocation was authorized
> by the domain holder (the subscriber) prior to revoking the certificate. The
> certificates were not alleged as compromised at that time.

> This raises a question about the MDSP policy and CAB Forum requirements. Who
> is the subscriber in the reseller relation?  We believe this to be the key
> holder. However, the language is unclear. I think we followed the letter and
> spirit of the BRs here, but I'd like feedback, perhaps leading to a ballot
> that clarifies the subscriber in a reseller relationship.

For certs with subject identity information (commonly called IV, OV,
and EV certs), there is no question about the subscriber.  The
Subscriber is the entity identified in the subject: "The Subject is
either the Subscriber or a device under the control and operation of
the Subscriber."

For certificates without subject identity information (DV
certificates), the certificate does not list the subscriber.  However
the CA clearly knows the subscriber, as the subscriber is the "natural
person or Legal Entity to whom a Certificate is issued and who is
legally bound by a Subscriber Agreement or Terms of Use"

In some cases the "reseller" might be the subscriber, if the reseller
is a hosting company and is the one that accepts the subscriber
agreement.  In the traditional reseller model, however, the reseller's
customer is the subscriber, as that customer is the one accepting the
subscriber agreement.

Given that DigiCert appears to have contact information for the
Trustico customers, that suggests that the Trustico customer is likely
the subscriber, but looking at IV/OV/EV certificates (if any) should
tell for sure.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: TLS everywhere has a major flaw and needs refining to the page level.

2018-02-16 Thread Peter Bowen via dev-security-policy
On Fri, Feb 16, 2018 at 3:34 AM, Kevin Chadwick via
dev-security-policy  wrote:
>
> On that subject I think the chromium reported plan to label sites as
> insecure should perhaps be revised to page insecured or something more
> accurate?

Given this group is focused on Mozilla, it is likely out of scope to
discuss Chromium design.  I do suggest you look at
https://security.googleblog.com/2018/02/a-secure-web-is-here-to-stay.html
It seems reasonably clear that the marking is per top-level page load.
This is very similar to the UI for Firefox, which shows the lock (and
EV info) per top-level page load.


Re: DRAFT January 2018 CA Communication

2018-01-25 Thread Peter Bowen via dev-security-policy
On Thu, Jan 25, 2018 at 1:02 PM, Ryan Sleevi via dev-security-policy
 wrote:
> On Thu, Jan 25, 2018 at 3:34 PM, Wayne Thayer  wrote:
>
>> On Thu, Jan 25, 2018 at 11:48 AM, Jonathan Rudenberg <
>> jonat...@titanous.com> wrote:
>>
>>> This is a great improvement. I think we should also ask that any CAs
>>> using these methods immediate disclose that they are and the procedures
>>> they are using, as well as the date they expect to complete a review of
>>> their implementation, and then provide the review when it is complete.
>>
>>
>> The scope of this issue is much different from the method .9 and .10
>> vulnerabilities - lot of CAs use methods .1 and .5. Asking them all to
>> answer these questions seems likely to just yield a bunch of "we reviewed
>> our implementation and it is perfect" emails. What do you hope to learn
>> from this disclosure that hasn't already been discussed? What do others
>> think?
>>
>> If we want to hold CAs accountable for this disclosure, we'll need to turn
>> this communication into a survey and give CAs a certain amount of time to
>> respond, so we won't have answers for weeks.
>>
>
> I'm curious why the "for weeks" disclosure.
>
> Mozilla has required since April 2017 that CAs disclose the method of
> validation they use - https://wiki.mozilla.org/CA/Communications#April_2017
> (Specifically, Action #1), which MUST be completed before July 21, 2017.
>
> Jonathan's proposal to require the CAs "immediately disclose that they are"
> is thus consistent with the CA simply reading its CP/CPS. Further, "the
> procedures that they are using" is also a matter of existing CP/CPS
> documentation and/or supporting documents - making them explicitly public.
>
> So this merely leaves the question of "The date they expect to complete a
> review of their implementation, and then provide the review when it is
> complete".

What incentive is there for a CA to ever answer with anything other than:

a) that they may use any method allowed by Mozilla, and

b) they have reviewed their implementation and believe that it
complies with Mozilla's requirements?

Given the Mozilla CA policy says "the CA must ensure that the
applicant has registered the domain(s) referenced in the certificate
or has been authorized by the domain registrant to act on their
behalf", is the implication here that showing technical control of the
domain is not adequate and that CAs have to confirm with the
registrant for every issuance?  As I read it, the policy does not call
for validating technical control over the domain and does not allow a
simple technical control validation to suffice.

I think Mozilla should update the policy to make sure the policy
language accurately reflects Mozilla's intent, then ask CAs to double
check that they comply with the policy.

Thanks,
Peter


Re: Retirement of RSA-2048

2018-01-20 Thread Peter Bowen via dev-security-policy
On Sat, Jan 20, 2018 at 8:31 AM, James Burton via dev-security-policy
 wrote:
> Approximate date of retirement of RSA-2048?

This is a very broad question, as you don't specify the usage.  If you
look at the US National Institute of Standards and Technology's SP
800-57 part 1 rev 4
(http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf),
they discuss the difference between "applying" and "processing".
Applying would usually be either encrypting or signing, and processing
would usually be decrypting or verifying.

Given that RSA is used by Mozilla products for signing long term data
(intermediate CA certificates, for example), encrypting data (for
example, encrypting email), as part of key exchange (in TLS), and for
signing for instant authentication (signature during a TLS handshake),
the appropriate retirement date may vary.

That being said, the NIST publication above uses the assumption that
RSA with a 2048-bit modulus, where the two factors are each 1024-bit
prime numbers, provides approximately 112 bits of strength.
Later on it states that 112 bits of strength is acceptable until 2030.

The German Federal Office for Information Security (BSI) reportedly
recommends using a modulus length of at least 3000 bits starting in
2023 [1].
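The mapping from RSA modulus size to approximate security strength can
be illustrated with a small lookup.  The table values are transcribed
from NIST SP 800-57 Part 1 Rev. 4 (Table 2); the helper function is
purely illustrative, not part of any standard:

```python
# Approximate security strengths for RSA modulus sizes, per NIST SP 800-57
# Part 1 Rev. 4, Table 2.  Transcribed for illustration, not normative.
RSA_STRENGTH_BITS = {
    1024: 80,
    2048: 112,   # acceptable until 2030 per the discussion above
    3072: 128,   # consistent with the BSI >= 3000-bit recommendation
    7680: 192,
    15360: 256,
}

def strength_for_modulus(bits: int) -> int:
    """Return the strength for the largest tabulated modulus <= bits."""
    eligible = [k for k in sorted(RSA_STRENGTH_BITS) if k <= bits]
    if not eligible:
        raise ValueError("modulus too small to appear in the table")
    return RSA_STRENGTH_BITS[eligible[-1]]

print(strength_for_modulus(2048))   # 112
print(strength_for_modulus(3072))   # 128
```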

Does that help answer your question?

Thanks,
Peter

[1] My German is very poor.  If yours is better than mine, you can
read the original doc from the BSI at
https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/TechnischeRichtlinien/TR02102/BSI-TR-02102.pdf?__blob=publicationFile
and confirm that Google Translate did not cause me to misunderstand
the recommendation.


Re: TLS-SNI-01 and compliance with BRs

2018-01-19 Thread Peter Bowen via dev-security-policy


> On Jan 19, 2018, at 7:22 AM, Doug Beattie via dev-security-policy 
>  wrote:
> 
> Many CA’s haven’t complied with the Mozilla requirement to list the methods 
> they use (including Google btw), so it’s hard to tell which CAs are using 
> method 10.  Of the CA CPSs I checked, only Symantec has method 10 listed, and 
> with the DigiCert acquisition, it’s not clear if that CPS is still active.  
> We should find out on January 31st who else uses it.
> 
> In the meantime, we should ban anyone from using TLS-SNI as a non-compliant 
> implementation, even outside shared hosting environments.  There could well 
> be other implementations that comply with method 10, so I’m not suggesting we 
> remove that from the BRs yet (those that don’t allow SNI when validating the 
> presence of the random number within the certificate of a TLS handshake are 
> better).
[snip]

> Personally, I think the use of TLS-SNI-01  should be banned immediately, 
> globally (not just by Let’s Encrypt), but without knowing which CAs use it, 
> it’s difficult to enforce.

Doug,

I don’t agree that TLS-SNI-01 should be banned immediately, globally.  Amazon 
does not use TLS-SNI-01 today, so it would not directly impact Amazon 
operations.

I think we need to look back to the Mozilla Root Store Policy.  The relevant 
portions are:

"2.1 CA Operations

prior to issuing certificates, verify certificate requests in a manner that we 
deem acceptable for the stated purpose(s) of the certificates;

2.2 Validation Practices
We consider verification of certificate signing requests to be acceptable if it 
meets or exceeds the following requirements:

For a certificate capable of being used for SSL-enabled servers, the CA must 
ensure that the applicant has registered the domain(s) referenced in the 
certificate or has been authorized by the domain registrant to act on their 
behalf. This must be done using one or more of the 10 methods documented in 
section 3.2.2.4 of version 1.4.1 (and not any other version) of the CA/Browser 
Forum Baseline Requirements. The CA's CP/CPS must clearly specify the 
procedure(s) that the CA employs, and each documented procedure should state 
which subsection of 3.2.2.4 it is complying with. Even if the current version 
of the BRs contains a method 3.2.2.4.11, CAs are not permitted to use this 
method.”

While this clearly does call out that the methods are acceptable, it
isn't a results-oriented statement.  The BRs also do not have clear
results requirements for validation methods.

What does Mozilla expect to be verified?  We know the 10 methods allow issuance 
where "the applicant has registered the domain(s) referenced in the certificate 
or has been authorized by the domain registrant to act on their behalf” is not 
true.

I think the next step should be for Mozilla to clearly lay out the
requirements for CAs; the validation methods can then be compared to
see whether they meet that bar.

Thanks,
Peter


Re: Updating Root Inclusion Criteria (organizations)

2018-01-17 Thread Peter Bowen via dev-security-policy
On Wed, Jan 17, 2018 at 11:49 AM, Jakob Bohm via dev-security-policy
 wrote:
> 4. Selected company CAs for a handful of too-bit-to-ignore companies
>   that refuse to use a true public CA.  This would currently probably
>   be Microsoft, Amazon and Google.  These should be admitted only on
>   a temporary basis to pressure such companies to use generally trusted
>   independent CAs.

Jakob,

Can you please explain how you define "true public CA"?  How long
should new CAs have to meet these criteria?  I don't like carve-outs
for "too-big-to-ignore".

Thanks,
Peter


Re: Updating Root Inclusion Criteria

2018-01-17 Thread Peter Bowen via dev-security-policy
On Tue, Jan 16, 2018 at 3:45 PM, Wayne Thayer via dev-security-policy
 wrote:
> I would like to open a discussion about the criteria by which Mozilla
> decides which CAs we should allow to apply for inclusion in our root store.
>
> Section 2.1 of Mozilla’s current Root Store Policy states:
>
> CAs whose certificates are included in Mozilla's root program MUST:
>> 1.provide some service relevant to typical users of our software
>> products;
>>
>
> Further non-normative guidance for which organizations may apply to the CA
> program is documented in the ‘Who May Apply’ section of the application
> process at https://wiki.mozilla.org/CA/Application_Process . The original
> intent of this provision in the policy and the guidance was to discourage a
> large number of organizations from applying to the program solely for the
> purpose of avoiding the difficulties of distributing private roots for
> their own internal use.
>
> Recently, we’ve encountered a number of examples that cause us to question
> the usefulness of the currently-vague statement(s) we have that define
> which CAs to accept, along a number of different axes:
>
[snip]
>
> There are many potential options for resolving this issue. Ideally, we
> would like to establish some objective criteria that can be measured and
> applied fairly. It’s possible that this could require us to define
> different categories of CAs, each with different inclusion criteria. Or it
> could be that we should remove the existing ‘relevance’ requirement and
> inclusion guidelines and accept any applicant who can meet all of our other
> requirements.
>
> With this background, I would like to encourage everyone to provide
> constructive input on this topic.

Wayne,

In the interest of transparency, I would like to add one more example
to your list:

* Amazon Trust Services is a current program member.  Amazon applied
independently but then subsequently bought a root from Go Daddy
(obvious disclosure: Wayne was VP at Go Daddy at the time).  So far
there is no public path to bring Amazon a public key/CSR you generate
on your own server and have Amazon issue a certificate containing that
public key.  The primary path to getting a certificate issued by
Amazon is to use AWS Certificate Manager.  That being said, we have
issued certificates to hundreds of thousands of domains and Mozilla
telemetry data shows they are being widely used by users of Mozilla
software products.

Thanks,
Peter

P.S. I'm very much looking forward to the Firefox ESR 60 release, as
that will mark Amazon inclusion for EV in all Mozilla products.


Re: Serial number length

2017-12-28 Thread Peter Bowen via dev-security-policy
On Thu, Dec 28, 2017 at 10:24 PM, Jakob Bohm via dev-security-policy
 wrote:
> After looking at some real certificates both in the browser and on crt.sh, I
> have some followup questions on certificate serial numbers:
>
> 4. If the answers are yes, no, yes, why doesn't cablint flag
>   certificates with serial numbers of less than or equal to 64 bits as
>   non-compliant?

I can answer #4 -- your trusty cablint maintainer has fallen behind
and hasn't added lints for recent ballots.


Re: Certificates with shared private keys by gaming software (EA origin, Blizzard battle.net)

2017-12-25 Thread Peter Bowen via dev-security-policy
On Mon, Dec 25, 2017 at 7:10 AM, Adrian R. via dev-security-policy
 wrote:
> since it's a webserver running on the local machine and is using that 
> certificate key/pair, i think that someone more capable than me can easily 
> extract the key from it.
>
> From my point of view as an observer it's plainly obvious that the private 
> key must be on my local machine too, even if i haven't actually got to the 
> key itself yet.

The problem is that this is not true.  I've not investigated this
software at all, but there are two designs I have seen in other
software:

1) TCP Proxy: A pure TCP proxy could be forwarding all the packets to
another host which has the key.

2) "Keyless" SSL: https://www.cloudflare.com/ssl/keyless-ssl/ - the
key is on a different host from the content

I'm sure there are other designs which would end up with the same
result: 127.0.0.1 does not have the private key.  Given this, the
conjecture that there "must" be a private key compromise seems
exaggerated.

Thanks,
Peter


Re: Verisign signed speedport.ip ?

2017-12-09 Thread Peter Bowen via dev-security-policy
On Sat, Dec 9, 2017 at 11:42 AM, Lewis Resmond via dev-security-policy
 wrote:
> I was researching about some older routers by Telekom, and I found out that 
> some of them had SSL certificates for their (LAN) configuration interface, 
> issued by Verisign for the fake-domain "speedport.ip".
>
> They (all?) are logged here: https://crt.sh/?q=speedport.ip
>
> I wonder, since this domain and even the TLD is non-existing, how could 
> Verisign sign these? Isn't this violating the rules, if they sign anything 
> just because a router factory tells them to do so?
>
> Although they are all expired since several years, I am interested how this 
> could happen, and if such incidents of signing non-existing domains could 
> still happen today.

Before the CA/Browser Forum Baseline Requirements were created, this
was not explicitly forbidden.  Since approximately July 1, 2012 no new
certificates have been allowed for unqualified names or names for
which the TLD does not exist in the IANA root zone.

So, to answer your questions:

Q: How could Verisign sign these?
A: These were all issued prior to the Baseline Requirements coming into effect

Q: Could [...] such incidents of signing non-existing domains could
still happen today?
A: Not like this.  All Domain Names in certificates now must be Fully
Qualified Domain Names, and the CA must validate that the FQDN falls
in a valid namespace.  It is allowable for me to get a certificate for
nonexistent.home.peterbowen.org, even though that FQDN does not exist,
as I am the registrant of peterbowen.org.
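The namespace rule above can be sketched as a toy check.  The TLD set
here is a deliberately tiny, hypothetical sample; a real CA would use
the full IANA root zone list:

```python
# Hypothetical sample of the IANA TLD list; real code would load the
# full list from https://data.iana.org/TLD/tlds-alpha-by-domain.txt
IANA_TLDS = {"com", "org", "net", "de"}

def in_valid_namespace(name: str) -> bool:
    """Check a DNS name is fully qualified with a real TLD.

    Unqualified names ("speedport") and names under made-up TLDs
    ("speedport.ip") fail; names under registered domains pass even
    if the specific FQDN does not resolve.
    """
    labels = name.rstrip(".").lower().split(".")
    return len(labels) >= 2 and labels[-1] in IANA_TLDS

assert not in_valid_namespace("speedport.ip")   # ".ip" TLD does not exist
assert in_valid_namespace("nonexistent.home.peterbowen.org")
```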

Thanks,
Peter


Re: Certificate with duplicate commonname

2017-10-29 Thread Peter Bowen via dev-security-policy
This has been discussed previously, and my recollection is that
multiple CNs are allowed as long as each one matches an entry in
the subjectAlternativeName extension.
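That reading can be expressed as a small check over already-parsed name
values.  The helper and its inputs are hypothetical; real code would
first extract the CN and SAN values with an X.509 library:

```python
def cns_covered_by_san(common_names, san_dns_names):
    """Check that every subject CN value also appears in the SAN extension.

    Operates on already-parsed strings; duplicate CNs are fine under
    this reading as long as each value is present in the SAN.
    """
    san = {e.lower() for e in san_dns_names}
    return all(cn.lower() in san for cn in common_names)

# The certificate under discussion repeats the same CN twice.
assert cns_covered_by_san(["example.com", "example.com"],
                          ["example.com", "www.example.com"])
assert not cns_covered_by_san(["example.net"], ["example.com"])
```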

On Sun, Oct 29, 2017 at 11:42 AM, Hanno Böck via dev-security-policy
 wrote:
> Hi,
>
> This certificate has a duplicate commonname:
> https://crt.sh/?id=242683153&opt=problemreporting
>
> This was pointed out by Mattias Geniar:
> https://twitter.com/mattiasgeniar/status/924705516974112768
>
> I'm not entirely sure if the wording of the BRs forbid this (they say
> the CN field must contain a single IP or fqdn, but don't really
> consider the case that 2 CNs can be present), though this is
> clearly malformed.
>
> I have informed telesec / Deutsche Telekom about this (this is
> indirectly signed by them) via their contact form.
>
> I haven't checked if other such certificates exist.
>
> --
> Hanno Böck
> https://hboeck.de/
>
> mail/jabber: ha...@hboeck.de
> GPG: FE73757FA60E4E21B937579FA5880072BBB51E42


Re: Mozilla’s Plan for Symantec Roots

2017-10-27 Thread Peter Bowen via dev-security-policy
On Fri, Oct 27, 2017 at 9:21 AM, Jeremy Rowley
<jeremy.row...@digicert.com> wrote:
> I'm also very interested in this scenario.
>
> I'm also interested in what happens if a trusted DigiCert root is signed by
> a Symantec root. I assume this wouldn't impact trust since the chain
> building would stop at a DigiCert root, but I wanted to be sure.

Jeremy,

To clarify your scenario, do you mean what happens if a DigiCert owned
and operated CyberTrust or DigiCert branded root is cross-signed by a
DigiCert owned and operated VeriSign, Thawte, or GeoTrust branded
root? (Assuming all the roots are roots currently listed at
https://ccadb-public.secure.force.com/mozilla/IncludedCACertificateReport)

Thanks,
Peter


> -Original Message-
> From: dev-security-policy
> [mailto:dev-security-policy-bounces+jeremy.rowley=digicert.com@lists.mozilla
> .org] On Behalf Of Peter Bowen via dev-security-policy
> Sent: Friday, October 27, 2017 9:52 AM
> To: Gervase Markham <g...@mozilla.org>
> Cc: mozilla-dev-security-pol...@lists.mozilla.org; Kathleen Wilson
> <kwil...@mozilla.com>
> Subject: Re: Mozilla's Plan for Symantec Roots
>
> On Tue, Oct 17, 2017 at 2:06 AM, Gervase Markham <g...@mozilla.org> wrote:
>> On 16/10/17 20:22, Peter Bowen wrote:
>>> Will the new managed CAs, which will operated by DigiCert under
>>> CP/CPS/Audit independent from the current Symantec ones, also be
>>> included on the list of subCAs that will continue to function?
>>
>> AIUI we are still working out the exact configuration of the new PKI
>> but my understanding is that the new managed CAs will be issued by
>> DigiCert roots and cross-signed by old Symantec roots. Therefore, they
>> will be trusted in Firefox using a chain up to the DigiCert roots.
>
> Gerv,
>
> I'm hoping you can clarify the Mozilla position a little, given a
> hypothetical.
>
> For this, please assume that DigiCert is the owner and operator of the
> VeriSign, Thawte, and GeoTrust branded roots currently included in NSS and
> that they became the owner and operator on 15 November 2017 (i.e.
> unquestionably before 1 December 2017).
>
> If DigiCert generates a new online issuing CA on 20 March 2018 and
> cross-signs it using their VeriSign Class 3 Public Primary Certification
> Authority - G5 offline root CA, will certificates from this new issuing CA
> be trusted by Firefox?  If so, what are the parameters of trust, for example
> not trusted until the new CA is whitelisted by Mozilla or only trusted until
> a certain date?
>
> What about the same scenario except the new issuing CA is generated on
> 30 June 2019?
>
> Thanks,
> Peter


Re: Mozilla’s Plan for Symantec Roots

2017-10-27 Thread Peter Bowen via dev-security-policy
On Tue, Oct 17, 2017 at 2:06 AM, Gervase Markham  wrote:
> On 16/10/17 20:22, Peter Bowen wrote:
>> Will the new managed CAs, which will operated by DigiCert under
>> CP/CPS/Audit independent from the current Symantec ones, also be
>> included on the list of subCAs that will continue to function?
>
> AIUI we are still working out the exact configuration of the new PKI but
> my understanding is that the new managed CAs will be issued by DigiCert
> roots and cross-signed by old Symantec roots. Therefore, they will be
> trusted in Firefox using a chain up to the DigiCert roots.

Gerv,

I'm hoping you can clarify the Mozilla position a little, given a hypothetical.

For this, please assume that DigiCert is the owner and operator of the
VeriSign, Thawte, and GeoTrust branded roots currently included in NSS
and that they became the owner and operator on 15 November 2017 (i.e.
unquestionably before 1 December 2017).

If DigiCert generates a new online issuing CA on 20 March 2018 and
cross-signs it using their VeriSign Class 3 Public Primary
Certification Authority - G5 offline root CA, will certificates from
this new issuing CA be trusted by Firefox?  If so, what are the
parameters of trust, for example not trusted until the new CA is
whitelisted by Mozilla or only trusted until a certain date?

What about the same scenario except the new issuing CA is generated on
30 June 2019?

Thanks,
Peter


Re: Mozilla’s Plan for Symantec Roots

2017-10-16 Thread Peter Bowen via dev-security-policy
On Mon, Oct 16, 2017 at 10:32 AM, Gervase Markham via
dev-security-policy  wrote:
> As per previous discussions and
> https://wiki.mozilla.org/CA:Symantec_Issues, a consensus proposal[0] was
> reached among multiple browser makers for a graduated distrust of
> Symantec roots.
>
> Here is Mozilla’s planned timeline for the graduated distrust of
> Symantec roots (subject to change):
>
> * October 2018 (Firefox 63): Removal/distrust of Symantec roots, with
> caveats described below.
>
> However, there are some subCAs of the Symantec roots that are
> independently operated by companies whose operations have not been
> called into question, and they will experience significant hardship if
> we do not provide a longer transition period for them. For both
> technical and non-technical reasons, a year is an extremely unrealistic
> timeframe for these subCAs to transition to having their certificates
> cross-signed by another CA. For example, the subCA may have implemented
> a host of pinning solutions in their products that would fail with
> non-Symantec-chaining certificates, or the subCA may have large numbers
> of devices that would need to be tested for interoperability with any
> potential future vendor. And, of course contractual negotiations may
> take a significant amount of time.

This pattern also exists for companies that have endpoints whose
clients are pinned to the Symantec-owned roots.  These endpoints may
also be used by browser clients.  It was my understanding that the
intent was that existing roots would cross-sign new managed CAs to be
used for the transition.

> Add code to Firefox to disable the root such that only certain subCAs
> will continue to function. So, the final dis-trust of Symantec roots may
> actually involve letting one or two of the root certs remain in
> Mozilla’s trust store, but having special code to distrust all but
> specified subCAs. We would document the information here:
> https://wiki.mozilla.org/CA/Additional_Trust_Changes
> And Mozilla would add tooling to the CCADB to track these special subCAs
> to ensure proper CP/CPS/audits until they have been migrated and
> disabled, and the root certs removed. Mozilla will need to also follow
> up with these subCAs to ensure they are moving away from these root
> certificates and are getting cross-signed by more than one CA in order
> to avoid repeating this situation.

Will the new managed CAs, which will be operated by DigiCert under a
CP/CPS/audit independent from the current Symantec ones, also be
included on the list of subCAs that will continue to function?

Thanks,
Peter


Re: New Version Notification for draft-belyavskiy-certificate-limitation-policy-04.txt

2017-10-07 Thread Peter Bowen via dev-security-policy
On Tue, Sep 12, 2017 at 5:59 AM, Dmitry Belyavsky via
dev-security-policy  wrote:
> Here is the new version of the draft updated according to the discussion on
> mozilla-dev-security list.

Given that RFC 5914 already defines TrustAnchorList and
TrustAnchorInfo objects and that the Trust Anchor List object is
explicitly contemplated as being included in a signed CMS message,
would it not make more sense to start from RFC 5914 and define new
extensions to encode constraints not currently defined?

Thanks,
Peter


Re: DigiCert-Symantec Announcement

2017-09-22 Thread Peter Bowen via dev-security-policy
On Fri, Sep 22, 2017 at 6:22 AM, Nick Lamb via dev-security-policy
 wrote:
> On Friday, 22 September 2017 05:01:03 UTC+1, Peter Bowen  wrote:
>> I realize this is somewhat more complex than what you, Ryan, or Jeremy
>> proposed, but it the only way I see root pins working across both
>> "old" and "new" trust stores.
>
> I would suggest that a better way to spend the remaining time would be 
> remedial work so that your business isn't dependant on a single third party 
> happening to make choices that are compatible with your existing processes. 
> Trust agility should be built into existing processes and systems, where it 
> doesn't exist today it must be retro-fitted, systems which can't be 
> retrofitted are an ongoing risk to the company's ability to deliver.
>
> Trust agility doesn't have to mean you give up all control, but if you were 
> in a situation where the business trusted roots from Symantec, Comodo and 
> say, GlobalSign then you would have an obvious path forwards in today's 
> scenario without also needing to trust dozens of organisations you've no 
> contact with.
>
> I know the Mozilla organisation has made this mistake itself in the past, and 
> I'm sure Google has too, but I don't want too much sympathy here to get in 
> the way of actually making us safer.

Nick,

I agree with pretty much everything you said :)

However, as you point out, many organisations have run into problems
in this area.  As a community, we saw similar issues come up during
the SHA-1 deprecation phase and seemed surprised.  I want to try to
make sure there are no surprises, especially when it comes to
configurations that are not obvious.

For example, on some mobile platforms it is common to have the app
enforce pinning but the OS handle chain building and validation.  This
can have poor interaction if the OS were to update the trust store as
the returned chain may no longer have the pinned CA.

Consider what Jeremy drew:

GeoTrust Primary Certification Authority -> DigiCert Global G2 -> (new
issuing CA) -> (end entity)

If the platform trusts DigiCert Global G2, then the chain that is
returned to the application will be:

DigiCert Global G2 -> (new issuing CA) -> (end entity)

In this case, any application pinned to GeoTrust will fail.

Even if it was a new Root:

GeoTrust Primary Certification Authority -> DigiCert GeoTrust G2 ->
(new issuing CA) -> (end entity)

The same problem will occur if the OS updates the trust store but the
application does not update.

One notable thing is that the server operator, application vendor, OS
vendor, and CA may be four unrelated parties.  If the application is
expected to work with "new" and "old" OS versions, this will take some
careful work if the keys in the built chain change over time.

Thanks,
Peter


Re: DigiCert-Symantec Announcement

2017-09-21 Thread Peter Bowen via dev-security-policy
On Thu, Sep 21, 2017 at 7:17 PM, Ryan Sleevi via dev-security-policy
 wrote:
> I think we can divide the discussion into two parts, similar to the
> previous mail: How to effectively transition Symantec customers with
> minimum disruption, whether acting as the Managed CA or as the future
> operator of Symantec’s PKI, and how to effectively transition DigiCert’s
> infrastructure. This is a slightly different order than your e-mail
> message, but given the time sensitivity of the Symantec transition, it
> seems more effective to discuss that first.
>
> I think there may have been some confusion on the Managed CA side. It’s
> excellent that DigiCert plans to transition Symantec customers to DigiCert
> roots, as that helps with an expedient reduction in risk, but the plan
> outlined may create some of the compatibility risks that I was trying to
> highlight. In the discussions of the proposed remediations, one of the big
> concerns we heard raised by both Symantec and site operators was related to
> pinning - both in the Web and in mobile applications. We also heard about
> embedded or legacy devices, and their needs for particular chains.
>
> It sounds like this plan may have been based on a concern that I’d tried to
> address in the previous message. That is, the removal of the existing
> Symantec roots defines a policy goal - the elimination in trust in these
> legacy roots, due to the unknown scope of issues. However, that goal could
> be achieved by a number of technical means - for example, ‘whitelisting’ a
> set of Managed CAs (as proposed by Chrome), or replacing the existing
> Symantec roots with these new Managed CA roots in a 1:1 swap. Both of these
> approaches achieve the same policy objective, while reducing the
> compatibility risk.

Ryan,

As an existing Symantec customer, I'm not clear that this really
addresses the challenges we face.

So far we have found several different failure modes.  We hope that
any solution deployed will assure that these don't trigger.

First, we found that some clients have a limited set of roots in their
trust store.   The "VeriSign Class 3 Public Primary Certification
Authority - G5" root with SPKI SHA-256 hash of
25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058 is
the only root trusted by some clients.  They do, somewhat
unfortunately, check the certificate issuer, issuer key id, and
signature, so changing any of these will break things.  However, they
don't update their trust store, so the (DN, key id, public key) tuple
needs to be in the chain for years to come.

Second, we have found that some applications use the system trust
store but implement additional checks on the built and validated
chain.  The most common case is checking that at least one public key
in the chain matches a list of keys the application has internally.

As there is an assumption that the current root (DN, public key)
tuples will be replaced relatively soon by some trust store
maintainers, there needs to be a way that both of these cases can
work.  The only way I can see this working long term, on both devices
with updated trust stores and devices that have not updated their
trust store, is to do a little bit of hackery and create new (DN,
public key) tuples with the existing public key.  This way apps with
pinning will work on systems with old trust stores and on systems
with updated trust stores.
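The application-side pin check being described can be sketched as
follows.  The function name and the toy SPKI bytes are illustrative,
not taken from any real implementation:

```python
import hashlib

def chain_satisfies_pins(spki_ders, pinned_sha256_hex):
    """Return True if any SPKI in the built chain matches a pinned hash.

    `spki_ders` is a list of DER-encoded SubjectPublicKeyInfo blobs,
    one per certificate in the validated chain; `pinned_sha256_hex` is
    the application's pin set.  The pin survives a root swap only if
    some certificate in the returned chain still carries the pinned key,
    which is why reusing the existing public key in a new (DN, key)
    tuple keeps pinned applications working.
    """
    hashes = {hashlib.sha256(d).hexdigest() for d in spki_ders}
    return bool(hashes & set(pinned_sha256_hex))

# Toy data: pretend these bytes are SPKIs; the pin matches the root's.
chain = [b"leaf-spki", b"intermediate-spki", b"root-spki"]
pins = {hashlib.sha256(b"root-spki").hexdigest()}
assert chain_satisfies_pins(chain, pins)
assert not chain_satisfies_pins(chain[:2], pins)
```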

As a specific example, again using the Class 3 G5 root, today a chain
looks like:

1) End-entity info
2) 
spkisha256:f67d22cd39d2445f96e16e094eae756af49791685007c76e4b66f154b7f35ec6,KeyID:5F:60:CF:61:90:55:DF:84:43:14:8A:60:2A:B2:F5:7A:F4:43:18:EF,
DN:CN=Symantec Class 3 Secure Server CA - G4, OU=Symantec Trust
Network, O=Symantec Corporation, C=US,
3) spkisha256:25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058,
KeyID:7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33,
DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,
OU=(c) 2006 VeriSign, Inc. - For authorized use only, OU=VeriSign
Trust Network, O=VeriSign\, Inc., C=US

If there is a desire to (a) remove the Class 3 G5 root and (b) keep
the pin to its key working, the only solution I can see is to create a
new root that uses the same key.  This would result in a chain that
looks something like:

1) End-entity info
2b) spkisha256:,KeyID:, DN:CN=New Server Issuing CA, O=DigiCert, C=US,
3b) spkisha256:25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058,
KeyID:6c:e5:3f:7b:45:1f:66:b4:e6:7c:70:05:86:19:79:4f:a6,
DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,
OU=DigiCert Compatibility Root, OU=(c) 2006 VeriSign, Inc. - For
authorized use only, OU=VeriSign Trust Network, O=VeriSign\, Inc.,
C=US
3) spkisha256:25b41b506e4930952823a6eb9f1d31def645ea38a5c6c6a96d71957e384df058,
KeyID:7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33,
DN:CN=VeriSign Class 3 Public Primary Certification Authority - G5,

Re: DigiCert-Symantec Announcement

2017-09-20 Thread Peter Bowen via dev-security-policy
On Tue, Sep 19, 2017 at 8:39 PM, Jeremy Rowley via dev-security-policy
 wrote:
>
> The current end-state plan for root cross-signing is provided at 
> https://bugzilla.mozilla.org/show_bug.cgi?id=1401384. The diagrams there show 
> all of the existing sub CAs along with the new Sub CAs and root signings 
> planned for post-close. Some of these don’t have names so they are lumped in 
> a general “Intermediate” box.
>
> The Global G2 root will become the transition root to DigiCert for customers 
> who can’t move fully to an operational DigiCert roots prior to September 
> 2018. Any customers that require a specific root can use the transition root 
> for as long as they want, realizing that path validation may be an issue as 
> Symantec roots are removed by platform operators. Although we cannot 
> currently move to a single root because of the lack of EV support and trust 
> in non-Mozilla platforms, we can move to the existing three roots in an 
> orderly fashion.
>
> If the agreement closes prior to Dec 1, the Managed CA will never exist. 
> Instead, all issuance will occur through one of the three primary DigiCert 
> roots mentioned above with the exception of customers required to use a 
> Symantec root for certain platforms or pinning. The cross-signed Global root 
> will be only transitory, meaning we’d hope customers would migrate to the 
> DigiCert roots once the systems requiring a specific Symantec roots are 
> deprecated or as path validation errors arise.

Jeremy,

Am I correct that a key input into this plan was the Mozilla plan to
fully remove the Symantec roots from the trust store before the end
of 2018?  Google seemed to suggest they would keep trusting them for a
longer period, with a restriction on which subordinate CAs are trusted.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Public trust of VISA's CA

2017-09-20 Thread Peter Bowen via dev-security-policy
On Wed, Sep 20, 2017 at 12:37 AM, Martin Rublik via
dev-security-policy  wrote:
> On Tue, Sep 19, 2017 at 5:22 PM, Alex Gaynor via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> https://crt.sh/mozilla-certvalidations?group=version=896972 is a very
>> informative graph for me -- this is the number of validations performed by
>> Firefox for certs under this CA. It looks like at the absolute peak, there
>> were 1000 validations in a day. That's very little value for our users, in
>> return for an awful lot of risk.
>>
>> Alex
>
>
> Hi,
>
> I agree that 1000 validations in a day is not much, or better to say really
> low number. Anyway I was wondering what should be a minimum value or
> whether this number is a good metric at all. I went through the Mozilla
> validations telemetrics and there are more CAs with similliar number of
> validations.

Note that Firefox 55 had a regression in how it does chain building
(https://bugzilla.mozilla.org/show_bug.cgi?id=1400913) that causes it
to prefer the longest chain rather than the shortest.  This means
that, for root CAs that are cross-signed, Firefox 55 will frequently
attribute validations to the wrong bucket.  The total across the
buckets does not change, but the validations per day per root did
shift.  For example, Firefox 55 shows "AddTrust External CA Root" as a
super-popular root, while prior versions had "COMODO RSA Certification
Authority" as a top root.  "Go Daddy Class 2 CA" and "Go Daddy Root
Certificate Authority - G2" also flipped in Firefox 55.

This does not impact the Visa bucket, as far as I know, as the Visa
root is not cross-signed by any other root.

Thanks,
Peter


Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
On Sat, Sep 9, 2017 at 1:59 PM, Andrew Ayer <a...@andrewayer.name> wrote:
> On Sat, 9 Sep 2017 13:53:52 -0700
> Peter Bowen via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>
>> On Sat, Sep 9, 2017 at 1:50 PM, Andrew Ayer <a...@andrewayer.name>
>> wrote:
>> >
>> > drill is buggy and insecure.  Obviously, such implementations can
>> > be found.  Note that drill is just a "debugging/query" tool, not a
>> > resolver you would actually use in production.  You'll find that the
>> > production-grade resolver from that family (unbound) correctly
>> > reports an error when you try to query the CAA record for
>> > refused.caatestsuite-dnssec.com: https://unboundtest.com/
>>
>> Just as I received this, I finished testing with unbound, to see what
>> it does.  See the results below.  For your blackhole, servfail, and
>> refused cases it clearly says insecure, not bogus.
>
> That is very clearly against RFC4033, which says defines Insecure as:
>
> The validating resolver has a trust anchor, a chain
> trust, and, at some delegation point, signed proof of the
> non-existence of a DS record.  This indicates that subsequent
> branches in the tree are provably insecure.  A validating resolver
> may have a local policy to mark parts of the domain space as
> insecure.
>
> There is no "signed proof of the non-existence of a DS record" for
> blackhole, servfail, and refused, so it cannot possibly be insecure.

I just found another tool that does checks and has a similar but
distinct response set:

https://portfolio.sidnlabs.nl/check/expired.caatestsuite-dnssec.com/CAA (bogus)
https://portfolio.sidnlabs.nl/check/missing.caatestsuite-dnssec.com/CAA (bogus)
https://portfolio.sidnlabs.nl/check/blackhole.caatestsuite-dnssec.com/CAA
(error)
https://portfolio.sidnlabs.nl/check/servfail.caatestsuite-dnssec.com/CAA (error)
https://portfolio.sidnlabs.nl/check/refused.caatestsuite-dnssec.com/CAA (error)
https://portfolio.sidnlabs.nl/check/sigfail.verteiltesysteme.net/A (bogus)
https://portfolio.sidnlabs.nl/check/bogus.d4a16n3.rootcanary.net/A (insecure)
https://portfolio.sidnlabs.nl/check/www.google.com/A (insecure)
https://portfolio.sidnlabs.nl/check/www.dnssec-failed.org/A (bogus)

Given that there does not seem to be a consistent definition of how
"broken" DNSSEC should be handled, I think it is reasonable that CAs
be given the benefit of the doubt on the broken-DNSSEC tests.

Thanks,
Peter


Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
On Sat, Sep 9, 2017 at 1:50 PM, Andrew Ayer  wrote:
>
> drill is buggy and insecure.  Obviously, such implementations can
> be found.  Note that drill is just a "debugging/query" tool, not a
> resolver you would actually use in production.  You'll find that the
> production-grade resolver from that family (unbound) correctly reports
> an error when you try to query the CAA record for
> refused.caatestsuite-dnssec.com: https://unboundtest.com/

Just as I received this, I finished testing with unbound, to see what
it does.  See the results below.  For your blackhole, servfail, and
refused cases it clearly says insecure, not bogus.

[ec2-user@ip-10-0-0-18 ~]$ unbound-host -h
Usage: unbound-host [-vdhr46] [-c class] [-t type] hostname
 [-y key] [-f keyfile] [-F namedkeyfile]
 [-C configfile]
  Queries the DNS for information.
  The hostname is looked up for IP4, IP6 and mail.
  If an ip-address is given a reverse lookup is done.
  Use the -v option to see DNSSEC security information.
-t type what type to look for.
-c class what class to look for, if not class IN.
-y 'keystring' specify trust anchor, DS or DNSKEY, like
-y 'example.com DS 31560 5 1 1CFED8478...'
-D DNSSEC enable with default root anchor
from /usr/local/etc/unbound/root.key
-f keyfile read trust anchors from file, with lines as -y.
-F keyfile read named.conf-style trust anchors.
-C config use the specified unbound.conf (none read by default)
-r read forwarder information from /etc/resolv.conf
  breaks validation if the forwarder does not do DNSSEC.
-v be more verbose, shows nodata and security.
-d debug, traces the action, -d -d shows more.
-4 use ipv4 network, avoid ipv6.
-6 use ipv6 network, avoid ipv4.
-h show this usage help.
Version 1.6.5
BSD licensed, see LICENSE in source package for details.
Report bugs to unbound-b...@nlnetlabs.nl
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key expired.caatestsuite-dnssec.com.
expired.caatestsuite-dnssec.com. has no CAA record (BOGUS (security failure))
validation failure :
signature expired from 96.126.110.12 for key
expired.caatestsuite-dnssec.com. while building chain of trust
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key missing.caatestsuite-dnssec.com.
missing.caatestsuite-dnssec.com. has no CAA record (BOGUS (security failure))
validation failure : no
signatures from 96.126.110.12 for key missing.caatestsuite-dnssec.com.
while building chain of trust
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key blackhole.caatestsuite-dnssec.com.
Host blackhole.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key servfail.caatestsuite-dnssec.com.
Host servfail.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t CAA -D -f
/usr/local/etc/unbound/root.key refused.caatestsuite-dnssec.com.
Host refused.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t NS -D -f
/usr/local/etc/unbound/root.key blackhole.caatestsuite-dnssec.com.
Host blackhole.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t NS -D -f
/usr/local/etc/unbound/root.key servfail.caatestsuite-dnssec.com.
Host servfail.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)
[ec2-user@ip-10-0-0-18 ~]$ unbound-host -v -t NS -D -f
/usr/local/etc/unbound/root.key refused.caatestsuite-dnssec.com.
Host refused.caatestsuite-dnssec.com. not found: 2(SERVFAIL). (insecure)


Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
On Sat, Sep 9, 2017 at 11:50 AM, Andrew Ayer <a...@andrewayer.name> wrote:
> On Sat, 9 Sep 2017 08:49:01 -0700
> Peter Bowen via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>
>> On Sat, Sep 9, 2017 at 3:57 AM, Jonathan Rudenberg
>> <jonat...@titanous.com> wrote:
>> >
>> >> On Sep 9, 2017, at 06:19, Peter Bowen via dev-security-policy
>> >> <dev-security-policy@lists.mozilla.org> wrote:
>> >>
>> >> In all three of these cases, the "domain's zone does not have a
>> >> DNSSEC validation chain to the ICANN root" -- I requested SOA,
>> >> DNSKEY, NS, and CAA records types for each zone and in no case did
>> >> I get a response that had a valid DNSSEC chain to the ICANN root.
>> >
>> > This comes down to what exactly ___does not have a valid DNSSEC
>> > chain___ means.
>> >
>> > I had assumed that given the reference to DNSSEC in the BRs that
>> > the relevant DNSSEC RFCs were incorporated by reference via RFC
>> > 6844 and that DNSSEC validation is required. However, this is not
>> > entirely the case, using DNSSEC for CAA lookups is only RECOMMENDED
>> > in section 4.1 and explicitly ___not required.___ Which means this is
>> > all pretty pointless. The existence or non-existence of DNSSEC
>> > records doesn___t matter if there is no requirement to use them.
>> >
>> > Given this context, I think that your interpretation of this clause
>> > is not problematic since there is no requirement anywhere to use
>> > DNSSEC.
>> >
>> > I think this should probably be taken to the CAB Forum for a ballot
>> > to either:
>> >
>> > 1) purge this reference to DNSSEC from the BRs making it entirely
>> > optional instead of just having this pointless check; or
>> > 2) add a requirement to the BRs that DNSSEC validation be used from
>> > the ICANN root for CAA lookups and then tweak the relevant clause
>> > to only allow lookup failures if there is a valid non-existence
>> > proof of DNSSEC records in the chain that allows an insecure lookup.
>> >
>> > None of my comments in this thread should be interpreted as support
>> > for DNSSEC :)
>>
>> My recollection from the discussion that led to the ballot was that
>> this line in the BRs was specifically to create a special hard fail if
>> the zone was properly signed but the server returned an error when
>> looking up CAA records.
>
> Your recollection is not consistent with the most recent cabfpub thread
> on the topic: https://cabforum.org/pipermail/public/2017-August/011800.html
>
>> As a big of background, in order to be properly signed [...]
>
> The BRs do not say that the zone has to be "properly signed" for this
> line to trigger.  Nor do they require a "valid chain" of signatures
> from particular records in the zone to the root, as you suggested in
> another email.
>
> Rather, the BRs say the line triggers if there is "a DNSSEC validation
> chain to the ICANN root."  A "validation chain" doesn't mean signatures,
> but rather the information needed to validate the zone.  "Validation
> chain" is not the precise term that DNSSEC uses, but the synonymous term
> "authentication chain" is defined by RFC 4033 (incorporated by reference
> from RFC 6844) as follows:
>
> An alternating sequence of DNS public key
> (DNSKEY) RRsets and Delegation Signer (DS) RRsets forms a chain of
> signed data, with each link in the chain vouching for the next.  A
> DNSKEY RR is used to verify the signature covering a DS RR and
> allows the DS RR to be authenticated.  The DS RR contains a hash
> of another DNSKEY RR and this new DNSKEY RR is authenticated by
> matching the hash in the DS RR.  This new DNSKEY RR in turn
> authenticates another DNSKEY RRset and, in turn, some DNSKEY RR in
> this set may be used to authenticate another DS RR, and so forth
> until the chain finally ends with a DNSKEY RR whose corresponding
> private key signs the desired DNS data.  For example, the root
> DNSKEY RRset can be used to authenticate the DS RRset for
> "example."  The "example." DS RRset contains a hash that matches
> some "example." DNSKEY, and this DNSKEY's corresponding private
> key signs the "example." DNSKEY RRset.  Private key counterparts
> of the "example." DNSKEY RRset sign data records such as
> 

Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
On Sat, Sep 9, 2017 at 3:57 AM, Jonathan Rudenberg
<jonat...@titanous.com> wrote:
>
>> On Sep 9, 2017, at 06:19, Peter Bowen via dev-security-policy 
>> <dev-security-policy@lists.mozilla.org> wrote:
>>
>> In all three of these cases, the "domain's zone does not have a DNSSEC
>> validation chain to the ICANN root" -- I requested SOA, DNSKEY, NS,
>> and CAA records types for each zone and in no case did I get a
>> response that had a valid DNSSEC chain to the ICANN root.
>
> This comes down to what exactly “does not have a valid DNSSEC chain” means.
>
> I had assumed that given the reference to DNSSEC in the BRs that the relevant 
> DNSSEC RFCs were incorporated by reference via RFC 6844 and that DNSSEC 
> validation is required. However, this is not entirely the case, using DNSSEC 
> for CAA lookups is only RECOMMENDED in section 4.1 and explicitly “not 
> required.” Which means this is all pretty pointless. The existence or 
> non-existence of DNSSEC records doesn’t matter if there is no requirement to 
> use them.
>
> Given this context, I think that your interpretation of this clause is not 
> problematic since there is no requirement anywhere to use DNSSEC.
>
> I think this should probably be taken to the CAB Forum for a ballot to either:
>
> 1) purge this reference to DNSSEC from the BRs making it entirely optional 
> instead of just having this pointless check; or
> 2) add a requirement to the BRs that DNSSEC validation be used from the ICANN 
> root for CAA lookups and then tweak the relevant clause to only allow lookup 
> failures if there is a valid non-existence proof of DNSSEC records in the 
> chain that allows an insecure lookup.
>
> None of my comments in this thread should be interpreted as support for 
> DNSSEC :)

My recollection from the discussion that led to the ballot was that
this line in the BRs was specifically to create a special hard fail if
the zone was properly signed but the server returned an error when
looking up CAA records.

As a bit of background, in order to be properly signed, the zone must
have unexpired signatures over at least the SOA record (as this is the
minimal allowed signature when using NSEC3 with opt-out).
Additionally, this case never exists with zones signed using NSEC or
NSEC3 without opt-out, as they will provide either a denial of
existence or a signature that disclaims CAA record type existence.

So this bullet in the BRs only triggers when:
- SOA record has a valid signature
- There is a DNSKEY for the zone that matches the DS record in the parent zone
- The DS record in the parent zone is signed
- The above three are true for all zones back to the root zone
- The request for a CAA record for the QNAME returns an error
- The request for DNSSEC information for the QNAME succeeds
- The DNSSEC information does not provide information on the name
(e.g. is for records before and after but the opt-out flag is set)

If all of these are present, the CA may not issue.  If the DNSSEC
information is valid and says there is a CAA record in the type
bitmaps, but the server returned an error for CAA records, then the CA
must not issue.

I don't think your tests cover either of these cases.  I think any
other case allows issuance as it follows the path of no CAA record.
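As a rough illustration, the conditions above can be collapsed into a single predicate. Field names here are hypothetical (this is not BR language); each flag stands for the outcome of the corresponding DNS/DNSSEC check:

```python
# Encode the bullet list: a CAA lookup error blocks issuance only when
# the entire DNSSEC chain back to the root validates, yet the zone
# provides no signed information covering the CAA name (opt-out case).
def lookup_failure_blocks_issuance(zone):
    chain_valid = (
        zone["soa_signature_valid"]           # SOA has a valid signature
        and zone["dnskey_matches_parent_ds"]  # DNSKEY matches parent DS
        and zone["parent_ds_signed"]          # parent DS record is signed
        and zone["ancestors_signed_to_root"]  # same holds up to the root
    )
    return (
        chain_valid
        and zone["caa_query_errored"]         # CAA query returned an error
        and zone["dnssec_query_succeeded"]    # DNSSEC query succeeded...
        and not zone["dnssec_covers_name"]    # ...but doesn't cover the name
    )
```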

Thanks,
Peter


Re: CAA Certificate Problem Report

2017-09-09 Thread Peter Bowen via dev-security-policy
> Certificate 3 contains a single DNS identifier for
> refused.caatestsuite-dnssec.com
> Attempts to query the CAA record for this DNS name result in a REFUSED DNS
> response.  Since there is a DNSSEC validation chain from this zone to the
> ICANN root, CAs are not permitted to treat the lookup failure as permission
> to issue.
>
>
> Certificate 4 contains a single DNS identifier for
> missing.caatestsuite-dnssec.com.
> This DNS name has no CAA records, but the zone is missing RRSIG records.
> Since there is a DNSSEC validation chain from this zone to the ICANN root,
> the DNS lookup should fail and this failure cannot be treated by the CA as
> permission to issue.
>
> Certificate 6 contains a single DNS identifier for
> blackhole.caatestsuite-dnssec.com.  All DNS requests for this DNS name
> will be dropped, causing a lookup
> failure.  Since there is a DNSSEC validation chain from this zone to the
> ICANN root, CAs are not permitted to treat the lookup failure as permission
> to issue.

Based on my own queries, I do not believe the statement that there is
"a DNSSEC validation chain from this zone to the ICANN root" is
correct for these.

All of these names have NS records in the parent zone, indicating they
are zones themselves:

refused.caatestsuite-dnssec.com. 60 IN NS nsrefused.caatestsuite-dnssec.com.
blackhole.caatestsuite-dnssec.com. 60 IN NS nsblackhole.caatestsuite-dnssec.com.
missing.caatestsuite-dnssec.com. 60 IN NS ns0.caatestsuite-dnssec.com.
missing.caatestsuite-dnssec.com. 60 IN NS ns1.caatestsuite-dnssec.com.

In all three of these cases, the "domain's zone does not have a DNSSEC
validation chain to the ICANN root" -- I requested SOA, DNSKEY, NS,
and CAA records types for each zone and in no case did I get a
response that had a valid DNSSEC chain to the ICANN root.

This leads me to believe these tests are incorrect and I agree with
Jeremy's conclusion for these.

Thanks,
Peter


Re: CAs not compliant with CAA CP/CPS requirement

2017-09-08 Thread Peter Bowen via dev-security-policy
On Fri, Sep 8, 2017 at 12:24 PM, Andrew Ayer via dev-security-policy
 wrote:
> The BRs state:
>
> "Effective as of 8 September 2017, section 4.2 of a CA's Certificate
> Policy and/or Certification Practice Statement (section 4.1 for CAs
> still conforming to RFC 2527) SHALL state the CA's policy or practice
> on processing CAA Records for Fully Qualified Domain Names; that policy
> shall be consistent with these Requirements. It shall clearly specify
> the set of Issuer Domain Names that the CA recognises in CAA 'issue' or
> 'issuewild' records as permitting it to issue. The CA SHALL log all
> actions taken, if any, consistent with its processing practice."
>
> Since it is now 8 September 2017, I decided to spot check the CP/CPSes
> of some CAs.
>
> At time of writing, the latest published CP/CPSes of the following CAs
> are not compliant with the above provision of the BRs:
>
> Amazon (https://www.amazontrust.com/repository/) - Does not check CAA
>
>
> It would be nice to hear confirmation from the non-compliant CAs that they
> really are checking CAA as required, and if so, why they overlooked the
> requirement to update their CP/CPS.

Amazon Trust Services is checking CAA prior to issuance of
certificates.  We provided the domain list in our responses to the
last Mozilla communication and will be updating our externally
published policy and practice documentation to match shortly.

Thanks,
Peter


Re: BR compliance of legacy certs at root inclusion time

2017-08-20 Thread Peter Bowen via dev-security-policy
On Fri, Aug 18, 2017 at 8:47 AM, Ryan Sleevi via dev-security-policy
 wrote:
> On Fri, Aug 18, 2017 at 11:02 AM, Gervase Markham via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Sometimes, CAs apply for inclusion with new, clean roots. Other times,
>> CAs apply to include roots which already have a history of issuance. The
>> previous certs issued by that CA aren't always all BR-compliant. Which
>> is in one sense understandable, because up to this point the CA has not
>> been bound by the BRs. Heck, the CA may never even have heard of the BRs
>> until they come to apply - although this seems less likely than it would
>> once have been.
>>
>> What should our policy be regarding BR compliance for certificates
>> issued by a root requesting inclusion, which were issued before the date
>> of their request? Do we:
>>
>> A) Require all certs be BR-compliant going forward, but grandfather in
>>the old ones; or
>> B) Require that any non-BR-compliant old certs be revoked; or
>> C) Require that any seriously (TBD) non-BR-compliant old certs be
>>revoked; or
>> D) something else?
>>
>
> D) Require that the CA create a new root certificate to be included within
> Mozilla products, and which all future BR-compliant certificates will be
> issued from this new root. In the event this CA has an existing root
> included within one or more software products, this CA may cross-certify
> their new root with their old root, thus ensuring their newly-issued
> certificates (which are BR compliant) work with such legacy software.
>
> This ensures that all included CAs operate from a 'clean slate' with no
> baggage or risk. It also ensures that the slate always starts from "BR
> compliant" and continues forward.
>
> However, some (new) CAs may rightfully point out that existing, 'legacy'
> CAs have not had this standard applied to them, and have operated in a
> manner that is not BR compliant in the past.
>
> To reduce and/or eliminate the risk from existing CAs, particularly those
> with long and storied histories of misissuance, which similar present
> unknowns to the community (roots that may have been included for >5 years,
> thus prior to the BR effective date), require the same of existing roots
> who cannot demonstrate that they have had BR audits from the moment of
> their inclusion. That is, require 'legacy' CAs to create and stand up new
> roots, which will be certified by their existing roots, and transition all
> new certificate issuance to these new 'roots' (which will appear to be
> cross-signed/intermediates, at first). Within 39 months, Mozilla will be
> able to remove all 'legacy' roots for purposes of website authentication,
> adding these 'clean' roots in their stead, without any disruption to the
> community. Note that this is separable from D, and represents an effort to
> holistically clean up and reduce risk.
>
> The transition period at present cannot be less than 39 months (the maximum
> validity of a newly issued certificate), plus whatever time is afforded to
> CAs to transition (presumably, on the order of 6 months should be
> sufficient). In the future, it would also be worth considering reducing the
> maximum validity of certificates, such that such rollovers can be completed
> in a more timely fashion, thus keeping the ecosystem in a constant 'clean'
> state.

From the perspective of being "clean" from a given NSS version, this
makes sense.  However, the reality in most situations is that there is
demand to support applications and devices with trust stores that have
not been updated for a while.  This could be something as simple as
Firefox ESR, or it could be some device with an older trust store.
Assuming there is a need to have the same certificate chain work in
both scenarios, the TLS server may need to send a chain with multiple
root-to-root cross-certificates.

To get a feel for how long a non-looping path might be, I recently
pulled trust stores from dozens of versions of Windows, Netscape,
Mozilla, and Java.  I then used unexpired cross-certificates from CT
to group these trust anchors into unique clusters, or disconnected
graphs.  The results are available as gists.

https://gist.github.com/pzb/cd10fbfffd7cb25bb57c38c3865f18f2 is just
the roots in each unique disconnected graph.  Having the entries there
does not imply that all have cross-signed each other, rather that
there is a path from each pair of roots to a common node.  For
example, Root A and Root B might each have a subordinate CA that has
cross-certified the same third subordinate.

https://gist.github.com/pzb/ffab25cbe7d32c616792a5dec3711315 is the
same data with all the unexpired subordinate cross-certificates
included.

Note that the clustering does not take into account anything besides
expiration; for example it is possible that two paths to a common node
have conflicting constraints.
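The grouping described above amounts to a connected-components computation, treating each unexpired cross-certificate as an undirected edge between trust anchors. A minimal union-find sketch (root names below are made up for illustration):

```python
# Cluster roots into disconnected graphs given cross-certificate edges.
def cluster_roots(roots, cross_certs):
    parent = {r: r for r in roots}

    def find(x):
        # Walk to the representative, halving the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Each cross-certificate joins its issuer's and subject's clusters.
    for issuer, subject in cross_certs:
        parent[find(issuer)] = find(subject)

    clusters = {}
    for r in roots:
        clusters.setdefault(find(r), set()).add(r)
    return sorted(clusters.values(), key=len, reverse=True)
```

Note this ignores everything besides connectivity, just as the caveat above says the real clustering ignores everything besides expiration.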

Considering we already see paths like:

OU=Class 3 Public 

Re: SRVNames in name constraints

2017-08-15 Thread Peter Bowen via dev-security-policy
On Tue, Aug 15, 2017 at 8:01 AM, Jeremy Rowley
 wrote:
> I realize use of underscore characters was been debated and explained at the
> CAB Forum, but I think it's pretty evident (based on the certs issued and
> responses to Ballot 202) that not all CAs believe certs for SRVNames are
> prohibited. I realize the rationale against underscores is that 5280
> requires a valid host name for DNS and X.509 does not necessarily permit
> underscores, but it's not explicitly stated. Ballot 202 went a long way
> towards clarification on when underscores are permitted, but that failed,
> creating all new confusion on the issue.  Any CA not paying careful
> attention to the discussion and looking at only the results, would probably
> believe SRVNames are permitted as long as the entry is in SAN:dNSName
> instead of otherName.

Jeremy,

I was assuming the definition of "SRVName" meant an otherName-type
entry.  Obviously a dNSName of _xmpp.example.com would have name
constraints applied, so I don't think there is an issue there.

Thanks,
Peter


Re: SRVNames in name constraints

2017-08-15 Thread Peter Bowen via dev-security-policy
On Tue, Aug 15, 2017 at 4:20 AM, Gervase Markham via
dev-security-policy  wrote:
> On 06/07/17 16:56, Ryan Sleevi wrote:
>> Relevant to this group, id-kp-serverAuth (and perhaps id-kp-clientAuth)
>
> So what do we do? There are loads of "name-constrained" certs out there
> with id-kp-serverAuth but no constraints on SRVName. Does that mean they
> can issue for any SRVName they like? Is that a problem once we start
> allowing it?
>
> I've filed:
> https://github.com/mozilla/pkipolicy/issues/96
> on this issue in general.

Right now no CA is allowed to issue for SRVName.  Part of the
CA/Browser Forum ballot I had drafted a while ago had language that
said something like "If a CA certificate contains at least one DNSName
entry in NameConstraints and does not have any SRVName entries in
NameConstraints, then the CA MUST NOT issue any certificates
containing SRVname names."

However this is a morass, as it is defining what a CA can do based on
something outside the CA's scope.  I'm not sure how to deal with this,
to be honest.

Thanks,
Peter


Re: 2017.08.10 Let's Encrypt Unicode Normalization Compliance Incident

2017-08-13 Thread Peter Bowen via dev-security-policy
On Sun, Aug 13, 2017 at 5:59 PM, Matt Palmer via dev-security-policy
 wrote:
> On Fri, Aug 11, 2017 at 06:32:11PM +0200, Kurt Roeckx via dev-security-policy 
> wrote:
>> On Fri, Aug 11, 2017 at 11:48:50AM -0400, Ryan Sleevi via 
>> dev-security-policy wrote:
>> >
>> > Could you expand on what you mean by "cablint breaks" or "won't complete in
>> > a timely fashion"? That doesn't match my understanding of what it is or how
>> > it's written, so perhaps I'm misunderstanding what you're proposing?
>>
>> My understand is that it used to be very slow for crt.sh, but
>> that something was done to speed it up. I don't know if that change
>> was something crt.sh specific. I think it was changed to not
>> always restart, but have a process that checks multiple
>> certificates.
>
> I suspect you're referring to the problem of certlint calling out to an
> external program to do ASN.1 validation, which was fixed in
> https://github.com/awslabs/certlint/pull/38.  I believe the feedback from
> Rob was that it did, indeed, do Very Good Things to certlint performance.

I just benchmarked the current cablint code, using 2000 certs from CT
as a sample.  On a single thread of an Intel(R) Xeon(R) CPU E5-2670 v2
@ 2.50GHz, it processes 394.5 certificates per second.  This is 2.53ms
per certificate or 1.4 million certificates per hour.

Thank you Matt for that patch!  This was a _massive_ improvement over
the old design.

Thanks,
Peter


Re: Certificates with reserved IP addresses

2017-08-12 Thread Peter Bowen via dev-security-policy
Congratulations on finding something not caught by certlint.  It turns
out that cablint does zero checks for reserved IPs.  Something else
for my TODO list.
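For anyone who wants to prototype such a check, Python's standard ipaddress module can flag most IANA special-purpose ranges (a rough sketch, not the certlint implementation):

```python
import ipaddress

def is_reserved(san: str) -> bool:
    """True if an ipAddress SAN falls outside globally routable space."""
    addr = ipaddress.ip_address(san)
    # is_global is False for RFC 1918 space, loopback, link-local,
    # unique-local, and other IANA special-purpose ranges
    return not addr.is_global

for san in ("8.8.8.8", "10.1.2.3", "192.168.0.1", "fd00::1"):
    print(san, is_reserved(san))
```

The special-purpose registries change over time, so a real linter would probably pin a specific copy of the IANA tables rather than rely on whatever the stdlib was built against.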

On Sat, Aug 12, 2017 at 6:52 PM, Jonathan Rudenberg via
dev-security-policy  wrote:
> Baseline Requirements section 7.1.4.2.1 prohibits ipAddress SANs from 
> containing IANA reserved IP addresses and any certificates containing them 
> should have been revoked by 2016-10-01.
>
> There are seven unexpired unrevoked certificates that are known to CT and 
> trusted by NSS containing reserved IP addresses.
>
> The full list can be found at: https://misissued.com/batch/7/
>
> DigiCert
> TI Trust Technologies Global CA (5)
> Cybertrust Japan Public CA G2 (1)
>
> PROCERT
> PSCProcert (1)
>
> It’s also worth noting that three of the "TI Trust Technologies” certificates 
> contain dnsNames with internal names, which are prohibited under the same BR 
> section.
>
> Jonathan


Re: Certificates with improperly normalized IDNs

2017-08-11 Thread Peter Bowen via dev-security-policy
On Thu, Aug 10, 2017 at 1:22 PM, Jonathan Rudenberg via
dev-security-policy  wrote:
> RFC 5280 section 7.2 and the associated IDNA RFC requires that 
> Internationalized Domain Names are normalized before encoding to punycode.
>
> Let’s Encrypt appears to have issued at least three certificates that have at 
> least one dnsName without the proper Unicode normalization applied.
>
> It’s also worth noting that RFC 3491 (referenced by RFC 5280 via RFC 3490) 
> requires normalization form KC, but RFC 5891 which replaces RFC 3491 requires 
> normalization form C. I believe that the BRs and/or RFC 5280 should be 
> updated to reference RFC 5890 and by extension RFC 5891 instead.

I did some reading on Unicode normalization today, and it strongly
appears that any string that has been normalized to normalization form
KC is by definition also in normalization form C.  Normalization is
idempotent, so doing toNFKC(toNFKC()) results in the same string as
just doing toNFKC(), and toNFC(toNFC()) is the same as toNFC().
Additionally, toNFKC() is the same as toNFC(toNFKD()).

This means that checking that a string matches the result of
toNFC(string) is a valid check regardless of whether using the 349* or
589* RFCs.  It does mean that Certlint will not catch strings that are
in NFC but not in NFKC.
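These properties are easy to confirm with Python's stdlib unicodedata module (a quick sketch; the sample strings are arbitrary):

```python
import unicodedata

# fi-ligature (U+FB01), angstrom sign (U+212B), vulgar half (U+00BD)
samples = ["\ufb01ancé", "\u212bngström", "½ kg"]
for s in samples:
    nfc = unicodedata.normalize("NFC", s)
    nfkc = unicodedata.normalize("NFKC", s)
    # normalization is idempotent
    assert unicodedata.normalize("NFC", nfc) == nfc
    assert unicodedata.normalize("NFKC", nfkc) == nfkc
    # a string in NFKC form is also in NFC form
    assert unicodedata.normalize("NFC", nfkc) == nfkc

# ...but NFC does not imply NFKC: compatibility characters survive NFC
print(unicodedata.normalize("NFKC", "\ufb01"))  # "fi"
```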

Thanks,
Peter

P.S. I've yet to find a registered domain name not in NFC, and that
includes checking every name in the zone files for all ICANN gTLDs
and a few ccTLDs.


Re: Certificates with improperly normalized IDNs

2017-08-10 Thread Peter Bowen via dev-security-policy
On Thu, Aug 10, 2017 at 2:31 PM, Jakob Bohm via dev-security-policy
 wrote:
> On 10/08/2017 22:22, Jonathan Rudenberg wrote:
>>
>> RFC 5280 section 7.2 and the associated IDNA RFC requires that
>> Internationalized Domain Names are normalized before encoding to punycode.
>>
>> Let’s Encrypt appears to have issued at least three certificates that have
>> at least one dnsName without the proper Unicode normalization applied.
>>
>> https://crt.sh/?id=187634027&opt=cablint
>> https://crt.sh/?id=187628042&opt=cablint
>> https://crt.sh/?id=173493962&opt=cablint
>>
>> It’s also worth noting that RFC 3491 (referenced by RFC 5280 via RFC 3490)
>> requires normalization form KC, but RFC 5891 which replaces RFC 3491
>> requires normalization form C. I believe that the BRs and/or RFC 5280 should
>> be updated to reference RFC 5890 and by extension RFC 5891 instead.
>>
>> Jonathan
>>
>
> All 3 dnsName values exist in the DNS and point to the same server (IP
> address). Whois says that the two second level names are both registered
> to OOO "JilfondService" .
>
> This raises the question if CAs should be responsible for misissued
> domain names, or if they should be allowed to issue certificates to
> actually existing DNS names.
>
> I don't know if the bad punycode encodings are in the 2nd level names (a
> registrar/registry responsibility, both were from 2012 or before) or in
> the 3rd level names (locally created at an unknown date).
>
> An online utility based on the older RFC349x round trips all of these.
> So if the issue is only compatibility with a newer RFC not referenced from
> the current BRs, these would probably be OK under the current BRs and
> certLint needs to accept them.
>
> Note: The DNS names are:
>
> xn--80aqafgnbi.xn--b1addckdrqixje4a.xn--p1ai
> xn--80aqafgnbi.xn--f1awi.xn--p1ai
> xn-blcihca2aqinbjzlgp0hrd8c.xn--f1awi.xn--p1ai

These are not the names causing issues.

"xn--109-3veba6djs1bfxlfmx6c9g.xn--b1addckdrqixje4a.xn--p1ai" from
https://crt.sh/?id=187634027&opt=cablint
"xn--109-3veba6djs1bfxlfmx6c9g.xn--f1awi.xn--p1ai" from
https://crt.sh/?id=187628042&opt=cablint
"xn--109-3veba6djs1bfxlfmx6c9g.xn--f1awi.xn--p1ai" from
https://crt.sh/?id=173493962&opt=cablint (same name as the prior cert)

It is the xn--109-3veba6djs1bfxlfmx6c9g label that is incorrect in all
three.  In all three the bad label is not in the registered domain or
any public suffix.

Directly decoded, this string is:

"\u0608\u061c\u0628\u0031\u0608\u0611\u0618\u061e\u0608\u0621\u0612\u0614\u0030\u061b\u0039\u061a\u0618\u061c"

However the string when normalized to NFC is:

"\u0608\u061c\u0628\u0031\u0608\u0618\u0611\u061e\u0608\u0621\u0612\u0614\u0030\u061b\u0039\u0618\u061a\u061c"

If you look carefully, you will see two different pairs of codepoints
that are swapped in the normalized string.
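The mismatch is reproducible with the stdlib. The swaps come from canonical ordering of combining marks: U+0611 has combining class 230 while U+0618 has class 30, and U+061A has class 32, so NFC sorts each pair into ascending class order (a sketch using the escaped string quoted above):

```python
import unicodedata

decoded = ("\u0608\u061c\u0628\u0031\u0608\u0611\u0618\u061e"
           "\u0608\u0621\u0612\u0614\u0030\u061b\u0039\u061a\u0618\u061c")
nfc = unicodedata.normalize("NFC", decoded)

# canonical ordering sorts each run of combining marks by combining
# class, swapping U+0611/U+0618 and U+061A/U+0618
diffs = [i for i, (a, b) in enumerate(zip(decoded, nfc)) if a != b]
print(diffs)  # [5, 6, 15, 16]
```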

Thanks,
Peter


Re: Certificates with metadata-only subject fields

2017-08-09 Thread Peter Bowen via dev-security-policy
The point of certlint was to help identify issues.  While I appreciate
it getting broad usage, I don't think pushing for revocation of every
certificate that trips any of the Error level checks is productive.
This reminds me of people trawling a database of known
vulnerabilities then reporting them to the vendors and asking for a
reward, which happens all too often in bug bounty programs.

I think it would be much more valuable to have a "score card" by CA
Operator that shows absolute defects and defect rate.

Thanks,
Peter

On Wed, Aug 9, 2017 at 2:21 PM, Jeremy Rowley via dev-security-policy
 wrote:
> And this is exactly why we need separate tiers of revocation. Here, there is 
> zero risk to the end user.  I do think it should be fixed and remediated, but 
> revoking all these certs within 24 hours seems unnecessarily harsh.  I think 
> there was a post about this a while ago, but I haven't been able to find it.  
> If someone remembers where it was, I'd appreciate it.
>
> -Original Message-
> From: dev-security-policy 
> [mailto:dev-security-policy-bounces+jeremy.rowley=digicert@lists.mozilla.org]
>  On Behalf Of Jonathan Rudenberg via dev-security-policy
> Sent: Wednesday, August 9, 2017 10:08 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Certificates with metadata-only subject fields
>
> Baseline Requirements section 7.1.4.2.2(j) says:
>
>> All other optional attributes, when present within the subject field, MUST 
>> contain information that has been verified by the CA. Optional attributes 
>> MUST NOT contain metadata such as ‘.’, ‘‐‘, and ‘ ‘ (i.e. space) characters, 
>> and/or any other indication that the value is absent, incomplete, or not 
>> applicable.
>
> There are 522 unexpired unrevoked certificates known to CT issued after 
> 2015-11-01 that are trusted by NSS for server authentication and have at 
> least one subject field that only contains ASCII punctuation characters.
>
> The full list can be found here: https://misissued.com/batch/5/
>
> Since there are so many, I have included a list of the CCADB owner, 
> intermediate commonName, and count of certificates for the 311 certificates 
> in this batch that were issued in the last 365 days so that the relevant CAs 
> can add the appropriate technical controls and policy to comply with this 
> requirement in the future. Please let me know if there is any additional 
> information that would be useful.
>
> Jonathan
>
> —
>
> DigiCert (131)
> Cybertrust Japan Public CA G3 (64)
> DigiCert SHA2 Extended Validation Server CA (36)
> DigiCert SHA2 High Assurance Server CA (12)
> TERENA SSL CA 3 (7)
> DigiCert SHA2 Secure Server CA (6)
> Cybertrust Japan EV CA G2 (6)
>
> GlobalSign (62)
> GlobalSign Organization Validation CA - SHA256 - G2 (46)
> GlobalSign Extended Validation CA - SHA256 - G2 (8)
> GlobalSign Extended Validation CA - SHA256 - G3 (8)
>
> Symantec / VeriSign (35)
> Symantec Class 3 Secure Server CA - G4 (32)
> Symantec Class 3 EV SSL CA - G3 (2)
> Wells Fargo Certificate Authority WS1 (1)
>
> Symantec / GeoTrust (34)
> GeoTrust SSL CA - G3 (25)
> GeoTrust SHA256 SSL CA (5)
> RapidSSL SHA256 CA (2)
> GeoTrust Extended Validation SHA256 SSL CA (2)
>
> Comodo (19)
> COMODO RSA Organization Validation Secure Server CA (11)
> COMODO RSA Extended Validation Secure Server CA (8)
>
> Symantec / Thawte (17)
> thawte SSL CA - G2 (12)
> thawte SHA256 SSL CA (3)
> thawte EV SSL CA - G3 (2)
>
> T-Systems International GmbH (Deutsche Telekom) (6)
> Zertifizierungsstelle FH Duesseldorf - G02 (3)
> TeleSec ServerPass Class 2 CA (2)
> Helmholtz-Zentrum fuer Infektionsforschung (1)
>
> QuoVadis (3)
> QuoVadis EV SSL ICA G1 (2)
> QuoVadis Global SSL ICA G2 (1)
>
> SECOM Trust Systems Co. Ltd. (2)
> NII Open Domain CA - G4 (2)
>
> SwissSign AG (1)
> SwissSign Server Gold CA 2014 - G22 (1)
>
> Entrust (1)
> Entrust Certification Authority - L1K (1) 


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Peter Bowen via dev-security-policy
(inserted missed word; off to get coffee now)

On Mon, Aug 7, 2017 at 7:54 AM, Peter Bowen  wrote:
> On Mon, Aug 7, 2017 at 12:53 AM, Franck Leroy via dev-security-policy
>  wrote:
>> Hello
>>
>> I checked only one but I think they are all the same.
>>
>> The integer value of the serial number is 20 octets, but when encoded into 
>> DER a starting 00 may be necessary to mark the integer as a positive value :
>>
>>    0 1606: SEQUENCE {
>>    4 1070:   SEQUENCE {
>>    8    3:     [0] {
>>   10    1:       INTEGER 2
>>         :       }
>>   13   21:     INTEGER
>>         :       00 A5 45 35 99 1C E2 8B 6D D9 BC 1E 94 48 CC 86
>>         :       7C 6B 59 9E B3
>>
>> So the serialNumber (integer) value is 20 octets long but length can be more 
>> depending on the encoding representation.
>>
>> Here is ASCII (common representation when stored into a database): 
>> "A54535991CE28B6DD9BC1E9448CC867C6B599EB3" it is 40 octets long, VARCHAR(40) 
>> is needed.
>
> The text from 5280 says:
>
> " CAs MUST force the serialNumber to be a non-negative integer, that
>is, the sign bit in the DER encoding of the INTEGER value MUST be
>zero.  This can be done by adding a leading (leftmost) `00'H octet if
>necessary.  This removes a potential ambiguity in mapping between a
>string of octets and an integer value.
>
>As noted in Section 4.1.2.2, serial numbers can be expected to
>contain long integers.  Certificate users MUST be able to handle
>serialNumber values up to 20 octets in length.  Conforming CAs MUST
>NOT use serialNumber values longer than 20 octets."
>
> This makes it somewhat unclear whether the `00'H octet is to be included in
> the 20 octet limit or not. While I can see how one might view it
> differently, I think the correct interpretation is to include the
> leading `00'H octet in the count.  This is because
> CertificateSerialNumber is defined as being an INTEGER, which means
> "octet" is not applicable.  If it was defined as OCTET STRING, similar
> to how KeyIdentifier is defined, then octet could be seen as applying
> to the unencoded value.  However, given this is an INTEGER, the only
> way to get octets is to encode and this requires the leading bit to be
> zero for non-negative values.
>
> That being said, I think that it is reasonable to add "DER encoding of
> Serial must be 20 octets or less including any leading 00 octets" to
> the list of ambiguities that CAs must fix by date X, rather than
> something that requires revocation.
>
> Thanks,
> Peter


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Peter Bowen via dev-security-policy
On Mon, Aug 7, 2017 at 12:53 AM, Franck Leroy via dev-security-policy
 wrote:
> Hello
>
> I checked only one but I think they are all the same.
>
> The integer value of the serial number is 20 octets, but when encoded into 
> DER a starting 00 may be necessary to mark the integer as a positive value :
>
>    0 1606: SEQUENCE {
>    4 1070:   SEQUENCE {
>    8    3:     [0] {
>   10    1:       INTEGER 2
>         :       }
>   13   21:     INTEGER
>         :       00 A5 45 35 99 1C E2 8B 6D D9 BC 1E 94 48 CC 86
>         :       7C 6B 59 9E B3
>
> So the serialNumber (integer) value is 20 octets long but length can be more 
> depending on the encoding representation.
>
> Here is ASCII (common representation when stored into a database): 
> "A54535991CE28B6DD9BC1E9448CC867C6B599EB3" it is 40 octets long, VARCHAR(40) 
> is needed.

The text from 5280 says:

" CAs MUST force the serialNumber to be a non-negative integer, that
   is, the sign bit in the DER encoding of the INTEGER value MUST be
   zero.  This can be done by adding a leading (leftmost) `00'H octet if
   necessary.  This removes a potential ambiguity in mapping between a
   string of octets and an integer value.

   As noted in Section 4.1.2.2, serial numbers can be expected to
   contain long integers.  Certificate users MUST be able to handle
   serialNumber values up to 20 octets in length.  Conforming CAs MUST
   NOT use serialNumber values longer than 20 octets."

This makes it somewhat unclear whether the `00'H octet is to be included in
the 20 octet limit or not. While I can see how one might view it
differently, I think the correct interpretation is to include the
leading `00'H octet in the count.  This is because
CertificateSerialNumber is defined as being an INTEGER, which means
"octet" is not applicable.  If it was defined as OCTET STRING, similar
to how KeyIdentifier is defined, then octet could be seen as applying
to the unencoded value.  However, given this is an INTEGER, the only
way to get octets is to encode and this requires the leading bit to be
zero for non-negative values.

That being said, I think that it is reasonable to add "DER encoding of
Serial must be 20 octets or less including any leading 00 octets" to
the list of ambiguities that CAs must fix by date X, rather than
something that requires revocation.
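Under that interpretation, the check is mechanical: produce the minimal DER INTEGER content octets for the serial and count them, including any leading 00. A sketch in Python (not tied to any particular linter):

```python
def serial_content_octets(serial: int) -> bytes:
    """Minimal DER INTEGER content octets for a non-negative serial."""
    body = serial.to_bytes(max(1, (serial.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:
        body = b"\x00" + body  # prepend 00 so the sign bit stays zero
    return body

# the 20-octet serial from the certificate quoted above
serial = 0xA54535991CE28B6DD9BC1E9448CC867C6B599EB3
print(len(serial_content_octets(serial)))  # 21 -> over the limit in this reading
```

Any serial whose top octet has the high bit set gains a 21st octet this way, which is exactly the ambiguity discussed above.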

Thanks,
Peter


Re: DigiCert-Symantec Announcement

2017-08-02 Thread Peter Bowen via dev-security-policy
On Wed, Aug 2, 2017 at 8:10 PM, Peter Gutmann via dev-security-policy
 wrote:
> Jeremy Rowley via dev-security-policy  
> writes:
>
>>Today, DigiCert and Symantec announced that DigiCert is acquiring the
>>Symantec CA assets, including the infrastructure, personnel, roots, and
>>platforms.
>
> I realise this is a bit off-topic for the list but someone has to bring up the
> elephant in the room: How does this affect the Google vs. Symantec situation?
> Is it pure coincidence that Symantec now re-emerges as DigiCert, presumably
> avoiding the sanctions since now things will chain up to DigiCert roots?

Peter,

On topic for this list is Mozilla policy.  Gerv's email was clear that
sale to DigiCert will not impact the plan, saying: "any change of
control of some or all of Symantec's roots
would not be grounds for a renegotiation of these dates."

So the sanctions are still intact.

Thanks,
Peter


Re: DigiCert-Symantec Announcement

2017-08-02 Thread Peter Bowen via dev-security-policy
On Wed, Aug 2, 2017 at 2:12 PM, Jeremy Rowley via dev-security-policy
 wrote:
> Today, DigiCert and Symantec announced that DigiCert is acquiring the
> Symantec CA assets, including the infrastructure, personnel, roots, and
> platforms.  At the same time, DigiCert signed a Sub CA agreement wherein we
> will validate and issue all Symantec certs as of Dec 1, 2017.  We are
> committed to meeting the Mozilla and Google plans in transitioning away from
> the Symantec infrastructure. The deal is expected to close near the end of
> the year, after which we will be solely responsible for operation of the CA.
> From there, we will migrate customers and systems as necessary to
> consolidate platforms and operations while continuing to run all issuance
> and validation through DigiCert.  We will post updates and plans to the
> community as things change and progress.
>
> Thanks a ton for any thoughts you offer.

Jeremy,

A while ago I put together a list of all the certificates that are or
were included in trust stores that were known to be owned by Symantec
or companies that Symantec acquired.  The list is in Google Sheets at
https://docs.google.com/spreadsheets/d/1piCTtgMz1Uf3SHXoNEFYZKAjKGPJdRDGFuGehdzcvo8/edit?usp=sharing

Can you confirm that DigiCert will be "solely responsible for
operation" of all of these CAs once the deal closes?

Thanks,
Peter


Re: Final Decision by Google on Symantec

2017-07-31 Thread Peter Bowen via dev-security-policy
On Mon, Jul 31, 2017 at 7:17 AM, Jakob Bohm via dev-security-policy
 wrote:
> On 31/07/2017 16:06, Gervase Markham wrote:
>>
>> On 31/07/17 15:00, Jakob Bohm wrote:
>>>
>>> - Due to current Mozilla implementation bugs,
>>
>>
>> Reference, please?
>>
>
> I am referring to the fact that EV-trust is currently assigned to roots,
> not to SubCAs, at least as far as visible root store descriptions go.
>
> Since I know of no standard way for a SubCA certificate to state if it
> intended for EV certs or not, that would cause EV-trust to percolate
> into SubCAs that were never intended for this purpose by the root CA.

This is common to every EV implementation I know about, not just
Mozilla.  Therefore I would not call this a bug.

Thanks,
Peter


Re: Final Decision by Google on Symantec

2017-07-29 Thread Peter Bowen via dev-security-policy
On Thu, Jul 27, 2017 at 11:14 PM, Gervase Markham via
dev-security-policy  wrote:
> Google have made a final decision on the various dates they plan to
> implement as part of the consensus plan in the Symantec matter. The
> message from blink-dev is included below.
>
[...]
>
> We now have two choices. We can accept the Google date for ourselves, or
> we can decide to implement something earlier. Implementing something
> earlier would involve us leading on compatibility risk, and so would
> need to get wider sign-off from within Mozilla, but nevertheless I would
> like to get the opinions of the m.d.s.p community.
>
> I would like to make a decision on this matter on or before July 31st,
> as Symantec have asked for dates to be nailed down by then in order for
> them to be on track with their Managed CA implementation timetable. If
> no alternative decision is taken and communicated here and to Symantec,
> the default will be that we will accept Google's final proposal as a
> consensus date.

Gerv,

I think there are three more things that Mozilla needs to decide.

First, when the server authentication trust bits will be removed from
the existing roots.  This is of notable importance for non-Firefox
users of NSS.  Based on the Chrome email, it looks like they will
remove trust bits in their git repo around August 23, 2018.  When will
NSS remove the trust bits?

Second, how the dates apply to email protection certificates, if at
all.  Chrome only deals with server authentication certificates, so
their decision does not cover other types of certificates.  Will the
email protection trust bits be turned off at some point?

Third, what the requirements are for Symantec to submit new roots,
including any limit to how many may be submitted.
https://ccadb-public.secure.force.com/mozilla/IncludedCACertificateReport
shows that there are currently 20 Symantec roots included.  Would it
be reasonable for them to submit replacements on a 1:1 basis -- that
is 20 new roots?

Thanks,
Peter


Re: [EXT] Symantec Update on SubCA Proposal

2017-07-21 Thread Peter Bowen via dev-security-policy
Steve,

I think this level of public detail is very helpful when it comes to
understanding the proposal.

On Thu, Jul 20, 2017 at 8:00 AM, Steve Medin via dev-security-policy
 wrote:
> 1)  December 1, 2017 is the earliest credible date that any RFP 
> respondent can provide the Managed CA solution proposed by Google, assuming a 
> start date of August 1, 2017. Only one RFP respondent initially proposed a 
> schedule targeting August 8, 2017 (assuming a start date of June 12, 2017). 
> We did not deem this proposal to be credible, however, based on the lack of 
> specificity around our RFP evaluation criteria, as compared to all other RFP 
> responses which provided detailed responses to all aspects of the RFP, and we 
> have received no subsequent information from this bidder to increase our 
> confidence.

You note that this assumes a start date of June 12.   A later email
from Rick Andrews says "Our proposed dates assume we are able to
finalize negotiation of contracts with the selected Managed CA
partner(s), [...] by no later than July 31, 2017."

Presumably the June 12 date is long gone.  However if one assumes the
delta of 57 days from start to delivery stands, this would put
delivery at September 26, 2017.  This is two months sooner than the
December 1 date.  This seems like a pretty big difference.  Given you
are asking to delay the timeline based on other RFP respondents being
unable to hit earlier dates, it seems prudent to ask whether the you
attempted to investigate the proposal from the bidder who proposed
August 8.

Given that one of the requirements stated by Google is that the SubCA
operator had to have roots that have been in the Google trust store
for several years, it seems unusual that any eligible respondent would
not be "credible" out of the gate.

Did you ask them to provide more information and details to help
determine if it was a "credible" offer?

> 2)  We are using several selection criteria for evaluating RFP responses, 
> including the depth of plan to address key technical integration and 
> operational requirements, the timeframe to execute, the ability to handle the 
> scope, volume, language, and customer support requirements both for ongoing 
> issuance and for one-time replacement of certificates issued prior to June 1, 
> 2016, compliance program and posture, and the ability to meet uptime, 
> interface performance, and other SLAs. Certain RFP respondents have 
> distinguished themselves based on the quality and depth of their integration 
> planning assumptions, requirements and activities, which have directly 
> influenced the dates we have proposed for the SubCA proposal.
>
> 3)  The RFP was first released on May 26, 2017. The first round of bidder 
> responses was first received on June 12, 2017.

In the 
https://groups.google.com/a/chromium.org/d/msg/blink-dev/eUAKwjihhBs/ovLalSBRBQAJ
message, it was implied that Symantec was aware of the SubCA plan and
dates since at least May 12.  Given the plan to sign an agreement by
July 31, the August 8 date seems rather impossible. Did Symantec push
back on the August 8 date at that point?

In the original email that started this subthread, you said, "Some of
the prospective Managed CAs have proposed supporting only a portion of
our volume (some by customer segment, others by geographic focus), so
we are also evaluating options that involve working with multiple
Managed CAs."

Have you considered a staggered date system for different classes of
certificates?  For example, I would assume that certificates that
don't contain subject identity information would have less work for
migration integration than EV certificates.  Given that it is common
practice to have a different SubCA for different certificates types,
could you hit an earlier date for non-EV certificates and then later
have the EV SubCA ready?

Thanks,
Peter


Re: WoSign new system passed Cure 53 system security audit

2017-07-13 Thread Peter Bowen via dev-security-policy
Richard,

I can only guess what Ryan is talking about as the report wasn't sent
to this group, but it is possible that the system described could not
meet the Baseline Requirements, as the BRs do require certain system
designs.  For example, two requirements are:

"Require that each individual in a Trusted Role use a unique
credential created by or assigned to that person in order to
authenticate to Certificate Systems" and "Enforce multi-factor
authentication for administrator access to Issuing Systems and
Certificate Management Systems"

If the system does not do these things, then it "cannot meet the BRs,
you would have to change that system to meet the BR" (quoting Ryan).

Please keep in mind that these are only guesses; there are numerous
other things that could be in the report that could lead to the same
conclusion.

Thanks,
Peter

On Thu, Jul 13, 2017 at 5:04 PM, Richard Wang via dev-security-policy
 wrote:
> Hi Ryan,
>
> Thanks for your detail info.
>
> But I still CAN NOT understand why you say and confirm that the new system 
> cannot and does not comply with BR before we start to use it.
>
> We will do the BR audit soon.
>
> Best Regards,
>
> Richard
>
> On 14 Jul 2017, at 00:50, Ryan Sleevi 
> > wrote:
>
> You will fail #4. Because your system, as designed, cannot and does not 
> comply with the Baseline Requirements.
>
> As such, you will then
> (4.1) Update new system, developing new code and new integrations
> (4.2) Engage the auditor to come back on side
> (4.3) Hope you get it right this time
> (4.4) Generate a new root
> (4.5) Do the PITRA audit and hopefully pass
> (4.6) Hope that the security audit from #1 still applies to #4.1 [but because 
> the changes needed are large, it's hard to imagine]
> (5) Apply for the new root inclusion
>
> The system you had security audited in #1 cannot pass #4. That's why working 
> with an auditor to do a readiness assessment in conjunction with or before 
> the security assessment can help ensure you can meet the BRs, and then ensure 
> you can meet them securely.
>
> On Thu, Jul 13, 2017 at 11:04 AM, Richard Wang 
> > wrote:
> Hi Ryan,
>
> I really don't understand where the new system can't meet the BRs; we haven't 
> used the new system to issue a single certificate, so how does it violate the BRs?
>
> Our step is:
> (1) develop a new secure system in the new infrastructure, then do the new 
> system security audit, pass the security audit;
> (2) engage a WebTrust auditor onsite to generate the new root in the new 
> system;
> (3) use the new audited system to issue certificate;
> (4) do the PITRA audit and WebTrust audit;
> (5) apply the new root inclusion.
> When we start the new root inclusion application, we will follow the 
> requirements here: https://bugzilla.mozilla.org/show_bug.cgi?id=1311824
> to demonstrate that we meet the 6 requirements.
>
> We will discard the old system and facilitates, so the right order should be 
> have-new-system first, then audit the new system, then apply the new root 
> inclusion. We can not use the old system to do the BR audit.
>
> Please advise, thanks.
>
>
> Best Regards,
>
> Richard
>
> On 13 Jul 2017, at 21:53, Ryan Sleevi 
> > wrote:
>
> Richard,
>
> That's great, but the system that passed the full security audit cannot meet 
> the BRs, you would have to change that system to meet the BRs, and then that 
> new system would no longer be what was audited.
>
> I would encourage you to address the items in the order that Mozilla posed 
> them - such as first systematically identifying and addressing the flaws 
> you've found, and then working with a qualified auditor to demonstrate both 
> remediation and that the resulting system is BR compliant. And then perform 
> the security audit. This helps ensure your end result is most aligned with 
> the desired state - and provides the public the necessary assurances that 
> WoSign, and their management, understand what's required of a publicly 
> trusted CA.
>
> On Wed, Jul 12, 2017 at 10:24 PM, Richard Wang 
> > wrote:
> Hi Ryan,
>
> We got confirmation from Cure 53 that the new system passed the full security 
> audit. Please contact Cure 53 directly to verify this, thanks.
>
> We don't start the BR audit now.
>
> Best Regards,
>
> Richard
>
> On 12 Jul 2017, at 22:09, Ryan Sleevi 
> > wrote:
>
>
>
> On Tue, Jul 11, 2017 at 8:18 PM, Richard Wang 
> > wrote:
> Hi all,
>
> Your reported BR issues is from StartCom, not WoSign, we don't use the new 
> system to issue any certificate now since the new root is not generated.
> PLEASE DO NOT mix it, thanks.
>
> Best Regards,
>
> Richard
>
> No, the BR non-compliance is demonstrated from the report provided to 
> browsers - that is, the full report 

Re: SRVNames in name constraints

2017-07-05 Thread Peter Bowen via dev-security-policy

> On Jul 5, 2017, at 4:23 AM, Gervase Markham via dev-security-policy 
>  wrote:
> 
> On 03/07/17 17:44, Peter Bowen wrote:
>> We still need to get the policy changed, even with the ballot.  As
>> written right now, all name constrained certificates are no longer
>> considered constrained.
> 
> I'm not sure what you mean... What's the issue you are raising here?

Right now (Policy v2.5) says:

Intermediate certificates which have at least one valid, unrevoked chain up to 
such a CA certificate and which are not technically constrained to prevent 
issuance of working server or email certificates. Such technical constraints 
could consist of either:

an Extended Key Usage (EKU) extension which does not contain any of these 
KeyPurposeIds: anyExtendedKeyUsage, id-kp-serverAuth, id-kp-emailProtection; or:
name constraints which do not allow Subject Alternative Names (SANs) of any of 
the following types: dNSName, iPAddress, SRVName, rfc822Name
The second bullet says “any”.  Because the rule for name constraints is that a 
type with no constraints present permits any name of that type, you have to 
include name constraints for all four types.  The issue comes down to the 
definition of “working server” certificates.  Mozilla does not use either 
rfc822Name or SRVName for name validation during server authentication, but you 
could have a valid server certificate that has only these names.  Is NSS/Firefox 
code considered a “technical constraint”?  If not, then all technically 
constrained CA certificates need to have constraints on SRVName and rfc822Name 
GeneralNames in addition to what they have now.
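
Read that way, the check is mechanical.  Here is a minimal sketch in Python
(the set names and the encoding of the two policy bullets are my own, not the
policy's wording):

```python
# The four GeneralName types that must all be constrained, per the second
# bullet of policy v2.5, for an intermediate to count as constrained.
REQUIRED_CONSTRAINED_TYPES = {"dNSName", "iPAddress", "SRVName", "rfc822Name"}
# KeyPurposeIds that keep an EKU-bearing intermediate in scope (first bullet).
IN_SCOPE_EKUS = {"anyExtendedKeyUsage", "id-kp-serverAuth", "id-kp-emailProtection"}

def is_technically_constrained(ekus, constrained_name_types):
    """ekus: set of KeyPurposeId names in the EKU extension (None = no extension).
    constrained_name_types: set of GeneralName types covered by name constraints."""
    # Bullet 1: an EKU extension that excludes every in-scope purpose.
    if ekus is not None and not (ekus & IN_SCOPE_EKUS):
        return True
    # Bullet 2: name constraints must cover all four types, because a type
    # with no constraint present allows any name of that type.
    return REQUIRED_CONSTRAINED_TYPES <= set(constrained_name_types)
```

Under this reading, an intermediate constrained only on dNSName and iPAddress
fails the second bullet, even though Firefox would never accept its SRVName or
rfc822Name names for server authentication.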

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SRVNames in name constraints

2017-07-03 Thread Peter Bowen via dev-security-policy
We still need to get the policy changed, even with the ballot.  As
written right now, all name constrained certificates are no longer
considered constrained.

On Mon, Jul 3, 2017 at 9:42 AM, Jeremy Rowley
<jeremy.row...@digicert.com> wrote:
> Isn't this ballot ready to go?  If we start the review period now, it'll be
> passed by the time the Mozilla policy is updated.
>
> -Original Message-
> From: dev-security-policy
> [mailto:dev-security-policy-bounces+jeremy.rowley=digicert.com@lists.mozilla
> .org] On Behalf Of Peter Bowen via dev-security-policy
> Sent: Monday, July 3, 2017 10:30 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: SRVNames in name constraints
>
> In reviewing the Mozilla CA policy, I noticed one bug that is probably my
> fault.  It says:
>
> "name constraints which do not allow Subject Alternative Names (SANs) of any
> of the following types: dNSName, iPAddress, SRVName, rfc822Name"
>
> SRVName is not yet allowed by the CA/Browser Forum Baseline Requirements
> (BRs), so I highly doubt any CA has issued a cross-certificate containing
> constraints on SRVName-type names.  Until the Forum allows such issuance, I
> think this requirement should be changed to remove SRVName from the list.
> If the Forum does allow such in the future, adding this back can be
> revisited at such time.
>
> Thanks,
> Peter


SRVNames in name constraints

2017-07-03 Thread Peter Bowen via dev-security-policy
In reviewing the Mozilla CA policy, I noticed one bug that is probably
my fault.  It says:

"name constraints which do not allow Subject Alternative Names (SANs)
of any of the following types: dNSName, iPAddress, SRVName,
rfc822Name"

SRVName is not yet allowed by the CA/Browser Forum Baseline
Requirements (BRs), so I highly doubt any CA has issued a
cross-certificate containing constraints on SRVName-type names.  Until
the Forum allows such issuance, I think this requirement should be
changed to remove SRVName from the list.  If the Forum does allow such
in the future, adding this back can be revisited at such time.

Thanks,
Peter


Re: Unknown Intermediates

2017-06-23 Thread Peter Bowen via dev-security-policy
On Fri, Jun 23, 2017 at 6:17 AM, Rob Stradling via dev-security-policy
 wrote:
> On 23/06/17 14:10, Kurt Roeckx via dev-security-policy wrote:
>>
>> On 2017-06-23 14:59, Rob Stradling wrote:
>>>
>>> Reasons:
>>>- Some are only trusted by the old Adobe CDS program.
>>>- Some are only trusted for Microsoft Kernel Mode Code Signing.
>>>- Some are very old roots that are no longer trusted.
>>
>>
>> I wonder if Google's daedalus would like to see some of those.
>
>
> Daedalus only accepts expired certs.  Most of these haven't expired.
>
> If there's interest, I could add these to our Dodo log.

For those three, I would be interested in seeing them.  I wonder if
any match submariner as well.


Re: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-21 Thread Peter Bowen via dev-security-policy
On Wed, Jun 21, 2017 at 7:15 AM, Gervase Markham via
dev-security-policy  wrote:
> On 21/06/17 13:13, Doug Beattie wrote:
>>> Do they have audits of any sort?
>>
>> There had not been any audit requirements for EKU technically
>> constrained CAs, so no, there are no audits.
>
> In your view, having an EKU limiting the intermediate to just SSL or to
> just email makes it a technically constrained CA, and therefore not
> subject to audit under any root program?
>
> I ask because Microsoft's policy at http://aka.ms/auditreqs says:
>
> "Microsoft requires that every CA submit evidence of a Qualifying Audit
> on an annual basis for the CA and any non-limited root within its PKI
> chain."
>
> In your view, are these two intermediates, which are constrained only by
> having the email and client auth EKUs, "limited" or "non-limited"?

What is probably not obvious is that there is a very specific
definition of "non-limited" with respect to the Microsoft policy.  The
definition is unfortunately contained in the contract, which is
confidential, but it makes clear that these CAs are out of scope for
audits.

Thanks,
Peter


Re: ETSI auditors still not performing full annual audits?

2017-06-19 Thread Peter Bowen via dev-security-policy
On Mon, Jun 19, 2017 at 12:14 PM, Kathleen Wilson via
dev-security-policy  wrote:
> I just filed https://bugzilla.mozilla.org/show_bug.cgi?id=1374381 about an 
> audit statement that I received for SwissSign. I have copied the bug 
> description below, because I am concerned that there still may be ETSI 
> auditors (and CAs?) who do not understand the audit requirements, see below.
>
> ~~~
> SwissSign provided their annual audit statement:
> https://bug1142323.bmoattachments.org/attachment.cgi?id=8853299
>
> Problems noted in it:
> -- "Agreed-upon procedures engagement" - special words for audits - does not 
> necessarily encompass the full scope
> -- "surveillance certification audits" - does not necessarily mean a full 
> audit (which the BRs require annually)
> -- "point in time audit" -- this means that the auditor's evaluation only 
> covered that point in time (not a period in time)
> -- "only intended for the client" -- Doesn't meet Mozilla's requirement for 
> public-facing audit statement.
> -- "We were not engaged to and did not conduct an examination, the objective 
> of which would be the expression of an opinion on the Application for 
> Extended Validation (EV) Certificate. Accordingly, we do not express such an 
> opinion. Had we performed additional procedures, other matters might have 
> come to our attention that would have been reported to you." -- some of the 
> included root certs are enabled for EV treatment, so need an EV audit as well.
>
>
> According to section 8.1 of the CA/Browser Forum's Baseline Requirements:
> "Certificates that are capable of being used to issue new certificates MUST 
> ... be ... fully audited in line with all remaining requirements from this 
> section.
> ...
> The period during which the CA issues Certificates SHALL be divided into an 
> unbroken sequence of audit periods. An audit period MUST NOT exceed one year 
> in duration."
>
> So, a full period-in-time audit is required every year.
>
> After I voiced concern 
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1142323#c27) the CA provided an 
> updated audit statement to address the concerns I had raised in the bug:
> https://bugzilla.mozilla.org/attachment.cgi?id=8867948
> I do not understand how the audit statement can magically change from 
> point-in-time to a period-in-time.
> ~~~
>
> I will greatly appreciate thoughtful and constructive input into this 
> discussion about what to do about this SwissSign audit situation, and if this 
> is an indicator that ETSI auditors are still not performing full annual 
> audits that satisfy the CA/Browser Forum's Baseline Requirements.

Kathleen,

It seems there is some confusion.  The document presented appears to
be a Verified Accountant Letter (as defined in the EV Guidelines) and
can be used as part of the process to validate a request for an EV
certificate.  It is not an audit report and is not something normally
submitted to browsers.

I suspect someone simply attached or uploaded the wrong document.  It
makes no sense as part of an audit report.

Thanks,
Peter


Re: New undisclosed intermediates

2017-06-09 Thread Peter Bowen via dev-security-policy
On Fri, Jun 9, 2017 at 9:11 AM, Matthew Hardeman via
dev-security-policy  wrote:
> For these self-signed roots which have a certificate subject and key which 
> match to a different certificate which is in a trusted path (like an 
> intermediate to a trusted root), the concern is that the mere existence of 
> the certificate speaks to a signature produced by a private key which DOES 
> have the privileged status of extending the trust of the Web PKI.
>
> The question then is whether that signature was properly accounted for, 
> audited, etc.
>
> Additionally, if said root is in active use, are the issuances descending 
> from _that_ self-signed root being audited?  If not, that's a problem, 
> because those certificates could just be served up with the same-subject, 
> same-key trusted intermediate and chain to publicly trusted roots, all 
> without having been actually issued from the trusted intermediate.

I think there is some confusion here.  Certificates do not sign
certificates.  The existence of multiple self-signed certificates
with the same {subject, public key} combination does not imply there
are multiple issuers.  Further, audits do not cover root
_certificates_; they cover CA operations.  An audit will look at the
practices for signing certificates, but you cannot audit an object
itself.

Additionally, there is nothing that says a CA operator may not have
multiple issuers that have the same private key and use the same
issuer name.  The only requirement is that they avoid serial number
collision and that the CRL contain the union of both revocations.
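
Those two requirements - serial uniqueness across the shared issuer name, and
a CRL containing the union of both issuers' revocations - can be sketched in a
few lines of Python (function names are mine, serials modeled as integers):

```python
def serials_collide(serials_a, serials_b):
    """True if two issuers sharing the same name and key reused a serial number."""
    return bool(set(serials_a) & set(serials_b))

def merged_crl(revoked_a, revoked_b):
    """The CRL published under the shared issuer name must contain the
    union of the revocations performed by both issuers."""
    return sorted(set(revoked_a) | set(revoked_b))
```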

The mere existence of multiple self-signed certificates does not
change any of this.

Thanks,
Peter


Re: New undisclosed intermediates

2017-06-08 Thread Peter Bowen via dev-security-policy
On Thu, Jun 8, 2017 at 7:09 PM, Jonathan Rudenberg via
dev-security-policy  wrote:
>
>> On Jun 8, 2017, at 20:43, Ben Wilson via dev-security-policy 
>>  wrote:
>>
>> I don't believe that disclosure of root certificates is the responsibility
>> of a CA that has cross-certified a key.  For instance, the CCADB interface
>> talks in terms of "Intermediate CAs".  Root CAs are the responsibility of
>> browsers to upload.  I don't even have access to upload a "root"
>> certificate.
>
> I think the Mozilla Root Store policy is pretty clear on this point:
>
>> All certificates that are capable of being used to issue new certificates, 
>> and which directly or transitively chain to a certificate included in 
>> Mozilla’s CA Certificate Program, MUST be operated in accordance with this 
>> policy and MUST either be technically constrained or be publicly disclosed 
>> and audited.
>
> The self-signed certificates in the present set are all in scope for the 
> disclosure policy because they are capable of being used to issue new 
> certificates and chain to a certificate included in Mozilla’s CA Certificate 
> Program. From the perspective of the Mozilla root store they look like 
> intermediates because they can be used as intermediates in a valid path to a 
> root certificate trusted by Mozilla.

There are two important things about self-issued certificates:

1) They cannot expand the scope of what is allowed.
Cross-certificates can create alternative paths with different
restrictions.  Self-issued certificates do not provide alternative
paths that may have fewer constraints.

2) There is no way for a "parent" CA to prevent them from existing.
Even if the only cross-sign has a path length constraint of zero, the
"child" CA can issue self-issued certificates all day long.  If they
are self-signed there is no real value in disclosing them, given #1.

I think that it is reasonable to say that self-signed certificates are
out of scope.

Thanks,
Peter


Re: New undisclosed intermediates

2017-06-08 Thread Peter Bowen via dev-security-policy
On Thu, Jun 8, 2017 at 7:02 PM, Matthew Hardeman via
dev-security-policy  wrote:
> On Thursday, June 8, 2017 at 7:44:08 PM UTC-5, Ben Wilson wrote:
>> I don't believe that disclosure of root certificates is the responsibility
>> of a CA that has cross-certified a key.  For instance, the CCADB interface
>> talks in terms of "Intermediate CAs".  Root CAs are the responsibility of
>> browsers to upload.  I don't even have access to upload a "root"
>> certificate.
>
> At least in terms of intention of disclosing the intermediates, I don't think 
> you've made a fair assessment of the situation.
>
> The responsibility to disclose must fall upon the signer.  Not the one who 
> was signed.
>
> Cross-signature certificates are, effectively, intermediates granting an 
> alternate / enhanced validation path to trust for a distinct, separate 
> hierarchy.
>
> While IdenTrust signs Let's Encrypt's intermediates rather than a cross-sign 
> of their root, the principle is ultimately the same.  The browser programs 
> clearly wish to have those who are positioned to grant trust accountable for 
> any such trust that they grant.
>
> It's one question if the other root is already in the trust store, but 
> imagine it's some large enterprise root that's been running, perhaps under 
> appropriate audits but maybe not, cross-signed by a widely trusted program 
> participant.
>
> Perhaps the text needs clarifying, but I find it hard to believe that any of 
> the browser programs is of the opinion that you can cross-sign someone else's 
> root cert and not disclose that.

I don't think that is the question at hand.  I think Ben means
"self-signed" or "self-issued" when he says "root" certificate.

I agree with Ben that self-signed certificates should be out of scope.
Self-issued certificates that are not self-signed probably should be
in scope.

Thanks,
Peter


Re: Mozilla requirements of Symantec

2017-06-08 Thread Peter Bowen via dev-security-policy
On Thu, Jun 8, 2017 at 9:38 AM, Jakob Bohm via dev-security-policy
 wrote:
>
> As the linked proposal was worded (I am not on Blink mailing lists), it
> seemed obvious that the original timeline was:
>
>   Later: Once the new roots are generally accepted, Symantec can actually
> issue from the new SubCAs.
>
>   Long term: CRL and OCSP management for the managed SubCAs remain with the
> third party CAs.  This continues until the managed SubCAs expire or are
> revoked.

I don't see this last part in the proposal.  Instead the proposal
appears to specifically contemplate the SubCAs being transferred to
Symantec once the new roots are accepted in the required trust stores.

Additionally, there is no policy, as far as I know, that governs
transfer of non-Root CAs.  This is possibly a gap, but an existing
one.

Thanks,
Peter


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-02 Thread Peter Bowen via dev-security-policy
On Fri, Jun 2, 2017 at 8:12 AM, Ryan Sleevi wrote:
> On Fri, Jun 2, 2017 at 10:09 AM Jakob Bohm wrote:
>
>> On 02/06/2017 15:54, Ryan Sleevi wrote:
>> > On Fri, Jun 2, 2017 at 9:33 AM, Peter Bowen wrote:
>> >
>> >> Yes, my concern is that this could make SIGNED{ToBeSigned} considered
>> >> misissuance if ToBeSigned is not a TBSCertificate.  For example, if I
>> >> could sign an ASN.1 sequence which had the following syntax:
>> >>
>> >> TBSNotCertificate ::= SEQUENCE {
>> >> notACertificate  UTF8String,
>> >> COMPONENTS OF TBSCertificate
>> >> }
>> >>
>> >> Someone could argue that this is mis-issuance because the resulting
>> >> "certificate" is clearly corrupt, as it fails to start with an
>> >> INTEGER.  On the other hand, I think that this is clearly not
>> >> mis-issuance of a certificate, as there is no sane implementation that
>> >> would accept this as a certificate.
>> >>
>> >
>> > Would it be a misissuance of a certificate? Hard to argue, I think.
>> >
>> > Would it be a misuse of key? I would argue yes, unless the
>> > TBSNotCertificate is specified/accepted for use in the CA side (e.g. IETF
>> > WD, at the least).
>> >
>> >
>> > The general principle I was trying to capture was one of "Only sign these
>> > defined structures, and only do so in a manner conforming to their
>> > appropriate encoding, and only do so after validating all the necessary
>> > information. Anything else is 'misissuance' - of a certificate, a CRL, an
>> > OCSP response, or a Signed-Thingy"
>> >
>>
>> The thing is, there is still serious work involving the definition of
>> new CA-signed things, such as the recent (2017) paper on a super-
>> compressed CRL-equivalent format (available as a Firefox plugin).
>
>
> This does not rely on CA signatures - but it also perfectly demonstrates
> the point: these things should be getting widely reviewed before being
> implemented.
>>
>> Banning those by policy would be as bad as banning the first OCSP
>> responder because it was not yet on the old list {Certificate, CRL}.
>
>
> This argument presumes technical competence of CAs, for which collectively
> there is no demonstrable evidence.
>
> Functionally, this is identical to banning the "any other method" for
> domain validation. Yes, it allowed flexibility - but at the extreme cost to
> security.
>
> If there are new and compelling thing to sign, the community can review and
> the policy be updated. I cannot understand the argument against this basic
> security sanity check.
>
>
>>
>> Hence my suggested phrasing of "Anything that resembles a certificate"
>> (my actual wording a few posts up was more precise, of course).
>
>
> Yes, and I think that wording is insufficient and dangerous, despite your
> understandable goals, for the reasons I outlined.
>
> There is little objective technical or security reason to distinguish the
> thing that is signed - it should be a closed set (whitelists, not
> blacklists), just like algorithms, keysizes, or validation methods - due to
> the significant risk to security and stability.

Back in November 2016, I suggested that we try to create stricter
rules around CAs:
https://cabforum.org/pipermail/public/2016-November/008966.html and
https://groups.google.com/d/msg/mozilla.dev.security.policy/UqjD1Rff4pg/8sYO2uzNBwAJ.
It generated some discussion but I never pushed things forward.  Maybe
the following portion should be part of Mozilla policy?

Private Keys which are CA private keys must only be used to generate signatures
that meet the following requirements:

1. The signature must be over a SHA-256, SHA-384, or SHA-512 hash
2. The data being signed must be one of the following:
  * CA Certificate (a signed TBSCertificate, as defined in [RFC
5280](https://tools.ietf.org/html/rfc5280), with an
id-ce-basicConstraints extension with the cA component set to true)
  * End-entity Certificate (a signed TBSCertificate, as defined in
[RFC 5280](https://tools.ietf.org/html/rfc5280), that is not a CA
Certificate)
  * Certificate Revocation List (a signed TBSCertList as defined in
[RFC 5280](https://tools.ietf.org/html/rfc5280))
  * OCSP response (a signed ResponseData as defined in [RFC
6960](https://tools.ietf.org/html/rfc6960))
  * Precertificate (as defined in draft-ietf-trans-rfc6962-bis)
3. Data that does not meet the above requirements must not be signed
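
Enforced in software, the whitelist reduces to a closed-set check - refuse
unless both the structure and the hash are enumerated.  A sketch (the string
labels are my own shorthand for the structures listed above):

```python
ALLOWED_HASHES = {"SHA-256", "SHA-384", "SHA-512"}
ALLOWED_STRUCTURES = {
    "TBSCertificate",  # CA or end-entity certificate (RFC 5280)
    "TBSCertList",     # certificate revocation list (RFC 5280)
    "ResponseData",    # OCSP response (RFC 6960)
    "Precertificate",  # draft-ietf-trans-rfc6962-bis
}

def may_sign(structure, hash_alg):
    """Whitelist check: a CA key signs only enumerated structures over
    enumerated hashes; anything else is refused by default."""
    return structure in ALLOWED_STRUCTURES and hash_alg in ALLOWED_HASHES
```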

Thanks,
Peter


Re: Policy 2.5 Proposal: Make it clear that Mozilla policy has wider scope than the BRs

2017-06-02 Thread Peter Bowen via dev-security-policy
On Fri, Jun 2, 2017 at 8:50 AM, Gervase Markham via
dev-security-policy  wrote:
> On 02/06/17 12:24, Kurt Roeckx wrote:
>> Should that be "all certificates" instead of "all SSL certificates"?
>
> No; the Baseline Requirements apply only to SSL certificates.

Should Mozilla include a clear definition of "SSL certificates" in the
policy?  And should it be based on technical attributes rather than
intent of the issuer?

Thanks,
Peter


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-02 Thread Peter Bowen via dev-security-policy
On Fri, Jun 2, 2017 at 4:27 AM, Ryan Sleevi <r...@sleevi.com> wrote:
>
>
> On Thu, Jun 1, 2017 at 10:19 PM, Peter Bowen via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>>
>> On Thu, Jun 1, 2017 at 5:49 AM, Ryan Sleevi via dev-security-policy
>> > So I would definitely encourage that improper application of the
>> > protocols
>> > and data formats constitutes misissuance, as they directly affect
>> > interoperability and indirectly affect security :)
>>
>> I think the policy needs to be carefully thought out here, as there is
>> no limitation to what can be signed with the key used to sign
>> certificates.   What is a malformed certificate to one person might be
>> a valid document to someone else.  Maybe you could disallow signing
>> things that are not valid ASN.1 DER?
>
>
> I suspect you're raising a concern since a CA can use a SIGNED{ToBeSigned}
> construct from RFC 6025[1] to express a signature over a structure defined
> by "ToBeSigned", and wanting to distinguish that, for example, a certificate
> is not a CRL, as they're distinguished from their ToBeSigned construct. I
> would argue here that any signatures produced / structures provided should
> have an appropriate protocol or data format definition to justify the
> application of that signature, and that it would be misissuance in the
> absence of that support. Logically, I'm suggesting it's misissuance to, for
> example, expose a prehash signing oracle using a CA key, or to sign
> arbitrary data if it's not encoded 'like' a certificate (without having an
> equivalent appropriate standard defining what the CA is signing)

Yes, my concern is that this could make SIGNED{ToBeSigned} considered
misissuance if ToBeSigned is not a TBSCertificate.  For example, if I
could sign an ASN.1 sequence which had the following syntax:

TBSNotCertificate ::= SEQUENCE {
   notACertificate  UTF8String,
   COMPONENTS OF TBSCertificate
}

Someone could argue that this is mis-issuance because the resulting
"certificate" is clearly corrupt, as it fails to start with an
INTEGER.  On the other hand, I think that this is clearly not
mis-issuance of a certificate, as there is no sane implementation that
would accept this as a certificate.
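
The "fails to start with an INTEGER" observation can be checked from the DER
tags alone.  A rough stdlib-only sketch (helper names are mine; assumes
definite-length DER with only basic error handling):

```python
INTEGER_TAG = 0x02      # ASN.1 universal tag for INTEGER
UTF8STRING_TAG = 0x0C   # ASN.1 universal tag for UTF8String
SEQUENCE_TAG = 0x30     # constructed SEQUENCE
CONTEXT_0 = 0xA0        # [0] EXPLICIT, used for the TBSCertificate version

def first_inner_tag(der: bytes) -> int:
    """Return the tag of the first element inside an outer DER SEQUENCE."""
    if der[0] != SEQUENCE_TAG:
        raise ValueError("not a SEQUENCE")
    i = 1
    if der[i] & 0x80:                  # long-form length: skip its octets
        i += 1 + (der[i] & 0x7F)
    else:                              # short-form length: one octet
        i += 1
    return der[i]

def looks_like_tbs_certificate(der: bytes) -> bool:
    """A TBSCertificate begins with [0] version or the serialNumber INTEGER;
    the TBSNotCertificate above would begin with a UTF8String instead."""
    return first_inner_tag(der) in (CONTEXT_0, INTEGER_TAG)
```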

Thanks,
Peter


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-01 Thread Peter Bowen via dev-security-policy
On Thu, Jun 1, 2017 at 5:49 AM, Ryan Sleevi via dev-security-policy
 wrote:
> On Thu, Jun 1, 2017 at 4:35 AM, Gervase Markham via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On 31/05/17 18:02, Matthew Hardeman wrote:
>> > Perhaps some reference to technologically incorrect syntax (i.e. an
>> incorrectly encoded certificate) being a mis-issuance?
>>
>> Well, if it's so badly encoded Firefox doesn't recognise it, we don't
>> care too much (apart from how it speaks to incompetence). If Firefox
>> does recognise it, then I'm not sure "misissuance" is the right word if
>> all the data is correct.
>>
>
> I would encourage you to reconsider this, or perhaps I've misunderstood
> your position. To the extent that Mozilla's mission includes "The
> effectiveness of the Internet as a public resource depends upon
> interoperability (protocols, data formats, content) ", the
> well-formedness and encoding directly affects Mozilla users (sites working
> in Vendors A, B, C but not Mozilla) and the broader ecosystem (sites
> Mozilla users are protected from that vendors A, B, C are not).
>
> I think considering this in the context of "CA problematic practices" may
> help make this clearer - they are all things that speak to either
> incompetence or confusion (and a generous dose of Hanlon's Razor) - but
> their compatibility issues presented both complexity and risk to Mozilla
> users.
>
> So I would definitely encourage that improper application of the protocols
> and data formats constitutes misissuance, as they directly affect
> interoperability and indirectly affect security :)

I think the policy needs to be carefully thought out here, as there is
no limitation to what can be signed with the key used to sign
certificates.   What is a malformed certificate to one person might be
a valid document to someone else.  Maybe you could disallow signing
things that are not valid ASN.1 DER?

Thanks,
Peter


Re: Google Plan for Symantec posted

2017-05-24 Thread Peter Bowen via dev-security-policy
On Mon, May 22, 2017 at 9:33 AM, Gervase Markham via
dev-security-policy  wrote:
> On 19/05/17 21:04, Kathleen Wilson wrote:
>> - What validity periods should be allowed for SSL certs being issued
>> in the old PKI (until the new PKI is ready)?
>
> Symantec is required only to be issuing in the new PKI by 2017-08-08 -
> in around ten weeks time. In the mean time, there is no restriction
> beyond the normal one on the length they can issue. This makes sense,
> because if certs issued yesterday will expire 39 months from yesterday,
> then certs issued in 10 weeks will only expire 10 weeks after that - not
> much difference.

Can you clarify the meaning of "new PKI"?  I can see two reasonable
interpretations:

1) The systems and processes used to issue end-entity certificates
(server authentication and email protection) must be distinct from the
existing systems.  This implies that a new set of subordinate CAs
under the existing Symantec-owned roots would meet the requirements.
These new subordinate CAs could be owned and operated by either
Symantec or owned and operated by a third party who has their own
WebTrust audit.

2) The new PKI includes both new offline CAs that meet the
requirements to be Root CAs and new subordinate CAs that issue
end-entity certificates.  The new root CAs could be cross-signed by
existing CAs (regardless of owner), but the new subordinate CAs must
not be directly signed by any Symantec-owned root CA that currently
exists.

Can you also clarify the expectations with regards to the existing
roots?  You say "only to be issuing in the new PKI".  Does Mozilla
intend to require that all CAs that chain to a specific set of roots
cease issuing all server authentication and email protection after a
certain date, unless they are also under one of the "new" roots?  If
so, will issuance be allowed from CAs that chain to the "old" roots
once certain actions take place (e.g. removed from the trust stores in
all supported versions of Mozilla products)?

>> - I'm not sold on the idea of requiring Symantec to use third-party
>> CAs to perform validation/issuance on Symantec's behalf. The most
>> serious concerns that I have with Symantec's old PKI is with their
>> third-party subCAs and third-party RAs. I don't have particular
>> concern about Symantec doing the validation/issuance in-house. So, I
>> think it would be better/safer for Symantec to staff up to do the
>> validation/re-validation in-house rather than using third parties. If
>> the concern is about regaining trust, then add auditing to this.
>
> Of course, if we don't require something but Google do (or vice versa)
> then Symantec will need to do it anyway. But I will investigate in
> discussions whether some scheme like this might be acceptable to both
> the other two sides and might lead to a quicker migration timetable to
> the new PKI.

Google has proposed adding some indication to certificates of whether
the information validation was performed by Symantec or another party.
If Mozilla does not require a third party to perform validation, would
it make sense to distinguish validations performed by the "new" RA
from those performed by the "old" RA, or validations performed within
the scope of Symantec's audits from those performed within the scope
of another audit?

Thanks,
Peter


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Mon, May 22, 2017 at 12:21 PM, Ryan Sleevi via dev-security-policy
 wrote:
> Consider, on one extreme, if every of the Top 1 sites used TCSCs to
> issue their leaves. A policy, such as deprecating SHA-1, would be
> substantially harder, as now there's a communication overhead of O(1 +
> every root CA) rather than O(# of root store CAs).

Why do you need to add 10,000 communication points?  A TCSC is, by
definition, a subordinate CA.  The WebPKI is not a single PKI; it is a
set of parallel PKIs which do not share a common anchor.  The browser-
to-CA relationship is between the browser vendor and each root CA.
This is O(root CA operators), not even O(every root CA).  If a root CA
issues 10,000 subordinate CAs, then it had better have a compliance
plan in place to provide assurance that all of them will do the
necessary things.

> It may be that the benefits of TCSCs are worth such risk - after all, the
> Web Platform and the evolution of its related specs (URL, Fetch, HTML)
> deals with this problem routinely. But it's also worth noting the
> incredible difficulty and friction of deprecating insecure, dangerous APIs
> - and the difficulty in SHA-1 (or commonNames) for "enterprise" PKIs - and
> as such, may represent a significant slowdown in progress, and a
> corresponding significant increase in user-exposed risk.
>
> This is why it may be more useful to take a principled approach, and to, on
> a case by case basis, evaluate the risk of reducing requirements for TCSCs
> (which are already required to abide by the BRs, and simply exempted from
> auditing requirements - and this is independent of any Mozilla
> dispensations), both in the short-term and in the "If every site used this"
> long-term.

It seems this discussion is painting TCSCs with a broad brush.  I
don't see anything in this discussion that makes the TCSC relationship
any different from any other subordinate CA.  Both can be operated
either by the same organization that operates the root CA or an
unrelated organization.  The Apple and Google subordinate CAs are
clearly not TCSCs but raise the same concerns.  If there were 10,000
subordinates all with WebTrust audits, you would have the exact same
problem.

Thanks,
Peter


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Mon, May 22, 2017 at 1:02 PM, Matthew Hardeman via
dev-security-policy  wrote:
> On Monday, May 22, 2017 at 2:43:14 PM UTC-5, Peter Bowen wrote:
>
>>
>> I would say that any CA-certificate signed by a CA that does not have
>> name constraints and not constrained to things outside the set
>> {id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
>> This would mean that the top level of all constrained hierarchies is
>> disclosed but subordinate CAs further down the tree and EE certs are
>> not.  I think that this is a reasonable trade off of privacy vs
>> disclosure.
>
> I would agree that those you've identified as "should be disclosed" 
> definitely should be disclosed.  I am concerned, however, that SOME of the 
> remaining certificates beyond those should probably also be disclosed.  For 
> safety sake, it may be better to start with an assumption that all CA and 
> SubCA certificates require full disclosure to CCADB and then define 
> particular specific rule sets for those which don't require that level.

Right now the list excludes anything with a certain set of name
constraints and anything that has EKU constraints outside the in-scope
set.  I'm suggesting that the first "layer" of CA certs always should
be disclosed.
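As a rough sketch of the rule quoted above, the predicate can be written over a simplified model of the issuing CA (the dict fields here are illustrative, not CCADB schema):

```python
# In-scope EKUs from the proposed rule; a CA constrained entirely
# outside this set takes its subtree out of disclosure scope.
IN_SCOPE_EKUS = {"serverAuth", "emailProtection", "anyExtendedKeyUsage"}

def must_disclose(issuer):
    """Return True if CA certs signed by `issuer` should be disclosed.

    issuer: {'name_constrained': bool,
             'ekus': set of EKU names, or None if unconstrained}.
    Disclosure hinges on the *issuer's* constraints, so the top level of
    every constrained hierarchy is disclosed (its issuer, the root, is
    unconstrained) while CA certs further down the tree are not.
    """
    if issuer["name_constrained"]:
        return False  # subtree under a name-constrained CA is exempt
    ekus = issuer["ekus"]
    if ekus is not None and ekus.isdisjoint(IN_SCOPE_EKUS):
        return False  # issuer constrained entirely outside the in-scope set
    return True
```

Under this model a root (unconstrained) always triggers disclosure of the CA certs it signs, which is exactly the "first layer" behavior described above.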

Thanks,
Peter


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-22 Thread Peter Bowen via dev-security-policy
On Fri, May 19, 2017 at 6:47 AM, Gervase Markham via
dev-security-policy  wrote:
> We need to have a discussion about the appropriate scope for:
>
> 1) the applicability of Mozilla's root policy
> 2) required disclosure in the CCADB
>
> The two questions are related, with 2) obviously being a subset of 1).
> It's also possible we might decide that for some certificates, some
> subset of the Mozilla policy applies, but not all of it.
>
> I'm not even sure how best to frame this discussion, so let's have a go
> from this angle, and if it runs into the weeds, we can try again another
> way.
>
> The goal of scoping the Mozilla policy is, to my mind, to have Mozilla
> policy sufficiently broadly applicable that it covers all
> publicly-trusted certs and also doesn't leave unregulated a sufficiently
> large number of untrusted certs inside publicly-trusted hierarchies that
> it will hold back forward progress on standards and security.
>
> The goal of CCADB disclosure is to see what's going on inside the WebPKI
> in sufficient detail that we don't miss important things. Yes, that's vague.
>
> Here follow a list of scenarios for certificate issuance. Which of these
> situations should be in full Mozilla policy scope, which should be in
> partial scope (if any), and which of those should require CCADB
> disclosure? Are there scenarios I've missed?

You seem to be assuming each of A-I has a path length constraint of
0, as your scenarios don't include CA-certs below each category.
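That assumption rests on RFC 5280's pathLenConstraint: a CA asserting pathLenConstraint=0 can issue EE certs but no further CA certs. A minimal sketch over an illustrative chain model (field names are my own):

```python
def path_len_ok(ca_chain):
    """ca_chain: CA certs ordered root-first, each modeled as a dict with
    'path_len' (an int, or None when no pathLenConstraint is asserted).
    Per RFC 5280, a CA asserting pathLenConstraint=n may have at most n
    further CA certificates below it in the chain."""
    for i, ca in enumerate(ca_chain):
        below = len(ca_chain) - i - 1  # CA certs beneath this one
        if ca["path_len"] is not None and below > ca["path_len"]:
            return False
    return True
```

So a scenario with path_len=0 admits no CA certs underneath it; drop that constraint and every category above gains a sub-tree of further CA certs to reason about.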

> A) Unconstrained intermediate
>   AA) EE below
> B) Intermediate constrained to id-kp-serverAuth
>   BB) EE below
> C) Intermediate constrained to id-kp-emailProtection
>   CC) EE below
> D) Intermediate constrained to anyEKU
>   DD) EE below
> E) Intermediate usage-constrained some other way
>   EE) EE below
> F) Intermediate name-constrained (dnsName/ipAddress)
>   FF) EE below
> G) Intermediate name-constrained (rfc822Name)
>   GG) EE below
> H) Intermediate name-constrained (srvName)
>   HH) EE below
> I) Intermediate name-constrained some other way
>   II) EE below
>
> If a certificate were to only be partially in scope, one could imagine
> it being exempt from one or more of the following sections of the
> Mozilla policy:
>
> * BR Compliance (2.3)
> * Audit (3.1) and auditors (3.2)
> * CP and CPS (3.3)
> * CCADB (4)
> * Revocation (6)

I would say that any CA-certificate signed by a CA that does not have
name constraints and not constrained to things outside the set
{id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
This would mean that the top level of all constrained hierarchies is
disclosed but subordinate CAs further down the tree and EE certs are
not.  I think that this is a reasonable trade off of privacy vs
disclosure.

Thanks,
Peter


Re: Symantec: Update

2017-05-19 Thread Peter Bowen via dev-security-policy
On Fri, May 19, 2017 at 7:25 AM, Gervase Markham via
dev-security-policy  wrote:
> On 15/05/17 21:06, Michael Casadevall wrote:
>
>>> Are there any RA's left for Symantec?
>>
>> TBH, I'm not sure. I think Gervase asked for clarification on this
>> point, but its hard to keep track of who could issue as an RA. I know
>> quite a few got killed, but I'm not sure if there are any other subCAs
>> based off re-reading posts in this thread.
>
> Symantec say they have closed their RA program, only Apple and Google
> are left in their GeoRoot program, and they have no other programs which
> allow third parties to have issuance capability.

This is not accurate.  They have indicated that the SSP customers have
some level of issuance capability.

Thanks,
Peter


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Bowen via dev-security-policy
On Tue, May 16, 2017 at 10:52 AM, Jakob Bohm via dev-security-policy
 wrote:
> On 16/05/2017 19:36, Peter Bowen wrote:
>>
>> My experience is that Mozilla is very open to taking patches and will
>> help contributors get things into acceptable form, so I'm sure they
>> would be happy to take patches if there is demand for such.  It is
>> fairly important that someone who is going to use the attributes put
>> together the patch, otherwise it may prove to be useless.  For
>> example, I could easily create a patch that adds a CKA_TRUST_FILTER
>> attribute that is designed to be fed into a case statement to indicate
>> the filter to be applied.  Based on the code, it looks like I probably
>> need a "cnnic" case, a "wosign" case, and a "globalsignr2" case.
>> This meets my needs, but it might not meet your needs.
>>
>
> Ok, can you point me to any "graduated trust" actually present in
> certdata.txt ?

See the CKA_TRUST_SERVER_AUTH, CKA_TRUST_EMAIL_PROTECTION,
CKA_TRUST_CODE_SIGNING, and CKA_TRUST_STEP_UP_APPROVED attributes in
CKO_NSS_TRUST class objects.  They all represent non-binary trust of
roots, similar to that contained in the OpenSSL X509_AUX structure
mentioned much earlier in the thread.
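For illustration, the trust portion of such an object in certdata.txt looks roughly like this (the attribute names and CKT_NSS_* values are real; this is an abridged excerpt, omitting CKA_LABEL, hash attributes, and the paired CKO_CERTIFICATE object). The example shows a root trusted to issue for server auth and e-mail, but not for code signing:

```
CKA_CLASS CK_OBJECT_CLASS CKO_NSS_TRUST
CKA_TRUST_SERVER_AUTH CK_TRUST CKT_NSS_TRUSTED_DELEGATOR
CKA_TRUST_EMAIL_PROTECTION CK_TRUST CKT_NSS_TRUSTED_DELEGATOR
CKA_TRUST_CODE_SIGNING CK_TRUST CKT_NSS_MUST_VERIFY_TRUST
CKA_TRUST_STEP_UP_APPROVED CK_BBOOL CK_FALSE
```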

Thanks,
Peter


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Bowen via dev-security-policy
On Tue, May 16, 2017 at 10:04 AM, Jakob Bohm via dev-security-policy
 wrote:
> On 16/05/2017 18:10, Peter Bowen wrote:
>>
>> On Tue, May 16, 2017 at 9:00 AM, Jakob Bohm via dev-security-policy
>>  wrote:
>>>
>>> Your post above is the first response actually saying what is wrong
>>> with the Microsoft format and the first post saying all the
>>> restrictions are actually in the certdata.txt file, and not just in the
>>> binary file used by the the NSS library.
>>
>>
>> What "binary file" are you referring to?  NSS is distributed as source
>> and I'm unaware of any binary file used by the NSS library for trust
>> decisions.
>>
>
> Source code for Mozilla products presumably includes some binary files
> (such as PNG files), so why not a binary database file that becomes
> that data that end users can view (and partially edit) in the Mozilla
> product dialogs.  Existence of a file named "generate_certdata.py",
> which is not easily grokked, also confused me into thinking that
> certdata.txt was some kind of extracted subset.
>
> Anyway, having now looked closer at the file contents (which does look
> like computer output), I have been unable to find a line that actually
> expresses any of the already established "gradual trusts".
>
> Could you please point out where in certdata.txt the following are
> expressed, as I couldn't find it in a quick scan:
>
> 1. The date restrictions on WoSign-issued certificates.
>
> 2. The EV trust bit for some CAs.

These are not included in certdata.txt for the reasons described
earlier -- they are application-only things, not Mozilla platform
things.  I know it is non-obvious, but there are two parts of
processing certificates in many applications:

1) The certificate is passed to the platform library (along with some
other data, like name to validate) and a result is returned.
2) Then the application makes further decisions.

This is not only true for Chrome but also Firefox.  EV information is
decided by the application.  See
https://dxr.mozilla.org/mozilla-central/source/security/certverifier/ExtendedValidation.cpp
for information about deciding on EV.  See
https://dxr.mozilla.org/mozilla-central/source/security/certverifier/NSSCertDBTrustDomain.cpp#898
for additional checks (outside NSS) added by Firefox.
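A schematic of that two-step split (the names, table entry, and return values below are illustrative, not Firefox's actual code; step 1 is stubbed out as a result record):

```python
from collections import namedtuple

# Result of step 1: the platform library's chain validation.
VerifyResult = namedtuple("VerifyResult", "ok root_fingerprint policy_oid")

# Step 2 data: EV treatment keyed on (root fingerprint, policy OID),
# analogous in spirit to the table in ExtendedValidation.cpp.
# The fingerprint here is a placeholder, not a real root.
EV_POLICY_TABLE = {("aa:bb:cc", "2.23.140.1.1")}

def classify(result):
    """Application-side decision, taken after platform validation."""
    if not result.ok:
        return "error"
    if (result.root_fingerprint, result.policy_oid) in EV_POLICY_TABLE:
        return "ev"
    return "dv-or-ov"
```

The point is that the platform returns one validity verdict, and the EV distinction is layered on by the application afterward.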

This could be moved into NSS, but there hasn't been demand to do so at
this point.  It could also be added as unused attributes in
certdata.txt (which is the master), but no one has volunteered to
extend this to support the additional info and add the necessary
tests to ensure that it doesn't go stale.

My experience is that Mozilla is very open to taking patches and will
help contributors get things into acceptable form, so I'm sure they
would be happy to take patches if there is demand for such.  It is
fairly important that someone who is going to use the attributes put
together the patch, otherwise it may prove to be useless.  For
example, I could easily create a patch that adds a CKA_TRUST_FILTER
attribute that is designed to be fed into a case statement to indicate
the filter to be applied.  Based on the code, it looks like I probably
need a "cnnic" case, a "wosign" case, and a "globalsignr2" case.
This meets my needs, but it might not meet your needs.
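Sketched out, such a hypothetical CKA_TRUST_FILTER attribute might drive a dispatch like this (the attribute, dict fields, and cutoff date are illustrative assumptions; the real per-CA logic lives in the Firefox sources linked earlier):

```python
import datetime

# Illustrative cutoff: Mozilla distrusted WoSign-issued certs with a
# notBefore after 2016-10-21; treat the exact date as an assumption here.
WOSIGN_CUTOFF = datetime.date(2016, 10, 21)

def apply_trust_filter(filter_name, cert):
    """cert is a simplified dict, e.g. {'not_before': date, ...}.
    Returns True if the cert survives the named filter."""
    if filter_name == "wosign":
        return cert["not_before"] <= WOSIGN_CUTOFF
    if filter_name == "cnnic":
        # CNNIC trust was limited to a whitelist of existing certs.
        return cert.get("whitelisted", False)
    if filter_name == "globalsignr2":
        return True  # placeholder for whatever per-root check applies
    return True  # no filter: fall through to normal trust processing
```

This is exactly the sense in which such an attribute would meet one consumer's needs but not necessarily another's: the case labels only help callers whose filters match these three.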

Thanks,
Peter

