Re: CA Communication: Underscores in dNSNames

2018-12-07 Thread Ryan Sleevi via dev-security-policy
On Fri, Dec 7, 2018 at 4:35 PM Jeremy Rowley 
wrote:

> I only ask because telling people to go back to the CA and work something
> out isn’t a great answer when the retort is that the CA will be distrusted
> if they don’t. Either the customer doesn’t replace all their certs and they
> are made non-functional by revocation or the certs are distrusted because
> the CA isn’t operational anymore. Telling people to go have the CA cover
> the risk when those are the two options seems like we’re avoiding the
> public discussion.
>

Why not? It's fundamentally the CA taking on the risk when deciding whether
or not to meet the requirements of the programs that they participate in -
whether technical, policy, or contractual. If a customer wanted to ask a CA
to break a contractual requirement, isn't it ultimately the CA they should
be asking?

I think we're in agreement that, regardless, the CA MUST receive a
qualified audit. There's seemingly no disputing that if they fail to revoke,
it should be a qualification, and there's even an argument that the
issuance itself should have been a qualification (and result in their
auditor re-evaluating these material facts, such as under AT-C 205.A54-A57
regarding revisions to opinions in light of subsequent events and
application guidance).

It's unclear what you expect to result from a public discussion, so perhaps
it would be helpful if you could clarify. It sounds like you're looking for
a blanket rubric by which to ignore the requirements of the Baseline
Requirements, so that the rubric can then be applied to customer requests to
determine whether or not these individual customers justify violating the
BRs. A good CA would acknowledge the rubric is, in fact, zero - zero
violations is the "justified" case, and everything else is the risk case.
Alternatively, it may be a desire that a rubric should exist, a priori to
any violations, so as to help determine whether a violation is justified,
even when the stated goal is zero violations.

If you look at public discussions, think about what the goals are of the
Incident Response template, which is about understanding how the CA's
processes failed. If you were to imagine intentionally violating the BRs,
knowingly, it seems like an incident response template would be far more
damning for that CA's operational competencies. That's not to suggest to
CAs that it's better to ask forgiveness than permission - a CA that ignored
changes in the BRs, clearly communicated (as Wayne mentioned in the
original post), also seems likely to have an incident report template that
is quite damning.

Using the experiences from the SHA-1 exception process, the only formalized
exception process, you can see that even in those limited cases, there was
significant skepticism towards the reasons. I would think that any proposal
for exceptions should minimally achieve that degree of transparency, but would be
equally damning if those justifications were the same as those used for
SHA-1 - as the SHA-1 exception process "should" have revealed that those
are not, in fact, seen as legitimate.

If it helps, imagine this as "How would this incident be received if it
became part of a Wiki page that listed a series of ongoing violations?" I
think any CA contemplating not meeting the required transition date should
be asking "How many issues have I (or my sub-CAs) had under
https://wiki.mozilla.org/CA/Incident_Dashboard and
https://wiki.mozilla.org/CA/Closed_Incidents ?" And there are definitely
some CAs that do not look too great in that light.


Re: CA Communication: Underscores in dNSNames

2018-12-07 Thread Ryan Sleevi via dev-security-policy
On Fri, Dec 7, 2018 at 2:00 PM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This isn't a CA-issue because the risk associated with non-compliance isn't
> defined yet.


https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/

""Mozilla MAY, at its sole discretion, decide to disable (partially or
fully) or remove a certificate at any time and for any reason. This may
happen immediately or on a planned future date. Mozilla will disable or
remove a certificate if the CA demonstrates ongoing or egregious practices
that do not maintain the expected level of service or that do not comply
with the requirements of this policy.""

Sounds like the risk is well-defined and documented.


> From what I've heard here, the risk is distrust or loss of EV
> indicators, which is distrust-like. That's a pretty big thing to push back
> on the CA for a non-security issue.  Thus, I think the risk of missing the
> underscore revocation date needs to be discussed here so everyone,
> including
> website operators and the relying parties, know first-hand what the risks
> of
> the CA missing the deadline are.


Any and every qualification or failure to abide by the program requirements
comes with the risk of sanction, up to and including distrust.

It sounds like you're looking for a way for CAs to make a cost/benefit
analysis as to whether it's more beneficial to them to violate
requirements, by having a clearer guarantee of what it will cost them if they
intentionally do so, versus what they may gain from their Subscribers. That
doesn't really seem aligned with the incentives of the ecosystem or the
relying parties, since CAs (and their Subscribers) are not able to, on a
purely technical level, evaluate the cost or impact to Relying Parties,
since Relying Parties include every possible entity that trusts that root.


> If the risk is that there is a note on the
> audit, that is an acceptable risk. If the risk is a loss of the
> root...probably less so.  Pushing the question back to the CA without
> better
> discussion by the browsers makes finding a solution or understanding the
> risks impossible.


While I think it's positive and encouraging to see CAs acknowledge that
their audits exist to disclose non-conformities/qualifications, I don't
think it should be seen as legitimizing or accepting that intentional
non-conformities/qualifications are desirable. A well-run CA should strive
to have zero qualifications, findings, or non-conformities - not because
they were able to convince their auditor that they weren't issues / were
minor, but because they operated above and beyond reproach, and there were
literally no issues. Anything short of that is an indicator that a CA is
failing in its role as a trusted steward, and repeated failures make it
reasonable to discuss sanction or distrust. CAs (and their sub-CAs) with a
pattern of incidents on the incident dashboard (
https://wiki.mozilla.org/CA/Incident_Dashboard and
https://wiki.mozilla.org/CA/Closed_Incidents ) probably have the greatest
risk of sanction, given the pre-existing patterns of non-compliance.


Re: Incident report Certum CA: Corrupted certificates

2018-12-05 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 5, 2018 at 7:53 AM Wojciech Trapczyński 
wrote:

> Ryan, thank you for your comment. The answers to your questions below:
>

Again, thank you for filing a good post-mortem.

I want to call out a number of positive things here rather explicitly, so
that it hopefully can serve as a future illustration from CAs:
* The timeline included the times, as requested and required, which help
provide a picture as to how responsive the CA is
* It includes the details about the steps the CA actively took during the
investigation (e.g. within 1 hour, 50 minutes, the initial cause had been
identified)
* It demonstrates an approach that triages (10.11.2018 12:00), mitigates
(10.11.2018 18:00), and then further investigates (11.11.2018 07:30) the
holistic system. Short-term steps are taken (11.11.2018 19:30), followed by
longer term steps (19.11.2018)
* It provides rather detailed data about the problem, how the problem was
triggered, the scope of the impact, why it was possible, and what steps are
being taken.

That said, I can't say positive things without highlighting opportunities
for improvement:
* It appears you were aware of the issue beginning on 10.11.2018, but the
notification to the community was not until 03.12.2018 - that's a
significant gap. I see Wayne already raised it in
https://bugzilla.mozilla.org/show_bug.cgi?id=1511459#c1 and that has been
responded to in https://bugzilla.mozilla.org/show_bug.cgi?id=1511459#c2
* It appears, based on that bug and related discussion (
https://bugzilla.mozilla.org/show_bug.cgi?id=1511459#c2 ), that from
10.11.2018 01:05 (UTC±00:00) until 14.10.2018 07:35 (UTC±00:00) an invalid
CRL was being served. That seems relevant for the timeline, as it speaks to
the period of CRL non-compliance. In this regard, I think we're talking
about two different BR "violations" that share the same incident root cause
- a set of invalid certificates being published and a set of invalid CRLs
being published. Of these two, the latter is far more impactful than the
former, but it's unclear based on the report if the report was being made
for the former (certificates) rather than the latter (CRLs)

Beyond that, a few selected remarks below.


> There are two things here: how we monitor our infrastructure and how our
> software operates.
>
> Our system for issuing and managing certificates and CRLs has a module
> responsible for monitoring any issue which may occur during generation of a
> certificate or CRL. The main task of this module is to inform us that
> "something went wrong" during the process of issuing a certificate or CRL.
> In this case we got a notification that several CRLs had not been
> published. This monitoring did not inform us about the corrupted signature
> in one CRL. It only indicated that there were some problems with CRLs. To
> identify the source of the problem, human action was required.
>

Based on your timeline, it appears the issue was introduced at 10.11.2018
01:05 and not alerted on until 10.11.2018 10:10. Is that correct? If so,
can you speak to why there was a delay between the issue and notification, and what
the target delay is with the improvements you're making? Understanding that
alerting is finding a balance between signal and noise, it does seem like a
rather large gap. It may be that this gap is reflective of 'on-call' or
'business hours', it may be a threshold in the number of failures, it may
have been some other cause, etc. Understanding a bit more can help here.


> Additionally, we have the main monitoring system with thousands of tests
> of the whole infrastructure. For example, in the case of CRLs we have
> tests like check HTTP status code, check downloading time, check
> NextUpdate date and others. After the incident we have added tests which
> allow us to quickly detect CRLs published with an invalid signature (we are
> using a simple OpenSSL-based script).
>

So, this is an example of a good response. It includes a statement that
requires trust ("we have ... thousands of tests"), but then provides
examples that demonstrate an understanding and awareness of the potential
issues.

Separate from the incident report, I think publishing or providing details
about these tests could be a huge benefit to the community, with an ideal
outcome of codifying them all as requirements that ALL CAs should perform.
This is where we go from "minimum required" to "best practice", and it
sounds like y'all are operating at a level that seeks to capture the spirit
and intent, and not just the letter, and that's the kind of ideal
requirement to codify and capture.
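
To illustrate what codifying such a check might look like, here is a minimal
sketch of a CRL health check - this is not Certum's actual tooling, and the
URL, file path, and use of Python's "cryptography" package are illustrative
assumptions:

    # Minimal sketch of a CRL health check: verify the CRL signature against
    # the issuing CA certificate and confirm nextUpdate has not passed.
    from datetime import datetime
    from urllib.request import urlopen

    from cryptography import x509

    CRL_URL = "http://crl.example.com/example.crl"   # illustrative URL
    ISSUER_CERT_PATH = "issuing-ca.pem"              # illustrative path

    def check_crl(crl_url, issuer_cert_path):
        problems = []
        raw = urlopen(crl_url, timeout=30).read()
        try:
            crl = x509.load_der_x509_crl(raw)        # CRLs are commonly DER
        except ValueError:
            crl = x509.load_pem_x509_crl(raw)        # fall back to PEM
        with open(issuer_cert_path, "rb") as f:
            issuer = x509.load_pem_x509_certificate(f.read())
        if not crl.is_signature_valid(issuer.public_key()):
            problems.append("CRL signature does not verify against issuer key")
        if crl.next_update < datetime.utcnow():
            problems.append("CRL nextUpdate is in the past")
        return problems

    for problem in check_crl(CRL_URL, ISSUER_CERT_PATH):
        print("ALERT:", problem)

Alerting on the output of something like this, alongside the HTTP status and
download-time checks described above, is exactly the sort of check that seems
worth capturing as a requirement for all CAs.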


> As I described in the incident report we also have improved the part of
> the signing module responsible for verification of signature, because at
> the time of failure it did not work properly.
>

This is an area where I think more detail could help. Understanding what
caused it to "not work properly" seems useful in understanding the issues
and how to mitigate. For example, it could be that "it did not work
properly" 

Re: Incident report Certum CA: Corrupted certificates

2018-12-04 Thread Ryan Sleevi via dev-security-policy
On Tue, Dec 4, 2018 at 2:08 PM Kurt Roeckx  wrote:

> He explained before that the module that generated the corrupt
> signature for the CRL was in a weird state after that and all
> the newly issued certificates signed by that module also had
> corrupt signatures.
>

Ah! Thanks, I misparsed that. I agree, it does seem to be clearly addressed
:)


Re: CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]

2018-12-04 Thread Ryan Sleevi via dev-security-policy
On Tue, Dec 4, 2018 at 1:29 PM Dimitris Zacharopoulos via
dev-security-policy  wrote:

> I tried to highlight in this discussion that there were real cases in
> m.d.s.p. where the revocation was delayed in practice. However, the
> circumstances of these extended revocations remain unclear. Yet, the
> community didn't ask for more details.


The expectation is that there will already be a discussion about this. In
the worst case, this discussion will be delayed until the audit
qualifications come in - the absence of audit qualifications in such
situations would be incredibly damning. It sounds like you believe this is
not, in fact, a requirement today, and it may be possible to clarify that
already.

Do you think the language in
https://wiki.mozilla.org/CA/Responding_To_An_Incident is sufficient, or do
you feel it's ambiguous as to whether or not a failure to abide by the BRs
constitutes "an incident"?

As to the second half - the community not asking for details - as a member of
this community, you can and should feel empowered to ask for the details you
feel are relevant. Do you believe that something about the handling of this
makes it inappropriate for you to ask the questions you believe are relevant?


> Seeing this repeated, was the
> reason I suggested that more disclosure is necessary for CAs that
> require more time to revoke than the BRs require.


It's not at all clear how this conclusion is linked to the remarks you make
above. Above, your remark seems to focus on CAs not disclosing in a timely
fashion, nor disclosing the circumstances. The former is a violation of the
existing requirements; the latter is something you can and should inquire
about if you feel it is relevant. It's unclear what "more" disclosure would
add beyond the existing disclosure, and certainly, the framing used in this
statement implies that the issue is time, but seemingly acknowledges we
don't have data to support that.


> At the very minimum,
> it would help the community understand in more detail the circumstances
> why a CA asks for more time to revoke.
>

I think there's an equally flawed assumption here - which is that CAs
should be asking for exceptions to policies. I don't think this is at all a
reasonable model - and the one time it did happen (with respect to SHA-1)
was one that caused a lot of pain and harm overall. I think it should be
uncontroversial to suggest that "exceptions" don't obviate the need for
qualifications - certainly, neither the professional standards behind the
ETSI audit criteria nor the standards behind WebTrust would allow a CA
to argue an event is not a qualification solely because Mozilla "granted an
exception".

Instead, the concept of "exceptions" is one of asking the community whether
or not they will agree to ignore, a priori, a matter of non-compliance. In a
world without "exceptions", the CA will take the qualification, and will
need to disclose (as part of an Incident Report and, later, the audit
report) the nature behind the incident, the facts, and those details. In
determining ongoing trust, the community will take a holistic look at the
incidents and qualifications, whether sufficient detail was presented, and
what the patterns and issues are.

This is a healthy system, whereas introducing "exceptions" and agreement, a
priori, to exclude certain facts from consideration is not. For one, it
prevents the determination and establishment of patterns - granting
exceptions as "one-offs" can (and demonstrably does) lead to patterns of
misissuance, and asking the community to overlook those patterns because it
agreed to overlook the specific events is very much an unreasonable, and
harmful, request. This is similar to the harm of creating "tiers" of
misissuance, as both acts seek to legitimize some forms of non-compliance,
without concrete data, which then collectively erodes the very notion of
compliance to begin with.

Thus, if we dispel the notion that some CAs hold - or worse, have promoted
to their subscribers - that browsers can, do, and will grant promises to
overlook certain areas of non-compliance, then the proposal itself goes
away. That's because the existing mechanisms - for disclosure and detail
gathering - function, and the community can and will consider those facts
when holistically considering the CA. It may be that some forms of
misissuance are so egregious that no CA should ever attempt them (e.g. granting
an unconstrained CA), and it may be that other forms are considered
holistically as part of patterns, but the CA is ultimately going to be
gambling, and that's all the more reason that a CA shouldn't violate in the
first place.

If (some) CAs do feel the requirements are overly burdensome, then
proposing changes is not unreasonable - but it MUST be accompanied with
concrete and meaningful data. Absent that, it leads to the harmful problems
I discuss above, and thus is not worth the time spent or electrons wasted
on the discussion. However, if (most) CAs systemically provide data, then
we can have 

Re: CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]

2018-12-04 Thread Ryan Sleevi via dev-security-policy
On Tue, Dec 4, 2018 at 5:02 AM Fotis Loukos 
wrote:

> An initial comment is that statements such as "I disagree that CAs are
> "doing their best" to comply with the rules." because some CAs are
> indeed not doing their best is simply a fallacy in Ryan's argumentation,
> the fallacy of composition. Dimitris does not represent all CAs, and I'm
> pretty sure that you are aware of this Ryan. Generalizations and the
> distinction of two teams, our team (the browsers) and their team (the
> CAs), where by default our team are the good guys and their team are
> malicious is plain demagoguery. Since you like extreme examples, please
> note that generalizations (we don't like a member of a demographic thus
all people from that demographic are bad) have led humanity to
commit atrocities, let's not go down that road, especially since I
> know you Ryan and you're definitely not that type of person.


I appreciate you breaking this down. I think it's important to respond to
the remark, because there is a substantive bit of this criticism that I
think meaningfully affects this conversation, and it's worth diving into.

Broadly speaking, the first remark, 'CAs are
"doing their best"', can be interpreted as "(Some) CAs are doing their best"
or "(All) CAs are doing their best". You rightfully point out that Dimitris
does not represent all CAs, but that lack of representation can't be
assumed to mean the statement could not possibly be meant as all CAs - that
could have been the intent, and is a valid interpretation. Similarly, the
criticism, 'I disagree that CAs are
"doing their best"', can be interpreted as "I disagree that (some) CAs are
doing their best", "I disagree that (all) CAs are doing their best", or "I
disagree that (any) CAs are doing their best".

While I doubt that any of these interpretations are likely to be seen as
supporting genocide, they do underscore an issue: Ambiguity about whether
we're talking about some CAs or all CAs. When we speak about policy
requirements, whether in the CA/Browser Forum or here, it's necessary in
the framing to consider all CAs in aggregate. Dimitris proposed a
distinction between "good" CAs and "bad" CAs, on the basis that flexibility
is needed for "good" CAs, while my counter-argument is that such
flexibility is easily abused by "bad" CAs, and when "bad" CAs are the
majority, there's no longer the distinction between "good" and "bad".
Policies that propose ambiguity, flexibility and trust, whether through
validation methods or revocation decisions, fundamentally rest on the
assumption that all entities with that flexibility will use the flexibility
"correctly." Codifying what that means removes the flexibility, and thus is
incompatible with flexibility - so if there exists the possibility of
abuse, it has to be dealt with by avoiding ambiguity and flexibility, and
removing trust where it's "misused".

This isn't a fallacy of composition - it's the fundamental risk assessment
that others on this thread have proposed. The risk of a single bad CA
spoiling the bunch, as it were, which is absolutely the case in a public
trust ecosystem, is such that it cannot afford considerations of
flexibility for the 'good' CAs. It's equally telling that the distinction
between 'bad' CAs and 'good' CAs is "Those that are not following the
rules" vs "Those that are", rather than the far more desirable "Those that
are doing the bare minimum required of the rules" and "Those that are going
above and beyond". If it truly were the latter case, one could imagine more
flexibility being possible, but when we're at a state where there are
literally CAs routinely failing to abide by the core minimum, then in any
conversation about granting more trust, it's necessary and critical to
consider "all CAs" when we talk about what "CAs are
doing", just as we already assume that negative discussions about removing
trust necessarily begin with "some CAs" when we talk about what "CAs are
doing".


Re: Incident report Certum CA: Corrupted certificates

2018-12-04 Thread Ryan Sleevi via dev-security-policy
Thanks for filing this, Wojciech. This is definitely one of the better
incident reports in terms of providing details and structure, while also
speaking to the steps the CA has taken in response. There was sufficient
detail here that I don't have a lot of questions - if anything, it sounds
like a number of best practices resulted that all CAs should abide by. The
few questions I do have are inline below:

On Mon, Dec 3, 2018 at 6:06 AM Wojciech Trapczyński via dev-security-policy
 wrote:

> (All times in UTC±00:00)
>
> 10.11.2018 10:10 – We received a notification from our internal
> monitoring system for issuing certificates and CRLs concerning issues
> with publishing CRLs. We started verification.
>

Understanding what system you had in place beforehand is valuable in
understanding what changes you propose to make. In particular, in
remediation, you note "We have deployed additional verification of
certificate and CRL signatures in the external component"

It's unclear here what the monitoring system monitored, or what the
challenges in publishing were. It sounds like there was already monitoring
in place in the internal system that detected the issue with corrupted
signatures. Is that a misunderstanding? I could also see an interpretation
being that "It was having trouble publishing large files", which would seem
a different issue.

Thus, it's helpful if you could discuss a bit more about what this
monitoring system already monitors, and how you're improving it to catch
this sort of issue. This may reveal other possible gaps, or it may be so
comprehensive as to also serve as a best practice that all CAs should
follow. In either event, the community wins :)


> 6. Explanation about how and why the mistakes were made or bugs
> introduced, and how they avoided detection until now.
>


> All issued certificates were unusable due to corrupted signature.
>

Could you speak more to how you assessed this? An incorrect signature
on the CRL would not necessarily prevent the certificate from being used;
it may merely prevent it from being revoked. That is, all 30,000 (revoked)
certificates may have been usable due to the corrupted signature.


> 7. List of steps your CA is taking to resolve the situation and ensure
> such issuance will not be repeated in the future, accompanied with a
> timeline of when your CA expects to accomplish these things.
>
> We have deployed a new version of the signing module that correctly
> signs large CRLs. From now, we are able to sign a CRL that is up to 128
> MB. In addition, we have improved the part of the signing module
> responsible for verification of signatures (at the time of failure it
> did not work properly).
>
> We have deployed additional verification of certificate and CRL
> signatures in the external component, in addition to the signing module.
> This module blocks the issuance of certificates and CRLs that have a
> corrupted signature.
>
> We have extended the monitoring system tests that will allow us to
> detect incorrectly signed certificates or CRLs faster.
>

As others have highlighted, there is still an operational gap, in that 1MB
CRLs are rather large and unwieldy. To help manage this, CRLs support
"sharding", by virtue of the CRL distribution point URL and the (critical)
CRL extension of Issuing Distribution Point (
https://tools.ietf.org/html/rfc5280#section-5.2.5 ). For example, the same
(Subject DN + key) intermediate CA can divide the certificates it issues
into an arbitrary number of CRLs. It does this by ensuring distinct URLs in
the certificates' CRLDP extension, and then, for each of the URLs
referenced, hosting a CRL for all certificates bearing that URL, and with a
critical IDP extension in the CRL (ensuring the IDP is present and critical
is a critical security function).

By doing this, you can roll a new CRL for every X number of subscriber
certificates you've issued, allowing you to bound the worst-case CRL size.
For example, if the average size of your CRL entry was 32 bytes
(easier for the math), then every 2,000 certificates, you could create a
new CRL URL, and the maximum size your CRL would be (in the worst case) for
those 2,000 certificates is 64K.

Have you considered such an option? Several other CAs already apply this
practice, at varying degrees of scale and size, but it seems like it would
be a further mitigation of a root cause, in that the revocation of
30,000 certificates would not balloon the CRL size so much.
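
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch -
the 32-byte entry size and the fixed overhead are illustrative assumptions,
not measurements:

    AVG_ENTRY_BYTES = 32      # assumed average size of one revoked-cert entry
    CERTS_PER_SHARD = 2_000   # start a new CRL distribution point every N certs
    TOTAL_REVOKED = 30_000    # the scale discussed in this incident

    def worst_case_crl_bytes(entries, entry_bytes=AVG_ENTRY_BYTES, overhead=600):
        # Fixed overhead (issuer name, signature, IDP extension, etc.) is a
        # rough allowance; real CRLs vary with encoding and extensions.
        return overhead + entries * entry_bytes

    print(worst_case_crl_bytes(TOTAL_REVOKED) // 1024)    # monolithic CRL: ~938 KB
    print(worst_case_crl_bytes(CERTS_PER_SHARD) // 1024)  # per-shard worst case: ~63 KB
    print(-(-TOTAL_REVOKED // CERTS_PER_SHARD))           # shards needed: 15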


Re: CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]

2018-11-30 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 30, 2018 at 4:24 AM Dimitris Zacharopoulos 
wrote:

>
>
> On 30/11/2018 1:49 π.μ., Ryan Sleevi wrote:
>
>
>
> On Thu, Nov 29, 2018 at 4:03 PM Dimitris Zacharopoulos via
> dev-security-policy  wrote:
>
>> I didn't want to hijack the thread so here's a new one.
>>
>>
>> Times and circumstances change.
>
>
> You have to demonstrate that.
>
>
> It's self-proved :-)
>

This sort of glib reply shows a lack of good-faith effort to meaningfully
engage. It's like forcing the discussion every minute, since, yanno, "times
and circumstances have changed".

I gave you concrete reasons why saying something like this is a
demonstration of a weak and bad-faith argument. If you would like to
meaningfully assert this, you would need to demonstrate what circumstances
have changed in such a way as to warrant a rediscussion of something that
gets 'relitigated' regularly - and, in fact, was something discussed in the
CA/Browser Forum for the past two years. Just because you're unsatisfied
with the result and now we're in a month that ends in "R" doesn't mean time
and circumstances have changed meaningfully to support the discussion.

Concrete suggestions involved a holistic look at _all_ revocations, since
the discussion of exceptions is relevant to know whether we are discussing
something that is 10%, 1%, .1%, or .01%. Similarly, having the framework
in place to consistently and objectively measure that helps us assess
whether any proposals for exceptions would change that "1%" from being
exceptional to seeing "10%" or "100%" being claimed as exceptional under
some new regime.

In the absence of that, it's an abusive and harmful act.


> I already mentioned that this is separate from the incident report (of the
> actual mis-issuance). We have repeatedly seen post-mortems that say that
> for some specific cases (not all of them), the revocation of certificates
> will require more time.
>

No. We've seen the claim it will require more time, frequently without
evidence. However, I do think you're not understanding - there is nothing
preventing CAs from sharing details, for all revocations they do, about the
factors they considered, and the 'exceptional' cases to the customers,
without requiring any BR violations (of the 24 hour / 5 day rule). That CAs
don't do this only undermines any validity of the argument you are making.

There is zero legitimate reason to normalize aberrant behaviour.


> Even the underscore revocation deadline creates problems for some large
> organizations as Jeremy pointed out. I understand the compatibility
> argument and CAs are doing their best to comply with the rules but you are
> advocating there should be no exceptions and you say that without having
> looked at specific evidence that would be provided by CAs asking for
> exceptions. You would rather have Relying Parties lose their internet
> services from one of the Fortune 500 companies. As a Relying Party myself,
> I would hate it if I couldn't connect to my favorite online e-shop or bank
> or webmail. So I'm still confused about which Relying Party we are trying
> to help/protect by requiring the immediate revocation of a Certificate that
> has 65 characters in the OU field.
>
> I also see your point that "if we start making exceptions..." it's too
> risky. I'm just suggesting that there should be some tolerance for extended
> revocations (to help with collecting more information) which doesn't
> necessarily mean that we are dealing with a "bad" CA. I trust the Mozilla
> module owner's judgement to balance that. If the community believes that
> this problem is already solved, I'm happy with that :)
>

The argument being made here is as odious as saying "We should have one day
where all crime is legal, including murder" or "Those who knowingly buy
stolen goods should be able to keep them, because they're using them".

I disagree that CAs are "doing their best" to comply with the rules. The
post-mortems continually show a lack of applied best practice. DigiCert's
example is, I think, a good one - because I do not believe it's reasonable
for DigiCert to have argued that there was ambiguity, given that prior to
the ballot, it was agreed they were forbidden, a ballot to explicitly
permit them failed, and the discussion of that ballot explicitly cited why
they weren't valid. From that, several non-DigiCert CAs took steps to
migrate their customers and cease issuance. As such, you cannot reasonably
argue DigiCert was doing "their best", unless you're willing to accept that
DigiCert's best is, in fact, far lower than the industry norm.

The framing about "Think about harm to the Subscriber" is, again, one that
is actively harmful, and, as coming from a CA, somewhat offensive, 

Re: DigiCert Assured ID Root CA and Global Root CA EV Request

2018-11-29 Thread Ryan Sleevi via dev-security-policy
Sure, my intent was to keep it narrowed to understanding the potential
impact to this conversation.

I raise this concern because I think it would reflect poorly if these
certificates were not revoked. There has been past precedent - e.g. not
granting EV to Turktrust after misissuance came to light, post inclusion
process discussions - that are relevant and applicable to know whether this
precedent still holds. And, as Jeremy’s reply highlights, it sounds like
there is non-trivial risk of such actions happening.

I would find it difficult, especially if these certificates are EV
certificates, to believe that the standards are being upheld in a way that
deserves EV recognition if a CA does not make a timely revocation.
Similarly, there has been past precedent that failures are best called out
early, during the inclusion process, as they become more difficult to
remediate, short of distrust, once they are included, and thus are also
treated more seriously.

Given these past precedents, it should not seem unreasonable to suggest
that any recognition of EV is perhaps contingent upon no new incidents
coming to light in the weeks following such discussions. Alternatively, if
that is seen to be too extreme, that any incidents being shared following
that deadline should result in a return to public discussion, with the
default assumption being that EV will not be granted/be removed, might
equally provide a clearer set of expectations, and align with Mozilla’s
interest in ensuring CAs consistently meet expectations.


Re: CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]

2018-11-29 Thread Ryan Sleevi via dev-security-policy
On Thu, Nov 29, 2018 at 4:03 PM Dimitris Zacharopoulos via
dev-security-policy  wrote:

> I didn't want to hijack the thread so here's a new one.
>
>
> Times and circumstances change.


You have to demonstrate that.

> When I brought this up at the Server
> Certificate Working Group of the CA/B Forum
> (https://cabforum.org/pipermail/servercert-wg/2018-September/000165.html),
>
> there was no open disagreement from CAs.


Look at the discussion during Wayne’s ballot. Look at the discussion back
when it was Jeremy’s ballot. The proposal was as simplified as could be -
modeled after 9.16.3 of the BRs. It would have allowed for a longer period
- NOT an unbounded period, which is grossly negligent for publicly trusted
CAs.

> However, think about CAs that
> decide to extend the 5-days (at their own risk) because of extenuating
> circumstances. Doesn't this community want to know what these
> circumstances are and evaluate the gravity (or not) of the situation?
> The only way this could happen in a consistent way among CAs would be to
> require it in some kind of policy.


This already happens. This is a matter of the CA violating any contracts or
policies of the root store it is in, and is already being handled by those
root stores - e.g. misissuance reports. What you’re describing as a problem
is already solved, as are the expectations for CAs - that violating
requirements is a path to distrust.

The only “problem” you’re solving is giving CAs more time, and there is
zero demonstrable evidence, to date, about that being necessary or good -
and rich and ample evidence of it being bad.

> > Phrased differently: You don't think large organizations are currently
> > capable, and believe the rest of the industry should accommodate that.
>
> "Tolerate" would probably be the word I'd use instead of "accommodate".


I chose accommodate, because you’d like the entire world to take on
systemic risk - and it is indeed systemic risk, to users especially - to
benefit some large companies.

Why stop with revocation, though? Why not just let CAs define their own
validation methods of they think they’re equivalent? After all, if we can
trust CAs to make good judgements on revocation, why can’t we also trust
them with validation? Some large companies struggle with our existing
validation methods, why can’t we accommodate them?

That’s exactly what one of the arguments against restricting validation
methods was.

As I said, I think this discussion will not accomplish anything productive
without a structured analysis of the data. Not anecdata from one or two
incidents, but holistic - because for every 1 real need, there may have
been 9,999 unnecessary delays in revocation with real risk.

How do CAs provide this? For *all* revocations, provide meaningful data. I
do not see there being any value to discussing further extensions until we
have systemic transparency in place, and I do not see any good coming from
trying to change at the same time as placing that systemic transparency in
place, because there’s no way to measure the (negative) impact such change
would have.

>
> > Do you believe these organizations could respond within 5 days if
> > their internet connectivity was lost?
>
> I think there is different impact. Losing network connectivity would
> have "real" and large (i.e. all RPs) impact compared to installing a
> certificate with -say- 65 characters in the OU field which may cause
> very few problems to some RPs that want to use a certain web site.


So you do believe organizations are capable of making timely changes when
necessary, and thus we aren't discussing capabilities, but perceived
necessity. And because some organizations have been misled as to the role
of CAs, and thus don't feel it's necessary, they don't feel they should have
to use that capability.

I’m not terribly sympathetic to that at all. As you mention, they can
respond when all RPs are affected, so they can respond when their
certificate is misissued and thus revoked.

> You describe it as a black/white issue. I understand your argument that
> other control areas will likely have issues but it always comes down to
> what impact and what damage these failed controls can produce. Layered
> controls and compensating controls in critical areas usually lower the
> risk of severe impact. The Internet is probably safe and will not break
> if for example a certificate with 65-character OU is used on a public
> web site. It's not the same as a CA issuing SHA1 Certificates with
> collision risk.


It absolutely is, and we have seen this time and time again. The CAs most
likely to argue the position you’re taking are the CAs that have had the
most issues.

Do we agree, at least, that any CA violating the BRs or Root Policies puts
the Internet ecosystem at risk?

It seems the core of your argument is how much risk should be acceptable,
and the answer is none. Zero. The point of postmortems is to get us to a
point where, as an industry, we’ve taken every available step 

Re: DigiCert Assured ID Root CA and Global Root CA EV Request

2018-11-29 Thread Ryan Sleevi via dev-security-policy
This deadline is roughly five weeks before all underscore certificates must
be revoked (per Ballot SC12). Given the number of underscore certificates
under various DigiCert operated hierarchies, would you think it appropriate
to consider whether or not SC12 (and, prior to that, the existing BR
requirements in force when those certificates were issued) was followed by
that date?

More concretely: If DigiCert were to fail to revoke certificates by that
deadline, would it be a reason to consider denying EV status to these roots
/ removing (if a decision is made to grant) it?

I realize the goal is to close discussion a month prior to that date, but I
suspect such guidance about the risk of failing to abide by SC12, and
failing to revoke by January 15, would be incredibly valuable to DigiCert
and their customers.

On Thu, Nov 29, 2018 at 1:39 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Reminder: the 3-week discussion period for this request to EV-enable two
> DigiCert roots ends next Friday 7-December.
>
> - Wayne
>
> On Fri, Nov 16, 2018 at 5:00 PM Wayne Thayer  wrote:
>
> > This request is to enable EV treatment for the DigiCert Assured ID Root
> CA
> > and DigiCert Global Root CA as documented in the following bug:
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1165472
> >
> > * BR Self Assessment is here:
> > https://bug1165472.bmoattachments.org/attachment.cgi?id=8960346
> >
> > * Summary of Information Gathered and Verified:
> > https://bug1165472.bmoattachments.org/attachment.cgi?id=8987141
> >
> > * Root Certificate Download URLs:
> > ** Global: https://www.digicert.com/CACerts/DigiCertGlobalRootCA.crt
> > ** Assured: https://www.digicert.com/CACerts/DigiCertAssuredIDRootCA.crt
> >
> > * CP/CPS:
> > ** CP:
> > https://www.digicert.com/wp-content/uploads/2018/08/DigiCert_CP_v416.pdf
> > ** CPS:
> >
> https://www.digicert.com/wp-content/uploads/2018/08/DigiCert_CPS_v416.pdf
> >
> > * These roots are already included with Websites and Email trust bits. EV
> > treatment is requested.
> > ** EV Policy OID: 2.23.140.1.1
> > ** Original inclusion request:
> > https://bugzilla.mozilla.org/show_bug.cgi?id=364568
> >
> > * Test Websites:
> > ** Global:
> > *** Valid: https://global-root-ca.chain-demos.digicert.com/
> > ***Expired: https://global-root-ca-expired.chain-demos.digicert.com/
> > *** Revoked: https://global-root-ca-revoked.chain-demos.digicert.com/
> > ** Assured:
> > *** Valid: https://assured-id-root-ca.chain-demos.digicert.com/
> > ***Expired: https://assured-id-root-ca-expired.chain-demos.digicert.com/
> > *** Revoked:
> https://assured-id-root-ca-revoked.chain-demos.digicert.com/
> >
> > * CRL URLs:
> > ** Global: http://crl3.digicert.com/DigiCertGlobalRootCA.crl and
> > http://crl4.digicert.com/DigiCertGlobalRootCA.crl
> > ** Assured: http://crl3.digicert.com/DigiCertAssuredIDRootCA.crl and
> > http://crl4.digicert.com/DigiCertAssuredIDRootCA.crl
> >
> > * OCSP URL: http://ocsp.digicert.com/
> >
> > * Audit: Annual audits are performed by Scott S Perry, CPA according to
> > the WebTrust for CA, BR, and EV audit criteria.
> > ** WebTrust: https://cert.webtrust.org/ViewSeal?id=2452
> > ** BR: https://www.cpacanada.ca/webtrustseal?sealid=2453
> > ** EV: https://www.cpacanada.ca/webtrustseal?sealid=2454
> >
> > Additionally, DigiCert is undergoing quarterly audits (due to the
> Symantec
> > acquisition) that include the DigiCert Global Root CA and has been
> posting
> > the reports [1].
> >
> >
> > I’ve reviewed the CPS, BR Self Assessment, and related information for
> the
> > DigiCert Assured ID Root CA and DigiCert Global Root CA request that is
> > being tracked in this bug and have the following comments:
> >
> > ==Good==
> > * Other than my two comments below, the CP and CPS are in good shape and
> > they are well written and regularly updated.
> >
> > ==Meh==
> > * These are old roots, created in 2006, however, DigiCert has provided a
> > continuous chain of audits back to their creation [1]
> > * CPS section 3.2.2 permitted DigiCert to use vulnerable BR domain
> > validation methods 3.2.2.4.9 and 3.2.2.4.10. They are described as
> > deprecated in the latest version.
> > * DigiCert has had quite a number of compliance bugs over the past 18
> > months [2]. All but one is resolved (that one is awaiting the subordinate
> > CA to move to a managed PKI), DigiCert is generally responsive, and they
> > have self-reported a number of these issues.
> >
> > ==Bad==
> > * DigiCert’s most recent quarterly audit report states “During our
> > examination, we noted DigiCert publicly reported (
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1483715) that it continued
> > to rely on a deprecated method of domain validation when renewing
> > certificates after the stated transition date of August 1, 2018. As a
> > result, DigiCert had to revalidate all affected 1233 certificates over
> 154
> > domains.“ At least one of the certificates the required 

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-11-29 Thread Ryan Sleevi via dev-security-policy
On Thu, Nov 29, 2018 at 2:16 AM Dimitris Zacharopoulos 
wrote:

> Mandating that CAs disclose revocation situations that exceed the 5-day
> requirement with some risk analysis information, might be a good place
> to start.


This was proposed several times by Google in the Forum, and consistently
rejected, unfortunately.


> I don't consider 5 days (they are not even working days) to be adequate
> warning period to a large organization with slow reflexes and long
> procedures.


Phrased differently: You don't think large organizations are currently
capable, and believe the rest of the industry should accommodate that.

Do you believe these organizations could respond within 5 days if their
internet connectivity was lost?


> For example, if many CAs violate the 5-day rule for revocations related
> to improper subject information encoding, out of range, wrong syntax and
> that sort, Mozilla or the BRs might decide to have a separate category
> with a different time frame and/or different actions.
>

Given the security risks in this, I think this is extremely harmful to the
ecosystem and to users.

> It is not the first time we talk about this and it might be worth
> exploring further.
>

I don't think any of the facts have changed. We've discussed for several
years that CAs have the opportunity to provide this information, and
haven't, so I don't think it's at all proper to suggest starting a
conversation without structured data. CAs that are passionate about this
could have supported such efforts in the Forum to provide this information,
or could have demonstrated doing so on their own. I don't think it would at
all be productive to discuss these situations in abstract hypotheticals, as
some of the discussions here try to do - without data, that would be an
extremely unproductive use of time.


> As a general comment, IMHO when we talk about RP risk when a CA issues a
> Certificate with -say- longer than 64 characters in an OU field, that
> would only pose risk to Relying Parties *that want to interact with that
> particular Subscriber*, not the entire Internet.


No. This is demonstrably and factually wrong.

First, we already know that technical errors are a strong sign that the
policies and practices themselves are not being followed - both the
validation activities and the issuance activities result from the CA
following it's practices and procedures. If a CA is not following its
practices and procedures, that's a security risk to the Internet, full stop.

Second, it presumes (incorrectly) that interoperability is not something
valuable. That is, if, say, the three existing, most popular implementations
all do not check whether or not it's longer than 64 characters (for
example), and a fourth implementation would like to come along, they cannot
read the relevant standards and implement something interoperable. This is
because 'interoperability' is being redefined as 'ignoring' the standard -
which defeats the purposes of standards to begin with. These choices - to
permit deviations - create risks for the entire ecosystem, because there's
no longer interoperability. This is equally captured in
https://tools.ietf.org/html/draft-iab-protocol-maintenance-01

The premise to all of this is that "CAs shouldn't have to follow rules,
browsers should just enforce them," which is shocking and unfortunate. It's
like saying "It's OK to lie about whatever you want, as long as you don't
get caught" - no, that line of thinking is just as problematic for morality
as it is for technical interoperability. CAs that routinely violate the
standards create risk, because they have full trust on the Internet. If the
argument is that the CA's actions (of accidentally or deliberately
introducing risk) is the problem, but that we shouldn't worry about
correcting the individual certificate, that entirely misses the point that
without correcting the certificate, there's zero incentive to actually
follow the standards, and as a result, that creates risk for everyone.
Revocation, if you will, is the "less worse" alternative to complete
distrust - it only affects that single certificate, rather than every one
of the certificates the CA has issued. The alternative - not revoking -
simply says that it's better to look at distrust options, and that's more
risk for everyone.

Finally, CAs are terrible at assessing the risk to RPs. For example,
negative serial numbers were prolific prior to the linters, and those have
issues inasmuch as they are, for some systems, irrevocable. This is
because those systems implemented the standards correctly - serials are
positive INTEGERs - yet had to account for the fact that CAs are improperly
encoding them, such as by "making" them positive (adding the leading zero).
This leading zero then doesn't get stripped off when looking up by Issuer &
Serial Number, because they're using the "spec-correct" serial rather than
the "issuer-broken" serial. That's an example where the certificate
"works", no report 

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-11-26 Thread Ryan Sleevi via dev-security-policy
On Mon, Nov 26, 2018 at 12:12 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> 1. Having a spare certificate ready (if done with proper security, e.g.
>a separate key) from a different CA may unfortunately conflict with
>badly thought out parts of various certificate "pinning" standards.
>

You blame the standards, but that seems an operational risk that the site
(knowingly) took. That doesn't make a compelling argument.


> 2. Being critical from a society perspective (e.g. being the contact
>point for a service to help protect the planet), doesn't mean that the
>people running such a service can be expected to be IT superstars
>capable of dealing with complex IT issues such as unscheduled
>certificate replacement due to no fault of their own.
>

That sounds like an operational risk the site (knowingly) took. Solutions
for automation exist, as do concepts such as "hiring multiple people"
(having a NOC/SOC). I see nothing to argue that a single person is somehow
the risk here.


> 3. Not every site can be expected to have the 24/7 staff on hand to do
>"top security credentials required" changes, for example a high-
>security end site may have a rule that two senior officials need to
>sign off on any change in cryptographic keys and certificates, while a
>limited-staff end-site may have to schedule a visit from their outside
>security consultant to perform the certificate replacement.
>

This is exactly describing a known risk that the site took, accepting the
tradeoffs. I fail to see a compelling argument that there should be no
tradeoffs - given the harm presented to the ecosystem - and if sites want
to make such policies, rather than promoting automation and CI/CD, then it
seems that's a risk they should bear and make an informed choice.

> Thus I would be all for an official BR ballot to clarify/introduce
> that 24 hour revocation for non-compliance doesn't apply to
> non-dangerous technical violations.
>

As discussed elsewhere, there is no such thing as "non-dangerous technical
violations". It is a construct, much like "clean coal", that has an
appealing turn of phrase, but without the evidence to support it.


> Another category that would justify a longer CA response time would be a
> situation where a large batch of certificates need to be revalidated due
> to a weakness in validation procedures (such as finding out that a
> validation method had a vulnerability, but not knowing which if any of
> the validated identities were actually fake).  For example to recheck a
> typical domain-control method, a CA would have to ask each certificate
> holder to respond to a fresh challenge (lots of manual work by end
> sites), then do the actual check (automated).


Like the other examples, this is not at all compelling. Solutions exist to
mitigate this risk entirely. CAs and their Subscribers that choose not to
avail themselves of these methods - for whatever the reason - are making an
informed market choice about these. If they're not informed, that's on the
CAs. If they are making the choice, that's on the Subscribers.

There's zero reason to change, especially when such revalidation can be,
and is, being done automatically.
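
For illustration, here is a generic sketch of what such an automated recheck
can look like - this is not any particular CA's implementation, and the file
name and token handling are assumptions; only the /.well-known/pki-validation/
directory comes from the BRs' agreed-upon-change-to-website method:

    from urllib.request import urlopen

    def recheck_domain_control(domain, expected_token):
        # The subscriber places an agreed random token at a well-known path;
        # the CA fetches it and compares. The file name here is illustrative.
        url = "http://%s/.well-known/pki-validation/recheck.txt" % domain
        try:
            body = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            return False
        return expected_token in body

    # e.g. recheck_domain_control("example.com", "previously-agreed-random-value")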


Re: Incident report D-TRUST: syntax error in one tls certificate

2018-11-26 Thread Ryan Sleevi via dev-security-policy
On Mon, Nov 26, 2018 at 10:31 AM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> CA/B is the right place for CAs to make the case for a general rule about
> giving themselves more time to handle technical non-compliances whose
> correct resolution will annoy customers but impose little or no risk to
> relying parties,
>

CAs have made the case - it was not accepted.

On a more fundamental and philosophical level, I think this is
well-intentioned but misguided. Let's consider that the issue is one that
the CA had the full power-and-ability to prevent - namely, they violated
the requirements and misissued. A CA is only in this situation if they are
a bad CA - a good CA will never run the risk of "annoying" the customer.

This also presumes that "annoyance" of the subscriber is a bad thing - but
this is also wrong. If we accept that CAs are differentiated based on
security, then a CA that regularly misissues and annoys its customers is a
CA that will lose customers. This is, arguably, better than the
alternative, which is to remove trust in a CA entirely, which will annoy
all of its customers.

This presumes that the customer cannot take steps to avoid this. However,
as suggested by others, the customer could have minimized or eliminated
annoyance, such as by ensuring they have a robust system to automate the
issuance/replacement of certificates. That they didn't is an operational
failure on their part.

This presumes that there is "little or no risk to relying parties."
Unfortunately, they are by design not a stakeholder in those conversations
- the stakeholders are the CA and the Subscriber, both of which are
incentivized to do nothing (it avoids annoying the customer for the CA, it
avoids having to change for the customer). This creates the tragedy of the
commons that we absolutely saw result from browsers not regularly enforcing
compliance on CAs - areas of technical non-compliance that prevented
developing interoperable solutions from the spec, which required all sorts
of hacks, which then subsequently introduced security issues. This is not a
'broken windows' argument so much as a statement of the demonstrable
reality we lived in prior to Amazon's development and publication of
linting tools that simplified compliance and enforcement, and the
subsequent improvements by ZLint.
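
As an illustration - far simpler than certlint or ZLint themselves, and not
their actual code - the kind of syntactic check such linters automate looks
roughly like this, assuming the Python "cryptography" package and an
illustrative file path:

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    def lint(cert):
        # Flag two issues discussed in this thread: a non-positive serial
        # number, and an OU value over the 64-character upper bound.
        findings = []
        if cert.serial_number <= 0:
            findings.append("serial number must be a positive INTEGER (RFC 5280, 4.1.2.2)")
        for attr in cert.subject.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME):
            if len(attr.value) > 64:
                findings.append("OU exceeds the 64-character upper bound")
        return findings

    with open("subscriber-cert.pem", "rb") as f:   # illustrative path
        for finding in lint(x509.load_pem_x509_certificate(f.read())):
            print("LINT:", finding)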

Conceptually, this is similar to an ISP that regularly cuts its own
backbone cables or publishes bad routes. By ensuring that the system
consistently functions as designed - and that the CA follows their own
stated practices and procedures and revokes everything that doesn't - the
disruption is entirely self-inflicted and avoidable, and the market can be
left to correct for that.


> I personally at least would much rather see CAs actually formally agree
> they should all have say 28 days in such cases - even though that's surely
> far longer than it should be - than a series of increasingly implausible
> "important" but ultimately purely self-serving undocumented exceptions that
> make the rules on paper worthless.
>

I disagree that encouraging regulatory capture (and the CA/Browser Forum
doesn't work by formal agreement of CAs, nor does it alter root program
expectations) is the solution here.

I agree that the increasingly implausible "important" revocations are
entirely worthless. I think a real and meaningful solution is what is
being more consistently pursued, and that's to distrust CAs that are not
adhering to the set of expectations. There's no reason to believe the
"impact" argument, particularly when it's one that both the Subscriber and
the CA can and should have avoided, and CAs that continue to make that
argument are increasingly showing that they're not working in the best
interests of Relying Parties (see above) or Subscribers (by "annoying" them
or lying to them), and that's worthy of distrust.


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-15 Thread Ryan Sleevi via dev-security-policy
On Wed, Nov 14, 2018 at 10:39 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> While I see some small steps being made toward a common understanding of
> the issue, there is still fundamental and subjective disagreement on the
> severity, and it's not clear to me that this thread is headed toward any
> sort of a constructive conclusion.
>

I think one area that I've been trying to focus on, independent of the past
issues that Jakob is exploring, is a better understanding of TUViT's
processes with respect to compliance. While it's certainly true that
they've acknowledged that they have not and did not develop tools to check
compliance of certificates against the published ASN.1 modules, I think it
would benefit the community to better understand TUViT's approach to
auditing and ensuring compliance. For example, how many processes rely on
human review? Is sampling employed, how are sample sizes selected, what is
tested within the sample, etc.
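
As a purely illustrative sketch - an assumption about one way such sampling
could be structured, not a description of TUViT's actual process, with the
function name, rate, and minimum invented for the example - a stratified
selection would draw samples per certificate profile rather than from
overall issuance, with a floor per stratum:

    import random
    from collections import defaultdict

    def select_samples(certificates, rate=0.03, minimum=10, seed=None):
        """certificates: iterable of (cert_id, profile) pairs."""
        rng = random.Random(seed)
        strata = defaultdict(list)
        for cert_id, profile in certificates:
            strata[profile].append(cert_id)
        selected = {}
        for profile, ids in strata.items():
            # At least `minimum` per profile, or `rate` of the stratum,
            # whichever is larger, capped at the stratum size.
            n = min(len(ids), max(minimum, int(len(ids) * rate)))
            selected[profile] = rng.sample(ids, n)
        return selected

The rate and minimum here are placeholders; the substantive questions remain
who selects the strata and sizes, and what is actually examined within each
sample.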

These are matters that can be discussed and explored without the
retrospective analysis, and provide insight into the current issue. The
benefit of the retrospective analysis is that we can then also explore and
understand if and how these processes were changed due to past oversights,
and whether or not past oversights should have been caught by the described
processes. This helps ensure that future issues are detected in a more timely manner.

Separate from that discussion - of the present issue - is a question about
whether or not TUViT's adherence to the minimum amount required represents
a sufficient level of assurance going forward. If, as a community, we
conclude that the approach TUViT is taking is not acceptably transparent,
next steps can be explored. These next steps may include suspending TUViT's
recognition until
process changes to achieve the necessary transparency are met, or may
involve clarifying more generally the degree of transparency required for
audits within the program. This may be accompanied by a further exploration
of the ETSI accreditation standards with regards to best practices. Put
differently, the demonstration of more transparent reports from other
auditors accredited under ETSI-developed standards may indicate that TUViT
is failing to meet industry best practices, or it may serve as an
opportunity to codify those best practices as program requirements.

Obviously, from the discussion, I disagree with Jakob on the best
approach to achieving these goals. I think it's far more important and
relevant to make sure we have a comprehensive understanding of the
/current/ issue with respect to competence and transparency before
comparing and contrasting that with past issues. I think if we can spend
our energies focused on this specific issue, then we can make some forward
progress.
___


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-13 Thread Ryan Sleevi via dev-security-policy
>
>
>
> On Tue, Nov 13, 2018 at 11:26 AM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Furthermore the start of the thread was off-list.  Also neither I, nor
>> some other participants have access to the audit reports etc. in CCADB.
>>
>
> Sure you do. That information is publicly available through
> https://wiki.mozilla.org/CA/Included_Certificates
>
>
>> This basic combination of noise and missing data is why I asked for a
>> one-stop overview of your complaints against TUVIT, similar to the lists
>> compiled for previous situations with multiple complaints against one
>> party.
>>
>
> Those are the output of these discussions, not the input or structure to
> them. There are certainly broader complaints, but if you'll note, my focus
> has been on attempting to satisfactorily resolve the current set of issues.
> Several times you've attempted to move it to the meta discussion, while
> I've tried to again focus on the specific lack of resolution for those
> initially identified issues. The reference to the other issues is precisely
> because the explanation and resolution of *these* issues can inform or be
> compared with the *past* issues, which would be used to build the list
> seemingly so desired.
>
>
>> "Misconfiguration and misapplication of the relevant rules..." is so
>> broad as to describe the majority of CA failures without giving any
>> useful specifics to assess the situation.  It's like saying someone's
>> crime was to "violate and break the relevant laws" (which would apply to
>> anything from jaywalking to mass murder).
>>
>
> While sympathetic to your frustration, I think that's a rather extreme
> interpretation. For example, CAs seem to believe that the majority of their
> failures are "human error" and that human error is corrected by "additional
> training". Perhaps you would like to propose a better wording to
> distinguish between the "Guaranteed to produce the wrong result, 100% of
> the time" configuration issues, in which a certificate profile is
> functionally unable to meet the stated configuration, and those which are
> tied to, for example, data validation issues (or lack thereof). My intent
> was to capture the former, while acknowledging that the latter is something
> that is primarily accounted for through design review, sampling, and
> testing.
>
>
>> It would also be useful to quantify the word substantial: Of all the
>> certificates issued by the audited CA organization, how large a
>> percentage suffered from each flaw, how many from none.  This is a key
>> number when assessing if statistical sampling by the auditor should have
>> caught an issue.  It is also a key number when assessing the level of
>> incompetence of the CA (but the CA is not the subject of this thread).
>>
>
> I already responded to this previously, and again in my more recent
> messages. In the issue that started this thread, we can see it's 100%. In
> the past issuance examples, we can see that it was 100% of certificates
> going through certain systems. While that is less than 100% of total
> volume, sampling methodology also must consider variances and other
> factors. For example, if a CA issues DV, OV, and EV, a sampling methodology
> would approach each profile distinctly for sample selection, rather than
> overall issuance. A sampling method for a CA may involve 100+ such samples
> (each representing a percentage), based on the design review that
> identifies variations and permutations relevant to the services provided.
> Similarly, the selection of 3% is relevant to CA self-audits primarily.
>
> This is where the initial request for the discussion about methodology - a
> discussion about how a CAB can miss 100% of certificates being misissued -
> is relevant. And, as of yet, unaddressed.
>
> Issue U1 (Qc-statement misencoding) apparently affected all certificates
>> from one issuing CA, and should thus have been caught by sampling by the
>> auditor.  The auditor has (according to earlier posts) admitted that the
>> bug was present in the sampled certificates from that issuing CA, but
>> that this was overlooked because that particular extension was not one
>> they had specific experience looking at.  Once the problem was pointed
>> out the auditor looked at the previously collected evidence and
>> confirmed the problem by checking that detail from first principles
>> (similar to software developer hand-executing a function with pen and
>> paper to confirm a bug).
>>
>
> I don't believe that is a correct summary. The auditor reported things
> were correct - i.e. no bug - and only after pushing further to state very
> clearly that there was a bug did the auditor confirm that, oh yes, there
> was a bug, we just overlooked it. Now, I can understand that the favorable
> reading for the auditor was simply that they were busy and on the road and
> favoring expediency over correctness - but we've seen CAs using this same
> reasoning for years. Multiple 

Re: Questions regarding the qualifications and competency of TUVIT

2018-11-13 Thread Ryan Sleevi via dev-security-policy
On Tue, Nov 13, 2018 at 9:46 AM things things 
wrote:

> >> I hope you can see that this is actively damaging the community by
> promoting magniloquent indictments instead of discussing
> >> clear facts. It would be far more productive to provide a concrete and
> structured list of TUVITs failings, as suggested by Jakob.
>
> > Do you believe the initial message did not contain that?
>
> Yes. Your inital message contained a lot of information, a timeline about
> contacting TUVIT, expressions of your dissatisfaction with TUVITs answers
> etc etc. It also contained two paragraphs labeled "Issue A" and "Issue C",
> but it is far from a concrete and structured list.
>
> I don't think that it is currently transparent or its lost in the approx
> 50 message with partly heated exchanges about ETSI and whatnot that
> followed, what the core of the issues is.
>

I think, then, that we'll have to agree to disagree on both approach and
substance.

It would appear that your desire is for a small, bulleted list of items,
and to form your opinion solely on that basis, without any context. The
initial thread started by both contextualizing a set of issues and, from
there, enumerating specific issues. The discussion, to date, has been to
review those facts, ensure they're accurate and meaningfully presented, and
allow opportunity for both other concerns to be raised and for other
considerations. This will be, inherently, a messy process, but is
fundamental to the essence of building a shared understanding. There have
been several attempts to derail the thread, including suggestions these
issues shouldn't be discussed before December (at the earliest) or possibly
into the next year, but those are fundamentally unproductive.

From the 40 messages, we've converged on a set of things starting to be
understood and agreed upon, and other issues still being debated. It would
be both premature and unproductive to attempt to distill that into a curt
list while the discussion is ongoing, especially given the responsiveness
of TUVIT to the concerns - and in particular, the lack of any explanation
of methodology that would show why the concerns are unfounded.

If you consider past discussions - such as CAs like StartCom or Symantec -
you'll see that they similarly followed an evolutionary approach, in which
an initial issue was reported, it spiraled into a broader discussion, and
the *output* of that discussion was a structured list.

This is why I disagree with you on substance and approach; I think it would
be premature to attempt to distill that into a list while the discussions
are ongoing, and the insistence on doing so comes across as an attempt to
stifle the conversation.
Indeed, most of the messages following
https://groups.google.com/d/msg/mozilla.dev.security.policy/Q9whve-HJfM/T6W4i2XHAwAJ
have not been attempting to discuss the substance of the issues, or to
explore them further, but instead suggest that it's not appropriate to have
this conversation, or attempt to restructure it. It seems far more
productive to spend the conversation on the substance, rather than on
structure-policing.
___


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-13 Thread Ryan Sleevi via dev-security-policy
I suppose I had unreasonably hoped it would be self-evident, particularly
to someone who claims to follow the issues, how directly that issue was
related. Unfortunately, whether by intent or otherwise, it appears not.

While I neither believe nor agree with your approach to framing the issues,
I do hope you can agree, through the bug - which is itself an amalgamation
of and reference to several bugs - that during the prior two audit cycles,
T-Systems produced a substantial amount of misissuance that went undetected
by TUVIT and that shared the same root cause: misconfiguration and
misapplication of the relevant rules, both in terms of ASN.1 and in terms
of normative requirements.

If one were attempting to excuse such misissuance, rather than address it,
one would take a similar tack as you are here; suggesting, for example,
that it was T-Systems rather than TUVIT that did the misissuance, or that
the incidence was too low to be significant. I was careful not to muddy the
conversation with an indictment of T-Systems, to avoid diluting the
discussion, and because they’ve already provided several enumerations of
the issues, so doing so again, as you’ve done, does not add value. However,
it should be readily apparent from both the bug discussion and the list of
issues that there is a common pattern of misconfiguring relevant profiles
and failing to ensure they comply with the relevant requirements.

In the context of ETSI, each of these configuration changes - particularly
once qualified - undergoes some review; whether after the fact
(pre-qualification) or prior to such change. Similarly, misissuance
involves a degree of notification to the CAB. As such, it is entirely
reasonable to expect a degree of supervision, as that is the value of the
certification scheme. All of this information would have been available at
the time of configuring qualified certificates, including the pattern of
issues existing when configuring profiles and templates.

As such, we functionally see two issues: the inadequate supervision that
resulted in the first batch of misissuance, which one might attempt to
argue away by suggesting it was some small volume that sampling would not
have caught (despite the inconsistencies of that argument with the
criteria), and the inadequate supervision leading to this current issue,
despite all of that previous information being available as context during
the review and despite there being a 100% misissuance rate. Both of these
share a clear commonality of inadequate supervision, a key role played over
the past several years.

Audits understandably and obviously do not prevent a CA from making a
change tomorrow that undermines the past audits; there is no guarantee they
won’t start actively misissuing once the auditor has left the building. An
audit is, however, meant to provide assurance regarding the present (and
past) configuration. When a CA like T-Systems does misissue, whether in
this or previous incidents, it is entirely reasonable to ask “Was this
configuration something the auditor previously reviewed, and did they catch
it?” and, in the case of ETSI, “Was this a change the auditor approved in
relation to ongoing certification?”

The qcStatements issue demonstrates a failure of the latter, the bug a
failure of the former; both speak to the process of review and the
qualifications of the reviewer.

If you don’t agree with the large swath of undetected past misissuance
being a concern, it would be helpful if you could explain why it isn’t
concerning. For example, do you believe that these requirements
(collectively, for any of these issues) were not covered by existing
criteria? Do you believe that sufficient documentation of TUVIT’s
methodology exists so as to explain why such failure to detect may be seen
as reasonable? Do you believe that ETSI does not require consideration by
auditors prior to operational and configuration changes? In short, do you
disagree that, when presented with CA misissuance, such as by T-Systems,
it is both relevant and appropriate to question why the auditor failed
to detect and/or prevent such misissuance?

I am not arguing that an audit be a guarantee against misissuance; for
example, a statistical sample will be just that, a sample, and stuff can
reasonably slip through. I am, however, advocating that it’s both
appropriate and necessary to question whether sampling was even done, and
how it was constructed (e.g. CA selects the samples and sizes vs auditor),
and what was reviewed, in order to ascertain whether or not it was
“reasonable” to have missed something. In the case of T-Systems’ past
misissuances, the collective sum - especially with respect to things like
misconfigured templates - raises legitimate concerns about TUVIT’s approach
and methodology, and those concerns are each themselves distinct issues
with TUVIT for every misissuance “type” by T-Systems.
___

Re: Questions regarding the qualifications and competency of TUVIT

2018-11-13 Thread Ryan Sleevi via dev-security-policy
On Tue, Nov 13, 2018 at 5:30 AM things things via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Ryan,
>
> I feel you are trying to derail the discussion and are muddying the waters.
>
> I hope you can see that this is actively damaging the community by
> promoting magniloquent indictments instead of discussing clear facts. It
> would be far more productive to provide a concrete and structured list of
> TUVITs failings, as suggested by Jakob.


Do you believe the initial message did not contain that?

Up to now, there is no readable summary of facts to understand what this
> all is about. In your initial posting you wordily talked about an Issue A,
> then Issue C, and then you skipped that entirely.


That is not an accurate summary. The matter of Issue A was discussed, and
similar concerns expressed by Wayne in the subsequent response. Would you
like to discuss it further? Otherwise, it’s unclear what your point is.

>
___


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-12 Thread Ryan Sleevi via dev-security-policy
Jakob,

Please see
https://groups.google.com/d/msg/mozilla.dev.security.policy/Q9whve-HJfM/lpwKQXOfAgAJ
, which was already provided previously.
It includes details regarding T-Systems areas of non-compliance that were
1) Demonstrably not identified by the auditor
2) Covered by existing audit criteria
3) Sharing the similar root cause as this incident

Even if we accept a notion that an auditor would not have been looking for
those issues at that time (despite the clear auditable criteria that
existed), the examination of root cause reveals a common pattern shared
with this incident, and a pattern where the auditor would have been
responsible for the review of the changes as part of the certification
scheme. T-Systems has still not provided a satisfactory response to the
questions raised by Gerv and Wayne in response to the past incident (
https://bugzilla.mozilla.org/show_bug.cgi?id=1391074 ), which, while
separable from the concerns of TUVIT, should have factored into any such
considerations - such as Gerv's prescient expectation of exactly this issue
in https://bugzilla.mozilla.org/show_bug.cgi?id=1391074#c22
___


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-12 Thread Ryan Sleevi via dev-security-policy
Nick,

I find your continued suggestions to be actively harmful - to the
discussion, for sure, but also to the reputation of ETSI.

You've attempted to frame this, again, as an either/or approach - that is,
that we can only have one of these discussions. You've attempted to
"thread-jack" the conversation by repeatedly suggesting that we ignore
specific failures of a specific auditor, and that closed-door negotiations
and smoky-room summits (notably, ones in which browsers will be absent)
will somehow resolve the issues. That's difficult to believe, and even more
difficult to stomach, given the attempt to deflect any responsibility or
accountability in favor of some abstract 'process'.

That's not to dismiss there being value in improving ETSI. Certainly, if
ETSI is to provide any value to the Web Ecosystem going forward, it needs
to address those needs. There's nothing inherently valuable in the ETSI
audits that makes them immune from concern or rejection.

However, this thread is about specific failures of a specific auditor. If
you do not believe these are failures - that is, you do not believe the
ETSI EN 319 * series has any normative guidance on CABs with respect to
assessing compliance with the stated certificate profiles - then we should
reject ETSI for the time being. If you do agree, however, that there is
specific guidance throughout those series regarding the expectations of
CABs, and that there is a pattern of failing to examine or adhere to that,
then I hope you can see and agree on the critical necessity of examining
why the CAB is failing.

We have yet to receive a meaningful post-mortem from TUVIT regarding
this failure, nor any acknowledgement of the pattern, demonstrated by
past CAs they have audited, in which they failed to detect or account for
material non-conformities. That silence and lack of a meaningful response -
as to what practices are applied in the audit, why they failed, and what
can be done to improve - is exactly why it's reasonable to discuss
rejection of their future audit statements.

Suggesting that taking this up with ETSI will resolve this is akin to
suggesting that the CA/Browser Forum should be consulted every time there's
material misissuance. That misunderstands the ecosystem, misunderstands the
purpose, and misunderstands how to appropriately protect users.

There needs to be a resolution to this thread. If you would like to
continue suggesting improvements to ETSI - which I agree are needed, though
I do not believe this is at all the appropriate time - I would request you
create a new thread to share your thoughts. They are not, despite any
possible intent, productive for this conversation.

On Mon, Nov 12, 2018 at 11:00 AM Nick Pope via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Ryan,
>
> I see the main question is what is the most productive way ahead.  We can
> continue discussing a specific concern in the context of just 1 of the
> European auditor, or work in the EU on a considered approach to all the
> concerns which can be applied to all European based audits.  The first does
> not seem to be working towards something that you are happy with and even
> then would only provide an answer in a limited context.   With the second
> approach we can take into account all your concerns and work towards an
> approach that can be applied to all EU audits which is acceptable to all.
>
>
___


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-09 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 9, 2018 at 7:05 AM Nick Pope via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I am asking that we get a clear statement of what you would like to see
> from EU audits based on ETSI standards and so that we (European Auditors
> and ETSI) can come back with a considered response on how we can meet you
> concerns.  Rather than saying what a particular individual person thinks,
> we would like to understand what your concerns are in as much detail as
> possible against what is specified as the current requirements for EU
> audits.We can then make a considered joint response to your concerns to
> ensure that ETSI audits meet your needs in a way works for the existing
> European environment.
>
> I note your concerns about transparency and ensuring that the requirements
> certificate profile are met.  If you can put these concerns down in detail,
> along with any other issue you have, as a joint document from the root
> stores, we can provide a coordinated response on how we can address your
> concerns.
>
> If you see this as "basics that are already required" rather than "wish
> list" fine, again just provide us with a clear set requirements so that we
> can properly respond.


I really don’t see how this is a productive response. It really is rather
simple - do you believe auditors should be assessing compliance with EN 319
412-* under the existing standards?

If yes, TUVIT has demonstrated a pattern of failing to do so, and it’s
appropriate to discuss what next steps are appropriate to minimize the risk
from such repeated failures - such as no longer accepting their audits.

If not, then ETSI audits are quite literally missing one of the most basic
expectations, and their acceptance should be immediately stopped until such
a time as they do.

I fail to see how there’s any other possible response there; it really is
cut and dry like that.
___


Re: How harsh (in general) should Mozilla be towards CAs?

2018-11-08 Thread Ryan Sleevi via dev-security-policy
On Thu, Nov 8, 2018 at 5:51 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This thread is for the general principles, it takes no stance on any
> particular cases, as that would quickly derail the discussion.
>
> Over the years, there has been some variation among participants in how
> harshly individual mistakes by CAs should be judged, ranging from "just
> file a satisfactory incident report, and all will be fine" to "Any tiny
> mistake could legally be construed as violating a formal requirement
> that would be much more catastrophic under other circumstances,
> therefore the maximum penalty of immediate distrust must be imposed".
>
> I believe some middle ground between those extremes would be better for
> all involved (including relying parties/users).


Concretely, could you explain what you believe that would look like in
practice?

Can you also state what you believe were appropriate alternatives raised by
the community, but ignored, when considering past incidents?

I ask these because it’s not reasonable to suggest there’s some
as-of-yet-unmet middle ground without actually defining what you believe to
be examples of both ends of the spectrum. The reality is that almost
everything done in the past several years has been on the “more lenient
than the middle” side in practical terms, yet you’re implying, especially
later in your message, that you believe these actions to be on some extreme.

Without those sorts of concrete examples, such a message can come off very
shady - like asking “have you stopped beating your wife yet”. It’s
suggestive without being constructive or educational.

I believe that the assessment of cases should be based on a balanced
> view of the actual circumstances, and that blindly taking either the
> "extremely lenient" or "extremely harsh" stance is unfair for everybody
> directly or indirectly affected.


This is a bit leading, or perhaps, misleading. I don’t think anyone here
would disagree with the first half - that’s very much what the process is
currently designed to support and accomplish. Either you’re stating a fact
that everyone agrees with, or you’re presenting it as if somehow you’re
unique in this or perhaps (combined with later remarks) a minority in this
view. The second half, while also agreeable and part of the principles, is
worded in such a way that it suggests you believe those things are
happening. Unfortunately, you don’t actually detail how - it’s just an
implication.

If you believe that extremes are being blindly taken, you should call it
out. That’s part of the community process, designed to get feedback. It may
be that people disagree with you, but that doesn’t mean you can’t or
shouldn’t feel free to call it out. If you find people are constantly
disagreeing with you, that might help provide an opportunity to explore if
maybe you’re the one in the wrong. Either way, the first step to that is to
be direct about it; merely implying things helps no one and hurts real
progress.

Furthermore, people with some clout tend to shut down all
> counterarguments when taking either extreme position, creating situation
> there only their own position is heard, making the entire "community"
> aspect an illusion.


Without wanting to tone police, you could have achieved a lot more without
this closing paragraph. You have intimated as much before, and it’s been
responded to before. Repeating it here undermines it for those who’ve seen
those past discussions, and misleads those who haven’t.

There hasn’t been a “shouting down” of arguments; different people have
disagreed in the past, and presented more or less compelling arguments
for their positions. Opinions were heard, facts were considered, and a
result was chosen. Just because some arguments were poor doesn’t mean they
weren’t considered, and just because some priorities were different doesn’t
mean they aren’t still important as well.

I hope you can see how messages like this can result in future arguments
being undermined. On the whole, it’s all fundamentally agreeable - yes, the
process for action is designed to be transparent, designed to consider all
the details so as not to be blind, to consider community feedback to not be
hasty, and to ensure consistency and fairness. Either it’s a position that
adds no value, because it’s restating things, making it easier to ignore
future ideas as being equally reductive and repetitive, or it’s a position
that comes off shady, by trying to hint that these things aren’t happening
without providing concrete examples.
___


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-08 Thread Ryan Sleevi via dev-security-policy
On Thu, Nov 8, 2018 at 6:24 AM Nick Pope via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Following on from Waynes earlier positive statement:
>
> "I look forward to more open and constructive discussions aimed at
> improving
> the quality and transparency of CA audits, regardless of the audit scheme."
>
> I believe centring discussion on one particular auditor is not progressing
> things with regards generally improving audits.


That sounds very much like you don’t believe that either accountability or
trustworthiness is necessary for auditors. Statements like this, which
actively promote overlooking fundamentally defective application of the
existing requirements, call the ETSI model itself into disrepute. I
realize the opposite is your goal, but I hope you can understand how such
an approach is fundamentally and deeply offensive to the trust ecosystem.

Perhaps put differently: Do you believe that the audit criteria under ETSI
are sufficiently clear to set forward an expectation that certificates
conform to a profile?

If no, we should not use or accept ETSI audits until such a time as the
issues are resolved.
If yes, then it is absolutely appropriate and necessary to discuss why
specific auditors are failing to deliver on that.

There is no middle ground, and this is not about wishlists. This is about
fundamentally not meeting base level expectations.


>
> I understood from my EU colleagues that Ryan and Wayne had undertaken to
> produce a "wish list" covering requirements that they had on audits.  We
> can then we can then discuss this with the European stakeholders and see
> how we could best answer the wish list.  This wish list would be most
> helpful if it builds on the measures already proposed in TS 119 403-2 and
> its parent standards which provide specific requirements on all European
> audits for PTC.  I understand also that we undertook to meet with WebTrust
> in December to get an understand of each other schemes which could lead to
> resolution of any alignment issues.


This is entirely unrelated and unproductive to even suggest. Yes, ETSI
should and must improve overall. But with regards to the current
requirements and auditors such as TUVIT failing to appropriately apply
them, that’s an issue that needs discussion and resolution now, and in
public. I am glad the ESI TC recognizes there is room for improvement, just
as there is room for improvement with WebTrust, but it is inaccurate to
conflate that room for improvement with current failures in the
application. This is not about not having things that are wanted - this is
about not having the basics that are already required.
___


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-06 Thread Ryan Sleevi via dev-security-policy
On Tue, Nov 6, 2018 at 4:48 AM Wiedenhorst, Matthias via
dev-security-policy  wrote:

> Section 4.5 of ISO/IEC 17065 states that in general all non-public
> information shall be regarded as confidential. However, that section also
> allows that CAB and TSP can agree between each other about information not
> to be regarded as confidential.
> Our interpretation (which we think is aligned with current interpretation
> of general data protection legislation informally stated as “everything
> which is not explicitly required/allowed is forbidden”), indeed follows a
> minimum principle. So you are right, with consent of the TSP it is possible
> and we are willing to request such consent in future.
> We suggest the establishment of general rules / requirements valid for all
> auditors instead of individual / different commitments. These rules could
> be on the content of public audit reports and on the roles of audits during
> security incidents including reporting and should allow browsers and the
> interested community to obtain the necessary information to get a good
> picture on the incident and the assessment of the auditor.
>

I fundamentally disagree with this approach, and believe that rather than
creating a common baseline, this would lower the bar for security and
reliability. This is because auditors, as TUVIT is doing here, would argue
"We were only doing what we were required to do" - without recognizing
fundamentally that things evolve, and auditors need to evolve with them.

Consider the examples given of other auditors that have found ways to
disclose more information to relying parties and browsers. They've shown
that there's the ability and necessity to step up to meet community
expectations. All auditors should be encouraged to do so - to constantly
improve. To the extent we specify a common baseline, it will forever be
that - the lowest bar, but not reflective of expectation or need.

A CAB wishing to provide high assurance to the users of its reports, and to
the TSP-using ecosystem, would constantly be looking for ways to improve
the assurance, and to publicize those best practices, so that other CABs
may learn and integrate such practices.


> 3. The argument that T-Systems has 3-months to revoke these certificates -
> while I understand that under ETSI TSPs have 3 months to correct minor
> non-conformities, using that as an excuse to ignore CAB Forum revocation
> requirements is unacceptable, and perhaps explains why we see such poor
> compliance with this requirement. If this is indeed the accepted
> interpretation (please confirm), then I will look for ways to fix this via
> Mozilla policy.
>
> - Wayne
>
> From the ETSI certification point of view, this is the interpretation.
> Failure to revoke within the required timeframe is clearly a
> non-conformity. Nevertheless, if the non-conformity has been rated as minor
> non-conformity (due to the individual circumstances), there would be a
> period of 3 month before the corresponding ETSI certification would be
> withdrawn.
> However, we do see your concern and it is a very reasonable one. Using
> this construct to deliberately delay revocation is not at all desired. How
> could we deal with it?
>

One is to reconsider how you're classifying minor non-conformities. That
classification certainly does not align with industry best practice - as
reflected in the Root Program agreements that exist - so how do you, as the
CAB, defend those classifications?

Another is to recognize that the CAB (and/or SB, depending) must be
notified of anything the TSP changes that may affect the conformity, and
there is a public interest in making that information available. Having the
CAB notify the program of any non-conformities found, both those that
affect certification and those that do not ("minor" non-conformities),
would help ensure the necessary public confidence in ETSI, and be a step
above what WebTrust provides.


> One possibility would be for the CAB to mandatorily require the TSP  to
> publish the failure to adhere to the certificate revocation timeline
> requirement as bug in the Bugzilla (as already required from the TSP by
> Mozilla Policy) before the rating as minor non-conformity is possible.
> Without publication, it has to be rated as major non-conformity and hence
> an existing ETSI certificate will be withdrawn. This would facilitate the
> interest of transparency and would allow Mozilla, if regarded necessary, to
> take further action regardless of a still existing ETSI certificate. In the
> past, the ETSI certificate was not regarded as the primary audit
> deliverable by root store operators; this is the audit attestation letter.
> Combined with number 2 above, in such the case the next audit attestation
> letter would also state the failed revocation deadline as non-conformity.
>

This is somewhat self-contradictory. For years, we've been told by ETSI
CABs and audited CAs that the value is in the certification, and the
certification has consistently 

Re: Questions regarding the qualifications and competency of TUVIT

2018-11-05 Thread Ryan Sleevi via dev-security-policy
On Mon, Nov 5, 2018 at 3:28 PM Nick Pope via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> It is very unfortunate that at this time the owners of root store programs
> openly criticise one of the main auditors working on improvements to
> European based audits.  After a number of years of audits of European CAs
> based on ETSI EN 319 403 being recognised as meeting the requirements of
> publicly trusted certificates, ETSI is working and with European auditors
> on further updates to improve the acceptability of European audits to root
> store programs.   It seems to be going against this initiative to suggest
> draconian measures of excluding TUVIT audit from the root programs whose
> impact are totally out of proportion the possible impact of the issues
> raised.
>
> I suggest that the providers of root stores to return to the negotiations
> for further improving European based audits that I understood had started
> at the recent CA/Browser forum.  The current approach of making public
> criticisms against those who are trying to make improvements to the
> European CA audits is making the current direct discussions with root store
> providers difficult to progress.  So unless it is the objective to
> deliberately exclude European CAs from their root programs, which I believe
> is not the case, I suggest that we return to the direct discussions with
> the providers of root store on how to further improve European audits so
> that can better take into account the root program requirements.
>
> Nick Pope, Vice-Chair ETSI TC on Electronic Signatures and [Trust]
> Infrastructures
>

Respectfully, comments like this unfortunately bring even greater concern
with respect to the ETSI process.

A significant number of improvements have been made to the ecosystem by
recognizing when mistakes are made and taking steps to improve. The
ecosystem has now seen both TUVIT and the Vice-Chair of the ETSI TC on ESI
instead suggest these are not mistakes and downplay their significance.
This prevents
meaningful improvements, because it fails to recognize that there exist
fundamental issues.

I am all in favor of ensuring that all accepted audit schemes meet the
necessary level of robustness for the community. Much work has been done
with WebTrust, through their active engagement with Browsers to ensure that
the needs of the consumers are being met. ETSI has only recently begun to
recognize these issues, and while we are indeed seeing the beginnings of
fruitful engagement, we should not suggest that such seeds are a reasonable
justification to ignore gross negligence in security-critical functions OR
the deeply concerning dismissiveness of those concerns.

I'm sure you can understand it would be deeply offensive if, on the basis
of such collaborations with WebTrust, it were suggested that no WebTrust
auditor could be disqualified. Similarly, I'm sure you can understand it
would be deeply offensive to the purpose, values, and goals to suggest that
because CAs participate in m.d.s.p., they should be excluded from
accountability. At the end of the day, browsers are accountable for
ensuring their users are
secure, and regardless of how productive our conversations may be, if the
level of security is not met, it's entirely appropriate and necessary to
take steps to protect users.

I hope that, as Vice-Chair of the ETSI TC on ESI, and on behalf of
auditors, you will carefully introspect on how these statements sound when
compared with those of CAs that have been distrusted due to
gross negligence and misissuance. Failures to acknowledge or recognize the
problem, failures to have implemented reasonable steps to resolve such
issues, repeated failures to achieve the necessary level of security, do
more to harm the brand of that organization and its products than
statements suggesting distrust.
___


Re: Clarifications on ETSI terminology and scheme

2018-11-02 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 2, 2018 at 1:31 PM clemens.wanko--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> II. Assessment and certification statements:
> - ETSI requires the auditing of the past period as well as of the current
> operations status:
> o In chapter 7.9 of the ETSI EN 319 403, it is clearly stated that the
> operation records shall be audited (that will be detailed within a future
> updated version of ETSI EN 319 403. On top of that it is planned to make
> the ETSI TS 319 403-2 binding in order to have an even better definition
> what is required to be audits for the past period).
>

However, as has been privately noted in the past, a CAB may satisfy this
requirement by examining a single operational record or issued certificate,
without regard to any other actions. Or they may not consider any
certificates at all, and merely consider other operational aspects; for
example, that the policy management authority met to review the policy
documentation. While you may argue that a well-behaved CAB would not
undertake such a limited examination, I hope you can provide a citation if
this is, in fact, not permitted within the scheme.

I do not believe that having 119 403-2 binding will, in and of itself,
improve this situation. This is perhaps a difference in fundamental
approach with regards to the framework used by ISO/IEC 17065 and the
necessary and desired output. More work would be needed to resolve this in
a satisfactory way.

However, independent of whatever normative requirements may exist within
the framing of 119 403-2, 319 403, or more broadly, ISO/IEC 17065, we must
separately assess whether or not the result meets community
expectations. As we've seen, there are auditors who have found ways to
achieve the necessary transparency and assurance despite 319 403 not
formally requiring this. Similarly, in the context of WebTrust, we've seen
there are auditors who meet not just the letter, but the spirit, and there
are auditors that ostensibly fail to meet both.

With regards to past activities - or to assessing future ones - I hope
we can agree that repeated failures by a TSP demonstrate a failure of the
CAB to appropriately review and supervise. As the CAB and Supervisory Body
(in the context of qualified services, as that is the only place it
operates ex ante) both serve to review those changes, output which is
non-conforming demonstrates not just a failure by the TSP, but also that of
the CAB.


> Let’s keep in mind - please! - we are all pulling the same rope for more
> security more confidence and reliability. We should take extra care to pull
> in the same direction - all together - and invest our precious energy in
> improving the ecosystem rather than blaming each other with the high risk
> of damaging it.


While this is a compelling call, within the context of CABs, a CAB that
does not "pull their weight" is in fact a CAB that "drags everyone down";
whether you prefer to think of that, in the context of this metaphor, as
pulling in the opposite direction or, perhaps more aptly,
'getting in the way,' the role of CABs in the CA ecosystem is not one of
receiving participation stickers for showing up. They perform a necessary
and critical function, and the failure to adhere to those expectations is
reason to have meaningful discussion and take steps to correct the issue.

In the CA and auditor ecosystem, those steps include matters like
distrusting CAs or disqualifying auditors. It is not acceptable, nor has it
ever been, to suggest that there are no consequences for misissuance or
dereliction of duty, whether on behalf of the CA or the auditor. When an
entity poses or introduces risk to the ecosystem, the ecosystem
appropriately responds by cutting that risk out. That is not to say there
are not opportunities to improve, but it would be irresponsible to suggest
that we ignore the active damage being caused in the name of comity and
camaraderie - to do so is foolishness.
___


Re: Questions regarding the qualifications and competency of TUVIT

2018-11-02 Thread Ryan Sleevi via dev-security-policy
On Fri, Nov 2, 2018 at 10:24 AM Wiedenhorst, Matthias via
dev-security-policy  wrote:

> Auditor and Reviewer, as stated on
> https://www.tuvit.de/fileadmin/Content/TUV_IT/zertifikate/en/AA2018072001_Audit_Attestation_E_Deutsche-Telekom-Root-CA-2_20180718_s.pdf
> - the parties tasked with ensuring that the audit is meaningfully able to
> ensure the criteria were met and the testing procedures were able to meet
> those requirements.
>
> Auditors and reviewers need to be distinguished: ISO/IEC 17065 §7.5.1
> forbids that the person(s) performing the review is involved in the audit
> process.
>

Indeed - and yet do you agree that the Reviewer is tasked with reviewing
the audit methodology and artifacts to ensure the audit appropriately meets
the objectives expected and required? A multi-party failure to provide the
necessary assurance is just that - a multi-party failure.

>Issue A) As part of their initial response to my complaint, TUVIT, by way
> >of Matthias Wiedenhorst (Head of Certification Division TSP) stated "As a
> >very first, quick cross check with our audit evidence record, I can only
> >say that we did check issued certificates from within the declared period
> >and that they did contain the proper qcStatement-6.3 / id-etsi-qcs-QcType
> >3". However, this statement was in direct conflict with the TSP's own
> >investigation and incident report, provided at
> >https://bugzilla.mozilla.org/show_bug.cgi?id=1498463#c3 , which states
> the
> >mistake was introduced during the development of support - that is, no
> such
> >properly issued certificates were issued.
>
> We do not understand why the important fact that Matthias was not in
> office and replied what he remembered from an audit that was month ago is
> not mentioned here. In addition, he replied that he would verify and come
> back as soon as possible (when he is in office again). That actually
> happened, see below.
>
> Wrong or misleading information - which was only corrected upon specific
> questioning and a request for proof or evidence of the claim - has been
> used to disqualify CAs in the past. This statement was made after the TSP
> had themselves already investigated and confirmed this was not possible.
>
> The same standard being applied to CA incident reports is being proposed
> here - that incomplete and improper investigations raise serious questions.
>
> Let’s stay with present case (not with the past): In his email, Matthias
> said that he is not in his office and will begin to investigate the
> situation as soon as possible. We think that it is clear, that further
> information contained in the email cannot be based on the result of the
> investigation. Back in his office after looking into the audit logs and
> verifying the qcStatement he gave the proper answer.
>

I appreciate the attempt to narrowly scope the issue, but that is equally
an attempt to deflect or ignore the ample set of past precedent and
expectations. As such, I reject the premise that this should be considered
without regard to past failures by CAs or other auditors - as an auditor
being entrusted to report truthfully and faithfully to the community about
a CAs compliance with its own CP, CPS, and the appropriate supervisory
framework, auditors are expected to consider the best practices and
precedents in their activities and actions.

With respect to your suggestion that the information cannot be relied
upon, I'm more than happy to provide the full e-mail chain if there's some
consideration given to misinterpretation. However, I do not believe the
reply at all indicates that the information contained was not reliable or
may be counter-factual and to be corrected later. Yes, Matthias stated they
were out of office - but then immediately began with a remark that "As a
very first, quick cross check with our audit evidence record, I can only
say that we did check issued certificates".

I can understand the argument being made here that, on a more detailed
examination of your audit evidence record, you discovered that it was not,
in fact, checked. However, that raises significant concerns that a quick
check can lead to a completely opposite conclusion, particularly for a
supposedly-skilled practitioner.


> As said before, we are using tools in the audit process. The sentence
> about lint tools should be seen as additional information and nothing else.
>

Yet you've failed to describe what these tools encompass, beyond what is
readily available off the shelf. Further, in your description of the
methodology used, it was clear that human visual inspection without regard
to the actual specification was performed. While you may have used a tool
to, say, dump the DER-encoded contents into a structural representation,
the procedures for examining that structural representation against the
profile are clearly and significantly deficient.
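
To illustrate the distinction - as a minimal sketch of an assumed approach,
not a description of any CAB's actual tooling, with the function name and
the byte-matching shortcut invented for the example - dumping the DER into
a structural view still leaves the reviewer to decode the qcStatements
extension (OID 1.3.6.1.5.5.7.1.3) and compare its contents against the
ASN.1 module in EN 319 412-5, which is the step a programmatic check makes
explicit:

    from cryptography import x509

    QC_STATEMENTS_OID = x509.ObjectIdentifier("1.3.6.1.5.5.7.1.3")
    # DER encoding of id-etsi-qcs-QcType qct-web (0.4.0.1862.1.6.3),
    # precomputed for this sketch.
    QCT_WEB_DER = bytes.fromhex("060704008e46010603")

    def appears_to_assert_qct_web(pem_bytes: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            ext = cert.extensions.get_extension_for_oid(QC_STATEMENTS_OID)
        except x509.ExtensionNotFound:
            return False
        # The library exposes extensions it does not model as raw DER; a
        # crude byte-level search is used here, whereas a real conformance
        # check would decode the full QCStatements SEQUENCE against the
        # ASN.1 module.
        return QCT_WEB_DER in ext.value.value

A human reading a hex dump has to perform that OID-by-OID comparison
mentally, which is precisely where the described procedure fell short.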


> Considering the significance of misencoding of profiles - which has lead
> to critical misissuance and security risk (see, for example, 

Re: Clarifications on ETSI terminology and scheme

2018-10-31 Thread Ryan Sleevi via dev-security-policy
On Wed, Oct 31, 2018 at 4:05 PM Dimitris Zacharopoulos 
wrote:

> > For example, when we talk about expectations of CAs, we don't talk about
> > what they 'could' do, we talk about what they MUST do, because at the end
> > of the day, that's the bar they're being held to. It's certainly true
> that
> > a given TSP may go above and beyond some bar, but that doesn't mean we
> can
> > say "CAs do X", because they aren't required to. The same logic applies
> in
> > the discussion of CABs - it does not make sense to discuss how they
> 'could'
> > interpret it, but rather, what they MUST do.
>
> ISO 17065 and ETSI EN 319 403 include normative requirements for CABs
> just as the Baseline Requirements and ETSI EN 319 411-1 do for TSPs. I
> don't understand why you think it is any different for CABs. For
> example, the baseline requirements mandate that "The CA SHALL maintain a
> continuous 24x7 ability to respond internally to a high-priority
> Certificate Problem Report, and where appropriate, forward such a
> complaint to law enforcement authorities, and/or revoke a Certificate
> that is the subject of such a complaint".
>
> It doesn't specify exactly how a CA shall respond internally to such a
> request. It must have a process and the CAB will evaluate it.
>
> Similarly for CABs, under 7.13.1 of 17065 "The certification body shall
> have a documented process to receive, evaluate and make decisions on
> complaints and appeals. The certification body shall record and track
> complaints and appeals, as well as actions undertaken to resolve them".
>
> It doesn't specify exactly how the CAB shall fulfill this requirement
> but each competent CAB must demonstrate to their NAB that they have a
> documented process that fulfills this criteria, is effective and efficient.
>
> Even in the introduction section of 17065, you see the same use of words
> SHALL, SHOULD, MAY, CAN as described in RFC2119.
>

We've now so fully drifted from the original discussion that it's clear to
see we're talking about very different things, and thus confusion is
understandable.

For context, this particular discussion began in
https://groups.google.com/d/msg/mozilla.dev.security.policy/Q9whve-HJfM/niS5Y2f0AQAJ
, and in particular, the discussion about "If the CAB" does X. The point of
my criticism of this statement has been that the CAB is not required to do
X, they may do X, but they aren't mandated to do X.

I believe you interpreted my remark of "no notification will be made to
relying parties" as meaning that it's impossible to notify RPs, and so you
raised an example of how a CAB might hypothetically do so. I was not trying to
state that - merely, that CABs do not have a normative requirement to
notify RPs of this change, and so as a matter of "Can you rely on this",
the answer is "No". Yes, some CABs may, but if a CAB doesn't, or if they
make mistakes, they've not violated any requirements.

And that's just as equally applicable to CAs. When a CA MUST "have a
process", we cannot make assumptions or rely on what that process does or
how it might result. When a CAB MUST "have a process", we cannot make
assumptions or rely on what they do in that process. I would hope we're in
violent agreement there.

> It's the same way that when we talk about the BRs, it's pointless to talk
> > about how some CAs may go above and beyond in their CPS, when discussing
> > the CA ecosystem or a particular (different) CA's misissuance. What
> matters
> > is the baseline expectations here.
>
> Some baseline expectations are not overly prescriptive in the BRs for
> CAs nor in the "BRs for CABs" (i.e. ISO 17065 and ETSI EN 319 403).
> Information Security Management Systems can have significantly different
> implementations but must maintain specific principles. This is where
> illustrative controls and recommended practices come in so that the
> auditors can evaluate if the principles are met but it's always subject
> to the opinion of the CAB (when auditing TSPs), and to the NAB (when
> assessing CABs).
>

My attempt to explain by analogy, to hopefully get us talking about the
same thing, has unfortunately led to even more divergence. I hope that,
with the above clarifications to what we're originally talking about, we
can get back on the same page.

That said, because it contains some confusion, it at least bears
highlighting. As mentioned during the recent CABForum, illustrative
controls do not serve the purpose you are describing, nor do
recommended practices. They have no 'force' to ensure that things must be
'equivalent-or-better'. They're not even hints. They're just that -
illustrative.

One cannot and should not make any assumptions that the existence of
illustrative examples means that you will get the same results. It's
entirely valid to wholly ignore those illustrative controls and recommended
practices, end up with something completely opposite, and yet have fully
fulfilled the requirements.

This is why prescriptive controls are more 

Re: Clarifications on ETSI terminology and scheme

2018-10-31 Thread Ryan Sleevi via dev-security-policy
On Wed, Oct 31, 2018 at 12:55 PM Dimitris Zacharopoulos via
dev-security-policy  wrote:

>
>
> On 31/10/2018 4:47 μμ, Ryan Sleevi via dev-security-policy wrote:
> > There's a lot of nitpicking in this, and I feel that if you want to
> > continue this discussion, it would be better off in a separate thread on
> > terminology. I disagree with some of the claims you've made, so have
> > corrected them for the discussion.
> >
> > I would much rather keep this focused on the discussion of TUVIT as
> > auditors; if you feel that the nitpicking is relevant to that discussion
> > (which I don't believe anything you've said rises to that level), we
> should
> > certainly hash it out here. This is why I haven't forked this thread yet
> -
> > to make sure I've not misread your concern. However, if there's more
> > broadly a disagreement, but without impact to this discussion, we should
> > spin that out.
>
> Indeed, my comments were more related to the ETSI terminology so I
> created a new thread. More answers in-line.
>
> >
> > On Wed, Oct 31, 2018 at 7:11 AM Dimitris Zacharopoulos  >
> > wrote:
> >
> >> On 30/10/2018 6:28 μμ, Ryan Sleevi via dev-security-policy wrote:
> >>> This establishes who the CAB is and who the NAB is. As the scheme used
> in
> >>> eIDAS for CABs is ETSI EN 319 403, the CAB must perform their
> assessments
> >>> in concordance with this scheme, and the NAB is tasked with assessing
> >> their
> >>> qualification and certification under any local legislation (if
> >>> appropriate) or, lacking such, under the framework for the NAB applying
> >> the
> >>> principles of ISO/IEC 17065 in evaluating the CAB against EN 319 403.
> The
> >>> NAB is the singular national entity recognized for issuing
> certifications
> >>> against ISO/IEC 17065 through the MLA/BLA and the EU Regulation No
> >> 765/2008
> >>> (as appropriate), which is then recognized trans-nationally.
> >> Some clarifications/corrections because I saw some wrong usage of terms
> >> being repeated.
> >>
> >> A CAB MUST perform their assessments applying ISO/IEC 17065 AND ETSI EN
> >> 319 403 AND any applicable legislation (for EU CABs this includes
> >> European and National legislation).
> >>
> > Dimitris, I'm sorry, but I don't believe this is a correct correction.
> >
> > EN 319 403 incorporates ISO/IEC 17065; much like the discussion about EN
> > 319 411-2 incorporating, but being separate from, EN 319 411-1, the
> > structure of EN 319 403 is that it incorporates normatively the structure
> > of ISO/IEC 17065, and, at some places, extends.
> >
> > Your description of the system is logically incompatible, given the
> > incompatibilities in 319 403 and 17065.
> >
> > You're correct that any applicable national legislation applies, with
> > respect to the context of eIDAS. However, one can be assessed solely
> > against the scheme of EN 319 403 and 319 411-1, without going for
> qualified.
>
> I have to disappoint you and insist that your statement "As the scheme
> used in eIDAS for CABs is ETSI EN 319 403, the CAB must perform their
> assessments in concordance with this scheme, and the NAB is tasked with
> assessing their qualification and certification under any local
> legislation (if appropriate) or, lacking such, under the framework for
> the NAB applying the principles of ISO/IEC 17065 in evaluating the CAB
> against EN 319 403"
>
> and specifically the use of "or" in your statement, is incorrect. NABs
> *always* assess qualification of CABs applying ISO/IEC 17065 AND ETSI EN
> 319 403 AND any applicable legislation. Only Austria is an exception (if
> I recall correctly) because they don't apply ETSI EN 319 403 for CAB
> accreditation.
>
> Then, each CAB is accredited for specific standards (e.g. ETSI EN 319
> 411-1, 411-2, 421, eIDAS regulation and so on).
>
> ISO 17065 and ETSI EN 319 403 apply only to CABs and ETSI EN 319 411-1,
> 411-2 apply only for TSPs. 411-2 incorporates 411-1 and 401 but does not
> incorporate 403 or 17065. They are completely unrelated.
>

I'm afraid you're still misunderstanding and, I believe, misstating.

It is not ISO/IEC 17065 AND EN 319 403 that a CAB is assessed against.
They're assessed against EN 319 403, which *incorporates* ISO/IEC 17065.
This is the same way that when a TSP is assessed against ETSI EN 319 411-2,
they're not also (as in, separate audit report) assessed against EN 319
411-1; EN 319 411-2 *incorporates* EN 319 411-1.

Now, the CAB may ALSO be accredited for ISO/IEC 17065

Re: Questions regarding the qualifications and competency of TUVIT

2018-10-31 Thread Ryan Sleevi via dev-security-policy
On Wed, Oct 31, 2018 at 11:43 AM Wiedenhorst, Matthias via
dev-security-policy  wrote:

> · Since January 2018, T-Systems issued EV certificates with an
> incorrect qcStatement. T-Systems was made aware of the problem in October
> 2018, i.e. for about 9 month the error was not detected/reported
> https://bugzilla.mozilla.org/show_bug.cgi?id=1498463#c3
> T-Systems fixed the error in a timely manner so that certificates now
> contain the correct qcStatement.
>

T-Systems identified a deficiency within their systems and made a change on
October 5, but did not notify their CAB and SB until October 16.

That does not seem consistent with the requirements of EN 319 401, section
7.9.


> · TUVIT performed an audit of T-Systems according to ETSI policies
> EVCP and QCP-w in the beginning of 2018. During the audit the incorrect
> coding of the qcStatement was not detected.
>

Yes. I believe this is a significant issue, given the assessment report.


> · In several emails, we answered to his complaint, explained our
> procedures and justified the classification of the encoding error as minor
> (non-critical) non-conformity.
> For non-critical non-conformities, our certification requirements foresee
> a maximum period of 3 month for remediation before the certification shall
> be withdrawn. (see also ETSI EN 319 403, section 7.6 b) Based on the
> classification as minor, we do not see a necessity for revocation.
> That’s about the relevant facts.
> Let me now reply in detail to Ryans private contribution:
>
> >I would like to suggest that consideration be given to rejecting future
> >audits from TUVIT and from that of Matthias Wiedenhorst and Dr. Anja
> >Widermann, for some period of time. I would suggest this period be at
> least
> >one year long; however, given the technical details of ETSI accreditation,
> >believe a period of three years may be more appropriate.
>
> Dr. Anja Wiedemann (please mind the correct spelling) was not part of the
> audit team. We do not understand why her name is mentioned here.
> One / three years exclusion from audit sounds like a punishment. We do not
> understand where this time frame comes from and why such a time frame is
> needed.
>

Auditor and Reviewer, as stated on
https://www.tuvit.de/fileadmin/Content/TUV_IT/zertifikate/en/AA2018072001_Audit_Attestation_E_Deutsche-Telekom-Root-CA-2_20180718_s.pdf
- that is, the parties tasked with ensuring that the audit meaningfully
established that the criteria were met and that the testing procedures were
adequate to those requirements.

The time frame selected is one that has been consistently used in the past
regarding questions about audits. Unfortunately, there is no suitable means
of objectively determining whether or not an auditor is sufficiently
competent at remediating problematic audits. Past procedures have
resulted in indefinite suspensions for some auditors, or temporary
suspensions of their recognition. The choice of three years, rather than
one year, is based on the fact that we have now seen auditors who were not
accredited perform audits against the frameworks, later become accredited,
and retroactively issue reports covering their activities prior to
accreditation. This does not instill confidence in the ETSI approach to
auditor supervision, and thus the longer period is to ensure that no
in-process audits are retroactively certified upon the expiration of the
period. Three years thus aligns with both the 1 year (CA/B Forum) and 2
year (eIDAS) time periods in ensuring that such a possibility is not
technically achievable.


> >If there is a belief that a TSP has failed to meet the requirements of
> >their accreditation, EN 319 403 describes a process for which complaints
> >may be made to either the TSP or to the CAB. This complaint process is
> >further expanded upon in ISO/IEC 17065, which 319 403 incorporates. This
> >same process also applies when there have been mistakes by the CAB to
> >adhere to its scheme requirements under EN 319 403 - a complaint may be
> >made with either the CAB or the NAB regarding the CAB's accreditation.
>
> TSPs are not accredited but certified, ETSI EN 319 403 §7.13 does not make
> any additional requirements on complaint procedures but just reference
> ISO/IEC 17065. (The requirements from ISO/IEC 17065 [1], clause 7.13 shall
> apply.) In particular, no procedures for complaints to TSPs or NABs are
> defined (only to CABs).
>

4.1.2.2 (j) provides for the client (the TSP) to inform the CAB of any
complaints made known to it. You're correct that procedures for complaints
made directly to the TSP are not normatively specified by ISO/IEC 17065 or
by EN 319 403; however, allowance is made that a client (the TSP) may
have knowledge and records of complaints outside the scope and purview
of the CAB's own complaints process.

With respect to the NAB process,
https://www.dakks.de/sites/default/files/71_sd_0_009_e_beschwerdeverfahren_2018_v1.0_0.pdf

Re: Questions regarding the qualifications and competency of TUVIT

2018-10-31 Thread Ryan Sleevi via dev-security-policy
There's a lot of nitpicking in this, and I feel that if you want to
continue this discussion, it would be better off in a separate thread on
terminology. I disagree with some of the claims you've made, so have
corrected them for the discussion.

I would much rather keep this focused on the discussion of TUVIT as
auditors; if you feel that the nitpicking is relevant to that discussion
(which I don't believe anything you've said rises to that level), we should
certainly hash it out here. This is why I haven't forked this thread yet -
to make sure I've not misread your concern. However, if there's more
broadly a disagreement, but without impact to this discussion, we should
spin that out.

On Wed, Oct 31, 2018 at 7:11 AM Dimitris Zacharopoulos 
wrote:

> On 30/10/2018 6:28 μμ, Ryan Sleevi via dev-security-policy wrote:
> > This establishes who the CAB is and who the NAB is. As the scheme used in
> > eIDAS for CABs is ETSI EN 319 403, the CAB must perform their assessments
> > in concordance with this scheme, and the NAB is tasked with assessing
> their
> > qualification and certification under any local legislation (if
> > appropriate) or, lacking such, under the framework for the NAB applying
> the
> > principles of ISO/IEC 17065 in evaluating the CAB against EN 319 403. The
> > NAB is the singular national entity recognized for issuing certifications
> > against ISO/IEC 17065 through the MLA/BLA and the EU Regulation No
> 765/2008
> > (as appropriate), which is then recognized trans-nationally.
>
> Some clarifications/corrections because I saw some wrong usage of terms
> being repeated.
>
> A CAB MUST perform their assessments applying ISO/IEC 17065 AND ETSI EN
> 319 403 AND any applicable legislation (for EU CABs this includes
> European and National legislation).
>

Dimitris, I'm sorry, but I don't believe this is a correct correction.

EN 319 403 incorporates ISO/IEC 17065; much like the discussion about EN
319 411-2 incorporating, but being separate from, EN 319 411-1, the
structure of EN 319 403 is that it incorporates normatively the structure
of ISO/IEC 17065, and, at some places, extends.

Your description of the system is logically incompatible, given the
incompatibilities in 319 403 and 17065.

You're correct that any applicable national legislation applies, with
respect to the context of eIDAS. However, one can be assessed solely
against the scheme of EN 319 403 and 319 411-1, without going for qualified.


> Also, a NAB issues "Accreditations" to CABs and not "Certifications".
> Also, a CAB issues "Certifications" to TSPs and not "Accredidations".
> So, T-Systems is "Certified", not "Accredited".
>

Fair. If you replace these words, does it change the semantic meaning of
the message at all? I don't believe so.


> > As the framework utilizes ISO/IEC 17065, the complaints process and
> > certification process for both TSPs and CABs bears strong similarity,
> which
> > is why I wanted to explore how this process works in function.
> >
> > Note that if either the TSP is suspended of their certification or
> > withdrawn, no notification will be made to relying parties.
>
> This depends on applicable legislation and the implementation of ISO
> 17065 sections 4.6, 7.11.3 by each CAB. Some CABs have a public
> repository where RPs can query the validity of TSP Certifications so if
> a Certification is Suspended or Revoked, it will be displayed
> accordingly. I don't think WT has a notification scheme for RPs either.


> If the TSP publishes the seal URL or the CAB's URL to the TSP
> Certificate (which is not mandatory), RPs can manually check the
> validity of the TSP Certification.
>

I don't think this is a valid criticism, particularly in the context of the
specific case we're speaking about. I'm speaking about what's required -
you're speaking about what's possible. Many things are possible, but what
matters for expectations is what is required. 7.11.3 simply defers to the
scheme to specify, which EN 319 403 does not as it relates to this
discussion.


> Note that Supervisory Bodies (only related to eIDAS) have no authority
> for TSP Certifications under ETSI EN 319 411-1, but only ETSI EN 319
> 411-2. In all cases of Certification (ETSI EN 319 411-1 or ETSI EN 319
> 411-2), the NAB is assessing the CAB. In most EU countries, the NAB IS
> NOT the Supervisory Body.
>

The NAB is still responsible for the oversight of the CAB's execution of EN
319 403, and the investigation therein. The SB suspends qualified status,
but the NAB ensures that the CAB is meeting the requirements of the
certification scheme (EN 319 403) as part of supervising the accreditation
of that CAB.


> Similarly with TSPs losing their Certification, if a CAB loses their
>

Re: Questions regarding the qualifications and competency of TUVIT

2018-10-30 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 30, 2018 at 5:08 PM Erwann Abalea  wrote:

> Not seeing this on Google Groups :/
>
> Le mar. 30 oct. 2018 à 18:28, Ryan Sleevi  a
> écrit :
>
>>
>>
>> On Tue, Oct 30, 2018 at 1:20 PM Erwann Abalea via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> Le mardi 30 octobre 2018 17:29:14 UTC+1, Ryan Sleevi a écrit :
>>> [...]
>>> > Note that if either the TSP is suspended of their certification or
>>> > withdrawn, no notification will be made to relying parties. The closest
>>> > that it comes is that if they're accredited according to EN 319 411-2
>>> > (Qualified Certificates), the suspension/withdrawing will be reported
>>> to
>>> > the Supervisory Body, which will them update the Qualified Trust List
>>> for
>>> > that country and that will flow into the EU Qualified Trust List.
>>>
>>> Quick correction here: this certification suspension/withdrawal does not
>>> automatically imply a qualification suspension/withdrawal by the SB. The SB
>>> is the sole responsible of the TL content, and can ignore the certification
>>> suspension (or certification success, failure, absence, or whatever).
>>>
>>
>> Got a citation?
>>
>
> Other that the eIDAS regulation? No.
> What you wrote would mean that the CAB is finally responsible of the
> Qualified status of a TSP. And this is wrong.
>

Perhaps it was poorly stated, but I think we're in agreement that the
Supervisory Body ultimately makes the decision regarding both the addition
to and removal from the qualified trust list within that country. That
said, in re-examining Article 20(3) and Article 17, I agree, it's clear
that the suspension of accreditation does not itself trigger an obligation
to suspend certified status.


Re: Questions regarding the qualifications and competency of TUVIT

2018-10-30 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 30, 2018 at 4:37 PM Erwann Abalea via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > On what basis do you believe this claim is to be made? By virtue of
> > asserting qcStatement-1? If qcStatement was mis-encoded, or qcStatement-1
> > was absent, do you believe the same?
>
> qcStatement-1 is not mandatory, by law, if the policyId is QCP or QCP+, or
> if there's a matching Qualification stating that the certificate is
> Qualified. Implementing decision 2015/1505 defines the common EU rules, and
> I haven't found the specific German rules (they're asserted in the German
> TL).
> 2015/1505 can be found at
> https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32015D1505
>
> My mistake was that I looked at the Sie and didn't check if there was a
> QualificationsExtension node.
>

Ah hah! Thanks for that context! That means I should re-examine the case of
Certinomis in the context of
https://groups.google.com/d/msg/mozilla.dev.security.policy/x3s3CSKdIlY/R1C4JAvOBQAJ
, since I was downplaying the significance based upon the lack of asserting
id-etsi-qcs-QcCompliance in the QCStatements, without consideration of the
Certificate Policies.

> I'm not sure the ambiguity can be as easily resolved as you suggest, given
> > the description within EN 319 412-5
>
> The weight taken by EN 319412-5 is important for the EN 319412-2
> certification, but not for the Qualified status and usage of the
> certificate (because that's a legal issue).
>

I'm not sure the issues are so easily disentangled, given the other
QCStatements supported. For example, the constitutive value of an
id-etsi-qcs-QcLimitValue. Or is the view that such issues are to be
addressed at the national level in accordance with TL maintenance?


> > Considering that even prior versions (e.g. 2.0.12) included an ASN.1
> module
> > as a normative inclusion (Annex B), I find this profoundly
> non-compelling.
> > See
> >
> https://www.etsi.org/deliver/etsi_en/319400_319499/31941205/02.01.01_60/en_31941205v020101p.pdf
> > .
>
> I was talking about draft versions. The QcType definition was SEQUENCE {
> qcType OBJECT IDENTIFIER } just before that.
>

Oh, for sure, that was in
https://www.etsi.org/deliver/etsi_en/319400_319499/31941205/02.00.12_20/en_31941205v020012a.pdf
- but even still, OIDs never aligned


> > As you dig through these versions, the adopted versions do not share the
> > ambiguity issues. You're correct that 2.2.0 formalized the corrigenda
> > against 2.2.1 to include, textually, the normative requirement of "one
> and
> > exactly one" method, but in either event, such encodings violate it
> > entirely.
>
> I agree, having the id-etsi-qct-web OID used for the statementId is a
> clear violation. I'm just pointing that this specific QCStatement was
> really stupidly defined from the start.
>

Sure. It's also unlikely that will stop anytime soon though (c.f. PSD2 in
TS 119 495, although that may be withdrawn now?
https://portal.etsi.org/webapp/workProgram/Report_Schedule.asp?WKI_ID=53961
)


Re: Questions regarding the qualifications and competency of TUVIT

2018-10-30 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 30, 2018 at 1:10 PM Erwann Abalea via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> In fact, for the Relying Party, these certificates are definitely
> considered as Qualified certificates for website authentication, regardless
> of the content of the QCStatement extension.
> Grab the German TL, find the T-Systems TSP, this specific service, you'll
> see it's declared as a CA/QC type, status granted, with a Sie equal to
> ForWebSiteAuthentication. There is no ambiguity (yet).
>

On what basis do you believe this claim is to be made? By virtue of
asserting qcStatement-1? If qcStatement was mis-encoded, or qcStatement-1
was absent, do you believe the same?

I'm not sure the ambiguity can be as easily resolved as you suggest, given
the description within EN 319 412-5


> Are we going to also revoke all the certificates which contain encoded
> DEFAULT values (in the ASN.1 sense), invalid PrintableString attributes,
> invalid hostnames,


Yes. This has already been the process now for several years, as shown
through both https://wiki.mozilla.org/CA/Incident_Dashboard and
https://wiki.mozilla.org/CA/Closed_Incidents

It is interesting that you chose those examples, as several are explicitly
called out in the Root Policy at
https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md#52-forbidden-and-required-practices


> and reject audits performed by auditors who missed such certificates?
>

Yes. In general, the failure to detect such issues has called into question
the competency of the auditor, as has the failure to disclose such issues.
Both are relevant here, combined with the approach to revocation and
overall testing methodology. Further, as detailed, the misleading remarks
regarding what was examined are equally a reason to question the competency
of the auditor.

This is no different from the rejection of audits from some
WebTrust-accredited practitioners due to significant oversights regarding
ongoing and persistent misissuance that is critical within the scope of the
audit scheme being used. The failure to detect such issues fundamentally
calls into question the validity of the current audit, as well as of those
audits performed for other CAs. The auditor's response to such issues
equally calls into question the competencies of the auditor.


> This esi4-qcStatement-6 QCStatement is a recent addition, has been really
> poorly designed (a SEQUENCE OF that shall contain only 1 element, what a
> great idea), and has seen several changes during the draft. It's an easy
> statement, I agree, and a decent TSP shouldn't make any mistake in encoding
> it.
> But on the control side, there's not that much available tool to decode a
> QCStatements extension (and no, "openssl asn1parse" and "dumpasn1" don't
> count).
>

Considering that even prior versions (e.g. 2.0.12) included an ASN.1 module
as a normative inclusion (Annex B), I find this profoundly non-compelling.
See
https://www.etsi.org/deliver/etsi_en/319400_319499/31941205/02.01.01_60/en_31941205v020101p.pdf
.
As you dig through these versions, the adopted versions do not share the
ambiguity issues. You're correct that 2.2.0 formalized the corrigenda
against 2.2.1 to include, textually, the normative requirement of "one and
exactly one" method, but in either event, such encodings violate it
entirely.

I also fail to understand how one can argue that the CAB is following
industry best practice, considering that industry best practice has for
some time included the use of tools such as certlint (which can ingest
ASN.1 modules) or zlint (to which additional compliance checks can be
added). In any event, a testing procedure of visual inspection, without
actually checking conformance against the grammar, is an approach to audit
methodology that does not stand up to scrutiny, and it seriously calls into
question the core competencies involved.
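
To make "checking conformance against the grammar" concrete: a minimal
sketch of a mechanical check, assuming the zlint command-line tool is
installed, that it accepts a PEM certificate path as an argument, and that
it prints a JSON map of lint names to results (the file name below is a
placeholder, not anyone's actual tooling):

# Minimal sketch: run a certificate through the zlint CLI instead of relying
# on visual inspection. Assumes `zlint` is on PATH, accepts a PEM file path,
# and prints a JSON object of {lint_name: {"result": ...}}.
import json
import subprocess

def failing_lints(pem_path: str) -> dict:
    completed = subprocess.run(
        ["zlint", pem_path], capture_output=True, text=True, check=True
    )
    results = json.loads(completed.stdout)
    # Keep anything that is not a clean pass / not-applicable outcome.
    return {
        name: outcome
        for name, outcome in results.items()
        if outcome.get("result") not in ("pass", "NA", "NE")
    }

if __name__ == "__main__":
    for name, outcome in failing_lints("cert.pem").items():
        print(name, outcome)

Nothing about this requires heroics; the point is that the check is
mechanical and repeatable, and does not depend on an auditor noticing a
missing OID by eye.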


Re: Questions regarding the qualifications and competency of TUVIT

2018-10-30 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 30, 2018 at 11:59 AM Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 2018-10-30 16:20, Ryan Sleevi wrote:
> > Given that the Supervisory Body and National Accreditation bodies exist
> to
> > protect the legal value of this scheme, the failure by TUVIT to uphold
> the
> > safety and security of the eIDAS regime represents an ongoing threat to
> the
> > ecosystem.
>
> Do we have a way of verifying the accreditation, and do we verify that
> they have a valid accreditation? Should it be enough for us to check the
> accreditation, and just follow the process you are already doing?
>

Yes. You can either begin with a 'top-down' approach or a 'bottom-up'
approach, depending on the information you have at hand. Conceptually, it's
very similar to Revocation Checking - and just as conceptually broken.

To begin with a 'bottom-up' approach, we start with the CA being assessed.
We'll use https://crt.sh/?id=3726125 as an example.
From there, we then look at the audit, which leads to
https://www.tuvit.de/fileadmin/Content/TUV_IT/zertifikate/en/AA2018072004_Audit_Attestation_E_T-TeleSec-GlobalRoot-Class-3_20180723_s.pdf
From this audit, we learn that TÜV Informationstechnik GmbH is accredited
by DAkkS with certificate D-ZE-12022-01 under ETSI EN 319 403 v2.2.2
In this case, TUVIT has included a direct link to their certificate in the
footnotes, but you could otherwise look it up with DAkkS directly. In either
event,
https://www.dakks.de/en/content/accredited-bodies-dakks?Regnr=D-ZE-12022-01-01
takes you to the certification
You can then view the certificate itself at
https://www.dakks.de/as/ast/d/D-ZE-12022-01-01.pdf

From a top-down approach, you'd start with identifying who the NABs are
under the eIDAS scheme. As EU Regulation No. 910/2014 builds upon EU
Regulation No 765/2008 with respect to the establishment of NABs, your
starting point is with http://www.european-accreditation.org/
From there, you look for Members or MLA/BLA Signatories (with respect to
ISO 17065 and/or EN 319 403), and you can determine that the NAB for
Germany is DAkkS ( http://www.european-accreditation.org/ea-members )
From DAkkS, you can then examine the Directory of accredited bodies (
https://www.dakks.de/en/content/directory-accredited-bodies-0 ) and search
for the relevant Conformity Assessment Bodies certifications

Both approaches lead you to the certification of TUVIT. If your question
was with respect to T-Systems' certification, you follow roughly that same
process, with the top-down approach also involving looking through TUVIT's
directory of accredited TSPs to determine if T-Systems is accredited.

This establishes who the CAB is and who the NAB is. As the scheme used in
eIDAS for CABs is ETSI EN 319 403, the CAB must perform their assessments
in concordance with this scheme, and the NAB is tasked with assessing their
qualification and certification under any local legislation (if
appropriate) or, lacking such, under the framework for the NAB applying the
principles of ISO/IEC 17065 in evaluating the CAB against EN 319 403. The
NAB is the singular national entity recognized for issuing certifications
against ISO/IEC 17065 through the MLA/BLA and the EU Regulation No 765/2008
(as appropriate), which is then recognized trans-nationally.

As the framework utilizes ISO/IEC 17065, the complaints process and
certification process for both TSPs and CABs bears strong similarity, which
is why I wanted to explore how this process works in function.

Note that if either the TSP is suspended of their certification or
withdrawn, no notification will be made to relying parties. The closest
that it comes is that if they're accredited according to EN 319 411-2
(Qualified Certificates), the suspension/withdrawing will be reported to
the Supervisory Body, which will then update the Qualified Trust List for
that country and that will flow into the EU Qualified Trust List. If
they're accredited against EN 319 411-1, the Supervisory Body will be
informed by the CAB (in theory, although note my complaint about TSP
informing the CAB was not followed, and the same can exist with CAB to SB),
but no further notification may be made. Furthermore, if certification is
later reissued, after a full audit, the certification history will not
reflect that there was a period of 'failed' certification. This similarly
exists with respect to CABs - if a CAB has their accreditation suspended,
on the advice of or decision of the NAB based on feedback from the SB - the
community will not necessarily be informed. In theory, because
certification is 'forward' looking rather than 'past' looking, a suspension
or withdrawal of a CAB by a NAB may not affect its past certification of
TSPs; this is an area of process that has not been well-specified or
determined.

Questions regarding the qualifications and competency of TUVIT

2018-10-30 Thread Ryan Sleevi via dev-security-policy
(Writing with an individual hat)

I would like to suggest that consideration be given to rejecting future
audits from TUVIT and from that of Matthias Wiedenhorst and Dr. Anja
Widermann, for some period of time. I would suggest this period be at least
one year long; however, given the technical details of ETSI accreditation,
believe a period of three years may be more appropriate.

As part of an investigation into the incorrect qcStatements reported at
https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/x3s3CSKdIlY
, I've used this as an example to explore the complaint handling process
under ETSI EN 319 403.

For those not familiar with the process, ETSI EN 319 403 normatively
incorporates various portions of ISO/IEC 17065, which is an international
standard for Conformity Assessment Bodies. Under the eIDAS scheme, auditors
(henceforth called Conformity Assessment Bodies, or CAB) are assessed by
their National Accreditation Body, or NAB, as being accredited under EN 319
403 to conduct audits against the schemes in ETSI EN 319 411-1 and ETSI EN
319 411-2. When a CA (called a Trust Service Provider, or TSP, by eIDAS)
has been audited (accredited) by a CAB against the scheme in EN 319 411-1
or 411-2, the CAB is applying the principles in EN 319 403.

If there is a belief that a TSP has failed to meet the requirements of
their accreditation, EN 319 403 describes a process for which complaints
may be made to either the TSP or to the CAB. This complaint process is
further expanded upon in ISO/IEC 17065, which 319 403 incorporates. This
same process also applies when there have been mistakes by the CAB to
adhere to its scheme requirements under EN 319 403 - a complaint may be
made with either the CAB or the NAB regarding the CAB's accreditation.

Each CAB and NAB must make available their information for processing
complaints. Given that 100% of the qualified certificates issued by
T-Systems have failed to comply with the requirements of ETSI EN 319 411-2,
by virtue of failing to comply with ETSI EN 319 412-5, I lodged a complaint
regarding T-Systems' certification with TUVIT, which certified T-Systems.

As part of processing the complaint, the following issues regarding TUVIT's
handling of complaints and overall approach to audits were revealed.

Issue A) As part of their initial response to my complaint, TUVIT, by way
of Matthias Wiedenhorst (Head of Certification Division TSP) stated "As a
very first, quick cross check with our audit evidence record, I can only
say that we did check issued certificates from within the declared period
and that they did contain the proper qcStatement-6.3 / id-etsi-qcs-QcType
3". However, this statement was in direct conflict with the TSP's own
investigation and incident report, provided at
https://bugzilla.mozilla.org/show_bug.cgi?id=1498463#c3 , which states the
mistake was introduced during the development of support - that is, no such
properly encoded certificates were ever issued.

In follow-up with TUVIT, I requested that they provide any evidence to
support the claim that such certificates were issued, knowing that the
matter reported by T-Systems made this highly improbable.

In a subsequent reply, Matthias Wiedenhorst acknowledged that their
methodology for testing compliance with ETSI EN 319 412-5 was as follows:
"the auditors have to check the website certificates manually
for the presence of the QC-statement. In this case, the existence of the
id-etsi-qct-web was identified, but for any reason it was not realized that
the esi4-qcStatements-6 was missing. We informed the auditors about how to
check the QC-statement correctly so that we are confident, that in future
an incorrect coding will be detected. "

This implies a significant lack of core competencies within TUVIT to assess
against the criteria of ETSI EN 319 412-5, which provides a normative ASN.1
module within its specification.
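
For illustration of how little is needed here, the following is a rough
sketch - not anyone's actual tooling - that decodes a QCStatements
extension value with pyasn1 and distinguishes a correctly nested QcType
from the mis-encoding at issue, in which id-etsi-qct-web is used directly
as a statementId. It assumes the DER bytes of the extension value
(id-pe-qcStatements, 1.3.6.1.5.5.7.1.3) have already been extracted from
the certificate:

# Rough sketch (assumed helper, not production tooling): decode a DER-encoded
# QCStatements extension value and report how the QcType statement is encoded.
from pyasn1.codec.der import decoder

ID_ETSI_QCS_QCTYPE = "0.4.0.1862.1.6"   # esi4-qcStatement-6 (QcType)
ID_ETSI_QCT_WEB = "0.4.0.1862.1.6.3"    # id-etsi-qct-web

def check_qctype(qcstatements_der: bytes) -> str:
    statements, _ = decoder.decode(qcstatements_der)  # SEQUENCE OF QCStatement
    for statement in statements:
        statement_id = str(statement[0])
        if statement_id == ID_ETSI_QCS_QCTYPE:
            # Correct shape: statementInfo is a SEQUENCE OF OBJECT IDENTIFIER.
            declared_types = [str(oid) for oid in statement[1]]
            if ID_ETSI_QCT_WEB in declared_types:
                return "ok: QcType present and declares id-etsi-qct-web"
            return "QcType present but id-etsi-qct-web is not declared"
        if statement_id == ID_ETSI_QCT_WEB:
            # The mis-encoding described above: the type OID used as statementId.
            return "mis-encoded: id-etsi-qct-web used as a statementId"
    return "no QcType statement found"

A check of this shape, run against the certificates issued within the
declared period, would have surfaced the encoding error immediately.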

Issue C) In regard to remedies, I requested a suspension of T-Systems'
current ETSI EN 319 411-2 accreditation, and a subsequent full audit, given
concerns with respect to the audit team that performed the
initial audit. TUVIT declined this, on the basis that they assess such
technical non-compliance to be a 'minor non-conformity' that, per ETSI EN
319 403, may be resolved in the next three months by T-Systems without
adversely affecting their accreditation.

Further, T-Systems and TUVIT decided that, on the basis that it is a
non-conformity, such certificates do not need to be revoked according to
the Baseline Requirements' timeline. Furthermore, the failure to revoke
according to the Baseline Requirements' timeline was a further minor
non-conformity, not adversely affecting the certification decision of
T-Systems, and may be remedied over the next 3 months.

Furthermore, in response to whether or not such matters of non-conformities
would be reported by TUVIT (the CAB), Matthias Wiedenhorst acknowledged that
they are not required to make such information 

Re: Incorrect qcStatements encoding at a number of Qualified Web Authentication Certificates (QWACs)

2018-10-29 Thread Ryan Sleevi via dev-security-policy
On Mon, Oct 29, 2018 at 1:56 PM Juan Angel Martin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello,
>
> "MULTICERT SSL Certification Authority 001" is a cross-certificate’s CN.
>
> https://crt.sh/?id=479956216
> Issuer: (CA ID: 5842)
>  commonName = MULTICERT Root Certification Authority 01
>  organizationName = MULTICERT - Serviços de Certificação Electrónica S.A.
>  countryName = PT
> Validity
>  Not Before: Dec 12 16:00:08 2017 GMT
>  Not After : Jun 12 16:00:08 2030 GMT
> Subject: (CA ID: 84368)
>  commonName = MULTICERT SSL Certification Authority 001
>  organizationalUnitName = Certification Authority
>  organizationName = MULTICERT - Serviços de Certificação Electrónica S.A.
>  countryName = PT
>
> https://crt.sh/?id=573264407
> Issuer: (CA ID: 1114)
>  commonName = Global Chambersign Root - 2008
>  organizationName = AC Camerfirma S.A.
>  serialNumber = A82743287
>  localityName = Madrid (see current address at www.camerfirma.com/address)
>  countryName = EU
> Validity
>  Not Before: Jul  3 12:01:18 2018 GMT
>  Not After : May 20 12:01:18 2025 GMT
> Subject: (CA ID: 84368)
>  commonName = MULTICERT SSL Certification Authority 001
>  organizationalUnitName = Certification Authority
>  organizationName = MULTICERT - Serviços de Certificação Electrónica S.A.
>  countryName = PT
>
> The first one is included into this audit attestation letter that
> MULTICERT sent us (intermediate CA #5)
> http://docs.camerfirma.com/publico/Ficheros/I1002_v2_EN_Audit_letter_eIDAS_SSL.PDF
>
> We've claimed in Salesforce that the audits are the same as the parent
> interpreting that it's in the scope of this audit (what is obviously an
> error).
>

Thanks for replying. The issue
https://bugzilla.mozilla.org/show_bug.cgi?id=1502957 was filed regarding
this, so it would be good to follow the incident report.

While hindsight is 20/20, and it's encouraging that you acknowledge this as
clearly an error, it would be useful for the community to treat this as an
incident and try to understand the root causes of these errors. "Human
error" doesn't really help devise appropriate solutions and mitigations.
Exploring factors such as how the disclosures are made, what the disclosure
review process is, and how that information is compared is more useful to
the community. It sounds like you did have additional information and had
the audit material ready, so understanding how that ended up not being
included in the Mozilla disclosure would be good.

While improvements to CCADB in terms of programmatic reading of audit
reports would have caught this discrepancy, it may be useful for both AC
Camerfirma - and all CAs - to ensure their disclosures are appropriate
based on the /issued/ certificate, and not just the audit statement. That
is, because of the existence of the cross-certificate, this intermediate is
distinct under AC Camerfirma's hierarchy. CCADB tries to make this clear, in
that each certificate only lists a single 'parent' (in this case, this
intermediate would be listed with Camerfirma's certificate as its parent),
but I suspect that if other CAs have made this mistake, or other issues
exist with Camerfirma's disclosure, this would be taken very unkindly by
the community.


Re: Incorrect qcStatements encoding at a number of Qualified Web Authentication Certificates (QWACs)

2018-10-26 Thread Ryan Sleevi via dev-security-policy
I've reached out to the auditor (in this case, TUVIT) to understand why they
failed to detect this major non-conformity.

Now that I'm back in the States, following the CA/Browser Forum F2F, I've
had a chance to look more closely at the set of CAs that have issued such
certificates. This is not a full lint check - I've only included
certificates that I can minimally parse as correct in these extensions.

In addition to T-Systems, it appears Certinomis, ComSign, and MULTICERT
have similarly taken to misencoding.

In the case of Certinomis, an example certificate is
https://crt.sh/?q=38d025ffe77ac1cd2142764fce4fb5fc619e5da536b3adac6904036f40addf80

Although it includes a qcStatements, it includes an OID of 0.4.0.1.6, which
is not valid for purpose, with the id-etsi-qct-web extension. It appears
that the "1" is most likely a typo for the full id-etsi-qcs, which is that
of 0.4.0.1862.1.6 - that is, they omitted the 1862 arc. They properly
encode the id-etsi-qcs-QcPDS with its QcLocations. However, they do not
assert compliance with the ETSI EN 319 412-5 profile (that is, OID
"0.4.0.1862.1.1" is not asserted), so this is a semantic misencoding but
not clearly a violation of the asserted profile.

In the case of ComSign, an example certificate is
https://crt.sh/?q=2d5a2596f315ba823758b2a0380de1e0e9cc22d6e045abe45e1cd7fb2c5fe01e

This one is a hot mess of improper encoding, probably best captured by
pointing out the misencoding of RFC 3739's SemanticsInformation from
qcStatement-1 (OID "1.3.6.1.5.5.7.11.1") and the inclusion of an unassigned
ETSI OID ("0.4.0.1862.11.1") that appears to be combining these two.
However, Mozilla has already made a decision about ComSign going forward.

In the case of MULTICERT, they're trusted transitively by AC Camerfirma in
https://crt.sh/?id=573264407 , which is valid for SSL in Mozilla, and an
example cert is
https://crt.sh/?q=5bada9c841242c13c035496d5668d4a59cb91bb839e9e625a9c63c0e687269ab

In this certificate, they completely botched the syntax, treating
"0.4.0.1862.1.6.3" as permitting an optional UTF8String message, which
states "Certificate for website authentication as defined in Regulation
(EU) No 910/2014"

MULTICERT is the most interesting of these. AC Camerfirma has claimed in
Salesforce that the audits are the same as the parent - however, this does
not seem to be met by
https://bug1478933.bmoattachments.org/attachment.cgi?id=8995930 , which is
the disclosed audit for the AC Camerfirma roots (by Auren). Auren's audit
is dated July 14, 2018 and covers the period up to April 13, 2018. The
cross-certificates were issued on Jun 29, 2018 (
https://crt.sh/?id=568548659 ) and Jul 3, 2018 (
https://crt.sh/?id=573264407 ), the former being revoked, the latter, not.

I find this questionable and suspicious, because MULTICERT also operates
its own root (within the Microsoft program), yet no audit information has
been provided (by Microsoft or MULTICERT). The closest I've found is
https://bugzilla.mozilla.org/show_bug.cgi?id=1433320 , which was provided
by DigiCert because of https://crt.sh/?caid=1013 . Based on this, I believe
that the most likely result is that AC Camerfirma has potentially misled
the community about the state of the audits - or that the misissuing
MULTICERT Sub-CA has been audited multiple times (by both Auren and by
APCER, which seems unlikely).

As a result, it appears we have one clear misissuance by a CA in Mozilla's
program (MULTICERT), a potential issue with the auditor (APCER -
Associação Portuguesa de Certificação), and two further potential issues:
AC Camerfirma, for the potential audit disclosure issue, and Certinomis,
for the possible misassertion issue (unless it can demonstrate that ETSI
authorized that OID, it would be violating 7.1.2.4 of the BRs).


Re: What does "No Stipulation" mean, and when is it OK to use it in CP/CPS?

2018-10-25 Thread Ryan Sleevi via dev-security-policy
On Thu, Oct 25, 2018 at 5:47 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 25/10/2018 23:10, Wayne Thayer wrote:
> > On Thu, Oct 25, 2018 at 11:17 AM Joanna Fox via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >> Questions about blank sections, thinking of a potential future
> >> requirement. Sections such as 1.INTRODUCTION would remain blank as they
> are
> >> more titles than components, correct?
> >> If no sections are allowed to be blank does this include both levels of
> >> components such as 1.4 and 1.4.1?
> >>
> >> I would argue that higher level sections (e.g. 1.4)  aren't blank if
> they
> > include subsections (e.g. 1.4.1). If there are no subsections under a
> > section (e.g. 1.1 or 2), then the section should not be blank.
> >
> > Also, what is the opinion on adding sections to the CP/CPS that are not
> >> included in the RFC?
> >>
> >> Good question. My opinion is that it's okay to add sections as long as
> > they come after RFC 3647 defined sections and thus don't change the RFC
> > numbering. I've noted this in the policy issue -
> > https://github.com/mozilla/pkipolicy/issues/158
> >
>
> Would it be OK (I think it should) to place new sublevel sections under
> appropriate higher level sections, after the RFC section numbers run out
> at that level?


Can you explain why that is valuable?

What purpose do you believe the CP/CPS structure serves? One of the goals
of developing the structure in the RFC was to identify the common sections
applicable to all CAs, with a consistent structure, to allow easy
comparison between policies. Indeed, early audit processes proposed making
these policies machine readable and templated, to expedite comparisons.

I can see quite a bit of harm from your hypothetical, and have seen it in
the policies reviewed, so it would be useful to understand why you would
like to do this and what purpose of the CP/CPS you see this benefiting.

If you’re merely posing it as “someone” might want to, it seems like it
would be better to let those “someones” speak to their needs and use cases.


Re: What does "No Stipulation" mean, and when is it OK to use it in CP/CPS?

2018-10-20 Thread Ryan Sleevi via dev-security-policy
I’m not sure that is at all an accurate representation - of the discussions
or of the practiced use of “no stipulation.”

The use of “minimal CPS” is highly desirable from an audit and
documentation perspective. The concerns raised during such discussions are the
concerns captured here originally - CAs documenting what “could” be
possible (up to anything they want, at a no stipulation level) rather than
documenting what they actually practice.

However, that discussion is not to be confused with “copy/paste CPS”. At
best, you could say that “some” (i.e. two people beyond yourself) shared
that view initially, and the subsequent discussion and clarification on
those concerns led to a different result than what you’re saying here. This
discussion is far more productive, with the goal of making explicit that
something is NOT practiced or implemented, rather than no restrictions made
(no stipulation) or no documentation provided (blank).

On Sat, Oct 20, 2018 at 8:19 AM Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I think blank sections should be disallowed.  The entire purpose of "No
> stipulation" is to make it clear that the omission of content was
> intentional.
>
> With regards to some of these sections being useful, I agree that a good
> CPS contains more than the minimum content required from the BRs.  I
> personally view the use of a "minimal CPS" (lightly modified version of the
> BRs) by some organizations as a cause for concern.  From the discussion at
> CABF Shanghai, it sounds like many people share my concern.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy 
> On
> > Behalf Of Joanna Fox via dev-security-policy
> > Sent: Friday, October 19, 2018 1:39 PM
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: What does "No Stipulation" mean, and when is it OK to use
> it in
> > CP/CPS?
> >
> > On Thursday, October 18, 2018 at 5:47:14 PM UTC-7, Jakob Bohm wrote:
> > > On 15/10/2018 20:01, Kathleen Wilson wrote:
> > > > I have added the following section to the Required Practices wiki
> page:
> > > >
> > > >
> > > > https://wiki.mozilla.org/CA/Required_or_Recommended_Practices#BR_Commitment_to_Comply_statement_in_CP.2FCPS
> > > >
> > > > I will continue to appreciate feedback on this update.
> > > >
> > > > Thanks,
> > > > Kathleen
> > > >
> > >
> > > Upon closer look, it appears that most of the "No Stipulation" entries
> > > in the BRs are things for which Mozilla would probably want explicit
> > > statements, even though there are no specific BR requirements.
> > >
> > > For example:
> > >
> > > 1.5.1 Organization Administering this document (CP/CPS)
> > > 1.5.3 Person Determining CPS suitability for the Policy
> > > 1.5.4 CPS Approval procedures
> > > 4.3.2 (Mostly relevant to customer relationship)
> > > 4.4.1 (Only relevant to customer relationship)
> > > 4.4.2 Publication of the certificate by the CA
> > > 4.4.3 Notification of certificate issuance by the CA to other entities
> > >(This would cover CT or other mechanisms suitable for CRLset
> > > generation by Mozilla).
> > > 4.5.2 Relying party public key and certificate usage
> > >(This would typically cover disclaiming responsibility if users turn
> > >off revocation checking or interpret the certificate as meaning
> > >something other than a proof of identity of the private key holder).
> > > 4.6 CERTIFICATE RENEWAL
> > >This has been the subject of many discussions about appropriateness
> of
> > >CA procedures.
> > >   Except:
> > > 4.6.4 (Mostly relevant to customer relationship)
> > > 4.6.5 (Only relevant to customer relationship)
> > > 4.7 CERTIFICATE RE-KEY
> > >This has been the subject of many discussions about appropriateness
> of
> > >CA procedures.
> > >   Except:
> > > 4.7.4 (Mostly relevant to customer relationship)
> > > 4.7.5 (Only relevant to customer relationship)
> > > 4.8 CERTIFICATE MODIFICATION
> > >This has much relevance to situations of later discoveries of
> > > discrepancies of changes in circumstances.  It is a recurring theme in
> > > discussions about revoking such certificates.
> > >   Except:
> > > 4.8.4 (Mostly relevant to customer relationship)
> > > 4.8.5 (Only relevant to 

Re: Violation report - Comodo CA certificates revocation delays

2018-10-12 Thread Ryan Sleevi via dev-security-policy
On Fri, Oct 12, 2018 at 8:33 AM Ben Laurie  wrote:

>
>
> On Fri, 12 Oct 2018 at 03:16, Ryan Sleevi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> I believe that may be misunderstanding the concern.
>>
>> Once these certificates expire, there's not a good way to check whether or
>> not they were revoked, because such revocation information may be culled
>> after certificate expiration.
>>
>> Similarly, if one is looking to verify the claims about revocation dates
>> and timelines, once those are culled from the CRLs, you can only
>> demonstrate with past CRLs or responses that may have been archived.
>>
>> The concern about December 6 represents when some of the certificates
>> begin
>> to expire, and thus being able to examine whether or not and when things
>> were done may no longer be available.
>>
>
> This is one of the reasons we also need revocation transparency.
>

As tempting as the buzzword is, and as much as we love motherhood and apple
pie and must constantly think of the children, slapping transparency after
a word doesn't actually address the needs of the community or users, nor
does it resolve the challenging policy issues that arise. Just because
something is cryptographically verifiable does not mean it actually
resolves real world problems, or does not introduce additional ones.

A simpler solution, for example, is to maintain an archive of CRLs signed
by the CA. That would address the need without the distraction, and without
invoking the technical equivalent of Fermat's Last Theorem. Let's not let
the perfect (and unspecified) be the enemy of the good and reasonable.
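
To be concrete about what "maintain an archive of CRLs" means, it can be as
unsophisticated as a periodically run fetch-and-store job. A minimal
sketch, in which the CRL URL and archive directory are placeholders rather
than anything real:

# Minimal sketch of a CRL archiver: fetch the CA's signed CRL and keep every
# copy, named by retrieval time, so that past revocation state can still be
# demonstrated after entries are culled or certificates expire.
# The URL and directory below are placeholders.
import datetime
import pathlib
import urllib.request

CRL_URL = "http://crl.example.com/example-ca.crl"
ARCHIVE_DIR = pathlib.Path("crl-archive")

def archive_crl() -> pathlib.Path:
    ARCHIVE_DIR.mkdir(exist_ok=True)
    der = urllib.request.urlopen(CRL_URL).read()   # the CRL exactly as served
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    destination = ARCHIVE_DIR / (stamp + ".crl")
    destination.write_bytes(der)                   # CA's signature kept intact
    return destination

if __name__ == "__main__":
    print("archived", archive_crl())

Because each file is the CRL exactly as the CA signed it, anyone holding
the archive can later demonstrate what was (or was not) revoked, and when,
without needing any new transparency machinery.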


Re: Incorrect qcStatements encoding at a number of Qualified Web Authentication Certificates (QWACs)

2018-10-12 Thread Ryan Sleevi via dev-security-policy
Please provide citations that you believe support such an interpretation.
If you cannot provide such citations, then it seems as if interpretations
are being made up, which is no more productive than me suggesting that a CA
may have interpreted the relevant sections to mean that every third
Thursday, CAs that want to misissue should send me a bottle of whisky.

When discussing interpretations, it's useful to discuss what _you_ believe,
and not play devil's advocate, especially for other people. If you believe
that alternative interpretations exist, you should be expected to bear the
burden of proof and to describe and explain why you believe such an
interpretation is valid. Otherwise, speculating as to possible
interpretations, especially on behalf of other people, especially in the
context of violations, does not actually help productively resolve
confusion or concern - it merely adds to it and devalues the conversation.

So if your view is that it is a possible interpretation, please demonstrate
why that is.


Re: Violation report - Comodo CA certificates revocation delays

2018-10-11 Thread Ryan Sleevi via dev-security-policy
I believe that may be misunderstanding the concern.

Once these certificates expire, there's not a good way to check whether or
not they were revoked, because such revocation information may be culled
after certificate expiration.

Similarly, if one is looking to verify the claims about revocation dates
and timelines, once those are culled from the CRLs, you can only
demonstrate with past CRLs or responses that may have been archived.

The concern about December 6 represents when some of the certificates begin
to expire, and thus being able to examine whether or not and when things
were done may no longer be available.

On Thu, Oct 11, 2018 at 10:00 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, Oct 11, 2018 at 11:19:18PM +, please please via
> dev-security-policy wrote:
> > I was under the impression that CAs were allowed to remove CRL entries
> and OCSP support for expired certificates for some reason. Good to know!
>
> CT logs are not CRLs or OCSP responders, nor do they track revocation.
>
> - Matt
>


Re: Incorrect qcStatements encoding at a number of Qualified Web Authentication Certificates (QWACs)

2018-10-11 Thread Ryan Sleevi via dev-security-policy
On Fri, Oct 12, 2018 at 2:32 AM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Thank you for this report Fotis.
>
> On Thu, Oct 11, 2018 at 6:13 AM Fotis Loukos via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Summary
> > ---
> >
> > A number of Qualified Web Authentication Certificates have been issued
> > with incorrect qcStatements encoding. A small survey displays that all
> > certificates issued by a specific SubCA are affected by this issue
> > (https://crt.sh/?CN=%25=1481). The CA has been notified about
> > this, but more than a week has passed and it has not yet provided any
> > feedback, while it continues to issue such malformed certificates (e.g.
> > https://crt.sh/?id=816495298).
> >
> > Technical details
> > -
> >
> > According to ETSI EN 319 412-5 (Electronic Signature and Infrastructure
> > (ESI); Certificate Profiles; Part 5: QCStatements) section 4.2.3
> > (QCStatement claiming that the certificate is a EU qualified certificate
> > of a particular type), the QCStatement QcType with OID
> > id-etsi-qcs-QcType (0.4.0.1862.1.6) declares that a certificate has been
> > issued for a particular purpose (e-sign, e-seal, qualified web
> > authentication certificate). Every certificate containing this
> > QCStatement must have a SEQUENCE OF OBJECT IDENTIFIER which declares the
> > purpose, e.g. id-etsi-qct-web (0.4.0.1862.1.6.3).
> > T-Systems International GmbH has failed to follow this specification,
> > and instead issues certificates having id-etsi-qct-web as a QCStatement.
> > Such a certificate can be found at https://crt.sh/?asn1=795148644. You
> > can compare this with https://crt.sh/?asn1=844599393 which has the
> > QcType QCStatement correctly encoded.
> >
> > Disclosure to CA timeline
> > -
> >
> > - 2 October 2018: First notification to the CA, with a detailed
> > description of the issue.
> > - 2 October 2018: Reply by a CA representative that they will look at it.
> > - 8 October 2018: Second notification and request for feedback.
> >
> > No further communication has taken place.
> >
> > Impact to WebPKI
> > 
> >
> > Two issues can be identified.
> >
> > The first issue is the incorrect encoding of the QCStatement. My
> > assessment is that this problem does not affect the WebPKI, since as far
> > as I can tell, no browsers decode or utilize the QCStatements extension.
> >
> > The second issue is the failure of the CA to identify the problem, reply
> > in time, possibly revoke the problematic certificates and at least
> > momentarily pause the issuance of new certificates until the issue is
> > resolved. I consider this a serious issue that displays problematic
> > practices within the CA.
> >
> > I share your concern for the CA's responsiveness, but I'm not seeing
> anything that would make this a violation of the BRs or Mozilla's policies.


The BRs require certificates to comply with RFC 5280, and require that all
extensions meet the requirements of 7.1.2.4 of the BRs. Mozilla Root CA
Policy 5.2 prohibits both incorrect extensions and invalid DER encoding.
These two entries are distinct in policy: the “invalid DER” rule
unambiguously sets restrictions on the encoding rules being adhered to,
while the “incorrect extensions” rule addresses any semantic violation of
the encoding (both of those cases are possible to encode in DER, but the
extension’s definition says you MUST NOT encode entries like that, because
they do not conform to the extension’s textual definition).

The expectation is that if there is a defined ASN.1 module for the
extension being included within a certificate, the CA will observe that
encoding (Moz Policy 5.2 - “invalid DER”) and semantics (“incorrect
extensions”) as defined by the entity responsible for that OID namespace
(BRs 7.1.2.4), and as stated in the CA’s CP/CPS (BRs & Moz Policy).
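
To make the reported structure concrete, here is a minimal sketch in Python
(assuming pyasn1 is available; the function names are illustrative, and only
the QCStatement itself is modelled, not the full qcStatements extension),
contrasting the correct QcType encoding with the shape described in the report:

    from pyasn1.type import univ, namedtype
    from pyasn1.codec.der.encoder import encode

    ID_ETSI_QCS_QCTYPE = univ.ObjectIdentifier('0.4.0.1862.1.6')   # id-etsi-qcs-QcType
    ID_ETSI_QCT_WEB = univ.ObjectIdentifier('0.4.0.1862.1.6.3')    # id-etsi-qct-web

    class QCStatement(univ.Sequence):
        # QCStatement ::= SEQUENCE { statementId OID, statementInfo ANY OPTIONAL }
        componentType = namedtype.NamedTypes(
            namedtype.NamedType('statementId', univ.ObjectIdentifier()),
            namedtype.OptionalNamedType('statementInfo', univ.Any()),
        )

    def correct_qctype_statement() -> bytes:
        # statementId = id-etsi-qcs-QcType, statementInfo = SEQUENCE OF OID
        # containing id-etsi-qct-web, per ETSI EN 319 412-5 section 4.2.3.
        qc_types = univ.SequenceOf(componentType=univ.ObjectIdentifier())
        qc_types.setComponentByPosition(0, ID_ETSI_QCT_WEB)
        stmt = QCStatement()
        stmt['statementId'] = ID_ETSI_QCS_QCTYPE
        stmt['statementInfo'] = univ.Any(encode(qc_types))
        return encode(stmt)

    def malformed_statement_as_reported() -> bytes:
        # The reported malformation: id-etsi-qct-web used directly as a
        # QCStatement's statementId, with no enclosing QcType statement and
        # no SEQUENCE OF OBJECT IDENTIFIER declaring the purpose.
        stmt = QCStatement()
        stmt['statementId'] = ID_ETSI_QCT_WEB
        return encode(stmt)

Both outputs are well-formed DER, which is why the problem described above
falls under “incorrect extensions” (semantics) rather than “invalid DER”
(encoding).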

I don’t think 7.1.2.4, or Section 5.2 of Mozilla Policy, can be read as
“The only things in certificates that need to be properly encoded are those
explicitly defined or referenced in RFC 5280,” as that would prohibit the
effective deployment of any new extensions. Given that RFC 5280 was
explicitly meant to be extendable by a variety of other documents, and
through its use of OIDs, without the explicit consent of or coordination
with the IETF, taking the narrow view very rapidly leads to logical
inconsistencies.

For example, taking the narrow “only explicitly in 5280” view, any
DER encoding errors within the subjectPublicKeyInfo would be totally
permissible - because the relevant algorithms aren’t described by,
normatively referenced by, or explicit updates to 5280. Instead, RFCs like
RFC 3447 stand alone, but are used by these other RFCs. With this narrow
view, there would be carte blanche to ignore normative
requirements in any extensible field. That is, any field with an OID can
have whatever value or semantics that the 

Re: Odp.: 46 Certificates issued with BR violations (KIR)

2018-10-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Oct 10, 2018 at 4:58 PM Grabowski Piotr 
wrote:

> Hello Ryan,
>
>
> In the design of this template, one of the concerns was about
> understanding *how* a problem happened, not just how a CA responded. This
> is why it includes text such as "This may include events before the
> incident was reported, such as when a particular requirement became
> applicable, or a document changed, or a bug was introduced, or an audit was
> done."
>
> 1) When were the policy templates introduced
>
> We are using Verizon UniCERT PKI software. Policies and templates are an
> integral part of the software and have existed there all along.
>

I'm uncertain how to interpret your answer.

Are you saying that, until this incident, KIR S.A. operated the UniCERT PKI
software without any modifications whatsoever to the default policy
templates? Did KIR S.A. review these policy templates to ensure compliance
with the Baseline Requirements are met? If you created policy templates
yourselves, when were they created, reviewed, updated, etc.

>From your reply with Wayne, it's clear that the software maintains an audit
log for these operations. Based on your reply, the understanding is that
there are only two versions of the policy templates - the default
configuration as shipped, and now the updated one to mitigate this issue.
Is that a correct understanding?


> 2) When were the policy templates reviewed
>
>  All policies/templates were reviewed right after the incident occurred.
> We have also added procedural step for periodic certificate policy
> templates validation.
>

Again, this misunderstands the question. These questions are about
understanding the events *before* the incident, not the events *after*. A
root cause analysis must necessarily trace how the incident happened, which
means understanding what events happened before.

In light of this explanation, please review this question again. When were
the policy templates operating prior to this incident reviewed? When were
they last modified? When were they introduced? Working through the steps of
how the incident happened is an essential part in demonstrating that the
mitigations are appropriate.


> 3) What are the templates review practices.
>
> We have added dual CAO control for modifying policy templates, which
> requires the presence of 2 CAOs (Certification Authority Operators).
> All policies/templates are reviewed against the purpose of the given policy
> and the CP/CPS.
>

Similarly, this discusses what has been done, but what was done prior to
this incident? What were the review practices beforehand?

>


Re: Odp.: Odp.: 46 Certificates issued with BR violations (KIR)

2018-10-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Oct 10, 2018 at 4:33 PM Grabowski Piotr via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello Wayne,
>
> - Is the new dual control process documented in a manner that will be
> auditable by your external auditors?
>
>   Yes, the new dual control process is already included in the document
> called the instruction of the security of the Szafir system (Szafir is the
> internal name of the PKI system), and it is one of the documents presented
> to internal and external auditors.
>

Has this been added to your CP/CPS? If not, why not?

Can you please detail the additional controls that were specified?


> - Despite the review, is it possible for one malicious employee to modify
> a policy template by themselves? If not, why not?
>
> It is impossible. The CAO role is one of the most trusted roles, so it
> requires physical access to the datacenter room, dedicated domain
> credentials, and a smartcard (with PIN) holding a certificate to log in to
> the CAO application module.
>

This does not seem to describe why it is impossible. The description of
controls here reads that it is possible for a CAO to do so, if malicious,
and you merely trust the CAO to not be malicious.


> - Have you conducted an overall review of your practices looking for other
> areas where a human error can result in misissuance? If so, what   did you
> find and how are you addressing it?
>
>   Yes, we have conducted an overall review and  have not found any other
> areas where a human error can result in misissuance.
>

Put differently: Have you completed an examination of the controls in place
to ensure that any and all configuration changes that may result in a
change to the operation of the CA undergo multi-party review prior to and
following implementation, to ensure consistency with the CP and CPS?

Are there any operations that may modify any state within the CA software,
hardware, or operating environment that do not undergo multi-party review
prior to and following implementation?

If so, what are those operations?
If not, what are the operations that you have considered and enumerated as
requiring multi-party review prior to and following the modification?
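
For illustration only, a minimal Python sketch (hypothetical - not a
description of KIR's or UniCERT's actual controls) of the kind of gate these
questions probe: a configuration change takes effect only after two distinct
CAO approvals, and the applied change is recorded so it can also be reviewed
after implementation:

    from dataclasses import dataclass, field

    @dataclass
    class ChangeRequest:
        description: str
        approvals: set = field(default_factory=set)   # distinct CAO identifiers
        applied: bool = False

    def approve(change: ChangeRequest, cao_id: str) -> None:
        change.approvals.add(cao_id)

    def apply_change(change: ChangeRequest, audit_log: list) -> None:
        # Dual control: require two *distinct* approvers before any change to
        # the CA's operational configuration takes effect.
        if len(change.approvals) < 2:
            raise PermissionError("dual CAO control: two distinct approvals required")
        change.applied = True
        # Record the applied change so it can also be reviewed *after* implementation.
        audit_log.append((change.description, sorted(change.approvals)))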


> - Why, despite the numerous misissuances documented on this list, has KIR
> not even begun the process of implementing pre-issuance linting until now?
>
>   We have started the process of implementing pre-issuance linting just
> after your email pointing out our misissuance. We have requested the
> pre-issuance linting functionality/patch with high priority from Verizon.
>

This does not answer the question. It states that you have begun the
request, but it does not provide insight as to why you had not previously
done so.


> - Why is KIR not performing post-issuance linting? This problem had been
> occurring for over a year and there are readily available tools  (
> https://crt.sh) that allow anyone to identify these problems.
>
>   We will implement post-issuance linting as well.


This indicates what you will do, but does not answer why you hadn't done so.
Part of the post-mortem process is to understand what issues may have
existed, given the readily available nature of the tool and the discussions
on m.d.s.p. regarding other CAs.

For example, perhaps the CA did not have adequate staffing to ensure
participation in m.d.s.p. Perhaps the CA team did not have adequate
training to recognize the similarities and/or value in such.

The expectations upon CAs will continue to increase, and the question is
why did KIR S.A. not increase operational oversight in line with those
increased expectations, which would have allowed better detection and
prevention. It is positive to hear steps are being taken now to address it,
but it's reasonable to question why steps weren't taken then, when this was
a knowable and identified best practice and minimum expectation of CAs.
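
To make the expectation concrete, here is a minimal sketch of a pre-issuance
gate in Python (this assumes zlint is installed and that its JSON output maps
lint names to objects carrying a "result" field; the wrapper itself is
hypothetical, not KIR's or anyone else's actual tooling):

    import json
    import subprocess

    def run_zlint(pem_path: str) -> dict:
        # Assumption: zlint prints its lint results as JSON on stdout.
        proc = subprocess.run(["zlint", pem_path],
                              capture_output=True, text=True, check=True)
        return json.loads(proc.stdout)

    def gate_issuance(tbs_pem_path: str) -> None:
        # Block issuance (do not merely log) if any lint reports error/fatal.
        results = run_zlint(tbs_pem_path)
        failures = [name for name, res in results.items()
                    if res.get("result") in ("error", "fatal")]
        if failures:
            raise RuntimeError("pre-issuance lint failures: %s" % ", ".join(failures))

Running the same check against already-issued certificates (for example, ones
retrieved via crt.sh) is the post-issuance variant discussed above.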


Re: 46 certificates issued with BR violations

2018-10-08 Thread Ryan Sleevi via dev-security-policy
>
> On Mon, Oct 8, 2018 at 4:06 PM Nick Lamb via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On Mon, 8 Oct 2018 03:43:53 -0700 (PDT)
>> "piotr.grabowski--- via dev-security-policy"
>>  wrote:
>>
>> > We have, by the way, a question about the error: ERROR: The 'Organization
>> > Name' field of the subject MUST be less than 64 characters. According
>> > to https://www.ietf.org/rfc/rfc5280.txt and the note from this RFC:
>> > 'ub-organization-name INTEGER ::= 64. For UTF8String or
>> > UniversalString at least four times the upper bound should be
>> > allowed.' So what is the max length of this field for UTF8String?
>>
>> As I understand it:
>>
>> Although the word "character" is vague and should generally be avoided
>> in modern technical documents, in this context it seems to refer to a
>> Unicode code point. And "at least four times" is referring to the prior
>> lines of the RFC which explain that you will need more than one octet
>> (byte) to represent some of these characters - this is important for
>> resource constrained implementations.
>>
>
> There is no need to speculate based on context, because the RFC uses
> precise and well-defined language.
>
> X520OrganizationName is defined precisely using ASN.1 size semantics.
>
> These semantics are specified in X.680 47.5.4, including the full
> explanation as to what the 'max length' of this field should be seen as.
> It's unambiguous.
>
> The encoding representation is then subject to the rules of X.690 8.21.10
>


Re: 46 Certificates issued with BR violations (KIR)

2018-10-08 Thread Ryan Sleevi via dev-security-policy
On Mon, Oct 8, 2018 at 11:25 AM piotr.grabowski--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Here's the incident report:
>
> 1.How your CA first became aware of the problem (e.g. via a problem
> report submitted to your Problem Reporting Mechanism, via a
>
> discussion in mozilla.dev.security.policy, or via a Bugzilla bug), and the
> date.
>
> Email from Wayne Thayer Oct 1, 2018
>
> 2.A timeline of the actions your CA took in response.
>
> A. Oct 2, 2018 - Investigation began.
> B. Oct 4, 2018 - Found impacted certificate policy templates.
> C. Oct 4, 2018 - All the certificate owners were contacted and agreed to the
> issuance of new BR-compliant certificates at a time convenient for them,
>   preferably not later than the end of this year, and to revocation of the
> current ones.
> D. Oct 8, 2018 - Fixed impacted certificate policy templates.
> E. Oct 8, 2018 - This disclosure.


Can you please re-review
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Incident_Report ?

In the design of this template, one of the concerns was about understanding
*how* a problem happened, not just how a CA responded. This is why it
includes text such as "This may include events before the incident was
reported, such as when a particular requirement became applicable, or a
document changed, or a bug was introduced, or an audit was done."

1) When were the policy templates introduced
2) When were the policy templates reviewed
3) What are the templates review practices
4) What controls, if any, exist to ensure that all templates are
appropriate to the controls?

The misconfiguration of certificate policy templates is a significant
incident, precisely because there have been significant CA misissuances as
a result of it. In this regard, a CA that is misconfiguring policy
templates is arguably as negligent as one failing to perform domain
validation - this is an incredibly significant mistake by a CA. A
responsible CA seeking continued trust in their certificates would thus
want to demonstrate that they understood how significant this was, and
provide detailed descriptions about the timeline of events and the controls
and practices they have in place to mitigate the risk of template
misconfiguration. Anything short of that is gross negligence on behalf of a
CA.


Re: Concerns with Dun & Bradstreet as a QIIS

2018-10-02 Thread Ryan Sleevi via dev-security-policy
On Tue, Oct 2, 2018 at 10:02 AM Dimitris Zacharopoulos 
wrote:

> >> But this inaccurate data is not used in the validation process nor
> >> included in the certificates. Perhaps I didn't describe my thoughts
> >> accurately. Let me have another try using my previous example. Consider
> an
> >> Information Source that documents, in its practices, that they provide:
> >>
> >>
> >> 1. the Jurisdiction of Incorporation (they check official government
> >> records),
> >> 2. registry number (they check official government records),
> >> 3. the name of legal representative (they check official government
> >> records),
> >> 4. the official name of the legal entity (they check official
> >> government records),
> >> 5. street address (they check the address of a utility bill issued
> >> under the name of the legal entity),
> >> 6. telephone numbers (self-reported),
> >> 7. color of the building (self-reported).
> >>
> >> The CA evaluates this practice document and accepts information 1-5 as
> >> reliable, dismisses information 6 as non-reliable, and dismisses
> >> information 7 as irrelevant.
> >>
> >> Your argument suggests that the CA should dismiss this information
> source
> >> altogether, even though it clearly has acceptable and verified
> information
> >> for 1-5. Is that an accurate representation of your statement?
> >>
> > Yes, I'm stating that the existence of and inclusion of 5-7 calls into
> > question whether or not this is a reliable data source.
>
> Right, but in my example, the data source has already described -via
> their practices- that this is how they collect each piece of data. The
> CA, as a recipient of this data, can choose how much trust to lay upon
> each piece of information. Therefore, IMHO the CA should evaluate and
> use the reasonably verified information from that data source and
> dismiss the rest. That seems more logical to me than dismissing a data
> source entirely because they include "the color of the building", which
> is self-reported.
>
> > Your parenthetical
> > about how they check that is what the CA has the burden to demonstrate,
> > particularly given that they have evidence that there is
> less-than-reliable
> > data included. How does the competent CA ensure that the registry number
> is
> > not self-reported -
>
> The information in the parenthesis would be documented in the trusted
> source practices and the CA would do an inquiry to check that these
> practices are actually implemented and followed.
>
> > or that the QIIS allows it to be self-reported in the
> > future?
>
> No one can predict the future, which is why there is a process for
> periodic re-evaluation.
>

So let me understand: Your view is that QIISs publish detailed policies
about the information they obtain (they don't), and the CA must
periodically re-evaluate that (which isn't in the BRs) to determine which
information is reliable or not. Presumably, that RDS/QIIS is also audited
against such statements (they aren't) in order to establish their
reliability. That's a great world to imagine, but it's not the world of
RDS or QIIS; it's an entirely fictitious one.

That world is either saying the RDS/QIIS is a Delegated Third Party - and
all the audit issues attendant - or we're treating them like a DTP for all
intents and purposes, and have to deal with all of the attendant DTP
issues, such as the competency of the auditor, the scoping of the audits,
etc. I see no gain from an overly convoluted system that, notably, does not
exist today, as compared to an approach of whitelisting such that the CA no
longer has to independently assess each source, and can instead work with
the community to both report omissions of qualified sources AND report
issues with existing qualified sources. That seems like a net win, without
an unnecessary veneer of assurance that does not actually provide it (as
shown by the issues with DTP audits for a number of CAs).


> > This is where the 'stopped-clock' metaphor is incredibly appropriate.
> Just
> > because 1-5 happen to be right, and happen to be getting the right
> process,
> > is by no means a predictor of future guarantees or correctness or
> accuracy.
>
> Of course, this is why you need re-evaluation. You can't guarantee
> correctness for anything, otherwise we wouldn't have cases of
> mis-issuance or mis-behavior. We add controls in processes to minimize
> the risk of getting bad data.
>
> > More importantly, the inclusion of 5-7 in the reporting suggest that
> there
> > is *unreliable* data actively being seen as acceptable, and because of
> > that, the CA needs to take a view against including.
>
> I am not sure if you have misunderstood my description, but let me
> repeat that despite getting the full data set, the CA would use only the
> information pre-evaluated as reliable, and that doesn't include
> self-reported data which they know -beforehand- (because it is
> documented in the data 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-10-01 Thread Ryan Sleevi via dev-security-policy
On Mon, Oct 1, 2018 at 9:21 AM Dimitris Zacharopoulos 
wrote:

> No, this was not about the domain name but about the information displayed
> to the Relying Party with the attributes included in the OV/EV Certificate
> (primarily the Organization). So, I'm still uncertain if Ian's "misleading
> street address" was trying to get a certificate for domain "stripe.com"
> owned by "Stripe Inc." in California, or was trying to get a certificate
> for "ian's domain.com" owned by "Stripe Inc." in Kentucky, as was the
> previous discussions. The discussion so far indicates that it's the latter,
> with the additional element that now the Street Address is also misleading.
>

I'm not sure the source of confusion. As the original message pointed out,
this was about a Cloudflare certificate (or more aptly, two entities named
Cloudflare). In both the "Stripe, Inc" and in this case, it was a domain
that Ian owned and could demonstrate, for a legally incorporated entity
that Ian represented. In the "Stripe, Inc" case, the information included
in the certificate reflected the accurate entity - that is, the only
"confusion" here was relying party confusion, while the information within
the certificate was accurate.

During those discussions, some suggested that it was this point - that the
information was accurate, and a 'discerning' RP could distinguish between
Kentucky and California - that prevented a "Stripe, Inc" cert from being
problematic. This more recent "Cloudflare" issue builds upon that claim, by
showing that CAs also use unreliable data sources, such that even a
discerning RP may not be able to fully distinguish. In this case, Ian's
attempted example was an 'off-by-one' error on a street address, while
otherwise keeping all of the same information (except for serial number,
since that's related to jurisdictional details).

However, independent of any "name-collidey" discussion between
Ian-Cloudflare and 'Real'-Cloudflare, the fact that some CAs treat D&B as a
Reliable Data Source shows that unreliable data can be introduced
into certificates.


> I am certainly not suggesting that CAs should put inaccurate and
> misleading information in certificates :-) I merely said that if the
> Subscriber introduces misleading or inaccurate information in certificates
> via a reliable information source, then there will probably be a trail
> leading back to the Subscriber. This fact, combined with the lack of clear
> damage that this can cause to Relying Parties, makes me wonder why doesn't
> the Subscriber, that wants to mislead Relying Parties, just use a DV
> Certificate where this probably doesn't leave so much evidence tracing back
> to the Subscriber?
>

"The lack of clear damage" - I'm not sure how better to communicate, since
we're discussing fundamental damage to the value that OV and EV are said to
provide. The only way we can say "lack of clear damage" is to say that OV
and EV are worthless - otherwise, it's incredibly damaging.

I have no idea where the notion of 'traceability' comes from, or why that's
relevant. It again seems to be anchoring on getting a certificate for the
real cloudflare.com or stripe.com, which is not the discussion. We're
talking about "confusing" a user (or subscriber or relying party or threat
monitoring system) by suggesting that the certificates being issued are
'benign' or 'authorized'.


> But this inaccurate data is not used in the validation process nor
> included in the certificates. Perhaps I didn't describe my thoughts
> accurately. Let me have another try using my previous example. Consider an
> Information Source that documents, in its practices, that they provide:
>
>
>1. the Jurisdiction of Incorporation (they check official government
>records),
>2. registry number (they check official government records),
>3. the name of legal representative (they check official government
>records),
>4. the official name of the legal entity (they check official
>government records),
>5. street address (they check the address of a utility bill issued
>under the name of the legal entity),
>6. telephone numbers (self-reported),
>7. color of the building (self-reported).
>
> The CA evaluates this practice document and accepts information 1-5 as
> reliable, dismisses information 6 as non-reliable, and dismisses
> information 7 as irrelevant.
>
> Your argument suggests that the CA should dismiss this information source
> altogether, even though it clearly has acceptable and verified information
> for 1-5. Is that an accurate representation of your statement?
>
Yes, I'm stating that the existence of and inclusion of 5-7 calls into
question whether or not this is a reliable data source. Your parenthetical
about how they check that is what the CA has the burden to demonstrate,
particularly given that they have evidence that there is less-than-reliable
data included. How does the competent CA ensure that the registry number is
not self-reported - or 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-10-01 Thread Ryan Sleevi via dev-security-policy
On Mon, Oct 1, 2018 at 2:55 AM Dimitris Zacharopoulos 
wrote:

> Perhaps I am confusing different past discussions. If I recall correctly,
> in previous discussions we described the case where an attacker tries to
> get a certificate for a company "Example Inc." with domain "example.com".
> This domain has a domain Registrant Address as "123 Example Street".
>
> The attacker registers a company with the same name "Example Inc." in a
> different jurisdiction, with address "123 Example Street" and a different
> (attacker's) phone number. How is the attacker able to get a certificate
> for example.com? That would be a real "attack" scenario.
>

Yes, you are confusing things, as I would have thought this would be a
'simple' discussion. Perhaps this confusion comes from only thinking the
domain name matters in making an 'attack'. If that's the case, we can do
away with EV and OV entirely, because they do not provide value to that
domain validation. Alternatively, if we say that information is relevant,
then the ability to spoof any of that information also constitutes an
'attack' - having the information for one organization presented as a
different (logical, legal) organization's information.


> Unless this topic comes as a follow-up to the previous discussion of
> displaying the "Stripe Inc." information to Relying Parties, with the
> additional similarity in Street Address and not just the name of the
> Organization. If I recall correctly, that second "Stripe Inc." was not a
> "fake" entity but a "real" entity that was properly registered in some
> Jurisdiction. This doesn't seem to be the same attack scenario as getting a
> certificate for a Domain for which you are not the owner nor control, but a
> way to confuse Relying Parties. Certainly, in case of fraud, this leaves a
> lot more evidence for the authorities to trail back to a source, than for a
> case without Organization information.
>

This also seems to be fixating on the domain name, but I have no idea why
you've chosen that as the fixation, as the discussion to date doesn't
involve that. I don't think it's your intent, but it sounds like you're
saying "It's better for CAs to put inaccurate and misleading information in
certificates, because at least then it's there" - which surely makes no
sense.


> But they do have some Reliable and Qualified Information according to our
> standards (for example registry number, legal representative, company
> name). If a CA uses only this information from that source, why shouldn't
> it be considered reliable? We all need to consider the fact that CAs use
> tools to do their validation job effectively and efficiently. These tools
> are evaluated continuously. Complete dismissal of tools must be justified
> in a very concrete way.
>

No, they are not Reliable Data Sources. Using unreliable data sources,
under the motto that "even a stopped clock is right twice a day", requires
clear and concrete justification. The burden is on the CA to demonstrate
the data sources reliability. If there is any reason to suspect that a
Reliable Data Source contains inaccurate data, you should not be using it -
for any data.


> I would accept your conclusion for an Information Source that claimed, in
> their practices, that they verify some information against a secondary
> government database and the CA gets evidence that they don't actually do
> that. This means that the rest of the "claimed as verified" information is
> now questionable. This is very much similar to the Browsers checking for
> misbehavior on CAs where they claim certain practices in their CP/CPS and
> don't actually implement them. That would be a case where the CA might
> decide to completely distrust that Information Source.
>
> I hope you can see the difference.
>

I hope you can understand that this is not an apt or accurate comparison.
An organization that lacks a process, which is the case for unreliable
data, is no different than an organization that declares a process but does
not follow it.


> I remember this argument being supported in the past and although I used
> to agree to it, with the recent developments and CA disqualifications, I
> support the opposite. That is, Subscribers start to choose their CA more
> carefully and pay attention to the trust, reputation and practices, because
> of the risk of getting their Certificates challenged, revoked or the CA
> distrusted.
>

So you believe it's in best interests of Subscribers to have CAs
distrusted, certificates challenged and revoked, and for relying parties to
constantly call into question the certificates they encounter? And that
this is somehow better than consistently applied and executed validation
processes? I wish I could share your "Mad Max" level of optimism, but it
also fails to understand that we're not talking about Subscriber selection,
we're talking about adversarial models. The weakest link matters, not
"market reputation", as much as some CAs would like to believe.



Re: Concerns with Dun & Bradstreet as a QIIS

2018-09-28 Thread Ryan Sleevi via dev-security-policy
Yes, we can punt the problem down a few years, by allowing CAs to
self-report in unauditable ways, and shift the burden of evaluation on to
the community to try and detect CAs misbehaving.

Or we can take sensible steps forward that nip the problem at its root,
don’t require misunderstanding or misusing unrelated technologies, and
instead achieve the goals that CAs have been claiming are valuable to
achieve years sooner.

Obviously, simpler is better - and a whitelist of QGIS quickly establishes
an interoperable and consistent baseline for organizational information,
and can be readily deployed today, without any unnecessary infrastructure,
and with immediate utility to existing relying parties.

On Fri, Sep 28, 2018 at 7:43 PM Tim Hollebeek 
wrote:

> Perhaps a simple first step is to mandate disclosure of which information
> source was used for validation.  Then if someone uses Google Maps or
> similar, People Who Pay Attention To Such Things can start a public
> discussion about whether the source is a QIIS, and whether the certificate
> is mis-issued.
>
> This would be yet another use case for my certificate metadata transparency
> idea.  CAs have lots of information about the validation, issuance,
> management, and revocation of certificates that really does not need to be
> private.  As I've noted a few times this year, the issue keeps coming up.
>
> If there were more hours in my day, there'd be an RFC for it already.  If
> anyone wants to help with it, I'd be more than happy to work with them.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy  >
> On
> > Behalf Of Ryan Sleevi via dev-security-policy
> > Sent: Friday, September 28, 2018 10:04 AM
> > To: Dimitris Zacharopoulos 
> > Cc: mozilla-dev-security-policy
> ;
> > Ian Carroll 
> > Subject: Re: Concerns with Dun & Bradstreet as a QIIS
> >
> > On Fri, Sep 28, 2018 at 1:22 AM Dimitris Zacharopoulos via
> dev-security-policy
> >  wrote:
> >
> > >
> > > Forgive my ignorance, but could you please explain what was your
> > > ultimate goal, as "an attacker", what were you hoping to gain and how
> > > could you use this against Relying Parties?
> > >
> > > I read your email several times but I could not easily find a case
> > > where your fake address creates any serious concern for Relying
> > > Parties. Even if you used the same street address as CloudFlare, the
> > > CA would check against the database and would find two company records
> > > that share the same address. That would obviously block the process
> > > and additional checks would take place. Now, as a way to delay
> > > certificate issuance for CloudFlare, I find it interesting but it
> > > certainly doesn't seem to affect Relying Parties.
> > >
> >
> > I'm not Ian, but I would have thought his email would have been obvious
> and
> > clear. The confusion here is that two jurisdictions can allow different
> entities
> > the same name. The EVGs seek to resolve this by making use of a variety
> of
> > ancilliary fields - such as serialNumber and the incorporation
> information
> - to
> > presumably attempt to establish to the relying party the identity they're
> > speaking to.
> >
> > In the "Stripe, Inc" case, the user was able to distinguish 'real' from
> 'fake' by
> > virtue of the incorporation information - Kentucky vs California.
> > However, in this case, the attack went further, in as much as through the
> CA
> > using an unreliable datasource to verify the jurisdictional information.
> > If the CA used an unreliable datasource, then the end user would see
> > something that, for intents and purposes, appears the same.
> >
> > I'm not sure your point about the same address - Ian made it clear it was
> a
> > different but *similar* address - and I'm not sure why you suggest it
> would
> > block for the legitimate subscriber. Does that explain it simply enough?
> >
> >
> > > And to take this one step further, I believe there are several GISs
> > > that also accept whatever address you tell them because:
> > >
> > >  1. They have no reason to believe that you will lie to them (they know
> > > who you are and in some Jurisdictions you might be prosecuted for
> > > lying to the government)
> > >  2. No foreseeable harm to others could be done if you misrepresent
> your
> > > own address.
> > >
> >
> > Then they are not Reliable nor QIISes. Full stop.
> >
> >
> > > In my understanding, this is the process each CA 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-09-28 Thread Ryan Sleevi via dev-security-policy
On Fri, Sep 28, 2018 at 1:22 AM Dimitris Zacharopoulos via
dev-security-policy  wrote:

>
> Forgive my ignorance, but could you please explain what was your
> ultimate goal, as "an attacker", what were you hoping to gain and how
> could you use this against Relying Parties?
>
> I read your email several times but I could not easily find a case where
> your fake address creates any serious concern for Relying Parties. Even
> if you used the same street address as CloudFlare, the CA would check
> against the database and would find two company records that share the
> same address. That would obviously block the process and additional
> checks would take place. Now, as a way to delay certificate issuance for
> CloudFlare, I find it interesting but it certainly doesn't seem to
> affect Relying Parties.
>

I'm not Ian, but I would have thought his email would have been obvious and
clear. The confusion here is that two jurisdictions can allow different
entities the same name. The EVGs seek to resolve this by making use of a
variety of ancillary fields - such as serialNumber and the incorporation
information - to presumably attempt to establish to the relying party the
identity they're speaking to.

In the "Stripe, Inc" case, the user was able to distinguish 'real' from
'fake' by virtue of the incorporation information - Kentucky vs California.
However, in this case, the attack went further, in as much as through the
CA using an unreliable datasource to verify the jurisdictional information.
If the CA used an unreliable datasource, then the end user would see
something that, for all intents and purposes, appears the same.

I'm not sure your point about the same address - Ian made it clear it was a
different but *similar* address - and I'm not sure why you suggest it would
block for the legitimate subscriber. Does that explain it simply enough?
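
To illustrate the point (using the pyca/cryptography Name API; the attribute
values below are invented for illustration and are not taken from the actual
certificates), the only things separating two same-named subjects are exactly
those ancillary fields - so if they are populated from an unreliable source,
the distinction collapses:

    from cryptography.x509 import Name, NameAttribute
    from cryptography.x509.oid import NameOID

    def ev_subject(org: str, serial: str, state: str) -> Name:
        # Only the EV ancillary fields differ between the two subjects below.
        return Name([
            NameAttribute(NameOID.ORGANIZATION_NAME, org),
            NameAttribute(NameOID.SERIAL_NUMBER, serial),
            NameAttribute(NameOID.JURISDICTION_STATE_OR_PROVINCE_NAME, state),
            NameAttribute(NameOID.JURISDICTION_COUNTRY_NAME, "US"),
        ])

    original = ev_subject("Stripe, Inc", "1111111", "California")   # illustrative values
    lookalike = ev_subject("Stripe, Inc", "2222222", "Kentucky")    # illustrative values

    # The organizationName collides; a relying party can only tell the two
    # apart via serialNumber and the jurisdiction fields.
    assert original != lookalike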


> And to take this one step further, I believe there are several GISs that
> also accept whatever address you tell them because:
>
>  1. They have no reason to believe that you will lie to them (they know
> who you are and in some Jurisdictions you might be prosecuted for
> lying to the government)
>  2. No foreseeable harm to others could be done if you misrepresent your
> own address.
>

Then they are not Reliable nor QIISes. Full stop.


> In my understanding, this is the process each CA must perform to
> evaluate every Data Source before granting them the "Reliable" or
> "Qualified" status. Self-reported information without any supporting
> evidence is clearly not acceptable. I have not evaluated this database
> that you mention but if they accept self-reporting for "Street Address"
> and don't perform any additional verification (like asking you for a
> utility bill or cross-referencing it with a government database), then
> the "Street Address" information is unreliable and the CA's evaluation
> process should catch that.
>
> That doesn't mean that the rest of the information is also unreliable.
> For example, an Information Source might describe in their documentation
> practices how they verify each piece of information, for example:
>

I disagree with this assessment, and I think it's precisely why greater
restriction is needed on the flexibility of CAs to make such
interpretations. I understand the point you're trying to make - why throw
the baby out with the bathwater - but to its use within the EVGs and the
BRs, such structural issues throw into fundamental question the status as a
RDS or QIIS.

As you highlight, this is an assessment that each CA makes, according to
its own processes and skills, and based on their own understanding.
Auditors, which while required to have professional understanding of the
relevant standards but are by no means omniscient nor experts, then also
review these processes. Thus, we can easily end up in a situation where CA
A determines that Foo is an RDS and QIIS for (Address, Serial Number),
while CA B determines that Foo is an RDS and QIIS only for (Serial Number).

For an adversarial model, however, the strength of CA B's understanding and
recognition that Foo is not suitable as a QIIS for Address is irrelevant,
as an adversary needs only obtain a certificate from CA A instead. While
I'm sure CA B would love to market and crow about their excellence in the
market for recognizing that Foo was not a QIIS for Address, to the end
user, browser, and relying party, the fact that CA B deserves a cookie and
a pat on the back is irrelevant.

This is because the goal of a given root program is to ensure a
consistently operated PKI with a consistent degree of assurance. When CA A
accepts Foo as a QIIS and CA B does not, even though both participate in
the same PKI (namely, the browser's root program), the levels of assurance
are not equal to the expectations of the policy. One model of approaching
this is to try to outsource that lack of assurance to the relying party -
for example, telling them that "CA A shouldn't be relied 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-09-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Sep 27, 2018 at 10:39 PM Tim Hollebeek 
wrote:

> I'm glad you added the smiley, because in my experience CAs have rarely,
> if ever, had any discretion in such matters.


That does not match reports from multiple former employees of various CAs.

> Nor do we (DigiCert) particularly want to, to be honest.  I prefer clear,
> open, and transparent validation rules that other CAs can't play games with.
>
> Whitelisting and disclosure of validation sources was an active topic of
> discussion at the Redmond F2F, if I'm remembering my meetings correctly.
> I'm surprised that more small CAs didn't support me in that effort at my
> previous employer, as they generally have not taken as much time or effort
> to find the correct sources, and instead rely upon inferior sources.
>
> If that's the direction people want to move, I'd echo Matt's concern that
> it will be a complex and difficult process.  It's best to recall we spent a
> year or three trying to reach consensus about what localities existed in
> Taiwan and how companies could be identified there, and failed.


I think that’s conflating bad proposals with difficulty.

> I'm always willing to work with people on improving the baseline
> requirements, but there needs to be a recognition up front that this is not
> going to be an easy problem to solve, and people need to be willing to
> volunteer and roll up their sleeves and do their part if we're going to
> undertake such a time consuming effort.


Indeed. I look forward to CAs with the day to day expertise to suggest QGIS
to be added. I’m sure any CA of considerable size and scale will no doubt
have a readily available list of QGIS as appropriate for their validation
efforts, as part of ensuring a consistent application of their own
validation policies. I can’t imagine any CA but the very smallest not
already having guidance for their validation staff as to what serves as an
appropriate and reliable source, as they surely wouldn’t be making it up on
the fly.



>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy 
> On
> > Behalf Of Ryan Sleevi via dev-security-policy
> > Sent: Thursday, September 27, 2018 4:18 PM
> > To: Matthew Hardeman 
> > Cc: mozilla-dev-security-policy <
> mozilla-dev-security-pol...@lists.mozilla.org>;
> > Ian Carroll 
> > Subject: Re: Concerns with Dun & Bradstreet as a QIIS
> >
> > Yes, it would be work, but would result in consistent and reliable
> information,
> > and already reflective of the fact that an EV certificate needs to
> identify the
> > jurisdictionOfIncorporation and it's incorporating documents. Or are we
> saying
> > that OV doesn't need to make sure it's actually a valid and legal
> entity, and can
> > just display whatever information the CA feels is appropriate? ;)
> >
> >
> > On Thu, Sep 27, 2018 at 6:48 PM Matthew Hardeman via dev-security-policy
> <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> > > A whitelist of QGIS sounds fairly difficult.  And how long would it
> > > take to adopt a new one?
> > >
> > > In some states you're going to have an authority per county.  It'd be
> > > a big list.
> > >
> > > On Thu, Sep 27, 2018 at 5:35 PM, Ian Carroll via dev-security-policy <
> > > dev-security-policy@lists.mozilla.org> wrote:
> > >
> > > > On Wednesday, September 26, 2018 at 6:12:22 PM UTC-7, Ryan Sleevi
> > wrote:
> > > > > Thanks for raising this, Ian.
> > > > >
> > > > > The question and concern about QIIS is extremely reasonable. As
> > > discussed
> > > > > in past CA/Browser Forum activities, some CAs have extended the
> > > > definition
> > > > > to treat Google Maps as a QIIS (it is not), as well as third-party
> > > WHOIS
> > > > > services (they’re not; that’s using a DTP).
> > > > >
> > > > > In the discussions, I proposed a comprehensive set of reforms that
> > > would
> > > > > wholly remedy this issue. Given that the objective of OV and EV
> > > > > certificates is nominally to establish a legal identity, and the
> > > > > legal identity is derived from State power of recognition, I
> > > > > proposed that
> > > only
> > > > > QGIS be recognized for such information. This wholly resolves
> > > differences
> > > > > in interpretation on suitable QIIS.
> > > > >
> > > > > However, to ensure there do not also emerge conflicting
> > > > > understandings
> > > 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-09-27 Thread Ryan Sleevi via dev-security-policy
Yes, it would be work, but would result in consistent and reliable
information, and already reflective of the fact that an EV certificate
needs to identify the jurisdictionOfIncorporation and its incorporating
documents. Or are we saying that OV doesn't need to make sure it's actually
a valid and legal entity, and can just display whatever information the CA
feels is appropriate? ;)


On Thu, Sep 27, 2018 at 6:48 PM Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> A whitelist of QGIS sounds fairly difficult.  And how long would it take to
> adopt a new one?
>
> In some states you're going to have an authority per county.  It'd be a big
> list.
>
> On Thu, Sep 27, 2018 at 5:35 PM, Ian Carroll via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > On Wednesday, September 26, 2018 at 6:12:22 PM UTC-7, Ryan Sleevi wrote:
> > > Thanks for raising this, Ian.
> > >
> > > The question and concern about QIIS is extremely reasonable. As
> discussed
> > > in past CA/Browser Forum activities, some CAs have extended the
> > definition
> > > to treat Google Maps as a QIIS (it is not), as well as third-party
> WHOIS
> > > services (they’re not; that’s using a DTP).
> > >
> > > In the discussions, I proposed a comprehensive set of reforms that
> would
> > > wholly remedy this issue. Given that the objective of OV and EV
> > > certificates is nominally to establish a legal identity, and the legal
> > > identity is derived from State power of recognition, I proposed that
> only
> > > QGIS be recognized for such information. This wholly resolves
> differences
> > > in interpretation on suitable QIIS.
> > >
> > > However, to ensure there do not also emerge conflicting understandings
> of
> > > appropriate QGIS - and in particular, since the BRs and EVGs recognize
> a
> > > variety of QGIS’s with variable levels of assurance relative to the
> > > information included - I further suggested that the determination of a
> > QGIS
> > > for a jurisdictional boundary should be maintained as a normative
> > whitelist
> > > that can be interoperably used and assessed against. If a given
> > > jurisdiction is not included within that whitelist, or the QGIS is not
> on
> > > it, it cannot be used. Additions to that whitelist can be maintained by
> > the
> > > Forum, based on an evaluation of the suitability of that QGIS for
> > purpose,
> > > and a consensus for adoption.
> > >
> > > This would significantly reduce the risk, while also further reducing
> > > ambiguities that have arisen from some CAs attempting to argue that
> > > non-employees of the CA or QGIS, but which act as intermediaries on
> > behalf
> > > of the CA to the QGIS, are not functionally and formally DTPs and thus
> > > subject to the assessment requirements of DTPs. This ambiguity is being
> > > exploited in ways that can allow a CA to nominally say it checked a
> QGIS,
> > > but is relying on the word of a third-party, and with no assurance of
> the
> > > system security of that third party.
> > >
> > > Do you think such a proposal would wholly address your concern?
> >
> > I think I'll always agree with removing intermediaries from the
> validation
> > process. Outside of practical concerns, a whitelist of QGIS entities
> sounds
> > like a good idea.
> >
> > I would wonder what the replacement for D&B is in the United States. You
> > can normally get an address for a company from a QGIS but not (from the
> > states I've seen) a phone number for callback verification.


Re: Google Trust Services Root Inclusion Request

2018-09-27 Thread Ryan Sleevi via dev-security-policy
On Thu, Sep 27, 2018 at 11:17 AM Jeremy Rowley 
wrote:

> Oh – I totally agree with you on the Google inclusion issue. Google meets
> the requirements for inclusion in Mozilla’s root policy so there’s no
> reason to exclude them. They have an audited CPS, support a community
> broader with certs than just Google, and have operated a CA without
> problems in the past. The discussion on Mozilla’s independence is important
> IMO where a) a Mozilla competitor as a module peer and b) having that same
> person also belong to a CA. There are legit concerns. Has any other CA
> served as a module owner? If not, why? I know Tim Hollebeek would be
> interested in being a peer. If he’s not permitted to be a peer, why not?
>

I think this again conflates peership with ownership, and it's good to
revisit what the policies actually specify about how it works.

I disagree with you as to the independence discussion being valuable,
because that conclusion rests on a misunderstanding about module ownership
and peership. Again,
https://www.mozilla.org/en-US/about/governance/policies/module-ownership/
addresses these concerns. It also is conflating MoCo and MoFo, which I know
was a topic that Gerv was particularly sensitive to.

To your second part, the selection of peers,
https://wiki.mozilla.org/Modules addresses this - "A peer is a person whom
the owner has appointed to help them." and "Owners may add and remove peers
from their modules as they wish, without reference to anyone else"


> To be fair, separating out Ryan as a Google browser representative and
> Ryan as a module peer is…hard. Perhaps, he specifically is seen as more
> influential (from my point of view) than others simply because of his dual
> role.
>

What is difficult about separating them out? You're intimating some degree of
influence that is not transparent, but that's not supported by any
evidence. You're also intimating influence over Mozilla somehow, but there
the separation seems like it would be easy.


> As I said before, Ryan’s a good module peer so I don’t disagree with your
> conclusion or any decision to keep him in that spot. But I think openness
> should include respectful conversation on the impact of influences,
> perceived or real, on the Mozilla direction.  What might help alleviate
> concerns is to describe how you (as a module owner) are going to ensure
> that if Ryan is reviewing and approving code or CA policies, they won’t be
> unfairly biased towards google or against its competitors? Maybe that’s a
> bad question, but I’m spit-balling on how we can move past speculation to
> address concerns raised.
>

Considering that all of this happens in the open, on m.d.s.p., what are you
using to support your thinking that there's some undue influence? Do you
believe that if the title peer is removed, the relationship changes?
Between questions asked and concerns raised? You're not just spit-balling,
you're intimating that the speculation has a reasonable foundation that
requires redress, but you're not actually addressing why that speculation
is seen as reasonable. That things happen here, transparently, should
itself serve to demonstrate the speculation as unfounded. Further, the
influence or lack of influence is based on the discussions that happen
here, and regardless of any influence that may be perceived, the
community discussion that Wayne facilitates as Module Owner provides ample
opportunity to explore or influence in any other preferable direction.

But let's humour the specious reasoning here, and imagine there was some
undue influence on the peership
- One scenario is that such influence is exercised, and that there isn't a
public review or discussion phase to 'undo' that influence, and that's bad.
That's not a failure of peership though, that's a failure of Module
Ownership
- Another scenario is that such influence is exercised, and there is a
public review and discussion phase. If the result produced by that
influence is the same as the community expectation, then there's nothing
improper here. If the result produced by that influence is different from
the community expectation, then that can be corrected and identified during
the review and discussion phase, and such 'influence' is actually either
non-existent or equivalent to the same influence practiced by all
participating members of the community
- Another scenario is that there is no such influence, and the
participation and peership is identical to that of what the community
expects and concurs with.

It's almost as if influence is being conflated with consistency - that is,
if I'm expressing views that the community agrees with, I'm seen as
influential, while ignoring the fact that if I express views the community
disagrees with, the community is just as influential in calling that out. Do you
see the logical flaws here?

Re: Concerns with Dun & Bradstreet as a QIIS

2018-09-26 Thread Ryan Sleevi via dev-security-policy
Thanks for raising this, Ian.

The question and concern about QIIS is extremely reasonable. As discussed
in past CA/Browser Forum activities, some CAs have extended the definition
to treat Google Maps as a QIIS (it is not), as well as third-party WHOIS
services (they’re not; that’s using a DTP).

In the discussions, I proposed a comprehensive set of reforms that would
wholly remedy this issue. Given that the objective of OV and EV
certificates is nominally to establish a legal identity, and the legal
identity is derived from State power of recognition, I proposed that only
QGIS be recognized for such information. This wholly resolves differences
in interpretation on suitable QIIS.

However, to ensure there do not also emerge conflicting understandings of
appropriate QGIS - and in particular, since the BRs and EVGs recognize a
variety of QGIS’s with variable levels of assurance relative to the
information included - I further suggested that the determination of a QGIS
for a jurisdictional boundary should be maintained as a normative whitelist
that can be interoperably used and assessed against. If a given
jurisdiction is not included within that whitelist, or the QGIS is not on
it, it cannot be used. Additions to that whitelist can be maintained by the
Forum, based on an evaluation of the suitability of that QGIS for purpose,
and a consensus for adoption.
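
As a purely illustrative sketch of what such a normative whitelist could look
like as data (the jurisdiction keys and entries below are examples, not a
proposed list):

    # jurisdiction -> government information sources approved for that jurisdiction
    QGIS_WHITELIST = {
        "US-DE": ["Delaware Division of Corporations"],
        "GB": ["Companies House"],
    }

    def source_permitted(jurisdiction: str, source: str) -> bool:
        # If the jurisdiction is absent, or the source is not enumerated for it,
        # the source cannot be used - there is no CA-by-CA judgment call.
        return source in QGIS_WHITELIST.get(jurisdiction, [])

    assert source_permitted("GB", "Companies House")
    assert not source_permitted("US-DE", "Google Maps")   # not a QGIS, per the thread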

This would significantly reduce the risk, while also further reducing
ambiguities that have arisen from some CAs attempting to argue that
non-employees of the CA or QGIS, but which act as intermediaries on behalf
of the CA to the QGIS, are not functionally and formally DTPs and thus
subject to the assessment requirements of DTPs. This ambiguity is being
exploited in ways that can allow a CA to nominally say it checked a QGIS,
but is relying on the word of a third-party, and with no assurance of the
system security of that third party.

Do you think such a proposal would wholly address your concern?


Re: Google Trust Services Root Inclusion Request

2018-09-26 Thread Ryan Sleevi via dev-security-policy
On Wed, Sep 26, 2018 at 12:04 PM Jeremy Rowley 
wrote:

> I also should also emphasize that I’m speaking as Jeremy Rowley, not as
> DigiCert.
>
>
>
> Note that I didn’t say Google controlled the policy. However, as a module
> peer, Google does have significant influence over the policy and what CAs
> are trusted by Mozilla. Although everyone can participate in Mozilla
> discussions publicly, it’s a fallacy to state that a general participant
> has similar sway or authority to a module peer. That’s the whole point of
> having a separate class for peers compared to us general public.  With
> Google acting as a CA and module peer, you now have one CA heavily
> influencing who its competitors are, how its competitors operate, and what
> its competitors can do.  Although I personally find that you never misuse
> your power as a module peer, I can see how Jake has concerns that Google
> (as a CA) has very heavy influence over the platform that has historically
> been the CA watchdog (Mozilla).
>

Jeremy, I think this again deserves calling out, because this is
misrepresenting what module peership does, as well as the CA relationship.

I linked you to the definition of Module Ownership, which highlights and
emphasizes that the module peer is simply a recognized helper. To the
extent there is any influence, it is through the public discussions here.
If your concern is that the title confers some special advantage, that's to
misread what module peer is. If your concern is that the participation -
which provides solid technical arguments as well as the policy alternatives
- is influential, then what you're arguing against is public participation.

You're presenting these as factual, and that's misleading, so I'd like to
highlight what is actually entailed.


> The circumstances are different between the scenarios you describe with
> respect to the other browsers, as is market share.  If Microsoft wants to
> change CAs (and they already use multiple), they can without impacting
> public perception. If Apple wants to use another CA, they can without
> people commenting how odd it is that Apple doesn’t use the Apple CA. With
> Google controlling the CA and the Google browser, all incentive to
> eliminate any misbehaving Google CA disappears for financial reasons,
> public perception, and because Google can control the messaging (through
> marketshare and influence over Mozilla policy). Note that there is
> historical precedent for Google treating Google special – i.e. the
> exclusion for Google in the Symantec distrust plan.  Thus, I think Jake’s
> concerns should not be discarded so readily.
>

I can understand and appreciate why you have this perspective. I disagree
that it's an accurate representation, and as shown by the previous message,
it does not have factual basis. I think it's misleading to suggest that the
concerns are being discarded, much like yours - they're being responded to
with supporting evidence and careful analysis. However, they do not hold
water, and while it would be ideal to convince you of this as well, it's
equally important to be transparent about it.

Your argument above seems to boil down to "People would notice if Google
changed CAs, but not if Microsoft" - yet that's not supported (see,
for example, the usage of Let's Encrypt by Google, or the former usage of
WoSign by Microsoft). Your argument about incentives entirely ignores the
incentives I just described to you previously - which look at public
perception, internet security, and ecosystem stability. Your argument about
influence over Mozilla policy has already been demonstrated as false and
misleading, but it seems you won't be convinced by that. And your
suggestion of special treatment ignores the facts of the situation (the
validation issues, the scoping of audits, that Apple and two other CAs were
also included in the exclusion), ignores the more significant special
treatment granted by other vendors (e.g. Apple's exclusion of a host of
mismanaged Symantec sub-CAs now under DigiCert's operational control),
ignores the past precedent (e.g. the gradual distrust of WoSign/StartCom
through whitelists, and of CNNIC through whitelists), and ignores the
public discussion involved - so thoroughly that the suggestion is entirely
unfounded.

So I think your continued suggestion that it's being discarded so readily
is, again, misleading and inaccurate.


Re: Re: Google Trust Services Root Inclusion Request

2018-09-26 Thread Ryan Sleevi via dev-security-policy
Hi Richard,

A few corrections:

On Wed, Sep 26, 2018 at 11:36 AM Richard Wang via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Ryan mentioned WoSign/StartCom and 360, so I like to say some words.
>
> First, I think your idea is not a proper metaphor because 360 browser
> can't compare to Google browser, Google browser have absolutely strong
> market share to say YES/NO to all CAs, but I am sure not to Google CA.
>

That wasn't the comparison. I was more highlighting how you actively
misled (lied?) the community about the relationship between the
entities, by trying to argue that they are separate entities. While Google Trust
Services is a separate legal entity, which is about ensuring there is a
firewall between these organizations, my concern about bringing it up was
because of how you actively misled the community.


> Third, your comparison of Apple and Microsoft is also not correct, they
> use its own CA system for their own system use only, not for public, not to
> be a global public CA like Google.
>

I'm afraid this also misunderstands things. Microsoft does issue
certificates for end-users using its services (like Google). To the point
of the discussion, however, it was about the assumption and implication
that you cannot distrust an entity that operates a large web presence and
also a CA, or that browsers would play special favors to the CAs of their
properties, whether in-house or external. Both of these apply to all
browsers - arguably, even Mozilla (which uses certs from DigiCert as well,
either through the Amazon-branded sub-CA that DigiCert operates or directly
through DigiCert).


> Ryan, thank you for still remembering WoSign.
>

I think it will be very hard for the community to ever forget
https://wiki.mozilla.org/CA:WoSign_Issues


Re: Google Trust Services Root Inclusion Request

2018-09-26 Thread Ryan Sleevi via dev-security-policy
(While covered in https://wiki.mozilla.org/CA/Policy_Participants , I'm
going to emphasize that this response is in a personal capacity)

On Wed, Sep 26, 2018 at 12:10 AM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Jake's concern is legit if you believe certain assumptions. Criticizing his
> rationale doesn't seem correct, especially since Google does indeed have a
> root store. Although not traditional, Google runs a store of blacklisted
> CAs
> (see Symantec), which is every bit as effective as controlling CA
> compliance
> and operation as the inclusion process.
>

To be clear: Google does indeed operate root stores on a host of devices,
including Android and ChromeOS, not to mention Google Cloud functionality.


> FACT: As a browser, Google already interprets the CA/Browser requirements in
> many ways, intentionally or not. Google's policies, and how Google
> implements Chrome, are all closely watched by CAs and help dictate how we
> interpret the various requirements.
>


> This fact combined with the assumption that Google will never distrust
> itself jumps to a conclusion that Google will only interpret the BRs in
> Google CA's best interests. Why wouldn't they? Google is a for-profit
> company. Self-promotion is kind-of in the description.
>

The problem with this assumption, or at least what logically follows, is
that every Browser would behave the same, beneficent towards the CA(s)
they use for services. For example, Microsoft operates a root program, yet
is also trusted by Mozilla through the subordinate CAs provided through the
Baltimore Cybertrust hierarchy, which is owned by... DigiCert. Similarly,
Apple operates a root program, yet is also trusted by Mozilla through
subordinate CAs provided through the GeoTrust hierarchy, which is owned
by... DigiCert.

Google operates a root program, yet is also trusted by Mozilla through...
the acquisition of key material (from GlobalSign) and the operation of
independent roots.

If we accept this assumption as sound, then it seems the argument is that
DigiCert can also never be distrusted, and interpretations will always be
to the benefit of DigiCert, because to distrust DigiCert or take sanction
would be to disrupt 'critical' services like Azure or iTunes.

Alternatively, one could argue, or assume, that by virtue of Google
previously having operated a subordinate under the GeoTrust hierarchy
(DigiCert, formerly Symantec), it has demonstrated that it's possible to
operate as a subordinate or root. Google demonstrably took steps regarding
the Symantec hierarchy, even as it was the basis for trust of Google
services. In that model, if the Google CA were to take actions detrimental
to the ecosystem, Google may interpret the BRs in the best interests of
Internet trust and stability, distrust the Google CA, and force Google to
transition to an alternative solution precisely to avoid the alternative.

The problem here is these are all assumptions that rest on ignoring
pertinent details.


> FACT: Google is a module peer in Mozilla NSS, which means Google has
> significant influence over BR interpretation, the penalties related to CA
> mis-issuance, and the requirements a CA has for operating within the space.
> This gives one CA a lot of influence over how Mozilla treats the other
> CAs.
> The change in paradigm means a reasonable person (like Jake) could be
> concerned with potential obfuscation of problems, a loss of policy
> enforcement, and various other nefarious acts. I think most of us Mozilla
> users see Mozilla as a watch-dog of the Internet so this combination of
> Browser-CA-module peer reasonably causes unease.
>

Unfortunately, this FACT isn't correct - it doesn't reflect the Module
Ownership Governance System, which is covered in
https://www.mozilla.org/en-US/about/governance/policies/module-ownership/ .
A peer isn't the decision maker - that rests with the Owner, particularly
for matters like policy.

We could discuss the semantics of Google vs Google Trust Services, but I
fully acknowledge that it would go over about as well as WoSign vs StartCom
vs Qihoo 360.

We could discuss https://wiki.mozilla.org/CA/Policy_Participants and its
set of disclosures, but I'm sure some people will find that unsatisfying.

What is perhaps most relevant is to highlight the fact that these questions
of interpretation - of BRs or policies - happen on the list, that the
module owner is the decision maker, and that public participation is fully
welcomed, whether from peers or otherwise. That model - of transparency -
doesn't support the claims being presented here as 'fact', and instead
highlights them as the assumptions that they are.


Re: Visa Issues

2018-09-23 Thread Ryan Sleevi via dev-security-policy
On Thu, Sep 13, 2018 at 3:26 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Visa recently delivered new qualified audit reports for their eCommerce
> Root that is included in the Mozilla program. I opened a bug [1] and
> requested an incident report from Visa.
>
> Visa was also the subject of a thread [2] earlier this year in which I
> stated that I would look into some of the concerns that were raised. I've
> done that and have compiled the following issues list:
>
> https://wiki.mozilla.org/CA:Visa_Issues
>
> While I have attempted to make this list as complete, accurate, and factual
> as possible, it may be updated as more information is received from Visa
> and the community.
>
> I would like to request that a representative from Visa engage in this
> discussion and provide responses to these issues.
>
> - Wayne
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1485851
> [2]
>
> https://groups.google.com/d/msg/mozilla.dev.security.policy/NNV3zvX43vE/ns8UUwp8BgAJ


I've not seen Visa engage in this discussion. The silence is rather
deafening, and arguably unacceptably so.

With respect to the Qualified Audit, Visa's response as to the substance of
the issue is particularly unsettling.
https://bugzilla.mozilla.org/show_bug.cgi?id=1485851#c3 demonstrates that
they've not actually remediated the qualification, that they've further
failed to meet the BRs' requirements on revocation by any reasonable
measure, and that they don't even have a plan yet to remedy this issue.

Examining the bug itself is fairly disturbing, and the responses likely
reveal further BR violations. For example, the inability to obtain evidence
of domain validation information reveals that there are further issues with
2-7.3 - namely, maintaining those logs for 7 years. The response to 2-7.3
suggests that there are likely more endemic issues around the issuance.

Given the past issues, the recently identified issues (that appear to have
been longstanding), and the new issues that Visa's PKI Policy team is
actively engaging in, I believe it would be appropriate and necessary to
consider removing trust in this CA.


Re: Visa Issues

2018-09-13 Thread Ryan Sleevi via dev-security-policy
On Thu, Sep 13, 2018 at 3:26 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Visa recently delivered new qualified audit reports for their eCommerce
> Root that is included in the Mozilla program. I opened a bug [1] and
> requested an incident report from Visa.
>
> Visa was also the subject of a thread [2] earlier this year in which I
> stated that I would look into some of the concerns that were raised. I've
> done that and have compiled the following issues list:
>
> https://wiki.mozilla.org/CA:Visa_Issues
>
> While I have attempted to make this list as complete, accurate, and factual
> as possible, it may be updated as more information is received from Visa
> and the community.
>
> I would like to request that a representative from Visa engage in this
> discussion and provide responses to these issues.
>
> - Wayne
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1485851
> [2]
>
> https://groups.google.com/d/msg/mozilla.dev.security.policy/NNV3zvX43vE/ns8UUwp8BgAJ


Compared to the seriousness and scope of these issues, this is by far a
minor correction, and does not undermine any of the evaluation. However, as
a pedantic note, it's noted as "PITRA" while stating "Point in Time audit".
A point-in-time readiness assessment is for management's eyes only, while
the report provided is just a Point in Time audit. I think deleting the
parenthetical PITRA and consistently using "Point in Time audit" is
sufficient.


Re: DRAFT September 2018 CA Communication

2018-09-07 Thread Ryan Sleevi via dev-security-policy
On Fri, Sep 7, 2018 at 9:55 AM Bruce via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thursday, September 6, 2018 at 7:44:15 PM UTC-4, Wayne Thayer wrote:
> > All,
> >
> > I've drafted a new email and survey that I hope to send to all CAs in the
> > Mozilla program next week. it focuses on compliance with the new (2.6.1)
> > version of our Root Store Policy. I would appreciate your feedback on the
> > draft:
> >
> >
> https://ccadb-public.secure.force.com/mozillacommunications/CACommunicationSurveySample?CACommunicationId=a051J3rMGLL
> > <
> https://ccadb-public.secure.force.com/mozillacommunications/CACommunicationSurveySample?CACommunicationId=a051J3mogw7
> >
> >
> > Thanks,
> >
> > Wayne
>
> With regard to the actions.
>
> ACTION 6 - Can we select CA certificates which we do not want pre-loaded?
> In some cases the CA certificate is no longer used and does not need
> pre-loading.
>
> ACTION 7 - Although we support the Chrome CT requirement, we do have a
> process to allow customers to choose not to CT log their certain SSL
> certificates. We do not redact names, but I suppose we allow a customer to
> redact certificates. As such, I don't think the responses listed in action
> 7 covers this model.
>

Correct. Chrome does not require 100% disclosure of certificates - however,
it does not trust undisclosed certificates. However, as Certificate
Transparency (RFC 6962) also supports non-embedding methods - such as TLS
or OCSP - it should also not be presumed that disclosure occurs at the time
of issuance. In particular, a site operator may choose not to have the CA
embed SCTs through the disclosure of precertificates, and instead disclose
them at a time prior to making those certificates operational. As a
concrete example, Google does this for some of its certificates.
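
As a side note for CAs evaluating this: here's a minimal sketch of checking
whether a given certificate embeds SCTs, using the Python 'cryptography'
package (the helper name is mine, purely illustrative). A certificate
without the embedded-SCT extension would need its SCTs delivered via the
TLS extension or stapled OCSP responses instead:

    from cryptography import x509

    def has_embedded_scts(cert: x509.Certificate) -> bool:
        # Look for the precertificate SCT list extension; if absent, any
        # SCTs must arrive via TLS or OCSP rather than in the certificate.
        try:
            ext = cert.extensions.get_extension_for_class(
                x509.PrecertificateSignedCertificateTimestamps)
        except x509.ExtensionNotFound:
            return False
        return len(list(ext.value)) > 0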

If Mozilla is pursuing a 100% disclosure rule, while permitting
non-preloaded intermediates, then it seems this suggests that Mozilla's
desired configuration is that customers who opt out of disclosure do
so via a dedicated intermediate.

That is, one can imagine you, today, have
Root
Intermediate
(Disclosed Leaf) (Disclosed Leaf) (Non-Disclosed Leaf)

In a model "tomorrow", you would have
Root
Standard Intermediate
(Disclosed Leaf) (Disclosed Leaf)

Private Intermediate
(Non-Disclosed Leaf)

That is, the choice of a customer to not affirmatively disclose would seem
to necessitate a dedicated hierarchy in order to meet Mozilla's objectives.
Wayne, is that the intent? Is there a phase-in time for CAs to establish
such a hierarchy?


Re: Audit Reminder Email Summary

2018-09-04 Thread Ryan Sleevi via dev-security-policy
On Mon, Sep 3, 2018 at 8:54 AM reinhard.dietrich--- via dev-security-policy
 wrote:

> Dear all,
> as already mentioned above, qualified auditors (nat person/organization)
> have been selected which fulfil the points as listed in our previous
> response. The auditors fulfilled these relevant requirements. Even the
> organization of TÜV AUSTRIA CERT was accredited according to ISO 17065 by
> that time –the only thing missing was the formal acknowledgement of the
> Austrian Federal Ministry for Digital and Economy Affairs (BMDW) for
> amending that accreditation by the ETSI ENs.
>

I think this is an incredibly important and meaningful distinction, for all
CAs.

It appears that, even today, September 4, there's no way to independently
confirm or assess TUV AUSTRIA CERT's accreditation to ISO 17065 in any of
the related or appropriate standards. If a WebTrust audit was presented by
an auditor who was not licensed by WebTrust, or could not be independently
assessed as such, it would absolutely be rejected - and CPA Canada/AICPA
would likely pursue action through misuse of the term WebTrust.

That ETSI lacks this is already problematic, but equally problematic is the
lack of demonstration through the agreed-upon method of determining an
auditor's competence to perform audits. The process that is recognized for
the ETSI norms is to look through european-accreditation.org to determine
the NAB, and from the NAB, determine the appropriate CABs. With respect to
Section 8.2, I'm having trouble understanding how the information presented
can meet the requirements of Item 2, 4, and 6. As my previous mail
suggested, the ability to determine that, independently, has been lacking
since April.

> Related to our own SwissSign audit which was required to be performed as
> soon as possible, we decided to ask the browsers before the audit was
> started, whether they would accept the audit performed by our auditors
> under that circumstances described above. Based on the Mozilla Root Policy,
> clause 3.2, para 2 Mozilla can decide to accept the auditor. On top we
> considered that as the auditors are well known in the community and have
> long term experience auditing several CA included in the Root Stores
> according ETSI and BRG, therefore they should easily be accepted to perform
> our audit.


My understanding here is that your use of the term "auditors" refers to the
individuals, not to the organization TUV Austria, is that correct?


> We discussed that with Mozilla and Microsoft and both finally agreed to a
> one time exception so that we decided to start the audit project. That
> given exception included an agreement that the Audit Attestations will be
> re-issued now, after the formal accreditation process is finalized – which
> will happen during the next few weeks. All the Browsers will receive an
> updated Audit Attestation then referring the amended accreditation
> documentation.
> On top of that and as already mentioned above, we will repeat all the
> audits during the next weeks in order to start over and synchronize the
> audit period for the complete PKI of SwissSign. At this time the expansion
> of TÜV AUSTRIA CERTS accreditation according ISO 17065 and  ETSI EN 319 403
> will certainly be visible.
>

As it stands, the information for TUV AUSTRIA CERTS still does not appear
to meet the requirements of Section 8.2. I appreciate that the promise is
that it's forthcoming, that it's BMDW's fault, that everything surely is in
order, but you can surely see how from an objective and consistent
application of policies, this fails to meet Section 8.2 even to this day.


Re: Audit Reminder Email Summary

2018-08-29 Thread Ryan Sleevi via dev-security-policy
On Mon, Aug 27, 2018 at 2:25 AM reinhard.dietrich--- via
dev-security-policy  wrote:

> Dear all
>
> This is a joint answer to Waynes' request.
>
> it was mentioned that the audit period was exceeded. We would like to
> explain the situation and what was undertaken to avoid such situation again.
>
> We all are aware that the audit period was exceeded by two months. However
> the conducted audit from April 2018 also covers the 2 months extension.  As
> you already mentioned, the reason is that SwissSign decided to change the
> auditors after 12 years for quality assurance reason. Additionally, it is a
> best practice within the IT security world or the financial sector to
> change the external auditors on a regular base.
> During the process, SwissSign defined some criteria in order to choose the
> new auditors these are:
>
> The auditors shall:
> - possess a knowledge and experience since years performing PKI audits as
> a full time job.
>
> - have experience in auditing different international CA which are also
> included in the Root Stores.
>
> - take their time to understand in detail the processes, infrastructure,
> implementations, etc. of SwissSign.
> - support SwissSign being conform over time between the annual audits,
> e.g. by pre-assessments of new solutions/processes/applications before
> these are going in live production.
> - be well known in the community.
> - be active in international and national working groups in order to keep
> their knowledge about requirements up-to-date.
> - are /going to be accredited to perform audit according the relevant
> standards.
> - Fullfills requirement according to BR 8.2 and BR 8.3
>

Could you explain a bit more about your selection process? The auditors you
selected did not meet BR 8.2 at the time you selected them, hence the
e-mail to multiple browser programs to accept audits from a non-accredited
entity, on the promise that the principals involved were respected and that
accreditation would be coming "soon".

In April, an attempt was made to determine the accreditation status of TUV
Austria Cert GMBH. Based on EA [1], the NAB for Austria is Akkreditierung
Austria, within the Federal Ministry for Digital and Economic Affairs
(BMDW). The list of CABs accredited by the NAB is available at [2].

Looking at the BRs, the expectation would be that the auditor is accredited
to ISO 17065:2012 with the application of ETSI EN 319
403. Looking at the most recently updated list of Certification Bodies of
Products, Processes, and Services, according to EN ISO/IEC 17065:2012, TUV
Austria Cert GMBH's certification is [4], which doesn't cover that norm.

Looking at the eIDAS list of CABs [5], which is informational but updated
as of 2018-07-27, I don't see TUV Austria under AA either.

So it doesn't appear to meet either ISO/IEC 17065 or the eIDAS norms (of
which 17065 is effectively a pre-requisite).

Could you help me understand more how TUV Austria Cert GMBH meets the
criteria of BR 8.2 and 8.3?

[1] http://www.european-accreditation.org/ea-members
[2]
https://www.bmdw.gv.at/TechnikUndVermessung/Akkreditierung/Seiten/AkkreditiertePIZ-Stellen.aspx
[3]
https://www.bmdw.gv.at/TechnikUndVermessung/Akkreditierung/Documents/product%20certification%20bodies.pdf
[4]
https://www.bmdw.gv.at/TechnikUndVermessung/Akkreditierung/Documents/AA_0943_17065_TueV_AUSTRIA_CERT_GMBH.pdf
[5]
https://ec.europa.eu/futurium/en/content/list-conformity-assessment-bodies-cabs-accredited-against-requirements-eidas-regulation
[6]
https://ec.europa.eu/futurium/en/system/files/ged/list_of_eidas_accredited_cabs-2018-07-27.pdf


Re: Google Trust Services - Minor SCT issue disclosure

2018-08-23 Thread Ryan Sleevi via dev-security-policy
On Thu, Aug 23, 2018 at 8:50 AM, Andy Warner via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> * NOTE: The bug was due to an 'if/else' chain fall through. The code in
> question has been refactored to be simpler and more readable.
>

Andy,

It might be good for the community if you could describe the processes
before and after the change, so that other CAs can help prevent similar
issues with their own embedding systems.
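
Since the refactoring itself isn't described beyond the note above, here's a
purely hypothetical sketch of the general failure mode and one way to fail
closed - none of these names or values come from GTS's actual code:

    # Hypothetical illustration only - not GTS's code.
    EV_LOG_SET = ["log-a", "log-b"]   # placeholder log identifiers
    OV_LOG_SET = ["log-a", "log-c"]
    DV_LOG_SET = ["log-b", "log-c"]

    def select_ct_logs_buggy(profile):
        if profile == "ev":
            return EV_LOG_SET
        elif profile == "ov":
            return OV_LOG_SET
        # Any unhandled profile (e.g. a newly added "dv") falls through to
        # here, so issuance proceeds without the intended embedded SCTs.
        return []

    def select_ct_logs_fixed(profile):
        log_sets = {"ev": EV_LOG_SET, "ov": OV_LOG_SET, "dv": DV_LOG_SET}
        if profile not in log_sets:
            # Fail closed: refuse to issue rather than silently omitting SCTs.
            raise ValueError("no CT log set configured for profile %r" % profile)
        return log_sets[profile]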


Re: Telia CA - problem in E validation

2018-08-20 Thread Ryan Sleevi via dev-security-policy
On Mon, Aug 20, 2018 at 4:06 AM, pekka.lahtiharju--- via
dev-security-policy  wrote:

> In our implementation E value in our certificates was "true" if it passed
> our technical and visual verification. If the BR requirement is to do "any"
> verification for E then the verification techniques we used should be OK.
> We think that BR has meant that both OU and E are based on values defined
> by Applicant and it is not mandatory to do any email send/response
> verification. How do you conclude that BR words "has been verified by the
> CA" actually means that some email has to be sent? In our opinion E is just
> a support email address and its verification is not similar to important
> subject fields like O,L or C but can be compared to OU verification.


The BRs exclusively detail, with only one exception, how to ensure the
information presented in the certificate is accurate (c.f. 7.1.4.2), and
that the information is factual (c.f. 4.2.1) and with a verification
process (c.f. 3.2.2).

Could you describe where in your CP/CPS your procedures for email
validation were documented?


Re: DEFCON Talk - Lost and Found Certificates

2018-08-16 Thread Ryan Sleevi via dev-security-policy
On Wed, Aug 15, 2018 at 11:41 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 14/08/2018 02:10, Wayne Thayer wrote:
> > I'd like to call this presentation to everyone's attention:
> >
> > Title: Lost and Found Certificates: dealing with residual certificates
> for
> > pre-owned domains
> >
> > Slide deck:
> >
> https://media.defcon.org/DEF%20CON%2026/DEF%20CON%2026%20presentations/DEFCON-26-Foster-and-Ayrey-Lost-and-Found-Certs-residual-certs-for-pre-owned-domains.pdf
> >
> > (NOTE: this PDF loads in Firefox, but not in Safari and not, I'm told, in
> > Chrome's native PDF viewer).
> >
> > Demo website: https://insecure.design/
> >
> > The basic idea here is that domain names regularly change owners,
> creating
> > "residual certificates" controlled by the previous owner that can be used
> > for MITM. When a bunch of unrelated websites are thrown into the same
> > certificate by a service provider (e.g. CDN), then this also creates the
> > opportunity to DoS the sites by asking the CA to revoke the certificate.
> >
> > The deck includes some recommendations for CAs.
> >
> > What, if anything, should we do about this issue?
> >
> > - Wayne
> >
>
> Suggested corrective processes that may be added to BRs, Mozilla
> policies or similar, and which the relevant parties (CAs and browsers)
> can begin to implement before they are standardized, as none contradict
> current policies, and several require coding and testing.  Backend
> suppliers (such as ejbCA and NSS) will probably be doing most of the work
> for the smaller players.
>
> 1. Browser members of CAB/F MUST do revocation checking, revocation
>being semi- or completely disabled in browsers is a glaring security
>hole that also affects these scenarios.  Browsers MUST support OCSP,
>CRL and other (future?) revocation protocols, in order to work
>securely with a heterogeneous mix of public CAs (that currently must
>run OCSP) and non-public offline organizational CAs.  Certificate
>client libraries made for/by major players should do the same, so they
>can be used in minor clients such as server side https clients and
>SMTP sending implementations.


The profound harm this would cause to the ecosystem is not worth
considering further. I am not sure why there is such a strong and
supportive view of CA-led censorship, but given these issues have been
repeatedly pointed out - that such a system gives a few dozen entities
complete and total censorship control of the Internet - it likely does not
bear further discussion as being at all viable without substantially larger
improvements.

> 2. When updating a CDN-style multi-SANs certificate and the replacement
>omits at least one of the previous SANs, CAs must revoke the old cert
>versions after a short administrative delay that allows the CDN to
>deploy the replacement cert.  Because this is not hard evidence of
>certificate invalid/misissued (this is a voluntary retraction due to
>non-compromise), something like 72 hours would be fine unless a
>faster revocation is explicitly requested by the previous cert holder
>(the CDN), the domain owner or any other relevant entity.


This is already required of the BRs, at 24 hours, and should remain so.

> 3. When updating a normal multi-SAN certificate (less than 3 different
>directly-below public-suffix DNS labels) always ask the certificate
>holder if and how quickly they want the old certificate voluntarily
>revoked (again no presumption of misissuance or compromise, domain
>owner may simply be regrouping his servers, rotating SANs between
>certificates from multiple CAs).  Also, with some CAs, the updating
>process is identical to the process for getting duplicate certs
>corresponding to different server end HSMs/TLS accelerators with an
>explicit intent to keep both certs valid for years.
> Unless of course a faster revocation is explicitly requested by the
>previous cert holder, the domain owner or any other relevant entity.
>
> For example a certificate with the following SANs would fall under
>this more permissive rule:
>   example.com
>   www.example.com
>   static.example.com
>   mail.example.com
>   example.org
>   www.example.org
>   example.net
>   www.example.net
>   example.co.uk
>   web.example.co.uk
>   example.blogblog.com
>   beispiel.de
>   www.beispiel.de
>   eksempel.no
>   www.eksempel.no
> The labels directly below public suffix in this cert are "example",
> "beispiel" and "eksempel" totaling the maximum 3.  In a real case
> these would typically be names associated with a single real world
> entity that has registered its domains under a bunch of available
> suffixes, however the counting to 3 rule is easier to explain and
> enforce than subjective rules about companies and trademarks.  (Hint:
> In this example, the 3 

Re: DEFCON Talk - Lost and Found Certificates

2018-08-15 Thread Ryan Sleevi via dev-security-policy
On Mon, Aug 13, 2018 at 8:10 PM, Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I'd like to call this presentation to everyone's attention:
>
> Title: Lost and Found Certificates: dealing with residual certificates for
> pre-owned domains
>
> Slide deck:
> https://media.defcon.org/DEF%20CON%2026/DEF%20CON%2026%
> 20presentations/DEFCON-26-Foster-and-Ayrey-Lost-and-
> Found-Certs-residual-certs-for-pre-owned-domains.pdf
>
> (NOTE: this PDF loads in Firefox, but not in Safari and not, I'm told, in
> Chrome's native PDF viewer).
>
> Demo website: https://insecure.design/
>
> The basic idea here is that domain names regularly change owners, creating
> "residual certificates" controlled by the previous owner that can be used
> for MITM. When a bunch of unrelated websites are thrown into the same
> certificate by a service provider (e.g. CDN), then this also creates the
> opportunity to DoS the sites by asking the CA to revoke the certificate.
>
> The deck includes some recommendations for CAs.
>
> What, if anything, should we do about this issue?
>

From the recommendations:
- CAs could only issue short lived certificates
- CAs should not issue certificates valid for longer than domain
registration

It seems like both of those could/should be required of CAs via policy -
for example, aligning validity at 395 days (13 months). Not all registrars
provide the registration period information, so a fixed maximum seems the
safer, more reliable means.

Note that the use of cached validations also adds a further wrinkle - this
doubles the effective risk window. That is, a domain validation that occurs
on Day 1 can be used on Day 825 to issue a certificate valid for 825 days -
an effective window of 1,650 days - or 4.5 years.
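
As a back-of-the-envelope check of that arithmetic (the constants reflect my
reading of BR 4.2.1 and 6.3.2 as they stood at the time):

    # Reuse a domain validation at the very end of its permitted reuse
    # window to issue a maximum-lifetime certificate.
    VALIDATION_REUSE_DAYS = 825    # BR 4.2.1 reuse period at the time
    MAX_VALIDITY_DAYS = 825        # BR 6.3.2 maximum validity at the time

    effective_window_days = VALIDATION_REUSE_DAYS + MAX_VALIDITY_DAYS
    print(effective_window_days, "days, or roughly",
          round(effective_window_days / 365, 1), "years")
    # -> 1650 days, or roughly 4.5 years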

I think it'd also be worth exploring the dataset with the authors, to see
and compare the overlap bucketed by validity periods. That is, they report
1.5M domains with pre-existing certificates (25% unexpired), and 7M domains
sharing a certificate with a 'bygone' domain (41% unexpired). It would be
hugely beneficial to the discussion to take the set of unexpired
certificates, bucket them into validity windows (90D, 1Y, 2Y, 825 day, 3Y),
and then see the overlap of Bygone domains.
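
A rough sketch of that bucketing, assuming the dataset exposes
notBefore/notAfter for each unexpired certificate - the bucket boundaries
here are my own approximations of the windows named above:

    from collections import Counter
    from datetime import timedelta

    # Rough bucket boundaries (in days) for the validity windows named above.
    BUCKETS = [("90D", 90), ("1Y", 398), ("2Y", 731), ("825D", 825), ("3Y", 1095)]

    def bucket(not_before, not_after):
        validity = not_after - not_before
        for label, days in BUCKETS:
            if validity <= timedelta(days=days):
                return label
        return ">3Y"

    def overlap_by_bucket(unexpired_bygone_certs):
        # Each item is expected to expose .not_valid_before / .not_valid_after.
        return Counter(bucket(c.not_valid_before, c.not_valid_after)
                       for c in unexpired_bygone_certs)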

A third suggestion is also raised: "Be careful with subject alt-names".
While some may wish to distinguish between service providers and
non-service providers, as a practical matter, it's not going to be a
bright line (see: the same discussions around CAs vs resellers vs service
providers vs users). If we're to take meaningful steps there, it might be
better to look at the practice of including multiple SANs.

We know cloud providers are increasingly turning down SNI-less connections.
As a bellwether for the industry, cloud providers are incentivized to
maximize the number of connections they can handle for their customers, and
if they're not seeing significant negative impact from requiring SNI,
perhaps its worth discussing a sunset for multiple-SAN certificates within
the next several years.


Re: Misissuance and BR Audit Statements

2018-08-15 Thread Ryan Sleevi via dev-security-policy
Wayne,

Thanks for raising this. I definitely find it surprising to see nothing
noted on Comodo's report, as you call out.

As another datapoint, consider this recent audit that is reported to be
from DigiCert, by way of Amazon Trust Services' providing the audits for
their externally operated sub-CAs in [A]. The scope of the WebTrust BR
audit report in [B] contains in its scope "DigiCert ECC Extended Validation
Server CA" of
hash FDC8986CFAC4F35F1ACD517E0F61B879882AE076E2BA80B77BD3F0FE5CEF8862,
which [C]. During that time, this CA issued a cert [D] as part of their
improperly configured Onion issuance in [E], which was remediated in early
March, within the audit period for [B]. I couldn't find it listed in the
report.

Looking over that period, there were two other (resolved) DigiCert issues,
[F] and [G], which affect the CAs listed in scope of [B].

I was a bit surprised by this as, like you, I would have expected these to
be called out by both Management's Assertion and the auditor.
http://www.webtrust.org/practitioner-qualifications/docs/item85808.pdf
provides some of the illustrative reports, but it appears to only provide
templates for management on the result of obtaining a qualified report.

[A] https://bugzilla.mozilla.org/show_bug.cgi?id=1482930
[B] https://bug1482930.bmoattachments.org/attachment.cgi?id=8999669
[C] https://crt.sh/?id=23432431
[D] https://crt.sh/?id=351449246
[E] https://bugzilla.mozilla.org/show_bug.cgi?id=1447192
[F] https://bugzilla.mozilla.org/show_bug.cgi?id=1465600
[G] https://bugzilla.mozilla.org/show_bug.cgi?id=1398269#c29

On Tue, Aug 7, 2018 at 1:32 PM, Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Given the number of incidents documented over the past year [1][2] for
> misissuance and other nonconformities, I would expect many of the 2018
> period-of-time WebTrust audit statements being submitted by CAs to include
> qualifications describing these matters. In some cases, that is exactly
> what we’re seeing. One of many positive examples is Deloitte’s report on
> Entrust [3] that includes 2 of the 3 issues documented in Bugzilla.
>
> Unfortunately, we are also beginning to see some reports that don’t meet my
> expectations. I was surprised by GlobalSign’s clean reports [4] from Ernst
> & Young, but after examining their incident bugs, it appears that the only
> documented misissuance that occurred during their audit period was placing
> metadata in Subject fields. I can understand how this could be regarded as
> a minor nonconformity rather than a qualification, but I would have liked
> to at least see the issue noted in the reports.
>
> Ernst & Young’s clean reports on Comodo CA [5] is the example that prompted
> this message. We have documented the following issues that occurred during
> Comodo’s last audit period:
> * Misissuance using "CNAME CSR Hash 2" method of domain control validation
> (bug 1461391)
> * Assorted misissuances and failure to respond to an incident report within
> 24 hours (bug 1390981)
> * CAA misissuance (bugs 1398545,1410834, 1420858, and 1423624 )
>
> I would like to know if Comodo reported these issues to EY. I asked Comodo
> this question four weeks ago [6] but have not received a response.
>
> I will acknowledge that ETSI audits are an even bigger problem (Actalis and
> SwissSign are recent examples [7][8][9]). Due to the structure of those
> audits, there is no provision for issuing a qualified report. WebTrust
> audits are theoretically much better in this regard, but only if auditors
> actually find and report on issues! I don’t think it is productive to
> expect auditors to search Bugzilla for a list of issues to copy into their
> reports, but I do think it is reasonable to question the competence and
> trustworthiness of the auditor when so many known issues are absent from
> their report.
>
> In this particular example, unless additional facts are presented, I plan
> to notate the auditor’s record in CCADB with this issue. We have documented
> a number of other issues with Ernst & Young - including the
> disqualification of their Hong Kong branch - but this is the first issue
> I’m aware of from their New York office. We also recently received a very
> “good” qualified audit report from EY’s Denmark office on Telia [10].
>
> - Wayne
>
> [1] https://wiki.mozilla.org/CA/Incident_Dashboard
> [2] https://wiki.mozilla.org/CA/Closed_Incidents
> [3]
> https://www.entrustdatacard.com/-/media/documentation/
> licensingandagreements/entrust_baselinerequirements_2018.pdf?la=en=
> BC08BAF5AE81B2EE66A2146EE7710FB2F4F33BA6
> [4] https://bugzilla.mozilla.org/show_bug.cgi?id=1388488
> [5] https://bugzilla.mozilla.org/show_bug.cgi?id=1472993
> [6] https://bugzilla.mozilla.org/show_bug.cgi?id=1472993#c5
> [7] https://www.actalis.it/documenti-en/actalisca_audit_
> statement_2018.aspx
> [8]
> https://it-tuv.com/wp-content/uploads/2018/07/AA2018070301_
> Audit_Attestation_TA_CERT__SwissSign_Platinum_G2_signed.pdf

Re: How to submit WebTrust audits in CCADB

2018-08-09 Thread Ryan Sleevi via dev-security-policy
Thanks for the update, Kathleen.

This is truly unfortunate, and unquestionably does harm to the value and
brand of the WebTrust Seal, rather than providing value.

On Thu, Aug 9, 2018 at 7:19 PM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All,
>
> In their effort to better protect WebTrust seals, CPA Canada has made it
> so we can no longer access WebTrust pdf files directly from the CCADB.
>
> I received the following response when inquiring about this.
> “”
> Thank you for contacting Chartered Professional Accountants of Canada.
> You can no longer link directly to PDF documents. You will need to go to
> the registered website where the seal is provided and click on the seal to
> obtain the document (e.g. audit report).
> Also, we are now enforcing the domain requirement when a seal is opened.
> Domain enforcement is essential to the program to prevent fraudulent use.
> It ensures that the WebTrust seals will only function on the certificate
> authority’s websites.
> If a seal is opened from a non-registered domain or other source (e.g.
> email, internal lists, etc.) the seal will not load and will display a
> notice indicating that the domain is not valid.
> “”
>
> Therefore, for the foreseeable future, please do the following when
> creating an Audit Case in the CCADB for WebTrust audits.
>
> 1) Make the PDFs of the audit statements available directly on your CA's
> website.
> OR
> Upload your audit statement PDF files to Bugzilla, as described here:
> https://ccadb.org/cas/fields#uploading-documents
>
> 2) For the audit statement link in your CCADB Audit Case either provide
> the URL to the PDF on your CA's website, or use the link to the document in
> Bugzilla.
>
> 3) Add a Audit Case Comment to indicate the URL where the WebTrust seals
> may be found on your CA’s website.
>
> 4) When you run the Audit Letter Validation (ALV), you can ignore the
> “Cleaned=Fail” ALV result. I will check the seal on your website manually,
> and add a comment to the Audit Case.
>
>
> Also, the cert.webtrust.org audit links that are currently in the root
> cert records and the intermediate cert records in the CCADB no longer work
> either. Fortunately we started archiving audit statements this year. So you
> can scroll down to the “File Archive…” section of the record, and you will
> be able to find the stored audit pdfs.
>
> Thanks,
> Kathleen
>
>


Re: GoDaddy Revocations Due to a Variety of Issues

2018-08-09 Thread Ryan Sleevi via dev-security-policy
On Thu, Aug 9, 2018 at 8:24 AM, Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Fri, 20 Jul 2018 21:38:45 -0700
> Peter Bowen via dev-security-policy
>  wrote:
>
> >  https://crt.sh/?id=294808610=zlint,cablint is one of the
> > certificates.  It is not clear to me that there is an error here.
> > The DNS names in the SAN are correctly encoded and the Common Name in
> > the subject has one of the names found in the SAN.  The Common Name
> > contains a DNS name that is the U-label form of one of the SAN
> > entries.
> >
> > It is currently undefined if this is acceptable or unacceptable for
> > certificates covered by the BRs.  I put a CA/Browser Forum ballot
> > forward a while ago to try to clarify it was not acceptable, but it
> > did not pass as several CAs felt it was not only acceptable but is
> > needed and desirable.
>
> It would be helpful if any such CAs can tell us why this was "needed and
> desirable" with actual examples.
>
> Since the CN field in Web PKI certs always contains information
> duplicated from a field that has been better defined for decades I'm
> guessing in most cases the cause is crappy software. But if we know
> which software is crappy we can help get that fixed rather than
> muddling along forever.


This information is readily available in the discussions for CA/Browser
Forum Ballot 202 -
https://cabforum.org/2017/07/26/ballot-202-underscore-wildcard-characters/
- which would have unambiguously specified and clarified this.

The following CAs voted against: Buypass, CFCA, DocuSign France, Entrust,
GDCA, GlobalSign, SHECA

Buypass - https://cabforum.org/pipermail/public/2017-July/011744.html
CFCA - https://cabforum.org/pipermail/public/2017-July/011733.html
Docusign - https://cabforum.org/pipermail/public/2017-July/011708.html
Entrust - https://cabforum.org/pipermail/public/2017-July/011747.html
GlobalSign - https://cabforum.org/pipermail/public/2017-July/011692.html
GDCA - https://cabforum.org/pipermail/public/2017-July/011736.html
SHECA - https://cabforum.org/pipermail/public/2017-July/011737.html

You can see that not all objections are strictly related to the matter at
hand, but hopefully this provides you further information.
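
For readers less familiar with the A-label/U-label distinction described in
the quoted report, a small illustrative sketch using the Python 'idna'
package (the domain is made up):

    import idna

    # SAN dNSNames carry the ASCII-compatible (A-label) form; the
    # certificates in question carried the Unicode (U-label) form in the CN.
    a_label = idna.encode("bücher.example").decode("ascii")  # 'xn--bcher-kva.example'
    u_label = idna.decode(a_label)                           # 'bücher.example'

    # The CN does not byte-match any SAN entry, which is the ambiguity the
    # ballot attempted to resolve.
    print(a_label, u_label, a_label == u_label)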


Re: localhost.megasyncloopback.mega.nz private key in client

2018-08-09 Thread Ryan Sleevi via dev-security-policy
Unfortunately, that's not correct. The CA/Browser Forum has passed no such
resolution, as can be seen at https://cabforum.org/ballots/ .

I believe you're confusing this with the discussion from
https://github.com/mozilla/pkipolicy/issues/98, which highlighted that the
BRs' section 4.9.3 requires clear instructions for reporting key compromise.
That language has existed since BRs version 1.3.0 (the conversion to RFC
3647 format).

Alternatively, you may be confusing this discussion with
https://wiki.mozilla.org/CA/Communications#November_2017_CA_Communication ,
which required CAs to provide a tested email address for a Problem
Reporting Mechanism. However, as captured in Issue 98, this did not result
in a normative change to the CP/CPS.

On Wed, Aug 8, 2018 at 10:22 PM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> IIRC we recently passed a CABF ballot that the CPS must contain
> instructions
> for submitting problem reports in a specific section of its CPS, in an
> attempt
> to solve problems like this.  This winter or early spring, if my memory is
> correct.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy 
> On
> > Behalf Of Alex Cohn via dev-security-policy
> > Sent: Wednesday, August 8, 2018 4:01 PM
> > To: ha...@hboeck.de
> > Cc: mozilla-dev-security-pol...@lists.mozilla.org;
> ssl_ab...@comodoca.com;
> > summern1...@gmail.com
> > Subject: Re: localhost.megasyncloopback.mega.nz private key in client
> >
> > On Wed, Aug 8, 2018 at 9:17 AM Hanno Böck  wrote:
> >
> > >
> > > As of today this is still unrevoked:
> > > https://crt.sh/?id=630835231=ocsp
> > >
> > > Given Comodo's abuse contact was CCed in this mail I assume they knew
> > > about this since Sunday. Thus we're way past the 24 hour in which they
> > > should revoke it.
> > >
> > > --
> > > Hanno Böck
> > > https://hboeck.de/
> >
> >
> > As Hanno has no doubt learned, the ssl_ab...@comodoca.com address
> > bounces.
> > I got that address off of Comodo CA's website at
> > https://www.comodoca.com/en-us/support/report-abuse/.
> >
> > I later found the address "sslab...@comodo.com" in Comodo's latest CPS,
> > and forwarded my last message to it on 2018-08-05 at 20:32 CDT (UTC-5). I
> > received an automated confirmation immediately afterward, so I assume
> > Comodo has now known about this issue for ~70 hours now.
> >
> > crt.sh lists sslab...@comodoca.com as the "problem reporting" address
> for
> > the cert in question. I have not tried this address.
> >
> > Comodo publishes at least three different problem reporting email
> addresses,
> > and at least one of them is nonfunctional. I think similar issues have
> come up
> > before - there's often not a clear way to identify how to contact a CA.
> Should
> > we revisit the topic?
> >
> > Alex


Re: AC Camerfirma's organizationName too long incident report

2018-08-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Aug 8, 2018 at 8:13 AM, Juan Angel Martin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello,
>
> We detected 5 certificates issued with ERROR: organizationName too long
> (X.509 lint)
>
> 1. How your CA first became aware of the problem (e.g. via a problem
> report submitted to your Problem Reporting Mechanism, a discussion in
> mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
> the time and date.
>
> We detected these certificates checking the CA issued certificates into
> crt.sh on August 3, 2018.
>
> 2. A timeline of the actions your CA took in response. A timeline is a
> date-and-time-stamped sequence of all relevant events. This may include
> events before the incident was reported, such as when a particular
> requirement became applicable, or a document changed, or a bug was
> introduced, or an audit was done.
>
> 2018-08-03 09:58 UTC --> We detected these 5 certificates and asked the
> team that manages them to revoke them.
> 2018-08-03 15:35 UTC --> All the certificates are revoked.
>
> 3. Whether your CA has stopped, or has not yet stopped, issuing
> certificates with the problem. A statement that you have will be considered
> a pledge to the community; a statement that you have not requires an
> explanation.
>
> The issuance of certificates from this CA was suspended until the
> operational control was deployed.
>
>
> 4. A summary of the problematic certificates. For each problem: number of
> certs, and the date the first and last certs with that problem were issued.
>
> https://crt.sh/?id=617995390
> https://crt.sh/?id=606954201
> https://crt.sh/?id=606953975
> https://crt.sh/?id=606953727
> https://crt.sh/?id=604874282
>
>
> 5. The complete certificate data for the problematic certificates. The
> recommended way to provide this is to ensure each certificate is logged to
> CT and then list the fingerprints or crt.sh IDs, either in the report or as
> an attached spreadsheet, with one list per distinct problem.
>
> https://crt.sh/?id=617995390
> https://crt.sh/?id=606954201
> https://crt.sh/?id=606953975
> https://crt.sh/?id=606953727
> https://crt.sh/?id=604874282
>
>
> 6. Explanation about how and why the mistakes were made or bugs
> introduced, and how they avoided detection until now.
>
> There was no effective control in Multicert's PKI platform on the DN's O
> length, and this CA wasn't included in Camerfirma's quality controls until
> 2018-08-03.
>
>
> 7. List of steps your CA is taking to resolve the situation and ensure
> such issuance will not be repeated in the future, accompanied with a
> timeline of when your CA expects to accomplish these things.
>
> - Multicert's team have added an operational control and they'll deploy
> the technical control on August 9
> - Multicert's team will check crt.sh for misissued certificates (from
> today forward).
> - Camerfirma will check for certificates issued by new intermediate CAs
> into crt.sh no more than 24 hours after the CA certificate issuance (from
> today forward).
>
> Your comments and suggestions will be appreciated.
>

Hi Juan,

Can you speak more to what the technical controls being deployed are?

The DN's O length limit comes from X.509 and RFC 5280, so it's a bit
baffling to understand how there wasn't an effective control there. It does call into
question the potential for a lack of other effective controls, which could
be concerning.
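
To make the expectation concrete, here's a minimal sketch of the kind of
technical control in question, using the Python 'cryptography' package. In
practice a control like this would run pre-issuance against the
to-be-signed data or the CSR, not against issued certificates:

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    UB_ORGANIZATION_NAME = 64  # upper bound from X.520 / RFC 5280

    def org_name_problems(cert: x509.Certificate):
        # Report any subject organizationName exceeding the X.520 upper bound.
        problems = []
        for attr in cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME):
            if len(attr.value) > UB_ORGANIZATION_NAME:
                problems.append("organizationName exceeds %d characters: %r"
                                % (UB_ORGANIZATION_NAME, attr.value))
        return problems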

With respect to Camerfirma checking post-issuance what Multicert is doing,
that's certainly the minimum expected, as you've cross-certified them. Can
you help explain why this wasn't already part of your process and controls
when working with third-party CAs? Can you explain why Multicert's PKI
platform is allowed independent issuance, rather than having Camerfirma
manage and maintain that for them, and how the community can rely on
Camerfirma appropriately supervising and mitigating that risk going forward?

I think it is good that you detected this, but note that your incident
report doesn't actually examine when Multicert first had this issue.
Looking at https://crt.sh/?id=617995390 for example, I see it being July
24. Can you explain why it took so long to detect? Can you discuss how far
back you've examined Multicert's issuance?


Re: Telia CA - incorrect OID value

2018-08-08 Thread Ryan Sleevi via dev-security-policy
Thanks! I think this is more in line with the goal of these discussions -
trying to learn, share, and disseminate best practices.

Here, the best practice is that, prior to any configuration, the CA should
determine what the 'model' certificate should look like. This model
certificate is, in effect, the technical equivalent of their certificate
profile (e.g. 7.1 of a CA's CP/CPS) - indeed, it might even make sense for
CAs to include their 'model certificates' as appendices in their CP/CPS,
which helps ensure that the CP/CPS is updated whenever the profile is
updated, but also ensures there's a technically verifiable examination of
the profile.

Going further, it might make sense for CAs to share their model
certificates in advance, for community review and evaluation - although
we're not quite there yet, it could potentially help identify or mitigate
issues beforehand, as well as help the CA ensure it's considered
everything in the profile.

Similarly, examining these model certificates through linting is another
thing to consider, comparing the results against the test certificates'
linted results. One thing to consider with model certificates is that every
configurable option you allow (e.g. key size) can create a different model
certificate, so as a testing procedure, you'll want to make sure you have
model certificates for every configurable option, as well as consider test
certificates for various permutations. For example, let's say you're
introducing a new subject attribute to a certificate - as part of
developing your model certificate, and your test certificate, you'll likely
want to examine the various constraints on that field (e.g. length of
field, acceptable characters), and run tests to make sure they produce the
correct and expected results. Consider situations like "all whitespace" -
does it do the expected thing (which could be to omit the field and allow
issuance, or prevent issuance, etc).
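
As a rough sketch of what comparing a test certificate against a model
certificate might look like (Python 'cryptography' package; the handful of
fields compared here are only examples - a real profile comparison would
cover every extension and attribute):

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    def profile_of(cert):
        # Collapse a certificate into the profile-relevant fields compared here.
        try:
            policies = sorted(
                p.policy_identifier.dotted_string
                for p in cert.extensions.get_extension_for_class(
                    x509.CertificatePolicies).value)
        except x509.ExtensionNotFound:
            policies = []
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            key_desc = ("RSA", key.key_size)
        elif isinstance(key, ec.EllipticCurvePublicKey):
            key_desc = ("EC", key.curve.name)
        else:
            key_desc = ("other", None)
        return {"policies": policies, "key": key_desc,
                "sig_alg": cert.signature_algorithm_oid.dotted_string}

    def matches_model(candidate, model):
        # e.g. a DV test certificate carrying an OV policy OID would differ here
        return profile_of(candidate) == profile_of(model)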

As far as training goes, it does sound like there is an opportunity for
routine training regarding changes to the BRs (and relevant RFCs), to make
sure the team constructing and reviewing profiles knows what is and is not
acceptable. While it's good to examine the policies for RAs, looking more
holistically, you want to make sure the team tasked with creating and
reviewing these models is given adequate support and training to know - and
critically evaluate - what is or isn't permitted.

On Wed, Aug 8, 2018 at 6:34 AM, pekka.lahtiharju--- via dev-security-policy
 wrote:

> Telia got a serious lesson with this incident that should not have
> happened. Important detail also to know is that certificates were not
> issued to wrong entities and issuing new certificates with wrong OID field
> was prevented immediately.
>
> 1) Telia has a development process with multiple steps when doing a change
> to SSL process. Some steps of the process include creating test
> certificates in test and pre-production systems with documented change plan
> and a review. Unfortunately test certificates were using test OID values so
> that the problem couldn’t be detected at test side. Telia has analysed
> reasons that caused this error. The main reason was not adequately
> implemented testing. Test process didn't include certificate comparison
> correctly against so-called model certificate. Telia has model certificates
> for each certificate type that are used in comparison when any certificate
> profile changes. This time there wasn't DV model certificate at all (except
> in test system with test OID) because DV was a completely new certificate
> type for Telia. OV model certificate (that had OV OID value) was used
> instead by the reviewers. Telia should have created a DV model certificate
> at first. In model certificate creation there are several eye pairs
> including senior developers when accepting a new one. As a resolution Telia
> has now enhanced processes so that it is mandatory to create model
> certificate when completely new certificate type is created.
> 2) We have concluded that the main reason for this problem was not a lack
> of training but the incomplete test process and documentation. CA audits
> have annually evaluated Telia training. Recommendations about improvements
> have been documented into our internal audit reports if necessary.
> Recommendations (or issues) from CA auditors are always added to Telia
> Security Plan to improve Telia CA process continuously. Persons involved in
> the review have got many different types of training that vary from general
> security to deeper CA software related trainings. E.g. recently Feisty Duck
> – The Best TLS training in the World and several trainings from our CA
> Vendor.
> a) CA software vendor trainings have been held quarterly.
> b) Vendor keep the materials up to date and we update our own training
> materials annually or when needed
> c) CA audits have annually evaluated Telia internal trainings to
> Registration officers
>
> 3) When this problem was 

Re: Telia CA - problem in E validation

2018-08-07 Thread Ryan Sleevi via dev-security-policy
On Mon, Aug 6, 2018 at 3:28 AM, pekka.lahtiharju--- via dev-security-policy
 wrote:

> I want to emphasize that each and every value of certificate Subject have
> always been verified. It's wrong to say that those values are unverified.
> It is only a question about E verification method and quality. Our method
> has been to estimate visually by Registration Officer if each E value (or
> other subject value outside common group C,O,ST,L, streetAddress,
> postalCode) is correct for this Customer.
>

What are you visually validating though? That it's an email address? That
it's owned by the Subscriber? By comparison, what does it mean to "visually
validate" one of those other fields - are you using some registration
service? Some form of validation (e.g. sending an email)?

I think it's fair to say that these fields aren't validated if your
process is just that the RA looks at it and says "sure".


> Registration Officer training has instructed which E values must be
> rejected. It is not possible to use visually similar kinds of characters,
> because we technically restrict Subject characters to common ASCII
> characters (e.g. nulls are rejected). It is completely incorrect to claim
> that any values are added without validation. Since Feb 2018 Telia has also
> technically prevented any values other than C, O, L, OU, E, CN from being
> inserted into SSL certificates. Since then, the simple visual verification
> has applied only to OU and E (the others have always been very rare). In
> addition, all Telia SSL certificates have always been post-examined
> (visually) after enrollment, to be absolutely sure that no incorrect
> subject values have passed our validation (second-person evaluation).
>

I think this is really good information - it suggests that prior to Feb
2018, those other fields from the CSR may have been copied over.

If it helps, think about something like "Country" or "Organization". Visual
validation just says "Yeah, this is probably right", while actual
validation involves making sure it's a valid ISO country code (in the case
of C) or that the Organization is actually affiliated with the Applicant
(in the case of O). Hopefully that distinction makes it clearer?
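
To make the distinction concrete, here is a minimal sketch for the C field
(illustrative only, in Python; the country list is truncated and a real check
would load the full ISO 3166-1 registry from an authoritative source):

    # Visual review says "this looks fine"; an actual check tests membership in
    # the ISO 3166-1 alpha-2 registry. The set below is truncated for brevity.
    ISO_3166_1_ALPHA_2 = {"FI", "SE", "NO", "US", "DE", "VC"}  # ...and the rest

    def validate_country(c_value: str) -> bool:
        """Reject anything that is not a known two-letter country code."""
        return c_value.upper() in ISO_3166_1_ALPHA_2

    assert validate_country("FI")
    assert not validate_country("Finland")  # visually plausible, but not a valid C value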


> I understand your opinion that this kind of visual verification is not as
> strong as technical email verification with random codes. However,
> random-code verification is not written as a requirement in the BRs. The
> BRs only state in 7.1.4.2.2.j: "All other optional attributes, when present
> within the subject field, MUST contain information that has been verified
> by the CA." In my opinion we have followed that requirement because we have
> had a verification method for those values; do you disagree?
>

I think this is the point that I'm trying to understand - what those
verification methods were and how they were assessed to be correct for the
field. We've discussed emailAddress, but it sounds like prior to Feb 2018,
this may have included other fields (besides OU and emailAddress). Have you
examined and reviewed the past certificates for that?
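
For contrast with purely visual review, a minimal sketch of the random-value
email challenge mentioned above; the names, in-memory storage, expiry, and
the commented-out send step are illustrative assumptions, not any CA's actual
implementation:

    import secrets
    import time

    PENDING = {}  # token -> (email, expiry timestamp); illustrative in-memory store

    def start_challenge(email: str, validity_seconds: int = 24 * 3600) -> str:
        # Generate an unpredictable random value and associate it with the address.
        token = secrets.token_urlsafe(16)
        PENDING[token] = (email, time.time() + validity_seconds)
        # send_email(email, "Your verification code: " + token)  # hypothetical helper
        return token

    def confirm_challenge(token: str, email: str) -> bool:
        # The address counts as verified only if the recipient echoes the
        # random value back before it expires.
        entry = PENDING.pop(token, None)
        if entry is None:
            return False
        stored_email, expiry = entry
        return stored_email == email and time.time() <= expiry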


> Next, we are ready to stop adding E values completely to solve this issue
> permanently, but we think it is not right to require us to revoke all our
> old E values.
>

Why is that? What was actually validated for those emailAddresses? Just
that the RA thought it 'probably' was correct for that Applicant?


Re: Telia CA - problem in E validation

2018-08-05 Thread Ryan Sleevi via dev-security-policy
On Fri, Aug 3, 2018 at 3:53 AM pekka.lahtiharju--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Incident report:
>
> PROBLEM IN SUBJECT E (email) VALUE VALIDATION (deviation 5)
> Telia received a preliminary CA audit report on 25 June 2018. One of its BR
> deviations was a statement that "Telia did not have controls to adequately
> verify the email address information (of SSL certificates)". Telia has
> always verified E values only visually: a Registration Officer (or, in some
> cases, a certificate inspector) has to manually accept each value, but thus
> far only clearly incorrect or syntactically incorrect values have been
> rejected. Note: the Subject E value has only informative meaning, often
> contains a support email address related to the server, and can't be used
> for S/MIME purposes.
>
> Timeline of actions:
> 10-Jul-2018 Telia decided to completely stop inserting E values into OV
> certificates because of this deviation, as Telia does not know how they
> could otherwise be reasonably verified. The plan is to implement this
> removal in September 2018. Before that, Telia would like to get the
> community's opinion on how E values are verified by other CAs and how they
> are supposed to be verified, given that the BR text reads: "All other
> optional attributes, when present within the subject field, MUST contain
> information that has been verified by the CA." Pending this discussion,
> Telia's plan is not to revoke previously enrolled OV certificates with
> visually verified E values.
>
> Summary and details of problematic certificates:
> Many existing Telia OV certificates have an E value because OpenSSL, one of
> the most common CSR generators, automatically adds it to the CSR, and the
> old Telia process accepted the values unless they were incorrect on visual
> verification or syntactically incorrect. The actual count and list of
> problematic E values will be generated in August 2018.


From a system design perspective, this is deeply concerning. It sounds as
if the old Telia processes may have copied any number of bits over from the
CSR, without validation that they were fit for the certificate profile or
appropriate for inclusion.

Has Telia re-examined all of its certificates issued under the old system,
to ensure that all certificates conform with the CP/CPS profiles (such as
including other subject attributes requested by customers, beyond
emailAddress, not in accordance with its policies)? Has it maintained
sufficient documentation to confidently demonstrate that the fields that
are included have been validated accordingly?

Perhaps most concerning is it sounds as if the system used the CSR value
for fields - which might allow visually confusing characters to be
introduced or other subtleties that CAs have encountered (such as embedded
NULs), leading to misleading or inaccurate information. How do Telia’s new
processes account for this? Greater transparency around the “new” system
(since the distinction is being made between old and new), how information
is entered, and how it is validated seem useful contributions to the
discussion of remediation and mitigation.
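
For illustration, a minimal sketch of the kind of technical guard being asked
about here - rejecting embedded NULs and anything outside printable ASCII in
CSR-supplied subject strings before a human ever reviews them. This is an
example only, not Telia's actual implementation:

    def subject_value_is_safe(value: str) -> bool:
        """Illustrative guard for CSR-supplied subject strings: reject embedded
        NULs, control characters, and anything outside printable ASCII, so that
        visually confusing substitutions cannot slip past a manual review."""
        if "\x00" in value:
            return False
        return all(0x20 <= ord(ch) <= 0x7E for ch in value)

    assert subject_value_is_safe("support@example.com")
    assert not subject_value_is_safe("support@exam\x00ple.com")   # embedded NUL
    assert not subject_value_is_safe("support@ехample.com")       # Cyrillic lookalikes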


>
> Explanation about how and why the mistakes were made or bugs introduced,
> and how they avoided detection until now:
> Telia had not understood that E values should be verified using some method
> other than a visual check. Before this year it had not appeared in audit
> comments, even though Telia's E verification process has always been the
> same.
>
> Steps to fix:
> 1. listing of problematic certificates; Telia plans to do this in August
> 2018
> 2. community discussion of how other CAs verify E values and how they are
> supposed to be verified; planned to start in August 2018 based on this bug
> 3. possible revocation or revalidation if the community concludes that
> existing E values cause a security problem; to be done after the public
> discussion
>
>
>


Re: Telia CA - incorrect OID value

2018-08-05 Thread Ryan Sleevi via dev-security-policy
On Fri, Aug 3, 2018 at 3:51 AM pekka.lahtiharju--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Incident report:
>
> ERROR IN DV OID VALUE (deviation 4)
>
> How Telia became aware:
> Telia received a preliminary CA audit report on 25 June 2018. One of its BR
> deviations was a finding that "17 Telia DV certificates incorrectly had the
> same OID value that was used for Telia OV certificates."
>
> Timeline of actions:
> On the same day, Telia fixed the OID value in the DV profile so that the
> error won't happen again. Telia's opinion is that the incorrect OID value
> has no impact on any common system, but Telia's plan is nevertheless to
> revoke all incorrect certificates as soon as possible, and at the latest in
> September 2018. Customers need to replace their original incorrect
> certificate with a new certificate provided by Telia. Telia will update
> this bug until all incorrect certificates are revoked.
>
> Summary and details of problematic certificates:
> About 300 Telia DV certificates for a single pilot DV Customer included the
> OV OID 2.23.140.1.2.2 instead of the DV OID 2.23.140.1.2.1. All incorrect
> ones were enrolled between 20-Mar-2018 and 25-Jun-2018. All are logged to
> CT and can be found using the given dates and the issuer "Telia Domain
> Validation SSL CA v1". The certificates are also available in the Telia CA
> database.
>
> Explanation about how and why the mistakes were made or bugs introduced,
> and how they avoided detection until now:
> Telia CA started to enroll DV SSL certificates in March 2018. Previously,
> all of Telia's SSL certificates were OV SSL certificates. The new
> certificate type was basically a sub-type of the Telia OV certificate, but
> with fewer subject fields. Its profile was copied from OV and then
> modified, tested and piloted, but this error in the OID value still went
> undetected, because it has no visible effect anywhere and the OID had been
> commonly used by Telia before.
>
> Steps to fix:
> 1. fix the DV profile; DONE 25-Jun-2018, no errors occurred after that
> 2. reproduce all incorrect certificates and provide those to the Customer;
> ONGOING, planned to finish 30-Sep-2018
> 3. revoke all incorrect ones; ONGOING, planned to finish 30-Sep-2018
> 4. Telia CA decided to improve its testing process to avoid similar errors
> in the future; DONE 6-Jul-2018
>
I think this highlights a rather significant and serious failure on Telia’s
part when establishing a new certificate profile, and I don’t really see
what concrete steps are being taken to remediate this or detect if it has
happened in the past.

1) What process does Telia have in place for the review of profiles before
being deployed to production?
 a) In light of this, how has that process changed?

2) What level of training is provided to those employees tasked with
reviewing?
  a) Frequency of training
  b) Frequency of reevaluating the training materials?
  c) Independent assessments of those training materials?

3) These certificates materially misstate the level of validation provided
by Telia, and the BRs require revocation within 24 hours. What steps has
Telia taken to ensure this?
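
As a concrete illustration, a check of the kind that would catch this class
of error before issuance, or find affected certificates after the fact, might
compare the certificatePolicies OIDs against the expected CA/Browser Forum
identifiers. This is a sketch only, assuming the certificates are available
as PEM and using the Python cryptography package; handling of a missing
certificatePolicies extension is omitted:

    from cryptography import x509
    from cryptography.x509.oid import ObjectIdentifier

    OV_POLICY = ObjectIdentifier("2.23.140.1.2.2")
    DV_POLICY = ObjectIdentifier("2.23.140.1.2.1")

    def dv_cert_carries_ov_policy(pem_bytes: bytes) -> bool:
        """Return True if a certificate intended to be DV asserts the OV policy OID."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        policies = cert.extensions.get_extension_for_class(x509.CertificatePolicies).value
        asserted = {policy.policy_identifier for policy in policies}
        return OV_POLICY in asserted and DV_POLICY not in asserted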

I cannot overstate how significant this failure is. At least one CA
(Turktrust) through a failure of process to adequately review, configure,
and test their certificate profiles, ended up releasing CA certificates
into the wild to entities not qualified to receive them. The result of that
ultimately ended with them exiting the CA business after finding it
difficult to get their products universally trusted.

While it may be argued by some that this is less severe, because it was
“just” an OID, the demonstrated failure of that process calls into
fundamental question the operations of Telia, and the confidence that a
similar incident won’t occur.

Part of the reason for these postmortems is to ensure that all CAs can
learn and improve their best practices, through the mistakes others have
made, so that the public can reliably trust such certificates.  As such,
there is an onus on Telia to demonstrate how this is different from past
incidents, how they have learned and incorporated changes from those past
incidents into their current processes, and where those improvements were
deficient.

While it sounds as if Telia did not take those lessons to heart when they
occurred, as the CA community was still unaccustomed to learning from
others' mistakes, I think we as a community need to understand - in concrete
and thorough detail - what the old processes were, why they failed, and how
they’re being improved. My questions are merely an illustrative attempt to
show some of how that might be done, but it is by no means exhaustive.

Re: Possible violation of CAA by nazwa.pl

2018-07-28 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 28, 2018 at 2:17 PM Jeremy Rowley 
wrote:

> I think the desire to categorize these is more to make sense of where the
> distrust line is. No one wants to end up on the same boat as Symantec, and
> there aren't clear guidelines on how to prevent that from happening to a
> CA.


I don’t think it’s that cut and dried. Everything enumerated highlights a
failure of process - whether that failure was technical or procedural is
far less important to the frequency, detection, and remediation of those
failures. The expectation is for the CA to design their systems in a way to
prevent as many human failures as possible - and there’s little excuse for
the technical ones - while also having robust systems in place to detect
and remediate.

The hidden thread in this is less about CAs being distrusted, and more
about finding reasons to not revoke certs - as if some failures are less
than revocation worthy. Yet that’s glossing over the largest systemic issue
in our industry, which is the lack of certificate agility (in issuance or
replacement). Requiring revocation acknowledges that our end state should
be the old cert is replaced transparently by the new cert and no systems
break - and any difficulty in that either rests with the CA for not
investing enough in meaningful systems (automatable validation like those
based on DNS, interoperable automated issuance protocols like ACME), or on
the Subscriber for not investing in automation.

Framing it as somehow being about the Browser reaction is thus incorrect -
ANY single instance of misissuance could be worthy of distrust, as could a
sustained pattern. Browsers are only going to get better at managing that
impact to their users, so CAs need to get better at prevention and
Subscribers need to take advantage of the better automation solutions.




>
> Pretty much every CA mis-issues at some point on an infinite timeline, and
> the lack of certainty on browser reaction to the mis-issuance makes it hard
> to determine what the best corrective course of action should be. Obviously,
> public discussion on issues as they happen is the best way to figure that
> out, but explaining to management that the consequences of various
> misissuances could range from root removal to a simple apology, depending
> on the browser, is pretty difficult. If you follow the list closely, the
> levels of mis-issuance are a lot clearer. For CAs that don't follow as
> closely, it can be a lot scarier.
>
>
> -Original Message-
> From: dev-security-policy  digicert@lists.mozilla.org> On Behalf Of Ryan Sleevi via
> dev-security-policy
> Sent: Friday, July 27, 2018 8:01 PM
> To: Tim Hollebeek 
> Cc: mozilla-dev-security-pol...@lists.mozilla.org; Jakob Bohm <
> jb-mozi...@wisemo.com>
> Subject: Re: Possible violation of CAA by nazwa.pl
>
> I disagree that a series of categories is good or helpful to the community.
>
> I think the framing is generally adopted by CAs that want to see shades of
> gray in non-compliance, in order to downplay risk or downplay their lack of
> compliance.
>
> As to the Forum, browsers have tried multiple times to introduce
> definitions. Gerv had previously supported a single definition for any
> matter of non-compliance, in order to appropriately and adequately inform
> CAs about expectations, but CAs were still opposed.
>
> By focusing on that singular matter, ontologies can be avoided, as can the
> inevitable disagreements about impact and consequence that detract from a
> more meaningful focus on action and remediation.
>
> On Sat, Jul 28, 2018 at 4:39 AM Tim Hollebeek via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > I agree.
> >
> > I've actually thought about adding definitions of categories of
> > misissuance to the BRs before.  Some of the requirements like
> > revocation are really hard to write and understand if you don't first
> > categorize all the misissuance use cases, many of which are very, very
> > different.  And just when I think I have a reasonable ontology of them
> > in my head ... someone usually goes and invents a new one.
> >
> > Despite how much people like to talk about it, misissuance isn't a
> > defined term anywhere, AFAIK.  It can lead to a lot of confusing
> > discussions, even among experts at the CA/Browser Forum.
> >
> > -Tim
> >
> > > -Original Message-
> > > From: dev-security-policy  > > bounces+tim.hollebeek=digicert@lists.mozilla.org> On Behalf Of Jakob
> > > Bohm via dev-security-policy
> > > Sent: Friday, July 27, 2018 2:46 AM
> > > To: mozilla-dev-security-pol...@lists.mozilla.org
> > > Subject: Re: Possible violation of CAA by nazwa.pl
> > >

Re: Possible violation of CAA by nazwa.pl

2018-07-27 Thread Ryan Sleevi via dev-security-policy
I disagree that a series of categories is good or helpful to the community.

I think the framing is generally adopted by CAs that want to see shades of
gray in non-compliance, in order to downplay risk or downplay their lack of
compliance.

As to the Forum, browsers have tried multiple times to introduce
definitions. Gerv had previously supported a single definition for any
matter of non-compliance, in order to appropriately and adequately inform
CAs about expectations, but CAs were still opposed.

By focusing on that singular matter, ontologies can be avoided, as can the
inevitable disagreements about impact and consequence that detract from a
more meaningful focus on action and remediation.

On Sat, Jul 28, 2018 at 4:39 AM Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I agree.
>
> I've actually thought about adding definitions of categories of
> misissuance
> to the BRs before.  Some of the requirements like revocation are really
> hard
> to write and understand if you don't first categorize all the misissuance
> use
> cases, many of which are very, very different.  And just when I think I
> have
> a reasonable ontology of them in my head ... someone usually goes and
> invents a new one.
>
> Despite how much people like to talk about it, misissuance isn't a defined
> term anywhere, AFAIK.  It can lead to a lot of confusing discussions, even
> among experts at the CA/Browser Forum.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy  > bounces+tim.hollebeek=digicert@lists.mozilla.org> On Behalf Of Jakob
> > Bohm via dev-security-policy
> > Sent: Friday, July 27, 2018 2:46 AM
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: Possible violation of CAA by nazwa.pl
> >
> > On 26/07/2018 23:04, Matthew Hardeman wrote:
> > > On Thu, Jul 26, 2018 at 2:23 PM, Tom Delmas via dev-security-policy <
> > > dev-security-policy@lists.mozilla.org> wrote:
> > >
> > >>
> > >>> The party actually running the authoritative DNS servers is in
> > >>> control
> > >> of the domain.
> > >>
> > >> I'm not sure I agree. They can control the domain, but they are
> > >> supposed to be subordinate of the domain owner. If they did something
> > >> without the owner consent/approval, it really looks like a domain
> hijacking.
> > >
> > >
> > > But the agreement under which they're supposed to be subordinate to
> > > the domain owner is a private matter between the domain owner and the
> > > party managing the authoritative DNS.  Even if this were domain
> > > hijacking, a certificate issued that relied upon a proper domain
> > > validation method is still proper issuance, technically.  Once this
> > > comes to light, there may be grounds for the proper owner to get the
> > > certificate revoked, but the initial issuance was proper as long as
> > > the validation was properly performed.
> > >
> > >
> > >>
> > >>
> > >>> I'm not suggesting that the CA did anything untoward in issuing this
> > >>> certificate.  I am not suggesting that at all.
> > >>
> > >> My opinion is that if the CA was aware that the owner didn't
> > >> ask/consent to that issuance, If it's not a misissuance according to
> > >> the BRs, it should be.
> > >
> > >
> > > Others can weigh in, but I'm fairly certain that it is not misissuance
> > > according to the BRs.  Furthermore, with respect to issuance via
> > > domain validation, there's an intentional focus on demonstrated
> > > control rather than ownership, as ownership is a concept which can't
> > > really be securely validated in an automated fashion.  As such, I
> > > suspect it's unlikely that the industry or browsers would accept such a
> > change.
> > >
> > >
> >
> > I see this as a clear case of the profound confusion caused by the
> community
> > sometimes conflating "formal rule violation" with "misissuance".
> >
> > It would be much more useful to keep these concepts separate but
> > overlapping:
> >
> >   - A BR/MozPolicy/CPS/CP violation is when a certificate didn't follow
> the
> > official rules in some way and must therefore be revoked as a matter of
> > compliance.
> >
> >   - An actual misissuance is when a certificate was issued for a private
> key held
> > by a party other than the party identified in the certificate (in
> Subject Name,
> > SAN etc.), or to a party specifically not authorized to hold such a
> certificate
> > regardless of the identity (typically applies to SubCA, CRL-signing,
> OCSP-
> > signing, timestamping or other certificate types where relying party
> trust
> > doesn't check the actual name in the certificate).
> >
> >  From these concepts, revocation requirements could then be reasonably
> > classified according to the combinations (in addition to any specifics
> of a
> > situation):
> >
> >   - Rule violation plus actual misissuance.  This is bad, the 24 hours
> or faster
> > revocation rule should definitely be invoked.
> >
> >   - Rule compliant misissuance.  This will inevitably 

Re: Chunghwa Telecom eCA Root Inclusion Request

2018-07-13 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 14, 2018 at 2:16 AM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Fri, Jul 13, 2018 at 3:03 AM lcchen.cissp--- via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Dear Wayne,
> >
> >    Those programs for checking the fields of to-be-signed SSL
> > certificates have been online since June 22.
> >
> >    We suggest that a CA "in principle" must comply with the string length
> > limit of RFC 5280 for the organizationalUnitName or organizationName field
> > in the Subject of a certificate. But if, after verification, it is
> > necessary to express an organization’s name in the organizationalUnitName
> > or organizationName field of the subject in a way that exceeds the string
> > length limit of RFC 5280, then Mozilla should not regard these special
> > cases as errors by the CA. After all, the X.500 standard has no limit on
> > the length of the string, and the issuing CA and the Subscriber who has
> > accepted that SSL certificate can bear the risk of any incompatibility
> > issues.
> >
> > In effect, this is saying that CAs should be permitted to break
> well-defined rules when they find them inconvenient. This is the second
> example in which Chunghwa Telecom has argued that it's okay to do this
> (along with the Taiwan State/Locality issue). While I can sympathize with
> Chunghwa Telecom's reason for doing this, it is quite troubling because it
> implies that Chunghwa Telecom may be willing to ignore any of the rules
> they disagree with.
>
> >    For the unrevoked certificate with an organizationalUnitName longer
> > than 64 characters, https://crt.sh/?id=336874396, its Subject DN is as
> > below:
> >
> > commonName= www.gov.vc
> > organizationalUnitName= Information Technology Services
> > Division
> > organizationalUnitName= Ministry of Finance, Economic
> > Planning, Sustainable Development and Information Technology
> > organizationName  = Government of Saint Vincent and
> > the Grenadines
> > countryName   = VC
> >
> >    Because Saint Vincent and the Grenadines is a very, very small country
> > with 120 thousand citizens, many Ministries are consolidated, so the
> > organizationalUnitName of the Ministry becomes longer. Why not let the
> > Registration Authority Officers fill in the certificate subject with the
> > organization’s correct full name? Or should it be expressed in short
> > abbreviations with ambiguous meaning?
> >
> >   There is no consensus on the length of the string in the CAB Forum
> > Baseline Requirements, but in the case we have encountered, more than 64
> > characters for an organization name does exist.
> >
> >   Ben Wilson, Vice Chair of the current CAB Forum, proposed in April 2017
> > to amend the Baseline Requirements to relax the length limits on the
> > commonName and organizationName strings. Ben first posted his draft BR
> > amendment to the PKIX mailing list for comments, because the FQDN in the
> > commonName may be more than 64 characters, and the organization name in
> > organizationName may exceed 64 characters.
> >
> > Please read
> > https://www.ietf.org/mail-archive/web/pkix/current/msg33853.html
> >
> > Ben's article was discussed in a series of threads on the PKIX mailing list.
> >
> > Erik Andersen, who is currently responsible for the revision of the X.500
> > series standards, mentioned that since the 2008 version of the X.520
> > standard, the string definition of these attributes has been changed from
> > DirectoryString to UnboundedDirectoryString, and UnboundedDirectoryString
> > is basically unlimited. That is to say, the string length is no longer
> > limited in the X.500 series, which is the source of RFC 5280.
> >
> > UnboundedDirectoryString ::= CHOICE {
> >   teletexStringTeletexString(SIZE (1..MAX)),
> >   printableString  PrintableString(SIZE (1..MAX)),
> >   bmpStringBMPString(SIZE (1..MAX)),
> >   universalString  UniversalString(SIZE (1..MAX)),
> >   uTF8String   UTF8String(SIZE (1..MAX)) }
> >
> >
> >    The main reason the X.500 series of standards removed the string
> > length limit was to make the ISO/ITU-T Directory standard compatible with
> > LDAP, because the LDAP standard does not limit the string length of
> > attributes.
> >
> >    However, when RFC 5280 was originally developed, the referenced
> > version of the X.500 standard had a limit on the length of attribute
> > strings. In the PKIX discussion thread, because RFC 5280 has been cited
> > by the industry for many years, some people worried that either going
> > back to the RFC 5280 string length limit, or having the CA/Browser Forum
> > move away from it, could cause compatibility problems, and in the end the
> > discussion did not reach a conclusion.
> >
> > I disagree that the discussion string referenced above did not 

Re: [FORGED] TeletexString

2018-07-08 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 7, 2018 at 4:43 AM, Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Sat, Jul 07, 2018 at 01:23:24AM +, Peter Gutmann via
> dev-security-policy wrote:
> >
> > So for certlint I'd always warn for T61String with anything other than
> ASCII
> > (which century are they living in? Point them at UTF8 and tell them to
> come
> > back when they've implemented it), treat it as a probably 8859-1 string
> when
> > checking for validity, and report an error if they try anything like
> character
> > set switching and fancy escape sequences, which are pretty much
> guaranteed not
> > to work (i.e. display) properly.
>
> I think it should generate an error on any character not defined
> in 102 and the space character. So any time you try to use anything
> in C0, C1 and G1, and those 6 in 102 that are not defined.
>

Is that because you believe it forbidden by spec, or simply unwise?

The value of a linter is fairly proportional to its value in spec
adherence. I'm all for warnings for things that are unwise, but otherwise
permitted, but making them errors puts burden on CAs and the community to
evaluate whether or not it's an "actual violation" or just something
"monumentally stupid". The former is far more important to the relying
party community, and thus if it's not a spec violation that can be
demonstrated as such, making it an error is just a good way to get linters
ignored :/

Perhaps I misunderstood the proposal, though? Is this considering the
escape sequences or not?


Re: TeletexString

2018-07-07 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 7, 2018 at 4:07 AM, Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Fri, Jul 06, 2018 at 02:43:45PM -0700, Peter Bowen via
> dev-security-policy wrote:
> > In reviewing a recent CA application, the question came up of what is
> > allowed in a certificate in data encoded as "TeletexString" (which is
> > also sometimes called T61String).
> >
> > Specifically, certlint will report an error if a TeletexString
> > contains any characters not in the "Teletex Primary Set of Graphic
> > Characters" unless the TeletexString contains an escape sequence. For
> > example, including 'ä', or 'ö' will trigger this error unless preceded
> > by an escape sequence.
> >
> > In order to figure out what can be used, one need to reference X.690
> > Table 3, which notes that G0 is assumed to start with character set
> > 102.  Character set 102 is defined at
> > https://www.itscj.ipsj.or.jp/iso-ir/102.pdf.  Note that 102 isn't the
> > same as ASCII, nor is it the same as the first part of Unicode.
>
> I'm not sure why you bring this up. Anyway, according to X.690,
> the default is:
>
> G0: 102
> C0: 106
> C1: 107
>
> Or as escape sequences and locking shift:
> ESC 2/8 7/5 LS0 (G0 102, locking shift 0)
> ESC 2/1 4/5 (C0 106)
> ESC 2/2 4/8 (C1 107)
>
> But what is just as important is that G1 does not have a default,
> while at least some people assume it's 103. While 102 is close to
> ASCII, there is nothing for G1 that is close to latin1.
>

This came up in a recent CA review, in which a CA did not properly escape,
but stated that the vendor told them this is correct.

See https://bug1417041.bmoattachments.org/attachment.cgi?id=8985908
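
For illustration, a minimal linter check along the lines discussed above -
flagging escape sequences and any byte outside printable ASCII in a
TeletexString - is sketched below; whether each finding should be a warning
or an error is exactly the open question:

    ESC = 0x1B

    def lint_teletex_string(raw: bytes) -> list:
        """Flag escape sequences (character-set switching) and any byte outside
        printable ASCII. Sketch only; a stricter check could also exclude the
        few positions of ISO-IR 102 that differ from ASCII."""
        findings = []
        if ESC in raw:
            findings.append("contains escape sequence(s) / character-set switching")
        for offset, byte in enumerate(raw):
            if byte != ESC and not 0x20 <= byte <= 0x7E:
                findings.append("byte 0x%02x at offset %d outside printable ASCII" % (byte, offset))
        return findings

    print(lint_teletex_string(b"Example GmbH"))                # []
    print(lint_teletex_string("Müller".encode("latin-1")))     # flags 0xfc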


Re: OISTE WISeKey Global Root GC CA Root Inclusion Request

2018-06-26 Thread Ryan Sleevi via dev-security-policy
On Tue, Jun 26, 2018 at 4:29 PM, Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Ryan,
> My comments below.
>
> On Tuesday, June 26, 2018, 21:12:44 (UTC+2), Ryan Sleevi  wrote:
> >
> > I just want to make sure - the plan is to provide a Period of Time report
> > from when the key was created to 1 year after (i.e. 9 May 2017 to 8 May
> > 2018)?
> > If so, that definitely closes the gap.
>
> Yes, we are formulating a solution to close the gap. The proposal that we
> made to solve the issue is to change the start date of our annual audit
> period, so that it coincides with the creation of the new Root GC and covers
> the 12 months after this date, with the whole certification practice and the
> three roots (GA, GB and GC) being in scope.
>
> This implies an overlap with the periods already audited, but closes any
> perceived gap.
>
> > Alternatively, a report on the 9 May 2017 to 15 September 2017 period
> would also close it.
>
> This is not appropriate, as it would imply having to run two audits, one
> for GA+GB and another for GC. The above solution allows us to have an
> easier follow-up next year.
>

To be fair, you can align those periods by having one report prepared for 9
May 2017 to your current audit period, and then include GC in with your
normal audit - without having to alter your period. It allows you to
maintain your current audit cycle entirely.


> Is it too adventurous of me to say that we have a deal?
>

With a heads up that we'll be looking very closely, compared to the
illustrative reports, to understand whether any deviations are meaningful
and significant, I think that sounds like a way of addressing the
uncertainty gap present :)


Re: OISTE WISeKey Global Root GC CA Root Inclusion Request

2018-06-26 Thread Ryan Sleevi via dev-security-policy
On Mon, Jun 25, 2018 at 5:12 PM, Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> 3.- The key ceremony of this Root was witnessed by the same auditors. I
> would say that the mere fact that an auditor issues a point in time WT BR
> report implies undoubtedly full compliance with this requirement, as with
> any other one set by BR. Therefore, the fact that the PiT exists, means
> that the key ceremony was executed according to the rule.
>

The issue is not that the key ceremony wasn't executed - although it's
worth calling out that the key ceremony does not test every BR issue - but
about the gap between when the key ceremony was conducted and the present.


> 4.- Please check in this link (https://filevault.wisekey.com/d/412f61ab26/)
> the redline intermediate versions. It must be noted that not all versions
> are formally adopted and go public (i.e. version 2.7 was a working
> version). These are mostly changes to include the GC hierarchy, properly
> reflect latest BR (i.e. validity periods, reflect the contact point for
> incident reporting, etc) and also to correct minor glitches.
>

Thanks!


> 6.- As a result of these discussions and open concerns, and based on the
> auditor's recommendation to advance this inclusion process, we already
> proposed here to change the audit period so it starts on the 9th of May 2017
> instead of at the planned annual renewal. Fortunately it was only one
> month's difference, but I must say that I'd have preferred to take this
> decision based on a formal compliance issue that I could understand, because
> if it had been several months' overlap it would have had a much bigger
>

I just want to make sure - the plan is to provide a Period of Time report
from when the key was created to 1 year after (i.e. 9 May 2017 to 8 May
2018)?

If so, that definitely closes the gap. Alternatively, a report on the 9 May
2017 to 15 September 2017 period would also close it.


Re: Cert transition update

2018-06-26 Thread Ryan Sleevi via dev-security-policy
Hi Jeremy,

Thanks for posting the update. A few notes below, as already shared on the
Bugzilla Bug where you also shared this.

On Tue, Jun 26, 2018 at 10:57 AM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Key Dates
>
> .March 2018 - Beginning of phased removal of trust by root
> program operators for Symantec TLS certificates issued prior to June 1,
> 2016.
>
> .October 2018 - Full removal of trust of Symantec-issued TLS
> certificates by root program operators.
>

One slight clarification to your dates: The removal is expected to _start_
late June/early July 2018.

Thus, by July 2018, all Symantec-issued TLS certificate consumers should
have begun transitioning, with the majority having completed the
transition. This ensures that, should there be any unforeseen issues, they
can have a small window of time to remove those issues.

In particular, releases of both Firefox and Chrome are expected, no later
than July, which begin distrusting these certificates, with the overall
population of versions increasing to 100% by October. Thus, rather than
October being a transition date from 0% to 100%, it should be seen as the
transition from, say, 50% to 100%. Thus, to avoid breaking 50% of users,
sites should be transitioning *now*.

If it helps, you can point customers to
https://security.googleblog.com/2018/03/distrust-of-symantec-pki-immediate.html
or
https://blog.mozilla.org/security/2018/03/12/distrust-symantec-tls-certificates/
. For Mozilla, https://wiki.mozilla.org/Release_Management/Calendar gives
the calendar - Firefox 63 has begun in Central as of yesterday (i.e. June),
with a scheduled Beta date of September 3.


>
> .By no later than Q2 CY 2019 - Full removal of Symantec-issued
> TLS certificates from all major root program operators.
>


Re: OISTE WISeKey Global Root GC CA Root Inclusion Request

2018-06-25 Thread Ryan Sleevi via dev-security-policy
On Mon, Jun 25, 2018 at 5:12 PM, Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Ryan,
> thanks for your time reviewing this. I really appreciate your comments.
>
> As I have this week the auditors in the office, I prefer to check with
> them before issuing a more formal answer, because you're expressing
> concerns related to the audit practices that I'm not qualified enough to
> respond.
>
> In the meantime, please let me advance the following initial comments:
> 1.- I can't really understand how it can be expected that a CA is able to
> obtain a point-in-time audit, including the BRs, dated the same day as the
> issuance of a Root, because that seems impossible. Any CA needs a minimum
> amount of time to prepare an issuing CA and OCSP responders and to do SSL
> certificate tests, and AFAIK this elapsed period is not regulated by the
> BRs nor by WebTrust.
>

I agree - but WebTrust at least provides a reporting mechanism for this, by
indicating the scope of the audit and the (verified) non-performance of
certain activities.

For comparison, you can look at how the latest illustrative reports
formalize what many were already doing (or specifically requested to do),
by calling out things like the explicit (and verified) non-existence of RAs
or key escrow services.

For a new root being spun up, you need to verify that, at the moment the
key was created, the policies and procedures were in place to safeguard
that key, and then going forward, that those policies and procedures have
been examined consistently.

This is part of the requirement for an "unbroken series of audits". How
it's reported on is an issue - and that's why browsers have been working to
communicate directly with the WebTrust TF about these concerns so that they
can make sure that their practitioner guidance and illustrative reports
call this out, for practitioners working for CAs that wish to be trusted by
browsers.

I realize that, as a CA, you can be caught unawares if the auditor is not
following these discussions or best practices, and we're always keen to
make sure there's better understanding. That said, I think the
communication of the concerns around root key generation and its ongoing
proof of continued compliance is one that browsers have well-represented to
auditors, so when there's breakdowns, it's either between the Task Force
and the individual practitioners, or between practitioners and their
customers.

7. In my humble opinion, I think that these requirements must be formalized
> in audit criteria or explicitly in the BRs, and not raised "ad hoc". Any CA
> embarking on an inclusion process should know all the requirements beforehand.


But they're already arguably part of the BRs, as I showed, and it's up to
the relevant groups (WebTrust, ETSI) to ensure that the criteria they adopt
reflect what browsers expect. As we see with ETSI and ACAB-c, if the
auditor fails to meet those requirements, it's the auditor that's at fault.


Re: OISTE WISeKey Global Root GC CA Root Inclusion Request

2018-06-25 Thread Ryan Sleevi via dev-security-policy
Hi Pedro,

I followed up with folks to better understand the circumstances of your
audits and the existing practitioner guidance. From these conversations, my
understanding is that WebTrust is working to provide better practitioner
clarity around these scenarios.

To recap, the particular scenario of concern is:
- A new root key is generated (May 2017 - presumably, May 9, 2017 as
expressed in the cert)
  - Under BRs 6.1.1.1, this should be witnessed by the auditor (or a video
recorded), and the auditor should issue a report opining on it
  - Under WebTrust, using ISAE3000 reporting (
http://www.webtrust.org/practitioner-qualifications/docs/item85806.pdf ),
that illustrative report is IN5.1
- The first audit, on September 15, 2017, is a Point in Time assessment
- The next audit provided is for the period of September 16, 2017 to
December 4, 2017
  - The report is based on the CPS dated July 25, 2017
- Thus, we lack any reporting or opining on the set of controls or
processes, minimally from the period of May 2017 to July 25, 2017 - but
potentially from May 2017 to September 2017.
  - As a consequence, we cannot have reasonable assurance that BRs 6.1.1.1,
p3, (5) was upheld - that is, for the period of May to July/September, that
OISTE maintained "effective controls to provide reasonable assurance that
the Private Key was generated and protected in conformance with the
procedures described in its Certificate Policy and/or Certification
Practice Statement and (if applicable) its Key Generation Script"

In an "ideal" world, for a new CA (since this is not being paired with your
Gen A/Gen B CAs), we would have
- Root Key report issued on Day X
- Point in Time assessment issued on Day X
- Period of Time assessment issued from Day X to Day Y
  - If the CA was not issuing certificates / not all controls could be
reported on, then the scope of the audit would indicate as such, until
such a time as the CA does.
  - Y should not be greater than 90 days after the first publicly trusted
certificate was issued.

Unfortunately, not all WebTrust practitioners have been given this
guidance, and as a result, have not passed it on to the CAs that they are
auditing. While some auditors do practice this chain of evidence/audits
from the birth of certificate, not all auditors do.

At this point, it's a question about how the community feels about the set
of changes between the following CP/CPS versions:
2.7, 2.8, 2.9, and 2.10. In particular, the set of changes in 2.9 call out
"Minor changes after WebTrust assessment" - which suggests that, prior to
the September 15, 2017 PITRA, there were issues or non-conformities that
required addressing, before the full engagement.

- Can you speak more to what happened on July 25, 2017?
- Can you provide diffs for 2.7 to 2.10?

Basically, what can give the community confidence in the management and
scope of the root certificate between May 9, 2017 and September 16, 2017?
Examples of considerations can be the adoption of the
same CP/CPS, the inclusion in scope of a previous audit (for example, was
this included in the scope of the Gen A/Gen B CAs audit for the period
ending September 15, 2017?), or other documentary evidence.


On Sat, Jun 16, 2018 at 11:45 AM, Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello,
> Sorry for my insistence, but our audit is scheduled in less than two weeks.
> I'd appreciate some feedback in the case there's any deviation with BR-8.1
> that prevent keeping the planned audit scope.
> Thanks!
> Pedro
>
> On Tuesday, June 5, 2018, 9:02:42 (UTC+2), Ryan Sleevi  wrote:
> > Hi Pedro,
> >
> > I think the previous replies tried to indicate that I will not be
> available
> > to review your feedback at all this week.
> >
> > On Mon, Jun 4, 2018 at 9:18 AM, Pedro Fuentes via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> > > Kind reminder.
> > > Thanks!
> > >
> > >
>


Re: OISTE WISeKey Global Root GC CA Root Inclusion Request

2018-06-05 Thread Ryan Sleevi via dev-security-policy
Hi Pedro,

I think the previous replies tried to indicate that I will not be available
to review your feedback at all this week.

On Mon, Jun 4, 2018 at 9:18 AM, Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Kind reminder.
> Thanks!
>
>


Re: Namecheap refused to revoke certificate despite domain owner changed

2018-06-01 Thread Ryan Sleevi via dev-security-policy
Yes, as mentioned in the CABF in the first link Wayne provided, even for
our other methods, it can be problematic for domain holders to demonstrate
control using particular methods. As Joanna mentioned, .2 can be
problematic in a post-WHOIS world.

I realize that shooting down suggestions doesn't help us build sustainable
solutions, but this was a problem we thought about a lot in the context of
the CT redaction discussions, because the only 'effective' mitigation to
inappropriate redaction was reveal-(and-revoke) - that is, reveal the
redacted name, and possibly revoke the cert if you don't like what was
revealed. Trying to decide how to authorize that request and what the
consequences of that would be occupied a substantial part of our time (...
I promise I didn't hate redaction "just because").

As an example scenario beyond what Wayne pointed out in the
https://cabforum.org/pipermail/public/2018-January/012824.html link,
consider such situations such as All CAs being required to support all
validation methods for revocation. One possible scenario is a lack of
interoperable interpretations of some methods (yet compliant with the
letter) - for example, GlobalSign's validation methods compared to Let's
Encrypt's use of the (draft) ACME validation methods, which comply with the
letter but are different protocols. Another possible scenario is that a CA
only supports a given method for revocation, not issuance, and thus the
robustness of it and the testing of it is far weaker than might be expected
(and detected) for domain validation.

Understandably, we can try to patch those two examples I gave (and there is
useful result in doing so), such as trying to further specify exactly how
domains are validated, or potentially requiring CAs to also support all
such methods for domain validation as well (although determining whether
that means CAs MUST also support DV is a related and natural implication
that follows that sort of policy). However, I was trying to present them to
indicate the sort of holistic challenges we should think about, and why
it's not quite as easy as 'revoke the same way you validated' or 'validate
using any/every possible method'.

So what does that mean for Richard? Well, I agree with Jakob in that he
quoted the appropriate section, and there is a reasonable expectation in
principle for the CA to do due diligence to investigate for possible
revocation. And I think Wayne's directions on revocation do offer a number
of important contributions to that, by providing some degree of flexibility
for CAs to do meaningful investigations (although with some degree of
transparency inevitably being needed when offering CA discretion). And I
think the Root Stores have a role to play in how they communicate that
expectation to CAs, such that domain holders have recourse if the CA is not
taking meaningful steps.

On Fri, Jun 1, 2018 at 5:42 PM, Jeremy Rowley 
wrote:

> Which is yet another reason why removing method 1 and method 5 was a good
> idea.  Do any of the other methods share the same problem? Maybe IP address
> verification right now.
>
>
>
> *From:* Ryan Sleevi 
> *Sent:* Friday, June 1, 2018 2:51 PM
> *To:* Jeremy Rowley 
> *Cc:* Wayne Thayer ; Jakob Bohm <
> jb-mozi...@wisemo.com>; mozilla-dev-security-policy <
> mozilla-dev-security-pol...@lists.mozilla.org>
>
> *Subject:* Re: Namecheap refused to revoke certificate despite domain
> owner changed
>
>
>
> You know I'm strongly supportive of requiring disclosure of validation
> methods, for the many benefits it brings, I'm not sure how that would
> address the concern.
>
>
>
> Consider a certificate validated under .5. Would Richard now need to hire
> a lawyer to say they own their domain name?
>
>
>
> On Fri, Jun 1, 2018 at 3:38 PM, Jeremy Rowley via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> This is one of the reasons I think we should require an OID specifying the
> validation method be included in the cert. Then you can require the CA
> support revocation using the same validation process as was used to confirm
> certificate authorization. With each cert logged in CT, everyone in the
> world will know exactly how to revoke an unauthorized or no-longer-wanted
> cert.
>
>
> -Original Message-
> From: dev-security-policy  digicert@lists.mozilla.org> On Behalf Of Wayne Thayer via
> dev-security-policy
> Sent: Friday, June 1, 2018 1:02 PM
> To: Jakob Bohm 
> Cc: mozilla-dev-security-policy  lists.mozilla.org>
> Subject: Re: Namecheap refused to revoke certificate despite domain owner
> changed
>
> On Fri, Jun 1, 2018 at 5:06 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> >
> > Please contact the CA again, and inform them that BR 4.9.1.1 #6

Re: Plan to update CCADB PEM extraction tool

2018-06-01 Thread Ryan Sleevi via dev-security-policy
Ah, thanks! I was trying to figure out from the context whether it was a bug
or intentional - sounds like the former, in which case, all is well :)

On Fri, Jun 1, 2018 at 3:17 PM, J.C. Jones via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Ryan -
>
> Originally the Observatory had a "Subject+SPKI" hash field. Someone filed a
> bug that the Subject+SPKI field wasn't as useful for external comparisons as
> the SPKI, and the Observatory changed over, replacing the old Subject+SPKI
> hash with a pure SPKI hash.
>
> We were proposing to switch to just the SPKI, simply because that is what
> the Observatory is using today. However, there's no reason not to have the
> Observatory provide the Subject+SPKI hash alongside the SPKI, and then we
> can keep that field and effectively add the SPKI hash. That seems like a
> good idea, for all the reasons David pointed out in 2016
> .
>
> Thanks for catching this!
>
> Cheers,
> J.C.
>
> On Fri, Jun 1, 2018 at 11:57 AM, Julien Vehent via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > I think the revert was a mistake. I should have added the SPKI instead of
> > replacing the Subject+SPKI with SPKI. (I don't recall the discussion at
> the
> > time, but I think someone confused Subject+SPKI for SPKI and I meant to
> > address the confusion).
> >
> > I'll re-add the subject+spki field, this time in addition to SPKI, and
> > re-populate the DB.
> >
> > - Julien
> >
>


Re: Namecheap refused to revoke certificate despite domain owner changed

2018-06-01 Thread Ryan Sleevi via dev-security-policy
You know I'm strongly supportive of requiring disclosure of validation
methods, for the many benefits it brings, I'm not sure how that would
address the concern.

Consider a certificate validated under .5. Would Richard now need to hire a
lawyer to say they own their domain name?

On Fri, Jun 1, 2018 at 3:38 PM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This is one of the reasons I think we should require an OID specifying the
> validation method be included in the cert. Then you can require the CA
> support revocation using the same validation process as was used to confirm
> certificate authorization. With each cert logged in CT, everyone in the
> world will know exactly how to revoke an unauthorized or no-longer-wanted
> cert.
>
> -Original Message-
> From: dev-security-policy  digicert@lists.mozilla.org> On Behalf Of Wayne Thayer via
> dev-security-policy
> Sent: Friday, June 1, 2018 1:02 PM
> To: Jakob Bohm 
> Cc: mozilla-dev-security-policy  lists.mozilla.org>
> Subject: Re: Namecheap refused to revoke certificate despite domain owner
> changed
>
> On Fri, Jun 1, 2018 at 5:06 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> >
> > Please contact the CA again, and inform them that BR 4.9.1.1 #6
> > requires the CA (not some reseller) to revoke the certificate within 24
> hours if:
> >
> > The CA is made aware of any circumstance indicating that use of a
> > Fully-Qualified Domain Name or IP address in the Certificate is no
> > longer legally permitted (e.g. a court or arbitrator has revoked a
> > Domain Name Registrant’s right to use the Domain Name, a relevant
> > licensing or services agreement between the Domain Name Registrant
> > and the Applicant has terminated, or the Domain Name Registrant has
> > failed to renew the Domain Name);
> >
> > While CAs are not required to discover such situations themselves,
> > they must revoke once made aware of the situation (in this case by you
> > telling them).
> >
> > At least, this is how I read the rules.
> >
> > This issue has come up in several CAB Forum discussions such as [1].
> > In
> practice, I believe that the requirement Jakob quoted is rarely invoked
> because (despite the examples), the language is too vague and narrow. It
> can also be quite difficult for a CA to verify that the revocation request
> is coming from the legitimate domain name registrant [1], making it less
> likely the CA will take action.
>
> I've made a couple of attempts to fix this, resulting in the current
> language proposed for ballot 213 [2]:
>
> The CA obtains evidence that the validation of domain authorization or
> control for any Fully-Qualified Domain Name or IP address in the
> Certificate should not be relied upon.
>
> I'd prefer a more prescriptive requirement that CAs allow anyone to revoke
> by proving that they control the domain name using one of the BR 3.2.2.4
> methods, but this is a problem because most CAs don't support every domain
> validation method and many domains are configured such that some validation
> methods can't be used.
>
> - Wayne
>
> [1] https://cabforum.org/pipermail/public/2018-January/012824.html
> [2] https://cabforum.org/pipermail/public/2018-May/013380.html
>
>
>


Re: Disallowed company name

2018-06-01 Thread Ryan Sleevi via dev-security-policy
On Fri, Jun 1, 2018 at 9:14 AM, Peter Kurrasch via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Security can be viewed as a series of AND's that must be satisfied in
> order to conclude "you are probably secure". For example, when you browse
> to an important website, make sure that "https" is used AND that the domain
> name looks right  AND that a "lock icon" appears in the UI AND, if the site
> uses EV certs, that the name of the organization seems correct. Failing any
> of those, stop immediately; if all of them hold true, you are probably fine.
>

Note that research has shown that your first, second, third, and fourth
options are all unreasonable requests of humans trying to be productive.

That is, https is unnecessarily confusing, "the domain looks right" is an
unreasonable task (might as well say "Make sure the fabardle is boijoing"
when presenting domains), and the lock icon as a positive indicator is
unnecessarily hostile. And that's before we get to EV certs (are you saying
I shouldn't
do business with KLM?)

So basically, all four steps are unreasonable to determine you're fine :)


Re: Plan to update CCADB PEM extraction tool

2018-06-01 Thread Ryan Sleevi via dev-security-policy
On Fri, Jun 1, 2018 at 10:20 AM, Ryan Sleevi  wrote:

>
>
> On Thu, May 31, 2018 at 6:54 PM, Kathleen Wilson via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> All,
>>
>> We are working towards updating the tool that we use in the CCADB to
>> parse PEM data and fill in the corresponding fields in the CCADB. The new
>> tool is in the TLS Observatory:
>>
>> https://github.com/mozilla/tls-observatory
>>
>> 3) Certificate ID
>> OLD: hash(Subject + SPKI), with colons
>> NEW: hash(SPKI), no colons
>> OLD: 4F:31:A6:06:59:45:EA:BC:6A:45:CB:AD:72:D8:0A:20:A4:40:0E:55:
>> 05:B9:2A:0C:4C:F1:F6:C1:A3:10:92:9F
>> NEW: FF5680CD73A5703DA04817A075FD462506A73506C4B81A1583EF549478D26476
>>
>
> Thanks for the heads up. Could you explain why the change to just SPKI?
>

(For context,
https://github.com/mozilla/tls-observatory/commit/a8124ad65cb6f7ea8aca9648bc8985157a2a84a4#diff-738e6d4c59e010e98c704b9802751ebe
was the change, but it didn't document the motivation)
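For anyone comparing the two identifiers by hand, here is a rough sketch of the new computation using Python's cryptography library. Exactly what the old tool hashed for "Subject + SPKI" is not documented here, so only the SPKI-only form is shown and it is assumed to match the new behaviour:

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    def spki_sha256(pem_bytes):
        # New-style Certificate ID: SHA-256 over the DER-encoded
        # subjectPublicKeyInfo, rendered as hex without colons.
        cert = x509.load_pem_x509_certificate(pem_bytes)
        spki = cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).hexdigest().upper()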
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Plan to update CCADB PEM extraction tool

2018-06-01 Thread Ryan Sleevi via dev-security-policy
On Thu, May 31, 2018 at 6:54 PM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All,
>
> We are working towards updating the tool that we use in the CCADB to parse
> PEM data and fill in the corresponding fields in the CCADB. The new tool is
> in the TLS Observatory:
>
> https://github.com/mozilla/tls-observatory
>
> 3) Certificate ID
> OLD: hash(Subject + SPKI), with colons
> NEW: hash(SPKI), no colons
> OLD: 4F:31:A6:06:59:45:EA:BC:6A:45:CB:AD:72:D8:0A:20:A4:40:0E:55:
> 05:B9:2A:0C:4C:F1:F6:C1:A3:10:92:9F
> NEW: FF5680CD73A5703DA04817A075FD462506A73506C4B81A1583EF549478D26476
>

Thanks for the heads up. Could you explain why the change to just SPKI?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: OISTE WISeKey Global Root GC CA Root Inclusion Request

2018-05-24 Thread Ryan Sleevi via dev-security-policy
Pedro,

Thanks for the quick and detailed replies! A few responses inline.

On Thu, May 24, 2018 at 8:19 AM, Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> > * 1.5.4 requires a full meeting of the CAA to convene for updates, which
> > may make it difficult to have the CPS (and the attendant CA policies)
> > reflect the BRs
> 
> I understand you mean the PAA. As we say in the CPS, “Minor versions only
> require the participation of a single member of the PAA in order to approve
> the publication of a new version.” This accelerates the process for quick
> amendments like the ones derived from your kind review (I’m myself a member
> of the PAA).
> 
>

Thanks. I did mean PAA (had CAA on the brain), and mostly wanted to make
sure that we don't have the same process overhead that other CAs have
reported with updating CP/CPSes for compliance. The structure of the OWGTM
suggested an independent policy authority that then set policy for WISeKey
to implement, which sounded like it could lead to these problems.


> > * 3.2.6 notes an accreditation process for interoperating, but doesn't
> > note whether that includes audits consistent with section 8 of the BRs
> 
> We set the requirements for audit and compliance in section 8 of the CPS,
> and that has to be respected in any case. This particular section just
> points to some additional controls related to interoperation, but frankly
> speaking I don’t see it as of much relevance; I could easily change it to
> “No stipulation”.
> 
>

Thanks for clarifying. Understandably, there are usually several places and
ways that one can express information in the CPS, and that's part of why we
do these detailed reviews - to make sure that things are both internally
consistent and contain the relevant information. I'm supportive of
including more details about the operational aspects, and so I think it's
good that you included this. My main concern was "They list these
additional controls, but how do they validate they are followed" - so
perhaps it's as simple as just mentioning Section 8?

To be clear, this is minor (meh) - I think your explanation suffices, and
you could change to "No stipulation", make no changes, or make changes to
reference other sections, and I think they'd all be good.


> > * 4.3 states "The OWGTM is not responsible for monitoring, research or
> > confirmation of the correctness of the information contained in a
> > certificate during the intermediate period between its issuance and
> > renewal, ", which in one read, is entirely consistent with 9.6.1 of the
> > BRs (consistent in that it's at time of issuance), but in another read,
> > could be seen as conflicting with 4.9.1.1 of the BRs
> 
> Maybe it is a problem of language or interpretation, but we say that once
> the certificate is properly validated and issued, we can’t control the
> subsequent correctness of the information (i.e. a change in domain
> ownership) until a new validation round (i.e. for a renewal) is performed.
> I would appreciate more detail on your concern, and I’m afraid I couldn’t
> find the relationship with BR 4.9.1.1, which is related to revocation.
> 
>

Yeah, I think it's just a language/interpretation issue. This wasn't a big
concern (hence meh). Section 9.6.1 of the BRs makes it clear that the
issuance of a certificate is a certification at a point in time - that CAs
can't continually monitor the information for change (as you mention). That
said, the BRs also obligate Subscribers to notify CAs of changes (9.6.2)
and for CAs to act upon such notifications (4.9.1.1, revoking for material
changes), so I was calling out that one possible interpretation is a
suggestion that OWGTM *won't* revoke if they become aware that the
information changes (4.9.1.1).

I don't think that was the intent, but it was ambiguous enough that I
wanted to call it out and make sure we're on the same page :)


> > * Section 5.2.2 / 5.2.4 don't detail the minimum number of people
> > required for certain activities.
> 
> The fact of mandating separation of duties would imply a minimum of two
> persons, but I never saw these details on the number of people per task in
> other CPS… Is this really needed?
> 
>

You'd be surprised - there are some who argue separation of duties can be
met by the same person, acting in different roles. I've sadly had this come
up in other compliance exercises (FIPS), in which duties are split between
'logical' roles and 'physical' roles - in which the same physical person
can (not simultaneously) operate many logical roles.

I agree that it's not something consistently applied through CPSes - the
sheer number of them makes it hard to ensure we give consistent and
reliable feedback on each and every one for the same issues, and especially
as patterns change over time, different issues come to the forefront. It
was from recently dealing with compliance issues (unrelated to CAs) that
the 

Re: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-23 Thread Ryan Sleevi via dev-security-policy
Tim,

I definitely think we've gone off the rails here, so I want to try to get
things back on track. You jumped in on a thread talking about DNSSEC
providing smoking guns [1] - which is a grandstanding, bad idea. It wasn't
your idea, but you jumped into the middle of that discussion and began
offering other interpretations (such as it being about disk space [2]),
when the concern was precisely about trying to find a full cryptographic
proof that can be stable over the lifetime of the certificate - which for
Let's Encrypt is 90 days, but for some CAs is up to 825 days [3].

As a systemic improvement, I think we're in violent agreement about the
goal - which is to make sure that when things go wrong, there are reliable
ways to identify where and why they went wrong - and perhaps simply in
disagreement on the means and ways to effect that. You posited that the
original motivation was that this specifically could not occur - but I
don't think that was actually shared or expressed, precisely because there
were going to be inherent limits to that information. I provided examples
of where and how, under the existing BRs, the steps taken are both
consistent with - and, arguably, above and beyond - what is required
elsewhere. That is not to say we should not strive for more, but it does
put down the notion from (other) contributors that somehow there's been
less here.

I encouraged you to share more of your thinking, precisely because this is
what allows us to collectively evaluate the fitness for purpose [4] - and
the potential risks that well-intentioned changes can pose [5]. I don't
think it makes sense to anchor on the CAA aspect as the basis to improve
[6], when the real risk is the validation methods themselves. If our intent
is to provide full data for diagnostic purposes, then how far does that
rabbit hole go - do HTTP file-based validations need to record their DNS
lookup chains? Their IP flows? Their BGP peer broadcasts? The answer to
that question rests on what it is we're trying to achieve - and the same
issue here (namely, CAA being misparsed) could just as easily apply to
HTTP streams, to WHOIS dataflows, or to BGP peers.
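To make the "misparsed" class of bug concrete: RFC 6844 requires CAA property tags to be matched case-insensitively, and keeping the raw record next to the parsed decision is what makes after-the-fact diagnosis possible. A minimal sketch, assuming dnspython and ignoring tree-climbing and issuewild handling - not anyone's actual implementation:

    import dns.resolver  # dnspython; assumed available

    def caa_permits_issuance(domain, ca_identifier):
        log = []
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return True, log  # no CAA RRset at this label (climbing omitted)
        issuers = set()
        for rdata in answers:
            tag = rdata.tag.decode().lower()   # "ISSUE" must equal "issue"
            value = rdata.value.decode().strip()
            # Keep the record as received alongside the interpretation.
            log.append({"raw": rdata.to_text(), "tag": tag, "value": value})
            if tag == "issue":
                issuers.add(value.split(";", 1)[0].strip())
        permitted = (not issuers) or (ca_identifier in issuers)
        return permitted, log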

That's why I say it's systemic, and why I say that we should figure out
what it is we're trying to achieve - and misguided framing [1] does not
help further that.

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/7L2_zfgfCwAJ
[2]
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/gUT3t7B1CwAJ
[3]
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/O7QTGmInCwAJ
[4]
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/juHBkWV4CwAJ
[5]
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/O5rwCV96CwAJ
[6]
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/lpU2dpl8CwAJ


On Wed, May 23, 2018 at 11:29 AM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> You’re free to misattribute whatever motives you want to me.  They’re not
> true.  In fact, I would like to call on you yet again to cease speculating
> and imputing malicious motives onto well-intentioned posts.
>
>
>
> The CAA logging requirements failed in this instance.  How do we make them
> better?  I’ll repeat that this isn’t a criticism of Let’s Encrypt, other
> than they had a bug like many of us have.  Mozilla wants this to be a place
> where we can reflect on incidents and improve requirements.
>
>
>
> I’m not looking for something that is full cryptographic proof; that
> can’t be made to work.  What are the minimum logging requirements so that
> CAA logs can be used to reliably identify affected certificates when CAA
> bugs happen?  That’s the discussion going on internally here.  Love to hear
> other thoughts on this issue.
>
>
>
> Also, we’re trying to be increasingly transparent about what goes on at
> DigiCert.  I believe we’re the only CA that publishes what we will deliver
> *next* sprint.  I would actually like to share much MORE information than
> we currently do, and have authorization to do so, but the current climate
> is not conducive to that.
>
>
>
> The fact that I tend to get attacked in response to my sharing of internal
> thinking and incomplete ideas is not helpful or productive.  It will
> unfortunately just cause us to have to stop being as transparent.
>
>
>
> -Tim
>
>
>
> I am opposed to unnecessary grand-standing and hand-wringing, when
> demonstrably worse things are practiced.
>
>
>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: OISTE WISeKey Global Root GC CA Root Inclusion Request

2018-05-22 Thread Ryan Sleevi via dev-security-policy
Thanks for the reminder, Wayne.

I've reviewed the CPS and Audit Reports and have the following comments. I
will note that, due to having already had someone else look at it, I only
focused on information validation related to domains and IPs, and did not
examine the policies around OV and EV, as those are generally not
applicable to trust on the Web.

Overall, I think this would be good to proceed, but there are certain
discrepancies called out under Questions that I think should be resolved
before doing so. I would suggest contacting WISeKey for follow-up on these,
and not proceeding until we've got a satisfactory response. With the
upcoming CA/Browser Forum F2F, I think that effectively means a delay of
approximately two weeks, to allow both WISeKey to respond and the community
(and maintainers) to review for sufficiency. I think that, provided a
response is received, then barring any further feedback raising additional
concerns, proceeding by June 14 would be a reasonable timeframe. Does that
seem like a reasonable set of expectations and timing?

== Good ==
* 2.1 notes that they make a public repository of issued certificates
available, which is good to see positive affirmation of certificates being
public
* 3.1.4 and Annex B provide ample detail about the certificate fields and
their validated context, which provides a reasonable basis for
understanding their certificate profile and validation practices
* 3.1.6 "In any event, the OWGTM will not attempt to intermediate nor
resolve conflicts regarding ownership of names or trademarks." - It is good
to CA recognize its role in not independently trying to determine trademark
issues, and instead defer those to proper adjudication. I wish all CAs
would recognize this.
* 4.9.7 publishes CRLs in 3 days, effectively half the BR-required time (of
7 days), leading to more effective revocation distribution.
* 7.2.2 notes a quality profile for CRLs
  * Note: It could be improved by documenting the maximum (worst-case) CRL
size or how sharding is done (across issuingDistributionPoint extensions),
to ensure that the worst-case CRL size (if all certs pointing to that CRL
were revoked) is kept within a reasonable limit, such as 64K. That's an
opportunity for improvement, but admittedly requires more careful
engineering design to implement (a rough sizing sketch follows after this
list).
* 9.4.2 notes that "private information" does NOT include information
contained within a certificate or CRL, which is the correct interpretation
* 9.6.1 explicitly notes MITM is prohibited. While implicit, it's also
encouraging to see this explicitly called out.
* Annex E notes that they support IODEF, and the supported methods (this is
a SHOULD, not a MUST, in the BRs)
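A rough back-of-the-envelope version of that CRL sizing point, with per-entry and overhead figures that are approximations of mine rather than anything from the CPS:

    # Worst-case sizing for a sharded CRL (approximate figures).
    MAX_CRL_BYTES = 64 * 1024   # the 64K ceiling mentioned above
    FIXED_OVERHEAD = 600        # issuer name, validity, signature, extensions
    BYTES_PER_ENTRY = 38        # ~20-byte serial + revocationDate + framing

    max_entries = (MAX_CRL_BYTES - FIXED_OVERHEAD) // BYTES_PER_ENTRY
    print(max_entries)  # roughly 1,700; shards of ~1,500 certs stay under 64K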

== Meh ==
* 1.4.1.2, which details the validation process for server certificates,
explicitly calls out domain verification for DV, but not for OV/EV.
  * It's unclear if this implies the use of the (deprecated) 3.2.2.4.1 /
3.2.2.4.5 as demonstrations of domain "ownership" independent of domain
"control"
  * Annex E makes it clear that .1 and .5 are in scope as validation
methods, which should be phased out by August.
* 1.5.4 requires a full meeting of the CAA to convene for updates, which
may make it difficult to have the CPS (and the attendant CA policies)
reflect the BRs
* 3.2.6 notes an accreditation process for interoperating, but doesn't note
whether that includes audits consistent with section 8 of the BRs
* 4.3 states "The OWGTM is not responsible for monitoring, research or
confirmation of the correctness of the information contained in a
certificate during the intermediate period between its issuance and
renewal, ", which in one read, is entirely consistent with 9.6.1 of the BRs
(consistent in that it's at time of issuance), but in another read, could
be seen as conflicting with 4.9.1.1 of the BRs
* Section 4.9.1 lists 13 items, but there are 15 in the corresponding BRs.
Item #4 from the BRs is combined with Item #3 in the CPS, and BR Item #7 is
not explicitly called out. Items #14 and #15 from the BRs are seemingly not
present, as the places where they would be expected - #12 and #13 in the
CPS - contain something different.
* Section 5.2.2 / 5.2.4 don't detail the minimum number of people required
for certain activities.
* Section 6.2.4 states that CA backup keys are "typically" stored encrypted.
What protections are in place if they're not encrypted?
* Section 7.3.2 misunderstands OCSP extensions as being about the
certificates, rather than extensions within OCSP responses (such as CT
extensions, should that be supported, or nonces, if that should
unfortunately be supported [and it shouldn't])
* Annex B, 11.3.1, lists SAN in the base certificate profile, rather than
as an X.509v3 extension, and doesn't explicitly list the CN as being one of
the SAN values

== Bad ==
* Annex B, 11.3.1, does not list the extendedKeyUsage in the profile for
SSL certificates, which is mandatory per BR 7.1.2.3.
  * Other profiles under Annex B do list it (under the misnamed 

Re: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-22 Thread Ryan Sleevi via dev-security-policy
On Tue, May 22, 2018 at 1:03 PM, Paul Wouters <p...@nohats.ca> wrote:

> On Tue, 22 May 2018, Ryan Sleevi via dev-security-policy wrote:
>
>> However, what does this buy us? Considering that the ZSKs are intentionally
>> designed to be frequently rotated (24 - 72 hours), thus permitting weaker
>> key sizes (RSA-512),
>>
>
> I don't know anyone who believes or uses these timings or key sizes. It
> might be done as an _attack_ but it would be a very questionable
> deployment.
>
> I know of 12,400 512-bit RSA ZSKs out of a total of about 6.5 million, and I
> consider those to be an operational mistake.


http://tma.ifip.org/wordpress/wp-content/uploads/2017/06/tma2017_paper58.pdf
has some fairly damning empirical data about the reliability of those
records, which is not in line with your anecdata.


>
>
>> However, let us not pretend that recording the bytes-on-the-wire DNS
>> responses, including for DNSSEC, necessarily helps us achieve some goal
>> about repudiation. Rather, it helps us identify issues such as what LE
>> highlighted - a need for quick and efficient information scanning to
>> discover possible impact - which is hugely valuable in its own right, and
>> is an area where I am certain a majority of CAs are woefully lagging.
>> That LE recorded this at all, beyond simply "checked DNS", is more of a
>> credit than a disservice, and a mitigating factor more than malfeasance.
>>
>
> I see no reason not to log the entire chain to the root. The only
> exception being maliciously long chains, which you can easily cap
> and error out on after following about 50 DS records?


"Why not" is not a very compelling argument, especially given the
complexity involved, and the return to value being low (and itself being
inconsistent with other matters)
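To make the trade-off concrete, a minimal sketch of what "recording the bytes-on-the-wire" could mean for a single CAA lookup: a DNSSEC-OK query whose full wire-format response is archived next to the parsed answer. The resolver address is a placeholder and the use of dnspython is an assumption; chasing DS/DNSKEY records toward the root, as suggested above, would add a bounded loop on top of this:

    import time
    import dns.message
    import dns.query
    import dns.rdatatype  # dnspython; assumed available

    def lookup_and_archive_caa(domain, resolver_ip="192.0.2.53"):
        # want_dnssec=True sets the DO bit so RRSIGs are returned too.
        query = dns.message.make_query(domain, dns.rdatatype.CAA,
                                       want_dnssec=True)
        # Truncation/TCP fallback and validation are omitted for brevity.
        response = dns.query.udp(query, resolver_ip, timeout=5)
        return {
            "checked_at": time.time(),
            "qname": domain,
            "wire_response": response.to_wire(),  # exact bytes, re-parseable
            "parsed": [rrset.to_text() for rrset in response.answer],
        }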
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

