Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-16 Thread Dimitris Zacharopoulos via dev-security-policy



On 15/11/2020 9:44 p.m., Ryan Sleevi wrote:


Thanks for chiming in, Peter,

I have always considered this revocation reason as the absolute "last resort" for CAs when it comes to revocation of Certificates. Especially for the revocation of end-entity Certificates for which there is a Subscriber Agreement attached, if the CA cannot properly justify the revocation based on other documented reasons that Subscribers can understand and be prepared for, it seems like a process failure and a possible reason for Subscribers to move against the CA. Much like the invocation of 9.16.3 should be avoided as much as possible, I believe the same applies to relying on such very wide, unbounded CPS statements.


Isn't this the "different problem" that Nick was raising concerns about?

That is, your reply here, talking about revocation for reasons outside 
of the Subscriber Agreement, sounds even further afield than Issue 
205. Am I missing something on how it connects here?


If a CA already has a statement in its CP/CPS that the CA can revoke a 
certificate at any time and with no justification, this is probably 
already included in the Subscriber Agreement. I just wanted to highlight 
that this is a very weak revocation reason that CAs should avoid as much 
as possible. In any case, the fact that a CA can revoke for any reason 
is not related to the #205 issue which is to document how a third-party 
can prove that a key is compromised. The revocation reasons related to 
key compromise can fully cover the justification for why a certificate 
must be revoked.


We're trying to ensure that the Mozilla Policy language doesn't create a 
situation where a CA is presented with evidence that a key is 
compromised, and this evidence is not in line with the documented 
procedures of the CA's section 4.9.12 (at that time); thus the proposal to add 
broader language so a CA can accept other methods of demonstrating key 
compromise.


I think Nick supports the updated language from Ben, and I also support 
Nick's updated version presented in 
https://groups.google.com/g/mozilla.dev.security.policy/c/QQmhYW6kxSw/m/CKaRcl27AgAJ.


Just a reminder, I'm also happy with the original language (that you 
support) in the MRSP, as long as it is clearly allowed for CAs to ADD 
the broader language in 4.9.12 of their CPS to avoid "audit 
misunderstandings" :)


Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-15 Thread Dimitris Zacharopoulos via dev-security-policy



On 2020-11-15 1:04 a.m., Peter Bowen via dev-security-policy wrote:

On Sat, Nov 14, 2020 at 2:05 PM Ryan Sleevi via dev-security-policy wrote:

So, perhaps now that we've had this conversation, and you've learned that
potentially illegitimate revocations are a thing, but that they were not
the thing I was worrying about and that you'd misunderstood, perhaps we can
move back to the substantive discussion?

Going back to earlier posts, it seems like CAs could include a
statement in their CPS separate from key compromise that they may
revoke a certificate at any time for any reason or no reason at their
sole discretion.  That would allow the CA to choose to accept proofs
of key compromise that are not listed in the CPS based on their
internal risk methodologies, correct?  It does still have the "secret
document" issue, but moves it away from key compromise and makes it
clear and transparent to subscribers and relying parties.  This means
the CA could revoke the subscriber certificate because they have an
internal procedure that they roll a bunch of D16 and revoke any
certificate with a serial number that matches the result.   Or the CA
could revoke the certificate because they got a claim that the key in
the certificate is compromised but it came in a form not explicitly
called out, so they had to use their own judgement on whether to
revoke.

Thanks,
Peter


Thanks for chiming in, Peter,

I have always considered this revocation reason as the absolute "last 
resort" for CAs when it comes to revocation of Certificates. Especially 
for the revocation of end-entity Certificates for which there is a 
Subscriber Agreement attached, if the CA cannot properly justify the 
revocation based on other documented reasons that Subscribers can 
understand and be prepared for, it seems like a process failure and a 
possible reason for Subscribers to move against the CA. Much like the 
invocation of 9.16.3 should be avoided as much as possible, I believe 
the same applies to relying on such very wide, unbounded CPS statements.


Dimitris.


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-15 Thread Dimitris Zacharopoulos via dev-security-policy



On 2020-11-14 5:01 a.m., Ryan Sleevi wrote:
I believe it's possible to do, with the original language, but this 
requires the CA to proactively take steps to address that in their 
CP/CPS. That is, I think it'd be reasonable for an auditor to conclude 
that, if a CA stated "We do X, Y, Z" in our CP/CPS, then doing "A, B, 
or C" without it being listed in their CP/CPS first would be an issue. 
I believe that is the concern you're raising, if I understood you 
correctly.


Exactly, and the first language proposed by Ben doesn't seem to allow 
the CA to include language to support "and any other method". This is 
not like the Domain or IP Address Validation methods, where "any other 
method" was considered a bad thing. This is a case where we should 
welcome additional acceptable methods for third parties to demonstrate 
their control of compromised keys.




The way to address that, and what I think is a good goal, is to get it 
to be "We do X, Y, Z, and any other method", ideally, when CAs make 
the update in response to the new policy. As situations come up on a 
case by case basis, the CA can deal with the issue without any update 
required first. If any CA updates their CP/CPS without also adding 
"and any other method" in response to the new policy, we can then 
clarify whether they're intentionally stating they'll reject anything, 
or whether it was an oversight, and like you, they want extra 
flexibility because they want to go "above and beyond" as needed.


However, I also want to make sure that any formally accepted method of 
proof is documented in the CP/CPS. So if the CA formalizes A and B as 
routine operations, they will update their CP/CPS to state "We do X, 
Y, Z, A, B, and any other method". This makes it clear which are the 
guaranteed methods they promise to accept, as well as that exceptions 
are recognized as necessary, and they will be accepted and dealt with.


I think we're in agreement here, and I already stated that in a previous 
reply.


"For CAs that want to do the right thing with this flexibility, the 
original language Ben proposed seems to be problematic, which is why I 
highlighted it in the discussion. The updated language keeps all the 
"good" things from the original language, and allows the CA to accept a 
reporting method that they may have not considered. Obviously, the 
logical thing is that once this new method is accepted, the CPS should 
be updated to include that additional method but that might take place 
later, after the report was accepted and certificates revoked."


We could make the language even stricter so that if a CA accepts new 
methods to demonstrate key compromise not mentioned in their CPS, they 
should include the details of these new methods in an upcoming CPS 
update (although I consider this redundant because this is IMO the 
normal way of doing things). Since CAs are required to perform this 
update task at least once a year, this information will eventually end 
up in their CPS.


I will reply to Peter's post separately on why I think invoking that 
particular revocation reason is IMO not such a good idea.



Dimitris.


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-13 Thread Dimitris Zacharopoulos via dev-security-policy



On 2020-11-13 7:17 p.m., Ryan Sleevi wrote:



On Fri, Nov 13, 2020 at 2:55 AM Dimitris Zacharopoulos 
<ji...@it.auth.gr> wrote:


There is transparency that the CA has evaluated some reporting mechanisms and these will be documented in the CPS. However, on an issue like compromised key reporting, there is no single recipe that covers all possible and secure ways for a third party to report a key compromise.



Sure, and the original proposed language doesn't restrict this.

The CA can still disclose "Email us, and we'll work it out with you", 
and that's still better than the status quo today, and doesn't require 
the CP/CPS update you speculate about.


I understand the proposal to describe a different thing. Your proposal 
to accept an email is a different requirement, which is "how to 
communicate". I think the policy change proposal is to describe the 
details of what third parties are expected to submit in order to 
report a key compromise. Whether via email, web form or other means, I 
think this policy update covers *what* is expected to be submitted (e.g. 
a CSR, a signed plain-text message, or the actual private key).
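To make the distinction concrete, here is a rough sketch of the "signed plain text" style of proof, using the Python `cryptography` library; the challenge wording, key, and serial number are illustrative only, not a method any particular CA has documented:

```python
# Sketch: a reporter proves key compromise by signing an agreed plain-text
# challenge with the allegedly compromised private key; the CA verifies the
# signature against the public key found in the certificate.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the compromised key the reporter holds.
compromised_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Hypothetical challenge text; a real CA would publish its required wording.
challenge = b"Key compromise report for certificate serial 0x0123, 2020-11-16"
signature = compromised_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# CA side: take the public key from the certificate and verify the report.
public_key = compromised_key.public_key()
public_key.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
print("report verified")  # verify() raises InvalidSignature on failure
```

The point is that the *content* of the proof (a signature made with the compromised key) is separable from the *channel* it arrives over (email, web form, API).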



For CAs that want to do the right thing with this flexibility, the original language Ben proposed seems to be problematic, which is why I highlighted it in the discussion.


As above

The updated language keeps all the "good" things from the original language, and allows the CA to accept a reporting method that they may have not considered. Obviously, the logical thing is that once this new method is accepted, the CPS should be updated to include that additional method, but that might take place later, after the report was accepted and certificates revoked.

I can't think of a bad scenario with this added language.


I addressed this in my reply to Nick, but for your benefit, the "bad" 
thing here is a CA that lists, in their CP/CPS, "We will only accept 
using our convoluted API that requires you to submit on the lunar 
equinox", and then states "Well, that's just the minimum, we have this 
doc over here saying we'll also consider X, Y, Z".


The modified language fully allows that. The original language would 
see methods X, Y, Z also added to the CP/CPS.


I think one of us has misunderstood the policy update proposal. I 
believe that what you describe is already covered by the existing policy, 
which states that the CA must disclose *how* they accept requests (via 
email, API, web form, etc.) in CPS section 1.5.2.



This makes no sense to me. We're not discussing secrets here. Say a third party reports a key compromise by sending a signed plaintext message, and the CA has only indicated accepting a CSR with a specific string in the CN field. The updated proposal would allow the CA to evaluate this "undocumented" (in the CPS) reporting method, check its credibility/accuracy and proceed with accepting it or not.


The original proposal also allows this, by saying exactly what you 
stated here, but within the CP/CPS.
The modified proposal, however, keeps secret whether or not the CA has 
a policy to unconditionally reject such messages.


You seem to be thinking the original proposal prevents any discretion; 
it doesn't. I'm trying to argue that such discretion should be 
explicitly documented, rather than kept secret by the CA, or allowing 
the CA to give conflicting answers to different relying parties on 
whether or not they'll accept such messages.


If people consider that the original language is unambiguous and will 
prevent an auditor from interpreting it as "you have stated specific 
technical method(s) for a third party to demonstrate a key compromise, 
therefore these are the only methods you must accept, otherwise you are 
violating your CP/CPS", then I'm fine.



Dimitris.


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-12 Thread Dimitris Zacharopoulos via dev-security-policy



On 12/11/2020 10:51 p.m., Ryan Sleevi via dev-security-policy wrote:

On Thu, Nov 12, 2020 at 1:39 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On Thu, Nov 12, 2020 at 2:57 AM Dimitris Zacharopoulos wrote:


I believe this information should be the "minimum" accepted methods of
proving that a Private Key is compromised. We should allow CAs to accept
other methods without the need to first update their CP/CPS. Do people
think that the currently proposed language would forbid a CA from
accepting methods that are not explicitly documented in the CP/CPS?

I also think that "parties" is a bit ambiguous, so I would suggest
modifying that to follow the language of the BRs section 4.9.2
"Subscribers, Relying Parties, Application Software Suppliers, and other
third parties". Here is my proposed change:

"Section 4.9.12 of a CA's CP/CPS MUST clearly specify the methods (at a
minimum) that Subscribers, Relying Parties, Application Software
Suppliers, and other third parties may use to demonstrate private key
compromise."

Dimitris,

Instead, what about something like,  "Section 4.9.12 of a CA's CP/CPS MUST
clearly specify its accepted methods that Subscribers, Relying Parties,
Application Software Suppliers, and other third parties may use to
demonstrate private key compromise. A CA MAY allow additional, alternative
methods that do not appear in section 4.9.12 of its CP/CPS." ?


I understand and appreciate Dimitris' concern, but I think your original
language works better for Mozilla users in practice, and sadly, moves us
back in a direction that the trend has been to (carefully) move away from.

I would say the first goal is transparency, and I think that both proposals
try to accomplish that baseline level of providing some transparency. Where
I think it's different is that the concern Dimitris raised about
"minimums", and the proposed language here, is that it discourages
transparency. "We accept X or Y", and a secret document suggesting "We also
accept Z", makes it difficult to evaluate a CA on principle.


There is transparency that the CA has evaluated some reporting 
mechanisms and these will be documented in the CPS. However, on an issue 
like compromised key reporting, there is no single recipe that covers 
all possible and secure ways for a third party to report a key 
compromise. This has already been demonstrated in the various 
discussions on this Forum. In such time-sensitive issues, where 
certificates must be revoked within 24 hours, CAs should have the 
liberty to accept a key compromise being reported by a method that might 
be considered acceptable but which the CA's engineers didn't think about 
when drafting the CPS. We can't expect a CA to update a CPS within 24 
hours to allow additional reporting methods before accepting them.


For CAs that want to do the right thing with this flexibility, the 
original language Ben proposed seems to be problematic, which is why I 
highlighted it in the discussion. The updated language keeps all the 
"good" things from the original language, and allows the CA to accept a 
reporting method that they may have not considered. Obviously, the 
logical thing is that once this new method is accepted, the CPS should 
be updated to include that additional method but that might take place 
later, after the report was accepted and certificates revoked.


I can't think of a bad scenario with this added language.



The second goal is auditability: whether or not the CP/CPS represents a
binding commitment to the community. This is why they exist, and they're
supposed to help relying parties not only understand how the CA operates,
but how it is audited. If a CA has a secret document Foo that discloses
secret method Z, is the failure to actually support secret method Z worth
noting within the audit? I would argue yes, but this approach would
(effectively) argue no.


This makes no sense to me. We're not discussing secrets here. Say a 
third party reports a key compromise by sending a signed plaintext 
message, and the CA has only indicated accepting a CSR with a 
specific string in the CN field. The updated proposal would allow the CA 
to evaluate this "undocumented" (in the CPS) reporting method, check 
its credibility/accuracy and proceed with accepting it or not.


The original proposal seems to forbid this and forces the CA to instruct 
the reporter to create a CSR with a specific string, because that's the 
only thing allowed in the CPS.
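As a concrete (hypothetical) illustration of the CSR method discussed above, the reporter would generate a CSR signed with the compromised key, carrying the CA's published marker string in the CN; sketched here with the Python `cryptography` library, with a made-up marker string:

```python
# Sketch: prove possession of a compromised key via a CSR whose CN contains
# a marker string published by the CA. The marker below is hypothetical.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
marker = "PROOF OF KEY COMPROMISE"  # hypothetical CA-specified string

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, marker)]))
    .sign(key, hashes.SHA256())
)

# CA side: the CSR's self-signature proves possession of the private key,
# and the CN carries the agreed marker string.
assert csr.is_signature_valid
assert csr.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value == marker
```

A signed plaintext message carries exactly the same proof of possession; the difference is only the envelope the signature arrives in.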


I hope this makes it clearer.


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-12 Thread Dimitris Zacharopoulos via dev-security-policy



On 2020-11-12 8:38 p.m., Ben Wilson wrote:


On Thu, Nov 12, 2020 at 2:57 AM Dimitris Zacharopoulos 
<ji...@it.auth.gr> wrote:



I believe this information should be the "minimum" accepted methods of proving that a Private Key is compromised. We should allow CAs to accept other methods without the need to first update their CP/CPS. Do people think that the currently proposed language would forbid a CA from accepting methods that are not explicitly documented in the CP/CPS?

I also think that "parties" is a bit ambiguous, so I would suggest modifying that to follow the language of the BRs section 4.9.2 "Subscribers, Relying Parties, Application Software Suppliers, and other third parties". Here is my proposed change:

"Section 4.9.12 of a CA's CP/CPS MUST clearly specify the methods (at a minimum) that Subscribers, Relying Parties, Application Software Suppliers, and other third parties may use to demonstrate private key compromise."

Dimitris,
Instead, what about something like,  "Section 4.9.12 of a CA's CP/CPS 
MUST clearly specify its accepted methods that Subscribers, Relying 
Parties, Application Software Suppliers, and other third parties may 
use to demonstrate private key compromise. A CA MAY allow additional, 
alternative methods that do not appear in section 4.9.12 of its CP/CPS." ?

Ben


That works better. Thank you.

Dimitris.


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-12 Thread Dimitris Zacharopoulos via dev-security-policy

On 5/11/2020 10:33 p.m., Ben Wilson via dev-security-policy wrote:

This email begins discussion of a potential change to section 6 of the
Mozilla Root Store Policy.


The method by which a person may provide a CA with proof of private key
compromise has been an issue discussed on the mdsp list this past year.

According to section 4.9.1.1 of the CA/Browser Forum's Baseline Requirements, key compromise is one reason for certificate revocation within a 24-hour period, and section 4.9.3 of the Baseline Requirements requires that CAs provide "clear instructions for reporting suspected Private Key Compromise ..." and that they "publicly disclose the instructions through a readily accessible online means and in section 1.5.2 of their CPS." However, in many of the CPSes reviewed by Mozilla, the only information appearing is a contact person's street address, email address, and sometimes a telephone number. Seldom is this information provided in the context of revocation for key compromise, and in many situations, email is an inadequate method of communicating key compromises, especially at scale. Some CAs have portals (e.g. DigiCert and Sectigo) in addition to an email address to submit revocation requests. There is also an open-source ACME server which is designed for the sole purpose of receiving revocations: https://github.com/tobermorytech/acmevoke.
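For context on the ACME route mentioned above: RFC 8555 defines a revokeCert resource whose request may be signed with the certificate's own private key, so the revocation request itself doubles as proof of possession of that key. Decoded for readability (the JWS fields are normally base64url-encoded strings, elided here), such a request is roughly:

```json
{
  "protected": {
    "alg": "RS256",
    "jwk": "(public key matching the compromised certificate key)",
    "nonce": "(server-issued nonce)",
    "url": "https://acme.example/acme/revoke-cert"
  },
  "payload": {
    "certificate": "(base64url, DER-encoded certificate)",
    "reason": 1
  },
  "signature": "(JWS signature computed with the certificate key)"
}
```

Here `reason` 1 is keyCompromise, per the RFC 5280 CRLReason values; the `acme.example` URL is a placeholder.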

GitHub Issue #205 notes that the best place for disclosure of such revocation methods would be in section 4.9.12 of a CA's CPS. Section 4.9.12 of the RFC 3647 outline is titled "Special requirements re key compromise". Not only will this requirement make it easier for the Mozilla community to report key compromises, but it will also help streamline key-compromise-based revocations, thereby reducing the number of Bugzilla incidents filed for delayed revocation.

Draft language in
https://github.com/BenWilson-Mozilla/pkipolicy/commit/719b834689949e869a0bd94f7bacb8dde0ccc9e4
proposes to add a last sentence to section 6 of the MRSP reading "Section
4.9.12 of a CA's CP/CPS MUST clearly specify the methods that parties may
use to demonstrate private key compromise."

We recognize that there is some overlap with the BR 4.9.3 requirement that
certain instructions be provided in section 1.5.2 of the CPS, but we
believe that the overlap can be worked through during this discussion and,
if not, a related discussion within the CA/Browser Forum.

We look forward to your comments and suggestions on this issue.

Sincerely yours,
Ben Wilson
Mozilla Root Store Program


I believe this information should be the "minimum" accepted methods of 
proving that a Private Key is compromised. We should allow CAs to accept 
other methods without the need to first update their CP/CPS. Do people 
think that the currently proposed language would forbid a CA from 
accepting methods that are not explicitly documented in the CP/CPS?


I also think that "parties" is a bit ambiguous, so I would suggest 
modifying that to follow the language of the BRs section 4.9.2 
"Subscribers, Relying Parties, Application Software Suppliers, and other 
third parties". Here is my proposed change:


"Section 4.9.12 of a CA's CP/CPS MUST clearly specify the methods (at a 
minimum) that Subscribers, Relying Parties, Application Software 
Suppliers, and other third parties may use to demonstrate private key 
compromise."


Thank you,
Dimitris.




Re: MRSP Issue #147 - Require EV audits for certificates capable of issuing EV certificates

2020-11-12 Thread Dimitris Zacharopoulos via dev-security-policy
On 12/11/2020 10:41 a.m., Dimitris Zacharopoulos via dev-security-policy wrote:
Finally, I would like to highlight that policy OID chaining is not 
currently supported in the webPKI by Browsers, so even if a CA adds a 
particular non-EV policyOID in an Intermediate CA Certificate, this 
SubCA would still be technically capable of issuing an end-entity 
certificate asserting an EV policy OID, and that certificate would 
probably get EV treatment from existing browsers. Is this correct? 


I see that this is related to 
https://github.com/mozilla/pkipolicy/issues/152, so I guess Mozilla 
Firefox does not enable "EV Treatment" if an Intermediate CA Certificate 
does not assert the anyPolicy or the CA's EV policy OID, including the 
CA/B Forum EV OID, regardless of what the end-entity certificate asserts.


Dimitris.




Re: MRSP Issue #147 - Require EV audits for certificates capable of issuing EV certificates

2020-11-12 Thread Dimitris Zacharopoulos via dev-security-policy

On 6/10/2020 11:38 p.m., Ben Wilson via dev-security-policy wrote:

#147 - Require EV audits
for certificates capable of issuing EV certificates – Clarify that EV
audits are required for all intermediate certificates that are technically
capable of issuing EV certificates, even when not currently issuing EV
certificates.

This issue is presented for resolution in the next version of the Mozilla
Root Store Policy.

Suggested language is presented here:
https://github.com/BenWilson-Mozilla/pkipolicy/commit/a83eca6d7d8bf2a3b30529775cb55b0c8a5f982b


The proposal is to replace "if issuing EV certificates" with "if capable of
issuing EV certificates" in two places -- for WebTrust and ETSI audits.



Judging from the earlier discussion that took place in September 2020, I 
understand that some CAs have an EV-enabled hierarchy (meaning that the 
Root CA is in scope of the EV Guidelines and is included in an audit 
with "EV scope"), have issued some Intermediate CAs that issue EV 
Certificates and are included in the audit with "EV scope", and some 
Intermediate CAs that have never issued EV Certificates, are not 
intended to issue EV Certificates, and were not listed in the "EV scope" 
of the audit.


I realize that this policy change will require Intermediate CAs that 
have never issued nor intend to issue EV Certificates to be included in 
an EV-scope audit with the sole purpose of asserting that no TLS 
Certificates have been issued in scope of the EV Guidelines, which 
translates into making sure that no end-entity certificate has been 
issued asserting the EV policy OID in the certificatePolicies extension. 
Is that a fair statement?


Is there going to be an effective date after which Intermediate CA 
Certificates that were not intended to issue EV Certificates will be 
required to have an EV audit?


Assuming my previous statement is fair, would it suffice for an auditor 
to examine the corpus of non-expired/non-revoked Certificates issued from 
these "non-EV" Issuing CAs to ensure that no end-entity certificate has 
been issued asserting the EV policy OID, per the CA's CP/CPS?
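Such an examination is mechanical; a rough sketch of the per-certificate check with the Python `cryptography` library (the self-signed certificate below is only a stand-in for a certificate pulled from an Issuing CA's corpus):

```python
# Sketch: flag any certificate whose certificatePolicies extension asserts
# the CA/Browser Forum EV policy OID (2.23.140.1.1).
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import ExtensionOID, NameOID

CABF_EV_OID = x509.ObjectIdentifier("2.23.140.1.1")

def asserts_ev_policy(cert: x509.Certificate) -> bool:
    """Return True if the certificate asserts the CA/B Forum EV policy OID."""
    try:
        ext = cert.extensions.get_extension_for_oid(ExtensionOID.CERTIFICATE_POLICIES)
    except x509.ExtensionNotFound:
        return False
    return any(p.policy_identifier == CABF_EV_OID for p in ext.value)

# Throwaway self-signed certificate asserting the EV OID, for demonstration.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "demo")])
start = datetime.datetime(2020, 11, 12)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(start)
    .not_valid_after(start + datetime.timedelta(days=1))
    .add_extension(
        x509.CertificatePolicies([x509.PolicyInformation(CABF_EV_OID, None)]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
print(asserts_ev_policy(cert))  # prints: True
```

Of course, whether spot-checking issued certificates like this satisfies an "EV scope" audit is exactly the question for the auditors.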


Finally, I would like to highlight that policy OID chaining is not 
currently supported in the webPKI by Browsers, so even if a CA adds a 
particular non-EV policyOID in an Intermediate CA Certificate, this 
SubCA would still be technically capable of issuing an end-entity 
certificate asserting an EV policy OID, and that certificate would 
probably get EV treatment from existing browsers. Is this correct?



Thank you,
Dimitris.


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-09 Thread Dimitris Zacharopoulos via dev-security-policy

Thank you, Ben, this is really helpful.

Dimitris.

On 2020-11-09 6:52 p.m., Ben Wilson via dev-security-policy wrote:

Hi Dimitris,

I intend to introduce the remaining discussion topics over the next three
weeks. I did not announce an end to the discussion period on purpose, so
that we can have as full of a discussion as possible. Also, in the next
three weeks, I intend to start summarizing the discussions and coming up
with new suggested language on those issues that have been discussed. I
expect that during December we will start to solidify the amendments to
MRSP (v.2.7.1), and that in January I'll announce a "last call" on the
amendments. Following that I will "summarize a consensus that has been
reached, and/or state the official position of Mozilla" - see
https://wiki.mozilla.org/CA/Updating_Root_Store_Policy.

Part of the discussion that will still need to take place deals with
implementation deadlines, timing, etc. Let's discuss that now for the
non-controversial items, and then in late December / early January for
those that are more contentious (assuming they remain in this batch of
changes).

Sincerely yours,
Ben Wilson
Mozilla Root Store


On Mon, Nov 9, 2020 at 2:45 AM Dimitris Zacharopoulos via
dev-security-policy wrote:



On 7/11/2020 3:12 p.m., Ryan Sleevi wrote:


On Sat, Nov 7, 2020 at 4:52 AM Dimitris Zacharopoulos
<ji...@it.auth.gr> wrote:


 I will try to further explain my thoughts on this. As we all know,
 according to Mozilla Policy "CAs MUST follow and be aware of
 discussions in the mozilla.dev.security.policy
 <https://www.mozilla.org/about/forums/#dev-security-policy> forum,
 where Mozilla's root program is coordinated". I believe Mozilla
 Root store managers' minimum expectations from CAs are to _read
 the messages and understand the content of those messages_. Right
 now, we have [1], [2], [3], [4], [5], [6], [7], [8], [9]
 policy-related threads opened up for discussion since October 15th.

 If every post in these threads contained as much information and
 complexity as your recent reply to Clemens,


This seems like a strawman argument, though I don’t think it’s intentional.

You’re arguing that “if things were like this hypothetical situation,
that would be bad”. However, they aren’t like that situation, as the
evidence you provided shows. This also goes back to the “what is your
desired outcome from your previous mail”, and trying to work out what
a clear call to action to address your concerns. Your previous
message, especially in the context of your (hypothetical) concern,
reads like you’re suggesting “Mozilla shouldn’t discuss policy changes
with the community”. I think we’re all sensitive and aware of the
desire not to have too many parallel discussions, which is exactly
why Ben’s been only introducing a few points a week, to facilitate
that and make progress without overwhelming.

To the contrary, I want more people to be able to participate in these
discussions, which is precisely why I "complained" about the size of
your response to Clemens :-) Keeping our replies to reasonable levels,
with a mindset that this is an International Internet community and
people might be interested to participate (even auditors that are not
native-English speakers), I believe is a good thing.

I also see that Ben has introduced a lot of policy proposal topics for
discussion in a short period of time, but I don't know what the
expectations about their "discussion time" are. Should anyone just pick
any topic and start a discussion? That might introduce a lot of parallel
discussions and I'm not sure if this is desirable by Ben. Perhaps we
need some coordination on these topics, for example "please send
feedback for topics 1 and 2 before the end of week X. If no feedback is
received, we'll deem the proposal accepted", something like that, before
moving to other topics.


As it relates to this thread, or any other thread, it seems the first
order evaluation for any CA is “Will the policy change”, followed by
“What do I need to do to meet the policy?”, both of which are still
very early in this discussion. You’re aware of the policy discussion,
and you’re aware a decision has not been made yet: isn’t that all you
need at this point? Unlike some of the other proposals, which require
action by CAs, this is a proposal that largely requires action by
auditors, because it touches on the audit framework and scheme. It
seems like, in terms of expectations for CAs to participate,
discussing this thread with your auditor is the reasonable step, and
working with them to engage here.

Hopefully that helps. Your “but what if” is easily answered as “but
we’re not”, and the “this is a lot, what do I need to do” is simply
“talk with your auditor and make sure they’re aware of discussions
here”. That seems a very simple, digestible call to action?


It helps me understand you

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-09 Thread Dimitris Zacharopoulos via dev-security-policy



On 7/11/2020 3:12 μ.μ., Ryan Sleevi wrote:



On Sat, Nov 7, 2020 at 4:52 AM Dimitris Zacharopoulos 
<ji...@it.auth.gr> wrote:



I will try to further explain my thoughts on this. As we all know,
according to Mozilla Policy "CAs MUST follow and be aware of
discussions in the mozilla.dev.security.policy
 forum,
where Mozilla's root program is coordinated". I believe Mozilla
Root store managers' minimum expectations from CAs are to _read
the messages and understand the content of those messages_. Right
now, we have [1], [2], [3], [4], [5], [6], [7], [8], [9]
policy-related threads opened up for discussion since October 15th.

If every post in these threads contained as much information and
complexity as your recent reply to Clemens,


This seems like a strawman argument, though I don’t think it’s intentional.

You’re arguing that “if things were like this hypothetical situation, 
that would be bad”. However, they aren’t like that situation, as the 
evidence you provided shows. This also goes back to the “what is your 
desired outcome from your previous mail”, and trying to work out what 
a clear call to action to address your concerns would be. Your previous 
message, especially in the context of your (hypothetical) concern, 
reads like you’re suggesting “Mozilla shouldn’t discuss policy changes 
with the community”. I think we’re all sensitive and aware of the 
desire not to have too many parallel discussions, which is exactly 
why Ben has been introducing only a few points a week, to facilitate 
that and make progress without overwhelming anyone.


To the contrary, I want more people to be able to participate in these 
discussions, which is precisely why I "complained" about the size of 
your response to Clemens :-) Keeping our replies to reasonable levels, 
with a mindset that this is an International Internet community and 
people might be interested to participate (even auditors that are not 
native-English speakers), I believe is a good thing.


I also see that Ben has introduced a lot of policy proposal topics for 
discussion in a short period of time, but I don't know what the 
expectations about their "discussion time" are. Should anyone just pick 
any topic and start a discussion? That might introduce a lot of parallel 
discussions and I'm not sure if this is desirable by Ben. Perhaps we 
need some coordination on these topics, for example "please send 
feedback for topics 1 and 2 before the end of week X. If no feedback is 
received, we'll deem the proposal accepted", something like that, before 
moving to other topics.




As it relates to this thread, or any other thread, it seems the first 
order evaluation for any CA is “Will the policy change”, followed by 
“What do I need to do to meet the policy?”, both of which are still 
very early in this discussion. You’re aware of the policy discussion, 
and you’re aware a decision has not been made yet: isn’t that all you 
need at this point? Unlike some of the other proposals, which require 
action by CAs, this is a proposal that largely requires action by 
auditors, because it touches on the audit framework and scheme. It 
seems like, in terms of expectations for CAs to participate, 
discussing this thread with your auditor is the reasonable step, and 
working with them to engage here.


Hopefully that helps. Your “but what if” is easily answered as “but 
we’re not”, and the “this is a lot, what do I need to do” is simply 
“talk with your auditor and make sure they’re aware of discussions 
here”. That seems a very simple, digestible call to action?




It helps me understand your point of view, but it seems that you don't 
acknowledge the need to keep these emails to a reasonable and digestible 
size, regardless of whether the intended recipients are auditors, CAs, or 
Relying Parties. You seem to dismiss my point and the fact that some messages on 
this list have been, in fact, very long and very complicated, which makes 
participation and contribution very difficult. I trust that we are both 
interested in truly meeting Mozilla's goal for an open Internet 
community (which includes contributions from international 
participants), so please help the community by trying to break down 
complicated responses into simpler ones, and let's all try to keep our 
answers shorter and to the point.


Indeed, this particular policy change proposal seems to mainly affect 
Auditors, but individual members of this community (either representing 
CAs or as Relying Parties) might also be interested in participating, just 
as Auditors and Relying Parties may participate in discussions around 
policy change proposals that affect CAs. FWIW, I think changing the 
rules for auditors also affects CAs, because it creates an opportunity 
for CAs to engage individual auditors, as long as 
they are accepted by Mozilla.



Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-07 Thread Dimitris Zacharopoulos via dev-security-policy


I will try to further explain my thoughts on this. As we all know, 
according to Mozilla Policy "CAs MUST follow and be aware of discussions 
in the mozilla.dev.security.policy 
<https://www.mozilla.org/about/forums/#dev-security-policy> forum, where 
Mozilla's root program is coordinated". I believe Mozilla Root store 
managers' minimum expectations from CAs are to _read the messages and 
understand the content of those messages_. Right now, we have [1], [2], 
[3], [4], [5], [6], [7], [8], [9] policy-related threads opened up for 
discussion since October 15th.


If every post in these threads contained as much information and 
complexity as your recent reply to Clemens, I think it would eventually 
"abuse" the requirement that CAs must follow discussions in m.d.s.p. 
and lead to fatigue. Understanding the complicated English language 
used, especially for non-native English speakers, is a challenging 
and difficult task in its own right. Therefore, I think it is unreasonable for 
Mozilla Root store managers to expect that CAs will follow and 
understand all of these discussions if these threads are bombarded with 
long and complicated emails that only very few will be able to read and 
understand.


I think sending specific questions is good advice and I will try to do 
that next week, but please try to also consider and respect the fact 
that CAs have a finite set of resources to work on these issues, among 
other duties. An unexpected increase in the volume of information CAs 
must follow creates a risk that something critical might be missed, 
despite the good efforts of CAs that have allocated the necessary resources 
to monitor these lists and Bugzilla incidents.


I obviously can't tell anyone to post more or less; each person has 
the right to post whatever he/she deems necessary. I just wanted you to 
know, as a peer to this Module, that some participants of this Root 
Program want to contribute and continue to do so, and it would help 
tremendously if some messages were shorter and simpler to read. Perhaps 
breaking down your long reply into more than one message might make 
it easier to process, I don't know.


Thanks for listening :-)


Dimitris.



[1]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/4fhP4iV4ut4/m/WQknrWbhAAAJ
[2]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/ZFLsguJyFDo/m/Tmn5rcXhAAAJ
[3]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/oJiMmvAJXdI/m/ZhH6oLwpAAAJ
[4]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/3sW3_cRBrfo/m/ErldH8JWAQAJ
[5]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/Oqd2iKCFELI/m/f9Kfs0M0BAAJ
[6]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/DChXLJrMwag/m/uGpEqiEcBgAJ
[7]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/nMrORsPPcds/m/hVahATyTBwAJ
[8]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/rbSFMYKlfI4/m/3kvOhydWAQAJ
[9]: 
https://groups.google.com/g/mozilla.dev.security.policy/c/xk3BanrcljY/m/8dFyM-5pAQAJ




On 2020-11-07 1:40 π.μ., Ryan Sleevi via dev-security-policy wrote:

On Fri, Nov 6, 2020 at 6:08 PM Dimitris Zacharopoulos via
dev-security-policy  wrote:


Can other people, except Ryan, follow this thread? I certainly can't. Too
much information, too much text, too many assumptions, makes it impossible
to meaningfully participate in the discussion.


These are complex topics, for sure, but that’s unavoidable. Participation
requires a degree of understanding both about the goals to be achieved by
auditing, as well as the relevant legal and institutional frameworks for
these audits. So, admittedly, that’s not the easiest to jump into.

Could you indicate what you’re having trouble following? I don’t know that
we can do much about “too much information”, since that can be said about
literally anything unfamiliar, but perhaps if you would simply ask
questions, or highlight what you’d like to more about, it could be more
digestible?

What would you say your desired outcome from your email to be? Accepting,
for a second that this is a complex topic, and so discussion will
inherently be complex, and so a response such as “make it simpler for me”
is a bit unreasonable.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy





Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-06 Thread Dimitris Zacharopoulos via dev-security-policy
Can other people, except Ryan, follow this thread? I certainly can't. Too much 
information, too much text, too many assumptions, makes it impossible to 
meaningfully participate in the discussion.


Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception for Policy Constraints

2020-10-16 Thread Dimitris Zacharopoulos via dev-security-policy



On 2020-10-16 3:21 μ.μ., Ryan Sleevi wrote:



On Fri, Oct 16, 2020 at 7:31 AM Dimitris Zacharopoulos via 
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:




On 2020-10-15 11:36 μ.μ., Ben Wilson via dev-security-policy wrote:
>   This issue is presented for resolution in the next version of
the Mozilla
> Root Store Policy. It is related to Issue #147
> <https://github.com/mozilla/pkipolicy/issues/147> (previously
posted for
> discussion on this list on 6-Oct-2020).
>
> Possible language is presented here:
>

https://github.com/BenWilson-Mozilla/pkipolicy/commit/c1acc76ad9f05038dc82281532fb215d71d537d4
>
> In addition to replacing "if issuing EV certificates" with "if
capable of
> issuing EV certificates" in two places -- for WebTrust and ETSI
audits --
> it would be followed by "(i.e. a subordinate CA under an
EV-enabled root
> that contains no EKU or the id-kp-serverAuth EKU or
anyExtendedKeyUsage
> EKU, and a certificatePolicies extension that asserts the CABF
EV OID of
> 2.23.140.1.1, the anyPolicy OID, or the CA's EV policy OID)."
Thus, Mozilla
> considers that a CA is capable of issuing EV certificates if it
is (1) a
> subordinate CA (2) under an EV-enabled root (3) that contains no
EKU or the
> id-kp-serverAuth EKU or anyExtendedKeyUsage EKU, and (4) a
> certificatePolicies extension that asserts the CABF EV OID of
2.23.140.1.1,
> the anyPolicy OID, or the CA's EV policy OID.
>
> I look forward to your suggestions.

Hello Ben,

I am trying to understand the expectations from Mozilla:

- If a CA that has an EV-capable RootCA uses a subCA Certificate
that
contains the id-kp-serverAuth EKU and the anyPolicy OID that does not
issue EV end-entity Certificates, is this considered a policy
violation
if this subCA is not explicitly included in an EV audit scope
(ETSI or
WebTrust)?


Explicitly, yes, it is 100% the intent that this would be a violation.

Audits that are limited based on whether or not certificates were 
issued are not aligned with the needs of relying parties and users. We 
need assurances, for example, that keys were and are protected, and 
that audits measure technical capability.


The same exact assurances are included in the BR audit. There are no 
additional requirements in the EV Guidelines related to the CA 
Certificates except in section 17.7 for the Root CA Key Pair Generation 
which is the same in the BRs.


So from a practical standpoint, unless I'm missing something, there is 
no policy differentiation in terms of CA Certificates (Root or 
Intermediate CA) explicitly for EV. In fact, that's why it was allowed 
(and to the best of my knowledge, is still allowed) for a CA to obtain 
an EV audit for a BR-compliant Root.


I agree that for Issuing CAs, it's very easy to create a new one and make 
it explicitly clear in the future that it is EV capable, although there 
is zero added value, because, as I explained, there are no separate policy 
rules for "EV CAs", only rules regarding end-entity certificates.




- If a subCA Certificate that contains the id-kp-serverAuth EKU
and the
anyPolicy OID was not covered by an EV-scope audit (because it did
not
issue EV Certificates) and it later decides to update the profile and
policies/practices to comply with the EV Guidelines for everything
related to end-entity certificates in order to start issuing EV
Certificates and is later added to an EV-scope audit, is that an
allowed
practice? Judging from the current EV Guidelines I didn't see
anything
forbidding this practice. In fact this is supported via section 17.4.


It has been repeatedly discussed in the CABForum about explicitly why 
this is undesirable for users, and why the set of policy updates, in 
whole, seek to prohibit this. I would refer you to the discussions in 
Shanghai, in the context of audit discussions and lifetimes, about why 
allowing this is inherently an unacceptable security risk to end users.


In this scenario, there is zero reason not to issue a new 
intermediate. The desired end state, for both roots AND intermediates, 
is that a change in capabilities leads to issuing a NEW intermediate 
or root. There is no legitimate technical reason not to issue a new 
intermediate, and transition that new certificate issuance to that new 
intermediate.


See above. I think the existing policy requirements for EV Roots and 
non-EV Roots are exactly the same (perhaps with the exception of 17.7, 
in which if a CA can demonstrate that the non-EV Root CA was following 
this rule during the key generation ceremony, it should be accepted).




The discussion about cradle to grave is intentionally: you can on

Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception for Policy Constraints

2020-10-16 Thread Dimitris Zacharopoulos via dev-security-policy


Rob,

This looks like a chicken-egg problem. A RootCA that wants to enable EV 
needs to get an EV audit. The proposed language, if I am not 
misunderstanding something, says that in order to get an EV audit, it 
must be... "EV-enabled"?


Dimitris.

On 2020-10-16 2:33 μ.μ., Rob Stradling wrote:
Hi Ben.  I agree with Dimitris that the proposed language is a bit 
confusing.


> "(i.e. a subordinate CA under an EV-enabled root that contains no 
EKU or the id-kp-serverAuth EKU or anyExtendedKeyUsage EKU, and a 
certificatePolicies extension that asserts the CABF EV OID of 
2.23.140.1.1, the anyPolicy OID, or the CA's EV policy OID)."


It's not clear from that sentence if "...that contains no EKU or the 
id-kp-serverAuth EKU or anyExtendedKeyUsage EKU" is meant to apply to 
"a subordinate CA" or "an EV-enabled root". For clarity, I suggest 
converting this sentence into a bulleted list; and to avoid repeating 
that bulleted list unnecessarily, I suggest putting it into a new 
section 3.1.2.3, which sections 3.1.2.1 and 3.1.2.2 would then reference.


I've had a go at drafting a PR here: 
https://github.com/robstradling/pkipolicy/pull/1


----
*From:* dev-security-policy 
 on behalf of Dimitris 
Zacharopoulos via dev-security-policy 


*Sent:* 16 October 2020 12:31
*To:* Ben Wilson ; mozilla-dev-security-policy 

*Subject:* Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception 
for Policy Constraints



On 2020-10-15 11:36 μ.μ., Ben Wilson via dev-security-policy wrote:
>   This issue is presented for resolution in the next version of the 
Mozilla

> Root Store Policy. It is related to Issue #147
> 
<https://github.com/mozilla/pkipolicy/issues/147> 
(previously posted for

> discussion on this list on 6-Oct-2020).
>
> Possible language is presented here:
> 
https://github.com/BenWilson-Mozilla/pkipolicy/commit/c1acc76ad9f05038dc82281532fb215d71d537d4

>
> In addition to replacing "if issuing EV certificates" with "if 
capable of
> issuing EV certificates" in two places -- for WebTrust and ETSI 
audits --

> it would be followed by "(i.e. a subordinate CA under an EV-enabled root
> that contains no EKU or the id-kp-serverAuth EKU or anyExtendedKeyUsage
> EKU, and a certificatePolicies extension that asserts the CABF EV OID of
> 2.23.140.1.1, the anyPolicy OID, or the CA's EV policy OID)." Thus, 
Mozilla

> considers that a CA is capable of issuing EV certificates if it is (1) a
> subordinate CA (2) under an EV-enabled root (3) that contains no EKU 
or the

> id-kp-serverAuth EKU or anyExtendedKeyUsage EKU, and (4) a
> certificatePolicies extension that asserts the CABF EV OID of 
2.23.140.1.1,

> the anyPolicy OID, or the CA's EV policy OID.
>
> I look forward to your suggestions.

Hello Ben,

I am trying to understand the expectations from Mozilla:

- If a CA that has an EV-capable RootCA uses a subCA Certificate that
contains the id-kp-serverAuth EKU and the anyPolicy OID that does not
issue EV end-entity Certificates, is this considered a policy violation
if this subCA is not explicitly included in an EV audit scope (ETSI or
WebTrust)?

- If a subCA Certificate that contains the id-kp-serverAuth EKU and the
anyPolicy OID was not covered by an EV-scope audit (because it did not
issue EV Certificates) and it later decides to update the profile and
policies/practices to comply with the EV Guidelines for everything
related to end-entity certificates in order to start issuing EV
Certificates and is later added to an EV-scope audit, is that an allowed
practice? Judging from the current EV Guidelines I didn't see anything
forbidding this practice. In fact this is supported via section 17.4.

The proposed language is a bit confusing so hopefully by getting
Mozilla's position on the above two questions, we can propose some
improvements.


Best regards,
Dimitris.


>
> Thanks,
>
> Ben

Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception for Policy Constraints

2020-10-16 Thread Dimitris Zacharopoulos via dev-security-policy



On 2020-10-15 11:36 μ.μ., Ben Wilson via dev-security-policy wrote:

  This issue is presented for resolution in the next version of the Mozilla
Root Store Policy. It is related to Issue #147
 (previously posted for
discussion on this list on 6-Oct-2020).

Possible language is presented here:
https://github.com/BenWilson-Mozilla/pkipolicy/commit/c1acc76ad9f05038dc82281532fb215d71d537d4

In addition to replacing "if issuing EV certificates" with "if capable of
issuing EV certificates" in two places -- for WebTrust and ETSI audits --
it would be followed by "(i.e. a subordinate CA under an EV-enabled root
that contains no EKU or the id-kp-serverAuth EKU or anyExtendedKeyUsage
EKU, and a certificatePolicies extension that asserts the CABF EV OID of
2.23.140.1.1, the anyPolicy OID, or the CA's EV policy OID)." Thus, Mozilla
considers that a CA is capable of issuing EV certificates if it is (1) a
subordinate CA (2) under an EV-enabled root (3) that contains no EKU or the
id-kp-serverAuth EKU or anyExtendedKeyUsage EKU, and (4) a
certificatePolicies extension that asserts the CABF EV OID of 2.23.140.1.1,
the anyPolicy OID, or the CA's EV policy OID.

I look forward to your suggestions.


Hello Ben,

I am trying to understand the expectations from Mozilla:

- If a CA that has an EV-capable RootCA uses a subCA Certificate that 
contains the id-kp-serverAuth EKU and the anyPolicy OID that does not 
issue EV end-entity Certificates, is this considered a policy violation 
if this subCA is not explicitly included in an EV audit scope (ETSI or 
WebTrust)?


- If a subCA Certificate that contains the id-kp-serverAuth EKU and the 
anyPolicy OID was not covered by an EV-scope audit (because it did not 
issue EV Certificates) and it later decides to update the profile and 
policies/practices to comply with the EV Guidelines for everything 
related to end-entity certificates in order to start issuing EV 
Certificates and is later added to an EV-scope audit, is that an allowed 
practice? Judging from the current EV Guidelines I didn't see anything 
forbidding this practice. In fact this is supported via section 17.4.


The proposed language is a bit confusing so hopefully by getting 
Mozilla's position on the above two questions, we can propose some 
improvements.



Best regards,
Dimitris.




Thanks,

Ben




Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Dimitris Zacharopoulos via dev-security-policy
On 6/7/2020 11:39 π.μ., Paul van Brouwershaven via dev-security-policy 
wrote:

As follow up to Dimitris comments I tested the scenario where a
sibling issuing CA [ICA 2] with the OCSP signing EKU (but without
digitalSignature KU) under [ROOT] would sign a revoked OCSP response for
[ICA] also under [ROOT]
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d

I was actually surprised to see that certutil fails to validate or decode the
OCSP response in this scenario. But this doesn't mean it's not a problem, as
other responders or versions might accept the response.

I will try to perform the same test on Mac in a moment.


Thank you very much Paul, this is really helpful.

Dimitris.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Dimitris Zacharopoulos via dev-security-policy

On 6/7/2020 11:03 π.μ., Ryan Sleevi via dev-security-policy wrote:

Yep. You have dismissed it but others may have not. If no other voices are
raised, then your argument prevails:)


I mean, it’s not a popularity contest:)


As others have highlighted already, there are times when people get 
confused by you posting by default in a personal capacity. It is easy to 
confuse readers when using the word "I" in your emails.


Even if you use your "Google Chrome hat" to make a statement, there 
might be a different opinion or interpretation from the Mozilla Module 
owner, for whom this Forum mainly exists. There's more agreement than 
disagreement between Mozilla and Google when it comes to policy, so I 
hope my statement was not taken the wrong way as an attempt to "push" 
for a disagreement.


I have already asked for the Mozilla CA Certificate Policy owner's 
opinion regarding separate hierarchies for Mozilla Root program in 
https://groups.google.com/d/msg/mozilla.dev.security.policy/EzjIkNGfVEE/jOO2NhKAAwAJ, 
highlighting your already clearly stated opinion on behalf of Google, 
because I am interested to hear their opinion as well. I hope I'm not 
accused of doing something wrong by asking for more "voices", if there 
are any.






Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Dimitris Zacharopoulos via dev-security-policy

On 6/7/2020 9:47 π.μ., Ryan Sleevi wrote:
I can understand wanting to wait to see what others do first, but 
that’s not leadership.


This is a security community, and it is expected that members see and learn from 
others, which is just as good as proposing new things. I'm not sure what 
you mean by "leadership". Leadership for whom?


We 



Who is we here? HARICA? The CA Security Council? The affected CAs in 
private collaboration? It’s unclear which of the discussions taking 
place are being referenced here.


HARICA.


There was also an interesting observation that came up during a
recent
discussion. 



You mean when I dismissed this line of argument? :)


Yep. You have dismissed it but others may have not. If no other voices 
are raised, then your argument prevails :)



Dimitris.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Dimitris Zacharopoulos via dev-security-policy


I'd like to chime in on this particular topic because I had thoughts 
similar to Pedro's and Peter's.


I would like to echo Pedro's, Peter's and other's argument that it is 
unreasonable for Relying Parties and Browsers to say "I trust the CA 
(the Root Operator) to do the right thing and manage their Root Keys 
adequately", and not do the same for their _internally operated_ and 
audited Intermediate CA Certificates. The same Operator could do "nasty 
things" with revocation, without needing to go to all the trouble of 
creating -possibly- incompatible OCSP responses (at least for some 
currently known implementations) using a CA Certificate that has the 
id-kp-OCSPSigning EKU. Browsers have never asked for public records on 
"current CA operations", except in very rare cases where the CA was 
accused of "bad behavior". Ryan's response on 
https://bugzilla.mozilla.org/show_bug.cgi?id=1649939#c8 seems 
unreasonably harsh (too much "bad faith" toward affected CAs, even if these 
CA Certificates were operated by the Root Operator). There are auditable 
events that auditors could check and attest to, if needed, for example 
OCSP responder configuration changes or key signing operations, and 
these events are kept/archived according to the Baseline Requirements 
and the CA's CP/CPS. This attestation could be done during a "special 
audit" (as described in the ETSI terminology) and possibly a 
Point-In-Time audit (under WebTrust).


We did some research and this "convention", as explained by others, 
started from Microsoft.


In 
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn786428(v=ws.11), 
one can read "if a CA includes EKUs to state allowed certificate usages, 
then its EKUs will be used to restrict usages of certificates issued by 
this CA" in the paragraph titled "Extended Key Usage Constraints".


Mozilla agreed to this convention and added it to Firefox 
https://bugzilla.mozilla.org/show_bug.cgi?id=725351. The rest of the 
information was already covered in this thread (how it also entered into 
the Mozilla Policy).


IETF made an attempt to set an extention for EKU constraints 
(https://datatracker.ietf.org/doc/draft-housley-spasm-eku-constraints/) 
where Rob Stradling made an indirect reference in 
https://groups.google.com/d/msg/mozilla.dev.security.policy/f5-URPoNarI/yf2YLpKJAQAJ 
(Rob, please correct me if I'm wrong).


There was a follow-up discussion in IETF that concluded that no one should 
deal with this issue 
(https://mailarchive.ietf.org/arch/msg/spasm/3zZzKa2lcT3gGJOskVrnODPBgM0/). 
A day later, all attempts died off because no one would actually 
implement this(?) 
https://mailarchive.ietf.org/arch/msg/spasm/_gJTeUjxc2kmDcRyWPb9slUF47o/. 
If this extension was standardized, we would probably not be having this 
issue right now. However, this entire topic demonstrates the necessity 
to standardize the EKU existence in CA Certificates as constraints for 
EKUs of leaf certificates.


We even found a comment referencing the CA/B Forum about whether it has 
accepted that EKUs in CA Certificates are considered constraints 
(https://mailarchive.ietf.org/arch/msg/spasm/Y1V_vbEw91D2Esv_SXxZpo-aQgc/). 
Judging from the result and the discussion of this issue, even today, it 
is unclear how the CA/B Forum (as far as its Certificate Consumers are 
concerned) treats EKUs in CA Certificates.


CAs that enabled the id-kp-OCSPSigning EKU in the Intermediate CA 
Profiles were following the letter of the Baseline Requirements to 
"protect relying parties". According to the BRs 7.1.2.2:


/"Generally Extended Key Usage will only appear within end entity 
certificates (as highlighted in RFC 5280 (4.2.1.12)), however, 
Subordinate CAs MAY include the extension to further *protect relying 
parties* until the use of the extension is consistent 
between Application Software Suppliers whose software is used by a 
substantial portion of Relying Parties worldwide."/


So, on one hand, a Root Operator was trying to do "the right thing" 
following the agreed standards and go "above and beyond" to "protect" 
relying parties by adding this EKU in the issuing CA Certificate (at a 
minimum it "protected" users using Microsoft that required this "EKU 
Chaining"), and on the other hand it unintentionally tripped into a case 
where a CA Certificate with such an EKU could be used  in an OCSP 
responder service to sign status messages for its parent.
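[The problematic profile described above is also easy to detect 
mechanically. A minimal sketch, again using Python's `cryptography`; 
the flagging condition mirrors what is discussed in this thread, and 
the function name and demo certificate are my own illustration:]

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

def is_delegated_responder_ca(cert):
    """True if this is a CA certificate that also asserts
    id-kp-OCSPSigning, i.e. its key could be used to sign OCSP
    status messages on behalf of its issuer while also being an
    issuing CA."""
    try:
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
        if not bc.ca:
            return False
    except x509.ExtensionNotFound:
        return False
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        return False
    return ExtendedKeyUsageOID.OCSP_SIGNING in eku

# Demo: an intermediate profile following the BR 7.1.2.2 "MAY include
# the extension" reading, with serverAuth plus OCSPSigning.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Test Issuing CA")])
now = datetime.datetime(2020, 7, 1)
ica = (
    x509.CertificateBuilder()
    .subject_name(name).issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .add_extension(x509.ExtendedKeyUsage(
        [ExtendedKeyUsageOID.SERVER_AUTH, ExtendedKeyUsageOID.OCSP_SIGNING]),
        critical=False)
    .sign(key, hashes.SHA256())
)
print(is_delegated_responder_ca(ica))  # True: the profile at issue
```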


There was also an interesting observation that came up during a recent 
discussion. As mandated by RFC 5280 (4.2.1.12), EKUs are supposed to be 
normative constraints on *end-entity Certificates*, not CA Certificates. 
Should RFC 6960 be read in conjunction with RFC 5280 and not on 
its own? Should OCSP clients conforming to Publicly-Trusted 
Certificate (PTC) and BR-compliant solutions implement both? If the 
answer is yes, this means that a "conforming" OCSP client should not 
place trust on the id-kp-OCSPSigning 

Re: When to accept/require revised audits for missing cert fingerprints

2020-02-07 Thread Dimitris Zacharopoulos via dev-security-policy

For what it's worth, I think that there should be two distinct cases:

a) Self-signed Certificates that have the same SPKI and name, but only 
one was ever requested to be included as a Trust Anchor in the Mozilla 
Root Program,


b) Variations of Issuing CA Certificates that have the same SPKI and name.

For b), the rules of disclosure in the Mozilla Policy were very clear 
and unambiguous. I would expect all versions of b) to be included in 
audit reports.


For a), in my understanding, if the CA only requested one version of the 
Root CA Certificate to be included, the possible other versions could 
"theoretically" be considered "Intermediate" CA Certificates but they 
are self-signed and effectively do not transfer any trust over, unless I 
am missing something. So, when the disclosure requirement was drafted in 
the policy, I don't think it was ever intended for CAs to disclose all 
variations of their Root CA Certificates that share the same SPKI and 
name. Wayne and Kathleen, as module owners, could confirm if this is 
true or not. If CAs were intended to add all variations of their 
self-signed Root CAs, it would have been highlighted and discussed on 
this list as a corner case (I can't remember if this corner case was 
discussed when writing the disclosure policy). And even if CAs wanted to 
disclose these in CCADB, it was counter-intuitive to add a self-signed 
Root as an Intermediate.


It is possible that a CA issued two or more variations of their Root CA 
Certificate, changing extensions and other information if they detected 
that something was incorrect. IMHO that should not affect anything 
because, at the end of the day, the Trust Anchor is the one that matters.


I also can't think of a "bad actor" case for a) that could take 
advantage of this theoretical "gap" to cause any security risks, but 
perhaps others could. With that said, I believe there should be some 
guidance on the CCADB wiki or in Mozilla Policy that describes this 
corner case and defines a process for adding "variations" of such 
self-signed certificates as "Intermediate CA" Certificates (when most 
people are used to seeing them as self-signed "Root CA" Certificates). 
As long as the SPKI and name are included in consecutive audit reports 
from the creation of one of those variations, there should be a grace 
period allowing the other variations to be disclosed and included in 
future audit reports. It would cause "pain" to CAs and auditors to 
update existing audit reports to retroactively disclose such cases, for 
no good reason.


I am not sure if adding one of the self-signed Root CA variations in the 
OneCRL (not the one included as a Trust Anchor) would cause any harm to 
the properly disclosed and audited hierarchy. Could someone please 
clarify that?


Finally, I don't think auditor professional ethics have anything to do 
with this discussion. Both audit schemes allow for reports to be 
updated; otherwise we wouldn't even have this option on the table. 
Challenging audit schemes is good and healthy but should probably happen 
on a separate thread with specific concerns raised.




Thank you,
Dimitris.

On 2020-02-07 6:00 μ.μ., Wayne Thayer via dev-security-policy wrote:

On Thu, Feb 6, 2020 at 5:44 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:



My recommendation is that, for audit periods ending within the next 30 or

so days (meaning, effectively, for reports provided over the next 4 months,
given the three month window before reporting), such situations are
accepted despite the limited assurance they provide. Following that - that
is, for any audit afterwards, there is zero exception, and revocation is
required.



I'd like to see Mozilla require an incident report from CAs that can't or
won't follow the existing guidance (by either supplying a revised audit
statement, revoking the certificate, or adding it to OneCRL). A number of
CAs have resolved these issues by following this guidance and I recommend
against adding a grace period at this time for those who have not.

This places the onus on the CA to ensure their audit reports will meet

Mozilla’s requirements.



In the future, I expect ALV to catch these issues as soon as the audit
report is published. Mistakes do happen, and I don't think our policy
should go straight to revocation upon an ALV failure due to an audit
statement error.

2) Should we accept a revised audit statement to include the SHA256

fingerprint of a certificate that was not previously listed and does not
have the same Subject + SPKI as other cert(s) listed in the audit
statement?





I realize Mozilla uses OneCRL to address the gap there, but ostensibly this

is a straight BR violation regarding providing continuous audits. The
proposed revisions will make this unambiguously clearer, but either way,
the best path to protect the most users is to require the CA to revoke such
certificates.

This also hopefully has the desired effect of forcing 

Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-10-22 Thread Dimitris Zacharopoulos via dev-security-policy



On 2019-10-22 7:28 μ.μ., Wayne Thayer wrote:


The CA SHALL NOT delegate validation of the domain part of
an e-mail
address.


This is

https://github.com/mozilla/pkipolicy/commit/85ae5a1b37ca8e5138d56296963195c3c7dec85a


Sounds good. This was your proposed response to solving this issue
back on May 13, so it's full circle :)


I'm going to consider this issue resolved unless there are further 
comments.


Just checking whether the following is acceptable.

If a CA validates that the domain mycompany.example is owned/controlled 
by "mycompany", can this company delegate the issuance of S/MIME 
certificates for subsection1.mycompany.example to an internal department 
or a subsidiary? Does the proposed language allow this?



Thanks,
Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-10-05 Thread Dimitris Zacharopoulos via dev-security-policy

Jeremy,

As I'm sure you know, there are several federated services, at least in 
the Education and Research area, where schemes like https://edugain.org/ 
are used. In that scenario, Identity Providers under a certain policy 
(https://technical.edugain.org/documents) provide signed assertions that 
contain the full name and email address of the Subject, along with other 
signed attributes. Identity Providers under this scheme usually have 
even stricter policies. Unfortunately these entities are not under a 
strict audit scheme, but the arrangement has been working pretty well 
for decades.


If any of these Identity Providers want to enable their users to get an 
S/MIME Certificate trusted by NSS, I think it would be acceptable for 
the CA to validate control of the Domain part and then let the federated 
login take care of the local part. Does that seem reasonable?



Dimitris.

On 2019-10-05 9:45 π.μ., Jeremy Rowley via dev-security-policy wrote:

I’m thinking more in terms of the potential rule in the Mozilla policy. If the 
rule is “the CA MUST verify the domain component of the email address” then the 
rule potentially prohibits the scenario where the CA verifies the entire email 
address, not the domain component, by sending a random value to each email 
address and requiring the email address holder to approve issuance. I actually 
liked the previous requirement prohibiting delegation of email address 
verification, although the rule lacked clarity on what email address 
verification entailed. I figure that will be defined in the s/MIME working 
group.
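The random-value verification Jeremy describes can be sketched as 
follows, under the assumption that the token is delivered to the email 
address out of band; `EmailChallenge` and its methods are hypothetical 
names for illustration, not any CA's real API:

```python
import hmac
import secrets

class EmailChallenge:
    """Minimal sketch of a random-value email challenge.

    The CA generates a high-entropy token per email address, delivers
    it by email, and only treats the address as validated when the
    holder returns the exact token.
    """
    def __init__(self):
        self._pending = {}

    def start(self, email: str) -> str:
        token = secrets.token_urlsafe(32)   # the "random value"
        self._pending[email] = token
        return token  # in practice this would be emailed, not returned

    def confirm(self, email: str, presented: str) -> bool:
        expected = self._pending.get(email)
        if expected is None:
            return False
        # Constant-time comparison avoids leaking the token via timing.
        return hmac.compare_digest(expected, presented)

challenge = EmailChallenge()
token = challenge.start("user@mycompany.example")
assert challenge.confirm("user@mycompany.example", token)
assert not challenge.confirm("user@mycompany.example", "wrong-token")
```

The point of contention in the thread is who runs this exchange: an 
audited CA, or an unaudited RA sitting between the CA and the mailbox 
holder.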

Describing the actors is a good way to look at it though. Besides those three 
buckets of issuers, you have the RAs, the email holders, and the organization 
controlling the domain portion of the email address. These entities may not be 
the same as the CA. More often than not, the RA ends up being the organization 
contracting with the CA for the s/MIME services. The RAs are the risky party 
that I think should be restricted on what they can verify since that’s where 
the lack of transparency starts to come in. With the prohibition against 
delegation of email control eliminated, we’re again placing domain/email 
control responsibilities on a party that has some incentive to misuse it (to 
read email of a third party) without audit, technical, or policy controls that 
limit their authority. Because there is a lack of controls over the RAs, they 
become a hidden layer in the process that can issue certificates without anyone 
looking at how they are verifying the email address or domain name and whether 
these processes are equivalent to the controls found in the BRs. Similar to 
TLS, the unaudited party should not be the one providing or verifying 
acceptance of the tokens used to approve issuance.

In short, I’m agreeing with the “at least” verifying the domain control 
portion. However, I know we verify a lot of email addresses directly with the 
email owner that doesn’t have control over the domain name. So the rule should 
be something that permits verification by the root CA of either the full email 
address or the domain name but at least eliminates delegation to non-audited 
third parties. For phrasing, “the CA MUST verify either the domain component of 
the email address or the entire email address using a process that is 
substantially similar to the process used to verify domain names as described 
in the Baseline Requirements”, with the understanding that we will rip out the 
language and replace it with the s/MIME requirements once those are complete at 
the CAB Forum.

Jeremy

From: Ryan Sleevi 
Sent: Friday, October 4, 2019 10:56 PM
To: Jeremy Rowley 
Cc: Kathleen Wilson ; Wayne Thayer ; 
mozilla-dev-security-policy 
Subject: Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for 
S/MIME Certificates

Jeremy:

Could you describe a bit more who the actors are?

Basically, it seems that the actual issuance is going to fall into one of 
several buckets:
1) Root CA controls Issuing CAs key
2) Issuing CA controls its own key, but is technically constrained
3) Issuing CA controls its own key, and is not technically constrained

We know #1 is covered by Root CA’s audit, and we know #3 is covered by Issuing 
CA’s audit, and #2 is technically constrained and thus the domain part is 
apriori validated.

So when you say “some organizations”, I’m trying to understand which of the three cases 
here they fall under. If I understand correctly, the idea is that Customer Foo approaches 
Root CA (Case #1). Root CA knows Foo’s namespace is foo.example through prior verification, 
and Root CA allows Foo to issue to *@foo.example. Then Foo says 
“oh, hey, we have a contractor at user@bar.example, we’d 
like a cert for them too”.

Why can’t Root CA verify themselves? Why would or should Root CA trust Foo to 
do it correctly? I can imagine plenty of verification protocols where Foo can 
be the “face” of the 

Re: Policy 2.7 Proposal: Extend Section 8 to Encompass Subordinate CAs

2019-10-04 Thread Dimitris Zacharopoulos via dev-security-policy



On 2019-10-04 12:56 μ.μ., Rob Stradling wrote:

Dimitris,

Since CAs should already be disclosing that an intermediate 
certificate is "externally-operated" by populating the "Subordinate CA 
Owner" field in the CCADB record, what's the benefit of duplicating 
this information in the intermediate certificate itself?


What happens if an intermediate certificate starts life as 
"externally-operated" but later becomes "internally-operated", or 
vice-versa?  (e.g., the root CA company acquires the intermediate CA 
company).


It's possible to update a CCADB record.  It's not possible to update a 
certificate.




That makes sense. This means that Mozilla has a way to know exactly how 
many externally-operated CAs are in the Root program, provided all CAs 
updated this "Subordinate CA Owner" field correctly, meaning that the 
"Subordinate CA Owner" operates/manages the CA keys.


Dimitris.


----
*From:* dev-security-policy 
 on behalf of Dimitris 
Zacharopoulos via dev-security-policy 


*Sent:* 04 October 2019 05:43
*To:* mozilla-dev-security-policy 

*Subject:* Re: Policy 2.7 Proposal: Extend Section 8 to Encompass 
Subordinate CAs

Adding to Jeremy's post, I believe we need to also define a normative
requirement to mark an unconstrained Intermediate CA Certificate not
operated by the entity that controls the Root Key.
Section 7.1.6.3 of the Baseline Requirements requires an explicit policy
identifier for these subCAs. The anyPolicy identifier is not permitted.
So, I assume that all Intermediate CA Certificates that include the
anyPolicy identifier should be operated by the Issuing CA (or an 
Affiliate).


Unfortunately -in one sense-, the BRs allow more specific policy
identifiers even for Intermediate CA Certificates that ARE operated by
the Issuing CA, so it is hard to differentiate which subCA is "Internal"
or "External".

I believe this is serious enough to consider the possibility of
requiring a specific policy identifier (assigned by the CABF) to be
included in externally-operated subCA Certificates, in addition to any
other policy identifiers that the Issuing CA has chosen for that CA
Certificate. Of course, other solutions might be available.

Mozilla is also going over a close investigation of unconstrained subCAs
that are missing audits and possibly included in oneCRL. I don't believe
these subCAs should be grandfathered in. However, others that have
supplied audit reports, following Mozilla Policies and indicating
compliance with the BRs, should be grandfathered.


Dimitris.



On 2019-10-04 3:06 π.μ., Jeremy Rowley via dev-security-policy wrote:
> Hey Wayne,
>
> I think there might be confusion on how the notification is supposed 
to happen. Is notification through CCADB sufficient? We've uploaded 
all of the Sub CAs to CCADB including the technically constrained 
ICAs. Each one that is hosted/operated by itself is marked that way 
using the Subordinate CA Owner field. Section 8 links to emailing 
certifica...@mozilla.org but operationally, CCADB has become the 
default means of providing this notice. If you're expecting email, 
that may be worth clarifying in case CAs missed that an email is 
required. I know I missed that, and because CCADB is the common method 
of notification there is a chance that notice was considered sent but 
not in the expected way.

>
> There's also confusion over the "new to Mozilla" language I think. I 
interpreted this language as organizations issued cross-signs after 
the policy. For example, Siemens operated a Sub CA through Quovadis 
prior to policy date so they aren't "new" to the CA space even if they 
were re-certified. However, they would be new in the sense you 
identified - they haven't gone through an extensive review by the 
community.  If the goal is to ensure the community review happens for 
each Sub CA, then requiring all recertifications to go through an 
approval process makes sense instead of making an exception for new. 
I'm not sure how many exist currently, but if there are not that many 
organizations, does a grandfathering clause cause unnecessary 
complexity? I realize this is not in DigiCert's best interest, but the 
community may benefit the most by simply requiring a review of all Sub 
CAs instead of trying to grandfather in existing cross-signs.  Do you 
have an idea on the number that might entail? At worst, we waste a bunch 
of time discovering that all of these 
are perfectly operated and that they could have been grandfathered in 
the first place. At best, we identify some critical issues and resolve 
them as a community.

>
> If there are a significant number of unconstrained on-prem CAs, then 
language that requires a review on re-signing would be helpful.  
Perhaps say "As of X date, a CA MUST NOT sign a non-technica

Re: Policy 2.7 Proposal: Extend Section 8 to Encompass Subordinate CAs

2019-10-03 Thread Dimitris Zacharopoulos via dev-security-policy
Adding to Jeremy's post, I believe we need to also define a normative 
requirement to mark an unconstrained Intermediate CA Certificate not 
operated by the entity that controls the Root Key.
Section 7.1.6.3 of the Baseline Requirements requires an explicit policy 
identifier for these subCAs. The anyPolicy identifier is not permitted. 
So, I assume that all Intermediate CA Certificates that include the 
anyPolicy identifier should be operated by the Issuing CA (or an Affiliate).


Unfortunately -in one sense-, the BRs allow more specific policy 
identifiers even for Intermediate CA Certificates that ARE operated by 
the Issuing CA, so it is hard to differentiate which subCA is "Internal" 
or "External".


I believe this is serious enough to consider the possibility of 
requiring a specific policy identifier (assigned by the CABF) to be 
included in externally-operated subCA Certificates, in addition to any 
other policy identifiers that the Issuing CA has chosen for that CA 
Certificate. Of course, other solutions might be available.
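The distinction above can be illustrated with a small check; the 
anyPolicy OID is from RFC 5280 (certificatePolicies), while the function 
name and its return strings are purely illustrative:

```python
ANY_POLICY = "2.5.29.32.0"   # anyPolicy qualifier, RFC 5280 4.2.1.4

def check_subca_policies(policy_oids, externally_operated):
    """Sketch of the BR 7.1.6.3 reasoning discussed above.

    Externally-operated subCA certificates must carry explicit policy
    identifiers; anyPolicy is only plausible for subCAs operated by
    the Issuing CA or an Affiliate.
    """
    if externally_operated and ANY_POLICY in policy_oids:
        return "violation: anyPolicy in externally-operated subCA"
    if ANY_POLICY in policy_oids:
        return "presumed internally operated"
    # Explicit policy OIDs alone cannot distinguish internal from
    # external operation -- the ambiguity the message points out.
    return "indeterminate"

assert check_subca_policies([ANY_POLICY], True).startswith("violation")
assert check_subca_policies([ANY_POLICY], False) == "presumed internally operated"
# "2.23.140.1.2.2" is the CA/B Forum OV policy identifier.
assert check_subca_policies(["2.23.140.1.2.2"], False) == "indeterminate"
```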


Mozilla is also going over a close investigation of unconstrained subCAs 
that are missing audits and possibly included in oneCRL. I don't believe 
these subCAs should be grandfathered in. However, others that have 
supplied audit reports, following Mozilla Policies and indicating 
compliance with the BRs, should be grandfathered.



Dimitris.



On 2019-10-04 3:06 π.μ., Jeremy Rowley via dev-security-policy wrote:

Hey Wayne,

I think there might be confusion on how the notification is supposed to happen. 
Is notification through CCADB sufficient? We've uploaded all of the Sub CAs to 
CCADB including the technically constrained ICAs. Each one that is 
hosted/operated by itself is marked that way using the Subordinate CA Owner 
field. Section 8 links to emailing certifica...@mozilla.org but operationally, 
CCADB has become the default means of providing this notice. If you're 
expecting email, that may be worth clarifying in case CAs missed that an email 
is required. I know I missed that, and because CCADB is the common method of 
notification there is a chance that notice was considered sent but not in the 
expected way.

There's also confusion over the "new to Mozilla" language I think. I interpreted this 
language as organizations issued cross-signs after the policy. For example, Siemens operated a Sub 
CA through Quovadis prior to policy date so they aren't "new" to the CA space even if 
they were re-certified. However, they would be new in the sense you identified - they haven't gone 
through an extensive review by the community.  If the goal is to ensure the community review 
happens for each Sub CA, then requiring all recertifications to go through an approval process 
makes sense instead of making an exception for new. I'm not sure how many exist currently, but if 
there are not that many organizations, does a grandfathering clause cause unnecessary complexity? I 
realize this is not in DigiCert's best interest, but the community may benefit the most by simply 
requiring a review of all Sub CAs instead of trying to grandfather in existing cross-signs.  Do you 
have an idea on the number that might entail? At worst, we waste a bunch 
of time discovering that all of these are 
perfectly operated and that they could have been grandfathered in the first 
place. At best, we identify some critical issues and resolve them as a 
community.

If there are a significant number of unconstrained on-prem CAs, then language that 
requires a review on re-signing would be helpful.  Perhaps say "As of X date, a CA 
MUST NOT sign a non-technically constrained certificate where cA=True for keys that are 
hosted external to the CA's infrastructure or that are not operated in accordance with 
the issuing CA's policies and procedures unless Mozilla has first granted permission for 
such certificate"? The wording needs work of course, but the idea is that they go 
through the discussion and Mozilla signs off. A process for unconstrained Sub CAs that is 
substantially similar to the root inclusion makes sense, but there is documentation on 
CCADB for the existing ones. Still, this documentation should probably made available, 
along with the previous incident reports, to the community for review and discussion. 
Afterall, anything not fully constrained is essentially operating the same as a fully 
embedded root.

Speaking on a personal, non-DigiCert note, I think on-prem sub CAs are a bad 
idea, and I fully support more careful scrutiny on which entities are 
controlling keys. Looking at the DigiCert metrics, the on-prem Sub CAs are 
responsible for over half of the incident reports, with issues ranging from 
missed audit dates to incorrect profile information. The long cycle in getting 
information,  being a middle-man information gathering, and trying to convey 
both Mozilla and CAB forum policy makes controlling compliance very difficult, 
and a practice I would not recommend to any 

Re: DigiCert OCSP services returns 1 byte

2019-09-23 Thread Dimitris Zacharopoulos via dev-security-policy



On 2019-09-23 5:00 μ.μ., Ryan Sleevi via dev-security-policy wrote:

No. That’s the more dangerous approach which I’ve tried repeatedly to
dissuade. You should produce, and distribute, the Good response with the
pre-certificate.


Understood. Thank you for the clear guidance.

Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert OCSP services returns 1 byte

2019-09-23 Thread Dimitris Zacharopoulos via dev-security-policy



On 2019-09-23 3:02 μ.μ., Ryan Sleevi wrote:



On Mon, Sep 23, 2019 at 12:50 PM Dimitris Zacharopoulos 
mailto:ji...@it.auth.gr>> wrote:




[...]



Doesn't this break compatibility with older clients? It is older
clients
that need to see "revoked" which is equivalent to "not good" for
cases
of "non-issued" Certificates. I think this is what 6960 is trying to
accommodate.


No.

Older clients will see "revoked" but newer clients will see
"revoked" plus additional information to interpret as
"non-issued". Is
there any specific text in the Mozilla Policy or the BRs that
strictly
forbids the use of this RFC 6960 practice?

BRs 4.9.13: "The Repository MUST NOT include entries that indicate
that
a Certificate is suspended."


You just quoted it.

6960 is trying to say “not revoked, suspended for now, but this may be 
used to issue a legitimate certificate at some point in the future”

Read Section 5. Read the related contemporary mailing list discussions.

It would be useful to identify whether there’s an objective to the 
questions, since that might help us cut down things quicker:

- Are you running a 5019 responder or a 6960 responder?
- Do you agree that the definition in 6960, Section 2.2, applies to 
pre-certificates?


If you are running a 6960 responder, and you don’t believe it applies 
to pre-certificates, we should work that out first.


Section 2.2 applies to pre-certificates, but between when the 
pre-certificate is issued and when the final certificate is issued, 
there is a gap. As I understand it, this is the main topic of this 
discussion: trying to determine the best course of action for this gap. 
If the responder were allowed to respond with "revoked", together with 
all the provisions of RFC 6960 related to "non-issued" certificates, 
until the final certificate is issued (if it is ever issued), that seems 
like a safer option for Relying Parties because they would not risk 
seeing a "valid" response for a Certificate that has not been issued yet.


That was my initial thought which made me post to this thread. I thought 
it made sense but I could be wrong.


Dimitris.


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert OCSP services returns 1 byte

2019-09-23 Thread Dimitris Zacharopoulos via dev-security-policy



On 2019-09-23 1:37 μ.μ., Ryan Sleevi via dev-security-policy wrote:

On Mon, Sep 23, 2019 at 9:31 AM Dimitris Zacharopoulos via
dev-security-policy  wrote:


On 20/9/2019 11:00 μ.μ., Wayne Thayer wrote:

On Fri, Sep 20, 2019 at 4:56 AM Dimitris Zacharopoulos
mailto:ji...@it.auth.gr>> wrote:

 

 Using the following practice as described in RFC 6960 should not
 be a violation of the BRs. That is, answering revoked where a
 pre-certificate has been issued but not the final certificate
 should be OK as long as the response contains the Extended Revoked
 extension and the revocationReason is |certificateHold|. With this
 practice, it is very clear that the final certificate has
 not been issued, so would this be considered a violation of the
 Mozilla policy?

Yes, I think it would be a violation of Mozilla policy for a CA's OCSP
responder to return a certificateHold reason in a response for a
precertificate. As you noted, the BRs forbid certificate suspension.
Mozilla assumes that a certificate corresponding to every
precertificate exists, so the OCSP response would be interpreted as
applying to a certificate and thus violating the BRs.

In practice, I also think that Ryan has raised a good point about OCSP
response caching. If a revoked response for a precertificate were
supplied by a CA, would the Subscriber need to wait until that
response expires before using the certificate, or else risk that some
user agent has cached the revoked response?

Dear Wayne,

This list has discussed compatibility issues several times in the
past, so we must consider how Mozilla supports the majority of clients.
RFC 6960 does not just mandate that the revocationReason is
certificateHold. It requires a certain revocation date AND a specific
extension that unambiguously point to a "non-issued" Certificate, not a
"Suspended" Certificate in general. This means that there is technical
language to distinguish the case of a Certificate being "suspended" and
a Certificate being "non-issued".

OCSP response caching is equally problematic for "unknown" responses,
which are also cached. The behavior of clients when presented with an
"unknown" or "revoked"-with-additional-info response should be more or
less the same (i.e., don't trust the certificate).

Neither the Mozilla policy language nor the BRs support the assumption
that whenever we have an OCSP response of "certificateHold", the
certificate is "Suspended". My interpretation is that if a response
provides all of the following information:
- status --> revoked
- revocation reason --> certificateHold
- revocationTime --> January 1, 1970
- MUST NOT include a CRL references extension or any CRL entry extensions
- includes the extended revoke extension

then these are the consistent semantics for a "non-issued" certificate,
not for a Certificate that was "issued" and then "suspended".

Is this a reasonable interpretation?


I do not believe this is a reasonable interpretation, precisely because
6960 uses this status so that the revocation is temporary, and attackers
can not use this to cause responders to mark serials they have not yet used
as revoked.


Doesn't this break compatibility with older clients? It is older clients 
that need to see "revoked" which is equivalent to "not good" for cases 
of "non-issued" Certificates. I think this is what 6960 is trying to 
accommodate. Older clients will see "revoked" but newer clients will see 
"revoked" plus additional information to interpret as "non-issued". Is 
there any specific text in the Mozilla Policy or the BRs that strictly 
forbids the use of this RFC 6960 practice?


BRs 4.9.13: "The Repository MUST NOT include entries that indicate that 
a Certificate is suspended."


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert OCSP services returns 1 byte

2019-09-23 Thread Dimitris Zacharopoulos via dev-security-policy

On 20/9/2019 11:00 μ.μ., Wayne Thayer wrote:
On Fri, Sep 20, 2019 at 4:56 AM Dimitris Zacharopoulos 
mailto:ji...@it.auth.gr>> wrote:




Using the following practice as described in RFC 6960 should not
be a violation of the BRs. That is, answering revoked where a
pre-certificate has been issued but not the final certificate
should be OK as long as the response contains the Extended Revoked
extension and the revocationReason is |certificateHold|. With this
practice, it is very clear that the final certificate has
not been issued, so would this be considered a violation of the
Mozilla policy?

Yes, I think it would be a violation of Mozilla policy for a CA's OCSP 
responder to return a certificateHold reason in a response for a 
precertificate. As you noted, the BRs forbid certificate suspension. 
Mozilla assumes that a certificate corresponding to every 
precertificate exists, so the OCSP response would be interpreted as 
applying to a certificate and thus violating the BRs.


In practice, I also think that Ryan has raised a good point about OCSP 
response caching. If a revoked response for a precertificate were 
supplied by a CA, would the Subscriber need to wait until that 
response expires before using the certificate, or else risk that some 
user agent has cached the revoked response?


Dear Wayne,

This list has discussed compatibility issues several times in the 
past, so we must consider how Mozilla supports the majority of clients. 
RFC 6960 does not just mandate that the revocationReason is 
certificateHold. It requires a certain revocation date AND a specific 
extension that unambiguously point to a "non-issued" Certificate, not a 
"Suspended" Certificate in general. This means that there is technical 
language to distinguish the case of a Certificate being "suspended" and 
a Certificate being "non-issued".


OCSP response caching is equally problematic for "unknown" responses, 
which are also cached. The behavior of clients when presented with an 
"unknown" or "revoked"-with-additional-info response should be more or 
less the same (i.e., don't trust the certificate).


Neither the Mozilla policy language nor the BRs support the assumption 
that whenever we have an OCSP response of "certificateHold", the 
certificate is "Suspended". My interpretation is that if a response 
provides all of the following information:

- status --> revoked
- revocation reason --> certificateHold
- revocationTime --> January 1, 1970
- MUST NOT include a CRL references extension or any CRL entry extensions
- includes the extended revoke extension

then these are the consistent semantics for a "non-issued" certificate, 
not for a Certificate that was "issued" and then "suspended".


Is this a reasonable interpretation?

Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert OCSP services returns 1 byte

2019-09-20 Thread Dimitris Zacharopoulos via dev-security-policy

Dear Wayne,

According to section 2.2 of RFC 6960, an OCSP responder may respond 
"revoked" for a "non-issued" Certificate. It even allows this response 
for "unknown" Certificates in order to support backwards compatibility 
with implementations of RFC 2560.


In addition to that, section 4.4.8 labeled "Extended Revoked Definition" 
says:


"This extension MUST be included in the OCSP response when that response 
contains a "revoked" status for a non-issued certificate".


Also, from section 2.2: "When a responder sends a "revoked" response to 
a status request for a non-issued certificate, the responder MUST 
include the extended revoked definition response extension 
(Section 4.4.8) in the response, indicating that the OCSP responder 
supports the extended definition of the "revoked" state to also cover 
non-issued certificates." In addition, the SingleResponse related to 
this non-issued certificate:

 * MUST specify the revocation reason certificateHold (6),
 * MUST specify the revocationTime January 1, 1970, and
 * MUST NOT include a CRL references extension (Section 4.4.2) or any
   CRL entry extensions (Section 4.4.5).
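These requirements can be expressed as a small classifier. The 
`SingleResponse` modeling below (field names, string statuses) is an 
illustrative simplification of the ASN.1 structures, with the 
extended-revoked extension OID taken from RFC 6960:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

CERTIFICATE_HOLD = 6                           # CRLReason certificateHold
EPOCH = "19700101000000Z"                      # January 1, 1970
EXTENDED_REVOKED_OID = "1.3.6.1.5.5.7.48.1.9"  # RFC 6960, Section 4.4.8

@dataclass
class SingleResponse:
    """Simplified model of an OCSP response for this check."""
    status: str                                # "good" | "revoked" | "unknown"
    reason: Optional[int] = None
    revocation_time: Optional[str] = None
    response_extensions: Set[str] = field(default_factory=set)
    crl_extensions_present: bool = False

def is_non_issued(r: SingleResponse) -> bool:
    """True iff the response matches RFC 6960's extended "revoked for
    a non-issued certificate" profile quoted above."""
    return (
        r.status == "revoked"
        and r.reason == CERTIFICATE_HOLD
        and r.revocation_time == EPOCH
        and EXTENDED_REVOKED_OID in r.response_extensions
        and not r.crl_extensions_present
    )

non_issued = SingleResponse("revoked", CERTIFICATE_HOLD, EPOCH,
                            {EXTENDED_REVOKED_OID})
suspended = SingleResponse("revoked", CERTIFICATE_HOLD, "20190920120000Z")
assert is_non_issued(non_issued)
assert not is_non_issued(suspended)   # a genuinely suspended certificate
```

A response missing any one of these markers cannot be read as 
"non-issued", which is the distinction Dimitris draws from suspension.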

By reading the BRs (section 4.9.13), it is clear that TLS Certificates 
are not allowed to be "suspended". However, if an RFC 6960 compliant 
OCSP responder responds with "revoked" for an unknown serial number and 
then issues a certificate with the same serial number, it will later 
return a "good" response. This seems to be an allowed practice, so it 
should be OK for a responder to change a "revoked" status to "good". In 
addition to that, RFC 6960 demands that for non-issued Certificates, the 
responder must use the revocationReason "certificateHold".


To summarize:

 * The BRs reference RFC 6960
 * The BRs forbid Certificates from being "suspended" (i.e. having
   revocationReason "certificateHold")
 * A revoked-for-non-issued response must contain the Extended Revoked
   extension, so a client knows that this was a response for a
   non-issued certificate.

Using the practice described in RFC 6960 should therefore not be a 
violation of the BRs. That is, answering "revoked" where a 
pre-certificate has been issued but the final certificate has not 
should be OK, as long as the response contains the Extended Revoked 
extension and the revocationReason is "certificateHold". With this 
practice it is very clear that the final certificate has not been 
issued, so would this be considered a violation of the Mozilla policy?
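Pulling the RFC 6960 rules above together, the decision can be sketched as a small check. This is an illustrative, hypothetical helper (the function and parameter names are not from any library), assuming the caller has already parsed the SingleResponse fields:

```python
from datetime import datetime, timezone

# RFC 6960 markers for a "revoked" response covering a non-issued serial,
# per the sections quoted above. The helper and its parameter names are
# illustrative, not taken from any existing OCSP library.
NON_ISSUED_REVOCATION_TIME = datetime(1970, 1, 1, tzinfo=timezone.utc)
CERTIFICATE_HOLD = 6  # CRLReason certificateHold(6)

def is_non_issued_response(status, revocation_time, revocation_reason,
                           has_extended_revoked_ext, has_crl_extensions):
    """True when a SingleResponse matches the RFC 6960 "non-issued"
    pattern: revoked + certificateHold(6) + revocationTime 1970-01-01,
    with the Extended Revoked extension and no CRL extensions."""
    return (status == "revoked"
            and revocation_time == NON_ISSUED_REVOCATION_TIME
            and revocation_reason == CERTIFICATE_HOLD
            and has_extended_revoked_ext
            and not has_crl_extensions)
```

A response failing any one of these conditions would not carry the "non-issued" semantics discussed here.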


I added this as a comment to https://github.com/mozilla/pkipolicy/issues/189
because I know that this mailing list messes up the formatting.


Thanks,
Dimitris.

On 2019-09-19 8:02 μ.μ., Wayne Thayer via dev-security-policy wrote:

I have gone ahead and added a section titled "Precertificates" [1] to the
Required Practices wiki page.

I have also updated a policy issue [2] suggesting that this be moved into
the Root Store policy, and added a new issue [3] suggesting that we clarify
the acceptable use of the "unknown" OCSP response.

I plan to sponsor a CAB Forum ballot to resolve the inconsistency with BR
7.1.2.5.

- Wayne

[1]
https://wiki.mozilla.org/CA/Required_or_Recommended_Practices#Precertificates
[2] https://github.com/mozilla/pkipolicy/issues/138
[3] https://github.com/mozilla/pkipolicy/issues/189

On Tue, Sep 17, 2019 at 6:10 PM Wayne Thayer  wrote:


Version 3 of my proposal replaces Jeremy's suggested examples with Andrew
and Ryan's:

The current implementation of Certificate Transparency does not provide 
any way for Relying Parties to determine if a certificate corresponding to 
a given precertificate has or has not been issued. It is only safe to 
assume that a certificate corresponding to every precertificate exists.

RFC 6962 states “The signature on the TBSCertificate indicates the
certificate authority's intent to issue a certificate.  This intent is
considered binding (i.e., misissuance of the Precertificate is considered
equal to misissuance of the final certificate).”

However, BR 7.1.2.5 states “For purposes of clarification, a
Precertificate, as described in RFC 6962 – Certificate Transparency, shall
not be considered to be a “certificate” subject to the requirements of RFC
5280 - Internet X.509 Public Key Infrastructure Certificate and Certificate
Revocation List (CRL) Profile under these Baseline Requirements.”

Mozilla interprets the BR language as a specific exception allowing CAs
to issue a precertificate containing the same serial number as the
subsequent certificate [1]. Otherwise, Mozilla infers from the existence of
a precertificate that a corresponding certificate has been issued.

This means, for example, that:

* A CA must provide OCSP services and responses in accordance with
Mozilla policy for all certificates presumed to exist based on 

Re: Policy 2.7 Proposal: Clarify Revocation Requirements for S/MIME Certificates

2019-06-14 Thread Dimitris Zacharopoulos via dev-security-policy


Dear Wayne,

Please consider the fact that S/MIME is focused on "signature" 
Certificates, which have different considerations than "authentication" 
Certificates. The Baseline Requirements (and their revocation 
requirements) are focused on "authentication" Certificates. I believe 
the revocation policies, at least for CA Certificates, do not align 
well with S/MIME.


When a piece of data is "signed" (such as an e-mail), Relying Parties 
need to be able to verify the status of the signing Certificate _when 
the signature was created_. If the Issuing CA is revoked, it is no 
longer able to provide status information for that Certificate. If we 
think about the serial number issue, if a CA had to be revoked, status 
information for its issued Certificates would discontinue, leading 
Relying Parties to have difficulties validating the existing signed 
e-mails that were valid when signed.


This might be something to consider more carefully.


Thank you,
Dimitris.


On 15/5/2019 3:25 π.μ., Wayne Thayer via dev-security-policy wrote:

On Tue, May 14, 2019 at 11:21 AM Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 5/10/19 5:46 PM, Wayne Thayer wrote:

I've attempted to update section 6 to incorporate revocation requirements
for S/MIME certificates:



https://github.com/mozilla/pkipolicy/commit/15ad5b9180903b92b8f638c219740c0fb6ba0637

Note: since much of this language is copied directly from the BRs, if we
decide to adopt it, the policy will also need to comply with the Creative
Commons Attribution 4.0 International license used by the BRs.

I will greatly appreciate everyone's review and comments on this proposed
change.


The proposed changes look OK to me.

But I would also be fine with the new section 6.2 regarding revocation
of S/MIME certs just re-using the revocation text that we used to have
in our policy (which had been removed in an effort to remove redundancy
with the BRs).


https://github.com/mozilla/pkipolicy/blob/2.4.1/rootstore/policy.md#6-revocation



The 'reasons for revocation' from the old policy are very close to the BR
language I proposed. The main difference in my proposal is the inclusion of
deadlines by which certificates must be revoked (same as in the BRs). While
the BR deadlines have sometimes been challenging, I do feel that we're
better off to have them as our standard and handle exceptions as incidents,
so my preference is to stick with my proposal.




Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-24 Thread Dimitris Zacharopoulos via dev-security-policy



On 24/4/2019 10:18 π.μ., Matt Palmer via dev-security-policy wrote:

On Wed, Apr 24, 2019 at 09:13:31AM +0300, Dimitris Zacharopoulos via 
dev-security-policy wrote:

I support this update but I am not sure if this is somehow linked with the
scope of the Mozilla Policy. Does this change mean that after April 1, 2020,
any Certificate that does not have an EKU is out of Mozilla Policy scope or
not?

Given that the change doesn't touch section 1.1, it's reasonable to believe
that the scope of the policy is not changing.


If this change intends to bring these types of certificates out of scope
after April 1, 2020, we must make this clear and probably also update
section 1.1.

My reading of the policy, as amended by this proposal, as well as my
understanding of past discussions in this group, is that certificates
without an EKU are in scope now, and they will continue to be in scope if
this amendment is adopted.  The only change is that end-entity certificates
without an EKU will be considered misissued if the certificate's notBefore
is on or after April 1, 2020.

If you feel that the policy, as amended, does not make this state of affairs
clear, I'm sure Wayne would welcome suggestions for improvement.


I think your explanation clarifies the intent, and the policy language 
makes sense. I wasn't 100% sure whether the intent was to narrow the 
scope or not.



Thanks,
Dimitris.


- Matt





Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-24 Thread Dimitris Zacharopoulos via dev-security-policy



On 24/4/2019 2:09 π.μ., Wayne Thayer via dev-security-policy wrote:

On Fri, Apr 19, 2019 at 7:12 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On Fri, Apr 19, 2019 at 01:22:59PM -0700, Wayne Thayer via
dev-security-policy wrote:

Okay, then I propose adding the following to section 5.2 "Forbidden and
Required Practices":

Effective for certificates issued on or after April 1, 2020, end-entity
certificates MUST include an EKU extension containing KeyPurposeId(s)
describing the intended usage(s) of the certificate, and the EKU

extension

MUST NOT contain the KeyPurposeId anyExtendedKeyUsage.

This does not imply that there will be technical enforcement, but also
doesn't rule it out.

I will appreciate everyone's feedback on this proposal.

If I may pick the absolute smallest of nits, is it "better" if the
restriction be on certificate notBefore, rather than "issued on"?  Whilst
that leaves certificates open to backdating, it does make it easier to
identify misissuance.  Otherwise there could be arguments made that the
certificate was *actually* issued before the effective date, even though
there is no evidence that that is the case.

Thanks Matt, I can see how that change makes it easier to check for 
compliance.

I've added my proposal, updated per Matt's suggestion, to the 2.7 branch:

https://github.com/mozilla/pkipolicy/commit/842c9bd53e43904b160e79cb199018252fb60834

Unless there are further comments, I'll consider this issue resolved.


Wayne,

I support this update but I am not sure if this is somehow linked with 
the scope of the Mozilla Policy. Does this change mean that after April 
1, 2020, any Certificate that does not have an EKU is out of Mozilla 
Policy scope or not? I think the GRCA discussion around special-purpose 
certificates (I think they were meant for document signing) that do not 
contain an EKU (nor an emailAddress in the SAN extension or the CN 
subjectDN field), are currently considered in scope.


If this change intends to bring these types of certificates out of scope 
after April 1, 2020, we must make this clear and probably also update 
section 1.1.



Dimitris.



- Wayne




Re: Organization Identifier field in the Extended Validation certificates according to the EVG ver. 1.6.9

2019-04-17 Thread Dimitris Zacharopoulos via dev-security-policy

I agree with Doug's interpretation.


Dimitris.

On 17/4/2019 9:23 μ.μ., Doug Beattie via dev-security-policy wrote:

The ETSI requirements for QWAC are complicated and not all that clear to me, 
but is it possible to use an OV certificate and OV Policy OIDs as the base 
instead of EV?  Since OV permits additional Subject Attributes, that approach 
would not be noncompliant.

Certainly issuing a QWAC needs to have vetting done in alignment with the EVGL, 
but by virtue of including the QualifiedStatement, you've asserted that, even 
if the certificate Policy OID claims only OV (OV being a subset of EV, so it’s 
not a lie to say it’s OV validated).
- CertificatePolicy: CA can specify OV and also include this Policy OID: 
0.4.0.194112.1.4
- qualifiedStatement: qcs-QcCompliance is specified

Is that contradictory? If not, then I'm probably just missing the statement 
that a QWAC MUST be an EV certificate with EV Policy OIDs.

Doug

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Wednesday, April 17, 2019 12:52 PM
To: Sándor dr. Szőke 
Cc: mozilla-dev-security-policy 
Subject: Re: Organization Identifier field in the Extended Validation 
certificates according to the EVG ver. 1.6.9

On Wed, Apr 17, 2019 at 11:20 AM Sándor dr. Szőke via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:


Extended Validation (EV) certificates and EU Qualified certificates
for website authentication (QWAC).


The European Union introduced the QWAC certificates in the eIDAS
Regulation in 2014.

Technically the QWAC requirements are based on the CABF EVG and are
intended to be fully upward compatible with the EV certificates, but
ETSI has set up some further requirements, like the mandatory usage of the QC 
statements.

ETSI TS 119 495 is a further specialization of the QWAC certificates
dedicated to payment services according to the EU PSD2 Directive.
PSD2 certificates need to contain, among others, the Organization
Identifier [(OrgId) – OID: 2.5.4.97] field in the Subject DN,
which contains PSD2-specific data of the Organization.

Until yesterday the usage of this field was not forbidden in EV
certificates, although as far as I know there has been discussion about
this topic due to different interpretations of the EVG requirements.
As far as I know there is an ongoing discussion in the CABF about the
inclusion of the OrgId field in the explicitly allowed fields in the
Subject DN of EV certificates.

This morning I got an email from the CABF mailing list with the new
version of the BR ver. 1.6.5 and the EVG ver. 1.6.9.  The new version
of the BR has already been published on the CABF web site but the new
EVG version hasn't been published yet.

I would like to ask the current status of this new EVG ver 1.6.9.

It is very important for us to have correct information because our CA
has begun to issue PSD2 certificates to financial institutions which
are intended to also fulfil the EVG requirements.
The new version of the EVG definitely states that only the listed
fields may be used in the Subject DN, and the list doesn't contain the
OrgId field.

We plan to fulfil both the QWAC and the EVG requirements
simultaneously, but after this change in the EVG requirements it
seems to be impossible in the case of PSD2 QWAC certificates.
Separating the EV and the QWAC certificates wouldn't be good
for the Customers and it would raise several issues.

Do you have any idea how to solve this issue?

Will the new version of the EVG ver 1.6.9 be published soon?

Isn't it possible to wait for the result of the ballot regarding the
inclusion of the OrgId field before issuing?


(Writing in a Google capacity)

At present, ETSI TS 119 495 is specified incompatibly with the requirements 
of the EV Guidelines. The latest version of that TS [1] acknowledges, in 
Section 5.3, that it is fundamentally incompatible with the EV Guidelines, by 
placing the ETSI TS's provisions as superseding the requirements of the EVGs.

Unfortunately, this means that a TSP cannot issue a PSD2 certificate from a 
publicly trusted CA and claim compliance with the EV Guidelines, and as a 
consequence cannot claim compliance with the relevant root store 
requirements, including Mozilla's and Google's. If a TSP issues a certificate 
using the profile in TS 119 495, they must do so from a certificate hierarchy 
not trusted by user agents - and as a result, such certificates will not be 
trusted by browsers.

ETSI and the Browsers have been discussing this for over a year, and the 
browsers offered, within the CA/Browser Forum, a number of alternative 
solutions to ETSI that would allow for these two to harmoniously interoperate. 
ETSI declined to take the necessary steps to resolve the conflict while it was 
still possible. As a consequence, the CA/Browser Forum has attempted to address 
some of these issues itself - however, it still requires action by ETSI to 
harmonize their work.

Re: GRCA Incident: BR Compliance and Document Signing Certificates

2019-03-25 Thread Dimitris Zacharopoulos via dev-security-policy



On 25/3/2019 10:48 μ.μ., Wayne Thayer via dev-security-policy wrote:

I agree with Ryan on this. From a policy perspective, we should be
encouraging [and eventually requiring] EKU constraints, not making it
easier to exclude them.


I was merely copying parts of the existing policy related to "Policy 
Scope", not requirements for end-entity certificates. According to the 
BRs, an EKU for SSL/TLS Certificates is required. I did a quick read of 
the Mozilla Policy and didn't find a statement explicitly requiring an 
EKU for end-entity certificates capable of being used for S/MIME, unless 
I missed it. Section 5.3 only describes an EKU requirement for 
Intermediate Certificates. Perhaps we should update 5.2 to include an 
EKU requirement for end-entity certificates.


Dimitris.


On Mon, Mar 25, 2019 at 1:03 PM Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


While it may be true that the certificates in question do not contain
SANs, unfortunately, the certificates may still be trusted for SSL since
they do not have EKUs.

For an example see "The most dangerous code in the world: validating SSL
certificates in non-browser software" which is available at
https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-bugs.html

What you will see is that hostname verification is one of the most 
common areas where applications have problems getting it right. Often 
they silently skip hostname verification, or use libraries that provide 
options to disable hostname verification, checks that are either off by 
default or turned off for testing and never re-enabled in production.

One of the few checks you can count on being right with any level of
predictability in my experience is the server EKU check where absence is
interpreted as an entitlement.
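The "absence is an entitlement" behaviour described above can be sketched as the check many TLS clients effectively perform. This is an illustrative stand-alone function, not code from any particular library:

```python
def allows_server_auth(ekus):
    """Sketch of the lenient client-side check described above: accept a
    certificate for TLS server authentication when the EKU extension is
    absent, or when it lists serverAuth or anyExtendedKeyUsage. The
    function name and string labels are illustrative."""
    if ekus is None:  # no EKU extension: absence read as an entitlement
        return True
    return "id-kp-serverAuth" in ekus or "anyExtendedKeyUsage" in ekus
```

Under this logic, a document-signing certificate with no EKU at all would still be accepted for TLS, which is exactly the concern raised here.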

Ryan Hurst
(writing in a personal capacity)






Re: GRCA Incident: BR Compliance and Document Signing Certificates

2019-03-25 Thread Dimitris Zacharopoulos via dev-security-policy



On 17/3/2019 1:54 π.μ., Matthew Hardeman via dev-security-policy wrote:

While sending a message that non-compliance could result in policy change
is generally a bad idea, I did notice something about the profile of the
non-compliant certificate which gave me pause:

None of the example certificates which were provided include a SAN
extension at all.

Today, no valid certificates for the WebPKI lack a SAN extension.  There
should always be at least one SAN dNSName, SAN iPAddress, or, in the case
of S/MIME certificates, a SAN rfc822Name.

I know that Chrome has already fully deprecated non-SAN bearing certs.
Have the other browsers?

I'm wondering whether it's possible or reasonable for policy to update such
that certificates that lack any SAN at all would be out of scope?


This is a very interesting proposal and would narrow the scope of the 
policy to exactly the certificate types used by Mozilla products. There 
might be some legacy implementations out there that still use the CN 
field to carry an FQDN or IP Address, and the emailAddress attribute to 
carry an e-mail address. If there is a policy change to better specify 
the scope by adding something more than the EKU, IMHO it should include 
these legacy cases too. I made an attempt to describe how that would 
look in the policy, but the language can definitely be improved. I used 
some of the existing language of section 1.1 of the current policy.


For an end-entity Certificate to be in scope of the Mozilla Policy, it 
MUST have:


 * either an Extended Key Usage (EKU) extension which contains one or
   more of these KeyPurposeIds: anyExtendedKeyUsage, id-kp-serverAuth,
   id-kp-emailProtection; or
 * no EKU extension

Additionally, the end-entity certificate MUST have:

 * a Subject Alternative Name extension of any of the following types:
   dNSName, iPAddress, SRVName, rfc822Name; or
 * a subjectDN that contains a commonName attribute (OID: 2.5.4.3) that
   points to a Domain Name or IP Address; or
 * a subjectDN that contains an emailAddress attribute (OID:
   1.2.840.113549.1.9.1) that points to an Email Address.

I hope this is useful.
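As a rough illustration of the scope test proposed above, here is a hypothetical helper; the function, parameter names, and string labels are invented for this sketch, and EKU/SAN values are passed as plain strings:

```python
# Sketch of the proposed scope test; not part of any published policy tooling.
IN_SCOPE_EKUS = {"anyExtendedKeyUsage", "id-kp-serverAuth", "id-kp-emailProtection"}
IN_SCOPE_SAN_TYPES = {"dNSName", "iPAddress", "SRVName", "rfc822Name"}

def in_mozilla_scope(ekus, san_types, cn_is_name_or_ip=False, has_email_attr=False):
    """ekus=None models a certificate with no EKU extension at all."""
    # First condition: an in-scope EKU, or no EKU extension.
    eku_ok = ekus is None or bool(IN_SCOPE_EKUS & set(ekus))
    # Second condition: an in-scope SAN type, or a legacy CN/emailAddress.
    name_ok = (bool(IN_SCOPE_SAN_TYPES & set(san_types))
               or cn_is_name_or_ip
               or has_email_attr)
    return eku_ok and name_ok
```

Note how the second condition captures the legacy CN and emailAddress cases discussed in the message.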


Dimitris.



Re: EJBCA defaulting to 63 bit serial numbers

2019-03-09 Thread Dimitris Zacharopoulos via dev-security-policy



On 9/3/2019 2:37 μ.μ., Ryan Sleevi wrote:
I’m chiming in, Dimitris, as it sounds like you may have 
unintentionally misrepresented the discussion and positions, and I 
want to provide you, and possibly HARICA, the guidance and clarity it 
needs in this matter.


On Sat, Mar 9, 2019 at 12:46 AM Dimitris Zacharopoulos via 
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:


I am personally shocked that a large part of this community considers
that now is the time for CAs to demonstrate "agility to replace
certificates", as lightly as that, without considering the
significant
pain that Subscribers will experience having to replace hundreds of
thousands of certificates around the globe. It is very possible that
Relying parties will also suffer availability issues.


I believe this significantly misunderstands the discussion and 
motivation. Having read all of the discussion to date, I do not 
believe this is at all an accurate framing of the expectations or 
motivations. I would humbly ask that you provide citations to back 
this claim.


I must admit that I may have over-reacted with this one, taking one 
particular paragraph from 
https://groups.google.com/d/msg/mozilla.dev.security.policy/S2KNbJSJ-hs/HNDX5LaZCAAJ 

which made me focus on the word "agility" as a requirement that CAs are 
ultimately responsible for building, and the sooner the better. Having 
worked with Subscribers that had a very hard time manually installing 
certificates in legacy web servers, I am very worried that CAs will have 
to repeat these tasks because, in several cases, there are no tools to 
assist the automation process.




You are correct if you were to say one or two people have provided 
such a goal, but that’s certainly not consistent with the majority of 
the discussion to date from the root program participants. Indeed, the 
expectation expressed is that, *as with every other incident*, the CA 
consistently follow the expectations.


I highlight this, because I don’t think it’s reasonable to conflate 
existing expectations, which have been repeatedly clarified, as 
somehow motivated by some other motivation based on one or two 
participants’ views.


If you truly feel this way, please revisit the discussion in
https://groups.google.com/forum/m/#!topic/mozilla.dev.security.policy/S2KNbJSJ-hs 
, as I hope that mine and Wayne’s responses can demonstrate this. 
Judging by that thread, only a single voice has expressed something 
remotely as to how you’ve phrased it.


I stand corrected.



I don't know if others share the same view about the
interpretation of
7.1 but it seems that some highly respected members of this community
did. If we have to count every CA that had this interpretation,
then I
suppose all CAs that were using EJBCA with the default configuration
have the same interpretation.


I believe this also misunderstands the discussion to date, and as a 
consequence misrepresents this. I don’t believe it is reasonable or 
fact based to suggest that the CAs that had incidents necessarily 
shared the interpretation. The incident reports demonstrate that there 
are a myriad reasons, beyond interpretive differences, that one can 
find themselves in such a situation. Avoiding conflating the two is 
necessary, although if you feel it is justified, then I would implore 
you when summarizing others views to support your view, that you 
provide the direct links and references. This makes it easier to 
respond to and provide CAs the necessary clarity of expectations, as 
well as allows other participants to evaluate and judge themselves the 
accuracy of the summary.


I think I provided a link to an issue in the github repository of 
cablint where this topic was briefly discussed in the past.


Although I agree with you that summarizing others' views without them 
explicitly stating so (my comment about CAs using EJBCA with the default 
configuration) is not very objective, I see that over the past years 
more and more CAs avoid participating in m.d.s.p., leaving us with no 
choice but to "guess". That is unfortunate. At some point, the issue of 
declining CA participation in m.d.s.p. should be discussed.




BTW, the configuration in EJBCA that we are talking about, as the
default number of bytes, had the number "8" in the setting,
resulting in
64-bits, not 63. So, as far as the CA administrator was concerned,
this
setting resulted in using 64 random bits from a CSPRNG. One would
have
to see the internal code to determine that the first bit was
replaced by
a zero.


This is exactly the point. CAs have an obligation to understand the 
code they’re using, regardless of the software platform. The failing 
is not with EJBCA, it is with the CAs that have done so. While there 
are a number of considerable and profound benefits to using EJBCA - 
most n

Re: EJBCA defaulting to 63 bit serial numbers

2019-03-08 Thread Dimitris Zacharopoulos via dev-security-policy
Adding to this discussion, and to support that there were -in fact- 
different interpretations (all in good faith) please check the issue I 
had created in Dec 2017 https://github.com/awslabs/certlint/issues/56.


My simple interpretation of the updated requirement in 7.1 at the time 
was that "no matter what, the resulting serial number should be at least 
64 bits long". However, experts like Peter Bowen, Rob Stradling and Matt 
Palmer, paying attention to the details of the new requirement, gave a 
different interpretation. According to their explanation, if you take 
64 bits from a CSPRNG, there is a small but nonzero probability that the 
resulting serialNumber will be shorter. So, "shorter" serial numbers 
were not considered a violation of the BRs as long as the 64 bits came 
out of a CSPRNG.
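The behaviour under discussion can be approximated with a short sketch. This is an illustrative approximation of the described default, not EJBCA's actual code: drawing 8 bytes from a CSPRNG and then clearing the sign bit leaves only 63 effective bits.

```python
import secrets

def ejbca_style_serial(num_bytes=8):
    """Illustrative approximation (not EJBCA's actual code): draw
    num_bytes from a CSPRNG, then clear the most significant bit so the
    serial encodes as a positive INTEGER without a leading zero octet.
    With num_bytes=8 this leaves 63 effective bits of entropy."""
    n = int.from_bytes(secrets.token_bytes(num_bytes), "big")
    return n & ~(1 << (num_bytes * 8 - 1))  # force the sign bit to zero
```

From the administrator's point of view the setting says "8 bytes", which is exactly why the one-bit shortfall was easy to miss.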


I am personally shocked that a large part of this community considers 
that now is the time for CAs to demonstrate "agility to replace 
certificates", as lightly as that, without considering the significant 
pain that Subscribers will experience having to replace hundreds of 
thousands of certificates around the globe. It is very possible that 
Relying parties will also suffer availability issues.


As discussed before, automation is one of the goals (other opinions have 
been raised, noting security concerns with this automation). Centralized 
systems like large web hosting providers, or single large Subscribers 
like the ones already mentioned in current incident reports, can build 
automation more easily. However, simple/ordinary Subscribers that don't 
have the technical skills to automate certificate replacement, and that 
struggled to even install certificates on their TLS servers in the first 
place, will bear a huge burden for no good reason.


I don't know if others share the same view about the interpretation of 
7.1 but it seems that some highly respected members of this community 
did. If we have to count every CA that had this interpretation, then I 
suppose all CAs that were using EJBCA with the default configuration 
have the same interpretation.


BTW, the configuration in EJBCA that we are talking about, as the 
default number of bytes, had the number "8" in the setting, resulting in 
64-bits, not 63. So, as far as the CA administrator was concerned, this 
setting resulted in using 64 random bits from a CSPRNG. One would have 
to see the internal code to determine that the first bit was replaced by 
a zero.


IMO, Mozilla should also treat this as an incident and evaluate the 
specific parameters (strict interpretation of section 7.1, CAs did not 
deliberately violate the requirement, a globally-respected software 
vendor and other experts had a different allowable interpretation of a 
requirement, the security impact on Subscribers and Relying Parties of 
1 bit of entropy is negligible), and consider treating this incident at 
least like the underscore issue. In the underscore case, there was a 
SCWG ballot with an effective date by which CAs had to ultimately revoke 
all certificates that included an underscore.



Thanks,
Dimitris.

On 9/3/2019 6:32 π.μ., Peter Bowen via dev-security-policy wrote:

On Fri, Mar 8, 2019 at 7:55 PM Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On Fri, Mar 8, 2019 at 9:49 PM Ryan Sleevi  wrote:


I consider that only a single CA has represented any ambiguity as being
their explanation as to why the non-compliance existed, and even then,
clarifications to resolve that ambiguity already existed, had they simply
been sought.


Please contemplate this question, which is intended as rhetorical, in the
most generous and non-judgmental light possible.  Have you contemplated the
possibility that only one CA attempted to do so because you've stated your
interpretation and because they're subject to your judgement and mercy,
rather than because the text as written reflects a single objective
mechanism which matches your own position?


Matthew,

I honestly doubt so.  It seems that one CA software vendor had a buggy
implementation, but we know this is not universal.  For example,
https://github.com/r509/r509/blob/05aaeb1b0314d68d2fcfd2a0502f31659f0de906/lib/r509/certificate_authority/signer.rb#L132
and https://github.com/letsencrypt/boulder/blob/master/ca/ca.go#L511 are
open source CA software packages that clearly do not have the issue.
Further, at least one CA has publicly stated their in-house written CA
software does not have the issue.

As the author of cablint, I didn't have any confusion.  I didn't add
more checks because of the false-positive rate; if I checked for 64 or
more bits, it would be wrong 50% of the time.  The rate is still
unacceptable with even looser rules; in 1/256 cases the top 8 bits will
all be zero, leading to the serial being a whole byte shorter.
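The 50% and 1/256 figures can be checked against the DER INTEGER encoding rules; the helper below is an illustrative sketch written for this note.

```python
def der_integer_content_len(n):
    """Bytes of DER INTEGER content octets for a non-negative n; DER adds
    a 0x00 padding byte when the top bit of the minimal encoding is set."""
    blen = (n.bit_length() + 7) // 8 or 1  # minimal byte length (0 -> 1)
    if n >> (blen * 8 - 1):                # top bit set: needs padding
        blen += 1
    return blen

# For a raw 64-bit CSPRNG draw: 9 content bytes when the top bit is set
# (~1/2 of draws), 8 bytes otherwise, and 7 or fewer when the top 8 bits
# are all zero (~1/256 of draws).
```

This is why a linter cannot simply flag serials whose encoding is shorter than expected without a large false-positive rate.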

I do personally think that the CAs using EJBCA should not be faulted here;
their vendor added an option to be 

Re: Odp.: Odp.: Odp.: 46 Certificates issued with BR violations (KIR)

2019-02-02 Thread Dimitris Zacharopoulos via dev-security-policy
+1. Of course there must be consistency between CRLs and OCSP. 

Dimitris. 

-Original Message-
From: Eric Mill via dev-security-policy 
To: "Buschart, Rufus" 
Cc: mozilla-dev-security-policy 
, Kurt Roeckx , 
Wayne Thayer 
Sent: Sat, 02 Feb 2019 16:17
Subject: Re: Odp.: Odp.: Odp.: 46 Certificates issued with BR violations (KIR)

The BRs and Mozilla program policies don't support the idea of just
trusting a CA to issue certs for "internal" use or to keep them secret.
This is why CAs issuing "test certificates" on production CAs for domains
they don't own is clearly forbidden.

Given that, I don't see how it can be acceptable to provide an "unknown"
status over OCSP for a revoked certificate, on the premise that the CA
asserts they never actually shipped the cert to a customer.

The fact that they would have to mark the cert "valid" before marking it
"revoked" is a limitation of the implementation of the OCSP responder. It's
not a reason to ignore policy that is grounded in the very reasonable
desire to ensure that the certificate's revoked status is known to any
client which checks OCSP instead of CRL.

-- Eric
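Eric's point, that needing to pass through "valid" before "revoked" is purely an implementation artifact, can be made concrete with a toy status store (a sketch for illustration only, not modeled on any real responder):

```python
from enum import Enum

class Status(Enum):
    UNKNOWN = "unknown"
    GOOD = "good"
    REVOKED = "revoked"

class ToyOCSPResponder:
    """Minimal model of an OCSP status database."""

    def __init__(self):
        self._status = {}
        self._reason = {}

    def issue(self, serial):
        self._status[serial] = Status.GOOD

    def revoke(self, serial, reason="keyCompromise"):
        # A never-delivered certificate can go straight from UNKNOWN to
        # REVOKED; "must be marked valid first" is a limitation of a
        # particular responder implementation, not of the protocol.
        self._status[serial] = Status.REVOKED
        self._reason[serial] = reason

    def status(self, serial):
        return self._status.get(serial, Status.UNKNOWN)
```

Nothing in the state machine forces an UNKNOWN to GOOD to REVOKED path; a responder backed by such a store can publish the revoked status directly.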

On Sat, Feb 2, 2019 at 4:31 AM Buschart, Rufus via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Personally I think it would be better if the revocation reason
> "certificateHold" were allowed on CRLs for TLS certificates, as this state
> would exactly cover the described scenario. The OCSP responder could in
> such a case reply with "revoked" and deliver the reason "certificateHold".
> But I fully understand that browser developers had a lot of issues with
> this state, so it is still forbidden.
>
> With best regards,
> Rufus Buschart
>
> Siemens AG
> Information Technology
> Human Resources
> PKI / Trustcenter
> GS IT HR 7 4
> Hugo-Junkers-Str. 9
> 90411 Nuernberg, Germany
> Tel.: +49 1522 2894134
> mailto:rufus.busch...@siemens.com
> www.twitter.com/siemens
>
> www.siemens.com/ingenuityforlife
>
> Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim
> Hagemann Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief
> Executive Officer; Roland Busch, Lisa Davis, Klaus Helmrich, Janina Kugel,
> Cedrik Neike, Michael Sen, Ralf P. Thomas; Registered offices: Berlin and
> Munich, Germany; Commercial registries: Berlin Charlottenburg, HRB 12300,
> Munich, HRB 6684; WEEE-Reg.-No. DE 23691322
>
> > -Ursprüngliche Nachricht-
> > Von: dev-security-policy 
> Im Auftrag von Kurt Roeckx via dev-security-policy
> > Gesendet: Freitag, 1. Februar 2019 23:38
> > An: Wayne Thayer 
> > Cc: mozilla-dev-security-policy <
> mozilla-dev-security-pol...@lists.mozilla.org>
> > Betreff: Re: Odp.: Odp.: Odp.: 46 Certificates issued with BR violations
> (KIR)
> >
> > On Fri, Feb 01, 2019 at 03:02:17PM -0700, Wayne Thayer wrote:
> > > It was pointed out to me that the OCSP status of the misissued
> > > certificate that is valid for over 5 years is still "unknown" despite
> > > having been revoked a week ago. I asked KIR about this in the bug [1]
> > > and am surprised by their response:
> > >
> > > This certificate is revoked on CRL. Because the certificate has been
> > > never
> > > > received by the customer its status on OCSP is "unknown". To make
> > > > the certificate "revoked" on OCSP first we should make it "valid"
> > > > what makes no sense. I know there is inconsistency between CRL and
> > > > OCSP but there are some scenarios when it can be insecure to make it
> > > > valid just in order to make it revoked.
> > > >
> > >
> > > Upon further questioning KIR states:
> > >
> > > Of course I can mark it as revoked after I make it valid, but I think
> > > it is
> > > > more secure practice not to change its status at all when the
> > > > certificate is not received by the customer. Let's suppose the
> > > > scenario when your CA generate certificate and the customer wants
> > > > you to deliver it to its office. What OCSP status the certificate
> > > > should have when you are on your way to the customer office? valid -
> > > > I do not think so. When the certificate is stolen you are in
> > > > trouble. So the only option is "unknown" but then we have different
> > > > statuses on CRL and OCSP - but we are still safe. It is not only my
> opinion, we had a big discussion with our auditors about that.
> > > >
> > >
> > > Does anyone other than KIR and their auditor (Ernst & Young) think
> > > this is currently permitted? At the very least, I believe that
> returning "unknown"
> > > for a revoked certificate is misleading to Firefox users who will
> > > receive the "SEC_ERROR_OCSP_UNKNOWN_CERT" error instead of
> > > "SEC_ERROR_REVOKED_CERTIFICATE".
> > >
> > > Does anyone other than KIR and Ernst & Young believe that this meets
> > > WebTrust for CAs control 6.8.12? [2]
> >
> > If you follow the RFC, the "unknown" answer can mean that the responder
> > doesn't know the status, and that another option, like a CRL, can be tried.
> > With "unknown", it doesn't say anything about 

Re: AW: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-24 Thread Dimitris Zacharopoulos via dev-security-policy
I referred to your comment that "you perform a successful domain 
validation". My point, which you seem to understand and agree with, is that 
there are additional rules beyond just DNS validation.


Dimitris.


On 24/1/2019 12:21 μ.μ., Buschart, Rufus wrote:

Hello Dimitris,

Of course not, because the underscore is not part of the hostname syntax 
defined in RFC 1034, section 3.5, whereas xn--gau-7ka.siemens.de is a 
perfectly valid hostname.
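The "preferred name syntax" cited here is mechanical enough to check in a few lines. This sketch uses the RFC 1123 relaxation (leading digits allowed) of the RFC 1034 LDH rule:

```python
import re

# RFC 1034 section 3.5 LDH rule, as relaxed by RFC 1123: each label
# contains only letters, digits, and hyphens, must not start or end
# with a hyphen, and is at most 63 octets. Underscores are not allowed.
LDH_LABEL = re.compile(r"^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$")

def is_ldh_hostname(name: str) -> bool:
    labels = name.rstrip(".").split(".")
    return all(LDH_LABEL.match(label) for label in labels)
```

Under this rule "xn--gau-7ka.siemens.de" is a syntactically valid hostname while "_dmarc.siemens.de" is not, which is exactly the distinction drawn above.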

With best regards,
Rufus Buschart



-Ursprüngliche Nachricht-
Von: Dimitris Zacharopoulos 
Gesendet: Donnerstag, 24. Januar 2019 11:16
An: Buschart, Rufus (GS IT HR 7 4) ; 
mozilla-dev-security-pol...@lists.mozilla.org
Betreff: Re: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international 
domain names

On 24/1/2019 10:47 π.μ., Buschart, Rufus via dev-security-policy wrote:

Good morning!

I would like to sharpen my argument from below a little bit: If a CA gets a 
request to issue a certificate for the domain xn--gau-7ka.siemens.de, how 
can the CA tell that xn--gau-7ka is a Punycode string in IDNA2008 and not 
just a very strange server name? At least I don't have a crystal ball to 
read my customers' minds. Therefore I would say it is perfectly okay to 
issue a certificate for xn--gau-7ka.siemens.de as long as you perform a 
successful domain validation for xn--gau-7ka.siemens.de.
By following this argument, you would also approve issuance of domain names that contain 
the underscore "_" character, right?

Dimitris.


With best regards,
Rufus Buschart



-Ursprüngliche Nachricht-
Von: dev-security-policy
 Im Auftrag von
Buschart, Rufus via dev-security-policy
Gesendet: Mittwoch, 23. Januar 2019 20:24
An: mozilla-dev-security-pol...@lists.mozilla.org
Betreff: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded
international domain names

Hello!


Von: Servercert-wg  Im
Auftrag von Wayne Thayer via Servercert-wg


On Mon, Jan 21, 2019 at 5:50 PM Jeremy Rowley via Servercert-wg 
 wrote:
We received a report for someone saying that certificates issued with puny-code 
are mis-issued if they use IDNA2008.
Considering a number of people probably received the same report, I figured we 
should raise and discuss the implications here.
ISSUES:
1. Does a CA have to check the puny-code provided by a customer for
compliance? Generally, we send the validation request to the
puny-code domain (not the pre-conversion name). This confirms

control over the domain so is there a need to check this?

If we aren’t doing the conversion, are we actually an implementer in this case?

The BRs require 5280 compliance, so yes I think CAs need to ensure that 
certificates they sign conform to IDNA2003.

Where exactly in RFC 5280 do you find the requirement that domains 
that follow IDNA2008 but not IDNA2003 are not permitted in a 
certificate? If I understand chapter 7.3 of RFC 5280 correctly, it 
describes how to process a domain that follows IDNA2003 (RFC 3490), but it 
does not forbid that a domain be encoded in IDNA2008 (RFC 5890 / RFC 5891). 
It simply says nothing special about how to handle it.

Therefore I would interpret RFC 5280 to mean that in this case the domain 
name in Punycode can (or rather MUST) be treated like any other domain name.

Excerpt from the bug mentioned by Jürgen:

Question: Are ACE-labels not encoded as IDNA 2003 in certificates a
misissuance under the Baseline Requirements? Yes, we think

this is currently the case:

Baseline Requirements mandate conformance to exactly RFC 5280 and
don't reference/allow any updates, e.g., RFC 8399

Chapter 7.2 of RFC 5280 

Re: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-24 Thread Dimitris Zacharopoulos via dev-security-policy

On 24/1/2019 10:47 π.μ., Buschart, Rufus via dev-security-policy wrote:

Good morning!

I would like to sharpen my argument from below a little bit: If a CA gets a 
request to issue a certificate for the domain xn--gau-7ka.siemens.de, how 
can the CA tell that xn--gau-7ka is a Punycode string in IDNA2008 and not 
just a very strange server name? At least I don't have a crystal ball to 
read my customers' minds. Therefore I would say it is perfectly okay to 
issue a certificate for xn--gau-7ka.siemens.de as long as you perform a 
successful domain validation for xn--gau-7ka.siemens.de.
By following this argument, you would also approve issuance of domain 
names that contain the underscore "_" character, right?


Dimitris.


With best regards,
Rufus Buschart



-Ursprüngliche Nachricht-
Von: dev-security-policy  Im 
Auftrag von Buschart, Rufus via dev-security-policy
Gesendet: Mittwoch, 23. Januar 2019 20:24
An: mozilla-dev-security-pol...@lists.mozilla.org
Betreff: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain 
names

Hello!


Von: Servercert-wg  Im
Auftrag von Wayne Thayer via Servercert-wg


On Mon, Jan 21, 2019 at 5:50 PM Jeremy Rowley via Servercert-wg 
 wrote:
We received a report for someone saying that certificates issued with puny-code 
are mis-issued if they use IDNA2008.
Considering a number of people probably received the same report, I figured we 
should raise and discuss the implications here.
ISSUES:
1. Does a CA have to check the puny-code provided by a customer for
compliance? Generally, we send the validation request to the puny-code domain 
(not the pre-conversion name). This confirms

control over the domain so is there a need to check this?

If we aren’t doing the conversion, are we actually an implementer in this case?

The BRs require 5280 compliance, so yes I think CAs need to ensure that 
certificates they sign conform to IDNA2003.

Where exactly in RFC 5280 do you find the requirement that domains that 
follow IDNA2008 but not IDNA2003 are not permitted in a certificate? If I 
understand chapter 7.3 of RFC 5280 correctly, it describes how to process a 
domain that follows IDNA2003 (RFC 3490), but it does not forbid that a 
domain be encoded in IDNA2008 (RFC 5890 / RFC 5891). It simply says nothing 
special about how to handle it.
Therefore I would interpret RFC 5280 to mean that in this case the domain 
name in Punycode can (or rather MUST) be treated like any other domain name.

Excerpt from the bug mentioned by Jürgen:

Question: Are ACE-labels not encoded as IDNA 2003 in certificates a misissuance 
under the Baseline Requirements? Yes, we think

this is currently the case:

Baseline Requirements mandate conformance to exactly RFC 5280 and
don't reference/allow any updates, e.g., RFC 8399

Chapter 7.2 of RFC 5280 https://tools.ietf.org/html/rfc5280#page-97 states:
"Specifically, conforming implementations MUST perform the conversion operation 
specified in Section 4 of RFC 3490, with the

following clarifications: "

So, IDNs must be converted according to the rules of RFC 3490

We, as a CA, don't perform the conversion mentioned in RFC 5280. We 
receive/process ACE-labels only. This means that our system

is likely not meant by the wording "conforming implementations" of RFC 5280.

However, our systems have the technical means and generally the responsibility 
to check for correct input. So, we shall

check/enforce IDNA 2003 ACE-labels.

I don't share your opinion that you MUST check whether a domain is a valid 
IDNA2003 domain just because you could technically do so. I think this 
leads down a very slippery road. With the same argument one could require a 
CA to scan every web server before issuing a certificate (or even regularly 
while the certificate is valid) to check whether the web server is 
distributing malware (BRGs chapter 9.6.3, sub-bullet 8). And this is 
obviously not the duty of a CA. So I understand the spirit of RFC 5280 and 
the BRGs to be that a CA has to perform domain validation on the ACE-label 
but not to enforce any additional syntax on top of validated dNSNames.
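The ambiguity being argued over is easy to reproduce with Python's standard library, whose built-in idna codec implements IDNA2003; the label values are the ones from this thread (an illustrative sketch):

```python
# Python's "idna" codec implements IDNA2003 (RFC 3490). Its nameprep
# step maps the German sharp s (ß) to "ss", so the ACE form of "gauß"
# is plain ASCII with no xn-- prefix at all:
ace_2003 = "gauß".encode("idna")
print(ace_2003)  # b'gauss'

# Decoding the raw Punycode part of the label "xn--gau-7ka" recovers
# "gauß" -- a label that IDNA2003 ToASCII could never have produced.
# From the ACE form alone, a CA cannot tell IDNA2008 input from "a
# very strange server name".
label = "gau-7ka".encode("ascii").decode("punycode")
print(label)  # gauß
```

The same Unicode name thus has two different ACE encodings depending on the IDNA version, which is why "just validate the ACE-label you were given" and "enforce IDNA2003" lead to different issuance decisions.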

With best regards,
Rufus Buschart


Re: CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]

2018-12-05 Thread Dimitris Zacharopoulos via dev-security-policy

On 5/12/2018 10:02 π.μ., Fotis Loukos wrote:

On 4/12/18 8:29 μ.μ., Dimitris Zacharopoulos via dev-security-policy wrote:

Fotis,

You have quoted only one part of my message which doesn't capture the
entire concept.

I would appreciate it if you mentioned how exactly did I distort your
proposal and which parts that change the meaning of what I said did I miss.


I never claimed that you "distorted" my proposal. I said that it didn't 
capture the entire concept.




CAs that mis-issue and must revoke these mis-issued certificates,
already violated the BRs. Delaying revocation for more than what the BRs
require, is also a violation. There was never doubt about that. I never
proposed that "extended revocation" would somehow "not be considered a
BR violation" or "make it legal".

You explicitly mentioned that there were voices during the SC6 ballot
discussion that wanted to extend the 5 days to something more (*extend*
the 5 days), as you also explicitly mentioned that this is not a
theoretical discussion.


This was mentioned in the context of a very long thread and you have 
taken a piece of it which changes the meaning of the entire concept. I 
explained what the entire concept was. Jakob summarized the proposal 
correctly.



I tried to highlight in this discussion that there were real cases in
m.d.s.p. where the revocation was delayed in practice. However, the
circumstances of these extended revocations remain unclear. Yet, the
community didn't ask for more details. Seeing this repeated, was the
reason I suggested that more disclosure is necessary for CAs that
require more time to revoke than the BRs require. At the very minimum,
it would help the community understand in more detail the circumstances
why a CA asks for more time to revoke.

I refer you to Ryan's email. Do you really believe that this is
something not expected from CAs?


I think Jakob made an accurate summary.

You contradict what you said before 2 paragraphs. Jakob explicitly
mentioned:

The proposal was apparently to further restrict the ability of CAs to
make exceptions on their own, by requiring all such exceptions to go
through the public forums where the root programs can challenge or even
deny a proposed exception, after hearing the case by case arguments for
why an exception should be granted.

effectively 'legalizing' BR violations after browsers' concent (granting
an exception). Before two paragraphs you stated that you never proposed
making an extended revocation legal.

Regards,
Fotis


You missed one of Jakob's important points. This usually happens when you 
copy-paste specific sentences in a way that changes the meaning of a whole 
conversation:

"But only if one ignores the reality that such exceptions currently happen 
with little or no oversight."


My previous response to you tries to re-summarize the concept in a more 
accurate way. Please use that if you want to refer to the concept of my 
proposal and not particular pieces from a huge thread.



Dimitris.




Dimitris.



On 4/12/2018 8:00 μ.μ., Fotis Loukos via dev-security-policy wrote:

Hello,

On 4/12/18 4:30 μ.μ., Jakob Bohm via dev-security-policy wrote:

Hello to you too.

It seems that you are both misunderstanding what the proposal was.

The proposal was apparently to further restrict the ability of CAs to
make exceptions on their own, by requiring all such exceptions to go
through the public forums where the root programs can challenge or even
deny a proposed exception, after hearing the case by case arguments for
why an exception should be granted.


Can you please point me to the exact place where this is mentioned?

The initial proposal is the following:

Mandating that CAs disclose revocation situations that exceed the 5-day
requirement with some risk analysis information, might be a good place
to start.

I see nothing related to public discussion and root programs challenging
or denying the proposed exception.

In a follow-up email, Dimitris mentions the following:

The reason for requiring disclosure is meant as a first step for
understanding what's happening in reality and collect some meaningful
data by policy. [...] If, for example, m.d.s.p. receives 10 or 20
revocation exception cases within a 12-month period and none of them is
convincing to the community and module owners to justify the exception,
the policy can be updated with clear rules about the risk of distrust if
the revocation doesn't happen within 5 days.

In this proposal it is clear that the CA will *disclose* and not ask for
permission for extending the 24h/5 day period, and furthermore he
accepts the fact that these exceptions may not be later accepted by the
community, which may lead to changing the policy.



A better example would be that if someone broke their leg for some
reason, and therefore wants to delay payment of a debt by a short while,
they should be able to ask for it, and the request would be considered
on its merits, not bas

Re: CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]

2018-12-04 Thread Dimitris Zacharopoulos via dev-security-policy
s Ryan. Generalizations and the
distinction of two teams, our team (the browsers) and their team (the
CAs), where by default our team are the good guys and their team are
malicious is plain demagoguery. Since you like extreme examples, please
note that generalizations (we don't like a member of a demographic thus
all people from that demographic are bad) have led humanity to 
commit atrocities; let's not go down that road, especially since I 
know you, Ryan, and you're definitely not that type of person.

I believe that the arguments presented by Dimitris are simply red
herring. Whether there is a blackout period, the CA lost internet
connectivity or a 65 character OU does not pose a risk to relying
parties is a form of ignoratio elenchi, a fallacy identified even by
Aristotle thousands of years ago. Using the same deductive reasoning,
someone could argue that if a person was scammed into participating in a 
Ponzi scheme and lost all his fortune, he can steal someone else's money.

The true point of the argument is whether CAs should be allowed to break
the BRs based on their own risk analysis. So, what is a certificate?
It's more or less an assertion. And making an assertion is equally
important as revoking it. As Ryan correctly mentioned, if this becomes a
norm, why shouldn't CAs be allowed to make a risk analysis and decide
that they will break the BRs in making the assertion too, effectively
issuing certificates with their own validation methods? Where would this
lead us? Who would be able to trust the WebPKI afterwards? Are we
looking into making it the wild west of the internet?

In addition, do you think that CAs should be audited regarding their
criteria for their risk analysis?

Furthermore, this poses a great risk for the CAs too. If this becomes a
practice, how can CAs be assured that the browsers won't make a risk
analysis and decide that an issuance made in accordance to all the
requirements in the BRs is a misissuance? Until now, we have seen that
browsers have distrusted CAs based on concrete evidence of misissuances.
Do you think Dimitris that they should be allowed to distrust CAs based
on some risk analysis?

Regards,
Fotis


On 30/11/18 6:13 μ.μ., Ryan Sleevi via dev-security-policy wrote:

On Fri, Nov 30, 2018 at 4:24 AM Dimitris Zacharopoulos 
wrote:



On 30/11/2018 1:49 π.μ., Ryan Sleevi wrote:



On Thu, Nov 29, 2018 at 4:03 PM Dimitris Zacharopoulos via
dev-security-policy  wrote:


I didn't want to hijack the thread so here's a new one.


Times and circumstances change.


You have to demonstrate that.


It's self-proved :-)


This sort of glib reply shows a lack of good-faith effort to meaningfully
engage. It's like forcing the discussion every minute, since, yanno, "times
and circumstances have changed".

I gave you concrete reasons why saying something like this is a
demonstration of a weak and bad-faith argument. If you would like to
meaningfully assert this, you would need to demonstrate what circumstances
have changed in such a way as to warrant a rediscussion of something that
gets 'relitigated' regularly - and, in fact, was something discussed in the
CA/Browser Forum for the past two years. Just because you're unsatisfied
with the result and now we're in a month that ends in "R" doesn't mean time
and circumstances have changed meaningfully to support the discussion.

Concrete suggestions involved a holistic look at _all_ revocations, since
the discussion of exceptions is relevant to know whether we are discussing
something that is 10%, 1%, .1%, or .1%. Similarly, having the framework
in place to consistently and objectively measure that helps us assess
whether any proposals for exceptions would change that "1%" from being
exceptional to seeing "10%" or "100%" being claimed as exceptional under
some new regime.

In the absence of that, it's an abusive and harmful act.



I already mentioned that this is separate from the incident report (of the
actual mis-issuance). We have repeatedly seen post-mortems that say that
for some specific cases (not all of them), the revocation of certificates
will require more time.


No. We've seen the claim it will require more time, frequently without
evidence. However, I do think you're not understanding - there is nothing
preventing CAs from sharing details, for all revocations they do, about the
factors they considered, and the 'exceptional' cases to the customers,
without requiring any BR violations (of the 24 hour / 5 day rule). That CAs
don't do this only undermines any validity of the argument you are making.

There is zero legitimate reason to normalize aberrant behaviour.



Even the underscore revocation deadline creates problems for some large
organizations as Jeremy pointed out. I understand the compatibility
argument and CAs are doing their best to comply with the rules but you are
advocating there should be no exceptions and you say that without having
looked at specific evide

Re: CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]

2018-11-30 Thread Dimitris Zacharopoulos via dev-security-policy



On 30/11/2018 1:49 π.μ., Ryan Sleevi wrote:



On Thu, Nov 29, 2018 at 4:03 PM Dimitris Zacharopoulos via 
dev-security-policy <mailto:dev-security-policy@lists.mozilla.org>> wrote:


I didn't want to hijack the thread so here's a new one.


Times and circumstances change.


You have to demonstrate that.


It's self-proved :-)



When I brought this up at the Server
Certificate Working Group of the CA/B Forum
(https://cabforum.org/pipermail/servercert-wg/2018-September/000165.html),

there was no open disagreement from CAs. 



Look at the discussion during Wayne’s ballot. Look at the discussion 
back when it was Jeremy’s ballot. The proposal was as simplified as 
could be - modeled after 9.16.3 of the BRs. It would have allowed for 
a longer period - NOT an unbounded period, which is grossly negligent 
for publicly trusted CAs.


Agreed.



However, think about CAs that
decide to extend the 5-days (at their own risk) because of
extenuating
circumstances. Doesn't this community want to know what these
circumstances are and evaluate the gravity (or not) of the situation?
The only way this could happen in a consistent way among CAs would
be to
require it in some kind of policy.


This already happens. This is a matter of the CA violating any 
contracts or policies of the root store it is in, and is already being 
handled by those root stores - e.g. misissuance reports. What you’re 
describing as a problem is already solved, as are the expectations for 
CAs - that violating requirements is a path to distrust.


The only “problem” you’re solving is giving CAs more time, and there 
is zero demonstrable evidence, to date, about that being necessary or 
good - and rich and ample evidence of it being bad.


I already mentioned that this is separate from the incident report (of 
the actual mis-issuance). We have repeatedly seen post-mortems that say 
that for some specific cases (not all of them), the revocation of 
certificates will require more time. Even the underscore revocation 
deadline creates problems for some large organizations as Jeremy pointed 
out. I understand the compatibility argument and CAs are doing their 
best to comply with the rules but you are advocating there should be no 
exceptions and you say that without having looked at specific evidence 
that would be provided by CAs asking for exceptions. You would rather 
have Relying Parties lose their internet services from one of the 
Fortune 500 companies. As a Relying Party myself, I would hate it if I 
couldn't connect to my favorite online e-shop or bank or webmail. So I'm 
still confused about which Relying Party we are trying to help/protect 
by requiring the immediate revocation of a Certificate that has 65 
characters in the OU field.
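For reference, the 64-character limit at issue is the X.520 ub-organizational-unit-name upper bound carried into RFC 5280 Appendix A; checking it is trivial (an illustrative sketch; real linters such as cablint and zlint perform this check among many others):

```python
UB_ORGANIZATIONAL_UNIT_NAME = 64  # upper bound from RFC 5280 Appendix A

def ou_within_bounds(ou_value: str) -> bool:
    """True if an organizationalUnitName attribute value respects the
    X.520 upper bound profiled by RFC 5280."""
    return len(ou_value) <= UB_ORGANIZATIONAL_UNIT_NAME
```

A 65-character OU fails this check and is therefore a misissuance under the BRs, which is what starts the revocation clock being debated here.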


I also see your point that "if we start making exceptions..." it's too 
risky. I'm just suggesting that there should be some tolerance for 
extended revocations (to help with collecting more information) which 
doesn't necessarily mean that we are dealing with a "bad" CA. I trust 
the Mozilla module owner's judgement to balance that. If the community 
believes that this problem is already solved, I'm happy with that :)




> Phrased differently: You don't think large organizations are
currently
> capable, and believe the rest of the industry should accommodate
that.

"Tolerate" would probably be the word I'd use instead of
"accommodate".


I chose accommodate, because you’d like the entire world to take on 
systemic risk - and it is indeed systemic risk, to users especially - 
to benefit some large companies.


Why stop with revocation, though? Why not just let CAs define their 
own validation methods if they think they're equivalent? After all, if 
we can trust CAs to make good judgements on revocation, why can’t we 
also trust them with validation? Some large companies struggle with 
our existing validation methods, why can’t we accommodate them?


That’s exactly what one of the arguments against restricting 
validation methods was.


As I said, I think this discussion will not accomplish anything 
productive without a structured analysis of the data. Not anecdata 
from one or two incidents, but holistic - because for every 1 real 
need, there may have been 9,999 unnecessary delays in revocation with 
real risk.


How do CAs provide this? For *all* revocations, provide meaningful 
data. I do not see there being any value to discussing further 
extensions until we have systemic transparency in place, and I do not 
see any good coming from trying to change at the same time as placing 
that systemic transparency in place, because there’s no way to measure 
the (negative) impact such change would have.


I don't see how data and evidence for "all revocations" somehow makes 
things better, unless I misunderstood your proposal. It's not a balanced 
request. It

CA disclosure of revocations that exceed 5 days [Was: Re: Incident report D-TRUST: syntax error in one tls certificate]

2018-11-29 Thread Dimitris Zacharopoulos via dev-security-policy

I didn't want to hijack the thread so here's a new one.

On 29/11/2018 6:39 μ.μ., Ryan Sleevi wrote:



On Thu, Nov 29, 2018 at 2:16 AM Dimitris Zacharopoulos 
mailto:ji...@it.auth.gr>> wrote:


Mandating that CAs disclose revocation situations that exceed the
5-day
requirement with some risk analysis information, might be a good
place
to start. 



This was proposed several times by Google in the Forum, and 
consistently rejected, unfortunately.


Times and circumstances change. When I brought this up at the Server 
Certificate Working Group of the CA/B Forum 
(https://cabforum.org/pipermail/servercert-wg/2018-September/000165.html), 
there was no open disagreement from CAs. However, think about CAs that 
decide to extend the 5-days (at their own risk) because of extenuating 
circumstances. Doesn't this community want to know what these 
circumstances are and evaluate the gravity (or not) of the situation? 
The only way this could happen in a consistent way among CAs would be to 
require it in some kind of policy.


This list has seen disclosures of revocation cases from CAs, mainly as 
part of incident reports. What I understand as disclosure is the fact 
that CAs shared that certain Subscribers (we know these subscribers 
because their Certificates were disclosed as part of the incident 
report) would be damaged if the mis-issued certificates were revoked 
within 24 hours. Now, depending on the circumstances this might be 
extended to 5 days.



I don't consider 5 days (they are not even working days) to be
adequate
warning period to a large organization with slow reflexes and long
procedures. 



Phrased differently: You don't think large organizations are currently 
capable, and believe the rest of the industry should accommodate that.


"Tolerate" would probably be the word I'd use instead of "accommodate".



Do you believe these organizations could respond within 5 days if 
their internet connectivity was lost?


I think there is different impact. Losing network connectivity would 
have "real" and large (i.e. all RPs) impact compared to installing a 
certificate with -say- 65 characters in the OU field which may cause 
very few problems to some RPs that want to use a certain web site.




For example, if many CAs violate the 5-day rule for revocations
related
to improper subject information encoding, out of range, wrong
syntax and
that sort, Mozilla or the BRs might decide to have a separate
category
with a different time frame and/or different actions.


Given the security risks in this, I think this is extremely harmful to 
the ecosystem and to users.


It is not the first time we talk about this and it might be worth
exploring further.


I don't think any of the facts have changed. We've discussed for 
several years that CAs have the opportunity to provide this 
information, and haven't, so I don't think it's at all proper to 
suggest starting a conversation without structured data. CAs that are 
passionate about this could have supported such efforts in the Forum 
to provide this information, or could have demonstrated doing so on 
their own. I don't think it would at all be productive to discuss 
these situations in abstract hypotheticals, as some of the discussions 
here try to do - without data, that would be an extremely unproductive 
use of time.


There were voices during the SC6 ballot discussion that wanted to extend 
the 5 days to something more. We continuously see CAs that either detect 
or learn about having mis-issued Certificates but fail to revoke within 
24 hours or even 5 days, because their Subscribers have problems and the 
RPs would be left with no service until the certificates were replaced. 
I don't think we are having a hypothetical discussion; we have seen real 
cases disclosed on m.d.s.p., but it would be important to have a policy 
in place to require disclosure of more information. Perhaps that would 
work as a deterrent for CAs revoking past the 5 days if they don't have 
strong arguments to support their decisions in public.



As a general comment, IMHO when we talk about RP risk when a CA
issues a Certificate with, say, longer than 64 characters in an OU
field, that would only pose risk to Relying Parties *that want to
interact with that particular Subscriber*, not the entire Internet.



No. This is demonstrably and factually wrong.

First, we already know that technical errors are a strong sign that 
the policies and practices themselves are not being followed - both 
the validation activities and the issuance activities result from the 
CA following its practices and procedures. If a CA is not following 
its practices and procedures, that's a security risk to the Internet, 
full stop.


You describe it as a black/white issue. I understand your argument that 
other control areas will likely have issues but it always comes down to 
what impact and what 

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-11-28 Thread Dimitris Zacharopoulos via dev-security-policy



On 29/11/2018 12:14 π.μ., Wayne Thayer via dev-security-policy wrote:

The way that we currently handle these types of issues is about as good as
we're going to get. We have a [recently relaxed but still] fairly stringent
set of rules around revocation in the BRs. This is necessary and proper
because slow/delayed revocation can clearly harm our users. It was
difficult to gain consensus within the CAB Forum on allowing even 5 days in
some circumstances - I'm confident that something like 28 days would be a
non-starter. I'm also confident that CAs will always take the entire time
permitted to perform revocations, regardless of the risk, because it is in
their interest to do so (that is not meant to be a criticism of CAs so much
as a statement that CAs exist to serve their customers, not our users). I'm
also confident that any attempt to define "low risk" misissuance would just
incentivize CAs to stop treating misissuance as a serious offense and we'd
be back to where we were prior to the existence of linters.

CAs obviously do choose to violate the revocation time requirements. I do
not believe this is generally based on a thorough risk analysis, but in
practice it is clear that they do have some discretion. I am not aware of a
case (yet) in which Mozilla has punished a CA solely for violating a
revocation deadline. When that happens, the violation is documented in a
bug and should appear on the CA's next audit report/attestation statement.
From there, the circumstances (how many certs?, what was the issue?, was it
previously documented?, is this a pattern of behavior?) have to be
considered on a case-by-case basis to decide a course of action. I realize
that this is not a very satisfying answer to the questions that are being
raised, but I do think it's the best answer.

- Wayne


Mandating that CAs disclose revocation situations that exceed the 5-day 
requirement, along with some risk analysis information, might be a good 
place to start. Of course, this should be independent of a "mis-issuance 
incident report". By collecting this information, Mozilla would be in a 
better position to evaluate the challenges CAs face with revocations 
*initiated by the CA* without adequate warning to the Subscriber. I 
don't consider 5 days (they are not even working days) to be an adequate 
warning period for a large organization with slow reflexes and long 
procedures. Once Mozilla collects more information, you might be able to 
see possible patterns in various CAs and decide what is acceptable and 
what is not, and create policy rules accordingly.


For example, if many CAs violate the 5-day rule for revocations related 
to improper subject information encoding, out of range, wrong syntax and 
that sort, Mozilla or the BRs might decide to have a separate category 
with a different time frame and/or different actions.


It is not the first time we talk about this and it might be worth 
exploring further.


As a general comment, IMHO when we talk about RP risk when a CA issues a 
Certificate with, say, longer than 64 characters in an OU field, that 
would only pose risk to Relying Parties *that want to interact with that 
particular Subscriber*, not the entire Internet. These RPs *might* 
encounter compatibility issues depending on their browser and will 
either contact the Subscriber and notify them that their web site 
doesn't work or they will do nothing. It's similar to a situation where 
a site operator forgets to send the intermediate CA Certificate in the 
chain. These particular RPs will fail to get TLS working when they visit 
the Subscriber's web site.
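The 64-character limit mentioned above is the X.520 `ub-organizational-unit-name` upper bound, which is the kind of syntax rule linters such as cablint flag. A minimal sketch of such a check (the function name and the list-of-strings input shape are invented here for illustration):

```python
# X.520 upper bound for organizationalUnitName (the limit discussed above).
UB_ORGANIZATIONAL_UNIT_NAME = 64

def oversized_ous(subject_ous):
    """Return the OU values in a certificate subject that exceed the bound."""
    return [ou for ou in subject_ous if len(ou) > UB_ORGANIZATIONAL_UNIT_NAME]

# A 65-character OU, as in the example above, would be flagged:
print(len(oversized_ous(["IT", "A" * 65])))  # → 1
```

A real linter parses the DER-encoded subject rather than taking strings, but the rule being enforced is this simple length comparison.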



Dimitris.




On Wed, Nov 28, 2018 at 1:10 PM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On Mon, 26 Nov 2018 18:47:25 -0500
Ryan Sleevi via dev-security-policy
 wrote:

CAs have made the case - it was not accepted.

On a more fundamental and philosophical level, I think this is
well-intentioned but misguided. Let's consider that the issue is one
that the CA had the full power-and-ability to prevent - namely, they
violated the requirements and misissued. A CA is only in this
situation if they are a bad CA - a good CA will never run the risk of
"annoying" the customer.

I would sympathise with this position if we were considering, say, a
problem that had caused a CA to issue certs with the exact same mistake
for 18 months, rather than, as I understand here, a single certificate.

Individual human errors are inevitable at a "good CA". We should not
design systems, including policy making, that assume all errors will be
prevented because that contradicts the assumption that human error is
inevitable. Although it is often used specifically to mean operator
error, human error can be introduced anywhere. A requirements document
which erroneously says a particular Unicode codepoint is permitted in a
field when it should be forbidden is still human error. A department
head who feels tired and signs off on a piece of work that 

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-11-28 Thread Dimitris Zacharopoulos via dev-security-policy


As pointed out by one of my engineers, there is a simpler way: running a 
direct query [1] against the read-only database of crt.sh. Using Rufus' 
example:


SELECT get_ca_name_attribute(issuer_ca_id, 'organizationName') issuer_o, 
ISSUER_CA_ID, FATAL_CERTS, ERROR_CERTS, WARNING_CERTS FROM lint_1week_summary 
WHERE LINTER = 'cablint' AND ISSUER_CA_ID=52410;

Anyone can automate this process with tools they are more familiar with.
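For instance, crt.sh exposes its read-only database as a public PostgreSQL endpoint (host `crt.sh`, port 5432, database `certwatch`, user `guest`, no password — connection details as documented in the crt.sh forum; verify them before relying on this), so the query above can be scripted with a stock `psql` client:

```shell
#!/bin/sh
# Run the lint-summary query against crt.sh's public read-only database.
# Replace 52410 with your own CA's crt.sh ID.
psql -h crt.sh -p 5432 -U guest -d certwatch -c "
SELECT get_ca_name_attribute(issuer_ca_id, 'organizationName') issuer_o,
       issuer_ca_id, fatal_certs, error_certs, warning_certs
FROM   lint_1week_summary
WHERE  linter = 'cablint' AND issuer_ca_id = 52410;"
```

As noted in the thread below, the crt.sh database frequently times out under load, so any cron job wrapping this should retry on failure.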


Dimitris.

[1] https://groups.google.com/forum/#!topic/crtsh/sUmV0mBz8bQ




On 28/11/2018 2:07 μ.μ., Pedro Fuentes via dev-security-policy wrote:

Hi Rufus,
I got an internal server error on that link, but I really appreciate your post 
and the link to the code!
Pedro

El miércoles, 28 de noviembre de 2018, 8:45:42 (UTC+1), Buschart, Rufus  
escribió:

To simplify the process of monitoring crt.sh, we at Siemens have implemented a 
little web service which directly queries the crt.sh DB and returns the errors as 
JSON. This way you don't have to parse HTML files and can integrate it directly 
into your monitoring. Maybe this function is of interest to some other CAs:

https://eo0kjkxapi.execute-api.eu-central-1.amazonaws.com/prod/crtsh-monitor?caID=52410=30=false

To monitor your CA, replace the caID with your CA's ID from crt.sh. In case you 
receive an endpoint time-out message, try again; the crt.sh DB often times out. 
For more details or feature requests, have a look at its GitHub repo: 
https://github.com/RufusJWB/crt.sh-monitor


With best regards,
Rufus Buschart

Siemens AG
Information Technology
Human Resources
PKI / Trustcenter
GS IT HR 7 4
Hugo-Junkers-Str. 9
90411 Nuernberg, Germany
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com


-----Original Message-----
From: dev-security-policy  On Behalf Of Enrico Entschew via dev-security-policy
Sent: Tuesday, November 27, 2018 18:17
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Incident report D-TRUST: syntax error in one tls certificate

On Monday, November 26, 2018 at 18:34:38 UTC+1, Jakob Bohm wrote:


In addition to this, would you add the following:

- Daily checks of crt.sh (or some other existing tool) if additional 
such certificates are erroneously issued before the automated 
countermeasures are in place?

Thank you, Jakob. This is what we intended to do. We are now monitoring 
crt.sh at least twice a day.

As to your other point, we do restrict the serial number element and the error 
occurred precisely in defining the constraints for this
field. As mentioned above, we plan to make adjustments to our systems to 
prevent this kind of error in future.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy






Re: Clarifications on ETSI terminology and scheme

2018-10-31 Thread Dimitris Zacharopoulos via dev-security-policy



On 31/10/2018 8:00 μμ, Ryan Sleevi via dev-security-policy wrote:

[...]

Dimitris, I'm sorry, but I don't believe this is a correct correction.

EN 319 403 incorporates ISO/IEC 17065; much like the discussion about EN
319 411-2 incorporating, but being separate from, EN 319 411-1, the
structure of EN 319 403 is that it incorporates normatively the structure
of ISO/IEC 17065, and, at some places, extends.

Your description of the system is logically incompatible, given the
incompatibilities in 319 403 and 17065.

You're correct that any applicable national legislation applies, with
respect to the context of eIDAS. However, one can be assessed solely
against the scheme of EN 319 403 and 319 411-1, without going for

qualified.

I have to disappoint you and insist that your statement "As the scheme
used in eIDAS for CABs is ETSI EN 319 403, the CAB must perform their
assessments in concordance with this scheme, and the NAB is tasked with
assessing their qualification and certification under any local
legislation (if appropriate) or, lacking such, under the framework for
the NAB applying the principles of ISO/IEC 17065 in evaluating the CAB
against EN 319 403"

and specifically the use of "or" in your statement, is incorrect. NABs
*always* assess qualification of CABs applying ISO/IEC 17065 AND ETSI EN
319 403 AND any applicable legislation. Only Austria is an exception (if
I recall correctly) because they don't apply ETSI EN 319 403 for CAB
accreditation.

Then, each CAB is accredited for specific standards (e.g. ETSI EN 319
411-1, 411-2, 421, eIDAS regulation and so on).

ISO 17065 and ETSI EN 319 403 apply only to CABs and ETSI EN 319 411-1,
411-2 apply only for TSPs. 411-2 incorporates 411-1 and 401 but does not
incorporate 403 or 17065. They are completely unrelated.


I'm afraid you're still misunderstanding and, I believe, misstating.

It is not ISO/IEC 17065 AND EN 319 403 that a CAB is assessed against.
They're assessed against EN 319 403, which *incorporates* ISO/IEC 17065.
This is the same way that when a TSP is assessed against ETSI EN 319 411-2,
they're not also (as in, separate audit report) assessed against EN 319
411-1; EN 319 411-2 *incorporates* EN 319 411-1.

Now, the CAB may ALSO be accredited for ISO/IEC 17065 (e.g. in the context
of application of other schemes), but I can't find supporting evidence to
support your claim that *both* are required in the context of EN 319 403.
Could you provide further details that you believe would demonstrate this?



I would prefer to let some auditor reply to this but I quickly went 
through 
https://ec.europa.eu/futurium/en/system/files/ged/list_of_eidas_accredited_cabs-2018-07-27.pdf 
and checked out some CAB accreditation letters and it looks like they 
are accredited to both "(ISO/IEC 17065 + ETSI EN 319 493 + eIDAS 
Art.3.18 scope of accreditation)".


I think it is also clearly stated in ETSI EN 319 403, which lists ISO 
17065 as a normative reference and also states:


"ISO/IEC 17065 [1] is an international standard which specifies general 
requirements for conformity assessment bodies (CABs) performing 
certification of products, processes, or services. These requirements 
are not focussed on any specific application domain where CABs work. In 
the present document the general requirements are *supplemented* to 
provide additional dedicated requirements for CABs performing 
certification of Trust Service Providers (TSPs) and the trust services 
they provide towards defined criteria against which they claim conformance."


"The present document also *incorporates* many requirements relating to 
the audit of a TSP's management system, as defined in ISO/IEC 17021 
[i.12] and in ISO/IEC 27006 [i.11]. These requirements are incorporated 
by including text derived from these documents in the present document, 
as well as indirectly through references to requirements of 
ISO/IEC 17021 [i.12]."


So, in my understanding, ISO 17065 must be fully covered and some 
elements of ISO 17021 are incorporated. ETSI EN 319 403 supplements 17065.



I don't think this is a valid criticism, particularly in the context of

the

specific case we're speaking about. I'm speaking about what's required -
you're speaking about what's possible. Many things are possible, but what
matters for expectations is what is required. 7.11.3 simply defers to the
scheme to specify, which EN 319 403 does not as it relates to this
discussion.


ISO 17065 sets the base and 7.11.3 describes the principles that need to
be followed. It is very likely that different CABs will choose to
implement this principle in a different way but at the end of the day,
their implementation must satisfy the principle and they are evaluated
by the NAB that ensures the principles are met.


Yes, which does not mean it provides any baseline assurance for relying
parties, which matters.

For example, when we talk about expectations of CAs, we don't talk about
what they 'could' do, we talk about what they MUST do, 

Clarifications on ETSI terminology and scheme

2018-10-31 Thread Dimitris Zacharopoulos via dev-security-policy



On 31/10/2018 4:47 μμ, Ryan Sleevi via dev-security-policy wrote:

There's a lot of nitpicking in this, and I feel that if you want to
continue this discussion, it would be better off in a separate thread on
terminology. I disagree with some of the claims you've made, so have
corrected them for the discussion.

I would much rather keep this focused on the discussion of TUVIT as
auditors; if you feel that the nitpicking is relevant to that discussion
(which I don't believe anything you've said rises to that level), we should
certainly hash it out here. This is why I haven't forked this thread yet -
to make sure I've not misread your concern. However, if there's more
broadly a disagreement, but without impact to this discussion, we should
spin that out.


Indeed, my comments were more related to the ETSI terminology so I 
created a new thread. More answers in-line.




On Wed, Oct 31, 2018 at 7:11 AM Dimitris Zacharopoulos 
wrote:


On 30/10/2018 6:28 μμ, Ryan Sleevi via dev-security-policy wrote:

This establishes who the CAB is and who the NAB is. As the scheme used in
eIDAS for CABs is ETSI EN 319 403, the CAB must perform their assessments
in concordance with this scheme, and the NAB is tasked with assessing

their

qualification and certification under any local legislation (if
appropriate) or, lacking such, under the framework for the NAB applying

the

principles of ISO/IEC 17065 in evaluating the CAB against EN 319 403. The
NAB is the singular national entity recognized for issuing certifications
against ISO/IEC 17065 through the MLA/BLA and the EU Regulation No

765/2008

(as appropriate), which is then recognized trans-nationally.

Some clarifications/corrections because I saw some wrong usage of terms
being repeated.

A CAB MUST perform their assessments applying ISO/IEC 17065 AND ETSI EN
319 403 AND any applicable legislation (for EU CABs this includes
European and National legislation).


Dimitris, I'm sorry, but I don't believe this is a correct correction.

EN 319 403 incorporates ISO/IEC 17065; much like the discussion about EN
319 411-2 incorporating, but being separate from, EN 319 411-1, the
structure of EN 319 403 is that it incorporates normatively the structure
of ISO/IEC 17065, and, at some places, extends.

Your description of the system is logically incompatible, given the
incompatibilities in 319 403 and 17065.

You're correct that any applicable national legislation applies, with
respect to the context of eIDAS. However, one can be assessed solely
against the scheme of EN 319 403 and 319 411-1, without going for qualified.


I have to disappoint you and insist that your statement "As the scheme 
used in eIDAS for CABs is ETSI EN 319 403, the CAB must perform their 
assessments in concordance with this scheme, and the NAB is tasked with 
assessing their qualification and certification under any local 
legislation (if appropriate) or, lacking such, under the framework for 
the NAB applying the principles of ISO/IEC 17065 in evaluating the CAB 
against EN 319 403"


and specifically the use of "or" in your statement, is incorrect. NABs 
*always* assess qualification of CABs applying ISO/IEC 17065 AND ETSI EN 
319 403 AND any applicable legislation. Only Austria is an exception (if 
I recall correctly) because they don't apply ETSI EN 319 403 for CAB 
accreditation.


Then, each CAB is accredited for specific standards (e.g. ETSI EN 319 
411-1, 411-2, 421, eIDAS regulation and so on).


ISO 17065 and ETSI EN 319 403 apply only to CABs and ETSI EN 319 411-1, 
411-2 apply only for TSPs. 411-2 incorporates 411-1 and 401 but does not 
incorporate 403 or 17065. They are completely unrelated.






Also, a NAB issues "Accreditations" to CABs and not "Certifications".
Also, a CAB issues "Certifications" to TSPs and not "Accreditations".
So, T-Systems is "Certified", not "Accredited".


Fair. If you replace these words, does it change the semantic meaning of
the message at all? I don't believe so.


No, it doesn't but I know how much you like being accurate :-)




As the framework utilizes ISO/IEC 17065, the complaints process and
certification process for both TSPs and CABs bears strong similarity,

which

is why I wanted to explore how this process works in function.

Note that if either the TSP is suspended of their certification or
withdrawn, no notification will be made to relying parties.

This depends on applicable legislation and the implementation of ISO
17065 sections 4.6, 7.11.3 by each CAB. Some CABs have a public
repository where RPs can query the validity of TSP Certifications so if
a Certification is Suspended or Revoked, it will be displayed
accordingly. I don't think WT has a notification scheme for RPs either.
If the TSP publishes the seal URL or the CAB's URL to the TSP
Certificate (which is not mandatory), RPs can manually check the
validity of the TSP Certification.


I don't think this is a valid criticism, particularly in the context of the
specific case we're 

Re: Questions regarding the qualifications and competency of TUVIT

2018-10-31 Thread Dimitris Zacharopoulos via dev-security-policy

On 30/10/2018 6:28 μμ, Ryan Sleevi via dev-security-policy wrote:

This establishes who the CAB is and who the NAB is. As the scheme used in
eIDAS for CABs is ETSI EN 319 403, the CAB must perform their assessments
in concordance with this scheme, and the NAB is tasked with assessing their
qualification and certification under any local legislation (if
appropriate) or, lacking such, under the framework for the NAB applying the
principles of ISO/IEC 17065 in evaluating the CAB against EN 319 403. The
NAB is the singular national entity recognized for issuing certifications
against ISO/IEC 17065 through the MLA/BLA and the EU Regulation No 765/2008
(as appropriate), which is then recognized trans-nationally.


Some clarifications/corrections because I saw some wrong usage of terms 
being repeated.


A CAB MUST perform their assessments applying ISO/IEC 17065 AND ETSI EN 
319 403 AND any applicable legislation (for EU CABs this includes 
European and National legislation).


Also, a NAB issues "Accreditations" to CABs and not "Certifications".
Also, a CAB issues "Certifications" to TSPs and not "Accreditations". 
So, T-Systems is "Certified", not "Accredited".





As the framework utilizes ISO/IEC 17065, the complaints process and
certification process for both TSPs and CABs bears strong similarity, which
is why I wanted to explore how this process works in function.

Note that if either the TSP is suspended of their certification or
withdrawn, no notification will be made to relying parties.


This depends on applicable legislation and the implementation of ISO 
17065 sections 4.6, 7.11.3 by each CAB. Some CABs have a public 
repository where RPs can query the validity of TSP Certifications so if 
a Certification is Suspended or Revoked, it will be displayed 
accordingly. I don't think WT has a notification scheme for RPs either.


If the TSP publishes the seal URL or the CAB's URL to the TSP 
Certificate (which is not mandatory), RPs can manually check the 
validity of the TSP Certification.



The closest
that it comes is that if they're accredited according to EN 319 411-2
(Qualified Certificates), the suspension/withdrawal will be reported to
the Supervisory Body, which will then update the Qualified Trust List for
that country, and that will flow into the EU Qualified Trust List. If
they're accredited against EN 319 411-1, the Supervisory Body will be
informed by the CAB (in theory, although note my complaint about TSP
informing the CAB was not followed, and the same can exist with CAB to SB),
but no further notification may be made. Furthermore, if certification is
later reissued, after a full audit, the certification history will not
reflect that there was a period of 'failed' certification. This similarly
exists with respect to CABs - if a CAB has their accreditation suspended,
on the advice of or decision of the NAB based on feedback from the SB - the
community will not necessarily be informed. In theory, because
certification is 'forward' looking rather than 'past' looking, a suspension
or withdrawal of a CAB by a NAB may not affect its past certification of
TSPs; this is an area of process that has not been well-specified or
determined.


Note that Supervisory Bodies (only related to eIDAS) have no authority 
for TSP Certifications under ETSI EN 319 411-1, but only ETSI EN 319 
411-2. In all cases of Certification (ETSI EN 319 411-1 or ETSI EN 319 
411-2), the NAB is assessing the CAB. In most EU countries, the NAB IS 
NOT the Supervisory Body.


Similarly with TSPs losing their Certification, if a CAB loses their 
Accreditation it will be displayed on the NAB's web site.


I also consider the "WT seal" and "ETSI certification" very similar. A 
WT seal is similar to an ETSI certificate because they state (emphasis 
mine):


"An unqualified opinion from the practitioner indicates that such 
principles *are being followed* in conformity with the WebTrust for 
Certification Authorities Criteria. These principles and criteria 
reflect fundamental standards for *the establishment and on-going 
operation* of a Certification Authority organization or function."


So, if I check a WT seal today Oct 31, 2018, even though the CA has not 
been audited between their last audit and today, the WT seal represents 
that it is still valid and not withdrawn. They are both "forward 
looking" in the eyes of Relying Parties.


As far as the non-disclosure of compliance certificate 
suspension/withdrawals is concerned, CABs are only allowed to follow 
their practices as described in ISO 17065 section 7.11.3. Root Programs 
could possibly require that CAs MUST disclose any possible Certification 
suspension or revocation that occurred during their audit period.



Hope this helps.
Dimitris.


Re: Concerns with Dun & Bradstreet as a QIIS

2018-10-02 Thread Dimitris Zacharopoulos via dev-security-policy



On 2/10/2018 5:21 μμ, Ryan Sleevi via dev-security-policy wrote:

On Tue, Oct 2, 2018 at 10:02 AM Dimitris Zacharopoulos 
wrote:


But this inaccurate data is not used in the validation process nor
included in the certificates. Perhaps I didn't describe my thoughts
accurately. Let me have another try using my previous example. Consider

an

Information Source that documents, in its practices, that they provide:


 1. the Jurisdiction of Incorporation (they check official government records),
 2. registry number (they check official government records),
 3. the name of legal representative (they check official government records),
 4. the official name of the legal entity (they check official government records),
 5. street address (they check the address of a utility bill issued under the name of the legal entity),
 6. telephone numbers (self-reported),
 7. color of the building (self-reported).

The CA evaluates this practice document and accepts information 1-5 as
reliable, dismisses information 6 as non-reliable, and dismisses
information 7 as irrelevant.

Your argument suggests that the CA should dismiss this information

source

altogether, even though it clearly has acceptable and verified

information

for 1-5. Is that an accurate representation of your statement?


Yes, I'm stating that the existence of and inclusion of 5-7 calls into
question whether or not this is a reliable data source.

Right, but in my example, the data source has already described -via
their practices- that this is how they collect each piece of data. The
CA, as a recipient of this data, can choose how much trust to lay upon
each piece of information. Therefore, IMHO the CA should evaluate and
use the reasonably verified information from that data source and
dismiss the rest. That seems more logical to me than dismissing a data
source entirely because they include "the color of the building", which
is self-reported.


Your parenthetical about how they check that is what the CA has the burden 
to demonstrate, particularly given that they have evidence that there is 
less-than-reliable data included. How does the competent CA ensure that 
the registry number is not self-reported -

The information in the parenthesis would be documented in the trusted
source practices and the CA would do an inquiry to check that these
practices are actually implemented and followed.


or that the QIIS allows it to be self-reported in the
future?

No one can predict the future, which is why there is a process for
periodic re-evaluation.


So let me understand: Your view is that QIIS's publish detailed policies
about the information they obtain (they don't), and the CA must
periodically re-evaluate that (which isn't in the BRs) to determine which
information is reliable or not.


EVG 11.11.5 says that

"The CA SHALL use a documented process to check the accuracy of the 
database and ensure its data is acceptable, *including reviewing the 
database provider's terms of use*. The CA SHALL NOT use any data in a 
QIIS that the CA knows is (i) self-reported and (ii) not verified by the 
QIIS as accurate. Databases in which the CA or its owners or affiliated 
companies maintain a controlling interest, or in which any Registration 
Authorities or subcontractors to whom the CA has outsourced any portion 
of the vetting process (or their owners or affiliated companies) 
maintain any ownership or beneficial interest, do not qualify as a QIIS."


I would assume that the "database provider's terms of use" describe the 
practices, so it is not fiction. Perhaps this doesn't apply for many 
information sources but it's not unheard of.


As for the re-evaluation, we (HARICA) consider this part of ETSI EN 319 
401 section 7.7 (Operational Security) with guidance provided by ISO/IEC 
27002:2013 clause 15. I assume that WebTrust has something similar. 
Perhaps the connection is not so "direct" but when you depend on some 
external entity to provide any kind of information related to CA 
operations (in our case, the Subject information validation), then you 
must follow best practice and periodically re-evaluate.




Presumably, that RDS/QIIS is also audited
against such statements (they aren't) in order to establish their
reliability. That's a great world to imagine, but that's not the world of
RDS or QIIS, and so it's an entirely fictitious world to imagine.

That world is either saying the RDS/QIIS is a Delegated Third Party - and
all the audit issues attendant - or we're treating them like a DTP for all
intents and purposes, and have to deal with all of the attendant DTP
issues, such as the competency of the auditor, the scoping of the audits,
etc. I see no gain from an overly convoluted system that, notably, does not
exist today, as compared to an approach of whitelisting such that the CA no
longer has to independently assess each source, and can instead work with
the community to both report omissions of 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-10-02 Thread Dimitris Zacharopoulos via dev-security-policy



On 1/10/2018 8:15 μμ, Ryan Sleevi via dev-security-policy wrote:

On Mon, Oct 1, 2018 at 9:21 AM Dimitris Zacharopoulos 
wrote:



[...]



I am certainly not suggesting that CAs should put inaccurate and
misleading information in certificates :-) I merely said that if the
Subscriber introduces misleading or inaccurate information in certificates
via a reliable information source, then there will probably be a trail
leading back to the Subscriber. This fact, combined with the lack of clear 
damage this can cause to Relying Parties, makes me wonder why a Subscriber 
who wants to mislead Relying Parties doesn't just use a DV Certificate, 
which probably leaves far less evidence tracing back to the Subscriber.


"The lack of clear damage" - I'm not sure how better to communicate, since
we're discussing fundamental damage to the value that OV and EV are said to
provide. The only way we can say "lack of clear damage" is to say that OV
and EV are worthless - otherwise, it's incredibly damaging.


I'm actually still waiting for Ian to elaborate if the "attack" was just 
the insertion of an intentionally wrong address in an EV Certificate or 
if he was attempting something else. Although his attempt failed (no 
Certificate was issued with that wrong Street Address), I consider the 
discussion that followed very useful (at least to me).


For this particular case though, a Company's rightful owner or Legal 
Representative can file for an address change with a government registry, 
and I am not aware of what additional verification (if any) is performed 
by the government. We must have something to compare this process to, in 
order to establish what is "reasonably verified".




I have no idea where the notion of 'traceability' comes from, or why that's
relevant. It again seems to be anchoring on getting a certificate for the
real cloudflare.com or stripe.com, which is not the discussion. We're
talking about "confusing" a user (or subscriber or relying party or threat
monitoring system) by suggesting that the certificates being issued are
'benign' or 'authorized'.



Yes, it's clear that this is a follow-up discussion of 
https://groups.google.com/forum/#!searchin/mozilla.dev.security.policy/stripe%7Csort:date/mozilla.dev.security.policy/NjMmyA6MxN0/1cC9IrwjCAAJ. 
Sorry for the confusion.



But this inaccurate data is not used in the validation process nor
included in the certificates. Perhaps I didn't describe my thoughts
accurately. Let me have another try using my previous example. Consider an
Information Source that documents, in its practices, that they provide:


1. the Jurisdiction of Incorporation (they check official government
records),
2. registry number (they check official government records),
3. the name of legal representative (they check official government
records),
4. the official name of the legal entity (they check official
government records),
5. street address (they check the address of a utility bill issued
under the name of the legal entity),
6. telephone numbers (self-reported),
7. color of the building (self-reported).

The CA evaluates this practice document and accepts information 1-5 as
reliable, dismisses information 6 as non-reliable, and dismisses
information 7 as irrelevant.

Your argument suggests that the CA should dismiss this information source
altogether, even though it clearly has acceptable and verified information
for 1-5. Is that an accurate representation of your statement?


Yes, I'm stating that the existence of and inclusion of 5-7 calls into
question whether or not this is a reliable data source.


Right, but in my example, the data source has already described, via its 
practices, how it collects each piece of data. The CA, as a recipient of 
this data, can choose how much trust to place in each piece of 
information. Therefore, IMHO, the CA should evaluate and use the 
reasonably verified information from that data source and dismiss the 
rest. That seems more logical to me than dismissing a data source 
entirely because it includes "the color of the building", which is 
self-reported.
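The per-field acceptance described here can be sketched as a small filter. This is a minimal illustration, not anything from the thread or the BRs; all field names and method labels are hypothetical:

```python
# Sketch: keep only the attributes whose documented verification method
# the CA accepts. Field names and method labels are hypothetical.

GOVERNMENT_RECORD = "government_record"
DOCUMENTARY = "documentary_evidence"   # e.g. a utility bill
SELF_REPORTED = "self_reported"

# How the (hypothetical) information source says it verifies each field.
source_practices = {
    "jurisdiction_of_incorporation": GOVERNMENT_RECORD,
    "registry_number": GOVERNMENT_RECORD,
    "legal_representative": GOVERNMENT_RECORD,
    "legal_entity_name": GOVERNMENT_RECORD,
    "street_address": DOCUMENTARY,
    "telephone_number": SELF_REPORTED,
    "building_color": SELF_REPORTED,
}

ACCEPTABLE_METHODS = {GOVERNMENT_RECORD, DOCUMENTARY}

def usable_fields(practices: dict) -> set:
    """Fields the CA may treat as reasonably verified."""
    return {field for field, method in practices.items()
            if method in ACCEPTABLE_METHODS}

print(sorted(usable_fields(source_practices)))
```

Under these assumptions the first five fields pass and the two self-reported ones are dismissed, mirroring the 1-5 vs. 6-7 split in the example.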



Your parenthetical
about how they check that is what the CA has the burden to demonstrate,
particularly given that they have evidence that there is less-than-reliable
data included. How does the competent CA ensure that the registry number is
not self-reported -


The information in the parentheses would be documented in the trusted 
source's practices, and the CA would make an inquiry to check that these 
practices are actually implemented and followed.



or that the QIIS allows it to be self-reported in the
future?


No one can predict the future, which is why there is a process for 
periodic re-evaluation.





This is where the 'stopped-clock' metaphor is incredibly appropriate. Just
because 1-5 happen to be right, and happen to be getting the right process,
is by no means a predictor of future 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-10-01 Thread Dimitris Zacharopoulos via dev-security-policy

On 1/10/2018 1:06 μμ, Ryan Sleevi via dev-security-policy wrote:

On Mon, Oct 1, 2018 at 2:55 AM Dimitris Zacharopoulos 
wrote:


Perhaps I am confusing different past discussions. If I recall correctly,
in previous discussions we described the case where an attacker tries to
get a certificate for a company "Example Inc." with domain "example.com".
This domain has a domain Registrant Address as "123 Example Street".

The attacker registers a company with the same name "Example Inc." in a
different jurisdiction, with address "123 Example Street" and a different
(attacker's) phone number. How is the attacker able to get a certificate
for example.com? That would be a real "attack" scenario.


Yes, you are confusing things, as I would have thought this would be a
'simple' discussion. Perhaps this confusion comes from only thinking the
domain name matters in making an 'attack'. If that's the case, we can do
away with EV and OV entirely, because they do not provide value to that
domain validation. Alternatively, if we say that information is relevant,
then the ability to spoof any of that information also constitutes an
'attack' - to have the information for one organization presented in a
different (logical, legal) organization's associated information.


I'm just trying to understand Ian's "attack" scenario. Domain Validation 
is the baseline, and OV/EV builds on top of it to provide verified 
information to Relying Parties to assist in trust decisions. There were 
suggestions in the past that OV/EV identity validation could substitute 
for parts of Domain Validation, but it's clear that this is not the case 
we are discussing.







Unless this topic comes as a follow-up to the previous discussion of
displaying the "Stripe Inc." information to Relying Parties, with the
additional similarity in Street Address and not just the name of the
Organization. If I recall correctly, that second "Stripe Inc." was not a
"fake" entity but a "real" entity that was properly registered in some
Jurisdiction. This doesn't seem to be the same attack scenario as getting a
certificate for a Domain for which you are not the owner nor control, but a
way to confuse Relying Parties. Certainly, in case of fraud, this leaves a
lot more evidence for the authorities to trail back to a source, than for a
case without Organization information.


This also seems to be fixating on the domain name, but I have no idea why
you've chosen that as the fixation, as the discussion to date doesn't
involve that. I don't think it's your intent, but it sounds like you're
saying "It's better for CAs to put inaccurate and misleading information in
certificates, because at least then it's there" - which surely makes no
sense.



No, this was not about the domain name but about the information 
displayed to the Relying Party with the attributes included in the OV/EV 
Certificate (primarily the Organization). So, I'm still uncertain if 
Ian's "misleading street address" was trying to get a certificate for 
domain "stripe.com" owned by "Stripe Inc." in California, or was trying 
to get a certificate for "ian's domain.com" owned by "Stripe Inc." in 
Kentucky, as in the previous discussions. The discussion so far 
indicates that it's the latter, with the additional element that now the 
Street Address is also misleading.


I am certainly not suggesting that CAs should put inaccurate and 
misleading information in certificates :-) I merely said that if the 
Subscriber introduces misleading or inaccurate information in 
certificates via a reliable information source, then there will probably 
be a trail leading back to the Subscriber. This fact, combined with the 
lack of clear damage this can cause to Relying Parties, makes me wonder 
why a Subscriber who wants to mislead Relying Parties doesn't just use a 
DV Certificate, which probably leaves far less evidence tracing back to 
the Subscriber.



But they do have some Reliable and Qualified Information according to our
standards (for example registry number, legal representative, company
name). If a CA uses only this information from that source, why shouldn't
it be considered reliable? We all need to consider the fact that CAs use
tools to do their validation job effectively and efficiently. These tools
are evaluated continuously. Complete dismissal of tools must be justified
in a very concrete way.


No, they are not Reliable Data Sources. Using unreliable data sources,
under the motto that "even a stopped clock is right twice a day", requires
clear and concrete justification. The burden is on the CA to demonstrate
the data source's reliability. If there is any reason to suspect that a
Reliable Data Source contains inaccurate data, you should not be using it -
for any data.


But this inaccurate data is not used in the validation process nor 
included in the certificates. Perhaps I didn't describe my thoughts 
accurately. Let me have another try using my previous example. 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-10-01 Thread Dimitris Zacharopoulos via dev-security-policy

On 28/9/2018 9:59 μμ, Ian Carroll via dev-security-policy wrote:

On Thursday, September 27, 2018 at 10:22:05 PM UTC-7, Dimitris Zacharopoulos 
wrote:

Forgive my ignorance, but could you please explain what was your
ultimate goal, as "an attacker", what were you hoping to gain and how
could you use this against Relying Parties?

I read your email several times but I could not easily find a case where
your fake address creates any serious concern for Relying Parties. Even
if you used the same street address as CloudFlare, the CA would check
against the database and would find two company records that share the
same address. That would obviously block the process and additional
checks would take place. Now, as a way to delay certificate issuance for
CloudFlare, I find it interesting but it certainly doesn't seem to
affect Relying Parties.

I think Ryan's reply was spot on, but I do want to clarify a couple of things. 
First, CAs typically make lookup requests to D by specifying the company's 
DUNS number. This means that they aren't searching for a given company name; any 
conflicting companies would not come up in a search.

Also, I think you overestimate validation agents; Comodo actually found the real 
Cloudflare on another QIIS and emailed me saying they found a "similar" 
company, but was happy to ignore it when I gave them a valid DUNS number.


I am probably not very familiar with the "DUNS number" or that 
particular database, so I am trying to understand your goals a little 
better. I still don't have this picture so would you please be able to 
describe -in simple words- what was your ultimate goal, as "an 
attacker", what were you hoping to gain and how could you use this 
against Relying Parties?


You say that the CA typically makes lookup requests to D by specifying 
the company's DUNS number. What information are they trying to validate 
by doing that? They normally start with Domain validation and try 
to link the name of the Registrant to an existing company. To the best 
of my knowledge, this number doesn't exist in Domain Registrant 
Information. If it did, things would be a lot simpler.


Please also notice that I try to find specific flaws in the Guidelines 
and not look at a specific CA's validation agents. If the Guidelines 
adequately describe how a validation agent or a CA should perform an 
analysis on the quality of a Reliable Data Source or a Qualified 
Information Source, then at least we're good in the policy part and need 
to focus on why these policies are not implemented in a satisfactory way.



Dimitris.


And to take this one step further, I believe there are several GISs that
also accept whatever address you tell them because:

  1. They have no reason to believe that you will lie to them (they know
 who you are and in some Jurisdictions you might be prosecuted for
 lying to the government)
  2. No foreseeable harm to others could be done if you misrepresent your
 own address.

Since we are discussing Data/Information Sources, the BRs define
how CAs should evaluate a Data Source before declaring it "Reliable".


 "3.2.2.7 Data Source Accuracy

Prior to using any data source as a Reliable Data Source, the CA SHALL
evaluate the source for its reliability, accuracy, and resistance to
alteration or falsification. The CA SHOULD consider the following during
its evaluation:

  1. The age of the information provided,
  2. The frequency of updates to the information source,
  3. The data provider and purpose of the data collection,
  4. The public accessibility of the data availability, and
  5. The relative difficulty in falsifying or altering the data.

Databases maintained by the CA, its owner, or its affiliated companies
do not qualify as a Reliable Data Source if the primary purpose of the
database is to collect information for the purpose of fulfilling the
validation requirements under this Section 3.2."
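The five factors and the self-maintained-database disqualifier quoted above can be sketched as a simple evaluation record. The numeric thresholds below are purely illustrative assumptions; the BRs set no numeric limits and leave the judgement to the CA:

```python
from dataclasses import dataclass

@dataclass
class DataSourceEvaluation:
    # The five factors BR 3.2.2.7 says the CA SHOULD consider
    information_age_days: int
    update_frequency_days: int
    provider_and_purpose: str
    publicly_accessible: bool
    hard_to_falsify: bool
    # Disqualifier from the same section
    maintained_by_ca_for_validation: bool

    def may_be_reliable(self) -> bool:
        # A database maintained by the CA primarily to fulfil Section 3.2
        # validation never qualifies; the rest is CA judgement, sketched
        # here as assumed thresholds.
        if self.maintained_by_ca_for_validation:
            return False
        return self.hard_to_falsify and self.update_frequency_days <= 365

ev = DataSourceEvaluation(30, 90, "commercial registry aggregator",
                          True, True, False)
print(ev.may_be_reliable())  # True under the assumed thresholds
```

The point of the sketch is only that the disqualifier is absolute while the five factors feed a discretionary decision.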

The EVGs also describe how to evaluate and declare the "Qualified" status:


   "11.11.5. Qualified Independent Information Source

A Qualified Independent Information Source (QIIS) is a regularly-updated
and publicly available database that is generally recognized as a
dependable source for certain information. A database qualifies as a
QIIS if the CA determines that:

(1) Industries other than the certificate industry rely on the database
for accurate location, contact, or other information; and

(2) The database provider updates its data on at least an annual basis.

The CA SHALL use a documented process to check the accuracy of the
database and ensure its data is acceptable, including reviewing the
database provider's terms of use. The CA SHALL NOT use any data in a
QIIS that the CA knows is (i) self-reported and (ii) not verified by the
QIIS as accurate. Databases in which the CA or its owners or affiliated
companies maintain a controlling interest, or in which any Registration
Authorities or subcontractors to whom the CA has 

Re: Concerns with Dun & Bradstreet as a QIIS

2018-10-01 Thread Dimitris Zacharopoulos via dev-security-policy

On 28/9/2018 8:04 μμ, Ryan Sleevi via dev-security-policy wrote:

On Fri, Sep 28, 2018 at 1:22 AM Dimitris Zacharopoulos via
dev-security-policy  wrote:


Forgive my ignorance, but could you please explain what was your
ultimate goal, as "an attacker", what were you hoping to gain and how
could you use this against Relying Parties?

I read your email several times but I could not easily find a case where
your fake address creates any serious concern for Relying Parties. Even
if you used the same street address as CloudFlare, the CA would check
against the database and would find two company records that share the
same address. That would obviously block the process and additional
checks would take place. Now, as a way to delay certificate issuance for
CloudFlare, I find it interesting but it certainly doesn't seem to
affect Relying Parties.


I'm not Ian, but I would have thought his email would have been obvious and
clear. The confusion here is that two jurisdictions can allow different
entities the same name. The EVGs seek to resolve this by making use of a
variety of ancillary fields - such as serialNumber and the incorporation
information - to presumably attempt to establish to the relying party the
identity they're speaking to.

In the "Stripe, Inc" case, the user was able to distinguish 'real' from
'fake' by virtue of the incorporation information - Kentucky vs California.
However, in this case, the attack went further, in as much as through the
CA using an unreliable datasource to verify the jurisdictional information.
If the CA used an unreliable datasource, then the end user would see
something that, for all intents and purposes, appears the same.

I'm not sure I follow your point about the same address - Ian made it clear it was a
different but *similar* address - and I'm not sure why you suggest it would
block issuance for the legitimate subscriber. Does that explain it simply enough?



Perhaps I am confusing different past discussions. If I recall 
correctly, in previous discussions we described the case where an 
attacker tries to get a certificate for a company "Example Inc." with 
domain "example.com". This domain has a domain Registrant Address as 
"123 Example Street".


The attacker registers a company with the same name "Example Inc." in a 
different jurisdiction, with address "123 Example Street" and a 
different (attacker's) phone number. How is the attacker able to get a 
certificate for example.com? That would be a real "attack" scenario.


Unless this topic comes as a follow-up to the previous discussion of 
displaying the "Stripe Inc." information to Relying Parties, with the 
additional similarity in Street Address and not just the name of the 
Organization. If I recall correctly, that second "Stripe Inc." was not a 
"fake" entity but a "real" entity that was properly registered in some 
Jurisdiction. This doesn't seem to be the same attack scenario as 
getting a certificate for a Domain for which you are not the owner nor 
control, but a way to confuse Relying Parties. Certainly, in case of 
fraud, this leaves a lot more evidence for the authorities to trail back 
to a source, than for a case without Organization information.




And to take this one step further, I believe there are several GISs that
also accept whatever address you tell them because:

  1. They have no reason to believe that you will lie to them (they know
 who you are and in some Jurisdictions you might be prosecuted for
 lying to the government)
  2. No foreseeable harm to others could be done if you misrepresent your
 own address.


Then they are not Reliable nor QIISes. Full stop.


But they do have some Reliable and Qualified Information according to 
our standards (for example registry number, legal representative, 
company name). If a CA uses only this information from that source, why 
shouldn't it be considered reliable? We all need to consider the fact 
that CAs use tools to do their validation job effectively and 
efficiently. These tools are evaluated continuously. Complete dismissal 
of tools must be justified in a very concrete way.


I would accept your conclusion for an Information Source that claimed, 
in their practices, that they verify some information against a 
secondary government database and the CA gets evidence that they don't 
actually do that. This means that the rest of the "claimed as verified" 
information is now questionable. This is very similar to Browsers 
checking for misbehavior by CAs that claim certain practices in their 
CP/CPS and don't actually implement them. That would be a case where the 
CA might decide to completely distrust that Information Source.


I hope you can see the difference.




In my understanding, this is the process each CA must perform to
evaluate every Data Source before granting them the "Reliable" or

Re: Concerns with Dun & Bradstreet as a QIIS

2018-09-27 Thread Dimitris Zacharopoulos via dev-security-policy


Forgive my ignorance, but could you please explain what was your 
ultimate goal, as "an attacker", what were you hoping to gain and how 
could you use this against Relying Parties?


I read your email several times but I could not easily find a case where 
your fake address creates any serious concern for Relying Parties. Even 
if you used the same street address as CloudFlare, the CA would check 
against the database and would find two company records that share the 
same address. That would obviously block the process and additional 
checks would take place. Now, as a way to delay certificate issuance for 
CloudFlare, I find it interesting but it certainly doesn't seem to 
affect Relying Parties.


And to take this one step further, I believe there are several GISs that 
also accept whatever address you tell them because:


1. They have no reason to believe that you will lie to them (they know
   who you are and in some Jurisdictions you might be prosecuted for
   lying to the government)
2. No foreseeable harm to others could be done if you misrepresent your
   own address.

Since we are discussing Data/Information Sources, the BRs define 
how CAs should evaluate a Data Source before declaring it "Reliable".



   "3.2.2.7 Data Source Accuracy

Prior to using any data source as a Reliable Data Source, the CA SHALL 
evaluate the source for its reliability, accuracy, and resistance to 
alteration or falsification. The CA SHOULD consider the following during 
its evaluation:


1. The age of the information provided,
2. The frequency of updates to the information source,
3. The data provider and purpose of the data collection,
4. The public accessibility of the data availability, and
5. The relative difficulty in falsifying or altering the data.

Databases maintained by the CA, its owner, or its affiliated companies 
do not qualify as a Reliable Data Source if the primary purpose of the 
database is to collect information for the purpose of fulfilling the 
validation requirements under this Section 3.2."


The EVGs also describe how to evaluate and declare the "Qualified" status:


 "11.11.5. Qualified Independent Information Source

A Qualified Independent Information Source (QIIS) is a regularly-updated 
and publicly available database that is generally recognized as a 
dependable source for certain information. A database qualifies as a 
QIIS if the CA determines that:


(1) Industries other than the certificate industry rely on the database 
for accurate location, contact, or other information; and


(2) The database provider updates its data on at least an annual basis.

The CA SHALL use a documented process to check the accuracy of the 
database and ensure its data is acceptable, including reviewing the 
database provider's terms of use. The CA SHALL NOT use any data in a 
QIIS that the CA knows is (i) self-reported and (ii) not verified by the 
QIIS as accurate. Databases in which the CA or its owners or affiliated 
companies maintain a controlling interest, or in which any Registration 
Authorities or subcontractors to whom the CA has outsourced any portion 
of the vetting process (or their owners or affiliated companies) 
maintain any ownership or beneficial interest, do not qualify as a QIIS."



In my understanding, this is the process each CA must perform to 
evaluate every Data Source before granting them the "Reliable" or 
"Qualified" status. Self-reported information without any supporting 
evidence is clearly not acceptable. I have not evaluated this database 
that you mention but if they accept self-reporting for "Street Address" 
and don't perform any additional verification (like asking you for a 
utility bill or cross-referencing it with a government database), then 
the "Street Address" information is unreliable and the CA's evaluation 
process should catch that.
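The EVG 11.11.5 gating conditions quoted above, and the prohibition on knowingly self-reported, unverified data, can be sketched as two small predicates. This is an illustrative reading, not normative text:

```python
def qualifies_as_qiis(relied_on_outside_cert_industry: bool,
                      updated_at_least_annually: bool,
                      ca_controlling_interest: bool,
                      ra_ownership_interest: bool) -> bool:
    """Sketch of the EVG 11.11.5 gating conditions for a QIIS."""
    if ca_controlling_interest or ra_ownership_interest:
        return False   # ownership interest disqualifies the database outright
    return relied_on_outside_cert_industry and updated_at_least_annually

def may_use_datum(known_self_reported: bool, verified_by_qiis: bool) -> bool:
    """The CA SHALL NOT use data it knows is (i) self-reported and
    (ii) not verified by the QIIS as accurate."""
    return not (known_self_reported and not verified_by_qiis)

print(qualifies_as_qiis(True, True, False, False))   # True
print(may_use_datum(True, False))                    # False
```

Note that `may_use_datum` is per-datum: a database can qualify as a QIIS overall while individual self-reported fields (like a street address) remain unusable, which is the distinction being argued in this thread.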


That doesn't mean that the rest of the information is also unreliable. 
For example, an Information Source might describe in their documentation 
practices how they verify each piece of information, for example:


 * the Jurisdiction of Incorporation (they check official government
   records),
 * registry number (they check official government records),
 * the name of legal representative (they check official government
   records),
 * the official name of the legal entity (they check official
   government records),
 * street address (they check the address of a utility bill issued
   under the name of the legal entity),
 * telephone numbers (self-reported),
 * color of the building (self-reported),

and the CA, during evaluation, might decide to accept only the first 5 
as Reliable/Qualified Information, as they have a higher level of 
assurance. That would be the right thing to do. For the rest of the 
information, the CA should probably request additional validation 
information from the Applicant.


Sorry for the long email, quoting requirements always does that :)


Dimitris.

On 27/9/2018 2:52 πμ, Ian 

Re: Telia CA - problem in E validation

2018-08-21 Thread Dimitris Zacharopoulos via dev-security-policy

Dear Pekka,

"verified by the CA" seems to be the weak point here. What does 
"verified by the CA" mean?


The community seems to interpret this as actions by the CA to verify 
that the information requested to be included in the certificate by the 
Applicant, is actually real and owned/controlled by the Applicant. As 
others already mentioned, CAs usually follow some kind of 
challenge-response process to prove that the email address is real and 
owned/controlled by the Applicant.
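The challenge-response process mentioned here can be sketched in a few lines: mail a random token to the requested address and require it back. A minimal sketch, with the mailing step left out; function names are hypothetical:

```python
# Sketch of a challenge-response check for an email address requested in
# a certificate: a random token is sent to the address, and only someone
# who controls the mailbox can echo it back.
import secrets
import hmac

def issue_challenge() -> str:
    # High-entropy, URL-safe token to be mailed to the requested address.
    return secrets.token_urlsafe(16)

def verify_response(sent_token: str, returned_token: str) -> bool:
    # Constant-time comparison avoids leaking the token byte by byte.
    return hmac.compare_digest(sent_token, returned_token)

token = issue_challenge()
print(verify_response(token, token))    # the mailbox holder echoes it back
print(verify_response(token, "guess"))  # anyone else fails
```

The contrast with syntax-only checking is the point: a well-formed address string proves nothing about ownership or control.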


You seem to interpret this as "our RA officers look at the address that 
the Applicant requested to be included in the certificate and if it 
appears to have a correct email address syntax (something followed by an 
'@' and then a domain), accept it and include it in the certificate". Is 
this an accurate description of your process? If someone requested a 
Certificate to include "pekka.lahtiha...@teliasonera.com", which seems 
like a legitimate email address, wouldn't you approve it? If not, why not?



Dimitris.



On 21/8/2018 11:53 πμ, pekka.lahtiharju--- via dev-security-policy wrote:

In my opinion we follow the BRs. Here is why: I think the first paragraph of 7.1.4.2 
says that "...CA represents it followed the procedure set forth in its Certificate 
Policy and/or Certification Practice Statement to verify...". That is exactly what 
we do, because we have explained in our CPS how E is verified (see below). Perhaps 
the process description in the CPS could be better, but the descriptions are there, 
including the fact that email (domain) ownership hasn't always been verified. A more 
detailed description of the E process has been given in this discussion.

Then BR 7.1.4.2.j specifies how E should be verified for SSL certificates:

"All other optional attributes, when present within the subject field, MUST 
contain information that has
been verified by the CA. Optional attributes MUST NOT contain metadata such as 
‘.’, ‘-‘, and ‘ ‘ (i.e. space)
characters, and/or any other indication that the value is absent, incomplete, or not 
applicable."

In our opinion we also follow that chapter completely, because we perform two kinds 
of verification on E values and we prevent metadata values as required. Note that the 
BR text does not prohibit our methods. I still can't understand which exact BR detail 
we haven't followed. We haven't verified everything (specifically E) perfectly, but we 
have followed our CPS, and our E process was written to be compatible with the current 
BR 7.1.4.2.j.

Our current CPS v2.1 has E verification documentation mostly in chapter "3.2.4 
Non-verified Subscriber Information" and partly in 3.2.2. Our CPS is here 
https://repository.trust.teliasonera.com/Telia_Server_Certificate_CPS_v2.1.pdf. Relevant 
parts of it are copied below:

---
3.2.2: Other subject values like OU or E are verified each time separately.
3.2.4: The Registration Officer is obliged to always review all subject 
information and initiate additional checking routines if there are any unclear 
Subject values
...Domain name ownership of domains in email addresses may belong to another 
company than the applicant e.g. to some service provider

Note! We have now changed the CPS text in the upcoming v2.2, because we completely 
stopped adding E values to certificates; our old methods caused these discussions and 
E is not mandatory for Customers.

In my opinion, the E value requirements in the BRs are much closer to the weak OU 
process than to any of the strict processes. And that is how it should be, because 
there is no sense in requiring company support teams to accept each of your OV 
certificates; the current hostmaster acceptance for SAN domains is enough.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy





Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-15 Thread Dimitris Zacharopoulos via dev-security-policy



On 15/5/2018 6:51 μμ, Wayne Thayer via dev-security-policy wrote:

Did you consider any changes based on Jakob’s comments?  If the PKCS#12 is
distributed via secure channels, how strong does the password need to be?





I think this depends on our threat model, which to be fair is not something
we've defined. If we're only concerned with protecting the delivery of the
PKCS#12 file to the user, then this makes sense. If we're also concerned
with protection of the file while in possession of the user, then a strong
password makes sense regardless of the delivery mechanism.


I think once the key material is securely delivered to the user, it is 
no longer under the CA's control and we shouldn't assume that it is. The 
user might change the passphrase of the PKCS#12 file to whatever, or 
store the private key without any encryption.
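On the question of how strong the transport password needs to be: a minimal sketch of generating a high-entropy passphrase for a PKCS#12 file, independent of the delivery channel. The length is an assumed policy choice for illustration, not a figure from this thread:

```python
# Sketch: high-entropy transport passphrase for a PKCS#12 file.
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits
LENGTH = 20   # assumed policy choice, not a requirement from the thread

def transport_passphrase() -> str:
    # secrets (not random) for cryptographically secure choices.
    return "".join(secrets.choice(ALPHABET) for _ in range(LENGTH))

entropy_bits = LENGTH * math.log2(len(ALPHABET))
pw = transport_passphrase()
print(len(pw), round(entropy_bits))   # 20 characters, ~119 bits
```

As Dimitris notes, this only protects the file up to delivery; once the key material is with the user, the CA cannot control whether the passphrase is kept or changed.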



Dimitris.


RE: Policy 2.6 Proposal: Update Minimum Audit Versions

2018-05-11 Thread Dimitris Zacharopoulos via dev-security-policy
Thanks Peter, I think we are in agreement. 

Dimitris. 

-Original Message-
From: "Peter Miškovič via dev-security-policy" 

To: Dimitris Zacharopoulos , Wayne Thayer 
, mozilla-dev-security-policy 

Sent: Fri, 11 May 2018 12:53
Subject: RE: Policy 2.6 Proposal: Update Minimum Audit Versions

Hi Dimitris,

You can find the official list of ETSI published standards at 
http://www.etsi.org/standards-search#Pre-defined%20Collections

If you search for ETSI EN 319 411 you will find that the only officially 
published ETSI versions of ETSI EN 319 411-1 were V1.1.1 (2016-02) and V1.2.2 
(2018-04). Any other version, according to the document history on the last page 
of the standard, was a version for the EN Approval Procedure (V1.2.0) or Vote 
(V1.2.1). This means that versions 1.2.0 and 1.2.1 were not officially published 
by ETSI.

For ETSI EN 319 411-2 you will find that the only officially published ETSI 
versions were V2.1.1 (2016-02) and V2.2.2 (2018-04).

Accordingly, the minimum requirements should look like this:

"Trust Service Providers practice" in ETSI EN 319 411-1, version 1.1.1 or 
version 1.2.2 or a later officially published ETSI version.
"Trust Service Providers practice" in ETSI EN 319 411-2, version 2.1.1 or 
version 2.2.2 or a later officially published ETSI version.
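Peter's rule ("1.1.1, or 1.2.2 or a later officially published version") can be expressed as a simple version gate. A hypothetical sketch for the EN 319 411-1 case:

```python
# Sketch: checking an ETSI audit-criteria version string against the
# accepted versions for EN 319 411-1 (illustrative, not normative).
def parse(v: str) -> tuple:
    return tuple(int(x) for x in v.lstrip("Vv").split("."))

ACCEPTED_EXACT = {"1.1.1"}   # older version explicitly still accepted
MINIMUM_LATEST = "1.2.2"     # 1.2.0 / 1.2.1 were never officially published

def version_acceptable(v: str) -> bool:
    return v in ACCEPTED_EXACT or parse(v) >= parse(MINIMUM_LATEST)

for v in ("1.1.1", "1.2.1", "1.2.2", "1.3.0"):
    print(v, version_acceptable(v))
```

Note the gap this encodes: 1.2.0 and 1.2.1 fall between the two accepted versions yet are rejected, which is exactly why the policy wording "1.2 or later" was problematic.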

Regards
Peter




-Original Message-
From: Dimitris Zacharopoulos  
Sent: Friday, May 11, 2018 7:23 AM
To: Peter Miškovič ; Wayne Thayer 
; mozilla-dev-security-policy 

Subject: Re: Policy 2.6 Proposal: Update Minimum Audit Versions

Hello Peter,

These were very recently published however not everyone is tracking down ETSI 
updates by registering to the mailing lists. The main question is where can you 
find the authoritative document *list*? I though the official list is 
https://portal.etsi.org/TBSiteMap/ESI/TrustServiceProviders.aspx.

Also, were there any other versions published before 1.2.2? The recommendation 
says "1.2 or later". Where are the versions 1.2.0, 1.2.1 published?

Thanks,
Dimitris.

On 11/5/2018 8:13 πμ, Peter Miškovič via dev-security-policy wrote:
> New versions of both ETSI standards have been published:
>
> ETSI EN 319 411-1 V1.2.2 adopted on April 23, 2018 
> http://www.etsi.org/deliver/etsi_en/319400_319499/31941101/01.02.02_60
> /en_31941101v010202p.pdf
>
> ETSI EN 319 411-2 V2.2.2 adopted on April 23, 2018 
> http://www.etsi.org/deliver/etsi_en/319400_319499/31941102/02.02.02_60
> /en_31941102v020202p.pdf
>
> Peter
>
> -Original Message-
> From: dev-security-policy 
>  > On Behalf Of Wayne Thayer via dev-security-policy
> Sent: Thursday, May 10, 2018 5:04 PM
> To: mozilla-dev-security-policy 
> 
> Subject: Policy 2.6 Proposal: Update Minimum Audit Versions
>
> After consulting with representatives from WebTrust and ETSI, I 
> propose that we update the minimum required versions of audit criteria 
> in section
> 3.1.1 as follows:
>
> - WebTrust "Principles and Criteria for Certification Authorities - 
> Extended Validation SSL" from 1.4.5 to 1.6.0 or later
> - “Trust Service Providers practice” in ETSI EN 319 411-1 from 1.1.1 
> to 1.2 or later
> - “Trust Service Providers practice” in ETSI EN 319 411-2  from 2.1.1 
> to
> 2.2 or later
>
> These newer versions were all published last year and should be the minimum 
> for audits completed from now on.
>
> Please respond with any concerns you have about this update to our root store 
> policy.
>
> - Wayne
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Update Minimum Audit Versions

2018-05-10 Thread Dimitris Zacharopoulos via dev-security-policy

Hello Peter,

These were published very recently; however, not everyone tracks ETSI updates 
by subscribing to the mailing lists. The main question is: where can one find 
the authoritative document *list*? I thought the official list was 
https://portal.etsi.org/TBSiteMap/ESI/TrustServiceProviders.aspx.


Also, were there any other versions published before 1.2.2? The recommendation 
says "1.2 or later". Where were versions 1.2.0 and 1.2.1 published?


Thanks,
Dimitris.

On 11/5/2018 8:13 πμ, Peter Miškovič via dev-security-policy wrote:

New versions of both ETSI standards have been published:

ETSI EN 319 411-1 V1.2.2 adopted on April 23, 2018
http://www.etsi.org/deliver/etsi_en/319400_319499/31941101/01.02.02_60/en_31941101v010202p.pdf

ETSI EN 319 411-2 V2.2.2 adopted on April 23, 2018
http://www.etsi.org/deliver/etsi_en/319400_319499/31941102/02.02.02_60/en_31941102v020202p.pdf

Peter

-Original Message-
From: dev-security-policy 
 On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Thursday, May 10, 2018 5:04 PM
To: mozilla-dev-security-policy 
Subject: Policy 2.6 Proposal: Update Minimum Audit Versions

After consulting with representatives from WebTrust and ETSI, I propose that we 
update the minimum required versions of audit criteria in section
3.1.1 as follows:

- WebTrust "Principles and Criteria for Certification Authorities - Extended 
Validation SSL" from 1.4.5 to 1.6.0 or later
- “Trust Service Providers practice” in ETSI EN 319 411-1 from 1.1.1 to 1.2 or 
later
- “Trust Service Providers practice” in ETSI EN 319 411-2  from 2.1.1 to
2.2 or later

These newer versions were all published last year and should be the minimum for 
audits completed from now on.

Please respond with any concerns you have about this update to our root store 
policy.

- Wayne
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Update Minimum Audit Versions

2018-05-10 Thread Dimitris Zacharopoulos via dev-security-policy


This page https://portal.etsi.org/TBSiteMap/ESI/TrustServiceProviders.aspx
also displays EN 319 411-1 v1.1.1 
<http://www.etsi.org/deliver/etsi_en/319400_319499/31941101/01.01.01_60/> 
and EN 319 411-2 v2.1.1 
<http://www.etsi.org/deliver/etsi_en/319400_319499/31941102/02.01.01_60/>.



Dimitris.

On 10/5/2018 11:24 μμ, Dimitris Zacharopoulos via dev-security-policy wrote:


For ETSI EN 319 411-1, it seems that v1.1.1 is still listed as the 
official version. The list of ESI activities is 
https://portal.etsi.org//TBSiteMap/ESI/ESIActivities.aspx. There is an 
update for version 1.2.1 that is "on vote until 23 April".


Perhaps there is a more official page for these documents that I am 
not aware of.



Dimitris.

-Original Message-
From: Wayne Thayer via dev-security-policy 
<dev-security-policy@lists.mozilla.org>
To: mozilla-dev-security-policy 
<mozilla-dev-security-pol...@lists.mozilla.org>

Sent: Thu, 10 May 2018 18:04
Subject: Policy 2.6 Proposal: Update Minimum Audit Versions

After consulting with representatives from WebTrust and ETSI, I propose
that we update the minimum required versions of audit criteria in section
3.1.1 as follows:

- WebTrust "Principles and Criteria for Certification Authorities -
Extended Validation SSL" from 1.4.5 to 1.6.0 or later
- “Trust Service Providers practice” in ETSI EN 319 411-1 from 1.1.1 to 1.2 or later
- “Trust Service Providers practice” in ETSI EN 319 411-2 from 2.1.1 to 2.2 or later

These newer versions were all published last year and should be the minimum
for audits completed from now on.

Please respond with any concerns you have about this update to our root
store policy.

- Wayne
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org>

https://lists.mozilla.org/listinfo/dev-security-policy




___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Update Minimum Audit Versions

2018-05-10 Thread Dimitris Zacharopoulos via dev-security-policy


For ETSI EN 319 411-1, it seems that v1.1.1 is still listed as the 
official version. The list of ESI activities is 
https://portal.etsi.org//TBSiteMap/ESI/ESIActivities.aspx. There is an 
update for version 1.2.1 that is "on vote until 23 April".


Perhaps there is a more official page for these documents that I am not 
aware of.



Dimitris.

-Original Message-
From: Wayne Thayer via dev-security-policy 

To: mozilla-dev-security-policy 


Sent: Thu, 10 May 2018 18:04
Subject: Policy 2.6 Proposal: Update Minimum Audit Versions

After consulting with representatives from WebTrust and ETSI, I propose
that we update the minimum required versions of audit criteria in section
3.1.1 as follows:

- WebTrust "Principles and Criteria for Certification Authorities -
Extended Validation SSL" from 1.4.5 to 1.6.0 or later
- “Trust Service Providers practice” in ETSI EN 319 411-1 from 1.1.1 to 1.2 or later
- “Trust Service Providers practice” in ETSI EN 319 411-2 from 2.1.1 to 2.2 or later

These newer versions were all published last year and should be the minimum
for audits completed from now on.

Please respond with any concerns you have about this update to our root
store policy.

- Wayne
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org 


https://lists.mozilla.org/listinfo/dev-security-policy



Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-03 Thread Dimitris Zacharopoulos via dev-security-policy


As I was reading this very interesting thread, I kept asking myself 
"what are we trying to protect". Are we trying to protect a "Private 
Key" or a "PKCS#12" file? I suppose the consensus of the community, 
based mainly on compatibility issues, is that we can't avoid the 
solution of a PKCS#12 file, so we need to figure out how to send this 
file with reasonable security to the Subscriber.


We have two areas of concern:

1. How to prevent an attacker from getting the PKCS#12 file
2. If an attacker obtains the PKCS#12 file, make sure the encryption is
   strong enough that a practical decryption attempt would take longer
   than the lifetime of the Certificate

For area 1, the file must be distributed via secure channel. Some 
recommendations:


1. Web page (protected by https) via authentication of the Subscriber
2. S/MIME encrypted email by using an existing Subscriber valid
   Certificate
3. Some might also use pgp for S/MIME
4. Registered post, if delivered in a simple USB
5. Registered, or not registered post, if delivered in a FIPS 140-2
   Level 3 USB drive
6. ...
7. ...

Do we need to expand on this list? What if there are other 
equally-secure methods that we haven't listed? This definitely doesn't 
look like a policy text to me. It describes very specific 
practices/procedures that should be the CA's job to discover and 
Auditor's to verify, but I also understand that in practice, some CAs 
haven't demonstrated good judgment on these topics and auditors didn't 
help either.


For area 2, obviously, if an attacker obtains the PKCS#12 file and has 
infinite time, the key will be decrypted. I believe the discussion 
resulted in two dominating factors:


 * The encryption algorithms in the PKCS#12 file
 * The password quality

For the encryption algorithms, I recommend that we defer to other 
organizations' guidelines, such as SOGIS, NIST or ETSI, which have 
extensively studied the "strength" of encryption algorithms. I can't 
tell if this community or Mozilla is confident enough to choose a 
specific set of "approved" encryption algorithms.


For the password quality, we should follow the definition of a "Random 
Value" as described in the Baseline Requirements


*"Random Value*: A value specified by a CA to the Applicant that 
exhibits at least 112 bits of entropy."


And yes, I would also recommend the usage of a CSPRNG. With that said, 
even if this process (performed by the CA) produces complex passwords 
that contain special characters and (in an unlikely event) might take 20 
minutes to type, it is still going to be used "just once" and the 
Subscriber can then do whatever he/she wants with it. The Subscriber can 
change the passphrase to "1234" for all we care. As far as the CA is 
concerned, the main job is done. The file has been encrypted with a 
reasonably secure algorithm and protected with a reasonably secure 
passphrase.


Of course the passphrase will be delivered separately!

Finally, I would like to ask if the community thinks that it's 
reasonable to put all of our protection efforts on area 2. If we raise 
the bar on area 2 (the proper protection of the PKCS#12 file), we don't 
need to worry "too much" about area 1. We could even send a file via 
plain e-mail because it won't matter if the attacker obtains the file. 
It is already encrypted securely. I would still recommend both but I 
also understand the convenience of the Subscribers and the delivery 
methods for some of these files in types of devices that I am not aware 
of (IoT, some smart phones, etc).



Dimitris.


On 4/5/2018 1:01 πμ, Buschart, Rufus via dev-security-policy wrote:

Basically I like the new wording:


PKCS#12 files [...] SHALL have a password containing at least 112 bits
of output from a CSPRNG, [...]

But I think there is a practical problem here: directly using the output of any 
random number generator ("C" or not) to generate a password will most probably 
lead to passwords containing characters that are either not printable or at least 
not typeable on a 'normal' Western-style keyboard. Therefore I think we need to 
reword the password-strength section a little, maybe like the following:


PKCS#12 files [...] SHALL have a 14 character long password consisting
of characters, digits and special characters based on output from a
CSPRNG, [...]

When I originally proposed my wording, I had the serial numbers in mind (for 
which directly using the output of a CSPRNG works), but didn't think of the 
encoding problem.
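A minimal sketch of one way to reconcile the two wordings: draw from a CSPRNG but map the output onto a typeable alphabet while still guaranteeing at least 112 bits of entropy. The alphabet choice and function name are illustrative assumptions, not from either proposal:

```python
import math
import secrets
import string

def typeable_csprng_password(min_entropy_bits: int = 112) -> str:
    # 94 printable ASCII characters (letters, digits, punctuation), all
    # typeable on a standard Western keyboard.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Each uniformly chosen character contributes log2(94) ~= 6.55 bits,
    # so 112 bits requires ceil(112 / 6.55) = 18 characters.
    length = math.ceil(min_entropy_bits / math.log2(len(alphabet)))
    # secrets.choice uses the OS CSPRNG, unlike the random module.
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Note that with a 94-character alphabet, a fixed 14-character password carries only about 92 bits of entropy, so a "14 character" rule would fall short of the 112-bit "Random Value" definition quoted earlier in the thread; 18 characters is the minimum under these assumptions.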


With best regards,
Rufus Buschart

Siemens AG
Information Technology
Human Resources
PKI / Trustcenter
GS IT HR 7 4
Hugo-Junkers-Str. 9
90411 Nuernberg, Germany
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens

www.siemens.com/ingenuityforlife

Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 

Re: Policy 2.6 Proposal: Require CAs to support problem reports via email

2018-04-18 Thread Dimitris Zacharopoulos via dev-security-policy



On 18/4/2018 9:50 μμ, Wayne Thayer via dev-security-policy wrote:

On Wed, Apr 18, 2018 at 12:14 AM, Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:


On 18/4/2018 12:04 πμ, Jeremy Rowley via dev-security-policy wrote:


Having to go through captchas to even get the email sent is just another
obstacle in getting the CA a timely certificate problem report


Nowadays, people deal with captchas all the time on various popular web
sites. I don't understand this argument. If someone wants to file a
certificate problem report, they will take the extra "seconds" to pass the
"I am not a robot" test :)

The arguments for email are:


When I wrote "I don't understand this argument" it was meant for the 
"having to go through captchas". Sorry, it wasn't very clear.





1 - it's easier. I have seen CAs use generic "support request" forms that
are difficult to decipher, especially when not in one's native language.
2 - It scales better. When someone is trying to report the same problem to
a number of CAs, one email is better than filling out a bunch of forms
3 - It automatically creates a record of the submission. Many forms provide
the user no confirmation unless they remember to take a timestamped screen
shot.



Despite the arguments for email, there are equally good arguments for 
web form submission. IMHO, both should be allowed. A CA could start with 
email but if the spam volume becomes out of control, the CA might switch 
to a web form solution and all we need to do is define the minimum 
"properties" of such a solution. In all cases, CAs should maintain 
up-to-date information for Certificate Problem Report submission methods 
in CCADB.




Mail servers receive tons of SPAM every day, and a published email address is
a very easy target for popular CAs. We should also consider the possibility
of accidental "spam labeling" of a certificate problem report via email.



I believe CAs should include the necessary information for receiving
Certificate Problem Reports in section 1.5.2 of their CP/CPS and this
should be required by the Mozilla Policy for consistency. The same applies
for the "high-priority" Certificate Problem Reports as mandated in 4.10.2
of the BRs.

I plan to introduce a CAB Forum ballot for the 1.5.2 disclosure
requirement. I disagree with the suggestion that Mozilla policy should
duplicate the BRs "for consistency", but since Mozilla policy has a broader
scope than the BRs (email certificates), I will plan to add this
requirement.


Currently the BRs don't mandate listing this particular information in 
1.5.2, so there is no consistency among CAs. So far, CCADB has the best 
collection of this information and this was initiated by Mozilla in the 
April 2017 CA Communication.
Historically, Mozilla had policies that passed on to the BRs and once 
they were in the BRs, they were removed from the Mozilla policy as 
duplicates :). We would support a ballot that would require CP/CPS 
sections 1.5.2 to describe the CA's specific Certificate Problem Report 
methods.



Dimitris.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Require CAs to support problem reports via email

2018-04-18 Thread Dimitris Zacharopoulos via dev-security-policy

On 18/4/2018 12:04 πμ, Jeremy Rowley via dev-security-policy wrote:

Having to go through captchas to even get the email sent is just another 
obstacle in getting the CA a timely certificate problem report


Nowadays, people deal with captchas all the time on various popular web 
sites. I don't understand this argument. If someone wants to file a 
certificate problem report, they will take the extra "seconds" to pass 
the "I am not a robot" test :)


Mail servers receive tons of SPAM every day, and a published email address 
is a very easy target for popular CAs. We should also consider the 
possibility of accidental "spam labeling" of a certificate problem 
report via email.


I believe CAs should include the necessary information for receiving 
Certificate Problem Reports in section 1.5.2 of their CP/CPS and this 
should be required by the Mozilla Policy for consistency. The same 
applies for the "high-priority" Certificate Problem Reports as mandated 
in 4.10.2 of the BRs.



Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Require separate intermediates for different usages (e.g. server auth, S/MIME)

2018-04-17 Thread Dimitris Zacharopoulos via dev-security-policy



On 17/4/2018 9:24 μμ, Wayne Thayer via dev-security-policy wrote:

This proposal is to require intermediate certificates to be dedicated to
specific purposes by EKU. Beginning at some future date, all newly created
intermediate certificates containing either the id-kp-serverAuth or
id-kp-emailProtection EKUs would be required to contain only a single EKU.


We should not require a single EKU but rather separation of id-kp-serverAuth 
and id-kp-emailProtection. This means that if an Intermediate CA 
Certificate includes id-kp-serverAuth, it MUST NOT include 
id-kp-emailProtection, but it MAY also include (for example) the 
id-kp-clientAuth EKU.
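As an illustration (not policy text), this separation rule can be expressed as a simple check over an intermediate's EKU OID set; the function name is hypothetical:

```python
# Extended Key Usage OIDs as defined in RFC 5280, section 4.2.1.12.
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"       # id-kp-serverAuth
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"       # id-kp-clientAuth
EMAIL_PROTECTION = "1.3.6.1.5.5.7.3.4"  # id-kp-emailProtection

def ekus_separated(ekus: set[str]) -> bool:
    # Proposed rule: serverAuth and emailProtection must never be combined
    # in one intermediate; other combinations (e.g. serverAuth + clientAuth)
    # remain permitted.
    return not {SERVER_AUTH, EMAIL_PROTECTION}.issubset(ekus)
```

In practice the OID set would be read from the certificate's EKU extension (e.g. via a parsing library); the check itself stays the same.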


Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.6 Proposal: Audit requirements for new subCA certificates

2018-04-05 Thread Dimitris Zacharopoulos via dev-security-policy



On 5/4/2018 9:00 μμ, Ryan Sleevi via dev-security-policy wrote:

On Thu, Apr 5, 2018 at 5:20 AM, Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:


On 5/4/2018 12:02 πμ, Wayne Thayer via dev-security-policy wrote:


In a recent discussion [1] we decided to clarify the audit requirements
for
new subordinate CA certificates. I’ve  drafted a change that requires the
new certificate to appear in the next periodic audits and in the CP/CPS
prior to issuance:

https://github.com/mozilla/pkipolicy/commit/09867ef4a0db3b1c
ab162930c0326c84d272ec10

We also discussed requiring root key generation ceremony (RKGC) audit
reports, but I have since realized that the BRs (section 6.1.1.1) only
require these audit reports for new root certificates. I’m not convinced
that we should begin requiring an auditor’s report every time a new
subordinate CA certificate is created.

I would appreciate everyone's comments on this proposed change.

This is: https://github.com/mozilla/pkipolicy/issues/32

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/
CAaC2a2HMiQ/IKimeW4NBgAJ
---

This is a proposed update to Mozilla's root store policy for version
2.6. Please keep discussion in this group rather than on GitHub. Silence
is consent.

Policy 2.5 (current version):
https://github.com/mozilla/pkipolicy/blob/2.5/rootstore/policy.md
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



I will copy the proposed change here for convenience:

"

1. MUST be audited in accordance with Mozilla’s Root Store Policy. If
the subordinate CA has a currently valid audit report at the time of
creation of the certificate, it MUST appear on the subordinate CA's
next periodic audit reports.
2. MUST be publicly disclosed in the CCADB by the CA that has their
certificate included in Mozilla’s root program. The CA with a
certificate included in Mozilla’s root program MUST disclose this
information within a week of certificate creation, and before any
such subordinate CA is allowed to issue certificates. All disclosure
MUST be made freely available and without additional requirements,
including, but not limited to, registration, legal agreements, or
restrictions on redistribution of the certificates in whole or in part.
3. MUST be added to the relevant CP/CPS before issuing certificates.

"

I kind of disagree with 3. The new Subordinate CA Certificate MUST be
added to CCADB (per 2.). It MUST be covered by the audit (per 1.) and show
up in the next report. If a Subordinate CA is operated by the Root
operator, a change in the CP/CPS could probably be just an addition of the
Distinguished Name and a SHA fingerprint. However, CPS changes require
administrative work and several levels of approval which IMHO is not worth
the effort for such an addition. I don't see such a big value for 3.
compared to 1. and 2.


Do you see this as a frequent occurrence such that the overhead of
performing this is greater than the value derived by the community in
having this information?



I will call the specific scenario below "a rollover subCA". I think 
rollover subCAs are issued frequently, several times per year depending 
on the size of the CA and its practices. Since item 2 already gives the 
community this information, item 3 seems redundant.



Consider the case where you have a Subordinate CA Certificate that you
need to update because you want to change the hashing algorithm (SHA1 -->
SHA256), or change the key size, or renew. This would lead to a new subCA
Certificate with CN "My Subordinate CA Certificate R2",  "My Subordinate CA
Certificate R3" and so on. The controls would be exactly the same as the
previous subCA Certificate.


Except how does the community know and verify that those controls are the
same? How does the community know and verify if those controls change (e.g.
the subordinate CA is immediately sold to a third-party and/or transferred
to them)



I understand the concern, so why don't we write policy language that 
addresses it rather than making the requirement so broad that it causes 
administrative overhead with little added value compared to the other 
points? What you're describing looks like a rare case. Something along 
the lines of "If a new Subordinate CA Certificate is issued and not 
included in the currently published CP/CPS, it must be maintained by the 
Root CA Operator and must appear in the next audit report" might address 
this concern.



The 3rd requirement might make more sense for externally operated
Subordinate CAs. According to BRs 6.1.1.1, Key Pairs that are generated for
SubCAs not to be used by the Root CA operator or an Affiliate (meaning an
externally operated Subordinate CA), MUST be witnessed by a Qualified
Auditor (or record the ceremony and get an opinion let

Re: Policy 2.6 Proposal: Audit requirements for new subCA certificates

2018-04-05 Thread Dimitris Zacharopoulos via dev-security-policy

On 5/4/2018 12:02 πμ, Wayne Thayer via dev-security-policy wrote:

In a recent discussion [1] we decided to clarify the audit requirements for
new subordinate CA certificates. I’ve  drafted a change that requires the
new certificate to appear in the next periodic audits and in the CP/CPS
prior to issuance:

https://github.com/mozilla/pkipolicy/commit/09867ef4a0db3b1cab162930c0326c84d272ec10

We also discussed requiring root key generation ceremony (RKGC) audit
reports, but I have since realized that the BRs (section 6.1.1.1) only
require these audit reports for new root certificates. I’m not convinced
that we should begin requiring an auditor’s report every time a new
subordinate CA certificate is created.

I would appreciate everyone's comments on this proposed change.

This is: https://github.com/mozilla/pkipolicy/issues/32

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/CAaC2a2HMiQ/IKimeW4NBgAJ
---

This is a proposed update to Mozilla's root store policy for version
2.6. Please keep discussion in this group rather than on GitHub. Silence
is consent.

Policy 2.5 (current version):
https://github.com/mozilla/pkipolicy/blob/2.5/rootstore/policy.md
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



I will copy the proposed change here for convenience:

"

1. MUST be audited in accordance with Mozilla’s Root Store Policy. If
   the subordinate CA has a currently valid audit report at the time of
   creation of the certificate, it MUST appear on the subordinate CA's
   next periodic audit reports.
2. MUST be publicly disclosed in the CCADB by the CA that has their
   certificate included in Mozilla’s root program. The CA with a
   certificate included in Mozilla’s root program MUST disclose this
   information within a week of certificate creation, and before any
   such subordinate CA is allowed to issue certificates. All disclosure
   MUST be made freely available and without additional requirements,
   including, but not limited to, registration, legal agreements, or
   restrictions on redistribution of the certificates in whole or in part.
3. MUST be added to the relevant CP/CPS before issuing certificates.

"

I kind of disagree with 3. The new Subordinate CA Certificate MUST be 
added to CCADB (per 2.). It MUST be covered by the audit (per 1.) and 
show up in the next report. If a Subordinate CA is operated by the Root 
operator, a change in the CP/CPS could probably be just an addition of 
the Distinguished Name and a SHA fingerprint. However, CPS changes 
require administrative work and several levels of approval which IMHO is 
not worth the effort for such an addition. I don't see such a big value 
for 3. compared to 1. and 2.


Consider the case where you have a Subordinate CA Certificate that you 
need to update because you want to change the hashing algorithm (SHA1 
--> SHA256), or change the key size, or renew. This would lead to a new 
subCA Certificate with CN "My Subordinate CA Certificate R2",  "My 
Subordinate CA Certificate R3" and so on. The controls would be exactly 
the same as the previous subCA Certificate.


The 3rd requirement might make more sense for externally operated 
Subordinate CAs. According to BRs 6.1.1.1, Key Pairs that are generated 
for SubCAs not to be used by the Root CA operator or an Affiliate 
(meaning an externally operated Subordinate CA), MUST be witnessed by a 
Qualified Auditor (or record the ceremony and get an opinion letter 
afterwards). It seems that the BRs require some special treatment when a 
SubCA Key Pair is generated for an externally operated entity. Is it 
worth the effort to reflect a similar distinction in the Mozilla Policy?


Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: FW: Complying with Mozilla policy on email validation

2018-04-05 Thread Dimitris Zacharopoulos via dev-security-policy

On 5/4/2018 3:08 πμ, Wayne Thayer via dev-security-policy wrote:

I think the existing language in section 2.2(2) also supports the
federated authentication system use case you described. It says that the CA
"takes reasonable measures to verify that the entity submitting the request
controls the email account associated with the email address referenced in
the certificate". If a CA first confirms that it is a condition of a
particular federated authentication system that a user must have proven
control over the email account that constitutes their username to activate
their account, then requires that user to prove they can authenticate into
the account, I think that meets the "reasonable" standard, even though a
threat analysis might determine that the method is insufficient for various
reasons.


I would like to add to Wayne's post by saying that Federated 
Authentication (oauth, SAML, etc) can be (and is) widely used, however 
it's up to the CA to evaluate each IDentity Provider (IDP) before 
accepting them as a "Qualified Information Source", to ensure they are 
consistent with the CA's Policy and Practices for the quality of 
information included in the response assertions. Each IDP must also be 
periodically evaluated by the CA for quality assurance purposes.


Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: TURKTRUST Non-compliance

2018-03-23 Thread Dimitris Zacharopoulos via dev-security-policy


On 23/3/2018 9:44 μμ, Wayne Thayer via dev-security-policy wrote:
> Therefore, the only action I plan to take on this is
> to ask the WebTrust Task Force for their opinion on "wind-down" audits, and
> also to ask them if it is possible for a CA to obtain a period-of-time
> audit for a hierarchy that hasn't issued any certificates in the period. I
> will appreciate any additional suggestions that could help to resolve this
> issue.

Auditors check what is required according to their audit criteria. There
is no different "set of criteria" for "wind-down" CAs. Common sense
dictates that a Qualified Auditor will check all the requirements and
note down any divergence from the standards. Not issuing Certificates is
not a divergence, but, for example, not issuing CRLs at the proper
interval mentioned in the standards is a non-conformance.

If a CA doesn't actively issue certificates, the audit may "possibly"
take fewer days, because less sampling verification is needed than for a
CA that actively issues Certificates. I think both WebTrust and ETSI
have a standard for Auditors to calculate audit days, so there is an
absolute minimum for all the controls and then they add days according
to the CA's operations, locations and so on. In other words, an audit
for a "wind-down" CA might be cheaper than one for an actively issuing
CA, but there is a baseline.

Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Allowing WebExtensions to Override Certificate Trust Decisions

2018-02-28 Thread Dimitris Zacharopoulos via dev-security-policy

On 28/2/2018 1:52 πμ, Ryan Sleevi via dev-security-policy wrote:

On Tue, Feb 27, 2018 at 6:15 PM, Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


In the bug I referenced as [2], people said that they specifically need to
be able to override "negative" certificate validation decisions, so they
may not see this as a compromise. I think an example would be a site
serving a self-signed certificate for a DANE add-on to validate.


I think some of it may relate to Moz Platform questions, since they go to
the heart of extensions' behaviours.

For example, can extensions allow mixed content when it's been blocked? Can
they disable sandboxing if the user requests? There's a spectrum of
decisions that a browser makes as an intrinsic part of guaranteeing the
security of its users that it does not allow extensions or the like to
override, and may not even allow users themselves to override.

The design of the new extension model is to try to explicitly make sure
each capability granted to extensions is balanced in its security rationale
and functionality, and aligns the collective risks against the individual
rewards. There's a full spectrum here, well beyond PKI bits, so that's why
I suggest it strikes a bit at the core. It was one of the big benefits of the
process-sandboxing efforts, as extensions no longer had an 'implicit'
backdoor into the browser process.

Would you consider extensions that enabled SHA-1 automatically or disabled
technical enforcement of CAs? Fundamentally, the capability to alter trust
either grants that ability or is no-different-to that ability. What about
an extension called "HTTPS-made-easy" that just disabled all certificate
errors, on the view that the Web should be like it was in the HTTP days,
and solving the technical hurdle? What about vendors that force-install
extensions to Firefox users so they can use a shared key for all of their
installations? All of these things become possible or significantly easier
with an extension that can confer positive trust on something that Firefox
has deemed negative.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


This reminds me of "Enterprise level" decisions where some custom 
options in FF (or Chrome) are allowed to disable some default features 
(like checking CT). However, these options have been carefully studied 
and designed "explicitly" to allow some security exceptions. 
Perhaps the FF team could provide the proper config options for some 
well-defined cases that are driven by clear user needs.


DANE is a good case. QWACs (in addition to EV validation) would also be 
a nice case, resulting in an additional indicator next to the existing 
EV indicator. The latter would not even need to bypass the default 
browser checks that would normally be performed for an EV SSL/TLS 
Certificate. If the EV validation fails, the entire check fails. But if 
the EV validation succeeds, a WebExtension that checks whether the 
Certificate chains to a CA with "granted" status in the EU-TL could 
display an additional indicator to the user.



Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Taiwan GRCA Root Renewal Request

2018-01-28 Thread Dimitris Zacharopoulos via dev-security-policy


On 26/1/2018 11:54 μμ, Ryan Sleevi via dev-security-policy wrote:
> Has any consideration been given to adopt a similar policy as discussed
> with the Government of Korea application -
> https://bugzilla.mozilla.org/show_bug.cgi?id=1226100#c38


Just to avoid any possible mis-reading of:

"If you have intermediates for which you cannot disclose, whether it be for 
personal, operational, or legal reasons, then an appropriate solution, 
consistent with Mozilla CA Certificate Policy, is to use Technically 
Constrained Subordinate CAs - as defined within the Baseline Requirements and 
as reflected within the Mozilla policy. Such TCSCAs are technically limited 
from the issuance of TLS certificates, and by doing so, are allowed to be 
operated in a way that is not consistent with the Baseline Requirements nor 
compliant with Mozilla Policy."


Currently, the Baseline Requirements (section 7.1.5) allow for TCSCAs to
issue TLS Certificates, by requiring the nameConstraints extension,
limiting the issuance to specific Domain Names and Organizations. These
TCSCAs MUST follow the Baseline Requirements, with the exceptions
provided for these types of TCSCAs.

As far as the Mozilla Policy is concerned, if a TCSC is technically
capable of issuing a Certificate for TLS authentication or S/MIME, it
MUST comply with the Mozilla policy, with the exceptions provided for
TCSCAs. Section 1.1 of the Mozilla Policy is fairly clear on the scope
of the policy. If there are possibly more exceptions, it should probably
be updated to reflect these cases.


Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ETSI Audits Almost Always FAIL to list audit period

2017-11-01 Thread Dimitris Zacharopoulos via dev-security-policy
This is a long thread but the topic is very critical so I hope people 
are patient enough to read through this long discussion.


On 1/11/2017 12:37 πμ, Ryan Sleevi wrote:



On Tue, Oct 31, 2017 at 5:29 PM, Dimitris Zacharopoulos via 
dev-security-policy <dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org>> wrote:


I don't believe your statement is supported by the evidence - which is
why I'm pushing you to provide precise references. Consider from the
perspective as a consumer of such audits - there is zero awareness of
the contract as to whether or not the BRs were in scope - after all,
319 411-1 is meant to be inclusive of the normative requirements with
respect to audit supervision.


My statement that auditors are governed by 17065 and 403 is
supported by evidence (section 1 of 411-1, where it says that ETSI
EN 319 403 provides guidance to auditors that wish to audit the
411-1 standard). Also, the BRs are normative for 411-1 as stated
in section 2.1 of the same document. Normative references to the
BRs are all over the 411-1 document, unless I misunderstood your
statement.


I think you did, so I'll try to repeat.

As you know, for both ETSI and WebTrust criteria, what is normative is 
the requirements within the respective documents. That is, regardless 
of what the BRs say (or don't say), what is audited is the criteria. 
Section 2.1 lists the BRs as a normative reference, but that is 
because specific auditable criteria are derived from it, not because 
it's fully incorporated by reference. That is, if you imagine a ballot 
passing the CABF (which itself is hard to imagine) that said "CAs 
shall keep a rubber duck next to the HSM", and it was adopted, this 
wouldn't necessarily immediately cause ETSI or WebTrust audits to 
fail, because that requirement hasn't yet had an auditable criteria 
derived from it. That's why I'm suggesting that, for sake of 
discussion, auditors ignore whats in the BRs - unless specifically 
told (by the WebTrust or ETSI documents) to examine specific sections.


As to "whether or not they were in scope", my point was that a 319 
403/401 audit has the contract define the scope of the period and 
activities, and that's not part of the final reporting mechanism. As 
such, there's no public attestation as to the period of evidence 
examined. I expand on that more below.


But stepping back further from the contract, the claim that "the audit
covers operations for one year" is also not part of the 17065, 17021,
or 319 403 oversight. That is, the certification is forward looking
(as evidenced by the expiration), and while it involves historic
review, it is not, in and of itself, a statement of assurance of the
historic activities. This is the core difference between the
17021/17065 evaluation of processes and products versus, say, the
ISAE3000 assurance evaluation.

I read the ISAE3000 and can't find specific language to support a
core difference in auditor guidance, especially related to the
assurance of the historic activities. Perhaps there is a more
specific section you can reference.


http://www.ifac.org/system/files/downloads/b012-2010-iaasb-handbook-isae-3000.pdf

Pages 304 and 305
Assurance Report Content
49. The assurance report should include the following basic elements:
...
(c) An identification and description of the subject matter information
and, when appropriate, the subject matter: this includes for example:
The point in time or period of time to which the evaluation or
measurement of the subject matter relates;



Sure, this is about the contents of the report and it's a "should". From 
a RP perspective, I perfectly understand the concern being raised here 
that critical information (as the audit period) is missing from some 
ETSI reports (as Kathleen said, there are good ETSI reports that include 
this information) but audit reports are one thing and raising concerns 
about the audit scheme is another.




The eIDAS Regulation mandates for 2-year audits (not the
ETSI EN 319
411-1). This has been reflected in the ETSI EN 319 403
audit scheme, under
7.4.6 (Audit Frequency), which states:

"There shall be a period of no greater than two years for a full
(re-)assessment audit unless otherwise required by the applicable
legislation or commercial scheme applying the present document.

NOTE: A surveillance audit can be required by an entitled party at any
time or by the conformity assessment body as defined by the
surveillance programme according to clause 

Re: ETSI Audits Almost Always FAIL to list audit period

2017-10-31 Thread Dimitris Zacharopoulos via dev-security-policy



On 31/10/2017 11:21 πμ, Dimitris Zacharopoulos via dev-security-policy 
wrote:


It is not the first time this issue is brought up. While I have a very 
firm opinion that ETSI auditors under the ISO 17065 (focused on the 
quality of products/services) and ETSI EN 319 403 definitely check 
historical data to assess the level of conformance, I will communicate 
this to our auditor and ask if they would like to provide more 
specific feedback. 


Here is the feedback from our auditor. I understand that the original 
concern (whether ETSI auditors MUST check historical data a.k.a. 
"period-in-time") is answered in section 7.9 of ETSI EN 319 403.


 Forwarded Message 
Subject:RE: ETSI Audits Almost Always FAIL to list audit period
Date:   Tue, 31 Oct 2017 15:33:31 +0200
From:   Nikolaos Soumelidis <qms...@qmscert.com>
Organization:   QCERT
To: 'Dimitris Zacharopoulos' <ji...@it.auth.gr>



Long story short, as an accredited CAB, we _definitely_ must check 
historical data over the period since previous audit. This requirement 
is clearly included in Section 7.9 of ETSI EN 319 403 
<http://www.etsi.org/deliver/etsi_en/319400_319499/319403/02.02.02_60/en_319403v020202p.pdf>:


“In addition, a sample of records relating to the operation of TSP over 
the historical period since the previous audit shall be examined by the 
auditor.”


Also (in the same section):

“The Conformity Assessment Body shall define a programme of periodic 
surveillance and re-assessment that includes on-site audits to verify 
that TSPs and trust services they provide continue to comply with the 
requirements. It is recommended that at least one surveillance audit per 
year is performed in between full (re-)assessment audits.”


The above is closely linked to Section 7.9.4 of the more generic ISO/IEC 
17065:2012 <https://www.iso.org/standard/46568.html> which requires 
that, since this is a product / service certification:


“When continuing use of a certification mark is authorized for a 
process or service, surveillance shall be established and shall include 
periodic surveillance activities to ensure ongoing validity of the 
demonstration of fulfilment of process or service requirements.”


CA/B Forum BR 
<https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.5.1.pdf> 
takes a slightly more time-specific approach in Section 8.1:


“The period during which the CA issues Certificates SHALL be divided 
into an unbroken sequence of audit periods. An audit period MUST NOT 
exceed one year in duration.”


Thus, I agree with you, *this is mainly an audit report issue*, and it 
is due to a difference in the terminology and approach.


For example, in all our audits for other standards, no “audit period” is 
clearly documented in the report; time since previous audit is always 
implied.


As a more general remark:

CABs are auditing *only* according to specific rules and requirements 
set by the standards.


Any private sector requirements, such as the CA/B Forum BR, are also 
verified during the audit only when they are referenced by the standards 
or if this is requested explicitly by the TSP itself.


Any CAB who *is accredited* to conduct audits according to ISO/IEC 17065 
+ ETSI EN 319 403 meets their requirements and *this is assessed by its 
NAB* (both in office and in the field). It does not provide “some extra 
level of assurance”, it is the cornerstone of the accreditation scheme.


This accreditation is *accepted internationally* due to the 
participation of the NAB to EA MLA 
<http://www.european-accreditation.org/the-mla> and IAF 
<http://www.european-accreditation.org/iaf-and-ilac> (i.e. similar to 
the cross-certification between Root CA’s concept). Its purpose is to 
ensure *equivalence and reliability*.


If CA/B Forum requires an audit period to be clearly defined and 
this information to be present in the final report, *this should be 
communicated* to all accredited CABs either directly (one can find this 
list here 
<https://ec.europa.eu/futurium/en/content/list-conformity-assessment-bodies-cabs-accredited-against-requirements-eidas-regulation>) 
or through the TSPs which participate in Root Programs. Even better, 
disclose its recommended “ETSI audit report template”, common across all 
Root Programs.


I hope this assists your discussions.

We are always available to provide clarifications and detailed 
information on all parts of the certification and report processes.


Best Regards,

Nikolaos Soumelidis
QMS – ISMS Lead Auditor

QMSCERT INSPECTION - CERTIFICATION

Main Office
October 26th No 90
Thessaloniki 54627

Branch Office
Vlasiou Gavriilidi Str. No 28
Thessaloniki 54 655

Tel. +30-2310-535-198 (internal: 569)
+30-2310-443-041
Fax. +30-2310-535-008
http://www.qmscert.com/

Be green - keep it on the screen!

Re: ETSI Audits Almost Always FAIL to list audit period

2017-10-31 Thread Dimitris Zacharopoulos via dev-security-policy

On 31/10/2017 1:37 μμ, Ryan Sleevi via dev-security-policy wrote:

On Tue, Oct 31, 2017 at 5:21 AM Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:


It is not the first time this issue is brought up. While I have a very
firm opinion that ETSI auditors under the ISO 17065 (focused on the
quality of products/services) and ETSI EN 319 403 definitely check
historical data to assess the level of conformance, I will communicate
this to our auditor and ask if they would like to provide more specific
feedback.

During the CA/Browser Forum F2F 41 in Berlin, it was stated that TUV-IT
(CAB and chair in ACAB-c), was in discussions with Root Programs to
determine an "ETSI audit report template" that would include all
critical information that Root programs would like to be included in the
public (or browser) audit letter/report. Minutes
(https://cabforum.org/2017/06/21/2017-06-21-f2f-minutes-meeting-41-berlin/
)

--- BEGIN QUOTE ---

Clemens Wanko from TÜVIT/ACABc – “Update: Addressing Browser Audit
Requirements under eIDAS/ETSI”

Clemens said that there were several discussions with the Browsers that
resulted in an audit report template that would meet the Browser’s
expectations.

Dimitris asked if this template could be posted on the public mailing list.

--- END QUOTE ---

Until today, such a template has not been published or circulated either
in the CA/Browser Forum or the m.d.s.p. I hope this discussion will push
for this template to be published.


Do you believe that the requirements stated in the policy are unclear? That
is, as Kathleen mentioned, the Mozilla policy states all the information
that must be present, as a template of what needs to be there. Perhaps this
is just confusion as to expecting, say, Mozilla to provide a PDF of a cover
sheet?


I do not believe the requirements are unclear which is why we have seen 
this information included properly in many ETSI audit reports.
If Mozilla finds this problem repeating for some ETSI reports, perhaps a 
guidance on the expected audit template would be a good place to start. 
Webtrust had different-looking reports in the past until the Webtrust 
Committee issued templates as guidance for practitioners.



I believe the issue being raised here is more of an audit report issue


I am addressing this part of the discussion as well :)


and not of audit criteria. Auditors under the ETSI audit scheme, just as
with the Webtrust scheme, in order for the audit to be "effective", must
obtain evidence of actions that took place in the past. How far back,
should be determined by the audit criteria and requirements. For
example, the Baseline Requirements and Root programs require a full
audit to occur once a year which means auditors must collect evidence
from "at least" one year. Auditors may examine evidence even further
back if they consider that this is required in order for them to get a
better understanding of CA operations for their assessment.


I don’t believe this is an accurate representation. You are correct that
historical evidence must be examined, but none of the aforementioned audit
criteria establish that a year must be examined. The BRs state annual
certification, but this is both irrelevant (the audits are to 319 411, not
the BRs) and misleading (you can be annually certified without examining
annual performance).

Perhaps you can highlight where the requirement is to opine on the past
year of activities. As you know, 319 411-1 is itself insufficient in this
regard, as it expects (full) audits every other year - a problem that has
occurred with a number of auditors performing surveillance audits rather
than full audits.


I think you are looking at this from the opposite side. Auditors have 
their own scheme to follow which is governed under ISO 17065 and ETSI EN 
319 403. These schemes provide guidance on how to conduct an effective 
audit for ETSI EN 319 401, 411-1, 411-2, 421 and so on. In addition to 
these, there are National Accreditation Body schemes for specific audits 
which provide additional guidance. When the audit covers operations for 
one year (mandated by the Baseline Requirements and which finds its way 
in the contract between the CA and the CAB), the sampling must include 
evidence from the entire year.


The eIDAS Regulation mandates for 2-year audits (not the ETSI EN 319 
411-1). This has been reflected in the ETSI EN 319 403 audit scheme, 
under 7.4.6 (Audit Frequency), which states:


"There shall be a period of no greater than two years for a full 
(re-)assessment audit unless otherwise required by the applicable 
legislation or commercial scheme applying the present document.


NOTE: A surveillance audit can be required by an entitled party at any 
time or by the conformity assessment

body as defined by the surveillance programme according to clause 7.9."

Also, as we discussed at F2F 38 in Bilbao and is covered in the minutes 

Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-08-27 Thread Dimitris Zacharopoulos via dev-security-policy

On 25/8/2017 9:42 μμ, Ryan Hurst via dev-security-policy wrote:

Dimitris,

I think it is not accurate to characterize this as being outside of the CAs 
controls. Several CAs utilize multiple network perspectives and consensus to 
mitigate these risks. While this is not a total solution it is fairly effective 
if the consensus pool is well thought out.

Ryan


Just to make sure I am not misunderstanding, are you referring to CAs 
with real-time access to the Full Internet Routing Table that allows 
them to make routing decisions or something completely different? If 
it's something different, it would be great if you could provide some 
information about how this consensus over network perspectives (between 
different CAs) works today.  There are services that offer 
routing-status like https://stat.ripe.net/widget/routing-status or 
https://www.cidr-report.org/as2.0/ but I don't know if they are being 
used by CAs to minimize the chance of accepting a hijacked address 
prefix (Matt's example).
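For illustration, the multi-perspective approach Ryan describes can be sketched as a simple quorum over independent vantage-point checks: the same validation challenge is fetched from several network perspectives and issuance proceeds only if enough of them agree. The vantage points and threshold here are hypothetical, not any particular CA's implementation.

```python
from typing import Callable, List

def quorum_validate(checks: List[Callable[[], bool]], quorum: int) -> bool:
    """Run each vantage-point check and succeed only if at least
    `quorum` of them independently confirm the validation challenge."""
    successes = sum(1 for check in checks if check())
    return successes >= quorum

# Three hypothetical vantage points: two observed the expected challenge
# token, one (perhaps behind a hijacked path) did not.
perspectives = [lambda: True, lambda: True, lambda: False]
print(quorum_validate(perspectives, quorum=2))  # True: 2 of 3 agree
```

The idea is that a BGP hijack visible from one perspective is unlikely to be visible from all of them, so a well-chosen consensus pool raises the bar for the attacker.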


Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-08-24 Thread Dimitris Zacharopoulos via dev-security-policy

On 26/7/2017 3:38 πμ, Matthew Hardeman via dev-security-policy wrote:

On Tuesday, July 25, 2017 at 1:00:39 PM UTC-5,birg...@princeton.edu  wrote:

We have been considering research in this direction. PEERING controls several 
ASNs and may let us use them more liberally with some convincing. We also have 
the ASN from Princeton that could be used with cooperation from Princeton OIT 
(the Office of Information Technology) where we have several contracts. The 
problem is not the source of the ASNs but the network anomaly the announcement 
would cause. If we were to hijack the prefix of a cooperating organization, the 
PEERING ASes might have their announcements filtered because they are seemingly 
launching BGP attacks. This could be fixed with some communication with ISPs, 
but regardless there is a cost to launching such realistic attacks. Matthew 
Hardeman would probably know more detail about how this would be received by 
the community, but this is the general impression I have got from engaging with 
the people who run the PEERING framework.

I have some thoughts on how to perform such experiments while mitigating the 
likelihood of significant lasting consequence to the party helping ingress the 
hijack to the routing table, but you correctly point out that the attack 
surface is large and the one consistent feature of all discussion up to this 
point on the topic of BGP hijacks for purpose of countering CA domain 
validation is that none of those discuss have, up to this point, expressed 
doubt as to the risks or the feasibility of carrying out these risks.  To that 
ends, I think the first case that would need to be made to further that 
research is whether anything of significance is gained in making the attack 
more tangible.


So far we have not been working on such an attack very much because we are 
focusing our research more on countermeasures. We believe that the attack 
surface is large and there are countless BGP tricks an adversary could use to 
get the desired properties in an attack. We are focusing our research on simple 
countermeasures CAs can implement to reduce this attack space. We also aim 
to use industry contacts to accurately assess the false positive rates of our 
countermeasures and develop example implementations.

If it appears that actually launching such a realistic attack would be valuable 
to the community, we certainty could look into it further.

This is the question to answer before performing such an attack.  In effect, 
who is the audience that needs to be impressed?  What criteria must be met to 
impress that audience?  What benefits in furtherance of the work arise from 
impressing that audience?

Thanks,

Matt Hardeman
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


That was a very interesting topic to read. Unfortunately, CAs can't do 
much to protect against network hijacking because most of the 
counter-measures lie on the ISPs' side. However, the CAs could request 
some counter-measures from their ISPs.


Best practices for ISPs state that for each connected peer, the ISP needs 
to apply a prefix filter that will allow announcements for only 
legitimate prefixes that the peer controls/owns. We can easily imagine 
that this is not performed by all ISPs. Another solution that has been 
around for some time is RPKI (Resource Public Key Infrastructure), along 
with BGP Origin Validation. Of course, we can't expect all ISPs to check 
for Route Origin Authorizations (ROAs), but if the major ISPs checked 
for ROAs, it would improve things a lot in terms of securing the Internet.


So, in order to minimize the risk for a CA or a site owner network from 
being hijacked, if a CA/site owner has an address space that is Provider 
Aggregatable (PA) (this means the ISP "owns" the IP space), they should 
check that their upstream network provider has properly created the ROAs 
for the CA/site operator's network prefix(es) in the RIR authorized 
list, and that they have configured their routers to validate ROAs for 
each prefix. If the CA/site operator has a Provider Independent (PI) 
address space (this means the CA/site operator "owns" the IP space), 
then the CA/site operator should create the ROAs.


In Matt's example, if eff.org had ROAs for their network prefixes (which 
include their DNS servers) and if Let's Encrypt's provider (or Let's 
Encrypt's router) were validating ROAs, this attack wouldn't work.
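To make the ROA check concrete, here is a minimal sketch of RFC 6811-style route origin validation: an announcement is "valid" only if some ROA covers the prefix with a matching origin AS and a maxLength no shorter than the announced prefix. The prefixes and AS numbers below are made up for illustration and are not taken from any real routing table.

```python
import ipaddress

def rov_state(announced_prefix, origin_as, roas):
    """Classify a BGP announcement against a list of ROAs, in the
    style of RFC 6811: returns 'valid', 'invalid', or 'not-found'.
    Each ROA is a (prefix, origin_as, max_length) tuple."""
    prefix = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa_prefix, roa_as, max_len in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # A ROA "covers" the announcement if the announced prefix is
        # inside the ROA prefix (same address family).
        if prefix.version == roa_net.version and prefix.subnet_of(roa_net):
            covered = True
            if origin_as == roa_as and prefix.prefixlen <= max_len:
                return "valid"
    # Covered but no matching (AS, maxLength) pair => invalid (hijack-like);
    # no covering ROA at all => not-found.
    return "invalid" if covered else "not-found"

# Hypothetical ROA: 192.0.2.0/24 may be originated by AS64500, up to /24.
roas = [("192.0.2.0/24", 64500, 24)]
print(rov_state("192.0.2.0/24", 64500, roas))     # valid
print(rov_state("192.0.2.0/25", 64501, roas))     # invalid (wrong origin AS)
print(rov_state("198.51.100.0/24", 64500, roas))  # not-found (no covering ROA)
```

In the eff.org scenario above, a hijacker announcing a more-specific prefix from a different origin AS would be classified "invalid" and the route dropped by a validating router.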



Dimitris.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-21 Thread Dimitris Zacharopoulos via dev-security-policy



On 19/5/2017 6:04 μμ, Jakob Bohm via dev-security-policy wrote:

On 19/05/2017 16:15, Gervase Markham wrote:

On 19/05/17 14:58, Jakob Bohm wrote:

Because the O and other dirname attributes may be shown in an e-mail
client (current or future) as a stronger identity than the technical
e-mail address.


Do you know of any such clients?



No, but it would be similar to how Fx displays that field in EV certs,
so a future Thunderbird, or a non-Mozilla client could reasonably do
something similar, even at OV level.


It doesn't have to be displayed in a client UI. It is information in the 
Subject of the Certificate and Relying Parties read and decide what to 
do with this information. I think we need to describe some use cases to 
better understand if dirName in permittedSubtrees must be required.


One case, is issuing a TCSC for an organization so that this 
organization (and possibly its affiliates) can issue personal 
certificates for employees. These personal certificates, apart from 
document signing/client authentication, could also be used for s/mime.


Just as section 7.1.5 of the BRs for TCSCs requires a dirName present in 
the permittedSubtrees, having a similar requirement for 
email-constrained TCSCs reduces the risk of having end-entity 
certificates that bind particular users (e.g. CN=John Doe) to an 
organization (O=Very High Profile Corporation). If the TCSC was 
restricted to dirName="C=XX, L=XXX, O=ACME", the risk is lower. The 
administrator could still allow any e-mail address to be included in the 
end-entity certificates.


Another case that was described in this thread is an e-mail provider 
(such as Gmail) that wants to constrain issuance via a TCSC for 
@gmail.com. However, as Gerv pointed out, they would need to allow only 
information related to their customers (CN=John Doe and 
emailAddress=jsomeu...@gmail.com). I don't think dirName entries in 
permittedSubtrees allow such a representation. If there was a way to 
limit this, we would have a solution for both cases.


Are there any other cases we should consider in this discussion? IMHO, 
because of the risk associated with the first use case (incorrect binding 
between a natural person and an organization), TCSCs should require a 
dirName.



Dimitris.





Imagine a certificate saying that ge...@wosign.cn is "CN=Gervase
Markham, O=Mozilla Corporation, ST=California, C=US", issued by a
SubCA name constrained to "@wosign.cn", but not to any range of DNs.


Surely such a certificate would be misissued? Although I guess the issue
here is that we are excluding them from scope...

So the idea would be to say that dirName had to be constrained to either
be empty (is that possible?) or to contain a dirNames validated as
correctly representing an organization owning at least one of the domain
name(s) in the cert?



Rather: It should be constrained to an X.500 subtree identifying an
organization validated to at least BR compliant OV level (EV level if
SubCA notBefore after some policy date) as for a ServerAuth certificate
for the same domain names specified in the rfc822name restrictions.

Keeps it short and simple and subject to well-understood policies.

Enjoy

Jakob


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-08 Thread Dimitris Zacharopoulos via dev-security-policy

On 8/5/2017 1:18 μμ, Gervase Markham wrote:

On 05/05/17 19:44, Dimitris Zacharopoulos wrote:

  * MUST include an EKU that has the id-kp-emailProtection value AND
  * MUST include a nameConstraints extension with
    * a permittedSubtrees with
      * rfc822Name entries scoped in the Domain (@example.com) or
        Domain Namespace (@example.com, @.example.com) controlled by
        an Organization and

It's this part that I'm looking for good wording for to make sure I
don't accidentally exclude valid use cases.


      * dirName entries scoped in the Organizational name and location

Help me understand how dirName interacts with id-kp-emailProtection?


When the Subscriber belongs to an Organization that needs to be included 
in the subjectDN.



Dimitris.




(a) For each rfc822Name in permittedSubtrees, the CA MUST confirm that
the Applicant has registered the Domain or Domain Namespace or has been
authorized by the domain registrant to act on the registrant's behalf in
line with the verification practices of section 3.2.2.4.
(b) For each DirectoryName in permittedSubtrees the CA MUST confirm the
Applicants and/or Subsidiary’s Organizational name and location such
that end entity certificates issued from the subordinate CA Certificate
will be in compliance with section 7.1.2.4 and 7.1.2.5.

Does anyone see problems with this language?

Gerv


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-08 Thread Dimitris Zacharopoulos via dev-security-policy

On 6/5/2017 1:19 πμ, Peter Bowen via dev-security-policy wrote:

One other question: Does your proposal allow a TCSC that covers both
ServerAuth and EmailProtection for the domains of the same organization?

Or put another way, would your proposed language force an organization
wanting to run under its own TCSC(s) to obtain two TCSCs, one for their
S/MIME needs and another for their TLS needs?

Yes, it allows a single TCSC that does both.  The little three diamond
symbol means parallel, so both legs are evaluated at the same time.
If both get to "Goto B", then it is a single TCSC that can issue both
serverAuth and emailProtection certs.


As Gerv pointed out in previous messages, this issue is described in 
https://github.com/mozilla/pkipolicy/issues/26. The current Mozilla 
policy does not force separating Intermediate CAs for serverAuth and 
emailProtection certs.


Microsoft's Policy currently says:


"New intermediate CA certificates under root certificates submitted 
for distribution by the Program must separate Server Authentication, 
S/MIME, Code Signing and Time Stamping uses. This means that a single 
intermediate issuing CA must not be used to issue both server 
authentication, S/MIME, and code signing certificates. A separate 
intermediate must be used for each use case."


It would be ideal if both policies were aligned to either allow both 
serverAuth and emailProtection from the same Intermediate CA, or 
separate them. As we are today, CAs that participate in Mozilla and 
Microsoft Root program need to comply with the more restrictive policy.



Dimitris.


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-05 Thread Dimitris Zacharopoulos via dev-security-policy



On 5/5/2017 10:58 PM, Peter Bowen wrote:

On Fri, May 5, 2017 at 11:58 AM, Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:


On 5/5/2017 9:49 PM, Peter Bowen via dev-security-policy wrote:

On Fri, May 5, 2017 at 11:44 AM, Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

Looking at https://github.com/mozilla/pkipolicy/issues/69

do you have a proposed language that takes all comments into account?
From
what I understand, the Subordinate CA Certificate to be considered
Technically Constrained only for S/MIME:

   * MUST include an EKU that has the id-kp-emailProtection value AND
   * MUST include a nameConstraints extension with
       o a permittedSubtrees with
           + rfc822Name entries scoped in the Domain (@example.com) or
             Domain Namespace (@example.com, @.example.com) controlled by
             an Organization and
           + dirName entries scoped in the Organizational name and location
       o an excludedSubtrees with
           + a zero‐length dNSName
           + an iPAddress GeneralName of 8 zero octets (covering the IPv4
             address range of 0.0.0.0/0)
           + an iPAddress GeneralName of 32 zero octets (covering the
             IPv6 address range of ::0/0)

Why do we need to address dNSName and iPAddress if the only EKU is
id-kp-emailProtection?

Can we simplify this to just requiring at least one rfc822Name entry
in the permittedSubtrees?


I would be fine with this but there may be implementations that ignore the
EKU at the Intermediate CA level.

I've only ever heard of people saying that adding EKU at the
intermediate level breaks things, not that things ignore it.


You are probably right. Two relevant threads:

 * https://www.ietf.org/mail-archive/web/pkix/current/msg33507.html and
 * an older one from year 2000
   (https://www.ietf.org/mail-archive/web/pkix/current/msg06821.html)

I don't know whether all implementations doing path validation use the 
EKUs at the CA level, but it seems that the most popular applications do.
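The behaviour being discussed can be illustrated with a minimal sketch (my own model, not any particular validator's code): several widely deployed implementations effectively intersect the EKU sets down the chain, treating a certificate with no EKU extension as unconstrained, even though RFC 5280 does not define EKU processing for CA certificates.

```python
def effective_ekus(chain_ekus):
    """Intersect EKU sets from root to leaf.

    chain_ekus: list of EKU sets ordered root -> leaf, where None
    means the certificate carries no EKU extension (unconstrained).
    Returns the EKUs the leaf may effectively be used for, mirroring
    the common (non-RFC 5280) practice of honouring EKUs in CA certs.
    """
    allowed = None  # None = any usage so far
    for ekus in chain_ekus:
        if ekus is None:
            continue  # no EKU extension: does not constrain the chain
        allowed = set(ekus) if allowed is None else allowed & set(ekus)
    return allowed

# An intermediate limited to emailProtection strips serverAuth
# from a leaf that asserts both:
chain = [None, {"emailProtection"}, {"serverAuth", "emailProtection"}]
print(effective_ekus(chain))  # -> {'emailProtection'}
```

Under this model, an id-kp-emailProtection-only intermediate cannot yield a usable serverAuth leaf, which is why the EKU alone is often treated as a technical constraint.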





So, if we want to align with both the CA/B
Forum BRs section 7.1.5 and the Mozilla Policy for S/MIME, perhaps we should
keep the excludedSubtrees.

The BRs cover serverAuth.


Of course they do; I was merely trying to re-use the same language for 
S/MIME usage :)



Dimitris.


If you look at
https://imagebin.ca/v/3LRcaKW9t2Qt, you will see that TCSC will end up
being two independent tests.

Thanks,
Peter





Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-05 Thread Dimitris Zacharopoulos via dev-security-policy



On 5/5/2017 9:49 PM, Peter Bowen via dev-security-policy wrote:

On Fri, May 5, 2017 at 11:44 AM, Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

Looking at https://github.com/mozilla/pkipolicy/issues/69

do you have a proposed language that takes all comments into account? From
what I understand, the Subordinate CA Certificate to be considered
Technically Constrained only for S/MIME:

  * MUST include an EKU that has the id-kp-emailProtection value AND
  * MUST include a nameConstraints extension with
  o a permittedSubtrees with
  + rfc822Name entries scoped in the Domain (@example.com) or
Domain Namespace (@example.com, @.example.com) controlled by
an Organization and
  + dirName entries scoped in the Organizational name and location
  o an excludedSubtrees with
  + a zero‐length dNSName
  + an iPAddress GeneralName of 8 zero octets (covering the IPv4
address range of 0.0.0.0/0)
  + an iPAddress GeneralName of 32 zero octets (covering the
IPv6 address range of ::0/0)

Why do we need to address dNSName and iPAddress if the only EKU is
id-kp-emailProtection?

Can we simplify this to just requiring at least one rfc822Name entry
in the permittedSubtrees?


I would be fine with this but there may be implementations that ignore 
the EKU at the Intermediate CA level. So, if we want to align with both 
the CA/B Forum BRs section 7.1.5 and the Mozilla Policy for S/MIME, 
perhaps we should keep the excludedSubtrees.


Dimitris.


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-05 Thread Dimitris Zacharopoulos via dev-security-policy


Looking at https://github.com/mozilla/pkipolicy/issues/69

do you have proposed language that takes all comments into account? 
From what I understand, for the Subordinate CA Certificate to be 
considered Technically Constrained only for S/MIME, it:


 * MUST include an EKU that has the id-kp-emailProtection value AND
 * MUST include a nameConstraints extension with
 o a permittedSubtrees with
 + rfc822Name entries scoped in the Domain (@example.com) or
   Domain Namespace (@example.com, @.example.com) controlled by
   an Organization and
 + dirName entries scoped in the Organizational name and location
 o an excludedSubtrees with
 + a zero‐length dNSName
 + an iPAddress GeneralName of 8 zero octets (covering the IPv4
   address range of 0.0.0.0/0)
 + an iPAddress GeneralName of 32 zero octets (covering the
   IPv6 address range of ::0/0)
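For reference, the matching rules for rfc822Name constraints (RFC 5280 section 4.2.1.10 defines the three constraint forms: a particular mailbox, all mailboxes on a host, and all mailboxes in a domain indicated by a leading period) can be sketched as follows. The function name and structure are my own, purely illustrative:

```python
def rfc822_name_matches(email, constraint):
    """Match an email address against an rfc822Name name constraint,
    following RFC 5280 section 4.2.1.10:
      - "user@example.com" -> exactly that mailbox
      - "example.com"      -> any mailbox on that host
      - ".example.com"     -> any mailbox on any host in that domain
    """
    email, constraint = email.lower(), constraint.lower()
    if "@" in constraint:               # a particular mailbox
        return email == constraint
    host = email.rsplit("@", 1)[-1]
    if constraint.startswith("."):      # whole domain (subdomain hosts)
        return host.endswith(constraint)
    return host == constraint           # all mailboxes on one host

print(rfc822_name_matches("alice@example.com", "example.com"))        # True
print(rfc822_name_matches("alice@mail.example.com", ".example.com"))  # True
```

Note that the "@example.com" notation used in the bullets above is shorthand; the encoded constraint itself carries no "@" in the host-only and domain forms.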

Borrowing language from BRs 7.1.5, it would look like this:

"If the Subordinate CA Certificate includes the id‐kp‐emailProtection 
extended key usage, then the Subordinate CA Certificate MUST include the 
Name Constraints X.509v3 extension with constraints on rfc822Name and 
DirectoryName as follows:


(a) For each rfc822Name in permittedSubtrees, the CA MUST confirm that 
the Applicant has registered the Domain or Domain Namespace or has been 
authorized by the domain registrant to act on the registrant's behalf in 
line with the verification practices of section 3.2.2.4.
(b) For each DirectoryName in permittedSubtrees, the CA MUST confirm the 
Applicant's and/or Subsidiary's Organizational name and location such 
that end-entity certificates issued from the Subordinate CA Certificate 
will be in compliance with sections 7.1.2.4 and 7.1.2.5.


If the Subordinate CA Certificate is not allowed to issue certificates 
with an iPAddress, then the Subordinate CA Certificate MUST specify the 
entire IPv4 and IPv6 address ranges in excludedSubtrees. The Subordinate 
CA Certificate MUST include within excludedSubtrees an iPAddress 
GeneralName of 8 zero octets (covering the IPv4 address range of 
0.0.0.0/0). The Subordinate CA Certificate MUST also include within 
excludedSubtrees an iPAddress GeneralName of 32 zero octets (covering 
the IPv6 address range of ::0/0). Otherwise, the Subordinate CA 
Certificate MUST include at least one iPAddress in permittedSubtrees.


If the Subordinate CA is not allowed to issue certificates with 
dNSNames, then the Subordinate CA Certificate MUST include a zero‐length 
dNSName in excludedSubtrees. Otherwise, the Subordinate CA Certificate 
MUST include at least one dNSName in permittedSubtrees."


Although this might seem overkill (perhaps the EKU should be sufficient 
and we could remove the requirement for excludedSubtrees), it clearly 
narrows down the scope of such a Subordinate CA to S/MIME only.
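As a rough, untested illustration only, a profile along these lines might be expressed in OpenSSL's x509v3 configuration syntax roughly as below. The section names are invented, and the exact encoding details (in particular the zero-length excluded dNSName, which plain config syntax may not be able to express) should be verified against the tool before use:

```ini
[ smime_tcsc_ext ]
basicConstraints    = critical, CA:TRUE, pathlen:0
keyUsage            = critical, keyCertSign, cRLSign
extendedKeyUsage    = emailProtection
nameConstraints     = critical, @name_constraints

[ name_constraints ]
permitted;email.0   = example.com
permitted;dirName.0 = permitted_dirname
# a zero-length excluded dNSName is also required by the proposal;
# it may need to be added with a lower-level tool
excluded;IP.0       = 0.0.0.0/0.0.0.0
excluded;IP.1       = 0:0:0:0:0:0:0:0/0:0:0:0:0:0:0:0

[ permitted_dirname ]
O = Example Organization
C = GR
```

This is a sketch of the shape of such an intermediate, not a vetted profile.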



Dimitris.



On 5/5/2017 7:16 PM, Gervase Markham via dev-security-policy wrote:

On 01/05/17 09:55, Gervase Markham wrote:

"Each entry in permittedSubtrees must either be or end with a Public
Suffix." (And we'd need to link to publicsuffix.org)

Aargh. This should, of course, be "Public Suffix + 1" - i.e. an actual
domain owned by someone.


The second option is harder to spec, because I don't know the uses to
which TCSCs for email are put. Is the idea that they get handed to a
customer, and so it's OK to say that the domain names have to be
validated as being owned by the entity which has authority to command
issuance? Or are there scenarios I'm missing?

CAs who issue email certs need to pay attention here, as I want to close
this loophole but am at risk of making policy which does not suit you,
if you do not engage in this discussion.

Gerv