Re: Public Discussion of GlobalSign's CA Inclusion Request for R46, E46, R45 and E45 Roots

2021-02-12 Thread Nick Lamb via dev-security-policy
On Thu, 11 Feb 2021 15:12:46 -0500
Ryan Sleevi via dev-security-policy

> So I'd say feel free to ask your question there, which helps make
> sure it's answered before the issue is closed.

Good point. In this case Arvid has clarified that the ticket now has an
updated sheet which (though I haven't examined it yet) should satisfy my
question, so I shan't follow up there unless I have further questions.

> This is one of many outstanding items still
> for the Validation Working Group of the CA/B Forum, as possible
> mitigations were also discussed. In short, "capability URLs" (where
> the entire URL is, in effect, the capability) are dangerous.

Good to know.

> Note that there have been far more than "Ten Blessed Methods" since
> those discussions, so perhaps it's clearer to just say

Personally I just like the way "Ten Blessed Methods" sounds.

I wouldn't reliably recognise all Thirty Six Views of Mount Fuji,
everything except (what I'd call) Big Wave, and Watermill could be any
of dozens of imitators as far as this uneducated eye is concerned - and
of course there are actually ten more of them, but we still call it
"Thirty Six Views of Mount Fuji".

The addition (and deprecation) of methods is an expected and desirable
course for the Baseline Requirements, and I am watching even if I don't
comment on it.

However, because everything is formatted according to RFC 3647 (which is
a good thing), a section number doesn't carry the same implication as
"Ten Blessed Methods". BR 1.3.0 had such a section; it's just that it
didn't in fact set down which methods must be used, which is how we
got here in the first place.

But I'm not old enough just yet to be incapable of learning new tricks,
I've learned to call it a "blocklist" not a "blacklist" and I'm sure if
everybody really starts to refer only to "" I'll get used to

dev-security-policy mailing list

Re: Public Discussion of GlobalSign's CA Inclusion Request for R46, E46, R45 and E45 Roots

2021-02-11 Thread Nick Lamb via dev-security-policy
On Tue, 9 Feb 2021 14:29:15 -0700
Ben Wilson via dev-security-policy

> All,
> GlobalSign has provided a very detailed incident report in Bugzilla -
> see
> There are a few remaining questions that still need to be answered,
> so this email is just to keep you aware.
> Hopefully later this week I'll be able to come back and see if people
> are satisfied and whether we can proceed with the root inclusion
> request.

I have a question (if I should write it in Bugzilla instead, please say
so; it is unclear to me what the correct protocol is).

GlobalSign have provided a list of 112 other certificates which were
issued for the same reason. I examined some of them manually and
determined that they are unextraordinary in appearance (2048-bit RSA
keys, for example), so it's unsurprising we didn't notice they were
issued previously.

However, the list does not tell me when these certificates were ordered
or, if substantially different, when the email used to "validate" these
orders was sent.

As a result it's hard to be sure whether these certificates were issued
perhaps only a few weeks after they were ordered, which is a relatively
minor oversight, or, like the incident certificate, many years
afterwards. I'd like maybe a column of "order date" and "email sent
date" if the two can be different.


I also have noticed something that definitely isn't (just) for
GlobalSign. It seems to me that the current Ten Blessed Methods do not
tell issuers to prevent robots from "clicking" email links. We don't
need a CAPTCHA, just a "Yes I want this certificate" POST form ought to
be enough to defuse typical "anti-virus", "anti-malware" or automated
crawling / cache-building robots. Maybe I just missed where the BRs
tell you to prevent that, and hopefully even without prompting all
issuers using the email-based Blessed Methods have prevented this, 
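The idea costs very little to implement: the emailed link should have no
side effects on GET, and only an explicit POST should approve anything. A
minimal standard-library sketch (the handler and state names are mine for
illustration, not from any BR or real CA implementation):

```python
from http.server import BaseHTTPRequestHandler

approved = set()  # stands in for the CA's real order state

class ConfirmHandler(BaseHTTPRequestHandler):
    """Sketch of a validation-link endpoint that is safe against
    link-following robots: GET only renders a form, POST approves."""

    def do_GET(self):
        # Robot-safe: rendering the page changes nothing.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(
            b'<form method="POST"><button>'
            b'Yes, I want this certificate</button></form>')

    def do_POST(self):
        # The actual state change lives here, behind the button.
        approved.add(self.path)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet
```

An "anti-virus" crawler that fetches the GET URL sees only the form; the
order is approved only when the form is submitted.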


Re: The CAA DNS Operator Exception Is Problematic

2021-02-09 Thread Nick Lamb via dev-security-policy
On Mon, 8 Feb 2021 13:40:05 -0500
Andrew Ayer via dev-security-policy

> The BRs permit CAs to bypass CAA checking for a domain if "the CA or
> an Affiliate of the CA is the DNS Operator (as defined in RFC 7719)
> of the domain's DNS."

Hmm. Would this exemption be less dangerous for a CA which is the
Registry for the TLD ?

I can see that there is a set of potential problems that can happen
where an entity mistakenly believes it is the DNS Operator when it in
fact is not, because there's a difference between configuring your DNS
servers to answer (I can tell mine to answer for any name I like) and
actually having the authority to answer. But it seems pretty clear that
either you are the registry for some TLD or you aren't, so that
confusion ought not to arise in this case.
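To make concrete what the exemption lets a CA skip: CAA checking climbs
the DNS tree to find the closest relevant RRset. A minimal sketch of
that climb (the lookup function is injected so no real DNS is involved;
names are illustrative):

```python
def relevant_caa_set(domain, lookup):
    """RFC 8659-style tree climb: the relevant CAA RRset is that of the
    closest ancestor (including the domain itself) that has one.
    `lookup` maps a DNS name to a list of CAA records, or []."""
    labels = domain.rstrip(".").split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        rrset = lookup(name)
        if rrset:
            return rrset
    return []  # no CAA anywhere up the tree: any CA may issue
```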

The existence of the exemption doesn't mean you need to take advantage
of it of course, it may be that any organisation large enough to
possess a CA and a Registry function today thinks it would prefer to
use public methods and not try to short-cut internally anyway, in which
case my thought doesn't matter.


Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-15 Thread Nick Lamb via dev-security-policy
On Mon, 16 Nov 2020 10:13:16 +1100
Matt Palmer via dev-security-policy
> I doubt it.  So far, every CA that's decided to come up with their own
> method of proving key compromise has produced something entirely
> proprietary to themselves.

At least two CAs (and from what I can tell likely more) offer ACME APIs
and thus ACME key compromise revocation (RFC 8555 section 7.6)

   The server MUST also consider a revocation request valid if it is
   signed with the private key corresponding to the public key in the

I appreciate that this is less convenient to your preferred method of
working, but it doesn't seem proprietary to agree on a standard way to
do something and my impression was that you could talk to ACME now?
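For reference, a sketch of the shape of such a request; per RFC 8555
section 7.6 the JWS may be signed with the certificate's own key, which
is then carried as a "jwk" in the protected header. Every value below is
a placeholder and no real signature is computed:

```python
import base64, json

def b64url(data: bytes) -> str:
    """Unpadded base64url, as used throughout RFC 8555."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

cert_der = b"0\x82placeholder-DER"  # would be the certificate's DER bytes
payload = b64url(json.dumps({
    "certificate": b64url(cert_der),
    "reason": 1,  # CRLReason keyCompromise
}).encode())
protected = b64url(json.dumps({
    "alg": "RS256",
    "jwk": {"kty": "RSA", "n": "<modulus>", "e": "AQAB"},  # compromised key
    "nonce": "<fresh-server-nonce>",
    "url": "https://acme.example/acme/revoke-cert",
}).encode())
request_body = {
    "protected": protected,
    "payload": payload,
    "signature": "<computed over protected '.' payload>",
}
```

In practice an ACME client does all of this for you; with certbot it is
roughly `certbot revoke --cert-path cert.pem --key-path key.pem --reason
keycompromise`.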

> I have no reason to believe that, absent
> method stipulation from a trust store, that we won't end up with
> different, mutually-incompatible, unworkable methods for
> demonstrating key compromise equal to (or, just as likely, exceeding)
> the number of participating CA organisations.

OK, so in your opinion the way forward on #205 is for Mozilla policy to
mandate acceptance of specific methods rather than allowing the CA to
pick? Or at least, to require them to pick from a small set?

> Of course, the current way in which key compromise evidence is
> fracturing into teeny-tiny incompatible shards is, for my purposes, a
> significant *regression* from the current state of the art.
> Currently, I can e-mail a (link to a) generic but
> obviously-not-for-a-certificate CSR containing and signed by the
> compromised key, and it gets revoked.  No CA has yet to come back to
> me and say "we can't accept this as evidence of key compromise".

But your earlier postings on this subject suggest that this is far from
the whole story on what happens, not least because you sometimes weren't
immediately able to figure out where to email that CSR in the first
place, and the responses, though never "we can't accept this", were...
far from ideal.

> This format allows the pre-generation of compromise attestations, so
> that I don't need to keep a couple of million (ayup, there's a lot of
> keys out there) private keys online-accessible 24x7 to generate a
> real-time compromise attestation in whatever hare-brained scheme the
> affected CA has come up with -- not to mention the entertainment
> value of writing code to generate the compromise attestations for
> each of those schemes.

Experience with other elements of CA operations suggests that CAs don't
like writing the other side of the code either, and so tend to coalesce
on a smaller number of implementations especially when there's no
opportunity to differentiate their product.

> In an attempt to keep the madness under some sort of control, I've
> tried to codify my thoughts around best practices in a pre-draft RFC
> ( but so
> far it doesn't look like anyone's interested in it, and every time I
> think "oh, I should probably just go and submit it as an Experiment
> through the IETF individual stream and see what happens" the
> datatracker's not accepting submissions.

Well, it is IETF 109 now so yes, this isn't the right moment for new
drafts. My guess is that the closest match in terms of existing working
groups is probably either LAMPS - but that is only really chartered to
fix existing stuff, not explore new territory; or ACME - but ACME
already solved the key compromise revocation problem as far as they're
concerned. So, yes, individual submission is likely the way to go if
you want this published.

If expressions of interest are worth anything I can offer to read an
Internet Draft and provide feedback but you might not like my feedback.

For example the current text says:

"Given a valid signature, the subjectPKInfo in the CSR MUST be compared
against the subjectPublicKey info of the key(s) which are to be checked
for compromise."

But formats I've seen for keys (as opposed to certificates) do not
contain a "subjectPublicKey info", so I guess what you actually want
to do here is compare the entire key. Explaining exactly how to do that
while remaining neutral about how the key might be stored will be
difficult; you might have to just pick a format.
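For what it's worth, the comparison I'd expect is between digests of the
DER-encoded SubjectPublicKeyInfo each artifact carries. A rough sketch
with the OpenSSL command line (filenames are illustrative, and a
throwaway key stands in for a real compromised one):

```shell
# Generate a stand-in "compromised" key and a demonstration CSR signed by it.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out suspect.key 2>/dev/null
openssl req -new -key suspect.key -subj "/CN=compromise-attestation" -out proof.csr

# Digest the SubjectPublicKeyInfo from each and compare.
csr_spki=$(openssl req -in proof.csr -pubkey -noout | openssl pkey -pubin -outform DER | sha256sum | cut -d' ' -f1)
key_spki=$(openssl pkey -in suspect.key -pubout -outform DER | sha256sum | cut -d' ' -f1)
[ "$csr_spki" = "$key_spki" ] && echo "key matches CSR"
```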

Also is RFC 7169 really the best available poison extension for this
purpose? You understand it was intentionally published on April 1st


Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-14 Thread Nick Lamb via dev-security-policy
On Sat, 14 Nov 2020 17:05:26 -0500
Ryan Sleevi  wrote:

> I don't entirely appreciate being told that I don't know what I'm
> talking about, which is how this reply comes across, but as I've
> stated several times, the _original_ language is sufficient here,
> it's the modified language that's problematic.

That part of my statement was erroneous - of the actual texts I've seen
proposed so far I prefer this amended proposal from Ben:

"Section 4.9.12 of a CA's CP/CPS MUST clearly specify its accepted
methods that Subscribers, Relying Parties, Application Software
Suppliers, and other third parties may use to demonstrate private key
compromise. A CA MAY allow additional, alternative methods that do not
appear in section 4.9.12 of its CP/CPS."

I can't tell from here whether you know what you're talking about, only
whether I know what you're talking about, and I confess after some
effort I don't believe I was getting any closer.

Still, I believe this language can be further improved to achieve the
goals of #205. How about:

"Section 4.9.12 of a CA's CP/CPS MUST clearly specify one or more
accepted methods that Subscribers, Relying Parties, Application Software
Suppliers, and other third parties may use to demonstrate private key
compromise. A CA MAY allow additional, alternative methods that do not
appear in section 4.9.12 of its CP/CPS."

This makes clear that the CA must have at least one of these "clearly
specified" accepted methods which ought to actually help Matt get some


Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-14 Thread Nick Lamb via dev-security-policy
On Fri, 13 Nov 2020 21:06:30 -0500
Ryan Sleevi via dev-security-policy

> Right, I can see by my failing to explicitly state you were
> misunderstanding my position in both parts of your previous mail, you
> may have believed you correctly understood it, and not picked up on
> all of my reply.

To the extent your preferred policy is actually even about issue #205
(see later) it's not really addressing the actual problem we have,
whereas the original proposed language does that.

> Yes, you're absolutely correct that I want to make sure a CA says "any
> method at our discretion", but it's not about humiliation, nor is it
> redundant. It serves a valuable purpose for reducing the risk of
> arbitrary, undesirable rejections, while effectively improving
> transparency and auditability.

This boilerplate does not actually achieve any of those things, and
you've offered no evidence that it could do so. If anything it
encourages CAs *not* to actually offer what we wanted: a clearly
documented but secure way to submit acceptable proof of key compromise.
Why not? It will be easier to write only "Any method at our discretion"
to fulfil this requirement and nothing more, boilerplate which
apparently makes you happy but doesn't help the ecosystem.

> But equally, I made the mistake for referring to PACER/RECAP without
> clarifying more. My reply was to address that yes, there is the
> existence of "potentially illegitimate revocations", but that it's
> not tied to "secret" documents (which you misunderstood). And my
> mistake in not clarifying was that the cases weren't addressed to the
> CA, but related to a CA subscriber. You can read more about this at

> Here, this is about revocations that harm security, more than help,
> and you can read from that post more about why that's undesirable at

We're in the discussion about issue #205, which is about proving _key
compromise_.

If you believe Mozilla should write policies requiring CAs to resist
certain types of legal action this ought to be a separate issue. I
might even have positive things to say about that issue, and perhaps
some CA participants do too.

I was not able to discover what revocation reason was actually used in
the incident referenced, do you have copies of the signed OCSP response
or CRLs related to the Sci Hub revocations or similar incidents?

Otherwise I have to assume Key Compromise was not given as the reason
for these revocations and so this has nothing whatsoever to do with
issue #205 and you've hijacked an unrelated discussion.


Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-13 Thread Nick Lamb via dev-security-policy
On Fri, 13 Nov 2020 12:11:57 -0500
Ryan Sleevi via dev-security-policy

> I want it to be explicit whether or not a CA is making a restrictive
> set or not. That is, it should be clear if a CA is saying "We will
> only accept these specific methods" or if the CA is saying "We will
> accept these methods, plus any method at our discretion".

I see this as essentially redundant. Any major CA which does not choose
"We will accept ... any method at our discretion" under your formulation
stands to be humiliated repeatedly until they revise their policies to
say so as I explained previously.

I guess the existence of resulting let's call it "Sleevi boilerplate" is
harmless, but it seems foolish for Mozilla to demand people write
boilerplate that doesn't achieve anything.

> I encourage you to make use of PACER/RECAP then.

I examined 7 pages of RECAP results for "Key Compromise". Most of them
meant this phrase in the sense of "important settlement of differences"
but some were cryptography related.

Here is what I found:

There were verbatim copies of RFCs 2459 and 3281 submitted as evidence
to a patent case that ends up involving Acer, Microsoft and others.

Another case submitted as evidence the ISRG CPS. It's a Lanham Act case
roughly along lines Let's Encrypt followers will be familiar with, the
plaintiff wants a certificate revoked, Let's Encrypt says they just
issue certificates for DNS names, have the court take the DNS name away
if that's the issue. Not relevant here.

And finally there's an EFF Amicus briefing which says basically key
compromise is bad, which everybody here already knew.

I found no evidence that there are in fact such "secret documents" and
no evidence there's a problem here that would or could be fixed by your
preferred language for this Mozilla policy.

If you have a _much_ more specific claim than just "Somebody has
mentioned it in court at some point" then please make it.


Re: Policy 2.7.1:MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-12 Thread Nick Lamb via dev-security-policy
On Thu, 12 Nov 2020 15:51:55 -0500
Ryan Sleevi via dev-security-policy

> I would say the first goal is transparency, and I think that both
> proposals try to accomplish that baseline level of providing some
> transparency. Where I think it's different is that the concern
> Dimitris raised about "minimums", and the proposed language here, is
> that it discourages transparency. "We accept X or Y", and a secret
> document suggesting "We also accept Z", makes it difficult to
> evaluate a CA on principle.


> Yes, this does mean they would need to update their CP/CPS as they
> introduce new methods, but this seems a net-positive for everyone.

I think the concern about defining these other than as minimums is
scenarios in which it's clear to us that key compromise has taken place
but your preferred policy forbids a CA from acting on that knowledge on
the grounds that doing so isn't "transparent" enough for your liking
because their policy documents did not spell out the method which
happened to be used.

The goal of this policy change is to avoid the situation where a
researcher has one or more compromised keys and, rather than being able
to quickly and securely set in motion the process of revoking relevant
certificates and forbidding more being issued they end up in a game of
telephone (perhaps literally) with a CA because its policies are
unclear or unworkable.

You seem to anticipate a quite different environment in which "secret
documents" are used to justify revocations which you presumably see as
potentially illegitimate. I haven't seen any evidence of anything like
that happening, or of anybody seeking to make it happen - which surely
makes a Mozilla policy change to try to prevent it premature.


Re: TLS certificates for ECIES keys

2020-10-29 Thread Nick Lamb via dev-security-policy
On Thu, 29 Oct 2020 11:06:43 -0700
Jacob Hoffman-Andrews via dev-security-policy

> I also have a concern about ecosystem impact. The Web PKI and
> Certificate Transparency ecosystems have been gradually narrowing
> their scope - for instance by requiring single-purpose TLS issuance
> hierarchies and planning to restrict CT logs to accepting only
> certificates with the TLS EKU. New key distribution systems will find
> it tempting to reuse the Web PKI by assigning additional semantics to
> certificates with the TLS EKU, but this may make the Web PKI less
> agile.

This is my main concern too.

I think this is something I would be annoyed to discover some CA has
decided it's allowed to do, even if I wasn't able to come up with a
tortured rationale for why it's prohibited. "It wasn't prohibited" is
not the standard Mozilla asks root programme participants to aim for.
So since I'm being asked, no, I think this is a bad idea.

If we were talking about a subscriber it seems obvious that ISRG needn't
try to police what they get up to, but ISRG itself is different.

> I've discussed the plan with Apple, and they're fully aware this is an
> unusual and non-ideal use of the Web PKI, and hope to propose a
> timeline for a better system soon. One of the constraints operating
> here is that Apple has already shipped software implementing the
> system described above, and plans to use it in addressing our
> current, urgent public health crisis. As far as I know, no publicly
> trusted Web PKI certificates are currently in use for this purpose.

The problem with such timelines is they are too often wishful thinking.
Once the immediate need abates, further action is de-prioritised and
often never happens at all. I suspect we've all experienced this.

ISRG could perhaps avoid that de-prioritization by committing up
front to ceasing the "unusual and non-ideal use" by some specific point
in time agreed with Apple, I don't know whether Apple would be at all
interested in doing that, but it might be enough to ensure that
resources remain properly focused on actually deploying the "better
system" in a timely fashion.

This "urgent public health crisis" is presumably the COVID-19
pandemic. Action in November 2020 or later hardly seems an "urgent"
response to the pandemic and at this point it seems clear that mostly
what matters is political direction rather than IT innovation.

That is to say, I think New Zealand has elimination whereas the USA
has tens of thousands of new cases every day because New Zealand's
political leadership pursued an elimination strategy and the American
government did not, rather than because the NZ COVID Tracer app is
markedly better than similar American software.

Back to the application. I think the desire here is to have
anonymisation because intellectually it seems as though users would be
satisfied that collecting anonymised aggregate statistics is OK where
they'd be trepidatious about any other data collection. Without robust
studies showing this to be true I very much doubt it. Users are not
much impressed by facts, their gut feeling is that collecting data
violates their privacy and the facts won't change that feeling.

> So, mdsp folks and root programs: Can a CA or a Subscriber
> participate in the above system without violating the relevant
> requirements?

I'm not an expert, but I suspect the answer for a CA is that yes, they
perhaps can, BUT they should not.


Re: Policy 2.7.1: MRSP Issue #152: Add EV Audit exception for Policy Constraints

2020-10-18 Thread Nick Lamb via dev-security-policy
On Thu, 15 Oct 2020 14:36:15 -0600
Ben Wilson via dev-security-policy

> Possible language is presented here:

I write this fully expecting to be corrected on the substance but I
have spent a day thinking about it on and off without reaching a
different conclusion.

Surely it wouldn't make sense in 2020 for a root to seek Mozilla EV
enablement? Firefox doesn't present EV enabled certificates with a
different UI these days as I understand it. So there doesn't
seem to be any benefit from being EV enabled in Firefox, instead you
get a bunch of extra requirements imposed by these policy documents.


Re: Let's Encrypt: 302 total OCSP responses served beyond acceptable timelines

2020-09-26 Thread Nick Lamb via dev-security-policy
On Fri, 18 Sep 2020 16:48:45 -0700
Kiel Christofferson via dev-security-policy

> We were notified of the problem by an alert on elevated error-level
> logs. We found that the errors were caused by a recent change to our
> RPC system that, in a certain error case, caused a particular column
> in our certificate status table to have a value of "0" for a specific
> empty field rather than either the expected value or NULL. We
> collected serials and last-update timestamp information for affected
> entries, and enacted a manual plan for continued remediation of these
> entries.

Hi Kiel,

Thank you for reporting this small deviation from required behaviour.

Let's Encrypt provides a community mutual assistance site (with
contributions from staff) on which a large volume of messages are
posted each day.

Once the problem was identified did you check to see if any messages to
that site were likely related to this issue? I guess it's not very
likely with a small number of deviations.



Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-12 Thread Nick Lamb via dev-security-policy
On Sat, 11 Jul 2020 11:06:56 +1000
Matt Palmer via dev-security-policy
> A histogram of the number of certificates grouped by their notBefore
> date is going to show a heck of a bump on August 31, I'll wager.
> Will be interesting to correlate notBefore with SCTs.

I expect there will be a modest number of entities which are all three

1. Aware this is happening in time to obtain certificates on or before
  August 31

2. Sufficiently unprepared for shorter certificate lifetimes still
  that they desire a longer lived certificate rather than just using new
  one year certificates (or automation).

3. And also organised enough to execute on a plan which obtains
  certificates in a timely fashion.

But, there's no particular attraction to August 31 itself for these
subscribers, once they meet these criteria why shouldn't they take
action sooner? So I'd expect this bump to be quite small and also
spread over days and weeks.

For the subscribers who are too late, too bad. I'm sure from September
for the next year or two commercial CAs will see some level of whining
from disgruntled customers whose cheese has been moved and aren't happy
about it. Some of it might leak here too.

I don't anticipate a WoSign-style back-dating epidemic. The benefits to
the subscriber are relatively small and the risk to a CA that gets
caught is more obvious than ever.


Re: Certificates possibly misissued to historical UK counties

2020-07-09 Thread Nick Lamb via dev-security-policy
On Thu, 9 Jul 2020 00:33:35 -0700 (PDT)
David Shah via dev-security-policy

> Richmond in the UK has not been part of Surrey from an administrative
> point of view since 1965. It is now part of Greater London.

If a model of how places work requires that the UK be split into
counties then the model is defective because that's not how it has
worked for decades.

However, for the purpose of OV/EV certificates I don't think this is a
real concern unless the address is actively misleading rather than
merely in some technical sense a "wrong" address. Letters which are
otherwise correctly addressed but imply Richmond is in Surrey will
nonetheless be delivered without delay, and the address isn't made difficult to
find in person by this "mistake".

The subscriber is uncontroversially identified, and most likely any
weird glitches like "Richmond, Surrey" are a result of an external
database that isn't the responsibility of a CA.



2020-07-06 Thread Nick Lamb via dev-security-policy
On Mon, 6 Jul 2020 19:22:22 +0200
Matthias van de Meent via dev-security-policy

> I notice that a lot of Subscriber Certificates contain https-based
> URLs (e.g. PKIOverheid/KPN, Sectigo, DigiCert), and that other
> http-based urls redirect directly to an https-based website (e.g.
> LetsEncrypt, GoDaddy).

A piece of good news in this space is that these documents are
generally intended to be accessed with a web browser; as a result, the
browser gets to interpret the URL and may choose to upgrade to HTTPS
based on considerations including:
* Policy of the host, or any parent domain (even a few TLDs are HSTS
  preloaded meaning any HTTP URL in those domains will be treated as if
  it was HTTPS by a web browser)

* Policy of the user (e.g. HTTPS-Everywhere) can arbitrarily upgrade
  URLs regardless of where they come from.


Re: GoDaddy: Failure to revoke certificate with compromised key within 24 hours

2020-05-22 Thread Nick Lamb via dev-security-policy
On Fri, 22 May 2020 22:48:42 +
Daniela Hood via dev-security-policy

> Hello,
> Thank you for all the comments in this thread.  We filed an incident
> report related to the revocation timing that can be followed here:
>  We also
> identified the error in revocation reason as a user error, corrected
> the error and provided feedback to the employee.

In addition to Ryan's concerns about the supposed ambiguity of a
pretty clear rule in the BRs I am as always interested in what can be
learned from incidents that might help everybody else.

What mechanism, if any, would have detected this "user error" in the
absence of a report by a third party to m.d.s.policy ?

Every CA has humans doing stuff, and humans make mistakes. Whether
that's a Let's Encrypt team member fat-fingering a server configuration
or a Symantec employee using rather than a Symantec name for
a test. But even though it's expected for humans to make mistakes, we
demand more of the Certificate Authority than we could ask of one human.

Where humans are necessary they will make mistakes and so you need
compensating controls. In this case that might mean reviewing critical
work done by humans. Depending on volume that might mean a second
person looks at every revocation, or it might mean a sample is examined
once a week for example.
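As a sketch of the sampling variant (the rate and minimum here are
invented for illustration, not taken from any requirement):

```python
import random

def sample_for_review(revocations, rate=0.05, minimum=5, seed=None):
    """Pick a subset of a period's revocation records for a second
    person to re-check. `rate` and `minimum` are illustrative knobs."""
    if not revocations:
        return []
    rng = random.Random(seed)
    k = min(len(revocations), max(minimum, round(len(revocations) * rate)))
    return rng.sample(revocations, k)
```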

I'd like to see incident reports like this not stop at "user error" for
this reason. Why wasn't the "user error" caught? What (other than
"feedback to the employee") prevents it happening again ?


Re: GTS - OCSP serving issue 2020-04-09

2020-04-19 Thread Nick Lamb via dev-security-policy
On Sat, 18 Apr 2020 22:57:03 -0400
Ryan Sleevi via dev-security-policy

> On Sat, Apr 18, 2020 at 6:39 PM Nick Lamb via dev-security-policy <
>> wrote:
> > What does "contractual jeopardy" mean here?  
> The Baseline Requirements address this. See 9.16.3 (particularly item
> 5) and 9.6.1 (6).
> For better or worse, the situation is as Neil described and required
> for all CAs.

It's possible that I'm confused somehow, but for me §9.16.3 of the BRs
does not have numbered item 5, and neither this nor §9.6.1 define
"contractual jeopardy" nor do they clear up why a subscriber would want
to shut down their service and perhaps be driven into bankruptcy in
deference to a mere technical error.

Is your position now that your earlier advice was quite wrong and
should be disregarded?


Re: GTS - OCSP serving issue 2020-04-09

2020-04-18 Thread Nick Lamb via dev-security-policy
On Fri, 17 Apr 2020 18:34:00 +0100
Neil Dunbar via dev-security-policy

> timestamp checking etc, etc]. Ryan's writeup calls out the revoked
> situation under the heading of 'make sure it is something the client
> will accept' - if the client understands OCSP responses at all, it
> needs to understand revoked, surely?

I'm sure the client does understand revoked, but it won't (and
certainly shouldn't) _accept_ it, hence Ryan's choice of language.

Clients also understand expired OCSP responses, and they don't accept
those either.

> Because it places you (a good actor) in compliance with your
> subscriber agreement? Just as an example, some text in a few commonly
> used CA Subscriber Agreements have subscriber obligations like "cease
> all use of the Certificate and its Private Key upon expiration or
> revocation of the Certificate" or "Subscriber shall promptly cease
> using a Certificate and its associated Private Key" (under the
> section for revocation). Presumably failure to adhere to that
> agreement could place you in some contractual jeopardy?

What does "contractual jeopardy" mean here?

I guess a CA representative might chime in here to tell us if they've
sued any subscribers for not treating OCSP responses as a legal notice
that they must desist using a Private Key ? My firm guess would be "No,
this has never happened".

In fact do any CA representatives want to stand up and tell us they
regard OCSP responses as legally binding declarations by their CA
which are immune to ordinary mistakes?


Re: GTS - OCSP serving issue 2020-04-09

2020-04-17 Thread Nick Lamb via dev-security-policy
On Thu, 16 Apr 2020 13:56:34 +0100
Neil Dunbar via dev-security-policy

> On 16/04/2020 00:04, Nick Lamb via dev-security-policy wrote:
> For the avoidance of doubt (and my own poor brain) - does 'GOOD' here 
> mean OCSP status code 'successful' (0) AND returning a 'good' status
> for the certificate, or does it just mean status code 'successful'?
> The GTS case here was returning OCSP exception status 'unauthorized'
> (6).

GOOD means _at least_ the good CertStatus (also 0) in OCSP. We'll see
why in a moment.

Ryan provides a considerably longer list of stupid things that might go
wrong in item (2) from

You should consider all of them reasons why such an answer shouldn't
replace an existing GOOD answer you have.

> I would have thought that an OCSP-stapling implementation which got
> an OCSP status code 'successful' (0) with a 'revoked' status for the 
> certificate would want to pass that on to the client, replacing any 
> prior OCSP successful/status-good report, whether that prior report
> was still valid.

But why? We are us, why would we want to announce that our certificate
is revoked? What possible benefit could accrue to us from
choosing to do this?

Remember we cannot choose the behaviour of an adversary. So if we
choose to tell clients our certificate is revoked, but an adversary
asserts their copy is still good, clients will continue to talk to the
adversary which is almost certainly a worse outcome.

If your model of TLS still looks like early SSL, with implicit RSA
authentication, then I can see that, if you squint, advertising your
own revocation isn't completely stupid. Maybe the revocation means an
adversary knows our private key, and so in continuing to talk to
clients with this key we make things worse, we should admit it's
revoked instead. I'd argue that if this was a scenario you care about
the right thing is for the server to shut down instead, not staple
revoked responses.

But anyway sites which actually care about security should never use
implicit authentication (and it doesn't exist in TLS 1.3). As a result
there is zero risk from pressing on, you are definitely you, the only
question is whether you can continue to convince clients that this is
so, and stapling a non-GOOD answer will never help you do that so it's
never the correct thing to do.


Re: GTS - OCSP serving issue 2020-04-09

2020-04-15 Thread Nick Lamb via dev-security-policy
On Tue, 14 Apr 2020 13:13:59 -0700
Andy Warner via dev-security-policy

> From 2020-04-08 16:25 UTC to 2020-04-09 05:40 UTC, Google Trust
> Services' EJBCA based CAs (GIAG4, GIAG4ECC, GTSY1-4) served empty
> OCSP data which led the OCSP responders to return unauthorized.

No new lessons for CAs here in general, but I think this incident is
worth highlighting as an example to OCSP Stapling implementations.

It is desirable (not technically required in the standard, but necessary
to a robust implementation) that your software should not be adversely
affected by an outage like this. Mistakes will happen, and good
software can and thus should allow for them without introducing
cascading failure.

Specifically: You should cache your stapled GOOD answers in durable
storage if practical, and when periodically refreshing you should report
non-GOOD answers to the operator (e.g. logging them as an ERROR
condition) but always continue to present clients with the last GOOD
answer until it actually expires even if you receive newer non-GOOD
OCSP responses.
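A minimal sketch of that refresh discipline, in Python (the class name and the fetch interface are illustrative, not any particular server's API):

```python
import time

class StapleCache:
    """Serve the last known-GOOD OCSP response until it expires, even if
    newer refresh attempts return errors or non-GOOD answers."""

    def __init__(self, fetch, log):
        self.fetch = fetch    # callable -> (status, der_response, expires_at)
        self.log = log        # operator-visible error reporting, e.g. logging.error
        self.good = None
        self.expires_at = 0.0

    def refresh(self):
        status, response, expires_at = self.fetch()
        if status == "good":
            self.good, self.expires_at = response, expires_at
        else:
            # Report the problem, but do NOT replace the cached GOOD answer.
            self.log("OCSP refresh returned %r; keeping last GOOD staple" % status)

    def staple(self, now=None):
        now = time.time() if now is None else now
        if self.good is not None and now < self.expires_at:
            return self.good
        return None  # expired or never fetched: staple nothing at all
```

The point of the design is that a responder outage like this one degrades to a logged error, never to a served non-GOOD staple.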


Re: Proposal: prohibit issuance of new certificates with known-compromised keys, and for related purposes

2020-04-09 Thread Nick Lamb via dev-security-policy
On Mon, 6 Apr 2020 12:56:02 -0400
Ryan Sleevi via dev-security-policy

> It's not as easy as saying "use a bloom filter" if a bloom filter
> takes X amount of time to generate.

I've spent a bunch of time up to my neck in Bloom filters (they're one
of the key components of 4store, a GPL'd RDF storage engine / SPARQL
implementation for which I wrote a lot of the code).

Adding things to a bloom filter is cheap enough that we'd definitely
not shy away from putting it in the human perceptible updates rather
than batching it up to do asynchronously.

The part that's non-viable in Bloom filters is removing things, but
that's cool because we're all agreed that "This key is no longer
compromised" is not a thing. The most we should do there is recommend
people have one filter for each type of key they support, for example
if we imagine this rule had been in place from the outset, you no longer
need your "compromised" bloom filter for 1024-bit RSA because all
1024-bit RSA issuance is prohibited now, so you can throw that away.

Right-sizing of Bloom filters is an issue, but you only need to get
ballpark accuracy. If we genuinely aren't sure if there will be a
thousand or a billion RSA private keys compromised next year then yup
that's a problem to address early.

I recommended ISRG look at Bloom filters for their response to Matt's
enquiries about refusing to re-issue. I have been busy, but I don't
think they responded, which is fine; it was unsolicited advice.

A Bloom filter doesn't solve the whole problem unless you're
comfortable being a bit savage. You *can* say "If it matches the bloom
filter, reject as possibly compromised" and set your false positive
ratio in the sizing decision as a business policy. e.g. "We accept
that we'll reject 1-in-a-million issuances for false positive". But I'd
suggest CAs just slow-path these cases, if it's a match to the Bloom
filter you do the real check, and maybe that's not fast enough for goal
response times in your customer service, but in most cases issuance
fails anyway because somebody was trying to re-use a bad key. Customers
who just got tremendously unlucky get a slightly slower issuance. "Huh,
these are normally instant. What's up with... oh, there it goes".
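As a sketch of the scheme described above — a Bloom filter fast path with an authoritative slow-path lookup on matches. All names here are illustrative, and a real CA would size the filter (m bits, k hashes) from its expected key volume and chosen false-positive rate:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash positions derived from SHA-256."""

    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(b"%d:" % i + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False means definitely absent; True means "possibly present".
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def issuance_check(spki, bloom, authoritative_lookup):
    """Fast path: no Bloom match means the key is definitely not on the
    compromised list. On a match, take the slow path and consult the real
    database, so false positives cost only latency, never a false refusal."""
    if not bloom.might_contain(spki):
        return "issue"
    return "reject" if authoritative_lookup(spki) else "issue"
```

Adding a newly compromised key is just `bloom.add(...)` plus a database insert, which is cheap enough to do synchronously, as argued above.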

Is it necessary to spell out that, even though _Private_ key compromise
is what we care about, the things you need to be keeping in filters and
databases to weed out compromised keys are the corresponding _Public_
keys?


Paessler (was Re: Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours)

2020-03-21 Thread Nick Lamb via dev-security-policy
On Sat, 21 Mar 2020 13:40:21 +1100
Matt Palmer via dev-security-policy

> Oh the facepalm, it burns (probably too much hand sanitizer)... let
> me try that again.

Use soap and water where practical. And, as the BBC Comedy TV show
"That Mitchell & Webb Look" put it many years ago "Remain indoors".

> There's also this one, which is another reuse-after-revocation, but
> the prior history of this key suggests that there's something *far*
> more interesting going on, given the variety of CAs and domain names
> it has been used for (and its current residence, on a Taiwanese
> traffic stats server):
> If anyone figures out the story with that last key, I'd be most
> pleased to hear about it.


This requires a small degree of insight into how little ordinary people
(even say IT people) understand about public key cryptography.

These servers are running PRTG - a network monitoring tool from an
outfit named Paessler. The software offers a web interface with SSL.

PRTG is supplied as Windows software, and I have just installed it on
my games PC (hopefully uninstalling it will be easy because this is no
time to go out shopping for a PC) to verify the following:

Rather than mint an RSA key pair and self-signed certificate to
bootstrap each install, they just supply a (presumably randomly
generated) key and certificate right in the install data.

They don't have one of those (often rather archaic but functional) UIs
where it mints new RSA keys and gives you a CSR for them. Instead it
offers either a tool that will convert keys and certificates and
install them, or you can just paste the files into the right place and
restart the software.

Now, for you or me the provided default RSA key is obviously no use and
you'd mint your own with your preferred tools before requesting a
publicly trusted certificate or indeed using your own in-house CA. But
if you don't know much about this stuff and you find there's a perfectly
nice RSA key supplied with the software it seems natural to use it.

Whereupon of course now your "real" publicly trusted certificate is for
a key which in reality is available to anybody with the insight to
guess which software you're using. Oops.

Here's their demo certificate, the associated Private Key is freely
available to download as part of their software, but there's no need
for me to paste it here.



Re: Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours

2020-03-20 Thread Nick Lamb via dev-security-policy
On Sat, 21 Mar 2020 09:25:26 +1100
Matt Palmer via dev-security-policy

> These two certificates:
> Were issued by Let's Encrypt more than 24 hours ago, and remain
> unrevoked, despite the revocation of the below two certificates,
> which use the same private key, for keyCompromise prior to the above
> two certificates being issued:
> As per recent discussions here on m.d.s.p, I believe this is a breach
> of BR s4.9.1.1.

Hi Matt,

I haven't looked at the substance of your concern yet, but the 1st and
3rd links you gave above both look identical to me whereas your text
implies they should differ. Perhaps this is a copy-paste error?


Re: About upcoming limits on trusted certificates

2020-03-14 Thread Nick Lamb via dev-security-policy
On Thu, 5 Mar 2020 14:15:17 +
Nick Lamb via dev-security-policy

> There is some value in policy alone but there's also substantial
> independent value in writing the policy into the code. Would Mozilla
> accept third party work to implement something like #908125 ? I
> appreciate you don't work for them any more Wayne, perhaps Kathleen or
> somebody else who does can answer?

I never saw any reply on this topic and so my assumption is that at
best such a patch would be in the big pile of volunteer stuff maybe
nobody has time to look at.

After some further thought this gives me a real concern, though maybe
I'm in error (in which case I'm sure somebody here will be delighted to
correct me).

As I understand it Apple's intent is that Safari will not accept a
certificate with a lifetime of (let's say for this example) 500 days,
but this would not necessarily become a violation of their root store
policy. Such a certificate could exist and (absent decisions here) it
would work in Firefox but not Safari. More practically, it would work
in some TLS-based internal system that trusts public roots, but not in
Safari, which would be just fine for a backend system that was never
actually intended to be used by web browsers.

This would make it like SCT enforcement in Safari or Chrome. Google
doesn't propose to distrust a CA which issues certificates without
logging them - it just ensures the Chrome browser doesn't trust those
certificates until it is shown proof they were logged, which might be
hours or weeks later. As I understand it Google's own CA deliberately
does this in fact.

If that understanding is correct (again the poor communication from
Apple which I already disapproved of doesn't help me) then in having an
unenforced root store policy about this, rather than enforcement but no
policy change, Mozilla would be standing alone.

That has much larger implications, so if that's what we're talking
about here we need to be clear about it.


Re: Certificate with Debian weak key

2020-03-09 Thread Nick Lamb via dev-security-policy
On Sun, 8 Mar 2020 10:57:49 +1100
Matt Palmer via dev-security-policy

> > The fingerprint of the claimed Debian weak key was not included in
> > our database.  
> I think it's worth determining exactly where they obtained their
> fingerprint database of weak keys.  The private key in my possession,
> which I generated for inclusion in the database, was
> obtained by using the script provided in the `openssl-blacklist`
> source package, with no special options or modifications.

Yes, I would certainly want the CA's report to give me confidence that:

#1 they've identified why they didn't spot this key, were there (many?)
  other keys which would also have been missed?

#2 they now have a complete and accurate list of such keys

#3 they went back and did the work to re-check other certificates
  they've issued for this (these?) extra weak keys and any matches were
  revoked and the subscriber contacted

Depending on the circumstances in #1 there may well be a lesson for
other CAs, especially any using a setup which is similar in some way to
this CA's, and so this point is very important. There might also be
further questions about the CA's processes which failed to detect this
key.

This sort of incident is also important because of the impact on the
Subscriber. Had this subscriber used a different CA with a complete
list they'd have been informed immediately that their chosen key was a
problem. Because their CA didn't do that, this subscriber was in fact
potentially vulnerable to active, and in some cases even passive,
attacks on their TLS services for the period between issuance and
revocation.
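For illustration, a weak-key check of the openssl-blacklist variety can be sketched as below. The exact fingerprint format is an assumption on my part, so consult the openssl-blacklist source for the authoritative definition:

```python
import hashlib

def weak_key_fingerprint(modulus_hex):
    """Fingerprint an RSA public modulus in the style of openssl-blacklist:
    SHA-1 over the OpenSSL "Modulus=..." text form, keeping the trailing
    hex digits. (Format assumed here for illustration only.)"""
    line = ("Modulus=%s\n" % modulus_hex.upper()).encode("ascii")
    return hashlib.sha1(line).hexdigest()[-20:]

def check_key(modulus_hex, blocklist):
    """Reject issuance when the submitted key matches the weak-key list."""
    return "reject" if weak_key_fingerprint(modulus_hex) in blocklist else "ok"
```

The important operational property is completeness of `blocklist` — exactly the point #2 above: a CA that is missing entries silently accepts weak keys.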


Re: About upcoming limits on trusted certificates

2020-03-05 Thread Nick Lamb via dev-security-policy
On Wed, 4 Mar 2020 16:41:09 -0700
Wayne Thayer via dev-security-policy

> I'm fairly certain that there is no validity period enforcement in
> Firefox. The request is
> I'm also not in a
> position to commit Mozilla to technical enforcement if we adopt a
> policy of 398 days. However, I believe there is still value in the
> policy alone - violations are easily detected via CT logs, and making
> them a misissuance under our policy then obligates the CA to file a
> public incident report.

I see, well that explains why I struggled to find it :) Always harder
to prove a negative.

There is some value in policy alone but there's also substantial
independent value in writing the policy into the code. Would Mozilla
accept third party work to implement something like #908125 ? I
appreciate you don't work for them any more Wayne, perhaps Kathleen or
somebody else who does can answer?

Bad guys don't obey policy. Certificates constructed to attack
Microsoft's bad implementation of elliptic curve signatures recently
for example obviously needn't respect policy documents. But they *did*
need to pass Chrome's technical enforcement of that policy. A
certificate constructed to claim notBefore 2019-07-01 was required by
Chrome to have SCTs, which of course an adversary could not obtain
because their certificate only fooled MS Windows. As it happens the SCT
requirement wasn't old enough to sidestep the issue - an adversary
could just choose a fake notBefore prior to Chrome's cut off. But it
was close to just shutting down the attack altogether.

Technical enforcement also quietly benefits Subscribers. If you buy a
certificate, quite legitimately, from an honest but inevitably
imperfect Certificate Authority, and it turns out that certificate is a
policy violation - it's better if when you install and test the
certificate it doesn't work. "Hey, this product you sold me doesn't
work". The CA can investigate, issue you a good certificate, apologise
and if appropriate report the incident to m.d.s.policy.

Whereas if we find it a month later and they have to revoke the
certificate, contact the subscriber, apologise etc. that's potentially
a much bigger inconvenience to that subscriber.

> As usual, I'll propose the policy language and we'll discuss it on
> the list.

Thanks Wayne,



Re: About upcoming limits on trusted certificates

2020-03-04 Thread Nick Lamb via dev-security-policy
On Tue, 3 Mar 2020 13:27:59 -0700
Wayne Thayer via dev-security-policy

> I'd like to ask for input from the community: is this a requirement
> that we should add to the Mozilla policy at this time (effective
> September 1, 2020)?

If Mozilla adds this as a policy requirement it should also land
enforcement in Firefox that rejects certificates which violate this
policy. I tried to investigate whether this currently happens for the
825 day rule in the BRs but failed to satisfy myself either way.
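The client-side check under discussion is tiny — a sketch (the 825- and 398-day constants are the policy values being debated, not anything Firefox actually ships):

```python
from datetime import datetime, timedelta

def violates_max_validity(not_before, not_after, max_days):
    """True when a certificate's validity period exceeds the policy limit
    (825 days per the BRs at the time; 398 under the Apple policy)."""
    return (not_after - not_before) > timedelta(days=max_days)
```

A browser enforcing this at chain validation time would reject the certificate outright, which is the technical enforcement argued for below.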

I read the SC22 discussion when it happened but I will re-read it all in
the light of Apple's recent decision and your question and post again
if that results in something I miss here.

One thing Mozilla definitely shouldn't replicate is Apple's decision to
present this to CA/B in person - resulting in tech news coverage based
on hearsay and conjecture - then only follow up days later to the wider
population with material that doesn't cover every obvious question a
reasonable person would have. A few hours before Clint's post I actually
had to explain to someone that their understanding of the issue was
probably wrong† - but with nothing official from Apple it was
impossible to say so definitively, which means they're left pointlessly
confused, presumably not Apple's purpose here.

If Mozilla does follow Apple's policy here (which I am minded to think
is the wiser course) they should make sure to have materials on hand
immediately to clarify exactly what that will mean to both specialists
and lay people when that policy is announced.

†They had imagined existing two year certificates would suddenly cease
to work on iPhones after their first year, which of course would be a
nightmare to manage and does not match Clint's confirmation here that
notBefore will be used to decide which certificates the policy applies
to.

Re: Acceptable forms of evidence for key compromise

2020-03-02 Thread Nick Lamb via dev-security-policy
On Mon, 2 Mar 2020 13:48:55 +1100
Matt Palmer via dev-security-policy

> In my specific case, I've been providing a JWS[1] signed by the
> compromised private key, and CAs are telling me that they can't (or
> won't) work with a JWS, and thus no revocation is going to happen.
> Is this a reasonable response?

I don't hate JWS, but I can see Ryan's point of view on this. Not every
"proof" is easy to definitively assess, and a CA doesn't want to get
into the game of doing detailed forensics on (perhaps) random unfounded
claims.

Maybe it makes sense for Mozilla to provide in its policy (without
limiting what else might be accepted) an example method of
demonstrating Key Compromise which it considers definitely sufficient ?

I'd also be comfortable with such an example in the BRs, if people think
that's the right place to do this.


Re: 2020.02.29 Let's Encrypt CAA Rechecking Bug

2020-02-29 Thread Nick Lamb via dev-security-policy
On Fri, 28 Feb 2020 21:50:47 -0800 (PST)
Jacob Hoffman-Andrews via dev-security-policy

> Also posted to

Hi Jacob, was there a reason not to use the ordinary incident reporting
format ? This is pretty good for ensuring you cover all the questions
we're otherwise likely to ask anyway.

> On 2020-02-29 UTC, Let’s Encrypt found a bug in our CAA code. Our CA
> software, Boulder,  checks for CAA records at the same time it
> validates a subscriber’s control of a domain name. Most subscribers
> issue a certificate immediately after domain control validation, but
> we consider a validation good for 30 days. That means in some cases
> we need to check CAA records a second time, just before issuance.
> Specifically, we have to check CAA within 8 hours prior to issuance
> (per BRs §), so any domain name that was validated more than 8
> hours ago requires rechecking.

For example "found a bug" _probably_ means that programmers in the
course of their ordinary work realised there was a logical error in the
Boulder software, but people might also use it to describe figuring out
the cause of problems reported to them by a third party, which has
different implications for security. The usual "How did you learn of
this incident?" question ensures a clear answer.

> The bug: when a certificate request contained N domain names that
> needed CAA rechecking, Boulder would pick one domain name and check
> it N times. What this means in practice is that if a subscriber
> validated a domain name at time X, and the CAA records for that
> domain at time X allowed Let’s Encrypt issuance, that subscriber
> would be able to issue a certificate containing that domain name
> until X+30 days, even if someone later installed CAA records on that
> domain name that prohibit issuance by Let’s Encrypt.
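A hypothetical reconstruction of how this class of "check one name N times" bug arises (this is not Boulder's actual code, just the shape of the defect):

```python
def recheck_caa_buggy(names, check):
    """The bug pattern: the loop variable is ignored and the first
    element is checked N times instead of each name once."""
    results = []
    for _ in names:
        results.append(check(names[0]))   # should be check(name)
    return results

def recheck_caa_fixed(names, check):
    """Corrected version: every name in the request gets its own CAA recheck."""
    return [check(name) for name in names]
```

Note the buggy version still performs N checks and returns N results, so nothing fails loudly — exactly why it can go unnoticed.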

It seems unlikely that in practice this bug has inconvenienced a
subscriber, let alone enabled adversaries to actually get mis-issued
certificates - but given Let's Encrypt operates a self-help forum for
users it may be worth spending a few minutes searching for related
problems in those forums once the timespan involved is clear to see if
any of your users reported symptoms from this bug that were missed.

If it's not very difficult it would also be useful to have some idea
how many certificates might be affected. That is, how many certificates
were really issued to multiple FQDNs (with a single FQDN the bug
described has no effect) more than 8 hours after initial correct CAA
checks? Intuitively this should be almost none, but intuitions can be
wrong.


Re: Sectigo-issued certificates with concerningly mismatched subject information

2020-01-26 Thread Nick Lamb via dev-security-policy
On Sun, 26 Jan 2020 11:16:24 +0100
Hanno Böck via dev-security-policy

> I guess this is the most relevant part here. No one has noticed.
> I see that a lot of people are having fun pointing out these issues
> again and again to show how sloppy CAs work. Which is fine I guess,
> but it leads to the question what the point of all this is.

Unlike minor typographical errors which I don't think have a larger
significance, this type of mistake might realistically have grave impact
depending on how it happens, for which we will need Sectigo's honest
response to the incident.

For example suppose Sectigo has a bug in which under some circumstances
Customer A is treated as though they were Customer B instead, and of
course certificates like these are one possible result of the bug that
we can see in the CT logs. But other symptoms of that same bug might
include: Customer B has proved to Sectigo that they control some domain,
so Customer B can order new certificates for it, but with the bug
Customer A can now get such certificates too, which they are not
entitled to.

> Maybe it's time to change the WebPKI rules to reflect that - either say
> "any information in a certificate that is not the CN/SAN is yolo and
> can be whatever and web clients should make sure they never display
> that information" or "any useless extra information should be
> skipped".

I definitely can't support the former. The purpose of X.509
certificates is to bind a public key to an identity. If we decide that
something isn't part of the identity then it shouldn't be included.

I think the latter isn't a good idea, beyond the extent to which it's
already present in the BRs but I don't feel strongly about it.

Re: Audit Letter Validation (ALV) on intermediate certs in CCADB

2019-12-24 Thread Nick Lamb via dev-security-policy
On Mon, 23 Dec 2019 14:20:16 -0700
Wayne Thayer via dev-security-policy

> I suggest that we modify question #1 to require CAs
> to attest that they intend to FULLY comply with version 2.7 of the
> policy and if they won't fully comply, to list all non-conformities.
> In other words, define an exception as anything that isn't compliant
> with the current policy rather than something we granted in the past.

Thanks Wayne, I believe this would achieve my broader goals without
being too onerous for you/Mozilla or the CAs.

I look forward to any discussions prompted by the modified question or
by non-conformities disclosed as a result.


Re: Audit Letter Validation (ALV) on intermediate certs in CCADB

2019-12-21 Thread Nick Lamb via dev-security-policy
On Thu, 19 Dec 2019 10:23:19 -0700
Wayne Thayer via dev-security-policy

> We've included a question about complying with the intermediate audit
> requirements in the January survey, but not a more general question
> about exceptions. I feel that an open-ended question such as this
> will be confusing for CAs to answer, and moreover I don't want to
> create the impression that Mozilla grants exceptions for policy
> violations because, as a general rule, we don't.

As a general rule you don't grant exceptions, and so exceptions are
let's say, an exception to that general rule? Hence the name.

So, to the same end as my original proposal, I recommend instead that
Mozilla personalizes any CA survey sent out to a CA which they believe
currently benefits from any such exceptions - setting out what those
exceptions to its rules are for that CA. And in all communications the
text should be clear that any exceptions the CA believed were in place
are in fact spent as far as Mozilla is concerned unless they are
enumerated in this communication.

In the event there are in fact NO exceptions, that's just one small
tweak to the text.

In the event that one or two CAs benefit from some minor exception
which still has force, it's a little bit of work, and in the process a
firm reminder to both Mozilla and the CA of the ongoing price of such
exceptions.

And in the event that it's actually dozens of exceptions across many or
most CAs I hope the realisation of the effort involved will cause Wayne
to reconsider his previous claim that "as a general rule, we don't".

One valuable opportunity from m.d.s.policy is for CAs to learn from
each other's mistakes and in doing so avoid making the same or similar
mistakes themselves. But Mozilla has opportunities to learn from
mistakes here too, and I feel as though the mismatch between Kathleen's
expectation (that a situation should have "resolved" since 2016) and
the CA's understanding (that this constituted an indefinite exception
to Mozilla policy) is such a mistake.


Re: [FORGED] Re: How Certificates are Verified by Firefox

2019-12-05 Thread Nick Lamb via dev-security-policy
On Wed, 4 Dec 2019 17:12:50 -0500
Ryan Sleevi via dev-security-policy

> Yes, I am one of the ones who actively disputes the notion that AIA
> considered harmful.

As not infrequently happens I can't agree with Ryan here. AIA chasing in
browsers is a non-trivial privacy leak AND doesn't match how the
specification says things work.

What I'd like to see, as with OCSP stapling, is for web /servers/ to
do the fix-up not browsers. If an operator doesn't take the initiative
to provide the server with a complete chain, it should do its own AIA
chasing to discern the chain and then provide that chain in the TLS
Certificate message. This obeys the specification AND makes the server
software easier to administer AND has few or no privacy implications.

No new standards development work is needed. Anybody can do this today,
but so far as I can tell nobody does.

I know Mozilla does outreach to server operators, but does it also do
any outreach to server software developers? Is the situation that
they've got their fingers in their ears about this, or that we aren't
yelling at the right people?


Re: Audit Letter Validation (ALV) on intermediate certs in CCADB

2019-11-26 Thread Nick Lamb via dev-security-policy
On Mon, 25 Nov 2019 14:12:46 -0800
Kathleen Wilson via dev-security-policy

> CAs should have been keeping track of and resolving their own known 
> problems in regards to not fully following the BRs and Mozilla
> policy. For example, I expect that a situation in which I responded
> with an OK in 2016 would have been corrected in the 3 years since
> that email was written.

Perhaps to this end it would be useful for Mozilla's periodic survey
letters to always ask each CA to list any exceptional circumstances they
believe currently apply to them?

This would act both as a reminder to Mozilla of any such exceptions
which they granted but may have assumed meanwhile ceased to be
relevant, AND to the CA of any such exceptions upon which they find
themselves still relying.

The publication of CA responses is an opportunity for Mozilla, Peers
and the wider community to comment on any discrepancy.


Re: [FORGED] Firefox removes UI for site identity

2019-10-30 Thread Nick Lamb via dev-security-policy
On Tue, 29 Oct 2019 10:54:18 -0700
Paul Walsh via dev-security-policy
> [PW] I agree with your conclusion. But you’re commenting on the wrong
> thing. You snipped my message so much that my comment above is
> without context. You snipped it in a way that a reader will think I’m
> asking about the old visual indicators for identity - I’m not. I
> asked Wayne if he thinks the new Firefox visual indicator for
> tracking is unnecessary. 

I see, with this explanation your post makes more sense but now seems
dreadfully off-topic.

Firefox added positive visual indicators for a variety of things in
recent years, such as audio playback, webcam and location, but those
would seem equally irrelevant to a discussion about the EV indicator.


Re: [FORGED] Firefox removes UI for site identity

2019-10-29 Thread Nick Lamb via dev-security-policy
On Mon, 28 Oct 2019 16:19:30 -0700
Paul Walsh via dev-security-policy
> If you believe the visual indicator has little or no value why did
> you add it? 

The EV indication dates back to the creation of Extended Validation,
and so the CA/Browser forum, which is well over a decade ago now.

But it inherits its nature as a positive indicator from the SSL
padlock, which dates back to the mid-1990s when Netscape developed SSL.
At the time there was not yet a clear understanding that negative
indicators were the Right Thing™, and because Tim's toy hypermedia
system didn't have much security built in there was a lot of work to
do to get from there to here.

Plenty of other bad ideas date back to the 1990s, such as PGP's "Web of
Trust". I doubt that Wayne can or should answer for bad ideas just
because he's now working on good ideas.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-30 Thread Nick Lamb via dev-security-policy
On Fri, 30 Aug 2019 12:02:42 -0500
Matthew Hardeman via dev-security-policy

> What's not discussed in that mechanism is how Google decides what
> pages are unsafe and when?

Yes, but the point was to show what shape Safe Browsing API is, I guess
I'd assumed this makes it obvious that EV doesn't really fit well but
didn't spell that out properly.

Google doesn't end up able to interrogate whether the site the user is
visiting presented them an EV certificate. Indeed in most cases it will
have no idea they visited a site, let alone which certificate was
presented.

But yes, it would be possible to use EV as an input to a manual
process to create the list of phishing pages. It would also be possible
to use astrology. If I were tasked with this I would not do either.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-30 Thread Nick Lamb via dev-security-policy
On Thu, 29 Aug 2019 18:44:11 -0700 (PDT)
Kirk Hall via dev-security-policy

> OK, I'll try one last time to see if you are willing to share Google
> information that you have with this group on the question at hand (Do
> browser phishing filters and anti-virus apps use EV data in their
> anti-phishing algorithms).  

For the AV apps I can totally believe they'd do this because bogus
assumptions are more or less their bread and butter. "It's an EV cert
so it's safe" is exactly the kind of logic I can imagine them employing.

But it really doesn't seem like a good fit for Google Safe Browsing,
if they do try to triangulate from EV that seems like a big leap to me.

For readers unfamiliar, let me briefly explain what Safe Browsing gives
you:

For every URL you're considering displaying you calculate a whole bunch
of cryptographic hashes, of the whole URL, just the FQDN and certain
other combinations. Then you truncate the hashes and you see if the
truncated hashes are in a small list Google gave you (a browser will
update this list periodically using a synchronisation API Google
designed for the purpose).

If one of your truncated hashes /is/ in the list, maybe this is
Phishing! You call Google, telling them the truncated hash you are
worried about, and Google gives you a complete list of full (not
truncated) hashes you should worry about with this prefix. It might be
empty (the phishing attack is gone) or have multiple entries.

Only if the full hash you were worried about is in that fresh list from
Google do you tell the user "Ohoh. Phishing, probably go somewhere
else". In all other cases everything is fine.

This design has important privacy properties because it means Google
definitely isn't told which pages you visit, and ordinarily it doesn't
even learn roughly how many pages you're visiting or anything like
that. Only when you try to visit a phishing site, or there's a random
coincidence, it learns (if it chooses to remember) that someone from
your IP either tried to visit a phishing site or there was a random
coincidence, and not which of those options it was.

Most Phishing detections aren't for a whole site, they are
page-specific. So maybe jims-oil-change.example is a perfectly
legitimate site for Jim the auto mechanic with a Let's Encrypt cert, but
his poorly configured PHP setup means bad guys create
https://jims-oil-change.example/.temp/ which is a
PayPal phish form.

The Safe Browsing design lets Google add the hash for that nasty
phishing page, without also making Jim's harmless front page get an
angry message in browsers.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-29 Thread Nick Lamb via dev-security-policy
On Thu, 29 Aug 2019 13:33:26 -0400
Lee via dev-security-policy 

> That it isn't my financial institution.  Hopefully I'd have the
> presence of mind to save the fraud site cert, but I'd either find the
> business card of the person I've been dealing with there or find an
> old statement, call and ask to be transferred to the fraud dept.

I commend this presence of mind.

> Same deal if the displayed info ends with (US) but doesn't match what
> I'm expecting, except I'd be asking the fraud dept about the name
> change instead of telling them.

Perhaps American banks are much better about this than those I've dealt
with, but certainly here in the UK "expecting" is tricky for ordinary
customers. As a domain expert I know why my good bank says:

first direct (HSBC Bank plc) (GB)

... but I wouldn't be surprised if many of their customers don't know
they're technically part of the enormous HSBC group.

NS&I's certificate spells their name out. Unfortunately their name is
quite long, which is why they prefer the abbreviation, so my browser
shows:
National Savings and Investme... (GB)

... but it would be perfectly legal to set up businesses with different
names that truncate exactly the same as this.

My mother banks with Halifax. Again I understand why, but I suspect
she'd be astonished if she stopped to read that it says:

Lloyds Banking Group plc (GB)

... in fact her bank is part of a larger group under a different name
and they didn't bother to get certificates that mention Halifax at all.

> I understand that ev certs aren't a panacea, but for the very few web
> sites that I really care about I like having the company name
> displayed automatically.  I think they're helpful and, since I use
> bookmarks instead of email links or search results, provide an
> adequate assurance that I've actually ended up on the web site I want.
> Is that an incorrect assumption?  What more should I be doing?

The implication of the UI change is that you needn't bother trying to
guess whether the Company Name is what you expected, if you are
visiting the bookmark for your bank (credit union, card issuer,
whatever), that will be your bank. As you have seen in this thread,
some people don't agree, but I endorse this view.

In a broader picture, there isn't much you should bother trying to do,
the onus is largely on the bank. You could try to use countermeasures
they provide e.g. per account images to re-assure you that they know
who you are before you complete login, but they're pretty likely to get
rid of them or change to new ones on a whim so it's scarcely worth it.

If you _work_ for such an institution, the best thing you could do to
protect your customers against Phishing, a very popular attack that
TLS is often expected to mitigate, is offer WebAuthn. Unfortunately the
FIDO tokens to enable WebAuthn are not cheap, making the idea of just
mailing one to every customer prohibitive. But certainly it could make
sense to offer this to High Net Worth Individuals or just let customers
use their own tokens if they want to.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-29 Thread Nick Lamb via dev-security-policy
On Thu, 29 Aug 2019 17:05:43 +0200
Jakob Bohm via dev-security-policy

> The example given a few messages above was a different jurisdiction
> than those two easily duped company registries.

I see. Perhaps Vienna, Austria has a truly exemplary registry when it
comes to such things. Do you have evidence of that? I probably can't
read it even if you do.

But Firefox isn't a Viennese product, it's available all over the
world. If only some handful of exemplary registries contain trustworthy
information, you're going to either need to persuade the CAs to stop
issuing for all other jurisdictions, or accept that it isn't actually
helpful in general.

> You keep making the logic error of concluding from a few examples to
> the general.

The IRA's threat to Margaret Thatcher applies:

We only have to be lucky once. You will have to be lucky always.

Crooks don't need to care about whether their crime is "generally"
possible, they don't intend to commit a "general" crime, they're going
to commit a specific crime.

> A user can draw conclusions from their knowledge of the legal climate
> in a jurisdiction, such as how easy it is to register fraudulent 
> untraceable business names there, and how quickly such fraudulent 
> business registrations are shut down by the legal teams of high
> profile companies such as MasterCard Inc.

Do you mean knowledge here, or beliefs? Because it seems to me users
would rely on their beliefs, that may have no relationship whatsoever
to the facts.

> That opinion still is lacking in strong evidence of anything but spot 
> failures under specific, detectable circumstances.

We only have to be lucky once.

> Except that any event allowing a crook to hijack http urls to a
> domain is generally sufficient for that crook to instantly get and
> use a corresponding DV certificate.

If the crook hijacks the actual servers, the game is over anyway,
regardless of what type of certificate is used.

Domain owners can set CAA (now that it's actually enforced) to deny
crooks the opportunity from an IP hijack. More sophisticated owners can
use CAA and DNSSEC to deny crooks the opportunity to use this even
against a DNS hijack, so that crooks need to attack a registrar or

If the crook only does some sort of IP hijack they need to control the
IP from the perspective of the issuer as well as from the perspective
of their target in order to obtain and use a DV certificate.

This means small hijacks (e.g. of a single ISP or public access point)
are unlikely to be effective for obtaining a certificate.

You are correct that a large hijack (e.g. BGP hijack to move an
entire /24 for most of the Internet to some system you control) would
work on most domains, BUT this is relatively difficult for an attacker,
cannot be done silently and is already being addressed by numerous
initiatives by people over in that community rather than m.d.s.policy.
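The CAA check the issuer performs is simple tree-climbing logic. Here is a minimal sketch of the RFC 8659 decision, assuming hypothetical zone data and deliberately ignoring `issuewild`, the critical flag, and CNAME handling:

```python
# Hypothetical zone data: FQDN -> list of (tag, value) CAA records.
ZONE = {
    "bank.example": [("issue", "ca-we-trust.example")],
}

def relevant_caa(fqdn):
    """RFC 8659 tree-climbing: use the CAA RRset of the closest
    ancestor (including the name itself) that has one."""
    labels = fqdn.split(".")
    for i in range(len(labels)):
        rrset = ZONE.get(".".join(labels[i:]))
        if rrset:
            return rrset
    return []

def may_issue(ca_domain, fqdn):
    records = relevant_caa(fqdn)
    if not records:
        return True  # no relevant CAA anywhere: issuance unrestricted
    issuers = {v.split(";")[0].strip()
               for tag, v in records if tag == "issue"}
    return ca_domain in issuers

print(may_issue("ca-we-trust.example", "www.bank.example"),
      may_issue("evil-ca.example", "www.bank.example"))  # True False
```

The point about DNSSEC is that without it, an attacker who controls the issuer's DNS path can simply forge an empty answer and the `not records` branch lets any CA issue.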

> Yes, I think you have repeatedly used the failures of UK and US
> company registries as reason to dismiss all other governments.

I don't have examples from other countries either way. I assure you if
I could say "Oh, in New Zealand it works great" based on solid
information like a track record of actually prosecuting people who
make bogus registrations - I'd do that.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-29 Thread Nick Lamb via dev-security-policy
On Wed, 28 Aug 2019 11:51:37 -0700 (PDT)
Josef Schneider via dev-security-policy

> Not legally probably and this also depends on the jurisdiction. Since
> an EV cert shows the jurisdiction, a user can draw conclusions from
> that.

Yes it is true that crimes are illegal. This has not previously stopped
criminals, and I think your certainty that it will now is misplaced.

What conclusions would you draw from the fact that the jurisdiction is
the United Kingdom of Great Britain and Northern Ireland? Or the US
state of Delaware?

Those sound fine right? Lots of reputable businesses?

Yes, because those are great places to register a business,
tremendously convenient. They have little if any regulation on
registering businesses, light touch enforcement and they attract a
modest fee for each one.

This is of course also exactly the right environment for crooks.

> But removing the bar is also not the correct solution. If you find
> out that the back door to your house is not secured properly, will
> you remove the front door because it doesn't matter anyway or do you
> strengthen the back door?

Certainly if crooks are seen to walk in through the back door and none
has ever even attempted to come through the upstairs windows, it is
strange to insist that removing the bars from your upstairs windows to
let in more light makes the house easier to burgle.

> The current
> EV validation information in the URL works and is helpful to some
> users (maybe only a small percentage of users, but still...)

Is it helpful, or is it misleading? If you are sure it's helpful, and
yet as we saw above you don't really understand the nuances of what
you're looking at (governments are quite happy to collect business
registration fees from crooks) then I'd say that means it's misleading.

> EV certificates do make more assurances about the certificate owner
> than DV certificates. This is a fact. This information can be very
> useful for someone that understands what it means. Probably most
> users don't understand what it means. But why not improve the display
> of this valuable information instead of hiding it?

The information is valuable to my employer, which does with it
something that is useless to Mozilla's users and probably not in line
with what EV certificate purchasers were intending, but I'm not on
m.d.s.policy to speak for my employer, and they understood that
perfectly well when they hired me.

In my opinion almost any conceivable display of this information is
likely to mislead users in some circumstances and bad guys are ideally
placed to create those circumstances. So downgrading the display is a
reasonable choice especially when screen real estate is limited.

> Certificates cannot magically bring security. Certificates are about
> identity. But the fact that the owner of the website is
> the owner of the domain is not that helpful in
> determining the credibility.

If I process a link (as browsers do many times in constructing even
trivial web pages these days) then this assures me it actually links to
what was intended.

This is enough to bootstrap WebAuthn (unphishable second factor
credentials) and similar technologies, to safeguard authentication
cookies and sandbox active code inside an eTLD+1 or narrower. All very
useful even though the user isn't aware of them directly.

For end users it means bookmarks they keep and links they follow from
outside actually lead where they should, and not somewhere else as
would trivially happen without this verification.

> But the information that the owner of
> is an incorporated company from Germany officially called
> "Somebank AG" is more valuable. Maybe some people don't care and
> enter their account data happily at, maybe most people
> do. We don't know and we probably can't know how many people stopped
> and thought if they are actually at the correct website because the
> green bar was missing. But I am certain that it was more than zero. 

Why are you certain of this? Just gut feeling?

> Why not for example always open a small overlay with information when
> someone starts entering data in a password field? Something like "You
> are entering a password at You visited this page 5 times
> before, first on August 4th 2019. We don't know anything about the
> owner" or for EV "You are entering a password at You
> visited this page 5 times before, first on August 4th 2019. This
> server is run by "WebPage GmbH" from Vienna, Austria [fancy flag
> picture]".

This server is run by "Authorised Web Site" from London, UK [Union Jack
picture].

Sounds legitimate.

Remember, the British government doesn't care that Authorised Web Site
is a stupid name for a company, that its named officers are the
characters in Toy Story, that its claimed offices are a building site,
nor even that it has never filed (and never will file) any business
accounts. They collected their registration fee and that's all they
care about.

Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Nick Lamb via dev-security-policy
On Fri, 16 Aug 2019 13:31:08 +
Doug Beattie via dev-security-policy

> DB: One of the reasons that phishers don't get EV certificates is
> because the vetting process requires several interactions and
> corporate repositories which end up revealing more about their
> identity.  This leaves a trail back to the individual that set up the
> fake site which discourages the use of EV. DV is completely anonymous
> and leaves very few traces.

It's really tangential to Mozilla's purpose but it's worth dispelling
this myth.

Nothing about your identity is revealed. Let's take the country I live
in as an example, it looks superficially as though you need to reveal a
lot of personal details to register a company in the United Kingdom.
Surely this is all backed up with the considerable power of the
government of a major world power, and so if I can track down which
company is behind a phishing site then the individuals responsible
won't be hard to find right?

Er, no. If you just lie on the paperwork nothing will happen. If
private citizens point out specifically that the paperwork for your
company is a tissue of lies, Companies House will reply to explain that
alas the government doesn't have sufficient resources to investigate or
do anything about it and so it's just too bad their records are largely
fictitious nonsense. Still they promise they _care_ about this, it's
a top priority, just not one that anything will be done about...

There has been exactly one prosecution for lying to Companies House in
the modern era. They had the money and pursued it through the courts
very enthusiastically on exactly that one occasion and no other. Guess
why? Because someone wrote up paperwork for a bogus company naming
famous politicians who'd done nothing to fix this for years. That was
bad publicity, and so the government threw resources at "fixing" the
problem, ie prosecuting the person who pointed out the corruption.

Read "Where there's Muck there's Brass Plates" for further examples of
how much worse than a few fraudsters phishing for bank credentials the
rot in British companies already is:


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-15 Thread Nick Lamb via dev-security-policy
On Thu, 15 Aug 2019 22:11:37 +0200
Eric Rescorla via dev-security-policy

> I expect this is true, but it seems to me that if anything it is an
> argument that EV doesn't provide security value, not the other way
> around: DV certificates are much cheaper to obtain than EV, and so
> naturally if you just need a certificate you're going to get DV.
> OTOH, if users actually trusted EV more, it might be worthwhile for
> an attacker to get EV anyway.

It is as ever simultaneously reassuring and annoying to see EKR wrote
what I was thinking but more succinctly and a few hours before I get
time to draft an email.


My interpretation is that a LOT of phishing sites in 2019 only
have DV certificates because that was the default. The crooks didn't
think "I need a certificate" they thought "I need a web site" and in
2019 a typical web site comes with a certificate - same as you don't
need to buy separate seatbelts for your car these days.

If we are looking to protect users from Phishing, we should promote
WebAuthn, not Extended Validation, because we know WebAuthn actually
protects users from phishing.


Re: Comodo password exposed in GitHub allowed access to internal Comodo files

2019-07-27 Thread Nick Lamb via dev-security-policy
On Sun, 28 Jul 2019 00:06:38 +0200
Ángel via dev-security-policy 

> A set of credentials mistakenly exposed in a public GitHub repository
> owned by a Comodo software developer allowed access to internal Comodo
> documents stored in OneDrive and SharePoint:
> It doesn't seem that it affected the certificate issuance system, but
> it's an ugly security incident nevertheless.

What was once the Comodo CA is named Sectigo these days, so conveniently
for us this makes it possible to simply ask whether the incident
affected Sectigo at all:

- Does Sectigo in practice share systems with Comodo such that this
  account would have access to Sectigo internal materials?

In passing it's probably a good time to remind all programme
participants that Multi-factor Authentication as well as being
mandatory for some elements of the CA function itself (BR 6.5.1), is a
best practice for any security sensitive business like yours to be using
across ordinary business functions in 2019. Don't let embarrassing
incidents like this happen to you.



Re: DarkMatter CAs in Google Chrome and Android

2019-07-25 Thread Nick Lamb via dev-security-policy
On Thu, 25 Jul 2019 13:16:44 -0500
Matthew Hardeman via dev-security-policy

>  Perhaps I misunderstand, but this would seem to suggest that there be
> direct penalties for mere pursuit of due process.

Mmm? Due process is something a minority of sovereign entities promise
(though they are not always very consistent in delivering), it has no
relevance to relationships between anybody else, including Mozilla,
Google, Dark Matter, myself or you.

And participation in Mozilla's root programme is, as the name implies,
solely in Mozilla's gift, and presumably likewise Google's. Not getting to
participate is not a "penalty".


Re: DarkMatter CAs in Google Chrome and Android

2019-07-25 Thread Nick Lamb via dev-security-policy
On Wed, 24 Jul 2019 14:32:41 + Scott Rea via dev-security-policy

> As you are aware, DarkMatter and DigitalTrust have appealed the
> decision by Mozilla on the basis of multiple elements which have also
> be published to the list. Has the appeal or any of the points at the
> heart of that appeal been taken into account in this decision by
> Google?

Surely the answer is "Yes"? I mean, it makes strategic sense to react
to a CA which tries to appeal a trust store decision over the heads of
the people making it in exactly this way - by distrusting it.

I think it's what I would advise an independent trust store to do in
this situation.


Re: Certinomis Issues

2019-05-28 Thread Nick Lamb via dev-security-policy
PSD2 is the Payment Services Directive 2, a Directive from the European
Union. Directives aren't legislation per se, but tell the member states
to write their own legislation to achieve some agreed outcome. Many
things you think of as EU laws are actually Directives; as a citizen the
broad effect of a Directive should be pretty similar everywhere in the
EU, but implementation details vary a lot.

AIUI PSD2 has numerous goals following on from the previous, successful,
Payment Services Directive, but they did once again get into the game of
defining what X.509 certificates should mean and how issuers should
validate information. So they've got themselves an OID arc for new
policy OIDs.

If these OIDs are used in certs in the Web PKI then such certificates
would need to obey both sets of rules, but as a relying party I can't
say I care about the EU rules at all until I see some clear benefit,
whereas the benefit of rules from Mozilla and the CA/B Forum is already
clear.

If they shove a valid but nonsensical policy OID into a cert I don't
know what Mozilla policy about that would be, but certainly browsers and
common TLS clients will just ignore it altogether.

Re: GlobalSign misissuance: 4 certificates with invalid CN

2019-05-18 Thread Nick Lamb via dev-security-policy
On Fri, 17 May 2019 21:11:41 +
Doug Beattie via dev-security-policy

> Today our post issuance checker notified us of 4 certificates were
> issued with invalid CN values this afternoon.
> We posted our incident report here:

Thanks Doug,

I have two questions that seem relevant to this incident, because it
is reminiscent of problems we had with the sprawl of issuance systems
under Symantec.

1. I have examined one of the certificates and I see it contains a bogus
SAN dnsName matching the CN. Please let us know which constraints that
should be in place weren't in place for this API, for example could the
customer have successfully obtained a certificate for a FQDN which has
CAA policy saying GlobalSign should not issue?

2. The API is described as "deprecated" but I'd like more details to
understand what that means from a practical standpoint. A subscriber
was able (and by the sound of things continues to be able) to cause
issuance through this API - was there already a specific date after
which GlobalSign had announced (to such customers) that the API would
cease availability? Is an equivalent, but so far as you understand
compliant, replacement API for these customers already available? How
should a GlobalSign customer have known this API (or software using it)
was deprecated and when they needed to stop using it?

"In coordination with the customer, we are assured that no more
non-compliant certificates will be issued" certainly reads to me like
you know this API could issue more non-compliant certs right now, but
you're content to let a subscriber pinky swear not to do so. I don't
think that's what Mozilla has in mind with the phrase "a pledge to the
community" but perhaps Wayne disagrees.


Re: CAA record checking issue

2019-05-11 Thread Nick Lamb via dev-security-policy
On Fri, 10 May 2019 02:05:17 +
Jeremy Rowley via dev-security-policy

> Anyway, let me know what questions, comments, etc you have.

Thanks Jeremy,

If DigiCert is able to retrospectively achieve confidence that issuance
would have been permitted (because their records are good enough to go
back and see the CAA DNS records that were fetched but not used or at
the least the assessment made of those records at the time) I personally
think there is no need to revoke certificates that were in some sense
legitimately issued. To revoke them in these circumstances seems
disproportionate.

This also rewards keeping high quality issuance records that let you go
back and understand what went wrong. The BRs mandate some record
keeping, but we definitely don't always see evidence of good quality
record keeping in incident reports (I would count ISRG / Let's Encrypt
here definitely).

If DigiCert turns out not to have the records, or checking isn't done
for whatever reasons then I think all 1053 affected certs should be
revoked, without trying to justify narrowing it down further.

In the margins, e.g. if DigiCert can see that some cases have no CAA,
but in cases with CAA it's not possible to be sure if it would have
permitted issuance, I think we need to ask for all 1053 to be revoked
for consistency rather than making complicated decisions that have the
effect of penalizing some subscribers for doing the Right Thing.

I don't endorse the plan of revoking 16 certs based on CAA information
that's far (perhaps more than 12 months) newer than the issuance, I
don't think this is compatible with the declared philosophy of CAA,
and it makes the message about what CAA is or is not for too muddled.
Revoking all 1053 makes more sense than revoking 16 on this basis.


Re: Certificates with subject locality "Default City"

2019-05-02 Thread Nick Lamb via dev-security-policy
On Thu, 2 May 2019 12:15:33 -0500
Alex Cohn via dev-security-policy

> I came across a number of certificates issued by Sectigo, SECOM, and
> DigiCert that list "Default City" as the subject's locality. Unless
> there are actually localities named "Default City" that I'm unaware
> of, it seems to me this is a violation of the BRs, sections
> and

I agree with you that this isn't what is wanted by the BRs.

In terms of diagnostics, I would say that L="Default City" has ended up
in CSRs because it's the default in OpenSSL (which explains the
diversity of affected issuers and applicants). That's also going to
spill over into appliances that embed OpenSSL and where a CSR may be
the only way to do things because the designers quite reasonably don't
let you upload or download private keys.
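A cheap post-issuance lint would catch this class of mistake before a human ever sees the order. A minimal sketch, where the placeholder list is illustrative (only "Internet Widgits Pty Ltd" and "Some-State" are upstream OpenSSL prompts; "Default City" / "Default Company Ltd" ship in some distributions' openssl.cnf) and the `suspicious_fields` helper is hypothetical:

```python
# Subject attribute values that are stock defaults in common tooling,
# and therefore almost certainly unvalidated placeholder text.
PLACEHOLDERS = {
    "L": {"Default City"},
    "O": {"Default Company Ltd", "Internet Widgits Pty Ltd"},
    "ST": {"Some-State"},
}

def suspicious_fields(subject):
    """Return the subject attribute types whose value is a known
    tooling default; a non-empty result should block issuance."""
    return [attr for attr, defaults in PLACEHOLDERS.items()
            if subject.get(attr) in defaults]

print(suspicious_fields({"CN": "www.example.com",
                         "L": "Default City",
                         "C": "US"}))  # ['L']
```

Such a check costs nothing to run on every order and would have flagged all of the certificates Alex found, regardless of which CA's validation agent was looking at the CSR.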

Alex, you say you "came across" these certificates, do you think it is
likely that there are many more, or was that in practice a fairly
thorough search?

I do have some questions for CAs implicated here:

I assume that in each case the ultimate cause is that a human agent
accepted that this subject (with L=Default City, but in examples I saw
otherwise entirely normal) was correct, when it fact L=Default City
means it is incorrect. If I'm wrong about that, please let me know.
Some mistakes are inevitable, but what we do about them is important.

1. If a certificate issued this way was some day implicated in a serious
security incident, would you be able to identify the specific human
individual who made that decision - from existing records and in a
timely fashion ? This would make it possible for investigators to
question that person about their possible connection to the incident.

2. Does your process specifically allow any "slop" such as typographical
mismatches, additional or missing address lines and so on, beyond those
specifically enumerated in the BRs when matching a Subject address? Can
you say what sort of "slop" that is, and justify why it's permitted?

Presumably you have some process to validate that the human agents do a
good enough job, e.g. through sampling their work.

3. Has any sampling or other validation ever brought any such "Default
City" CSRs to your attention previously ? How about other mistakes in
CSRs of this same sort, e.g. Default values, common non-existent
cities, countries, etcetera. If so please say briefly what you did
about them before.

4. Do you believe your agents would feel empowered to ask questions
about the process, that they genuinely understand what we're trying to
achieve and they feel they have the time and resources needed to do a
good job, so that what we're seeing here is the best we can reasonably
expect from human validators?


Re: AT&T SSL certificates without the AIA extension

2019-04-30 Thread Nick Lamb via dev-security-policy
On Mon, 29 Apr 2019 12:41:07 +
Doug Beattie via dev-security-policy
> It should be noted that these certificates are not posted to CT logs
> nor are they accessed via browsers as they are used within closed
> networks, but we'll get more details on their exact usage shortly.

Hi Doug,

Thanks for reporting this problem. I appreciate that this subCA doesn't
see a proportionate reward for logging these certs in the existing well
known public logs, and so it makes sense that they wouldn't write to
them.

I'm also glad to hear that a 100% sample policy was in place with, it
sounds like, a monthly audit period, given the volumes involved (from
what I can see publicly in e.g. Censys) that seems like a good idea.

Still, in terms of your audit oversight role it could make sense, as
software is replaced/ upgraded, to switch to private CT logging as a
substitute for a human role of uploading certs for audit.

From your description it sounds as though GlobalSign reasonably trusts
that the assigned AT&T employee will provide them with an accurate set
of certs; the thing we're protecting against here is accident or
mistake, not a malevolent subCA operator, which would be very hard to
detect this way. Unfortunately this employee (and perhaps one or more
deputies) were on leave. If that assessment is correct then software
which uses RFC 6962 methods to write certs on issuance to a log operated
by GlobalSign would satisfy this requirement automatically without a
human action.

With the log not publicly trusted it could operate a much more relaxed
policy (e.g. MMD 7 days or even not defined, not publicly accessible)
but it would avoid this dependency on a specific person at AT&T doing a
manual step periodically in order for GlobalSign to have sight of issued
certificates.

With the relative popularity of RFC 6962 logging, this becomes an
off-the-shelf hook that can be used to support audit roles easily
without either manual steps to export the certificates or special
modifications to the issuance software. You mentioned EJBCA
specifically in this post, and so I verified that as expected EJBCA
does provide a means for CA operators to configure a log without also
then embedding SCTs in certificates (which might not be desirable for
AT&T's application).


Re: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-13 Thread Nick Lamb via dev-security-policy
On Fri, 12 Apr 2019 16:56:23 +
Jeremy Rowley via dev-security-policy

> I don't mind filling in details.
> We have a system that permits creation of certificates without a CSR
> that works by extracting the key from an existing cert, validating
> the domain/org information, and creating a new certificate based on
> the contents of the old certificate. The system was supposed to do a
> handshake with a server hosting the existing certificate as a form of
> checking control over the private key, but that was never
> implemented, slated for a phase 2 that never came. We've since
> disabled that system, although we didn't file any incident report
> (for the reasons discussed so far).  

Thanks Jeremy

I agree that in TLS specifically there's no direct way to leverage these
certificates to do anything awful. So for m.d.s.policy's core purpose
of caring about Mozilla/ Firefox there's no problem here, and as others
have noticed the BRs are silent on this. Though perhaps they should not
be.

I am not so sure in the general case, it is certainly possible in the
very general sense to create scenarios in which something resembling the
Confused Deputy problem arises with this sort of certificate, a loose
example follows taking inspiration from the work done recently on TLS
1.3 PSK attacks by Drucker and Gueron:

1. Trent is a Trusted Third Party, in this case a CA issuing IOT devices
certificates tying their identity to a public key. Unfortunately Trent
is easily confused as we shall see

2. These IOT devices don't do TLS but have some custom public key
protocol using Trent's certificates. One feature in this protocol is the
[MUTE] message to tell devices you want nothing further to do with them.

3. Alice, the Archive System, has a cert (Alice,A). Bob, the video
surveillance system also has a cert (Bob,B). And finally there's a
singing fish toy Carol with a cert (Carol,C) received as a free gift.

4. The makers of Carol trick Trent into issuing (Carol,A) a certificate
with Carol's identity but Alice's public key

5. Carol presents Bob with (Carol,A) and annoys Bob with constant
nonsense, knowing that in the protocol Bob can reply with a [MUTE]
message to make her stop.

6. Bob sends a message to Carol, but using the A public key. Carol can't
read this message since she does not know the A private key, but she can
reasonably guess it's a [MUTE].

7. Carol relays Bob's [MUTE] to Alice. It is encrypted to Alice, and
signed by Bob, so Alice will consider this a valid [MUTE] message from
Bob.
8. Now the video surveillance footage is not archived, because a toy
fish switched it off... it may be very difficult to diagnose that the
problem was with Trent, issuing this bogus (Carol,A) cert, as even if
suspicion falls on Carol (or Carol's makers) it's far from obvious how
they could cause Bob to send Alice a message.
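
For anyone who prefers code to prose, the eight steps above can be
sketched as a toy model. No real cryptography here: keys are just
labels, and "encrypting to K" merely tags a message so that only a
holder of K's private half can read it.

```python
certs = {}  # Trent's issued certificates: identity -> public key

def trent_issue(identity, pubkey):
    # Step 4 is Trent being tricked into binding Carol's identity to A.
    certs[identity] = pubkey

def encrypt_to(pubkey, msg):
    return {"to_key": pubkey, "msg": msg}

def decrypt(private_keys, envelope):
    # Only a party holding the matching private key can read the message.
    return envelope["msg"] if envelope["to_key"] in private_keys else None

# Steps 1-3: legitimate certificates.
for identity, key in [("Alice", "A"), ("Bob", "B"), ("Carol", "C")]:
    trent_issue(identity, key)

# Step 4: the bogus (Carol, A) certificate.
trent_issue("Carol", "A")

# Steps 5-6: Bob replies to "Carol" using the key in her cert -- which is A.
mute = encrypt_to(certs["Carol"], "[MUTE]")
assert decrypt({"C"}, mute) is None      # Carol herself can't read it...
assert decrypt({"A"}, mute) == "[MUTE]"  # Step 7: ...but Alice can, and
                                         # treats it as addressed to her.
```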

The fact that DigiCert's CPS says explicitly that it will check CSRs is
a good thing. Not checking them is a bad thing. Is the situation that
we need to spell out every single basically good idea in the BRs or
Mozilla policy to ensure CAs don't think it's optional and stop doing
it? Let's hope not.

dev-security-policy mailing list

Arabtec Holding public key?

2019-04-10 Thread Nick Lamb via dev-security-policy
(Resending after I typo'd the ML address)

At the risk of further embarrassing myself in the same week, while
working further on mimicking Firefox trust decisions I found this
pre-certificate for Arabtec Holding PJSC:

Now there's nothing especially strange about this certificate, except
that its RSA public key is shared with several other certificates

... such as the DigiCert Global Root G2:

I would like to understand what happened here. Maybe I have once again
made a terrible mistake, but if not, surely this means either that the
issuing authority was fooled into issuing for a key the subscriber
doesn't actually have or, worse, that this Arabtec Holding outfit has
the private keys for DigiCert's Global Root G2.
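
Checking a suspicion like this mechanically is straightforward: compare
the certificates' SubjectPublicKeyInfo bytes. A sketch using the
third-party `cryptography` package; the helper names are mine, and in
practice the PEM bytes would come from a CT log or crt.sh:

```python
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_der(pem_bytes: bytes) -> bytes:
    """Return the DER-encoded SubjectPublicKeyInfo of a PEM certificate."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )

def same_public_key(pem_a: bytes, pem_b: bytes) -> bool:
    # Byte-identical SPKI structures mean the two certificates certify
    # exactly the same key pair.
    return spki_der(pem_a) == spki_der(pem_b)
```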


Re: Mozilla cert report - am I holding it wrong?

2019-04-09 Thread Nick Lamb via dev-security-policy
On Tue, 9 Apr 2019 14:07:55 -0400
Ryan Sleevi via dev-security-policy

> I think it's merely a misparsing of the description.
> The intermediate you referenced - -
> chains to a "root in Mozilla's program with the Websites trust bit
> set". That root is, and you can see, it has
> the Website Trust Bit set.


> I suspect you parsed it as "intermediates ... with the websites trust
> bit set", but that's not what that report is.

Yes, I see. So I was indeed holding it wrong. Thanks for this clear
explanation Ryan.


Mozilla cert report - am I holding it wrong?

2019-04-09 Thread Nick Lamb via dev-security-policy
Mozilla's wiki has a page about the subCAs

On that page I see a link labelled:

"Non-revoked, non-expired Intermediate CA Certificates chaining up to
roots in Mozilla's program with the Websites trust bit set"

And clicking that link produces a CSV file. Fine so far.

I anticipated that this CSV file would be a set of subCA certs which
were trusted by Firefox to issue leaf TLS certs, since on the face of
it that's what the title claims.

But, that seems to be wrong, for example the file includes
"Symantec Shared Individual Email Certificate Authority"

which as its name suggests does not have the Websites trust bit set

So. What's actually going on here? Is there a trick that I'm not
understanding to processing this file? Why are there certs in it that
actually aren't for trusted subCAs at all?

Is the link wrong?

What is the recommended procedure for someone who wants to determine
whether a random leaf cert they're looking at would in fact be trusted
in Firefox? Other than "try it in Firefox" ?
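
For what it's worth, mechanically slicing the CSV to the rows you
expect is only a few lines; the column names below are invented for
illustration, since I don't have the real header to hand:

```python
import csv
import io

# Hypothetical miniature of the report; substitute the downloaded file
# and the actual header names when doing this for real.
data = io.StringIO(
    "CA Owner,Certificate Name,Trust Bits\n"
    "Example CA,Example TLS Sub CA,Websites\n"
    "Example CA,Example Email Sub CA,Email\n"
)
rows = [r for r in csv.DictReader(data) if "Websites" in r["Trust Bits"]]
assert [r["Certificate Name"] for r in rows] == ["Example TLS Sub CA"]
```

Of course, as this thread shows, the harder problem is knowing what the
column actually describes (the root's trust bits, not the
intermediate's), not the filtering itself.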


Re: CFCA certificate with invalid domain

2019-03-17 Thread Nick Lamb via dev-security-policy
On Fri, 15 Mar 2019 19:41:58 -0400
Jonathan Rudenberg via dev-security-policy

> I've noted this on a similar bug and asked for details:

I can't say that this pattern gives me any confidence that the CA
(CFCA) does CAA checks which are required by the BRs.

I mean, how do you do a CAA check for a name that can't even exist? If
you had the technology to run this check, and one possible outcome is
"name can't even exist" why would you choose to respond to that by
issuing anyway, rather than immediately halting issuance because
something clearly went badly wrong? So I end up thinking that CFCA
probably does not actually check names with CAA before issuing, or at
least does not check the names actually issued.
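
To make "name can't even exist" concrete: a purely syntactic
pre-issuance check takes a dozen lines. A sketch assuming LDH-label
rules (RFC 952 as updated by RFC 1123): letters, digits and hyphens
only, no leading or trailing hyphen, labels of 1-63 octets, whole name
at most 253 octets:

```python
import re

# One DNS label: letters/digits/hyphens, no leading or trailing hyphen.
LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_possible_hostname(name: str) -> bool:
    name = name.rstrip(".")  # tolerate a trailing root dot
    if not name or len(name) > 253:
        return False
    return all(LABEL.match(label) for label in name.split("."))

# A CA running even this check would halt before ever attempting a CAA
# lookup for a name that cannot exist.
assert is_possible_hostname("example.com")
assert not is_possible_hostname("www.example..com")  # empty label
assert not is_possible_hostname("bad_host.example")  # underscore
```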


Re: Possible DigiCert Mis-issuance

2019-02-28 Thread Nick Lamb via dev-security-policy
On Thu, 28 Feb 2019 05:52:14 +
Jeremy Rowley via dev-security-policy

Hi Jeremy,

> 4. The validation agent specified the approval scope as

I assume this is a typo by you not the agent, for ?

Meanwhile, and without prejudice to the report itself once made:

> 2. The system marked the WHOIS as unavailable for automated parsing
> (generally, this happens if we are being throttled or the WHOIS info
> is behind a CAPTCHA), which allows a validation agent to manually
> upload a WHOIS document

This is a potentially large hole in issuance checks based on WHOIS.

Operationally the approach taken ("We can't get it to work, press on")
makes sense, but if we take a step back there's obvious potential for
nasty security surprises like this one.

There has to be something we can do here; I will spitball something in
the next paragraph just to have a starting point. But if it turns out
we can't improve on basically "sometimes it doesn't work so we just
shrug and move on", we need to start thinking about deprecating this
approach altogether. Not just for DigiCert, for everybody.

- Spitball: What if the CA/B went to the registries, at least the big
  ones, and said we need this, strictly for this defined purpose, give
  us either reliable WHOIS, or RDAP, or direct database access or
  _something_ we can automate to do these checks ? The nature of CA/B
  may mean that it's not appropriate to negotiate paying for this
  (pressuring suppliers to all agree to offer members the same rates is
  just as much a problem as all agreeing what you'll charge customers)
  but it should be able to co-ordinate making sure members get access,
  and that it isn't opened up to dubious data resellers that the
  registries don't want rifling through their database.

My argument to the registries would be that this is a service for their
customers. Unlike the data resellers, either the registry customer, or
some agent of theirs is asking you to authenticate their registration,
so giving you access makes sense as part of what the registry does for
its customers anyway.

> 7. During the review, no one noticed that the WHOIS document did not
> match the verification email nor did anyone notice that the email
> used for verification was actually a constructed email instead of the
> WHOIS admin email

So, reviews are good, but this review was not very effective. It would
be valuable to consider in the final report why not, and how that can
be improved.

Just to be clear though, are you sure "no one noticed" ? It can happen
that in review processes somebody does notice the issue, but they
are persuaded or persuade themselves that it's fine. A British railway
incident occurred when the person transcribing a document effectively
"moved" a railway crossing. Manual reviewers did see it, and so did the
controllers responsible for managing the crossing, but both persuaded
themselves that the movement must be a correction and approved it.

With the crossing now shown in the wrong place, instructions authorising
use of the crossing were no longer protected by the controller's view
of the movement of trains. This resulted in a "near miss", and thanks
to the would-be victim's persistence in demanding it be properly
investigated, accident investigators fortunately visited the crossing,
found the mistake and had things corrected before anyone died.


Re: DarkMatter Concerns

2019-02-27 Thread Nick Lamb via dev-security-policy
On Wed, 27 Feb 2019 09:30:45 -0500
Alex Gaynor via dev-security-policy

> Finally, I think there's a point that is very much being stepped
> around here. The United States Government, including its intelligence
> services, operate under the rule of law, it is governed by both
> domestic and international law, and various oversight functions. It
> is ultimately accountable to elected political leadership, who are
> accountable to a democracy.

So, on my bookshelf I have a large book with the title "The Senate
Intelligence Committee Report On Torture".

That book is pretty clear that US government employees, under the
direction of US government officials and with the knowledge of the
government's executive tortured and murdered people, to no useful
purpose whatsoever. For the avoidance of doubt, those are international
crimes.

Are those employees, officials and executives now... in prison? Did
they face trial, maybe in an international court explicitly created for
the purpose of trying such people?

Er, no, they are honoured members of US society, and in some cases
continue to have powerful US government jobs. The US is committed to
using any measures, legal or not, to ensure none of them see justice.

Sure, there are lots of places where there wouldn't even be a book
published saying "Here are these terrible things we did". But that's a
very low bar. For the purposes of m.d.s.policy we definitely have to
assume that the United States of America very much may choose to
disregard the "rule of law" if it suits those in power to do so.

I don't think the insistence that the UAE is definitively worse than
the US helps this discussion at all. We're not here to publish books
about awful things done by governments years after the fact, we're here
to protect Relying Parties. It is clear they will need protecting from
the US Government _and_ the United Arab Emirates.


Re: Possible DigiCert Mis-issuance

2019-02-27 Thread Nick Lamb via dev-security-policy
On Tue, 26 Feb 2019 17:10:49 -0600
Matthew Hardeman via dev-security-policy

> Is it even proper to have a SAN dnsName in ever?

It does feel as though ARPA should consider adding a CAA record to and similar hierarchies that don't want certificates,
denying all CAs, as a defence in depth measure.
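
Such a deny-all record is a one-liner per RFC 8659; a sketch in
zone-file form, with the owner name as a placeholder since the post
doesn't pin down the exact hierarchy:

```
; Forbid all CAs from issuing for this name; names beneath it that have
; no CAA record of their own inherit the effect via CAA's tree-climbing
; lookup algorithm.
example.arpa.  86400  IN  CAA  0 issue ";"
```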


Re: DarkMatter Concerns

2019-02-25 Thread Nick Lamb via dev-security-policy
On Sat, 23 Feb 2019 10:16:27 +0100
Kurt Roeckx via dev-security-policy
> I would also like to have a comment from the current root owner
> (digicert?) on what they plan to do with it.

Two other things would be interesting from Digicert on this topic

1. To what extent does DarkMatter have practical ability to issue
independently of Digicert?

It would be nice to know where this is on the spectrum of intermediate
CAs, between the cPanel intermediate (all day-to-day operations
presumably by Sectigo and nobody from cPanel has the associated RSA
private keys) and Let's Encrypt X3 (all day-to-day operations by Let's
Encrypt / ISRG and presumably nobody from IdenTrust has the associated
RSA private keys)

2. Does DigiCert agree that, currently, misissuances, even on seemingly
minor technical issues like threadbare random serial numbers, are their
problem, since they are the root CA and ultimately responsible for this
intermediate?
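
On the serial number point, the tell is statistical rather than
per-certificate: BR section 7.1 requires at least 64 bits of CSPRNG
output in the serial, but the old EJBCA default generated 8-byte
serials with the top bit forced clear to keep the ASN.1 INTEGER
positive, leaving only 63 bits. A sketch (mine, not anything from this
thread) of how that shows up over a batch:

```python
import secrets

def max_bit_length(serials):
    # Over a batch of genuinely 64-bit-random serials, the top bit is
    # set roughly half the time, so the maximum bit length reaches 64
    # almost surely. A batch that never exceeds 63 bits is a red flag.
    return max(s.bit_length() for s in serials)

ejbca_style = [secrets.randbits(63) for _ in range(200)]  # top bit clear
compliant   = [secrets.randbits(64) for _ in range(200)]

assert max_bit_length(ejbca_style) <= 63  # high bit never set: suspect
assert max_bit_length(compliant) == 64    # (fails with prob ~2**-200)
```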


Re: Certificate issued with OU > 64

2019-02-18 Thread Nick Lamb via dev-security-policy
On Fri, 15 Feb 2019 05:05:16 -0800 (PST)
info--- via dev-security-policy 

> Feb 14th 13:28 -> reported the incident to our PKI software
> manufacturer
> Feb 14th 15:24 -> received the answer from the
> manufacturer. They tell us that there’s a bug in the preventive
> filter with the OU, and that they have a hotfix to solve it.
> Feb 14th 17:21 -> Izenpe reports to list

One value from incident reports is that other participants can learn
from what has happened rather than having to learn only from their own
mistakes.
With that in mind, two thoughts

1. This incident report doesn't tell me whether the "PKI software
manufacturer" has other customers in the Web PKI. We should definitely
want not only Izenpe but any other participating CAs using the same
software to apply a fix for this issue or verify that they're
unaffected.
The most trivial way to achieve that would be for Izenpe to tell us the
name of this manufacturer and any other info (e.g. patched build
numbers, manufacturer's internal bug tracker codes) other CAs (which
are obliged to watch m.d.s.policy by Mozilla rules) should follow.

However it may be that this is too commercially sensitive, and if that
is the case Izenpe (and other CAs who find themselves in a similar
situation) should, I think, make sure the "PKI software manufacturer"
tells any other customers who may be affected, and add comments to this
effect in the Incident follow-up so that those following along know it
was taken care of, without it being made public which exact CAs are
using the same vendor's software.

2. Where third party software is essential to the Web PKI, I'd
encourage openness from this "PKI software manufacturer" and anybody
else in the like business. That could mean participating here (you
needn't mention specific customers if that's a problem) or it could mean
setting up a continuing discussion elsewhere, especially if it's
going to inevitably drift far from Mozilla's focus.

There's plenty to discuss about security of these products without
straying into commercially sensitive issues like pricing or
non-security features, but it feels as though in reality a lot of the
time the makers don't talk to one another, which can mean the same
problem recurs in different software because lessons were not passed
on: the exact thing m.d.s.policy Incident reports are intended to
prevent.



Re: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-25 Thread Nick Lamb via dev-security-policy
On Thu, 24 Jan 2019 10:04:00 +0100
Kurt Roeckx via dev-security-policy
> Will you fill something in in the commonName? I think what is
> expected in the commonName is what the user would type and expect to
> see, I don't think the commonName should contain
> If you have a commonName, I would expect that
> it contains gauß And if you create a commonName then, you
> are required to check that it matches the in
> the SAN.

I have two responses to this, first the practical one:

In Firefox (our most direct concern here on m.d.s.policy) of course CN
is entirely ignored for matching certificates in the Web PKI.

However many other clients exist, and we know most of them continue to
parse CN as you might have done twenty years ago trying to find some IP
address or DNS name in the human readable text. In some cases they
either don't understand SANs, or they prioritise matching CN over SANs.

This is a bad idea (if you are reading this and have responsibility for
the name matching algorithm in either a client or library I implore you
to go look at this again) but it's out there today and isn't going
away in the immediate future.

Concrete example: Until relatively recently Python's SSL/TLS
implementation, including in the very popular "Requests" library, would
match a Unicode hostname string against CN or SANs, even though that's
not correct behaviour. When a user asks to connect to 瞺瞹砡.example
the Python code correctly determines that it needs the DNS name
xn--b6yb42a.example to find the IP address but it still expects the
certificate to match 瞺瞹砡.example not xn--b6yb42a.example. This is of
course impossible for SANs by definition, and that impossibility was
helpful in persuading developers that their understanding of what
needed to happen here couldn't be correct.
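
A small illustration of why A-labels are the safe thing to write, using
Python's built-in "idna" codec (which implements IDNA2003; the
third-party idna package is needed for IDNA2008 behaviour):

```python
# A SAN dNSName may only carry the ASCII A-label form.
assert "bücher.example".encode("idna") == b"xn--bcher-kva.example"

# The IDNA2003 quirk behind this thread's subject line: nameprep
# case-folds U+00DF ("ß") to "ss", so the U-label "gauß" maps to a
# *different* ASCII name entirely -- exactly the sort of ambiguity a CA
# invites by conjuring U-labels for the CN.
assert "gauß.example".encode("idna") == b"gauss.example"
```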

I (as a relying party) would prefer that failure modes that fall out of
this sort of error aren't fatal to security. CAs that write SANs as
IA5-Strings with A-labels into CN fail safely here, whereas those which
try to conjure U-labels for a Unicode String risk tricking some of this
bad parser code into accepting a certificate for one name as valid for
a similar but different name or blowing up the parser itself (I haven't
seen examples where UCS-2 string data ends up written to a NUL-byte
terminated C string, but I would not be surprised if it happens).

For compatibility reasons omitting CN altogether is not usually a good
plan, so to me that leaves writing the A-labels as the best option. I
believe Let's Encrypt currently has experiments ongoing as to how to
opt out of writing CN, but there's no intent to actually stop doing it
by default.

Second, a philosophical response:

The purpose of the Subject DN is to identify the Subject to a Relying
Party and we want it to be clear exactly which Subject we're
identifying. It is difficult, and maybe impossible, for a Certificate
Authority to specify how the user's input will be handled or how
exactly a name will be displayed in every possible user agent software.
On the other hand, the DNS A-labels, though unfamiliar to a human and
unwieldy to think about, have the advantage that they're definitely
identifying the specific thing we validated, not anything with a
different but similar name.

The reason it's hard for the CA to reason about Unicode names is that
not only do you have all of IDNA-2003, IDNA-2008, TR#46 but also
browsers have lots of counter measures (and the exact counter measures
deployed in famous brand browsers have changed over time) for the
problem of confusable DNS names. A browser may choose to write an IDN
in Punycode to avoid confusing users into believing the IDN is actually
some distinct name that merely looks similar.

My preferred outcome here would be for CAs to just voluntarily
choose not to write U-labels into CN AND for user agents to stop trying
to parse CN instead of just handling SANs. I think that's easier and
safer for basically everybody. But I don't feel strongly enough about
it to want "Incident Reports" for every scenario where this didn't
happen.

I do feel strongly enough about it that if an incident does happen and
the proximate cause was "We write U-labels into CN and that tripped a
bug" there's a good chance I will do the Nelson Muntz laugh and no
chance I'll have sympathy for the CA this happened to.


Re: Use cases of publicly-trusted certificates

2018-12-30 Thread Nick Lamb via dev-security-policy
On Thu, 27 Dec 2018 16:56:39 -0800
Peter Bowen via dev-security-policy

> - The character Asterisk (U+002A, '*') is not allowed in dNSName SANs
> per the same rule forbidding Low Line (U+005F, '_').   RFC 5280 does
> say: "Finally, the semantics of subject alternative names that
> include wildcard characters (e.g., as a placeholder for a set of
> names) are not addressed by this specification.  Applications with
> specific requirements MAY use such names, but they must define the
> semantics."  However it never defines what "wildcard characters" are
> acceptable.  As Wikipedia helpfully documents, there are many
> different characters that can be wildcards:
>  The very same
> ballot that attempted to clarify the status of the Low Line character
> tried to clarify wildcards, but it failed.  The current BRs state
> "Wildcard FQDNs are permitted." in the section about subjectAltName,
> but the term "Wildcard FQDN" is never defined.  Given the poor
> drafting, I might be able to argue that Low Line should be considered
> a wildcard character that is designed to match a single character,
> similar to Full Stop (U+002E, '.') in regular expressions.

Are you, in fact, now arguing this? If you, in fact, ever believed
this, do you not think it has very significant implications that should
have been raised previously?

e.g. If these are wildcards, putting one in an EV cert would be a
serious problem. Did you go back and check there were problem reports
for any cases where EV certs have these imaginary underscore wildcards?

Let's be real: There was never any such idea, the underscores are not
"wildcards" they're present because some CAs took a lackadaisical
approach to name validation that suited their customers better.

> - The meaning of the extendedKeyUsage extension in a CA certificate is
> unclear.  There are at least two views: 1) It constrains the use of
> the public key in the certificate and 2) It constrains the use of
> end-entity public keys certified by the CA named in the CA
> certificate.  This has been discussed multiple times on the IETF PKIX
> mailing list and no consensus has been reached.  Similarly, the X.509
> standard does not clarify.  Mozilla takes the second option, but it
> is entirely possible that a clarification could show up in a future
> RFC or X.500-series doc that goes with the first option.

In the absence of a consensus from the relevant IETF Working Groups I
don't see why you'd expect a future RFC. Certainly there shouldn't be
any mechanism to get a Standards Track RFC without consensus.

We can't do anything about ISO, if they go completely off the rails I
guess we'd have to decide what to do about that when it happens, it
doesn't feel tempting to try to get ahead of that particular calamity.

> Of course people are going to try to do better, but part of that is
> understanding that people are not perfect and that even automation can
> break. I wrote certlint/cablint with hundreds of tests and continue
> to get reports of gaps in the tests.  Yes, things will get better,
> but we need to get them there in an orderly way.

This feels pretty orderly to me?

We're a pretty long way from, say, the end of Vernor Vinge's
novel "Rainbows End", where government spooks issue blanket revocations
for all certificates under a major root CA. (It's fun to imagine how
disappointingly little effect this would have in our real world)


Re: Use cases of publicly-trusted certificates

2018-12-30 Thread Nick Lamb via dev-security-policy
On Thu, 27 Dec 2018 22:43:19 +0100
Jakob Bohm via dev-security-policy

> You must be traveling in a rather limited bubble of PKIX experts, all
> of whom live and breathe the reading of RFC5280.  Technical people
> outside that bubble may have easily misread the relevant paragraph in
> RFC5280 in various ways.

It's practically a pub quiz question. I appreciate that I might be
unusual in happening to care about this as a lay person, but for a
public CA in the Web PKI correctly understanding this stuff was _their
job_. It isn't OK for them to be bad at their jobs.

> The documents that prescribes the exact workings of DNS do not
> prohibit (only discourage) DNS names containing underscores.  Web
> browser interfaces for URL parsing may not allow them, which would be
> a technical benefit for at least one usage of such certificates
> reported in the recent discussion.

We get it, you don't accept that not all DNS names can be names of
hosts. That you still seem determined not to understand this even
when it's explained repeatedly shows that my characterization of this
position was correct.

> That I disagree with you on certain questions of fact doesn't mean
> I'm unreliable, merely that you have not presented any persuasive
> arguments that you are not the one being wrong.

I can't distinguish people who are "actually" unreliable from people
who claim the plain facts are "unpersuasive" to their point of view, and
so I don't. Likewise m.d.s.policy largely doesn't care whether a CA's
problems are a result of incompetence or malfeasance, same outcome
either way: distrust.

> I merely
> dispute that this was obvious to every reader of those documents

Since you like legal analogies, the usual standard in law is that
something was known _or should have been known_. This means that a
declaration that you didn't know something holds no weight if a court
concludes that you _should_ have known it. If you have a responsibility
to know, "I didn't know" is not usually an excuse.

I don't believe subscribers should have known, but I do believe
Certificate Authorities should have known, or, as corporate entities,
should have employed someone who knew that this was an important thing
to understand, did their research and came back with a "No" that had
the effect of setting issuance policy.

Doubtless some ordinary subscribers believe Africa is a country. I
don't have a problem with that. But I hope we agree that a CA should
not sign a certificate which gives C=AP (an ISO code reserved for other
reasons associated with Africa) on the rationale that they thought
Africa is a country.

> A better example is the pre-2015 issuing of .onion names, which do
> not exist in the IANA-rooted DNS.

A better example in the sense that, if this happened today we would
expect CAs not to issue for such a name without first getting a change
to the BRs saying this hierarchy is special ?

If the situation was that CAs had sensibly not issued for underscores,
then asked if they could and been turned down this entire thread would
not exist.

> I wrote this in opposition to someone seemingly insisting that the 
> _name_ implied that all non-web uses are mistakes that should not be 
> given any credence.

You wrote it in reply to me, and you quoted me. I don't know whether my
reciting these facts will be "persuasive" to you, but once again
refusing to believe something won't stop it being true - it only affects
your credibility.


Re: When should honest subscribers expect sudden (24 hours / 120 hours) revocations?

2018-12-30 Thread Nick Lamb via dev-security-policy
On Sat, 29 Dec 2018 16:32:46 -0800
Peter Bowen via dev-security-policy

>  Consider the following cases:
> - A company grows and moves to larger office space down the street.
> It turns out that the new office is in a different city even though
> the move was only two blocks away.  The accounting department sends
> the CA a move notice so the CA sends invoices to the new address.
> Does this mean the CA has to revoke all existing certificates in 5
> days?

If the certificates have this now useless address in them, then sure,
they're now wrong. Leading to two questions that have awkward answers
for CAs and my present employer: What kind of idiot would put
irrelevant stuff in the certificate and pay extra to do so?

I will also note here that it's not uncommon to give a company's "legal"
address (and even other "legal" details) that bear little resemblance to
reality since they were chosen for tax efficiency or to protect a
Person with Significant Control from the lawful authority of the
country in which business is actually done.

My previous employer had a whole lot of certificates which gave the
address of a law firm on a small nominally independent island, they're a
large international company and do almost no business on that island,
but they're legally incorporated there and so that's what they decided
to write on the certificates, of course no actual users check or care.

This has a useful effect in "office move" scenarios because the legal
address does not change. But if you didn't write it at all then you
wouldn't need to care either.

> - Widget LLC is a startup with widgetco.example.  They want to take
> investment so they change to a C-corp and become Widget, Inc.  Widget
> Inc now is the registrant for widgetco.example. Does this now trigger
> the 5 day rule?
> - Same example as above, but the company doesn't remember to update
> the domain registration.  It therefore is invalid, as it points to a
> non-existence entity.  Does this trigger the 5 day rule?

It would matter which of the Ten Blessed Methods was used, in some
(most?) of the Methods the legal name of the domain registrant is
irrelevant and may never be known to the CA. Where the CA is confident
of issuance only because of a relationship to the legal registrant, a
change in registrant could indeed need urgent action by somebody.

> - The IETF publishes a new RFC that "Updates: 5280
> ".  It removes a previously valid
> feature in certificates.  Do all certificates using this feature need
> to be revoked within 5 days?
> - The  IETF publishes a new RFC that "Updates: 5280
> ".  It says it update 5280 as
> follows:

The IETF is not a member organisation. All of us can and should
participate. I know all the major browser vendors have employees who
(on or off the clock) are IETF participants, and I hope that at least
some of the CAs likewise have participants. If a CA believes that their
perspective is lacking they are, of course, free to assign one or more
personnel to track relevant work and even to pay to fly people out to
the periodic physical instantiation of the IETF.

If an IETF working group is updating RFC 5280 anybody - and I mean
anybody you don't even need to do so much as subscribe to a mailing
list first - can email that working group and point out a problem like
"Oh, if you make this change it's disruptive to our business, so please
don't do that without a suitable justification".

You are very likely to be able to achieve the IETF's requirement of
"rough consensus" to avoid changes that are needlessly disruptive.

More importantly IETF changes are often flagged months or years in
advance. In reality I would expect you'd see a Mozilla routine
communication asking CAs about their preparedness for any such change
some time in advance. It's not "five days" if you had a year's warning.

> - A customer has a registered domain name that has characters that
> current internationalized domain name RFCs do not allow (for example
>✪  A CA issues because this is a registered
> domain name according to the responsible TLD registry.  Must this be
> revoked within 5 days if the CA notices?

Seems sane to me. Also seems like a foolhardy practice by the
responsible TLD registry and/or its registrars. I would definitely
suggest annoyed subscribers demand compensation from their registrar
for letting them have a bogus name, unless it turns out the registrar
was talked into this despite warning them what might happen.

> - A customer has a certificate with a single domain name in the SAN
> which is an internationalized domain name.  The commonName attribute
> in the subject contains the IDN.  However the CN attribute uses
> U-labels while the SAN uses A-labels.  Whether this is allowed has
> been the subject of debate at the CA/Browser Forum as neither BRs nor
> RFCs make this clear.  Do any certificates using U-labels in the CN
> need to 

Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Nick Lamb via dev-security-policy
On Thu, 27 Dec 2018 15:30:01 +0100
Jakob Bohm via dev-security-policy

> The problem here is that the prohibition lies in a complex legal
> reading of multiple documents, similar to a situation where a court
> rules that a set of laws has an (unexpected to many) legal
> consequence.

I completely disagree. This prohibition was an obvious fact, well known
to (I had assumed prior to this present fever) everyone who cared about
the Internet's underlying infrastructure.

The only species of technical people I ever ran into previously who
professed "ignorance" of the rule were the sort who see documents like
RFCs as descriptive rather than prescriptive and so their position
would be (as it seems yours is) "Whatever I can do is allowed". Hardly
a useful rule for the Web PKI.

Descriptive documents certainly have their place - I greatly admire
Geoff Pullum's Cambridge Grammar of the English Language, and I
do own the more compact "Student's Introduction" book, both of which
are descriptive since of course a natural language is not defined by
such documents and can only be described by them (and imperfectly,
exactly what's going on in English remains an active area of research).
But that place is not here, the exact workings of DNS are prescribed, in
documents you've called a "complex legal reading of multiple documents"
but more familiarly as "a bunch of pretty readable RFCs on exactly this
topic".

> It would benefit the honesty of this discussion if the side that won
> in the CAB/F stops pretending that everybody else "should have known"
> that their victory was the only legally possible outcome and should
> never have acted otherwise.

I would suggest it would more benefit the honesty of the discussion if
those who somehow convinced themselves of falsehood would accept this
was a serious flaw and resolve to do better in future, rather than
suppose that it was unavoidable and so we have to expect they'll keep
doing it.

Consider it from my position. In one case I know Jakob made an error
but has learned a valuable lesson from it and won't be caught the same
way twice. In the other case Jakob is unreliable on simple matters of
fact and I shouldn't believe anything further he says.

> Maybe because it is not publicly prohibited in general (the DNS
> standard only recommends against it, and other public standards
> require some such names for uses such as publishing certain public
> keys).  The prohibition exists only in the certificate standard
> (PKIX) and maybe in the registration policies of TLDs (for TLD+1
> names only).

Nope. You are, as it seems others in your position have done before,
confusing restrictions on all names in DNS with restrictions on names
for _hosts_ in DNS. Lots of things can have underscores in their names,
and will continue to have underscores in their names, but hosts cannot.
Web PKI certs are issued for host names (and IP addresses, and as a
special case, TOR hidden services).
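To make the distinction concrete, here is a minimal sketch (my own illustration, not from the original post; the function name and examples are made up) of the RFC 952/1123 host-name rule, which forbids underscores even though DNS itself happily serves names containing them:

```python
import re

# RFC 952/1123 host-name labels: letters, digits and hyphens only,
# not starting or ending with a hyphen; underscores are not allowed.
HOST_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    """True if every label of `name` is a legal host-name label."""
    labels = name.rstrip(".").split(".")
    return all(HOST_LABEL.match(label) for label in labels)

# DNS itself is more permissive: _dmarc.example.com is a perfectly
# good DNS name (TXT records live there), it just isn't a host name.
print(is_valid_hostname("www.example.com"))     # a host name
print(is_valid_hostname("_dmarc.example.com"))  # a DNS name, not a host
```

Web PKI certificates name hosts, so it is the stricter rule that applies, regardless of what the DNS will resolve.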

Imagine if, on the same basis, a CA were to insist that they'd
understood Texas to be a US state, and so they'd written C=TX on the
rationale that a "state" is essentially the same kind of thing as a
country.

I do not doubt they could find a few (mostly Texan) people to defend
this view, but it's obviously wrong, and when the City of Austin
Independent League of Skateboarders protests that they need to keep
getting certificates with C=TX for compatibility reasons we'd have a
good laugh and tell the CA to stop being so stupid, revoke these certs
and move on.

> Also it isn't the "Web PKI".  It is the "Public TLS PKI", which is
> not confined to Web Browsers surfing online shops and social
> networks, and hasn't been since at least the day TLS was made an IETF
> standard.

It is _named_ the Web PKI. As you point out, it is lots of things, and
so "Web PKI" is not a good description but its name remains the Web
PKI anyway.

The name for people from my country is "Britons". Again it's not a good
description, since some of them aren't from the island of Great Britain
as the country extends to adjacent islands too. Nevertheless the name
is what it is.

dev-security-policy mailing list

Re: Use cases of publicly-trusted certificates

2018-12-27 Thread Nick Lamb via dev-security-policy
As a relying party I read this in the context of the fact that we're
talking about names that are anyway prohibited. Why would you need a
publicly trusted certificate that specifies a name that is publicly
prohibited?

I guess the answer is "But it works on Windows". And Windows is welcome
to implement a parallel "Windows PKI" which can have its own rules
about naming and whatever else, and so the certificates could be issued
in that PKI but not in the Web PKI.
dev-security-policy mailing list

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-12-04 Thread Nick Lamb via dev-security-policy
On Tue, 4 Dec 2018 14:55:47 +0100
Jakob Bohm via dev-security-policy

> Oh, so you meant "CA issuance systems and protocols with explicit
> automation features" (as opposed to e.g. web server systems or
> operating systems or site specific subscriber automation systems).
> That's why I asked.

Yes. These systems exist, have existed for some time, and indeed now
appear to make up a majority of all issuance.

> And note that this situation started with an OV certificate, not a DV
> certificate.  So more than domain ownership needs to be validated.

Fortunately it is neither necessary nor usual to insist upon fresh
validations for Organisational details for each issuance. Cached
validations can be re-used for a period specified in the BRs although
in some cases a CA might choose tighter constraints.

> You have shown that ONE system, which you happen to like, can avoid
> that weakness, IF you ignore some other issues.  You have not shown
> that requiring subscribers to do this for any and all combinations of
> validation systems and TLS server systems they encounter won't have
> this weakness.

Yes, an existence proof. Subscribers must of course choose trade-offs
that they're comfortable with. That might mean accepting that your web
site could become unavailable for a period of several days at short
notice, or that you can't safely keep running Microsoft IIS 6.0 even
though you'd prefer not to upgrade. What I want to make clear is that
offering automation without write access to the private key is not only
theoretically conceivable, it's actually easy enough that a bunch of
third party clients do it today because it was simpler than whatever
else they considered.

> I made no such claim.  I was saying that your hypothetical that
> all/most validation systems have the properties of ACME and that
> all/most TLS servers allow certificate replacement without access to
> the private key storage represents an idealized scenario different
> from practical reality.

Subscribers must choose for themselves; in particular it does not
constitute an excuse as to why they need more time to react. Choices
have consequences: if you choose a process you know can't be done in a
timely fashion, it won't be done in a timely fashion and you'll go

> And the paragraph I quoted says to not do that unless you are using a
> HSM, which very few subscribers do.

It says it only recommends doing this for a _renewal_ if you have an
HSM. But a scheduled _renewal_ already provides sufficient notice for
you to replace keys and make a fresh CSR at your leisure if you so
choose. Which is why you were talking about unscheduled events.

If you have a different reference which says what you originally
claimed, I await it.

> It is not a convenience of scheduling.  It is a security best
> practice, called out (as the first example found) in that particular
> NIST document.

If that was indeed their claimed security best practice the NIST
document would say you must replace keys every time you replace
certificates, for which it would need some sort of justification, and
there isn't one. But it doesn't - it recommends you _renew_ once per
year‡, and that you should change keys when you _renew_, which is to
say, once per year.

‡ Technically this document is written to be copy-pasted into a three
ring binder for an organisation, so you can just write in some other
amount of time instead of . As with other documents of
this sort it will not achieve anything on its own.

> Which has absolutely no bearing on the rule that keys stored outside
> an HSM should (as a best practice) be changed on every reissue.  It
> would be contradictory if part B says not to reuse keys, and part C
> then prescribes an automation method violating that.

There is no such rule listed in that NIST document. The rule you've
cited talks about renewals, but a reissue is not a renewal. There was
nothing wrong with the expiry date for the certificate, that's not why
it was replaced.

There are however several recommendations which contradict this idea
that it's OK to have processes which take weeks to act, such as:

"System owners MUST maintain the ability to replace all certificates on
their systems within <2> days to respond to security incidents"

"Private keys, and the associated certificates, that have the
capability of being directly accessed by an administrator MUST be
replaced within <30> days of reassignment or <5> days of termination of
that administrator"

The NIST document also makes many other recommendations that - like the
one year limit - won't be followed by most real organisations; such as a
requirement to add CAA records, to revoke all their old certificates
a short time after they're replaced, the insistence on automation for
adding keys to "SSL inspection" type capabilities or the prohibition of
all wildcards.

> So it is real.

Oh yes, doing things that are a bad idea is very real. That is, after
all, why we're discussing this at all.


Re: Incident report D-TRUST: syntax error in one tls certificate

2018-12-04 Thread Nick Lamb via dev-security-policy
On Tue, 4 Dec 2018 07:56:12 +0100
Jakob Bohm via dev-security-policy

> Which systems?

As far as I'm aware, any of the automated certificate issuance
technologies can be used here, ACME is the one I'm most familiar with
because it is going through IETF standardisation and so we get to see
not only the finished system but all the process and discussion.

> I prefer not to experiment with live certificates.  Anyway, this was 
> never intended to focus on the specifics of ACME, since OV issuance 
> isn't ACME anyway.

The direction of the thread was: Excuses for why a subscriber can't
manage to replace certificates in a timely fashion. Your contribution
was a claim that automated deployment has poor operational security:

"it necessarily grants read/write access to the certificate data
(including private key) to an automated, online, unsupervised system."

I've cleanly refuted that, showing that in a real, widely used system
neither read nor write access to the private key is needed to perform
automated certificate deployment. You do not need to like this, but to
insist that something false is "necessarily" true is ludicrous.

> So returning to the typical, as-specified-in-the-BRs validation 
> challenges.  Those generally either do not include the CSR in the 
> challenge, or do so in a manner that would involve active checking 
> rather than just trivial concatenation.  These are the kind of 
> challenges that require the site owner to consider IF they are in a 
> certificate request process before responding.

I _think_ this means you still didn't grasp how ACME works, or even how
one would in general approach this problem. The CSR needs to go from
the would-be subscriber to the CA, it binds the SANs to the key pair,
proving that someone who knows the private key wanted a certificate for
these names. ACME wants to bind the names back to the would-be
subscriber, proving that whoever this is controls those names, and so
is entitled to such a certificate. It uses _different_ keys for that
precisely so that it doesn't need the TLS private key.

But most centrally the Baseline Requirements aren't called the "Ideal
Goals" but only the "Baseline Requirements" for a reason. If a CA
approaches them as a target to be aimed for, rather than as a bare
minimum to be exceeded, we're going to have a problem. Accordingly the
Ten Blessed Methods aren't suggestions for how an ideal CA should
validate control of names, they're the very minimum you must do to
validate control of names. ACME does more, frankly any CA should be
aiming to do more.

> See for example NIST SP 1800-16B Prelim Draft 1, Section 5.1.4 which
> has this to say:
>   "... It is possible to renew a certificate with the same public and 
>   private keys (i.e., not rekeying during the renewal process). 
>   However, this is only recommended when the private key is contained 
>   with a hardware security module (HSM) validated to Federal
> Information Processing Standards (FIPS) Publication 140-2 Level 2 or
> above"

Just before that sentence the current draft says:

"It is important to note that the validity period of a certificate is
different than the cryptoperiod of the public key contained in the
certificate and the corresponding private key."

Quite so. Thus, the only reason to change both at the same time is as I
said, a convenience of scheduling; NIST does not claim that creating
certificates has any actual impact on the cryptoperiod, they just want
organisations to change their keys frequently and "on renewal" is a
convenient time to schedule such a change.

Moreover, this is (a draft of) Volume B of NIST's guidance. There is an
entire volume, Volume C, about the use of automation, to be published
later. I have no idea what that will say, but I doubt it will begin by
insisting that you need read-write access to private keys to do
something people are already doing today without such access.

> I am referring to the very real facts that:
> - Many "config GUI only" systems request certificate import as
> PKCS#12 files or similar.

This is a real phenomenon, and encourages a lot of bad practices we've
discussed previously on m.d.s.policy. It even manages to make the
already confusing (for lay persons) question of what's "secret" and what
is not yet more puzzling, with IMNSHO minimal gains to show for it. Use
of PKCS#12 in this way can't be deprecated quickly enough for my liking.

[ This is also related to the Windows ecosystem in which there's a
pretence kept up that private keys aren't accessible once imported,
which of course isn't mechanically true since those keys are needed by
the system for it to work. So bad guys can ignore the documentation
saying it's impossible and just read the keys out of RAM with a trivial
program, but good guys can't get back their own private keys.
A true masterpiece of security engineering, presumably from the same
people who invented the LANMAN password hash. ]

> - Many open source TLS servers 

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-12-03 Thread Nick Lamb via dev-security-policy
On Tue, 4 Dec 2018 01:39:05 +0100
Jakob Bohm via dev-security-policy

> A few clarifications below
> Interesting.  What is that hole?

I had assumed that you weren't aware that you could just use these
systems as designed. Your follow-up clarifies that you believe doing
this is unsafe. I will endeavour to explain why you're mistaken.

But also I specifically endorse _learning by doing_. Experiment for
yourself with how easy it is to achieve auto-renewal with something like
ACME, try to request renewals against a site that's configured for
"stateless renewal" but with a new ("bad guy") key instead of your real
ACME account keys.

> It certainly needs the ability to change private keys (as reusing
> private keys for new certificates is bad practice and shouldn't be
> automated).

In which good practice document can I read that private keys should be
replaced earlier than their ordinary lifetime if new certificates are
minted during that lifetime? Does this document explain how its authors
imagine the new certificate introduces a novel risk?

[ This seems like breakthrough work to me, it implies a previously
unimagined weakness in, at least, RSA ]

You must understand that bad guys can, if they wish, construct an
unlimited number of new certificates corresponding to an existing key,
silently. Does this too introduce an unacceptable risk ? If not, why is
the risk introduced if a trusted third party mints one or more further
certificates ?

No, I think the problem here is with your imaginary "bad practice".
You have muddled the lifetime of the certificate (which relates to the
decay in assurance of subject information validated and to other
considerations) with the lifetime of the keys, see below.

> By definition, the strength of public keys, especially TLS RSA
> signing keys used with PFS suites, involves a security tradeoff
> between the time that attackers have to break/factor the public key
> and the slowness of handling TLS connections with current generation
> standard hardware and software.

This is true.

> The current WebPKI/BR tradeoff/compromise is set at 2048 bit keys
> valid for about 24 months.

Nope. The limit of 825 days (not "about 24 months") is for leaf
certificate lifetime, not for keys. It's shorter than it once was not
out of concern about bad guys breaking 2048-bit RSA but because of
concern about algorithmic agility and the lifetime of subject
information validation, mostly the former.

Subscribers are _very_ strongly urged to choose shorter, not longer
lifetimes, again not because we're worried about 2048-bit RSA (you will
notice there's no exemption for 4096-bit keys) but because of agility
and validation.

But choosing new keys every time you get a new certificate is
purely a mechanical convenience of scheduling, not a technical necessity
- like a fellow who schedules an appointment at the barber each time he
receives a telephone bill; the one thing has nothing to do with the
other.

> It requires write access to the private keys, even if the operators
> might not need to see those keys, many real world systems don't allow
> granting "install new private key" permission without "see new
> private key" permission and "choose arbitrary private key" permission.
> Also, many real world systems don't allow installing a new
> certificate for an existing key without reinstalling the matching
> private key, simply because that's the interface.
> Traditional military encryption systems are built without these 
> limitations, but civilian systems are often not.


I'm sure there's a system out there somewhere which requires you to
provide certificates on a 3.5" floppy disk. But that doesn't mean
issuing certificates can reasonably be said to require a 3.5" floppy
disk, it's just those particular systems.

> This is why good CAs send out reminder e-mails in advance.  And why 
> one should avoid CAs that use that contact point for infinite spam 
> about new services.

They do say that insanity consists of doing the same thing over and
over and expecting different results.

> The scenario is "Bad guy requests new cert, CA properly challenges 
> good guy at good guy address, good guy responds positively without 
> reference to old good guy CSR, CA issues for bad guy CSR, bad guy 
> grabs new cert from anywhere and matches to bad guy private key, 
> bad guy does actual attack".

You wrote this in response to me explaining exactly why this scenario
won't work in ACME (or any system which wasn't designed by idiots -
though having read their patent filings the commercial CAs on the whole
may be taken as idiots to my understanding).

I did make one error though, in using the word "signature" when this
data is not a cryptographic signature, but rather a "JWK Thumbprint".

When "good guy responds positively" that positive response includes
a Thumbprint corresponding to their ACME public key. When they're
requesting issuance this works fine because they use their ACME keys

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-11-30 Thread Nick Lamb via dev-security-policy
On Wed, 28 Nov 2018 22:41:37 +0100
Jakob Bohm via dev-security-policy

> I blame those standards for forcing every site to choose between two 
> unfortunate risks, in this case either the risks prevented by those 
> "pinning" mechanisms and the risks associated with having only one 
> certificate.

HTTPS Key Pinning (HPKP) is deprecated by Google and is widely
considered a failure because it acts as a foot-gun and (more seriously
but less likely in practice) enables sites to be held to ransom by bad
guys.

Mostly though, what I want to focus on is a big hole in your knowledge
of what's available today, which I'd argue is likely significant in
that probably most certificate Subscribers don't know about it, and
that's something the certificate vendors could help to educate them
about and/or deliver products to help them use.

> Automating certificate deployment (as you often suggest) lowers 
> operational security, as it necessarily grants read/write access to 
> the certificate data (including private key) to an automated, online, 
> unsupervised system.


This system does not need access to private keys. Let us take ACME as
our example throughout, though nothing about what I'm describing needs
ACME per se, it's simply a properly documented protocol for automation
that complies with CA/B rules.

The ACME CA expects a CSR, signed with the associated private key, but
it does not require that this CSR be created fresh during validation +
issuance. A Subscriber can as they wish generate the CSR manually,
offline and with full supervision. The CSR is a public document
(revealing it does not violate any cryptographic assumptions). It is
entirely reasonable to create one CSR when the key pair is minted and
replace it only in a scheduled, predictable fashion along with the keys
unless a grave security problem occurs with your systems.

ACME involves a different private key, possessed by the subscriber/
their agent only for interacting securely with ACME, the ACME client
needs this key when renewing, but it doesn't put the TLS certificate key
at risk.

Certificates are public information by definition. No new risk there.

> Allowing multiple persons to replace the certificates also lowers 
> operational security, as it (by definition) grants multiple persons 
> read/write access to the certificate data.

Again, certificates themselves are public information and this does not
require access to the private keys.

> Under the current and past CA model, certificate and private key 
> replacement is a rare (once/2 years) operation that can be done 
> manually and scheduled weeks in advance, except for unexpected 
> failures (such as a CA messing up).

This approach, which has been used at some of my past employers,
inevitably results in systems where the certificates expire "by
mistake". Recriminations and insistence that lessons will be learned
follow, and then of course nothing is followed up and the problem
recurs.

It's a bad idea, a popular one, but still a bad idea.

> For example, every BR permitted automated domain validation method 
> involves a challenge-response interaction with the site owner, who
> must not (to prevent rogue issuance) respond to that interaction
> except during planned issuance.

It is entirely possible and theoretically safe to configure ACME
responders entirely passively. You can see this design in several
popular third party ACME clients.

The reason it's theoretically safe is that ACME's design ensures the
validation server (for example Let's Encrypt's Boulder) unavoidably
verifies that the validation response is from the correct ACME account.

So if bad guys request issuance, the auto-responder will present a
validation response for the good guy account, which does not match and
issuance will not occur. The bad guys will be told their validation
failed and they've got the keys wrong. Which of course they can't fix
since they've no idea what the right ACME account private key is.

For http-01 at least, you can even configure this without the
auto-responder having any private knowledge at all. Since this part is
just playing back a signature, our basic cryptographic assumptions mean
that we can generate the signature offline and then paste it into the
auto-responder. At least one popular ACME client offers this behaviour.
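The reason no private knowledge is needed becomes obvious once you see what the http-01 response actually is: the challenge token joined to the RFC 7638 thumbprint of the account's *public* JWK. Here is a rough stdlib-only sketch (my own, not from the post; the token and JWK values are illustrative, not real key material):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as used throughout ACME (RFC 8555)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over only the required JWK members, keys in
    # lexicographic order, no whitespace in the JSON serialisation
    required = {k: jwk[k] for k in ("crv", "e", "kty", "n", "x", "y") if k in jwk}
    canonical = json.dumps(required, sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode("utf-8")).digest())

def key_authorization(token: str, account_jwk: dict) -> str:
    # The string an http-01 responder serves at
    # /.well-known/acme-challenge/<token>
    return token + "." + jwk_thumbprint(account_jwk)

# Illustrative (truncated, not real) RSA account key material:
account_jwk = {"kty": "RSA", "e": "AQAB", "n": "0vx7agoebGcQ...example"}
print(key_authorization("SomeAcmeChallengeToken", account_jwk))
```

The validation server computes the same string from the account key on the pending order, so a response built from any other account's thumbprint simply fails to match, which is why a purely passive responder is safe against third-party issuance attempts.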

For a huge outfit like Google or Facebook that can doubtless afford to
have an actual "certificate team" this would not be an appropriate
measure, but at a smaller business it seems entirely reasonable.

> Thus any unscheduled revalidation of domain ownership would, by 
> necessity, involve contacting the site owner and convincing them this
> is not a phishing attempt.

See above, this works today for lots of ACME validated domains.

> Some ACME protocols may contain specific authenticated ways for the
> CA to revalidate out-of-schedule, but this would be outside the norm.

Just revalidating, though it seems to be a popular trick for CAs, is

Re: Incident report D-TRUST: syntax error in one tls certificate

2018-11-26 Thread Nick Lamb via dev-security-policy
In common with others who've responded to this report I am very
skeptical about the contrast between the supposed importance of this
customer's systems versus their, frankly, lackadaisical technical
response.

This might all seem harmless but it ends up as "the boy who cried
wolf". If you relay laughable claims from customers several times, when
it comes to an incident where maybe some extraordinary delay was
justifiable any good will is already used up by the prior claims.

CA/B is the right place for CAs to make the case for a general rule
about giving themselves more time to handle technical non-compliances
whose correct resolution will annoy customers but impose little or no
risk to relying parties. I personally at least would much rather see
CAs actually formally agree they should all have say 28 days in such
cases - even though that's surely far longer than it should be - than a
series of increasingly implausible "important" but ultimately purely
self-serving undocumented exceptions that make the rules on paper
worthless.
dev-security-policy mailing list

Re: Request to Include emSign Root CA - G1, emSign Root CA - G3, emSign Root CA - C1, and emSign Root CA - C3

2018-10-11 Thread Nick Lamb via dev-security-policy
On Thu, 11 Oct 2018 13:06:46 -0700
Wayne Thayer via dev-security-policy

> This request is for inclusion of these four emSign roots operated by
> eMudhra in bug:

I would like to read more about eMudhra / emSign.

I have never heard of this entity before, perhaps because they're
Indian (if I understand correctly) but perhaps because they're just
entirely new to this business.

Of course just being new isn't inherently disqualifying, but it'd be
good to understand things like:

- Who (human individuals) is behind this outfit, are there people we've
dealt with before in any key roles? (For example I hope we can agree
that individuals from previously distrusted CAs as leadership would
be a potential red flag) Are there people involved who've done this or
something similar before?

- Does this entity or a legally related entity already operate a
  business in this space that has a record we can look at such as:
  Indian RA for another Certificate Authority, CA in another PKI, or
  more distantly somewhat similar businesses such as making identity
  documents, or payment card systems.

- How did they come to decide to set up a new root CA for the Web PKI?

Running a trustworthy CA is pretty hard, so I am at least a little bit
sceptical of the idea that people I've never heard of can wake up one
morning and decide "Hey let's run a CA" and do a good job, whether in
India, Indianapolis or Israel.
dev-security-policy mailing list

Re: 46 certificates issued with BR violations

2018-10-08 Thread Nick Lamb via dev-security-policy
On Mon, 8 Oct 2018 03:43:53 -0700 (PDT)
"piotr.grabowski--- via dev-security-policy"

> We have by the way question about error: ERROR: The 'Organization
> Name' field of the subject MUST be less than 64 characters. According
> to and the note from this RFC
> 'ub-organization-name INTEGER ::= 64. For UTF8String or
> UniversalString at least four times the upper bound should be
> allowed. So what is the max length of this field  for UTF8String?

As I understand it:

Although the word "character" is vague and should generally be avoided
in modern technical documents, in this context it seems to refer to a
Unicode code point. And "at least four times" is referring to the prior
lines of the RFC which explain that you will need more than one octet
(byte) to represent some of these characters - this is important for
resource constrained implementations.

So: Organization Names in certificates obeying RFC 5280 should not
consist of more than 64 Unicode code points; when encoded in UTF-8,
those 64 code points might consume up to 256 octets (bytes).

This is NOT an excuse to write longer names which fit in 256 bytes, the
constraint is on the number of characters (Unicode code points) not the
bytes needed to encode these characters.

In practice Organization names obeying the 64 character limit from RFC
5280 are likely to fit in much fewer than 256 octets because the more
common characters such as "Ø" or "の" do not need 4 octets to encode,
whereas the  Smiling Cat Emoji does need 4 octets but of course rarely
appears in the name of organizations.
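The code-point/octet distinction is easy to demonstrate with a quick stdlib sketch (my own illustration, not from the RFC; `check_org_name` is a hypothetical helper):

```python
def check_org_name(name: str, max_code_points: int = 64) -> bool:
    """RFC 5280's ub-organization-name limits *characters* (code
    points), not the octets of the UTF-8 encoding."""
    return len(name) <= max_code_points

name = "Ø" * 64                    # 64 code points...
print(len(name))                   # 64 characters
print(len(name.encode("utf-8")))   # ...but 128 octets once UTF-8 encoded
print(check_org_name(name))        # allowed: the limit counts code points
```

A validator that compared `len(name.encode("utf-8"))` against 64 would wrongly reject this name, while one that compared byte length against 256 would wrongly accept names of up to 256 ASCII characters.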

dev-security-policy mailing list

Re: SHA-1 exception history

2018-09-27 Thread Nick Lamb via dev-security-policy
On Thu, 27 Sep 2018 14:52:27 +
Tim Hollebeek via dev-security-policy

> My personal impression is that by the time they are brought up here,
> far too many issues have easily predicted and pre-determined outcomes.

It is probably true that many issues have predictable outcomes but I
think predictability is on the whole desirable. Are there in fact CA
representatives who'd rather they had no idea how Mozilla would react
when there's an issue?

> I know most of the security and key management people for the payment
> industry very well [1], and they're good people.

I mean this not sarcastically at all, but almost everybody is "good
people". That's just not enough. I would like to think that I'm "good
people" and yet it certainly would not be a good idea for the Mozilla
CA root trust programme to trust some CA root I have on this PC.

> I attempted to speak up a few times in various fora but it was pretty
> clear that anything that wasn't security posturing wasn't going to be
> listened to, and finding a practical solution was not on the agenda.
> It was pretty clear sitting in the room that certain persons had
> already made up their minds before they even understood what a
> payment terminal was, how they are managed, and what the costs and
> risks were for each potential alternative.

If we're being frank, my impression is that First Data lied in their
submission to us and if it came solely to my discretion that would be
enough to have justified telling them "No" on its own the first time.

Here's what they wrote to us:

"In Nov. 2014 Datawire added SHA-2 certificates to our staging and
support environments."

And here's what they'd told their customers about one of those staging
environments as late as September 2015:

"Datawire will update to SHA-256 support on March 9, 2016 on the
following url: (staging)"

and yet when Symantec did create a SHA-256 certificate for that staging
URL it wasn't in November 2014, or on March 9, 2016; it was dated
10 June 2016.

OK, well, maybe it was just right? Let's try one
of their support sites,

That finally received a SHA-256 certificate in September 2016, almost
two years after Datawire told us it had happened; in fact, it was just
barely before Symantec forwarded us their request for an exception.
Rather than almost two _years_ their customers actually had two _days_
for this change before First Data put an onion in their pocket and came
to tell us about how hard they'd tried...

[ still exists at time of writing but is scheduled
to expire in the next few hours]

As to understanding what a payment terminal is, how about "The cheapest
possible device that passes the bare minimum of tests to scrape
through" ? Is that a good characterisation?

dev-security-policy mailing list

Re: Google Trust Services Root Inclusion Request

2018-09-27 Thread Nick Lamb via dev-security-policy
On Wed, 26 Sep 2018 23:02:45 +0100
Nick Lamb via dev-security-policy
> Thinking back to, for example, TSYS, my impression was that my post on
> the Moral Hazard from granting this exception had at least as much
> impact as you could expect for any participant. Mozilla declined to
> authorise the (inevitable, to such an extent I pointed out that it
> would happen months before it did) request for yet another exception
> when TSYS asked again.

Correction: The incident I'm thinking of is First Data, not TSYS, a
different SHA-1 exception.

dev-security-policy mailing list

Re: Google Trust Services Root Inclusion Request

2018-09-26 Thread Nick Lamb via dev-security-policy
On Wed, 26 Sep 2018 16:03:58 +
Jeremy Rowley via dev-security-policy

> Note that I didn’t say Google controlled the policy. However, as a
> module peer, Google does have significant influence over the policy
> and what CAs are trusted by Mozilla. Although everyone can
> participate in Mozilla discussions publicly, it’s a fallacy to state
> that a general participant has similar sway or authority to a module
> peer.

I do not agree with this. I participate in m.d.s.policy as an individual
and I don't think there has ever been a situation where I felt I did
not have "similar sway or authority to a module peer".

Thinking back to, for example, TSYS, my impression was that my post on
the Moral Hazard from granting this exception had at least as much
impact as you could expect for any participant. Mozilla declined to
authorise the (inevitable, to such an extent I pointed out that it
would happen months before it did) request for yet another exception
when TSYS asked again.

I think my situation may be different from yours Jeremy in that even
when posting strictly in a personal capacity your "other hat" remains in
view. I don't really have another hat, I'm a Relying Party from the
Network. I want the Network to be able to Rely on the Web PKI and I
seek the Prevention of Future Harm to myself and other Relying Parties.
That lines up really well with Mozilla's goals (not quite perfectly
since Mozilla cares primarily about Firefox, not generic Relying
Parties).


Re: Identrust Commercial Root CA 1 EV Request

2018-09-22 Thread Nick Lamb via dev-security-policy
On Tue, 18 Sep 2018 17:53:34 -0700
Wayne Thayer via dev-security-policy

> * The version of the CPS that I initially reviewed (4.0) describes a
> number of methods of domain name validation in section that
> do not appear to fully comply with the BRs. This was corrected in the
> current version, but one of the methods listed is BR,
> which contains a known vulnerability.

Since the time of that post, the Let's Encrypt team (and others via the
relevant IETF working group?) have developed a new realisation of the
method that is not vulnerable.

Specifically tls-sni-01 and tls-sni-02 are replaced by tls-alpn-01
which as its name might suggest uses an ALPN TLS feature to ask a
remote server to show the certificate. This involves a brand new ALPN
sub-protocol with no other purpose. Suppliers who aren't trying to help
their customers get certificates have no reason to develop, enable, or
configure such a feature. So it becomes reasonable (unlike with
SNI) to assume that if the check passes, it was intended to pass by the
name's real owner or by their agent.
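A sketch of the client side of that check, using Python's `ssl` module; "acme-tls/1" is the ALPN protocol ID later standardised for tls-alpn-01 in RFC 8737 (the exact validator behaviour here is an assumption for illustration, not Let's Encrypt's implementation):

```python
import ssl

# The validator offers ONLY the dedicated ACME sub-protocol, so an
# ordinary HTTPS server (offering h2 / http/1.1) can never complete
# this handshake by accident.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False        # the challenge certificate is
ctx.verify_mode = ssl.CERT_NONE   # self-signed by design; we only inspect it
ctx.set_alpn_protocols(["acme-tls/1"])  # RFC 8737 sub-protocol name

# ctx.wrap_socket(...) would then surface the server's challenge
# certificate via getpeercert() only if the server speaks acme-tls/1.
```

If the server has not deliberately enabled this sub-protocol, the handshake fails and the check cannot pass, which is the whole point.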

The CPS section doesn't specify how Identrust's checks work, and it
would be desirable to have better descriptions for methods like this
that are a bit vague, but it's definitely not true that all
realisations of the method are broken.


EV Policy OIDs (was Re: Identrust Commercial Root CA 1 EV Request)

2018-09-20 Thread Nick Lamb via dev-security-policy
On Tue, 18 Sep 2018 17:53:34 -0700
Wayne Thayer via dev-security-policy

> ** EV Policy OID:

This reminds me of a question I keep meaning to ask. I know Microsoft
has been trying to get CAs to use for EV and knock it off
with the arbitrary policy OIDs, does Mozilla have any policy on that?


Re: Google Trust Services Root Inclusion Request

2018-09-20 Thread Nick Lamb via dev-security-policy
On Mon, 17 Sep 2018 18:41:07 -0500
Jake Weisz via dev-security-policy

> I guess under this logic, I withdraw my protest. As you say, Google
> could simply start using these certificates, and Mozilla executives
> would force you to accept them regardless of any policy violations in
> order to keep people using Firefox. This whole process appears to
> mostly just be a veneer of legitimacy on a process roughly akin to the
> fair and democratic election of Vladimir Putin. :| As long as Google
> remains legally answerable to no authority and an effective monopoly
> in half a dozen markets, there is roughly no point for Mozilla to
> maintain a CA policy: It should simply use Chrome's trusted store.

I think you've misunderstood. What happened was that somebody turned
your logic on itself, to show that it tears itself to pieces. The right
conclusion to draw from that is "My whole position is senseless and I
must reconsider".

It's analogous to the mathematical "proof by contradiction".

It certainly isn't our intent to say you're right, but only to follow
your position to its self-defeating logical conclusion.

Also, in passing, it would help if you knew that, for example, Chrome
doesn't have a trust store. Google operates a root trust programme in
its role as an operating system vendor (for Android), but the Chrome
browser uses the OS-provided trust store: a Chrome on Windows trusts
the various obscure government CAs that Microsoft decided are
trustworthy, a Chrome on macOS trusts whatever Apple trusts, and so on.

> Google's explanation in their announcement seems to confirm my
> statement: That buying roots from GlobalSign is effectively
> backdooring the CA process and making their certificates work in
> products which would not otherwise trust them.

Mechanically it is necessary to have trust from existing systems or you
can't run a new CA for many years while you wait for new systems that do
trust you to be deployed.

[ For example for Let's Encrypt this was ensured by obtaining cross
signatures on the Let's Encrypt intermediates from Identrust's DST Root
CA X3. ]

This fact makes a difference to what a CA might plausibly choose to do,
operationally, but doesn't alter how trustworthy, or otherwise, that CA
is to operate a root store today, which is what Mozilla's process
assesses.


Re: Visa Issues

2018-09-15 Thread Nick Lamb via dev-security-policy
On Thu, 13 Sep 2018 12:26:55 -0700
Wayne Thayer via dev-security-policy


Thanks for this list Wayne, you do a valuable task in assembling lists
like this for us to ponder.

> I would like to request that a representative from Visa engage in this
> discussion and provide responses to these issues.

And I look forward to that. Meanwhile.

For Issue D:

This looks like the problem we saw with CrossCert where nobody is
keeping proper records OR where they know the records they're keeping
are sub-par so they refuse to show them to auditors, which has much the
same effect.

There's a good chance if this CA issues a cert we later conclude was
bogus, they are unable to produce any meaningful evidence of how it
came to be issued, and we're just back to Symantec-style "We've fired
the employee who did it" which is not a basis on which we can have
confidence in the operation of the CA.


I'd also like to understand whether this CA root exists for the Web PKI
or if in fact Visa operates it for some other reason, and the issuance
of certificates valid in the Web PKI is a secondary or tertiary
purpose.
That is: CT logs show only a handful per month of new certificates
issued by this CA, but are there in fact more (perhaps far more) issued
that aren't for the Web PKI but are issued by this same root ?

In Bug #1315016 Visa's representative says the certificates discussed
were part of a "Visa product" as distinct from being separately
replaceable components.

To the extent that in fact trust in the Web PKI is orthogonal to Visa's
needs here, it may actually make sense for Visa to take the lead in
separating from the Web PKI rather than waiting to get kicked out of
root programmes. The reason is that we've seen previously (e.g. with
SHA-1) that financial services companies like Visa proactively choose
higher risk profiles than would be acceptable for the Web PKI. But
remaining trusted in the Web PKI means foregoing the economic
incentives for these practices - in practice this will mean Visa gets
itself needlessly into trouble, as happened for Issue C where Visa
decided it had its own "exception policy" that allowed it to violate
the root programme rules.


My understanding is that Mozilla intends for some future Firefox to do
SCT checking as Chrome does already. It appears Visa either never or
rarely logs certificates, so their sites (these names mostly belong to
Visa, to subsidiary or related organisations) would fail these checks.

It may be that if such SCT checks are in Firefox in the foreseeable
future that has the effect that these certs cease to impact on Firefox
at all. At which point, why would Mozilla keep Visa in the root trust
programme ?


Re: Google Trust Services - Minor SCT issue disclosure

2018-08-23 Thread Nick Lamb via dev-security-policy
On Thu, 23 Aug 2018 05:50:05 -0700 (PDT)
Andy Warner via dev-security-policy

> May 21st 2018, a new tool for issuing certificates within Google was
> made available to internal customers. Within hours we started to
> receive reports that Chrome Canary (v67) with Certificate
> Transparency checks enabled was showing warnings. A coding error led
> to the new tool providing Signed Certificate Timestamps (SCTs) from 2
> Google CT logs instead of one Google and one non-Google log. 

Feel free to jump in anywhere I've made a mistake, this might totally
invalidate some of my questions.

Presumably, since you eventually "fixed" this by asking Subscribers to
re-issue, the SCTs are baked into a signed certificate, rather than
provided separately so that the Subscriber can use them with e.g.
Stapling technologies ?

Which means that this "new tool" also involved a Google controlled
subCA signing these certificates with, as it turns out, the wrong SCTs
in them. It's not clear to me if the tool and CA are operationally one
and the same.
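For reference, the rule the wrong SCTs violated (Chrome's CT policy at the time required embedded SCTs from at least one Google and at least one non-Google log) can be sketched as a simple predicate; the operator names here are illustrative, not from Google's report:

```python
def satisfies_ct_diversity(log_operators):
    """True iff the SCT set includes at least one Google log and at
    least one non-Google log (a sketch of Chrome's CT log-diversity
    requirement for embedded SCTs)."""
    ops = set(log_operators)
    return "Google" in ops and any(op != "Google" for op in ops)

assert satisfies_ct_diversity(["Google", "DigiCert"])
# The buggy tool's output: two Google logs, so no diversity,
# and Chrome Canary (v67) warned accordingly.
assert not satisfies_ct_diversity(["Google", "Google"])
```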

Q1: Could a more significant "coding error" in this tool have resulted
in certificates being mis-issued (for example with SANs that don't
belong to Google, or lacking mandatory X.509 fields, or without being
CT logged)? If not please explain why the tool couldn't cause this.

Q2: If this error hadn't caused a negative end-user experience, what
mechanisms if any do you believe would have brought it to your
attention and how soon? e.g. does a team sample resulting certificates
from this tool at some interval? If it samples pre-certificates that
would not have detected this error, but is worth mentioning.

Q3: Such mistakes are of course inevitable in software development. But
they could also be introduced maliciously. Were you able to confidently
identify which specific individual(s) made the relevant change? (I don't
want names). Are you confident you'd be able to do this even if somehow
the production tool turned out not to match your revision control
history?
Thanks as always for satisfying my curiosity


Re: GoDaddy Revocations Due to a Variety of Issues

2018-08-09 Thread Nick Lamb via dev-security-policy
On Fri, 20 Jul 2018 21:38:45 -0700
Peter Bowen via dev-security-policy

>,cablint is one of the
> certificates.  It is not clear to me that there is an error here.
> The DNS names in the SAN are correctly encoded and the Common Name in
> the subject has one of the names found in the SAN.  The Common Name
> contains a DNS name that is the U-label form of one of the SAN
> entries.
> It is currently undefined if this is acceptable or unacceptable for
> certificates covered by the BRs.  I put a CA/Browser Forum ballot
> forward a while ago to try to clarify it was not acceptable, but it
> did not pass as several CAs felt it was not only acceptable but is
> needed and desirable.

It would be helpful if any such CAs can tell us why this was "needed and
desirable" with actual examples.

Since the CN field in Web PKI certs always contains information
duplicated from a field that has been better defined for decades I'm
guessing in most cases the cause is crappy software. But if we know
which software is crappy we can help get that fixed rather than
muddling along forever.
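Peter's U-label versus A-label distinction can be made concrete with a small stdlib sketch (the name is a standard IDNA example, not one of the certificates in question):

```python
# SAN dNSName entries must carry the ASCII A-label form of a name;
# the disputed certificates put the Unicode U-label in the CN instead,
# so the two fields spell the "same" name with different bytes.
u_label = "bücher.example"
a_label = u_label.encode("idna").decode("ascii")

assert a_label == "xn--bcher-kva.example"
assert u_label != a_label  # byte-for-byte they are different strings
```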

Re: Malformed Certificate Revocation - Godaddy

2018-05-31 Thread Nick Lamb via dev-security-policy
Hi Daymion,

I will summarise briefly my understanding of this report in case it is
wrong, if so please correct me, I apologise, and the rest of the email
is probably of no further importance:

GoDaddy integrated a linter (checking the certificates for sense) in
November 2017, and in February this linter caught an error of the same
sort described in this report and GoDaddy corrected the software defect
which made it possible for the error to occur. In May other
certificates with the error (some of them still in-date) were reported
to GoDaddy, and these have been revoked and replaced.

In terms of lessons learned, obviously this incident is further
evidence of the value of "linting" as a valuable defence in depth for
Certificate Authorities, and I hope anybody else who was still on the
fence about that has it on their TODO list.

But it seems to me that the February incident could and should have
triggered somebody at GoDaddy to scan their store of issued
certificates back then for any previous examples, and thus avoided the
subsequent incident report. Commercially this offers better value
to GoDaddy subscribers, since if you did this internally you'd be able
to offer subscribers a more generous and business-aligned timeline to
revoke and replace, rather than being on the clock due to an incident
report for their certificate.

I also have a small question: Does GoDaddy's linter check the To Be
Signed Certificate, or the finished signed Certificate (or both)? The
effect here is that a tbsCertificate on its own doesn't constitute
issuance, but technically once the certificate is signed it "exists",
even if you are careful never to deliver it to the subscriber when the
linter detects a problem.

Re: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-22 Thread Nick Lamb via dev-security-policy
On 21 May 2018 14:59, Ryan Sleevi wrote:

> Given the TTLs and the key sizes in use on DNSSEC records, why do you
> believe this?

This is a smoking gun because it's extremely strong circumstantial
evidence. Why else would these records exist except that in fact the
"victim" published these DNS records at the time of (or shortly before)
issuance?

As with a real smoking gun there certainly could be other explanations,
but the most obvious (that these were the genuine query answers) will
usually be correct.

If the reality is that fake records were supplied by a MitM using
cracked 512-bit keys in order to fool the CA, the name owner victim is
humiliated perhaps, but they can take action to secure their names with
a better key in future. And the ecosystem gets a free warning as to the
safety (rather otherwise) of short keys.

If we suppose the CA systematically produced these fake records
afterwards to justify a mis-issuance, I'd say that's quite a
credibility jump from the level of shenanigans we've gotten used to
from CAs, and it depends upon their victim having a short key for it to
even be possible.

These both sound like reasons to increase RSA key lengths for any names
that are important to you, not justifications for inadequate logging.

RE: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-21 Thread Nick Lamb via dev-security-policy
As a lowly relying party, I have to say I'd expect better here.

In particular, if the name's owner says their DNSSEC-signed CAA forbade
Let's Encrypt from issuing, and Let's Encrypt says otherwise, I
absolutely would expect Let's Encrypt to produce DNSSEC-signed RRs that
match up to their story. The smoking gun for such scenarios exists, and
CAs are, or should be, under no illusions that it's their job to
produce it.

A log entry that says "CAA: check OK" is worthless for exactly the
reason this thread exists; record the RRs themselves, byte for byte.

We've seen banks taking this sort of shortcut in the past and it did
them no favour with me. I want to see the EMV transaction signature
that proves a correct PIN was used, not a blurry print from some
mainframe with an annotation that says "A 4 in this column indicates
PIN confirmed".

Re: Regional BGP hijack of Amazon DNS infrastructure

2018-04-25 Thread Nick Lamb via dev-security-policy
On Wed, 25 Apr 2018 09:42:43 -0700 (PDT)
Santhan Raj via dev-security-policy

> What is interesting to me is the DV certificate that Amazon had
> issued for ( and this
> certificate expired on Apr 23rd 2018. 
> Could it be that the attackers were using this cert all along in
> place of a EV cert? ___

I have not been able to view this link for some reason. However I can
say that I've seen screenshots alleged to be of the Cert Viewer on a
Windows PC connected to the attacker site, and it's hilariously bogus,
it's a self-signed certificate with CA:TRUE set, and the site's name as
Common Name, it looks like if somebody with no previous exposure to the
Web PKI tried to make a certificate based on some random blog post or
old Youtube tutorial. e.g.

There's no way this was ever valid, anywhere. If it's what was actually
used (and I have no reason to believe it wasn't) the attackers relied
upon the Dancing Pig effect to get their job done.

Maybe we're actually lucky they didn't get a newer tutorial that taught
them to use ACME.


Re: DigiCert .onion certificates without Tor Service Descriptor Hash extension

2018-03-22 Thread Nick Lamb via dev-security-policy
On 21 Mar 2018 17:58, Wayne Thayer via dev-security-policy wrote:

> 7. List of steps your CA is taking to resolve the situation and
> ensure such issuance will not be repeated in the future, accompanied
> with a timeline of when your CA expects to accomplish these things.
>
> We revoked the certificates and added preliminary checking for Tor
> descriptors. We are adding additional checks to ensure certs cannot
> issue without them.

A broader consideration might be how DigiCert (or any CA) can ensure
such checks get thought up / planned for during the process of spinning
up a new type of issuance.

Imagine the CA/B Forum eventually authorizes some hypothetical new "MV"
certificates: they are Web PKI certs but with some different (less /
more / just strange) validation and criteria for the cert itself.
Obviously we cannot plan today for how this should be done exactly, but
a CA thinking of issuing MV ought to, as part of that, figure out what
needs to happen in terms of preventing mis-issuance of the new certs.

Otherwise we're inevitably back here shortly after the CA/B Forum says
OK.

Re: Following up on Trustico: reseller practices and accountability

2018-03-05 Thread Nick Lamb via dev-security-policy
On Mon, 5 Mar 2018 09:29:47 -0800 (PST)
"okaphone.elektronika--- via dev-security-policy"

> On Monday, 5 March 2018 18:10:17 UTC+1,
> wrote:
> Ah, found it. It was tialaramex who suggested that this could be how
> Trustico got the private keys.

I wrote this comment in response to a redditor who claimed they'd
received an email about this mass revocation although they were sure
they'd used best practices in issuing a CSR.

Now that we know in fact the reseller tried to have all certificates
revoked regardless of whether they had the private keys (and DigiCert
not unreasonably balked at doing this) it is likely the redditor in
question had got an email from their reseller and their cert was not
eventually revoked.

> Just speculation then. But still worth keeping in mind as something a
> reseller could be doing. I can just see some programmer coming up
> with this idea to workaround the problem of not having the private
> key. ;-)

I'm pretty sure I have seen this sort of practice, but I don't have any
hard evidence and it may be another of the bad ideas that has died out
as the market reforms.

In terms of the larger topic of this thread, I don't think we're going
to get very far putting pressure on CAs to fix resellers for reasons
several people have already mentioned. We can however encourage three
things that will help even though they can't overnight forbid
undesirable retention of other people's keys:

1. Education. Let's make sure material from the Trust Store owners,
from CAs, and from other entities we come into contact with describes
processes that are secure by default, such as the use of CSRs. Got a
document that skips the CSR "just for the example" ? Fix that, the same
way you'd show a normal family wearing seatbelts in a car in a movie
even though obviously for the movie they might be on a sound stage so
the seatbelts do nothing.

2. Implementation. Software vendors including Trust Store owners (such
as Microsoft and Apple) have an opportunity to "bake in" secure
approaches. The easier it is to do things the safer way, the less
likely users are to look for a shortcut from a reseller. Nobody is
offering a key generation feature so as to make the sales journey more
complicated and harder to use - if "just use a CSR" was the easy
option, that's all resellers would offer.

3. Customer focused standards. Rather than try to push from the CAs,
groups like PCI get to set demand, if the PCI compliance document
explicitly says that your private keys mustn't come from somebody else
then that's another reason somebody is going to get that right. I'm
sure there are other appropriate groups that mandate SSL and could
explicitly specify this as a requirement.

Re: How do you handle mass revocation requests?

2018-03-01 Thread Nick Lamb via dev-security-policy
On Thu, 1 Mar 2018 10:51:04 +
Ben Laurie via dev-security-policy

> Seems to me that signing something that has nothing to do with certs
> is a safer option - e.g. sign random string+Subject DN.

That does sound sane; I confess I have not spent much time playing with
easily available tools to check what is or is not easily possible on
each platform in terms of producing and checking such proofs. I knew
that you can make a CSR on popular platforms, and I knew how to check a
CSR is valid and a bogus CSR seemed obviously harmless to me.

I feel sure I saw someone's carefully thought through procedure for
proving control over a private key written up properly for close to
this sort of situation but I have tried and failed to find it again
since the incident was first reported, and apparently Jeremy didn't
know it either.

Re: How do you handle mass revocation requests?

2018-02-28 Thread Nick Lamb via dev-security-policy
On Wed, 28 Feb 2018 20:03:51 +
Jeremy Rowley via dev-security-policy

> The keys were emailed to me. I'm trying to get a project together
> where we self-sign a cert with each of the keys and publish them.
> That way there's evidence to the community of the compromise without
> simply listing 23k private keys. Someone on Reddit suggested that,
> which I really appreciated.

That's probably me (tialaramex).

Anyway, if it is me you're referring to, I suggested using the private
keys to issue a bogus CSR. CSRs are signed, proving that whoever made
them had the corresponding private key but they avoid the confusion
that comes from DigiCert (or its employees) issuing bogus certs.
Everybody reading m.d.s.policy can still see that a self-signed cert is
harmless and not an attack, but it may be harder to explain in a
soundbite. Maybe more technically able contributors disagree ?

Re: Certificates with 2008 Debian weak key bug

2018-02-16 Thread Nick Lamb via dev-security-policy
On Fri, 16 Feb 2018 11:28:41 +
Arkadiusz Ławniczak via dev-security-policy

> The issue was caused by incorrect calculation of the SHA1
> fingerprint of the public key. Public key hashes stored in Certum's
> database were calculated from the modulus value with the "Modulus"
> prefix and a line-ending character, while the hash value of the
> public key from the CSR was calculated and returned without these
> additional characters. So this is the reason why the calculated
> fingerprint did not match the value from Certum's database. Weak-key
> verification is tested each time before a new version of the software
> is deployed, and also periodically as part of the test schedule.
> Unfortunately, the database of weak keys that served the tests
> contained key hashes in incorrect formats, and the parsed key was
> also in an incorrect format. Therefore we could not recognize a weak
> key in its "original" OpenSSL form, so each test returned false
> positives.

Thanks for your report Arkadiusz,

This is a reminder that just because your unit tests pass, doesn't mean
your larger system behaves how you think the unit tests mean it does. If
you want to be sure how the whole _system_ behaves (and for a CA we
certainly do want that) you're going to need to explicitly test that
whole system even if your unit tests are green.
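A toy illustration of the canonicalisation mismatch Arkadiusz describes; the exact string formats here are assumptions based on his description, and the modulus is a made-up stand-in, not a real key:

```python
import hashlib

# Hypothetical weak-key modulus (stand-in value, not a real key).
modulus_hex = "00C4F8E9B2A1D37C"

# One side hashed OpenSSL's textual rendering ("Modulus=...\n"),
# the other hashed the bare hex taken from the CSR -- so lookups of
# known-weak keys in the database silently never matched.
db_fingerprint = hashlib.sha1(("Modulus=" + modulus_hex + "\n").encode()).hexdigest()
csr_fingerprint = hashlib.sha1(modulus_hex.encode()).hexdigest()

assert db_fingerprint != csr_fingerprint  # every weak-key lookup misses
```

Because the unit-test fixtures were built with the same wrong format on both sides, they agreed with each other and stayed green, which is exactly why only an end-to-end test over the real database could have caught this.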

Re: Possible Issue with Domain Validation Method 9 in a shared hosting environment

2018-01-15 Thread Nick Lamb via dev-security-policy
On Mon, 15 Jan 2018 18:18:10 +
Doug Beattie via dev-security-policy

> -  Total number of active OneClick customers: < 10

What constitutes a OneClick customer in this sense?

The focus of concern for tls-sni-01 was service providers who present
an HTTPS endpoint for many independent entities, most commonly a bulk
web host or a CDN. These function as essentially a "Confused Deputy" in
the discovered attack on tls-sni-01. For those providers there would
undoubtedly be a temptation to pretend all is well (to keep things
working) even if in fact they aren't able to defeat this attack or some
trivial mutation of it, and that's coloured Let's Encrypt's response,
because there's just no way to realistically police whitelisting of
thousands or tens of thousands of such service providers.

From the volumes versus numbers of customers, it seems as though
OneClick must be targeting the same type of service providers, is that
right?

The small number of such customers suggests that, unlike Let's Encrypt,
it could be possible for GlobalSign to diligently affirm that each of
the customers has technical countermeasures in place to protect their
clients from each other.

In my opinion such an approach ought to be adequate to continue using
OneClick in the short term, say for 12-18 months with the understanding
that this validation method will either be replaced by something less
problematic or the OneClick service will go away in that time.

But of course I do not speak for Google, Mozilla or any major trust

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Nick Lamb via dev-security-policy
On Wed, 10 Jan 2018 15:10:41 +0100
Patrick Figel via dev-security-policy

> A user on Hacker News brought up the possibility that the fairly
> popular DirectAdmin control panel might also demonstrate the
> problematic behaviour mentioned in your report[1].

Although arguably tangential to the purpose of m.d.s.policy, I think it
would be really valuable to understand what behaviours are actually out
there and in what sort of volumes.

I know from personal experience that my own popular host lets me create
web hosting for a 2LD I don't actually control. I had management
agreement to take control, began setting up the web site and then
technical inertia meant control over the name was never actually
transferred, the site is still there but obviously in that case needs
an /etc/hosts override to visit from a normal web browser.

Would that host:

* Let me do this even if another of their customers was hosting that
  exact site ? If so, would mine sometimes "win" over theirs, perhaps if
  they temporarily disabled access or due to some third criteria like
  our usernames or seniority of account age ?

* Let me do this for sub-domains or sub-sub-domains of other customers,
  including perhaps ones which have a wildcard DNS entry so that "my"
  site would actually get served to ordinary users ?

* Let me do this for DNS names that can't exist (like *.acme.invalid,
  leading to the Let's Encrypt issue we started discussing) ?

I don't know the answer to any of those questions, but I think that
even if they're tangential to m.d.s.policy somebody needs to find out,
and not just for the company I happen to use.

Re: Serial number length

2017-12-29 Thread Nick Lamb via dev-security-policy
On Fri, 29 Dec 2017 07:24:31 +0100
Jakob Bohm via dev-security-policy

> 3. Or would the elimination in #2 reduce the entropy of such serial
>numbers to slightly less than 64 bits (since there are less than
> 2**64 allowed values for all but the first such certificate)?

The tremendous size of the numbers involved means that in practice this
makes no difference. A single collision only becomes likely (not
certain, merely likely) over the course of issuing billions of such
certificates.

If I'm right a decision to append a further byte (say 0x42) to the
serial number any time a collision would otherwise occur would have
the same _visible_ effect as just throwing away the colliding number and
choosing another, ie no effect because collisions don't actually
happen in practice.
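A back-of-the-envelope birthday bound makes the point concrete; this is an illustrative sketch of the probability mathematics, not anything taken from the Baseline Requirements:

```python
import math

def collision_probability(n_issued: int, bits: int = 64) -> float:
    """Birthday-bound probability that at least two of n_issued
    uniformly random `bits`-bit serial numbers collide:
    p ~= 1 - exp(-n(n-1) / 2^(bits+1))."""
    exponent = -n_issued * (n_issued - 1) / (2 ** (bits + 1))
    return -math.expm1(exponent)  # expm1 keeps precision when p is tiny

# Even a billion certificates from one CA leaves a collision unlikely
# (under 3%), and realistic volumes make it vanishingly rare:
print(collision_probability(1_000_000_000))
print(collision_probability(10_000_000))
```

This is also why a CA that never bothered to check for duplicates might go decades without a detectable consequence.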

[ In my day job I maintain a system which uses a 64-bit hash of URLs to
index them. We are conscious that by the pigeon hole principle this hash
could sometimes confuse two URLs and there's a safeguard to detect that.
Despite processing millions of URLs this way every day, for several
years, the safeguard has never triggered outside of unit tests. Perhaps
one day it will. ]

It wouldn't surprise me if some CAs actually don't check #2 at all.
Since collisions are so unlikely with truly random serial numbers it
might well never come up, even if you explicitly looked for it, so that
this "failure" might have no detectable consequence for a smaller CA
even over the course of decades of operation.

So far as we know ISRG / Let's Encrypt are issuing the largest volume
from a single subCA of any CA, but I believe they just use a lot more
than 64-bits, which is a rational choice here to avoid answering tricky
philosophical questions about integers. I would commend this approach
to other CAs wondering how best to comply.

Final thought: The linter should check for at least 64 bits, but it
can't check for true randomness (doing so may be literally impossible in
fact), so anything further should be left for human observers and/or CA
auditors.


Re: On the value of EV

2017-12-15 Thread Nick Lamb via dev-security-policy
On Thu, 14 Dec 2017 16:33:29 -0800 (PST)
Matthew Hardeman via dev-security-policy

> That attack was by hacking the target's domain registrar account.
> Others have done that as well, including against a Brazilian bank.
> The right attacker would not even need that - they could just hijack
> traffic headed to the IP address of the real DNS server in question.

Attacking the registry or registrar are perhaps *more* effective rather
than less, because this focuses on the agreed source of truth. We've
seen not so long ago with Togo that even a TLD registry may not be as
secure as we'd like.

An attacker with control over North American routing may be able to
arrange for traffic from a North American CA to, say, Fox IT systems in
Europe to be directed to them instead, but find it difficult to do the
same for traffic from say, Russia.

But if the attacker simply changes the actual DNS data controlled by
the registrar, everywhere in the world will agree that this new data is
correct - it comes from the legitimate source of truth on the matter.
Russia is just as happy as Canada to believe what the registrar for a
domain says about that domain.

Re: On the value of EV

2017-12-13 Thread Nick Lamb via dev-security-policy
On Wed, 13 Dec 2017 12:29:40 +0100
Jakob Bohm via dev-security-policy

> What is *programmatically* enforced is too little for human safety.
> Believing that computers can replace human judgement is a big mistake.
> Most of the world knows this.

That's a massive and probably insurmountable problem then since the
design of HTTPS in particular and the way web browsers are normally
used is _only_ compatible with programmatic enforcement.

Allow me to illustrate:

Suppose you visit your bank's web site. There is a lovely "Green
Bar" EV certificate, and you, as a vocal enthusiast for the value of
Extended Validation, examine this certificate in considerable detail,
verifying that the business identified by the certificate is indeed
your bank. You are doubtless proud that this capability was available
to you.

You fill in your username and password and press "Submit". What happens?

Maybe your web browser finds that the connection it had before to
the bank's web site has gone: maybe it timed out, or there was a
transient network problem, or a million other things. But not to worry;
you don't run a web browser in order to be bothered with technical
minutiae, so the browser will just make a new connection. This sort of
thing happens all the time without any trouble.

This new connection involves a fresh TLS setup, the server and browser
must begin again, the server will present its certificate to establish
identity. The web browser examines this certificate programmatically to
decide that it's OK, and if it is, the HTTPS form POST operation for
the log in form is completed by sending your username and password over
the new TLS connection.

You did NOT get to examine this certificate. Maybe it's the same one as
before, maybe it's slightly different, maybe completely different, the
hardware (let alone software) answering needn't be the same as last
time and the certificate needn't have any EV data in it. Your web
browser was happy with it, so that's where your bank username and
password were sent.

Even IF you decide now, with the new connection, that you don't trust
this certificate, it's too late. Your credentials were already
delivered to whoever had that certificate.

Software makes these trust decisions constantly, they take only the
blink of an eye, and require no human attention, so we can safely build
a world that requires millions of them. The moment you demand human
attention, you not only introduce lots of failure modes, you also use
up a very limited resource.
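
To make the point concrete, here is a minimal sketch (Python's standard
ssl module, with a hypothetical host) of what every fresh connection
does: the validation is entirely programmatic and repeats on each
handshake, with no hook for a human to inspect the certificate first:

```python
import socket
import ssl

# The default context loads the trust store and turns on exactly the
# programmatic checks discussed above: chain validation plus hostname match.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

def fetch_homepage(host: str) -> bytes:
    # Every call performs a fresh handshake; the certificate presented
    # *this* time is validated automatically, just like a browser retry.
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            req = b"HEAD / HTTP/1.0\r\nHost: " + host.encode("ascii") + b"\r\n\r\n"
            tls.sendall(req)
            return tls.recv(4096)
```

If the certificate fails the programmatic checks, `wrap_socket` raises
`ssl.SSLCertVerificationError`; there is no "ask the human" path.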

Perhaps you feel that when browsing the web you make a conscious
decision about trust for each site you visit. Maybe, if you are
extraordinarily cautious, you make the decision for individual web
pages. Alas, to be of any use the decisions must be taken for every
single HTTP operation, and most pages will use dozens (some hundreds)
of such operations.


Re: On the value of EV

2017-12-12 Thread Nick Lamb via dev-security-policy
On Mon, 11 Dec 2017 19:08:43 -0500
Adam Caudill via dev-security-policy

> I can say from my own experience, in some states in the US, it's a
> trivial matter to create a company online, with no validation of
> identity or other information. It takes about 10 minutes, and you'll
> have all the paperwork the next day. When I did this (in a state I
> had never done business in before), there was absolutely no identity
> checks, no identity documents, nothing at all that would tie the
> business to me if I had lied. Creating a business with no connection
> to the people behind it is a very, very simple thing to do.

It may be valuable to understand here that although we often think of
countries like the United States and United Kingdom as places with
great respect for the Rule of Law, they have also both quietly
functioned as places where the rich may hide their wealth with no
questions asked. Even "Who are you?" is too many questions.

The ability to create companies in these countries without anyone
really knowing who controls them or ultimately benefits from any
financial income is a _feature_ not a bug as far as their governments
are concerned.

Re: CA generated keys

2017-12-11 Thread Nick Lamb via dev-security-policy
On Sat, 9 Dec 2017 18:20:56 +
Tim Hollebeek via dev-security-policy

> First, third parties who are *not* CAs can run key generation and
> escrow services, and then the third party service can apply for a
> certificate for the key, and deliver the certificate and the key to a
> customer.  I'm not sure how this could be prevented.  So if this
> actually did end up being a Mozilla policy, the practical effect
> would be that SSL keys can be generated by third parties and
> escrowed, *UNLESS* that party is trusted by Mozilla. This seems .
> backwards, at best.

I'm actually astonished that CAs would _want_ to be doing this.

A CA like Let's Encrypt can confidently say that it didn't lose the
subscriber's private keys, because it never had them and doesn't want
them. If there's an incident where a Let's Encrypt subscriber's keys go
"walk about" we can start by looking at the subscriber, because that's
where the keys started.

In contrast, a CA which says "Oh, for convenience and security we've
generated the private keys you should use" can't start from there. We
have to start examining their generation and custody of the keys. Was
generation predictable? Were the keys lost between generation and
sending? Were they mistakenly kept (even though the CA can't possibly
have any use for them) after sending? Were they properly secured in
transit?

So many questions, all trivially eliminated by just not having "Hold
onto valuable keys that belong to somebody else" as part of your
business model.

> Second, although I strongly believe that in general, as a best
> practice, keys should be generated by the device/entity it belongs to
> whenever possible, we've seen increasing evidence that key generation
> is difficult and many devices cannot do it securely.

I do not have any confidence that a CA will do a comprehensively better
job. I don't doubt they'd _try_, but the problem is that Debian were
trying, and we have every reason to assume Infineon were trying. Trying
wasn't enough.
If subscribers take responsibility for generating keys we benefit from
heterogeneity, and the subscriber gets to decide directly to choose
better quality implementations versus lower costs. Infineon's "Fast
Prime" was optional, if you were happy with a device using a proven
method that took a few seconds longer to generate a key, they'd sell
you that. Most customers, it seems, wanted faster but more dangerous.

Aside from the Debian weak keys (which were so few you could usefully
enumerate all the private keys for yourself), these incidents tend to
just make the keys easier to guess. This is bad, and we aim to avoid
it, but it's not instantly fatal. Losing a customer's keys to a bug in
your generation, dispatch or archive handling probably _is_ instantly
fatal, and it's unnecessary when you need never have those keys at all.
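
For scale: the Debian bug shrank the keyspace so far that tools such as
openssl-blacklist simply shipped digests of every possible weak key, and
checking a key became a set lookup. A sketch of that approach (the
digest set and the exact hashing scheme here are invented for
illustration, not the real blocklist format):

```python
import hashlib

# Illustrative only: real blocklists held truncated digests covering the
# few tens of thousands of keys the broken PRNG could produce.
WEAK_KEY_DIGESTS = {
    "0123456789abcdef0123",  # hypothetical entry, not a real weak key
}

def is_debian_weak(der_public_key: bytes) -> bool:
    # Membership test: hash the encoded public key and look it up.
    digest = hashlib.sha1(der_public_key).hexdigest()[:20]
    return digest in WEAK_KEY_DIGESTS
```

Guessable-but-large keyspaces (as with the Infineon flaw) cannot be
enumerated this way, which is the distinction drawn above.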


Re: Certificate incident: private key leaked for wildcard certificate for *

2017-12-09 Thread Nick Lamb via dev-security-policy
On Sat, 9 Dec 2017 09:51:59 +0100
Hanno Böck via dev-security-policy

> On Fri, 8 Dec 2017 16:43:48 -0700
> Wayne Thayer via dev-security-policy
>  wrote:
> > The root CA is ultimately responsible for subordinate CAs it has
> > signed.
> I see a problem with that, as this is far from obvious.

I saw "responsibility" here as meaning responsibility to the Trust
Stores on behalf of the Relying Parties. For the Relying Parties
themselves I think the right pattern is: Try filing a Problem Report
with the Issuer, if the result isn't satisfactory, complain to your
Trust Store(s). We can do the rest, can we not?

The Trust Stores have just as much reason to distrust a root CA which
can't keep its subCAs from breaking the rules as they do if the root CA
breaks the rules directly itself. That's sort of the lesson from
Symantec too, albeit in their case the problem was RAs, right?

It should be in the Root CA's interest to make sure that every
sub-ordinate CA, whether physically under its control or not, is
properly operated, and if there's a suspicion that it's not being
properly operated, to get that sorted out. Handling problem reports is
part of the proper operation of the CA.

It may be that root CAs decide the best way to _achieve_ this objective
[for a subCA they don't actually intend as simply a cross-signature to
bootstrap another root] is to insist upon being the point of contact
for Problem Reports, and they'll pass them on, so this way they have
oversight. Or they may insist on the Problem Reports going to an alias,
Exchange DL or similar that sends a copy to the root CA. I don't think
we need to dictate how this is done, only to re-emphasise that, as the
root CA, making sure the Problem Reports are handled properly is
ultimately your responsibility, however you discharge it; it is not a
situation for buck passing.

We definitely mustn't be shy about problems affecting another business
with a Trust Store. If Microsoft's executive management has any sense
there is an institutional firewall between their Trust Store and their
Certificate Authority functions, and the former is able to make
decisions independent of their potential impact on the latter. If a
root CA finds that it is politically uncomfortable to have two very
different relationships (Programme Member: Trust Programme / CA: subCA)
to the same public company, well, that's unfortunate, and I would
suggest the less awkward way forward is to bring the subCA relationship
to an ordered close. Perhaps Microsoft shouldn't be in both games (and
if so, the same for Google), but that again is not a problem for

Re: Anomalous Certificate Issuances based on historic CAA records

2017-11-29 Thread Nick Lamb via dev-security-policy
On Wed, 29 Nov 2017 22:37:08 +
Ben Laurie via dev-security-policy

> Presumably only for non-DNSSEC, actually? For DNSSEC, you have a clear
> chain of responsibility for keys, and that is relatively easy to
> build on.

For DNSSEC a CA could (and I would hope that they do) collect enough
records to show that the CAA result they relied on was authentic after
the fact.
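
The records worth archiving are small. A sketch of the CAA RDATA wire
format from RFC 8659 (a flags octet, a tag length octet, the tag, then
the value), which a CA could store alongside the DNSSEC signatures that
authenticate it:

```python
def parse_caa_rdata(rdata: bytes) -> tuple:
    """Parse one CAA record's RDATA per RFC 8659."""
    flags = rdata[0]              # bit 0x80 is the "issuer critical" flag
    tag_len = rdata[1]
    tag = rdata[2:2 + tag_len].decode("ascii")   # e.g. "issue", "issuewild"
    value = rdata[2 + tag_len:].decode("ascii")  # e.g. a CA's domain name
    return (flags, tag, value)

# The wire form of:  example.com. CAA 0 issue "ca.example.net"
rdata = bytes([0, 5]) + b"issue" + b"ca.example.net"
```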

It is in the nature of a distributed system like DNS that this may not
have been the _only_ authentic result available on the network at the
time of issuance, and the CA has no way to know of any other results
that are inconsistent with issuance once it has one which is consistent.

Of course, contradictory authentic results SHOULD not ordinarily exist
for a well-managed domain, but we know it happens, and it would be even
more likely for test systems, although their operators should have the
know-how to control this.