Re: Intermediates Supporting Many EE Certs

2017-02-14 Thread Peter Gutmann via dev-security-policy
Jakob Bohm via dev-security-policy  
writes:

>Unfortunately, for these not-quite-web-server things (printers, routers
>etc.), automating use of the current ACME Let's encrypt protocol with or
>without hardcoding the Let's Encrypt URL is a non-starter for anyone using
>these things in a more secure network and/or beyond the firmware renewal
>availability from the vendor.

That's among the least of the concerns with IoS devices.  For one thing they're
mostly going to have RFC 1918 addresses or non-qualified names, which CAs
aren't supposed to issue certs for (not that that's ever stopped them in the
past).  Then the CA needs to connect back to the device to verify control of
the domain name it's issuing the cert for, which shouldn't be possible for
any IoS device that's set up properly.  And I'm sure there's more...
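The kind of check involved can be sketched in a few lines of Python.  This is a
rough illustration, not any CA's actual validation logic, and the helper name
is invented:

```python
import ipaddress

def is_publicly_verifiable(name: str) -> bool:
    """Rough sketch: reject names a public CA shouldn't certify --
    private (RFC 1918) addresses and unqualified single-label names."""
    try:
        addr = ipaddress.ip_address(name)
        return addr.is_global  # rejects 10/8, 172.16/12, 192.168/16, etc.
    except ValueError:
        pass  # not an IP literal, so treat it as a DNS name
    # An unqualified name like "printer" has no dot, hence no public DNS anchor.
    return "." in name.strip(".")

print(is_publicly_verifiable("192.168.1.20"))         # False: RFC 1918
print(is_publicly_verifiable("printer"))              # False: non-qualified
print(is_publicly_verifiable("printer.example.com"))  # True
```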

Peter.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Taiwan GRCA Root Renewal Request

2017-02-12 Thread Peter Gutmann via dev-security-policy
Gervase Markham via dev-security-policy  
writes:

>Peter: you are going to have to re-summarise your question. And then, if you
>are asking why Mozilla code works in a certain way, mozilla.dev.security or
>mozilla.dev.tech.crypto are almost certainly far better venues.

Sure, no problem.  I was just replying to a post by Kathleen on this list, and
it seemed like a policy issue so I figured it was the right forum.  I'll CC it
to dev.security as well...

The original post was about the fact that Mozilla runs into lots of problems
with top-down path construction:

>Indeed, and as per your comment here:
>https://bugzilla.mozilla.org/show_bug.cgi?id=1056341#c24

I asked:

So just to satisfy my curiosity, it's been known ever since top-down
construction was first advocated by PKI loon^H^H^Htheoreticians:

https://www.youtube.com/watch?v=CoOrmK4OueY

that you work bottom-up, not top-down.  If that's not obvious just from about
a beer's worth of analysis then it should have been when one of said PKI
theoreticians described trying to implement it at a conference and pointed out
that his implementation ran for three days without terminating, after which he
tried the same thing again.

Did no-one see that this was going to happen?  Why would anyone try and do it
this way?  Rather baffled minds want to know...
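The bottom-up approach being advocated can be sketched with a toy model (names
only, no signature or validity checking; all structures here are invented for
illustration): start from the end-entity cert and walk issuer-to-subject until
you hit a trust anchor, a simple linear climb that always terminates.

```python
def build_path_bottom_up(leaf, certs_by_subject, trust_anchors):
    """Walk from the leaf towards a trust anchor, one issuer at a time."""
    path = [leaf]
    current = leaf
    while current["subject"] not in trust_anchors:
        issuer = certs_by_subject.get(current["issuer"])
        if issuer is None or issuer is current:  # missing cert or self-signed loop
            raise ValueError("no path to a trust anchor")
        path.append(issuer)
        current = issuer
    return path

certs = {
    "CN=Root":         {"subject": "CN=Root",         "issuer": "CN=Root"},
    "CN=Intermediate": {"subject": "CN=Intermediate", "issuer": "CN=Root"},
    "CN=leaf.example": {"subject": "CN=leaf.example", "issuer": "CN=Intermediate"},
}
path = build_path_bottom_up(certs["CN=leaf.example"], certs, {"CN=Root"})
print([c["subject"] for c in path])
# ['CN=leaf.example', 'CN=Intermediate', 'CN=Root']
```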

Peter.


Re: Researcher Says API Flaw Exposed Symantec Certificates, Including Private Keys

2017-03-28 Thread Peter Gutmann via dev-security-policy
Nick Lamb via dev-security-policy  
writes:

>In order for Symantec to reveal anybody's private keys they'd first need to
>have those keys

That's standard practice for many CAs, they generate the key and certificate
for you and email it to you as a PKCS #12.  It seems to be more common among
lesser-known CAs though, particularly ones with government-mandated monopolies
for some reason, so I'm not sure if Symantec does it.

Peter.


Re: Over 14K 'Let's Encrypt' SSL Certificates Issued To PayPal Phishing Sites

2017-03-27 Thread Peter Gutmann via dev-security-policy
Martin Heaps via dev-security-policy  
writes:

>This topic is frustrating in that there seems to be a wide attempt by people
>to use one form of authentication (DV TLS) to verify another form of
>authentication (EV TLS).

The overall problem is that browser vendors have decreed that you can't have
encryption unless you have a certificate, i.e. a CA-supplied magic token to
turn the crypto on.  Let's Encrypt was an attempt to kludge around this by
giving everyone one of these magic tokens.  Like a lot of other kludges, it
had negative consequences...

So it's now being actively exploited... how could anyone *not* see this
coming?  How can anyone actually be surprised that this is now happening?  As
the late Bob Jueneman once said on the PKIX list (over a different PKI-related
topic), "it's like watching a train wreck in slow motion, one freeze-frame at
a time".  It's pre-ordained what's going to happen, the most you can do is
artificially delay its arrival.

>The end nessecity is that the general public need to be educated [...]

Quoting Vesselin Bontchev, "if user education was going to work, it would have
worked by now".  And that was a decade ago.

Peter.


Re: CA Validation quality is failing

2017-04-19 Thread Peter Gutmann via dev-security-policy
Kurt Roeckx via dev-security-policy  
writes:

>Both the localityName and stateOrProvinceName are Almere, while the province 
>is Flevoland.

How much checking is a CA expected to do here?  I know that OV and DV certs 
are just "someone at this site responded to email" or whatever, but for an 
EV cert how much further does the CA actually have to go?  When e-Szignó 
Hitelesítés-Szolgáltató in Hungary certifies Autolac Car Services, Av Los 
Frutales 487 urb., Lima, Peru, are they expected to verify that it's really 
in Av Los Frutales and not Los Tolladores, or do they just go ahead and
issue the cert?  Can someone point to the bit of the BR that says that this
is obviously right or wrong?

Peter.


Re: CA Validation quality is failing

2017-04-20 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>For an EV cert, you look in 
>https://cabforum.org/wp-content/uploads/EV-V1_6_1.pdf   

It was meant as a rhetorical question, the OP asked whether doing XYZ in an
EV certificate was allowed and I was pointing out that the CAB Forum 
guidelines should provide the answer.  Vincent Lynch's reply was the appropriate
one, pointing out the text that covers this situation.

Peter.


Re: [FORGED] Criticism of Mozilla Re: Google Trust Services roots

2017-03-10 Thread Peter Gutmann via dev-security-policy
Kurrasch via dev-security-policy  writes:

>* Types of transfers:  I don't think the situation was envisioned where a
>single root would be transferred between entities in such a way that company
>names and branding would become intermingled.

This has happened many times in the past; root certs have been sold and
re-sold for years.

>* Manner of transfer:  As we learned from Ryan H., a second HSM was
>introduced for the transfer of the private key meaning that for a period of
>time 2 copies of the private key were in existence.

I would be surprised if only two copies were in existence, given the value of
root keys I'd hope CAs have multiple backup copies.

Peter.


Re: DigiCert-Symantec Announcement

2017-08-02 Thread Peter Gutmann via dev-security-policy
Jeremy Rowley via dev-security-policy  
writes:

>Today, DigiCert and Symantec announced that DigiCert is acquiring the
>Symantec CA assets, including the infrastructure, personnel, roots, and
>platforms.

I realise this is a bit off-topic for the list but someone has to bring up the
elephant in the room: How does this affect the Google vs. Symantec situation?
Is it pure coincidence that Symantec now re-emerges as DigiCert, presumably
avoiding the sanctions since now things will chain up to DigiCert roots?

Just curious here, this seems like a bit too much of a coincidence.

Peter.


Re: DigiCert-Symantec Announcement

2017-08-02 Thread Peter Gutmann via dev-security-policy
Peter Bowen  writes:

>Gerv's email was clear that sale to DigiCert will not impact the plan,
>saying: "any change of control of some or all of Symantec's roots would not
>be grounds for a renegotiation of these dates."
>
>So the sanctions are still intact.

Ah, I phrased my question a bit unclearly, what I meant was that the existing
certs, which now chain up to to-be-untrusted Symantec roots, can be moved
across to trusted DigiCert roots and continue as before.  I'm assuming that
was the intent of the exercise, that it's business as usual, just the name has
changed.

Peter.


Re: Certificate with invalid dnsName issued from Baltimore intermediate

2017-07-19 Thread Peter Gutmann via dev-security-policy
Hanno Böck via dev-security-policy  
writes:

>More dotdot-certificates:

Given how widespread (meaning from different CAs) these are, is there some
quirk of a widely-used resolver library that allows them?  I've done a bit of
impromptu testing of various tools and bits of code, but none of them seem to
allow double-dot domain names, so I'm wondering why there are so many of them
that no-one ever caught, until now by explicitly searching for them.
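The syntax check in question is cheap to implement.  Here's a rough stdlib
sketch of RFC 1035-style label validation (illustrative only; it ignores
wildcards, underscores, and other SAN subtleties):

```python
import re

# Every label between dots must be 1-63 chars of [A-Za-z0-9-],
# not starting or ending with a hyphen.  A name containing ".."
# has an empty label and must be rejected.
LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def valid_dns_name(name: str) -> bool:
    if name.endswith("."):
        name = name[:-1]  # allow one trailing dot for the DNS root
    return all(LABEL.match(label) for label in name.split("."))

print(valid_dns_name("www.example.com"))   # True
print(valid_dns_name("www..example.com"))  # False: empty label
```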

Peter.


Re: [FORGED] Re: Machine- and human-readable format for root store information?

2017-06-30 Thread Peter Gutmann via dev-security-policy
David Adrian via dev-security-policy  
writes:

>I'd like to see either a reliable URL to fetch that can be converted to PEM
>(i.e. what Microsoft does), or some API you can hit to the store (e.g. what
>CT does).

PEM.  You keep using that word... I do not think it means what you think it
does.  Technically speaking, PEM is the data format for Privacy Enhanced Mail,
usually applied to the ASCII wrapping for the binary data.  In practice, it's
used to denote OpenSSL's proprietary private-key format.  Neither of those
seem terribly useful for communicating trusted certificates.

If you do want a standard format for them that pretty much anything should
already be able to understand, why not use CMS/PKCS #7 certificate
sets/collections/chains?  Almost anything that deals with certs should already
be able to read those.  Sure, it won't do metadata, but for that you'll need
to spend three years arguing in a standards group and produce a 100-page RFC
that no-one can get interoperability on.  OTOH PKCS #7 works right now.
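The point about "PEM" being nothing more than an ASCII wrapping is easy to
demonstrate: the armor is just base64-encoded DER between BEGIN/END markers,
whatever the payload happens to be.  A minimal sketch (the helper name is
invented):

```python
import base64
import textwrap

def pem_encode(der_bytes: bytes, label: str) -> str:
    """Wrap arbitrary DER in the usual base64 armor.  The label only
    says what the DER contains; the wrapping itself is payload-agnostic."""
    b64 = base64.b64encode(der_bytes).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))
    return f"-----BEGIN {label}-----\n{body}\n-----END {label}-----\n"

# The same wrapper serves a single certificate, a PKCS #7 cert set, or
# anything else -- only the label changes (e.g. "CERTIFICATE", "PKCS7").
print(pem_encode(b"\x30\x03\x02\x01\x01", "CERTIFICATE"))
```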

Peter.


Re: [FORGED] Re: Machine- and human-readable format for root store information?

2017-06-30 Thread Peter Gutmann via dev-security-policy
Peter Gutmann via dev-security-policy <dev-security-policy@lists.mozilla.org> 
writes:

>You keep using that word... I do not think it means what you think it does.

"... what you think it means".  Dammit.

Peter.


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi via dev-security-policy  
writes:

>>Pragmatically, does anything known break on the extra byte there?
>
>Yes. NSS does. Because NSS properly implements 5280.

I would say that's probably more a flaw in NSS, then.  Does anyone's
implementation seriously expect things to work, and by extension break if they
don't, as 5280 says it should?  What happens to NSS if it sees a policy
explicitText longer than 200 bytes?  If it sees a CRL with an unknown critical
extension?  If it sees a CRL with one of the extensions where you ignore the
actual contents of the CRL and instead use revocation information hidden in a
sub-extension (sorry, can't remember the name of that one at the moment).

That's just the first few things that came to mind, there are a million (well,
thousands.  OK, maybe hundreds.  At least a dozen) bizarre, arbitrary, and
often illogical requirements (for example with the critical extension thing
the only sensible action is to do the opposite of what the RFC says) in 5280
that I'm pretty sure NSS, and probably most other implementations as well,
don't conform to, or are even aware of.  So saying "it happens to break our
code" is a perfectly valid response, but claiming better standards conformance
than everyone else is venturing onto thin ice.

More generally, I don't think there's any PKI implementation that can claim to
"properly implement 5280" because there's just too much weird stuff in there
for anyone to fully comprehend and be conformant to.  As a corollary, since
there are also things in there that are illogical, a hypothetical
implementation that really was fully conformant could be regarded as broken
when it does things that the spec requires but that no-one would expect an
implementation to do.

Peter.


Re: Certificates with invalidly long serial numbers

2017-08-07 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy 
 writes:

>One question: the choice of 20 bytes of serial number is an unusual length
>for an integer type.  It's not a nice clean power of 2.  It doesn't align to
>any native integer data type length on any platform I'm aware of.

It exactly matches the SHA-1 hash size.  SHA-1 was the universal go-to hash
function when 2459 and its successors were created, and is implicitly
hardcoded into various parts of the spec.  See for example the suggestions for
generating the keyIdentifier.

Peter.


Re: Certificates with invalidly long serial numbers

2017-08-08 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy 
 writes:

>I merely raise the point that IF the framers of the 20 bytes rule did, in
>fact, simultaneously intend that arbitrary SHA-1 hash results should be able
>to be stuffed into the serial number field AND SIMULTANEOUSLY that the DER
>encoded integer field value must be a positive integer and that insertion of
>a leading 0x00 byte to ensure that the high order bit would be 0 (thus
>regarded as a positive value per the coding), THEN it must follow that at
>least in the minds of those who engineered the rule, that the inserted 0x00
>byte must not be part of the 20 byte maximum size of the value AS legitimate
>SHA-1 values of 20 bytes do include values where the high order bit would be
>1 and without pre-padding the proper interpretation of such a value would be
>as a negative integer.

That sounds like sensible reasoning.  So you need to accept at least 20 + 1
bytes, or better yet just set it to 32 or 64 bytes and be done with it because
there are bound to be implementations out there that don't respect the 20-byte
limit.  At the very least though you'd need to be able to handle 20 + 1.
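The 20-versus-21-byte arithmetic can be checked directly.  A sketch of how the
content octets of a DER INTEGER come out for a non-negative value (illustrative
only, not a full DER encoder):

```python
def der_integer_content(value: int) -> bytes:
    """Content octets of a DER INTEGER for a non-negative value:
    minimal big-endian bytes, with a leading 0x00 prepended when the
    high bit is set (otherwise the value would decode as negative)."""
    raw = value.to_bytes((value.bit_length() + 7) // 8 or 1, "big")
    if raw[0] & 0x80:
        raw = b"\x00" + raw
    return raw

# A 20-byte hash value with the top bit set needs 21 content octets;
# one with the top bit clear fits in 20.
high = int.from_bytes(b"\x80" + b"\x00" * 19, "big")
low = int.from_bytes(b"\x7f" + b"\xff" * 19, "big")
print(len(der_integer_content(high)), len(der_integer_content(low)))  # 21 20
```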

Peter.


Re: [FORGED] Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>Mozilla updates every six to eight weeks. And that works. That's all that
>matters for this discussion.

Do all the world's CAs know this?

Peter.



Re: [FORGED] Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>I can't help but feel you're raising concerns that aren't relevant.

CAs issue roots with effectively infinite (20 to 40-year) lifetimes because
it's too painful to do otherwise.  You're proposing instead:

  require that all CAs must generate (new) roots on some interval (e.g. 3
  years) for inclusion.

(that's quoted from the original message I replied to).  How do you propose
that Mozilla is going to get every commercial CA on earth to do this?

Peter.


Re: [FORGED] Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi via dev-security-policy  
writes:

>An alternative solution to the ossification that Alex muses about is to
>require that all CAs must generate (new) roots on some interval (e.g. 3
>years) for inclusion. That is, the 'maximum' a root can be included in a
>Mozilla product is 3 years (or less!)

Unless someone has a means of managing frequent updates of the root
infrastructure (and there isn't one, or at least none that work), this will
never fly.  There's a reason why roots have 20-40 year lifetimes and why they
get on-sold endlessly across different owners rather than simply being
replaced when required.

Peter.


Re: April CA Communication: Results

2017-05-16 Thread Peter Gutmann via dev-security-policy
Jakob Bohm via dev-security-policy  
writes:

>Indeed, I strongly suspect Microsoft *customers* combined with Microsoft
>untrustworthiness (they officially closed their Trustworthy Computing
>initiative!) may be the major hold out, specifically:
>
>1. [...]

5. Microsoft has SHA-1 deeply hardcoded into their cert-management
   infrastructure, and in some places it can't be replaced.  For example their
   NDES cert management server replies to a SHA-2 request with a SHA-1
   response that can't be decrypted, implying that it's never even been tested
   with SHA-2.  If you submit an MD5 request then everything works as expected
   (as does SHA-1).
   
   That's MD5, in 2017.

Peter.


Re: [FORGED] Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Peter Gutmann via dev-security-policy
Michael Casadevall via dev-security-policy 
 writes:

>I learned something new today. I'm about to run out the door right now so I
>can't read the RFCs but do you know off the top of your head why that was
>removed?

From the PKIX RFC?  There was never any coherent reason given, it just got
turned into a no-no.

(Note the qualifier "coherent".  There were various reasons proposed for
removing it, but none that made any sense).

Peter.



Re: [FORGED] Re: P-521

2017-06-27 Thread Peter Gutmann via dev-security-policy
Alex Gaynor via dev-security-policy  
writes:

>I'll take the opposite side: let's disallow it before it's use expands :-)
>P-521 isn't great, and there's really no value in proliferation of crypto
>algorithms, as someone told me: "Ciphersuites aren't pokemon, you shouldn't
>try to catch 'em all".

"Elliptic Curve Cryptography in Practice", FC'14, for SSH P256 support is
99.9%, P521 support is 0.01%, P384 support is 0.00%.  So you can pretty much
just assume that if it supports ECC, it'll be P256.

Peter.


Re: Machine- and human-readable format for root store information?

2017-06-27 Thread Peter Gutmann via dev-security-policy
Jos Purvis (jopurvis) via dev-security-policy 
 writes:

>One possibility would be to look at the Trust Anchor Management Protocol
>(TAMP - RFC5934).

Note that TAMP is one of PKIX' many, many gedanken experiments that were
created with little, most likely no, real-world evaluation before it was
declared ready.  It may or may not actually work, and may or may not (and
looking at its incredible complexity and flexibility, almost certainly "may
not") interoperate with any other implementation that turns up.  So you'd need
to write a second spec which is a profile of TAMP that nails down what's
expected by an implementation, and then run interop tests to see whether it
works at all.

(In case you're wondering why the CMP protocol, another PKIX cert management
protocol that in theory already does what TAMP does, starts at version 2, it's
because when attempts were made to deploy the initial spec it was found that
it didn't work, so they had to create a "version 2" that tried to patch up the
published standard.  Even then, try finding two CMP implementations that can
interop out of the box...).

Peter.


Re: [FORGED] Re: CA generated keys

2017-12-13 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy 
 writes:

>In principle, I support Mr. Sleevi's position, practically I lean toward Mr.
>Thayer's and Mr. Hollebeek's position.

I probably support at least one of those, if I can figure out who's been
quoted as saying what.

>Sitting on my desk are not less than 3 reference designs.  At least two of
>them have decent hardware RNG capabilities.  

My code runs on a lot (and I mean a *lot*) of embedded, virtually none of
which has hardware RNGs.  Or an OS, for that matter, at least in the sense of
something Unix-like.  However, in all cases the RNG system is pretty secure,
you preload a fixed seed at manufacture and then get just enough changing data
to ensure non-repeating values (almost every RTOS has this, e.g. VxWorks has
the very useful taskRegsGet() for which the docs tell you "self-examination is
not advisable as results are unpredictable", which is perfect).
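The scheme described above (a fixed manufacture-time seed plus whatever
changing data the RTOS can scrape up) might be modelled roughly as follows.
This is a toy illustration of the idea only, not a vetted CSPRNG construction
and not any particular product's code:

```python
import hashlib

class EmbeddedPRNG:
    """Toy model: a per-device seed set at manufacture, mixed with
    small amounts of changing data so outputs never repeat across
    resets.  Illustration only, not a reviewed RNG design."""

    def __init__(self, manufacture_seed: bytes):
        self.state = hashlib.sha256(manufacture_seed).digest()

    def add_entropy(self, data: bytes) -> None:
        # Fold new (possibly low-quality) data into the state.
        self.state = hashlib.sha256(self.state + data).digest()

    def random_bytes(self, n: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(self.state + counter.to_bytes(4, "big")).digest()
            counter += 1
        self.add_entropy(b"step")  # advance the state so outputs don't repeat
        return out[:n]

rng = EmbeddedPRNG(b"device-serial-0001")
rng.add_entropy(b"task register snapshot")  # stand-in for e.g. taskRegsGet() output
print(rng.random_bytes(16).hex())
```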

In all of these cases, the device is going to be a safer place to generate
keys than the CA, in particular because (a) the CA is another embedded
controller somewhere so probably no better than the target device and (b)
there's no easy way to get the key securely from the CA to the device.

However, there's also an awful lot of IoS out there that uses shared private
keys (thus the term "the lesser-known public key" that was used at one
software house some years ago).  OTOH those devices are also going to be
running decade-old unpatched kernels with every service turned on (also
years-old binaries), XSS, hardcoded admin passwords, and all the other stuff
that makes the IoS such a joy for attackers.  So in that case I think a
less-than-good private key would be the least of your worries.

So the bits we need to worry about are what falls between "full of security
holes anyway" and "things done right".  What is that, and does it matter if
the private keys aren't perfect?

Peter.


Re: On the value of EV

2017-12-13 Thread Peter Gutmann via dev-security-policy
Tim Shirley via dev-security-policy  
writes:

>But regardless of which (or neither) is true, the very fact that EV certs are
>rarely (never?) used on phishing sites

There's no need:

https://info.phishlabs.com/blog/quarter-phishing-attacks-hosted-https-domains

In particular, "the rate at which phishing sites are hosted on HTTPS pages is
rising significantly faster than overall HTTPS adoption".

It's like SPF and site security seals, adoption by spammers and crooks was
ahead of adoption by legit users because the bad guys have more need of a
signalling mechanism like that than anyone else.

Peter.


Re: CA generated keys

2017-12-18 Thread Peter Gutmann via dev-security-policy
Ryan Hurst via dev-security-policy  
writes:

>Unfortunately, the PKCS#12 format, as supported by UAs and Operating Systems
>is not a great candidate for the role of carrying keys anymore. You can see
>my blog post on this topic here: http://unmitigatedrisk.com/?p=543

It's even worse than that, I use it as my teaching example of how not to
design a crypto standard:

https://www.cs.auckland.ac.nz/~pgut001/pubs/pfx.html

In other words its main function is as a broad-spectrum antipattern that you
can use for teaching purposes.

>The core issue is the use of old cryptographic primitives that barely live up
>to the equivalent cryptographic strengths of keys in use today. The offline
>nature of the protection involved also enables an attacker to grind any value
>used as the password as well.

That, and about five hundred other issues.  An easier solution would be to use
PKCS #15, which dates from roughly the same time as #12 but doesn't have any
of those problems (PKCS #12 only exists because it was a political compromise
created to appease Microsoft, who really, really wanted everyone to use their
PFX design).

Peter.


Re: OCSP Responder monitoring (was Re: Violations of Baseline Requirements 4.9.10)

2017-12-11 Thread Peter Gutmann via dev-security-policy
Rob Stradling via dev-security-policy  
writes:

>CAs / Responder URLs that are in scope for, but violate, the BR prohibition 
>on returning a signed a "Good" response for a random serial number

Isn't that perfectly valid?  Despite the misleading name, OCSP's "Good" just
means "not revoked", and a not-revoked reply to a random serial number is 
correct because it's not revoked.
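The distinction can be shown with toy responder logic (invented names and
serials).  A blacklist-style responder only knows which serials are revoked, so
any other serial, including one it never issued, comes back "good"; the
BR-compliant variant reflects the requirement that responders must not vouch
for serials they never issued:

```python
REVOKED = {0x1A2B, 0x3C4D}
ISSUED = {0x1A2B, 0x3C4D, 0x5E6F}

def naive_status(serial: int) -> str:
    # "Good" here literally means "not on the revocation list".
    return "revoked" if serial in REVOKED else "good"

def br_compliant_status(serial: int) -> str:
    # Refuse to vouch for serials the CA never issued.
    if serial not in ISSUED:
        return "unknown"
    return "revoked" if serial in REVOKED else "good"

random_serial = 0x99999999
print(naive_status(random_serial))         # "good" -- the behaviour at issue
print(br_compliant_status(random_serial))  # "unknown"
```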

Peter.


Re: Francisco Partners acquires Comodo certificate authority business

2017-10-31 Thread Peter Gutmann via dev-security-policy
mw--- via dev-security-policy  writes:

>So they sell multiple roots over to a company that is "the leader in Deep
>Packet Inspection (DPI) and we've got a lot going on in that space" and
>enable them to issue trusted certificates and mitm all encrypted connections
>with that? That is a good halloween joke!

Francisco Partners is more a general investment company, but in that regard
they also have a stake in firms like Blue Coat, whose products have been used
by repressive regimes against their citizens.

Still, it's amusing that a perfect mechanism for performing MITM attacks is
now controlled by a company who has other arms that actively perform MITM
attacks.

Peter.


Re: Disallowed company name

2018-06-03 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman  writes:
>>On Thu, May 31, 2018 at 8:38 PM, Peter Gutmann 
>>wrote:
>>
>>>Banks, trade vendors, etc, tend to reject accounts with names like this.
>>
>>Do they?
>>
>>https://www.flickr.com/photos/nzphoto/6038112443/
>
>I would hope that we could agree that there is generally a different risk
>management burden in getting a store loyalty tracking card versus getting a
>loan or even opening a business demand deposit account.

I haven't gone through the full process of opening an account since I didn't
want to actually open a real account, but got most of the way through with
Bobby Tables, so it seems possible here.  The account name is pretty much
irrelevant, all that matters is the account number.  Then on making a payment
you get texted the details of the transaction (to/from/amount/etc) and asked
to approve it.  The name never crops up.

In terms of tax filing it's the same, what matters is your taxpayer number,
not whether you want to file your return as Mister Mxyzptlk.

Peter.


Re: [FORGED] Re: Disallowed company name

2018-05-31 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman writes:
>On Thu, May 31, 2018 at 5:03 PM, Kristian Fiskerstrand  wrote:
>
>> New business enterprise name:   ';UPDATE TAXRATE SET RATE = 0 WHERE NAME =
>> 'EDVIN SYSE'
>
>That's hilarious.  Where I'm from they'd accuse you of attempting to hack
>them, though likely not actually attempt to prosecute it.

Some years ago I sent a cert request to a public CA's test server that
contained, among other things, the following:

static const CERT_DATA certReqData[] = {
    /* Identification information */
    { CRYPT_CERTINFO_COUNTRYNAME, IS_STRING, 0, TEXT( "US" ) },
    { CRYPT_CERTINFO_ORGANIZATIONNAME, IS_STRING, 0, TEXT( "Dave's Wetaburgers" ) },
    { CRYPT_CERTINFO_ORGANIZATIONALUNITNAME, IS_STRING, 0, TEXT( "SSL Certificates" ) },
    { CRYPT_CERTINFO_COMMONNAME, IS_STRING, 0, TEXT( "Robert';DROP TABLE certificates;--" ) },

(it's part of the standard self-test data that I use for my own code, used to
be a different SQLI string but I changed it to Bobby Tables as an homage to
XKCD).

Their test server went offline for several days.

I was nice enough not to submit the request to their production systems.

Peter.


Re: [FORGED] Re: Disallowed company name

2018-05-31 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman writes:

>I wonder if you've ever annoyed a taxing authority?  They have far less humor
>than one might imagine.

I used to have the account name administrator@, after trying
various SQLI@ names and being somewhat disappointed that no
fireworks ensued.  They were rather amused, and probably a bit proud of the
fact that no fireworks ensued.

Peter.


Re: Disallowed company name

2018-05-31 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman writes:

>Banks, trade vendors, etc, tend to reject accounts with names like this.

Do they?

https://www.flickr.com/photos/nzphoto/6038112443/

Peter.


Re: DYMO Root CA installed by Label Printing Software

2018-01-09 Thread Peter Gutmann via dev-security-policy
Nicholas Humfrey via dev-security-policy 
 writes:

>What is the correct way for them to achieve what they are trying to do?

I'm not sure if there is a correct way, just a least awful way.  The problem
is that the browser vendors have decreed that you can only talk SSL if you use
a certificate from a commercial CA, which obviously isn't possible in this
case, or in numerous other cases (well, there are many commercial CAs who will
happily sell you a cert for "localhost", but that's another story).

So you can hack something with "the cloud", but now your label printing
software relies on external internet access to work, and you're sending
potentially sensitive data that never actually needs to go off-site, off-site
for no good reason.

Perhaps the least awful way is to install a custom root CA cert that only ever
signs one cert, "localhost" (and the CA's private key is held by Dymo, not
hardcoded into the binary).  You've got a shared private key for localhost,
but it's less serious than having a universal root CA there.

The problem is really with the browsers, not with Dymo.  There's no easy
solution from Dymo's end, so what they've done, assuming they haven't
hardcoded the CA's private key, is probably the least awful workaround to the
problem.

Peter.


Re: DYMO Root CA installed by Label Printing Software

2018-01-09 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>Of course, if that doesn’t tickle your fancy, there are other ways that are
>supported that you may not have heard about - for example:
>https://docs.microsoft.com/en-us/microsoft-edge/extensions/guides/native-messaging
>
>https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Native_messaging
>
>https://developer.chrome.com/apps/nativeMessaging

So I've had a quick look at these and unless I've missed something they're
just a means of talking to an app on your local machine.  As soon as you go
outside that boundary, e.g. to configure a router or printer on your local
network via a browser, you're back to having to add a new root CA to the cert
store for it to work.  Or have I missed something?

Peter.


Re: DYMO Root CA installed by Label Printing Software

2018-01-09 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>First, there are non-commercial CAs that are trusted. 

By "commercial CAs" I meant external business entities, not an in-house CA
that the key or cert owner controls.  Doesn't matter if they charge money or
not, you still need to go to an external organisation to ask permission to use
encryption.

>Second, you'e stated "there are many commercial CAs who will happily sell you
>a cert for 'localhost'". To that, I say POC||GTFO, 

“An Observatory for the SSLiverse”, Peter Eckersley and Jesse Burns,
presentation at Defcon 18, July 2010,
http://www.eff.org/files/DefconSSLiverse.pdf.  That lists *six thousand* certs
issued for localhost from Comodo, Cybertrust, Digicert, Entrust, Equifax,
GlobalSign, GoDaddy, Microsoft, Starfield, Verisign, and many others.  Then
there's tens of thousands of certs that other studies have found for
unqualified names, RFC 1918 names, and so on.

(The naming-and-shaming via CT is certainly cutting down on this, but given
the widespread mis-issuance of these certs in the past, are you really
confident that it's not still happening when CT can't see it?).

>Third, I'm happy to inform you there is a correct way. The Secure Contexts
>spec ( https://www.w3.org/TR/secure-contexts/#is-origin-trustworthy )

... available on display in the bottom of a locked filing cabinet stuck in a
disused lavatory with a sign on the door saying "Beware of the Leopard".

Given that a number of vendors have resorted to hardcoding their own root CAs,
Secure Contexts is either not working or there's insufficient awareness of it
for it to be effective (or both).  Having just skimmed parts of that lengthy
and complex spec, which I'd never heard of until now, it's pretty hard to see
what this actually gives me, and that it can (according to you) make
connecting to localhost secure.  In particular the text "The following
features are at-risk, and may be dropped during the CR period: [...] The
localhost carveout" and "This carveout is 'at risk', as there’s currently only
one implementation" doesn't inspire confidence either in it being widely
supported or it continuing to be supported.

>There are other remarks in your response that are also wrong, but in the
>spirit of only focusing on the most important (to this specific reporter's
>question), I've omitted them. 

Please, go ahead.  I'm happy to defend them, with references to studies and
whatnot if available.

Peter.


Re: DYMO Root CA installed by Label Printing Software

2018-01-09 Thread Peter Gutmann via dev-security-policy
Jonathan Rudenberg  writes:

>For communicating with other machines, the correct thing to do is to issue a
>unique certificate for each device from a publicly trusted CA. The way Plex
>does this is a good example:
>https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/

But the Plex solution required DynDNS, partnering with a CA for custom hash-
based wildcard certificates (and for which the CA had to create a new custom
CA cert), and other tricks, I don't think that generalises.  In effect this
has given Plex their own in-house CA (by proxy), which is a point solution for
one vendor but not something that any vendor can build into a product.

Anyone from Plex want to comment on how much effort was involved in this? It'd
be interesting to know what was required to negotiate this deal, and how long
it took, just as a reference point for anyone else considering it.

Peter.


Re: DYMO Root CA installed by Label Printing Software

2018-01-09 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>Or is your viewpoint that because this happened in the past, one should
>assume that it will forever happen, no matter how much the ecosystem changes -
>including explicitly prohibiting it for years?

Pretty much.  See the followup message, which shows it was still happening as
of a few months ago.

>one should assume that it will forever happen, no matter how much the
>ecosystem changes - including explicitly prohibiting it for years?

Buffer overflows, XSS, SQL injection, the list is endless.  None of these
security issues have gone away, why would another widespread problem, issuance
of certs that shouldn't have been issued, magically disappear just because
someone says it should?  Do you honestly believe we won't see more mis-issued
certs just because the BR says you're not allowed to do it?  Just check the
list over any period of time for examples of the ones that someone's actually
noticed; who knows how many have gone unnoticed until someone like Tavis
comes along.

>Quick check: will anything you cite newer than 2010?

See my other reply.  I just used the best-known one, which was the first that
came to mind, and shows how widespread the issue was before the naming-and-
shaming cut some of it down.

Peter.


Re: DYMO Root CA installed by Label Printing Software

2018-01-09 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>I hope you can see how I responded to precisely the problem provided.

You responded to that one specific limited instance.  That doesn't work for
anything else where you've got a service that you want to make available over
HTTPS.  Native messaging is a hack to get around a problem with browsers, as
soon as you move off the local machine it reappears again, which is what I was
pointing out.

Since this is something that keeps cropping up, and from all signs will keep
on cropping up, perhaps the browser vendors could publish some sort of
guide/BCP on how to do it right that everyone could follow.  For example:

  HTTPS to localhost: Use Native Messaging
  HTTPS to device on local network (e.g. RFC 1918): ???
  HTTPS to device with non-FQDN: ???
  HTTPS to device with static IP address: ???

This would solve... well, at least take a step towards solving the same issue
that keeps coming up again and again.  If there's a definitive answer,
developers could refer to that and get it right.

Oh, and saying "you need to negotiate a custom deal with a
commercial/public/whatever-you-want-to-call-it CA" doesn't count as a
solution, it has to be something that's actually practical.

Peter.


Re: DYMO Root CA installed by Label Printing Software

2018-01-09 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>I similarly suspect you’re unaware of https://wicg.github.io/cors-rfc1918/ in
>which browsers seek to limit or restrict communication to such devices?

A... blog post?  Not sure what that is, it's labelled "A Collection of
Interesting Ideas", stashed on Github under the WICG's repository?  No, for
some inexplicable reason I seem to have missed that one.  Is there a "Beware
of the Leopard" sign somewhere?

It talks a lot about details of CORS, but I'm not sure what it says about
allowing secure HTTPS to devices at RFC 1918 addresses.  The doc says "we
propose a mitigation against these kinds of attacks that would require
internal devices to explicitly opt-in to requests from the public internet",
which indicates it's targeted at something altogether different.

>while also acknowledging that you have not kept up with the state of the
>industry for the past near-decade of improvements or enhancements

If the industry actually publicised some of this stuff rather than posting
articles with names like "A Collection of Interesting Ideas" to GitHub (which
in any case doesn't look like it actually addresses the problem) then I might
have kept up with it a bit more.  And as I've already pointed out, given the
number of vendors who are resorting to slipping in their own root CAs and
other tricks, I'm not the only one who's missing all these well-hidden
industry solutions.

>>  HTTPS to device with non-FQDN: ???
>>  HTTPS to device with static IP address: ???
>
>I suspect any answer such as “Don’t do this” or “This is intentionally not
>supported” will be met by you as “impractical”.

Try me.  The reason why I ruled out "negotiate a custom deal with a commercial
CA" is that it genuinely doesn't scale, you can't expect 10,000, 50,000,
100,000 (whatever the number is) device vendors to all cut a special deal with
a commercial/public/whatever CA just to allow a browser to talk to their $30
Internet-connected whatsit.

It's a simple enough question, so I'll repeat it again, a vendor selling some
sort of Internet-connected device that needs to be administered via HTTP (i.e.
using a browser), a printer, router, HVAC unit, whatever you like, wants to
add encryption to the connection.  How should they do this for the fairly
common scenarios of:

HTTPS to device on local network (e.g. RFC 1918).
HTTPS to device with non-FQDN.
HTTPS to device with static IP address.

What's the recommended BCP for a vendor to allow browser-based HTTPS access
for these scenarios?  I'm genuinely curious.  And please publish the
recommendation so others can follow it (not on GitHub labelled "A Collection
of Interesting Ideas").

Peter.


Re: A vision of an entirely different WebPKI of the future...

2018-08-17 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy 
 writes:

>What if the various user agents' root programs all lobbied ICANN to impose a
>new technical requirement upon TLD REGISTRY operators?

That was actually debated by one country, that whenever anyone bought a domain
they'd automatically get a certificate for it included.  Makes perfect sense,
owning the domain is a pretty good proof of ownership of the domain for
certificate purposes.  It eventually sank under the cost and complexity of
registrars being allowed to operate CAs that were trusted by browsers [0].

Peter.

[0] Some details simplified, and identities protected.


Re: A vision of an entirely different WebPKI of the future...

2018-08-19 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy 
 writes:

>That's very interesting.  I would be curious to know the timing of this.  Was
>this before or after massive deployment of DNSSEC by the registries?

Some time before.  To the best of my knowledge DNSSEC considerations had
nothing to do with this either way, it just seemed a commonsense way to get
sites issued with certs without doing the same validation and whatnot twice
over.  Having the registry you buy your domain name from also be the one that
issues the cert saying you own the domain name seems like a complete no-
brainer.

Peter.


Re: [FORGED] Re: [FORGED] TeletexString

2018-07-08 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>Is that because you believe it forbidden by spec, or simply unwise?

The spec allows almost anything, and in particular because there isn't any one
definitive "spec" you can have ten incompatible interpretations that are all
compliant to something that can claim to be the spec (see the Style Guide
description).

However, the chances of anything displaying this stuff correctly are
essentially zero.

>The value of a linter is fairly proportional to its value in spec adherence.

Which of the half-dozen to dozen interpretations of what constitutes "the
spec" do you want it to enforce, and why that particular one and not the
others?

Also, if it knows that the chances of anything being able to correctly handle
a particular string form are essentially zero, even if some interpretation of
the spec can claim it's OK, shouldn't it warn?

>making them errors puts burden on CAs and the community to evaluate whether
>or not it's an "actual  violation" or just something "monumentally stupid"

No, it's a way of telling CAs that if they do this, things will break.  That's
exactly what the original lint did, "this is permitted in the spec but you
probably weren't intending to do that".  It's cert*lint*, not
certstrictcompliancecheckertoarbitraryunworkablerules.

Peter.


Re: [FORGED] TeletexString

2018-07-08 Thread Peter Gutmann via dev-security-policy
Kurt Roeckx  writes:

>I have yet to see a certificate that doesn't just put latin1 in it, which
>should get rejected.

There were some Deutsche Telekom certificates from the late 1990s that used
T61String floating diacritics for which I had some custom code to identify the
two-character sequences and convert them to latin-1, which things could
actually understand (this was slightly risky because some of those are also
plausible latin-1 combinations, so the code checked specifically for likely
umlauted a, o and u).  That was one of the certs I referred to earlier where
we were unable to identify anything that could display them, except possibly
custom apps also from Deutsche Telekom.  In any case the next release of the
certs moved to latin-1, presumably in response to complaints that their certs
contained garbage strings that nothing could display.

So the most sensible approach would be to assume T61String = latin1, at least
that way what a CA puts in a cert will display OK.

Just out of interest, which country did the T61String-containing cert come
from?  With which interpretation of T61String did the resulting strings
display correctly?  Were they in fact latin-1?

Peter.



Re: [FORGED] TeletexString

2018-07-06 Thread Peter Gutmann via dev-security-policy
Peter Bowen via dev-security-policy  
writes:

>In reviewing a recent CA application, the question came up of what is allowed
>in a certificate in data encoded as "TeletexString" (which is also sometimes
>called T61String).

For the full story of T.61 strings, see the X.509 style guide,
https://www.cs.auckland.ac.nz/~pgut001/pubs/x509guide.txt, it's a flat text
file but grep for "T.61/TeletexString" for the text that covers it.

Some further notes, at the time a lot of implementations just treated it as
8859-1 (which the guide mentions with the comment on assuming T.61 = latin-1),
which worked OK for most cases where it was used, e.g. umlauts and other
accented characters for European languages.  Also at one point a bunch of
people tried to identify any implementation that would display even something
as basic as umlauts via floating diacritics and were unable to find anything
that did it.

So for certlint I'd always warn for T61String with anything other than ASCII
(which century are they living in? Point them at UTF8 and tell them to come
back when they've implemented it), treat it as a probably 8859-1 string when
checking for validity, and report an error if they try anything like character
set switching and fancy escape sequences, which are pretty much guaranteed not
to work (i.e. display) properly.
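
For what it's worth, the warn/treat-as-latin-1/error behaviour described above
is only a few lines of code.  A rough sketch in Python (certlint itself is
written in Ruby, and the function name and message texts here are made up for
illustration):

```python
def lint_t61string(raw: bytes):
    """Return (level, message) findings for a TeletexString value,
    using the pragmatic 'T61String == latin-1' reading described above."""
    findings = []
    if 0x1B in raw:
        # ESC introduces T.61 character-set switching, which essentially
        # nothing will display correctly.
        findings.append(("error", "escape sequences present; nothing is "
                                  "likely to display this"))
    elif any(b < 0x20 or 0x7F <= b < 0xA0 for b in raw):
        findings.append(("error", "C0/C1 control characters present"))
    elif any(b >= 0xA0 for b in raw):
        # Not ASCII, but displayable if treated as 8859-1.
        findings.append(("warning", "non-ASCII string, interpreted as "
                                    "latin-1: " + raw.decode("latin-1")))
    return findings

print(lint_t61string(b"plain ASCII"))                    # -> no findings
print(lint_t61string("Müller".encode("latin-1")))        # -> latin-1 warning
```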

Peter.


Re: [FORGED] TeletexString

2018-07-08 Thread Peter Gutmann via dev-security-policy
Kurt Roeckx  writes:

>I think it should generate an error on any character not defined in 102 and
>the space character. So any time you try to use anything in C0, C1 and G1,
>and those 6 in 102 that are not defined.

Yep, sounds good.  With a possible check for latin-1 validity as well in case
they've used the T.61 = 8859-1 approximation/kludge.

Peter.


Re: [FORGED] TeletexString

2018-07-09 Thread Peter Gutmann via dev-security-policy
Kurt Roeckx  writes:

>As I understand it, it was the Swedish government, and they claimed that
>Microsoft said it was ok. They just contained latin1.

That sounds right then, latin-1 would cover Sweden, and given the T61String =
latin1 that most (all?) implementations apply it should work fine.

Peter.


Re: Sigh. stripe.ian.sh back with EV certificate for Stripe, Inc of Kentucky....

2018-04-14 Thread Peter Gutmann via dev-security-policy
Jakob Bohm via dev-security-policy  
writes:

>It's like a fire drill where the mayor "pretends" that an old school building
>is on fire, and the firemen then proceed to evacuate the building and douse
>it in enough water to put out a real fire.

Well, not quite: It's like a fire drill where the mayor "pretends" that an old
school building is on fire, and the firemen then look at the burning building
and say "that's all burning according to the baseline requirements, everything
appears to be in order" and leave again.  In the meantime the building burns
to the ground.

During an after-dinner discussion yesterday, someone (not me :-) made the
observation that if anything ever deserved the moniker "broken as designed",
it's this.

Peter.


Re: DigiCert Assured ID Root CA and Global Root CA EV Request

2018-12-15 Thread Peter Gutmann via dev-security-policy
Rob Stradling via dev-security-policy  
writes:

>The public exponent (65537) in https://crt.sh/?asn1=628933973 is encoded as
>02 04 00 01 00 01 (02=INTEGER, 04=length in bytes), whereas the only valid
>encoding is 02 03 01 00 01.

Yep, this is what dumpasn1 says about it:

 5574:   INTEGER 65537
 : Error: Integer '00 01 ...' has non-DER encoding.
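
The difference between the two encodings is mechanical enough to check in a
few lines.  A minimal sketch (not dumpasn1's actual logic):

```python
def der_integer_content(value: int) -> bytes:
    """Encode a non-negative integer as minimal DER INTEGER content octets."""
    length = max(1, (value.bit_length() + 7) // 8)
    content = value.to_bytes(length, "big")
    if content[0] & 0x80:
        # Top bit set: prepend one zero octet so the value stays positive.
        content = b"\x00" + content
    return content

def is_minimal_der(content: bytes) -> bool:
    """True if INTEGER content octets are the minimal (DER) encoding."""
    if len(content) > 1 and content[0] == 0x00 and not (content[1] & 0x80):
        return False   # redundant leading zero, as in 00 01 00 01
    return True

print(der_integer_content(65537).hex())            # 010001 -> 02 03 01 00 01
print(is_minimal_der(bytes.fromhex("00010001")))   # False: the misissued form
```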

Peter.


Re: [FORGED] Re: [FORGED] Re: Incident report - Misissuance of CISCO VPN server certificates by Microsec

2018-12-07 Thread Peter Gutmann via dev-security-policy
Paul Wouters via dev-security-policy  
writes:

>I'm not sure how that is helpful for those crypto libraries who mistakenly
>believe a certificate is a TLS certificate and thus if the EKU is not empty
>it should have serverAuth or clientAuth.

Sure, it wouldn't help with current libraries that neither acknowledge non-TLS
use nor know about the tlsCompatibility EKU, but it would act as a signalling
mechanism going forward to inform RPs about what's going on.  So if you get
notified about an apparently-wrong cert, you can see the tlsCompatibility EKU
and realise why the TLS EKUs are present.

Peter.


Re: [FORGED] Re: Incident report - Misissuance of CISCO VPN server certificates by Microsec

2018-12-06 Thread Peter Gutmann via dev-security-policy
Paul Wouters via dev-security-policy  
writes:

>Usually X509 is validated using standard libraries that only think of the TLS
>usage. So most certificates for VPN usage still add EKUs like serverAuth or
>clientAuth, or there will be interop problems.

So just to make sure I've got this right, implementations are needing to add
dummy TLS EKUs to non-TLS certs in order for them to "work"?  In that case why
not add a signalling EKU or policy value, a bit like Microsoft's
systemHealthLoophole EKU (I don't know what its official name is, 1 3 6 1 4 1
311 47 1 3) where the normal systemHealth key usage is meant to indicate
compliance with a system or corporate security policy and the
systemHealthLoophole key usage is for systems that don't comply with the
policy but that need a systemHealth certificate anyway.

In theory there's the anyExtendedKeyUsage that seems to do something like
this:

   If a CA includes extended key usages to satisfy such applications,
   but does not wish to restrict usages of the key, the CA can include
   the special KeyPurposeId anyExtendedKeyUsage in addition to the
   particular key purposes required by the applications. 

but thats vague enough, and little-supported enough, that expecting existing
implementations to handle it correctly out of the box seems pretty risky.
Better to define a new EKU, "tlsCompatibility", telling the relying party that
the TLS EKUs are present for compatibility purposes and can be ignored for
non-TLS use.
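
To make the intent concrete, here's a sketch of the relying-party logic such a
signalling EKU would enable.  The serverAuth/clientAuth OIDs are the real
id-kp values, but the signalling OID is invented, since none has been
assigned:

```python
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"   # id-kp-serverAuth
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"   # id-kp-clientAuth
TLS_COMPAT  = "1.2.3.4.5.6.7"       # hypothetical OID for the proposed EKU

def treat_as_tls_cert(ekus: set) -> bool:
    """Should a relying party treat this cert as a TLS cert?

    If the signalling EKU is present, the TLS EKUs were only added for
    interop with TLS-centric libraries and can be ignored."""
    if TLS_COMPAT in ekus:
        return False
    return bool(ekus & {SERVER_AUTH, CLIENT_AUTH})

print(treat_as_tls_cert({SERVER_AUTH}))               # True
print(treat_as_tls_cert({SERVER_AUTH, TLS_COMPAT}))   # False
```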

Peter.


Re: P-521 Certificates

2019-01-11 Thread Peter Gutmann via dev-security-policy
Jakob Bohm via dev-security-policy  
writes:

>On 11/01/2019 13:04, Peter Gutmann wrote:
>> Jason via dev-security-policy  writes:
>> 
>>> I would say that the problem here would be that a child certificate can't 
>>> use
>>> a higher cryptography level than the issuer
>> 
>>Why not?  If the issuer uses strong-enough crypto, what difference does it
>>make what the child uses?
>
>Really?  If the CA key is weaker than the child key, an attacker can break
>the CA key and sign a fake certificate with less effort than breaking the
>child key directly

You've apparently missed the fact that I said "strong-enough crypto".  The
attacker can't break either the issuer key or the child key, no matter how
much stronger the child key may be than the issuer.

Peter.


Re: P-521 Certificates

2019-01-11 Thread Peter Gutmann via dev-security-policy
Jason via dev-security-policy  writes:

>I would say that the problem here would be that a child certificate can't use
>a higher cryptography level than the issuer

Why not?  If the issuer uses strong-enough crypto, what difference does it
make what the child uses?

Peter.


Re: Online exposed keys database

2018-12-19 Thread Peter Gutmann via dev-security-policy
Ryan Hurst via dev-security-policy  
writes:

>My first thought is by using SPKI you have limited the service unnecessarily
>to X.509 related keys, I imagined something like this covering PGP, JWT as
>well as other formats. It would be nice to see the scope increased
>accordingly.

You can't do it for PGP, that hashes in a pile of additional stuff unrelated
to the key so there's no way to uniquely identify a specific key, only "the
key and this specific set of metadata".  Using the SPKI for the hash is the
best option, I use that internally as the unique ID for keys, including PGP
ones.
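
A minimal sketch of the SPKI-hash-as-key-ID approach (the hash choice and
function name are illustrative, not anything the proposed service has
specified):

```python
import hashlib

def key_id(spki_der: bytes) -> str:
    # Hash the DER-encoded SubjectPublicKeyInfo.  A PGP fingerprint mixes
    # in creation time and other metadata, so the same key hashes to
    # different fingerprints; hashing the bare SPKI identifies the key
    # itself, regardless of the format (X.509, PGP, ...) it appears in.
    return hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes stand in for a real DER-encoded SPKI:
print(key_id(b"example-spki-der")[:16])
```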

Peter.


Re: DarkMatter Concerns

2019-02-27 Thread Peter Gutmann via dev-security-policy
tomasshredder--- via dev-security-policy 
 writes:

>We still get asked by customers to implement sequential serial numbers from
>time to time, but it's getting more and more rare.

Another reason for using random data, from the point of view of a software
toolkit provider, is that it's the only thing you can guarantee is unique in a
cert since there's no coordination between users over namespace use.  A user
can configure their software or CA to have any name they like, and for small-
scale use that's often the case, "Web CA" or something similar.  By providing
an unlikely-to-be-duplicated random value as the serial number, you don't run
into problems with Web CA #1's certs clashing with Web CA #2's certs.

In terms of sequential numbers, if for some reason the current serial number
isn't written to permanent storage correctly, or there's a system failure and
when things are restored the record of the last-used serial number is lost or
corrupted, you're in trouble.  So overall it just made more sense to use
random values.

Peter.


Re: DarkMatter Concerns

2019-02-26 Thread Peter Gutmann via dev-security-policy
Mike Kushner via dev-security-policy  
writes:

>EJBCA was possible the first (certainly one of the first) CA products to use
>random serial numbers.

Random serial numbers have been in use for a long, long time, principally to
hide the number of certs a CA was (or wasn't) issuing.  Here's the first one
that came up in my collection, from twenty-five years ago:

  0 551: SEQUENCE {
  4 400:   SEQUENCE {
  8   9: INTEGER 00 A0 98 0F FC 30 AC A1 02
 19  13: SEQUENCE {
 21   9:   OBJECT IDENTIFIER md5WithRSAEncryption (1 2 840 113549 1 1 4)
 32   0:   NULL
   :   }
[...]
 81  43:   SET {
 83  41: SEQUENCE {
 85   3:   OBJECT IDENTIFIER organizationalUnitName (2 5 4 11)
 90  34:   PrintableString 'A Free Internet and SET Class 1 CA'
   :   }
   : }
   :   }
126  26: SEQUENCE {
128  11:   UTCTime '960901Z'
   : Error: Time is encoded incorrectly.

RFC 3280 (2002) explicitly added handling for random data as serial numbers:

   Given the uniqueness requirements above, serial numbers can be
   expected to contain long integers.  Certificate users MUST be able to
   handle serialNumber values up to 20 octets. 

(20 bytes being a SHA-1 hash, which was the fashion at the time).
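
Putting the randomness requirement and the RFC 3280/5280 20-octet limit
together is straightforward.  A minimal sketch, not any CA's actual
implementation:

```python
import secrets

def random_serial(n_bytes: int = 20) -> int:
    """Random positive serial occupying at most n_bytes octets as DER."""
    while True:
        value = int.from_bytes(secrets.token_bytes(n_bytes), "big")
        # Clear the top bit so the DER INTEGER encoding doesn't need an
        # extra leading zero octet to stay positive (keeping it <= 20 octets).
        value &= (1 << (n_bytes * 8 - 1)) - 1
        if value > 0:          # serial numbers must be greater than zero
            return value

serial = random_serial()
print(serial.bit_length())     # somewhere up to 159 bits
```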

Peter.


Re: DarkMatter Concerns

2019-02-24 Thread Peter Gutmann via dev-security-policy
Matt Palmer via dev-security-policy  
writes:

>Imagine if a CA said "we generate a 64-bit serial by getting values from the
>CSPRNG repeatedly until the value is one greater than the previously issued
>certificate, and use that as the serial number.".

Well, something pretty close to that works for Bitcoin (the relation is <
rather than >).  Come to think of it, you could actually mine cert serial
numbers, and then record them in a public blockchain, for auditability of
issued certificates.

(Note: This is satire.  I'm not advocating using blockchain anything for
anything other than (a) pump-and-dump digital currency schemes and (b)
attracting VC funding).

Peter.


Re: [FORGED] Re: What's the meaning of "non-sequential"? (AW: EJBCA defaulting to 63 bit serial numbers)

2019-03-11 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy 
 writes:

>But, maybe "non-sequential" doesn't mean that.  It's a pity a concept like
>that isn't clearly objective.

I assume what the text was meaning to say was "unpredictable", but it was
unfortunately phrased badly, presumably as a rushed response to "MD5
considered harmful today" which took advantage of the fact that RapidSSL used
a counter to create its serial numbers.

Given that we've now got several more interpretations of what 7.1 is
requiring, and it's only Monday (at least for you lot), I think this really,
really needs an update to clarify what's actually required.  The 7.1 text is
clearly inadequate to convey precisely what should be going into the serial
number field, given the number of interpretations and the amount of debate
about what is and isn't allowed.  The "modest proposal" sounds like a good
fit for the updated text.

Peter.


Re: Serial Number Origin Transparency proposal (was Re: A modest proposal for a better BR 7.1)

2019-03-12 Thread Peter Gutmann via dev-security-policy
Rob Stradling via dev-security-policy  
writes:

>I've been working on an alternative proposal for a serial number generation
>scheme, for which I intend to write an I-D and propose to the LAMPS WG.

This seems really, really complicated.  In all of the endless debate over
this, the one thing that hasn't actually come under question is how to
generate the random values themselves.  What has come up over and over is how
to encapsulate those values as an ASN.1 integer.  So I really prefer the
Modest Proposal version, which directly addresses the bit-bagging problems
that are the real issue with 7.1.

Peter.


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-08 Thread Peter Gutmann via dev-security-policy
Daymion Reynolds via dev-security-policy 
 writes:

>Our goal is to reissue all the certificates within the next 30 days. 

Before everyone goes into an orgy of mass revocation, see the message I just
posted "Why BR 7.1 allows any serial number except 0".  As long as your serial
number isn't zero, there's no such thing as a non-compliant serial number, so
no need to revoke and replace great masses of certificates.

Peter.


Re: A modest proposal for a better BR 7.1

2019-03-08 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy 
 writes:

>shall be 0x75

Not 0x71?

>If anyone thinks any of this has merit, by all means run with it.

Sounds good, and saves me having to come up with something (the
bitsort(CSPRNG64()) nonsense took enough time to type up).  The only thing I
somewhat disagree with is #3, since this is now very concise and requires "the
first 64 bits of output" you can just make it a CSPRNG, which is well-
understood and presumably available to any CA, since it's a standard feature
of all HSMs.

Peter.


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-08 Thread Peter Gutmann via dev-security-policy
Dimitris Zacharopoulos via dev-security-policy 
 writes:

>If we have to count every CA that had this interpretation, then I suppose all
>CAs that were using EJBCA with the default configuration have the same
>interpretation.

There's also an unknown number of CAs not using EJBCA that may have even
further interpretations.  For example my code, which I'll point out in advance
has nothing to do with the BR and predates the existence of the CAB Forum
itself, may or may not be compliant with whatever Mozilla's interpretation of
7.1 is.  I literally have no idea whether it meets Mozilla's expectations.  It
doesn't do what EJBCA does, so at least it's OK there, but beyond that I have
no idea whether it does what Mozilla wants or not.  

I assume any number of other CAs are in the same position, and given that if
they guessed wrong they have to revoke an arbitrarily large number of certs,
it's in their best interests to keep their heads down and wait for this to
blow over.

So perhaps instead of trying to find out which of the hundreds of CAs in the
program aren't compliant, we can check which ones are.  Would any CA that
thinks it's compliant let us know, and indicate why they think they're
compliant?  For example "we take 64 bits of CSPRNG output, pad it with a
leading zero, and use that as the serial number", in other words what
Matthew Hardeman suggested, would seem to be OK.
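A minimal sketch of that scheme (my wording, not any particular CA's code): take exactly 64 bits from a CSPRNG, interpret them as a non-negative integer, and retry on zero:

```python
import secrets

def br71_serial() -> int:
    """Exactly 64 bits of CSPRNG output as a non-negative integer; the
    implicit leading zero keeps the ASN.1 INTEGER sign bit clear even
    when the top random bit happens to be set."""
    while True:
        value = int.from_bytes(secrets.token_bytes(8), "big")
        if value != 0:  # BR 7.1: serials must be greater than zero
            return value

serial = br71_serial()
assert 0 < serial < 2**64
```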

Peter.


Why BR 7.1 allows any serial number except 0

2019-03-08 Thread Peter Gutmann via dev-security-policy
I didn't post this as part of yesterday's message because I didn't want to
muddy the waters even further, but let's look at the exact wording of BR 7.1:

  CAs SHALL generate non-sequential Certificate serial numbers greater than
  zero (0) containing at least 64 bits of output from a CSPRNG

Note the comment I made yesterday:

  That's the problem with rules-lawyering, if you're going to insist on your
  own very specific interpretation of a loosely-worded requirement then it's
  open season for anyone else to find dozens of other fully compatible but
  very different interpretations.

So lets look at the most pathologically silly but still fully compliant with
BR 7.1 serial number you can come up with.  Most importantly, 7.1 it never
says what form those bits should be in, merely that it needs to contain "at
least 64 bits of output from a CSPRNG".  In particular, it doesn't specify
which order those bits should be in, or which bits should be used, as long as
there's at least 64.

So the immediate application of this observation is to make any 64-bit value
comply with the ASN.1 encoding rules: If the first bit is 1 (so the sign bit
is set), swap it with any convenient zero bit elsewhere in the value.
Similarly, if the first 9 bits are zero, swap one of them with a one bit from
somewhere else.  Fully compliant with BR 7.1, and now also fully compliant
with ASN.1 DER.

Let's take it further.  Note that there's no requirement for the order to be
preserved.  So let's define the serial number as:

  serialNumber = sortbits( CSPRNG64() );

On average you're going to get a 50:50 mix of ones and zeroes, so your serial
numbers are all going to be:

  0x00000000FFFFFFFF

plus/minus a few bits around the middle.  When encoded, this will actually be
0x00FFFFFFFF, with the remaining zero bits implicit - feel free to debate
whether the presence of implicit zero bits is compliant with BR 7.1 or not.

Anyway, continuing, you can also choose to alternate the bits so you still get
a fixed-length value:

  0x5555555555555555

(plus/minus a bit or two at the LSB, as before).

Or you could sort the bits into patterns, for example to display as rude
messages in ASCII:

  "BR7SILLY"

Or, given that you've got eight virtual pixels to play with, create ASCII art
in a series of certificates, e.g. encode one line of an emoji in each serial
number.

Getting back to the claim that "BR 7.1 allows any serial number except 0",
here's how you get this:

At one end of the range, your bit-selection rule is "discard every one bit
except the 64th one", so your serial number is:

  0x0000000000000001

or, when DER encoded:

  0x01

At the other end of the scale, "discard every zero bit except the first one":

  0x7FFFFFFFFFFFFFFF

or INT_MAX.

All fully compliant with the requirement that:

  CAs SHALL generate non-sequential Certificate serial numbers greater than
  zero (0) containing at least 64 bits of output from a CSPRNG

I should note in passing that this also allows all the certificates you issue
to have the same serial number, 1, since they're non-sequential and greater
than zero.
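For anyone who wants to see the rules-lawyering in runnable form, here's a toy sketch of the sortbits() construction (purely illustrative of the loophole, obviously not a serious serial-number generator):

```python
import secrets

def sortbits(value: int) -> int:
    """Sort the bits of a 64-bit value: zero bits first, one bits last.
    All 64 CSPRNG bits are still 'contained' in the result, reordered."""
    ones = bin(value).count("1")
    return (1 << ones) - 1  # 'ones' low bits set, the rest zero

def silly_serial() -> int:
    return sortbits(int.from_bytes(secrets.token_bytes(8), "big"))

# With ~32 one bits on average, serials cluster around 0x00000000FFFFFFFF.
s = silly_serial()
assert 0 <= s < 2**64
```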

Peter.


Re: Why BR 7.1 allows any serial number except 0

2019-03-08 Thread Peter Gutmann via dev-security-policy
I wrote:

>So the immediate application of this observation is to make any 64-bit value
>comply with the ASN.1 encoding rules: If the first bit is 1 (so the sign bit
>is set), swap it with any convenient zero bit elsewhere in the value.
>Similarly, if the first 9 bits are zero, swap one of them with a one bit from
>somewhere else.  Fully compliant with BR 7.1, and now also fully compliant
>with ASN.1 DER.

Oops, need to clarify that: Note the specific use of "swap one of them".  You
can't just drop in a zero bit you made up yourself, you have to use one of the
original zero bits that came from the CSPRNG or you won't be compliant with BR
7.1 any more.  So you need to swap in a genuine zero bit from elsewhere in the
value, not just replace it with your own made-up zero bit.

Peter.


Re: Why BR 7.1 allows any serial number except 0

2019-03-08 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>I'm not sure this will be a very productive or valuable line of discussion.

What I'm pointing out is that beating up CAs over an interpretation of the
requirements that didn't exist until about a week ago when it was pointed out
in relation to DarkMatter is unfair on the CAs.  If you're going to impose a
specific interpretation on them then get it added to the BRs at a future date
and enforce it then, don't retroactively punish CAs for something that didn't
exist until a week or two ago.

>Of course, there are quite glaring flaws in the argument, particularly that
>"all" of these are compliant. None of them are compliant under any reasonable
>reading.

Again, it's your definition of "reasonable".  A number of CAs, who applied
their own reasonable reading of the same requirements, seem to think
otherwise.  They're now being punished for the fact that their reasonable
reading differs from Mozilla's reasonable reading.

>I would strongly caution CAs against adopting any of these interpretations,
>and suggest it would be best for CAs to wholly ignore the message referenced.

"Pay no attention to the message behind the curtain".

Peter.


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-14 Thread Peter Gutmann via dev-security-policy
Jaime Hablutzel via dev-security-policy  
writes:

>>>Again, maths were wrong here, sorry. Correct calculation is:
>>>
>>>log2(18446744073708551615) = 63.93
>> 
>>I love the way that people are calculating data on an arbitrarily-chosen value
>>pulled entirely out of thin air 
>
>Can you confirm if the motivation for the "64 bits of output from a CSPRNG"
>can be found in [1]?.

I actually thought it was from "Chosen-prefix collisions for MD5 and
applications" or its companion papers ("Short chosen-prefix collisions for MD5
and the creation of a rogue CA certificate", "Chosen-Prefix Collisions for MD5
and Colliding X.509 Certificates for Different Identities"), but it's not in
any of those.  Even the CCC talk slides only say "We need defense in depth ->
random serial numbers" without giving a bit count.  So none of the original
cryptographic analysis papers seem to give any value at all.  It really does
seem to be a value pulled entirely out of thin air.

Peter.


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-14 Thread Peter Gutmann via dev-security-policy
Jaime Hablutzel via dev-security-policy  
writes:

>Again, maths were wrong here, sorry. Correct calculation is:
>
>log2(18446744073708551615) = 63.93

I love the way that people are calculating data on an arbitrarily-chosen value
pulled entirely out of thin air to 14 decimal places.  It's like summing a
diverging series.  Or calculating how many angels can fit on the head of a
pin.  Or something.

Peter.


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman  writes:

>Can the CA's agent just request the cert, review the to-be-signed certificate
>data, and reject and retry until they land on a prime?  Then issue that
>certificate?
>
>Does current policy address that? Should it?

Yeah, you can get arbitrarily silly with this.  For example my code has always
used 8-byte serial numbers (based on the German Tank Problem, nothing to do
with the BR), it requests 9 bytes of entropy and, if the first byte of the 8
that gets used is zero uses the surplus byte, and if that's still zero sets it
to 1 (again nothing to do with the BR, purely as an ASN.1 encoding thing so
you always get a fixed-length value).   So there's a bias of 1/64K values.  Is
that small enough?  What if I make it 32 bits, so it's 1/4G values?  What
about 48 bits?  What if I use a variant of what you're suggesting, a >64-bit
structured value that contains 64 bits of entropy (so perhaps something using
parity bits or similar), is that valid?
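For concreteness, the fixed-length scheme described above looks roughly like this (a sketch of the logic as described, not the actual code, and with any further DER sign-bit handling elided):

```python
import secrets

def fixed_length_serial() -> bytes:
    """8-byte serial with a non-zero leading byte: draw 9 bytes of
    entropy, and if the first of the 8 used bytes is zero substitute
    the surplus ninth byte, forcing it to 1 if that is also zero."""
    entropy = secrets.token_bytes(9)  # 8 bytes used + 1 spare
    serial = bytearray(entropy[:8])
    if serial[0] == 0:
        serial[0] = entropy[8]        # fall back to the spare byte
        if serial[0] == 0:
            serial[0] = 1             # the 1-in-64K bias mentioned above
    return bytes(serial)

s = fixed_length_serial()
assert len(s) == 8 and s[0] != 0
```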

As I said above, you can get arbitrarily silly with this.  I'm sure if we
looked at other CA's code at the insane level of nitpickyness that
DarkMatter's use of EJBCA has been examined, we'd find reasons why their
implementations are non-compliant as well.

Peter.


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Peter Gutmann via dev-security-policy
Jakob Bohm via dev-security-policy  
writes:

>This raises 3 derived concerns:

And a fourth, which has been overlooked during all the bikeshedding...
actually I'll call it question 0, since that's what it should have been:

0. Given that the value of 64 bits was pulled out of thin air (or possibly
   less well-lit regions), does it really matter whether it's 63 bits, 64
   bits, 65 3/8th bits, or e^i*pi bits?

Peter.


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman  writes:

>As if on cue, comes now GoDaddy with its confession.  

I swear I didn't plan that in advance :-).

Peter.


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Peter Gutmann via dev-security-policy
I wrote:

  As I said above, you can get arbitrarily silly with this.  I'm sure if we
  looked at other CA's code at the insane level of nitpickyness that
  DarkMatter's use of EJBCA has been examined, we'd find reasons why their
  implementations are non-compliant as well.

Seconds after sending it, this arrived:

  As of 9pm AZ on 3/6/2019 GoDaddy started researching the 64bit certificate
  Serial Number issue. We have identified a significant quantity of
  certificates (> 1.8million) not meeting the 64bit serial number requirement.

I rest my case.

Oh, and the BR's need an update so that half the CAs on the planet aren't
suddenly non-BR compliant based on the DarkMatter-specific interpretation.

Peter.


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Peter Gutmann via dev-security-policy
Matt Palmer via dev-security-policy  
writes:

>If you generate a 64-bit random value, then discard some values based on any
>sort of quality test, the end result is a 64-bit value with less-than-64-bits
>of randomness.

That's not what 7.1 says, merely:

  CAs SHALL generate non-sequential Certificate serial numbers greater than
  zero (0) containing at least 64 bits of output from a CSPRNG

There's nothing there about whether you can, for example, discard values that
you don't like and generate another one (in fact it specifically requires that
you reject the value 0 and generate another one).  In particular, for your
objection, how is one totally random value different from another?
Specifically, if I discard a totally random value that has the high bit set
(because of ASN.1 encoding issues) and take the next value generated, how is
that (a) not compliant with 7.1 and (b) different from another totally random
value that happens to not have the high bit set in the first place?

What if I call every cert that would end up with the sign bit set a test cert
and only issue the ones where they're not set?  Again, fully compliant with
the wording of 7.1, but presumably not compliant with your particular
interpretation of the wording (OK, it might be, I'm sure you'll let me know if
it is or isn't). That's the problem with rules-lawyering, if you're going to
insist on your own very specific interpretation of a loosely-worded
requirement then it's open season for anyone else to find dozens of other
fully compatible but very different interpretations.

And, again, question zero: Given that the value of 64 bits was pulled out of
thin air, why does it even matter?  

Can we just agree that the bikeshed can be any colour people want as long as
you're not using lead-based paint and move on from this bottomless pit?

Peter.


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-13 Thread Peter Gutmann via dev-security-policy
Richard Moore via dev-security-policy  
writes:

>If any other CA wants to check theirs before someone else does, then now is
>surely the time to speak up.

I'd already asked previously whether any CA wanted to indicate publicly that
they were compliant with BR 7.1, which zero CAs responded to (I counted them
twice).  This means either there are very few CAs bothering with dev-security-
policy, or they're all hunkering down and hoping it'll blow over, which given
that they're going to be forced to potentially carry out mass revocations
would be the game-theoretically sensible approach to take:

Option 1: Keep quiet, case 1 (very likely): -> No-one notices, nothing happens.
          Keep quiet, case 2 (less likely): -> Someone notices, revocation issues.
Option 2: Say something -> Revocation issues.

So keeping your head down would be the sensible/best policy.

Peter.


Re: Arabtec Holding public key?

2019-04-11 Thread Peter Gutmann via dev-security-policy
admin--- via dev-security-policy  writes:

>The risk here, of course, is low in that having a certificate you do not
>control a key for doesn't give you the ability to do anything.

As far as we know.  Presumably someone has an interesting (mis)use for it
otherwise they wouldn't have bothered obtaining it.

Peter.


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-13 Thread Peter Gutmann via dev-security-policy
Daniel Marschall via dev-security-policy 
 writes:

>I share the opinion with Jakob, except with the CVE. Please remove this
>change. It is unnecessary and kills the EV market.

And that was my motivation for the previous question: We know from a decade of
data that EV certs haven't made any difference to security.  The only thing
they've affected is CA's bottom line, since they can now go back to charging
1990s prices for EV certs rather than $9.95 for non-EV certs.  Removing the UI
bling for the more expensive certs makes sense from a security point of view,
but not from a business point of view: "it kills the [very lucrative] EV
market".

It'd be interesting to hear what CAs think of this.  Will the next step be EEV
certs and a restart of the whole cycle, as was predicted when EV certs first
came out?

Peter.


Re: [FORGED] Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-14 Thread Peter Gutmann via dev-security-policy
Peter Bowen via dev-security-policy  
writes:

>I have to admit that I'm a little confused by this whole discussion.  While
>I've been involved with PKI for a while, I've never been clear on the
>problem(s) that need to be solved that drove the browser UIs and creation of
>EV certificates.

Oh, that's easy:

  A few years ago certificates still cost several hundred dollars, but now
  that the shifting baseline of certificate prices and quality has moved to
  the point where you can get them for $9.95 (or even for nothing at all) the
  big commercial CAs have had to reinvent themselves by defining a new
  standard and convincing the market to go back to the prices paid in the good
  old days.

  This déjà-vu-all-over-again approach can be seen by examining Verisign’s
  certificate practice statement (CPS), the document that governs its
  certificate issuance.  The security requirements in the EV-certificate 2008
  CPS are (except for minor differences in the legalese used to express them)
  practically identical to the requirements for Class 3 certificates listed in
  Verisign’s version 1.0 CPS from 1996 [ ].  EV certificates simply roll back
  the clock to the approach that had already failed the first time it was
  tried in 1996, resetting the shifting baseline and charging 1996 prices as a
  side-effect.  There have even been proposals for a kind of sliding-window
  approach to certificate value in which, as the inevitable race to the bottom
  cheapens the effective value of established classes of certificates, they’re
  regarded as less and less effective by the software that uses them (for
  example browsers would no longer display a padlock for them), and the
  sliding window advances to the next generation of certificates until
  eventually the cycle repeats.

That was written about a decade ago.  As recent events have shown, it was
remarkably accurate.  The sliding window has just slid.

Peter.


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Peter Gutmann via dev-security-policy
Leo Grove via dev-security-policy  
writes:

>Are you referring to EV Code Signing certificates? I agree that needs to be
>addressed in another forum, but this discussion in on EV SSL/TLS and their
>value (or lack thereof) in the browser UI. Browsers do not support EV Code
>Signing in the UI as far as I know.
>
>It's been documented that EV Code Signing certificates are on the black
>market. Did you see the same thing for EV SSL/TLS?

Yes, you can buy both, I used the code-signing EV one because I happened to
have a screenshot handy from a writeup I'm working on.  In addition, EV code-
signing certs are much higher value, particularly when they come with
SmartScreen ratings, because they give you instant malware execution on a
billion plus systems, while EV web site certs are kinda meh.  So EV code
signing is the holy grail, the hardest to get, and yet they're readily
available on the black market.  EV web site certs are an afterthought in
comparison, "we also have those if you want 'em".

Peter.


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-18 Thread Peter Gutmann via dev-security-policy
Daniel Marschall via dev-security-policy 
 writes:

>I just looked at Opera and noticed that they don't have any UI difference at
>all, which means I have to open the X.509 certificate to see if it is EV or
>not.

Does anyone know when Opera made the change?  They had EV UI at one point, and
then there's this bug report:

https://forums.opera.com/topic/17923/ev-certificate-looks-like-ov

which blames the lack of EV UI on Chromium, so something inherited from
Chrome.  It looks like it's then just a side-effect of the Chrome change and
allegedly "fixed in 44.0.2494.0", but Chrome 57 was from 2017, which means at
some point the change got reinstated.

Peter.


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-15 Thread Peter Gutmann via dev-security-policy
Eric Mill  writes:

>CAs should be careful about casually and dramatically overestimating the
>roadblocks that EV certificates present to attackers.

See also the screenshot I posted earlier.  That was from a black-market web
site selling EV certificates to anyone with the stolen credit cards to pay for
them.  These are legit EV certs issued to legit companies, available off the
shelf for criminals to use.  For a little extra payment you can get ones with
high SmartScreen scores so your malware is instantly trusted by the victim's
PC.

>The burden is not on the web browsers to prove that EV is detrimental to
>security - the burden is on third parties to prove that EV is beneficial.

Yup, as per my previous post.  We've got a vast amounts of data on this, if
there was a benefit to users then it shouldn't be hard to show that from the
data.

Peter.


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-15 Thread Peter Gutmann via dev-security-policy
Doug Beattie  writes:

>So far I see is a number of contrived test cases picking apart small
>components of EV, and no real data to back it up.

See the phishing stats from any source you care to use.  I've already
mentioned the APWG which I consider the premier source, and also linked to the
SSL Store blog which happened to be the first Google hit, but feel free to
take any source of stats you trust, and see if you can find any that show that
phishing decreased and/or security increased due to EV certs.

I could also reverse this and say: You claim that EV certs are useful. Produce
some stats showing this.  We could agree on using the APWG as our source,
since they're a pretty authoritative.

In either case, we've got a good, decade-long, reliable, heavily-analysed data
source, it's up to the two sides to use it to support their case.  I've
already made mine.

>Yes, I work for a CA that issues EV certificates, but if there was no value
>in them, then our customers would certainly not be paying extra for them.

Must remember that one for the quotes file :-).

In case you're wondering why I find it amusing, consider this variant:

  Yes, I work for Monster Cable, but if there was no value in our cables then
  our customers would certainly not be paying extra for them.

Peter.


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-15 Thread Peter Gutmann via dev-security-policy
Doug Beattie  writes:

>Do you have any empirical data to backup the claims that there is no benefit
>from EV certificates?

Uhhh... I don't even know where to start.  We have over ten years of data and
research publications on this, and the lack of benefit was explicitly cited by
Google and Mozilla as the reason for removing the EV bling... one example is
the most obvious statistic, maintained by the Anti-Phishing Working Group
(APWG), which show an essentially flat trend for phishing over the period of a
year in which EV certificates were phased in, indicating that they had no
effect whatsoever on phishing.  There's endless other stats showing that the
trend towards security is negative, i.e. it's getting worse every year, here's
some five-year stats from a quick google:

https://www.thesslstore.com/blog/wp-content/uploads/2019/05/Phishing-by-Year.png

If EV certs had any effect at all on security we'd have seen a decrease in
phishing/increase in security.

There is one significant benefit from EV certificates, which I've already
pointed out, which is to the CAs selling them.  So when I say "there's no
benefit" I mean "there's no benefit to end users", which is who the
certificates are putatively helping.

Peter.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-12 Thread Peter Gutmann via dev-security-policy
Wayne Thayer via dev-security-policy  
writes:

>Mozilla has announced that we plan to relocate the EV UI in Firefox 70, which
>is expected to be released on 22-October. Details below.

Just out of interest, how are the CAs taking this?  If there's no more reason
to pay a substantial premium to enable additional UI bling in browsers, isn't
this going to kill the market for EV certs?

Peter.


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Peter Gutmann via dev-security-policy
Doug Beattie  writes:

>One of the reasons that phishers don’t get EV certificates is because the
>vetting process requires several interactions and corporate repositories
>which end up revealing more about their identity.  This leaves a trail back
>to the individual that set up the fake site which discourages the use of EV.

Again, this is how it works in theory and in CA sales pitches (OK, that second
bit was redundant).  Since you can buy EV certs off-the-shelf from underground
web sites, or get them directly yourself if you want to put in the effort, it
obviously doesn't work that way in practice.

In any case though that's just a distraction: Since phishing has been on the
increase year after year, the existence of EV certs is entirely irrelevant.
There's a great Dave Barry joke [0] where he explains how to threaten someone
with dynamite: You call them up, hold the burning dynamite fuse up to the
handset and say "You hear that? That's dynamite baby!".

EV certs are the same thing.  "You see that? That's an EV cert baby!".  It's
as effective a threat to phishing as Dave Barry's dynamite threat.

Peter.

[0] This joke has been credited to a number of sources, including Dave Barry.
It sounds like a Dave Barry to me.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Peter Gutmann via dev-security-policy
Corey Bonnell via dev-security-policy  
writes:

>the effectiveness of the EV UI treatment is predicated on whether or not the
>user can memorize which websites always use EV certificates *and* no longer
>proceed with using the website if the EV treatment isn't shown. That's a huge
>cognitive overhead for everyday web browsing

In any case things like Perspectives and Certificate Patrol already do this
for you, with no overhead for the user, and it's not dependent on whether the
cert is EV or not.  They're great add-ons for detecting sudden cert changes.

Like EV certs though, they have no effect on phishing.  They do very
effectively detect MITM, but for most users it's phishing that's the real
killer.

Peter.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Peter Gutmann via dev-security-policy
Jakob Bohm via dev-security-policy  
writes:

>Your legendary dislike for all things X.509 is showing. 

My dislike for persisting mindlessly with stuff we already know doesn't work
is showing (see in particular the quote typically misattributed to Einstein
about the definition of insanity), and given the rich target environment
that's available in the security field that's in no way limited to X.509.
Apart from that, you're quite correct.

It's not working.  

It's obvious that it's not working.  

It's been obvious for years that it's not working.

Time to try a new approach, rather than just repeating a new variant of what
we already know doesn't work all over again.

Peter.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-27 Thread Peter Gutmann via dev-security-policy
Jakob Bohm via dev-security-policy  
writes:

> and
> both took advantage of weaknesses in two
>government registries 

They weren't "weaknesses in government registries", they were registries
working as designed, and as intended.  The fact that they don't work in
they way EV wishes they did is a flaw in EV, not a problem with the
registries.

>Both demonstrations caused the researchers real name and identity to become 
>part of the CA record, which was hand waved away by claiming that could 
>have been avoided by criminal means.

It wasn't "wished away", it's avoided without too much trouble by criminals,
see my earlier screenshot of just one of numerous black-market sites where
you can buy fraudulent EV certs from registered companies.  Again, EV may 
wish this wasn't the case, but that's not how the real world works.

>12 years old study involving en equally outdated browser.

So you've published a more recent peer-reviewed academic study that
refutes the earlier work?  Could you send us the reference?

Peter.


Re: DigiCert OCSP services returns 1 byte

2019-08-27 Thread Peter Gutmann via dev-security-policy
Curt Spann via dev-security-policy  
writes:

>I created the following bug:
>https://bugzilla.mozilla.org/show_bug.cgi?id=1577014

Maybe it's an implementation of OCSP SuperDietLite, 1 = revoked, 0 = not
revoked.

In terms of it being unsigned, you can get the same effect by setting
respStatus = TRYLATER, no signature required.

Peter.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-31 Thread Peter Gutmann via dev-security-policy
Kirk Hall via dev-security-policy  
writes:

>does GSB use any EV certificate identity data in its phishing algorithms.

Another way to think about this is to look at it from the criminals'
perspective: What's the value to criminals?  To use a silly example, the value
to criminals of an unregistered handgun is quite high, while the value to
criminals of a plastic water pistol is negligible.  We know from black-market
EV-cert vendors that the value of an EV code-signing cert to criminals is
high, and one with reputation attached is even higher because it gets you
instant malware execution with no warnings from anti-malware software.  OTOH
the value to criminals of EV web site certs appears to be low to nonexistent
because the sites selling them advertise them as also-rans ("we've also got
some of these if you want them"); they barely feature.

Since the value to criminals of EV web certs is low, it seems these certs
aren't doing much to stop what the criminals are doing.  If they did have any value
then criminals would be prepared to pay more for them, like they already do
for EV code-signing certs.

Peter.


Re: PrintableString, UTF8String, and RFC 5280

2019-11-20 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>I don't think the hyperbole helps here.

It wasn't hyperbole, it was extreme surprise.  When someone told me about this
I couldn't believe it was still happening after the massive amount of
publicity it got at the time, so it was more a giant "WTF?!??" than anything
else.

Other CA certs with this issue include further audited and (in some cases) EV-
approved certs, all from Microsoft:

https://crt.sh/?id=988218851&opt=x509lint,zlint,cablint
https://crt.sh/?id=988140328&opt=zlint,x509lint,cablint
https://crt.sh/?id=988215004&opt=zlint,cablint,x509lint
https://crt.sh/?id=988137612&opt=x509lint,cablint
https://crt.sh/?id=1197076917&opt=x509lint,cablint
https://crt.sh/?id=1197067049&opt=x509lint,cablint
https://crt.sh/?id=1197079848&opt=x509lint,cablint
https://crt.sh/?id=1197075787&opt=x509lint,cablint
https://crt.sh/?id=554380367&opt=x509lint
https://crt.sh/?id=918173942&opt=x509lint,cablint

I don't know who trusts what where, but Chrome at least seems to trust these
two:

https://crt.sh/?id=554380367&opt=x509lint
https://crt.sh/?id=918173942&opt=x509lint,cablint

>It's probably better to start a new thread if you'd like to talk about it 
>further.

Sure, it just came to mind when I saw this thread, which is why I posted it
here.

Peter.


Re: [FORGED] Re: Firefox removes UI for site identity

2019-10-24 Thread Peter Gutmann via dev-security-policy
Paul Walsh via dev-security-policy  
writes:

>we conducted the same research with 85,000 active users over a period of 
>12 months

As I've already pointed out weeks ago when you first raised this, your
marketing department conducted a survey of EV marketing effectiveness.  If
you have a refereed, peer-reviewed study published at a conference or in 
an academic journal, please reference it, not a marketing survey 
masquerading as a "study".

A second suggestion, if you don't want to publish any research (by which I
mean real research, not rent-seeking CA marketing) supporting your position, 
is that you fork Firefox - it is after all an open-source product - add 
whatever EV UI you like to it, and publish it as an alternative to Firefox.  
If your approach works as you claim, it'll be so obviously superior to 
Firefox that everyone will go with your fork rather than the original.

For everyone else who feels this interminable debate has already gone on
far too long and I'm not helping it, yeah, sorry, I'd consigned the thread 
to the spam folder for a while, had a brief look back, and saw this, which 
indicates it's literally gone nowhere in about a month.

I can see why Mozilla avoided this endless broken-record discussion, it's
not contributing anything but just going round and round in circles.

Peter.


Re: [FORGED] Re: Germany's cyber-security agency [BSI] recommends Firefox as most secure browser

2019-10-18 Thread Peter Gutmann via dev-security-policy
Paul Walsh via dev-security-policy  
writes:

>I have no evidence to prove what I’m about to say, but I *suspect* that the
>people at BSI specified “EV” over the use of other terms because of the
>consumer-visible UI associated with EV (I might be wrong).

Except that, just like your claims about Mozilla, they never did that, they
just give a checklist of cert types, DV, OV, and EV.  If there was a Mother-
validated cert type, the list would no doubt have included MV as well.

In fact if you're going to go to sheep's-entrails levels of interpretation,
they place EV last on their list, and it's phrased more as an afterthought
than the first two ("must support DV, OV, and also EV").

You're really grasping at straws here...

Peter.


Re: [FORGED] Re: How Certificates are Verified by Firefox

2019-11-28 Thread Peter Gutmann via dev-security-policy
Ben Laurie via dev-security-policy  
writes:

>In short: caching considered harmful.

Or "caching considered necessary to make things work"?  In particular:

>caching them and filling in missing ones means that failure to present
>correct cert chains is common behaviour.

Which came first?  Was caching a response to broken chains or broken chains a
response to caching?

Just trying to sort out cause and effect.
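As a sketch of the cause-and-effect question, here's a toy model (all names and structures hypothetical; a real verifier checks signatures, not just name links) of a path builder that falls back to a cache of previously seen intermediates, which is why servers that present broken chains mostly get away with it:

```python
# Minimal model of "cache and fill in missing intermediates".  Certificates
# are modelled as (subject, issuer) name pairs only.

CACHE = {}  # subject name -> cert, populated from previously seen chains


def remember(chain):
    for cert in chain:
        CACHE[cert[0]] = cert


def build_path(leaf, presented, trusted_roots):
    """Walk issuer links, preferring certs the server presented but falling
    back to the cache -- so a server that omits an intermediate still
    verifies, and nobody notices the broken chain."""
    by_subject = {c[0]: c for c in presented}
    path, cert = [leaf], leaf
    while cert[1] not in trusted_roots:
        issuer = by_subject.get(cert[1]) or CACHE.get(cert[1])
        if issuer is None:
            return None  # genuinely unbuildable chain
        path.append(issuer)
        cert = issuer
    return path


roots = {"Root CA"}
intermediate = ("Example CA", "Root CA")
leaf = ("www.example.com", "Example CA")

remember([intermediate])            # intermediate seen in an earlier handshake
print(build_path(leaf, [], roots))  # path found despite the empty presented chain
```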

Peter.


Re: PrintableString, UTF8String, and RFC 5280

2019-11-20 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>Do you believe it’s still applicable in the Web PKI of the past decade?

Yes, the specific cert I referenced is currently valid and passed WebTrust and
EV audits.

>If you could link to the crt.sh entry, that might be easier.

Here's the Microsoft one I mentioned:

  Microsoft RSA Root Certificate Authority 2017

  https://crt.sh/?id=988218851&opt=x509lint,zlint,cablint

There are numerous others.  This particular one isn't just a CA cert, it's a
root cert.

>It could be that you’re referencing the use of BMPString

I'm just quoting X509lint:

   ERROR: URL contains a null character

Given that this was exposed as a major security hole ten years ago, I was
surprised when someone notified me that these things exist, and that no-one
seems to have done anything about it.

Peter.


Re: PrintableString, UTF8String, and RFC 5280

2019-11-20 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi via dev-security-policy  
writes:

>In https://bugzilla.mozilla.org/show_bug.cgi?id=1593814 , Rob Stradling,
>Jeremy Rowley, and I started discussing possible steps that might be taken to
>prevent misencoding strings in certificates

Is there any official position on strings that have completely invalid
encodings like embedded NULL characters in them (presumably in memoriam of the
Kaminsky/Marlinspike certificate-spoofing bug) as one of Microsoft's CA
certificates among numerous others do?
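One such preventive step might be a charset-level lint. As a sketch (this is not the actual tooling under discussion), PrintableString's repertoire per X.680/RFC 5280 is small enough to check directly:

```python
import string

# PrintableString permits only letters, digits, space, and ' ( ) + , - . / : = ?
# per X.680; anything else (including NUL) must use another string type.
PRINTABLE_STRING_CHARS = set(string.ascii_letters + string.digits + " '()+,-./:=?")


def valid_printable_string(s: str) -> bool:
    return all(ch in PRINTABLE_STRING_CHARS for ch in s)


print(valid_printable_string("Example CA"))   # True
print(valid_printable_string("Bücher GmbH"))  # False: needs UTF8String
print(valid_printable_string("a\x00b"))       # False: NUL is never legal
```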

Peter.


Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-09-22 Thread Peter Gutmann via dev-security-policy
Kirk Hall via dev-security-policy  
writes:

>To remedy this, Entrust Datacard surveyed all of its TLS/SSL web server
>certificate customers

And what a marvellously disingenuous "survey" it is too, artfully constructed
to produce exactly the result the CA's marketing department wants.  Mixed in
with a series of motherhood-and-apple-pie leading questions that no-one could
answer "no" to, there to yes-prime the respondents, are a few vaguely-worded
EV questions that you want the yes response to (by vague I mean nonsense like
"Do you believe that positive visual signals in the browser UI are important
to encourage website owners to choose EV certificates and undergo the EV
validation process for their organization?", which translates to "Do you
believe browsers should act as marketing agents for our EV certificates?",
while totally avoiding ever asking the real question, "Do you believe EV
certificates make the web safer to use?").

Even then, the response is a little surprising because the priming questions
aren't 100% - what sort of pinko commie subversive answers "no" to "Customers
/ users have the right to know which organization is running a website if the
website asks the user to provide sensitive data"?

Allow me to propose an equivalent dishonest poll that gets the exact opposite
result.  First, a few push-poll questions to set the scene: "Given that Russian
criminals have stolen $2B from US citizens via web browser phishing attacks in
the last 12 months …".  Then the same motherhood-and-apple-pie questions to yes-bias the
repondents.  Finally, the question I want the yes answer to.  What would you
like?  Fine CAs whose certificates are misused?  Force browser vendors to
provide security guarantees for the web sites where they display all-OK
indicators?  Death penalty for phishers?  I can get you any result you like,
what's it worth to you?

In any case, as with a previous EV cert poll done by another CA a few years
ago, this one surveys the efficacy of EV certificate marketing, not their
utility in preventing phishing and whatnot.

Peter.


Re: [FORGED] Re: [FORGED] Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Peter Gutmann via dev-security-policy
Ronald Crane via dev-security-policy  
writes:

>Please cite the best study you know about on this topic (BTW, I am *not*
>snidely implying that there isn't one).

Sure, gimme a day or two since I'm away at the moment.

Alternatively, there's been such a vast amount of work done on this that a few
seconds of googling should find plenty of publications.  As the first search
text that came to mind, "browser ui phishing" returns just under half a million
hits.  Making it "browser ui phishing inurl:.pdf" to get just papers (rather
than web articles, blog posts, etc.) reduces that to 30,000 results.

Peter.


Re: [FORGED] Mozilla's Expectations for OCSP Incident Reporting

2020-05-10 Thread Peter Gutmann via dev-security-policy
Wayne Thayer via dev-security-policy  
writes:

>It was recently reported [1] that IdenTrust experienced a multi-day OCSP
>outage about two weeks ago.

Just to understand the scope of this, what was the impact on end users?  If it
went on for multiple days then presumably no-one noticed it, the second
reference:

https://community.letsencrypt.org/t/identrust-ocsp-producing-errors/120677

states:

  Usually few clients do OCSP checks of the intermediate cert, thus this
  probably doesn’t show up very often.

From the report it looks like a very specific config was required to even
notice it.  If an OCSP responder crashes on the Internet and no-one checks it,
does it make a difference?

(Interesting to see that the Wikipedia page for this philosophical question
helpfully shows a photo of "A fallen tree in a forest" to illustrate the
concept).

Peter.


Re: Mozilla's Expectations for OCSP Incident Reporting

2020-05-12 Thread Peter Gutmann via dev-security-policy
>Just to understand the scope of this, what was the impact on end users?

Following up on this, would it be correct to assume that, since no-one has
pointed out any impact that this had on anything, it's more a
certificational issue than anything with real-world consequences?

Peter.


Re: Mozilla's Expectations for OCSP Incident Reporting

2020-05-12 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>>Following up on this, would it be correct to assume that, since no-one has
>>pointed out any impact that this had on anything, it's more a
>>certificational issue than anything with real-world consequences?
>
>That seems quite a suppositional leap, don't you think?

It's been more than two weeks since the issue was first reported, if no-one's
been able to identify any actual impact in that time - compare this to say
certificate-induced outages which make the front page of half the tech news
sites on the planet when they occur - then it seems reasonable to assume that
the impact is minimal if not nonexistent.

In any case I was inviting people to provide information on whether there's
been any adverse effect in order to try and gauge the magnitude, or lack
thereof, of this event.

Peter.


Re: Digicert issued certificate with let's encrypts public key

2020-05-16 Thread Peter Gutmann via dev-security-policy
Kurt Roeckx via dev-security-policy  
writes:

>Browsing crt.sh, I found this: https://crt.sh/?id=1902422627
>
>It's a certificate for api.pillowz.kz with the public key of Let's Encrypt
>Authority X1 and X3 CAs.

How could that have been issued?  Since a (PKCS #10) request has to be self-
signed, does this mean Digicert aren't validating signatures on requests?

Peter.


Re: Digicert issued certificate with let's encrypts public key

2020-05-17 Thread Peter Gutmann via dev-security-policy
Peter Bowen  writes:

>There is no requirement to submit a PKCS#10 CSR. 

Hmm, so what sort of issuance process allows you to obtain a certificate for a
key you don't control?

Peter.


Re: Digicert issued certificate with let's encrypts public key

2020-05-17 Thread Peter Gutmann via dev-security-policy
Corey Bonnell  writes:

>Certificate renewal that uses the existing certificate as input, rather than
>a CSR. The (presumably expiring) certificate supplies the domains,
>organization info, and the public key for the renewal certificate request. In
>this case there is no proof of key possession absent some out-of-band process
>(TLS handshake with the web server, etc).

But if it's a renewal based on an existing cert, meaning that someone already
had a cert for a key they don't control, that means that at some point in the
past the CA turned a CSR for a key the requester doesn't control into a cert.

In particular, there must have been some authorisation carried out at some
point, or perhaps that wasn't carried out, that indicates who requested the
cert.  What I'm trying to discover is where the gap was, and what's required
to fix it in the future.

Peter.


