Re: CA generated keys

2017-12-28 Thread Jakob Bohm via dev-security-policy

On 15/12/2017 22:33, Ryan Hurst wrote:

On Tuesday, December 12, 2017 at 1:08:24 PM UTC-8, Jakob Bohm wrote:

On 12/12/2017 21:39, Wayne Thayer wrote:

On Tue, Dec 12, 2017 at 7:45 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 12/12/2017 19:39, Wayne Thayer wrote:


The outcome to be avoided is a CA that holds in escrow thousands of
private keys used for TLS. I don’t think that a policy permitting a CA to
generate the key pair is bad as long as the CA doesn’t hold on to the key
(unless  the certificate was issued to the CA or the CA is hosting the
site).

What if the policy were to allow CA key generation but require the CA to
deliver the private key to the Subscriber and destroy the CA’s copy prior
to issuing a certificate? Would that make key generation easier? Tim, some
examples describing how this might be used would be helpful here.



That would conflict with delivery in PKCS#12 format or any other format
that delivers the key and certificate together, as users of such
services commonly expect.

Yes, it would. But it's a clear policy. If the requirement is to deliver

the key at the same time as the certificate, then how long can the CA hold
the private key?




Point is that many end systems (including Windows IIS) are designed to
either import certificates from PKCS#12 or use a specific CSR generation
procedure.  If the CA delivered the key and cert separately, then the
user (who is apparently not sophisticated enough to generate their own
CSR) will have a hard time importing the key+cert into their system.




It would also conflict with keeping the issuing CA key far removed from
public web interfaces, such as the interface used by users to pick up
their key and certificate, even if separate, as it would not be fun to
have to log in twice with 1 hour in between (once to pick up key, then
once again to pick up certificate).

I don't think I understand this use case, or how the proposed policy

relates to the issuing CA.



If the issuing CA HSM is kept away from online systems and processes
vetted issuance requests only in a batched offline manner, then a user
responding to a message saying "your application has been accepted,
please log in with your temporary password to retrieve your key and
certificate" would have to download the key, after which the CA can
delete key and queue the actual issuance to the offline CA system, and
only after that can the user actually download their certificate.

Another thing with similar effect is the BR requirement that all the
OCSP responders must know about issued certificates, which means that
both the serial number and a hash of the signed certificate must be
replicated to all the OCSP machines before the certificate is delivered.
(One of the good OCSP extensions is to include a hash of the valid
certificate in the OCSP response, thus allowing the relying party
software to check that a "valid" response is actually for the
certificate at hand).







It would only really work with a CSR+key generation service where the
user receives the key at application time, then the cert after vetting.
And many end systems cannot easily import that.

Many commercial CAs could accommodate a workflow where they deliver the

private key at application time. Maybe you are thinking of IOT scenarios?
Again, some use cases describing the problem would be helpful.



One major such use case is IIS or Exchange at the subscriber end.
Importing the key and cert at different times is just not a feature of
Windows server.




A policy allowing CAs to generate key pairs should also include provisions

for:
- The CA must generate the key in accordance with technical best practices
- While in possession of the private key, the CA must store it securely

Wayne








Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


I agree that the "right way(tm)" is to have the keys generated in a HSM, the 
keys exported in ciphertext and for this to be done in a way that the CA can not decrypt 
the keys.

Technically the PKCS#12 format would allow for such a model as you can encrypt 
the keybag to a public key (in a certificate. You could, for example generate a 
key in a HSM, export it encrypted to a public key, and the CA would never see 
the key.

This has several issues, the first is, of course, you must trust the CA not to 
use a different key; this could be addressed by requiring the code performing 
this logic to be made public, and that it utilize some transparent logging 
mechanism (ex: merkle trees, etc) that could be audited against the HSM logs in 
some way. The second is once you use such a mechanism you now have produced 
PKCS#12 files that cannot be opened by OpenSSL or Windows.

Another approach would be to write 

Re: [FORGED] Re: CA generated keys

2017-12-23 Thread Michael Ströder via dev-security-policy
Matthew Hardeman wrote:
> On Wednesday, December 13, 2017 at 5:52:16 PM UTC-6, Peter Gutmann wrote:
>> In all of these cases, the device is going to be a safer place to generate
>> keys than the CA, in particular because (a) the CA is another embedded
>> controller somewhere so probably no better than the target device and (b)
>> there's no easy way to get the key securely from the CA to the device.
> 
> Agreed, as I mentioned the secure transport aspect is essential for
> remote key generation to be a secure option at any level.

I have strong doubts that all these Internet-of-shitty-things
manufactures will get ever anything like this right.
I agree with Peter: Private key generation is the least you have to
worry about when using such devices.

Also I'm seriously concerned that if the policy is changed to allow
CA-side key generation and this gets adopted, the CAs will be forced to
implement key escrow disclosing keys to .

=> Mozilla policy *shall not* be changed to allow CAs to generate the
end entities' keys.

(The only reasonable use-case for a CA generating the private keys is to
ensure that they are immediately stored in a secure device. But that's
not really applicable in this broad use-case.)

Ciao, Michael.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-18 Thread Peter Gutmann via dev-security-policy
Ryan Hurst via dev-security-policy  
writes:

>Unfortunately, the PKCS#12 format, as supported by UAs and Operating Systems
>is not a great candidate for the role of carrying keys anymore. You can see
>my blog post on this topic here: http://unmitigatedrisk.com/?p=543

It's even worse than that, I use it as my teaching example of now not to
design a crypto standard:

https://www.cs.auckland.ac.nz/~pgut001/pubs/pfx.html

In other words its main function is as a broad-spectrum antipattern that you
can use for teaching purposes.

>The core issue is the use of old cryptographic primitives that barely live up
>to the equivalent cryptographic strengths of keys in use today. The offline
>nature of the protection involved also enables an attacker to grind any value
>used as the password as well.

That, and about five hundred other issues.  An easier solution would be to use
PKCS #15, which dates from roughly the same time as #12 but doesn't have any
of those problems (PKCS #12 only exists because it was a political compromise
created to appease Microsoft, who really, really wanted everyone to use their
PFX design).

Peter.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-18 Thread Tim Hollebeek via dev-security-policy

> On 15/12/17 16:02, Ryan Hurst wrote:
> > So I have read this thread in its entirety now and I think it makes
sense for it
> to reset to first principles, specifically:
> >
> > What are the technological and business goals trying to be achieved,
> > What are the requirements derived from those goals, What are the
> > negative consequences of those goals.
> >
> > My feeling is there is simply an abstract desire to allow for the CA, on
behalf
> of the subject, to generate the keys but we have not sufficiently
articulated a
> business case for this.
> 
> I think I'm in exactly this position also; thank you for articulating it.
One might
> also add:
> 
> * What are the inevitable technical consequences of a scheme which meets
> these goals? (E.g. "use of PKCS#12 for key transport" might be one answer
to
> that question.)

I actually agree with Ryan, too.  I think it's more of an issue of what sort
of future we want, and we have time.  I'm actually far less interested in
the PKCS#12 use case, and more interested with things like RFC 7030, which
keep popping up in the IoT space.

Also, in response to Ryan's other comments on PKCS#12, replacing it with
something more modern for the use cases where it is currently common (e.g.
client certificates, email certificates) would also be a huge improvement.

-Tim


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-15 Thread Gervase Markham via dev-security-policy
On 15/12/17 16:02, Ryan Hurst wrote:
> So I have read this thread in its entirety now and I think it makes sense for 
> it to reset to first principles, specifically:
> 
> What are the technological and business goals trying to be achieved,
> What are the requirements derived from those goals,
> What are the negative consequences of those goals.
> 
> My feeling is there is simply an abstract desire to allow for the CA, on 
> behalf of the subject, to generate the keys but we have not sufficiently 
> articulated a business case for this.

I think I'm in exactly this position also; thank you for articulating
it. One might also add:

* What are the inevitable technical consequences of a scheme which meets
these goals? (E.g. "use of PKCS#12 for key transport" might be one
answer to that question.)

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Friday, December 15, 2017 at 1:34:30 PM UTC-8, Matthew Hardeman wrote:
> On Friday, December 15, 2017 at 3:21:54 PM UTC-6, Ryan Hurst wrote:
>  
> > Unfortunately, the PKCS#12 format, as supported by UAs and Operating 
> > Systems is not a great candidate for the role of carrying keys anymore. You 
> > can see my blog post on this topic here: http://unmitigatedrisk.com/?p=543
> > 
> > The core issue is the use of old cryptographic primitives that barely live 
> > up to the equivalent cryptographic strengths of keys in use today. The 
> > offline nature of the protection involved also enables an attacker to grind 
> > any value used as the password as well.
> > 
> > Any plan to allow a CA to generate keys on behalf of users, which I am not 
> > against as long as there are strict and auditable practices associated with 
> > it, needs to take into consideration the protection of those keys in 
> > transit and storage.
> > 
> > I also believe any language that would be adopted here would clearly 
> > addresses cases where a organization that happens to operate a CA but is 
> > also a relying party. For example Amazon, Google and Apple both operate 
> > WebTrust audited CAs but they also operate cloud services where they are 
> > the subscriber of that CA. Any language used would need to make it clear 
> > the relative scopes and responsibilities in such a case.
> 
> I had long wondered about the PKCS#12 issue.  To the extent that any file 
> format in use today is convenient for delivering a package of certificates 
> including a formal validation chain and associated private key(s), PKCS#12 is 
> so convenient and fairly ubiquitous.
> 
> It is a pain that the cryptographic and integrity portions of the format are 
> showing their age -- at least, as you point out, in the manner in which 
> they're actually implemented in major software today.

So I have read this thread in its entirety now and I think it makes sense for 
it to reset to first principles, specifically:

What are the technological and business goals trying to be achieved,
What are the requirements derived from those goals,
What are the negative consequences of those goals.

My feeling is there is simply an abstract desire to allow for the CA, on behalf 
of the subject, to generate the keys but we have not sufficiently articulated a 
business case for this.

In my experience building and working with embedded systems I, like Peter, have 
found it is possible to build a sufficient pseudo random number generator on 
these devices, In practice however deployed devices commonly either do not do 
so or seed them poorly. 

This use case is one where transport would likely not need to be PKCS#12 given 
the custom nature of these solutions.

At the same time, these devices are often provisioned in a production line and 
the key generation could just as easily (and probably more appropriately) 
happen there.

In my experience as a CA the desire to do server side key generation almost 
always stems from a desire to reduce the friction for customers to acquire 
certificates for use in regular old web servers. Seldom does this case come up 
with network appliances as they do not support the PKCS#12 format normally. 
While the reduction of friction is a laudable goal, it seems the better way to 
do that would be to adopt a protocol like ACME for certificate lifecycle 
managment.

As I said in a earlier response I am not against the idea of server side key 
generation as long as:
There is a legitimate business need,
This can be done in a way that the CA does not have access to the key,
The process in which that this is done is fully transparent and auditable,
The transfer of the key is done in a way that is sufficiently secure,
The storage of the key is done in a way that is sufficiently secure,
We are extremely clear in how this can be done securely.

Basically I believe due to the varying degrees of technical background and 
skill in the CA operator ecosystem allowing this without being extremely is 
probably a case of the cure is worse than the ailment.

With that background I wonder, is this even worth exploring?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-15 Thread Matthew Hardeman via dev-security-policy
On Friday, December 15, 2017 at 3:21:54 PM UTC-6, Ryan Hurst wrote:
 
> Unfortunately, the PKCS#12 format, as supported by UAs and Operating Systems 
> is not a great candidate for the role of carrying keys anymore. You can see 
> my blog post on this topic here: http://unmitigatedrisk.com/?p=543
> 
> The core issue is the use of old cryptographic primitives that barely live up 
> to the equivalent cryptographic strengths of keys in use today. The offline 
> nature of the protection involved also enables an attacker to grind any value 
> used as the password as well.
> 
> Any plan to allow a CA to generate keys on behalf of users, which I am not 
> against as long as there are strict and auditable practices associated with 
> it, needs to take into consideration the protection of those keys in transit 
> and storage.
> 
> I also believe any language that would be adopted here would clearly 
> addresses cases where a organization that happens to operate a CA but is also 
> a relying party. For example Amazon, Google and Apple both operate WebTrust 
> audited CAs but they also operate cloud services where they are the 
> subscriber of that CA. Any language used would need to make it clear the 
> relative scopes and responsibilities in such a case.

I had long wondered about the PKCS#12 issue.  To the extent that any file 
format in use today is convenient for delivering a package of certificates 
including a formal validation chain and associated private key(s), PKCS#12 is 
so convenient and fairly ubiquitous.

It is a pain that the cryptographic and integrity portions of the format are 
showing their age -- at least, as you point out, in the manner in which they're 
actually implemented in major software today.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Tuesday, December 12, 2017 at 1:08:24 PM UTC-8, Jakob Bohm wrote:
> On 12/12/2017 21:39, Wayne Thayer wrote:
> > On Tue, Dec 12, 2017 at 7:45 PM, Jakob Bohm via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> > 
> >> On 12/12/2017 19:39, Wayne Thayer wrote:
> >>
> >>> The outcome to be avoided is a CA that holds in escrow thousands of
> >>> private keys used for TLS. I don’t think that a policy permitting a CA to
> >>> generate the key pair is bad as long as the CA doesn’t hold on to the key
> >>> (unless  the certificate was issued to the CA or the CA is hosting the
> >>> site).
> >>>
> >>> What if the policy were to allow CA key generation but require the CA to
> >>> deliver the private key to the Subscriber and destroy the CA’s copy prior
> >>> to issuing a certificate? Would that make key generation easier? Tim, some
> >>> examples describing how this might be used would be helpful here.
> >>>
> >>>
> >> That would conflict with delivery in PKCS#12 format or any other format
> >> that delivers the key and certificate together, as users of such
> >> services commonly expect.
> >>
> >> Yes, it would. But it's a clear policy. If the requirement is to deliver
> > the key at the same time as the certificate, then how long can the CA hold
> > the private key?
> > 
> > 
> 
> Point is that many end systems (including Windows IIS) are designed to
> either import certificates from PKCS#12 or use a specific CSR generation
> procedure.  If the CA delivered the key and cert separately, then the
> user (who is apparently not sophisticated enough to generate their own
> CSR) will have a hard time importing the key+cert into their system.
> 
> > 
> >> It would also conflict with keeping the issuing CA key far removed from
> >> public web interfaces, such as the interface used by users to pick up
> >> their key and certificate, even if separate, as it would not be fun to
> >> have to log in twice with 1 hour in between (once to pick up key, then
> >> once again to pick up certificate).
> >>
> >> I don't think I understand this use case, or how the proposed policy
> > relates to the issuing CA.
> > 
> 
> If the issuing CA HSM is kept away from online systems and processes
> vetted issuance requests only in a batched offline manner, then a user
> responding to a message saying "your application has been accepted,
> please log in with your temporary password to retrieve your key and
> certificate" would have to download the key, after which the CA can
> delete key and queue the actual issuance to the offline CA system, and
> only after that can the user actually download their certificate.
> 
> Another thing with similar effect is the BR requirement that all the
> OCSP responders must know about issued certificates, which means that
> both the serial number and a hash of the signed certificate must be
> replicated to all the OCSP machines before the certificate is delivered.
> (One of the good OCSP extensions is to include a hash of the valid
> certificate in the OCSP response, thus allowing the relying party
> software to check that a "valid" response is actually for the
> certificate at hand).
> 
> 
> 
> 
> > 
> >> It would only really work with a CSR+key generation service where the
> >> user receives the key at application time, then the cert after vetting.
> >> And many end systems cannot easily import that.
> >>
> >> Many commercial CAs could accommodate a workflow where they deliver the
> > private key at application time. Maybe you are thinking of IOT scenarios?
> > Again, some use cases describing the problem would be helpful.
> > 
> 
> One major such use case is IIS or Exchange at the subscriber end.
> Importing the key and cert at different times is just not a feature of
> Windows server.
> 
> > 
> >> A policy allowing CAs to generate key pairs should also include provisions
> >>> for:
> >>> - The CA must generate the key in accordance with technical best practices
> >>> - While in possession of the private key, the CA must store it securely
> >>>
> >>> Wayne
> >>>
> >>>
> >>
> 
> 
> 
> Enjoy
> 
> Jakob
> -- 
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded

I agree that the "right way(tm)" is to have the keys generated in a HSM, the 
keys exported in ciphertext and for this to be done in a way that the CA can 
not decrypt the keys.

Technically the PKCS#12 format would allow for such a model as you can encrypt 
the keybag to a public key (in a certificate. You could, for example generate a 
key in a HSM, export it encrypted to a public key, and the CA would never see 
the key. 

This has several issues, the first is, of course, you must trust the CA not to 
use a different key; this could be addressed by requiring the code performing 
this logic to be made public, 

Re: CA generated keys

2017-12-15 Thread Ryan Hurst via dev-security-policy
On Tuesday, December 12, 2017 at 11:31:18 AM UTC-8, Tim Hollebeek wrote:
> > A policy allowing CAs to generate key pairs should also include provisions
> > for:
> > - The CA must generate the key in accordance with technical best practices
> > - While in possession of the private key, the CA must store it securely
> 
> Don't forget appropriate protection for the key while it is in transit.  I'll 
> look a bit closer at the use cases and see if I can come up with some 
> reasonable suggestions.
> 
> -Tim

Unfortunately, the PKCS#12 format, as supported by UAs and Operating Systems is 
not a great candidate for the role of carrying keys anymore. You can see my 
blog post on this topic here: http://unmitigatedrisk.com/?p=543

The core issue is the use of old cryptographic primitives that barely live up 
to the equivalent cryptographic strengths of keys in use today. The offline 
nature of the protection involved also enables an attacker to grind any value 
used as the password as well.

Any plan to allow a CA to generate keys on behalf of users, which I am not 
against as long as there are strict and auditable practices associated with it, 
needs to take into consideration the protection of those keys in transit and 
storage.

I also believe any language that would be adopted here would clearly addresses 
cases where a organization that happens to operate a CA but is also a relying 
party. For example Amazon, Google and Apple both operate WebTrust audited CAs 
but they also operate cloud services where they are the subscriber of that CA. 
Any language used would need to make it clear the relative scopes and 
responsibilities in such a case.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-14 Thread Tim Hollebeek via dev-security-policy
Within 24 hours?  Once the download completes?  It doesn’t seem significantly 
harder than the other questions we grapple with.  I’m sure there are plenty of 
reasonable solutions.

 

If you want to deliver the private key first, before issuance, that’d be fine 
too.  It just means two downloads instead of one and I tend to prefer avoiding 
unnecessary complexity.

 

-Tim

 

From: Wayne Thayer [mailto:wtha...@mozilla.com] 
Sent: Wednesday, December 13, 2017 5:40 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

 

On Wed, Dec 13, 2017 at 4:06 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:


Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate
at the same time should be allowed, and I have no problem with a requirement
to delete the key after delivery.

 

How would you define a requirement to discard the private key "after delivery"? 
This seems like a very slippery slope.

 

  I also think server side generation along
the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC 7030
is about client certificates, but in a world with lots of tiny communicating
devices that interface with people via web browsers, there are lots of highly
resource constrained devices with poor access to randomness out there running
web servers.  And I think we are heading quickly towards that world.
Tightening up the requirements to allow specific, approved mechanisms is fine.
We don't want people doing random things that might not be secure.

Why is it unreasonable in this IoT scenario to require the private key to be 
delivered prior to issuance?



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Wednesday, December 13, 2017 at 5:52:16 PM UTC-6, Peter Gutmann wrote:

> >Sitting on my desk are not less than 3 reference designs.  At least two of
> >them have decent hardware RNG capabilities.  
> 
> My code runs on a lot (and I mean a *lot*) of embedded, virtually none of
> which has hardware RNGs.  Or an OS, for that matter, at least in the sense of
> something Unix-like.  However, in all cases the RNG system is pretty secure,
> you preload a fixed seed at manufacture and then get just enough changing data
> to ensure non-repeating values (almost every RTOS has this, e.g. VxWorks has
> the very useful taskRegsGet() for which the docs tell you "self-examination is
> not advisable as results are unpredictable", which is perfect).

I agree - and this same technique (the use of a stateful deterministic 
pseudo-random number generator seeded with adequate entropy) - is what I was 
proposing be utilized in the case of the generation of random data needs for EC 
signatures, ECDHE exchanges, etc.

This mechanism is only safe if that seed data process actually happens under 
secure circumstances, but for many devices and device manufacturers that can be 
assured.

> 
> In all of these cases, the device is going to be a safer place to generate
> keys than the CA, in particular because (a) the CA is another embedded
> controller somewhere so probably no better than the target device and (b)
> there's no easy way to get the key securely from the CA to the device.

Agreed, as I mentioned the secure transport aspect is essential for remote key 
generation to be a secure option at any level.

> 
> However, there's also an awful lot of IoS out there that uses shared private
> keys (thus the term "the lesser-known public key" that was used at one
> software house some years ago).  OTOH those devices are also going to be
> running decade-old unpatched kernels with every service turned on (also years-
> old binaries), XSS, hardcoded admin passwords, and all the other stuff that
> makes the IoS such a joy for attackers.  So in that case I think a less-then-
> good private key would be the least of your worries.

So, the platforms I'm talking about are the kind of stuff that sit somewhere in 
the middle of this.  They're intended for professional consumption into the 
device development cycle, intended to be tweaked to the specifics of the use 
case.  Often, the "manufacturer" makes quite few changes to the hardware 
reference design, fewer to the software reference design -- sometimes as 
shallow as branding -- and ships.

A lot of platforms with great potential at the hardware level and shockingly 
under-engineered, minimally designed software stacks are coming to prominence.  
They're cheap and in the right hands can be very effective.  Unfortunately, 
some of these reference software stacks encourage good enough practice that 
they won't be quickly caught out -- no pre-built single shared private key, yet 
a first-boot random initialized with a script that seeds a PRNG with uptime 
microseconds, clock ticks since reset, or something like that, which across 
that line will be a very narrow band of values for a given first boot of a 
given reference design and set of boot scripts.

Nevertheless, many of these stacks do at least minimize extraneous services and 
the target customers (pseudo-manufacturers to manufactures) have gotten savvy 
to ancient kernels and known major remotely exploitable holes.  We could call 
it the Internet of DeceptiveInThatImSomewhatShittyButHideItAtFirstGlance.

> 
> So the bits we need to worry about are what falls between "full of security
> holes anyway" and "things done right".  What is that, and does it matter if
> the private keys aren't perfect?

Agreed and I attempt address the first half of that just above -- my "Internet 
Of ." description.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Re: CA generated keys

2017-12-13 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy 
 writes:

>In principle, I support Mr. Sleevi's position, practically I lean toward Mr.
>Thayer's and Mr. Hollebeek's position.

I probably support at least one of those, if I can figure out who's been
quoted as saying what.

>Sitting on my desk are not less than 3 reference designs.  At least two of
>them have decent hardware RNG capabilities.  

My code runs on a lot (and I mean a *lot*) of embedded, virtually none of
which has hardware RNGs.  Or an OS, for that matter, at least in the sense of
something Unix-like.  However, in all cases the RNG system is pretty secure,
you preload a fixed seed at manufacture and then get just enough changing data
to ensure non-repeating values (almost every RTOS has this, e.g. VxWorks has
the very useful taskRegsGet() for which the docs tell you "self-examination is
not advisable as results are unpredictable", which is perfect).

In all of these cases, the device is going to be a safer place to generate
keys than the CA, in particular because (a) the CA is another embedded
controller somewhere so probably no better than the target device and (b)
there's no easy way to get the key securely from the CA to the device.

However, there's also an awful lot of IoS out there that uses shared private
keys (thus the term "the lesser-known public key" that was used at one
software house some years ago).  OTOH those devices are also going to be
running decade-old unpatched kernels with every service turned on (also years-
old binaries), XSS, hardcoded admin passwords, and all the other stuff that
makes the IoS such a joy for attackers.  So in that case I think a less-then-
good private key would be the least of your worries.

So the bits we need to worry about are what falls between "full of security
holes anyway" and "things done right".  What is that, and does it matter if
the private keys aren't perfect?

Peter.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Wednesday, December 13, 2017 at 12:50:38 PM UTC-6, Ryan Sleevi wrote:
> On Wed, Dec 13, 2017 at 1:24 PM, Matthew Hardeman 
> wrote:
> 
> > As I pointed out, it can be demonstrated that quality ECDHE exchanges can
> > happen assuming a stateful DPRNG with a decent starting entropy corpus.
> >
> 
> Agreed - but that's also true for the devices Tim is mentioning.

I do not mean this facetiously.  If I kept a diary, I might make a note.  I 
feel like I've accomplished something.

> 
> Which I guess is the point I was trying to make - if this can be 'fixed'
> relatively easily for the use case Tim was bringing up, what other use
> cases are there? The current policy serves a purpose, and although that
> purpose is not high in value nor technically rigorous, it serves as an
> external check.
> 
> And yes, I realize the profound irony in me making such a comment in this
> thread while simultaneously arguing against EV in a parallel thread, on the
> basis that the purpose EV serves is not high in value nor technically
> rigorous - but I am having trouble, unlike in the EV thread, understanding
> what harm is caused by the current policy, or what possible things that are
> beneficial are prevented.

I, for one, respect that you pointed out the dichotomy.  I think I understand 
it.

I believe that opening the door to ca-side key generation under specific terms 
and circumstances offers an opportunity for various consumers of PKI key pairs 
to acquire higher quality key pairs than a lot of the alternatives which would 
otherwise fill the void.

> 
> I don't think we'll see significant security benefit in some circumstances
> - I think we'll see the appearances of, but not the manifestation - so I'm
> trying to understand why we'd want to introduce that risk?

Sometime we accept one risk, under terms that we can audit and control, in 
order to avoid the risks which we can reasonably predict the rise of in a 
vacuum.  I am _not_ well qualified to weigh this particular set of risk 
exposures, most especially in the nature of the risk of an untrustworthy CA 
intentionally acting to cache these keys, etc.  I am well qualified to indicate 
that both risks exist.  I believe they should probably be weighed in the nature 
of a "this or that" dichotomy.

> 
> I also say this knowing how uninteroperable the existing key delivery
> mechanisms are (PKCS#12 = minefield), and how terrible the cryptographic
> protection of those are. Combine that with CAs repeated failure to
> correctly implement the specs that are less ambiguous, and I'm worried
> about a proliferation of private keys flying around - as some CAs do for

It _is_ absolutely essential that the question of secure transport and 
destruction be part of what is controlled for and monitored in a scheme where 
key generation by the CA is permitted.  The mechanism becomes worse than almost 
everything else if that falls apart.


> their other, non-TLS certificates. So I see a lot of potential harm in the
> ecosystem, and question the benefit, especially when, as you note, this can
> be mitigated rather significantly by developers not shoveling crap out the
> door. If developers who view "time to market" as more important than
> "Internet safety" can't get their toys, I ... don't lose much sleep.

Aside from the cryptography enthusiast or professional, it is hard to find 
developers with the right intersection of skill and interest to address the 
security implications.  It becomes complicated further when security 
implications aren't necessarily a business imperative.  Further complicated 
when the customer base realizes it has real costs and begins to question the 
value.  It's not just the developers.  The trend of good _looking_ quick 
reference designs lately is that they have a great spec sheet and take every 
imaginable short cut where the requirements are not explicitly stated and 
audited.  It's an ecosystem problem that is really hard to solve.

A couple of years ago, I and my team were doing interop testing between a 
device and one of our products.  In that course of events, we discovered a 
nasty security issue that was blatantly obvious to someone skilled in our 
particular application area.  We worked with the manufacturer to trace the 
product design back to a reference design from a Chinese ODM.  They were 
amenable to fixing the issue ultimately, but we found at least 14 affected 
distinct products in the marketplace based upon that design that did pull in 
those changes as of a year later.

Even as the line between hardware engineer and software developer get more and 
more blurred, there remains a stark division of skill set, knowledge base, and 
even understanding of each others' needs.  That's problematic.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy

> As an unrelated but funny aside, I once heard about a expensive, high 
> assurance device with a embedded bi-stable circuit for producing high quality 
> hardware random numbers.  As part of a rigorous validation and review process 
> in order to guarantee product quality, the instability was noticed and 
> corrected late in the development process, and final testing showed that the 
> output of the key generator was completely free of any pesky one bits that 
> might interfere with the purity of all zero keys.
> 

More perniciously, an excellent PRNG algorithm will "whiten" sufficiently that 
the standard statistical tests will not be able to distinguish the random 
output stream as completely lacking in seed entropy.

I believe the CC EAL target evaluations standards require that during the 
testing a mode be enabled to access the raw uncleaned, 
pre-algorithmic-balancing, values so that tests can be incorporated to check 
the raw entropy source for that issue.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 1:24 PM, Matthew Hardeman 
wrote:

> As I pointed out, it can be demonstrated that quality ECDHE exchanges can
> happen assuming a stateful DPRNG with a decent starting entropy corpus.
>

Agreed - but that's also true for the devices Tim is mentioning.

Which I guess is the point I was trying to make - if this can be 'fixed'
relatively easily for the use case Tim was bringing up, what other use
cases are there? The current policy serves a purpose, and although that
purpose is not high in value nor technically rigorous, it serves as an
external check.

And yes, I realize the profound irony in me making such a comment in this
thread while simultaneously arguing against EV in a parallel thread, on the
basis that the purpose EV serves is not high in value nor technically
rigorous - but I am having trouble, unlike in the EV thread, understanding
what harm is caused by the current policy, or what possible things that are
beneficial are prevented.

I don't think we'll see significant security benefit in some circumstances
- I think we'll see the appearances of, but not the manifestation - so I'm
trying to understand why we'd want to introduce that risk?

I also say this knowing how uninteroperable the existing key delivery
mechanisms are (PKCS#12 = minefield), and how terrible the cryptographic
protection of those are. Combine that with CAs repeated failure to
correctly implement the specs that are less ambiguous, and I'm worried
about a proliferation of private keys flying around - as some CAs do for
their other, non-TLS certificates. So I see a lot of potential harm in the
ecosystem, and question the benefit, especially when, as you note, this can
be mitigated rather significantly by developers not shoveling crap out the
door. If developers who view "time to market" as more important than
"Internet safety" can't get their toys, I ... don't lose much sleep.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy
So ECHDE is an interesting point that I had not considered, but as Matt noted, 
the quality of randomness in the devices does generally improve with time.  It 
tends to be the initial bootstrapping where things go horribly wrong.

 

A couple years ago I was actually on the opposite side of this issue, so it’s 
very easy for me to see both sides.  I just don’t see it as useful to 
categorically rule out something that can provide a significant security 
benefit in some circumstances.

 

-Tim

 

As an unrelated but funny aside, I once heard about a expensive, high assurance 
device with a embedded bi-stable circuit for producing high quality hardware 
random numbers.  As part of a rigorous validation and review process in order 
to guarantee product quality, the instability was noticed and corrected late in 
the development process, and final testing showed that the output of the key 
generator was completely free of any pesky one bits that might interfere with 
the purity of all zero keys.

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Wednesday, December 13, 2017 11:11 AM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

 

Tim,

 

I appreciate your reply, but that seems to be backwards looking rather than 
forwards looking. That is, it looks and assumes static-RSA ciphersuites are 
acceptable, and thus the entropy risk to TLS is mitigated by client-random to 
this terrible TLS-server devices, and the issue to mitigate is the poor entropy 
on the server.

 

However, I don't think that aligns with what I was mentioning - that is, the 
expectation going forward of the use of forward-secure cryptography and 
ephemeral key exchanges, which do become more relevant to the quality of 
entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key 
construction does not meaningfully improve the security of Mozilla users.

 

I'm curious whether any use case can be brought forward that isn't "So that we 
can aid and support the proliferation of insecure devices into users everyday 
lives" - as surely that doesn't seem like a good outcome, both for Mozilla 
users and for society at large. Nor do I think the propose changes meaningfully 
mitigate the harm caused by them, despite the well-meaning attempt to do so.

 

On Wed, Dec 13, 2017 at 12:40 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:

As I’m sure you’re aware, RSA key generation is far, far more reliant on the 
quality of the random number generation and the prime selection algorithm than 
TLS is dependent on randomness.  In fact it’s the combination of poor 
randomness with attempts to reduce the cost of RSA key generation that has and 
will continue to cause problems.



While the number of bits in the key pair is an important security parameter, 
the number of potential primes and their distribution has historically not 
gotten as much attention as it should.  This is why there have been a number of 
high profile breaches due to poor RSA key generation, but as far as I know, no 
known attacks due to the use of randomness elsewhere in the TLS protocol.  This 
is because TLS, like most secure protocols, has enough of gap between secure 
and insecure that small deviations from ideal behavior don’t break the entire 
protocol.  RSA has a well-earned reputation for finickiness and fragility.



It doesn’t help that RSA key generation has a sort of birthday paradoxy feel to 
it, given that if any two key pairs share a prime number, it’s just a matter of 
time before someone uses Euclid’s algorithm in order to find it.  There are 
PLENTY of possible primes of the appropriate size so that this should never 
happen, but it’s been seen to happen.  I would be shocked if we’ve seen the 
last major security breach based on poor RSA key generation by resource 
constrained devices.



Given that there exist IETF approved alternatives that could help with that 
problem, they’re worth considering.  I’ve been spending a lot of time recently 
looking at the state of the IoT world, and it’s not good.



-Tim



From: Ryan Sleevi [mailto:r...@sleevi.com <mailto:r...@sleevi.com> ]
Sent: Wednesday, December 13, 2017 9:52 AM
To: Tim Hollebeek <tim.holleb...@digicert.com 
<mailto:tim.holleb...@digicert.com> >
Cc: mozilla-dev-security-pol...@lists.mozilla.org 
<mailto:mozilla-dev-security-pol...@lists.mozilla.org> 
Subject: Re: CA generated keys








On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org>  
<mailto:dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > > wrote:


Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the k

Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy
> I appreciate your reply, but that seems to be backwards looking rather than
> forwards looking. That is, it looks and assumes static-RSA ciphersuites are
> acceptable, and thus the entropy risk to TLS is mitigated by client-random
> to this terrible TLS-server devices, and the issue to mitigate is the poor
> entropy on the server.
>
> However, I don't think that aligns with what I was mentioning - that is,
> the expectation going forward of the use of forward-secure cryptography and
> ephemeral key exchanges, which do become more relevant to the quality of
> entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key
> construction does not meaningfully improve the security of Mozilla users.
>

As I pointed out, it can be demonstrated that quality ECDHE exchanges can
happen assuming a stateful DPRNG with a decent starting entropy corpus.

Beyond that, I should point out, I'm not talking about legacy devices
already in market.  I'm not sure the community fully understands how much
hot-off-the-presses stuff (at least the stuff in the cheap, and so selected
by the marketplace) is really really set up for failure in terms of
security.

What I want to emphasize is that I don't believe policy here will make
things better.  In fact, there are real dangers that it gets worse.

It would be an egregiously bad decision -- but in the eyes of a budding
device software stack developer -- to just implement an RSA key pair
generation alg in Javascript and rely upon the browser to build the key set
that will form the raw private key and the CSR.  That's definitely not
secure or better.

Assuming you lock javascript to the point that large integer primitives and
operations are unavailable outside secure mode, these people will just
stand up an HTTP endpoint that spits out newly generated random RSA or EC
key pair to feed to the device.  And it'll be unsigned and not even
protected by HTTPS, unless required, and then they'll do the bare minimum.

The device reference design space is improving and is becoming more
security conscious but you're YEARS away from anything resembling best
practice.  I just don't believe anything Mozilla or anyone else outside
that world can do will speed it along.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-13 Thread Ryan Sleevi via dev-security-policy
Tim,

I appreciate your reply, but that seems to be backwards looking rather than
forwards looking. That is, it looks and assumes static-RSA ciphersuites are
acceptable, and thus the entropy risk to TLS is mitigated by client-random
to this terrible TLS-server devices, and the issue to mitigate is the poor
entropy on the server.

However, I don't think that aligns with what I was mentioning - that is,
the expectation going forward of the use of forward-secure cryptography and
ephemeral key exchanges, which do become more relevant to the quality of
entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key
construction does not meaningfully improve the security of Mozilla users.

I'm curious whether any use case can be brought forward that isn't "So that
we can aid and support the proliferation of insecure devices into users
everyday lives" - as surely that doesn't seem like a good outcome, both for
Mozilla users and for society at large. Nor do I think the propose changes
meaningfully mitigate the harm caused by them, despite the well-meaning
attempt to do so.

On Wed, Dec 13, 2017 at 12:40 PM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As I’m sure you’re aware, RSA key generation is far, far more reliant on
> the quality of the random number generation and the prime selection
> algorithm than TLS is dependent on randomness.  In fact it’s the
> combination of poor randomness with attempts to reduce the cost of RSA key
> generation that has and will continue to cause problems.
>
>
>
> While the number of bits in the key pair is an important security
> parameter, the number of potential primes and their distribution has
> historically not gotten as much attention as it should.  This is why there
> have been a number of high profile breaches due to poor RSA key generation,
> but as far as I know, no known attacks due to the use of randomness
> elsewhere in the TLS protocol.  This is because TLS, like most secure
> protocols, has enough of gap between secure and insecure that small
> deviations from ideal behavior don’t break the entire protocol.  RSA has a
> well-earned reputation for finickiness and fragility.
>
>
>
> It doesn’t help that RSA key generation has a sort of birthday paradoxy
> feel to it, given that if any two key pairs share a prime number, it’s just
> a matter of time before someone uses Euclid’s algorithm in order to find
> it.  There are PLENTY of possible primes of the appropriate size so that
> this should never happen, but it’s been seen to happen.  I would be shocked
> if we’ve seen the last major security breach based on poor RSA key
> generation by resource constrained devices.
>
>
>
> Given that there exist IETF approved alternatives that could help with
> that problem, they’re worth considering.  I’ve been spending a lot of time
> recently looking at the state of the IoT world, and it’s not good.
>
>
>
> -Tim
>
>
>
> From: Ryan Sleevi [mailto:r...@sleevi.com]
> Sent: Wednesday, December 13, 2017 9:52 AM
> To: Tim Hollebeek <tim.holleb...@digicert.com>
> Cc: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: CA generated keys
>
>
>
>
>
>
>
> On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy <
> dev-security-policy@lists.mozilla.org <mailto:dev-security-policy@
> lists.mozilla.org> > wrote:
>
>
> Wayne,
>
> For TLS/SSL certificates, I think PKCS #12 delivery of the key and
> certificate
> at the same time should be allowed, and I have no problem with a
> requirement
> to delete the key after delivery.  I also think server side generation
> along
> the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC
> 7030
> is about client certificates, but in a world with lots of tiny
> communicating
> devices that interface with people via web browsers, there are lots of
> highly
> resource constrained devices with poor access to randomness out there
> running
> web servers.  And I think we are heading quickly towards that world.
> Tightening up the requirements to allow specific, approved mechanisms is
> fine.
> We don't want people doing random things that might not be secure.
>
>
>
> Tim,
>
>
>
> I'm afraid that the use case to justify this change seems to be inherently
> flawed and insecure. I'm hoping you can correct my misunderstanding, if I
> am doing so.
>
>
>
> As I understand it, the motivation for this is to support devices with
> insecure random number generators that might be otherwise incapable of
> generating secure keys. The logic goes that by having the CAs generate
> these keys, we end up with better security - fewer keys leaking.
>
>
>
>

Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy
e flaw you mention), or, in more
> recent discussions, the ROCA-affected keys. Or, for the academic take,
> https://factorable.net/weakkeys12.extended.pdf , or the research at
> https://crocs.fi.muni.cz/public/papers/usenix2016 that itself appears to
> have lead to ROCA being detected.
>
> Quite simply, the population you're targeting - "tiny communication devices
> ... with poor access to randomness" - are inherently insecure in a TLS
> world. TLS itself depends on entropy, especially for the ephemeral key
> exchange ciphersuites required for use in HTTP/2 or TLS 1.3, and so such
> devices do not somehow become 'more' secure by having the CA generate the
> key, but then negotiate poor TLS ciphersuites.
>
> More importantly, the change you propose would have the incidental effect
> of making it more difficult to detect such devices and work with vendors to
> replace or repair them. This seems to overall make Mozilla users less
> secure, and the ecosystem less secure.
>
> I realize that there is somewhat a conflict - we're today requiring that
> CDNs and vendors can generate these keys (thus masking off the poor entropy
> from detection), while not allowing the CA to participate - but I think
> that's consistent with a viewpoint that the CA should not actively
> facilitate insecurity, which I fear your proposal would.
>
> Thus, I would suggest that the current status quo - a prohibition against
> CA generated keys - is positive for the SSL/TLS ecosystem in particular,
> and any such devices that struggle with randomness should be dismantled and
> replaced, rather than encouraged and proliferated.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy
As I’m sure you’re aware, RSA key generation is far, far more reliant on the 
quality of the random number generation and the prime selection algorithm than 
TLS is dependent on randomness.  In fact it’s the combination of poor 
randomness with attempts to reduce the cost of RSA key generation that has and 
will continue to cause problems.

 

While the number of bits in the key pair is an important security parameter, 
the number of potential primes and their distribution has historically not 
gotten as much attention as it should.  This is why there have been a number of 
high profile breaches due to poor RSA key generation, but as far as I know, no 
known attacks due to the use of randomness elsewhere in the TLS protocol.  This 
is because TLS, like most secure protocols, has enough of gap between secure 
and insecure that small deviations from ideal behavior don’t break the entire 
protocol.  RSA has a well-earned reputation for finickiness and fragility.

 

It doesn’t help that RSA key generation has a sort of birthday paradoxy feel to 
it, given that if any two key pairs share a prime number, it’s just a matter of 
time before someone uses Euclid’s algorithm in order to find it.  There are 
PLENTY of possible primes of the appropriate size so that this should never 
happen, but it’s been seen to happen.  I would be shocked if we’ve seen the 
last major security breach based on poor RSA key generation by resource 
constrained devices.

 

Given that there exist IETF approved alternatives that could help with that 
problem, they’re worth considering.  I’ve been spending a lot of time recently 
looking at the state of the IoT world, and it’s not good.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Wednesday, December 13, 2017 9:52 AM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

 

 

 

On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:


Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate
at the same time should be allowed, and I have no problem with a requirement
to delete the key after delivery.  I also think server side generation along
the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC 7030
is about client certificates, but in a world with lots of tiny communicating
devices that interface with people via web browsers, there are lots of highly
resource constrained devices with poor access to randomness out there running
web servers.  And I think we are heading quickly towards that world.
Tightening up the requirements to allow specific, approved mechanisms is fine.
We don't want people doing random things that might not be secure.

 

Tim,

 

I'm afraid that the use case to justify this change seems to be inherently 
flawed and insecure. I'm hoping you can correct my misunderstanding, if I am 
doing so.

 

As I understand it, the motivation for this is to support devices with insecure 
random number generators that might be otherwise incapable of generating secure 
keys. The logic goes that by having the CAs generate these keys, we end up with 
better security - fewer keys leaking.

 

Yet I would challenge that assertion, and instead suggest that CAs generating 
keys for these devices inherently makes the system less secure. As you know, 
CAs are already on the hook to evaluate keys against known weak sets and reject 
them. There is absent a formal definition of this in the BRs, other than 
calling out illustrative examples such as Debian-generated keys (which share 
the flaw you mention), or, in more recent discussions, the ROCA-affected keys. 
Or, for the academic take, https://factorable.net/weakkeys12.extended.pdf , or 
the research at https://crocs.fi.muni.cz/public/papers/usenix2016 that itself 
appears to have lead to ROCA being detected.

 

Quite simply, the population you're targeting - "tiny communication devices ... 
with poor access to randomness" - are inherently insecure in a TLS world. TLS 
itself depends on entropy, especially for the ephemeral key exchange 
ciphersuites required for use in HTTP/2 or TLS 1.3, and so such devices do not 
somehow become 'more' secure by having the CA generate the key, but then 
negotiate poor TLS ciphersuites.

 

More importantly, the change you propose would have the incidental effect of 
making it more difficult to detect such devices and work with vendors to 
replace or repair them. This seems to overall make Mozilla users less secure, 
and the ecosystem less secure.

 

I realize that there is somewhat a conflict - we're today requiring that CDNs 
and vendors can generate these keys (thus masking off the poor entropy from 
detection), while not allowing the CA to participate - but I think that's 
consistent with a viewpoint that the CA sh

Re: CA generated keys

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Wayne,
>
> For TLS/SSL certificates, I think PKCS #12 delivery of the key and
> certificate
> at the same time should be allowed, and I have no problem with a
> requirement
> to delete the key after delivery.  I also think server side generation
> along
> the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC
> 7030
> is about client certificates, but in a world with lots of tiny
> communicating
> devices that interface with people via web browsers, there are lots of
> highly
> resource constrained devices with poor access to randomness out there
> running
> web servers.  And I think we are heading quickly towards that world.
> Tightening up the requirements to allow specific, approved mechanisms is
> fine.
> We don't want people doing random things that might not be secure.
>

Tim,

I'm afraid that the use case to justify this change seems to be inherently
flawed and insecure. I'm hoping you can correct my misunderstanding, if I
am doing so.

As I understand it, the motivation for this is to support devices with
insecure random number generators that might be otherwise incapable of
generating secure keys. The logic goes that by having the CAs generate
these keys, we end up with better security - fewer keys leaking.

Yet I would challenge that assertion, and instead suggest that CAs
generating keys for these devices inherently makes the system less secure.
As you know, CAs are already on the hook to evaluate keys against known
weak sets and reject them. There is absent a formal definition of this in
the BRs, other than calling out illustrative examples such as
Debian-generated keys (which share the flaw you mention), or, in more
recent discussions, the ROCA-affected keys. Or, for the academic take,
https://factorable.net/weakkeys12.extended.pdf , or the research at
https://crocs.fi.muni.cz/public/papers/usenix2016 that itself appears to
have lead to ROCA being detected.

Quite simply, the population you're targeting - "tiny communication devices
... with poor access to randomness" - are inherently insecure in a TLS
world. TLS itself depends on entropy, especially for the ephemeral key
exchange ciphersuites required for use in HTTP/2 or TLS 1.3, and so such
devices do not somehow become 'more' secure by having the CA generate the
key, but then negotiate poor TLS ciphersuites.

More importantly, the change you propose would have the incidental effect
of making it more difficult to detect such devices and work with vendors to
replace or repair them. This seems to overall make Mozilla users less
secure, and the ecosystem less secure.

I realize that there is somewhat a conflict - we're today requiring that
CDNs and vendors can generate these keys (thus masking off the poor entropy
from detection), while not allowing the CA to participate - but I think
that's consistent with a viewpoint that the CA should not actively
facilitate insecurity, which I fear your proposal would.

Thus, I would suggest that the current status quo - a prohibition against
CA generated keys - is positive for the SSL/TLS ecosystem in particular,
and any such devices that struggle with randomness should be dismantled and
replaced, rather than encouraged and proliferated.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy

Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate 
at the same time should be allowed, and I have no problem with a requirement 
to delete the key after delivery.  I also think server side generation along 
the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC 7030 
is about client certificates, but in a world with lots of tiny communicating 
devices that interface with people via web browsers, there are lots of highly 
resource constrained devices with poor access to randomness out there running 
web servers.  And I think we are heading quickly towards that world. 
Tightening up the requirements to allow specific, approved mechanisms is fine. 
We don't want people doing random things that might not be secure.

As usual, non-TLS certificates have a completely different set of concerns. 
Demand for escrow of client/email certificates is much higher and the practice 
is much more common, for a variety of business reasons.

-Tim


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-12 Thread Jakob Bohm via dev-security-policy

On 12/12/2017 21:39, Wayne Thayer wrote:

On Tue, Dec 12, 2017 at 7:45 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 12/12/2017 19:39, Wayne Thayer wrote:


The outcome to be avoided is a CA that holds in escrow thousands of
private keys used for TLS. I don’t think that a policy permitting a CA to
generate the key pair is bad as long as the CA doesn’t hold on to the key
(unless  the certificate was issued to the CA or the CA is hosting the
site).

What if the policy were to allow CA key generation but require the CA to
deliver the private key to the Subscriber and destroy the CA’s copy prior
to issuing a certificate? Would that make key generation easier? Tim, some
examples describing how this might be used would be helpful here.



That would conflict with delivery in PKCS#12 format or any other format
that delivers the key and certificate together, as users of such
services commonly expect.

Yes, it would. But it's a clear policy. If the requirement is to deliver

the key at the same time as the certificate, then how long can the CA hold
the private key?




Point is that many end systems (including Windows IIS) are designed to
either import certificates from PKCS#12 or use a specific CSR generation
procedure.  If the CA delivered the key and cert separately, then the
user (who is apparently not sophisticated enough to generate their own
CSR) will have a hard time importing the key+cert into their system.




It would also conflict with keeping the issuing CA key far removed from
public web interfaces, such as the interface used by users to pick up
their key and certificate, even if separate, as it would not be fun to
have to log in twice with 1 hour in between (once to pick up key, then
once again to pick up certificate).

I don't think I understand this use case, or how the proposed policy

relates to the issuing CA.



If the issuing CA HSM is kept away from online systems and processes
vetted issuance requests only in a batched offline manner, then a user
responding to a message saying "your application has been accepted,
please log in with your temporary password to retrieve your key and
certificate" would have to download the key, after which the CA can
delete key and queue the actual issuance to the offline CA system, and
only after that can the user actually download their certificate.

Another thing with similar effect is the BR requirement that all the
OCSP responders must know about issued certificates, which means that
both the serial number and a hash of the signed certificate must be
replicated to all the OCSP machines before the certificate is delivered.
(One of the good OCSP extensions is to include a hash of the valid
certificate in the OCSP response, thus allowing the relying party
software to check that a "valid" response is actually for the
certificate at hand).







It would only really work with a CSR+key generation service where the
user receives the key at application time, then the cert after vetting.
And many end systems cannot easily import that.

Many commercial CAs could accommodate a workflow where they deliver the

private key at application time. Maybe you are thinking of IOT scenarios?
Again, some use cases describing the problem would be helpful.



One major such use case is IIS or Exchange at the subscriber end.
Importing the key and cert at different times is just not a feature of
Windows server.




A policy allowing CAs to generate key pairs should also include provisions

for:
- The CA must generate the key in accordance with technical best practices
- While in possession of the private key, the CA must store it securely

Wayne








Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-12 Thread Wayne Thayer via dev-security-policy
On Tue, Dec 12, 2017 at 7:45 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 12/12/2017 19:39, Wayne Thayer wrote:
>
>> The outcome to be avoided is a CA that holds in escrow thousands of
>> private keys used for TLS. I don’t think that a policy permitting a CA to
>> generate the key pair is bad as long as the CA doesn’t hold on to the key
>> (unless  the certificate was issued to the CA or the CA is hosting the
>> site).
>>
>> What if the policy were to allow CA key generation but require the CA to
>> deliver the private key to the Subscriber and destroy the CA’s copy prior
>> to issuing a certificate? Would that make key generation easier? Tim, some
>> examples describing how this might be used would be helpful here.
>>
>>
> That would conflict with delivery in PKCS#12 format or any other format
> that delivers the key and certificate together, as users of such
> services commonly expect.
>
> Yes, it would. But it's a clear policy. If the requirement is to deliver
the key at the same time as the certificate, then how long can the CA hold
the private key?



> It would also conflict with keeping the issuing CA key far removed from
> public web interfaces, such as the interface used by users to pick up
> their key and certificate, even if separate, as it would not be fun to
> have to log in twice with 1 hour in between (once to pick up key, then
> once again to pick up certificate).
>
> I don't think I understand this use case, or how the proposed policy
relates to the issuing CA.


> It would only really work with a CSR+key generation service where the
> user receives the key at application time, then the cert after vetting.
> And many end systems cannot easily import that.
>
> Many commercial CAs could accommodate a workflow where they deliver the
private key at application time. Maybe you are thinking of IOT scenarios?
Again, some use cases describing the problem would be helpful.


> A policy allowing CAs to generate key pairs should also include provisions
>> for:
>> - The CA must generate the key in accordance with technical best practices
>> - While in possession of the private key, the CA must store it securely
>>
>> Wayne
>>
>>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-12 Thread Jakob Bohm via dev-security-policy

On 12/12/2017 19:39, Wayne Thayer wrote:

On Mon, Dec 11, 2017 at 9:43 AM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:



I don't know but it's worth talking about.  I think the discussion should
be
"when should this be allowed, and how can it be done securely?"

The outcome to be avoided is a CA that holds in escrow thousands of

private keys used for TLS. I don’t think that a policy permitting a CA to
generate the key pair is bad as long as the CA doesn’t hold on to the key
(unless  the certificate was issued to the CA or the CA is hosting the
site).

What if the policy were to allow CA key generation but require the CA to
deliver the private key to the Subscriber and destroy the CA’s copy prior
to issuing a certificate? Would that make key generation easier? Tim, some
examples describing how this might be used would be helpful here.



That would conflict with delivery in PKCS#12 format or any other format
that delivers the key and certificate together, as users of such
services commonly expect.

It would also conflict with keeping the issuing CA key far removed from
public web interfaces, such as the interface used by users to pick up
their key and certificate, even if separate, as it would not be fun to
have to log in twice with 1 hour in between (once to pick up key, then
once again to pick up certificate).

It would only really work with a CSR+key generation service where the
user receives the key at application time, then the cert after vetting.
And many end systems cannot easily import that.


A policy allowing CAs to generate key pairs should also include provisions
for:
- The CA must generate the key in accordance with technical best practices
- While in possession of the private key, the CA must store it securely

Wayne




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-12 Thread Tim Hollebeek via dev-security-policy

> A policy allowing CAs to generate key pairs should also include provisions
> for:
> - The CA must generate the key in accordance with technical best practices
> - While in possession of the private key, the CA must store it securely

Don't forget appropriate protection for the key while it is in transit.  I'll 
look a bit closer at the use cases and see if I can come up with some 
reasonable suggestions.

-Tim


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-12 Thread Wayne Thayer via dev-security-policy
On Mon, Dec 11, 2017 at 9:43 AM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> I don't know but it's worth talking about.  I think the discussion should
> be
> "when should this be allowed, and how can it be done securely?"
>
> The outcome to be avoided is a CA that holds in escrow thousands of
private keys used for TLS. I don’t think that a policy permitting a CA to
generate the key pair is bad as long as the CA doesn’t hold on to the key
(unless  the certificate was issued to the CA or the CA is hosting the
site).

What if the policy were to allow CA key generation but require the CA to
deliver the private key to the Subscriber and destroy the CA’s copy prior
to issuing a certificate? Would that make key generation easier? Tim, some
examples describing how this might be used would be helpful here.

A policy allowing CAs to generate key pairs should also include provisions
for:
- The CA must generate the key in accordance with technical best practices
- While in possession of the private key, the CA must store it securely

Wayne
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-11 Thread Tim Hollebeek via dev-security-policy

> The more I think about it, the more I see this is actually a interesting
question :-)

I had the same feeling.  It seems like an easy question to answer until you
start thinking about it.

> I suspect the first thing Mozilla allowing this would do would be to make
it much more common. (Let's assume 
> there are no other policy barriers.) I suspect there are several simpler
workflows for certificate issuance and
> installation that this could enable, and CAs would be keen to make their
customers lives easier and reduce 
> support costs.

This may or may not be true.  I think it probably isn't.  The standard
method via a CSR is actually simpler, so I think that will continue to be
the predominant way of doing things.  I think it's more likely to remain
limited to large enterprise customers with unique requirements, IoT use
cases, and so on.

> > First, third parties who are *not* CAs can run key generation and 
> > escrow services, and then the third party service can apply for a  
> > certificate for the key, and deliver the certificate and the key to a
customer.
>
> That is true. Do you know how common this is in SSL/TLS?

I know it happens.  I can try to find out how common it is, and what the use
cases are.

> > Second, although I strongly believe that in general, as a best 
> > practice, keys should be generated by the device/entity it belongs to 
> > whenever possible, we've seen increasing evidence that key generation 
> > is difficult and many devices cannot do it securely.  I doubt that 
> > forcing the owner of the device to generate a key on a commodity PC is 
> > any better (it's probably worse).
> 
> That's also a really interesting question. We've had dedicated device key
generation failures, but we've also had 
> commodity PC key generation failures (Debian weak keys, right?). Does that
mean it's a wash? What do the risk 
> profiles look like here? One CA uses a MegaRNG2000 to generate hundreds of
thousands of certs.. and then a
> flaw is found in it. Oops.
> Better or worse than a hundred thousand people independently using a
broken OpenSSL shipped by their 
> Linux vendor?

I'd argue that the second is worse, since the large number of independent
people are going to have a much harder time becoming aware of the issue,
applying the appropriate fixes, and performing whatever remediation is
necessary.

The general rule is that you're able to do more rigorous things at scale
than you can when you're generating a key or two a year.

> > With an increasing number of small devices running web servers, keys 
> > generated by audited, trusted third parties under whatever rules 
> > Mozilla chooses to enforce about secure key delivery may actually in 
> > many circumstances be superior than what would happen if the practice is
banned.
> 
> Is there a way to limit the use of this to those circumstances?

I don't know but it's worth talking about.  I think the discussion should be
"when should this be allowed, and how can it be done securely?"

-Tim


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-11 Thread Steve Medin via dev-security-policy
Loosen the interpretation of escrow from a box surrounded by KRAs, KROs, and 
access controls with a rolling LTSK and escrow could describe what many white 
glove and CDN tier hosting operations do. The CDN has written consent, but the 
end customer never touches the TLS cert.


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+steve.medin=digicert@lists.mozilla.org] On Behalf Of Jeremy
> Rowley via dev-security-policy
> Sent: Monday, December 11, 2017 11:18 AM
> To: Gervase Markham <g...@mozilla.org>; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: RE: CA generated keys
> 
> I think key escrow services are pretty rare related to TLS certs. However,
> there's lots of CAs and services that escrow signing keys for s/MIME certs.
> Although, I'm not sure how companies can claim non-repudiation if they've
> escrowed the signing key, a lot of enterprises use dual-use keys and want at
> least the encryption portion in case an employee leaves.
> 
> -Original Message-
> From: dev-security-policy
> [mailto:dev-security-policy-
> bounces+jeremy.rowley=digicert.com@lists.mozilla
> .org] On Behalf Of Gervase Markham via dev-security-policy
> Sent: Monday, December 11, 2017 12:48 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: CA generated keys
> 
> Hi Tim,
> 
> The more I think about it, the more I see this is actually a interesting
> question :-)
> 
> I suspect the first thing Mozilla allowing this would do would be to make it
> much more common. (Let's assume there are no other policy
> barriers.) I suspect there are several simpler workflows for certificate
> issuance and installation that this could enable, and CAs would be keen to
> make their customers lives easier and reduce support costs.
> 
> On 09/12/17 18:20, Tim Hollebeek wrote:
> > First, third parties who are *not* CAs can run key generation and
> > escrow services, and then the third party service can apply for a
> > certificate for the key, and deliver the certificate and the key to a
> customer.
> 
> That is true. Do you know how common this is in SSL/TLS?
> 
> > I'm not
> > sure how this could be prevented.  So if this actually did end up
> > being a Mozilla policy, the practical effect would be that SSL keys
> > can be generated by third parties and escrowed, *UNLESS* that party is
> trusted by Mozilla.
> 
> Another way of putting it it: "unless that party were the party the customer
> is already dealing with and trusts". IoW, there's a much lower barrier for
> the customer in getting the CA to do it (trust and
> convenience) compared to someone else. So removing this ban would
> probably
> make it much more common, as noted above. If it's something we want to
> discourage even if we can't prevent it, the current ban makes sense.
> 
> > Second, although I strongly believe that in general, as a best
> > practice, keys should be generated by the device/entity it belongs to
> > whenever possible, we've seen increasing evidence that key generation
> > is difficult and many devices cannot do it securely.  I doubt that
> > forcing the owner of the device to generate a key on a commodity PC is
> > any better (it's probably worse).
> 
> That's also a really interesting question. We've had dedicated device key
> generation failures, but we've also had commodity PC key generation
> failures
> (Debian weak keys, right?). Does that mean it's a wash? What do the risk
> profiles look like here? One CA uses a MegaRNG2000 to generate hundreds
> of
> thousands of certs.. and then a flaw is found in it. Oops.
> Better or worse than a hundred thousand people independently using a
> broken
> OpenSSL shipped by their Linux vendor?
> 
> > With an increasing number of small devices running web servers, keys
> > generated by audited, trusted third parties under whatever rules
> > Mozilla chooses to enforce about secure key delivery may actually in
> > many circumstances be superior than what would happen if the practice
> is
> banned.
> 
> Is there a way to limit the use of this to those circumstances?
> 
> Gerv
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://clicktime.symantec.com/a/1/7IiyfqIYVYHVo4yJwZ1gujE6ewgPbVhd
> eNR8nQYMk
> tE=?d=-Wn_VctZunngdEk_ioG0-
> YmJpPH0bSY7avkVy2G5jkppW7WbRwmFtauXnqI4GVKzIanQD2
> ZA6NInKdI3JGkcf9ryTq6n-s4c4pg5s3wE4vkp4yda03M7jQfN5_Ag8-
> 70lEsjQb45m2On8sIoG_
> dT07uGS0eLuIUFBs5Ejb7aU7SMDef-
> aiw2SMmHSy34HrobgXESUV5rtJhwEAyCZvSWTdlhTt2mUM
> XVuNXdmFtAYun19fEnhCuxZTm44Inip_9XUfKb73PIvmELdwusC79xu_

RE: CA generated keys

2017-12-11 Thread Jeremy Rowley via dev-security-policy
I think key escrow services are pretty rare related to TLS certs. However,
there's lots of CAs and services that escrow signing keys for s/MIME certs.
Although, I'm not sure how companies can claim non-repudiation if they've
escrowed the signing key, a lot of enterprises use dual-use keys and want at
least the encryption portion in case an employee leaves.

-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+jeremy.rowley=digicert.com@lists.mozilla
.org] On Behalf Of Gervase Markham via dev-security-policy
Sent: Monday, December 11, 2017 12:48 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

Hi Tim,

The more I think about it, the more I see this is actually a interesting
question :-)

I suspect the first thing Mozilla allowing this would do would be to make it
much more common. (Let's assume there are no other policy
barriers.) I suspect there are several simpler workflows for certificate
issuance and installation that this could enable, and CAs would be keen to
make their customers lives easier and reduce support costs.

On 09/12/17 18:20, Tim Hollebeek wrote:
> First, third parties who are *not* CAs can run key generation and 
> escrow services, and then the third party service can apply for a  
> certificate for the key, and deliver the certificate and the key to a
customer.

That is true. Do you know how common this is in SSL/TLS?

> I'm not
> sure how this could be prevented.  So if this actually did end up 
> being a Mozilla policy, the practical effect would be that SSL keys 
> can be generated by third parties and escrowed, *UNLESS* that party is
trusted by Mozilla.

Another way of putting it it: "unless that party were the party the customer
is already dealing with and trusts". IoW, there's a much lower barrier for
the customer in getting the CA to do it (trust and
convenience) compared to someone else. So removing this ban would probably
make it much more common, as noted above. If it's something we want to
discourage even if we can't prevent it, the current ban makes sense.

> Second, although I strongly believe that in general, as a best 
> practice, keys should be generated by the device/entity it belongs to 
> whenever possible, we've seen increasing evidence that key generation 
> is difficult and many devices cannot do it securely.  I doubt that 
> forcing the owner of the device to generate a key on a commodity PC is 
> any better (it's probably worse).

That's also a really interesting question. We've had dedicated device key
generation failures, but we've also had commodity PC key generation failures
(Debian weak keys, right?). Does that mean it's a wash? What do the risk
profiles look like here? One CA uses a MegaRNG2000 to generate hundreds of
thousands of certs.. and then a flaw is found in it. Oops.
Better or worse than a hundred thousand people independently using a broken
OpenSSL shipped by their Linux vendor?

> With an increasing number of small devices running web servers, keys 
> generated by audited, trusted third parties under whatever rules 
> Mozilla chooses to enforce about secure key delivery may actually in 
> many circumstances be superior than what would happen if the practice is
banned.

Is there a way to limit the use of this to those circumstances?

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://clicktime.symantec.com/a/1/7IiyfqIYVYHVo4yJwZ1gujE6ewgPbVhdeNR8nQYMk
tE=?d=-Wn_VctZunngdEk_ioG0-YmJpPH0bSY7avkVy2G5jkppW7WbRwmFtauXnqI4GVKzIanQD2
ZA6NInKdI3JGkcf9ryTq6n-s4c4pg5s3wE4vkp4yda03M7jQfN5_Ag8-70lEsjQb45m2On8sIoG_
dT07uGS0eLuIUFBs5Ejb7aU7SMDef-aiw2SMmHSy34HrobgXESUV5rtJhwEAyCZvSWTdlhTt2mUM
XVuNXdmFtAYun19fEnhCuxZTm44Inip_9XUfKb73PIvmELdwusC79xu_WgoRGUvPUEFfEYMZQJLz
r1wo3PfgH3YtIhu55H4aSMlU8UOVe5JjW6WYG0wIKfKfGKta_cm5JB9HGONmcRvB8nw-A2xd5kr6
jSh2Pb6kH9EJMOhxcnioBU4Gm_IH7he9MnhbhTu2BATkoSNvbqOoNB=https%3A%2F%2Flists
.mozilla.org%2Flistinfo%2Fdev-security-policy


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-11 Thread Nick Lamb via dev-security-policy
On Sat, 9 Dec 2017 18:20:56 +
Tim Hollebeek via dev-security-policy
 wrote:

> First, third parties who are *not* CAs can run key generation and
> escrow services, and then the third party service can apply for a
> certificate for the key, and deliver the certificate and the key to a
> customer.  I'm not sure how this could be prevented.  So if this
> actually did end up being a Mozilla policy, the practical effect
> would be that SSL keys can be generated by third parties and
> escrowed, *UNLESS* that party is trusted by Mozilla. This seems .
> backwards, at best.

I'm actually astonished that CAs would _want_ to be doing this.

A CA like Let's Encrypt can confidently say that it didn't lose the
subscriber's private keys, because it never had them, doesn't want them.
If there's an incident where the Let's Encrypt subscriber's keys go
"walk about" we can start by looking at the subscriber - because that's
where the key started.

In contrast a CA which says "Oh, for convenience and security we've
generated the private keys you should use" can't start from there. We
have to start examining their generation and custody of the keys. Was
generation predictable? Were the keys lost between generation and
sending? Were they mistakenly kept (even though the CA can't possibly
have any use for them) after sending? Were they properly secured during
sending?

So many questions, all trivially eliminated by just not having "Hold
onto valuable keys that belong to somebody else" as part of your
business model.

> Second, although I strongly believe that in general, as a best
> practice, keys should be generated by the device/entity it belongs to
> whenever possible, we've seen increasing evidence that key generation
> is difficult and many devices cannot do it securely.

I do not have any confidence that a CA will do a comprehensively better
job. I don't doubt they'd _try_ but the problem is Debian were trying,
we have every reason to assume Infineon were trying. Trying wasn't
enough.

If subscribers take responsibility for generating keys we benefit from
heterogeneity, and the subscriber gets to decide directly to choose
better quality implementations versus lower costs. Infineon's "Fast
Prime" was optional, if you were happy with a device using a proven
method that took a few seconds longer to generate a key, they'd sell
you that. Most customers, it seems, wanted faster but more dangerous.

Aside from the Debian weak keys (which were so few you could usefully
enumerate all the private keys for yourself) these incidents tend to
just make the keys easier to guess. This is bad, and we aim to avoid
it, but it's not instantly fatal. But losing a customer keys to a bug
in your generation, dispatch or archive handling probably _is_
instantly fatal, and it's unnecessary when you need never have those
keys at all.


Nick.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-10 Thread Gervase Markham via dev-security-policy
Hi Tim,

The more I think about it, the more I see this is actually a interesting
question :-)

I suspect the first thing Mozilla allowing this would do would be to
make it much more common. (Let's assume there are no other policy
barriers.) I suspect there are several simpler workflows for certificate
issuance and installation that this could enable, and CAs would be keen
to make their customers lives easier and reduce support costs.

On 09/12/17 18:20, Tim Hollebeek wrote:
> First, third parties who are *not* CAs can run key generation and escrow
> services, and then the third party service can apply for a  certificate for
> the key, and deliver the certificate and the key to a customer.

That is true. Do you know how common this is in SSL/TLS?

> I'm not
> sure how this could be prevented.  So if this actually did end up being a
> Mozilla policy, the practical effect would be that SSL keys can be generated
> by third parties and escrowed, *UNLESS* that party is trusted by Mozilla.

Another way of putting it it: "unless that party were the party the
customer is already dealing with and trusts". IoW, there's a much lower
barrier for the customer in getting the CA to do it (trust and
convenience) compared to someone else. So removing this ban would
probably make it much more common, as noted above. If it's something we
want to discourage even if we can't prevent it, the current ban makes sense.

> Second, although I strongly believe that in general, as a best practice,
> keys should be generated by the device/entity it belongs to whenever
> possible, we've seen increasing evidence that key generation is difficult
> and many devices cannot do it securely.  I doubt that forcing the owner of
> the device to generate a key on a commodity PC is any better (it's probably
> worse).

That's also a really interesting question. We've had dedicated device
key generation failures, but we've also had commodity PC key generation
failures (Debian weak keys, right?). Does that mean it's a wash? What do
the risk profiles look like here? One CA uses a MegaRNG2000 to generate
hundreds of thousands of certs.. and then a flaw is found in it. Oops.
Better or worse than a hundred thousand people independently using a
broken OpenSSL shipped by their Linux vendor?

> With an increasing number of small devices running web servers,
> keys generated by audited, trusted third parties under whatever rules
> Mozilla chooses to enforce about secure key delivery may actually in many
> circumstances be superior than what would happen if the practice is banned.

Is there a way to limit the use of this to those circumstances?

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


CA generated keys

2017-12-09 Thread Tim Hollebeek via dev-security-policy
 

Apologies for the new thread.  It's difficult for me to reply to messages
that were sent before I joined Digicert.

 

With respect to CA generated SSL keys, there are a few points that I feel
should be considered.

 

First, third parties who are *not* CAs can run key generation and escrow
services, and then the third party service can apply for a  certificate for
the key, and deliver the certificate and the key to a customer.  I'm not
sure how this could be prevented.  So if this actually did end up being a
Mozilla policy, the practical effect would be that SSL keys can be generated
by third parties and escrowed, *UNLESS* that party is trusted by Mozilla.
This seems . backwards, at best.

 

Second, although I strongly believe that in general, as a best practice,
keys should be generated by the device/entity it belongs to whenever
possible, we've seen increasing evidence that key generation is difficult
and many devices cannot do it securely.  I doubt that forcing the owner of
the device to generate a key on a commodity PC is any better (it's probably
worse).  With an increasing number of small devices running web servers,
keys generated by audited, trusted third parties under whatever rules
Mozilla chooses to enforce about secure key delivery may actually in many
circumstances be superior than what would happen if the practice is banned.

 

-Tim

 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy