SHA-1 serverAuth cert issued by HydrantID (QuoVadis) in January 2017

2017-02-15 Thread Rob Stradling via dev-security-policy
This currently unrevoked cert has the serverAuth EKU and 
dNSName=qvsslrca3-v.quovadisglobal.com:

https://crt.sh/?id=83114602

Its issuer is trusted for serverAuth by Mozilla:
https://crt.sh/?caid=1333

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


SHA-1 serverAuth cert issued by Trustis in November 2016

2017-02-15 Thread Rob Stradling via dev-security-policy
This currently unrevoked cert has a SHA-1/RSA signature, the serverAuth 
EKU and CN=hmrcset.trustis.com:

https://crt.sh/?id=50773741&opt=cablint

It lacks the SAN extension, but that doesn't excuse it from the ban on 
SHA-1!


Its issuer is trusted for serverAuth by Mozilla:
https://crt.sh/?caid=920&opt=mozilladisclosure
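
Both properties flagged here can be checked locally with openssl. A minimal sketch: the self-signed certificate generated below is only a stand-in (the subject name is copied from above; the file paths are illustrative), since in practice one would inspect a PEM copy of the real certificate downloaded from crt.sh. Note `-addext` requires OpenSSL 1.1.1 or later.

```shell
# Stand-in certificate: self-signed, SHA-256, with a SAN extension.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=hmrcset.trustis.com" \
  -addext "subjectAltName=DNS:hmrcset.trustis.com" -sha256

# A SHA-1 cert would show "sha1WithRSAEncryption" here; a cert lacking the
# SAN extension would produce no "DNS:" line.
openssl x509 -in /tmp/demo-cert.pem -noout -text \
  | grep -E "Signature Algorithm|DNS:"
```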

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online


Re: Intermediates Supporting Many EE Certs

2017-02-15 Thread okaphone.elektronika--- via dev-security-policy
On Wednesday, 15 February 2017 18:27:28 UTC+1, Gervase Markham  wrote:
> On 13/02/17 17:34, okaphone.elektron...@gmail.com wrote:
> > Isn't this mostly something that CAs should keep in mind when they
> > setup "shop"?
> > 
> > I mean it would be nice to have a way of avoiding that kind of impact
> > of course, but if they think it's best to put all their eggs in one
> > basket... ;-)
> 
> Well, if it's harder for us to dis-trust an intermediate with many leafs
> due to the site impact, the CA may decide to do it that way precisely
> because it is harder!

Ehm... play chicken? Nah, perhaps better not. ;-)

So you really would like to make distrust more doable. But if it doesn't "hurt" 
enough you don't get the effect you want either. Difficult to know what level 
would be optimum.

So I guess that means what you really need is a certain scalability in the 
solution.

(Thanks for explaining. I'm just trying to understand what is happening here.)


Re: Intermediates Supporting Many EE Certs

2017-02-15 Thread Jakob Bohm via dev-security-policy

On 14/02/2017 22:03, Nick Lamb wrote:

> On Tuesday, 14 February 2017 17:55:18 UTC, Jakob Bohm  wrote:
>
>> Unfortunately, for these not-quite-web-server things (printers, routers
>> etc.), automating use of the current ACME Let's Encrypt protocol with
>> or without hardcoding the Let's Encrypt URL is a non-starter for anyone
>> using these things in a more secure network and/or beyond the firmware
>> renewal availability from the vendor.
>
> Whilst I agree there are challenges, I think greater automation is both
> possible and necessary for these things.


>> On a simple network where public certs are acceptable, such devices
>> will often need to get renewed certificates long past the availability
>> of upstream firmware updates to adapt to ecosystem changes (such as
>> Let's Encrypt switching to an incompatible ACME version in the year
>> 2026 or WoSign free certs becoming a thing of the past in 2016).
>
> Ecosystem changes that make stuff stop working are much more likely to be
> algorithmic changes (Does your printer know SHA-3? Elliptic curve crypto?
> Will it work if we need quantum-resistant crypto?).



Broken algorithms can still be used on closed networks where the
encryption is secondary to the perimeter protection.  The biggest problem
would be browsers aggressively removing algorithms by (once again)
failing to consider the intranet use cases.

The real-world equivalent is the use of ultra-primitive locks on the
inside doors of a house, while using high-quality locks on outside
doors (public servers).


>> On a secure network, existence and address of each such device should
>> not be revealed to an outside entity (such as Let's Encrypt admins),
>> let alone anyone who knows how to read CT logs.  For such devices I
>> generally use an in-house CA which is trusted only in-house and uses
>> the validation procedure "The subject is known personally to the CA
>> admin and the transport of the CSR and cert have been secured by
>> out-of-band means"


> Like the manual verification of SSH host fingerprints, I fear such a
> system most often looks successful because it's not coming up against
> any serious adversaries rather than because it's actually implemented
> in a sound way. Unless everybody is very careful it easily becomes the
> Yale lock of PKIs, successfully keeping out small children and sufficient
> to show legally that you intended to forbid entry, but not exactly an
> impediment to organised criminals.


> Also it's weird that you mentioned transporting the CSR and certificate
> out of band. I can kind of get that if you take the CSR from the device
> to the CA issuer by hand then you feel as though you avoid MITM
> replacement of the CSR so it makes your reasoning about the Subject
> simpler. But why the certificate?


Cert transport is important only for the devices where PKCS#12
transport is the norm.  Not every embedded CPU has a high-quality RNG.
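
This PKCS#12 workflow can be illustrated with openssl: the CA side generates the key pair on behalf of the device and hands over key and certificate as a single bundle. This is only an illustrative sketch under assumed names (file paths, subject, and password are all placeholders), not a description of the actual in-house setup:

```shell
# CA-side key generation for a device that cannot generate its own key;
# the certificate here is self-signed purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/device-key.pem -out /tmp/device-cert.pem \
  -subj "/CN=printer01.internal"

# Bundle key and certificate into one PKCS#12 file for out-of-band transport.
openssl pkcs12 -export -inkey /tmp/device-key.pem -in /tmp/device-cert.pem \
  -out /tmp/device.p12 -passout pass:changeit
```

The device then imports the single `.p12` file, avoiding any on-device key generation.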


> In this scenario (personal knowledge of subject's identity) I am
> currently fairly confident that something like SCEP is the right
> approach. As with your manual system I expect that SCEP will often
> be deployed in a fashion that does not resist attack, but in
> _principle_ it's possible to have this work well and unlike hordes
> of workers traipsing about with CSR files it might actually scale.




Who said anything about hordes of workers?  The process is centralized,
with a small number of people completing the whole process.  The process
would be different if the IoT devices did not pass through the central
office before deployment, or if a huge number of devices needed to be
set up on an assembly line.


>> Similarly, it would be useful to have an easily findable tool/script
>> for doing ACME in a semi-offline way that doesn't presume that the ACME
>> client has any kind of direct control over the servers that will be
>> configured with the certificates.  Such a tool could be installed once
>> by a site and then used to generate certs for the various "web-managed"
>> devices that need them.


> Probably for that type of environment you'd want to do DNS validation:
> That is, have your certificate-obtaining tool able to reach out to your
> DNS service and add TXT records for validation so that it can obtain a
> certificate for any name in the domains you control. Of course the
> parameters of how exactly this works will vary from one site to another,
> particularly depending on which DNS servers they use, and whether they're
> a Unix house or not. Also it matters whether you're going to have the
> devices create CSRs, or just inject a new private key when you give
> them a certificate.


> Here's an example of Steve, who hand-rolled such a solution; his approach
> also deploys the certificates via SSH but for the "web-managed" devices
> you mention that isn't an option and may need to remain manual for now.
>
> https://www.crc.id.au/using-centralised-management-with-lets-encrypt/
>
> You'll also see people building on other Unix shell tools like:
>
> https://github.com/srvrco/getssl  or  https://acme.sh/

Inevitably an 
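
The DNS-based validation discussed above is ACME's DNS-01 challenge. As a sketch of what such a certificate-obtaining tool computes before updating DNS (RFC 8555 §8.4: the TXT value is the unpadded base64url SHA-256 digest of the key authorization; the token and account-key thumbprint below are made-up placeholders):

```shell
# Placeholder challenge token and account-key thumbprint, both base64url strings.
token="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
thumbprint="9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"

# key authorization = token "." thumbprint; TXT value = base64url(SHA-256(...))
# with '+'/'/' mapped to '-'/'_' and '=' padding stripped.
txt=$(printf '%s.%s' "$token" "$thumbprint" \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A | tr '+/' '-_' | tr -d '=')

# The tool would publish this value as a TXT record at _acme-challenge.<name>.
echo "$txt"
```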

Re: Intermediates Supporting Many EE Certs

2017-02-15 Thread Gervase Markham via dev-security-policy
On 13/02/17 19:22, Jeremy Rowley wrote:
> As we tied the intermediate to a specific set of companies (which correlated
> roughly to a specific volume of certificates), renewal and pinning were
> non-issues. As long as each company was identified under the same umbrella,
> an entity renewing, ordering a new cert, or pinning received the same
> intermediate each time and was tied to the specific entity.

This seems like a sane idea. Any CA which was required to rotate its
intermediates would not be required to rotate them on a time basis; they
could choose any rotation scheme they liked which kept them within the
per-intermediate limits.

_However_, if multiple intermediates are in use at once, and
there is a process or other problem, the likelihood of them all being
affected is high. (The rest of the validation path would likely be the
same.) Therefore, you haven't necessarily solved the problem.

Can a more complex rotation scheme square this circle?

Gerv



Re: Intermediates Supporting Many EE Certs

2017-02-15 Thread Gervase Markham via dev-security-policy
On 13/02/17 17:34, okaphone.elektron...@gmail.com wrote:
> Isn't this mostly something that CAs should keep in mind when they
> setup "shop"?
> 
> I mean it would be nice to have a way of avoiding that kind of impact
> of course, but if they think it's best to put all their eggs in one
> basket... ;-)

Well, if it's harder for us to dis-trust an intermediate with many leafs
due to the site impact, the CA may decide to do it that way precisely
because it is harder!

Gerv


Re: Intermediates Supporting Many EE Certs

2017-02-15 Thread Gervase Markham via dev-security-policy
On 13/02/17 16:17, Steve Medin wrote:
> Getting all user agents with an interest in issuance limits to implement
> the CA Issuers form of AIA for dynamic path discovery, and educating
> server operators to get out of the practice of static chain
> installation on servers, would make CA rollovers fairly fluid and less
> subject to the operator error of failing to install the proper
> intermediate.

Regardless of the merits of this proposal, this is:
https://bugzilla.mozilla.org/show_bug.cgi?id=399324
which was reported 10 years ago and resolved WONTFIX a year ago. It
seems unlikely that this decision will be reversed, the feature
implemented in Firefox and in Chrome for Android, and the mechanism
then become ubiquitous, any time soon.

Gerv


Re: Suspicious test.com Cert Issued By GlobalSign

2017-02-15 Thread Gervase Markham via dev-security-policy
On 13/02/17 14:34, Doug Beattie wrote:
> This was for a GlobalSign account used for testing, so it was a
> GlobalSign employee.  Customers are not, nor have they ever been,
> permitted to add domains without GlobalSign enforcing the domain
> verification process.

But currently GlobalSign employees still are?

If so, can you help us understand why that's necessary? Given that you
control the domains used for testing, you should be able to set them up
to auto-pass some form of automated validation, so imposing a validation
requirement for every addition would not, at least on a superficial
understanding, lead to increased friction in testing.

Gerv



Re: GoDaddy Misissuance Action Items

2017-02-15 Thread Gervase Markham via dev-security-policy
On 13/02/17 23:13, Santhan Raj wrote:
> One thing to highlight here is that the WebTrust audits are performed
> against the BRs and not against the root program requirements. 

This is true, although (apart from the relative importance of domain
validation) this is similarly true of many items in the Mozilla
requirements which we cannot directly check.

> So hopefully
> Ballot 169 makes its way into the BRs soon.

I am hopeful that most or all of the methods will be back in the BRs soon.

Gerv
