RE: Symantec Update on SubCA Proposal

2017-08-13 Thread Jeremy Rowley via dev-security-policy
Hi wizard,

Although DigiCert will acquire the assets related to Symantec’s CA business, 
DigiCert is not required to use those assets in its business operations.  We 
are organizing the operations of DigiCert to meet the requirements established 
in the Managed CA proposal. This includes having all validation and issuance 
performed through DigiCert’s existing PKI and using DigiCert processes under 
DigiCert leadership.  

Our interpretation of the Google and Mozilla requirements is similar to yours – 
that the goal is to migrate from Symantec’s existing PKI to a third party while 
implementing systematic and operational controls over the issuing and 
validation processes.  Post close, we plan to continue towards these objectives 
using the path adopted by the browsers in the Managed CA process. This path 
includes regular audits during the transition, a migration away from Symantec’s 
issuing and validation systems, and implementation of operational controls to 
prevent mis-issuance.  Our plan is to transition completely away from the 
Symantec issuance platform and validation processes by December 1 and work 
towards the distrust dates set by Mozilla for the end of 2018.  

The Managed CA requirements seemed designed to (1) give Symantec time to 
reengineer processes and systems and (2) work towards rebuilding trust in 
Symantec’s operations.  The acquisition eliminates the need to reengineer the 
process and makes the question of restoring trust moot.  With only DigiCert 
performing the validation and operating the CA, the risks the Managed CA 
proposal was designed to address are remediated as of closing.

Of course, we’re always open to feedback and additional ideas on how to build 
community trust.  Feel free to message us or submit follow-up questions and 
ideas about how we can answer the community’s concerns. 

Thanks!

Jeremy



-----Original Message-----
From: dev-security-policy 
[mailto:dev-security-policy-bounces+jeremy.rowley=digicert@lists.mozilla.org]
 On Behalf Of wizard--- via dev-security-policy
Sent: Friday, August 11, 2017 9:12 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Symantec Update on SubCA Proposal

Steve,

Thank you for responding relatively promptly (at least as compared to previous 
Symantec responses) to Devon's questions.

However, these responses seem to imply that a side effect of the sale *is* to 
skirt the remediation requirements imposed by Google and Mozilla. 

In particular, the agreed-upon plan requires issuance (and information 
verification) by a managed SubCA that does *not* involve Symantec processes, 
equipment, personnel, etc., until trust in that equipment and those people and 
processes is established.

If Digicert were *not* acquiring any of the equipment/personnel/processes from 
Symantec, only the customers, this would seem to meet the spirit and letter of 
the Symantec remediation plan. 

However, the publicly announced details of the acquisition [Devon ref. 2] 
explicitly state that equipment and personnel will be transferred from Symantec 
to Digicert. Combined with the answers below, this means that as soon as the 
deal closes and this transfer occurs, there is no barrier preventing the 
formerly-Symantec-but-now-Digicert equipment and personnel from immediately 
assisting in the issuance of new certificates (presumably under the Digicert 
roots). This seems to go against the spirit (and possibly letter) of the 
remediation plan, which was designed to prevent the bad practices within the 
existing Symantec CA organization from being involved in further issuances 
until a level of trust could be demonstrated. 

Perhaps you or Digicert could clarify why you believe the above to not be the 
case.

Thank you.

On Friday, August 11, 2017 at 8:32:33 PM UTC-4, Steve Medin wrote:
> > -----Original Message-----
> > From: dev-security-policy [mailto:dev-security-policy-
> > bounces+steve_medin=symantec@lists.mozilla.org] On Behalf Of
> > Devon O'Brien via dev-security-policy
> > Sent: Wednesday, August 09, 2017 12:24 PM
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: [EXT] Re: Symantec Update on SubCA Proposal
> >
> > Hello m.d.s.p.,
> >
> > I'd just like to give the community a heads up that Chrome’s plan 
> > remains to put up a blog post echoing our recent announcement on 
> > blink-dev [1], but in the meantime, we are reviewing the facts 
> > related to Symantec’s sale of their PKI business to DigiCert [2].
> >
> > Recently, it has come to our attention that Symantec may have 
> > selected DigiCert from the RFP process to become a Managed CA 
> > Partner. As defined in Google’s first Managed CA proposal [3], then 
> > supported by Symantec’s commitment to “[cover] all aspects of the 
> > SubCA proposal” [4], and finally reiterated in Google’s final 
> > proposal [1], the requirement has always been that the Managed 
> > Partner Infrastructure be operated by an independent and 
> > non-affiliated CA while 

Re: 2017.08.10 Let's Encrypt Unicode Normalization Compliance Incident

2017-08-13 Thread Peter Bowen via dev-security-policy
On Sun, Aug 13, 2017 at 5:59 PM, Matt Palmer via dev-security-policy wrote:
> On Fri, Aug 11, 2017 at 06:32:11PM +0200, Kurt Roeckx via dev-security-policy 
> wrote:
>> On Fri, Aug 11, 2017 at 11:48:50AM -0400, Ryan Sleevi via 
>> dev-security-policy wrote:
>> >
>> > Could you expand on what you mean by "cablint breaks" or "won't complete in
>> > a timely fashion"? That doesn't match my understanding of what it is or how
>> > it's written, so perhaps I'm misunderstanding what you're proposing?
>>
>> My understanding is that it used to be very slow for crt.sh, but
>> that something was done to speed it up. I don't know if that change
>> was something crt.sh specific. I think it was changed to not
>> always restart, but have a process that checks multiple
>> certificates.
>
> I suspect you're referring to the problem of certlint calling out to an
> external program to do ASN.1 validation, which was fixed in
> https://github.com/awslabs/certlint/pull/38.  I believe the feedback from
> Rob was that it did, indeed, do Very Good Things to certlint performance.

I just benchmarked the current cablint code, using 2000 certs from CT
as a sample.  On a single thread of an Intel(R) Xeon(R) CPU E5-2670 v2 
@ 2.50GHz, it processes 394.5 certificates per second.  This is 2.53 ms
per certificate or 1.4 million certificates per hour.
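
For anyone checking the arithmetic, those figures follow directly from the
measured rate; a quick back-of-the-envelope in Python (nothing here beyond the
394.5/sec number above):

    rate = 394.5                  # measured certificates per second, one thread
    ms_per_cert = 1000 / rate     # ~2.53 ms per certificate
    per_hour = rate * 3600        # ~1.42 million certificates per hour
    print(f"{ms_per_cert:.2f} ms/cert, {per_hour:,.0f} certs/hour")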

Thank you Matt for that patch!  This was a _massive_ improvement over
the old design.

Thanks,
Peter


Re: 2017.08.10 Let's Encrypt Unicode Normalization Compliance Incident

2017-08-13 Thread Matt Palmer via dev-security-policy
On Fri, Aug 11, 2017 at 06:32:11PM +0200, Kurt Roeckx via dev-security-policy 
wrote:
> On Fri, Aug 11, 2017 at 11:48:50AM -0400, Ryan Sleevi via dev-security-policy 
> wrote:
> > On Fri, Aug 11, 2017 at 11:40 AM, Nick Lamb via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> > 
> > > On Friday, 11 August 2017 14:19:57 UTC+1, Alex Gaynor  wrote:
> > > > Given that these were all caught by cablint, has Let's Encrypt 
> > > > considered
> > > > integrating it into your issuance pipeline, or automatically monitoring
> > > > crt.sh (which runs cablint) for these issues so they don't need to be
> > > > caught manually by researchers?
> > >
> > > The former has the risk of being unexpectedly fragile,
> > 
> > 
> > Could you expand on this? It's not obvious what you mean.
> > 
> > 
> > > This way: If cablint breaks, or won't complete in a timely fashion during
> > > high volume issuance, it doesn't break the CA itself. But on the other 
> > > hand
> > > it also doesn't wail on Comodo's generously offered public service crt.sh.
> > >
> > 
> > Could you expand on what you mean by "cablint breaks" or "won't complete in
> > a timely fashion"? That doesn't match my understanding of what it is or how
> > it's written, so perhaps I'm misunderstanding what you're proposing?
> 
> My understanding is that it used to be very slow for crt.sh, but
> that something was done to speed it up. I don't know if that change
> was something crt.sh specific. I think it was changed to not
> always restart, but have a process that checks multiple
> certificates.

I suspect you're referring to the problem of certlint calling out to an
external program to do ASN.1 validation, which was fixed in
https://github.com/awslabs/certlint/pull/38.  I believe the feedback from
Rob was that it did, indeed, do Very Good Things to certlint performance.
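
To make concrete why calling out to an external program per certificate hurts
so much, here is a rough sketch in Python, not certlint's actual code (certlint
is Ruby, and the `asn1check` command and helper names below are hypothetical),
contrasting a fresh subprocess per certificate with one long-lived checker fed
a whole batch over stdin:

    import subprocess

    def lint_one_per_process(der_path):
        # Hypothetical: spawn a fresh ASN.1 checker for every certificate.
        # At hundreds of certificates per second, process startup dominates.
        result = subprocess.run(["asn1check", der_path],
                                capture_output=True, text=True)
        return result.stdout

    def lint_batch_single_process(der_paths):
        # Hypothetical: start the checker once and stream every path to it,
        # amortizing the startup cost across the whole batch.
        proc = subprocess.Popen(["asn1check", "--stdin"],
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                                text=True)
        out, _ = proc.communicate("\n".join(der_paths))
        return out

Whether the real fix keeps a long-lived process or moves the check in-process,
the win is the same: the per-certificate startup cost disappears.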

- Matt



Re: Certificates with less than 64 bits of entropy

2017-08-13 Thread Nick Lamb via dev-security-policy
On Sunday, 13 August 2017 04:04:45 UTC+1, Eric Mill  wrote:
> While not every issuing CA may take security seriously enough to employ
> engineers on staff who can research, author and deploy a production code
> fix in a 24 hour period, every issuing CA should be able to muster the
> strength to keep the community informed of their plans and progress in
> however long it takes to address the issue.

In my opinion the correct incentive structure here is: we don't care whether 
you ever start issuing again, but you have a limited time to stop the 
problem; if you can't fix it quickly, that will mean ceasing issuance.

Switching off the issuance pipeline in a timely fashion when a problem is 
uncovered (so that things stop getting worse) needs to be something every CA 
can do. It should always be within the skill set of personnel available "on 
call" when things go wrong. But whether they have engineers able to actually 
fix a problem the same day, the next day or a month later is an operational 
detail for the CA leadership. For commercial CAs there is presumably some 
trade-off between the need to be seen as a reliable supplier for repeat 
subscribers and the cost of having on-call engineers. But it needn't concern 
m.d.s.policy where they think best to draw the line, so long as they prevent 
the problem recurring by switching off an affected issuance pipeline until it's 
fixed.

I am minded to draw a comparison to "emergency plumber" services. Despite it 
being an "emergency", the plumber will be no quicker to source parts 
from a discontinued product line, or to plan and install complex new systems, than 
a non-emergency plumber. Those things may still take weeks. But what they can 
always do immediately is switch off supply of water or gas so as to stop things 
getting worse.