On 20/07/2017 16:39, Gervase Markham wrote:
On 18/07/17 17:51, Matthew Hardeman wrote:
The broader point I wish to make is that much can be done to improve the 
strength of the various subsets of the 10 methods that rely solely on 
network-reliant automated validation methodologies.  The upside would be a 
significant, demonstrable increase in difficulty for even well-placed ISP 
admins to compromise a compliant CA's validation processes.  The downside would 
be increases in cost and complexity borne by the compliant CA.

Your point, in the abstract, is a reasonable one, but so is your further
point about trade-offs. The only way we can really make progress is for
you to propose specific changes to the language, and we can then discuss
the trade-offs of each.

I noticed that too.  I assume it is still tied up in IPR hell?

No. IPR issues are solved. We are currently in arguments about what, if
any, additional necessary fixes to the text should go into the "restore
the text" ballot and what should go into a subsequent ballot, along with
the question of whether and which existing domain validations to
grandfather in and which to require that they be redone.

I would advocate a level playing field here.  This would have the bonus upside 
of helping to fix bad DNSSEC deployments.  If broken DNSSEC broke the ability 
to get a certificate anywhere, the incorrect deployment would likely be fixed 
in the best case or rolled back in the worst.

Certainly for CAA, we don't allow broken DNSSEC to fail open. I hope
that will be true of DNS-based validation methods - either after 190
passes, or soon after that.

I believe there would be a massive improvement in the security of DNS query and 
HTTP client fetch type validations if the CA were required to execute multiple 
queries (ideally at least 3 or 4), sourced from different physical locations 
(with substantial network and geographic distance between them), each location 
utilizing significantly different Internet interconnection providers.

How could such a requirement be concretely specced in an auditable way?


This could be audited as part of general security/implementation
auditing.  Also, the CA could (and should) log the list of deployed
probes that checked, or soft-failed, each domain as part of the usual
evidence logging.
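
As a purely hypothetical illustration (the field names, probe
identifiers and record layout below are made up for this message, not
an existing format), such an evidence record could look roughly like
this, written as a Python literal:

validation_evidence = {
    "domain": "example.com",
    "method": "BR 3.2.2.4.6 (agreed-upon change to website)",
    "timestamp": "2017-07-20T14:39:00Z",
    "probes_queried":    ["probe-03", "probe-07", "probe-11", "probe-14", "probe-19"],
    "probes_agreeing":   ["probe-03", "probe-07", "probe-11", "probe-19"],
    "probes_softfailed": ["probe-14"],
    "result_accepted": True,
}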

As this would probably require most CAs to set up additional "probe
servers" at diverse locations, while still maintaining an auditably
high level of network security, a longer-than-usual phase-in for such a
requirement would be in order.  (I am thinking mostly of smaller CAs
here, whose security may previously have been based on keeping
everything except off-line backups in one or two secure buildings.)

A new requirement would be that as part of the 10 approved methods:
 - All DNS lookups should be done from at least 5 separate locations
  with Internet connectivity from different ISPs.  4 out of 5 must
  return the same result before that result is used, either directly
  or as part of a second step (see the sketch after this list).
 - All repeatable network connections (such as HTTP probes and whois
  lookups) must be done from 5 separate locations with Internet
  connectivity from different ISPs, using DNS results checked as above;
  again, 4 out of 5 must agree.
 - All difficult-to-repeat network connections (such as sending mail)
  must be done from randomly selected locations chosen out of at least
  4 that are simultaneously available (not down) and have Internet
  connectivity from different ISPs, still using DNS results checked
  as above.
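
To make the agreement rule concrete, here is a minimal sketch in Python
(illustrative only, not proposed ballot language); how each probe is
actually queried is assumed to be handled elsewhere:

from collections import Counter

CONFIGURED_PROBES = 5    # probes selected for this validation
REQUIRED_AGREEMENT = 4   # matching answers needed before a result is used

def quorum_answer(probe_answers):
    """Return the agreed answer, or None if validation must be refused.

    probe_answers holds one entry per configured probe: a hashable
    answer (e.g. a sorted tuple of A records, or the token fetched over
    HTTP), or None for a probe that was down or timed out.
    """
    assert len(probe_answers) == CONFIGURED_PROBES
    live = [a for a in probe_answers if a is not None]
    if not live:
        return None
    answer, votes = Counter(live).most_common(1)[0]
    return answer if votes >= REQUIRED_AGREEMENT else None

Note that this also captures the degraded modes discussed under the
implementation examples below: with one probe offline the remaining
four must all agree, and with two probes offline at most three answers
can match, so validation is refused automatically.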

The exact number and details of the separate locations should be kept
secret, known only to the auditors and a small number of CA employees,
so that attackers will not know when and where to set up
man-in-the-middle network attacks such that 80% of the probes are
fooled.

Implementation examples (not requirements):

In practice, a CA would typically set up 5 "probe" servers around the
geographic area served (which may be a country, a continent or the
world), each capable of relaying the relevant network traffic from the
central validation system.  If one "probe" goes offline, validation can
continue, but with no failures allowed, while if two out of 5 go down,
validation cannot be done (thus some CAs may want to use 10 or more
locations for added redundancy).
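
As a sketch of the relaying idea (assuming, purely for illustration,
that each probe exposes a SOCKS5 proxy over its VPN link; the addresses
are placeholders and the requests/PySocks dependency is just one
possible implementation):

import requests   # needs the PySocks extra for socks5h:// proxies

PROBE_PROXIES = {
    "probe-1": "socks5h://10.10.1.1:1080",   # placeholder addresses for
    "probe-2": "socks5h://10.10.2.1:1080",   # the CA's own probe servers
    "probe-3": "socks5h://10.10.3.1:1080",
    "probe-4": "socks5h://10.10.4.1:1080",
    "probe-5": "socks5h://10.10.5.1:1080",
}

def fetch_token_from_all_probes(url, timeout=10):
    """Fetch the validation URL once per probe; a failed probe yields None."""
    answers = []
    for proxy in PROBE_PROXIES.values():
        try:
            resp = requests.get(url, proxies={"http": proxy, "https": proxy},
                                timeout=timeout, allow_redirects=False)
            answers.append(resp.text.strip() if resp.status_code == 200 else None)
        except requests.RequestException:
            answers.append(None)
    return answers   # feed this into the 4-out-of-5 check sketched above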

The "probe" servers could be relatively simple VPN boxes, carefully
hardened and audited and then encased in welded shut steel boxes before
being transported to 3rd party data centers.  Central software
continuously verifies that it is talking to a box with a known
private/public key and that various network tests confirm that the box
is still connected to the expected remote network as seen both from
inside and outside.  A CA employee should also be dispatched to
physically check after any power or connectivity failure, but this may
be delayed by a few days.
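
The key check could be as simple as a signed challenge against a pinned
key; a rough sketch (assuming an Ed25519 key pinned when the box is
built, with ask_probe_to_sign standing in for whatever transport the CA
actually uses between the central system and the probe):

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def probe_is_authentic(pinned_key: Ed25519PublicKey, ask_probe_to_sign) -> bool:
    """Challenge the probe with a fresh nonce and verify the signature
    against the public key pinned before the box was welded shut."""
    nonce = os.urandom(32)
    signature = ask_probe_to_sign(nonce)   # placeholder for the probe-side call
    try:
        pinned_key.verify(signature, nonce)
        return True
    except InvalidSignature:
        return False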

Keeping extra probes and not always using all of them can also help hide
the complete list of probe locations from attackers (who might otherwise
just log the accesses to one of their own servers during a legitimate request).
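
Drawing the per-validation subset could be as simple as the following
(probe names are placeholders; the point is only that the choice uses a
strong random source and no single request exposes the whole list):

import secrets

ALL_PROBES = ["probe-a", "probe-b", "probe-c", "probe-d",
              "probe-e", "probe-f", "probe-g"]   # more than the 5 used per request

def pick_probes(n=5):
    """Choose n distinct probes using a cryptographically strong RNG."""
    pool = list(ALL_PROBES)
    return [pool.pop(secrets.randbelow(len(pool))) for _ in range(n)]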

A public multi-location anonymity network such as Tor could be used as
a supplemental check, but it cannot be audited to CA network security
standards and thus could only be an additional check, not a replacement
for the CA's own probes.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
