> Yes, however I don't think Matthew's concern was about systems owned by the 
> CA but rather systems proximate to them in the network. For example if the CA 
> purchases Internet service from a single local Internet Service Provider, the 
> BRs obviously don't require that this ISP have all the security procedures in 
> place of a CA. That ISP would be able to act as a MitM for the entire rest of 
> the Internet, and isn't subject to the BRs so that this might happen at the 
> whim of a 17-year old unpaid intern let loose with the routing tables.

You are correct as to my concern, except that it is more insidious even than 
that.  It is not merely a question of trusting your ISP.  The CA's ISP need 
do nothing wrong.  Another ISP trusted by the CA's ISP could be the vector 
for injecting a quite temporary and very narrowly scoped route hijack.  
Furthermore, it can absolutely be done even if the CA's ISP's primary IP 
transit provider purchases transit from another ISP (this is quite common) 
which in turn trusts other peers.

For example, I myself manage a network interconnected to the Digital Realty / 
Telx ATL1 TIE (Telx Internet Exchange).  Across that exchange, I have (for 
example) a peering session with Hurricane Electric.  I have no doubt I could 
briefly leak a prefix to HE that would get picked up.  Another ISP that uses 
HE as a primary transit path would almost certainly accept the advertisement 
from HE, and that traffic would flow my way.  For many ISPs the scope of this 
would be limited to the southeast USA in my example, but assuming I were 
targeting a CA in the southeast USA, that would be a bonus -- it would severely 
limit the number of worldwide eyes who might notice my brief hijacking 
activity.  If I wanted to target a west coast CA in the Bay Area, Seattle, 
or the LA area, I would just need to be one of a universe of hundreds of 
well-peered network participants on the prevailing IXP at San Francisco / 
Palo Alto, the Seattle Westin building, or CoreSite's One Wilshire, 
respectively.

> Only some of the 10 Blessed Methods involve the actual network. Domain 
> Authorization Documents would get the job done and needn't travel over the 
> network. If your main threat is from network-based adversaries, such 
> documents are an admirable choice to prevent that.

Of course, but the real threat one faces is what other CAs will accept as 
proof, not what one would wish other CAs to accept as proof.  CAA obviously 
does a great deal to help here, especially in combination with DNSSEC.
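
To illustrate, here is a minimal sketch of the CAA-plus-DNSSEC check, in 
Python using the dnspython library.  The resolver address and domain are 
illustrative placeholders; the point is that the answer is trusted only when 
a DNSSEC-validating resolver sets the AD (Authenticated Data) bit:

import dns.flags
import dns.resolver

def fetch_validated_caa(domain, validating_resolver="9.9.9.9"):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [validating_resolver]
    # Set the DO bit so DNSSEC material is requested and the resolver
    # reports validation status via the AD flag in its response.
    resolver.use_edns(0, dns.flags.DO, 1232)
    answer = resolver.resolve(domain, "CAA")
    if not (answer.response.flags & dns.flags.AD):
        raise RuntimeError("CAA answer was not DNSSEC-authenticated")
    return [(rr.flags, rr.tag.decode(), rr.value.decode())
            for rr in answer]

print(fetch_validated_caa("example.com"))

An unauthenticated (or spoofed) answer fails loudly rather than being 
silently accepted.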

The broader point I wish to make is that much can be done to improve the 
strength of the subset of the 10 methods that rely solely on network-dependent 
automated validation.  The upside would be a significant, demonstrable 
increase in the difficulty for even well-placed ISP admins to compromise a 
compliant CA's validation processes.  The downside would be increases in cost 
and complexity borne by the compliant CA.

> [Aside: Did the CA/B really still not manage to pass a resolution fixing the 
> list of Blessed Methods all these months later? I guess Mozilla's 
> intervention here was more necessary than I'd appreciated]

I noticed that too.  I assume it is still tied up in IPR hell?

> Where a domain has enabled DNSSEC, it is possible for the CA to rely upon 
> DNSSEC to prevent tampering with records for that domain. So that secures 
> DNS-based validations. We can argue about whether the DNSSEC cryptography 
> would withstand attack by a resourceful adversary, but it certainly raises 
> the bar very considerably compared to just fiddling with a routing table.

This does greatly enhance the defensive capability for a given domain.

> Unlike a typical end user, the CA is certainly in a position to implement 
> DNSSEC validation in its DNS resolver correctly and to reject attempts to 
> validate control which run into problems with DNS server correctness. I know 
> that Let's Encrypt does this, and judging from their user forums a small but 
> noticeable fraction of applicants run into problems because their DNS server 
> is crap and replies SERVFAIL (or times out) for legal DNS queries.

Agreed.  At least let any tax related to implementation of DNSSEC fall where it 
is due -- upon the party that incorrectly implemented it.
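
As a sketch of that principle, the CA's validation client could treat 
SERVFAIL and timeouts as hard validation failures rather than retrying 
insecurely or falling back.  Again in Python with dnspython; the query name 
and record type are illustrative:

import dns.exception
import dns.resolver

def lookup_or_fail(qname, rdtype="TXT"):
    try:
        return dns.resolver.resolve(qname, rdtype)
    except dns.resolver.NoNameservers as exc:
        # Every server answered SERVFAIL or was otherwise broken --
        # commonly the symptom of a bad DNSSEC deployment.
        raise RuntimeError(f"validation refused, servers failed: {exc}")
    except dns.exception.Timeout:
        raise RuntimeError("validation refused: DNS query timed out")

The cost of the broken deployment then falls on the party that broke it, not 
on the CA or on relying parties.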


> There is doubtless a strong temptation for commercial reasons for a CA to 
> ignore such problems and press on with the compromised validation, but the 
> BRs don't require that, and it would not be unreasonable to "level the 
> playing field" by updating them, or Mozilla's programme requirements, to 
> demand the CA reject validation when an applicant's DNS servers won't answer 
> correctly.

I would advocate a level playing field here.  This would have the bonus upside 
of helping to fix bad DNSSEC deployments.  If broken DNSSEC broke the ability 
to get a certificate anywhere, the incorrect deployment would likely be rolled 
back in the worst case, or fixed in the best.

> 
> > 3. The location from which to fetch the confirmation file for HTTP based
> >    validation is generally dictated by the CA, not the applicant.
> 
> The Blessed Methods specifically call out a sub-path of the IETF's reserved 
> /.well-known/ URLs for this purpose. ACME has its own path, which being 
> IETF-standardized will be suitable as well (the Blessed Methods say you can 
> use a different path if it's on IANA's list and IETF standardization includes 
> adding things to IANA's list as an automatic step), but unless somebody else 
> in the industry has an IETF standards track protocol under the radar those 
> are the only two valid choices under the Blessed Methods.
> 
> There definitely are lots of non-Blessed Methods approaches deployed when I 
> last looked (and so perhaps still today) which use other paths, but you're 
> correct that they're usually chosen by the CA. This is always going to be 
> more dangerous than letting the IETF control it, so the Blessed Methods are 
> making a good change here.

I was terribly unclear as to my meaning here and I apologize.  I was not 
speaking of the URL path segment at all.  I was speaking of the physical and 
logical points at which the querying element interconnects to the internet -- 
that is, the network vantage point from which the CA's validation queries 
originate toward the target resources being validated.

I believe there would be a massive improvement in the security of DNS query 
and HTTP fetch style validations if the CA were required to execute multiple 
queries (ideally at least 3 or 4), sourced from different physical locations 
with substantial network and geographic distance between them, each location 
utilizing significantly different internet interconnection providers.
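
A minimal sketch of that quorum idea, once more in Python with dnspython.  In 
a real deployment each vantage point would be a query agent in a separate 
facility behind a different transit provider; here, purely as stand-ins, 
distinct public resolvers play that role, and the names and quorum threshold 
are illustrative:

import dns.resolver

VANTAGES = ["1.1.1.1", "8.8.8.8", "9.9.9.9", "208.67.222.222"]
QUORUM = 3  # at least 3 of the 4 perspectives must agree

def multi_perspective_lookup(qname, rdtype="A"):
    answers = []
    for nameserver in VANTAGES:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        try:
            rrset = resolver.resolve(qname, rdtype)
            answers.append(frozenset(rr.to_text() for rr in rrset))
        except Exception:
            # A failed perspective never counts toward the quorum.
            answers.append(None)
    # Accept only an answer set that enough perspectives agree on.
    for candidate in set(a for a in answers if a is not None):
        if answers.count(candidate) >= QUORUM:
            return candidate
    raise RuntimeError("perspectives disagree; refusing to validate")

print(multi_perspective_lookup("example.com"))

An attacker must then hijack routes visible from several topologically 
distant networks at once, rather than from a single well-placed ISP.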

Although this would massively increase the burden of quietly and momentarily 
hijacking DNS server IPs in order to trick a CA, I believe there is presently 
no commercial impetus or advantage in the marketplace for a CA to implement 
such security measures.  On that basis, I raise the question of whether this 
is the appropriate forum to begin a dialogue on what rules or requirements 
should issue upon CAs to combat the risk.