Many of the concerns you list below are already covered in different ways.
1. I believe (though others may know better) that the high general requirements for the security of CA systems also apply to the systems performing the validation procedures in question.

2. For all DV (Domain Validated) certificate validation methods, it is generally accepted that an attacker who can hijack access to a domain for the duration of the validation can fool even the most secure CA into issuing that attacker a DV certificate. This is an accepted limitation because the problem is fundamentally unsolvable.

3. The location from which to fetch the confirmation file for HTTP-based validation is generally dictated by the CA, not the applicant. So one CA might require the file to be at "http://www.example.com/check1234.html", another might require it to be at "http://www.example.com/.well-known/check5678.txt", and so on. One of the numerous issues that led to WoSign becoming distrusted was that they allowed the applicant to specify the port, leading to multiple unauthorized certificates being issued, some of which were not revoked even after WoSign was told about them! (A sketch of a port-restricted, CA-dictated fetch follows this list.)

4. Exact variations within the 10 permitted domain validation methods are very much up to the ingenuity of the CA doing the work. For example, the advanced secure checks developed by "Let's Encrypt" are technically just particularly strong variants of some of these 10 methods.
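To make point 3 concrete, here is a minimal Go sketch of a CA-side fetch in which the CA alone constructs the URL and any applicant-supplied port is rejected outright (the WoSign lesson). The "/.well-known/pki-validation/" path, the token, and the helper name are illustrative assumptions, not any real CA's code.

// Hypothetical sketch: the CA, not the applicant, dictates both the
// path and the (implicit, default) port of the confirmation file.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// fetchValidationFile retrieves the CA-chosen confirmation file over
// plain HTTP on the default port 80. The applicant never supplies a
// port or path; the caller then compares the returned content against
// the random value the CA handed out.
func fetchValidationFile(domain, token string) (string, error) {
	// Reject any attempt to smuggle in a port, e.g. "example.com:8080".
	if strings.Contains(domain, ":") {
		return "", fmt.Errorf("applicant-supplied port rejected: %s", domain)
	}
	url := fmt.Sprintf("http://%s/.well-known/pki-validation/%s.txt", domain, token)
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	content, err := fetchValidationFile("www.example.com", "check5678")
	if err != nil {
		fmt.Println("validation fetch failed:", err)
		return
	}
	fmt.Println("retrieved validation content:", content)
}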
On 18/07/2017 00:08, Matthew Hardeman wrote:

Hi all,

I was just reading through the Baseline Requirements -- specifically section 3.2.2.4 and its children -- and noted that while there are particular standards as to the blessed methods of validating authority and control over domain names (and host names within domain names), nothing is specified regarding the technical requirements for the infrastructure and procedures performing this validation. Instead, simple protocol names are called out as the method (such as over HTTP/HTTPS, or establishment of a TXT record in the DNS). Nothing more specific is noted.

My own background is originally in software development, with an emphasis on network applications. Additionally, I've been involved in quite a number of small and medium regional ISP interconnection matters over the years. I'm extremely familiar with the various mechanisms of ISP-to-ISP connectivity, whether via purchase from a transit ISP, direct private peering over an independent physical link, or connectivity over IXP switched infrastructure -- via private VLAN, private BGP session over switched Ethernet at the IXP, or IXP route servers -- or any combination of these (very common).

It has occurred to me that a small certificate authority might plausibly have its principal operations infrastructure in a single data center. Even where multiple ISPs provide access to this CA, they will almost inevitably be pulled from a single data center or a cluster of physically close data centers. Quite frequently, those ISPs will peer regionally with each other at one of a small number of data centers in the geographic region.

Presumably, best practice for a DNS challenge currently involves (a minimal sketch of steps 4 and 5 follows below):

1. Negotiate between the CA and the authentication client what actual DNS record needs to be created (a TXT record with a certain name, or similar).
2. The client creates the record and, if necessary, allows it to propagate through its own or its providers' infrastructure.
3. The client pings the CA as ready for the validation test.
4. The CA presumably uses a smart DNS resolver to resolve (with DNSSEC as far as possible) from the IANA root through the TLD name servers to determine the authoritative name servers for the zone in question.
5. With the authoritative DNS servers now known, the CA infrastructure queries one or more of them directly for the result, with a cache TTL of zero or near zero.

In actuality, if that is "best practice", it falls short of handling (or attempting to handle) certain network interconnection / interception attacks which could definitely be mitigated significantly, though imperfectly and at some cost. The trouble is that for many domains served by independent DNS infrastructure, you might only need to "steal" routing for a small network (say a /23) for a very brief period, and only at the nearest major interconnection hub to the CA's infrastructure, to briefly hijack the DNS queries from the CA to the authoritative DNS servers for the registered domain. If you know, or can proximately control, when the validation test will run, to within even minutes, it's quite possible the "route leak" wouldn't be noticed. I should note that it is similarly possible to leak such an advertisement to hijack an HTTP rather than DNS test.
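A minimal sketch of steps 4 and 5 above, assuming the third-party github.com/miekg/dns package. For brevity it asks a single public recursive resolver for the NS set rather than walking the delegation from the IANA root with DNSSEC as the text describes, and the "_acme-challenge" label merely mirrors Let's Encrypt's convention; zone and label are illustrative.

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	c := new(dns.Client)

	// Step 4 (simplified): discover the authoritative name servers for
	// the zone. A hardened CA resolver would iterate from the IANA root
	// with DNSSEC validation; asking one recursive resolver is a shortcut.
	nsQuery := new(dns.Msg)
	nsQuery.SetQuestion(dns.Fqdn("example.com"), dns.TypeNS)
	nsResp, _, err := c.Exchange(nsQuery, "9.9.9.9:53")
	if err != nil {
		log.Fatal(err)
	}
	var authServer string
	for _, rr := range nsResp.Answer {
		if ns, ok := rr.(*dns.NS); ok {
			authServer = strings.TrimSuffix(ns.Ns, ".")
			break
		}
	}
	if authServer == "" {
		log.Fatal("no NS records found for zone")
	}

	// Step 5: query the authoritative server directly for the challenge
	// TXT record, bypassing every shared resolver cache in between.
	txtQuery := new(dns.Msg)
	txtQuery.SetQuestion(dns.Fqdn("_acme-challenge.example.com"), dns.TypeTXT)
	txtResp, _, err := c.Exchange(txtQuery, authServer+":53")
	if err != nil {
		log.Fatal(err)
	}
	for _, rr := range txtResp.Answer {
		if txt, ok := rr.(*dns.TXT); ok {
			fmt.Printf("challenge TXT from %s: %v\n", authServer, txt.Txt)
		}
	}
}

Note that even this direct-to-authoritative query is exactly what the route hijack described above can intercept, which is what motivates the multi-vantage approach discussed next.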
While it will probably not be possible to guarantee that the route to infrastructure whose control a CA wishes to test can never be hijacked, there are definitely ways to greatly reduce the risk and significantly curb the number of organizations well positioned to execute such an attack.

Questions I pose:

1. Should specific procedures for how one _correctly_, per best practice, validates effective control of a file served up by a web server, or correctly validates a DNS challenge, be part of the Baseline Requirements and/or root program requirements?
2. What specific elements would strike the right balance between the need for security and the cost of implementation?
3. Are CAs today already contemplating this? I note that recent code commits to Let's Encrypt's Boulder CA include the notion of remotely located validation agents, coordination of the responses those agents received, and rules for quorate interpretation of the results of the dispersed validators. I cannot imagine that this work occurred in a vacuum, or without some thought about the kinds of risks I am describing.

Even if we stop short of specifying the kinds of disparate networks and locations that CA validation infrastructure should measure validations from, there are other questions I think are appropriate to discuss. For example, section 3.2.2.4.6 mentions validation via HTTP or HTTPS access to an FQDN at a given blessed path. It never says how to fetch that, or how to look up where to fetch it from. It may be tempting to say "in the absence of other guidance, behave like a browser." I believe, however, that this would be an error:

- A browser would accept non-standard ports; we should probably allow only 80 and 443.
- A browser wouldn't load over HTTPS if the current certificate were untrusted; this is presumably irrelevant to the validation check, so the validator should probably ignore the certificate.
- An HSTS preload might well be baked into a browser, but should probably be ignored by the validator.

At the network and interconnection layer, I think there are significant opportunities for a bad actor to compromise domain (and email, etc.) validation in ways that parties not intimately familiar with how service providers interconnect and route between themselves could fail to even minimally mitigate. If I am correctly reading between the lines of the commit messages and capabilities being built into Let's Encrypt's Boulder CA software, it would appear that others are concerned about the limitations inherent in relying on DNS queries originated from a single point for validation purposes. If that is the case, what is the appropriate forum to discuss the risks and potential mitigations? And if there is reasonable consensus on those, what is the proper place to lobby for adoption of standards?

Thanks,

Matt Hardeman
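A minimal sketch of the quorate, multi-vantage idea discussed above, loosely inspired by (but not copied from) Boulder's remote validation agents; the agent names, results, and quorum size are invented for illustration.

package main

import "fmt"

// ValidationResult is what one remotely located agent reports back
// after independently performing the same DNS or HTTP check.
type ValidationResult struct {
	Agent string
	OK    bool
}

// quorumPassed returns true only if at least `required` agents
// independently confirmed the challenge. With required == len(results),
// any single dissenting vantage point vetoes issuance.
func quorumPassed(results []ValidationResult, required int) bool {
	passed := 0
	for _, r := range results {
		if r.OK {
			passed++
		}
	}
	return passed >= required
}

func main() {
	// Imagine three agents on unrelated networks; the one observing the
	// hijacked route sees the attacker's answer and reports a failure.
	results := []ValidationResult{
		{Agent: "us-east", OK: true},
		{Agent: "eu-west", OK: true},
		{Agent: "ap-south", OK: false}, // e.g. saw a different TXT value
	}
	if quorumPassed(results, 3) {
		fmt.Println("all vantage points agree; proceed with issuance")
	} else {
		fmt.Println("vantage points disagree; refuse issuance and investigate")
	}
}

The design point is that requiring agreement from all, or at least a supermajority, of geographically and topologically diverse vantage points means a route hijack near any single query origination point is no longer sufficient to obtain a certificate.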
Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded