On 2/5/20 23:55, Brian Dickson wrote:
> HTTPS certificate validation is based on the assumption that the client connects to the actual IP of the server (aka "the real server"). The real server presents the certificate, and the TLS handshake proves that the server possesses the private key of the certificate.
>
> Loss of control of the private key is an issue ONLY if an attacker is able to direct a client to an IP under control of the attacker.
>
> The DNS component of WebPKI certificate validation is critical; it goes "FQDN" -> DNS -> IP -> (TLS connection with SNI, get cert, validate cert via handshake). You can't connect to some random IP that hands you the cert you ask for; you have to know you are contacting the legitimate web site first (via the DNS-to-IP step).
>
> The issue isn't actually WebPKI /mismatches/, so much as it is the non-possession of the private key for the /real/ certificate (or issuance of another certificate matching the name).
>
> If an attacker is able to obtain the private key, there is no difficulty in obtaining the certificate itself (indeed, the real server serves it up, as required by TLS).
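[For readers following along, the chain described above can be sketched as a toy model. This is not a real resolver or TLS stack; every name, address, and type here is invented for illustration, and "key possession" stands in for what a real handshake proves cryptographically.]

```python
# Toy model of the FQDN -> DNS -> IP -> certificate-validation chain.
# All names, IPs, and the Cert type are illustrative, not real APIs.
from dataclasses import dataclass

@dataclass
class Cert:
    subject_name: str             # name the certificate was issued for
    holder_has_private_key: bool  # did the handshake prove key possession?

# The client's (possibly poisoned) view of DNS.
dns_cache = {"www.example.com": "192.0.2.10"}

# What each IP presents during the (simulated) TLS handshake.
servers = {
    "192.0.2.10": Cert("www.example.com", True),    # the real server
    "203.0.113.6": Cert("www.example.com", False),  # attacker: right cert, no key
}

def connect(fqdn: str) -> bool:
    """True only if the full chain holds: DNS gives an IP, the cert at
    that IP matches the requested FQDN, and the peer proved key possession."""
    ip = dns_cache.get(fqdn)
    if ip is None:
        return False
    cert = servers[ip]
    return cert.subject_name == fqdn and cert.holder_has_private_key

print(connect("www.example.com"))   # True: legitimate chain end to end

# Cache poisoning redirects the FQDN to the attacker's IP; serving up the
# real certificate does not help without the private key.
dns_cache["www.example.com"] = "203.0.113.6"
print(connect("www.example.com"))   # False: handshake cannot prove possession
```

Note the converse in the toy model: if the attacker *does* hold the private key, the poisoned path succeeds, which is exactly the combination at issue in this thread.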


Sure, and I'm certain that whatever material the legitimate zone administrator uses to authenticate changes to the DNS zone can also be exfiltrated. Security relies on administrative hygiene as much as it does on technical mechanisms.



> The only roadblock to an attacker using a certificate after the private key is obtained is DNS.


Or BGP hijacking, which is on the same order of magnitude of difficulty as what you posit.


> Only DNSSEC can protect against cache-poisoning attacks, by which an attacker can direct clients to an IP under the attacker's control and succeed in impersonating the legitimate web site, which could be used to elicit personal information from unwary users. (Cache-poisoning attacks aren't completely trivial, but they are many orders of magnitude more feasible than brute-force attacks against DNSSEC or WebPKI keys.)
>
> So, the argument fails because it is reduced to the claim that "private keys can't be obtained by attackers" (which clearly cannot be guaranteed).
>
> Maybe some language that explains how/why the risk exists would help clarify the threat?
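[The DNSSEC protection being claimed above can be sketched as a toy model as well. Real DNSSEC uses public-key RRSIGs and a chain of trust to the root; the HMAC below is a deliberately simplified stand-in for signature verification, and every name, key, and address is made up for illustration.]

```python
# Toy sketch: a validating resolver accepts an answer only if a signature
# made with the zone's key verifies. HMAC here is a simplified stand-in
# for DNSSEC's public-key RRSIG verification; all values are illustrative.
import hashlib
import hmac

ZONE_KEY = b"example-zone-signing-key"  # toy stand-in for the zone's signing key

def sign(name: str, ip: str) -> bytes:
    """What the zone operator publishes alongside the record."""
    return hmac.new(ZONE_KEY, f"{name} A {ip}".encode(), hashlib.sha256).digest()

def validate(name: str, ip: str, sig: bytes) -> bool:
    """Accept the answer only if the signature verifies."""
    return hmac.compare_digest(sign(name, ip), sig)

# Legitimate, signed answer from the zone operator.
good_sig = sign("www.example.com", "192.0.2.10")
print(validate("www.example.com", "192.0.2.10", good_sig))    # True

# A poisoned answer points at the attacker's IP; without the zone key the
# attacker cannot produce a signature that verifies, so it is rejected.
forged_sig = b"\x00" * 32
print(validate("www.example.com", "203.0.113.6", forged_sig))  # False
```

The point of the sketch: poisoning the cache changes the *data*, but without the zone's signing key the attacker cannot make the changed data validate.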


I don't think that's useful. All security systems can be defeated, and it's probably not worth going into the details of the various Achilles' heels of the name resolution, routing, and application protocols involved.

At a general level, the issue I am highlighting is that the cost/benefit analysis changes dramatically when there is already a system in place (flawed, as all security systems are) to protect against arbitrary impersonation, compared to situations where there is not. It would be more realistic for the document to represent that accurately.

For the specific example chosen, it's been made pretty clear over the years that at least the clients for the specific service you cite have no interest in incurring this additional cost. If that's the working group consensus, then I have no interest in overriding it. But ignoring operational realities seems kind of ivory tower-ish, which feels like the kind of thing that undermines the general credibility of the rest of the document.

/a

_______________________________________________
dns-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dns-privacy