On Wed, Jul 22, 2015 at 02:55:02AM -0400, Keith Moore wrote:

> Clearly this draft isn't insisting on "very secure", as it's relying on
> TLS certs and the model that any trusted CA is trusted for all domains,
> which is already known to have problems.  But to me, using TLS certs
> seems like a good compromise for now.  They're somewhat secure and
> reasonably well understood.
The secret swept under the rug is that there is no security at all in
the provisioning process for DV certificates.  All those keys securely
obtained over TLS are authenticated via a certificate authorized by
responding to a cleartext email to [email protected], delivered to the MX
host of example.com by the CA's SMTP server, which does not bother to
use DNSSEC.

> By contrast, unsigned TLSA records offer essentially zero security.
> There are way too many ways to fool a client into accepting or caching
> a bogus DNS response, and the typical DNS recursive server/proxy is
> implemented in a consumer-grade router that is itself woefully insecure.

I don't see any contrast: it is the same insecure process as for
obtaining DV certs.

> (Also, I'm all for having DNSSEC-signed TLSA records as a second trust
> anchor if both that and a x.509 TLS cert were required.  I'm much less
> enamored with the idea of a signed TLSA record being used instead of an
> x.509 cert.

That's a mistake.  The TLSA records for my domain are provisioned via a
more robust process than any CA certificate I might reasonably obtain.

> I feel like that's a sideways step that's marginally better for some
> use cases, but probably not an improvement overall.

This feeling should pass with time.

> It's replacing something that has well-understood limitations, with
> something that has poorly-understood limitations.  For instance, I'd
> never trust a separate server to do DNSSEC signature validation.)

Not sure what separate server you have in mind.  On my mail server,
DNSSEC validation is performed by unbound running on 127.0.0.1.  The
DNSSEC code in unbound is maintained by DNS experts; I would not expect
equal quality from any stand-alone DNS library that happens to ship with
the OS.  In fact, I know of multiple such libraries that are flawed, and
I would NOT trust DNSSEC validation performed *outside* a
well-maintained validating iterative resolver.

> That's not what concerns me.
> What concerns me is whether all of those mail domains would permit such
> updates.  This draft doesn't define an API for posting or updating such
> information, but we'd need one.  Even if we had such a standard, it's
> not clear to me whether most or all mail domains supporting AQRY would
> support the update specification.

Updates will be a local matter between the mailbox provider and the
user.  Gmail has a settings interface, as do Yahoo and Outlook.com.  It
is very unlikely that Postfix (for example) will ever implement an
interface for remotely modifying database tables (such as
address-to-public-key tables) over SMTP.  We're working on the
scalability problem for key distribution (O(n^2)), not on the
scalability problem for managing one's own keys (O(n)).

> More generally, if the owner of a domain is already outsourcing email,
> web, and DNS to different providers, should any of those providers be
> axiomatically granted the ability to authenticate keys for users at
> that domain?  I rather doubt it.  (Actually I was just making a note to
> have the next version of the AQRY document recommend against using the
> same domain name for either an MX or redirect SMTP server that is also
> being used for a web server, because if you use the same domain name
> for both you're effectively allowing your web server to authenticate
> AQRY responses whether you intended that or not.)

This is nicely addressed with DANE for SMTP, because the key binding is
per port.

I should mention that there's no way that you're going to get all the
domains using self-signed certs for SMTP (e.g. mine) to start deploying
CA-issued certificates just to publish public keys for our own users.
If you want to pay for a CA cert alongside DANE, nothing is stopping
you, but CA certs for MX hosts are simply NOT going to scale.  This
proposal is a non-starter without a scalable authentication model, and
the WebPKI is not it.  Even if DANE seems weak to you, consider that
non-deployment is even weaker.
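To make the per-port key binding concrete, here is a sketch of how a
self-signed MX host key becomes a DANE-EE "3 1 1" TLSA record.  The
hostname mx.example.com and the file names are illustrative placeholders,
not anything from the draft; the digest is simply SHA-256 over the
cert's DER-encoded SubjectPublicKeyInfo.

```shell
# Illustrative only: real deployments would reuse the MTA's existing key
# pair; no CA is involved for DANE-EE.  Generate a throwaway self-signed
# certificate for a hypothetical MX host.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
    -days 365 -subj "/CN=mx.example.com" 2>/dev/null

# TLSA 3 1 1: usage 3 (DANE-EE), selector 1 (SPKI), matching type 1
# (SHA-256).  Extract the public key, DER-encode it, and hash it.
digest=$(openssl x509 -in cert.pem -noout -pubkey \
    | openssl pkey -pubin -outform DER 2>/dev/null \
    | openssl dgst -sha256 | awk '{print $NF}')

# The record name binds the key to port 25 on the MX host itself, not to
# any of the domains it hosts:
echo "_25._tcp.mx.example.com. IN TLSA 3 1 1 ${digest}"
```

Note that nothing in the record mentions a hosted domain, which is why
one certificate and one TLSA RRset can serve any number of them.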
https://tools.ietf.org/html/draft-ietf-dane-smtp-with-dane-19#section-1.3
https://tools.ietf.org/html/rfc7435

Because an essential step in this proposal is a connection to the
recipient domain's MX hosts, its security depends critically on DNS
security, which naturally leads to DANE: once you're doing DNSSEC to get
to the right MX host, TLSA lookups are a negligible barrier.  Note also
that with DANE the MX host operator can use the same certificate and the
same TLSA RRset for an arbitrary number of hosted domains.  All the
client domain has to do is publish a DNSSEC-validated MX RRset.

Yes, very few domains do DNSSEC today, but this is not a problem for a
green-fields draft.  No MTAs support the spec yet, and very few users
have S/MIME or PGP keys.

Speaking of keys, this draft needs a result format that can vend (as in
DANE) digests of signature keys as well as full keys for payload
encryption.  We'll need to spend some time on the payload format once
the protocol issues are hashed out.

-- 
	Viktor.

_______________________________________________
Uta mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/uta
