On 07/30/2015 01:52 PM, Viktor Dukhovni wrote:
On Thu, Jul 30, 2015 at 11:34:43AM -0400, Keith Moore wrote:
The secret swept under the rug is that there is no security at all
in the provisioning process for DV certificates.
Ok, I agree with you there. A DNSSEC signature is not less reliable than a
DV certificate.
Thanks. I see this as a healthy starting point for the security
discussion. As I see it, in terms of trustworthiness:
EV >> DANE >> DV
Now we can't expect more than a tiny fraction of email domains to
have EV certs (these might account for a very large fraction of
the users, but any standard needs to be more comprehensive than
just catering to the largest email providers).
So as I see it our choice is between DANE and DV, and *if* we can
get more domains to deploy DNSSEC (yes I admit that it is an "if"),
then DANE is a much better fit for SMTP than DV.
I'm okay with supporting both (if we can work out the problems), but I
don't want to force domains to choose one or the other. Partially this
is because the vast majority of users are lumped into a very small
number of email domains which can quite adequately support TLS certs
(even EV certs if clients bother to check), and these are
well-understood and relatively simple for clients to verify. Insisting
on DANE authentication for these domains would make AQRY much less
deployable. The second reason is that, long term, we're better off not
forcing clients to rely on a single trust anchor.
Part of the catch-22 has been lack of a compelling incentive to
deploy DNSSEC,
Perhaps, but that's not DNSSEC's only problem. Getting people to
understand how to use it and how to manage their keys, getting support
for it in registrars and DNS service providers, getting client library
bugs fixed - these are all significant barriers. The lack of a usable
standard DNS update protocol doesn't help either (too many routers
intercept port 53 traffic and break dynamic update). And finally, the
idea that a DNS client could simply trust its local resolver to do
DNSSEC validation for it never made any sense at all.
So basically I think DNSSEC as originally conceived is severely
flawed. And while I think those flaws are fixable, they don't help the
overall DNSSEC deployability situation.
And I've changed my thinking somewhat: I'm amenable to specifying use of
TLSA with AQRY (as an alternative trust anchor), if we can somehow specify
it in such a way that:
(a) mail domains have clear guidance as to what they have to do to advertise
keys that are likely to be considered trustworthy by clients, and
Just publish either "3 1 1" or "2 0 1" TLSA records that bear a
SHA2-256 digest of the leaf public key or the trust-anchor certificate:
https://tools.ietf.org/html/draft-ietf-dane-smtp-with-dane-19#section-3.1.1
https://tools.ietf.org/html/draft-ietf-dane-smtp-with-dane-19#section-3.1.2
https://tools.ietf.org/html/draft-ietf-dane-ops-14#section-5.1
https://tools.ietf.org/html/draft-ietf-dane-ops-14#section-5.2
https://tools.ietf.org/html/draft-ietf-dane-ops-14#section-8.1
https://tools.ietf.org/html/draft-ietf-dane-ops-14#section-8.4
(Many sites choose "3 0 1", which is also fine).
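For illustration, here is a minimal sketch of constructing the "3 1 1"
RDATA described above. The SPKI bytes below are a placeholder, not a
real key; in practice they would be the DER-encoded SubjectPublicKeyInfo
extracted from the MX host's leaf certificate (e.g. with
`openssl x509 -pubkey`):

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Build the RDATA for a DANE-EE(3) SPKI(1) SHA2-256(1) TLSA record.

    `spki_der` is the DER-encoded SubjectPublicKeyInfo of the server's
    leaf certificate public key.
    """
    digest = hashlib.sha256(spki_der).hexdigest()
    return f"3 1 1 {digest}"

# Placeholder bytes standing in for a real DER-encoded public key.
example_spki = b"\x30\x82\x01\x22placeholder-spki"
record = tlsa_3_1_1(example_spki)
# Published as, e.g.:  _25._tcp.mx.example.com. IN TLSA <record>
print(record)
```

A "2 0 1" record is built the same way, except the digest covers the
full DER-encoded trust-anchor certificate rather than the leaf SPKI.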
(b) we can specify client requirements for DNSSEC validation that actually,
in practice, assure that the TLSA records are authentic, given the
unpredictability of client operating environments and the generally poor
and unpredictable state of DNSSEC validation code that currently exists
in the wild.
While DNSSEC from mobile devices is a challenge in captive portal
environments and on the multitude of end-user platforms, the
requirement for this draft would be DNSSEC between the *MSA* and the
recipient-domain's MTA (MX host).
I am not aware of any significant obstacles to the use of DNSSEC
in a server-to-server environment, where captive portals and other
"middle boxes" don't play a major role.
Hmmm. As far as I can tell, the chief benefit of using DANE (or, more
generally, DNSSEC-signed keys) over server certs is that DANE could
permit MUAs to directly verify the signatures on returned data, by
having those keys signed by their domains and looking up the domains'
signatures using DNS. I really don't like having MUAs trust MSAs to do
their verification for them, as MSAs are too easily compromised. The
only reason AQPX exists in the current proposal is to work around port
25 filtering and the consequent inability of MUAs to directly contact MX
servers.
I don't expect (b) to be at all easy.
In my experience (b) "just works" (establishing authenticity of
TLSA RRs).
Just because something appears to work doesn't mean it's secure.
The TLSA records for my domain are provisioned
via a more robust process than any CA certificate I might reasonably
obtain.
Perhaps, but that's your domain. The important question is, how does a
client know that your TLSA records are reliable?
DNSSEC RRSIG.
Right, but what libraries can a client use to do the verification
correctly, and without making that verification vulnerable to attack?
Not sure what separate server you have in mind. On my mail server,
DNSSEC validation is performed by unbound running on 127.0.0.1.
The DNSSEC code in unbound is maintained by DNS experts. I would
not expect equal quality from any stand-alone DNS library that happens
to ship with the OS. In fact I know of multiple such libraries
that are flawed, and I would NOT trust DNSSEC validation performed
*outside* a well-maintained validating iterative resolver.
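To make concrete what "trusting the local resolver" amounts to in
practice: a stub client does not verify RRSIGs itself; it merely checks
the AD (Authenticated Data) flag that a validating resolver such as
unbound sets in its responses. The sketch below (not from any draft;
the function names are ours) shows the DNS header handling involved.
The AD bit is only trustworthy when the path to the resolver is, e.g.
when it runs on 127.0.0.1:

```python
import struct

# Flag bits in the second 16-bit word of the DNS header.
FLAG_RD = 0x0100  # Recursion Desired
FLAG_AD = 0x0020  # Authenticated Data: set by a validating resolver

def build_query_header(txn_id: int) -> bytes:
    """Header of a query to a validating resolver; setting AD in the
    query asks the resolver to report validation status via AD in the
    response (one question, no other records)."""
    flags = FLAG_RD | FLAG_AD
    return struct.pack("!6H", txn_id, flags, 1, 0, 0, 0)

def response_is_authenticated(response: bytes) -> bool:
    """True only if the resolver asserted DNSSEC validation (AD=1).
    Trusting this bit is only safe when the resolver itself is
    trusted and the channel to it cannot be tampered with."""
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & FLAG_AD)
```

This is exactly the weak link under discussion: a stub that believes
the AD bit from an arbitrary network-supplied resolver has verified
nothing at all.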
So the question for AQRY use of TLSA becomes: How do we specify it in such
a way that a client vendor can ship code that will properly validate
DNSSEC-signatures on TLSA records, regardless of the environment in which it
runs, given that such code can't rely on having access to a well-maintained
validating iterative resolver?
Because the destination MTA is reached on port 25, which is almost
universally blocked for end-user systems. I don't see any role
for direct MUA to MTA key lookup. All lookups are via the MSA.
Axiomatically trusting the MSA is not sufficient. We need a better way.
The provider's MSA will not have much difficulty with DNSSEC/DANE.
I'll be adding support for DANE to OpenSSL. I'll probably be at
the Atlanta MAAWG meeting in October, anyone who wants to discuss
implementation pitfalls should feel free to corner me there...
I'm not planning on attending, but it's not far from me. I could
probably get down there if you want to meet in person.
Updates will be a local matter between the mailbox provider and
the user. Gmail has a settings interface, as does Yahoo, Outlook.com.
If there's no standard means of updating user profiles, it will greatly
hinder use of AQRY in practice. That doesn't necessarily mean that updates
have to be handled via SMTP.
The "standard way" is initially to upload the keys via your
provider's website. A real standard can be developed separately
if there's enough interest between the providers and MUA implementors.
I think that's separate from this effort. If/when this effort bears
fruit, there may be some incentive to handle the follow-on problem.
(May we have such problems...)
I think it's necessary but agree that it's a separate effort. But it's
really not a follow-on problem, unless we want to build a key
distribution protocol that is not actually widely usable in practice.
(IMO, a huge problem with many IETF security efforts is that not nearly
enough thought has been given to the whole picture - so we've developed
several technologies that, even if they were sound by themselves, turned
out to not be very useful.)
And for most users, it's even harder to get DNSSEC set up than to get a
CA-issued cert. But again, I'm fine with giving mail domains a choice
about whether to use a CA-issued cert, a DNSSEC-signed TLSA record, or both
(or for that matter, if the MX record is DNSSEC-signed and it points to an
SMTP server with a CA-issued cert with the name matching the target of the
MX record.) IF we can specify rules for the client that provide reasonable
assurance of resistance to attack.
Indeed one could simply let the MSA choose how to authenticate the
remote MTA per local policy. If for some MSAs they have a better
way than DANE to authenticate some MTAs (pinned certs, WebPKI, ...)
they should be free to do so. Basically, authenticate the remote
domain by whatever means are suitable, but if this is to scale,
for now there's no real alternative to DANE (provided DNSSEC adoption
moves forward).
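That per-MTA policy choice could be as simple as the following sketch
(all names here are hypothetical illustrations, not from the draft):
prefer DANE when DNSSEC-signed TLSA records exist, fall back to a
locally pinned certificate where the operator configured one, and use
WebPKI name checks otherwise.

```python
def choose_auth_method(domain: str, has_signed_tlsa: bool,
                       pinned_fingerprints: dict) -> str:
    """Pick how an MSA authenticates a remote domain's MX host.

    Ordering reflects the discussion above: DANE scales without
    per-domain configuration, pinning is a manual per-domain override,
    and WebPKI is the fallback when neither applies.
    """
    if has_signed_tlsa:
        return "dane"
    if domain in pinned_fingerprints:
        return "pinned-cert"
    return "webpki"
```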
The more I think about this, the more convinced I become that MUAs must
be able to do their own verification.
Because an essential step in this proposal is a connection to the
recipient domain's MX hosts, its security depends critically on
DNS security.
Actually no. It only depends on having a certificate that matches the mail
domain of the address for which you're requesting information. Though as
has been pointed out, this is especially difficult for MSPs that serve large
numbers of mail domains.
See Section 1.3 of the DANE SMTP draft. Outlook.com hosts many
thousands of domains (likely 10's or 100's of thousands, but it
does not matter). The MX host certificate lists none of these.
Obviously from my previous response, I'm aware of this.
Just because this is a "green-fields draft" doesn't mean that it can hope to
be successful if it ignores deployment issues. While I'm very much in
favor of both DNSSEC and TLSA, and want them both to succeed, I also cannot
ignore that there are serious problems with DNSSEC, both in deployment and
in trusting signature verification, and this inherently affects the utility
of TLSA.
Deployment suffers from last-mile issues on mobile devices, but that's
not a barrier here. I still don't know what "trusting" issue
you have in mind.
You're assuming that having the MSA do the verification for the client
is sufficient. I'm fairly certain that it is insufficient.
Speaking of keys, this draft needs a result format that can vend
(as in DANE) digests of signature keys as well as full keys for
payload encryption. We'll need to spend some time on the payload format
once the protocol issues are hashed out.
Agree. Really I think that 95% of the work will be in getting the data
model right.
I hope we get to the meat of the problem in the not too distant
future, but I think we do first need to clear the protocol hurdles.
Actually I think it's the other way around. We need to focus our
attention on the data model sooner rather than later. Once we get that
straightened out, we'll know what changes we need to make to the protocol.
Keith
_______________________________________________
Uta mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/uta