On 07/31/2015 04:33 PM, Viktor Dukhovni wrote:
On Fri, Jul 31, 2015 at 02:23:25PM -0400, Keith Moore wrote:

Perhaps you're using the phrase "local resolver" in some novel way
that I don't understand.  Why shouldn't an application trust
responses from 127.0.0.1:53?
The vast majority of hosts don't operate a resolver, and even if they do,
there's no particular reason to believe that it's well-maintained.  (though
this will differ from one platform to another; some vendors are good about
software updates, others not so good).
This is irrelevant.  The only hosts that would need a local resolver
would be the MSAs.

That would considerably weaken this proposal, by making the MSAs extremely attractive targets for compromise.

They'd also need new SMTP software to support
AQRY.  That software would need to be well maintained too.  I trust
the DNSSEC validation code in unbound and BIND more than I would
trust the same to some application library.  The former are likely
to get more relevant scrutiny.

And you might be right. But what you find trustworthy, and what is available to most users, are completely different things.


Those are in my view more trustworthy
than any library the application might attempt to use to perform
validation.
The MUA vendor can at least control what library it uses.   It has no
control over what resolver is available on the customer's host or enterprise
network.
The MUA is not the one doing the DNSSEC verification of the remote MTA.

You can keep saying that as many times as you want, and it still won't be acceptable to impose that restriction.

It _might_ be okay for an MUA running in a resource-constrained environment to trust the MSA to do DNSSEC verification and signature verification on the data. But if an MUA implementor is going to go that route, it might as well just be a split MUA and have the server side of that MUA responsible for encryption. I find it dubious that an environment too resource-constrained for key verification would somehow have the resources for public-key encryption of the message.

I don't think we can stop people from building split MUAs that do encryption on the server side, any more than we can stop people from implementing webmail clients that do encryption and decryption on the server side. But that doesn't mean we should cripple our key distribution protocol to enforce that kind of restriction.

The local resolver is far more likely for example to have a working
implementation of RFC 5011 (and thus not have a stale root trust
anchor).
Sure, but there are other reasonable ways of updating trust anchors,
including normal software update mechanisms (if they're properly
authenticated).
I'm not holding my breath for those.

I'm not holding my breath to have DNSSEC and trustworthy resolvers for it widely deployed.
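(For reference, RFC 5011-style automatic trust-anchor tracking is typically a one-line setting in a validating resolver. A sketch for unbound; the file path is illustrative:

```
server:
    # unbound rewrites this file on its own as the root KSK
    # rolls over, per RFC 5011
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
```

Whether any given vendor ships this enabled and keeps it working is, of course, exactly the point in dispute.)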
In the context of this draft, if authentication is
mandatory, then it makes it possible to deploy MX host certificates
at scale, without trusting DV or being easily vulnerable to DNS
MiTM.
Perhaps.   But DNSSEC has to be deployed by each mail domain (requiring a
steep learning curve and support from the domain's registrar and DNS service
provider).
Only the domains that want to publish e2e keys, which is even more
bleeding-edge than DNSSEC.

Nope.

And the client needs to have reason to trust either the resolver or library
that's used (or both), and those are actually much harder at present than
being able to trust a TLS server cert.
You can trust anything you want, but the DV TLS cert does not merit
any trust.

Neither do DNSSEC-signed TLSA records, then. As far as I can tell, they're approximately equivalent in that respect.
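(For concreteness, here is roughly what publishing a DANE binding for an MX host involves, per RFC 6698. This is a sketch; the name is illustrative and the DER bytes are a placeholder, not a real key:

```python
import hashlib

# A "3 1 1" TLSA record binds the SHA-256 digest of the MX host's
# SubjectPublicKeyInfo to a DNSSEC-signed name.
# usage 3 = DANE-EE, selector 1 = SPKI, matching type 1 = SHA-256.
spki_der = b'placeholder-SubjectPublicKeyInfo-DER-bytes'  # not a real key
assoc_data = hashlib.sha256(spki_der).hexdigest()

record = f'_25._tcp.mx.example.com. IN TLSA 3 1 1 {assoc_data}'
print(record)
```

The trust question is whose signature protects that record on the way to the client, which is where the two camps here differ.)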

Remote port 25 from the MUA is just as unreachable with or without
DNSSEC.  So DANE does not authenticate an impossible connection
from the MUA to an unreachable server, and often the MUA is in
a captive portal where DNSSEC is not available.
If the information returned from AQRY were signed, and there were a way to
validate the public keys with which that information was signed, AQPX
could simply pass those signatures through to the client.
How does the MUA learn trustworthy valid signature keys for that?

I have some ideas, but want to work them out in more detail before I discuss them further.
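(To make the pass-through idea concrete: the proxy would relay the MX host's signed response as an opaque blob, and the MUA would verify the end-to-end signature itself rather than taking the MSA's word for it. Everything in this sketch is hypothetical; AQRY/AQPX are not yet specified and these names and APIs are invented for illustration:

```python
# Hypothetical sketch of AQPX signature pass-through.

def aqpx_relay(signed_response: bytes) -> bytes:
    """The MSA-side proxy treats the signed AQRY answer as opaque
    bytes and forwards it verbatim, so it cannot alter the keys
    without invalidating the signature."""
    return signed_response

def mua_accept(blob: bytes, verify) -> bytes:
    """The MUA verifies the end-to-end signature with its own trust
    anchors (however it obtained them) before using the payload."""
    payload, ok = verify(blob)
    if not ok:
        raise ValueError('AQRY response failed end-to-end verification')
    return payload
```

The open question, as noted above, is where the MUA's verification keys come from.)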


Making solutions to "the introduction problem" scale while trying
to avoid trusted third parties (introducers) is a balancing act
between a rock and a hard place.  You can't have it all.  Trade-offs
are unavoidable.

Yes, but you're recommending "trade-offs" that appear to create serious vulnerabilities.

Doing this with trusted third parties is unavoidable, and I'm
not going to trust the CNNIC CA to introduce me to my next door
neighbours.  At least DNSSEC is sensibly partitioned.

I agree with that much. What I don't agree with is axiomatically trusting either the MSA or its DNS resolver code.

The client wouldn't have to trust the MSA to validate those signatures.
Sure it would, because it would learn the trust anchors for that
from the MSA, or have to trust every self-appointed CA on the
planet.

Just because it learns the trust anchors from the MSA doesn't mean that it has to axiomatically trust those trust anchors.

Also, even with the current proposal, any MUA vendor that wishes to do so can provide its own proxy to bypass local port 25 blocking. For vendors that don't mind paying for those servers, that seems far superior to trusting the MSA.


The MSA needs to be trusted, because the mailbox provider controls
the reverse channel (mail you receive) and any keys published for
you in the reverse direction.  So your mailbox provider can MiTM
the reverse traffic, recovering most of the forward traffic.
a) The MSA is not necessarily operated by the same party as the mailbox
provider, and there are good reasons not to do so.
With DMARC, DKIM, SPF, ... sure it is.  How else do you authenticate
to the MSA and get to send mail from your mailbox domain?

I use my own domain names (as do many enterprises) that don't impose those restrictions.
b) If the mailbox provider always lies about the keys that you publish, you
can detect this.
(detecting cases where the mailbox provider selectively lies is harder)
Yes, selective lies to certain clients which are then MiTMed are
more difficult to detect.  The issue is largely symmetric, lying
by the MSA to the MUA is also prone to discovery.  This is not an
easy space in which to operate invisibly.

Offhand, it seems harder to detect the MSA lying without the MUA having an independent verification path. And if it has an independent verification path, there's no reason for the MUA to trust the MSA.
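(The blanket-lie case can at least be checked mechanically: compare the keys you published against what an out-of-band probe observes. A toy sketch; addresses and fingerprints are illustrative, and selective, per-client lies defeat exactly this kind of check:

```python
# Toy sketch: detect a mailbox provider that lies to *everyone*
# about your published keys.
published = {'alice@example.com': 'fp-A'}   # what the user uploaded
observed  = {'alice@example.com': 'fp-B'}   # what an independent probe saw

def blanket_lies(published, observed):
    """Addresses whose observed key differs from what was published."""
    return {addr for addr, fp in published.items()
            if observed.get(addr) != fp}

print(blanket_lies(published, observed))
```

)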
If you use AQPX, that's true.   But I'm now thinking that this needs to be
fixed.
I think you'll lose my support with any plausible "fix", but I'm
willing to suspend disbelief for now.
Well, I can't tell what bugs you about the fix, but I hope you'll wait to see it before passing judgment.

You, Chris Newman, and I can probably chat over Skype too.  I'm open
to that, and it would likely save some bandwidth on the list.
Either or both is fine.   We should probably take this off the UTA list
anyway, as I suspect the work won't end up being done in this WG.
Chris has my Skype contact info.  I'm in US/Eastern, let's try to
set something up.

Let's work out the details off-list.

It wasn't intended to be a core feature.   But I do admit that port 25
blocking is widespread, which is why I now think AQPX requiring the client
to trust the MSA is a serious flaw.
I call it a design feature.  I think AQPX makes this scale much
better,
I don't see how. It doesn't lessen the load on either the MX server or the MUA client, and it increases network traffic overall.

  and allows the MSA to MX protocol to evolve separately from
the MUA to MSA protocol (product upgrade cycles for MUAs are
glacial).

Actually, the current proposal requires the MUAs to track changes in AQRY response format. Even if the query uses AQPX, the MSA just returns the data provided by the MX server.

The protocol just carries the payload around and authenticates the
peer conveying it.  The two parts are independent, and if we can't
reach consensus on the architecture of the key distribution system,
we're working on the wrong draft.  The payload issues are important,
but not difficult.  I don't at present have the cycles to do both
discussions in parallel.  I think we need to decide whether:

     MUA <---> MSA  <---> recipient domain's MX host

is the right model.
It's not the right model if the MSA has to be trusted to verify the MX
host's cert.   It's more-or-less the same problem as trusting the DNS
resolver to honestly set the AD bit.   Both increase the attack surface
significantly.
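(The point about the AD bit can be made concrete: it is a single flag in the 12-byte DNS header, assertable by anything that writes those bytes. A stdlib-only sketch:

```python
import struct

# The AD (Authenticated Data) bit lives in the DNS header flags word
# (RFC 4035).  Any software that composes the response -- an honest
# validator, a lying resolver, or a MiTM on an unprotected last hop --
# can set it; the stub cannot tell the difference on the wire.
AD = 0x0020

def dns_header(qid, flags):
    """Pack a minimal DNS header: id, flags, four zero section counts."""
    return struct.pack('!HHHHHH', qid, flags, 0, 0, 0, 0)

honest_validator = dns_header(0x1234, 0x8180 | AD)  # QR|RD|RA + AD
lying_resolver   = dns_header(0x1234, 0x8180 | AD)  # same bytes
assert honest_validator == lying_resolver
```

Hence trusting the AD bit is only as sound as trusting every hop between the stub and the validator.)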
The resolver on the MSA is part of the OS operated by the same
sysadmin who operates the MSA.

Perhaps. But axiomatically expecting the MUA to trust the MSA is still a Bad Idea. If the MSA host is compromised, it doesn't matter whether the compromise is in the MSA code itself or in its resolver.

  He should trust himself at least,
but also implement monitoring (even Ronald Reagan knew about "doveryaj
no proveryaj").

[ I guess that not too many folks on this list have postmastered
   a domain with 80,000 or more users for a decade (the MTAs and
   MSAs had local resolvers way before DNSSEC, that was just the
   right thing to do anyway to make them more performant/reliable). ]

Sure, having a local resolver is a good idea for performance reasons. That's an entirely separate issue from designing a key discovery protocol that axiomatically requires a user to trust his MSA to not lie.

The key question is whether the MUA trusts the MSA to deliver the
right keys or not (likely to be discovered if not and the correspondents
share additional channels of communication).

How exactly the MSA authenticates the MTA so it does not get fooled
by a bogus upstream is a secondary problem that warrants attention,
of course, but since we've not yet agreed on the big picture, it
should not distract us from the main issue.

I'd prefer to try to fix this problem up front, rather than leave a gaping security hole to be fixed later.

What alternative did you have in mind?
I'll work on it for the next revision.
I think it makes sense to discuss the architectural outline before
investing too much time in a formal writeup.

I think it makes sense to not waste time arguing about a proposal that isn't yet developed in sufficient detail. That time is better spent actually developing a proposal. I appreciate the feedback on the current proposal and hope it leads to making the next proposal stronger.

Keith

_______________________________________________
Uta mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/uta
