I’m unconvinced by these two design reasoning documents.
They start with the assumption that signed routing information, security policy
and reporting should be conflated. I consider those separate functions. The
only relationship between signed routing and security policy is that security
policy might require that routing be signed.
I am not convinced it is desirable to allow sites with strong security to
downgrade to weak security. Since allowing that adds so much complexity and
security risk to the proposal, it seems you need a much stronger justification
to include that complexity than the claim that it’s a form of “customer
lock-in”. Assuming we do simplify the proposal, then once a domain advertises
SMTP STS, it will subsequently require its hosting provider to offer STS.
As domain owners usually pay their hosting providers, this creates a financial
incentive for hosting providers to offer STS, and thus there is no “customer
lock-in” problem as long as this is standardized properly. Indeed, I’d argue
that removing the timeout and security downgrade mechanisms from the security
policy part of the proposal makes it simpler, more secure, and likely to deploy
faster due to financial incentives. Seems like a win-win-win to me.
I feel very strongly that policy for SMTP relay should be advertised by SMTP
and protected by SMTP TLS. Doing anything else creates a much larger attack
surface on the policy information and is both more complex and less secure. The
preferred mechanism to sign SMTP relay routing (DNS MX) is DNSSEC. I’m open to
proposals for alternate mechanisms to sign MX record information due to the
difficulty of deploying DNSSEC, but such mechanisms should not be confused with
security policy.
I’m open to proposals to advertise reporting information separately from (or
together with) SMTP security policy and to provide a mechanism to report SMTP
security without requiring SMTP security policy to be present. Given the
interest in reporting for a broad range of SMTP relay security features, we
might want to split the reporting function into a separate document.
- Chris
On April 1, 2016 at 16:44:20, Daniel Margolis ([email protected]) wrote:
Viktor et al,
Two things:
First, I've taken a stab at a "design reasoning FAQ" document:
https://github.com/mrisher/smtp-sts/wiki/FAQ. I think most of the contents of
this were already discussed on this list, but hopefully it provides a useful
reference.
Second, I've tried to write up the reasoning you expressed here about "why
DNSSEC?" (for lack of a better term) here:
https://github.com/mrisher/smtp-sts/wiki/Why-DNSSEC-at-all%3F. (Not that my
writing is necessarily more clear than yours, but rather to extract it for
handy reference in preparation for hopefully a bit more feedback from other
people here.)
Feedback welcome.
Dan
On Tue, Mar 22, 2016 at 8:58 AM, Daniel Margolis <[email protected]> wrote:
Hi Viktor,
Thanks for all the thoughtful comments. I'll respond to what (at a glance)
seem to be the three most significant points first.
On Tue, Mar 22, 2016 at 7:35 AM, Viktor Dukhovni <[email protected]> wrote:
> A significant obstacle to a successful roll-out of WebPKI with
> SMTP is not so much that obtaining and deploying CA certs is
> onerous (enabling DNSSEC is likely more difficult at present),
> but rather that there is no single set of CAs that sending and
> receiving systems can (or perhaps should) reasonably agree on.
> On the one hand, because MTAs employing STS are non-interactive
> background processes with no human operator in the loop to
> "click OK" for each exception, the set of CAs a sending system
> that employs STS trusts would need to be "comprehensive
> enough" to include all the CAs used by all the domains one
> might need to send email to.
This is of course a big topic, but I don't think we should assume that in
common deployment web browsers *do* rely upon a user to make some sound human
decision about whether to trust a CA. For example, here's the multi-stage
process to get Chrome to accept an untrusted certificate:
https://docs.oracle.com/cd/E24628_01/install.121/e39876/img/scrnshot_advanced.gif.
It's not insignificant to me that Oracle felt the need to document this
process for the users with screenshots: we *don't* expect the common browser
user to make a personal determination of what CAs to accept, so the "background
process" argument is, I think, based on a false premise.
In truth, we expect browser users to defer to their platform vendors to make
sane decisions for them, and we expect the owners of large websites to be able
to pick a CA that is widely trusted. With the rise of Let's Encrypt and other
free CAs, this doesn't seem overly burdensome.
> This requires domains that publish STS records to duplicate
> their MX records in the STS RRset. It is not clear why that's
> useful. If the STS record itself is not DNSSEC-validated, the
> payload is no more secure than the MX RRset. If the payload
> is DNSSEC-validated, then the MX RRset in the same zone would
> (barring unexpected zone cuts) be equally secure. I posit that
> this field is both onerous and superfluous.
There are two differences from the MX records here:
1. A (most likely) longer TTL on the STS policy versus the MX record
2. The option for wildcards or broader patterns rather than merely a list of
valid hosts
This permits the publishing domain to declare that "for the next year, I plan
to always host my mail at example.com" without publishing specific MX records
that have a 1-year TTL (which of course could be brittle).
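To illustrate point 2, a sender could match MX hostnames against glob-style policy patterns rather than an exact host list. This is a minimal sketch under my own assumptions: the pattern syntax (shell-style globs via `fnmatch`) and the function name are purely illustrative, not taken from the draft, which would need to pin down exact wildcard semantics.

```python
import fnmatch

def mx_allowed(mx_host, policy_patterns):
    """Check an MX hostname against a list of glob-style policy patterns.

    Hypothetical helper: normalizes the trailing dot and case, then accepts
    the host if any pattern matches.
    """
    host = mx_host.rstrip(".").lower()
    return any(fnmatch.fnmatch(host, pat.lower()) for pat in policy_patterns)

# A single long-lived policy pattern covers hosts that the (shorter-TTL)
# MX RRset lists individually:
mx_allowed("aspmx.l.example.com.", ["*.example.com"])  # True
mx_allowed("mx.evil.test", ["*.example.com"])          # False
```

This is what lets "I plan to always host my mail at example.com" stay valid for a year even while the specific MX hosts behind it churn.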
> It would be expensive for MTAs to attempt repeated HTTPS
> connections that time out trying to connect to port 443 at
> the majority of domains which have not deployed STS.
> All that's needed in DNS to support a pure WebPKI STS is a
> boolean value to signal the existence of the STS resource URI.
> This data can be obtained efficiently. If the "_smtp-sts" RR
> exists (pick a suitable RRtype and fixed short payload) then
> the HTTPS URI should be consulted; otherwise the HTTPS URI is
> not consulted (at first contact), or is consulted asynchronously,
> in parallel with the first mail delivery (with appropriate spacing
> between probes, ...).
> Thus some MTAs might compress the STS DNS record to zero bits,
> and just use asynchronous, suitably spaced HTTPS probes to the
> domains for which no policy is presently known. However the
> 1-bit encoding is likely better.
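The sender-side logic being described could be sketched roughly as follows. The function and result names are my own invention for illustration, not anything from the draft:

```python
def sts_fetch_decision(sts_rr_present, have_cached_policy):
    """Decide how a sending MTA handles policy lookup before delivery.

    sts_rr_present:     whether the 1-bit "_smtp-sts" DNS record exists.
    have_cached_policy: whether a previously fetched policy is cached.
    """
    if have_cached_policy:
        return "use-cache"        # common case: no lookup needed at all
    if sts_rr_present:
        return "fetch-https"      # the DNS bit signals a policy exists
    # No signal: deliver now; optionally probe the HTTPS endpoint
    # asynchronously, with suitable spacing between probes.
    return "deliver-and-probe-async"
```

An MTA that "compresses the STS DNS record to zero bits" would simply skip the DNS check and always take the deliver-and-probe path for unknown domains.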
I also dislike the copying of the policy into two places, for the reasons Dave
noted here:
https://mailarchive.ietf.org/arch/msg/ietf-smtp/nqWeUTe03mxJLltC-wHOsVvfbKo.
The only real reason to do this is to allow MTAs to cheaply see if the policy
has been updated. This seems like a silly thing, since in normal usage they
never need to check as long as they have a cached policy, but it's exactly that
asymmetry that worries me: Since in normal usage the traffic to the HTTPS
endpoint will be minuscule compared to the SMTP traffic to the domain, hosts
are likely to dramatically underprovision the former versus the latter.
Unfortunately, then, in the case of a validation *failure*, the HTTPS endpoint
would (absent some other policy TTL, separate from and shorter than the
expiration lifetime) be bombarded with an HTTP GET *once* *per* email.
So all we're really doing here (in the WebPKI case) is leveraging DNS as a sort
of caching layer. An alternative solution would be what I hinted at above: to
simply embed a short "TTL" in the policy. Or to say we don't care and encourage
the use of HEAD requests.
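The embedded-TTL idea might look something like this. The field names (`fetched_at`, `ttl`) are hypothetical; the point is only that a short refresh interval is distinct from the long expiration lifetime that bounds how long a policy may be enforced:

```python
import time

def policy_is_fresh(policy, now=None):
    """Return True while a cached policy is within its short refresh TTL.

    'fetched_at' (epoch seconds) and 'ttl' (seconds) are hypothetical
    policy fields, separate from the long-lived expiration date.
    """
    now = time.time() if now is None else now
    return now - policy["fetched_at"] < policy["ttl"]

cached = {"fetched_at": 1_000_000.0, "ttl": 3600}
policy_is_fresh(cached, now=1_000_100.0)  # True: no refetch needed yet
policy_is_fresh(cached, now=1_004_000.0)  # False: revalidate (e.g. via HEAD)
```

On failure, senders would then revalidate at most once per TTL rather than once per email, which addresses the underprovisioning worry above.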
If we are willing to force the WebPKI validation method (as you assumed, I
think, above) and don't mind making senders handle this short-term TTL logic,
I think we could make the publishing process simpler, with the advantage Dave
noted in the above-linked thread (i.e. an easier deployment for customers of
hosted mail services).
I'm somewhat on the fence about this trade-off myself, but I don't think it's
unreasonable. Thoughts?
_______________________________________________
Uta mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/uta