On 16/07/17 09:07, Jonathan Kamens via dmarc-discuss wrote:
> my impression that DMARC is unreliable because of problematic elements
> scattered throughout its design and implementation.
DMARC is only "unreliable" if you start with unrealistic expectations.
The idea that domain registrants get to tell receivers what to do with
email, or even to require them to provide feedback, is pretty obviously
absurd. DMARC permits domain registrants to request these things and -
to the extent the receivers are willing - to receive feedback.
Of necessity, DMARC reflects the email environment as it is, rather than
an idealised form of it. As in so many other contexts, once the lights
come on, the rough edges become visible.
That DMARC is being peddled by some as the current FUSSP is unfortunate,
but that doesn't invalidate DMARC, only the positions of those who are
doing this.
> Simply put, I want to make it more likely that legitimate emails from
> our domain will be delivered,
DMARC p=none helps indirectly by providing a means for you to discover
DNS/MTA issues under your control, and therefore to promptly fix them.
Obviously this requires automatic monitoring and alerting of feedback,
exactly because you aren't likely to read XML reports day in and day out
indefinitely.
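As a sketch of what that automation might look like: the snippet below pulls the rows out of an aggregate report and flags sources whose mail would have bounced under p=reject. The report layout follows RFC 7489 Appendix C; the sample data is invented for illustration, and real reports arrive gzipped or zipped at the rua= mailbox.

```python
import xml.etree.ElementTree as ET

# Minimal illustrative aggregate report (structure per RFC 7489
# Appendix C); the IPs and org name are placeholders.
SAMPLE_REPORT = """<?xml version="1.0"?>
<feedback>
  <report_metadata>
    <org_name>example-receiver.net</org_name>
  </report_metadata>
  <policy_published>
    <domain>example.org</domain>
    <p>none</p>
  </policy_published>
  <record>
    <row>
      <source_ip>192.0.2.10</source_ip>
      <count>42</count>
      <policy_evaluated>
        <dkim>pass</dkim>
        <spf>pass</spf>
      </policy_evaluated>
    </row>
  </record>
  <record>
    <row>
      <source_ip>203.0.113.7</source_ip>
      <count>5</count>
      <policy_evaluated>
        <dkim>fail</dkim>
        <spf>fail</spf>
      </policy_evaluated>
    </row>
  </record>
</feedback>
"""

def failing_sources(report_xml):
    """Return (source_ip, count) for rows where both DKIM and SPF
    failed alignment -- i.e. mail that p=reject would have bounced."""
    root = ET.fromstring(report_xml)
    failures = []
    for row in root.iter("row"):
        dkim = row.findtext("policy_evaluated/dkim")
        spf = row.findtext("policy_evaluated/spf")
        if dkim == "fail" and spf == "fail":
            failures.append((row.findtext("source_ip"),
                             int(row.findtext("count"))))
    return failures

print(failing_sources(SAMPLE_REPORT))  # [('203.0.113.7', 5)]
```

In practice you would feed each incoming report through something like this and alert when an unexpected source shows up, rather than eyeballing the XML.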
> and more likely that forged emails purporting to be from our domain
> will be rejected.
This remains available, although as you've noticed it comes at a cost:
> We do not have enough of a problem with forged emails that I am
> willing to do anything that will cause legitimate emails that have
> been accepted in the past to start being bounced because of DMARC.
That's a really important decision. p=none is therefore the only option
that makes sense for you to use.
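For the record, a monitoring-only deployment is a single TXT record along these lines (example.org and the report mailbox are placeholders):

```
_dmarc.example.org.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.org"
```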
> * Reviewing DMARC aggregate reports by hand -- i.e., just loading
>   the XML files into a text editor and searching them for potential
>   problems -- on an ongoing basis is far too time-consuming to be
>   sustainable. Instead, you can skip getting the reports; or you can
>   file the reports in a separate folder and only look at them when
>   you're investigating a known delivery issue; or you can feed the
>   reports through a service like dmarcian and review them there.
Most of the available value comes from automated monitoring over an
extended period but, yes, I also simply accumulate aggregate reports for
reactive diagnosis in some cases.
> * Not everybody who pays attention to DMARC records generates
>   aggregate reports. Not everybody who pays attention to DMARC
>   records generates failure reports. It's entirely possible that
>   some legitimate emails from us will be rejected due to DMARC
>   failures without my finding out that's happening until much later,
>   if at all.
Yes. Most of these issues were apparent to me when I first saw DMARC
presented at MAAWG. I therefore asked, and the response was that some
feedback is better than no feedback and that some implementation of
proposed disposition was better than none. That has been the guiding
principle from the outset and, pretty clearly, is the only way this can
work. DMARC is not a piece of software that you install and run in a
closed system that you control; it's a series of requests to third
parties who have a range of higher and/or conflicting priorities.
The idea that DMARC is bad because it doesn't give perfect feedback or
perfect control rather misses the point.
> * There is no on-ramp for DMARC that will allow me to know with
>   certainty what the impact of DMARC will be before it starts
>   causing some legitimate emails to be bounced. Though the DMARC
>   spec tries to create such an on-ramp, the way email providers
>   interpret DMARC in real life is quirky and highly variable from
>   provider to provider.
DMARC does not attempt to create an on-ramp that gives you certainty. It
does do various things to ease transition, certainly. For contrast,
compare SPF -all (even if you're willing to field an instrumented DNS
server), DomainKeys o=-, and ADSP discardable. In each case, domain
registrants employing these mechanisms were essentially shooting
blindly, unless they were large enough to maintain (or pay for the use
of) seed boxes. DMARC's reporting mechanism profoundly changed that
situation, and is arguably the primary reason that it succeeded where
e.g. ADSP did not. (There were also some important differences in
development process that were relevant.)
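For concreteness, those earlier "shoot blindly" mechanisms were themselves just hard-fail TXT records, roughly as below (domain hypothetical; ADSP is RFC 5617, now historic, and the DomainKeys policy syntax is from memory, so treat the exact forms as illustrative):

```
example.org.                   IN TXT  "v=spf1 mx -all"        ; SPF hard fail
_domainkey.example.org.        IN TXT  "o=-"                   ; DomainKeys: all mail signed
_adsp._domainkey.example.org.  IN TXT  "dkim=discardable"      ; ADSP
```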
>   o One example of this is the fact that although one would think
>     that "p=none pct=100" and "p=reject pct=0" would have exactly
>     the same practical effect, in fact they behave differently at
>     some sites.
I assume that you mean _p=quarantine; pct=0_; _p=reject; pct=0_ is
supposed to behave differently.
This is a neat hack that breaks the principle of pct=0; however, it does
so in a way that benefits domain registrants, so it is hardly a problem.
The great concern initially was that receivers would act in ways that
moved in the reverse direction.
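The spec behaviour behind that difference (RFC 7489 section 6.6.4: a failing message that loses the pct lottery has the next-less-severe policy applied) can be sketched like this; the function name is mine, not anything from the spec:

```python
import random

# Per RFC 7489 s.6.6.4, a message excluded by pct sampling gets the
# next-less-severe disposition: reject -> quarantine, quarantine -> none.
FALLBACK = {"reject": "quarantine", "quarantine": "none", "none": "none"}

def effective_policy(p, pct, rng=random.random):
    """Disposition for one failing message under policy p with the
    given pct, using rng() in [0, 1) as the sampling draw."""
    if rng() * 100 < pct:
        return p
    return FALLBACK[p]

# With pct=0 the published policy is never applied directly:
print(effective_policy("quarantine", 0))  # 'none'   -- behaves like p=none
print(effective_policy("reject", 0))      # 'quarantine' -- does NOT fall to none
```

This is why _p=reject; pct=0_ is not equivalent to _p=none_, while _p=quarantine; pct=0_ effectively is.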
>   o Another example is, as Roland wrote, the fact that "you can't
>     reliably infer that a failure report received at p=none (or 0%
>     quarantine) will mean a reject at p=reject."
Indeed, but this is not a critique of DMARC, it's an acknowledgement
that receivers make their own decisions.
> * There are currently several known issues with entirely legitimate
>   messages sent through major email providers being bounced due to
>   p=reject, most notably the Microsoft issue that Terry mentioned.
Indeed. Again, not a DMARC problem; the email system contains many
species of weirdness like this. DMARC exposes them more clearly than
before, certainly.
> It feels to me like my unease about DMARC stems from the fact that the
> folks who wrote the spec and the sites that are enforcing DMARC have a
> markedly different philosophy than I do about email. This difference
> was highlighted by a comment in Roland's email, when he asked "how
> much collateral damage [I'm] willing to accept." This approach -- that
> some "collateral damage" to legitimate email delivery is acceptable
> when trying to stop forgeries -- was entirely foreign to how email was
> thought about when I started working on it, and it's still
> extraordinarily difficult for me to come to grips with.
I understand that. I would suggest that you're not alone in this, and
that a large part of the failure of the previous mechanisms was that the
IETF process couldn't get past this and so kept making bad decisions.
DMARC made progress by letting go of several sacred cows.
> So, where do I go from here?
> a) Give up on DMARC entirely, at least until things improve to the
> point where the major providers are no longer suffering from issues
> such as the Microsoft issue.
As John has suggested, mocking people who are reporting DMARC p!=none as
a vulnerability might be a more appropriate choice.
If you see no value in visibility into what's happening to your email
when it leaves your control then, sure, stop spending time on DMARC.
> b) Set our DMARC record to "p=none; pct=0" and unset "rua" and "ruf".
> But then if we start doing something different from email delivery at
> some point in the future, e.g., we start using a new third-party
> service provider that is sending emails on our behalf and forget to
> configure SPF or DKIM properly for them, we won't know about it.
> c) Like (b), but use an "rua" that sends the emails to somebody like
> dmarcian that will process them for us in a way that makes them more
> useful and less time-consuming.
This is pretty much a foregone conclusion. Monitoring requires software.
You really don't want to do it by hand.
- Roland
_______________________________________________
dmarc-discuss mailing list
[email protected]
http://www.dmarc.org/mailman/listinfo/dmarc-discuss
NOTE: Participating in this list means you agree to the DMARC Note Well terms
(http://www.dmarc.org/note_well.html)