Vlatko Salaj writes:

 > [A DNS-based whitelist] does require u make a whitelist, ofc, but
 > other than inputing it into DNS, doesn't require u do anything else
 > with it,
Inputting it into DNS is a *very* big deal, however.

First, it's not at all clear that it scales to precisely the sites I (at least) care about most: the "> 1 million users" mailbox providers. Several people have commented here that they expect pushback from their DNS teams on any DNS-based solution. Also, neither populating the list nor culling it of "undesirables" has a good solution yet. (Admittedly, nobody has really tried; there probably *are* "pretty good" solutions, but are they good enough for Yahoo!?) Most of us control MTAs (or are close buddies with those who do), so "let's try it and see, then fix what ain't so great" will fly for MTA changes. DNS experiments are less likely, perhaps much less likely, to fly if DNS is in a different department.

Second, DNS entries almost automatically come with a reasonably long window during which they are cached downstream (otherwise you greatly increase the DNS load at the Author Domain). E.g., right now the TXT resource record for _dmarc.yahoo.com appears to have a TTL of 1800 (30 minutes). But spammers can move millions of messages in 5 minutes; if abuse is possible at all, you're wide open for the life of the cached entry.

Third, this is a very unconventional use of the DNS. I know, SPF, DKIM, and DMARC already use it, but those require O(1) subdomains per Author Domain, not O(users) or even O(forwarders), dynamically named.

Fourth, inputting it into DNS requires either a global list (which is probably feasible from an operations standpoint) or a proliferation of artificial domains representing <user, forwarder> pairs, which isn't going to make the DNS team happy at all (especially since even with a terribly long TTL they'll see a ruinously low cache hit rate). But a global list (i.e., one entry per "trusted" forwarder) means you've opened a much bigger window for abuse.

By contrast, changes to the MTA mean much more flexibility for making decisions at message injection time, based on a local lookup in a user profile database.
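To make the injection-time idea concrete, here is a minimal sketch of the kind of local lookup an MTA hook could do. The function name, the profile schema, and the addresses are all invented for illustration; a real MTA would wire this in through its own plugin or milter interface, and the profile store would be a real database rather than a dict.

```python
# Hypothetical sketch of an injection-time forwarding-whitelist check.
# The schema and names below are invented for illustration; a real MTA
# would expose this decision point via its plugin/milter mechanism.

# Local user-profile "database": per-user set of trusted forwarders.
USER_PROFILES = {
    "alice@example.com": {"trusted_forwarders": {"lists.ietf.org"}},
    "bob@example.com":   {"trusted_forwarders": set()},
}

def accept_forwarded(recipient, forwarding_domain):
    """Decide at message injection time, from a purely local lookup,
    whether a DMARC-failing message via this forwarder is acceptable."""
    profile = USER_PROFILES.get(recipient)
    if profile is None:
        return False  # unknown user: be conservative
    return forwarding_domain in profile["trusted_forwarders"]

print(accept_forwarded("alice@example.com", "lists.ietf.org"))  # True
print(accept_forwarded("bob@example.com", "lists.ietf.org"))    # False
```

Note what this buys you over a DNS-published list: the data never leaves the operator, there is no downstream caching window, and the decision can consult anything else in the profile at the moment of injection.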
I prefer that (knowing that my preferred MTAs' developers are responsive to user demands for security features).

 > any additional work is on DMARC receiver which has to upgrade its
 > DMARC verifier to handle 3rd party alignment, but which it has to
 > do anyway, as DMARC is still new and changing,

I tend to disagree with your logic, again for several reasons.

First, the originators who *need* this are relatively few (Yahoo!, AOL, and who else?). Author Domains used only for "transactional email" will never need it, and the rest of us probably have a fair amount of time before we *need* it. (Granted, several posters here clearly want it badly so they can use "p=reject" too, but like the rest of us they've survived for several years without it, and continue to do so.) Better to impose the costs (and risks of experimentation) on those who benefit most.

Second, to the extent that receivers need to make adjustments in their MTAs to handle new versions of DMARC, originators will, too. And weighted by mail volume, the relevant originators and recipients are on the same, fairly short, list of major mailbox providers.

Third, whether done in DNS or in the MTA, the hard work is in determining what policy you want to publish. And with the MTA the policy is surely more flexible than with the DNS.

Finally, there's some risk in making changes to any system (and the amount of data you propose adding to DNS, along with the extra queries and the likely effect on cache hit rate, makes it a systemic change). Which would you rather have go down on you, your MTA or your DNS? I'll take a working DNS any time!

_______________________________________________
dmarc mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dmarc
