Hi Dilyan,

On Fri 25/Oct/2019 12:51:43 +0200 Дилян Палаузов wrote:
> I do not see how this helps for DMARC.  An email either validates DMARC or
> fails DMARC, and the aggregate reports say, per sending IP server (only
> direct mail flow is reported), whether DMARC validates or fails.  With this
> information it is sufficient to determine whether the DMARC/DKIM
> implementations on sender and receiver are either both bug-free, or both
> have the same bugs.
Looking at aggregate reports, you cannot tell whether an authentication failure is a sacrosanct signal that your domain is being abused, rather than a legitimate user going through external forwarders.

> I do not see how the information you ask to add, while interesting, helps
> DMARC.
>
> What are the purposes of the aggregate and non-aggregate reports?  What
> are the non-goals?  I asked several times here; nobody answered.  Perhaps
> a discussion on the goals and non-goals would help.

That was probably discussed already.  Now that we have some experience, we can discuss further.  I know some very knowledgeable WG participants accumulate aggregate report values in their own MySQL databases (I'm not sure about the details).  Many people, instead, outsource reports to specialized DMARC analyzers, which display nice graphical summaries.  I run an XSLT transform of DMARC reports into an HTML tabular format of one row per record.

In theory, reports can be something more than a debugging aid.  They have the potential to assemble a community where bad actors are identified and dismissed.

> If it is a goal to reuse the dmarc-reporting mechanism to also report
> perceived spam probability, then we can discuss in more detail how this
> can be achieved.

Well, spam scores are usually high for phishing too, and countering phishing is DMARC's core business.

> My experience is that asking a provider why an obviously non-spam mail was
> evaluated as spam virtually never leads to a useful answer.  So nobody
> wants to reveal how its spam system weights factors, and if there is such
> a lack of interest, extending the report format will not help, as nobody
> will be willing to report the data.

This is a problem, indeed.  Large mailbox providers may fear that giving bad scores to an IP can result in senders complaining against their weighting method, requiring more personnel to answer back.
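The per-record flattening I mentioned above can also be sketched in Python rather than XSLT.  This is only an illustrative sketch: the sample report is invented, and the element names follow the aggregate report layout from RFC 7489 Appendix C (feedback / record / row / policy_evaluated):

```python
import xml.etree.ElementTree as ET

# Invented minimal aggregate report, laid out per RFC 7489 Appendix C.
SAMPLE = """<feedback>
  <record>
    <row>
      <source_ip>192.0.2.1</source_ip>
      <count>3</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>pass</dkim>
        <spf>fail</spf>
      </policy_evaluated>
    </row>
    <identifiers><header_from>example.org</header_from></identifiers>
  </record>
</feedback>"""

def records_to_rows(xml_text):
    """Flatten each <record> of an aggregate report into one tuple,
    one row per record, like the XSLT-to-HTML transform described above."""
    root = ET.fromstring(xml_text)
    rows = []
    for rec in root.findall("record"):
        row = rec.find("row")
        pol = row.find("policy_evaluated")
        rows.append((
            row.findtext("source_ip"),
            int(row.findtext("count")),
            pol.findtext("disposition"),
            pol.findtext("dkim"),
            pol.findtext("spf"),
            rec.findtext("identifiers/header_from"),
        ))
    return rows

print(records_to_rows(SAMPLE))
# [('192.0.2.1', 3, 'none', 'pass', 'fail', 'example.org')]
```

Each tuple is one table row; emitting it as HTML (or CSV) from there is trivial.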
It should be made clear that reports are given out AS IS, as a favor to senders, without liability.  Anyway, reporting MTAs don't have to reveal the method, just the result.

> Exchanging information on hard-coded rules in SpamAssassin (IP reputation,
> an HTML mime part without text/plain, the "Nigeria money" phrase) that are
> not based on filter training does not help either, as the sender can run
> the tests on its own and predict how the recipient will evaluate this set
> of criteria.

Changing the point of view also changes the perspective.  In addition, by comparing external scores to internal predictions, one has a chance to evaluate the goodness of the reporting MTA.

Best
Ale

_______________________________________________
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc