On Sun 01/Aug/2021 20:56:55 +0200 Douglas Foster wrote:
Ale, I tried to explain my objections in the original post. However, it is a very important question, so I am happy to revise and extend my points. Forgive me for being long-winded; I am trying to be thorough because I see problems at many levels.


You're adding useless complications.


Random guessing can increase the volume of wrong decisions.

The basic math does not work. Assume that a message stream has a probability P of being unwanted, and a probability Q = 1-P of being wanted. Does it make sense to use a random number based on P to discard messages?

Probability of outcomes:

- P*P – unwanted messages, correctly blocked
- P*Q – unwanted messages, incorrectly accepted
- Q*P – wanted messages, incorrectly blocked
- Q*Q – wanted messages, correctly accepted

The total error rate is 2*P*Q. We have exchanged a one-sided error (allowing P unwanted messages) for a two-sided error distribution. Does this improve the overall error rate? Specifically, when is 2*P*Q < P?

Cancelling P from both sides (P > 0) yields 2*Q < 1, i.e. Q < 0.5.

If the message stream is more than 50% unwanted, then random guessing might produce fewer total errors than allow-all. If the message stream is at least 50% wanted, then random guessing produces inferior results.
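
As a sanity check on the arithmetic, here is a minimal simulation, my own sketch with made-up probabilities, comparing random blocking at probability P against allow-all:

    import random

    def error_rates(p, n=100_000, seed=42):
        """Error rate of random blocking at probability p vs. allow-all,
        for a stream where each message is unwanted with probability p."""
        rng = random.Random(seed)
        guess_errors = allow_all_errors = 0
        for _ in range(n):
            unwanted = rng.random() < p   # message is spam with probability P
            blocked = rng.random() < p    # independent random toss, also P
            if unwanted != blocked:       # the P*Q and Q*P cases above
                guess_errors += 1
            if unwanted:                  # allow-all lets every spam through
                allow_all_errors += 1
        return guess_errors / n, allow_all_errors / n

    for p in (0.3, 0.5, 0.7):
        g, a = error_rates(p)
        print(f"P={p}: guessing ~{g:.3f} (2*P*Q={2*p*(1-p):.3f}), allow-all ~{a:.3f}")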


In plain English, if you have more spam than ham, then blocking at random is correct in most cases. That's an obvious statement which adds nothing to the discussion.


Other filtering stages will raise Q and lower P

Since the specific issue is failed DMARC authentication, we also need to consider how this task fits into the evaluation process. I believe my process is typical:

- First, messages from known-bad senders are blocked.
- Second, sender authentication is performed, at which point some messages may be discarded.
- Third, content filtering is applied, and suspicious content is blocked.
- Fourth, end-user activity occurs, where some messages are ignored or discarded.

One effect of the first stage is that it lowers P and raises Q. During sender authentication, Q is likely to be above 50% even if the initial mail stream has a Q below 50%.
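
A small sketch of that effect, with hypothetical numbers and the simplifying assumption that the first stage blocks only unwanted messages:

    def after_stage(p, blocked_fraction):
        """P and Q as seen by the next stage, after a filter that removes
        blocked_fraction of the unwanted messages and no wanted ones."""
        q = 1 - p
        surviving_unwanted = p * (1 - blocked_fraction)
        p_next = surviving_unwanted / (surviving_unwanted + q)
        return p_next, 1 - p_next

    # Hypothetical stream that starts 70% unwanted (Q = 0.3):
    p, q = after_stage(0.7, blocked_fraction=0.9)  # known-bad senders blocked
    print(f"at authentication time: P={p:.2f}, Q={q:.2f}")  # Q is now above 0.5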


The purpose of authentication is to recognize senders by name rather than by IP address. Thus, depending on how "known-bad senders" are identified, authentication can be considered a prerequisite of the first stage. If authentication fails, you don't know who the sender is, and therefore you don't know whether it's good or bad.


If a false negative occurs during sender authentication, causing an unwanted message to be allowed, the message may still be blocked during content filtering, or it may be ignored by the user. Consequently, if the probability P is applicable during sender authentication, the probability of a threat being successful is less than P.


No. A successful authentication of a spammy message is not a false negative. The fact that a message is unwanted has nothing to do with DMARC.


Random guessing will increase the volume of unrecoverable errors.

If a false positive occurs during sender authentication, causing a wanted message to be blocked, there is no opportunity for recovery.


Actually, an opportunity for recovery exists. The sender can have feedback mechanisms, such as 5yz SMTP replies, delivery notifications, return receipts, or other web-based actions, and it can use that feedback to recover from authentication errors. Such errors happen in an apparently random fashion too. For example, when the word "From" followed by a space appears at the beginning of a line, some agents insert a greater-than sign ('>') before it, thereby breaking a DKIM signature. As soon as the sender recognizes that delivery failed, it can repeat sending the same message several times until, by chance, it gets a toss greater than its pct. Phishers, OTOH, are known for not retrying.

The above is a use case for pct != 0 and pct != 100.
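
For illustration, a minimal sketch of why that munging breaks the signature; it uses a plain hash rather than the full RFC 6376 canonicalization, but the point is the same:

    import hashlib

    body = "Hi,\r\nFrom my point of view, all is fine.\r\n"
    munged = body.replace("\r\nFrom ", "\r\n>From ")  # what some agents do

    # The signature carries a hash of the (canonicalized) body in bh=;
    # after munging, the hash no longer matches and verification fails.
    print(hashlib.sha256(body.encode()).hexdigest()[:16])
    print(hashlib.sha256(munged.encode()).hexdigest()[:16])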


Therefore, false positives are a greater problem than false negatives, and the random guessing algorithm has the effect of replacing false negatives with false positives.

Replacing what...?


Sender’s probability has no relation to Evaluator’s probability

For any single domain, incoming messages can be broken into three categories:

- Legitimately-sourced messages which arrive with valid credentials.
- Legitimately-sourced messages which arrive with failed credentials.
- Impersonation messages which arrive with failed credentials.

For simplicity, assume that sender and receiver interests are aligned – the receiver wants to accept all legitimately-sourced messages from the domain. Since the sender is moving toward p=reject and the recipient wants to enforce p=reject, we will also assume that mailing lists are not part of the mail stream.

Neither sender nor receiver knows the volume of unwanted impersonating messages. This means that the denominator is unknown, but it would be the combined volume of impersonation and legitimate messages. The numerator for computing the wanted-message rate (Q) is all of the legitimate messages. The numerator for computing the unwanted-message rate (P) is all of the impersonation messages.
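
With invented volumes, the bookkeeping looks like this; in practice the impersonation count, and hence the denominator, is unknown:

    valid_legit   = 900  # legitimate, credentials verify
    failed_legit  = 100  # legitimate, credentials broken somewhere
    impersonation = 250  # spoofed; in practice this count is unknown

    total = valid_legit + failed_legit + impersonation
    P = impersonation / total                  # unwanted-message rate
    Q = (valid_legit + failed_legit) / total   # wanted-message rate
    print(f"P={P:.2f}, Q={Q:.2f}")             # P=0.20, Q=0.80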

Because the recipient wants all of the legitimately-sourced messages, the percentage of legitimate messages sent with imperfect credentials is irrelevant.


It is not irrelevant if the receiver rejects on DMARC fail.


Assume that the source domain knows the volume of messages which are sent without complete credentials, and publishes a percentage based on that knowledge. Can the evaluator benefit from that information? I don't think so.


Certainly not. In the use case outlined above, the published pct can be a hint to the sender for the number of retries before resorting to something else.
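
A back-of-the-envelope reading of that hint, assuming each attempt escapes the strict disposition with probability 1 - pct/100, so the number of sends until one gets through is geometrically distributed:

    def expected_sends(pct):
        """Expected number of sends until one toss exceeds pct (geometric)."""
        escape = 1 - pct / 100.0
        return float("inf") if escape == 0 else 1 / escape

    for pct in (25, 50, 75, 90):
        print(f"pct={pct}: about {expected_sends(pct):.1f} sends on average")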


Credentials at origin are determined by whether the source is configured to apply correct SPF and DKIM credentials or not. The source domain could measure message volumes by server to compute a weighted statistic for the percentage of messages with correct credentials. But no single evaluator will necessarily see the same weighted distribution of message sources: it may receive no messages from non-compliant servers, it may receive messages only from non-compliant servers, or anything in between. Applying the source domain's percentage estimate to the received message stream would only make sense if the weighting is comparable.
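
A minimal sketch of the mismatch, with hypothetical server names and volumes:

    # server name -> (share of the domain's total volume, signs correctly?)
    servers = {
        "mta1.example.com":   (0.8, True),
        "legacy.example.com": (0.2, False),  # not yet signing
    }

    # What the sender would compute: a volume-weighted compliance rate.
    sender_view = sum(share for share, ok in servers.values() if ok)  # 0.8

    # An evaluator that happens to receive only from the legacy server
    # sees 0% valid credentials, regardless of the sender's statistic.
    evaluator_view = 0.0
    print(sender_view, evaluator_view)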


I don't think pct can somehow be calculated from the percentage of failed authentications. Even if one had the same percentage of failed authentications for every receiver, it would still make no sense to set pct at that value. What would a sender obtain?


More importantly, the assumed goal for both sender and receiver is to have all legitimately-sourced messages accepted. Arbitrarily blocking some wanted messages, for the sake of notifying about credentialing problems, works against the goal of the evaluator and his user base. It is too high a price to pay.


It is still a percentage of the price you pay with pct=100.


On Sun, Aug 1, 2021 at 5:13 AM Alessandro Vesely <[email protected]> wrote:


I snipped the original message. Interested readers have the whole thread, and it is in the dmarc-ietf archive anyway.


Best
Ale
--