On Wednesday, February 15, 2023 5:23:34 AM EST Alessandro Vesely wrote:
> On Tue 14/Feb/2023 23:42:36 +0100 Scott Kitterman wrote:
> > On Tuesday, February 14, 2023 4:16:00 PM EST Evan Burke wrote:
> >> On Tue, Feb 14, 2023 at 11:44 AM Michael Thomas <m...@mtcc.com> wrote:
> >>> On Tue, Feb 14, 2023 at 11:18 AM Michael Thomas <m...@mtcc.com> wrote:
> >>>> Have you considered something like rate limiting on the receiver side
> >>>> for things with duplicate msg-id's?  Aka, a tar pit, iirc?
> >> 
> >> I believe Yahoo does currently use some sort of count-based approach to
> >> detect replay, though I'm not clear on the details.
> >> 
> >>>> As I recall that technique is sometimes not suggested because (a) we
> >>>> can't come up with good advice about how long you need to cache
> >>>> message IDs to watch for duplicates, and (b) the longer that cache
> >>>> needs to live, the larger a resource burden the technique imposes,
> >>>> and small operators might not be able to do it well.
> >>> 
> >>> At maximum, isn't it just the x= value?  It seems to me that if you
> >>> don't specify an x= value, or it's essentially infinite, the signer is
> >>> saying they don't care about "replays".  Which is fine in most cases,
> >>> and you can just ignore it.  Something that really throttles down x=
> >>> should be a tractable problem, right?
> 
> The ratio between the duplicate count and x= is the spamming speed: allow,
> say, 100 duplicates within a 3600-second x= window and a replayed signature
> is worth at most 100 messages per hour.
> 
> >>> But even at scale it seems like a pretty small database in comparison
> >>> to the overall volume.  It would be easy for a receiver to just prune
> >>> it after a day or so, say.
> >> 
> >> I think count-based approaches can be made even simpler than that, in
> >> fact.  I'm halfway inclined to submit a draft using that approach, as
> >> time permits.
>
> > I suppose if the thresholds are high enough, it won't hit much in the way
> > of legitimate mail (as an example, I anticipate this message will hit at
> > least hundreds of mail boxes at Gmail, but not millions), but of course
> > letting the first X through isn't ideal.
> 
> Scott's message hit my server exactly once.  Counting is a no-op for small
> operators.
> 
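For concreteness, here's a minimal sketch of the kind of counting being
discussed, with the signature's x= value bounding how long an entry has to be
remembered.  The threshold, names, and fallback window are purely
illustrative, not a worked-out design:

    # Sketch only: count-based replay detection where the DKIM x=
    # expiration bounds how long an entry must be kept.  The threshold
    # and the fallback window are assumptions, not recommendations.
    import time

    DUP_THRESHOLD = 100   # assumed ceiling on legitimate fan-out
    seen = {}             # msg_id -> (count, expires_at)

    def is_replay(msg_id, sig_expires_at):
        now = time.time()
        # Entries whose signatures can no longer verify are useless.
        for mid in [m for m, (_, exp) in seen.items() if exp < now]:
            del seen[mid]
        count = seen.get(msg_id, (0, 0))[0] + 1
        # No x= means no natural bound; fall back to a fixed window.
        expires = sig_expires_at if sig_expires_at else now + 86400
        seen[msg_id] = (count, expires)
        return count > DUP_THRESHOLD

If the signer omits x= entirely, the receiver is back to picking its own
cache window, which is exactly the advice problem mentioned above.
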
> > If I had access to a database of numerically scored IP reputation values
> > (I don't currently, but I have in the past, so I can imagine this at
> > least), I think I'd be more inclined to look at the reputation of the
> > domain as a whole (something like average score of messages from an SPF
> > validated Mail From, DKIM validated d=, or DMARC pass domain) and the
> > reputation of the IP for a message from that domain and then if there
> > was sufficient statistical confidence that the reputation of the IP was
> > "bad" compared to the domain's reputation I would infer it was likely
> > being replayed and ignore the signature.
> Some random forwarder in Nebraska can be easily mistaken for a spammer that
> way.  Reputation is affected by email volume.  Even large operators have
> little knowledge of almost silent MTAs.
> 
> Having senders' signatures transmit the perceived risk of an author would
> contribute an additional evaluation factor here.  Rather than discard
> validated signatures, have an indication to weight them.  (In that respect,
> let me note the usage of ARC as a sort of second class DKIM, when the
> signer knows nothing about the author.)

Any reputation based solution does have down-scale limits.  Small mail sources
(such as your random Nebraska forwarder) generally will have no reputation
rather than a negative one, and so wouldn't get penalized in a scheme like the
one I suggested.  This does, however, highlight where the performance
challenge is.  We've moved it from duplicate detection to rapid assessment of
reputation for hosts that have sudden volume increases.
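
To make that concrete, a rough sketch of the comparison I have in mind is
below.  The score range, sample floor, and gap are stand-ins for whatever
reputation data an operator actually has; none of this is a real API:

    # Sketch only: treat a validating signature as suspect when the
    # sending IP's reputation is confidently worse than the domain's.
    # Scores are assumed to run 0.0 (bad) to 1.0 (good); MIN_SAMPLES
    # and GAP are made-up tuning knobs.
    import math

    MIN_SAMPLES = 50   # assumed floor before trusting a comparison
    GAP = 0.3          # assumed "sufficiently worse" score gap

    def likely_replay(ip_score, ip_samples, domain_score, domain_samples):
        # Too little data either way: don't judge at all.  This is
        # what spares the nearly silent Nebraska forwarder.
        if ip_samples < MIN_SAMPLES or domain_samples < MIN_SAMPLES:
            return False
        # Crude confidence: demand a wider gap when samples are few.
        margin = GAP + 1.0 / math.sqrt(min(ip_samples, domain_samples))
        return (domain_score - ip_score) > margin

The sample floor is where the down-scale limit shows up: hosts we've barely
seen simply don't get judged this way.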

I think that's fine, as that's not at all a problem unique to this challenge.
Ultimately, if replay attacks end up more complicated because, instead of
blasting 1,000,000 messages from one host, attackers have to trickle 1,000
messages from each of 1,000 hosts, it's a win.

I don't think this is a problem that's going to have a singular mechanical
solution that makes it go away.  This is substantially about making this
particular technique less effective, so maybe attackers move on to something
else, or at least less bad stuff gets delivered.

> > I think that approaches the same effect as a "too many dupes" approach
> > without the threshold problem.  It does require reputation data, but I
> > assume any entity of a non-trivial size either has access to their own or
> > can buy it from someone else.
> 
> DNSWLs exist.

I'm not sure how that's relevant.  Please expand on this if you think it's 
important.

Scott K

