On Tue, 1 Aug 2006, John D. Hardin wrote:
On Tue, 1 Aug 2006, John Rudd wrote:

They don't really even have to "queue".  They just have to retry.

It's a lightweight solution to getting around greylisting.

Crap. That's good.

Yeah, it would be a very simple way of getting around
greylisting.

However, don't assume that it kills the benefit of greylisting
completely:  if you can delay processing that questionable
message for 30 minutes or an hour, that greatly increases the
chances it will end up on a realtime blacklist of some type.
So even though retrying reduces the effectiveness of
greylisting, greylisting still takes away the element of
surprise, which could be valuable.

Now, thinking of realtime blacklists in combination with
greylisting has got me thinking of a strange concept.  Might be
new or might not be, but I'll mention it just in case.  When a
spammer sends out spam, each computer they're using to send it
(whether zombie, open relay, or whatever) will be sending out
zillions of messages.  And greylisting tracks sources of
messages, but each site only sees its own traffic.

So here's the idea:  what if a greylist server filed a report
in a distributed database every time it saw a message from
an unknown sender (and tempfailed it)?  So, for example,
a spammer's zombie at 1.2.3.4 sends to acme.com.  acme.com
greylists it since it doesn't know 1.2.3.4 and files a report
with the realtime distributed database.  Then foo.com also
receives a message from 1.2.3.4.  It's also an unknown source
for foo.com, so it files a report with the same database.
More and more sites keep getting connections from 1.2.3.4,
and all the ones that don't recognize 1.2.3.4 as having a
history with them all file reports of suspicious activity.

Then the spammer goes for a second pass through the list to
try to defeat greylisting.  The servers that had greylisted
the messages will receive it again but will check the
distributed database.  The distributed database will have a
zillion reports of suspicious activity from that IP address.
That won't absolutely indicate that the message is spam,
but it might be worth adding a score of 1 or 2 points.
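That second-pass check might look something like this, where the thresholds are made-up illustrations, not tuned values:

```python
def retry_score(report_count, few=10, many=1000):
    """Map the shared database's report count for an IP to a modest
    score contribution (0, 1, or 2 points) rather than an outright
    reject.  Thresholds are hypothetical."""
    if report_count >= many:
        return 2
    if report_count >= few:
        return 1
    return 0
```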

Like DCC, this would sometimes penalize legitimate bulk mail
(whenever a new server appears on the internet and starts
sending en masse immediately, it would be penalized).  But if
it's part of a larger strategy, could it be useful?  It seems
like it would do a fairly good job of automatically detecting
bulk senders.  For what it's worth, the distributed database
could also keep track of IP addresses that the individual sites'
greylists *did* recognize, so that something would only be
considered spam if (say) 95% of the sites reporting on that
address didn't recognize it.
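The safeguard at the end could be sketched like this (again, names and the 95% figure are illustrative): count both kinds of reports, and only treat the address as suspect when nearly all reporting sites had no history with it.

```python
def looks_like_bulk_spammer(unknown_reports, known_reports, threshold=0.95):
    """True only if at least `threshold` of the sites reporting on this
    address did NOT recognize it.  A new-but-legitimate bulk sender
    would still trip this, which is the tradeoff noted above."""
    total = unknown_reports + known_reports
    if total == 0:
        return False
    return unknown_reports / total >= threshold
```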

  - Logan
