Jeff Chan wrote:
> Currently there's no provision for updating the hard coded list
> other than releasing a new version of SA.  Something more dynamic
> could perhaps be engineered, short of another RBL.
>
> There are a number of reasons for not doing a whitelist RBL:
>
> 1.  Excessive queries:  Whitehat domains come up a lot in
> messages.

Setting the whitelisted TTLs to 7 days (the default BIND max-cache-ttl) should take care of excessive load on the SURBL servers.
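To sketch what that would look like on the list side (the zone name, record name, and answer address below are made up for illustration), the whitelist entries would simply be published with a 7-day TTL so that resolvers cache them locally and repeat queries for common whitehat domains never reach the SURBL servers:

```
; Hypothetical zone fragment: a whitelist entry published with a 7-day
; TTL (604800 seconds, which matches BIND's default max-cache-ttl), so
; resolvers keep the answer cached for the full week.
example.com.whiteandblack.surbl.org. 604800 IN A 127.0.0.1
```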


Of course, to be of any advantage, the whitelist records would have to be part of the zone containing the blacklist records, which would be a problem for any application that just does a simple lookup and pays no attention to what the name actually resolves to...

> 2.  Potential misuse:  Inadvertently blacklisting whitehats, i.e.
> user error.

With a new zone called 'whiteandblack.surbl.org' (or something similar), containing the data from the current zones plus the whitelisted domains, it should be pretty obvious that the zone holds both whitelisted and blacklisted domains, and that an application needs to pay attention to what a lookup actually resolves to.


The existing blacklist zones would continue as they operate now.
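A client for such a combined zone might look roughly like the sketch below. The zone name and the answer addresses are assumptions for illustration only; the real values would be whatever the published zone documents. The point is simply that the caller must branch on what the lookup resolves to rather than treating any answer as "listed":

```python
import socket

# Assumed names and return codes -- not the real SURBL values.
COMBINED_ZONE = "whiteandblack.surbl.org"
WHITELISTED = "127.0.0.1"   # hypothetical "whitelisted" answer
BLACKLISTED = "127.0.0.2"   # hypothetical "blacklisted" answer

def classify(answer):
    """Interpret the A record returned by a combined white+black zone."""
    if answer is None:
        return "unlisted"      # NXDOMAIN: not in the zone at all
    if answer == WHITELISTED:
        return "whitelisted"
    return "blacklisted"

def lookup(domain):
    """Query the combined zone for a domain and classify the answer."""
    try:
        answer = socket.gethostbyname(f"{domain}.{COMBINED_ZONE}")
    except socket.gaierror:
        answer = None          # name does not exist in the zone
    return classify(answer)
```

An application doing only a "does it resolve?" test against this zone would treat whitelisted domains as hits, which is exactly the failure mode described above.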

> 3.  Possibility of negative scoring:  Some application would
> probably try to negative score them, which would simply cause
> spammers to load up their spams with a lot of whitehat domains,
> which would drive up mail processing loads, DNS queries, etc.,
> and potentially get spam through filters.
> ...

Protecting end users from their own stupidity is bad enough. As long as the RBL is clearly and visibly documented (citing the reasons not to negative-score whitelists), I don't believe there is a need to protect application developers from their stupidity too. Anyone who adds RBLs to their application without reading up on that RBL deserves bad results anyway.


Developers could fall into the same trap by assigning negative scores based on SPF records, but we still use SPF. I don't see much of a difference.


Daryl
