On Fri, 2014-09-19 at 13:47 +0200, Reindl Harald wrote:
> Am 19.09.2014 um 13:44 schrieb RW:
> > On Thu, 18 Sep 2014 22:09:23 -0700
> > Ian Zimmerman wrote:
> > 
> >> On Wed, 17 Sep 2014 13:43:49 +0100,
> >> RW <rwmailli...@googlemail.com> wrote:
> >>
> >> RW> A lot of people don't put mailing lists through Spamassassin, most
> >> RW> of them have already been spam filtered, and to get the best
> >> RW> results you have to extend your internal network and maintain it.
> >>
> >> Do you mean the "trusted_networks" setting here?
> > 
> > Most DNSBL tests are done on the last relay into the internal network.
> > 
> > I'm not saying this should be done, I'm saying that it's one reason why
> > scanning mailing lists can be more trouble than it's worth.
> 
> but how is that different to any other mail?
> 
Probably because the last relay (the listserver) can be trusted (almost
by definition), but its message sources cannot, and some mailing lists
don't check all submissions for spam. In my experience that applies
especially to mailing lists that work alongside web forums to provide
subscriber access by both web browser and e-mail.
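For reference, "extending the internal network" here means listing the
listserver in local.cf so DNSBL checks look past it to the real
submitting relay. A minimal sketch (the address is a made-up
placeholder, not a real listserver):

```
# local.cf -- sketch only; 192.0.2.10 stands in for the listserver's IP
# Listing it as trusted+internal makes SA treat it as part of your own
# infrastructure, so last-external-relay checks apply to the hop before it.
trusted_networks  192.0.2.10
internal_networks 192.0.2.10
```

The catch RW alludes to is that every listserver you subscribe through
needs an entry like this, and the list has to be maintained as they
change.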

Obviously it is as easy to scan incoming e-mails before submitting them
to the listserver and copying them to the web forum as it is to scan any
inbound mail stream. 

However, it seems to be a lot harder to scan web input before accepting
it into the forum and copying it to the associated mailing list.
Consequently, a large proportion of the spam in these combined lists
seems to be posted via the web forum. While volunteers usually monitor
for and remove spam from the forum, that doesn't prevent web submitted
spam from going out on the mailing list before the volunteers can
recognise and deal with it.

At the same time the web-sourced spam carries no Received-header trail
to show where it came from before it hit the listserver, so SA checks on
it are pretty much limited to body, rawbody and URI rules: reliably
scoring spam from these sources is hard.
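For anyone unfamiliar with them, those content-only rule types look like
this in a local .cf file (the rule names and patterns below are
invented examples, not shipped rules):

```
# Hypothetical local rules: content-only checks, usable when there is
# no header evidence to score on.
body     LOCAL_FORUM_PITCH  /cheap (?:meds|watches) online/i
describe LOCAL_FORUM_PITCH  Sales-pitch wording often posted via web forums
score    LOCAL_FORUM_PITCH  1.5

uri      LOCAL_FORUM_URI    /forumspam\.example\.com/i
describe LOCAL_FORUM_URI    Link to a known-bad site (placeholder domain)
score    LOCAL_FORUM_URI    2.0
```

Because such rules match only message content, they are easy for
spammers to vary around, which is why scoring web-injected list spam
reliably is so much harder than scoring mail with a full relay trail.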


Martin



