> avoiding checking google ranges in DNS*Ls could make sense, at least we'd
> avoid excessive DNS requests towards them and getting blocked there.
>
> however the only SA way I can think of is adding
> 209.85.128.0/17 and 74.125.0.0/16 to trusted_networks, which would result
> in ALL_TRUSTED hitting.
>
> and I'm sure that's exactly what he doesn't want: whitelisting one of the
> biggest spammers, yeah, that makes sense.... not...

I'd be interested in reducing the impact of the two rules involved here and relying more on other rules based on content or other header factors, perhaps by creating some kind of meta rule that offsets the points.
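Something along these lines might work as a starting point. This is only a sketch for local.cf: the rule names RCVD_IN_SBL and RCVD_IN_PBL are placeholders (substitute whichever two DNSBL rules are actually firing on the Gmail IPs), and the score is a guess to be tuned against your own traffic:

```
# local.cf sketch -- placeholder DNSBL rule names, tune the score locally
header   __FROM_GMAIL        From =~ /\@gmail\.com>?/i
meta     GMAIL_DNSBL_OFFSET  __FROM_GMAIL && (RCVD_IN_SBL || RCVD_IN_PBL)
describe GMAIL_DNSBL_OFFSET  Offset DNSBL points for mail from gmail.com
score    GMAIL_DNSBL_OFFSET  -2.0
```

The idea is that the negative meta score cancels part of the DNSBL hit only when both conditions line up, so the DNSBL rules keep their full weight for everything else, and content-based rules still decide the outcome for the Gmail mail.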
I can't explain to my users why, day after day, some random Gmail sender gets blocked because one of the world's largest providers lets IPs shared by millions of legitimate users be abused. There's nothing my users, or the legitimate senders writing to them, can do about it, and it's always reactive: it's not as if I can warn them in advance that mail leaving through one particular Google IP may be blocked on arrival.