On 3/28/2013 10:52 AM, Sarah Caswell wrote:
Hi all,
I had a question about greylisting (with spamd) in production.
I've successfully run spamd on firewalls (as a frontend to either Barracuda or
SpamAssassin) and have really liked the reduction in spam volume.
Unfortunately my employer's wife does not like the delays that this introduces
into our mail delivery, since she uses email for quick turn-around
communication.
The main problem occurs with senders like Gmail, Yahoo, Hotmail, etc., i.e.
all the senders that have large farms of SMTP servers from which they can retry
delivery after the initial greylisting delay.
I know this means I'm not doing proper whitelisting of those major sender
domains, but I'm at a loss on how to best construct and maintain such a
whitelist.
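(For context on where such a whitelist plugs in: with OpenBSD spamd the usual
arrangement is a pf table of trusted netblocks loaded from a file, so that
those senders bypass the greylisting redirect entirely. A minimal sketch,
assuming OpenBSD spamd with pf; the table name, file path, and rule details
follow the common convention and will need adjusting for your setup:)

```
# /etc/pf.conf -- sketch only, assuming OpenBSD spamd behind pf
table <nospamd> persist file "/etc/mail/nospamd"

# Whitelisted senders go straight to the real MTA;
# everyone else is redirected through spamd for greylisting.
pass in on egress proto tcp from <nospamd> to any port smtp
pass in on egress proto tcp from any to any port smtp \
    rdr-to 127.0.0.1 port spamd
```

Appending the CIDR blocks for the big providers to /etc/mail/nospamd and
reloading pf is then enough to exempt them from greylisting.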
Are there any up-to-date lists that already track the MTAs of these large mail
providers?
Or will this mostly be a DIY effort on my part?
Any thoughts/insights/experiences would be greatly appreciated.
:-)
Sarah
Hi,
Years ago I was faced with the same frustration on my own system. I
ended up writing a shell/awk script that I run twice a day.
Basically, you build up a list of "trusted" hosts and whitelist them.
Whenever I got delayed mail that I noticed, I would add the hostname to
the "trusted" list and my script would automatically whitelist them the
next time it ran (or when I ran it manually).
It may not be perfect, but it's worked flawlessly for probably 4 years now.
It's designed to work with sites that publish SPF records, and it doesn't
handle IPv6, which isn't an issue in my case.
If you are interested in my script, feel free to contact me off-list.
The output for google.com is:
#---------------
# google.com
#---------------
# Got 5 elements in [v=spf1 include:_spf.google.com ip4:216.73.93.70/31
ip4:216.73.93.72/31 ~all]
# queueing for spf lookup: [_spf.google.com]
216.73.93.70/31
216.73.93.72/31
# ==========
# Recursing for additional spf records
# ==========
#---------------
# _spf.google.com
#---------------
# Got 5 elements in [v=spf1 include:_netblocks.google.com
include:_netblocks2.google.com include:_netblocks3.google.com ?all]
# queueing for spf lookup: [_netblocks.google.com]
# queueing for spf lookup: [_netblocks2.google.com]
# queueing for spf lookup: [_netblocks3.google.com]
# ==========
# Recursing for additional spf records
# ==========
#---------------
# _netblocks.google.com
#---------------
# Got 12 elements in [v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19
ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17
ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20
ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all]
216.239.32.0/19
64.233.160.0/19
66.249.80.0/20
72.14.192.0/18
209.85.128.0/17
66.102.0.0/20
74.125.0.0/16
64.18.0.0/20
207.126.144.0/20
173.194.0.0/16
#---------------
# _netblocks2.google.com
#---------------
# Got 8 elements in [v=spf1 ip6:2001:4860:4000::/36
ip6:2404:6800:4000::/36 ip6:2607:f8b0:4000::/36 ip6:2800:3f0:4000::/36
ip6:2a00:1450:4000::/36 ip6:2c0f:fb50:4000::/36 ?all]
# UNKNOWN: [ip6:2001:4860:4000::/36]
# UNKNOWN: [ip6:2404:6800:4000::/36]
# UNKNOWN: [ip6:2607:f8b0:4000::/36]
# UNKNOWN: [ip6:2800:3f0:4000::/36]
# UNKNOWN: [ip6:2a00:1450:4000::/36]
# UNKNOWN: [ip6:2c0f:fb50:4000::/36]
#---------------
# _netblocks3.google.com
#---------------
# Got 2 elements in [v=spf1 ?all]
# Returning from recursion
# Returning from recursion
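The parsing step behind output like the above can be sketched in a few lines
of shell/awk. This is a hypothetical reconstruction, not the poster's actual
script (`parse_spf` is an invented name): given one SPF TXT record on stdin,
it prints the ip4 netblocks for the whitelist, flags include: domains for a
recursive lookup, and marks anything else it can't classify as UNKNOWN, just
as the quoted output does. The real script would additionally do the dig TXT
lookups and recurse on the queued domains.

```shell
# Hypothetical sketch of the SPF-parsing step.
parse_spf() {
  awk '
  {
    for (i = 1; i <= NF; i++) {
      if ($i ~ /^ip4:/) {
        # ip4 netblock: strip the prefix and emit the CIDR for the whitelist
        sub(/^ip4:/, "", $i); print $i
      } else if ($i ~ /^include:/) {
        # include: mechanism: queue the domain for a recursive TXT lookup
        sub(/^include:/, "", $i)
        print "# queueing for spf lookup: [" $i "]"
      } else if ($i !~ /^v=spf1$/ && $i !~ /all$/) {
        # anything else (ip6:, a, mx, ...) is reported but not whitelisted
        print "# UNKNOWN: [" $i "]"
      }
    }
  }'
}
```

Example use:

```shell
echo "v=spf1 include:_spf.google.com ip4:216.73.93.70/31 ~all" | parse_spf
# queueing for spf lookup: [_spf.google.com]
# 216.73.93.70/31
```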