https://bz.apache.org/SpamAssassin/show_bug.cgi?id=7182

            Bug ID: 7182
           Summary: SPF records routinely exceed the maximum 10 DNS lookups
           Product: Spamassassin
           Version: SVN Trunk (Latest Devel Version)
          Hardware: PC
                OS: Windows 7
            Status: NEW
          Severity: normal
          Priority: P2
         Component: Plugins
          Assignee: dev@spamassassin.apache.org
          Reporter: kmcgr...@pccc.com

As discussed in Bug 7112, the SPF RFC, https://tools.ietf.org/html/rfc7208, is
clear that the number of DNS-lookup mechanisms per SPF check is limited to 10.

 SPF implementations MUST limit the number of mechanisms and modifiers
   that do DNS lookups to at most 10 per SPF check, including any
   lookups caused by the use of the "include" mechanism or the
   "redirect" modifier.  If this number is exceeded during a check, a
   PermError MUST be returned.  The "include", "a", "mx", "ptr", and
   "exists" mechanisms as well as the "redirect" modifier do count
   against this limit.  The "all", "ip4", and "ip6" mechanisms do not
   require DNS lookups and therefore do not count against this limit.
   The "exp" modifier does not count against this limit because the DNS
   lookup to fetch the explanation string occurs after the SPF record
   has been evaluated.
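
The counting rule above is simple to state mechanically. Here is a rough
Python sketch of it (illustrative only; this is not SpamAssassin's or
Mail::SPF's actual implementation):

```python
import re

# Mechanisms/modifiers that trigger DNS lookups per RFC 7208,
# and therefore count against the limit of 10.
LOOKUP_TERMS = {"include", "a", "mx", "ptr", "exists", "redirect"}

def count_dns_terms(record: str) -> int:
    """Count the terms in a single SPF record that count against
    the 10-lookup limit; "all", "ip4", "ip6" and "exp" are free."""
    count = 0
    for term in record.split()[1:]:       # skip the "v=spf1" version tag
        term = term.lstrip("+-~?")        # drop the optional qualifier
        name = re.split(r"[:/=]", term, maxsplit=1)[0].lower()
        if name in LOOKUP_TERMS:
            count += 1
    return count
```

Note this only counts one record; every "include" also pulls in the counting
terms of the record it fetches, which is where the real-world chains below blow
past the limit.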

This limit is also a real-world DoS-prevention measure, and ignoring it is wrong.

Large players like Google handle this fine (though they aren't perfect... keep
reading):

dig -t txt _spf.google.com     
_spf.google.com.        199     IN      TXT     "v=spf1
include:_netblocks.google.com include:_netblocks2.google.com
include:_netblocks3.google.com ~all"

dig -t txt _netblocks.google.com   
_netblocks.google.com.  2891    IN      TXT     "v=spf1 ip4:64.18.0.0/20
ip4:64.233.160.0/19 ip4:66.102.0.0/20 ip4:66.249.80.0/20 ip4:72.14.192.0/18
ip4:74.125.0.0/16 ip4:173.194.0.0/16 ip4:207.126.144.0/20 ip4:209.85.128.0/17
ip4:216.58.192.0/19 ip4:216.239.32.0/19 ~all"

dig -t txt _netblocks2.google.com     
_netblocks2.google.com. 3251    IN      TXT     "v=spf1 ip6:2001:4860:4000::/36
ip6:2404:6800:4000::/36 ip6:2607:f8b0:4000::/36 ip6:2800:3f0:4000::/36
ip6:2a00:1450:4000::/36 ip6:2c0f:fb50:4000::/36 ~all"

dig -t txt _netblocks3.google.com 
_netblocks3.google.com. 3249    IN      TXT     "v=spf1 ~all"

Overall, that looks good: three lookups, well under the limit of 10.


Others, like eBay, have at least 13 lookups, which clearly breaks the RFC. From
tests on 4/29/2015:

dig -t txt ebay.com
dig -t txt s._spf.ebay.com
dig -t txt c._spf.ebay.com
dig -t txt p._spf.ebay.com
dig -t txt emarsys.net
dig -t txt _spf.salesforce.com
dig -t txt _mtablock1.salesforce.com
dig -t txt p2._spf.ebay.com
dig -t txt docusign.net
dig -t txt sendgrid.net
dig -t txt cmail1.com
dig -t txt sendgrid.biz
dig -t txt pp._spf.paypal.com


And for places like SecureServer/GoDaddy I also got to 13 lookups before I
stopped counting manually:

dig -t txt smtp.secureserver.net
smtp.secureserver.net.  79508   IN      TXT     "v=spf1
include:spf.secureserver.net -all"
dig -t txt spf.secureserver.net       
spf.secureserver.net.   373     IN      TXT     "v=spf1
include:spf100.secureserver.net include:spf200.secureserver.net -all"
dig -t txt spf100.secureserver.net
spf100.secureserver.net. 368    IN      TXT     "v=spf1
include:spf101.secureserver.net include:spf102.secureserver.net
include:spf103.secureserver.net include:spf104.secureserver.net
include:spf105.secureserver.net include:spf106.secureserver.net
include:spf107.secureserver.net -all"
dig -t txt spf200.secureserver.net   
spf200.secureserver.net. 361    IN      TXT     "v=spf1
include:spf201.secureserver.net include:spf202.secureserver.net
include:spf203.secureserver.net -all"
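
Counting that chain mechanically confirms the total. A rough Python sketch
(records hard-coded from the dig output above; includes whose target records I
did not fetch still cost one lookup each, per the RFC):

```python
# SecureServer SPF records, hard-coded from the dig output above.
RECORDS = {
    "smtp.secureserver.net": "v=spf1 include:spf.secureserver.net -all",
    "spf.secureserver.net": ("v=spf1 include:spf100.secureserver.net "
                             "include:spf200.secureserver.net -all"),
    "spf100.secureserver.net": "v=spf1 " + " ".join(
        f"include:spf10{i}.secureserver.net" for i in range(1, 8)) + " -all",
    "spf200.secureserver.net": "v=spf1 " + " ".join(
        f"include:spf20{i}.secureserver.net" for i in range(1, 4)) + " -all",
}

def total_lookups(domain: str) -> int:
    """Recursively tally DNS lookups; an include costs one lookup
    even when the included record was never fetched."""
    lookups = 0
    for term in RECORDS.get(domain, "").split()[1:]:
        if term.startswith("include:"):
            lookups += 1                                     # the lookup itself
            lookups += total_lookups(term.split(":", 1)[1])  # then recurse
    return lookups

print(total_lookups("smtp.secureserver.net"))  # -> 13, well over the RFC's 10
```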

This is ridiculous and is causing real-world PermErrors in SPF.

SpamAssassin already raised our limit to max_dns_interactive_terms => 15,
https://svn.apache.org/viewvc?view=rev&rev=1646363, and we are already seeing
that limit exceeded by places using GoDaddy/SecureServer.


eBay and GoDaddy should be on top of these things.  I don't have the time to
explain RFCs or shame major companies into following them.  

Additionally, from testing, Gmail clearly doesn't enforce the limit of 10 and
accepts chains requiring at least 16 lookups.  That means they clearly see the
same practical, real-world issue and have raised their limit, making them
vulnerable to being used as middlemen in a DoS attack, as discussed in the RFC.

Therefore, I am raising the limit from 15 to 20 for trunk and the 3.4 branch.
Additionally, we might want to make this a configurable option and LOWER the
default to 10 to match the RFC, with the recommendation that in real-world
operations, 16 or even 20 might be necessary.
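
If this does become a configurable option, usage might look something like the
following hypothetical local.cf entry (the option name is purely illustrative;
no such setting exists today):

```
# Hypothetical local.cf entry -- option name illustrative only.
# A default of 10 matches RFC 7208; raise to 16-20 if real-world
# chains like the ones above cause PermErrors.
spf_max_dns_lookups 10
```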

Will add commits when testing completes.

regards,
KAM

-- 
You are receiving this mail because:
You are the assignee for the bug.
