https://issues.apache.org/SpamAssassin/show_bug.cgi?id=6061





--- Comment #8 from Sidney Markowitz <[email protected]>  2009-02-07 11:21:32 PST ---
(In reply to comment #6)
Thanks for the offer. However, SpamAssassin already has a table of valid TLDs
that it uses when parsing URIs, and I don't think trading a network access per
parse for the memory saved by dropping that table would be worth it in this case.
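For illustration only, here is a minimal Python sketch (not SpamAssassin's
actual Perl code) of that tradeoff, assuming the offered alternative amounts to
some kind of network lookup for each candidate hostname; the function names and
the TLD subset are made up for the example:

import socket

# Tiny illustrative subset; the real table lists all valid TLDs.
VALID_TLDS = {"com", "net", "org", "info", "uk"}

def tld_valid_local(hostname: str) -> bool:
    """Memory cost only: check the last label against the in-memory TLD table."""
    return hostname.rsplit(".", 1)[-1].lower() in VALID_TLDS

def tld_valid_remote(hostname: str) -> bool:
    """One network round trip per candidate: try to resolve the name."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# tld_valid_local("health.sharpdecimal")     -> False, rejected with no DNS traffic
# tld_valid_local("health.sharpdecimal.com") -> True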

(In reply to comment #5)
Is the idea to accept anything that begins with "http://" as a URL? I would
like some idea of how many false positives that leads to -- not false positives
in spam detection, although that is important too, but how many strings get
falsely identified as URLs and how many unnecessary calls to the URI RBLs
result. The reason for the current URI parsing code (in trunk -- I'm still
waiting for that one more review and vote to put it in the 3.2 branch) is to
send the RBLs only what could possibly be real links.
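To make the false-identification concern concrete, here is a rough Python
sketch (hypothetical, not the trunk parser) contrasting a bare "http://" match
with a TLD-filtered one; the sample text and TLD subset are invented:

import re

text = 'visit http://health.sharpdecimal today, or see http://example.com/page'

# Naive rule: anything following "http://" is treated as a URL host.
naive_hosts = re.findall(r'http://([^\s/"<>]+)', text)
# -> ['health.sharpdecimal', 'example.com']

# TLD-filtered rule: only hosts whose last label is a known TLD are treated as
# plausible links, so only those would generate URI RBL queries.
VALID_TLDS = {"com", "net", "org"}  # illustrative subset
plausible = [h for h in naive_hosts if h.rsplit(".", 1)[-1].lower() in VALID_TLDS]
# -> ['example.com']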

Which brings up another point: is health.sharpdecimal, as opposed to
health.sharpdecimal.com, even listed in the RBLs? If not, what would be the
point of parsing it as a URL?
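For reference, a URI RBL lookup is just a DNS query with the domain prepended
to the list's zone; a minimal sketch follows, with the zone name shown purely
as an example:

import socket

def uribl_listed(domain: str, zone: str = "multi.uribl.com") -> bool:
    """URI RBL-style check: an A record for <domain>.<zone> means it is listed."""
    try:
        socket.gethostbyname(f"{domain}.{zone}")
        return True
    except socket.gaierror:
        return False

# Such lists key on registrable domains like "sharpdecimal.com"; a name with no
# valid TLD, like "health.sharpdecimal", would not be listed, so querying it
# only adds DNS traffic.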

