On Mon, 7 Jun 2004, Scott Rothgaber wrote:
> Can someone explain this? Looks like at least half a dozen tests should
> have hit it. Furthermore, if the score is 0.0, why wasn't it
> autolearned? Thinking that maybe SA was just too busy at the time and
> took a break, I ran it through manually. Same thing, but it got
> autolearned as ham.
[snip..]
Cannot explain the autolearn anomaly, but my guess on the score is
that this is a spam "groomed" to get past SA. It is well known that
some spammers "pre-flight" test their trash against captive
installations of SA so that they can fine-tune the spam to
achieve low scores and slip past SA "in the wild".
Ran it here and it hit half a dozen tests, but they were all add-ins
(SARE, locally written, & Bayes); none of the stock distribution tests
hit. So this would be an argument to keep training your Bayes DB and
to utilize new rule sources such as SARE and this list. ;)
Here are the rules it hit here:
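That Bayes training can be automated; a minimal sketch as a crontab fragment, assuming you hand-sort mail into spam and ham mbox files (the paths and schedule here are examples, not anyone's actual setup):

```
# crontab fragment: retrain Bayes nightly from hand-sorted mailboxes
# (paths are illustrative -- adjust to your local mail layout)
0 3 * * *   sa-learn --spam --mbox /home/user/mail/spam-corpus
15 3 * * *  sa-learn --ham  --mbox /home/user/mail/ham-corpus
```

sa-learn is safe to rerun on the same mailboxes; it skips messages it has already learned.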
Content analysis details: (20.5 points, 6.0 required, autolearn=spam)
pts rule name description
---- ---------------------- ------------------------------------------
2.2 SARE_HEAD_SPAM Message headers used which identify spam
5.5 L_BAD_HEADERS1 Headers that only spam uses
5.4 BAYES_99 BODY: Bayesian spam probability is 99 to 100%
[score: 1.0000]
2.5 SARE_RD_TO_BAD_TLD URI: Redirect to bad TLD (info|cc|ws|biz)
1.4 SARE_RD_YAHOO URI: Uses unsecure Yahoo redirect
3.5 L_URI_REDIR URI redirector
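For anyone wanting to write local rules of the L_* flavor above, they go in local.cf; here's a hedged sketch of the shape such a header rule takes (the header name, pattern, and score are purely illustrative, not the actual L_BAD_HEADERS1 definition):

```
# /etc/mail/spamassassin/local.cf -- illustrative rule skeleton only
header   L_BAD_HEADERS1  X-Mailer =~ /some-spamware-signature/i
describe L_BAD_HEADERS1  Headers that only spam uses
score    L_BAD_HEADERS1  5.5
```

After editing local.cf, run `spamassassin --lint` to catch syntax errors before the rules go live.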
--
Dave Funk University of Iowa
<dbfunk (at) engineering.uiowa.edu> College of Engineering
319/335-5751 FAX: 319/384-0549 1256 Seamans Center
Sys_admin/Postmaster/cell_admin Iowa City, IA 52242-1527
#include <std_disclaimer.h>
Better is not better, 'standard' is better. B{