Ahh, that makes more sense now.  "ham" is just what does not pass the
"spam" threshold. In this light, if Sniffer is "hyper accurate" and
catches more real spam than all others, it will appear "less accurate"
overall because of the deficiencies in the other tests.  For some reason,
I was thinking that "ham" was being calculated differently.

Thanks for the tests, as well.


PS - I did read your stuff about hyper-accuracy, but everything wasn't
meshing for me, hence my question :)

-----Original Message-----
[mailto:[EMAIL PROTECTED] On Behalf Of Pete McNeil
Sent: Saturday, April 02, 2005 4:43 PM
To: Jay Sudowski - Handy Networks LLC
Subject: Re: [sniffer] MDLP Tests

On Saturday, April 2, 2005, 4:09:31 PM, Jay wrote:

JSHNL> Hello -
JSHNL> I am reviewing your MDLP report at 
JSHNL> http://www.sortmonster.com/MDLP/MDLP-Example-Long.html, and find 
JSHNL> some tests that are seemingly quite effective that I'm not 
JSHNL> familiar with.  If anyone has any information about these tests,
please let me know:

JSHNL> - FABEL (is this the same as FABELSOURCES at
JSHNL> http://www.declude.com/Articles.asp?ID=97&Redirected=Y?)

FABEL           ip4r    spamsources.fabel.dk  


MXRATE-BLACK    ip4r    pub.mxrate.net        
MXRATE-WHITE    ip4r    pub.mxrate.net        
MXRATE-SUSP     ip4r    pub.mxrate.net        


UCEPROTECRDO    ip4r    dnsbl-1.uceprotect.net
UCEPROTECCMUL   ip4r    dnsbl-2.uceprotect.net
UCEPROTECCVIR   ip4r    dnsbl-3.uceprotect.net
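(As an aside, all of these are "ip4r" tests: the connecting IP's octets are
reversed and prepended to the list's zone, and a DNS A-record hit means the
IP is listed. A minimal sketch of how that query name is built -- the
function name is mine, not Declude's:)

```python
def ip4r_query_name(ip: str, zone: str) -> str:
    """Build the DNS name an ip4r (reversed-IPv4) DNSBL test looks up."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

# Checking 1.2.3.4 against spamsources.fabel.dk:
print(ip4r_query_name("1.2.3.4", "spamsources.fabel.dk"))
# -> 4.3.2.1.spamsources.fabel.dk
```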

JSHNL> Also, perhaps I am misunderstanding the data, but SNIFFER has a 
JSHNL> .802 - isn't that relatively "bad" ?

Actually, that's the hyper-accuracy penalty at work. I wrote a bunch
about that on the MDLP page. What's going on is that SNF frequently
catches spam that virtually no other test is catching yet, and as a
result the total weight never reaches the threshold. Every one of those
events ends up counting against it.
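To put concrete numbers on that (the weights and threshold below are
invented purely for illustration; real Declude/MDLP values differ):

```python
# Hypothetical per-test weights and spam threshold, for illustration only.
WEIGHTS = {"SNIFFER": 6, "FABEL": 5, "MXRATE-BLACK": 5}
THRESHOLD = 10

def total_weight(hits):
    """Sum the weights of the tests that fired on a message."""
    return sum(WEIGHTS[t] for t in hits)

# A fresh spam caught only by SNIFFER never reaches the threshold, so the
# message is scored as "ham" -- and SNF's early, correct catch is counted
# against its apparent accuracy.
score = total_weight(["SNIFFER"])
print(score, score >= THRESHOLD)   # 6 False
```

Once the other lists catch up (say FABEL also fires), the same message
crosses the threshold and is scored as spam.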

We research these periodically (we used to look at them constantly) and
with very rare exceptions we find that these are not false positives.

In fact, on our systems last year SNF had fewer than 10 FPs (several of
those were messages from customers that actually contained examples of
spam, malware, or logs with spammy URIs). Of course, our numbers are
more than a bit skewed because the vast majority of traffic on our
system is spam... so we can't use that to calculate a "false positive
rate" that has any real meaning.

The closest we can really get to an indication of false positive rates
from SNF is to point at our FP rate page:


This page shows counts of all false positives reported to us on a daily
basis for all of our customers. At least two of these systems are
service providers with 10 or more licenses which submit false positives
automatically as they are reported from their customers.

So anyway, the short answer is that the SA and SQ values on the SNIFFER
tests are skewed by the hyper-accuracy penalty inherent in how MDLP
develops these scores. The true accuracy values are very much higher and
this is regularly confirmed by both hard reviews of the data and
anecdotal evidence from our customers.

Hope this helps,


This E-Mail came from the Message Sniffer mailing list. For information
and (un)subscription instructions go to
