Re[10]: [sniffer] Persistent Sniffer
On Saturday, April 2, 2005, 1:07:56 PM, Andrew wrote:

CA Pete, your metaphors are wonderful. :-)

<snip/>

CA If I remember correctly, the MaxPollTime was originally much lower. I
CA now use the full 4 seconds, but I don't know how often that's needed. I
CA easily see Declude processes taking longer than this, sometimes at 100%
CA of my CPU (with Task Manager update speed set to High).

In persistent mode the max poll time does not matter. It would only matter if the system fell back into peer-server mode. With persistent mode the client instances coordinate their timing with the server instance based on the data in the .stat file.

CA I also set Lifetime to 0 (because I don't expect the service to need
CA stopping), and Persistence to 12 hours. I'm hedging my bet with
CA Persistence, because I normally expect a twice daily rulebase update,
CA and my update mechanism should initiate a reload.

This seems fine given that you issue a reload with your updates. However, you should know that updates are generally much more frequent than every 12 hours -- more in the range of every 5 hours or so at this time.

Best,

_M

This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html
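[Editor's sketch] The persistent-mode behavior described above -- clients deferring to a server instance unless it disappears, in which case they fall back to peer-server mode -- can be sketched roughly as follows. This is purely illustrative: the real .stat file format, the freshness window, and the mode names' exact semantics are internal to Message Sniffer and are assumptions here, not documented behavior.

```python
# Hypothetical illustration of the persistent/peer-server fallback logic.
# The .stat file details are assumptions, not Message Sniffer internals.
import os
import time

def server_is_alive(stat_path, max_age=10.0):
    """Treat a recently-touched .stat file as evidence that the
    persistent server instance is still running and updating it."""
    try:
        age = time.time() - os.path.getmtime(stat_path)
    except OSError:
        return False  # no .stat file at all: no server instance
    return age <= max_age

def choose_mode(stat_path):
    """A client coordinates with the server while it is alive; only on
    server loss would it fall back, and only then would MaxPollTime matter."""
    return "persistent-client" if server_is_alive(stat_path) else "peer-server"
```

This matches the point in the reply: as long as the server instance is alive and the .stat file is fresh, the client never enters the code path where MaxPollTime applies.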
[sniffer] MDLP Tests
Hello -

I am reviewing your MDLP report at http://www.sortmonster.com/MDLP/MDLP-Example-Long.html, and find some tests that are seemingly quite effective that I'm not familiar with. If anyone has any information about these tests, please let me know:

- FABEL (is this the same as FABELSOURCES at http://www.declude.com/Articles.asp?ID=97Redirected=Y ?)
- MXRATE-*
- UCEPROTEC*

Also, perhaps I am misunderstanding the data, but SNIFFER has a SQ of .802 - isn't that relatively bad?

Thanks!

- Jay Sudowski // Handy Networks LLC
Director of Technical Operations
Providing Shared, Reseller, Semi Managed and Fully Managed Windows 2003 Hosting Solutions
Tel: 877-70 HANDY x882 | Fax: 888-300-2FAX
www.handynetworks.com

This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html
Re: [sniffer] MDLP Tests
On Saturday, April 2, 2005, 4:09:31 PM, Jay wrote:

JSHNL Hello -

JSHNL I am reviewing your MDLP report at
JSHNL http://www.sortmonster.com/MDLP/MDLP-Example-Long.html, and find some
JSHNL tests that are seemingly quite effective that I'm not familiar with. If
JSHNL anyone has any information about these tests, please let me know:

JSHNL - FABEL (is this the same as FABELSOURCES at
JSHNL http://www.declude.com/Articles.asp?ID=97Redirected=Y ?)

FABEL           ip4r    spamsources.fabel.dk        127.0.0.2

JSHNL - MXRATE-*

MXRATE-BLACK    ip4r    pub.mxrate.net              127.0.0.2
MXRATE-WHITE    ip4r    pub.mxrate.net              127.0.0.3
MXRATE-SUSP     ip4r    pub.mxrate.net              127.0.0.4

JSHNL - UCEPROTEC*

UCEPROTECRDO    ip4r    dnsbl-1.uceprotect.net      127.0.0.2
UCEPROTECCMUL   ip4r    dnsbl-2.uceprotect.net      127.0.0.2
UCEPROTECCVIR   ip4r    dnsbl-3.uceprotect.net      127.0.0.2

JSHNL Also, perhaps I am misunderstanding the data, but SNIFFER has a SQ of
JSHNL .802 - isn't that relatively bad?

Actually, that's the hyper-accuracy penalty at work. I wrote a bunch about that on the MDLP page. What's going on is that SNF frequently catches spam that virtually no other tests are catching yet, and as a result the total weight never reaches the threshold. Every one of those events shows up counting against it.

We research these periodically (we used to look at them constantly) and, with very rare exceptions, we find that these are not false positives. In fact, on our systems last year SNF had fewer than 10 FPs (several of those were messages from customers that actually contained examples of spam, malware, or logs with spammy URI).

Of course, our numbers are more than a bit skewed because the vast majority of traffic on our system is spam... so we can't use that to calculate a false positive rate that has any real meaning.
The closest we can really get to an indication of false positive rates from SNF is to point at our FP rate page:

http://www.sortmonster.com/MessageSniffer/Performance/FalseReportsRates.jsp

This page shows counts of all false positives reported to us on a daily basis for all of our customers. At least two of these systems are service providers with 10 or more licenses which submit false positives automatically as they are reported from their customers.

So anyway, the short answer is that the SA and SQ values on the SNIFFER tests are skewed by the hyper-accuracy penalty inherent in how MDLP develops these scores. The true accuracy values are very much higher, and this is regularly confirmed by both hard reviews of the data and anecdotal evidence from our customers.

Hope this helps,

_M

This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html
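[Editor's sketch] The ip4r test definitions quoted in the reply above all follow the same DNSBL mechanism: reverse the octets of the connecting IP, append the list's zone, and do an ordinary DNS A lookup; an answer in 127.0.0.0/8 means "listed". A minimal sketch of that lookup, using the zones named above (the example IP and helper names are invented for illustration):

```python
# Sketch of an "ip4r" DNSBL lookup as used by the FABEL, MXRATE-*, and
# UCEPROTEC* tests above. The zones are real; check each list's usage
# policy before querying it in production.
import socket

def dnsbl_query_name(ip, zone):
    """Reverse the IPv4 octets and append the DNSBL zone:
    1.2.3.4 checked against spamsources.fabel.dk becomes
    4.3.2.1.spamsources.fabel.dk"""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip, zone):
    """A DNSBL 'hit' is an A record in 127.0.0.0/8 (e.g. 127.0.0.2);
    NXDOMAIN means the IP is not listed."""
    try:
        answer = socket.gethostbyname(dnsbl_query_name(ip, zone))
        return answer.startswith("127.")
    except socket.gaierror:
        return False

print(dnsbl_query_name("192.0.2.10", "dnsbl-1.uceprotect.net"))
# -> 10.2.0.192.dnsbl-1.uceprotect.net
```

The distinct return codes in the Declude lines (127.0.0.2 / .3 / .4 for MXRATE) let a single zone like pub.mxrate.net publish black, white, and suspicious verdicts from one lookup.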
RE: [sniffer] MDLP Tests
Ahh, that makes more sense now. Ham is just what does not pass the spam threshold. In this light, if Sniffer is hyper-accurate and catches more real spam than all the others, it will appear less accurate overall because of the deficiencies in the other tests. For some reason, I was thinking that ham was being calculated differently.

Thanks for the tests, as well.

-Jay

PS - I did read your stuff about hyper-accuracy, but everything wasn't meshing for me, hence my question :)

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Pete McNeil
Sent: Saturday, April 02, 2005 4:43 PM
To: Jay Sudowski - Handy Networks LLC
Subject: Re: [sniffer] MDLP Tests
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html
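[Editor's sketch] The hyper-accuracy penalty discussed in this exchange is easy to see with toy numbers. In the sketch below, SNIFFER alone cannot push a message over the composite spam threshold, so every spam it catches single-handedly gets scored as "ham" and counts against its quality figure. All weights, the threshold, and the message data are invented for illustration; they are not MDLP's actual parameters.

```python
# Illustration of the hyper-accuracy penalty: weights, threshold, and
# message data are hypothetical, not MDLP's real numbers.

SPAM_THRESHOLD = 10  # composite weight needed to classify as spam

def classify(test_weights, fired_tests):
    """Sum the weights of the tests that fired; spam if >= threshold."""
    total = sum(test_weights[t] for t in fired_tests)
    return "spam" if total >= SPAM_THRESHOLD else "ham"

def spam_quality(messages, test, test_weights):
    """Fraction of a test's hits landing on messages classified as spam.
    A test that fires alone on real spam is penalized, because the
    composite score never reaches the threshold on those messages."""
    hits = [m for m in messages if test in m]
    agree = sum(1 for m in hits if classify(test_weights, m) == "spam")
    return agree / len(hits)

weights = {"SNIFFER": 8, "FABEL": 5, "MXRATE-BLACK": 5}

# Ten spam messages: SNIFFER catches all ten, but on two of them it is
# the only test that fires, so those two score 8 < 10 and come out "ham".
messages = ([{"SNIFFER", "FABEL"}] * 5
            + [{"SNIFFER", "MXRATE-BLACK"}] * 3
            + [{"SNIFFER"}] * 2)

print(spam_quality(messages, "SNIFFER", weights))  # -> 0.8
```

Even though SNIFFER was right ten times out of ten here, its measured quality is 0.8 -- in the same neighborhood as the .802 SQ that prompted the original question.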
RE: [sniffer] MDLP Tests
Jay, here's more web information on the mxrate tests: http://www.mxrate.com/lookup/dns.htm

Andrew 8)

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Pete McNeil
Sent: Saturday, April 02, 2005 1:43 PM
To: Jay Sudowski - Handy Networks LLC
Subject: Re: [sniffer] MDLP Tests
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html