On Tuesday, August 24, 2004 it appears that Alexander S. Kunz
wrote the following in reply to my comments re: "BayesIt Macros":

ASK> 24-Aug-2004 18:33, you wrote:

>> Hi Gerard. If that's the case, then do you have any ideas to explain
>> the difference between the BayesIt stats & the # of junk emails I
>> have in my current file?

ASK> Do you have your own filters that deal with junk email, too?

  Yes

ASK> If those
ASK> aren't caught by BayesIt and you sort them to the same junk email folder,
ASK> that would explain the difference.

  Doesn't this depend on whether BayesIt screens before or after TB!
  filters? That's one thing I'm trying to find out.

  If it's before, then the stats are dead wrong. If it's after, that
  might explain some of the difference, but... see below.


ASK> For a start, use your junk email to train BayesIt with them: mark all of
ASK> the junk mails [snip] should at least
ASK> make a significant difference on the false negatives statistics: the
ASK> percentage value that says "...guessed right 99.5% of the time" should drop
ASK> a lot if BayesIt only detected 2 mails while in reality 95 mails were junk,
ASK> ahem... :-)

  OK, did as you suggested. Here are "new" stats:

Spam Stats, last 24 hours (BayesIt! 0.5.11)
Total Spam Emails: 2
Total Clean Emails: 186
BayesIT guessed right 99.5% of the time
My email is 1.06383% spam
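  A quick sanity check on those numbers: the "1.06383% spam" line is
  consistent with spam / (spam + clean), and "99.5%" is consistent with
  one user correction out of 188 messages. Both formulas are my guesses,
  not documented BayesIt behavior, but they reproduce the quoted values:

```python
# Hedged sketch of how BayesIt's summary lines might be derived.
# ASSUMPTIONS (not confirmed by BayesIt docs):
#   spam % = spam / (spam + clean)
#   "guessed right" % = (total - user corrections) / total
spam, clean = 2, 186
corrections = 1  # hypothetical: one message re-classified by the user

total = spam + clean
spam_pct = 100.0 * spam / total
accuracy = 100.0 * (total - corrections) / total

print(f"My email is {spam_pct:.5f}% spam")           # → 1.06383, matching the stats
print(f"Guessed right {accuracy:.1f}% of the time")  # → 99.5
```

  If that reading is right, the 99.5% only reflects mails you have
  manually re-classified, which is why it stays high even when most junk
  slips through undetected.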

  Now what?

  TIA

-- 
Jan Rifkinson
Ridgefield CT USA
TB! v2.13 "Lucky" Beta/7 W2K v5.0 Service Pack 4
________________________________________________
Current version is 2.12.00 | 'Using TBUDL' information:
http://www.silverstones.com/thebat/TBUDLInfo.html