> -----Original Message-----
> From: Marc Perkel [mailto:[EMAIL PROTECTED]
> Sent: Monday, December 27, 2004 7:35 AM
> To: Spamassassin Dev List
> Subject: Re: A Feature I've always wanted - Test for multiple hits on
> same rule
>
>
> I'm not sure you would have to take this into account - but it would be
> interesting to test the results. I would want just a couple of system
> settable commands like:
>
> TwoHitFactor = 1.2
> ThreeHitFactor = 1.3
>
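For concreteness, the multipliers in the quoted proposal might behave like the sketch below. Only the TwoHitFactor/ThreeHitFactor names and values come from the message above; the rule scores, the function, and the scaling scheme are hypothetical illustration, not SpamAssassin code.

```python
# Hypothetical sketch of per-rule hit-count multipliers, assuming the
# TwoHitFactor/ThreeHitFactor settings from the quoted message.
TWO_HIT_FACTOR = 1.2    # multiplier when a rule fires exactly twice
THREE_HIT_FACTOR = 1.3  # multiplier when a rule fires three or more times

def scaled_score(base_score, hit_count):
    """Scale a rule's base score by how many times it matched."""
    if hit_count <= 0:
        return 0.0
    if hit_count == 1:
        return base_score
    if hit_count == 2:
        return base_score * TWO_HIT_FACTOR
    return base_score * THREE_HIT_FACTOR

# Example: three invented rules with (base score, hit count) pairs.
# A 1.0-point rule that hit twice contributes 1.2, and so on.
total = sum(scaled_score(score, hits)
            for score, hits in [(1.0, 2), (0.5, 1), (2.0, 3)])
```

Under this scheme a repeated hit raises a rule's contribution only modestly, which is the "settable factor" behavior the quoted message asks for.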


This isn't germane to the scoring algorithm per se; it has more to do with
fine-tuning the Bayesian algorithm, somewhat along the lines you outline
above:

http://www.andrew.cmu.edu/user/dgovinda/pdf/multinomial-aaaiws98.pdf

Abstract
Recent approaches to text classification have used two different
first-order probabilistic models for classification, both of which make
the naive Bayes assumption. Some use a multi-variate Bernoulli model,
that is, a Bayesian network with no dependencies between words and
binary word features (e.g. Larkey and Croft; Koller and Sahami). Others
use a multinomial model, that is, a uni-gram language model with integer
word counts (e.g. Lewis and Gale; Mitchell).

This paper aims to clarify the confusion by describing the differences
and details of these two models and by empirically comparing their
classification performance on five text corpora. We find that the
multi-variate Bernoulli model performs well with small vocabulary sizes,
but that the multinomial model usually performs even better at larger
vocabulary sizes, providing on average a 27% reduction in error over the
multi-variate Bernoulli model at any vocabulary size.
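To make the distinction in the abstract concrete, here is a toy sketch of the two event models it compares: the multi-variate Bernoulli model scores every vocabulary word as present or absent, while the multinomial model scores only the words that occur, weighted by their counts. The corpus, the Laplace smoothing, and the class-prior handling below are all illustrative assumptions, not the paper's experimental setup.

```python
# Toy comparison of the two naive Bayes event models from the abstract.
import math
from collections import Counter

# Invented four-document training corpus.
docs = [("cheap pills cheap", "spam"),
        ("meeting notes attached", "ham"),
        ("cheap meeting", "spam"),
        ("notes on the meeting", "ham")]
vocab = sorted({w for text, _ in docs for w in text.split()})
classes = ["spam", "ham"]

bern = {c: Counter() for c in classes}   # docs per class containing word
multi = {c: Counter() for c in classes}  # total word occurrences per class
ndocs = Counter()
for text, c in docs:
    words = text.split()
    ndocs[c] += 1
    for w in set(words):
        bern[c][w] += 1
    for w in words:
        multi[c][w] += 1

def bernoulli_logp(text, c):
    """Multi-variate Bernoulli: every vocab word contributes,
    whether present or absent (binary features)."""
    present = set(text.split())
    lp = math.log(ndocs[c] / len(docs))
    for w in vocab:
        p = (bern[c][w] + 1) / (ndocs[c] + 2)  # Laplace smoothing
        lp += math.log(p if w in present else 1 - p)
    return lp

def multinomial_logp(text, c):
    """Multinomial: a uni-gram language model, so only the words
    occurring in the document contribute, once per occurrence."""
    lp = math.log(ndocs[c] / len(docs))
    total = sum(multi[c].values())
    for w in text.split():
        lp += math.log((multi[c][w] + 1) / (total + len(vocab)))
    return lp

msg = "cheap cheap pills"
bern_label = max(classes, key=lambda c: bernoulli_logp(msg, c))
multi_label = max(classes, key=lambda c: multinomial_logp(msg, c))
```

Note how the multinomial model counts "cheap" twice for this message while the Bernoulli model counts it once; that difference in how repeated words are weighted is what drives the vocabulary-size effects the abstract reports.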
